Faculty Development Workshop
ACGME: A Guide to Improving Assessments and Evaluations
Sleep Medicine Milestones and Assessments
Video Transcription
My name is Laura Edgar; I'm the Vice President for Milestones Development at ACGME. For this talk today, they asked me to give you a sort of 101 course on assessment, because there's so much that we feel we should know about assessment, and yet so much that we don't. So we're going to start today with some really basic things that you can take back to your program to make some changes right away.

The place to start is this idea of the assessment system. How many different assessment tools do you think you use in your program? There are not that many of you, so go ahead and shout it out. Direct observation, okay. Online training. How about some multi-source feedback? How many of you do chart audits with your fellows? A few of you, kind of, sort of? Well, there are a lot of different tools out there, and you're probably spending a lot of time using them, but how much time did you spend training anybody on how to use any of those tools? Have you ever sat down with your faculty and said, okay, let's look at this tool, let's all agree on what it is that we're measuring, and let's all agree on how we're going to use it to measure? Has anybody ever done that? I didn't think so.

[Audience comment.] Yes, and actually that was Eric Holmboe's course. Eric Holmboe is my boss at ACGME now, and we still offer that course at ACGME; I'll talk a little bit about that later on. But yes, that is something we often fail to do, and so what happens is that you can have two people observing the exact same activity and walk away thinking they observed something different. You don't even necessarily look at the same thing that's happening with the fellow in front of you.

How many of you use your residents or your fellows as an active part of assessment? So not just that you're doing it to them, but that you're assessing with them about their own activities? Again, that is another really great thing to do, because when you include residents as active agents in their own education, they get more involved. We all know that they want to do better, and that they want to get to where they can practice independently and be out on their own; having them as part of that discussion, part of that decision making, is quite often a better way to go about it.

How many of you have an actual program of assessment, or do you just have assessment tools? Well, if you went to the course, you had better have one, or Eric would be very disappointed. A program of assessment gets back to how I started this whole idea: we need to make sure that when we're thinking about assessment, it's not just those one-off end-of-rotation evaluations, or the direct observation you do because somebody said, hey, by the way, you haven't done any direct observation, go do some. You have to look at your assessment program as a whole, from the beginning to the end, with all the different pieces and parts. And although there is no way I can cover all of that in this one-hour presentation, I can give you some really good building blocks that you'll be able to start with.
So we're going to start out talking a little bit about assessment tools in general; the shared mental model, which is an activity you can do with your faculty; how you can make residents active agents in the work that you do; and then finally some things to think about in terms of faculty development. There are a couple of places where, because we got started a few minutes late, I was going to have you work in small groups. We're going to skip that part when I get to it, and I'll just give you some guidance on what you might be able to do with your own faculty.

When we start out talking about assessment tools, quite often what we fall back on is this idea of checklists: did they do A, did they do B, did they do C? Not necessarily evaluating what they're doing, or how smoothly or how efficiently they're doing it; it quite often becomes these checklists. Does anybody want to share either a good example or a bad example of an assessment tool that you have? [Audience response.] Yes, and you get a little bit of that: you've got the people who just love everybody, and everybody's a nine, and then on the other side you have the people who are like, yeah, they just don't teach them like they used to, so they're sevens or sixes. You don't too often get people evaluating lower than that, but sometimes you come across a faculty member who does. You're right that, especially when you have a nine-point scale, or even a five-point scale, that can quite often happen. Any other examples? Don't worry, I won't torture you; I know there's not a lot of you here.

When we're thinking about those assessment tools, one of the things we want to make sure we do is mix quantitative and qualitative methods. Remember, in your clinical competency committee, yes, you have to consider all of those actual numbers; you have to consider the rating scales that were filled out; you have to consider all of those different pieces. But you want to make sure you also have a lot of qualitative information, because in the end it's the comments that make the difference. It's the comments that tell you what those numbers really mean, because if somebody puts a seven and writes in the notes, "one of the best fellows I've ever seen," it doesn't quite match, right? So you need to make sure you have both. You want to make sure that every six months, or even within every rotation, you're taking the time to sit down and really think about the written comments your faculty are giving. And if your faculty are not giving you written comments about your fellows, you want to go back and start pushing them to do it; a little later I'll give you some tricks on how you can sometimes get more information from them.

The other thing, bringing this in from the resident point of view now (I keep saying resident; I apologize, and I'll probably keep doing it, since I've been giving lots of resident talks lately), from the fellow point of view: one of the great things you can do ahead of an observation is to ask them, what is it that you want me to focus on? What are you worried about, or what do you think you're not doing as well as you should be? You can then step back, and you're going to be able to give a lot more comments about that, because even if you fill out numbers, the fellow has already said, here's what I need, here's what I'm looking for. You have to make sure that everybody is engaged in that.

All right, so this is a new phenomenon that we're seeing in medical education, and I have to tell you it's really interesting, because I work with every medical specialty, all hundred and fifty something of them, and it's something we're starting to see. I think one of the things you're going to notice over the next couple of years is more and more faculty coming in with some of this base knowledge, and these are the folks who are also going to help you when it comes to thinking about your assessment tools.

When we're thinking about good assessment, there are really three things we always think about: reliability, validity, and feasibility. Quite often we get one or two of these but forget about the third. Remember, reliability is accuracy and stability: you're getting the same shot every time. Validity means you're measuring what you intended to measure. I'll give you an example. I'm in the middle of a research project right now where I go and observe clinical competency committees, and I'm amazed at how often a discussion about medical knowledge turns into a discussion of personality. What that's telling me is that the assessment tools they're using are not really doing a good job of looking at medical knowledge, so they don't have good validity with the tool they're using. You want to make sure you're actually measuring what you intend to measure. You've all seen these target pictures before; nothing new here. This one is not reliable or valid; this one is reliable but not valid; and here's one that's both. You just want to make sure you're getting all of them together at the same time.

Now, feasibility is the part that I think we all actually struggle with the most. The thing with feasibility is that you have to make sure the tool is useful for both the learner and the rater, and you have to make sure it's accepted, because how many of you have faculty who will say to you, if you send me another assessment tool, I'm just going to delete it? Yeah. How many of you have assessment tools that have more than 10 questions? How about more than 20? Oh, no, don't tell me that. What happens is that when you start putting a lot of questions into your evaluations, then, due to time pressure, due to cost containment, due to faculty being completely overwhelmed with everything else that has to be done (including taking care of those little things called patients, the thing that's always waiting for them when you want them to evaluate), the evaluations start to slip. You have to make sure you have a feasible tool that they're going to be willing to use, because theory is great, but it has to be a practical tool. At this point I almost think that if your evaluation isn't on a phone and something they can do in 30 seconds or less, it gets harder and harder to get them to do it. Quite often people complain about their millennial learners; well, I always say, think about your Gen X faculty. They're probably just as bad about getting these things turned around. They want those apps, and they want it to be fast and easy.
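To put a rough number on that reliability idea, here is a minimal sketch, in Python, of Cohen's kappa for two faculty rating the same encounters. This is not an ACGME tool; the 1-to-5 scale and every rating below are invented for illustration.

```python
# Cohen's kappa: observed agreement between two raters, corrected for the
# agreement you would expect by chance alone. Values near 1 suggest the
# raters share a mental model; values near 0 suggest they do not.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Kappa for two equal-length lists of categorical ratings."""
    n = len(rater_a)
    # Fraction of encounters where the two raters gave the same score.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement, from how often each rater uses each score overall.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    p_chance = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical scores from two faculty on the same ten direct observations.
faculty_1 = [4, 4, 3, 5, 4, 3, 4, 2, 4, 3]
faculty_2 = [4, 5, 3, 5, 3, 3, 4, 2, 5, 3]
print(f"kappa = {cohens_kappa(faculty_1, faculty_2):.2f}")  # kappa = 0.59
```

Even a quick check like this, run on a rotation's worth of completed forms, can suggest whether a shared mental model exercise is overdue.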
So we have to make sure that we do things that are practical, because in reality there's just too much going on, too much that's necessary, too much cognitive load for a lot of these faculty. What happens is that when your evaluations start getting longer and longer, you're going to get a less and less reliable and valid assessment of your learners. The good thing for you is that most of your programs are fairly small, so you probably know all of your learners really well, and that makes a big difference. Because you know your fellows so well, you can sometimes overcome some of these issues just through that daily knowledge, that working one on one with them as much as you do. But even with small programs, even when you only have a couple of fellows, long evaluations and short rotations are trouble in assessment land. You're just going to keep getting less and less valuable information.

Always think about your balance between the theoretical and the practical. Is it valid? Does it hold the learner accountable? Is the learner actually going to be able to take this, look at what they're doing every day in practice, and make a change? Is it going to be useful for them in improving? That has to balance with being practical. Make sure, well, it starts with making sure your faculty understand what it is they're measuring, and then make sure the learner understands what is being measured, because just like two faculty members can look at the same thing and see something different, imagine what your learner is getting out of it if there's been no effort at creating a shared mental model.

So what can you do to actually start improving your assessment system? Well, first, make sure your raters understand what they're supposed to do; I will sound like a broken record by the time we get to the end of this, I promise. Make sure you're asking them to evaluate things that are really happening. That's something we sometimes struggle with in the development of the milestones, because there are things that are so important for learners to demonstrate, and yet we have to sit back and have the discussion: okay, when will we actually be able to measure this? When will we be able to sit back and say, did they demonstrate that they understand the cost barriers of a given kind of treatment in this individual patient? That's not something people think about very regularly, and it's probably not on most of your assessment tools, except of course for those of you who use the milestones as your assessments. You also want to make sure they're behaviors that are likely to be observed, or at least that you have some way of measuring, even if it's through something as simple as self-reflection, or journal club, where you can take what they're saying and apply it back to some of these other assessments you're doing.

You could go out and create lots of formal assessment tools, but if you're going to do a truly valid assessment tool, you have to do all of these things, and that takes a lot of time and, frankly, a lot of money if you're really going to do a superbly good job at it and make sure you've got something that's going to stand the test of time. What I recommend is that you think about the assessment tools you're using and the things that work for your program. Every group of faculty is different; every group of learners is different. And as much as we don't always like to say it, every specialty has its own personality. I can assure you, having worked with all of them, just about every specialty has its own personality; it's something we get to see. You want to make sure that whatever you end up selecting is going to work for you in your circumstance.

Now, back to using the milestones as assessment tools: they really weren't designed for that. There are some milestones in some specialties that work, because they're so granular that they become something that can be used, but the thing we have to remember is that the milestones don't cover everything. The milestones are not the entirety of everything that you're teaching, so you need to make sure your assessment tools also cover the things that are not included in the milestones. Sometimes there are some really important factors that aren't included in there that you need to think a little bit more about. The other thing is that not every milestone should be evaluated in every rotation. Some are more obvious than others, but by over-saturating your faculty with having to evaluate all of them every time, you're hitting that cognitive load problem we talked about before, and that's actually going to make your response rate worse. Again, you want to make sure you're balancing.

So, a shared mental model. This is a great exercise that I recommend for every program. You can do it with your clinical competency committee, you can do it with your faculty, and you can even do it with your fellows. Essentially, you sit down together, you pick up an assessment tool, and you say, okay, everybody write down what you think question number one is really looking at. If you have ten people in the room, you're probably going to get eight different answers, because not everybody is thinking about the same thing. You may have to go through that a few times, but you take what everybody wrote down and then you start having the discussion as a group: this is the variety of answers we had; let's now start thinking about how we can come to a common understanding of what it is we're actually measuring. This is an activity that can take some time, depending on how diverse your faculty are. I've seen some groups do it very quickly, but it takes time and effort, and your faculty have to be willing to sit down and do it. Once you've created the shared mental model, though, everything goes much quicker when they start doing their evaluations. When they're doing a direct observation or an end-of-rotation evaluation, they're able to complete it much more efficiently, because, number one, they know what they're actually evaluating, and, number two, they understand what everybody else is evaluating at the same time. When the clinical competency committee gets the information, the CCC can be assured that everybody was looking at the same thing, so in the long run it saves time for everybody.

We first rolled out one of the new tools we're doing with the milestones, called the supplemental guide, and if you come back at 3:30 you'll get to hear all about it. It's an extra tool you can use with the new milestones that will be coming out for public comment later this summer, and essentially it's a tool to help you form a shared mental model around the milestones. When we rolled these out in our international programs, some programs reported cutting the time for their CCC meetings by as much as half, because they were so much more efficient and able to focus more on the comments about what they could do to help the learner, whether it was the struggling learner who needed a little bit of remediation or the advanced learner who needed some way to be pushed a little bit further, a little bit harder, to learn a little bit more.

Practically, because most faculty don't have the opportunity to say, okay, we're going to devote an afternoon to making sure we're doing this right, you figure out what the stress point is for your group, which is typically between an hour and 90 minutes for most departments; that's about what they can do. And make sure you have somebody facilitate for you. If your DIO can't do it themselves, they probably have somebody in their office who would be happy to help you facilitate. Like I said, you go one tool at a time, and everybody comes to an agreement about what they mean. When the new milestones come out, I think that's going to be a really good opportunity for everybody to go back and try this exercise, partly because we've sort of done half the work for you in the supplemental guide, in that we've given you some information about what the group that developed them was thinking. Then you'll have to translate that into what your group thinks, so I think it's going to be a little bit easier. We've set you up, at least from the milestone standpoint, but then you'll be able to do the same exercise with your actual assessment tools.

I know you have a lot going on in your specialty societies, but I always encourage program director groups to share tools with each other, to try to work together to find best practices, best examples, all of those kinds of things. Still, this really is an individual program exercise that has to be done. If you were to do all of your assessment tools, and let's say you have ten different assessment tools (most people don't, but we'll pretend you do), you would probably need to schedule between seven and ten meetings just to get through them. If you've got a short assessment tool, you'll get through it a little bit faster. And I can even tell you my favorite assessment tool: it has two questions. The first question is, what do they have to do to improve? And the second question is, what is it that they're already doing really well? That's great for quick weekly feedback, and I think you get more information from those comments than you do from most rating scales. Actually, I'm going to show you that with the rating scales in just a minute.
So this is an example, a real evaluation tool, where basically the way they evaluate the residents is either not satisfactory, borderline, progressing, or not applicable. Well, the problem with this is, what does borderline mean? There was nothing on this form to indicate it. There was nothing on this form that listed the year of the resident, what year of training they were in. It literally was just this; there was no rhyme or reason for why they used borderline, and it was a very interesting meeting to watch, because this is something that, once they sit down and work on it, they're going to be able to find all the problems that come up with it. And you can see the real problem: what if you have two residents? I don't even remember what specialty this was, but let's say it's a three-year program; let's think about the core IM people. If you have a PGY-3 and a PGY-1 and they're both marked borderline, does that mean they're at the same level? When you have evaluation tools like this, there is no way for the learner to know the difference, unless you sit down and start having that conversation. And again, do your faculty even know the difference between what borderline means for a PGY-1 versus a PGY-3? Maybe it is the same thing; from an outsider's point of view, I would think they're different. You have different expectations of your PGY-1 versus your PGY-3. So we have to make sure we look at all of our assessment tools and catch things like this, things that do not actually identify what it is they're measuring.

What do you think you could do to this rating scale? What does this rating scale tell you? Exactly; it might as well be a group of notes. Or, my favorite: do you think your fellows would like to get this kind of evaluation? And of course this is what a lot of our rating scales look like; we even bunch the numbers for the learners. We say, okay, if you're down here, you're not meeting expectations, but that's all it says, that you're not meeting expectations. Does it really matter if you're a one or a four? My husband teaches high school math, and he loves it when parents call and say, well, I understand he has an F, but how bad of an F is it? It's an F. It doesn't matter; it's below the scale.

But here's how you can start to improve those. If you are going to use rating scales, you want to try to add behavioral anchors to everything that you're doing. You want your faculty to have a way to identify the difference between the different numbers they're giving, whether you're using a scale of 5 or a scale of 10. I'm sorry, you have a question? [Audience question.] Yeah, okay. Again, if everybody has the same understanding, and you have an anchor that you're able to put with each point, then it's going to be a little more consistent when your different faculty are looking at it. What most people end up doing (I wrote it out in full just for the example) is they pick a phrase, so instead of it being a long sentence, it's a phrase that they put there, so that yes, there are still words to read, but it's not paragraphs and paragraphs.
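As a concrete illustration of what those anchor phrases might look like, here is a small sketch in Python. The item, the five-point scale, and every anchor phrase are invented for illustration; your own faculty group would write anchors that match its shared mental model.

```python
# One rating-scale item with a short behavioral anchor for each point,
# so every rater attaches the same meaning to the same number.
ANCHORS = {
    1: "Cannot obtain a sleep history without step-by-step direction",
    2: "Obtains a basic history; misses collateral sources",
    3: "Obtains a complete history, with collateral sources, when prompted",
    4: "Independently obtains a complete, efficient, patient-centered history",
    5: "Models expert history-taking and teaches the approach to others",
}

def render_item(title: str, anchors: dict[int, str]) -> None:
    """Print the item the way it would appear on an evaluation form."""
    print(title)
    for score in sorted(anchors):
        print(f"  {score}: {anchors[score]}")

render_item("Gathers a focused sleep history (circle one)", ANCHORS)
```

The phrases stay short on purpose; as the talk notes, a phrase is enough of a visual reminder without turning the form into paragraphs.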
Now, the issue with using the nine-point scale is that, sure, it's been used for a long time, and obviously people got something out of it, but over time people forget. The other piece of the shared mental model is that you always have to keep revisiting it. It's not the same level of effort as the first time you create a shared mental model, but you do have to periodically revisit it and have everybody sit down again and make sure that everybody is looking at the same thing, because people will fall back into old habits, or they'll have watched some video or read some book and start doing something different that takes away from what the shared mental model was. So if you have a shared mental model and you're able to maintain it, then I think it's okay not to have any behavioral anchors. But as soon as you take those behavioral anchors away, there's no visual reminder of what they're supposed to be looking at in each of those numbers. Again, if you've got a small enough faculty and you can maintain it, I think that's great. Anything that's easier for the faculty is also helpful for the fellows; that's what it's all about. [Audience comment.] Yes, exactly, and that's why I said even if it's just a small phrase, something in there is always good and useful.

So I don't know if this looks familiar to anybody; it's from the shared milestones. This is one of those statements where, the first time your group sat around thinking about it, everybody was thinking about something different. There is no way that with something this complex, without any real definition to it, you're able to go through and break it down and say, okay, here's what I want to look for when I read this. So if you have something you want to look at that's really complex, from an assessment point of view I would break it down into its parts. Because you might have somebody who's really good at managing very sick patients, but maybe they're not so good at collecting that collateral history, or maybe they're not so good at evaluating their sleep study. I hope that's not the case in sleep medicine, but I'm not a content expert; I have to make this stuff up. There's always going to be some element they can continue to improve on; everybody has an area of weakness. So if you've got something really complex, break it down in your assessments so that, even when somebody seems fantastic, you're able to go back and identify where the bigger problems are and where they can continue to work and improve.

So, going back to the idea of residents as active agents. I know there are a lot of words on here; don't worry about them, you can read them when you get the slides later on. If you could sit down with your learners and have a really frank conversation with them (these are the assessments, here's what we're looking for, let's figure out where you are); which, by the way, do any of you do an early milestone assessment when they first come in, within the first three months? Yeah, that's a very good idea, because in a short fellowship like this, you don't have a lot of time for remediation. So we want to make sure we're doing the best we can.
And although your learners are coming in with a base understanding of milestones, I can tell you from my experience that base understanding can be miles apart. I have sat and talked to graduating residents who swore they had never gotten a milestone evaluation from their faculty, and yet I saw the signed forms in their files. So it's not necessarily sinking in that that's what you're doing, kind of like how you sometimes have to say to your learners, yes, right now I'm giving you feedback. Sometimes you have to do the same when you're talking about assessment: you want them to be an active part of it, to take the time to ask you to observe something they want to make sure they're doing well. Or maybe it was something pointed out to them by somebody else: hey, you really want to work on this part of your history taking. So they ask somebody else to come and observe, to see whether it's getting better over time.

The other part of it is that taking an active part is going to be really instrumental for them when they actually get out of your program. Education doesn't go away. They're continually going to have to keep learning new things, whether it's a new treatment protocol or a new testing protocol that comes through. There's always going to be something new coming up, and they're going to have to be able to assess themselves to figure out how much they have to learn about whatever the new thing is. And maintenance of certification, or continuous certification, I guess, is the new name: how are they going to be able to take part in that and understand what their piece of it is as they keep going? When they're active agents, you're starting to teach them that while they're still in your program. In fact, two years ago I worked with a group of residents and fellows to create a milestone guidebook for residents and fellows; it was the residents and fellows who wrote it. It was really interesting for me, because it wasn't until then that I realized how little these learners really understand about the assessment that goes on for them: why they get the type of feedback they get, why they get the amount of assessment they get, why, even though they get tired of the comment "read more," maybe it's because they really do need to read more and be a little bit better prepared. So we want to keep reminding them of these things so that they're better prepared when they leave.

And actually, I'm going to go back; I forgot I had this on here: the individualized learning plan. One of the new common program requirements is that learners have to have an individualized learning plan. One of the really great ways to do that is by having your fellows do a self-assessment, whether with the milestones or with an assessment tool or two that you pick, and then do a comparison, because they can use that information to create the individualized learning plan. Obviously, they can't do it entirely by themselves, because they don't know what they don't know, but they can see the differences between where they thought they were and where you see them. So if you're using a milestone, let's say on non-invasive procedures, and they gave themselves a two, but you gave them a one, critical deficiencies, that's a big learning moment for them.
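Here is a minimal sketch of that self-versus-program comparison, in Python. The sub-competency names, the scores, and the one-level threshold are all invented for illustration.

```python
# Compare a fellow's milestone self-assessment against the CCC's ratings
# and surface the largest gaps as talking points for the individualized
# learning plan.
self_ratings = {
    "Non-invasive procedures": 2.0,
    "Sleep study interpretation": 3.0,
    "Patient-centered communication": 3.5,
}
ccc_ratings = {
    "Non-invasive procedures": 1.0,
    "Sleep study interpretation": 3.0,
    "Patient-centered communication": 2.5,
}

for subcompetency, self_score in self_ratings.items():
    gap = self_score - ccc_ratings[subcompetency]
    if abs(gap) >= 1.0:  # threshold is arbitrary; set what fits your program
        view = "over-estimates" if gap > 0 else "under-estimates"
        print(f"{subcompetency}: fellow {view} by {abs(gap):.1f} level(s)")
```

The point is not the arithmetic; it's that the flagged rows become the agenda for the ILP conversation.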
And so again, by making them part of all of this, they're going to be better prepared to keep moving forward.

The other important thing is to make sure that you allow your residents to ask for feedback. Now, occasionally I hear from a faculty member, you know, if this resident asks me for feedback again... They're getting a little tired of always having to give feedback, and yes, there will always be one or two learners who go overboard with asking for it. But one of the ways you can help that situation (and I don't know how long some of your rotations are, or how long you might be working with one individual fellow at a time) is to say to your fellow at the beginning of the week: on Friday, I'm going to give you some really specific feedback about the week. That way, if you have somebody who likes to ask and ask and ask, they already have the expectation, and at the end of the week you can give them a five-minute explanation of, here's what you did great, and here's where you need to keep working. Yeah, go ahead. [Audience question.]

So one of the things you can do for that is, like I said, weekly, very brief, informal feedback as a first step; especially in a short program, it's, you've got to do this better next week. The other thing you can do, since you don't really have full-fledged rotations: I think the new requirement is that every two months is the minimum for one of these evaluations. Now, the only bad thing with waiting two months to do a formal evaluation is that in a one-year program, you've just lost a lot of time if something is identified. If nothing's identified and there are no problems, you haven't lost anything. But if somebody came into your program with some deficits that you didn't realize right away, and they were able to get by, so to speak, and then all of a sudden those deficits come to light, you need to make sure you have a way of giving them a formal assessment so that they'll know what their next steps are.

And as much as you always hate to think about it or talk about it: how many of you have had to fire a fellow? Nobody? Oh, you are the luckiest group of people I have sat with in a long time. Firing a fellow is very difficult; it is not something that is easy to do. Most of you will get a lot of pushback from your GME office, from the attorneys, from HR. And part of what keeps people in a program who shouldn't be in a program is that there has not been enough assessment. There has not been enough documentation to demonstrate to the office that, you know what, this person does need to be removed from this program.

You know, there are a lot of different ways you can do this. The perfect example is what they do in the ER; in emergency medicine, it's absolutely that way. They do shift cards: at the end of the 12-hour shift, the faculty member writes a very brief couple of comments for the program director. So you could have something like that.
You could have everybody send their one or two sentences at the end of the week to a particular mentor or advisor that your fellows may have. Because that situation is difficult: if they're working with five different people in five days, the feedback is not going to be very consistent, and that puts them in a bigger danger spot of not getting enough feedback to know where they stand. If they're not getting good feedback, the assessment is almost pointless. It's good for you as faculty, but it doesn't help the learner. And in the end, that's what assessment and evaluation are for: to help the learner figure out how to keep growing and do better.

[Audience question.] So, to make sure I've got this right: what percentage of the faculty has to agree that somebody is so bad that they need to go? That's going to depend a lot on your institution; every institution is different. Typically, when somebody is asked to leave a program, either there is something so egregious, like a big professionalism issue, that frankly it just has to be the program director who reports it, and that's enough; or, if it's based on their actual medical knowledge or their management of patients, you're typically going to need consensus among the majority of your faculty. I would say that if you have four faculty, at least three have to be in agreement. But again, it goes back to the assessments. If you don't have assessments showing that there have been issues with this learner's medical knowledge, and you have not shown a remediation plan where you've gone back and tried to help this resident, you're just perpetuating the problem, because the GME office is not going to dismiss somebody without documentation of those kinds of problems. Again, professionalism is probably the one area where, if they do something egregious, most GME offices will have no issue. So, unfortunately, I can't give you a final answer of, you have to have this many.

If there are performance issues, I think you should assess at least monthly, because again, it's a 12-month program, and you don't want to string somebody along for three or four months if you really don't think they're going to get better; I would do it a little bit sooner. When you set up those performance improvement plans for your learners, make sure you have SMART goals: they have to be measurable, simple, and easily understood. Your GME office can always help you out with that. You always want to go to your GME office when you have a learner who is going to be trouble. Don't try to hold on to that yourself, and don't try to resolve those problems on your own, because your GME office has a wealth of experience and knowledge, not only about the system but about what your institution's policies are. So I would definitely include them.

In a one-year program, even though it feels like overkill, I would probably do it either every three months or every six months. I think it depends on the size of your program and the number of other people they're working with. I would not do it more frequently than every three months, and even at every three months, I am hesitant to say that.
But again, that only gives you four evaluations over the course of a year, if you're going to get that type of feedback from the other people they work with.

[Audience exchange about a resident threatening to file a complaint with ACGME.] Sure. So for a resident to do that, there's actually quite a bit they have to do. They have to submit a letter to ACGME that not only lays out the issues, but lays out what they did to try to correct or resolve it internally. Once that complaint is received by ACGME, a copy of it is sent to the program director to respond to. And I will tell you that nine out of every ten complaints that come in don't go any further than that. We get the response from the program director, we look back at the program's history in terms of the resident surveys and the faculty surveys and all of that, and a decision is made. Sometimes we do have to come out and do a site visit, especially if all of your fellows are involved. What's your largest program in here? Anybody have more than five? A couple of you? If you have five fellows and all five of them say that basically you're mean to them all the time and you instill fear and intimidation, they would probably send out a site visitor. But if your resident surveys have been fine and somebody files a complaint, again, you would get a letter back from ACGME, and quite often that's about where it stops.

Right. And one of the things I actually recommend along those lines is to decide ahead of time. At the beginning of the year, especially if you're changing any of your assessments or any of your curriculum, decide: in this rotation, these are the skills we want to look at, and these are the faculty members we're going to have evaluate them. And let those faculty know, because then they're prepared for it, and at the same time it also says, I'm not asking you to answer 15 questions every month on all of our learners. So I think that's a good way to do it.

So I've only got a couple of minutes left, and I want to speed through some of these slides. I think we're done at a quarter till, correct? Yeah, okay. This just sums up everything I've been saying: make sure that your assessment is clear, that it provides meaning for the faculty, and that it has meaning for the learner. Make sure you're assessing all six competencies and thinking ahead of time about how you're going to accomplish that. And really, what I want you to do when you leave today (I was further along than I thought I was) is start thinking about how you can improve your assessment tools. There is a lot of information available. And this is the activity we skipped: basically, you sitting back and thinking about what you've currently got. I know they're going to make these slides available on the website, so if you can sit down, even just with some of your core faculty or your CCC, and start thinking about these things and talking about them, you might find some real easy fixes, some little tweaks you can make to your assessment program that will lead to a lot of improvements. Again: direct observation is key, faculty development, and most of the other things I've said already.
So, a couple of really important pieces of information you can find from ACGME, and these are all free. But everybody has trouble finding the milestones web page: you have to go under "What We Do," and that's how you find the milestones. We have several sections. The milestone resources section, I think, is one of the most important ones for all of you. This is where we have the guidebooks: the Milestones Guidebook, the Clinical Competency Committee Guidebook, and the guidebook for residents and fellows. We are in the process of updating that last one, but when your new class of fellows starts in the next, what, six weeks for most of you, share it with them. It's an easy link to share, and it's a lot of really good information for them.

The other thing, and this is a little bit newer: we just added a new learning management platform, and there are two offerings I think could be helpful for you, which you can easily share with your faculty. One is called Assessment 101. It's a great, I think 20-minute, on-demand webcast, it's free, and Eric Holmboe narrates it. There's also an introduction to milestones, so if you have somebody come into your program who really doesn't know what milestones are (say somebody from the community who hasn't been in an academic center; otherwise I can't believe they wouldn't know), these are all free on there, and we're constantly developing more.

We also have this great program, Developing Faculty Competencies in Assessment. If you're somebody who's really interested in assessment, I highly encourage you to try to attend one of these. There are two different versions, and sadly, these are not free. The one at the ACGME offices is six days, and it's offered three times a year. I have never met anybody who attended one of these and did not walk away feeling like they had really gained a whole host of knowledge. It's so popular, in fact, that in the last five years we've had more than 1,000 people come through this workshop, which is pretty exciting considering we max out at 40 for each one. There are also three-day workshops at a lot of different hubs. I put that on there; I don't know if it's going to show up over there. Oh, it does. On the ACGME website, if you follow the link for meetings and workshops, that's where you can get the information for those and for the three-day hubs; a lot of the same information as the six-day course, just compressed a little bit more.

We are always here to help. If you have questions about assessment, or questions about milestones, please feel free to reach out. We are always there, happy to answer questions and happy to work with you. So I'm pretty sure that time is up. Thank you all very much, and I'll stick around for a few minutes if you've got some other questions. Thank you.

So yes, I'm Laura Edgar, the Vice President for Milestones Development at ACGME, and I had the pleasure of working with this fantastic group of people. We were actually missing a few, because we had two fellows on our committee: one gave birth a week before the last meeting, and one was so far along in her pregnancy that her doctors told her she couldn't travel anymore. So it was really a great group of people that came together, and a big, big thank you to this group. I know some of you are here, so wave your hands.
There you go, at least I have a few of you here. So thank you, thank you, thank you. Quite often people ask me how the milestones are created, and I say, it's not me, it's them. So just remember, it really is a great process, and I think everybody walks away from those meetings feeling like they've learned something new; I hope they all enjoyed it as much as I did.

What I want to do this afternoon is talk very briefly about the framework we used for developing the milestones. By the way, I don't know if everybody's aware or not, but you are getting specialty-specific milestones: later this summer, you are going to see specific sleep medicine milestones. It's no longer going to be the sort of generic set you've been using for the last number of years, so I want to make sure you're all well aware and prepared for that. We're going to talk about how the milestones are used, what we call harmonized milestones, and then, again, the supplemental guide. Is there anybody in here who does not know what milestones are? Oh, good.

So, essentially, when we started working with milestones, we really wanted to make sure that everybody understood that what we're talking about is the points of development as the fellows go through their program: how are they growing in each of these different competencies? Now, in a one-year program, sometimes that's a little harder to detect and measure, but we want to make sure we're doing our best.

Now, this is one of my favorite slides to show, for a lot of reasons. First of all, almost everything on here we had assumed would happen with the milestones; we first created this slide back in 2011, and what's great is that almost all of it has already demonstrated itself back to us. When we think about the purposes and implications, a couple of important things. ACGME, of course, is a big stakeholder in the milestones, and we use the milestones for a couple of things. One is quality improvement: we use them to look back at all of our other policies and see if there are things that need to be changed. Now, it didn't happen so much for sleep medicine this time, because of the milestones you've been using, but there were other specialties where, based on the milestone results, there were actual implications for their program requirements. Things were changed because we were seeing how people were performing on each of the different sub-competencies. We also do a lot of research with the milestones, and a lot of different types of research. If you're interested, we have a bibliography on our web page, not just of our research but of milestone research overall; there are over 200 peer-reviewed articles out there now on the milestones from the last number of years.

Of course, the milestones are used for accreditation, but only in terms of yes or no: yes, you've submitted your milestones, or no, you have not. The review committee never sees the data you submit, and we do that on purpose. If the committee were to see your data, there'd be a lot more hesitancy about graduating somebody not at a level four, because then there's that little bit of pressure: oh, well, if I don't give them a four, are they going to accredit me? We just want to take all of that away, so the review committee never actually sees your data.
Now, if there seems to be an issue with your data, we will catch that in the milestones department; we'll look at it, and we might call and ask you a question about it, but it never goes beyond that. It never goes back to the review committee.

U.S. training programs are, of course, another big stakeholder in the milestones. One of the things that happened at the beginning with the milestones, and you might see it again when the new ones come out, is that the milestones are very useful in finding curricular gaps and assessment gaps. If there's something in the milestones that you're not assessing, well, it's going to be kind of hard to give it an evaluation. When the changes come out, there will probably be some things where you'll have to look back at your assessment tools again to see whether or not you have gaps.

The two things we've heard over and over from program directors are that the milestones are very helpful for two specific types of residents. One of them everybody probably knows: you find out sooner if you need to do remediation or if you have somebody who's struggling, because, especially if you're doing your milestones quarterly, if you find a problem you have enough time to remediate and help them catch up. If you don't find out until six months, well, in a one-year program that's quite often pretty late; unless there's something egregiously wrong, that person is probably going to end up completing the program. The other place it's helped is with those trainees who are so far above and beyond their peers that you're able to find them other things to do, or maybe lead them a little more into different types of research or different types of clinical experience, because they've already mastered some of the things you would have expected everybody to learn over time. The reason that is so important is that the research (in all other areas; it's never actually been done in medicine, which is very interesting to me) shows that when you have somebody who's an overachiever and they're not challenged, they actually stop learning. We need to make sure that doesn't happen, so you want to keep challenging all of your folks all the way through.

Now, for residents and fellows, the milestones have had a really big impact. We did a lot of research before and after the milestones came out, and residents and fellows all said that not only did they receive more feedback after the milestones came out, they got better, more specific feedback. They knew what to do with it, because they had something they could point to in the milestones and say, oh, this is what I'm not doing yet; they knew what they had to do in order to improve. At the same time, if you use the milestones for self-assessment, if you ask your fellows to assess themselves on the milestones, it teaches them how to better gauge what their actual knowledge, skills, and abilities are. That's really important, because after they get into practice, they're going to have to continuously monitor that. They have to keep learning all the new things that come through, whether it's new treatment protocols, new drugs, or new equipment.
They're going to have to continue to learn, and if they're not able to assess themselves correctly ahead of time, they're going to struggle with that when they get out into practice.

The last stakeholder group is the certification boards. Now, to be really clear: no, there is not a single board that uses the milestones for any type of decision making. We have been doing a lot of research with the certification boards, whether it's looking at in-training exams or certification exams, or, for a group that has written and oral exams, looking at all of those to see where the differences are. We do not want the milestones to be used for any high-stakes decision making. When we hear about that happening, we actually contact the organization that's trying to use them that way and encourage them not to. A recent example is the state of California. For anybody transferring in, if they did not have all fours, even if they were only a PGY-1, California wasn't letting them have a license, because the state didn't have a trainee license. Well, their trainee license goes into effect, I think, July 1; it's either July 1 or January 1. So now, hopefully, that problem is going to go away. But they were actually using the milestones to stop people from transferring in, at least in internal medicine; that was where we got all of our complaints from. Sometimes we hear things about other organizations. The Joint Commission sometimes starts talking about milestones, and we go right back to them and say, hey, no, you can't do this, and here's why. So if we know about it, we do everything we can to stop it, because it changes the nature of, and the reasoning behind, why we actually do milestones, which is formative assessment.

When the milestones were created, they really came out of a couple of different areas, and it goes back to this idea of curriculum and assessment. Curriculum has always fed into assessment, and assessment has always fed into curriculum; it's a circle in which each one impacts the other. Well, in the late 90s, when the competencies first came out, and then, of course, the milestones, that gave a little more feedback and input into curriculum and assessment. It gave you different ways to think about what learners needed to be learning and how they needed to be evaluated. We're now able to take all of this and start looking at it in light of the Quadruple Aim. Now, this is important for a couple of reasons. One of the things we're just now starting to measure is better outcomes for our patients. This is something we haven't been able to do before, because we're just now getting graduating cohorts that have to report outcomes, primarily in the surgical specialties; there are not a lot of other specialties that report outcomes on an ongoing basis. But we're going to be looking at whether or not the use and assessment of the milestones has actually done anything to change what those outcomes look like once people are out in practice.

All of the milestones are based on the Dreyfus developmental model. For anybody who knows the model, ours is tweaked a little, simply because to make the model fit we had to change some of the ways we think about it.
And I think this is going to be a really important time in a specialty like sleep medicine. Although we call everything levels (we have levels one, two, three, four, and five), what they really relate to is this: level one is a novice, somebody who has very minimal knowledge about the specialty they're learning. With a novice, you have to make sure your learner is following the rules; you tell them where to stand and what to do. Then they become an advanced beginner; that's when they're starting to be able to do a few things on their own. Then they become competent. Competent is really where they're starting to do that dual-process thinking: they're not only able to think about the problem in front of them, but they're able to think about the consequences of various decisions later on down the road, and to think the process out two, three, four steps ahead. Now, at level four, which is proficient, they're able to do those same things, but for really complex patients, patients with really complex problems that they have to think more broadly about. More importantly, at level four, when they're proficient, they're able to live with ambiguity. They understand that they're not always going to know the answer, and they're able to keep moving forward in spite of that. Now, expert is where we start to change our definition a little, because typically when you think of expert, you probably think of some of you in this room: you're experts in sleep medicine. In this case, we're thinking about the expert fellow. You've all had that fellow where, when you think back, you think, oh, there was this person; they're the one I'm always going to remember and compare everybody else to for the rest of my career. So you've all had those fellows who truly are experts for being a learner, and that's the definition we're using for level five moving forward.

What's really great about this is that it aligns with a lot of other developmental models out there, models that have all been studied within medicine, whether we're looking at learner behavior, transition to practitioner, or level of supervision. All of them have five levels, and all of them map onto one another really well; in some cases, we even use them together. And I am happy, by the way, to make these slides available for everybody afterwards, so you can absolutely have them.

Now, what have we been doing and learning over the last number of years? A few things. One: too many sub-competencies; there were just too many things you had to evaluate and submit. The language was too complex; too much edu-speak was included in a lot of them. Sometimes there was too much in each milestone set; when it starts going over a full page, it gets to be a little long and hard to evaluate. We heard that more people wanted to participate, which was something we had heard very early on. And we also have validity evidence. Now, I don't actually have evidence for sleep medicine, but with the evidence we have for other specialties, we've been finding the same results in every specialty where we have done the evaluations.
And so we're fairly confident that we can apply that evidence to all of our specialties, because we're all using the same models. We also heard from DIOs and the ACGME public members that there was a lot of dissatisfaction with the milestones outside of patient care and medical knowledge. This was primarily for a couple of reasons. It just sort of seemed arbitrary: why does this specialty have to be accountable, and this specialty doesn't? Why does this specialty have to understand cost-effectiveness, and this specialty doesn't? And so we went back and did a study, and it was quite interesting. This was just looking at the residency level, so the 26 core residency specialties, along with the transitional year, and we found that self-directed learning was written out in 88 different ways. Do you think anybody can come up with 88 genuinely different ways to talk about self-directed learning? That's what we thought. Especially when there were more than 200 different ways to describe professionalism. These are areas where, obviously, there's a lot of commonality, regardless of what specialty you're in. Because of this study and what we had heard, we created what we're calling harmonized milestones. We had four separate groups, each interprofessional and interdisciplinary. One group came together for each of those other four competencies and created a set of sub-competencies, with milestones, that we're going to be able to use for everybody. These went out for public comment, and we had well over 1,000 responses. There was only one sub-competency that we had to go back and make changes to; beyond that, everybody felt they were good for everybody. What's important about these harmonized milestones is that the intent is not for every specialty to copy them. We say harmonized, not identical, because there are differences in each specialty. For example, if I say communicating with patients, what that looks like in internal medicine or neurology is very different from what it looks like in pathology, right? What they have to be able to communicate, and how they communicate it, is going to be very different. And so we wanted each specialty to make those kinds of changes to the milestones as they looked at them. They had to be right for sleep medicine. So here is what we ended up doing. We made sure there was enough breadth of sub-competencies; we didn't want groups to go to the opposite extreme and have as few as possible without them carrying much meaning. We made sure there was enough specificity for understanding; the first time around, a lot of people felt that some of the milestones were much too vague, so we wanted to make sure they were clear and understandable. There are no more than three rows of information in any one sub-competency; in other words, where you previously had five or six different items in a single level, you will now have only three, okay? So that has been cut down dramatically. We also made sure that all of them are complete developmental progressions; in other words, you won't have something that only shows up at level three, with nothing before it and nothing after it. For all of the content, all the individual milestones, there is developmental language going across the scale, so it's going to be a little bit easier for you to identify whether somebody has a problem in any particular area. And we have moved to all positive language, so levels one and two are now truly levels one and two.
All positive language means the critical deficiencies level has gone away. That being said, there is a box at the bottom that you can check for critical deficiencies. So if you have somebody who truly is deficient, who truly cannot do level one, you can still mark them as having critical deficiencies. Although I'm pretty sure there were few or no critical deficiencies marked in sleep medicine, at least when I looked at the data about a year and a half ago. Like I said, they're all specialty specific, and they include those harmonized milestones across the other four competencies. What that means for you is that you might have shared tools within your institution. Because, again, remember this came from the DIOs, and so it's something your DIOs are going to be able to look at across all of their programs, especially if you're in a large institution. I think of someplace like the University of Colorado, where I think they have 106 or 110 accredited programs. So again, it just makes things a lot easier in terms of assessment tools as well. Now, I am not a content expert, but I'm going to give you the new sub-competencies that you have. Under patient care: gather and synthesize information from sleep medicine patients across the lifespan; utilization of diagnostic tools (I'm not going to read the last part; it's the same in all of them); interpretation of physiologic testing; and management plan for sleep medicine patients. Under medical knowledge: sleep medicine clinical science and therapeutic knowledge for sleep disorders. So as you can see, they're very specific to the work that you're going to be doing. That slide is a duplicate. But these are now your harmonized milestones. Under systems-based practice: patient safety and QI, system navigation for patient-centered care, and physician role in healthcare systems. Under practice-based learning and improvement: evidence-based and informed practice, and reflective practice and commitment to personal growth. Under professionalism: professional behavior and ethical principles, accountability and conscientiousness, and self-awareness and help-seeking. Under communication, you have patient- and family-centered communication, barrier and bias mitigation, interprofessional and team communication, and communication within healthcare systems. Now, what's great about these harmonized milestones is that for most of them, if you can demonstrate that you're doing them, you're meeting the brand-new common program requirements. We can show you the new common program requirement that corresponds to each of these sub-competencies. So in a lot of ways, this is actually going to cross over and, I think, help you out with some of those changes if you haven't yet started to think about them. Now, what's also fun with the milestones is that this time around, we've also created what's called the supplemental guide. The whole purpose of this is to help guide you. In the assessment session, we talked a lot about creating shared mental models, and we're trying to get you started with this. We're giving you examples for each level. We're giving you assessment methods that you can use for each one. And we're giving you some resources that can help you think about how to evaluate better, along with remediation and other resources that you can send people to. There are five parts to the supplemental guide within each one, and I know you can't read this slide; it's not really intended to be readable here.
But the way the supplemental guide is laid out, the milestones are on the left and the examples are on the right next to them. This is actually one of your pages. There is an extra line in here called curriculum mapping; this is something you can fill in if you want to. Now, to be really clear about this: these are just examples being given to you by the committee. We are making this document available to you as a Word document so that you can go back, sit down with your CCC or with your faculty, and come up with the examples that fit your program. So let's say, for example, your program is a little more pediatrics focused, and your fellows are seeing more pediatric patients than our examples reflect. You would want to go through as a group and come to consensus about what you expect to see at each of these levels in your practice, all right? In your institution, in your program, what is it that you expect to see? When we rolled out these supplemental guides in our international programs, most of those programs reported that their CCCs saved almost half the time, because they had already decided ahead of time what a learner looked like at each of those levels. That made the process a lot easier, and they could spend more time discussing what was next for the learner instead of arguing about whether it's a three or a three and a half. So again, take these supplemental guides and review them with your CCC, your faculty, and your learners. The more people who know about them, the better. When you can have a shared mental model across the entire 360 group that's been working with these learners, you're only going to make the experience better for the learners. They're going to get more out of it, and they're going to be able to take it with them and keep moving forward even after they're done with your program. You're also going to want to think about your assessment methods: which tools are you using, and when do you want to evaluate? You don't necessarily want to evaluate every milestone set every time you do a monthly or a global evaluation, so you really want to think that through going forward. We talked about that earlier. Now, the timeline; this is really important for you. The milestones will be out for public comment, I believe, at the end of this month; otherwise, it will be early to mid-July. The latest is mid-July. We plan to have the final version published this fall, and it will take effect starting with your next academic year, the 2020-21 academic year. Your responses are really critical to this process. If we only have 15 people respond to the survey, that doesn't give the group much input into whether they need to make any changes. We ask you four questions for each of the sub-competencies. We ask you if the order is correct; in other words, is one to five in the right order? We ask if it can help you determine the differences between two people with different levels of competency. We ask you if you know how to assess it. And then we ask you if the supplemental guide was helpful. And we do that for each of the sub-competencies. I recommend that you go ahead and open up the supplemental guide, because, again, the great thing is that the milestones are right there; you don't have to go back and forth between two documents. Take some time to read through it, and then go back and answer the questions.
Send the link to your faculty and to your fellows, and ask them to review it. Again, the more input we get, the more we can make sure that we have a tool that's useful for everybody. We don't want this to be something you feel was simply handed to you, something you had to do. We want you to have a voice, so let us know what you think.
Video Summary
In the video, Laura Edgar, the Executive Vice President for Milestones Development at ACGME, provides an introductory course on assessment in educational programs. She emphasizes the need for understanding and effectively using the assessment system. Edgar discusses various assessment tools, including direct observation, online training, multi-source feedback, and chart audits. She emphasizes the importance of proper training for using these tools and agreeing on what is being measured and how it is being used. Edgar also highlights the significance of involving residents or fellows as active participants in the assessment process to increase motivation and engagement.

Edgar emphasizes the need for reliable, valid, and feasible assessment tools, suggesting a mix of quantitative and qualitative methods. She encourages providing comments along with ratings to give meaning to the numbers and breaking down complex assessments into smaller parts. Edgar also recommends involving learners in self-assessment and using their feedback to create individualized learning plans. She explores strategies to make assessment tools more practical and efficient, considering the challenge of assessment overload.

Furthermore, Edgar addresses the process of dismissing a learner, stressing the importance of consensus among faculty members and proper assessment and remediation plans. The video introduces harmonized milestones as a common framework for assessing competencies across different specialties. Edgar encourages program directors to align their assessment methods with the milestones using the provided supplemental guide, which offers examples and resources.

The video concludes by mentioning upcoming changes to the milestones and inviting viewers to provide feedback through a survey to ensure their applicability to specific specialties.

Credits: The video features Laura Edgar, the Executive Vice President for Milestones Development at ACGME, as the speaker.
Keywords
assessment
educational programs
assessment tools
direct observation
online training
multi-source feedback
chart audits
training
measurement
residents
fellows