The Journal of Graduate Medical Education

We Need to Stop Drowning—A Proposal for Change in the Evaluation Process and the Role of the Clinical Competency Committee



Abstract

In 2012, Nasca and colleagues1 described proposals for the adoption of the Next Accreditation System, including assessment using milestones and a clinical competency committee (CCC). As programs have attempted to implement these changes, the time commitment for assessment and documentation has become significant. We need to consider a change in our approach to evaluations, for we are drowning in a sea of increasing real and perceived documentation requirements and must wade to shore.

I attended a program director boot camp at the American College of Cardiology meeting this past March. Program directors in cardiovascular disease fellowship training talked about the role of the CCC in determining trainees' progress on the subspecialty milestones. I had attended presentations on this topic sponsored by the Alliance for Academic Internal Medicine in 2012 and 2014. At both meetings, speakers described a system for transferring information from postrotational evaluations to a tool for competency-based assessment of entrustable professional activities. A common theme was that the process of incorporating evaluations into milestone-based assessments was complex and sometimes convoluted.2,3 Significant time requirements were reported: as much as 3 to 6 hours of staff and faculty work per trainee were needed to prepare data for submission to the CCC, much of it spent culling information from evaluation forms. CCC meetings added further time commitments, averaging 1 to 3 hours per meeting. This effort likely is not sustainable and, at the least, creates a sense of dread of the evaluation process.

We need to change our approach to evaluations and reestablish joy in teaching, including providing prompt formative feedback to our trainees. As program directors and coordinators, we bird-dog faculty to complete multipage evaluations after clinical rotations, requiring supervisors to assess trainee progress in processes ranging from medical interviewing to performing procedures. Evaluations include a space for narrative comments, but after faculty spend 5 to 15 minutes filling in check boxes, formative comments appear in only a minority of evaluations. Yet these comments are the most important tool for directing trainees' efforts to improve competency. We need to make evaluations less of a chore and more relevant to helping trainees become better physicians. This means we need to change our approach to evaluations and provide more concurrent feedback.

In the current system, preparing the material needed to evaluate progress along the milestones is often so time consuming that many CCCs evaluate each trainee only a few times per year. Feedback is most effective when it is given as areas for improvement are identified. The burden of the milestone-based evaluation process limits our ability to provide effective formative feedback. We need to change this.

In our program's efforts to deal with these issues, we have blown up and reinvented the evaluation system. Instead of asking faculty members to fill in the check boxes on an assessment tool after each rotation, we ask them to write down formative comments. This may be a single line describing a trainee's strengths and areas for improvement, or a paragraph describing the assessment of a core competency. We try to decrease the time requirements and hassle factors associated with performing evaluations. These efforts have resulted in more timely evaluations and higher completion rates.
To provide more timely and relevant feedback, our CCC meets every 4 to 5 weeks. We evaluate each class of fellows by academic year, which allows us to compare fellows at similar levels of training. We have a small program, encompassing 7 fellows and 13 key clinical faculty members. In exchange for increasing the frequency of our CCC meetings, we limit each meeting to 30 minutes. We have asked that key clinical faculty attend each meeting, and we usually have at least 7 faculty members in attendance.
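For a sense of scale, a back-of-envelope comparison can be drawn from the figures quoted above. The assumption of 2 review cycles per year under the traditional model (the "only a few times per year" noted above) and roughly 11 meetings per year at the new 4-to-5-week cadence are illustrative rather than reported, and the revised figure counts committee meeting time only.

% Back-of-envelope estimate (requires amsmath); illustrative only.
% Assumes 2 traditional review cycles per year and ~11 meetings per year
% at the 4-to-5-week cadence; all other figures are quoted in the abstract.
\begin{align*}
\text{Traditional model, per year} &\approx 2\,\bigl[\,7 \times (3\text{--}6)\ \text{h prep} + (1\text{--}3)\ \text{h meeting}\,\bigr] \approx 44\text{--}90\ \text{h}\\
\text{Revised model, per year} &\approx \tfrac{52}{4.5}\ \text{meetings} \times 0.5\ \text{h per meeting} \approx 6\ \text{h}
\end{align*}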
