Abstract: The goal of ophthalmology residency training is to produce competent ophthalmologists. Appropriate assessments must be employed to ensure this goal is met. Valid and reliable workplace-based assessments are designed to assess competence in the many domains required of a good ophthalmologist. These assessments increase standardization and objectivity as compared to simple observational feedback. When used appropriately, workplace-based assessments not only provide measures of competence but also facilitate effective formative feedback and enhance learning.
Desired physician competencies have been defined by both the Royal College of Physicians and Surgeons of Canada and the United States’ Accreditation Council for Graduate Medical Education (ACGME) (1,2). Competence can be defined as “the ability to do something well.” The goal of ophthalmology residency training is to produce competent ophthalmologists. The Royal College developed the CanMEDS framework, which describes the abilities required of a competent physician (Table 1) (1). The ACGME’s Outcomes Project described six general competencies that every physician should achieve (Table 2) (2).
Table 1 The CanMEDS roles (1)
Medical expert: possesses the knowledge and skills required to provide up-to-date, ethical, and resource-efficient clinical care. This is the central role of physicians and requires all of the roles listed below
Communicator: able to effectively manage the doctor-patient relationship
Collaborator: able to work effectively in the health care team to provide optimal patient care
Manager: able to organize practices, allocate resources appropriately, and contribute to the effectiveness of the healthcare system
Health advocate: able to advance the health and well-being of patients, communities, and populations
Scholar: able to demonstrate life-long learning principles to enhance professional activities, create and apply new medical information, and educate students, patients, and peers
Professional: practices ethically and maintains high standards of personal behavior
Table 2 The ACGME general competencies (2)
Patient care: residents must be able to provide patient care that is compassionate, appropriate, and effective for the treatment of health problems and the promotion of health
Medical knowledge: residents must demonstrate knowledge of established and evolving biomedical, clinical, epidemiological, and social-behavioral sciences, as well as the application of this knowledge to patient care
Practice-based learning and improvement: residents must demonstrate the ability to investigate and evaluate their care of patients, to appraise and assimilate scientific evidence, and to continuously improve patient care based on constant self-evaluation and life-long learning
Interpersonal and communication skills: residents must demonstrate interpersonal and communication skills that result in the effective exchange of information and collaboration with patients, their families, and health professionals
Professionalism: residents must demonstrate a commitment to carrying out professional responsibilities and an adherence to ethical principles
Systems-based practice: residents must demonstrate an awareness of and responsiveness to the larger context and system of health care, as well as the ability to call effectively on other resources in the system to provide optimal health care
Competence cannot be assumed simply because one completes a training program. Medical knowledge is typically assessed with written or oral examinations, but most competencies (e.g., surgical skill) cannot be adequately assessed in this way. Workplace-based assessments (WPBA) are structured assessment tools designed to objectively assess surgical skill, patient care, professionalism, and communication ability. WPBA usually consist of rubrics, ideally with behavioral descriptors at each rating level. A rubric is an explicit set of criteria used to assess a particular skill; it is a tool that can help one give timely, specific, structured feedback. Good rubrics consist of three parts: (I) dimensions (e.g., steps of a surgical procedure); (II) levels (e.g., scores of 1–5, or novice, beginner, advanced beginner, competent, expert); and (III) behavioral descriptors (what it means to perform at a certain level on any of the dimensions). For example, an assessment rubric for cataract surgery might include a dimension of prepping and draping the patient, levels from 1–5, and descriptions of exactly what behavior is necessary to earn each score. Types of WPBA include rubrics for directly observed procedural skills, rubrics for directly observed patient care, and multisource feedback (360-degree evaluation).
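To make this three-part structure concrete, the sketch below (in Python, and purely illustrative; the dimension name, levels, and descriptor wording are invented examples rather than text from any published rubric) shows one way a rubric’s dimensions, levels, and behavioral descriptors might be represented and used to justify an assigned score.

```python
# Illustrative sketch of a three-part rubric: dimensions, levels, and
# behavioral descriptors. All wording below is hypothetical.
from dataclasses import dataclass


@dataclass
class Dimension:
    name: str                    # e.g., one step of a surgical procedure
    descriptors: dict[int, str]  # level -> behavior required to earn that level


# Hypothetical dimension for cataract surgery; only a few levels are written
# out here for brevity, though ideally every level would have a descriptor.
draping = Dimension(
    name="Prepping and draping",
    descriptors={
        1: "Unable to prep or drape without the supervisor taking over",
        3: "Completes prep and drape with occasional prompting",
        5: "Completes prep and drape independently, maintaining sterility",
    },
)


def report(dimension: Dimension, observed_level: int) -> str:
    """Return the behavioral descriptor that justifies the assigned level."""
    if observed_level not in dimension.descriptors:
        raise ValueError(f"No descriptor defined for level {observed_level}")
    return f"{dimension.name}: {observed_level} ({dimension.descriptors[observed_level]})"


print(report(draping, 3))
```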
WPBA should adhere to important assessment principles, including brevity balanced with reliability and validity, testing the application of knowledge, and covering the spectrum of competence required (3). The majority of completed WPBA should be shared with the resident, with feedback designed to improve performance; this is called teaching! Fortunately, a variety of ophthalmology-specific WPBA have already been developed to assess the competencies described in Tables 1 and 2.
Several WPBAs of surgical skill have been devised. Cremers and associates developed the “Objective Assessment of Skills in Intraocular Surgery” (OASIS), a one-page objective evaluation form to assess residents’ skills in cataract surgery (4). The form is completed by an evaluator who directly observes the surgical procedure and includes objective data such as wound placement and size, phacoemulsification time, and total surgical time. They showed that the OASIS had both face and content validity. To complement this objective assessment, the same group developed a subjective rating of surgical skills named the “Global Rating Assessment of Skills in Intraocular Surgery” (GRASIS) (5). This one-page form allows the evaluator to assign scores from 1–5, based on a behaviorally anchored rubric, to domains such as preoperative knowledge, microscope use, instrument handling, and tissue treatment, in addition to seven other areas. Thus, the combination of the OASIS and GRASIS provides objective and subjective evaluation of surgical skill. Feldman and Geist described the Subjective Phacoemulsification Skills Assessment, an evaluative instrument designed specifically for intraoperative assessment of resident phacoemulsification cataract extraction (PCE) surgery (6). This form delineates PCE into overall performance and specific steps of the procedure [e.g., capsulorhexis, hydrodelineation, intraocular lens (IOL) implantation]. Performance is graded with a rubric defining a good outcome at each step and asking the evaluator to rate agreement on a 1–5 spectrum from strongly agree to strongly disagree. They were able to show a degree of inter-rater reliability. The Royal College of Ophthalmologists in the United Kingdom has developed an extensive number of WPBAs called either direct observation of clinical skills (DOCS) or objective structured assessment of technical skills (OSATS) (7). These WPBAs are designed to cover all important procedures and surgeries in ophthalmology. As with the other WPBAs described, the rubrics do not contain behavioral descriptors for every rating level, which leaves the assessment significantly subjective in nature.
Saleh and colleagues described an assessment tool called the “Objective Structured Assessment of Cataract Surgical Skill” (OSACSS) (8). This tool breaks the phacoemulsification procedure down into 20 steps that are scored on a 5-point Likert scale. The scale anchors are: 1= “poorly or inadequately performed”, 3= “performed with some errors or hesitation”, and 5= “performed well with no prompting or hesitation”. There are no scale anchors for scores of 2 or 4. An international panel of authors modified the OSACSS, producing a globally applicable rubric with levels based on the Dreyfus model of skill acquisition (novice, beginner, advanced beginner, competent, and expert) and with behavioral anchors for each level in each step of the surgical procedure (9). Once drafted, content and face validity were achieved by having an international panel of 15 experts review the draft instrument and provide feedback. After incorporating suggestions from the international panel, a final document, the ICO-Ophthalmology Surgical Competency Assessment Rubric (OSCAR)—phacoemulsification, was produced (9). In a similar fashion, internationally applicable assessment tools for extracapsular cataract surgery (ICO-OSCAR:ECCE) (9), small incision cataract surgery (ICO-OSCAR:SICS) (10), lateral tarsal strip surgery (ICO-OSCAR:LTS) (11), strabismus surgery (ICO-OSCAR:strabismus) (12), and pediatric cataract surgery (ICO-OSCAR:pedscat) (13) were developed. Furthermore, the ICO-OSCAR:phaco and ICO-OSCAR:strabismus tools have been shown to have inter-rater reliability (14,15). Similar tools for trabeculectomy (ICO-OSCAR:trab), panretinal photocoagulation (ICO-OSCAR:PRP), and vitrectomy (ICO-OSCAR:Vit) were in press at the time this manuscript was prepared. More recently, another cataract surgery assessment tool was developed in Canada and was shown to have a degree of validity and reliability (16). It was not created by an international panel, nor does it have a specific behaviorally anchored rubric.
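As an illustration only, the following Python sketch shows how per-step ratings on such a 5-point scale, with written anchors only at 1, 3, and 5, might be recorded and totaled; the step names are abbreviated placeholders and are not the instrument’s actual 20 items.

```python
# Hypothetical tally of per-step ratings on an OSACSS-style 5-point scale.
# Anchor wording for 1, 3, and 5 follows the published scale; steps are placeholders.
ANCHORS = {
    1: "poorly or inadequately performed",
    3: "performed with some errors or hesitation",
    5: "performed well with no prompting or hesitation",
}

ratings = {
    "wound construction": 4,
    "capsulorhexis": 3,
    "hydrodissection": 5,
    # ...the remaining steps of the 20-step instrument would follow
}

total = sum(ratings.values())
maximum = 5 * len(ratings)
print(f"Global score: {total}/{maximum}")
for step, level in ratings.items():
    label = ANCHORS.get(level, "no written anchor for this level")
    print(f"  {step}: {level} ({label})")
```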
The ICO-OSCAR assessment tools serve several purposes: (I) they are internationally applicable, as comments from an international panel of experts were used to adapt them and make them flexible to any setting; (II) they decrease the subjectivity of the assessment by clearly defining for the assessor what behavior must be observed at each level of proficiency; and (III) the rubric clearly communicates to the learner what is expected to attain competence, so the tools can be used for both assessment and teaching.
The Ophthalmic Clinical Evaluation Exercise (OCEX) is completed by a teaching physician who observes the resident performing a patient history and examination and then listens to the case presentation (17). The teaching physician completes scoring in 33 categories that rate the resident’s ability to communicate effectively, perform a history and examination, and synthesize the information into a differential diagnosis and plan. Importantly, a rubric that describes the behavior necessary to achieve each grade on the OCEX was developed. The OCEX has been shown to have content validity and inter-rater reliability (17,18). It was not developed by an international panel and thus may need to be modified to reflect cultural differences; the ICO is currently modifying the OCEX to be internationally applicable. The OCEX is a valid and reliable WPBA (at least in North America) for assessing the competencies of patient care, medical knowledge, and communication skills. It is available in multiple languages on the ICO website (www.icoph.org). The Royal College’s WPBA handbook referenced above also contains rubrics for assessing a variety of patient care competencies (7). As with the WPBAs for surgical skill, there are no rubrics with behavioral descriptors for every category.
Professionalism and communication skills can be difficult to assess. Traditionally, the teaching faculty assess these competencies in addition to procedural skill and patient care. However, traditional evaluators may not be best suited to assess these competencies, as residents are usually on their best behavior around them. Therefore, multisource (360-degree) WPBA are needed to provide residents feedback regarding professionalism and communication skills. “Multisource” refers to who does the evaluating: in addition to teaching faculty, nurses, assistants, patients, and peers serve as evaluators. Of course, the questions on a multisource WPBA are tailored to the assessor; a nurse or assistant would not rate a physician’s medical knowledge but rather their professionalism and communication skills. Probyn and associates used a multisource WPBA and also asked for resident self-assessment (19). They found that self-assessment scores were significantly lower than multisource scores. Interestingly, but not surprisingly, a teaching physician was more likely than a secretary or program assistant to rate the resident highly in professionalism. This emphasizes the importance of obtaining information about professionalism and communication skills from someone other than the resident’s supervisor. Jagadeesan and associates have shown that their patient satisfaction survey can discriminate levels of resident communication skill and thus may be useful to assess this competency (20). Internationally, cultural differences may produce difficulties with this type of tool. It is crucial that the evaluators completing a multisource WPBA believe it is anonymous and that the information will be used to improve the young doctor’s performance. To my knowledge, no internationally valid 360-degree WPBA exists, and thus the ICO has developed one in a manner similar to the development of the ICO-OSCARs (submitted for publication at the time this manuscript was written).
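As a purely illustrative sketch (the rater groups, domains, and scores below are invented and not drawn from any published instrument), the following Python shows one way multisource ratings might be aggregated by rater group, so that individual responses remain anonymous while group-level means, including self-assessment, can still be compared.

```python
# Hypothetical aggregation of multisource (360-degree) ratings by rater group.
# Only group-level means are reported, preserving the anonymity of individual raters.
from collections import defaultdict
from statistics import mean

# (rater_group, domain, score on a 1-5 scale) -- illustrative data only
responses = [
    ("faculty", "professionalism", 5),
    ("nurse", "professionalism", 3),
    ("nurse", "communication", 4),
    ("patient", "communication", 4),
    ("self", "professionalism", 2),
    ("self", "communication", 3),
]

by_group: dict[tuple[str, str], list[int]] = defaultdict(list)
for group, domain, score in responses:
    by_group[(group, domain)].append(score)

# Report group-level means only, never individual responses
for (group, domain), scores in sorted(by_group.items()):
    print(f"{domain:15s} {group:8s} mean={mean(scores):.1f} (n={len(scores)})")
```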
WPBA provide a more objective measure of whether residents have become competent. In addition, they improve performance by facilitating effective, specific, and timely feedback. Ideally, every residency training program would at least utilize WPBAs of procedural skills, patient care, professionalism, and communication skills. WPBA of resident performance are essential both for teaching and for demonstrating that graduating residents are able to function as competent ophthalmologists.