Show simple item record

dc.contributor.author: Tan, Jean-Yin
dc.contributor.author: Ma, Irene W. Y.
dc.contributor.author: Hunt, Julie A.
dc.contributor.author: Kwong, Grace P.S.
dc.contributor.author: Farrell, Robin
dc.contributor.author: Bell, Catriona
dc.contributor.author: Read, Emma K.
dc.date.accessioned: 2021-11-24T16:14:42Z
dc.date.available: 2021-11-24T16:14:42Z
dc.date.issued: 2021-08-02
dc.identifier.citation: Tan, J.-Y., Ma, I.W.Y., Hunt, J.A., Kwong, G.P.S., Farrell, R., Bell, C. and Read, E.K. (2021) ‘Video recording in veterinary medicine OSCEs: Feasibility and inter-rater agreement between live performance examiners and video recording reviewing examiners’, Journal of Veterinary Medical Education, 48(4), pp. 485-491.
dc.identifier.issn: 0748-321X
dc.identifier.issn: 1943-7218
dc.identifier.uri: https://doi.org/10.3138/jvme-2019-0142
dc.identifier.uri: https://eresearch.qmu.ac.uk/handle/20.500.12289/11604
dc.description: Catriona Bell – ORCID: 0000-0001-8501-1697 https://orcid.org/0000-0001-8501-1697
dc.description: Item not available in this repository.
dc.description.abstract: The Objective Structured Clinical Examination (OSCE) is a valid, reliable assessment of veterinary students’ clinical skills that requires significant examiner training and scoring time. This article investigates the utility of video recording by scoring OSCEs in real time using live examiners, and afterwards using video examiners from within and outside the learners’ home institution. Using checklists, learners (n=33) were assessed by one live examiner and five video examiners on three OSCE stations: suturing, arthrocentesis, and thoracocentesis. When stations were considered collectively, there was no difference in pass/fail outcome between live and video examiners (χ2 = 0.37, p = .55). However, when stations were considered individually, station (χ2 = 16.64, p < .001) and the interaction between station and type of examiner (χ2 = 7.13, p = .03) demonstrated a significant effect on pass/fail outcome. Specifically, learners assessed on suturing by a video examiner had increased odds of passing that station compared with their arthrocentesis or thoracocentesis stations. Internal consistency was fair to moderate (0.34–0.45). Inter-rater reliability measures varied but were mostly moderate to strong (0.56–0.82). Video examiners spent longer assessing learners than live examiners (mean of 21 min/learner vs. 13 min/learner). Station-specific differences among video examiners may be due to intermittent visibility issues during video capture. Overall, video recording learner performances appears reliable and feasible, although time, cost, and technical issues may limit its routine use.
dc.description.uri: https://doi.org/10.3138/jvme-2019-0142
dc.format.extent: 485-491
dc.language.iso: en
dc.publisher: American Association of Veterinary Medical Colleges
dc.relation.ispartof: Journal of Veterinary Medical Education
dc.title: Video recording in veterinary medicine OSCEs: Feasibility and inter-rater agreement between live performance examiners and video recording reviewing examiners
dc.type: Article
dcterms.accessRights: none
dc.description.volume: 48
dc.description.ispublished: pub
rioxxterms.type: Book
rioxxterms.publicationdate: 2021-07-01
refterms.depositException: NA
refterms.accessException: NA
refterms.technicalException: NA
refterms.panel: Unspecified
qmu.author: Bell, Catriona
dc.description.status: pub
dc.description.number: 4
refterms.version: NA


Files in this item

There are no files associated with this item.
