Queen Margaret University: Research Repositories
    •   QMU Repositories
    • eResearch
    • School of Arts, Social Sciences and Management
    • Psychology, Sociology and Education
    • View Item

    Video recording in veterinary medicine OSCEs: Feasibility and inter-rater agreement between live performance examiners and video recording reviewing examiners

    Date
    2021-08-02
    Author
    Tan, Jean-Yin
    Ma, Irene W. Y.
    Hunt, Julie A.
    Kwong, Grace P.S.
    Farrell, Robin
    Bell, Catriona
    Read, Emma K.
    Metadata
    Show full item record
    Citation
    Tan, J.-Y., Ma, I.W.Y., Hunt, J.A., Kwong, G.P.S., Farrell, R., Bell, C. and Read, E.K. (2021) ‘Video recording in veterinary medicine OSCEs: Feasibility and inter-rater agreement between live performance examiners and video recording reviewing examiners’, Journal of Veterinary Medical Education, 48(4), pp. 485-491.
    Abstract
    The Objective Structured Clinical Examination (OSCE) is a valid, reliable assessment of veterinary students’ clinical skills that requires significant examiner training and scoring time. This article seeks to investigate the utility of implementing video recording by scoring OSCEs in real-time using live examiners, and afterwards using video examiners from within and outside the learners’ home institution. Using checklists, learners (n=33) were assessed by one live examiner and five video examiners on three OSCE stations: suturing, arthrocentesis, and thoracocentesis. When stations were considered collectively, there was no difference between pass/fail outcome between live and video examiners (χ2 = 0.37, p = .55). However, when considered individually, stations (χ2 = 16.64, p < .001) and interaction between station and type of examiner (χ2 = 7.13, p = .03) demonstrated a significant effect on pass/fail outcome. Specifically, learners being assessed on suturing with a video examiner had increased odds of passing the station as compared with their arthrocentesis or thoracocentesis stations. Internal consistency was fair to moderate (0.34–0.45). Inter-rater reliability measures varied but were mostly moderate to strong (0.56–0.82). Video examiners spent longer assessing learners than live raters (mean of 21 min/learner vs. 13 min/learner). Station-specific differences among video examiners may be due to intermittent visibility issues during video capture. Overall, video recording learner performances appears reliable and feasible, although there were time, cost, and technical issues that may limit its routine use.
    Official URL
    https://doi.org/10.3138/jvme-2019-0142
    URI
    https://eresearch.qmu.ac.uk/handle/20.500.12289/11604
    Collections
    • Psychology, Sociology and Education

    Queen Margaret University: Research Repositories
    Accessibility Statement | Repository Policies | Contact Us | Send Feedback

