About
The goal of this comparative blinded assessment study is to compare crowd-worker ratings with expert ratings in simulated robot-assisted radical prostatectomies (RARP). The main question it aims to answer is whether laypersons recruited as crowd raters can assess surgical performance in agreement with expert raters.
Full description
3.4 Video material

The participants will assess the videos using the assessment tool in a survey sent via E-boks. The surveys will be distributed as a URL from REDCap. All videos are stored in the 23video system, and a link to the videos will be included in the survey. The survey has been tested successfully on different devices.
The investigators will randomly choose videos from the third repetition from 5 novice surgeons, 5 experienced robotic surgeons, and 5 experienced robotic surgeons in RARP. The videos will be edited to a maximum length of 5 minutes, running from the start (0 minutes) to the 5-minute mark, where the video is stopped; each video therefore shows how far the surgeon has come after 5 minutes of simulated operation. A total of 4548 edited videos will be used for crowd-sourced assessment.
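A minimal sketch of this selection-and-trimming step, assuming the recordings are ordinary video files and that the widely used ffmpeg tool is available (the directory layout, file names, and group labels are hypothetical, not taken from the protocol):

```python
import random
import subprocess
from pathlib import Path

# Hypothetical layout: one directory of third-repetition recordings per group.
GROUPS = ["novice", "experienced_robotic", "experienced_rarp"]
VIDEOS_PER_GROUP = 5
CLIP_SECONDS = 5 * 60  # keep only the first 5 minutes

def select_and_trim(source_dir: Path, out_dir: Path) -> list[Path]:
    clips = []
    for group in GROUPS:
        candidates = sorted((source_dir / group).glob("*_rep3.mp4"))
        for video in random.sample(candidates, VIDEOS_PER_GROUP):
            out = out_dir / f"{video.stem}_5min.mp4"
            # Trim from 0:00 to 5:00; stream copy avoids re-encoding.
            subprocess.run(
                ["ffmpeg", "-y", "-i", str(video),
                 "-t", str(CLIP_SECONDS), "-c", "copy", str(out)],
                check=True,
            )
            clips.append(out)
    return clips
```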
To secure the response process of Messick's validity framework, all participants will be blinded to the identity and skill level of the surgeon in the recorded video. The experienced surgeons could potentially rate their own videos, which could threaten the validity of the response process, but as the videos are blinded, they will not know which videos are their own. In addition, there will be a significant time delay between performing the task and rating the videos, so it is unlikely that they will be able to identify their own videos. All videos will be given a randomly allocated identification number.
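One way to implement the random video IDs is to draw an unpredictable identifier per file and keep the unblinding key in a separate file held only by the principal investigator; a sketch under those assumptions (the key-file format is illustrative, not the study's actual procedure):

```python
import csv
import uuid

def blind_videos(video_files: list[str], key_path: str) -> dict[str, str]:
    """Map each edited video to a random ID; the original-to-ID key is
    written to a file accessible only to the principal investigator (RGO)."""
    key = {name: uuid.uuid4().hex[:8] for name in video_files}
    with open(key_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["original_file", "blinded_id"])
        writer.writerows(key.items())
    # Raters only ever see the blinded IDs, never the original file names.
    return key
```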
3.5 Video rating

Each participant will rate ten randomly chosen videos using GEARS (Global Evaluative Assessment of Robotic Skills). The participants will be given a randomized ID number, which is used to match the ten videos to the participant. They will be asked to evaluate each video on the five GEARS domains, each scored on a scale from one to five. After rating a video, the participant will be asked to answer 'yes' or 'no' to the question: 'Would you trust this doctor to operate on you, if you were to have your prostate removed using robot-assisted surgery?'. The participants will fill in the answers after the video rating in REDCap.
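The rating record itself is small: five GEARS domain scores (1-5) plus the yes/no trust answer. A sketch of the per-participant assignment and record, assuming the five domains commonly scored in video-based GEARS review (the sixth domain, autonomy, is typically dropped because it cannot be judged from video; whether this study uses that variant is an assumption):

```python
import random
from dataclasses import dataclass

# Domains usually retained for video-based GEARS scoring.
GEARS_DOMAINS = ("depth_perception", "bimanual_dexterity", "efficiency",
                 "force_sensitivity", "robotic_control")

@dataclass
class Rating:
    participant_id: str   # randomized ID matching videos to participant
    video_id: str         # blinded video ID
    scores: dict          # domain -> score, each 1-5
    would_trust: bool     # 'yes'/'no' trust question

def assign_videos(all_video_ids: list[str], n: int = 10) -> list[str]:
    """Draw the ten videos a participant will rate, without replacement."""
    return random.sample(all_video_ids, n)

def validate(rating: Rating) -> None:
    assert set(rating.scores) == set(GEARS_DOMAINS)
    assert all(1 <= s <= 5 for s in rating.scores.values())
```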
3.6 Evaluation questions

After the crowd raters finish the video ratings, they will receive a final questionnaire in REDCap asking their opinion about a possible future role as crowd raters, including time use and possible payment level (appendix 4).
3.7 Data collection

All data will be collected and stored in REDCap, a platform designed to store research data. All data will be pseudonymized: each participant will get a unique link known only to the participant and the principal investigator (RGO). Each participant can rate the videos only once. The data will be blinded by RGO prior to statistical analysis.
The expert panel will be invited by e-mail.
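The unique-link pseudonymization described above reduces to one unguessable token per participant embedded in the survey URL; a minimal sketch (the base URL is a placeholder, not the study's actual REDCap instance):

```python
import secrets

BASE_URL = "https://redcap.example.org/surveys/"  # placeholder URL

def make_participant_links(participant_ids: list[str]) -> dict[str, str]:
    """One unique link per participant. The token-to-participant mapping
    is stored only with the principal investigator, so exported ratings
    carry tokens rather than identities (pseudonymization)."""
    return {pid: f"{BASE_URL}?s={secrets.token_urlsafe(16)}"
            for pid in participant_ids}
```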
Inclusion and exclusion criteria

Separate inclusion and exclusion criteria apply to the two rater groups: laypersons and expert raters.
Enrollment: 151 participants in 1 patient group
Data sourced from clinicaltrials.gov