About
The study was conducted in two sequential phases to evaluate the reliability and user experience of a GBDA tailored for community-dwelling older adults.
Phase 1: Reliability of digitalized Brief-BESTest assessment
In the first phase, participants performed a single balance assessment session during which both the clinician-administered Brief-BESTest and the digitalized Brief-BESTest were scored concurrently. This approach enabled direct comparison between clinical and automated assessments under identical task conditions.
Testing was conducted in a controlled indoor setting featuring a 1 m × 1 m, 10 cm-thick EVA foam mat (35D density) and safety handrails on three sides. Prior to the assessment, participants completed a baseline questionnaire collecting demographic data (age, sex), anthropometric measurements (height, weight), and fall history (past 12 months). Written informed consent was obtained from all participants.
During the assessment, a certified physical therapist delivered standardized verbal instructions and rated each task using the validated Brief-BESTest rubric (maximum score = 24). Simultaneously, the digitalized Brief-BESTest system recorded participants' movements with a monocular 4K camera and scored them via an algorithm that mirrors the original scoring criteria: torso and joint movements were analyzed in real time, and balance scores were computed automatically.
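The scoring algorithm itself is not published in this listing. The following is a minimal sketch, in Python, of how one camera-derived Brief-BESTest item (timed single-leg stance) could be mapped to an ordinal item score. The Frame structure, the sway and foot-lift thresholds, and the 0-3 cut-offs are all illustrative assumptions, not the system's actual parameters; per-frame keypoints are assumed to come from an upstream pose estimator (e.g., MediaPipe) run on the video stream.

```python
# Illustrative sketch only: maps pose keypoints to one Brief-BESTest item
# score. Thresholds and cut-offs are placeholders, not the study's values.
from dataclasses import dataclass
from typing import List, Tuple

FPS = 30  # frame rate of the webcam reported in Phase 2

@dataclass
class Frame:
    torso_xy: Tuple[float, float]  # mid-torso keypoint, normalized image units
    ankle_lift: float              # vertical lift of the raised foot

def single_leg_stance_seconds(frames: List[Frame],
                              lift_min: float = 0.05,
                              sway_max: float = 0.08) -> float:
    """Seconds the raised foot stays up before foot-down or excessive sway."""
    if not frames:
        return 0.0
    baseline_x = frames[0].torso_xy[0]
    held_frames = 0
    for f in frames:
        foot_down = f.ankle_lift < lift_min
        swayed = abs(f.torso_xy[0] - baseline_x) > sway_max
        if foot_down or swayed:
            break
        held_frames += 1
    return held_frames / FPS

def score_item(duration_s: float) -> int:
    """Map hold time to an ordinal 0-3 item score (placeholder cut-offs)."""
    if duration_s >= 20:
        return 3
    if duration_s >= 10:
        return 2
    if duration_s >= 2:
        return 1
    return 0
```

Item scores computed this way would be summed across tasks to yield the 24-point total that the text compares against the clinician's rating.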
To evaluate inter-rater reliability, a second trained clinician independently rated 20% of the sample. This concurrent-scoring design ensured consistent task execution while allowing the automated system's scores to be evaluated against expert clinician judgment (inter-method reliability).
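The listing does not state which agreement statistics were used. The sketch below shows two that fit this design: quadratic-weighted Cohen's kappa for ordinal item scores (clinician vs. automated system) and ICC(2,1) for paired 24-point totals. All data arrays are fabricated placeholders, and icc_2_1 is a hand-rolled Shrout and Fleiss implementation added purely for illustration.

```python
# Agreement statistics for a concurrent-scoring design (illustrative only).
import numpy as np
from sklearn.metrics import cohen_kappa_score

def icc_2_1(scores: np.ndarray) -> float:
    """Two-way random, absolute agreement, single-measure ICC(2,1).

    `scores` is an (n_subjects, k_raters) matrix of paired ratings.
    """
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # per-subject means
    col_means = scores.mean(axis=0)   # per-rater means
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)
    resid = scores - row_means[:, None] - col_means[None, :] + grand
    mse = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Placeholder data: clinician vs. automated item scores (ordinal 0-3)...
clinician_items = np.array([3, 2, 2, 1, 3, 0, 2, 3])
system_items    = np.array([3, 2, 1, 1, 3, 0, 2, 2])
kappa = cohen_kappa_score(clinician_items, system_items, weights="quadratic")

# ...and paired 24-point totals for a handful of participants.
totals = np.array([[20, 19], [14, 15], [22, 22], [9, 10], [17, 16]])
print(f"weighted kappa = {kappa:.2f}, ICC(2,1) = {icc_2_1(totals):.2f}")
```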
Phase 2: Impact of GBDA on User Experience
The second phase was a parallel-group randomized controlled trial assessing the impact of gamification on user experience. Participants were randomly assigned (1:1) to either the control group (digitalized Brief-BESTest) or the experimental group (GBDA) by a blinded researcher using a simple coin-toss randomization method. Testing was conducted in a 1 m × 3 m evaluation zone equipped with front, side, and rear safety railings and a centrally placed EVA foam pad (identical to Phase 1). The DBTS system included a display screen, a Logitech Brio 4K webcam (30 fps) for motion tracking, and a built-in speaker for voice prompts. A detachable, ergonomically designed user console, compliant with Chinese anthropometric standards, was mounted on the front railing for interface navigation (see Figure 2).
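The simple coin-style randomization described above can be expressed in a few lines. This is a generic sketch, not the researchers' actual procedure; the group labels and seed are illustrative. Note that simple randomization does not force equal group sizes.

```python
import random

def allocate(participant_ids, seed=42):
    """1:1 simple (coin-flip) allocation; seed fixed only for reproducibility."""
    rng = random.Random(seed)
    return {pid: rng.choice(["control", "GBDA"]) for pid in participant_ids}

print(allocate([f"P{i:02d}" for i in range(1, 11)]))
```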
In the control group, participants performed balance tasks following pre-recorded verbal instructions from a certified physical therapist. In the experimental group, tasks were presented via the GBDA interface, which included animated avatars, voice guidance, progress indicators, and real-time performance feedback. Each participant completed one practice trial per task to minimize learning effects, followed by the formal assessment. A 2-minute seated rest period was provided between tasks to reduce fatigue.
Immediately following the assessment, participants completed self-report measures on perceived exertion, intrinsic motivation, and intention for continued use. They then participated in a brief semi-structured interview exploring their perceptions of system usability and engagement. All interviews were audio-recorded and transcribed for thematic analysis. Participants received nominal compensation (equivalent to USD 10) upon study completion.
Enrollment: 40 participants in 3 patient groups
Data sourced from clinicaltrials.gov