Paper Title
Transforming Gait: Video-Based Spatiotemporal Gait Analysis
Paper Authors
Paper Abstract
Human pose estimation from monocular video is a rapidly advancing field that offers great promise to human movement science and rehabilitation. This potential is tempered by the smaller body of work ensuring that the outputs are clinically meaningful and properly calibrated. Gait analysis, typically performed in a dedicated laboratory, produces precise measurements including kinematics and step timing. Using over 7,000 monocular videos from an instrumented gait analysis lab, we trained a neural network to map 3D joint trajectories and the height of individuals onto interpretable biomechanical outputs, including gait cycle timing, sagittal plane joint kinematics, and spatiotemporal trajectories. This task-specific layer produces accurate estimates of the timing of foot contact and foot off events. After parsing the kinematic outputs into individual gait cycles, it also enables accurate cycle-by-cycle estimates of cadence, step time, double and single support time, walking speed, and step length.
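To make the last step concrete, below is a minimal sketch (not the paper's code) of how cycle-by-cycle spatiotemporal parameters such as cadence, step time, single and double support time, and walking speed can be derived once foot contact and foot off event times have been detected. The function name, argument names, and the assumption that stride lengths are supplied per cycle are all illustrative, not taken from the paper.

```python
import numpy as np

def spatiotemporal_params(contacts_l, offs_l, contacts_r, offs_r, stride_lengths_m):
    """Illustrative per-cycle gait parameters from event times (seconds).

    contacts_l / contacts_r : sorted numpy arrays of left/right foot contact times
    offs_l / offs_r         : sorted numpy arrays of left/right foot off times
    stride_lengths_m        : assumed stride length per left gait cycle, in metres
    """
    params = []
    # A left gait cycle spans two consecutive left foot contacts.
    for i in range(len(contacts_l) - 1):
        cycle_start, cycle_end = contacts_l[i], contacts_l[i + 1]
        cycle_time = cycle_end - cycle_start
        cadence = 120.0 / cycle_time  # steps per minute (two steps per cycle)

        # Step time: from left foot contact to the opposite (right) foot contact.
        r_contact = contacts_r[(contacts_r > cycle_start) & (contacts_r < cycle_end)]
        step_time = float(r_contact[0] - cycle_start) if r_contact.size else np.nan

        # Single support on the left limb: right foot off until right foot contact.
        r_off = offs_r[(offs_r > cycle_start) & (offs_r < cycle_end)]
        single_support = (float(r_contact[0] - r_off[0])
                          if r_contact.size and r_off.size else np.nan)

        # Left stance time; total double support is stance minus single support.
        l_off = offs_l[(offs_l > cycle_start) & (offs_l < cycle_end)]
        stance = float(l_off[0] - cycle_start) if l_off.size else np.nan
        double_support = stance - single_support

        speed = stride_lengths_m[i] / cycle_time  # walking speed in m/s
        params.append(dict(cycle_time=cycle_time, cadence=cadence,
                           step_time=step_time, single_support=single_support,
                           double_support=double_support, speed=speed))
    return params
```

This sketch only encodes standard gait definitions (one gait cycle between ipsilateral foot contacts, single support bounded by contralateral foot off and foot contact); the paper's actual parsing of kinematic outputs into cycles may differ in detail.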