David Thomas

Associate Professor



720-848-0134


Department of Radiation Oncology
University of Colorado Denver | Anschutz Medical Campus
1665 Aurora Court, Suite 1032
Mail Stop F706
Aurora, CO 80045



Computer Vision-Assisted Collision Avoidance for Radiation Therapy


Background:

Collision avoidance is a key issue for patient safety in radiation therapy (RT). Non-coplanar treatment fields (P. Dong et al. 2013) and more complex immobilization devices (Dougherty et al. 2021) offer improved treatment quality, but come with an increased risk of collisions between gantry, couch, and patient (Nguyen et al. 2019). Various strategies based on the patient’s computed tomography (CT) imaging have been proposed to predict collisions (Miao et al. 2020), but because the CT scan covers only a limited length of the patient, the full-body position often remains unknown (Wang et al. 2021).

Methods:
We propose ‘Avatar-guided RT’ (AgRT), a computer vision (CV)-assisted collision-avoidance technique. AgRT uses patient-specific ‘avatars’ to detect and track patient positioning during treatment. The 3D patient pose is estimated from multiple 2D cameras using a state-of-the-art markerless motion-capture algorithm (J. Dong et al. 2019). The pose is then mapped to a patient-specific ‘avatar’, a posable skin mesh model based on a recently published realistic 3D model of human surface anatomy learned from >10,000 3D body scans (Osman, Bolkart, and Black 2020). Avatars can be fitted to CT-based surface measurements and account for the effect of gender and BMI on pose-dependent surface variation. The avatar can then be accurately monitored within a virtual LINAC environment to predict any potential collisions; a sketch of the underlying multi-view reconstruction step follows below.
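The abstract contains no code; as a rough illustration of the multi-view step described above, the following sketch triangulates a single body joint from calibrated 2D detections using the direct linear transformation (DLT). This is a generic reconstruction step under assumed camera geometry, not the published algorithm of J. Dong et al. (2019) or the AgRT implementation; all camera matrices and pixel coordinates below are hypothetical.

# Illustrative sketch only: triangulate one body joint from multiple
# calibrated 2D camera views via direct linear transformation (DLT).
import numpy as np

def triangulate_joint(proj_matrices, pixels_2d):
    """Recover a 3D joint position from two or more calibrated views.

    proj_matrices: list of 3x4 camera projection matrices P = K [R | t]
    pixels_2d:     list of (u, v) detections of the same joint in each view
    """
    rows = []
    for P, (u, v) in zip(proj_matrices, pixels_2d):
        # Each view contributes two linear constraints on the homogeneous 3D point X:
        # u * (P[2] @ X) - P[0] @ X = 0  and  v * (P[2] @ X) - P[1] @ X = 0
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    # Least-squares solution: right singular vector of the smallest singular value
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenize to (x, y, z)

# Hypothetical example: two synthetic cameras observing a point at (0.1, 0.2, 2.0)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                   # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])   # translated camera
point = np.array([0.1, 0.2, 2.0, 1.0])
uv1 = (P1 @ point)[:2] / (P1 @ point)[2]
uv2 = (P2 @ point)[:2] / (P2 @ point)[2]
print(triangulate_joint([P1, P2], [uv1, uv2]))                  # ~ [0.1, 0.2, 2.0]

In the multi-view setting, this step is repeated for every detected joint, and the resulting 3D skeleton drives the avatar pose.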

AgRT was tested using a synchronized and calibrated multi-camera system (Fig. 1a). The system’s ability to extract real-time 3D pose from multiple 2D images was tested on a healthy volunteer (Fig. 1b-c). We then developed a virtual treatment-room environment in 3D modelling software (Blender, v3.2.2) to design a collision-prediction tool for gantry, couch, and patient. A virtual patient was positioned on the treatment couch (Fig. 2a) and monitored using five virtual 2D cameras. The 3D patient pose was calculated and mapped to an avatar with matching gender and BMI. The treatment plan parameters of a non-coplanar volumetric modulated arc therapy (VMAT) stereotactic body RT (SBRT) plan were exported from the treatment planning system and used to control the couch and gantry motion of a virtual LINAC model; a simplified clearance-check sketch follows below.
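As a companion illustration of the virtual clearance check, the sketch below steps stand-in gantry and patient/couch geometry through a few hypothetical (gantry, couch) control points and reports the approximate minimum surface distance. It uses the open-source trimesh library rather than Blender; the geometry, rotation axes, and 5 cm margin are assumptions for illustration, and this is not the authors’ collision-prediction tool.

# Minimal sketch with assumed geometry: check clearance between a rotating
# gantry head and a static patient/couch surface at each plan control point.
import numpy as np
import trimesh
from scipy.spatial import cKDTree

# Stand-in geometry: a small box for the gantry head, 1 m above the isocenter,
# and a long flat box approximating patient-plus-couch. In AgRT these would be
# the LINAC model and the posed patient avatar mesh.
gantry_head = trimesh.creation.box(extents=[0.4, 0.4, 0.4])
gantry_head.apply_translation([0.0, 0.0, 1.0])
patient_couch = trimesh.creation.box(extents=[0.5, 2.0, 0.3])

CLEARANCE_MARGIN_M = 0.05  # assumed 5 cm warning threshold

def min_clearance(gantry_deg, couch_deg, samples=2000):
    """Approximate minimum gantry-to-patient distance for one control point."""
    # Rotate the gantry head about the y axis and the couch about the vertical
    # axis (simplified IEC-style rotations, for illustration only).
    g = gantry_head.copy()
    g.apply_transform(trimesh.transformations.rotation_matrix(
        np.radians(gantry_deg), [0, 1, 0]))
    p = patient_couch.copy()
    p.apply_transform(trimesh.transformations.rotation_matrix(
        np.radians(couch_deg), [0, 0, 1]))
    # Approximate surface-to-surface distance with sampled point clouds.
    pts_g, _ = trimesh.sample.sample_surface(g, samples)
    pts_p, _ = trimesh.sample.sample_surface(p, samples)
    return cKDTree(pts_p).query(pts_g)[0].min()

# Hypothetical (gantry, couch) control points exported from a planning system.
for gantry_deg, couch_deg in [(0, 0), (45, 10), (90, 20), (130, 30)]:
    d = min_clearance(gantry_deg, couch_deg)
    status = "OK" if d > CLEARANCE_MARGIN_M else "COLLISION RISK"
    print(f"gantry {gantry_deg:3d} deg, couch {couch_deg:3d} deg: "
          f"{d * 100:5.1f} cm  {status}")

In the reported workflow, the same kind of check would be driven by the exported plan parameters and the posed avatar rather than the placeholder boxes used here.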


Results:
Using a state-of-the-art CV algorithm for multi-view 3D pose estimation, we can quickly and robustly recover a patient’s 3D pose. A calibrated multi-camera system showed that the patient pose can be acquired in real time, enabling full-body, patient-specific positioning and collision prediction with sub-centimeter accuracy. The distance between LINAC gantry, couch, and patient was accurately tracked in real time during a virtual non-coplanar VMAT SBRT delivery.

Conclusions:
Because reliable systems to predict collisions between the gantry and the patient are lacking, collisions remain a concern for LINACs and proton gantries alike. Real-time, full-body patient modelling using computer vision algorithms has the potential to improve the safety and efficiency of the treatment workflow.

