Computer-assisted interventions can improve intraoperative guidance, particularly through deep learning methods that harness the spatiotemporal information in surgical videos. However, the severe data imbalance often found in surgical video datasets hinders the development of high-performing models. In this work, we aim to overcome the data imbalance by synthesizing surgical videos. We propose a unique two-stage, text-conditioned diffusion-based method to generate high-fidelity surgical videos for under-represented classes. Our approach conditions the generation process on text prompts and decouples spatial and temporal modeling by utilizing a 2D latent diffusion model to capture spatial content and then integrating temporal attention layers to ensure temporal consistency. Furthermore, we introduce a rejection sampling strategy to select the most suitable synthetic samples, effectively augmenting existing datasets to address class imbalance. We evaluate our method on two downstream tasks, surgical action recognition and intraoperative event prediction, demonstrating that incorporating synthetic videos from our approach substantially enhances model performance.
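
The decoupled design can be sketched as follows: a pretrained 2D latent diffusion backbone denoises each frame's latent independently, and temporal self-attention layers inserted after the spatial blocks attend across the frame axis so that the generated clip remains coherent. The PyTorch module below is a minimal illustrative sketch of such a temporal attention layer; the class name, tensor layout, and hyperparameters are our own assumptions and are not taken from the paper's implementation.

import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    """Self-attention over the frame (time) axis of a video latent.

    Expects latents shaped (batch, frames, channels, height, width); each
    spatial location attends only across time, a common way to add temporal
    consistency on top of a pretrained 2D diffusion backbone.
    """

    def __init__(self, channels: int, num_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, c, h, w = x.shape
        # Fold spatial positions into the batch dimension so that attention
        # operates purely over the frame axis.
        tokens = x.permute(0, 3, 4, 1, 2).reshape(b * h * w, t, c)
        tokens = self.norm(tokens)
        out, _ = self.attn(tokens, tokens, tokens)
        out = out.reshape(b, h, w, t, c).permute(0, 3, 4, 1, 2)
        # Residual connection: the spatial backbone's output passes through
        # unchanged and the temporal layer only adds a correction.
        return x + out

if __name__ == "__main__":
    # Toy latent: 2 clips, 8 frames, 64 channels, 16x16 spatial resolution.
    latents = torch.randn(2, 8, 64, 16, 16)
    layer = TemporalAttention(channels=64)
    print(layer(latents).shape)  # torch.Size([2, 8, 64, 16, 16])

Because each spatial position attends only over time, the added layers are lightweight and leave the pretrained spatial weights of the 2D backbone untouched.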
        
@article{venkatesh2025mission,
  title={Mission Balance: Generating Under-represented Class Samples using Video Diffusion Models},
  author={Venkatesh, Danush Kumar and Funke, Isabel and Pfeiffer, Micha and Kolbinger, Fiona and Schmeiser, Hanna Maria and Weitz, Juergen and Distler, Marius and Speidel, Stefanie},
  journal={arXiv preprint arXiv:2505.09858},
  year={2025}
}