Dominick Reilly


Hello, I am a fourth-year PhD student in Computer Science at the University of North Carolina at Charlotte, advised by Dr. Srijan Das, and a member of the Charlotte Machine Learning Lab (CharMLab). My current focus is multi-modal learning in Vision Language Models (VLMs) for video understanding and robotic control. I have worked on tasks including ego-exo viewpoint transfer, cross-modal domain adaptation, and fine-grained action understanding. I am interested in developing simple methods that are generalizable and scalable.

News

Feb 2025 One paper, “LLAVIDAL: A Large LAnguage VIsion Model for Daily Activities of Living”, is accepted to CVPR 2025!
Dec 2024 One paper, “SKI Models: Skeleton Induced Vision-Language Embeddings for Understanding Activities of Daily Living”, is accepted to AAAI 2025!
Jun 2024 Started summer internship at Honda Research Institute in San Jose, California as a student researcher!
Feb 2024 One paper, “Just Add π! Pose Induced Video Transformers for Understanding Activities of Daily Living”, is accepted to CVPR 2024!

Selected publications

  1. VisCoP: Visual Probing for Video Domain Adaptation of Vision Language Models
     Dominick Reilly, Manish Kumar Govind, Le Xue, and Srijan Das
     2025
  2. LLAVIDAL: A Large LAnguage VIsion Model for Daily Activities of Living
     Dominick Reilly, Rajatsubhra Chakraborty, Arkaprava Sinha, Manish Kumar Govind, and 4 more authors
     In CVPR, 2025
  3. From My View to Yours: Ego-Augmented Learning in Large Vision Language Models for Understanding Exocentric Daily Living Activities
     Dominick Reilly, Manish Kumar Govind, and Srijan Das
     arXiv preprint, 2025
  4. Just Add π! Pose Induced Video Transformers for Understanding Activities of Daily Living
     Dominick Reilly and Srijan Das
     In CVPR, 2024
  5. Limited Data, Unlimited Potential: A Study on ViTs Augmented by Masked Autoencoders
     Srijan Das, Tanmay Jain, Dominick Reilly, Pranav Balaji, and 5 more authors
     In WACV, 2024