Research
I lead the Perception and Activity Understanding Group, where we develop AI models for theory of mind, behavioral understanding, and social perception. Our research combines machine learning, computer vision, multimodal signal processing, and insights from the social sciences to study human activities and behaviors from real-world sensor data.
While earlier work in the group focused on detection, tracking, and pose estimation, our current research centers on higher-level behavior understanding. We design models that recognize and interpret non-verbal behaviors, and that connect low-level temporal signals to richer constructs such as gestures, activities, communication patterns, social behaviors, interpersonal relationships, personality traits, and health-related indicators.
A particular strength of the group is the study of gaze and attention, which are fundamental to cognitive processes such as intention understanding, action prediction, and communication. By modeling where and how attention is directed, we aim to build systems that better understand social cognition and human interaction.
Our work has applications in areas including surveillance, animal behavior monitoring, human behavior analysis, mental health assessment, interactive systems, and social robotics.
Google Scholar Profile | ORCID Profile
Some recent publications (full list)
OmniHead: A Unified Model for Dynamic Nonverbal Facial Behaviors
P. Vuillecard and J.-M. Odobez
Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Findings, 2026.
Graph neural network-based surrogate modeling for fast and scalable simulations of meshed district heating networks
R. Boghetti, J.-M. Odobez and J. Kaemp
Energy and AI, Vol 24, March 2026.
End-to-End Shared Attention Estimation via Group Detection with Feedback Refinement
C. Nakatani, N. Ukita and J.-M. Odobez
Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), GAZE Workshop, June 2026.
Enhancing 3D Gaze Estimation in the Wild using Weak Supervision with Gaze Following Labels
P. Vuillecard and J.-M. Odobez
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2025.
From Forest to Zoo: Great Ape Behavior Recognition with ChimpBehave
M. Fuchs, E. Genty, K. Zuberbühler, J.-M. Odobez and P. Cotofrei
Int. Journal of Computer Vision (IJCV), Vol 133, pp 6668–6688, Oct. 2025.
Toward Semantic Gaze Target Detection
S. Tafasca, A. Gupta, V. Bros and J.-M. Odobez
38th Conf. on Neural Information Processing Systems (NeurIPS), December 2024.
MTGS: A Novel Framework for Multi-Person Temporal Gaze Following and Social Gaze Prediction
A. Gupta, S. Tafasca, A. Farkhondeh, P. Vuillecard and J.-M. Odobez
38th Conf. on Neural Information Processing Systems (NeurIPS), December 2024.
ChildPlay-Hand: A Dataset of Hand Manipulations in the Wild
A. Farkhondeh, S. Tafasca and J.-M. Odobez
European Conference on Computer Vision Workshop: Observing and Understanding Hands in Action, September 2024.
Sharingan: A Transformer Architecture for Multi-Person Gaze Following
S. Tafasca, A. Gupta and J.-M. Odobez
Int. Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, June 2024.
ChildPlay: A New Benchmark for Understanding Children’s Gaze Behaviour
S. Tafasca, A. Gupta and J.-M. Odobez
Int. Conference on Computer Vision (ICCV), Paris, October 2023.
Robust Unsupervised Gaze Calibration using Conversation and Manipulation Attention Priors
R. Siegfried and J.-M. Odobez
ACM Transactions on Multimedia Computing, Communications, and Applications, Vol. 18(1), pp 20:1-20:27, 2022.
A Differential Approach for Gaze Estimation
G. Liu, Y. Yu, K. Funes and J.-M. Odobez
IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), Vol 43(3), pp 1092-1099, 2021.
Neural Network Adaptation and Data Augmentation for Multi-Speaker Direction-of-Arrival Estimation
W. He, P. Motlicek and J.-M. Odobez
IEEE/ACM Transactions on Audio, Speech and Language Processing, Vol 29:1303-1317, 2021.