Rajmund Nagy

scholar | github | linkedin | email

I am a third-year PhD student at KTH Royal Institute of Technology, advised by Gustav Eje Henter. My background is in mathematical modelling, software engineering, and machine learning.

Research

My research focuses on 3D character animation and deep generative models, guided by a central question: how can we create immersive 3D character experiences that resonate with audiences at scale? To explore this, I record real human performances with motion capture and develop generative models for automated, controllable motion synthesis. Through large-scale human evaluations, I aim to identify the strengths and limitations of current systems, with the goal of improving them for real-world applications.

Diary

Nov 4, 2024 I will co-organise the fifth GENEA workshop on the Generation and Evaluation of Non-verbal Behaviour for Embodied Agents at ICMI 2024 in San José, Costa Rica.
Sep 29, 2024 Co-organised the first workshop on Expressive Encounters: Co-speech gestures across cultures in the wild at ECCV 2024 in Milan, Italy.
Aug 19, 2024 Co-organised a study trip to various labs in Japan – RIKEN, NII, Google DeepMind, CyberAgent, Kyoto University, and Nagoya University – for a group of Swedish PhD students working on machine learning for media applications. (LinkedIn post)
Oct 9, 2023 Co-organised the fourth GENEA workshop on the Generation and Evaluation of Non-verbal Behaviour for Embodied Agents at ICMI 2023 in Paris, France.
Aug 7, 2023 I presented our work Listen, denoise, action! Audio-driven motion synthesis with diffusion models at SIGGRAPH 2023 in Los Angeles, USA.
May 2, 2023 We launched the third GENEA Challenge, focused on generating and evaluating synthetic gestures in dyadic conversations, with 12 submitting teams in total!
Oct 11, 2022 Started my PhD at KTH.

Publications

(* denotes equal contribution)
  1. The GENEA Challenge 2023: A large-scale evaluation of gesture generation models in monadic and dyadic settings
    Taras Kucherenko*, Rajmund Nagy*, Youngwoo Yoon*, Jieyeon Woo, Teodor Nikolov, Mihail Tsakov, Gustav Eje Henter
    ICMI 2023
    paper | website | citation
  2. Listen, denoise, action! Audio-driven motion synthesis with diffusion models
    Simon Alexanderson, Rajmund Nagy, Jonas Beskow, Gustav Eje Henter
    SIGGRAPH 2023 and ACM Transactions on Graphics (TOG)
    paper | website | code | citation
  3. Multimodal analysis of the predictability of hand-gesture properties
    Taras Kucherenko, Rajmund Nagy, Michael Neff, Hedvig Kjellström, Gustav Eje Henter
    AAMAS 2022
    paper | citation
  4. Speech2Properties2Gestures: Gesture-Property Prediction as a Tool for Generating Representational Gestures from Speech
    IVA 2021 - extended abstract
    paper | website | citation
  5. A Framework for Integrating Gesture Generation Models into Interactive Conversational Agents
    Rajmund Nagy*, Taras Kucherenko*, Birger Moell, André Pereira, Hedvig Kjellström, Ulysses Bernardet
    AAMAS 2021 - demo track
    paper | citation