
Yuhang He (Henry)

Ph.D. in AI/ML/CV
Department of Computer Science, University of Oxford

I am a DPhil (i.e. Ph.D.) student in the Department of Computer Science, University of Oxford, and a member of St Hugh's College. I am co-advised by Prof. Andrew Markham and Prof. Niki Trigoni. I interned at Mitsubishi Electric Research Laboratories (MERL) and will intern at Microsoft Applied Sciences in Munich from April 2024. I received my B.Eng. from Wuhan University, China.

My research interests include Multimodal Embodied AI, Audio-visual Multimodal Learning, 3D Multimodal AR/VR, Signal Processing (Fourier/Wavelet Transform)-inspired Deep Learning, and Physics-informed Deep Learning.

Drop me an email (yuhang.he[at]cs.ox.ac.uk) if you would like to get in touch. I write blogs as part of my research notes; you are welcome to buy me a cup of coffee if you find them helpful.

Interests

  • Multimodal Embodied AI
  • Audio-visual Multimodal Learning
  • Embodied Robotics
  • 3D Multimodal AR/VR

Education

  • Ph.D. in Computer Science
    University of Oxford
  • B.Eng. in Remote Sensing and Photogrammetry
    Wuhan University

# News

# Publications

For a full publication list, please refer to Google Scholar or → Full list.

SoundCount: Sound Counting from Raw Audio with Dyadic Decomposition Neural Network

Yuhang He, Zhuangzhuang Dai, Long Chen, Niki Trigoni, Andrew Markham.

The 38th Annual AAAI Conference on Artificial Intelligence (AAAI), 2024.

We introduce a learnable dyadic decomposition framework that learns a more representative time-frequency representation from highly polyphonic, loudness-varying sound waveforms. It dyadically decomposes the waveform in a multi-stage, hierarchical manner.
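
The general idea of dyadic, multi-stage decomposition can be illustrated with a fixed Haar-style splitting of a waveform into approximation and detail sub-bands. This is only a hedged sketch of the concept under that assumption, not the paper's implementation, in which the decomposition filters are learned end to end.

```python
# Illustrative only: a fixed Haar-style dyadic decomposition that recursively
# splits a waveform into low-pass (approximation) and high-pass (detail) halves
# over several stages. The paper's filters are learnable; these are not.
import numpy as np

def dyadic_decompose(x, n_stages=3):
    """Return n_stages detail sub-bands plus the final approximation."""
    subbands = []
    approx = x
    for _ in range(n_stages):
        even, odd = approx[0::2], approx[1::2]
        detail = (even - odd) / np.sqrt(2.0)   # high-frequency residual
        approx = (even + odd) / np.sqrt(2.0)   # low-frequency content
        subbands.append(detail)
    subbands.append(approx)
    return subbands

if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, 1024, endpoint=False)
    wave = np.sin(2 * np.pi * 40 * t) + 0.5 * np.sin(2 * np.pi * 300 * t)
    for i, band in enumerate(dyadic_decompose(wave, n_stages=4)):
        print(f"band {i}: {band.shape[0]} samples")
```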

Sound3DVDet: 3D Sound Source Detection Using Multiview Microphone Array and RGB Images

Yuhang He, Sangyun Shin, Anoop Cherian, Niki Trigoni, Andrew Markham.

IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024.

We introduce a novel task: 3D sound source localization and classification from multiview acoustic-camera recordings. The sound sources lie on objects' physical surfaces but are not visually observable, reflecting application cases such as gas leak detection.

Metric-Free Exploration for Topological Mapping by Task and Motion Imitation in Feature Space

Yuhang He, Irving Fang, Yiming Li, Rushi Bhavesh Shah, Chen Feng.

Robotics: Science and Systems (RSS), 2023.

We propose the metric-free DeepExplorer to efficiently construct a topological map that represents an environment. DeepExplorer exhibits strong sim2sim and sim2real generalization capability.

SoundSynp: Sound Source Detection from Raw Waveforms with Multi-Scale Synperiodic Filterbanks

Yuhang He, Andrew Markham.

International Conference on Artificial Intelligence and Statistics (AISTATS), 2023.

We propose a novel framework to construct learnable sound signal processing filter banks that achieve multi-scale processing in both the time and frequency domains.
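
As a rough illustration of multi-scale time-frequency processing (not the synperiodic filter banks themselves, which are learnable), the sketch below builds a small bank of fixed Gabor-like band-pass filters whose lengths shrink as the centre frequency grows, trading frequency resolution for time resolution.

```python
# Illustrative only: fixed Gabor-like band-pass filters with a constant number
# of periods per filter, so low-frequency filters are long (fine frequency
# resolution) and high-frequency filters are short (fine time resolution).
import numpy as np

def gabor_filter(center_hz, sr, n_periods=8):
    length = int(n_periods * sr / center_hz)          # shorter at higher frequency
    t = (np.arange(length) - length / 2) / sr
    return np.hanning(length) * np.cos(2 * np.pi * center_hz * t)

def multiscale_filterbank(sr=16000, centers=(125, 500, 2000, 6000)):
    return [gabor_filter(f, sr) for f in centers]

if __name__ == "__main__":
    sr = 16000
    t = np.arange(sr) / sr
    x = np.sin(2 * np.pi * 440 * t)                    # 1 s test tone
    for f, h in zip((125, 500, 2000, 6000), multiscale_filterbank(sr)):
        y = np.convolve(x, h, mode="same")
        print(f"{f:>5} Hz filter: {len(h)} taps, output energy {np.sum(y**2):.1f}")
```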

SoundDoA: Learn Sound Source Direction of Arrival and Semantics from Sound Raw Waveforms

Yuhang He, Andrew Markham.

Interspeech, 2022.

We propose a sound event direction-of-arrival (DoA) estimation framework with a novel filter bank that jointly learns sound event semantics and spatial-location-relevant representations.

SoundDet: Polyphonic Moving Sound Event Detection and Localization from Raw Waveform

Yuhang He, Niki Trigoni, Andrew Markham.

International Conference on Machine Learning (ICML), 2021.

We propose a novel framework for polyphonic and moving sound event detection. We also propose novel object-based evaluation metrics to assess performance more objectively.

# Public Office Hours

I am always happy to chat with people who are interested in my work. You can check the office hours below, which I keep up to date, and book a time slot if you would like to chat.