PhD Studentship - Next-generation MoCap system

Motion capture (MoCap) systems capture human or animal body motion data for motion modelling. They have found extensive applications in animation and game production, sport and rehabilitation training, and film VFX. However, traditional motion capture systems, such as Vicon and Xsens, typically require a set of cameras, skin-tight suits fitted with numerous markers or sensors, and green-screen compositing. These requirements make it expensive and inconvenient for small and medium-sized enterprises (SMEs) to incorporate motion capture into their product development processes.

The aim of this project is to develop a novel MoCap system that can capture motion data from single-camera footage with arbitrary backgrounds, capture facial expression, hand and body motion simultaneously, and capture both skeletal motion data and full-body shape motion data. Currently, there are three kinds of methods for human body motion modelling from footage:

(1) Human Body Modelling Using Depth Sensors, which performs 3D reconstruction from depth or RGB-Depth (RGBD) images, such as those produced by the Kinect sensor. Methods of this kind can work in real time; however, depth sensors are not available on consumer-level laptops and smartphones.
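To illustrate why depth sensors make reconstruction straightforward: each depth pixel can be lifted directly to a 3D point via the pinhole camera model. The sketch below uses hypothetical intrinsics (fx, fy, cx, cy are illustrative values, not those of any specific sensor).

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Lift a depth pixel (u, v) to a 3D point in camera coordinates
    using the pinhole camera model."""
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Illustrative intrinsics and a single depth measurement of 2 metres
point = backproject(u=400, v=300, depth=2.0, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
print(point)  # 3D position of that pixel in camera space
```

Applying this to every pixel of a depth frame yields a point cloud, which is why RGBD pipelines such as those built on the Kinect can reconstruct bodies in real time.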

(2) Multi-View Human Body Modelling, which uses multiple, sparsely deployed RGB cameras to observe a scene. It can recover not only skeletal movements but also non-rigid temporal deformations from multi-view image sequences, tracking the skeleton while consistently estimating surface variations over time. However, methods of this kind usually require deploying multiple RGB cameras (at least two) and impose a large computational burden.
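The geometric core of multi-view modelling is triangulation: a point observed in two or more calibrated views can be located in 3D. A minimal sketch using linear (DLT) triangulation with NumPy, on a hypothetical two-camera setup with made-up intrinsics and an arbitrary test point:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: 2D pixel observations."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenise

# Hypothetical setup: reference camera plus a second camera offset along x
K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.], [0.], [0.]])])

X_true = np.array([0.2, -0.1, 3.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))  # recovers ~[0.2, -0.1, 3.0]
```

Running such a solver for every joint or surface point in every frame, across many cameras, is one source of the computational burden mentioned above.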

(3) Single-View Human Body Modelling. Unlike specialised setups or dedicated equipment for 3D reconstruction, reconstruction using a single RGB camera, such as the integrated camera on a smartphone, has far greater potential value for real-life applications. There has been intensive research on estimating human body pose and shape from a single image. However, due to the high dynamics of the human body and the ambiguity that results from perspective projection, automatic human modelling from monocular RGB video is limited by strong assumptions about light source directions and intensities, and by insufficient constraints such as single-image silhouettes or the need for a good parametric model as input. In particular, full-body motion capture remains challenging.
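The perspective-projection ambiguity mentioned above can be demonstrated in a few lines: scaling a 3D point and its depth by the same factor leaves its image projection unchanged, so a single view cannot distinguish a small, near body from a large, far one. A minimal sketch with an illustrative focal length and test point:

```python
import numpy as np

def project(X, f=500.0):
    """Pinhole projection of a 3D point onto the image plane
    (principal point at the origin)."""
    return f * X[:2] / X[2]

X = np.array([0.3, -0.2, 2.0])  # a body point 2 m from the camera
s = 1.7                          # arbitrary scale factor

p_near = project(X)      # projection of the original point
p_far = project(s * X)   # identical projection: scale and depth cancel
```

This is why monocular methods must inject extra priors, such as a parametric body model or lighting assumptions, to resolve scale and depth.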

This project aims to advance MoCap technology from single-camera footage towards a next-generation MoCap system that can run on regular laptops and smartphones. The main challenges include reconstructing multiple people in crowded scenes and recovering personalised body shape details and clothing geometry.

The ideal applicant for this project should have programming experience in C, C++, Python, or JavaScript.

This is a fully funded PhD studentship, which includes a stipend of £17,668 per year to support your living costs.

Key information

Next start date:

22 January 2024

Location:

Bournemouth University, Talbot Campus

Duration:

36 months

Entry requirements:

Outstanding academic potential, as measured normally by either a first-class honours degree or equivalent Grade Point Average (GPA), or a Master's degree with distinction or equivalent. If English is not your first language, you'll need an IELTS (Academic) score of at least 6.5 (with a minimum of 6.0 in each component, or equivalent). For more information, check out our full entry requirements.