XR Animator
XR Animator, a successor to my previous desktop gadget project, System Animator, is a video/webcam-based AI motion capture application designed for VTubing and the metaverse era. It uses machine learning (ML) solutions from MediaPipe and TensorFlow.js to detect a 3D pose in a live webcam feed, which is then used to drive a 3D avatar (MMD/VRM model) as if you were controlling it with your own body. It can be used for VTubing and various other XR/3D purposes.
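To give a sense of what this tracking layer does under the hood, here is a minimal, illustrative TypeScript sketch that uses MediaPipe's Tasks Vision API to pull 3D pose landmarks from a webcam feed. This is not XR Animator's actual code; the model path and the video element ID are assumptions made for the example.

```ts
import { FilesetResolver, PoseLandmarker } from "@mediapipe/tasks-vision";

async function main() {
  // Load the WASM runtime and the pose landmarker model.
  const vision = await FilesetResolver.forVisionTasks(
    "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision@latest/wasm"
  );
  const landmarker = await PoseLandmarker.createFromOptions(vision, {
    baseOptions: {
      // Hypothetical local path; MediaPipe publishes pose_landmarker_*.task files.
      modelAssetPath: "models/pose_landmarker_lite.task",
    },
    runningMode: "VIDEO",
    numPoses: 1,
  });

  // Attach the webcam stream to a <video> element (the id is an assumption).
  const video = document.getElementById("webcam") as HTMLVideoElement;
  video.srcObject = await navigator.mediaDevices.getUserMedia({ video: true });
  await video.play();

  // Per-frame detection loop: worldLandmarks are 3D points (roughly metric),
  // which an app like XR Animator would retarget onto the avatar's bones.
  function onFrame() {
    const result = landmarker.detectForVideo(video, performance.now());
    if (result.worldLandmarks.length > 0) {
      const nose = result.worldLandmarks[0][0]; // landmark 0 is the nose
      console.log(`nose at x=${nose.x} y=${nose.y} z=${nose.z}`);
    }
    requestAnimationFrame(onFrame);
  }
  requestAnimationFrame(onFrame);
}

main();
```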
Prerequisites
Tracking Software
Avatar Puppeteer programs
OBS Studio
Before Starting...
Programs like VSeeFace and Warudo are avatar puppeteer programs and can normally do the tracking themselves, but to use them on Linux they must run under Wine/Proton, where they don't play well with webcams. Hence the need for an external program to do the tracking.
The tracking program sends its data over the network (technically; in practice it is usually sent and received on the same computer), which allows the tracking information to reach the application running under Wine/Proton. For this to work, the VMC protocol must be enabled on both the tracker (sender) and the avatar program (receiver).
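Under the hood, VMC is a simple OSC-over-UDP protocol, with 39539 as the conventional receiver port. The sketch below (TypeScript, using the third-party `osc` npm package, an assumption made for illustration, not something XR Animator itself is confirmed to use) shows roughly what a single bone-transform message from a tracker to an avatar program looks like.

```ts
import * as osc from "osc"; // npm install osc (osc.js)

// VMC messages are plain OSC over UDP; 39539 is the conventional
// receiver ("Marionette") port used by programs like VSeeFace and Warudo.
const port = new osc.UDPPort({
  localAddress: "0.0.0.0",
  localPort: 57121,           // arbitrary local port for the sender
  remoteAddress: "127.0.0.1", // same machine, as in the setup above
  remotePort: 39539,
});

port.on("ready", () => {
  // /VMC/Ext/Bone/Pos carries: bone name, position (x, y, z),
  // then rotation as a quaternion (x, y, z, w).
  port.send({
    address: "/VMC/Ext/Bone/Pos",
    args: [
      { type: "s", value: "Head" },
      { type: "f", value: 0.0 }, // pos.x
      { type: "f", value: 1.5 }, // pos.y
      { type: "f", value: 0.0 }, // pos.z
      { type: "f", value: 0.0 }, // rot.x
      { type: "f", value: 0.0 }, // rot.y
      { type: "f", value: 0.0 }, // rot.z
      { type: "f", value: 1.0 }, // rot.w (identity rotation)
    ],
  });
});

port.open();
```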
How to Install
Download the latest linux-x64 release of XR Animator.
Place the downloaded archive in a convenient folder and extract it.
Navigate to the electron app folder inside and, if required, give the electron file execute permission.
Run the electron program.
Click Start.
Configure the program using the bottom menu.
Double-click the camera icon and set "Enable selfie camera" to Yes.
Choose your camera from the list.
Double-click the Motion capture button.
Select “Full body (MediaPipe vision)”.
Double-click the VMC button.
Set VMC Protocol and Send Camera Data to ON. Set the App mode to your application (VSeeFace, Warudo, etc.) if it is listed. To verify that tracking data is actually being sent, see the listener sketch after these steps.
Disable the Avatar.
Click Done.
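Before pointing VSeeFace or Warudo at the tracker, you can confirm that XR Animator is actually emitting VMC packets with a small OSC listener bound to the VMC port. As above, this is an illustrative sketch that assumes the third-party `osc` npm package and the default port 39539.

```ts
import * as osc from "osc"; // npm install osc (osc.js)

// Listen on the default VMC receiver port and print incoming messages.
// Close the real receiver (VSeeFace/Warudo) while testing, since only
// one process can bind the UDP port at a time.
const listener = new osc.UDPPort({
  localAddress: "0.0.0.0",
  localPort: 39539, // default VMC ("Marionette") port
});

listener.on("message", (msg: { address: string; args: unknown[] }) => {
  // Expect addresses like /VMC/Ext/Bone/Pos and /VMC/Ext/OK.
  console.log(msg.address, msg.args);
});

listener.open();
```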