FAQ
FAQ Related to the Entire Product
What are the output "animation curves"?
They are the same kind of curves as the BlendShape weight curves you handle in Maya's Graph Editor, for example.
The lip-sync animation output by CRI LipSync Alive describes, for each frame, which morph target to drive and by how much.
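To build intuition, a weight curve can be pictured as a list of (time, weight) keyframes that are interpolated at playback. The sketch below is an illustration only, not CRI's actual data structures:

```python
# Illustration only (not CRI's actual data structures): each morph target
# gets its own list of (time_in_seconds, weight) keyframes, and the weight
# at any playback time is interpolated between neighbouring keyframes.

def evaluate_curve(keyframes, t):
    """Linearly interpolate a weight value at time t (seconds)."""
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (t0, w0), (t1, w1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            return w0 + (w1 - w0) * (t - t0) / (t1 - t0)

# Example: the "A" mouth shape opens and closes over 0.3 seconds.
curve_a = [(0.00, 0.0), (0.10, 0.8), (0.20, 0.6), (0.30, 0.0)]
print(evaluate_curve(curve_a, 0.15))  # weight between 0.8 and 0.6
```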
What is written in the CSV file?
The following information is written in the animation curve file (.csv) and the Japanese five-vowel file (.adxlip):
- The name of each morph target (BlendShape)
- The weight value for that target
- The time (in seconds)
Since this CSV is a plain text file, you can open it without special tools.
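Because the exact column layout can differ between versions, treat the following reader as a sketch only: it assumes a header row with a time column followed by one weight column per morph target, and the file name is a placeholder. Check your actual output file for the real layout.

```python
import csv

# Sketch of a reader for the animation curve CSV. The column layout
# assumed here (a "time" column followed by one weight column per morph
# target) is an illustration; check your actual output file.
def load_curves(path):
    curves = {}  # morph target name -> list of (time_sec, weight)
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        targets = [c for c in reader.fieldnames if c != "time"]
        for row in reader:
            t = float(row["time"])
            for name in targets:
                curves.setdefault(name, []).append((t, float(row[name])))
    return curves

curves = load_curves("voice_0001.csv")  # hypothetical file name
for name, keys in curves.items():
    print(name, len(keys), "keyframes")
```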
I'm a designer. I don't know what I should do to make the mouth move.
Don't worry. It's almost the same as the animation workflow you already use for morph targets.
- Prepare morph targets (BlendShapes) for your character model
- Analyse the audio with CRI LipSync Alive and output a CSV
- Assign the curves generated from the CSV to the weights of the corresponding morph targets
- Play it back together with the audio and adjust the curves as needed
You don't need to learn a new workflow specifically for facial animation.
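For example, in Maya the curves can be keyed onto a blendShape node with a few lines of script. This is only a sketch: the node name blendShape1 and the target name "A" are placeholders, and it assumes the blendShape target names match the morph target labels in the CSV.

```python
# Sketch for Maya: key lip-sync curves onto blendShape weights.
# "blendShape1" and the target name "A" are placeholders; this assumes
# the blendShape target names match the morph target labels in the CSV.
import maya.cmds as cmds

# (time_in_seconds, weight) keyframes per morph target, e.g. loaded
# from the CSV as sketched earlier.
curves = {"A": [(0.00, 0.0), (0.10, 0.8), (0.30, 0.0)]}

fps = 30.0  # match your scene's frame rate
for target, keyframes in curves.items():
    for t_sec, weight in keyframes:
        cmds.setKeyframe("blendShape1." + target, time=t_sec * fps, value=weight)
```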
Do I need a special rig structure to use it?
No. It does not require a rig with special naming rules or a fixed facial structure. Existing setups such as the following can be used as they are:
- A BlendShape-based rig
- A mouth-shape setup designed per title
Do I need to register morph targets for analysis?
No, you do not. CRI LipSync Alive generates lip-sync animation for each morph target from the audio alone.
Because audio analysis and rig design are decoupled, you can validate and prototype even before the rig is finalised. It also works with existing character models created in the past.
Where can I get the CRI LipSync Alive package?
It can be downloaded from the Technical Support Page.
Does CRI LipSync Alive use generative AI?
No, it does not.
The analysis engine for human voice utilises machine-learning technology, but the animation calculation process uses CRI's proprietary methods that do not rely on machine learning.
The training data used for the analysis engine consists of audio data free from rights issues.
Additionally, CRI LipSync Alive does not send the analysed audio to external servers, nor is that audio used for AI training.
FAQ Related to CriLipsMake2
I don't know how to obtain a license key
License keys are sent individually to customers who contact us.
If you have not received one, please contact us via the CRI Middleware Contact Page.
Is it possible to analyse singing voices?
Yes. Note, however, that the input audio must contain only the vocal track.
If there are multiple vocal tracks, separate them into individual audio files first.
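If separate vocal takes happen to be stored as channels of a single file, a small script can split them into mono files. The sketch below uses the third-party soundfile library; the file names are placeholders.

```python
import soundfile as sf  # third-party: pip install soundfile

# Sketch: split a multi-channel WAV into one mono file per channel so
# each vocal track can be analysed separately. File names are placeholders.
data, rate = sf.read("duet.wav")   # data shape: (frames, channels)
if data.ndim == 1:
    data = data.reshape(-1, 1)     # mono input: treat as one channel
for ch in range(data.shape[1]):
    sf.write(f"duet_vocal{ch + 1}.wav", data[:, ch], rate)
```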
Can it analyse languages other than Japanese?
Yes. CRI LipSync Alive utilises a language-independent analysis engine, so it supports any speech audio a human can produce, regardless of language.
The output morph targets follow the MPEG-4 viseme set.
Will the lip-sync animation match the volume or emotional expression?
No. CRI LipSync Alive maps any voice, regardless of timbre, to the same phoneme output (an "a" is always an "a"), so volume and emotion are not reflected automatically.
To reflect volume in the animation, we recommend using information such as the Vol column.
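As a rough sketch of that approach, the snippet below scales one frame's morph target weights by a normalised volume value. How the Vol values are ranged and stored is an assumption here; check the actual .adxlip / CSV output for the real layout and units.

```python
# Sketch: scale one frame's morph target weights by a normalised volume.
# The value range of the Vol column is an assumption for illustration;
# check the actual .adxlip / CSV output for the real layout and units.
def scale_by_volume(weights, vol, vol_max):
    """weights: {morph target: weight}; vol, vol_max: this frame's volume."""
    gain = max(0.0, min(1.0, vol / vol_max)) if vol_max > 0 else 0.0
    return {name: w * gain for name, w in weights.items()}

frame = {"A": 0.8, "I": 0.1}
print(scale_by_volume(frame, vol=0.35, vol_max=0.5))
# louder speech -> larger gain -> wider mouth opening
```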
Alternatively, you can enable a beta feature, still under development, that estimates mouth opening from speech intonation.
For the -enable_auto_detection_mouth_open flag, please refer to the following page:
An error was output when I input audio data
If any of the following errors are output, the audio data format is invalid.
Check the audio formats supported by CriLipsMake2 in the Audio Data Guidelines; a quick way to inspect a file's format is sketched after the list below.
- "[ERROR] Failed to open wav."
- "[ERROR] Format error."
- "[ERROR] Unsupported bit depth for signed integer samples."
- "[ERROR] Unsupported bit depth for floating point samples."
- "[ERROR] Unsupported encoding."
- "[ERROR] Failed to get num samples."
- "[ERROR] Sampling rate must be equal to or greater than %d."
- "[ERROR] Sampling rate must be equal to or less than %d."