
Is there a way to reconstruct 468 facial feature points into a frontal view? #5794

Open
FromNature opened this issue Dec 23, 2024 · 3 comments
Labels: task:face landmarker, type:support

Comments

@FromNature

Hello,

The intensity of non-rigid variation between facial feature points is crucial for expression analysis. However, head movement also affects the distribution of these points. Is there a method to decouple head motion from facial-expression motion using MediaPipe's feature points and map them to a 2D frontal view?
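
For concreteness, the kind of decoupling I have in mind is sketched below: a standard rigid Procrustes/Kabsch alignment of each frame's (468, 3) landmark array to the landmarks of a neutral, frontal reference frame, after which only the non-rigid (expression) differences remain. This is not a MediaPipe API; `frontalize`, `reference`, and the example arrays are my own placeholders.

```python
import numpy as np

def frontalize(landmarks: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Rigidly align one frame's (468, 3) landmarks to a neutral frontal
    reference set, removing head rotation/translation/scale, and return the
    x, y coordinates as a pose-free 2D frontal view."""
    src = landmarks - landmarks.mean(axis=0)
    dst = reference - reference.mean(axis=0)

    # Kabsch/Umeyama: optimal rotation from the SVD of the cross-covariance.
    u, s, vt = np.linalg.svd(src.T @ dst)
    d = np.sign(np.linalg.det(u @ vt))        # guard against reflections
    rot = u @ np.diag([1.0, 1.0, d]) @ vt
    scale = (s * np.array([1.0, 1.0, d])).sum() / (src ** 2).sum()

    aligned = scale * (src @ rot) + reference.mean(axis=0)
    return aligned[:, :2]                     # drop z -> 2D frontal layout

# Example: `neutral` and `current` are (468, 3) arrays built from the x/y/z
# fields of the Face Landmarker result for a neutral frame and for the frame
# being analysed; the residual against `neutral` is then expression motion only.
# frontal_2d = frontalize(current, neutral)
```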

Thank you, and I look forward to your response.

@ArdaSenyurek

One approach that comes to mind is to use rigs in a 3D program like Blender. I think you can calculate the bones' relative orientations without any consideration of the spatial location of the whole armature. The problem then becomes finding the right location and orientation of the armature to match the image frames.

@FromNature
Author

FromNature commented Jan 1, 2025 via email

@ArdaSenyurek

You create a standard rig for the face beforehand, then use the Blender Python API to set the bone orientations programmatically in each frame. As an aside, I would also consider making use of Shape Keys: you can set up drivers so that a shape key's value is driven by the location of a bone. A rough sketch is below.
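
A minimal sketch of the scripting side, assuming a hypothetical armature object "FaceRig" with a "jaw" bone, a mesh "FaceMesh" with a "mouth_open" shape key, and per-frame values already derived from the landmarks (these names and the `per_frame_values` iterable are placeholders, not something Blender or MediaPipe provides):

```python
import bpy
from mathutils import Quaternion

rig = bpy.data.objects["FaceRig"]     # placeholder armature name
mesh = bpy.data.objects["FaceMesh"]   # placeholder mesh with shape keys

# per_frame_values: hypothetical list of (frame, jaw_quaternion, mouth_open)
# tuples computed from the landmark data beforehand.
for frame, jaw_quat, mouth_open in per_frame_values:
    bpy.context.scene.frame_set(frame)

    # Set the bone orientation directly and keyframe it.
    bone = rig.pose.bones["jaw"]
    bone.rotation_mode = "QUATERNION"
    bone.rotation_quaternion = Quaternion(jaw_quat)
    bone.keyframe_insert(data_path="rotation_quaternion", frame=frame)

    # Or set a shape key value (which a driver could equally derive from a bone).
    key = mesh.data.shape_keys.key_blocks["mouth_open"]
    key.value = mouth_open
    key.keyframe_insert(data_path="value", frame=frame)
```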
