JoeClark replied

398 weeks ago

At run time, we track facial expressions of both source and target video using a dense photometric consistency measure. Reenactment is then achieved by fast and efficient deformation transfer between source and target. The mouth interior that best matches the re-targeted expression is retrieved from the target sequence and warped to produce an accurate fit. Finally, we convincingly re-render the synthesized target face on top of the corresponding video stream such that it seamlessly blends with the real-world illumination. We demonstrate our method in a live setup, where YouTube videos are reenacted in real time.
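For readers who want a concrete picture of the "dense photometric consistency measure" mentioned above, here is a minimal Python/NumPy sketch. The function name photometric_energy, the render(p) call, and the candidates list in the usage comment are illustrative assumptions rather than part of the quoted method; the actual pipeline minimizes this kind of per-pixel colour residual jointly over the parameters of a parametric face model.

import numpy as np

def photometric_energy(rendered, observed, mask):
    # A minimal sketch of a dense photometric consistency term, assuming:
    #   rendered - synthetic face image produced from the current parameters,
    #              float RGB array of shape (H, W, 3) in [0, 1]
    #   observed - the corresponding video frame, same shape and range
    #   mask     - boolean (H, W) array marking valid face pixels
    # Returns the mean squared colour difference over the face region.
    residual = (rendered - observed) * mask[..., None]  # zero outside the face
    return float(np.sum(residual ** 2)) / max(int(mask.sum()), 1)

# Hypothetical usage: score candidate expression parameters by how well the
# resulting rendering matches the frame, keeping the best-fitting one.
# best = min(candidates, key=lambda p: photometric_energy(render(p), frame, face_mask))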
Thanks

I didn't find the right solution on the Internet.

References: https://forum.doom9.org/showthread.php?t=173330

Inventory Management Video