StreamFastWav2lipHQ is a near real-time speech-to-lip synthesis system using Wav2Lip and a lip enhancer that can be used for streaming applications.
Updated Jun 16, 2024
Revolutionize virtual interactions with a Unity-based chatbot combining GPT-generated dialogue, Oculus Lip Sync, and Google Cloud Speech Recognition for lifelike conversations. See the running version on the Upwork page.
AR-based Android application using image processing and machine learning techniques that makes still images look like they are talking, with audio generation and lip movements synced to that audio.
Lip Language Video Data
Adventure Game Studio (AGS) module for lip sync
Create a deepfake video by simply uploading the original video and specifying the text the character will read.
Zippy Talking Avatar uses Azure Cognitive Services and the OpenAI API to generate text and speech. It is built with Next.js and Tailwind CSS. This avatar responds to user input by generating both text and speech, offering a dynamic and immersive user experience.
A package for simple, expressive, and customizable text-to-speech with an animated face.
Audio-Visual Lip Synthesis via Intermediate Landmark Representation
YerFace! A stupid facial performance capture engine for cartoon animation.
AI Talking Head: create a video from plain text or an audio file in minutes, with support for 100+ languages and 350+ voice models.
Keras version of Syncnet, by Joon Son Chung and Andrew Zisserman.
Learning Lip Sync of Obama from Speech Audio