Thank you so much for your work on this project. It's truly amazing, and I’m excited to see all the innovative tools that people will build based on it. I can already imagine many will integrate your speech-to-speech pipeline with avatar or robot embodiments, where lip sync will be crucial.
To support this, could you help us add a step to the current flow? The pipeline currently runs 1) speech-to-text, 2) LLM, and 3) text-to-speech. I'd like to add a fourth step: either direct speech-to-viseme, or speech-to-text with `return_timestamps="word"`, followed by a manual mapping of words to phonemes and then phonemes to visemes.
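
For the second option, here is a minimal sketch of what such a step could look like. It assumes a Hugging Face `transformers` ASR pipeline for word-level timestamps (the `openai/whisper-small` checkpoint is chosen purely for illustration) and `g2p_en` for grapheme-to-phoneme conversion; the ARPAbet-to-viseme table is a simplified illustrative mapping that would need to be adapted to the target avatar's actual viseme set:

```python
# Sketch of step 4: word-level ASR timestamps -> phonemes -> visemes.
# Assumed dependencies (not part of this repo): transformers, g2p_en.

from transformers import pipeline
from g2p_en import G2p

# 1) Speech-to-text with word-level timestamps.
asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-small",  # illustrative checkpoint; any Whisper model works
)

# 2) Grapheme-to-phoneme (ARPAbet) conversion.
g2p = G2p()

# 3) Simplified ARPAbet -> viseme map (hypothetical; adapt to your viseme set).
ARPABET_TO_VISEME = {
    "P": "PP", "B": "PP", "M": "PP",
    "F": "FF", "V": "FF",
    "TH": "TH", "DH": "TH",
    "T": "DD", "D": "DD", "N": "DD", "L": "DD",
    "K": "KK", "G": "KK", "NG": "KK",
    "CH": "CH", "JH": "CH", "SH": "CH", "ZH": "CH",
    "S": "SS", "Z": "SS",
    "R": "RR", "ER": "RR",
    "W": "WW", "Y": "WW",
    "AA": "AA", "AE": "AA", "AH": "AA", "AO": "AA", "AY": "AA",
    "EH": "EH", "EY": "EH",
    "IH": "IH", "IY": "IH",
    "OW": "OH", "OY": "OH", "AW": "OH",
    "UH": "OU", "UW": "OU",
    "HH": "SIL",
}


def words_to_visemes(audio_path: str):
    """Return (viseme, start, end) tuples, spreading each word's visemes
    evenly across that word's timestamp span."""
    result = asr(audio_path, return_timestamps="word")
    events = []
    for chunk in result["chunks"]:
        start, end = chunk["timestamp"]
        if start is None or end is None:
            continue
        # Strip lexical stress digits from ARPAbet symbols (e.g. AH0 -> AH)
        # and drop punctuation/whitespace tokens.
        phones = [p.rstrip("012") for p in g2p(chunk["text"]) if p.strip()]
        phones = [p for p in phones if p.isalpha()]
        if not phones:
            continue
        step = (end - start) / len(phones)
        for i, phone in enumerate(phones):
            viseme = ARPABET_TO_VISEME.get(phone, "SIL")
            events.append((viseme, start + i * step, start + (i + 1) * step))
    return events


if __name__ == "__main__":
    for viseme, t0, t1 in words_to_visemes("reply.wav"):
        print(f"{t0:6.2f}-{t1:6.2f}  {viseme}")
```

Distributing each word's visemes evenly over its timestamp span is of course only an approximation; a direct speech-to-viseme model would give more accurate per-phoneme timing, which is why I listed that as the first option.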