From 53c772584e25aecbfe69c781a6765a7ab568f53f Mon Sep 17 00:00:00 2001
From: SWivid
Date: Tue, 15 Oct 2024 04:08:58 +0800
Subject: [PATCH] Update README.md

---
 README.md | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/README.md b/README.md
index 9d153b7..74918b3 100644
--- a/README.md
+++ b/README.md
@@ -89,6 +89,10 @@ python inference-cli.py \
 ```
 
 ### Gradio App
+Currently supported features:
+- Chunk inference
+- Podcast Generation
+- Multiple Speech-Type Generation
 
 You can launch a Gradio app (web interface) as a GUI for inference (it will load the checkpoint from Huggingface; you may set `ckpt_path` to a local file in `gradio_app.py`).
 It currently loads the ASR model, F5-TTS, and E2 TTS all at once, thus using more GPU memory than `inference-cli`.
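The Gradio section patched above can be exercised roughly as below — a minimal sketch, assuming `gradio_app.py` sits at the repository root and the project's dependencies are already installed (the local checkpoint path shown is hypothetical):

```shell
# Launch the Gradio web UI; by default the checkpoint is pulled from Huggingface
python gradio_app.py

# To serve from a local checkpoint instead, edit ckpt_path inside gradio_app.py
# before launching, e.g. (hypothetical path):
#   ckpt_path = "/path/to/local/model.pt"
```

Note that, as the patched README states, this loads the ASR model, F5-TTS, and E2 TTS all at once, so expect higher GPU memory use than with `inference-cli`.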