F5-TTS: A Fairytaler that Fakes Fluent and Faithful Speech with Flow Matching


F5-TTS: Diffusion Transformer with ConvNeXt V2, with faster training and inference.

E2 TTS: Flat-UNet Transformer, the closest reproduction of the paper.

Sway Sampling: an inference-time flow-step sampling strategy that greatly improves performance (see the sketch below).
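
Sway Sampling reshapes the uniform flow-step schedule used at inference. Below is a minimal sketch of a sway-style schedule; the formula and the default coefficient s are our reading of the paper, so treat them as assumptions and refer to the code in this repository for the authoritative version:

import numpy as np

def sway_schedule(num_steps: int, s: float = -1.0) -> np.ndarray:
    """Map uniformly spaced flow steps u in [0, 1] to sway-sampled steps."""
    u = np.linspace(0.0, 1.0, num_steps)
    # s < 0 bends the schedule toward t = 0, spending more model
    # evaluations early in the flow; s = 0 recovers uniform steps.
    return u + s * (np.cos(np.pi / 2.0 * u) - 1.0 + u)

For example, sway_schedule(32) returns 32 steps that are denser near t = 0 than a uniform grid.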

Thanks to all the contributors!

Installation

# Create a python 3.10 conda env (you could also use virtualenv)
conda create -n f5-tts python=3.10
conda activate f5-tts

# Install pytorch with your CUDA version, e.g.
pip install torch==2.3.0+cu118 torchaudio==2.3.0+cu118 --extra-index-url https://download.pytorch.org/whl/cu118

Then you can choose from a few options below:

1. As a pip package (if just for inference)

pip install git+https://github.com/SWivid/F5-TTS.git

2. Local editable install (if you also want to do training or finetuning)

git clone https://github.com/SWivid/F5-TTS.git
cd F5-TTS
pip install -e .
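
A quick sanity check that the editable install is picked up (the f5_tts module name is inferred from the paths used later in this README):

python -c "import f5_tts; print('f5_tts installed')"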

3. Build from dockerfile

docker build -t f5tts:v1 .
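
Once built, the image can be run with GPU access, e.g. (this assumes the NVIDIA Container Toolkit is installed; adjust the flags and command to your setup):

docker run --rm -it --gpus all f5tts:v1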

Development

Use pre-commit to ensure code quality (it will run linters and formatters automatically):

pip install pre-commit
pre-commit install

Before committing changes for a pull request, run:

pre-commit run --all-files

Note: Some model components have linting exceptions for E722 to accommodate tensor notation.

Inference
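
The Gradio demo can be embedded inside a larger Gradio app (the snippet below assumes f5_tts.gradio_app exports its Blocks instance as app, as in this repository):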

import gradio as gr
from f5_tts.gradio_app import app

with gr.Blocks() as main_app:
    gr.Markdown("# This is an example of using F5-TTS within a bigger Gradio app")

    # ... other Gradio components

    app.render()

main_app.launch()

The pretrained model checkpoints are available on 🤗 Hugging Face and 🤖 Model Scope, and are downloaded automatically by inference-cli and gradio_app.

A single generation currently supports up to 30s, which is the TOTAL length of the prompt audio plus the generated audio. Batch inference with chunks is supported by inference-cli and gradio_app.

  • To avoid possible inference failures, make sure you have read through the following instructions.
  • A longer prompt audio allows for a shorter generated output; anything beyond 30s total cannot be generated properly. Consider using a prompt audio shorter than 15s (see the pre-flight sketch after this list).
  • Uppercase letters are uttered letter by letter, so use lowercase letters for normal words.
  • Add spaces (blank: " ") or punctuation (e.g. "," ".") to explicitly introduce pauses. This can also help if the first few words are skipped in code-switched generation (different languages are spoken at different speeds).
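
A minimal pre-flight check along these lines (the helper name is ours and the 15s/30s limits come from the notes above; torchaudio ships with the PyTorch install used here):

import torchaudio

def check_ref_audio(path: str, max_ref_seconds: float = 15.0) -> float:
    """Warn if the reference audio leaves too little room for generation."""
    info = torchaudio.info(path)  # reads metadata without decoding the full file
    duration = info.num_frames / info.sample_rate
    if duration > max_ref_seconds:
        print(f"Warning: reference audio is {duration:.1f}s; "
              "prompt + generated audio must fit within 30s total.")
    return duration

check_ref_audio("tests/ref_audio/test_en_1_ref_short.wav")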

CLI Inference

You can either specify everything in inference-cli.toml or override it with flags. Leaving --ref_text "" will have an ASR model transcribe the reference audio automatically (this uses extra GPU memory). If you encounter a network error, consider using a local checkpoint by setting ckpt_file in inference-cli.py.

To change the model, use --ckpt_file to specify the checkpoint you want to load; to use a different vocab.txt, pass it with --vocab_file.

# switch to the main directory
cd f5_tts

python inference-cli.py \
--model "F5-TTS" \
--ref_audio "tests/ref_audio/test_en_1_ref_short.wav" \
--ref_text "Some call me nature, others call me mother nature." \
--gen_text "I don't really care what you call me. I've been a silent spectator, watching species evolve, empires rise and fall. But always remember, I am mighty and enduring. Respect me and I'll nurture you; ignore me and you shall face the consequences."

python inference-cli.py \
--model "E2-TTS" \
--ref_audio "tests/ref_audio/test_zh_1_ref_short.wav" \
--ref_text "对,这就是我,万人敬仰的太乙真人。" \
--gen_text "突然,身边一阵笑声。我看着他们,意气风发地挺直了胸膛,甩了甩那稍显肉感的双臂,轻笑道,我身上的肉,是为了掩饰我爆棚的魅力,否则,岂不吓坏了你们呢?"

# Multi voice
# https://github.com/SWivid/F5-TTS/pull/146#issue-2595207852
python inference-cli.py -c samples/story.toml
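
Equivalently, the options can live in a config file, as with samples/story.toml above. A hypothetical minimal inference-cli.toml might look like the following; the field names are assumed to mirror the CLI flags, so check the file shipped in the repository for the authoritative keys:

# inference-cli.toml (illustrative sketch; keys assumed to match the flags above)
model = "F5-TTS"
ref_audio = "tests/ref_audio/test_en_1_ref_short.wav"
# leave ref_text empty ("") to have the ASR model transcribe the reference audio
ref_text = "Some call me nature, others call me mother nature."
gen_text = "I don't really care what you call me."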

Gradio App

Currently supported features:

  • Chunk inference
  • Podcast Generation
  • Multiple Speech-Type Generation
  • Voice Chat powered by Qwen2.5-3B-Instruct

You can launch a Gradio app (web interface) for inference (it will load checkpoints from Hugging Face; you may also point it to local files in gradio_app.py). The app currently loads the ASR model, F5-TTS, and E2 TTS all at once, so it uses more GPU memory than inference-cli.

python f5_tts/gradio_app.py

You can specify the port/host:

python f5_tts/gradio_app.py --port 7860 --host 0.0.0.0

Or launch a share link:

python f5_tts/gradio_app.py --share

Speech Editing

To test speech editing capabilities, use the following command:

python f5_tts/speech_edit.py

Training

Evaluation

Acknowledgements

Citation

If our work and codebase are useful for you, please cite as:

@article{chen-etal-2024-f5tts,
      title={F5-TTS: A Fairytaler that Fakes Fluent and Faithful Speech with Flow Matching}, 
      author={Yushen Chen and Zhikang Niu and Ziyang Ma and Keqi Deng and Chunhui Wang and Jian Zhao and Kai Yu and Xie Chen},
      journal={arXiv preprint arXiv:2410.06885},
      year={2024},
}

License

Our code is released under the MIT License. The pre-trained models are licensed under the CC-BY-NC license because of the training data, Emilia, which is an in-the-wild dataset. Sorry for any inconvenience this may cause.
