F5-TTS: A Fairytaler that Fakes Fluent and Faithful Speech with Flow Matching
- F5-TTS: Diffusion Transformer with ConvNeXt V2; faster to train and faster at inference.
- E2 TTS: flat-UNet Transformer, the closest reproduction of the paper.
- Sway Sampling: an inference-time flow-step sampling strategy that greatly improves performance.
Thanks to all the contributors!
Installation
# Create a python 3.10 conda env (you could also use virtualenv)
conda create -n f5-tts python=3.10
conda activate f5-tts
# Install pytorch with your CUDA version, e.g.
pip install torch==2.3.0+cu118 torchaudio==2.3.0+cu118 --extra-index-url https://download.pytorch.org/whl/cu118
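Optionally, verify that PyTorch can see your GPU before proceeding:
# should print your torch version and True
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"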
Then you can choose from a few options below:
1. As a pip package (if just for inference)
pip install git+https://github.com/SWivid/F5-TTS.git
2. Local editable install (if you also want to do training or finetuning)
git clone https://github.com/SWivid/F5-TTS.git
cd F5-TTS
pip install -e .
3. Build from the Dockerfile
docker build -t f5tts:v1 .
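The resulting image can then be run with GPU access (this assumes the NVIDIA Container Toolkit is installed on the host):
# run the image interactively with all GPUs visible
docker run --gpus all -it f5tts:v1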
Development
Use pre-commit to ensure code quality (it will run linters and formatters automatically):
pip install pre-commit
pre-commit install
When making a pull request, run the following before each commit:
pre-commit run --all-files
Note: some model components have linting exceptions for E722 to accommodate tensor notation.
Prepare Dataset
Example data processing scripts are provided for Emilia and WenetSpeech4TTS; you can tailor your own along with a Dataset class in f5_tts/model/dataset.py (a rough sketch follows the commands below).
# switch to the main directory
cd f5_tts
# prepare custom dataset up to your need
# download corresponding dataset first, and fill in the path in scripts
# Prepare the Emilia dataset
python scripts/prepare_emilia.py
# Prepare the Wenetspeech4TTS dataset
python scripts/prepare_wenetspeech4tts.py
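As a rough illustration of tailoring your own preparation script, the sketch below scans a hypothetical folder of paired .wav/.txt files and dumps a metadata list. The folder layout and the field names (audio_path, text, duration) are assumptions for illustration only, and must match whatever your Dataset class in f5_tts/model/dataset.py expects:
# hypothetical dataset preparation sketch; adapt paths and fields to your corpus
import json
from pathlib import Path

import torchaudio

data_dir = Path("data/my_corpus")  # assumed layout: foo.wav next to foo.txt
entries = []
for wav_path in sorted(data_dir.rglob("*.wav")):
    txt_path = wav_path.with_suffix(".txt")
    if not txt_path.exists():
        continue  # skip clips without a transcript
    info = torchaudio.info(str(wav_path))  # reads the header only, no full decode
    entries.append({
        "audio_path": str(wav_path),
        "text": txt_path.read_text(encoding="utf-8").strip(),
        "duration": info.num_frames / info.sample_rate,  # seconds
    })

with open(data_dir / "metadata.json", "w", encoding="utf-8") as f:
    json.dump(entries, f, ensure_ascii=False, indent=2)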
Training & Finetuning
Once your datasets are prepared, you can start the training process.
# switch to the main directory
cd f5_tts
# setup accelerate config, e.g. use multi-gpu ddp, fp16
# the config will be saved to ~/.cache/huggingface/accelerate/default_config.yaml
accelerate config
accelerate launch train.py
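If you prefer to bypass the saved config, the same settings can be passed as standard accelerate launch flags (adjust num_processes to your GPU count):
# example: override the saved config at launch time
accelerate launch --multi_gpu --num_processes 2 --mixed_precision fp16 train.py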
For initial guidance on finetuning, see issue #57.
For Gradio UI finetuning with f5_tts/finetune_gradio.py, see issue #143.
Wandb Logging
By default, the training script does NOT use wandb logging (assuming you didn't manually log in with wandb login).
To turn on wandb logging, you can either:
- log in manually with wandb login (learn more here), or
- log in automatically by setting an environment variable: get an API key at https://wandb.ai/site/ and set it as follows:
On Mac & Linux:
export WANDB_API_KEY=<YOUR WANDB API KEY>
On Windows:
set WANDB_API_KEY=<YOUR WANDB API KEY>
Moreover, if you can't access wandb and want to log metrics offline, you can set the environment variable as follows:
export WANDB_MODE=offline
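These variables can equally be set from Python before training starts; this is plain os.environ usage, nothing specific to this repo:
# set wandb variables programmatically (standard os.environ, before wandb initializes)
import os

os.environ["WANDB_API_KEY"] = "<YOUR WANDB API KEY>"
# or, to log metrics offline instead:
os.environ["WANDB_MODE"] = "offline"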
Inference
To embed the Gradio app inside a larger Gradio application:
import gradio as gr
from f5_tts.gradio_app import app

with gr.Blocks() as main_app:
    gr.Markdown("# This is an example of using F5-TTS within a bigger Gradio app")
    # ... other Gradio components
    app.render()

main_app.launch()
The pretrained model checkpoints can be found on 🤗 Hugging Face and 🤖 Model Scope, or downloaded automatically by inference-cli and gradio_app.
A single generation currently supports up to 30s, which is the TOTAL length of the prompt audio plus the generated output. Batch inference with chunks is supported by inference-cli and gradio_app.
- To avoid possible inference failures, make sure you have read through the following instructions.
- A longer prompt audio leaves less room for the generated output, and anything beyond 30s cannot be generated properly. Consider using a prompt audio shorter than 15s.
- Uppercase letters are uttered letter by letter, so use lowercase letters for normal words.
- Add some spaces (blank: " ") or punctuation (e.g. "," ".") to explicitly introduce pauses. If the first few words are skipped in code-switched generation (different languages are spoken at different speeds), this may help.
CLI Inference
You can either specify everything in inference-cli.toml or override it with flags. Leaving --ref_text "" will have an ASR model transcribe the reference audio automatically (this uses extra GPU memory). If you encounter a network error, consider using a local checkpoint by setting ckpt_file in inference-cli.py.
To change the model, use --ckpt_file to specify the checkpoint you want to load; to change vocab.txt, use --vocab_file to provide your own vocab.txt file.
# switch to the main directory
cd f5_tts
python inference-cli.py \
--model "F5-TTS" \
--ref_audio "tests/ref_audio/test_en_1_ref_short.wav" \
--ref_text "Some call me nature, others call me mother nature." \
--gen_text "I don't really care what you call me. I've been a silent spectator, watching species evolve, empires rise and fall. But always remember, I am mighty and enduring. Respect me and I'll nurture you; ignore me and you shall face the consequences."
python inference-cli.py \
--model "E2-TTS" \
--ref_audio "tests/ref_audio/test_zh_1_ref_short.wav" \
--ref_text "对,这就是我,万人敬仰的太乙真人。" \
--gen_text "突然,身边一阵笑声。我看着他们,意气风发地挺直了胸膛,甩了甩那稍显肉感的双臂,轻笑道,我身上的肉,是为了掩饰我爆棚的魅力,否则,岂不吓坏了你们呢?"
# Multi voice
python inference-cli.py -c samples/story.toml
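For reference, a minimal multi-voice config might look like the sketch below; the keys shown here are illustrative assumptions, so treat samples/story.toml in the repo as the authoritative format:
# hypothetical multi-voice config sketch; keys are assumptions, see samples/story.toml
model = "F5-TTS"
ref_audio = "samples/main.flac"   # default voice
ref_text = ""                     # "" lets the ASR model transcribe automatically
gen_file = "samples/story.txt"    # text to generate, with per-voice tags
output_dir = "samples"

[voices.town]                     # an additional named voice
ref_audio = "samples/town.flac"
ref_text = ""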
Gradio App
Currently supported features:
- Chunk inference
- Podcast Generation
- Multiple Speech-Type Generation
You can launch a Gradio app (web interface) for inference. It loads checkpoints from Hugging Face by default (you may also use a local file in gradio_app.py). Note that it loads the ASR model, F5-TTS, and E2 TTS all at once, and thus uses more GPU memory than inference-cli.
python f5_tts/gradio_app.py
You can specify the port/host:
python f5_tts/gradio_app.py --port 7860 --host 0.0.0.0
Or launch a share link:
python f5_tts/gradio_app.py --share
Speech Editing
To test speech editing capabilities, use the following command.
python f5_tts/speech_edit.py
Evaluation
Prepare Test Datasets
- Seed-TTS test set: Download from seed-tts-eval.
- LibriSpeech test-clean: Download from OpenSLR.
- Unzip the downloaded datasets and place them in the data/ directory.
- Update the path for the test-clean data in scripts/eval_infer_batch.py
- Our filtered LibriSpeech-PC 4-10s subset is already under data/ in this repo
Batch Inference for Test Set
To run batch inference for evaluations, execute the following commands:
# switch to the main directory
cd f5_tts
# batch inference for evaluations
accelerate config # if not set before
bash scripts/eval_infer_batch.sh
Download Evaluation Model Checkpoints
- Chinese ASR Model: Paraformer-zh
- English ASR Model: Faster-Whisper
- WavLM Model: Download from Google Drive.
Objective Evaluation
Install packages for evaluation:
pip install -e .[eval]
Update the paths to your batch-inference results, then run the WER / SIM evaluations:
# switch to the main directory
cd f5_tts
# Evaluation for Seed-TTS test set
python scripts/eval_seedtts_testset.py
# Evaluation for LibriSpeech-PC test-clean (cross-sentence)
python scripts/eval_librispeech_test_clean.py
Acknowledgements
- E2-TTS brilliant work, simple and effective
- Emilia, WenetSpeech4TTS valuable datasets
- lucidrains initial CFM structure with also bfs18 for discussion
- SD3 & Hugging Face diffusers DiT and MMDiT code structure
- torchdiffeq as ODE solver, Vocos as vocoder
- FunASR, faster-whisper, UniSpeech for evaluation tools
- ctc-forced-aligner for speech edit test
- mrfakename huggingface space demo ~
- f5-tts-mlx Implementation with MLX framework by Lucas Newman
- F5-TTS-ONNX ONNX Runtime version by DakeQQ
Citation
If our work and codebase are useful to you, please cite it as:
@article{chen-etal-2024-f5tts,
title={F5-TTS: A Fairytaler that Fakes Fluent and Faithful Speech with Flow Matching},
author={Yushen Chen and Zhikang Niu and Ziyang Ma and Keqi Deng and Chunhui Wang and Jian Zhao and Kai Yu and Xie Chen},
journal={arXiv preprint arXiv:2410.06885},
year={2024},
}
License
Our code is released under the MIT License. The pre-trained models are licensed under the CC-BY-NC license due to the training data Emilia, which is an in-the-wild dataset. Sorry for any inconvenience this may cause.