SWivid
1d0cf2b8ba
add device option for infer-cli, patch-1
2025-03-22 17:35:16 +08:00
SWivid
1d82b7928e
add device option for infer-cli
2025-03-22 17:30:23 +08:00
SWivid
4ae5347282
pre-commit update and formatting
2025-03-21 23:01:00 +08:00
SWivid
621559cbbe
v1.0.7
2025-03-21 14:40:52 +08:00
SWivid
526b09eebd
add no_zero_init v1 variant path to SHARED.md
2025-03-21 14:37:14 +08:00
SWivid
9afa80f204
add option in finetune gradio to save non-ema model weight
2025-03-21 13:36:11 +08:00
SWivid
c6b3189bbd
v1.0.6 improves docker usage
2025-03-20 22:48:36 +08:00
Yushen CHEN
c87ce39515
Merge pull request #890 from MicahZoltu/patch-1
...
Improves documentation around docker usage.
2025-03-20 22:45:40 +08:00
Micah Zoltu
10ef27065b
Improves documentation around docker usage.
2025-03-20 21:37:48 +08:00
SWivid
f374640f34
Merge branch 'main' of github.com:SWivid/F5-TTS
2025-03-20 13:54:52 +08:00
SWivid
d5f4c88aa4
update issue templates
2025-03-20 13:54:15 +08:00
Yushen CHEN
f968e13b6d
Update README.md
2025-03-20 10:15:47 +08:00
SWivid
339b17fed3
update README.md for infer & train
2025-03-20 10:14:22 +08:00
SWivid
79302b694a
update README.md for infer & train
2025-03-20 10:03:54 +08:00
SWivid
a1e88c2a9e
v1.0.5 update finetune_gradio.py for clearer guidance
2025-03-17 21:50:50 +08:00
SWivid
1ab90505a4
v1.0.4 fix finetune_gradio.py vocab extend with .safetensors ckpt
2025-03-17 16:22:26 +08:00
SWivid
7e4985ca56
v1.0.3 fix api.py
1.0.3
2025-03-17 02:39:20 +08:00
SWivid
f05ceda4cb
v1.0.2 fix: torch.utils.checkpoint.checkpoint add use_reentrant=False
2025-03-15 16:34:32 +08:00
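The v1.0.2 fix above concerns PyTorch's activation checkpointing, which warns (and later errors) when `use_reentrant` is left unset. A minimal sketch of passing it explicitly, as the commit describes (the module and shapes here are illustrative, not the repository's code):

```python
# Pass use_reentrant=False explicitly so activations are recomputed in the
# backward pass via the non-reentrant implementation, with no deprecation warning.
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

layer = nn.Linear(8, 8)
x = torch.randn(2, 8, requires_grad=True)

y = checkpoint(layer, x, use_reentrant=False)  # recompute instead of storing
y.sum().backward()
print(x.grad.shape)  # torch.Size([2, 8])
```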
Yushen CHEN
2bd39dd813
Merge pull request #859 from ZhikangNiu/main
...
fix #858 and pass use_reentrant explicitly in checkpoint_activation mode
2025-03-15 16:23:50 +08:00
ZhikangNiu
f017815083
fix #858 and pass use_reentrant explicitly in checkpoint_activation mode
2025-03-15 15:48:47 +08:00
Yushen CHEN
297755fac3
v1.0.1 VRAM usage management #851
2025-03-14 17:31:44 +08:00
Yushen CHEN
d05075205f
Merge pull request #851 from niknah/vram-usage
...
VRAM usage on long texts gradually uses up memory.
2025-03-14 17:25:56 +08:00
Yushen CHEN
8722cf0766
Update utils_infer.py
2025-03-14 17:23:20 +08:00
niknah
48d1a9312e
VRAM usage on long texts gradually uses up memory.
2025-03-14 16:53:58 +11:00
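The VRAM fix in #851 is the classic pattern of releasing memory between chunks of a long text so usage stays flat across iterations. A hedged sketch of that idea (the function name and loop are illustrative, not the actual `utils_infer.py` change):

```python
# Illustrative sketch: drop Python references, then return cached CUDA blocks
# to the driver, after each inference chunk on a long text.
import gc
import torch

def free_inference_memory():
    gc.collect()                      # release unreferenced tensors first
    if torch.cuda.is_available():
        torch.cuda.empty_cache()      # shrink the CUDA caching allocator

for chunk in ["first sentence.", "second sentence."]:
    # ... run inference on `chunk` here ...
    free_inference_memory()
```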
Yushen CHEN
128f4e4bf3
Update publish-pypi.yaml
1.0.0
2025-03-13 00:08:36 +08:00
SWivid
2695e9305d
v1.0.0 release
2025-03-12 23:47:04 +08:00
SWivid
69909ac167
update README.md
2025-03-12 18:40:07 +08:00
SWivid
79bbde5d76
update README.md, add a glance at a few demos
2025-03-12 18:37:14 +08:00
SWivid
bf651d541e
update README.md for v1.0.0
2025-03-12 17:39:30 +08:00
SWivid
ca6e49adaa
1.0.0 F5-TTS v1 base model with better training and inference performance
2025-03-12 17:23:10 +08:00
SWivid
09b478b7d7
0.6.2 support socket_server.py with general text chunk

0.6.2
2025-02-25 04:47:40 +08:00
SWivid
a72f2f8efb
0.6.1 fix tqdm func check with different call behavior from gr.Progress()
2025-02-22 08:33:10 +08:00
Yushen CHEN
85e6c660b0
0.6.0 chunk stream support #803 from kunci115
...
chunk stream instead of processing the whole content, to make near-realtime possible
2025-02-21 21:45:07 +08:00
SWivid
c3d415e47a
merging into one infer_batch_process function
2025-02-21 21:41:19 +08:00
SWivid
7ee55d773c
formatting
2025-02-21 17:00:51 +08:00
kunci115
d68b1f304c
[add] new line after gc.collect()
2025-02-21 14:48:58 +07:00
kunci115
7c0eafe240
[add] client usage in readme
2025-02-21 14:45:09 +07:00
rino
4ceba6dc24
This patch solves a problem where streaming would handle all of the client input at once
...
[add] numpy tokenizer for stream chunk
[add] infer_batch_process_stream in utils_infer
[add] file writer after streaming
[edit] adjustment for streaming server
[edit] data handling now processes and sends chunk by chunk
[delete] threading on processing the inference, keep it just for file writing
2025-02-21 14:35:01 +07:00
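The chunk-by-chunk change above can be sketched generically: rather than synthesizing the full text and replying once, the server yields audio chunks and transmits each as soon as it is ready. All names below are illustrative stand-ins, not the repository's `infer_batch_process_stream` API:

```python
# Stand-in generator: yields fixed-size float32 "audio" chunks so the caller
# can stream them immediately instead of waiting for the whole result.
import numpy as np

def infer_stream(text, chunk_size=4):
    audio = np.zeros(10, dtype=np.float32)  # pretend synthesized audio
    for start in range(0, len(audio), chunk_size):
        yield audio[start:start + chunk_size]

def send_all(sock_send, text):
    for chunk in infer_stream(text):
        sock_send(chunk.tobytes())  # send each chunk as soon as it exists

sent = []
send_all(sent.append, "hello world")
print(len(sent))  # 3 chunks for 10 samples with chunk_size=4
```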
SWivid
d457c3e245
update readme. #784
2025-02-19 15:31:01 +08:00
SWivid
832ecf40b9
formatting, update readme
2025-02-19 08:35:13 +08:00
Yushen CHEN
6e49f3200c
Merge pull request #797 from YoungPhlo/feat/browser-autolaunch
...
feat: Add autolaunch option to Gradio interface
2025-02-19 08:21:41 +08:00
Phlo
fea67815ae
docs: Update README with autolaunch Gradio interface option
2025-02-18 12:50:26 -06:00
Phlo
3342859c04
feat: Add autolaunch option to Gradio interface
2025-02-18 12:29:21 -06:00
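An "autolaunch" option for a Gradio app typically maps a CLI flag onto `launch(inbrowser=True)`, which opens the default browser automatically. A minimal sketch of that wiring; the flag name here is illustrative, not necessarily PR #797's exact spelling:

```python
# Map a CLI switch to Gradio's inbrowser launch argument.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--inbrowser", action="store_true",
                    help="open the app in the default browser on launch")
args = parser.parse_args(["--inbrowser"])

# app.queue().launch(inbrowser=args.inbrowser)  # the eventual Gradio call
print(args.inbrowser)  # True
```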
SWivid
5fa0479432
0.5.3 fix MPS device compatibility; update readme
2025-02-18 18:42:03 +08:00
Yushen CHEN
e40d4462d2
Merge pull request #796 from YoungPhlo/fix/mps-fallback
...
fix: typo in MPS PyTorch env variable
2025-02-18 18:15:16 +08:00
Phlo
f005f1565e
fix: typo in MPS PyTorch env variable
2025-02-18 03:28:44 -06:00
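The variable behind this typo fix is PyTorch's documented MPS fallback switch; a misspelled key is silently ignored by the environment lookup, so the fallback never engages. It must be set before `import torch`:

```python
# Enable CPU fallback for ops not implemented on Apple's MPS backend.
# Set this before importing torch, or it has no effect.
import os

os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"
```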
Yushen CHEN
818d9b8476
Merge pull request #786 from fakerybakery/hf-demo-upd
...
Add link back to GitHub repo, clarify local demo
2025-02-15 05:01:45 +08:00
mrfakename
71ad071c1e
Update Gradio app
2025-02-14 12:44:52 -08:00
SWivid
0923b76d79
update README.md, add nvidia device gradio infer docker compose file example
2025-02-13 02:07:24 +08:00
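A generic Compose fragment of the kind this README example covers, exposing the Gradio port and reserving an NVIDIA GPU. This is a sketch, not the file the commit added; the service name and image tag are assumptions:

```yaml
services:
  f5-tts:
    image: ghcr.io/swivid/f5-tts:main   # illustrative image tag
    ports:
      - "7860:7860"                     # default Gradio port
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```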
SWivid
f062403353
0.5.2 Improve prepare_csv_wavs.py from @hcsolakoglu
2025-02-09 14:36:40 +08:00