SWivid
1d0cf2b8ba
add device option for infer-cli, patch-1
2025-03-22 17:35:16 +08:00
SWivid
1d82b7928e
add device option for infer-cli
2025-03-22 17:30:23 +08:00
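The two entries above add a device override to the inference CLI. A minimal sketch of what that can look like; the flag name matches the commit, but the parser setup and the fallback logic here are assumptions, not the repo's exact code:

```python
import argparse

import torch

parser = argparse.ArgumentParser(description="F5-TTS inference CLI (sketch)")
parser.add_argument(
    "--device",
    type=str,
    default=None,
    help="Device for inference, e.g. 'cuda', 'cuda:1', 'xpu', 'mps', 'cpu'",
)
args = parser.parse_args()

# If no device is passed, fall back to automatic selection.
device = args.device or (
    "cuda" if torch.cuda.is_available()
    else "mps" if torch.backends.mps.is_available()
    else "cpu"
)
```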
SWivid
4ae5347282
pre-commit update and formatting
2025-03-21 23:01:00 +08:00
SWivid
526b09eebd
add no_zero_init v1 variant path to SHARED.md
2025-03-21 14:37:14 +08:00
SWivid
9afa80f204
add option in finetune gradio to save non-ema model weight
2025-03-21 13:36:11 +08:00
Yushen CHEN
f968e13b6d
Update README.md
2025-03-20 10:15:47 +08:00
SWivid
339b17fed3
update README.md for infer & train
2025-03-20 10:14:22 +08:00
SWivid
79302b694a
update README.md for infer & train
2025-03-20 10:03:54 +08:00
SWivid
a1e88c2a9e
v1.0.5 update finetune_gradio.py for clearer guidance
2025-03-17 21:50:50 +08:00
SWivid
1ab90505a4
v1.0.4 fix finetune_gradio.py vocab extend with .safetensors ckpt
2025-03-17 16:22:26 +08:00
SWivid
7e4985ca56
v1.0.3 fix api.py
2025-03-17 02:39:20 +08:00
SWivid
f05ceda4cb
v1.0.2 fix: torch.utils.checkpoint.checkpoint add use_reentrant=False
2025-03-15 16:34:32 +08:00
ZhikangNiu
f017815083
fix #858 and pass use_reentrant explicitly in checkpoint_activation mode
2025-03-15 15:48:47 +08:00
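The two commits above touch the same PyTorch API. A minimal sketch of passing `use_reentrant` explicitly when activation checkpointing is enabled; the surrounding module wrapper is hypothetical, only the `checkpoint(...)` call reflects the change:

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint


class CheckpointedBlock(nn.Module):
    def __init__(self, block: nn.Module, checkpoint_activations: bool = False):
        super().__init__()
        self.block = block
        self.checkpoint_activations = checkpoint_activations

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.checkpoint_activations:
            # Passing use_reentrant=False selects the non-reentrant implementation
            # and avoids the deprecation warning about the unset default.
            return checkpoint(self.block, x, use_reentrant=False)
        return self.block(x)
```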
Yushen CHEN
8722cf0766
Update utils_infer.py
2025-03-14 17:23:20 +08:00
niknah
48d1a9312e
Fix VRAM gradually filling up when inferring long texts.
2025-03-14 16:53:58 +11:00
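The actual fix lives in utils_infer; as a hedged sketch, the usual way to keep VRAM flat across many text chunks is to drop references and clear caches between chunks:

```python
import gc

import torch


def release_chunk_memory() -> None:
    # Hypothetical helper: call between text chunks so intermediate tensors
    # from the previous chunk are freed instead of accumulating in VRAM.
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
```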
SWivid
bf651d541e
update README.md for v1.0.0
2025-03-12 17:39:30 +08:00
SWivid
ca6e49adaa
1.0.0 F5-TTS v1 base model with better training and inference performance
2025-03-12 17:23:10 +08:00
SWivid
09b478b7d7
0.6.2 support socket_server.py with general text chunk
2025-02-25 04:47:40 +08:00
SWivid
a72f2f8efb
0.6.1 fix tqdm func check with different call behavior from gr.Progress()
2025-02-22 08:33:10 +08:00
SWivid
c3d415e47a
merging into one infer_batch_process function
2025-02-21 21:41:19 +08:00
SWivid
7ee55d773c
formatting
2025-02-21 17:00:51 +08:00
kunci115
d68b1f304c
[add] new line after gc.collect()
2025-02-21 14:48:58 +07:00
kunci115
7c0eafe240
[add] client usage to README
2025-02-21 14:45:09 +07:00
rino
4ceba6dc24
This patch solves a problem where streaming would handle all of the client input
...
[add] numpy tokenizer for stream chunk
[add] infer_batch_process_stream in utils_infer
[add] file writer after streaming
[edit] adjustments for the streaming server
[edit] data handling now processes and sends chunk by chunk
[delete] threading on inference processing, keeping it just for file writing
2025-02-21 14:35:01 +07:00
SWivid
832ecf40b9
formatting, update readme
2025-02-19 08:35:13 +08:00
Phlo
3342859c04
feat: Add autolaunch option to Gradio interface
2025-02-18 12:29:21 -06:00
Phlo
f005f1565e
fix: typo in MPS PyTorch env variable
2025-02-18 03:28:44 -06:00
mrfakename
71ad071c1e
Update Gradio app
2025-02-14 12:44:52 -08:00
Hasan Can Solakoğlu
eebe337625
Increase batch size for text conversion from 32 to 100
2025-02-07 22:40:16 +03:00
Hasan Can Solakoğlu
0291ac17d2
Fix code formatting
2025-02-07 22:37:00 +03:00
Hasan Can Solakoğlu
bec4ebcae5
Enhance CSV preparation script to preserve order of processed audio files in chunk submissions
2025-02-07 22:35:30 +03:00
Hasan Can Solakoğlu
a9d6509a06
Enhance CSV preparation script with customizable worker count and improved usage examples
2025-02-07 22:32:42 +03:00
Hasan Can Solakoğlu
e7496d0170
Enhance audio processing with concurrent execution and graceful shutdown handling
2025-02-07 22:13:13 +03:00
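A small sketch of the pattern this commit describes, concurrent per-file processing with a graceful Ctrl+C shutdown; names like `process_one` are placeholders, not the script's real functions:

```python
import signal
from concurrent.futures import ThreadPoolExecutor

shutdown_requested = False


def _request_shutdown(signum, frame):
    # On SIGINT, stop queueing new files but let in-flight work finish.
    global shutdown_requested
    shutdown_requested = True


signal.signal(signal.SIGINT, _request_shutdown)


def process_dataset(audio_paths, process_one, max_workers: int = 8):
    futures = []
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        for path in audio_paths:
            if shutdown_requested:
                break
            futures.append(pool.submit(process_one, path))
    # Leaving the context manager waits for the submitted tasks to complete.
    return [f.result() for f in futures]
```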
Hasan Can Solakoğlu
34d94af2a8
Enhance audio duration extraction with ffprobe fallback and error handling
2025-02-07 20:38:42 +03:00
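A sketch of duration extraction with an ffprobe fallback; using torchaudio for the primary path is an assumption about the script's main method:

```python
import subprocess

import torchaudio


def get_audio_duration(path: str) -> float:
    """Return duration in seconds, falling back to ffprobe on failure."""
    try:
        info = torchaudio.info(path)
        return info.num_frames / info.sample_rate
    except Exception:
        # Fallback: ffprobe reads the container-level duration directly.
        result = subprocess.run(
            [
                "ffprobe", "-v", "error",
                "-show_entries", "format=duration",
                "-of", "default=noprint_wrappers=1:nokey=1",
                path,
            ],
            capture_output=True, text=True, check=True,
        )
        return float(result.stdout.strip())
```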
Can
33e865120c
Refactor imports and improve code formatting in dataset and trainer modules
2025-02-04 22:20:42 +03:00
Can
93ae7d3fc8
Enhance DynamicBatchSampler to support epoch-based shuffling
2025-02-04 20:21:59 +03:00
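A simplified illustration of the epoch-based shuffling pattern, not the repo's full DynamicBatchSampler (which also builds batches by frame budget): the training loop calls `set_epoch` so each epoch shuffles batches differently but reproducibly.

```python
import random

from torch.utils.data import Sampler


class EpochShuffledBatchSampler(Sampler):
    def __init__(self, batches, random_seed: int = 0):
        self.batches = batches          # precomputed lists of sample indices
        self.random_seed = random_seed
        self.epoch = 0

    def set_epoch(self, epoch: int) -> None:
        # Called once per epoch by the training loop.
        self.epoch = epoch

    def __iter__(self):
        rng = random.Random(self.random_seed + self.epoch)
        order = list(range(len(self.batches)))
        rng.shuffle(order)
        for i in order:
            yield self.batches[i]

    def __len__(self):
        return len(self.batches)
```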
Hasan Can
bebbfbb916
Fix for incorrect defaults in the finetune_gradio interface (#755)
...
* Add missing components to setup_load_settings in finetune_gradio
2025-01-29 17:25:22 +08:00
unknown
f0996492a7
0.5.0 fix grad_accum bug from 0.4.0, #715 #728
2025-01-29 15:18:02 +08:00
unknown
0d95df4a4d
0.4.6 minor fixes for finetune gradio & cli
2025-01-29 00:06:10 +08:00
Hasan Can Solakoğlu
f8cc2446c8
Fix for the checkpoint dropdown menu
2025-01-28 15:25:14 +03:00
unknown
607b92b391
0.4.5 fix extremely short case where text_seq is longer than audio_seq, causing a wrong cond_mask
2025-01-28 12:38:16 +08:00
unknown
ee2b77064e
0.4.4 fix hard-coded stdout for finetune-gradio GUI
2025-01-28 11:39:54 +08:00
Yushen CHEN
1e7d6da992
Merge pull request #746 from mwzkhalil/patch-1
...
Update finetune_gradio.py, set weights_only=True
2025-01-27 21:12:14 +08:00
Yushen CHEN
c2cf31e0c5
Merge pull request #729 from hcsolakoglu/fix-ckpt-rotation
...
Exclude pretrained models from the checkpoint rotation logic
2025-01-27 19:57:05 +08:00
Yushen CHEN
46266f1d14
Merge pull request #741 from Chiyan200/main
...
Fix Settings Loader Issues: Resolve KeyErrors, Path Handling, and Component Assignment (#731)
2025-01-27 19:28:22 +08:00
mahwiz khalil
c54f4e7fc0
Update finetune_gradio.py
...
The safest approach here is to explicitly set weights_only=True to load only the model weights and avoid executing potentially unsafe code
2025-01-24 00:31:53 -08:00
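A minimal sketch of the safer load described in this commit; the checkpoint path and key name are assumptions:

```python
import torch

# weights_only=True restricts unpickling to tensors and plain containers,
# so loading an untrusted checkpoint cannot execute arbitrary pickled code.
ckpt = torch.load("ckpts/project/model_last.pt", map_location="cpu", weights_only=True)
state_dict = ckpt.get("model_state_dict", ckpt)  # key name is hypothetical
```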
98440
964064094a
Added Intel XPU support
2025-01-22 03:36:10 +08:00
[Chiyan200]
24fe39dc3c
Fix: Settings Loader Issues: Resolve KeyErrors, Path Handling, and Component Assignment (#731)
2025-01-22 00:07:34 +05:30
[Chiyan200]
a74d0d0f83
Fix: Robust settings loader to handle missing keys, incorrect file paths, and dynamic assignment
...
- Ensured default settings are properly merged with file-based settings to prevent KeyErrors.
- Added logic to handle _pinyin and _char suffixes in project names, ensuring correct file paths.
- Implemented tuple-based ordered mapping for consistent and error-free component assignment.
- Added safety check to verify the existence of setting.json before loading.
- Improved maintainability by centralizing default settings and enhancing error handling.
2025-01-21 23:17:59 +05:30
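A hedged sketch of the defaults-merge and existence-check parts of that loader; the keys and path below are placeholders, and the suffix handling and component mapping from the commit are omitted:

```python
import json
from pathlib import Path

# Hypothetical centralized defaults; the real finetune_gradio keeps its own keys.
DEFAULT_SETTINGS = {
    "learning_rate": 1e-5,
    "batch_size_per_gpu": 3200,
    "tokenizer_type": "pinyin",
}


def load_settings(project_dir: str) -> dict:
    settings = dict(DEFAULT_SETTINGS)          # start from defaults: no KeyErrors
    path = Path(project_dir) / "setting.json"
    if path.is_file():                         # safety check before loading
        with open(path, encoding="utf-8") as f:
            settings.update(json.load(f))      # file values override defaults
    return settings
```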
Hasan Can Solakoğlu
2d27d2c1b2
Exclude pretrained models from the checkpoint rotation logic
2025-01-17 19:35:19 +03:00
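A sketch of the rotation rule described here and in #729 above: prune the oldest fine-tune checkpoints while never deleting pretrained weights. The file naming and keep count are assumptions.

```python
from pathlib import Path


def rotate_checkpoints(ckpt_dir: str, keep_last: int = 5) -> None:
    # Oldest first, by modification time; skip anything that looks pretrained.
    candidates = sorted(
        (p for p in Path(ckpt_dir).glob("*.pt")
         if not p.name.startswith("pretrained")),
        key=lambda p: p.stat().st_mtime,
    )
    for old in candidates[:-keep_last]:
        old.unlink()
```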