Compare commits

..

237 Commits

Author SHA1 Message Date
Benexl
1ce2d2740d feat: implement get_clean_env function to manage environment variables for subprocesses 2025-12-31 21:43:43 +03:00
Benexl
ce6294a17b fix: exclude OpenSSL libraries on Linux to avoid version conflicts 2025-12-31 21:14:08 +03:00
Benexl
b550956a3e fix: update Ubuntu version in release binaries workflow to 22.04 2025-12-31 21:03:29 +03:00
Benexl
e382e4c046 chore: bump version to 3.3.7 in pyproject.toml and uv.lock 2025-12-31 20:51:00 +03:00
Benedict Xavier
efa1340e41 Merge pull request #177 from viu-media/dynamic-search-filters
Implement dynamic search enhancements (eg filters) and media info differentiation
2025-12-31 18:57:04 +03:00
Benedict Xavier
ac7e90acdf Update viu_media/assets/scripts/fzf/dynamic_preview.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-12-31 18:54:02 +03:00
Benedict Xavier
8c5b066019 Update viu_media/assets/scripts/fzf/dynamic_preview.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-12-31 18:52:57 +03:00
Benedict Xavier
a826f391c1 Update viu_media/core/utils/formatter.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-12-31 18:51:17 +03:00
benexl
6a31f4191f fix: remove f-string for filter adjustment message in search results 2025-12-31 18:47:40 +03:00
benexl
b8f77d80e9 feat: implement restore mode for dynamic search with last query and cached results 2025-12-31 18:43:59 +03:00
benexl
6192252d10 feat: enhance shell_safe function to support Python string literals and escape triple quotes 2025-12-31 18:31:40 +03:00
benexl
efed80f4dc feat: update score formatting in format_score_stars function to match media_info.py style 2025-12-31 18:25:05 +03:00
benexl
e49baed46f feat: differentiate between studios and producers in media info and dynamic preview 2025-12-31 18:11:10 +03:00
benexl
6e26ac500d feat: enhance consistency with normal media-info menu 2025-12-31 18:04:58 +03:00
benexl
5db33d2fa0 feat: implement dynamic search filter parser and enhance search script with filter syntax 2025-12-31 17:59:04 +03:00
benexl
0524af6e26 fix(ipc): add checks for Unix domain socket availability in MPVIPCClient and MpvIPCPlayer 2025-12-31 15:47:43 +03:00
benexl
a2fc9e442d fix: add libglib2.0-dev installation for Linux system dependencies in GitHub Actions workflow 2025-12-31 15:22:37 +03:00
benexl
f9ca8bbd79 fix: add installation of system dependencies for Linux in GitHub Actions workflow 2025-12-31 15:18:24 +03:00
benexl
dd9d9695e7 fix: remove unused imports for cleaner code 2025-12-31 15:14:14 +03:00
benexl
c9d948ae4b feat: add GitHub Actions workflow for building release binaries across platforms 2025-12-31 15:09:06 +03:00
benexl
b9766af11a fix(pyinstaller): update platform-specific settings and optimize EXE configuration 2025-12-31 15:02:29 +03:00
benexl
9d72a50916 fix: replace sys.executable with get_python_executable for better compatibility 2025-12-31 14:51:50 +03:00
benexl
acb14d025c fix: enhance menu loading to support PyInstaller compatibility with explicit module listing 2025-12-31 14:42:44 +03:00
benexl
ba9b170ba8 fix: update menu loading mechanism to support pkgutil for dynamic imports 2025-12-31 14:31:35 +03:00
benexl
ecc4de6ae6 ci: update paths 2025-12-31 14:21:50 +03:00
benexl
e065c8e8fc fix(normalizer): add anime title mapping for "Burichi -" 2025-12-31 13:10:47 +03:00
benexl
32df0503d0 fix(dependencies): update optional dependencies for platform-specific functionality 2025-12-31 13:05:56 +03:00
Benedict Xavier
11449378e9 docs: Revise Termux installation instructions in README
Updated installation instructions for Termux, including required packages and optional dependencies.
2025-12-30 14:51:36 +03:00
Benexl
8837c542f2 chore: bump version 2025-12-30 14:33:33 +03:00
Benexl
eb8c443775 fix(player): vlc on android 2025-12-30 14:32:25 +03:00
Benexl
b052ee8300 ci: only run on request 2025-12-30 12:05:38 +03:00
Benedict Xavier
f684f561df Merge pull request #175 from komposer-aml/fix/auth-input-mac
feat(auth): Allow non-interactive Anilist authentication
2025-12-30 12:00:54 +03:00
Albert Medrano-Lopez
7ed45ce07e chore: improved grammar in one other sentence during authentication flow
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-12-30 00:07:30 -08:00
Albert Medrano-Lopez
10d1211388 chore: improved grammar in "already logged in as" message
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-12-29 23:44:32 -08:00
Albert Medrano-Lopez
efa6f4d142 fix(tests): Resolve pyright type errors in anilist test_mapper.py
Updated mock data in `test_to_generic_user_profile_success` to conform to `AnilistViewerData` requirements.
Adjusted type annotations in tests with intentionally malformed data to `Any` to prevent pyright errors, ensuring proper validation of error handling.
2025-12-29 23:25:40 -08:00
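The annotation approach described above can be illustrated with a minimal, hypothetical test. The model below is a stand-in for `AnilistViewerData`, and the payload is an assumption; only the pattern (typing intentionally malformed fixtures as `Any` so pyright stays quiet while runtime validation is still exercised) comes from the commit message.

```python
from typing import Any

import pytest
from pydantic import BaseModel, ValidationError


class ViewerModel(BaseModel):  # stand-in for the real AnilistViewerData model
    id: int
    name: str


def test_malformed_viewer_data_is_rejected():
    # Intentionally malformed data: annotate as Any so pyright does not flag
    # the type mismatch; the runtime validation error is the point of the test.
    malformed_payload: Any = {"name": None}
    with pytest.raises(ValidationError):
        ViewerModel.model_validate(malformed_payload)
```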
Benedict Xavier
0ca63dd765 Merge branch 'master' into fix/auth-input-mac 2025-12-29 16:49:37 +03:00
Albert Medrano-Lopez
b62d878a0e feat(auth): Allow non-interactive Anilist authentication
This enhances the `anilist auth` command by allowing the authentication token to be provided directly as an argument or from a file. This provides a non-interactive way to authenticate, which is useful for scripting or for users who have issues with the interactive browser-based flow.

The `auth` command now accepts an optional `token_input` argument.
- If the argument is a valid file path, the token is read from the file.
- Otherwise, the argument is treated as the token itself.
- If no argument is provided, the command falls back to the previous interactive method.

The README is also updated to document these new authentication options.

This commit also includes:
- `test`: Unit tests for the new authentication logic (``test_auth.py``) and for the mapper fix (``test_mapper.py``).
- `fix(api)`: A fix in the Anilist API mapper to handle potentially missing data in the API response, making it more robust.
- `style`: Minor code style and formatting fixes throughout the codebase.
2025-12-29 04:05:28 -08:00
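The token-resolution behaviour described above might look roughly like the following sketch. It is based only on this commit message and the bundled tests; the helper name and the prompt callback are assumptions, not the project's actual code.

```python
from pathlib import Path
from typing import Callable


def resolve_token(token_input: str | None, prompt: Callable[[], str]) -> str:
    """Return an AniList token from an argument, a token file, or a prompt."""
    if token_input is None:
        # No argument: fall back to the interactive browser-based flow.
        return prompt()
    path = Path(token_input)
    if path.is_file():
        # Argument is a path to a file: read the token from it.
        token = path.read_text(encoding="utf-8").strip()
        if not token:
            raise ValueError(f"Token file is empty: {path}")
        return token
    # Otherwise the argument is treated as the token itself.
    return token_input
```

This corresponds to invocations such as `viu anilist auth <token>`, `viu anilist auth /path/to/token.txt`, or plain `viu anilist auth` for the interactive flow, which is what the `test_auth.py` file included in this diff exercises.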
Benexl
bcc5e7df8e feat: allow disabling of initial config creation 2025-12-29 11:53:02 +03:00
Benedict Xavier
df8e925eec Update README with project reference and contribution info
Added a project reference and updated contributing section.
2025-12-29 00:32:33 +03:00
Benexl
9d9fa55b69 chore: bump version 2025-12-28 17:54:14 +03:00
Benedict Xavier
42f7e1d4e2 Merge pull request #174 from viu-media/minor-fixes 2025-12-28 17:51:57 +03:00
Type-Delta
7f4a1f265a fix: animepahe unable to stream/download 2025-12-28 19:20:46 +07:00
Benexl
12ef447eaf feat: add vlc player which i somehow forgot lol 2025-12-28 01:25:18 +03:00
Benexl
75b1b8fab4 chore: bump version 2025-12-27 19:39:18 +03:00
Benexl
6f4155dd65 feat: add logger param 2025-12-27 19:39:10 +03:00
benexl
20ce2f6ca3 fix(cli): update command name based on availability of viu-media 2025-12-16 17:57:18 +03:00
benexl
dbbfe0331b chore: bump version to 3.3.3 2025-12-16 17:54:48 +03:00
benexl
04ae196d5f fix(cli): remove stdout and stderr reconfiguration for UTF-8 encoding on Windows 2025-12-16 17:50:04 +03:00
benexl
fe92ff8716 fix(preview): update cache directory paths and improve script execution formatting 2025-12-16 17:24:49 +03:00
benexl
c047377289 fix(preview): pass posix paths 2025-12-16 16:47:44 +03:00
benexl
fcbaa7fb0d fix(cli): ensure UTF-8 encoding on Windows platforms 2025-12-16 16:17:14 +03:00
benexl
87c87ebca7 fix(preview): update path handling for cache directories in preview scripts to pass as posix paths 2025-12-16 16:16:05 +03:00
Benedict Xavier
e1272ddf35 Merge pull request #171 from axtrat/provider/animeunity 2025-12-14 09:26:07 +03:00
axtrat
5fe59e1ddb fix: fixed pyright error 2025-12-13 21:25:12 +01:00
axtrat
83ad67a4a8 refactor(animeunity): reorganize extraction logic and update mapper 2025-12-11 13:19:39 +01:00
axtrat
94866b68f3 fix(animeunity): patch missing video info due to VixCloud changes
VixCloud's window.video object no longer provides 'quality' and 'filename' fields, causing a KeyError.
This fix updates the extraction logic.
2025-12-11 13:02:40 +01:00
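A minimal illustration of the kind of defensive extraction such a fix implies; the field names `quality` and `filename` come from the commit message, while the dict shape and the fallback values are assumptions.

```python
def extract_video_info(window_video: dict) -> dict:
    # window.video no longer guarantees 'quality' and 'filename', so use
    # .get() with fallbacks instead of direct indexing (which raised KeyError).
    return {
        "quality": window_video.get("quality", "unknown"),
        "filename": window_video.get("filename", "video.mp4"),
    }
```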
Benedict Xavier
5f7e10a510 Update README with Termux installation instructions
Added installation instructions for Termux and clarified Python installation requirements.
2025-12-03 11:41:23 +03:00
Benexl
95586eb36f chore: bump version 2025-12-03 10:04:07 +03:00
Benexl
c01c08c03b feat: show welcome screen once a month 2025-12-03 10:03:52 +03:00
Benexl
14e1f44696 chore: bump version 2025-12-02 19:04:14 +03:00
Benexl
36b71c0751 feat: update welcome message 2025-12-02 18:58:15 +03:00
Benexl
6a5d7a0116 chore: bump version and update deps 2025-12-02 18:31:43 +03:00
Benedict Xavier
91efee9065 Merge pull request #169 from viu-media/feat/welcomescreen 2025-12-02 18:03:25 +03:00
Benedict Xavier
69d3d2e032 Update viu_media/cli/cli.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-12-02 18:02:11 +03:00
Benedict Xavier
29ba77f795 Update viu_media/cli/cli.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-12-02 18:01:56 +03:00
Benexl
a4950efa02 feat: wait for feedback 2025-12-02 18:00:44 +03:00
Benedict Xavier
bbd7931790 Merge branch 'master' into feat/welcomescreen 2025-12-02 17:53:17 +03:00
Benedict Xavier
c3ae5f9053 Merge pull request #168 from viu-media/feature/preview-scripts-rewrite-to-python 2025-12-02 17:52:21 +03:00
Benedict Xavier
bf06d7ee2c Update viu_media/assets/scripts/fzf/media_info.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-12-02 17:46:58 +03:00
Benexl
41aaf92bae style: remove unused import 2025-12-02 17:44:46 +03:00
Benexl
d38dc3194f feat: export ansi utils to preview root dir when doing dynamic previews 2025-12-02 17:43:18 +03:00
Benexl
54233aca79 feat: remove redundancy and stick to ansi_utils 2025-12-02 17:42:53 +03:00
Benexl
6b8dfba57e fix: remove double quotes 2025-12-02 17:30:31 +03:00
Benexl
3b008696d5 style: remove unused imports 2025-12-02 17:27:30 +03:00
Benedict Xavier
ece1f77e99 Merge branch 'master' into feature/preview-scripts-rewrite-to-python 2025-12-02 17:16:50 +03:00
Benexl
7b9de8620b chore: cleanup old preview scripts 2025-12-02 17:15:18 +03:00
Benexl
725754ea1a feat: improve text display for dynamic search 2025-12-02 17:12:27 +03:00
Benexl
80771f65ea feat: dynamic search rewrite in python 2025-12-02 14:36:03 +03:00
Benexl
c8c4e1b2c0 feat: refactor terminal width handling in FZF scripts for improved consistency 2025-12-02 13:30:03 +03:00
Benexl
f4958cc0cc fix: clean up whitespace in display_width and print_table_row functions 2025-12-02 13:14:19 +03:00
Benexl
1f72e0a579 feat: enhance display width calculation for better text alignment in print_table_row 2025-12-02 13:07:55 +03:00
Benexl
803c8316a7 fix: improve value alignment in print_table_row for better formatting 2025-12-02 13:04:25 +03:00
Benexl
26bc84e2eb fix: clean up whitespace in ANSI utilities and preview script 2025-12-01 18:48:15 +03:00
Benexl
901d1e87c5 feat: rewrite FZF preview scripts to use ANSI utilities for improved formatting 2025-12-01 18:47:58 +03:00
Benexl
523766868c feat: implement other image renders 2025-12-01 17:44:55 +03:00
Benexl
bd9bf24e1c feat: add more image render options 2025-12-01 17:27:47 +03:00
Benexl
f27c0b8548 fix: order of operations 2025-12-01 17:25:21 +03:00
Benexl
76c1dcd5ac fix: specifying extension when saving file 2025-12-01 17:19:55 +03:00
Benexl
25a46bd242 feat: disable airing schedule preview 2025-12-01 17:19:33 +03:00
Benexl
a70db611f7 style: remove unnecessary comment 2025-12-01 17:19:11 +03:00
Benexl
091edb3a9b fix: remove extra bracket 2025-12-01 17:06:58 +03:00
Benexl
9050dd7787 feat: disable image for character, review, airing-schedule 2025-12-01 17:05:40 +03:00
Benexl
393b9e6ed6 feat: use actual file for preview script 2025-12-01 17:00:15 +03:00
Benexl
5193df2197 feat: airing schedule previews in python 2025-11-30 15:33:34 +03:00
Benexl
6ccd96d252 feat: review previews in python 2025-11-30 15:15:11 +03:00
Benexl
e8387f3db9 feat: character previews in python 2025-11-30 15:03:48 +03:00
Benexl
23ebff3f42 fix: add .py extension to final path 2025-11-30 14:41:17 +03:00
Benexl
8e803e8ecb feat(cli): search provider with title in lowercase 2025-11-20 22:14:17 +03:00
Benexl
61fcd39188 feat(dev): use PWD when specifying the viu venv bin path 2025-11-20 22:13:36 +03:00
Benexl
313f8369d7 feat: show release notes after upgrade 2025-11-18 16:32:30 +03:00
Benexl
bee73b3f9a feat(config): add show release option 2025-11-18 16:03:22 +03:00
Benexl
f647b7419a feat: add welcome screen message 2025-11-18 15:56:28 +03:00
Benexl
901c4422b5 feat: add welcome screen config option 2025-11-18 15:01:07 +03:00
Benexl
08ae8786c3 feat: sanitize " in key 2025-11-18 14:48:00 +03:00
Benexl
64093204ad feat: create temp episode preview script 2025-11-18 14:28:54 +03:00
Benexl
8440ffb5e5 feat: add a key for extra uniqueness 2025-11-18 14:20:07 +03:00
Benexl
6e287d320d feat: rewrite episode info script in python 2025-11-18 13:59:40 +03:00
Benexl
a7b0f21deb feat: rename info.py to media_info.py 2025-11-18 13:44:20 +03:00
Benedict Xavier
71b668894b Revise disclaimer and core features in README
Updated disclaimer section for clarity and removed redundancy.
2025-11-13 17:13:37 +03:00
Benedict Xavier
8b3a57ed07 Merge pull request #163 from Oreo-Kuuki/patch-1 2025-11-03 23:41:19 +03:00
Oreo-kuuki
b2f9c8349a Fix formatting of 'Hanka x Hanka' entry in normalizer.json
So like this, right?
2025-11-03 15:37:24 -05:00
Oreo-kuuki
25fe1e5e01 Fix formatting in normalizer.json entries
Added comma, hanka x hanka without the unicode
2025-11-03 15:14:08 -05:00
Oreo-kuuki
45ff463f7a Add mapping for 'Hanka×Hanka (2011)' to 'Hunter x Hunter (2011)' 2025-11-03 15:00:41 -05:00
Benexl
29ce664e4c Merge remote-tracking branch 'origin/master' into feature/preview-scripts-rewrite-to-python 2025-11-03 11:16:36 +03:00
Benexl
2217f011af fix(core-constants): use project name over cli name 2025-11-01 20:06:53 +03:00
Benexl
5960a7c502 feat(notifications): use seconds instead of minutes 2025-11-01 19:50:46 +03:00
Benexl
bd0309ee85 feat(dev): add .venv/bin to path using direnv 2025-11-01 19:15:45 +03:00
Benexl
3724f06e33 fix(allanime-anime-provider): not giving different qualities 2025-11-01 17:26:45 +03:00
Benexl
d20af89fc8 feat(debug-anime-provider-utils): allow for quality selection 2025-11-01 16:48:51 +03:00
Benexl
3872b4c8a8 feat(search-command): allow quality selection 2025-11-01 16:48:07 +03:00
Benexl
9545b893e1 feat(search-command): if no title is provided as an option prompt it 2025-11-01 16:47:28 +03:00
Benexl
1519c8be17 feat: create the preview script in the cache/preview dir 2025-11-01 00:59:38 +03:00
Benexl
9a619b41f4 feat: use prefix in preview-script.py filename 2025-11-01 00:55:19 +03:00
Benexl
0c3a963cc4 feat: use ?? where episodes are unknown 2025-11-01 00:50:45 +03:00
Benexl
192818362b feat: next episode should come last in its grp for better ui ux 2025-11-01 00:04:05 +03:00
Benexl
2d8c1d3569 feat: remove colon for better ui 2025-10-31 23:50:12 +03:00
Benexl
e37f9213f6 feat: include romaji title in synonymns if not already there 2025-10-31 23:44:11 +03:00
Benexl
097db713bc feat: refactor ruling logic to function 2025-10-31 23:37:45 +03:00
Benexl
106278e386 feat: improve synopsis separator styling 2025-10-31 23:35:31 +03:00
Benexl
44b3663644 feat: grp studio, synonymns and tags separately for better ui / ux 2025-10-31 23:23:33 +03:00
Benexl
925c30c06e fix: typo should be text not info 2025-10-31 23:23:03 +03:00
Benexl
7401a1ad8f feat: prefer to use direct implementation of graphics protocol over external tools 2025-10-31 23:04:56 +03:00
Benexl
9a0bb65e52 feat: implement image preview 2025-10-31 22:49:41 +03:00
Benexl
1d129a5771 fix: remove extra bracket 2025-10-31 22:37:36 +03:00
Benexl
515660b0f6 feat: implement the main preview text logic in python 2025-10-31 22:32:51 +03:00
Benexl
9f5c895bf5 chore: temporarily relocate initial bash preview scripts to old folder 2025-10-31 22:32:14 +03:00
Benexl
5634214fb8 chore(ci): update stale.yml to emphasize devs limited time 2025-10-27 00:33:36 +03:00
Benexl
66c0ada29d chore(ci): update days to closure of pr or issue 2025-10-27 00:24:07 +03:00
Benexl
02465b4ddb chore(ci): add stale.yml 2025-10-27 00:19:07 +03:00
Benexl
5ffd94ac24 chore(pre-commit): update pre-commit config to use only Ruff 2025-10-26 23:47:28 +03:00
Benexl
d2864df6d0 style(dev): add extra space inorder to pass ruff fmt 2025-10-26 23:37:19 +03:00
Benexl
2a28e3b9a3 chore: temporarily disable tests in workflow 2025-10-26 23:32:05 +03:00
Benexl
7b8027a8b3 fix(viu): correct import path 2025-10-26 23:28:23 +03:00
Benexl
2a36152c38 fix(provider-scraping-html-parser): pyright errors 2025-10-26 23:26:36 +03:00
Benexl
2048c7b743 fix(inquirer-selector): pyright errors 2025-10-26 23:25:55 +03:00
Benexl
133fd4c1c8 chore: run ruff check --fix 2025-10-26 23:20:30 +03:00
Benexl
e22120fe99 fix(allanime-anime-provider-utils): pyright errors 2025-10-26 23:19:36 +03:00
Benexl
44e6220662 chore: cleanup; directly implement syncplay logic in the actual players 2025-10-26 23:16:23 +03:00
Benexl
1fea1335c6 chore: move to feature branch 2025-10-26 23:10:05 +03:00
Benexl
8b664fae36 chore: move to feature branch 2025-10-26 23:09:53 +03:00
Benexl
19a85511b4 chore: move to feature branch 2025-10-26 23:09:42 +03:00
Benexl
205299108b fix(media-api-debug-utils): pyright errors 2025-10-26 23:05:31 +03:00
Benexl
7670bdd2f3 fix(jikan-media-api-mapper): pyright errors 2025-10-26 23:03:05 +03:00
Benexl
cd3f7f7fb8 fix(anilist-media-api-mapper): pyright errors 2025-10-26 22:58:12 +03:00
Benexl
5be03ed5b8 fix(core-concurrency-utils): pyright errors 2025-10-26 22:56:17 +03:00
Benexl
6581179336 fix(yt-dlp-downloader): pyright errors 2025-10-26 22:53:56 +03:00
Benexl
2bb674f4a0 fix(cli-image-utils): pyright errors 2025-10-26 22:49:32 +03:00
Benexl
642e77f601 fix(config-editor): pyright errors 2025-10-26 22:37:57 +03:00
Benexl
a5e99122f5 fix(registry-cmds): pyright errors 2025-10-26 21:30:10 +03:00
Benexl
39bd7bed61 chore: update deps 2025-10-26 20:18:08 +03:00
Benexl
869072633b chore: create .python-version 2025-10-26 20:17:47 +03:00
Benexl
cbd788a573 chore: bump python version for pyright 2025-10-26 20:13:49 +03:00
Benexl
11fe54b146 chore: update lock file 2025-10-26 19:17:48 +03:00
Benexl
a13bdb1aa0 chore: bump version 2025-10-26 19:12:56 +03:00
Benexl
627b09a723 fix(menu): runtime setting of provider 2025-10-26 19:03:51 +03:00
Benedict Xavier
aecec5c75b Add video showcase and Rofi details to README 2025-10-24 16:16:45 +03:00
Benexl
49b298ed52 chore: update lock file 2025-10-24 13:32:43 +03:00
Benexl
9a90fa196b chore: update dev deps specification to latest uv spec 2025-10-24 13:26:28 +03:00
Benexl
4ac059e873 feat(dev): automate media tag enum creation 2025-10-24 13:25:58 +03:00
Benexl
8b39a28e32 Merge pull request #157 from Abdisto/master
Adding missing media-tag
2025-10-23 01:03:02 +03:00
Abdist
066cc89b74 Update tags.json 2025-10-20 00:00:52 +02:00
Abdist
db16758d9f Fix missing closing quote in REVERSE_ISEKAI
ups
2025-10-19 23:50:41 +02:00
Abdist
78e17b2ba0 Update tags.json 2025-10-19 23:48:05 +02:00
Abdist
c5326eb8d9 Update types.py 2025-10-19 23:44:58 +02:00
Benexl
4a2d95e75e fix(animepahe-provider): update kwik.si to kwik.cx in headers 2025-10-12 12:08:05 +03:00
Benexl
3a92ba69df fix(fzf-selector): ensure consistent encoding in subprocess calls 2025-10-07 21:18:55 +03:00
Benexl
cf59f4822e feat: update repo url 2025-10-07 20:57:24 +03:00
Benexl
1cea6d0179 Merge pull request #152 from umop3plsdn/fix-category 2025-09-26 14:56:17 +03:00
David Grindle
4bc1edcc4e Fix: added the Kabuki category that was missing 2025-09-25 17:16:17 -04:00
Benexl
0c546af99c Merge pull request #149 from viu-media/minor-fixes 2025-09-21 11:53:50 +03:00
Type-Delta
1b49e186c8 change: animepahe provider domain from '.ru' to '.si' 2025-09-20 15:16:54 +07:00
Benexl
fe831f9658 Merge pull request #137 from axtrat/provider/animeunity 2025-09-07 13:57:10 +03:00
Benexl
72f0e2e5b9 Merge branch 'master' into provider/animeunity 2025-09-07 13:56:45 +03:00
Benexl
8530da23ef Merge pull request #141 from mkuritsu/master 2025-08-30 14:59:40 +03:00
mkuritsu
1e01b6e54a fix(nix): bump version and force use of python 3.12 to fix mpv gpu issues 2025-08-30 01:36:37 +01:00
axtrat
aa6ba9018d feat: limit quality selection to what's available from servers
This change affects all providers. It limits the selection if the servers don't
implement multiple qualities, ensuring that only qualities actually available
are displayed to the user.
2025-08-25 19:46:43 +02:00
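The idea reads roughly like the following sketch (a hedged illustration, not the provider code): keep the usual quality ladder, but only offer entries that some server actually provides.

```python
PREFERRED_QUALITIES = ["1080", "720", "480", "360"]  # illustrative ladder


def selectable_qualities(server_streams: list[dict]) -> list[str]:
    # Collect the qualities the servers actually report for this episode.
    offered = {s.get("quality") for s in server_streams if s.get("quality")}
    # Preserve the preferred order, dropping anything no server offers.
    return [q for q in PREFERRED_QUALITIES if q in offered]
```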
axtrat
354ba6256a fix: Normalized some titles 2025-08-25 17:43:11 +02:00
axtrat
eae31420f9 fix: Error: o streaming servers 2025-08-25 15:19:25 +02:00
axtrat
01432a0fec feat: Added video quality source options 2025-08-25 15:07:38 +02:00
Benexl
c158d3fb99 Merge branch 'master' into provider/animeunity 2025-08-25 09:58:43 +03:00
axtrat
877bc043a0 fix: restoreded changes to update.py 2025-08-24 21:54:29 +02:00
axtrat
4968f8030a fix: Addes VIXCLOUD to available ProviderServer 2025-08-24 21:15:06 +02:00
axtrat
c5c7644d0d fix: Cannot fetch anime with a certain title
- added a replacing word dictionary
 - added a manual cache dictionary ID -> SearchResult to get more accurate results.
2025-08-24 18:19:39 +02:00
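A rough sketch of the two mechanisms this commit describes: a word-replacement dictionary applied to titles before searching, and a manually curated ID → SearchResult cache consulted first. All names and entries here are illustrative assumptions, not the provider's actual code.

```python
from dataclasses import dataclass


@dataclass
class SearchResult:  # stand-in for the provider's real result type
    id: str
    title: str


# Words/symbols the provider's search handles poorly, mapped to safer forms.
TITLE_REPLACEMENTS = {"×": "x"}

# Manually curated results for titles the search cannot match reliably.
MANUAL_CACHE: dict[str, SearchResult] = {
    # "<media id>": SearchResult(id="<provider id>", title="<exact provider title>"),
}


def normalize_title(title: str) -> str:
    for bad, good in TITLE_REPLACEMENTS.items():
        title = title.replace(bad, good)
    return title


def provider_search(query: str) -> SearchResult | None:
    ...  # placeholder for the provider's real search


def lookup(media_id: str, title: str) -> SearchResult | None:
    # Check the manual cache first, then fall back to a normalized search.
    if media_id in MANUAL_CACHE:
        return MANUAL_CACHE[media_id]
    return provider_search(normalize_title(title))
```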
axtrat
ff2a5d635a feat/fix: Added special episodes to selection 2025-08-22 14:16:10 +02:00
axtrat
8626d1991c fix: Failing to get the episode list for anime that is ongoing or has more than 119 episodes. 2025-08-22 13:48:20 +02:00
Benexl
75d15a100d Merge pull request #135 from Aethar01/master 2025-08-22 13:38:29 +03:00
Aethar
25d9895c52 updated readme with correct AUR install instructions 2025-08-22 07:58:06 +09:00
axtrat
f1b796d72b feat: Initial implementation of AnimeUnity provider 2025-08-21 10:19:25 +02:00
Benexl
3f63198563 Merge pull request #132 from 0xDracula/docs/nixos-installation-instructions
docs: update installation instructions for nixos
2025-08-18 20:50:37 +03:00
Abdallah Ebrahim
8d61463156 Update README.md
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-08-18 20:25:10 +03:00
0xDracula
2daa51d384 docs: update installation instructions for nixos 2025-08-18 20:17:11 +03:00
Benexl
43a0d77e1b dev(envrc): isolate development files 2025-08-18 16:27:43 +03:00
Benexl
eaedf3268d feat(config): switch to toml format 2025-08-18 14:06:31 +03:00
Benexl
ade0465ea4 chore: set py version for pyright 2025-08-18 13:24:50 +03:00
Benexl
5e82db4ea8 chore: add repomixignore 2025-08-18 13:23:48 +03:00
Benexl
a10e56cb6f refactor:set min supported python version to 3.11 2025-08-18 13:19:56 +03:00
Benexl
fbd95e1966 feat(config-loader): allow env vars 2025-08-18 13:04:00 +03:00
Benexl
d37a441ccf fix(state): check for is None instead 2025-08-18 12:33:15 +03:00
Benexl
cbc1ceccbb feat(cli): auto check for updates 2025-08-18 02:14:56 +03:00
Benexl
249a207cad fix(update-command): use viu-media when updating 2025-08-18 01:28:59 +03:00
Benexl
c8a42c4920 Update README.md 2025-08-18 01:16:47 +03:00
Benexl
de8b6b7f2f chore: bump version 2025-08-18 01:15:00 +03:00
Benexl
54e0942233 chore: update uv.lock 2025-08-18 01:12:10 +03:00
Benexl
8ea0c121c2 chore: viu_media 2025-08-18 01:08:27 +03:00
Benexl
eddaad64e7 chore: viu media is better 2025-08-18 01:07:36 +03:00
Benexl
43be7a52cf chore(envrc): check if nix command is available 2025-08-18 00:31:05 +03:00
Benexl
b689760a25 Merge pull request #129 from s-weigand/fix-ci
🚇🩹 Fix test CI workflow
2025-08-17 20:25:55 +03:00
Benexl
e53246b79b feat(interactive-state): media api state should come second 2025-08-17 19:45:59 +03:00
Benexl
b0fc94cdc5 style: ruff format 2025-08-17 19:40:53 +03:00
Benexl
449f6c1e59 feat(interactive-state): create accessors that ensure values exist 2025-08-17 19:38:55 +03:00
Benexl
ab4734b79d fix(session): allow offline viewing by wrapping authenticate in try block 2025-08-17 17:49:38 +03:00
Benexl
93d0f6a1a5 refactor: fa to viu 2025-08-17 17:22:38 +03:00
Benexl
19c75c48b2 Merge pull request #128 from s-weigand/improve-title-matching
👌 Make finding best_match_title more robust
2025-08-17 16:49:32 +03:00
Benexl
5341b0a844 Update README.md 2025-08-17 16:40:26 +03:00
Benexl
24e7e6a16b Update README.md 2025-08-17 16:36:52 +03:00
s-weigand
4b310e60b8 Revert " Run on feature-branch"
This reverts commit c6b8cfc294.
2025-08-17 13:42:43 +02:00
s-weigand
4d50cffd86 🧹 Ignore blank except ruff rule 2025-08-17 13:14:32 +02:00
s-weigand
f6fedf0500 🧹 Remove unused TYPE_CHECKING import 2025-08-17 13:13:11 +02:00
s-weigand
7b431450fe 🩹 Relock uv.lock file due to changed package name 2025-08-17 13:09:52 +02:00
s-weigand
66b247330b 🚇🩹 Install libglib2.0-dev 2025-08-17 12:56:03 +02:00
s-weigand
c6b8cfc294 Run on feature-branch 2025-08-17 12:49:05 +02:00
s-weigand
6895426d67 🚇🩹 Install dbus-python build dependencies 2025-08-17 12:48:29 +02:00
s-weigand
cc69dc35f6 👌 Make finding best_match_title more robust 2025-08-17 12:34:25 +02:00
Benexl
ed81f37ae4 Merge pull request #126 from blob5/master
Build failure on nixOS. ModuleNotFoundError: No module named 'viu'
2025-08-16 23:47:43 +03:00
Senna
c6858b00c4 remove pythonImportsCheck 2025-08-16 22:08:06 +02:00
Benexl
a44034a5d4 chore: remove 2025-08-16 21:47:44 +03:00
Benexl
f768518721 Update README.md 2025-08-16 19:48:57 +03:00
294 changed files with 12171 additions and 6594 deletions

.envrc (7 changed lines)

@@ -1 +1,6 @@
use flake
VIU_APP_NAME="viu-dev"
PATH="$PWD/.venv/bin:$PATH"
export PATH VIU_APP_NAME
if command -v nix >/dev/null; then
use flake
fi

.github/FUNDING.yml (vendored, 15 changed lines)

@@ -1,15 +0,0 @@
# These are supported funding model platforms
github: benexl # Replace with up to 4 GitHub Sponsors-enabled usernames e.g., [user1, user2]
patreon: # Replace with a single Patreon username
open_collective: # Replace with a single Open Collective username
ko_fi: benexl # Replace with a single Ko-fi username
tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
liberapay: # Replace with a single Liberapay username
issuehunt: # Replace with a single IssueHunt username
lfx_crowdfunding: # Replace with a single LFX Crowdfunding project-name e.g., cloud-foundry
polar: # Replace with a single Polar username
buy_me_a_coffee: # Replace with a single Buy Me a Coffee username
thanks_dev: # Replace with a single thanks.dev username
custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']

.github/workflows/release-binaries.yml (vendored, new file, 152 lines)

@@ -0,0 +1,152 @@
name: Build Release Binaries
on:
release:
types: [published]
workflow_dispatch:
inputs:
tag:
description: "Tag/version to build (leave empty for latest)"
required: false
type: string
permissions:
contents: write
jobs:
build:
strategy:
fail-fast: false
matrix:
include:
- os: ubuntu-22.04
target: linux
asset_name: viu-linux-x86_64
executable: viu
- os: windows-latest
target: windows
asset_name: viu-windows-x86_64.exe
executable: viu.exe
- os: macos-latest
target: macos
asset_name: viu-macos-x86_64
executable: viu
runs-on: ${{ matrix.os }}
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
ref: ${{ github.event.inputs.tag || github.ref }}
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.11"
- name: Install uv
uses: astral-sh/setup-uv@v3
with:
enable-cache: true
- name: Install system dependencies (Linux)
if: runner.os == 'Linux'
run: |
sudo apt-get update
sudo apt-get install -y libdbus-1-dev libglib2.0-dev
- name: Install dependencies
run: uv sync --all-extras --all-groups
- name: Build executable with PyInstaller
run: uv run pyinstaller bundle/pyinstaller.spec --distpath dist --workpath build/pyinstaller --clean
- name: Rename executable
shell: bash
run: mv dist/${{ matrix.executable }} dist/${{ matrix.asset_name }}
- name: Upload artifact
uses: actions/upload-artifact@v4
with:
name: ${{ matrix.asset_name }}
path: dist/${{ matrix.asset_name }}
if-no-files-found: error
- name: Upload to Release
if: github.event_name == 'release'
uses: softprops/action-gh-release@v2
with:
files: dist/${{ matrix.asset_name }}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Build for macOS ARM (Apple Silicon)
build-macos-arm:
runs-on: macos-14
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
ref: ${{ github.event.inputs.tag || github.ref }}
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.11"
- name: Install uv
uses: astral-sh/setup-uv@v3
with:
enable-cache: true
- name: Install dependencies
run: uv sync --all-extras --all-groups
- name: Build executable with PyInstaller
run: uv run pyinstaller bundle/pyinstaller.spec --distpath dist --workpath build/pyinstaller --clean
- name: Rename executable
run: mv dist/viu dist/viu-macos-arm64
- name: Upload artifact
uses: actions/upload-artifact@v4
with:
name: viu-macos-arm64
path: dist/viu-macos-arm64
if-no-files-found: error
- name: Upload to Release
if: github.event_name == 'release'
uses: softprops/action-gh-release@v2
with:
files: dist/viu-macos-arm64
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Create checksums after all builds complete
checksums:
needs: [build, build-macos-arm]
runs-on: ubuntu-latest
if: github.event_name == 'release'
steps:
- name: Download all artifacts
uses: actions/download-artifact@v4
with:
path: artifacts
merge-multiple: true
- name: Generate checksums
run: |
cd artifacts
sha256sum * > SHA256SUMS.txt
cat SHA256SUMS.txt
- name: Upload checksums to Release
uses: softprops/action-gh-release@v2
with:
files: artifacts/SHA256SUMS.txt
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

.github/workflows/stale.yml (vendored, new file, 57 lines)

@@ -0,0 +1,57 @@
name: Mark Stale Issues and Pull Requests
on:
# schedule:
# Runs every day at 6:30 UTC
# - cron: "30 6 * * *"
# Allows you to run this workflow manually from the Actions tab for testing
workflow_dispatch:
jobs:
stale:
runs-on: ubuntu-latest
permissions:
issues: write
pull-requests: write
steps:
- uses: actions/stale@v5
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
stale-issue-message: |
Greetings @{{author}},
This bug report is like an ancient scroll detailing a legendary beast. Our small guild of developers is often on many quests at once, so our response times can be slower than a tortoise in a time-stop spell. We deeply appreciate your patience!
**Seeking Immediate Help or Discussion?**
Our **[Discord Tavern](https://discord.gg/HBEmAwvbHV)** is the best place to get a quick response from the community for general questions or setup help!
**Want to Be the Hero?**
You could try to tame this beast yourself! With modern grimoires (like AI coding assistants) and our **[Contribution Guide](https://github.com/viu-media/Viu/blob/master/CONTRIBUTIONS.md)**, you might just be the hero we're waiting for. We would be thrilled to review your solution!
---
To keep our quest board tidy, we need to know if this creature is still roaming the lands in the latest version of `viu`. If we don't get an update within **7 days**, we'll assume it has vanished and archive the scroll.
Thanks for being our trusted scout!
stale-pr-message: |
Hello @{{author}}, it looks like this powerful contribution has been left in the middle of its training arc! 💪
Our review dojo is managed by just a few senseis who are sometimes away on long missions, so thank you for your patience as we work through the queue.
We were excited to see this new technique being developed. Are you still planning to complete its training, or have you embarked on a different quest? If you need a sparring partner (reviewer) or some guidance from a senpai, just let us know!
To keep our dojo tidy, we'll be archiving unfinished techniques. If we don't hear back within **7 days**, we'll assume it's time to close this PR for now. You can always resume your training and reopen it when you're ready.
Thank you for your incredible effort!
# --- Labels and Timing ---
stale-issue-label: "stale"
stale-pr-label: "stale"
# How many days of inactivity before an issue/PR is marked as stale.
days-before-stale: 14
# How many days of inactivity to wait before closing a stale issue/PR.
days-before-close: 7

.github/workflows/test.yml

@@ -13,7 +13,7 @@ jobs:
strategy:
matrix:
python-version: ["3.10", "3.11"] # List the Python versions you want to test
python-version: ["3.11", "3.12"]
steps:
- uses: actions/checkout@v4
@@ -22,6 +22,11 @@ jobs:
with:
python-version: ${{ matrix.python-version }}
- name: Install dbus-python build dependencies
run: |
sudo apt-get update
sudo apt-get -y install libdbus-1-dev libglib2.0-dev
- name: Install uv
uses: astral-sh/setup-uv@v3
with:
@@ -36,5 +41,7 @@ jobs:
- name: Run type checking
run: uv run pyright
- name: Run tests
run: uv run pytest tests
# TODO: write tests
# - name: Run tests
# run: uv run pytest tests

.pre-commit-config.yaml

@@ -1,33 +1,10 @@
default_language_version:
python: python3.12
repos:
- repo: https://github.com/pycqa/isort
rev: 5.12.0
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.14.2
hooks:
- id: isort
name: isort (python)
args: ["--profile", "black"]
- repo: https://github.com/PyCQA/autoflake
rev: v2.2.1
hooks:
- id: autoflake
args:
[
"--in-place",
"--remove-unused-variables",
"--remove-all-unused-imports",
]
# - repo: https://github.com/astral-sh/ruff-pre-commit
# rev: v0.4.10
# hooks:
# - id: ruff
# args: [--fix]
- repo: https://github.com/psf/black-pre-commit-mirror
rev: 24.4.2
hooks:
- id: black
name: black
#language_version: python3.10
# Run the linter.
- id: ruff-check
args: [--fix]
# Run the formatter.
- id: ruff-format

.python-version (new file, 1 line)

@@ -0,0 +1 @@
3.11

.repomixignore (new file, 1 line)

@@ -0,0 +1 @@
**/generated/**/*

CONTRIBUTIONS.md

@@ -6,7 +6,7 @@ First off, thank you for considering contributing to Viu! We welcome any help, w
There are many ways to contribute to the Viu project:
* **Reporting Bugs:** If you find a bug, please create an issue in our [issue tracker](https://github.com/Benexl/Viu/issues).
* **Reporting Bugs:** If you find a bug, please create an issue in our [issue tracker](https://github.com/viu-media/Viu/issues).
* **Suggesting Enhancements:** Have an idea for a new feature or an improvement to an existing one? We'd love to hear it.
* **Writing Code:** Help us fix bugs or implement new features.
* **Improving Documentation:** Enhance our README, add examples, or clarify our contribution guidelines.
@@ -16,7 +16,7 @@ There are many ways to contribute to the Viu project:
We follow the standard GitHub Fork & Pull Request workflow.
1. **Create an Issue:** Before starting work on a new feature or a significant bug fix, please [create an issue](https://github.com/Benexl/Viu/issues/new/choose) to discuss your idea. This allows us to give feedback and prevent duplicate work. For small bugs or documentation typos, you can skip this step.
1. **Create an Issue:** Before starting work on a new feature or a significant bug fix, please [create an issue](https://github.com/viu-media/Viu/issues/new/choose) to discuss your idea. This allows us to give feedback and prevent duplicate work. For small bugs or documentation typos, you can skip this step.
2. **Fork the Repository:** Create your own fork of the Viu repository.

README.md (178 changed lines)

@@ -8,12 +8,12 @@
</p>
<div align="center">
[![PyPI - Version](https://img.shields.io/pypi/v/viu_cli)](https://pypi.org/project/viu_cli/)
[![PyPI - Downloads](https://img.shields.io/pypi/dm/viu_cli)](https://pypi.org/project/viu_cli/)
[![GitHub Actions Workflow Status](https://img.shields.io/github/actions/workflow/status/Benexl/Viu/test.yml?label=Tests)](https://github.com/Benexl/Viu/actions)
[![PyPI - Version](https://img.shields.io/pypi/v/viu-media)](https://pypi.org/project/viu-media/)
[![PyPI - Downloads](https://img.shields.io/pypi/dm/viu-media)](https://pypi.org/project/viu-media/)
[![GitHub Actions Workflow Status](https://img.shields.io/github/actions/workflow/status/viu-media/Viu/test.yml?label=Tests)](https://github.com/viu-media/Viu/actions)
[![Discord](https://img.shields.io/discord/1250887070906323096?label=Discord&logo=discord)](https://discord.gg/HBEmAwvbHV)
[![GitHub Issues](https://img.shields.io/github/issues/Benexl/Viu)](https://github.com/Benexl/Viu/issues)
[![PyPI - License](https://img.shields.io/pypi/l/viu)](https://github.com/Benexl/Viu/blob/master/LICENSE)
[![GitHub Issues](https://img.shields.io/github/issues/viu-media/Viu)](https://github.com/viu-media/Viu/issues)
[![PyPI - License](https://img.shields.io/pypi/l/viu)](https://github.com/viu-media/Viu/blob/master/LICENSE)
</div>
@@ -23,48 +23,20 @@
</a>
</p>
![viu](https://github.com/user-attachments/assets/9ab09f26-e4a8-4b70-a315-7def998cec63)
[viu-showcase.webm](https://github.com/user-attachments/assets/5da0ec87-7780-4310-9ca2-33fae7cadd5f)
<details>
<summary>
<b>Screenshots</b>
</summary>
<b>Fzf:</b>
<img width="1346" height="710" alt="250815_13h29m15s_screenshot" src="https://github.com/user-attachments/assets/d8fb8473-a0fe-47b1-b112-5cd8bec51937" />
<img width="1346" height="710" alt="250815_13h29m43s_screenshot" src="https://github.com/user-attachments/assets/16a2555d-f81e-4044-9e65-e61205dfe899" />
<img width="1346" height="710" alt="250815_13h30m09s_screenshot" src="https://github.com/user-attachments/assets/f521670a-c04f-4f5e-a62a-6c849fbf49bd" />
<img width="1346" height="710" alt="250815_13h30m33s_screenshot" src="https://github.com/user-attachments/assets/27fd2ef9-ec1f-4677-b816-038eaaca1391" />
<img width="1346" height="710" alt="250815_13h31m07s_screenshot" src="https://github.com/user-attachments/assets/6a64aa99-507e-449a-9e4a-9daa4fe496a3" />
<img width="1346" height="710" alt="250815_13h31m44s_screenshot" src="https://github.com/user-attachments/assets/a2896d1f-0e23-4ff3-b0c6-121d21a9f99a" />
<b>Rofi:</b>
<img width="1366" height="729" alt="250815_13h23m12s_screenshot" src="https://github.com/user-attachments/assets/6d18d950-11e5-41fc-a7fe-1f9eaa481e46" />
<img width="1366" height="765" alt="250815_13h24m09s_screenshot" src="https://github.com/user-attachments/assets/af852fee-17bf-4f24-ada9-7cf0e6f3451c" />
<img width="1366" height="768" alt="250815_13h24m57s_screenshot" src="https://github.com/user-attachments/assets/d3b4e2ab-10bd-40ae-88ed-0720b57957c1" />
<img width="1366" height="735" alt="250815_13h26m47s_screenshot" src="https://github.com/user-attachments/assets/64682b09-c88e-4d4c-ae26-a3aa34dd08a1" />
<img width="1366" height="768" alt="250815_13h28m05s_screenshot" src="https://github.com/user-attachments/assets/d6cd6931-0113-462c-86bb-abe6f3e12d68" />
</details>
<summary>Rofi</summary>
<details>
<summary>
<b>Riced Preview Examples</b>
</summary>
**Anilist Results Menu (FZF):**
![image](https://github.com/user-attachments/assets/240023a7-7e4e-47dd-80ff-017d65081ee1)
**Episodes Menu with Preview (FZF):**
![image](https://github.com/user-attachments/assets/580f86ef-326f-4ab3-9bd8-c1cb312fbfa6)
**No Image Preview Mode:**
![image](https://github.com/user-attachments/assets/e1248a85-438f-4758-ae34-b0e0b224addd)
**Desktop Notifications + Episodes Menu:**
![image](https://github.com/user-attachments/assets/b7802ef1-ca0d-45f5-a13a-e39c96a5d499)
[viu-showcase-rofi.webm](https://github.com/user-attachments/assets/01f197d9-5ac9-45e6-a00b-8e8cd5ab459c)
</details>
> [!IMPORTANT]
> This project scrapes public-facing websites for its streaming / downloading capabilities and primarily acts as an anilist, jikan and many other media apis tui client. The developer(s) of this application have no affiliation with these content providers. This application hosts zero content and is intended for educational and personal use only. Use at your own risk.
>
> [**Read the Full Disclaimer**](DISCLAIMER.md)
## Core Features
* 📺 **Interactive TUI:** Browse, search, and manage your AniList library in a rich terminal interface powered by `fzf`, `rofi`, or a built-in selector.
@@ -77,7 +49,7 @@
## Installation
Viu runs on any platform with Python 3.10+, including Windows, macOS, Linux, and Android (via Termux).
Viu runs on any platform with Python 3.10+, including Windows, macOS, Linux, and Android (via Termux, see other installation methods).
### Prerequisites
@@ -98,13 +70,13 @@ The best way to install Viu is with [**uv**](https://github.com/astral-sh/uv), a
```bash
# Install with all optional features for the full experience
uv tool install "viu_cli[standard]"
uv tool install "viu-media[standard]"
# Or, pick and choose the extras you need:
uv tool install viu_cli # Core functionality only
uv tool install "viu_cli[download]" # For advanced downloading with yt-dlp
uv tool install "viu_cli[discord]" # For Discord Rich Presence
uv tool install "viu_cli[notifications]" # For desktop notifications
uv tool install viu-media # Core functionality only
uv tool install "viu-media[download]" # For advanced downloading with yt-dlp
uv tool install "viu-media[discord]" # For Discord Rich Presence
uv tool install "viu-media[notifications]" # For desktop notifications
```
### Other Installation Methods
@@ -113,28 +85,116 @@ uv tool install "viu_cli[notifications]" # For desktop notifications
<summary><b>Platform-Specific and Alternative Installers</b></summary>
#### Nix / NixOS
##### Ephemeral / One-Off Run (No Installation)
```bash
nix profile install github:Benexl/viu
nix run github:viu-media/viu
```
##### Imperative Installation
```bash
nix profile install github:viu-media/viu
```
##### Declarative Installation
###### in your flake.nix
```nix
viu.url = "github:viu-media/viu";
```
###### in your system or home-manager packages
```nix
inputs.viu.packages.${pkgs.system}.default
```
#### Arch Linux (AUR)
Use an AUR helper like `yay` or `paru`.
```bash
# Stable version (recommended)
yay -S viu
yay -S viu-media
# Git version (latest commit)
yay -S viu-git
yay -S viu-media-git
```
#### Termux
You may have to have rust installed see this issue: https://github.com/pydantic/pydantic-core/issues/1012#issuecomment-2511269688.
```bash
# Recommended (with pip due to more control)
pkg install python
pkg install rust # required cause of pydantic
# NOTE: order matters
# get pydantic from the termux user repository
pip install pydantic --extra-index-url https://termux-user-repository.github.io/pypi/
# the above will take a while if you want to see more output and feel like sth is happening lol
pip install pydantic --extra-index-url https://termux-user-repository.github.io/pypi/ -v
# now you can install viu
pip install viu-media
# === optional deps ===
# if you have reach here awesome lol :)
# yt-dlp for downloading m3u8 and hls streams
pip install yt-dlp[default,curl-cffi]
# you may also need ffmpeg for processing the videos
pkg install ffmpeg
# tip if you also want yt functionality
pip install yt-dlp-ejs
# you require js runtime
# eg the recommended one
pkg install deno
# for faster fuzzy search
pip install thefuzz
# if you want faster scraping, though barely noticeable lol
pip install lxml --extra-index-url https://termux-user-repository.github.io/pypi/
# if compilation fails you need to have
pkg install libxml2 libxslt
# == ui setup ==
pkg install fzf
# then enable fzf in the config
viu --selector fzf config --update
# if you want previews as well specify preview option
# though images arent that pretty lol, so you can stick to text over full
viu --preview text config --update
# if you set preview to full you need a terminal image renderer
pkg install chafa
# == player setup ==
# for this you need to strictly install from playstore
# search for mpv or vlc (recommended, since has nicer ui)
# the only limitation is currently its not possible to pass headers to the android players
# through android intents
# so use servers like sharepoint and wixmp
# though this is not an issue when it comes to downloading ;)
# if you have installed using 'pkg' uninstall it
# okey now you are all set, i promise the hussle is worth it lol :)
# posted a video of it working to motivate you
# note i recorded it from waydroid which is android for linux sought of like an emulator(bluestacks for example)
```
https://github.com/user-attachments/assets/0c628421-a439-4dea-91bb-7153e8f20ccf
#### Using pipx (for isolated environments)
```bash
pipx install "viu_cli[standard]"
pipx install "viu-media[standard]"
```
#### Using pip
```bash
pip install "viu_cli[standard]"
pip install "viu-media[standard]"
```
</details>
@@ -143,7 +203,7 @@ uv tool install "viu_cli[notifications]" # For desktop notifications
Requires [Git](https://git-scm.com/), [Python 3.10+](https://www.python.org/), and [uv](https://astral.sh/blog/uv).
```bash
git clone https://github.com/Benexl/Viu.git --depth 1
git clone https://github.com/viu-media/Viu.git --depth 1
cd Viu
uv tool install .
viu --version
@@ -161,7 +221,7 @@ Get up and running in three simple steps:
```bash
viu anilist auth
```
This will open your browser. Authorize the app and paste the obtained token back into the terminal.
This will open your browser. Authorize the app and paste the obtained token back into the terminal. Alternatively, you can pass the token directly as an argument, or provide a path to a text file containing the token.
2. **Launch the Interactive TUI:**
```bash
@@ -342,14 +402,10 @@ You can run the background worker as a systemd service for persistence.
systemctl --user daemon-reload
systemctl --user enable --now viu-worker.service
```
## Project using it
**[Inazuma](https://github.com/viu-media/Inazuma)** - official gui wrapper over viu built in kivymd
## Contributing
Contributions are welcome! Whether it's reporting a bug, proposing a feature, or writing code, your help is appreciated. Please read our [**Contributing Guidelines**](CONTRIBUTIONS.md) to get started.
## Disclaimer
> [!IMPORTANT]
> This project scrapes public-facing websites. The developer(s) of this application have no affiliation with these content providers. This application hosts zero content and is intended for educational and personal use only. Use at your own risk.
>
> [**Read the Full Disclaimer**](DISCLAIMER.md)

bundle/pyinstaller.spec

@@ -1,28 +1,56 @@
# -*- mode: python ; coding: utf-8 -*-
import sys
from PyInstaller.utils.hooks import collect_data_files, collect_submodules
block_cipher = None
# Platform-specific settings
is_windows = sys.platform == 'win32'
is_macos = sys.platform == 'darwin'
# Collect all required data files
datas = [
('viu/assets/*', 'viu/assets'),
('../viu_media/assets', 'viu_media/assets'),
]
# Collect all required hidden imports
# Include viu_media and all its submodules to ensure menu modules are bundled
hiddenimports = [
'click',
'rich',
'requests',
'yt_dlp',
'python_mpv',
'fuzzywuzzy',
'viu',
] + collect_submodules('viu')
'viu_media',
'viu_media.cli.interactive.menu',
'viu_media.cli.interactive.menu.media',
# Explicit menu modules (PyInstaller doesn't always pick these up)
'viu_media.cli.interactive.menu.media.downloads',
'viu_media.cli.interactive.menu.media.download_episodes',
'viu_media.cli.interactive.menu.media.dynamic_search',
'viu_media.cli.interactive.menu.media.episodes',
'viu_media.cli.interactive.menu.media.main',
'viu_media.cli.interactive.menu.media.media_actions',
'viu_media.cli.interactive.menu.media.media_airing_schedule',
'viu_media.cli.interactive.menu.media.media_characters',
'viu_media.cli.interactive.menu.media.media_review',
'viu_media.cli.interactive.menu.media.player_controls',
'viu_media.cli.interactive.menu.media.play_downloads',
'viu_media.cli.interactive.menu.media.provider_search',
'viu_media.cli.interactive.menu.media.results',
'viu_media.cli.interactive.menu.media.servers',
] + collect_submodules('viu_media')
# Exclude OpenSSL libraries on Linux to avoid version conflicts
import sys
binaries = []
if sys.platform == 'linux':
# Remove any bundled libssl or libcrypto
binaries = [b for b in binaries if not any(lib in b[0] for lib in ['libssl', 'libcrypto'])]
a = Analysis(
['./viu/viu.py'], # Changed entry point
['../viu_media/viu.py'],
pathex=[],
binaries=[],
binaries=binaries,
datas=datas,
hiddenimports=hiddenimports,
hookspath=[],
@@ -32,16 +60,18 @@ a = Analysis(
win_no_prefer_redirects=False,
win_private_assemblies=False,
cipher=block_cipher,
strip=True, # Strip debug information
optimize=2 # Optimize bytecode noarchive=False
noarchive=False,
)
pyz = PYZ(
a.pure,
a.zipped_data,
optimize=2 # Optimize bytecode cipher=block_cipher
cipher=block_cipher,
)
# Icon path - only use .ico on Windows
icon_path = '../viu_media/assets/icons/logo.ico' if is_windows else None
exe = EXE(
pyz,
a.scripts,
@@ -52,7 +82,7 @@ exe = EXE(
name='viu',
debug=False,
bootloader_ignore_signals=False,
strip=True,
strip=not is_windows, # strip doesn't work well on Windows without proper tools
upx=True,
upx_exclude=[],
runtime_tmpdir=None,
@@ -61,5 +91,5 @@ exe = EXE(
target_arch=None,
codesign_identity=None,
entitlements_file=None,
icon='viu/assets/logo.ico'
icon=icon_path,
)

dev/generate_anilist_media_tags.py (new file, 66 lines)

@@ -0,0 +1,66 @@
#!/usr/bin/env -S uv run --script
import json
from collections import defaultdict
from pathlib import Path
import httpx
from viu_media.core.utils.graphql import execute_graphql
DEV_DIR = Path(__file__).resolve().parent
media_tags_type_py = (
DEV_DIR.parent / "viu_media" / "libs" / "media_api" / "_media_tags.py"
)
media_tags_gql = DEV_DIR / "graphql" / "anilist" / "media_tags.gql"
generated_tags_json = DEV_DIR / "generated" / "anilist" / "tags.json"
media_tags_response = execute_graphql(
"https://graphql.anilist.co", httpx.Client(), media_tags_gql, {}
)
media_tags_response.raise_for_status()
template = """\
# DO NOT EDIT THIS FILE !!! ( 。 •̀ ᴖ •́ 。)
# ITS AUTOMATICALLY GENERATED BY RUNNING ./dev/generate_anilist_media_tags.py
# FROM THE PROJECT ROOT
# SO RUN THAT INSTEAD TO UPDATE THE FILE WITH THE LATEST MEDIA TAGS :)
from enum import Enum
class MediaTag(Enum):\
"""
# 4 spaces
tab = " "
tags = defaultdict(list)
for tag in media_tags_response.json()["data"]["MediaTagCollection"]:
tags[tag["category"]].append(
{
"name": tag["name"],
"description": tag["description"],
"is_adult": tag["isAdult"],
}
)
# save copy of data used to generate the class
json.dump(tags, generated_tags_json.open("w", encoding="utf-8"), indent=2)
for key, value in tags.items():
template = f"{template}\n{tab}#\n{tab}# {key.upper()}\n{tab}#\n"
for tag in value:
name = tag["name"]
_tag_name = name.replace("-", "_").replace(" ", "_").upper()
if _tag_name.startswith(("0", "1", "2", "3", "4", "5", "6", "7", "8", "9")):
_tag_name = f"_{_tag_name}"
tag_name = ""
# sanitize invalid characters for attribute names
for char in _tag_name:
if char.isidentifier() or char.isdigit():
tag_name += char
desc = tag["description"].replace("\n", "")
is_adult = tag["is_adult"]
template = f'{template}\n{tab}# {desc} (is_adult: {is_adult})\n{tab}{tag_name} = "{name}"\n'
media_tags_type_py.write_text(template, "utf-8")

File diff suppressed because it is too large.

dev/graphql/anilist/media_tags.gql (new file, 8 lines)

@@ -0,0 +1,8 @@
query {
MediaTagCollection {
name
description
category
isAdult
}
}

dev/make_release (mode changed: Normal file → Executable file, 0 line changes)

flake.lock (generated, 8 changed lines)

@@ -20,17 +20,17 @@
},
"nixpkgs": {
"locked": {
"lastModified": 1753345091,
"narHash": "sha256-CdX2Rtvp5I8HGu9swBmYuq+ILwRxpXdJwlpg8jvN4tU=",
"lastModified": 1756386758,
"narHash": "sha256-1wxxznpW2CKvI9VdniaUnTT2Os6rdRJcRUf65ZK9OtE=",
"owner": "nixos",
"repo": "nixpkgs",
"rev": "3ff0e34b1383648053bba8ed03f201d3466f90c9",
"rev": "dfb2f12e899db4876308eba6d93455ab7da304cd",
"type": "github"
},
"original": {
"owner": "nixos",
"ref": "nixos-unstable",
"repo": "nixpkgs",
"rev": "3ff0e34b1383648053bba8ed03f201d3466f90c9",
"type": "github"
}
},

flake.nix

@@ -2,8 +2,7 @@
description = "Viu Project Flake";
inputs = {
# The nixpkgs unstable latest commit breaks the plyer python package
nixpkgs.url = "github:nixos/nixpkgs/3ff0e34b1383648053bba8ed03f201d3466f90c9";
nixpkgs.url = "github:nixos/nixpkgs/nixos-unstable";
flake-utils.url = "github:numtide/flake-utils";
};
@@ -17,21 +16,21 @@
system:
let
pkgs = nixpkgs.legacyPackages.${system};
inherit (pkgs) lib python3Packages;
inherit (pkgs) lib python312Packages;
version = "3.1.0";
in
{
packages.default = python3Packages.buildPythonApplication {
packages.default = python312Packages.buildPythonApplication {
pname = "viu";
inherit version;
pyproject = true;
src = self;
build-system = with python3Packages; [ hatchling ];
build-system = with python312Packages; [ hatchling ];
dependencies = with python3Packages; [
dependencies = with python312Packages; [
click
inquirerpy
requests
@@ -67,12 +66,10 @@
# Needs to be adapted for the nix derivation build
doCheck = false;
pythonImportsCheck = [ "viu" ];
meta = {
description = "Your browser anime experience from the terminal";
homepage = "https://github.com/Benexl/Viu";
changelog = "https://github.com/Benexl/Viu/releases/tag/v${version}";
homepage = "https://github.com/viu-media/Viu";
changelog = "https://github.com/viu-media/Viu/releases/tag/v${version}";
mainProgram = "viu";
license = lib.licenses.unlicense;
maintainers = with lib.maintainers; [ theobori ];

pyproject.toml

@@ -1,62 +1,57 @@
[project]
name = "viu_cli"
version = "3.2.6"
name = "viu-media"
version = "3.3.7"
description = "A browser anime site experience from the terminal"
license = "UNLICENSE"
readme = "README.md"
requires-python = ">=3.10"
requires-python = ">=3.11"
dependencies = [
"click>=8.1.7",
"httpx>=0.28.1",
"inquirerpy>=0.3.4",
"pydantic>=2.11.7",
"rich>=13.9.2",
"click>=8.1.7",
"httpx>=0.28.1",
"inquirerpy>=0.3.4",
"pydantic>=2.11.7",
"rich>=13.9.2",
]
[project.scripts]
viu = 'viu_cli:Cli'
viu = 'viu_media:Cli'
[project.optional-dependencies]
standard = [
"thefuzz>=0.22.1",
"yt-dlp>=2025.7.21",
"pycryptodomex>=3.23.0",
"pypiwin32; sys_platform == 'win32'", # For Windows-specific functionality
"pyobjc; sys_platform == 'darwin'", # For macOS-specific functionality
"dbus-python; sys_platform == 'linux'", # For Linux-specific functionality (e.g., notifications),
"plyer>=2.1.0",
"lxml>=6.0.0"
"thefuzz>=0.22.1",
"yt-dlp>=2025.7.21",
"pycryptodomex>=3.23.0",
"pypiwin32; sys_platform == 'win32'", # For Windows-specific functionality
"pyobjc; sys_platform == 'darwin'", # For macOS-specific functionality
"dbus-python; sys_platform == 'linux'", # For Linux-specific functionality (e.g., notifications),
"plyer>=2.1.0",
"lxml>=6.0.0",
]
notifications = [
"dbus-python>=1.4.0",
"pypiwin32; sys_platform == 'win32'", # For Windows-specific functionality
"pyobjc; sys_platform == 'darwin'", # For macOS-specific functionality
"dbus-python>=1.4.0; sys_platform == 'linux'",
"plyer>=2.1.0",
]
mpv = [
"mpv>=1.0.7",
]
mpv = ["mpv>=1.0.7"]
torrent = ["libtorrent>=2.0.11"]
lxml = ["lxml>=6.0.0"]
discord = ["pypresence>=4.3.0"]
download = [
"pycryptodomex>=3.23.0",
"yt-dlp>=2025.7.21",
]
torrents = [
"libtorrent>=2.0.11",
]
download = ["pycryptodomex>=3.23.0", "yt-dlp>=2025.7.21"]
torrents = ["libtorrent>=2.0.11"]
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
[tool.uv]
dev-dependencies = [
"pre-commit>=4.0.1",
"pyinstaller>=6.11.1",
"pyright>=1.1.384",
"pytest>=8.3.3",
"pytest-httpx>=0.35.0",
"ruff>=0.6.9",
[dependency-groups]
dev = [
"pre-commit>=4.0.1",
"pyinstaller>=6.11.1",
"pyright>=1.1.384",
"pytest>=8.3.3",
"pytest-httpx>=0.35.0",
"ruff>=0.6.9",
]
[tool.pytest.ini_options]

pyrightconfig.json

@@ -1,5 +1,5 @@
{
"venvPath": ".",
"venv": ".venv",
"pythonVersion": "3.10"
"pythonVersion": "3.12"
}

test_auth.py (new file, 284 lines)

@@ -0,0 +1,284 @@
from unittest.mock import MagicMock, patch
import pytest
from click.testing import CliRunner
from viu_media.cli.commands.anilist.commands.auth import auth
@pytest.fixture
def runner():
return CliRunner()
@pytest.fixture
def mock_config():
config = MagicMock()
config.user.interactive = True
return config
@pytest.fixture
def mock_auth_service():
with patch("viu_media.cli.service.auth.AuthService") as mock:
yield mock
@pytest.fixture
def mock_feedback_service():
with patch("viu_media.cli.service.feedback.FeedbackService") as mock:
yield mock
@pytest.fixture
def mock_selector():
with patch("viu_media.libs.selectors.selector.create_selector") as mock:
yield mock
@pytest.fixture
def mock_api_client():
with patch("viu_media.libs.media_api.api.create_api_client") as mock:
yield mock
@pytest.fixture
def mock_webbrowser():
with patch("viu_media.cli.commands.anilist.commands.auth.webbrowser") as mock:
yield mock
def test_auth_with_token_argument(
runner,
mock_config,
mock_auth_service,
mock_feedback_service,
mock_selector,
mock_api_client,
):
"""Test 'viu anilist auth <token>'."""
api_client_instance = mock_api_client.return_value
profile_mock = MagicMock()
profile_mock.name = "testuser"
api_client_instance.authenticate.return_value = profile_mock
auth_service_instance = mock_auth_service.return_value
auth_service_instance.get_auth.return_value = None
result = runner.invoke(auth, ["test_token"], obj=mock_config)
assert result.exit_code == 0
mock_api_client.assert_called_with("anilist", mock_config)
api_client_instance.authenticate.assert_called_with("test_token")
auth_service_instance.save_user_profile.assert_called_with(
profile_mock, "test_token"
)
feedback_instance = mock_feedback_service.return_value
feedback_instance.info.assert_called_with("Successfully logged in as testuser! ✨")
def test_auth_with_token_file(
runner,
mock_config,
mock_auth_service,
mock_feedback_service,
mock_selector,
mock_api_client,
tmp_path,
):
"""Test 'viu anilist auth <path/to/token.txt>'."""
token_file = tmp_path / "token.txt"
token_file.write_text("file_token")
api_client_instance = mock_api_client.return_value
profile_mock = MagicMock()
profile_mock.name = "testuser"
api_client_instance.authenticate.return_value = profile_mock
auth_service_instance = mock_auth_service.return_value
auth_service_instance.get_auth.return_value = None
result = runner.invoke(auth, [str(token_file)], obj=mock_config)
assert result.exit_code == 0
mock_api_client.assert_called_with("anilist", mock_config)
api_client_instance.authenticate.assert_called_with("file_token")
auth_service_instance.save_user_profile.assert_called_with(
profile_mock, "file_token"
)
feedback_instance = mock_feedback_service.return_value
feedback_instance.info.assert_called_with("Successfully logged in as testuser! ✨")
def test_auth_with_empty_token_file(
runner,
mock_config,
mock_auth_service,
mock_feedback_service,
mock_selector,
mock_api_client,
tmp_path,
):
"""Test 'viu anilist auth' with an empty token file."""
token_file = tmp_path / "token.txt"
token_file.write_text("")
auth_service_instance = mock_auth_service.return_value
auth_service_instance.get_auth.return_value = None
result = runner.invoke(auth, [str(token_file)], obj=mock_config)
assert result.exit_code == 0
feedback_instance = mock_feedback_service.return_value
feedback_instance.error.assert_called_with(f"Token file is empty: {token_file}")
def test_auth_interactive(
runner,
mock_config,
mock_auth_service,
mock_feedback_service,
mock_selector,
mock_api_client,
mock_webbrowser,
):
"""Test 'viu anilist auth' interactive mode."""
mock_webbrowser.open.return_value = True
selector_instance = mock_selector.return_value
selector_instance.ask.return_value = "interactive_token"
api_client_instance = mock_api_client.return_value
profile_mock = MagicMock()
profile_mock.name = "testuser"
api_client_instance.authenticate.return_value = profile_mock
auth_service_instance = mock_auth_service.return_value
auth_service_instance.get_auth.return_value = None
result = runner.invoke(auth, [], obj=mock_config)
assert result.exit_code == 0
selector_instance.ask.assert_called_with("Enter your AniList Access Token")
api_client_instance.authenticate.assert_called_with("interactive_token")
auth_service_instance.save_user_profile.assert_called_with(
profile_mock, "interactive_token"
)
feedback_instance = mock_feedback_service.return_value
feedback_instance.info.assert_called_with("Successfully logged in as testuser! ✨")
def test_auth_status_logged_in(
runner, mock_config, mock_auth_service, mock_feedback_service
):
"""Test 'viu anilist auth --status' when logged in."""
auth_service_instance = mock_auth_service.return_value
user_data_mock = MagicMock()
user_data_mock.user_profile = "testuser"
auth_service_instance.get_auth.return_value = user_data_mock
result = runner.invoke(auth, ["--status"], obj=mock_config)
assert result.exit_code == 0
feedback_instance = mock_feedback_service.return_value
feedback_instance.info.assert_called_with("Logged in as: testuser")
def test_auth_status_logged_out(
runner, mock_config, mock_auth_service, mock_feedback_service
):
"""Test 'viu anilist auth --status' when logged out."""
auth_service_instance = mock_auth_service.return_value
auth_service_instance.get_auth.return_value = None
result = runner.invoke(auth, ["--status"], obj=mock_config)
assert result.exit_code == 0
feedback_instance = mock_feedback_service.return_value
feedback_instance.error.assert_called_with("Not logged in.")
def test_auth_logout(
runner, mock_config, mock_auth_service, mock_feedback_service, mock_selector
):
"""Test 'viu anilist auth --logout'."""
selector_instance = mock_selector.return_value
selector_instance.confirm.return_value = True
result = runner.invoke(auth, ["--logout"], obj=mock_config)
assert result.exit_code == 0
auth_service_instance = mock_auth_service.return_value
auth_service_instance.clear_user_profile.assert_called_once()
feedback_instance = mock_feedback_service.return_value
feedback_instance.info.assert_called_with("You have been logged out.")
def test_auth_logout_cancel(
runner, mock_config, mock_auth_service, mock_feedback_service, mock_selector
):
"""Test 'viu anilist auth --logout' when user cancels."""
selector_instance = mock_selector.return_value
selector_instance.confirm.return_value = False
result = runner.invoke(auth, ["--logout"], obj=mock_config)
assert result.exit_code == 0
auth_service_instance = mock_auth_service.return_value
auth_service_instance.clear_user_profile.assert_not_called()
def test_auth_already_logged_in_relogin_yes(
runner,
mock_config,
mock_auth_service,
mock_feedback_service,
mock_selector,
mock_api_client,
):
"""Test 'viu anilist auth' when already logged in and user chooses to relogin."""
auth_service_instance = mock_auth_service.return_value
auth_profile_mock = MagicMock()
auth_profile_mock.user_profile.name = "testuser"
auth_service_instance.get_auth.return_value = auth_profile_mock
selector_instance = mock_selector.return_value
selector_instance.confirm.return_value = True
selector_instance.ask.return_value = "new_token"
api_client_instance = mock_api_client.return_value
new_profile_mock = MagicMock()
new_profile_mock.name = "newuser"
api_client_instance.authenticate.return_value = new_profile_mock
result = runner.invoke(auth, [], obj=mock_config)
assert result.exit_code == 0
selector_instance.confirm.assert_called_with(
"You are already logged in as testuser. Would you like to relogin"
)
auth_service_instance.save_user_profile.assert_called_with(
new_profile_mock, "new_token"
)
feedback_instance = mock_feedback_service.return_value
feedback_instance.info.assert_called_with("Successfully logged in as newuser! ✨")
def test_auth_already_logged_in_relogin_no(
runner, mock_config, mock_auth_service, mock_feedback_service, mock_selector
):
"""Test 'viu anilist auth' when already logged in and user chooses not to relogin."""
auth_service_instance = mock_auth_service.return_value
auth_profile_mock = MagicMock()
auth_profile_mock.user_profile.name = "testuser"
auth_service_instance.get_auth.return_value = auth_profile_mock
selector_instance = mock_selector.return_value
selector_instance.confirm.return_value = False
result = runner.invoke(auth, [], obj=mock_config)
assert result.exit_code == 0
auth_service_instance.save_user_profile.assert_not_called()
feedback_instance = mock_feedback_service.return_value
feedback_instance.info.assert_not_called()

View File

@@ -0,0 +1,54 @@
from typing import Any
from viu_media.libs.media_api.anilist.mapper import to_generic_user_profile
from viu_media.libs.media_api.anilist.types import AnilistViewerData
from viu_media.libs.media_api.types import UserProfile
def test_to_generic_user_profile_success():
data: AnilistViewerData = {
"data": {
"Viewer": {
"id": 123,
"name": "testuser",
"avatar": {
"large": "https://example.com/avatar.png",
"medium": "https://example.com/avatar_medium.png",
"extraLarge": "https://example.com/avatar_extraLarge.png",
"small": "https://example.com/avatar_small.png",
},
"bannerImage": "https://example.com/banner.png",
"token": "test_token",
}
}
}
profile = to_generic_user_profile(data)
assert isinstance(profile, UserProfile)
assert profile.id == 123
assert profile.name == "testuser"
assert profile.avatar_url == "https://example.com/avatar.png"
assert profile.banner_url == "https://example.com/banner.png"
def test_to_generic_user_profile_data_none():
data: Any = {"data": None}
profile = to_generic_user_profile(data)
assert profile is None
def test_to_generic_user_profile_no_data_key():
data: Any = {"errors": [{"message": "Invalid token"}]}
profile = to_generic_user_profile(data)
assert profile is None
def test_to_generic_user_profile_no_viewer_key():
data: Any = {"data": {"Page": {}}}
profile = to_generic_user_profile(data)
assert profile is None
def test_to_generic_user_profile_viewer_none():
data: Any = {"data": {"Viewer": None}}
profile = to_generic_user_profile(data)
assert profile is None

View File

@@ -1,7 +1,7 @@
[tox]
requires =
tox>=4
env_list = lint, pyright, py{310,311}
env_list = lint, pyright, py{311,312}
[testenv]
description = run unit tests

uv.lock (generated): 3881 lines changed. File diff suppressed because it is too large.

fa → viu: 2 lines changed (Normal file → Executable file).
View File

@@ -3,4 +3,4 @@ provider_type=$1
provider_name=$2
[ -z "$provider_type" ] && echo "Please specify provider type" && exit
[ -z "$provider_name" ] && echo "Please specify provider type" && exit
uv run python -m viu_cli.libs.provider.${provider_type}.${provider_name}.provider
uv run python -m viu_media.libs.provider.${provider_type}.${provider_name}.provider

View File

@@ -1,22 +0,0 @@
#!/bin/sh
#
# Viu Airing Schedule Info Script Template
# This script formats and displays airing schedule details in the FZF preview pane.
# Python injects the actual data values into the placeholders.
draw_rule
print_kv "Anime Title" "{ANIME_TITLE}"
draw_rule
print_kv "Total Episodes" "{TOTAL_EPISODES}"
print_kv "Upcoming Episodes" "{UPCOMING_EPISODES}"
draw_rule
echo "{C_KEY}Next Episodes:{RESET}"
echo
echo "{SCHEDULE_TABLE}" | fold -s -w "$WIDTH"
draw_rule
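
These info templates are rendered on the Python side by plain string replacement, as their header comments note. A minimal sketch of that rendering step, using a hypothetical render_template helper rather than the project's actual worker code:

from pathlib import Path

def render_template(template_path: Path, values: dict[str, str]) -> str:
    """Fill {PLACEHOLDER} tokens in a shell template via str.replace()."""
    script = template_path.read_text(encoding="utf-8")
    for key, value in values.items():
        script = script.replace("{" + key + "}", value)
    return script

# Hypothetical usage for the airing-schedule template above; values are illustrative.
rendered = render_template(
    Path("airing_schedule_info.sh"),  # illustrative path only
    {
        "ANIME_TITLE": "Example Title",
        "TOTAL_EPISODES": "12",
        "UPCOMING_EPISODES": "3",
        "SCHEDULE_TABLE": "Ep 10  2025-01-03\nEp 11  2025-01-10",
        "C_KEY": "",  # color codes left empty in this sketch
        "RESET": "",
    },
)
print(rendered)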

View File

@@ -1,75 +0,0 @@
#!/bin/sh
#
# FZF Airing Schedule Preview Script Template
#
# This script is a template. The placeholders in curly braces, like {NAME}
# are dynamically filled by python using .replace()
WIDTH=${FZF_PREVIEW_COLUMNS:-80} # Set a fallback width of 80
IMAGE_RENDERER="{IMAGE_RENDERER}"
generate_sha256() {
local input
# Check if input is passed as an argument or piped
if [ -n "$1" ]; then
input="$1"
else
input=$(cat)
fi
if command -v sha256sum &>/dev/null; then
echo -n "$input" | sha256sum | awk '{print $1}'
elif command -v shasum &>/dev/null; then
echo -n "$input" | shasum -a 256 | awk '{print $1}'
elif command -v sha256 &>/dev/null; then
echo -n "$input" | sha256 | awk '{print $1}'
elif command -v openssl &>/dev/null; then
echo -n "$input" | openssl dgst -sha256 | awk '{print $2}'
else
echo -n "$input" | base64 | tr '/+' '_-' | tr -d '\n'
fi
}
print_kv() {
local key="$1"
local value="$2"
local key_len=${#key}
local value_len=${#value}
local multiplier="${3:-1}"
# Correctly calculate padding by accounting for the key, the ": ", and the value.
local padding_len=$((WIDTH - key_len - 2 - value_len * multiplier))
# If the text is too long to fit, just add a single space for separation.
if [ "$padding_len" -lt 1 ]; then
padding_len=1
value=$(echo $value| fold -s -w "$((WIDTH - key_len - 3))")
printf "{C_KEY}%s:{RESET}%*s%s\\n" "$key" "$padding_len" "" " $value"
else
printf "{C_KEY}%s:{RESET}%*s%s\\n" "$key" "$padding_len" "" " $value"
fi
}
draw_rule(){
ll=2
while [ $ll -le $FZF_PREVIEW_COLUMNS ];do
echo -n -e "{C_RULE}─{RESET}"
((ll++))
done
echo
}
title={}
hash=$(generate_sha256 "$title")
if [ "{PREVIEW_MODE}" = "full" ] || [ "{PREVIEW_MODE}" = "text" ]; then
info_file="{INFO_CACHE_DIR}{PATH_SEP}$hash"
if [ -f "$info_file" ]; then
source "$info_file"
else
echo "📅 Loading airing schedule..."
fi
fi

View File

@@ -1,41 +0,0 @@
#!/bin/sh
#
# Viu Character Info Script Template
# This script formats and displays character details in the FZF preview pane.
# Python injects the actual data values into the placeholders.
draw_rule
print_kv "Character Name" "{CHARACTER_NAME}"
if [ -n "{CHARACTER_NATIVE_NAME}" ] && [ "{CHARACTER_NATIVE_NAME}" != "N/A" ]; then
print_kv "Native Name" "{CHARACTER_NATIVE_NAME}"
fi
draw_rule
if [ -n "{CHARACTER_GENDER}" ] && [ "{CHARACTER_GENDER}" != "Unknown" ]; then
print_kv "Gender" "{CHARACTER_GENDER}"
fi
if [ -n "{CHARACTER_AGE}" ] && [ "{CHARACTER_AGE}" != "Unknown" ]; then
print_kv "Age" "{CHARACTER_AGE}"
fi
if [ -n "{CHARACTER_BLOOD_TYPE}" ] && [ "{CHARACTER_BLOOD_TYPE}" != "N/A" ]; then
print_kv "Blood Type" "{CHARACTER_BLOOD_TYPE}"
fi
if [ -n "{CHARACTER_BIRTHDAY}" ] && [ "{CHARACTER_BIRTHDAY}" != "N/A" ]; then
print_kv "Birthday" "{CHARACTER_BIRTHDAY}"
fi
if [ -n "{CHARACTER_FAVOURITES}" ] && [ "{CHARACTER_FAVOURITES}" != "0" ]; then
print_kv "Favorites" "{CHARACTER_FAVOURITES}"
fi
draw_rule
echo "{CHARACTER_DESCRIPTION}" | fold -s -w "$WIDTH"
draw_rule

View File

@@ -1,130 +0,0 @@
#!/bin/sh
#
# FZF Character Preview Script Template
#
# This script is a template. The placeholders in curly braces, like {NAME}
# are dynamically filled by python using .replace()
WIDTH=${FZF_PREVIEW_COLUMNS:-80} # Set a fallback width of 80
IMAGE_RENDERER="{IMAGE_RENDERER}"
generate_sha256() {
local input
# Check if input is passed as an argument or piped
if [ -n "$1" ]; then
input="$1"
else
input=$(cat)
fi
if command -v sha256sum &>/dev/null; then
echo -n "$input" | sha256sum | awk '{print $1}'
elif command -v shasum &>/dev/null; then
echo -n "$input" | shasum -a 256 | awk '{print $1}'
elif command -v sha256 &>/dev/null; then
echo -n "$input" | sha256 | awk '{print $1}'
elif command -v openssl &>/dev/null; then
echo -n "$input" | openssl dgst -sha256 | awk '{print $2}'
else
echo -n "$input" | base64 | tr '/+' '_-' | tr -d '\n'
fi
}
fzf_preview() {
file=$1
dim=${FZF_PREVIEW_COLUMNS}x${FZF_PREVIEW_LINES}
if [ "$dim" = x ]; then
dim=$(stty size </dev/tty | awk "{print \$2 \"x\" \$1}")
fi
if ! [ "$IMAGE_RENDERER" = "icat" ] && [ -z "$KITTY_WINDOW_ID" ] && [ "$((FZF_PREVIEW_TOP + FZF_PREVIEW_LINES))" -eq "$(stty size </dev/tty | awk "{print \$1}")" ]; then
dim=${FZF_PREVIEW_COLUMNS}x$((FZF_PREVIEW_LINES - 1))
fi
if [ "$IMAGE_RENDERER" = "icat" ] && [ -z "$GHOSTTY_BIN_DIR" ]; then
if command -v kitten >/dev/null 2>&1; then
kitten icat --clear --transfer-mode=memory --unicode-placeholder --stdin=no --place="$dim@0x0" "$file" | sed "\$d" | sed "$(printf "\$s/\$/\033[m/")"
elif command -v icat >/dev/null 2>&1; then
icat --clear --transfer-mode=memory --unicode-placeholder --stdin=no --place="$dim@0x0" "$file" | sed "\$d" | sed "$(printf "\$s/\$/\033[m/")"
else
kitty icat --clear --transfer-mode=memory --unicode-placeholder --stdin=no --place="$dim@0x0" "$file" | sed "\$d" | sed "$(printf "\$s/\$/\033[m/")"
fi
elif [ -n "$GHOSTTY_BIN_DIR" ]; then
if command -v kitten >/dev/null 2>&1; then
kitten icat --clear --transfer-mode=memory --unicode-placeholder --stdin=no --place="$dim@0x0" "$file" | sed "\$d" | sed "$(printf "\$s/\$/\033[m/")"
elif command -v icat >/dev/null 2>&1; then
icat --clear --transfer-mode=memory --unicode-placeholder --stdin=no --place="$dim@0x0" "$file" | sed "\$d" | sed "$(printf "\$s/\$/\033[m/")"
else
chafa -s "$dim" "$file"
fi
elif command -v chafa >/dev/null 2>&1; then
case "$PLATFORM" in
android) chafa -s "$dim" "$file" ;;
windows) chafa -f sixel -s "$dim" "$file" ;;
*) chafa -s "$dim" "$file" ;;
esac
echo
elif command -v imgcat >/dev/null; then
imgcat -W "${dim%%x*}" -H "${dim##*x}" "$file"
else
echo "Please install a terminal image viewer:"
echo "either icat (for kitty and WezTerm), imgcat, or chafa"
fi
}
print_kv() {
local key="$1"
local value="$2"
local key_len=${#key}
local value_len=${#value}
local multiplier="${3:-1}"
# Correctly calculate padding by accounting for the key, the ": ", and the value.
local padding_len=$((WIDTH - key_len - 2 - value_len * multiplier))
# If the text is too long to fit, just add a single space for separation.
if [ "$padding_len" -lt 1 ]; then
padding_len=1
value=$(echo $value| fold -s -w "$((WIDTH - key_len - 3))")
printf "{C_KEY}%s:{RESET}%*s%s\\n" "$key" "$padding_len" "" " $value"
else
printf "{C_KEY}%s:{RESET}%*s%s\\n" "$key" "$padding_len" "" " $value"
fi
}
draw_rule(){
ll=2
while [ $ll -le $FZF_PREVIEW_COLUMNS ];do
echo -n -e "{C_RULE}─{RESET}"
((ll++))
done
echo
}
title={}
hash=$(generate_sha256 "$title")
# FIXME: Disabled because the image covers the text; possibly an aspect-ratio or image-format issue
# if [ "{PREVIEW_MODE}" = "full" ] || [ "{PREVIEW_MODE}" = "image" ]; then
# image_file="{IMAGE_CACHE_DIR}{PATH_SEP}$hash.png"
# if [ -f "$image_file" ]; then
# fzf_preview "$image_file"
# echo # Add a newline for spacing
# fi
# fi
if [ "{PREVIEW_MODE}" = "full" ] || [ "{PREVIEW_MODE}" = "text" ]; then
info_file="{INFO_CACHE_DIR}{PATH_SEP}$hash"
if [ -f "$info_file" ]; then
source "$info_file"
else
echo "👤 Loading character details..."
fi
fi

View File

@@ -1,315 +0,0 @@
#!/bin/bash
#
# FZF Dynamic Preview Script Template
#
# This script handles previews for dynamic search results by parsing the JSON
# search results file and extracting info for the selected item.
# The placeholders in curly braces are dynamically filled by Python using .replace()
WIDTH=${FZF_PREVIEW_COLUMNS:-80}
IMAGE_RENDERER="{IMAGE_RENDERER}"
SEARCH_RESULTS_FILE="{SEARCH_RESULTS_FILE}"
IMAGE_CACHE_PATH="{IMAGE_CACHE_PATH}"
INFO_CACHE_PATH="{INFO_CACHE_PATH}"
PATH_SEP="{PATH_SEP}"
# Color codes injected by Python
C_TITLE="{C_TITLE}"
C_KEY="{C_KEY}"
C_VALUE="{C_VALUE}"
C_RULE="{C_RULE}"
RESET="{RESET}"
# Selected item from fzf
SELECTED_ITEM={}
generate_sha256() {
local input="$1"
if command -v sha256sum &>/dev/null; then
echo -n "$input" | sha256sum | awk '{print $1}'
elif command -v shasum &>/dev/null; then
echo -n "$input" | shasum -a 256 | awk '{print $1}'
elif command -v sha256 &>/dev/null; then
echo -n "$input" | sha256 | awk '{print $1}'
elif command -v openssl &>/dev/null; then
echo -n "$input" | openssl dgst -sha256 | awk '{print $2}'
else
echo -n "$input" | base64 | tr '/+' '_-' | tr -d '\n'
fi
}
fzf_preview() {
file=$1
dim=${FZF_PREVIEW_COLUMNS}x${FZF_PREVIEW_LINES}
if [ "$dim" = x ]; then
dim=$(stty size </dev/tty | awk "{print \$2 \"x\" \$1}")
fi
if ! [ "$IMAGE_RENDERER" = "icat" ] && [ -z "$KITTY_WINDOW_ID" ] && [ "$((FZF_PREVIEW_TOP + FZF_PREVIEW_LINES))" -eq "$(stty size </dev/tty | awk "{print \$1}")" ]; then
dim=${FZF_PREVIEW_COLUMNS}x$((FZF_PREVIEW_LINES - 1))
fi
if [ "$IMAGE_RENDERER" = "icat" ] && [ -z "$GHOSTTY_BIN_DIR" ]; then
if command -v kitten >/dev/null 2>&1; then
kitten icat --clear --transfer-mode=memory --unicode-placeholder --stdin=no --place="$dim@0x0" "$file" | sed "\$d" | sed "$(printf "\$s/\$/\033[m/")"
elif command -v icat >/dev/null 2>&1; then
icat --clear --transfer-mode=memory --unicode-placeholder --stdin=no --place="$dim@0x0" "$file" | sed "\$d" | sed "$(printf "\$s/\$/\033[m/")"
else
kitty icat --clear --transfer-mode=memory --unicode-placeholder --stdin=no --place="$dim@0x0" "$file" | sed "\$d" | sed "$(printf "\$s/\$/\033[m/")"
fi
elif [ -n "$GHOSTTY_BIN_DIR" ]; then
if command -v kitten >/dev/null 2>&1; then
kitten icat --clear --transfer-mode=memory --unicode-placeholder --stdin=no --place="$dim@0x0" "$file" | sed "\$d" | sed "$(printf "\$s/\$/\033[m/")"
elif command -v icat >/dev/null 2>&1; then
icat --clear --transfer-mode=memory --unicode-placeholder --stdin=no --place="$dim@0x0" "$file" | sed "\$d" | sed "$(printf "\$s/\$/\033[m/")"
else
chafa -s "$dim" "$file"
fi
elif command -v chafa >/dev/null 2>&1; then
case "$PLATFORM" in
android) chafa -s "$dim" "$file" ;;
windows) chafa -f sixel -s "$dim" "$file" ;;
*) chafa -s "$dim" "$file" ;;
esac
echo
elif command -v imgcat >/dev/null; then
imgcat -W "${dim%%x*}" -H "${dim##*x}" "$file"
else
echo "Please install a terminal image viewer:"
echo "either icat (for kitty and WezTerm), imgcat, or chafa"
fi
}
print_kv() {
local key="$1"
local value="$2"
local key_len=${#key}
local value_len=${#value}
local multiplier="${3:-1}"
local padding_len=$((WIDTH - key_len - 2 - value_len * multiplier))
if [ "$padding_len" -lt 1 ]; then
padding_len=1
value=$(echo $value| fold -s -w "$((WIDTH - key_len - 3))")
printf "{C_KEY}%s:{RESET}%*s%s\\n" "$key" "$padding_len" "" " $value"
else
printf "{C_KEY}%s:{RESET}%*s%s\\n" "$key" "$padding_len" "" " $value"
fi
}
draw_rule() {
ll=2
while [ $ll -le $FZF_PREVIEW_COLUMNS ];do
echo -n -e "{C_RULE}─{RESET}"
((ll++))
done
echo
}
clean_html() {
echo "$1" | sed 's/<[^>]*>//g' | sed 's/&lt;/</g' | sed 's/&gt;/>/g' | sed 's/&amp;/\&/g' | sed 's/&quot;/"/g' | sed "s/&#39;/'/g"
}
format_date() {
local date_obj="$1"
if [ "$date_obj" = "null" ] || [ -z "$date_obj" ]; then
echo "N/A"
return
fi
# Extract year, month, day from the date object
if command -v jq >/dev/null 2>&1; then
year=$(echo "$date_obj" | jq -r '.year // "N/A"' 2>/dev/null || echo "N/A")
month=$(echo "$date_obj" | jq -r '.month // ""' 2>/dev/null || echo "")
day=$(echo "$date_obj" | jq -r '.day // ""' 2>/dev/null || echo "")
else
year=$(echo "$date_obj" | python3 -c "import json, sys; data=json.load(sys.stdin); print(data.get('year', 'N/A'))" 2>/dev/null || echo "N/A")
month=$(echo "$date_obj" | python3 -c "import json, sys; data=json.load(sys.stdin); print(data.get('month', ''))" 2>/dev/null || echo "")
day=$(echo "$date_obj" | python3 -c "import json, sys; data=json.load(sys.stdin); print(data.get('day', ''))" 2>/dev/null || echo "")
fi
if [ "$year" = "N/A" ] || [ "$year" = "null" ]; then
echo "N/A"
elif [ -n "$month" ] && [ "$month" != "null" ] && [ -n "$day" ] && [ "$day" != "null" ]; then
echo "$day/$month/$year"
elif [ -n "$month" ] && [ "$month" != "null" ]; then
echo "$month/$year"
else
echo "$year"
fi
}
# If no selection or search results file doesn't exist, show placeholder
if [ -z "$SELECTED_ITEM" ] || [ ! -f "$SEARCH_RESULTS_FILE" ]; then
echo "${C_TITLE}Dynamic Search Preview${RESET}"
draw_rule
echo "Type to search for anime..."
echo "Results will appear here as you type."
echo
echo "DEBUG:"
echo "SELECTED_ITEM='$SELECTED_ITEM'"
echo "SEARCH_RESULTS_FILE='$SEARCH_RESULTS_FILE'"
if [ -f "$SEARCH_RESULTS_FILE" ]; then
echo "Search results file exists"
else
echo "Search results file missing"
fi
exit 0
fi
# Parse the search results JSON and find the matching item
if command -v jq >/dev/null 2>&1; then
MEDIA_DATA=$(cat "$SEARCH_RESULTS_FILE" | jq --arg anime_title "$SELECTED_ITEM" '
.data.Page.media[]? |
select((.title.english // .title.romaji // .title.native // "Unknown") == $anime_title )
' )
else
# Fallback to Python for JSON parsing
MEDIA_DATA=$(cat "$SEARCH_RESULTS_FILE" | python3 -c "
import json
import sys
try:
data = json.load(sys.stdin)
selected_item = '''$SELECTED_ITEM'''
if 'data' not in data or 'Page' not in data['data'] or 'media' not in data['data']['Page']:
sys.exit(1)
media_list = data['data']['Page']['media']
for media in media_list:
title = media.get('title', {})
english_title = title.get('english') or title.get('romaji') or title.get('native', 'Unknown')
year = media.get('startDate', {}).get('year', 'Unknown') if media.get('startDate') else 'Unknown'
status = media.get('status', 'Unknown')
genres = ', '.join(media.get('genres', [])[:3]) or 'Unknown'
display_format = f'{english_title} ({year}) [{status}] - {genres}'
# Debug output for matching
print(f"DEBUG: selected_item='{selected_item.strip()}' display_format='{display_format.strip()}'", file=sys.stderr)
if selected_item.strip() == display_format.strip():
json.dump(media, sys.stdout, indent=2)
sys.exit(0)
print(f"DEBUG: No match found for selected_item='{selected_item.strip()}'", file=sys.stderr)
sys.exit(1)
except Exception as e:
print(f'Error: {e}', file=sys.stderr)
sys.exit(1)
" 2>/dev/null)
fi
# If we couldn't find the media data, show error
if [ $? -ne 0 ] || [ -z "$MEDIA_DATA" ]; then
echo "${C_TITLE}Preview Error${RESET}"
draw_rule
echo "Could not load preview data for:"
echo "$SELECTED_ITEM"
echo
echo "DEBUG INFO:"
echo "Search results file: $SEARCH_RESULTS_FILE"
if [ -f "$SEARCH_RESULTS_FILE" ]; then
echo "File exists, size: $(wc -c < "$SEARCH_RESULTS_FILE") bytes"
echo "First few lines of search results:"
head -3 "$SEARCH_RESULTS_FILE" 2>/dev/null || echo "Cannot read file"
else
echo "Search results file does not exist"
fi
exit 0
fi
# Extract information from the media data
if command -v jq >/dev/null 2>&1; then
# Use jq for faster extraction
TITLE=$(echo "$MEDIA_DATA" | jq -r '.title.english // .title.romaji // .title.native // "Unknown"' 2>/dev/null || echo "Unknown")
STATUS=$(echo "$MEDIA_DATA" | jq -r '.status // "Unknown"' 2>/dev/null || echo "Unknown")
FORMAT=$(echo "$MEDIA_DATA" | jq -r '.format // "Unknown"' 2>/dev/null || echo "Unknown")
EPISODES=$(echo "$MEDIA_DATA" | jq -r '.episodes // "Unknown"' 2>/dev/null || echo "Unknown")
DURATION=$(echo "$MEDIA_DATA" | jq -r 'if .duration then "\(.duration) min" else "Unknown" end' 2>/dev/null || echo "Unknown")
SCORE=$(echo "$MEDIA_DATA" | jq -r 'if .averageScore then "\(.averageScore)/100" else "N/A" end' 2>/dev/null || echo "N/A")
FAVOURITES=$(echo "$MEDIA_DATA" | jq -r '.favourites // 0' 2>/dev/null | sed ':a;s/\B[0-9]\{3\}\>/,&/;ta' || echo "0")
POPULARITY=$(echo "$MEDIA_DATA" | jq -r '.popularity // 0' 2>/dev/null | sed ':a;s/\B[0-9]\{3\}\>/,&/;ta' || echo "0")
GENRES=$(echo "$MEDIA_DATA" | jq -r '(.genres[:5] // []) | join(", ") | if . == "" then "Unknown" else . end' 2>/dev/null || echo "Unknown")
DESCRIPTION=$(echo "$MEDIA_DATA" | jq -r '.description // "No description available."' 2>/dev/null || echo "No description available.")
# Get start and end dates as JSON objects
START_DATE_OBJ=$(echo "$MEDIA_DATA" | jq -c '.startDate' 2>/dev/null || echo "null")
END_DATE_OBJ=$(echo "$MEDIA_DATA" | jq -c '.endDate' 2>/dev/null || echo "null")
# Get cover image URL
COVER_IMAGE=$(echo "$MEDIA_DATA" | jq -r '.coverImage.large // ""' 2>/dev/null || echo "")
else
# Fallback to Python for extraction
TITLE=$(echo "$MEDIA_DATA" | python3 -c "import json, sys; data=json.load(sys.stdin); title=data.get('title',{}); print(title.get('english') or title.get('romaji') or title.get('native', 'Unknown'))" 2>/dev/null || echo "Unknown")
STATUS=$(echo "$MEDIA_DATA" | python3 -c "import json, sys; data=json.load(sys.stdin); print(data.get('status', 'Unknown'))" 2>/dev/null || echo "Unknown")
FORMAT=$(echo "$MEDIA_DATA" | python3 -c "import json, sys; data=json.load(sys.stdin); print(data.get('format', 'Unknown'))" 2>/dev/null || echo "Unknown")
EPISODES=$(echo "$MEDIA_DATA" | python3 -c "import json, sys; data=json.load(sys.stdin); print(data.get('episodes', 'Unknown'))" 2>/dev/null || echo "Unknown")
DURATION=$(echo "$MEDIA_DATA" | python3 -c "import json, sys; data=json.load(sys.stdin); duration=data.get('duration'); print(f'{duration} min' if duration else 'Unknown')" 2>/dev/null || echo "Unknown")
SCORE=$(echo "$MEDIA_DATA" | python3 -c "import json, sys; data=json.load(sys.stdin); score=data.get('averageScore'); print(f'{score}/100' if score else 'N/A')" 2>/dev/null || echo "N/A")
FAVOURITES=$(echo "$MEDIA_DATA" | python3 -c "import json, sys; data=json.load(sys.stdin); print(f\"{data.get('favourites', 0):,}\")" 2>/dev/null || echo "0")
POPULARITY=$(echo "$MEDIA_DATA" | python3 -c "import json, sys; data=json.load(sys.stdin); print(f\"{data.get('popularity', 0):,}\")" 2>/dev/null || echo "0")
GENRES=$(echo "$MEDIA_DATA" | python3 -c "import json, sys; data=json.load(sys.stdin); print(', '.join(data.get('genres', [])[:5]))" 2>/dev/null || echo "Unknown")
DESCRIPTION=$(echo "$MEDIA_DATA" | python3 -c "import json, sys; data=json.load(sys.stdin); print(data.get('description', 'No description available.'))" 2>/dev/null || echo "No description available.")
# Get start and end dates
START_DATE_OBJ=$(echo "$MEDIA_DATA" | python3 -c "import json, sys; data=json.load(sys.stdin); json.dump(data.get('startDate'), sys.stdout)" 2>/dev/null || echo "null")
END_DATE_OBJ=$(echo "$MEDIA_DATA" | python3 -c "import json, sys; data=json.load(sys.stdin); json.dump(data.get('endDate'), sys.stdout)" 2>/dev/null || echo "null")
# Get cover image URL
COVER_IMAGE=$(echo "$MEDIA_DATA" | python3 -c "import json, sys; data=json.load(sys.stdin); cover=data.get('coverImage',{}); print(cover.get('large', ''))" 2>/dev/null || echo "")
fi
# Format the dates
START_DATE=$(format_date "$START_DATE_OBJ")
END_DATE=$(format_date "$END_DATE_OBJ")
# Generate cache hash for this item (using selected item like regular preview)
CACHE_HASH=$(generate_sha256 "$SELECTED_ITEM")
# Try to show image if available
if [ "{PREVIEW_MODE}" = "full" ] || [ "{PREVIEW_MODE}" = "image" ]; then
image_file="{IMAGE_CACHE_PATH}{PATH_SEP}${CACHE_HASH}.png"
# If image not cached and we have a URL, try to download it quickly
if [ ! -f "$image_file" ] && [ -n "$COVER_IMAGE" ]; then
if command -v curl >/dev/null 2>&1; then
# Quick download with timeout
curl -s -m 3 -L "$COVER_IMAGE" -o "$image_file" 2>/dev/null || rm -f "$image_file" 2>/dev/null
fi
fi
if [ -f "$image_file" ]; then
fzf_preview "$image_file"
else
echo "🖼️ Loading image..."
fi
echo
fi
# Display text info if configured
if [ "{PREVIEW_MODE}" = "full" ] || [ "{PREVIEW_MODE}" = "text" ]; then
draw_rule
print_kv "Title" "$TITLE"
draw_rule
print_kv "Score" "$SCORE"
print_kv "Favourites" "$FAVOURITES"
print_kv "Popularity" "$POPULARITY"
print_kv "Status" "$STATUS"
draw_rule
print_kv "Episodes" "$EPISODES"
print_kv "Duration" "$DURATION"
print_kv "Format" "$FORMAT"
draw_rule
print_kv "Genres" "$GENRES"
print_kv "Start Date" "$START_DATE"
print_kv "End Date" "$END_DATE"
draw_rule
# Clean and display description
CLEAN_DESCRIPTION=$(clean_html "$DESCRIPTION")
echo "$CLEAN_DESCRIPTION" | fold -s -w "$WIDTH"
fi
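
The dynamic preview above resolves the fzf selection back to a media entry by comparing the preferred title of every item in the cached search results, mirroring the jq branch's lookup. A minimal Python sketch of that lookup, assuming the cache file holds the raw AniList-style response used by the script:

import json
from pathlib import Path
from typing import Any, Optional

def find_selected_media(results_file: Path, selected_title: str) -> Optional[dict[str, Any]]:
    """Return the media dict whose preferred title matches the fzf selection."""
    data = json.loads(results_file.read_text(encoding="utf-8"))
    for media in (data.get("data") or {}).get("Page", {}).get("media", []) or []:
        title = media.get("title") or {}
        preferred = (
            title.get("english")
            or title.get("romaji")
            or title.get("native")
            or "Unknown"
        )
        if preferred == selected_title:
            return media
    return None

# Hypothetical usage; the file name stands in for SEARCH_RESULTS_FILE.
match = find_selected_media(Path("search_results.json"), "Example Title")
print(match["averageScore"] if match else "no match")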

View File

@@ -1,31 +0,0 @@
#!/bin/sh
#
# Episode Preview Info Script Template
# This script formats and displays episode information in the FZF preview pane.
# Values with '{name}' placeholder syntax are injected by Python using .replace()
draw_rule
echo "{TITLE}" | fold -s -w "$WIDTH"
draw_rule
print_kv "Duration" "{DURATION}"
print_kv "Status" "{STATUS}"
draw_rule
print_kv "Total Episodes" "{EPISODES}"
print_kv "Next Episode" "{NEXT_EPISODE}"
draw_rule
print_kv "Progress" "{USER_PROGRESS}"
print_kv "List Status" "{USER_STATUS}"
draw_rule
print_kv "Start Date" "{START_DATE}"
print_kv "End Date" "{END_DATE}"
draw_rule

View File

@@ -1,54 +0,0 @@
#!/bin/sh
#
# Viu Preview Info Script Template
# This script formats and displays the textual information in the FZF preview pane.
# Values with '{name}' placeholder syntax are injected by Python using .replace()
draw_rule
print_kv "Title" "{TITLE}"
draw_rule
# Emojis take up double the space
score_multiplier=1
if ! [ "{SCORE}" = "N/A" ]; then
score_multiplier=2
fi
print_kv "Score" "{SCORE}" $score_multiplier
print_kv "Favourites" "{FAVOURITES}"
print_kv "Popularity" "{POPULARITY}"
print_kv "Status" "{STATUS}"
draw_rule
print_kv "Episodes" "{EPISODES}"
print_kv "Next Episode" "{NEXT_EPISODE}"
print_kv "Duration" "{DURATION}"
draw_rule
print_kv "Genres" "{GENRES}"
print_kv "Format" "{FORMAT}"
draw_rule
print_kv "List Status" "{USER_STATUS}"
print_kv "Progress" "{USER_PROGRESS}"
draw_rule
print_kv "Start Date" "{START_DATE}"
print_kv "End Date" "{END_DATE}"
draw_rule
print_kv "Studios" "{STUDIOS}"
print_kv "Synonymns" "{SYNONYMNS}"
print_kv "Tags" "{TAGS}"
draw_rule
# Synopsis
echo "{SYNOPSIS}" | fold -s -w "$WIDTH"

View File

@@ -1,147 +0,0 @@
#!/bin/sh
#
# FZF Preview Script Template
#
# This script is a template. The placeholders in curly braces, like {NAME}
# are dynamically filled by python using .replace()
WIDTH=${FZF_PREVIEW_COLUMNS:-80} # Set a fallback width of 80
IMAGE_RENDERER="{IMAGE_RENDERER}"
generate_sha256() {
local input
# Check if input is passed as an argument or piped
if [ -n "$1" ]; then
input="$1"
else
input=$(cat)
fi
if command -v sha256sum &>/dev/null; then
echo -n "$input" | sha256sum | awk '{print $1}'
elif command -v shasum &>/dev/null; then
echo -n "$input" | shasum -a 256 | awk '{print $1}'
elif command -v sha256 &>/dev/null; then
echo -n "$input" | sha256 | awk '{print $1}'
elif command -v openssl &>/dev/null; then
echo -n "$input" | openssl dgst -sha256 | awk '{print $2}'
else
echo -n "$input" | base64 | tr '/+' '_-' | tr -d '\n'
fi
}
fzf_preview() {
file=$1
dim=${FZF_PREVIEW_COLUMNS}x${FZF_PREVIEW_LINES}
if [ "$dim" = x ]; then
dim=$(stty size </dev/tty | awk "{print \$2 \"x\" \$1}")
fi
if ! [ "$IMAGE_RENDERER" = "icat" ] && [ -z "$KITTY_WINDOW_ID" ] && [ "$((FZF_PREVIEW_TOP + FZF_PREVIEW_LINES))" -eq "$(stty size </dev/tty | awk "{print \$1}")" ]; then
dim=${FZF_PREVIEW_COLUMNS}x$((FZF_PREVIEW_LINES - 1))
fi
if [ "$IMAGE_RENDERER" = "icat" ] && [ -z "$GHOSTTY_BIN_DIR" ]; then
if command -v kitten >/dev/null 2>&1; then
kitten icat --clear --transfer-mode=memory --unicode-placeholder{SCALE_UP} --stdin=no --place="$dim@0x0" "$file" | sed "\$d" | sed "$(printf "\$s/\$/\033[m/")"
elif command -v icat >/dev/null 2>&1; then
icat --clear --transfer-mode=memory --unicode-placeholder{SCALE_UP} --stdin=no --place="$dim@0x0" "$file" | sed "\$d" | sed "$(printf "\$s/\$/\033[m/")"
else
kitty icat --clear --transfer-mode=memory --unicode-placeholder{SCALE_UP} --stdin=no --place="$dim@0x0" "$file" | sed "\$d" | sed "$(printf "\$s/\$/\033[m/")"
fi
elif [ -n "$GHOSTTY_BIN_DIR" ]; then
dim=$((FZF_PREVIEW_COLUMNS - 1))x${FZF_PREVIEW_LINES}
if command -v kitten >/dev/null 2>&1; then
kitten icat --clear --transfer-mode=memory --unicode-placeholder{SCALE_UP} --stdin=no --place="$dim@0x0" "$file" | sed "\$d" | sed "$(printf "\$s/\$/\033[m/")"
elif command -v icat >/dev/null 2>&1; then
icat --clear --transfer-mode=memory --unicode-placeholder{SCALE_UP} --stdin=no --place="$dim@0x0" "$file" | sed "\$d" | sed "$(printf "\$s/\$/\033[m/")"
else
chafa -s "$dim" "$file"
fi
elif command -v chafa >/dev/null 2>&1; then
case "$PLATFORM" in
android) chafa -s "$dim" "$file" ;;
windows) chafa -f sixel -s "$dim" "$file" ;;
*) chafa -s "$dim" "$file" ;;
esac
echo
elif command -v imgcat >/dev/null; then
imgcat -W "${dim%%x*}" -H "${dim##*x}" "$file"
else
echo "Please install a terminal image viewer:"
echo "either icat (for kitty and WezTerm), imgcat, or chafa"
fi
}
# --- Helper function for printing a key-value pair, aligning the value to the right ---
print_kv() {
local key="$1"
local value="$2"
local key_len=${#key}
local value_len=${#value}
local multiplier="${3:-1}"
# Correctly calculate padding by accounting for the key, the ": ", and the value.
local padding_len=$((WIDTH - key_len - 2 - value_len * multiplier))
# If the text is too long to fit, just add a single space for separation.
if [ "$padding_len" -lt 1 ]; then
padding_len=1
value=$(echo "$value"| fold -s -w "$((WIDTH - key_len - 3))")
printf "{C_KEY}%s:{RESET}%*s%s\\n" "$key" "$padding_len" "" " $value"
else
printf "{C_KEY}%s:{RESET}%*s%s\\n" "$key" "$padding_len" "" " $value"
fi
}
# --- Draw a rule across the screen ---
# TODO: figure out why this method does not work in fzf
draw_rule() {
local rule
# Generate the line of '─' characters, removing the trailing newline `tr` adds.
rule=$(printf '%*s' "$WIDTH" | tr ' ' '─' | tr -d '\n')
# Print the rule with colors and a single, clean newline.
printf "{C_RULE}%s{RESET}\\n" "$rule"
}
draw_rule(){
ll=2
while [ $ll -le $FZF_PREVIEW_COLUMNS ];do
echo -n -e "{C_RULE}─{RESET}"
((ll++))
done
echo
}
# Generate the same cache key that the Python worker uses
# {PREFIX} is used only on episode previews to make sure they are unique
title={}
hash=$(generate_sha256 "{PREFIX}$title")
#
# --- Display image if configured and the cached file exists ---
#
if [ "{PREVIEW_MODE}" = "full" ] || [ "{PREVIEW_MODE}" = "image" ]; then
image_file="{IMAGE_CACHE_PATH}{PATH_SEP}$hash.png"
if [ -f "$image_file" ]; then
fzf_preview "$image_file"
else
echo "🖼️ Loading image..."
fi
echo # Add a newline for spacing
fi
# Display text info if configured and the cached file exists
if [ "{PREVIEW_MODE}" = "full" ] || [ "{PREVIEW_MODE}" = "text" ]; then
info_file="{INFO_CACHE_PATH}{PATH_SEP}$hash"
if [ -f "$info_file" ]; then
source "$info_file"
else
echo "📝 Loading details..."
fi
fi
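
The preview script recomputes the cache key that the Python worker is expected to use, so both sides must hash exactly the same string. A minimal sketch of the worker side of that contract; the cache directory, prefix, and helper names here are assumptions for illustration:

import hashlib
from pathlib import Path

def cache_key(title: str, prefix: str = "") -> str:
    """SHA-256 hex digest of the (optionally prefixed) title, matching generate_sha256."""
    return hashlib.sha256(f"{prefix}{title}".encode("utf-8")).hexdigest()

def write_info_cache(cache_dir: Path, title: str, rendered_info: str, prefix: str = "") -> Path:
    """Store rendered preview text where the shell template will source it from."""
    cache_dir.mkdir(parents=True, exist_ok=True)
    path = cache_dir / cache_key(title, prefix)
    path.write_text(rendered_info, encoding="utf-8")
    return path

# Illustrative usage only; the real cache location is configured elsewhere.
write_info_cache(Path.home() / ".cache" / "viu" / "info", "Example Title", 'print_kv "Title" "Example Title"')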

View File

@@ -1,19 +0,0 @@
#!/bin/sh
#
# Viu Review Info Script Template
# This script formats and displays review details in the FZF preview pane.
# Python injects the actual data values into the placeholders.
draw_rule
print_kv "Review By" "{REVIEWER_NAME}"
draw_rule
print_kv "Summary" "{REVIEW_SUMMARY}"
draw_rule
echo "{REVIEW_BODY}" | fold -s -w "$WIDTH"
draw_rule

View File

@@ -1,75 +0,0 @@
#!/bin/sh
#
# FZF Preview Script Template
#
# This script is a template. The placeholders in curly braces, like {NAME}
# are dynamically filled by python using .replace()
WIDTH=${FZF_PREVIEW_COLUMNS:-80} # Set a fallback width of 80
IMAGE_RENDERER="{IMAGE_RENDERER}"
generate_sha256() {
local input
# Check if input is passed as an argument or piped
if [ -n "$1" ]; then
input="$1"
else
input=$(cat)
fi
if command -v sha256sum &>/dev/null; then
echo -n "$input" | sha256sum | awk '{print $1}'
elif command -v shasum &>/dev/null; then
echo -n "$input" | shasum -a 256 | awk '{print $1}'
elif command -v sha256 &>/dev/null; then
echo -n "$input" | sha256 | awk '{print $1}'
elif command -v openssl &>/dev/null; then
echo -n "$input" | openssl dgst -sha256 | awk '{print $2}'
else
echo -n "$input" | base64 | tr '/+' '_-' | tr -d '\n'
fi
}
print_kv() {
local key="$1"
local value="$2"
local key_len=${#key}
local value_len=${#value}
local multiplier="${3:-1}"
# Correctly calculate padding by accounting for the key, the ": ", and the value.
local padding_len=$((WIDTH - key_len - 2 - value_len * multiplier))
# If the text is too long to fit, just add a single space for separation.
if [ "$padding_len" -lt 1 ]; then
padding_len=1
value=$(echo $value| fold -s -w "$((WIDTH - key_len - 3))")
printf "{C_KEY}%s:{RESET}%*s%s\\n" "$key" "$padding_len" "" " $value"
else
printf "{C_KEY}%s:{RESET}%*s%s\\n" "$key" "$padding_len" "" " $value"
fi
}
draw_rule(){
ll=2
while [ $ll -le $FZF_PREVIEW_COLUMNS ];do
echo -n -e "{C_RULE}─{RESET}"
((ll++))
done
echo
}
title={}
hash=$(generate_sha256 "$title")
if [ "{PREVIEW_MODE}" = "full" ] || [ "{PREVIEW_MODE}" = "text" ]; then
info_file="{INFO_CACHE_DIR}{PATH_SEP}$hash"
if [ -f "$info_file" ]; then
source "$info_file"
else
echo "📝 Loading details..."
fi
fi

View File

@@ -1,118 +0,0 @@
#!/bin/bash
#
# FZF Dynamic Search Script Template
#
# This script is a template for dynamic search functionality in fzf.
# The placeholders in curly braces, like {QUERY} are dynamically filled by Python using .replace()
# Configuration variables (injected by Python)
GRAPHQL_ENDPOINT="{GRAPHQL_ENDPOINT}"
CACHE_DIR="{CACHE_DIR}"
SEARCH_RESULTS_FILE="{SEARCH_RESULTS_FILE}"
AUTH_HEADER="{AUTH_HEADER}"
# Get the current query from fzf
QUERY="{{q}}"
# If query is empty, exit with empty results
if [ -z "$QUERY" ]; then
echo ""
exit 0
fi
# Create GraphQL variables
VARIABLES=$(cat <<EOF
{
"query": "$QUERY",
"type": "ANIME",
"per_page": 50,
"genre_not_in": ["Hentai"]
}
EOF
)
# The GraphQL query is injected here as a properly escaped string
GRAPHQL_QUERY='{GRAPHQL_QUERY}'
# Create the GraphQL request payload
PAYLOAD=$(cat <<EOF
{
"query": $GRAPHQL_QUERY,
"variables": $VARIABLES
}
EOF
)
# Make the GraphQL request and save raw results
if [ -n "$AUTH_HEADER" ]; then
RESPONSE=$(curl -s -X POST \
-H "Content-Type: application/json" \
-H "Authorization: $AUTH_HEADER" \
-d "$PAYLOAD" \
"$GRAPHQL_ENDPOINT")
else
RESPONSE=$(curl -s -X POST \
-H "Content-Type: application/json" \
-d "$PAYLOAD" \
"$GRAPHQL_ENDPOINT")
fi
# Check if the request was successful
if [ $? -ne 0 ] || [ -z "$RESPONSE" ]; then
echo "❌ Search failed"
exit 1
fi
# Save the raw response for later processing
echo "$RESPONSE" > "$SEARCH_RESULTS_FILE"
# Parse and display results
if command -v jq >/dev/null 2>&1; then
# Use jq for faster and more reliable JSON parsing
echo "$RESPONSE" | jq -r '
if .errors then
"❌ Search error: " + (.errors | tostring)
elif (.data.Page.media // []) | length == 0 then
"❌ No results found"
else
.data.Page.media[] | (.title.english // .title.romaji // .title.native // "Unknown")
end
' 2>/dev/null || echo "❌ Parse error"
else
# Fallback to Python for JSON parsing
echo "$RESPONSE" | python3 -c "
import json
import sys
try:
data = json.load(sys.stdin)
if 'errors' in data:
print('❌ Search error: ' + str(data['errors']))
sys.exit(1)
if 'data' not in data or 'Page' not in data['data'] or 'media' not in data['data']['Page']:
print('❌ No results found')
sys.exit(0)
media_list = data['data']['Page']['media']
if not media_list:
print('❌ No results found')
sys.exit(0)
for media in media_list:
title = media.get('title', {})
english_title = title.get('english') or title.get('romaji') or title.get('native', 'Unknown')
year = media.get('startDate', {}).get('year', 'Unknown') if media.get('startDate') else 'Unknown'
status = media.get('status', 'Unknown')
genres = ', '.join(media.get('genres', [])[:3]) or 'Unknown'
# Format: Title (Year) [Status] - Genres
print(f'{english_title} ({year}) [{status}] - {genres}')
except Exception as e:
print(f'❌ Parse error: {str(e)}')
sys.exit(1)
"
fi
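
For reference, the request this search script issues with curl can also be expressed in Python with httpx (already a core dependency); the endpoint, GraphQL query string, and result-file path are treated as injected values, exactly as in the template. A minimal sketch, not the project's actual search code:

import json
import httpx

def dynamic_search(endpoint: str, graphql_query: str, user_query: str,
                   auth_header: str = "",
                   results_file: str = "search_results.json") -> list[str]:
    """POST the search, cache the raw response, and return display titles."""
    headers = {"Content-Type": "application/json"}
    if auth_header:
        headers["Authorization"] = auth_header
    variables = {
        "query": user_query,
        "type": "ANIME",
        "per_page": 50,
        "genre_not_in": ["Hentai"],
    }
    response = httpx.post(
        endpoint,
        json={"query": graphql_query, "variables": variables},
        headers=headers,
        timeout=10,
    )
    data = response.json()
    with open(results_file, "w", encoding="utf-8") as f:
        json.dump(data, f)  # mirrors saving $RESPONSE for the preview script
    titles = []
    for media in (data.get("data") or {}).get("Page", {}).get("media", []) or []:
        title = media.get("title") or {}
        titles.append(title.get("english") or title.get("romaji") or title.get("native") or "Unknown")
    return titles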

View File

@@ -1,3 +0,0 @@
from .cli import cli as run_cli
__all__ = ["run_cli"]

View File

@@ -1,114 +0,0 @@
import logging
import sys
from typing import TYPE_CHECKING
import click
from click.core import ParameterSource
from ..core.config import AppConfig
from ..core.constants import PROJECT_NAME, USER_CONFIG, __version__
from .config import ConfigLoader
from .options import options_from_model
from .utils.exception import setup_exceptions_handler
from .utils.lazyloader import LazyGroup
from .utils.logging import setup_logging
if TYPE_CHECKING:
from typing import TypedDict
from typing_extensions import Unpack
class Options(TypedDict):
no_config: bool | None
trace: bool | None
dev: bool | None
log: bool | None
rich_traceback: bool | None
rich_traceback_theme: str
logger = logging.getLogger(__name__)
commands = {
"config": "config.config",
"search": "search.search",
"anilist": "anilist.anilist",
"download": "download.download",
"update": "update.update",
"registry": "registry.registry",
"worker": "worker.worker",
"queue": "queue.queue",
"completions": "completions.completions",
}
@click.group(
cls=LazyGroup,
root="viu_cli.cli.commands",
invoke_without_command=True,
lazy_subcommands=commands,
context_settings=dict(auto_envvar_prefix=PROJECT_NAME),
)
@click.version_option(__version__, "--version")
@click.option("--no-config", is_flag=True, help="Don't load the user config file.")
@click.option(
"--trace", is_flag=True, help="Controls Whether to display tracebacks or not"
)
@click.option("--dev", is_flag=True, help="Controls Whether the app is in dev mode")
@click.option("--log", is_flag=True, help="Controls Whether to log")
@click.option(
"--rich-traceback",
is_flag=True,
help="Controls Whether to display a rich traceback",
)
@click.option(
"--rich-traceback-theme",
default="github-dark",
help="Controls Whether to display a rich traceback",
)
@options_from_model(AppConfig)
@click.pass_context
def cli(ctx: click.Context, **options: "Unpack[Options]"):
"""
The main entry point for the Viu CLI.
"""
setup_logging(options["log"])
setup_exceptions_handler(
options["trace"],
options["dev"],
options["rich_traceback"],
options["rich_traceback_theme"],
)
logger.info(f"Current Command: {' '.join(sys.argv)}")
cli_overrides = {}
param_lookup = {p.name: p for p in ctx.command.params}
for param_name, param_value in ctx.params.items():
source = ctx.get_parameter_source(param_name)
if source in (ParameterSource.ENVIRONMENT, ParameterSource.COMMANDLINE):
parameter = param_lookup.get(param_name)
if (
parameter
and hasattr(parameter, "model_name")
and hasattr(parameter, "field_name")
):
model_name = getattr(parameter, "model_name")
field_name = getattr(parameter, "field_name")
if model_name not in cli_overrides:
cli_overrides[model_name] = {}
cli_overrides[model_name][field_name] = param_value
loader = ConfigLoader(config_path=USER_CONFIG)
config = (
AppConfig.model_validate(cli_overrides)
if options["no_config"]
else loader.load(cli_overrides)
)
ctx.obj = config
if ctx.invoked_subcommand is None:
from .commands.anilist import cmd
ctx.invoke(cmd.anilist)
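
The override-collection loop above only treats a value as a configuration override when Click reports that it came from the command line or the environment, so defaults and config-file values never shadow each other. A minimal standalone sketch of that check; the option names here are hypothetical and unrelated to the real CLI:

import click
from click.core import ParameterSource

@click.command(context_settings=dict(auto_envvar_prefix="DEMO"))
@click.option("--quality", default="1080")
@click.option("--provider", default="allanime")
@click.pass_context
def demo(ctx: click.Context, quality: str, provider: str) -> None:
    overrides = {}
    for name, value in ctx.params.items():
        # Only values the user explicitly supplied become overrides.
        if ctx.get_parameter_source(name) in (
            ParameterSource.COMMANDLINE,
            ParameterSource.ENVIRONMENT,
        ):
            overrides[name] = value
    click.echo(f"overrides={overrides}")

if __name__ == "__main__":
    demo()

Invoking this with --quality 720 would report only {'quality': '720'}; the untouched --provider default stays out of the override set.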

View File

@@ -1,160 +0,0 @@
"""Update command for Viu CLI."""
import sys
from typing import TYPE_CHECKING
import click
from rich import print
from rich.console import Console
from rich.markdown import Markdown
from ..utils.update import check_for_updates, update_app
if TYPE_CHECKING:
from ...core.config import AppConfig
@click.command(
help="Update Viu to the latest version",
short_help="Update Viu",
epilog="""
\b
\b\bExamples:
# Check for updates and update if available
viu update
\b
# Force update even if already up to date
viu update --force
\b
# Only check for updates without updating
viu update --check-only
\b
# Show release notes for the latest version
viu update --release-notes
""",
)
@click.option(
"--force",
"-f",
is_flag=True,
help="Force update even if already up to date",
)
@click.option(
"--check-only",
"-c",
is_flag=True,
help="Only check for updates without updating",
)
@click.option(
"--release-notes",
"-r",
is_flag=True,
help="Show release notes for the latest version",
)
@click.pass_context
@click.pass_obj
def update(
config: "AppConfig",
ctx: click.Context,
force: bool,
check_only: bool,
release_notes: bool,
) -> None:
"""
Update Viu to the latest version.
This command checks for available updates and optionally updates
the application to the latest version from the configured sources
(pip, uv, pipx, git, or nix depending on installation method).
Args:
config: The application configuration object
ctx: The click context containing CLI options
force: Whether to force update even if already up to date
check_only: Whether to only check for updates without updating
release_notes: Whether to show release notes for the latest version
"""
try:
if release_notes:
print("[cyan]Fetching latest release notes...[/]")
is_latest, release_json = check_for_updates()
if not release_json:
print(
"[yellow]Could not fetch release information. Please check your internet connection.[/]"
)
sys.exit(1)
version = release_json.get("tag_name", "unknown")
release_name = release_json.get("name", version)
release_body = release_json.get("body", "No release notes available.")
published_at = release_json.get("published_at", "unknown")
console = Console()
print(f"[bold cyan]Release: {release_name}[/]")
print(f"[dim]Version: {version}[/]")
print(f"[dim]Published: {published_at}[/]")
print()
# Display release notes as markdown if available
if release_body.strip():
markdown = Markdown(release_body)
console.print(markdown)
else:
print("[dim]No release notes available for this version.[/]")
return
elif check_only:
print("[cyan]Checking for updates...[/]")
is_latest, release_json = check_for_updates()
if not release_json:
print(
"[yellow]Could not check for updates. Please check your internet connection.[/]"
)
sys.exit(1)
if is_latest:
print("[green]Viu is up to date![/]")
print(
f"[dim]Current version: {release_json.get('tag_name', 'unknown')}[/]"
)
else:
latest_version = release_json.get("tag_name", "unknown")
print(f"[yellow]Update available: {latest_version}[/]")
print("[dim]Run 'viu update' to update[/]")
sys.exit(1)
else:
print("[cyan]Checking for updates and updating if necessary...[/]")
success, release_json = update_app(force=force)
if not release_json:
print(
"[red]Could not check for updates. Please check your internet connection.[/]"
)
sys.exit(1)
if success:
latest_version = release_json.get("tag_name", "unknown")
print(f"[green]Successfully updated to version {latest_version}![/]")
else:
if force:
print(
"[red]Update failed. Please check the error messages above.[/]"
)
sys.exit(1)
# If not forced and update failed, it might be because already up to date
# The update_app function already prints appropriate messages
except KeyboardInterrupt:
print("\n[yellow]Update cancelled by user.[/]")
sys.exit(1)
except Exception as e:
print(f"[red]An error occurred during update: {e}[/]")
# Get trace option from parent context
trace = ctx.parent.params.get("trace", False) if ctx.parent else False
if trace:
raise
sys.exit(1)
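
Both helpers used by this command return a tuple whose second element is the fetched release JSON, or None when the network call fails. A minimal sketch of consuming that contract outside the command; the import path is an assumption based on the surrounding rename and should be treated as illustrative:

# Assumed import path after the viu_cli -> viu_media rename; illustrative only.
from viu_media.cli.utils.update import check_for_updates

def notify_if_outdated() -> None:
    """Print a one-line status based on the (is_latest, release_json) contract."""
    is_latest, release = check_for_updates()
    if release is None:
        print("Could not reach the release API.")
    elif is_latest:
        print(f"Up to date ({release.get('tag_name', 'unknown')}).")
    else:
        print(f"Update available: {release.get('tag_name', 'unknown')}. Run 'viu update'.")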

View File

@@ -1,4 +0,0 @@
from .generate import generate_config_ini_from_app_model
from .loader import ConfigLoader
__all__ = ["ConfigLoader", "generate_config_ini_from_app_model"]

View File

@@ -1,85 +0,0 @@
from enum import Enum
from typing import Dict, Optional, Union
from pydantic import BaseModel, ConfigDict, Field
from ...libs.media_api.params import MediaSearchParams, UserMediaListSearchParams
from ...libs.media_api.types import MediaItem, PageInfo
from ...libs.provider.anime.types import Anime, SearchResults, Server
# TODO: is internal directive a good name
class InternalDirective(Enum):
MAIN = "MAIN"
BACK = "BACK"
BACKX2 = "BACKX2"
BACKX3 = "BACKX3"
BACKX4 = "BACKX4"
EXIT = "EXIT"
CONFIG_EDIT = "CONFIG_EDIT"
RELOAD = "RELOAD"
class MenuName(Enum):
MAIN = "MAIN"
AUTH = "AUTH"
EPISODES = "EPISODES"
RESULTS = "RESULTS"
SERVERS = "SERVERS"
WATCH_HISTORY = "WATCH_HISTORY"
PROVIDER_SEARCH = "PROVIDER_SEARCH"
PLAYER_CONTROLS = "PLAYER_CONTROLS"
USER_MEDIA_LIST = "USER_MEDIA_LIST"
SESSION_MANAGEMENT = "SESSION_MANAGEMENT"
MEDIA_ACTIONS = "MEDIA_ACTIONS"
DOWNLOADS = "DOWNLOADS"
DYNAMIC_SEARCH = "DYNAMIC_SEARCH"
MEDIA_REVIEW = "MEDIA_REVIEW"
MEDIA_CHARACTERS = "MEDIA_CHARACTERS"
MEDIA_AIRING_SCHEDULE = "MEDIA_AIRING_SCHEDULE"
PLAY_DOWNLOADS = "PLAY_DOWNLOADS"
DOWNLOADS_PLAYER_CONTROLS = "DOWNLOADS_PLAYER_CONTROLS"
DOWNLOAD_EPISODES = "DOWNLOAD_EPISODES"
class StateModel(BaseModel):
model_config = ConfigDict(frozen=True)
class MediaApiState(StateModel):
search_result: Optional[Dict[int, MediaItem]] = None
search_params: Optional[Union[MediaSearchParams, UserMediaListSearchParams]] = None
page_info: Optional[PageInfo] = None
media_id: Optional[int] = None
@property
def media_item(self) -> Optional[MediaItem]:
if self.search_result and self.media_id:
return self.search_result[self.media_id]
class ProviderState(StateModel):
search_results: Optional[SearchResults] = None
anime: Optional[Anime] = None
episode: Optional[str] = None
servers: Optional[Dict[str, Server]] = None
server_name: Optional[str] = None
start_time: Optional[str] = None
@property
def server(self) -> Optional[Server]:
if self.servers and self.server_name:
return self.servers[self.server_name]
class State(StateModel):
menu_name: MenuName
provider: ProviderState = Field(default_factory=ProviderState)
media_api: MediaApiState = Field(default_factory=MediaApiState)
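
Because every state model above is frozen, the interactive session advances by deriving new State values rather than mutating existing ones. A minimal usage sketch; the state module's import path is a guess and labelled as such, while the media types path matches the test imports earlier in this diff:

from viu_media.cli.interactive.state import (  # assumed path, for illustration only
    MediaApiState,
    MenuName,
    State,
)
from viu_media.libs.media_api.types import MediaItem, MediaTitle

item = MediaItem(id=1, title=MediaTitle(english="Example"))
results = State(
    menu_name=MenuName.RESULTS,
    media_api=MediaApiState(search_result={item.id: item}, media_id=item.id),
)
print(results.media_api.media_item.title.english)  # -> "Example"

# Frozen models: derive the next state with model_copy instead of mutating in place.
episodes = results.model_copy(update={"menu_name": MenuName.EPISODES})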

View File

@@ -1,58 +0,0 @@
import os
import re
import shutil
import sys
def is_running_in_termux():
# Check environment variables
if os.environ.get("TERMUX_VERSION") is not None:
return True
# Check Python installation path
if sys.prefix.startswith("/data/data/com.termux/files/usr"):
return True
# Check for Termux-specific binary
if os.path.exists("/data/data/com.termux/files/usr/bin/termux-info"):
return True
return False
def is_bash_script(text: str) -> bool:
# Normalize line endings
text = text.strip()
# Check for shebang at the top
if text.startswith("#!/bin/bash") or text.startswith("#!/usr/bin/env bash"):
return True
# Look for common bash syntax/keywords
bash_keywords = [
r"\becho\b",
r"\bfi\b",
r"\bthen\b",
r"\bfunction\b",
r"\bfor\b",
r"\bwhile\b",
r"\bdone\b",
r"\bcase\b",
r"\besac\b",
r"\$\(",
r"\[\[",
r"\]\]",
r";;",
]
# Score based on matches
matches = sum(bool(re.search(pattern, text)) for pattern in bash_keywords)
return matches >= 2
def is_running_kitty_terminal() -> bool:
return True if os.environ.get("KITTY_WINDOW_ID") else False
def has_fzf() -> bool:
return True if shutil.which("fzf") else False

View File

@@ -1,22 +0,0 @@
from httpx import get
ANISKIP_ENDPOINT = "https://api.aniskip.com/v1/skip-times"
# TODO: Finish own implementation of aniskip script
class AniSkip:
@classmethod
def get_skip_times(
cls, mal_id: int, episode_number: float | int, types=("op", "ed")
):
# Build the query string from the requested skip types instead of hardcoding them
query = "&".join(f"types={t}" for t in types)
url = f"{ANISKIP_ENDPOINT}/{mal_id}/{episode_number}?{query}"
response = get(url)
print(response.text)
return response.json()
if __name__ == "__main__":
mal_id = input("Mal id: ")
episode_number = input("episode_number: ")
skip_times = AniSkip.get_skip_times(int(mal_id), float(episode_number))
print(skip_times)

View File

@@ -1,3 +0,0 @@
from .api import connect
__all__ = ["connect"]

View File

@@ -1,13 +0,0 @@
import time
from pypresence import Presence
def connect(show, episode, switch):
presence = Presence(client_id="1292070065583165512")
presence.connect()
if not switch.is_set():
presence.update(details=show, state="Watching episode " + episode)
time.sleep(10)
else:
presence.close()

View File

@@ -1,873 +0,0 @@
from __future__ import annotations
from datetime import datetime
from enum import Enum
from typing import Dict, List, Optional
from pydantic import BaseModel, ConfigDict, Field
# ENUMS
class MediaStatus(Enum):
FINISHED = "FINISHED"
RELEASING = "RELEASING"
NOT_YET_RELEASED = "NOT_YET_RELEASED"
CANCELLED = "CANCELLED"
HIATUS = "HIATUS"
class MediaType(Enum):
ANIME = "ANIME"
MANGA = "MANGA"
class UserMediaListStatus(Enum):
PLANNING = "planning"
WATCHING = "watching"
COMPLETED = "completed"
DROPPED = "dropped"
PAUSED = "paused"
REPEATING = "repeating"
class MediaGenre(Enum):
ACTION = "Action"
ADVENTURE = "Adventure"
COMEDY = "Comedy"
DRAMA = "Drama"
ECCHI = "Ecchi"
FANTASY = "Fantasy"
HORROR = "Horror"
MAHOU_SHOUJO = "Mahou Shoujo"
MECHA = "Mecha"
MUSIC = "Music"
MYSTERY = "Mystery"
PSYCHOLOGICAL = "Psychological"
ROMANCE = "Romance"
SCI_FI = "Sci-Fi"
SLICE_OF_LIFE = "Slice of Life"
SPORTS = "Sports"
SUPERNATURAL = "Supernatural"
THRILLER = "Thriller"
HENTAI = "Hentai"
class MediaFormat(Enum):
TV = "TV"
TV_SHORT = "TV_SHORT"
MOVIE = "MOVIE"
MANGA = "MANGA"
SPECIAL = "SPECIAL"
OVA = "OVA"
ONA = "ONA"
MUSIC = "MUSIC"
NOVEL = "NOVEL"
ONE_SHOT = "ONE_SHOT"
class NotificationType(Enum):
AIRING = "AIRING"
RELATED_MEDIA_ADDITION = "RELATED_MEDIA_ADDITION"
MEDIA_DATA_CHANGE = "MEDIA_DATA_CHANGE"
# ... add other types as needed
# MODELS
class BaseMediaApiModel(BaseModel):
model_config = ConfigDict(frozen=True)
class MediaImage(BaseMediaApiModel):
"""A generic representation of media imagery URLs."""
large: str
medium: Optional[str] = None
extra_large: Optional[str] = None
class MediaTitle(BaseMediaApiModel):
"""A generic representation of media titles."""
english: str
romaji: Optional[str] = None
native: Optional[str] = None
class MediaTrailer(BaseMediaApiModel):
"""A generic representation of a media trailer."""
id: str
site: str # e.g., "youtube"
thumbnail_url: Optional[str] = None
class AiringSchedule(BaseMediaApiModel):
"""A generic representation of the next airing episode."""
episode: int
airing_at: Optional[datetime] = None
class CharacterName(BaseMediaApiModel):
"""A generic representation of a character's name."""
first: Optional[str] = None
middle: Optional[str] = None
last: Optional[str] = None
full: Optional[str] = None
native: Optional[str] = None
class CharacterImage(BaseMediaApiModel):
"""A generic representation of a character's image."""
medium: Optional[str] = None
large: Optional[str] = None
class Character(BaseMediaApiModel):
"""A generic representation of an anime character."""
id: Optional[int] = None
name: CharacterName
image: Optional[CharacterImage] = None
description: Optional[str] = None
gender: Optional[str] = None
age: Optional[str] = None
blood_type: Optional[str] = None
favourites: Optional[int] = None
date_of_birth: Optional[datetime] = None
class AiringScheduleItem(BaseMediaApiModel):
"""A generic representation of an airing schedule item."""
episode: int
airing_at: Optional[datetime] = None
time_until_airing: Optional[int] = None # In seconds
class CharacterSearchResult(BaseMediaApiModel):
"""A generic representation of character search results."""
characters: List[Character] = Field(default_factory=list)
page_info: Optional[PageInfo] = None
class AiringScheduleResult(BaseMediaApiModel):
"""A generic representation of airing schedule results."""
schedule_items: List[AiringScheduleItem] = Field(default_factory=list)
page_info: Optional[PageInfo] = None
class Studio(BaseMediaApiModel):
"""A generic representation of an animation studio."""
id: Optional[int] = None
name: Optional[str] = None
favourites: Optional[int] = None
is_animation_studio: Optional[bool] = None
class MediaTagItem(BaseMediaApiModel):
"""A generic representation of a descriptive tag."""
name: MediaTag
rank: Optional[int] = None # Percentage relevance from 0-100
class StreamingEpisode(BaseMediaApiModel):
"""A generic representation of a streaming episode."""
title: str
thumbnail: Optional[str] = None
class UserListItem(BaseMediaApiModel):
"""Generic representation of a user's list status for a media item."""
id: Optional[int] = None
status: Optional[UserMediaListStatus] = None
progress: Optional[int] = None
score: Optional[float] = None
repeat: Optional[int] = None
notes: Optional[str] = None
start_date: Optional[datetime] = None
completed_at: Optional[datetime] = None
created_at: Optional[str] = None
class MediaItem(BaseMediaApiModel):
id: int
title: MediaTitle
id_mal: Optional[int] = None
type: MediaType = MediaType.ANIME
status: MediaStatus = MediaStatus.FINISHED
format: Optional[MediaFormat] = MediaFormat.TV
cover_image: Optional[MediaImage] = None
banner_image: Optional[str] = None
trailer: Optional[MediaTrailer] = None
description: Optional[str] = None
episodes: Optional[int] = None
duration: Optional[int] = None # In minutes
genres: List[MediaGenre] = Field(default_factory=list)
tags: List[MediaTagItem] = Field(default_factory=list)
studios: List[Studio] = Field(default_factory=list)
synonymns: List[str] = Field(default_factory=list)
average_score: Optional[float] = None
popularity: Optional[int] = None
favourites: Optional[int] = None
start_date: Optional[datetime] = None
end_date: Optional[datetime] = None
next_airing: Optional[AiringSchedule] = None
# streaming episodes
streaming_episodes: Dict[str, StreamingEpisode] = Field(default_factory=dict)
# user related
user_status: Optional[UserListItem] = None
class Notification(BaseMediaApiModel):
"""A generic representation of a user notification."""
id: int
type: NotificationType
episode: Optional[int] = None
contexts: List[str] = Field(default_factory=list)
created_at: datetime
media: MediaItem
class PageInfo(BaseMediaApiModel):
"""Generic pagination information."""
total: int = 1
current_page: int = 1
has_next_page: bool = False
per_page: int = 15
class MediaSearchResult(BaseMediaApiModel):
"""A generic representation of a page of media search results."""
page_info: PageInfo
media: List[MediaItem] = Field(default_factory=list)
class UserProfile(BaseMediaApiModel):
"""A generic representation of a user's profile."""
id: int
name: str
avatar_url: Optional[str] = None
banner_url: Optional[str] = None
class Reviewer(BaseMediaApiModel):
"""A generic representation of a user who wrote a review."""
name: str
avatar_url: Optional[str] = None
class MediaReview(BaseMediaApiModel):
"""A generic representation of a media review."""
summary: Optional[str] = None
body: str
user: Reviewer
# ENUMS
class MediaTag(Enum):
# Cast
POLYAMOROUS = "Polyamorous"
# Cast Main Cast
ANTI_HERO = "Anti-Hero"
ELDERLY_PROTAGONIST = "Elderly Protagonist"
ENSEMBLE_CAST = "Ensemble Cast"
ESTRANGED_FAMILY = "Estranged Family"
FEMALE_PROTAGONIST = "Female Protagonist"
MALE_PROTAGONIST = "Male Protagonist"
PRIMARILY_ADULT_CAST = "Primarily Adult Cast"
PRIMARILY_ANIMAL_CAST = "Primarily Animal Cast"
PRIMARILY_CHILD_CAST = "Primarily Child Cast"
PRIMARILY_FEMALE_CAST = "Primarily Female Cast"
PRIMARILY_MALE_CAST = "Primarily Male Cast"
PRIMARILY_TEEN_CAST = "Primarily Teen Cast"
# Cast Traits
AGE_REGRESSION = "Age Regression"
AGENDER = "Agender"
ALIENS = "Aliens"
AMNESIA = "Amnesia"
ANGELS = "Angels"
ANTHROPOMORPHISM = "Anthropomorphism"
AROMANTIC = "Aromantic"
ARRANGED_MARRIAGE = "Arranged Marriage"
ARTIFICIAL_INTELLIGENCE = "Artificial Intelligence"
ASEXUAL = "Asexual"
BISEXUAL = "Bisexual"
BUTLER = "Butler"
CENTAUR = "Centaur"
CHIMERA = "Chimera"
CHUUNIBYOU = "Chuunibyou"
CLONE = "Clone"
COSPLAY = "Cosplay"
COWBOYS = "Cowboys"
CROSSDRESSING = "Crossdressing"
CYBORG = "Cyborg"
DELINQUENTS = "Delinquents"
DEMONS = "Demons"
DETECTIVE = "Detective"
DINOSAURS = "Dinosaurs"
DISABILITY = "Disability"
DISSOCIATIVE_IDENTITIES = "Dissociative Identities"
DRAGONS = "Dragons"
DULLAHAN = "Dullahan"
ELF = "Elf"
FAIRY = "Fairy"
FEMBOY = "Femboy"
GHOST = "Ghost"
GOBLIN = "Goblin"
GODS = "Gods"
GYARU = "Gyaru"
HIKIKOMORI = "Hikikomori"
HOMELESS = "Homeless"
IDOL = "Idol"
KEMONOMIMI = "Kemonomimi"
KUUDERE = "Kuudere"
MAIDS = "Maids"
MERMAID = "Mermaid"
MONSTER_BOY = "Monster Boy"
MONSTER_GIRL = "Monster Girl"
NEKOMIMI = "Nekomimi"
NINJA = "Ninja"
NUDITY = "Nudity"
NUN = "Nun"
OFFICE_LADY = "Office Lady"
OIRAN = "Oiran"
OJOU_SAMA = "Ojou-sama"
ORPHAN = "Orphan"
PIRATES = "Pirates"
ROBOTS = "Robots"
SAMURAI = "Samurai"
SHRINE_MAIDEN = "Shrine Maiden"
SKELETON = "Skeleton"
SUCCUBUS = "Succubus"
TANNED_SKIN = "Tanned Skin"
TEACHER = "Teacher"
TOMBOY = "Tomboy"
TRANSGENDER = "Transgender"
TSUNDERE = "Tsundere"
TWINS = "Twins"
VAMPIRE = "Vampire"
VETERINARIAN = "Veterinarian"
VIKINGS = "Vikings"
VILLAINESS = "Villainess"
VTUBER = "VTuber"
WEREWOLF = "Werewolf"
WITCH = "Witch"
YANDERE = "Yandere"
YOUKAI = "Youkai"
ZOMBIE = "Zombie"
# Demographic
JOSEI = "Josei"
KIDS = "Kids"
SEINEN = "Seinen"
SHOUJO = "Shoujo"
SHOUNEN = "Shounen"
# Setting
MATRIARCHY = "Matriarchy"
# Setting Scene
BAR = "Bar"
BOARDING_SCHOOL = "Boarding School"
CAMPING = "Camping"
CIRCUS = "Circus"
COASTAL = "Coastal"
COLLEGE = "College"
DESERT = "Desert"
DUNGEON = "Dungeon"
FOREIGN = "Foreign"
INN = "Inn"
KONBINI = "Konbini"
NATURAL_DISASTER = "Natural Disaster"
OFFICE = "Office"
OUTDOOR_ACTIVITIES = "Outdoor Activities"
PRISON = "Prison"
RESTAURANT = "Restaurant"
RURAL = "Rural"
SCHOOL = "School"
SCHOOL_CLUB = "School Club"
SNOWSCAPE = "Snowscape"
URBAN = "Urban"
WILDERNESS = "Wilderness"
WORK = "Work"
# Setting Time
ACHRONOLOGICAL_ORDER = "Achronological Order"
ANACHRONISM = "Anachronism"
ANCIENT_CHINA = "Ancient China"
DYSTOPIAN = "Dystopian"
HISTORICAL = "Historical"
MEDIEVAL = "Medieval"
TIME_SKIP = "Time Skip"
# Setting Universe
AFTERLIFE = "Afterlife"
ALTERNATE_UNIVERSE = "Alternate Universe"
AUGMENTED_REALITY = "Augmented Reality"
OMEGAVERSE = "Omegaverse"
POST_APOCALYPTIC = "Post-Apocalyptic"
SPACE = "Space"
URBAN_FANTASY = "Urban Fantasy"
VIRTUAL_WORLD = "Virtual World"
# Sexual Content
AHEGAO = "Ahegao"
AMPUTATION = "Amputation"
ANAL_SEX = "Anal Sex"
ARMPITS = "Armpits"
ASHIKOKI = "Ashikoki"
ASPHYXIATION = "Asphyxiation"
BONDAGE = "Bondage"
BOOBJOB = "Boobjob"
CERVIX_PENETRATION = "Cervix Penetration"
CHEATING = "Cheating"
CUMFLATION = "Cumflation"
CUNNILINGUS = "Cunnilingus"
DEEPTHROAT = "Deepthroat"
DEFLORATION = "Defloration"
DILF = "DILF"
DOUBLE_PENETRATION = "Double Penetration"
EROTIC_PIERCINGS = "Erotic Piercings"
EXHIBITIONISM = "Exhibitionism"
FACIAL = "Facial"
FEET = "Feet"
FELLATIO = "Fellatio"
FEMDOM = "Femdom"
FISTING = "Fisting"
FLAT_CHEST = "Flat Chest"
FUTANARI = "Futanari"
GROUP_SEX = "Group Sex"
HAIR_PULLING = "Hair Pulling"
HANDJOB = "Handjob"
HUMAN_PET = "Human Pet"
HYPERSEXUALITY = "Hypersexuality"
INCEST = "Incest"
INSEKI = "Inseki"
IRRUMATIO = "Irrumatio"
LACTATION = "Lactation"
LARGE_BREASTS = "Large Breasts"
MALE_PREGNANCY = "Male Pregnancy"
MASOCHISM = "Masochism"
MASTURBATION = "Masturbation"
MATING_PRESS = "Mating Press"
MILF = "MILF"
NAKADASHI = "Nakadashi"
NETORARE = "Netorare"
NETORASE = "Netorase"
NETORI = "Netori"
PET_PLAY = "Pet Play"
PROSTITUTION = "Prostitution"
PUBLIC_SEX = "Public Sex"
RAPE = "Rape"
RIMJOB = "Rimjob"
SADISM = "Sadism"
SCAT = "Scat"
SCISSORING = "Scissoring"
SEX_TOYS = "Sex Toys"
SHIMAIDON = "Shimaidon"
SQUIRTING = "Squirting"
SUMATA = "Sumata"
SWEAT = "Sweat"
TENTACLES = "Tentacles"
THREESOME = "Threesome"
VIRGINITY = "Virginity"
VORE = "Vore"
VOYEUR = "Voyeur"
WATERSPORTS = "Watersports"
ZOOPHILIA = "Zoophilia"
# Technical
_4_KOMA = "4-koma"
ACHROMATIC = "Achromatic"
ADVERTISEMENT = "Advertisement"
ANTHOLOGY = "Anthology"
CGI = "CGI"
EPISODIC = "Episodic"
FLASH = "Flash"
FULL_CGI = "Full CGI"
FULL_COLOR = "Full Color"
LONG_STRIP = "Long Strip"
MIXED_MEDIA = "Mixed Media"
NO_DIALOGUE = "No Dialogue"
NON_FICTION = "Non-fiction"
POV = "POV"
PUPPETRY = "Puppetry"
ROTOSCOPING = "Rotoscoping"
STOP_MOTION = "Stop Motion"
VERTICAL_VIDEO = "Vertical Video"
# Theme Action
ARCHERY = "Archery"
BATTLE_ROYALE = "Battle Royale"
ESPIONAGE = "Espionage"
FUGITIVE = "Fugitive"
GUNS = "Guns"
MARTIAL_ARTS = "Martial Arts"
SPEARPLAY = "Spearplay"
SWORDPLAY = "Swordplay"
# Theme Arts
ACTING = "Acting"
CALLIGRAPHY = "Calligraphy"
CLASSIC_LITERATURE = "Classic Literature"
DRAWING = "Drawing"
FASHION = "Fashion"
FOOD = "Food"
MAKEUP = "Makeup"
PHOTOGRAPHY = "Photography"
RAKUGO = "Rakugo"
WRITING = "Writing"
# Theme Arts-Music
BAND = "Band"
CLASSICAL_MUSIC = "Classical Music"
DANCING = "Dancing"
HIP_HOP_MUSIC = "Hip-hop Music"
JAZZ_MUSIC = "Jazz Music"
METAL_MUSIC = "Metal Music"
MUSICAL_THEATER = "Musical Theater"
ROCK_MUSIC = "Rock Music"
# Theme Comedy
PARODY = "Parody"
SATIRE = "Satire"
SLAPSTICK = "Slapstick"
SURREAL_COMEDY = "Surreal Comedy"
# Theme Drama
BULLYING = "Bullying"
CLASS_STRUGGLE = "Class Struggle"
COMING_OF_AGE = "Coming of Age"
CONSPIRACY = "Conspiracy"
ECO_HORROR = "Eco-Horror"
FAKE_RELATIONSHIP = "Fake Relationship"
KINGDOM_MANAGEMENT = "Kingdom Management"
REHABILITATION = "Rehabilitation"
REVENGE = "Revenge"
SUICIDE = "Suicide"
TRAGEDY = "Tragedy"
# Theme Fantasy
ALCHEMY = "Alchemy"
BODY_SWAPPING = "Body Swapping"
CULTIVATION = "Cultivation"
CURSES = "Curses"
EXORCISM = "Exorcism"
FAIRY_TALE = "Fairy Tale"
HENSHIN = "Henshin"
ISEKAI = "Isekai"
KAIJU = "Kaiju"
MAGIC = "Magic"
MYTHOLOGY = "Mythology"
NECROMANCY = "Necromancy"
SHAPESHIFTING = "Shapeshifting"
STEAMPUNK = "Steampunk"
SUPER_POWER = "Super Power"
SUPERHERO = "Superhero"
WUXIA = "Wuxia"
# Theme Game
BOARD_GAME = "Board Game"
E_SPORTS = "E-Sports"
VIDEO_GAMES = "Video Games"
# Theme Game-Card & Board Game
CARD_BATTLE = "Card Battle"
GO = "Go"
KARUTA = "Karuta"
MAHJONG = "Mahjong"
POKER = "Poker"
SHOGI = "Shogi"
# Theme Game-Sport
ACROBATICS = "Acrobatics"
AIRSOFT = "Airsoft"
AMERICAN_FOOTBALL = "American Football"
ATHLETICS = "Athletics"
BADMINTON = "Badminton"
BASEBALL = "Baseball"
BASKETBALL = "Basketball"
BOWLING = "Bowling"
BOXING = "Boxing"
CHEERLEADING = "Cheerleading"
CYCLING = "Cycling"
FENCING = "Fencing"
FISHING = "Fishing"
FITNESS = "Fitness"
FOOTBALL = "Football"
GOLF = "Golf"
HANDBALL = "Handball"
ICE_SKATING = "Ice Skating"
JUDO = "Judo"
LACROSSE = "Lacrosse"
PARKOUR = "Parkour"
RUGBY = "Rugby"
SCUBA_DIVING = "Scuba Diving"
SKATEBOARDING = "Skateboarding"
SUMO = "Sumo"
SURFING = "Surfing"
SWIMMING = "Swimming"
TABLE_TENNIS = "Table Tennis"
TENNIS = "Tennis"
VOLLEYBALL = "Volleyball"
WRESTLING = "Wrestling"
# Theme Other
ADOPTION = "Adoption"
ANIMALS = "Animals"
ASTRONOMY = "Astronomy"
AUTOBIOGRAPHICAL = "Autobiographical"
BIOGRAPHICAL = "Biographical"
BLACKMAIL = "Blackmail"
BODY_HORROR = "Body Horror"
BODY_IMAGE = "Body Image"
CANNIBALISM = "Cannibalism"
CHIBI = "Chibi"
COSMIC_HORROR = "Cosmic Horror"
CREATURE_TAMING = "Creature Taming"
CRIME = "Crime"
CROSSOVER = "Crossover"
DEATH_GAME = "Death Game"
DENPA = "Denpa"
DRUGS = "Drugs"
ECONOMICS = "Economics"
EDUCATIONAL = "Educational"
ENVIRONMENTAL = "Environmental"
ERO_GURO = "Ero Guro"
FILMMAKING = "Filmmaking"
FOUND_FAMILY = "Found Family"
GAMBLING = "Gambling"
GENDER_BENDING = "Gender Bending"
GORE = "Gore"
INDIGENOUS_CULTURES = "Indigenous Cultures"
LANGUAGE_BARRIER = "Language Barrier"
LGBTQ_PLUS_THEMES = "LGBTQ+ Themes"
LOST_CIVILIZATION = "Lost Civilization"
MARRIAGE = "Marriage"
MEDICINE = "Medicine"
MEMORY_MANIPULATION = "Memory Manipulation"
META = "Meta"
MOUNTAINEERING = "Mountaineering"
NOIR = "Noir"
OTAKU_CULTURE = "Otaku Culture"
PANDEMIC = "Pandemic"
PHILOSOPHY = "Philosophy"
POLITICS = "Politics"
PREGNANCY = "Pregnancy"
PROXY_BATTLE = "Proxy Battle"
PSYCHOSEXUAL = "Psychosexual"
REINCARNATION = "Reincarnation"
RELIGION = "Religion"
RESCUE = "Rescue"
ROYAL_AFFAIRS = "Royal Affairs"
SLAVERY = "Slavery"
SOFTWARE_DEVELOPMENT = "Software Development"
SURVIVAL = "Survival"
TERRORISM = "Terrorism"
TORTURE = "Torture"
TRAVEL = "Travel"
VOCAL_SYNTH = "Vocal Synth"
WAR = "War"
# Theme Other-Organisations
ASSASSINS = "Assassins"
CRIMINAL_ORGANIZATION = "Criminal Organization"
CULT = "Cult"
FIREFIGHTERS = "Firefighters"
GANGS = "Gangs"
MAFIA = "Mafia"
MILITARY = "Military"
POLICE = "Police"
TRIADS = "Triads"
YAKUZA = "Yakuza"
# Theme Other-Vehicle
AVIATION = "Aviation"
CARS = "Cars"
MOPEDS = "Mopeds"
MOTORCYCLES = "Motorcycles"
SHIPS = "Ships"
TANKS = "Tanks"
TRAINS = "Trains"
# Theme Romance
AGE_GAP = "Age Gap"
BOYS_LOVE = "Boys' Love"
COHABITATION = "Cohabitation"
FEMALE_HAREM = "Female Harem"
HETEROSEXUAL = "Heterosexual"
LOVE_TRIANGLE = "Love Triangle"
MALE_HAREM = "Male Harem"
MATCHMAKING = "Matchmaking"
MIXED_GENDER_HAREM = "Mixed Gender Harem"
TEENS_LOVE = "Teens' Love"
UNREQUITED_LOVE = "Unrequited Love"
YURI = "Yuri"
# Theme Sci-Fi
CYBERPUNK = "Cyberpunk"
SPACE_OPERA = "Space Opera"
TIME_LOOP = "Time Loop"
TIME_MANIPULATION = "Time Manipulation"
TOKUSATSU = "Tokusatsu"
# Theme Sci-Fi-Mecha
REAL_ROBOT = "Real Robot"
SUPER_ROBOT = "Super Robot"
# Theme Slice of Life
AGRICULTURE = "Agriculture"
CUTE_BOYS_DOING_CUTE_THINGS = "Cute Boys Doing Cute Things"
CUTE_GIRLS_DOING_CUTE_THINGS = "Cute Girls Doing Cute Things"
FAMILY_LIFE = "Family Life"
HORTICULTURE = "Horticulture"
IYASHIKEI = "Iyashikei"
PARENTHOOD = "Parenthood"
class MediaSort(Enum):
ID = "ID"
ID_DESC = "ID_DESC"
TITLE_ROMAJI = "TITLE_ROMAJI"
TITLE_ROMAJI_DESC = "TITLE_ROMAJI_DESC"
TITLE_ENGLISH = "TITLE_ENGLISH"
TITLE_ENGLISH_DESC = "TITLE_ENGLISH_DESC"
TITLE_NATIVE = "TITLE_NATIVE"
TITLE_NATIVE_DESC = "TITLE_NATIVE_DESC"
TYPE = "TYPE"
TYPE_DESC = "TYPE_DESC"
FORMAT = "FORMAT"
FORMAT_DESC = "FORMAT_DESC"
START_DATE = "START_DATE"
START_DATE_DESC = "START_DATE_DESC"
END_DATE = "END_DATE"
END_DATE_DESC = "END_DATE_DESC"
SCORE = "SCORE"
SCORE_DESC = "SCORE_DESC"
POPULARITY = "POPULARITY"
POPULARITY_DESC = "POPULARITY_DESC"
TRENDING = "TRENDING"
TRENDING_DESC = "TRENDING_DESC"
EPISODES = "EPISODES"
EPISODES_DESC = "EPISODES_DESC"
DURATION = "DURATION"
DURATION_DESC = "DURATION_DESC"
STATUS = "STATUS"
STATUS_DESC = "STATUS_DESC"
CHAPTERS = "CHAPTERS"
CHAPTERS_DESC = "CHAPTERS_DESC"
VOLUMES = "VOLUMES"
VOLUMES_DESC = "VOLUMES_DESC"
UPDATED_AT = "UPDATED_AT"
UPDATED_AT_DESC = "UPDATED_AT_DESC"
SEARCH_MATCH = "SEARCH_MATCH"
FAVOURITES = "FAVOURITES"
FAVOURITES_DESC = "FAVOURITES_DESC"
class UserMediaListSort(Enum):
MEDIA_ID = "MEDIA_ID"
MEDIA_ID_DESC = "MEDIA_ID_DESC"
SCORE = "SCORE"
SCORE_DESC = "SCORE_DESC"
STATUS = "STATUS"
STATUS_DESC = "STATUS_DESC"
PROGRESS = "PROGRESS"
PROGRESS_DESC = "PROGRESS_DESC"
PROGRESS_VOLUMES = "PROGRESS_VOLUMES"
PROGRESS_VOLUMES_DESC = "PROGRESS_VOLUMES_DESC"
REPEAT = "REPEAT"
REPEAT_DESC = "REPEAT_DESC"
PRIORITY = "PRIORITY"
PRIORITY_DESC = "PRIORITY_DESC"
STARTED_ON = "STARTED_ON"
STARTED_ON_DESC = "STARTED_ON_DESC"
FINISHED_ON = "FINISHED_ON"
FINISHED_ON_DESC = "FINISHED_ON_DESC"
ADDED_TIME = "ADDED_TIME"
ADDED_TIME_DESC = "ADDED_TIME_DESC"
UPDATED_TIME = "UPDATED_TIME"
UPDATED_TIME_DESC = "UPDATED_TIME_DESC"
MEDIA_TITLE_ROMAJI = "MEDIA_TITLE_ROMAJI"
MEDIA_TITLE_ROMAJI_DESC = "MEDIA_TITLE_ROMAJI_DESC"
MEDIA_TITLE_ENGLISH = "MEDIA_TITLE_ENGLISH"
MEDIA_TITLE_ENGLISH_DESC = "MEDIA_TITLE_ENGLISH_DESC"
MEDIA_TITLE_NATIVE = "MEDIA_TITLE_NATIVE"
MEDIA_TITLE_NATIVE_DESC = "MEDIA_TITLE_NATIVE_DESC"
MEDIA_POPULARITY = "MEDIA_POPULARITY"
MEDIA_POPULARITY_DESC = "MEDIA_POPULARITY_DESC"
MEDIA_SCORE = "MEDIA_SCORE"
MEDIA_SCORE_DESC = "MEDIA_SCORE_DESC"
MEDIA_START_DATE = "MEDIA_START_DATE"
MEDIA_START_DATE_DESC = "MEDIA_START_DATE_DESC"
MEDIA_RATING = "MEDIA_RATING"
MEDIA_RATING_DESC = "MEDIA_RATING_DESC"
class MediaSeason(Enum):
WINTER = "WINTER"
SPRING = "SPRING"
SUMMER = "SUMMER"
FALL = "FALL"
class MediaYear(Enum):
_1900 = "1900"
_1910 = "1910"
_1920 = "1920"
_1930 = "1930"
_1940 = "1940"
_1950 = "1950"
_1960 = "1960"
_1970 = "1970"
_1980 = "1980"
_1990 = "1990"
_2000 = "2000"
_2004 = "2004"
_2005 = "2005"
_2006 = "2006"
_2007 = "2007"
_2008 = "2008"
_2009 = "2009"
_2010 = "2010"
_2011 = "2011"
_2012 = "2012"
_2013 = "2013"
_2014 = "2014"
_2015 = "2015"
_2016 = "2016"
_2017 = "2017"
_2018 = "2018"
_2019 = "2019"
_2020 = "2020"
_2021 = "2021"
_2022 = "2022"
_2023 = "2023"
_2024 = "2024"
_2025 = "2025"

View File

@@ -1,65 +0,0 @@
"""
Syncplay integration for Viu.
This module provides a procedural function to launch Syncplay with the given media and options.
"""
import shutil
import subprocess
from .tools import exit_app
def SyncPlayer(url: str, anime_title=None, headers={}, subtitles=[], *args):
"""
Launch Syncplay for synchronized playback with friends.
Args:
url: The media URL to play.
anime_title: Optional title to display in the player.
headers: Optional HTTP headers to pass to the player.
subtitles: Optional list of subtitle dicts with 'url' keys.
*args: Additional arguments (unused).
Returns:
Tuple of ("0", "0") for compatibility.
"""
# TODO: handle m3u8 multi quality streams
#
# check for SyncPlay
SYNCPLAY_EXECUTABLE = shutil.which("syncplay")
if not SYNCPLAY_EXECUTABLE:
print("Syncplay not found")
exit_app(1)
return "0", "0"
# start SyncPlayer
mpv_args = []
if headers:
mpv_headers = "--http-header-fields="
for header_name, header_value in headers.items():
mpv_headers += f"{header_name}:{header_value},"
mpv_args.append(mpv_headers)
for subtitle in subtitles:
mpv_args.append(f"--sub-file={subtitle['url']}")
if not anime_title:
subprocess.run(
[
SYNCPLAY_EXECUTABLE,
url,
],
check=False,
)
else:
subprocess.run(
[
SYNCPLAY_EXECUTABLE,
url,
"--",
f"--force-media-title={anime_title}",
*mpv_args,
],
check=False,
)
# for compatibility
return "0", "0"

View File

@@ -1,105 +0,0 @@
"""An abstraction over all providers offering added features with a simple and well typed api
[TODO:description]
"""
import importlib
import logging
from typing import TYPE_CHECKING
from .libs.manga_provider import manga_sources
if TYPE_CHECKING:
pass
logger = logging.getLogger(__name__)
class MangaProvider:
"""Class that manages all anime sources adding some extra functionality to them.
Attributes:
PROVIDERS: [TODO:attribute]
provider: [TODO:attribute]
provider: [TODO:attribute]
dynamic: [TODO:attribute]
retries: [TODO:attribute]
manga_provider: [TODO:attribute]
"""
PROVIDERS = list(manga_sources.keys())
provider = PROVIDERS[0]
def __init__(self, provider="mangadex", dynamic=False, retries=0) -> None:
self.provider = provider
self.dynamic = dynamic
self.retries = retries
self.lazyload_provider(self.provider)
def lazyload_provider(self, provider):
"""updates the current provider being used"""
_, anime_provider_cls_name = manga_sources[provider].split(".", 1)
package = f"viu_cli.libs.manga_provider.{provider}"
provider_api = importlib.import_module(".api", package)
manga_provider = getattr(provider_api, anime_provider_cls_name)
self.manga_provider = manga_provider()
def search_for_manga(
self,
user_query,
nsfw=True,
unknown=True,
):
"""core abstraction over all providers search functionality
Args:
user_query ([TODO:parameter]): [TODO:description]
translation_type ([TODO:parameter]): [TODO:description]
nsfw ([TODO:parameter]): [TODO:description]
manga_provider ([TODO:parameter]): [TODO:description]
anilist_obj: [TODO:description]
Returns:
[TODO:return]
"""
manga_provider = self.manga_provider
try:
results = manga_provider.search_for_manga(user_query, nsfw, unknown)
except Exception as e:
logger.error(e)
results = None
return results
def get_manga(
self,
anime_id: str,
):
"""core abstraction over getting info of an anime from all providers
Args:
anime_id: [TODO:description]
anilist_obj: [TODO:description]
Returns:
[TODO:return]
"""
manga_provider = self.manga_provider
try:
results = manga_provider.get_manga(anime_id)
except Exception as e:
logger.error(e)
results = None
return results
def get_chapter_thumbnails(
self,
manga_id: str,
chapter: str,
):
manga_provider = self.manga_provider
try:
results = manga_provider.get_chapter_thumbnails(manga_id, chapter)
except Exception as e:
logger.error(e)
results = None
return results # pyright:ignore
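
A short usage sketch of this removed abstraction, assuming the class above is importable; note that provider errors are logged and swallowed, so callers must expect None:

# Illustrative only: exercising the MangaProvider wrapper above.
provider = MangaProvider(provider="mangadex")
results = provider.search_for_manga("one piece")
if results is None:
    print("search failed; see the log for the provider error")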

View File

@@ -1 +0,0 @@
manga_sources = {"mangadex": "api.MangaDexApi"}

View File

@@ -1,18 +0,0 @@
from httpx import Client
from ....core.utils.networking import random_user_agent
class MangaProvider:
session: Client
USER_AGENT = random_user_agent()
HEADERS = {}
def __init__(self) -> None:
self.session = Client(
headers={
"User-Agent": self.USER_AGENT,
**self.HEADERS,
},
timeout=10,
)

View File

@@ -1,15 +0,0 @@
import logging
from httpx import get
logger = logging.getLogger(__name__)
def fetch_manga_info_from_bal(anilist_id):
try:
url = f"https://raw.githubusercontent.com/bal-mackup/mal-backup/master/anilist/manga/{anilist_id}.json"
response = get(url, timeout=11)
if response.ok:
return response.json()
except Exception as e:
logger.error(e)

View File

@@ -1,51 +0,0 @@
import logging
from ...common.mini_anilist import search_for_manga_with_anilist
from ..base_provider import MangaProvider
from ..common import fetch_manga_info_from_bal
logger = logging.getLogger(__name__)
class MangaDexApi(MangaProvider):
def search_for_manga(self, title: str, *args):
try:
search_results = search_for_manga_with_anilist(title)
return search_results
except Exception as e:
logger.error(f"[MANGADEX-ERROR]: {e}")
def get_manga(self, anilist_manga_id: str):
bal_data = fetch_manga_info_from_bal(anilist_manga_id)
if not bal_data:
return
manga_id, MangaDexManga = next(iter(bal_data["Sites"]["Mangadex"].items()))
return {
"id": manga_id,
"title": MangaDexManga["title"],
"poster": MangaDexManga["image"],
"availableChapters": [],
}
def get_chapter_thumbnails(self, manga_id, chapter):
chapter_info_url = f"https://api.mangadex.org/chapter?manga={manga_id}&translatedLanguage[]=en&chapter={chapter}&includeEmptyPages=0"
chapter_info_response = self.session.get(chapter_info_url)
if not chapter_info_response.ok:
return
chapter_info = next(iter(chapter_info_response.json()["data"]))
chapters_thumbnails_url = (
f"https://api.mangadex.org/at-home/server/{chapter_info['id']}"
)
chapter_thumbnails_response = self.session.get(chapters_thumbnails_url)
if not chapter_thumbnails_response.ok:
return
chapter_thumbnails_info = chapter_thumbnails_response.json()
base_url = chapter_thumbnails_info["baseUrl"]
hash = chapter_thumbnails_info["chapter"]["hash"]
return {
"thumbnails": [
f"{base_url}/data/{hash}/{chapter_thumbnail}"
for chapter_thumbnail in chapter_thumbnails_info["chapter"]["data"]
],
"title": chapter_info["attributes"]["title"],
}

View File

@@ -1,6 +1,6 @@
import sys
if sys.version_info < (3, 10):
if sys.version_info < (3, 11):
raise ImportError(
"You are using an unsupported version of Python. Only Python versions 3.10 and above are supported by Viu"
) # noqa: F541

View File

@@ -1,4 +1,3 @@
██╗░░░██╗██╗██╗░░░██╗
██║░░░██║██║██║░░░██║
╚██╗░██╔╝██║██║░░░██║

View File

(binary image file: 3.7 KiB before and after; no rendered diff)

View File

(binary image file: 276 KiB before and after; no rendered diff)

View File

@@ -4,7 +4,9 @@
"Magia Record: Mahou Shoujo Madoka☆Magica Gaiden (TV)": "Mahou Shoujo Madoka☆Magica",
"Dungeon ni Deai o Motomeru no wa Machigatte Iru Darouka": "Dungeon ni Deai wo Motomeru no wa Machigatteiru Darou ka",
"Hazurewaku no \"Joutai Ijou Skill\" de Saikyou ni Natta Ore ga Subete wo Juurin suru made": "Hazure Waku no [Joutai Ijou Skill] de Saikyou ni Natta Ore ga Subete wo Juurin Suru made",
"Re:Zero kara Hajimeru Isekai Seikatsu Season 3": "Re:Zero kara Hajimeru Isekai Seikatsu 3rd Season"
"Re:Zero kara Hajimeru Isekai Seikatsu Season 3": "Re:Zero kara Hajimeru Isekai Seikatsu 3rd Season",
"Hanka×Hanka (2011)": "Hunter × Hunter (2011)",
"Burichi -": "bleach"
},
"hianime": {
"My Star": "Oshi no Ko"
@@ -13,5 +15,12 @@
"Azumanga Daiou The Animation": "Azumanga Daioh",
"Mairimashita! Iruma-kun 2nd Season": "Mairimashita! Iruma-kun 2",
"Mairimashita! Iruma-kun 3rd Season": "Mairimashita! Iruma-kun 3"
},
"animeunity": {
"Kaiju No. 8": "Kaiju No.8",
"Naruto Shippuden": "Naruto: Shippuden",
"Psycho-Pass: Sinners of the System Case.1 - Crime and Punishment": "PSYCHO-PASS Sinners of the System: Case.1 Crime and Punishment",
"Psycho-Pass: Sinners of the System Case.2 - First Guardian": "PSYCHO-PASS Sinners of the System: Case.2 First Guardian",
"Psycho-Pass: Sinners of the System Case.3 - On the Other Side of Love and Hate": "PSYCHO-PASS Sinners of the System: Case.3 Beyond the Pale of Vengeance"
}
}
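
This mapping is plain JSON keyed by provider name, so a lookup helper is a one-liner. The loader below is a hypothetical sketch; the real loading code lives elsewhere in the project, and the file path and mapping direction here are assumptions:

import json

# Hypothetical: load the per-provider title mapping shown above.
with open("normalizer.json", encoding="utf-8") as f:  # path is an assumption
    MAPPINGS = json.load(f)

def normalize_title(provider: str, title: str) -> str:
    """Return the mapped title for a provider, falling back to the input."""
    return MAPPINGS.get(provider, {}).get(title, title)

normalize_title("animeunity", "Naruto Shippuden")  # -> "Naruto: Shippuden"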

View File

@@ -0,0 +1,202 @@
"""
ANSI utilities for FZF preview scripts.
Lightweight stdlib-only utilities to replace Rich dependency in preview scripts.
Provides RGB color formatting, table rendering, and markdown stripping.
"""
import os
import re
import shutil
import textwrap
import unicodedata
def get_terminal_width() -> int:
"""
Get terminal width, prioritizing FZF preview environment variables.
Returns:
Terminal width in columns
"""
fzf_cols = os.environ.get("FZF_PREVIEW_COLUMNS")
if fzf_cols:
return int(fzf_cols)
return shutil.get_terminal_size((80, 24)).columns
def display_width(text: str) -> int:
"""
Calculate the actual display width of text, accounting for wide characters.
Args:
text: Text to measure
Returns:
Display width in terminal columns
"""
width = 0
for char in text:
# East Asian Width property: 'F' (Fullwidth) and 'W' (Wide) take 2 columns
if unicodedata.east_asian_width(char) in ("F", "W"):
width += 2
else:
width += 1
return width
def rgb_color(r: int, g: int, b: int, text: str, bold: bool = False) -> str:
"""
Format text with RGB color using ANSI escape codes.
Args:
r: Red component (0-255)
g: Green component (0-255)
b: Blue component (0-255)
text: Text to colorize
bold: Whether to make text bold
Returns:
ANSI-escaped colored text
"""
color_code = f"\x1b[38;2;{r};{g};{b}m"
bold_code = "\x1b[1m" if bold else ""
reset = "\x1b[0m"
return f"{color_code}{bold_code}{text}{reset}"
def parse_color(color_csv: str) -> tuple[int, int, int]:
"""
Parse RGB color from comma-separated string.
Args:
color_csv: Color as 'R,G,B' string
Returns:
Tuple of (r, g, b) integers
"""
parts = color_csv.split(",")
return int(parts[0]), int(parts[1]), int(parts[2])
def print_rule(sep_color: str) -> None:
"""
Print a horizontal rule line.
Args:
sep_color: Color as 'R,G,B' string
"""
width = get_terminal_width()
r, g, b = parse_color(sep_color)
print(rgb_color(r, g, b, "─" * width))
def print_table_row(
key: str, value: str, header_color: str, key_width: int, value_width: int
) -> None:
"""
Print a two-column table row with left-aligned key and right-aligned value.
Args:
key: Left column text (header/key)
value: Right column text (value)
header_color: Color for key as 'R,G,B' string
key_width: Width for key column
value_width: Width for value column
"""
r, g, b = parse_color(header_color)
key_styled = rgb_color(r, g, b, key, bold=True)
# Get actual terminal width
term_width = get_terminal_width()
# Calculate display widths accounting for wide characters
key_display_width = display_width(key)
# Calculate actual value width based on terminal and key display width
actual_value_width = max(20, term_width - key_display_width - 2)
# Wrap value if it's too long (use character count, not display width for wrapping)
value_lines = textwrap.wrap(str(value), width=actual_value_width) if value else [""]
if not value_lines:
value_lines = [""]
# Print first line with properly aligned value
first_line = value_lines[0]
first_line_display_width = display_width(first_line)
# Use manual spacing to right-align based on display width
spacing = term_width - key_display_width - first_line_display_width - 2
if spacing > 0:
print(f"{key_styled} {' ' * spacing}{first_line}")
else:
print(f"{key_styled} {first_line}")
# Print remaining wrapped lines (left-aligned, indented)
for line in value_lines[1:]:
print(f"{' ' * (key_display_width + 2)}{line}")
def strip_markdown(text: str) -> str:
"""
Strip markdown formatting from text.
Removes:
- Headers (# ## ###)
- Bold (**text** or __text__)
- Italic (*text* or _text_)
- Links ([text](url))
- Code blocks (```code```)
- Inline code (`code`)
Args:
text: Markdown-formatted text
Returns:
Plain text with markdown removed
"""
if not text:
return ""
# Remove code blocks first
text = re.sub(r"```[\s\S]*?```", "", text)
# Remove inline code
text = re.sub(r"`([^`]+)`", r"\1", text)
# Remove headers
text = re.sub(r"^#{1,6}\s+", "", text, flags=re.MULTILINE)
# Remove bold (** or __)
text = re.sub(r"\*\*(.+?)\*\*", r"\1", text)
text = re.sub(r"__(.+?)__", r"\1", text)
# Remove italic (* or _)
text = re.sub(r"\*(.+?)\*", r"\1", text)
text = re.sub(r"_(.+?)_", r"\1", text)
# Remove links, keep text
text = re.sub(r"\[(.+?)\]\(.+?\)", r"\1", text)
# Remove images
text = re.sub(r"!\[.*?\]\(.+?\)", "", text)
return text.strip()
def wrap_text(text: str, width: int | None = None) -> str:
"""
Wrap text to terminal width.
Args:
text: Text to wrap
width: Width to wrap to (defaults to terminal width)
Returns:
Wrapped text
"""
if width is None:
width = get_terminal_width()
return textwrap.fill(text, width=width)
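
A minimal usage sketch of these helpers, assuming the functions above are in scope; the "R,G,B" strings are arbitrary sample colors:

# Sample colors only; widths come from the terminal / FZF preview variables.
print_rule("100,100,100")
print_table_row("Score", "★★★★★ (83/100)", "215,0,95", 15, get_terminal_width() - 20)
print(wrap_text(strip_markdown("**Bold** text and [a link](https://example.com)")))
# strip_markdown yields "Bold text and a link", which wrap_text folds to the terminal width.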

View File

@@ -0,0 +1,323 @@
#!/usr/bin/env python3
"""
Filter Parser for Dynamic Search
This module provides a parser for the special filter syntax used in dynamic search.
Filter syntax allows users to add filters inline with their search query.
SYNTAX:
@filter:value - Apply a filter with the given value
@filter:value1,value2 - Apply multiple values (for array filters)
@filter:!value - Exclude/negate a filter value
SUPPORTED FILTERS:
@genre:action,comedy - Filter by genres
@genre:!hentai - Exclude genre
@status:airing - Filter by status (airing, finished, upcoming, cancelled, hiatus)
@year:2024 - Filter by season year
@season:winter - Filter by season (winter, spring, summer, fall)
@format:tv,movie - Filter by format (tv, movie, ova, ona, special, music)
@sort:score - Sort by (score, popularity, trending, title, date)
@score:>80 - Minimum score
@score:<50 - Maximum score
@popularity:>10000 - Minimum popularity
@onlist - Only show anime on user's list
@onlist:false - Only show anime NOT on user's list
EXAMPLES:
"naruto @genre:action @status:finished"
"isekai @year:2024 @season:winter @sort:score"
"@genre:action,adventure @status:airing"
"romance @genre:!hentai @format:tv,movie"
"""
import re
from typing import Any, Dict, List, Optional, Tuple
# Mapping of user-friendly filter names to GraphQL variable names
FILTER_ALIASES = {
# Status aliases
"airing": "RELEASING",
"releasing": "RELEASING",
"finished": "FINISHED",
"completed": "FINISHED",
"upcoming": "NOT_YET_RELEASED",
"not_yet_released": "NOT_YET_RELEASED",
"unreleased": "NOT_YET_RELEASED",
"cancelled": "CANCELLED",
"canceled": "CANCELLED",
"hiatus": "HIATUS",
"paused": "HIATUS",
# Format aliases
"tv": "TV",
"tv_short": "TV_SHORT",
"tvshort": "TV_SHORT",
"movie": "MOVIE",
"film": "MOVIE",
"ova": "OVA",
"ona": "ONA",
"special": "SPECIAL",
"music": "MUSIC",
# Season aliases
"winter": "WINTER",
"spring": "SPRING",
"summer": "SUMMER",
"fall": "FALL",
"autumn": "FALL",
# Sort aliases
"score": "SCORE_DESC",
"score_desc": "SCORE_DESC",
"score_asc": "SCORE",
"popularity": "POPULARITY_DESC",
"popularity_desc": "POPULARITY_DESC",
"popularity_asc": "POPULARITY",
"trending": "TRENDING_DESC",
"trending_desc": "TRENDING_DESC",
"trending_asc": "TRENDING",
"title": "TITLE_ROMAJI",
"title_desc": "TITLE_ROMAJI_DESC",
"date": "START_DATE_DESC",
"date_desc": "START_DATE_DESC",
"date_asc": "START_DATE",
"newest": "START_DATE_DESC",
"oldest": "START_DATE",
"favourites": "FAVOURITES_DESC",
"favorites": "FAVOURITES_DESC",
"episodes": "EPISODES_DESC",
}
# Genre name normalization (lowercase -> proper case)
GENRE_NAMES = {
"action": "Action",
"adventure": "Adventure",
"comedy": "Comedy",
"drama": "Drama",
"ecchi": "Ecchi",
"fantasy": "Fantasy",
"horror": "Horror",
"mahou_shoujo": "Mahou Shoujo",
"mahou": "Mahou Shoujo",
"magical_girl": "Mahou Shoujo",
"mecha": "Mecha",
"music": "Music",
"mystery": "Mystery",
"psychological": "Psychological",
"romance": "Romance",
"sci-fi": "Sci-Fi",
"scifi": "Sci-Fi",
"sci_fi": "Sci-Fi",
"slice_of_life": "Slice of Life",
"sol": "Slice of Life",
"sports": "Sports",
"supernatural": "Supernatural",
"thriller": "Thriller",
"hentai": "Hentai",
}
# Filter pattern: @key:value or @key (boolean flags)
FILTER_PATTERN = re.compile(r"@(\w+)(?::([^\s]+))?", re.IGNORECASE)
# Comparison operators for numeric filters
COMPARISON_PATTERN = re.compile(r"^([<>]=?)?(\d+)$")
def normalize_value(value: str, value_type: str) -> str:
"""Normalize a filter value based on its type."""
value_lower = value.lower().strip()
if value_type == "genre":
return GENRE_NAMES.get(value_lower, value.title())
elif value_type in ("status", "format", "season", "sort"):
return FILTER_ALIASES.get(value_lower, value.upper())
return value
def parse_value_list(value_str: str) -> Tuple[List[str], List[str]]:
"""
Parse a comma-separated value string, separating includes from excludes.
Returns:
Tuple of (include_values, exclude_values)
"""
includes = []
excludes = []
for val in value_str.split(","):
val = val.strip()
if not val:
continue
if val.startswith("!"):
excludes.append(val[1:])
else:
includes.append(val)
return includes, excludes
def parse_comparison(value: str) -> Tuple[Optional[str], Optional[int]]:
"""
Parse a comparison value like ">80" or "<50".
Returns:
Tuple of (operator, number) or (None, None) if invalid
"""
match = COMPARISON_PATTERN.match(value)
if match:
operator = match.group(1) or ">" # Default to greater than
number = int(match.group(2))
return operator, number
return None, None
def parse_filters(query: str) -> Tuple[str, Dict[str, Any]]:
"""
Parse a search query and extract filter directives.
Args:
query: The full search query including filter syntax
Returns:
Tuple of (clean_query, filters_dict)
- clean_query: The query with filter syntax removed
- filters_dict: Dictionary of GraphQL variables to apply
"""
filters: Dict[str, Any] = {}
# Find all filter matches
matches = list(FILTER_PATTERN.finditer(query))
for match in matches:
filter_name = match.group(1).lower()
filter_value = match.group(2) # May be None for boolean flags
# Handle different filter types
if filter_name == "genre":
if filter_value:
includes, excludes = parse_value_list(filter_value)
if includes:
normalized = [normalize_value(v, "genre") for v in includes]
filters.setdefault("genre_in", []).extend(normalized)
if excludes:
normalized = [normalize_value(v, "genre") for v in excludes]
filters.setdefault("genre_not_in", []).extend(normalized)
elif filter_name == "status":
if filter_value:
includes, excludes = parse_value_list(filter_value)
if includes:
normalized = [normalize_value(v, "status") for v in includes]
filters.setdefault("status_in", []).extend(normalized)
if excludes:
normalized = [normalize_value(v, "status") for v in excludes]
filters.setdefault("status_not_in", []).extend(normalized)
elif filter_name == "format":
if filter_value:
includes, _ = parse_value_list(filter_value)
if includes:
normalized = [normalize_value(v, "format") for v in includes]
filters.setdefault("format_in", []).extend(normalized)
elif filter_name == "year":
if filter_value:
try:
filters["seasonYear"] = int(filter_value)
except ValueError:
pass # Invalid year, skip
elif filter_name == "season":
if filter_value:
filters["season"] = normalize_value(filter_value, "season")
elif filter_name == "sort":
if filter_value:
sort_val = normalize_value(filter_value, "sort")
filters["sort"] = [sort_val]
elif filter_name == "score":
if filter_value:
op, num = parse_comparison(filter_value)
if num is not None:
if op in (">", ">="):
filters["averageScore_greater"] = num
elif op in ("<", "<="):
filters["averageScore_lesser"] = num
elif filter_name == "popularity":
if filter_value:
op, num = parse_comparison(filter_value)
if num is not None:
if op in (">", ">="):
filters["popularity_greater"] = num
elif op in ("<", "<="):
filters["popularity_lesser"] = num
elif filter_name == "onlist":
if filter_value is None or filter_value.lower() in ("true", "yes", "1"):
filters["on_list"] = True
elif filter_value.lower() in ("false", "no", "0"):
filters["on_list"] = False
elif filter_name == "tag":
if filter_value:
includes, excludes = parse_value_list(filter_value)
if includes:
# Tags use title case typically
normalized = [v.replace("_", " ").title() for v in includes]
filters.setdefault("tag_in", []).extend(normalized)
if excludes:
normalized = [v.replace("_", " ").title() for v in excludes]
filters.setdefault("tag_not_in", []).extend(normalized)
# Remove filter syntax from query to get clean search text
clean_query = FILTER_PATTERN.sub("", query).strip()
# Clean up multiple spaces
clean_query = re.sub(r"\s+", " ", clean_query).strip()
return clean_query, filters
def get_help_text() -> str:
"""Return a help string describing the filter syntax."""
return """
╭─────────────────── Filter Syntax Help ───────────────────╮
│ │
│ @genre:action,comedy Filter by genres │
│ @genre:!hentai Exclude genre │
│ @status:airing Status: airing, finished, │
│ upcoming, cancelled, hiatus │
│ @year:2024 Filter by year │
│ @season:winter winter, spring, summer, fall │
│ @format:tv,movie tv, movie, ova, ona, special │
│ @sort:score score, popularity, trending, │
│ date, title, newest, oldest │
│ @score:>80 Minimum score │
│ @score:<50 Maximum score │
│ @popularity:>10000 Minimum popularity │
│ @onlist Only on your list │
│ @onlist:false Not on your list │
│ @tag:isekai,reincarnation Filter by tags │
│ │
│ Examples: │
│ naruto @genre:action @status:finished │
│ @genre:action,adventure @year:2024 @sort:score │
│ isekai @season:winter @year:2024 │
│ │
╰──────────────────────────────────────────────────────────╯
""".strip()
if __name__ == "__main__":
# Test the parser
import json
import sys
if len(sys.argv) > 1:
test_query = " ".join(sys.argv[1:])
clean, filters = parse_filters(test_query)
print(f"Original: {test_query}")
print(f"Clean query: {clean}")
print(f"Filters: {json.dumps(filters, indent=2)}")
else:
print(get_help_text())
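
A worked example, traced by hand from the parsing rules above rather than captured from a run:

clean, filters = parse_filters("naruto @genre:action @status:finished @year:2024")
# clean   == "naruto"
# filters == {
#     "genre_in": ["Action"],
#     "status_in": ["FINISHED"],
#     "seasonYear": 2024,
# }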

View File

@@ -0,0 +1,36 @@
import sys
from _ansi_utils import (
print_rule,
print_table_row,
strip_markdown,
wrap_text,
get_terminal_width,
)
HEADER_COLOR = sys.argv[1]
SEPARATOR_COLOR = sys.argv[2]
# Get terminal dimensions
term_width = get_terminal_width()
# Print title centered
print("{ANIME_TITLE}".center(term_width))
rows = [
("Total Episodes", "{TOTAL_EPISODES}"),
]
print_rule(SEPARATOR_COLOR)
for key, value in rows:
print_table_row(key, value, HEADER_COLOR, 15, term_width - 20)
rows = [
("Upcoming Episodes", "{UPCOMING_EPISODES}"),
]
print_rule(SEPARATOR_COLOR)
for key, value in rows:
print_table_row(key, value, HEADER_COLOR, 15, term_width - 20)
print_rule(SEPARATOR_COLOR)
print(wrap_text(strip_markdown("""{SCHEDULE_TABLE}"""), term_width))

View File

@@ -0,0 +1,47 @@
import sys
from _ansi_utils import (
print_rule,
print_table_row,
strip_markdown,
wrap_text,
get_terminal_width,
)
HEADER_COLOR = sys.argv[1]
SEPARATOR_COLOR = sys.argv[2]
# Get terminal dimensions
term_width = get_terminal_width()
# Print title centered
print("{CHARACTER_NAME}".center(term_width))
rows = [
("Native Name", "{CHARACTER_NATIVE_NAME}"),
("Gender", "{CHARACTER_GENDER}"),
]
print_rule(SEPARATOR_COLOR)
for key, value in rows:
print_table_row(key, value, HEADER_COLOR, 15, term_width - 20)
rows = [
("Age", "{CHARACTER_AGE}"),
("Blood Type", "{CHARACTER_BLOOD_TYPE}"),
]
print_rule(SEPARATOR_COLOR)
for key, value in rows:
print_table_row(key, value, HEADER_COLOR, 15, term_width - 20)
rows = [
("Birthday", "{CHARACTER_BIRTHDAY}"),
("Favourites", "{CHARACTER_FAVOURITES}"),
]
print_rule(SEPARATOR_COLOR)
for key, value in rows:
print_table_row(key, value, HEADER_COLOR, 15, term_width - 20)
print_rule(SEPARATOR_COLOR)
print(wrap_text(strip_markdown("""{CHARACTER_DESCRIPTION}"""), term_width))

View File

@@ -0,0 +1,499 @@
#!/usr/bin/env python3
#
# FZF Dynamic Preview Script for Search Results
#
# This script handles previews for dynamic search by reading from the cached
# search results JSON and generating preview content on-the-fly.
# Template variables are injected by Python using .replace()
import json
import os
import shutil
import subprocess
import sys
from hashlib import sha256
from pathlib import Path
# Import the utility functions
from _ansi_utils import (
get_terminal_width,
print_rule,
print_table_row,
strip_markdown,
wrap_text,
)
# --- Template Variables (Injected by Python) ---
SEARCH_RESULTS_FILE = Path("{SEARCH_RESULTS_FILE}")
IMAGE_CACHE_DIR = Path("{IMAGE_CACHE_DIR}")
PREVIEW_MODE = "{PREVIEW_MODE}"
IMAGE_RENDERER = "{IMAGE_RENDERER}"
HEADER_COLOR = "{HEADER_COLOR}"
SEPARATOR_COLOR = "{SEPARATOR_COLOR}"
SCALE_UP = "{SCALE_UP}" == "True"
# --- Arguments ---
# sys.argv[1] is the selected anime title from fzf
SELECTED_TITLE = sys.argv[1] if len(sys.argv) > 1 else ""
def format_number(num):
"""Format number with thousand separators."""
if num is None:
return "N/A"
return f"{num:,}"
def format_score_stars(score):
"""Format score as stars out of 6."""
if score is None:
return "N/A"
# Convert 0-100 score to 0-6 stars, capped at 6 for consistency
stars = min(round(score * 6 / 100), 6)
return "" * stars + f" ({score}/100)"
def format_date(date_obj):
"""Format date object to string."""
if not date_obj or date_obj == "null":
return "N/A"
year = date_obj.get("year")
month = date_obj.get("month")
day = date_obj.get("day")
if not year:
return "N/A"
if month and day:
return f"{day}/{month}/{year}"
if month:
return f"{month}/{year}"
return str(year)
def get_media_from_results(title):
"""Find media item in search results by title."""
if not SEARCH_RESULTS_FILE.exists():
return None
try:
with open(SEARCH_RESULTS_FILE, "r", encoding="utf-8") as f:
data = json.load(f)
media_list = data.get("data", {}).get("Page", {}).get("media", [])
for media in media_list:
title_obj = media.get("title", {})
eng = title_obj.get("english")
rom = title_obj.get("romaji")
nat = title_obj.get("native")
if title in (eng, rom, nat):
return media
return None
except Exception as e:
print(f"Error reading search results: {e}", file=sys.stderr)
return None
def download_image(url: str, output_path: Path) -> bool:
"""Download image from URL and save to file."""
try:
# Try using urllib (stdlib)
from urllib import request
req = request.Request(url, headers={"User-Agent": "viu/1.0"})
with request.urlopen(req, timeout=5) as response:
data = response.read()
output_path.write_bytes(data)
return True
except Exception:
# Silently fail - preview will just not show image
return False
def which(cmd):
"""Check if command exists."""
return shutil.which(cmd)
def get_terminal_dimensions():
"""Get terminal dimensions from FZF environment."""
fzf_cols = os.environ.get("FZF_PREVIEW_COLUMNS")
fzf_lines = os.environ.get("FZF_PREVIEW_LINES")
if fzf_cols and fzf_lines:
return int(fzf_cols), int(fzf_lines)
try:
rows, cols = (
subprocess.check_output(
["stty", "size"], text=True, stderr=subprocess.DEVNULL
)
.strip()
.split()
)
return int(cols), int(rows)
except Exception:
return 80, 24
def render_kitty(file_path, width, height, scale_up):
"""Render using the Kitty Graphics Protocol (kitten/icat)."""
cmd = []
if which("kitten"):
cmd = ["kitten", "icat"]
elif which("icat"):
cmd = ["icat"]
elif which("kitty"):
cmd = ["kitty", "+kitten", "icat"]
if not cmd:
return False
args = [
"--clear",
"--transfer-mode=memory",
"--unicode-placeholder",
"--stdin=no",
f"--place={width}x{height}@0x0",
]
if scale_up:
args.append("--scale-up")
args.append(file_path)
subprocess.run(cmd + args, stdout=sys.stdout, stderr=sys.stderr)
return True
def render_sixel(file_path, width, height):
"""Render using Sixel."""
if which("chafa"):
subprocess.run(
["chafa", "-f", "sixel", "-s", f"{width}x{height}", file_path],
stdout=sys.stdout,
stderr=sys.stderr,
)
return True
if which("img2sixel"):
pixel_width = width * 10
pixel_height = height * 20
subprocess.run(
[
"img2sixel",
f"--width={pixel_width}",
f"--height={pixel_height}",
file_path,
],
stdout=sys.stdout,
stderr=sys.stderr,
)
return True
return False
def render_iterm(file_path, width, height):
"""Render using iTerm2 Inline Image Protocol."""
if which("imgcat"):
subprocess.run(
["imgcat", "-W", str(width), "-H", str(height), file_path],
stdout=sys.stdout,
stderr=sys.stderr,
)
return True
if which("chafa"):
subprocess.run(
["chafa", "-f", "iterm", "-s", f"{width}x{height}", file_path],
stdout=sys.stdout,
stderr=sys.stderr,
)
return True
return False
def render_timg(file_path, width, height):
"""Render using timg."""
if which("timg"):
subprocess.run(
["timg", f"-g{width}x{height}", "--upscale", file_path],
stdout=sys.stdout,
stderr=sys.stderr,
)
return True
return False
def render_chafa_auto(file_path, width, height):
"""Render using Chafa in auto mode."""
if which("chafa"):
subprocess.run(
["chafa", "-s", f"{width}x{height}", file_path],
stdout=sys.stdout,
stderr=sys.stderr,
)
return True
return False
def fzf_image_preview(file_path: str):
"""Main dispatch function to choose the best renderer."""
cols, lines = get_terminal_dimensions()
width = cols
height = lines
# Check explicit configuration
if IMAGE_RENDERER == "icat" or IMAGE_RENDERER == "system-kitty":
if render_kitty(file_path, width, height, SCALE_UP):
return
elif IMAGE_RENDERER == "sixel" or IMAGE_RENDERER == "system-sixels":
if render_sixel(file_path, width, height):
return
elif IMAGE_RENDERER == "imgcat":
if render_iterm(file_path, width, height):
return
elif IMAGE_RENDERER == "timg":
if render_timg(file_path, width, height):
return
elif IMAGE_RENDERER == "chafa":
if render_chafa_auto(file_path, width, height):
return
# Auto-detection / Fallback
if os.environ.get("KITTY_WINDOW_ID") or os.environ.get("GHOSTTY_BIN_DIR"):
if render_kitty(file_path, width, height, SCALE_UP):
return
if os.environ.get("TERM_PROGRAM") == "iTerm.app":
if render_iterm(file_path, width, height):
return
# Try standard tools in order of quality/preference
if render_kitty(file_path, width, height, SCALE_UP):
return
if render_sixel(file_path, width, height):
return
if render_timg(file_path, width, height):
return
if render_chafa_auto(file_path, width, height):
return
print("⚠️ No suitable image renderer found (icat, chafa, timg, img2sixel).")
def main():
if not SELECTED_TITLE:
print("No selection")
return
# Get the media data from cached search results
media = get_media_from_results(SELECTED_TITLE)
if not media:
print("Loading preview...")
return
term_width = get_terminal_width()
# Extract media information
title_obj = media.get("title", {})
title = (
title_obj.get("english")
or title_obj.get("romaji")
or title_obj.get("native")
or "Unknown"
)
# Show image if in image or full mode
if PREVIEW_MODE in ("image", "full"):
cover_image = media.get("coverImage", {}).get("large", "")
if cover_image:
# Ensure image cache directory exists
IMAGE_CACHE_DIR.mkdir(parents=True, exist_ok=True)
# Generate hash matching the preview worker pattern
# Use "anime-" prefix and hash of just the title (no KEY prefix for dynamic search)
hash_id = f"anime-{sha256(SELECTED_TITLE.encode('utf-8')).hexdigest()}"
image_file = IMAGE_CACHE_DIR / f"{hash_id}.png"
# Download image if not cached
if not image_file.exists():
download_image(cover_image, image_file)
# Try to render the image
if image_file.exists():
fzf_image_preview(str(image_file))
print() # Spacer
else:
print("🖼️ Loading image...")
print()
# Show text info if in text or full mode
if PREVIEW_MODE in ("text", "full"):
# Separator line
r, g, b = map(int, SEPARATOR_COLOR.split(","))
separator = f"\x1b[38;2;{r};{g};{b}m" + ("" * term_width) + "\x1b[0m"
print(separator, flush=True)
# Title centered
print(title.center(term_width))
# Extract data
status = media.get("status", "Unknown")
format_type = media.get("format", "Unknown")
episodes = media.get("episodes", "??")
duration = media.get("duration")
duration_str = f"{duration} min/ep" if duration else "Unknown"
score = media.get("averageScore")
score_str = format_score_stars(score)
favourites = format_number(media.get("favourites", 0))
popularity = format_number(media.get("popularity", 0))
genres = ", ".join(media.get("genres", [])) or "Unknown"
start_date = format_date(media.get("startDate"))
end_date = format_date(media.get("endDate"))
studios_list = media.get("studios", {}).get("nodes", [])
# Studios are those with isAnimationStudio=true
studios = ", ".join([s["name"] for s in studios_list if s.get("name") and s.get("isAnimationStudio")]) or "N/A"
# Producers are those with isAnimationStudio=false
producers = ", ".join([s["name"] for s in studios_list if s.get("name") and not s.get("isAnimationStudio")]) or "N/A"
synonyms_list = media.get("synonyms", [])
# Include romaji in synonyms if different from title
romaji = title_obj.get("romaji")
if romaji and romaji != title and romaji not in synonyms_list:
synonyms_list = [romaji] + synonyms_list
synonyms = ", ".join(synonyms_list) or "N/A"
# Tags
tags_list = media.get("tags", [])
tags = ", ".join([t.get("name", "") for t in tags_list if t.get("name")]) or "N/A"
# Next airing episode
next_airing = media.get("nextAiringEpisode")
if next_airing:
next_ep = next_airing.get("episode", "?")
airing_at = next_airing.get("airingAt")
if airing_at:
from datetime import datetime
try:
dt = datetime.fromtimestamp(airing_at)
next_episode_str = f"Episode {next_ep} on {dt.strftime('%A, %d %B %Y at %H:%M')}"
except (ValueError, OSError):
next_episode_str = f"Episode {next_ep}"
else:
next_episode_str = f"Episode {next_ep}"
else:
next_episode_str = "N/A"
# User list status
media_list_entry = media.get("mediaListEntry")
if media_list_entry:
user_status = media_list_entry.get("status", "NOT_ON_LIST")
user_progress = f"Episode {media_list_entry.get('progress', 0)}"
else:
user_status = "NOT_ON_LIST"
user_progress = "0"
description = media.get("description", "No description available.")
description = strip_markdown(description)
# Print sections matching media_info.py structure exactly
rows = [
("Score", score_str),
("Favorites", favourites),
("Popularity", popularity),
("Status", status),
]
print_rule(SEPARATOR_COLOR)
for key, value in rows:
print_table_row(key, value, HEADER_COLOR, 15, term_width - 20)
rows = [
("Episodes", str(episodes)),
("Duration", duration_str),
("Next Episode", next_episode_str),
]
print_rule(SEPARATOR_COLOR)
for key, value in rows:
print_table_row(key, value, HEADER_COLOR, 15, term_width - 20)
rows = [
("Genres", genres),
("Format", format_type),
]
print_rule(SEPARATOR_COLOR)
for key, value in rows:
print_table_row(key, value, HEADER_COLOR, 15, term_width - 20)
rows = [
("List Status", user_status),
("Progress", user_progress),
]
print_rule(SEPARATOR_COLOR)
for key, value in rows:
print_table_row(key, value, HEADER_COLOR, 15, term_width - 20)
rows = [
("Start Date", start_date),
("End Date", end_date),
]
print_rule(SEPARATOR_COLOR)
for key, value in rows:
print_table_row(key, value, HEADER_COLOR, 15, term_width - 20)
rows = [
("Studios", studios),
("Producers", producers),
]
print_rule(SEPARATOR_COLOR)
for key, value in rows:
print_table_row(key, value, HEADER_COLOR, 15, term_width - 20)
rows = [
("Synonyms", synonyms),
]
print_rule(SEPARATOR_COLOR)
for key, value in rows:
print_table_row(key, value, HEADER_COLOR, 15, term_width - 20)
rows = [
("Tags", tags),
]
print_rule(SEPARATOR_COLOR)
for key, value in rows:
print_table_row(key, value, HEADER_COLOR, 15, term_width - 20)
print_rule(SEPARATOR_COLOR)
print(wrap_text(description, term_width))
if __name__ == "__main__":
try:
main()
except KeyboardInterrupt:
pass
except Exception as e:
print(f"Preview Error: {e}", file=sys.stderr)

View File

@@ -0,0 +1,49 @@
import sys
from _ansi_utils import print_rule, print_table_row, get_terminal_width
HEADER_COLOR = sys.argv[1]
SEPARATOR_COLOR = sys.argv[2]
# Get terminal dimensions
term_width = get_terminal_width()
# Print title centered
print("{TITLE}".center(term_width))
rows = [
("Duration", "{DURATION}"),
("Status", "{STATUS}"),
]
print_rule(SEPARATOR_COLOR)
for key, value in rows:
print_table_row(key, value, HEADER_COLOR, 15, term_width - 20)
rows = [
("Total Episodes", "{EPISODES}"),
("Next Episode", "{NEXT_EPISODE}"),
]
print_rule(SEPARATOR_COLOR)
for key, value in rows:
print_table_row(key, value, HEADER_COLOR, 15, term_width - 20)
rows = [
("Progress", "{USER_PROGRESS}"),
("List Status", "{USER_STATUS}"),
]
print_rule(SEPARATOR_COLOR)
for key, value in rows:
print_table_row(key, value, HEADER_COLOR, 15, term_width - 20)
rows = [
("Start Date", "{START_DATE}"),
("End Date", "{END_DATE}"),
]
print_rule(SEPARATOR_COLOR)
for key, value in rows:
print_table_row(key, value, HEADER_COLOR, 15, term_width - 20)
print_rule(SEPARATOR_COLOR)

View File

@@ -0,0 +1,94 @@
import sys
from _ansi_utils import (
print_rule,
print_table_row,
strip_markdown,
wrap_text,
get_terminal_width,
)
HEADER_COLOR = sys.argv[1]
SEPARATOR_COLOR = sys.argv[2]
# Get terminal dimensions
term_width = get_terminal_width()
# Print title centered
print("{TITLE}".center(term_width))
# Define table data
rows = [
("Score", "{SCORE}"),
("Favorites", "{FAVOURITES}"),
("Popularity", "{POPULARITY}"),
("Status", "{STATUS}"),
]
print_rule(SEPARATOR_COLOR)
for key, value in rows:
print_table_row(key, value, HEADER_COLOR, 15, term_width - 20)
rows = [
("Episodes", "{EPISODES}"),
("Duration", "{DURATION}"),
("Next Episode", "{NEXT_EPISODE}"),
]
print_rule(SEPARATOR_COLOR)
for key, value in rows:
print_table_row(key, value, HEADER_COLOR, 15, term_width - 20)
rows = [
("Genres", "{GENRES}"),
("Format", "{FORMAT}"),
]
print_rule(SEPARATOR_COLOR)
for key, value in rows:
print_table_row(key, value, HEADER_COLOR, 15, term_width - 20)
rows = [
("List Status", "{USER_STATUS}"),
("Progress", "{USER_PROGRESS}"),
]
print_rule(SEPARATOR_COLOR)
for key, value in rows:
print_table_row(key, value, HEADER_COLOR, 15, term_width - 20)
rows = [
("Start Date", "{START_DATE}"),
("End Date", "{END_DATE}"),
]
print_rule(SEPARATOR_COLOR)
for key, value in rows:
print_table_row(key, value, HEADER_COLOR, 15, term_width - 20)
rows = [
("Studios", "{STUDIOS}"),
("Producers", "{PRODUCERS}"),
]
print_rule(SEPARATOR_COLOR)
for key, value in rows:
print_table_row(key, value, HEADER_COLOR, 15, term_width - 20)
rows = [
("Synonyms", "{SYNONYMNS}"),
]
print_rule(SEPARATOR_COLOR)
for key, value in rows:
print_table_row(key, value, HEADER_COLOR, 15, term_width - 20)
rows = [
("Tags", "{TAGS}"),
]
print_rule(SEPARATOR_COLOR)
for key, value in rows:
print_table_row(key, value, HEADER_COLOR, 15, term_width - 20)
print_rule(SEPARATOR_COLOR)
print(wrap_text(strip_markdown("""{SYNOPSIS}"""), term_width))

View File

@@ -0,0 +1,288 @@
#!/usr/bin/env python3
#
# FZF Preview Script Template
#
# This script is a template. The placeholders in curly braces, like {NAME}
# are dynamically filled by python using .replace() during runtime.
import os
import shutil
import subprocess
import sys
from hashlib import sha256
from pathlib import Path
# --- Template Variables (Injected by Python) ---
PREVIEW_MODE = "{PREVIEW_MODE}"
IMAGE_CACHE_DIR = Path("{IMAGE_CACHE_DIR}")
INFO_CACHE_DIR = Path("{INFO_CACHE_DIR}")
IMAGE_RENDERER = "{IMAGE_RENDERER}"
HEADER_COLOR = "{HEADER_COLOR}"
SEPARATOR_COLOR = "{SEPARATOR_COLOR}"
PREFIX = "{PREFIX}"
SCALE_UP = "{SCALE_UP}" == "True"
# --- Arguments ---
# sys.argv[1] is usually the raw line from FZF (the anime title/key)
TITLE = sys.argv[1] if len(sys.argv) > 1 else ""
KEY = """{KEY}"""
KEY = KEY + "-" if KEY else KEY
# Generate the hash to find the cached files
hash_id = f"{PREFIX}-{sha256((KEY + TITLE).encode('utf-8')).hexdigest()}"
def get_terminal_dimensions():
"""
Determine the available dimensions (cols x lines) for the preview window.
Prioritizes FZF environment variables.
"""
fzf_cols = os.environ.get("FZF_PREVIEW_COLUMNS")
fzf_lines = os.environ.get("FZF_PREVIEW_LINES")
if fzf_cols and fzf_lines:
return int(fzf_cols), int(fzf_lines)
# Fallback to stty if FZF vars aren't set (unlikely in preview)
try:
rows, cols = (
subprocess.check_output(
["stty", "size"], text=True, stderr=subprocess.DEVNULL
)
.strip()
.split()
)
return int(cols), int(rows)
except Exception:
return 80, 24
def which(cmd):
"""Alias for shutil.which"""
return shutil.which(cmd)
def render_kitty(file_path, width, height, scale_up):
"""Render using the Kitty Graphics Protocol (kitten/icat)."""
# 1. Try 'kitten icat' (Modern)
# 2. Try 'icat' (Legacy/Alias)
# 3. Try 'kitty +kitten icat' (Fallback)
cmd = []
if which("kitten"):
cmd = ["kitten", "icat"]
elif which("icat"):
cmd = ["icat"]
elif which("kitty"):
cmd = ["kitty", "+kitten", "icat"]
if not cmd:
return False
# Build Arguments
args = [
"--clear",
"--transfer-mode=memory",
"--unicode-placeholder",
"--stdin=no",
f"--place={width}x{height}@0x0",
]
if scale_up:
args.append("--scale-up")
args.append(file_path)
subprocess.run(cmd + args, stdout=sys.stdout, stderr=sys.stderr)
return True
def render_sixel(file_path, width, height):
"""
Render using Sixel.
Prioritizes 'chafa' for Sixel as it handles text-cell sizing better than img2sixel.
"""
# Option A: Chafa (Best for Sixel sizing)
if which("chafa"):
# Chafa automatically detects Sixel support if terminal reports it,
# but we force it here if specifically requested via logic flow.
subprocess.run(
["chafa", "-f", "sixel", "-s", f"{width}x{height}", file_path],
stdout=sys.stdout,
stderr=sys.stderr,
)
return True
# Option B: img2sixel (Libsixel)
# Note: img2sixel uses pixels, not cells. We estimate 1 cell ~= 10px width, 20px height
if which("img2sixel"):
pixel_width = width * 10
pixel_height = height * 20
subprocess.run(
[
"img2sixel",
f"--width={pixel_width}",
f"--height={pixel_height}",
file_path,
],
stdout=sys.stdout,
stderr=sys.stderr,
)
return True
return False
def render_iterm(file_path, width, height):
"""Render using iTerm2 Inline Image Protocol."""
if which("imgcat"):
subprocess.run(
["imgcat", "-W", str(width), "-H", str(height), file_path],
stdout=sys.stdout,
stderr=sys.stderr,
)
return True
# Chafa also supports iTerm
if which("chafa"):
subprocess.run(
["chafa", "-f", "iterm", "-s", f"{width}x{height}", file_path],
stdout=sys.stdout,
stderr=sys.stderr,
)
return True
return False
def render_timg(file_path, width, height):
"""Render using timg (supports half-blocks, quarter-blocks, sixel, kitty, etc)."""
if which("timg"):
subprocess.run(
["timg", f"-g{width}x{height}", "--upscale", file_path],
stdout=sys.stdout,
stderr=sys.stderr,
)
return True
return False
def render_chafa_auto(file_path, width, height):
"""
Render using Chafa in auto mode.
It supports Sixel, Kitty, iTerm, and various unicode block modes.
"""
if which("chafa"):
subprocess.run(
["chafa", "-s", f"{width}x{height}", file_path],
stdout=sys.stdout,
stderr=sys.stderr,
)
return True
return False
def fzf_image_preview(file_path: str):
"""
Main dispatch function to choose the best renderer.
"""
cols, lines = get_terminal_dimensions()
# Heuristic: Reserve 1 line for prompt/status if needed, though FZF handles this.
# Some renderers behave better with a tiny bit of padding.
width = cols
height = lines
# --- 1. Check Explicit Configuration ---
if IMAGE_RENDERER == "icat" or IMAGE_RENDERER == "system-kitty":
if render_kitty(file_path, width, height, SCALE_UP):
return
elif IMAGE_RENDERER == "sixel" or IMAGE_RENDERER == "system-sixels":
if render_sixel(file_path, width, height):
return
elif IMAGE_RENDERER == "imgcat":
if render_iterm(file_path, width, height):
return
elif IMAGE_RENDERER == "timg":
if render_timg(file_path, width, height):
return
elif IMAGE_RENDERER == "chafa":
if render_chafa_auto(file_path, width, height):
return
# --- 2. Auto-Detection / Fallback Strategy ---
# If explicit failed or set to 'auto'/'system-default', try detecting environment
# Ghostty / Kitty Environment
if os.environ.get("KITTY_WINDOW_ID") or os.environ.get("GHOSTTY_BIN_DIR"):
if render_kitty(file_path, width, height, SCALE_UP):
return
# iTerm Environment
if os.environ.get("TERM_PROGRAM") == "iTerm.app":
if render_iterm(file_path, width, height):
return
# Try standard tools in order of quality/preference
if render_kitty(file_path, width, height, SCALE_UP):
return # Try kitty just in case
if render_sixel(file_path, width, height):
return
if render_timg(file_path, width, height):
return
if render_chafa_auto(file_path, width, height):
return
print("⚠️ No suitable image renderer found (icat, chafa, timg, img2sixel).")
def fzf_text_info_render():
"""Renders the text-based info via the cached python script."""
# Get terminal dimensions from FZF environment or fallback
cols, lines = get_terminal_dimensions()
# Print simple separator line with proper width
r, g, b = map(int, SEPARATOR_COLOR.split(","))
separator = f"\x1b[38;2;{r};{g};{b}m" + ("─" * cols) + "\x1b[0m"
print(separator, flush=True)
if PREVIEW_MODE == "text" or PREVIEW_MODE == "full":
preview_info_path = INFO_CACHE_DIR / f"{hash_id}.py"
if preview_info_path.exists():
subprocess.run(
[sys.executable, str(preview_info_path), HEADER_COLOR, SEPARATOR_COLOR]
)
else:
# Print dim text
print("\x1b[2m📝 Loading details...\x1b[0m")
def main():
# 1. Image Preview
if (PREVIEW_MODE == "image" or PREVIEW_MODE == "full") and (
PREFIX not in ("character", "review", "airing-schedule")
):
preview_image_path = IMAGE_CACHE_DIR / f"{hash_id}.png"
if preview_image_path.exists():
fzf_image_preview(str(preview_image_path))
print() # Spacer
else:
print("🖼️ Loading image...")
# 2. Text Info Preview
fzf_text_info_render()
if __name__ == "__main__":
try:
main()
except KeyboardInterrupt:
pass
except Exception as e:
print(f"Preview Error: {e}")

View File

@@ -0,0 +1,28 @@
import sys
from _ansi_utils import (
print_rule,
print_table_row,
strip_markdown,
wrap_text,
get_terminal_width,
)
HEADER_COLOR = sys.argv[1]
SEPARATOR_COLOR = sys.argv[2]
# Get terminal dimensions
term_width = get_terminal_width()
# Print title centered
print("{REVIEWER_NAME}".center(term_width))
rows = [
("Summary", "{REVIEW_SUMMARY}"),
]
print_rule(SEPARATOR_COLOR)
for key, value in rows:
print_table_row(key, value, HEADER_COLOR, 15, term_width - 20)
print_rule(SEPARATOR_COLOR)
print(wrap_text(strip_markdown("""{REVIEW_BODY}"""), term_width))
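
Both info templates import their rendering helpers from an _ansi_utils module that does not appear in this diff. The sketch below shows one plausible shape for those helpers, with signatures inferred from the call sites (print_rule(color), print_table_row(key, value, color, key_width, value_width), wrap_text(text, width), strip_markdown(text), get_terminal_width()); the real module may differ.

# Minimal sketch of the helpers the templates import from _ansi_utils.
# Signatures are inferred from the call sites; the real module may differ.
import os
import re
import textwrap

def get_terminal_width(default: int = 80) -> int:
    cols = os.environ.get("FZF_PREVIEW_COLUMNS")
    if cols and cols.isdigit():
        return int(cols)
    try:
        return os.get_terminal_size().columns
    except OSError:
        return default

def _rgb(color: str) -> str:
    # Colors are passed as "r,g,b" strings, matching the preview script above.
    r, g, b = (int(c) for c in color.split(","))
    return f"\x1b[38;2;{r};{g};{b}m"

def print_rule(color: str) -> None:
    print(f"{_rgb(color)}{'─' * get_terminal_width()}\x1b[0m")

def print_table_row(key: str, value: str, color: str, key_width: int, value_width: int) -> None:
    wrapped = textwrap.wrap(value, max(value_width, 1)) or [""]
    print(f"{_rgb(color)}{key:<{key_width}}\x1b[0m {wrapped[0]}")
    for line in wrapped[1:]:
        print(" " * (key_width + 1) + line)

def strip_markdown(text: str) -> str:
    # Drop the most common markdown/HTML markers found in synopsis text.
    text = re.sub(r"<br\s*/?>", "\n", text)
    text = re.sub(r"<[^>]+>", "", text)
    return re.sub(r"[*_~`]", "", text)

def wrap_text(text: str, width: int) -> str:
    return "\n".join(
        textwrap.fill(paragraph, width) if paragraph else ""
        for paragraph in text.splitlines()
    )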

View File

@@ -0,0 +1,211 @@
#!/usr/bin/env python3
#
# FZF Dynamic Search Script Template
#
# This script is a template for dynamic search functionality in fzf.
# The placeholders in curly braces, like {GRAPHQL_ENDPOINT}, are dynamically
# filled by Python using .replace() at runtime.
#
# FILTER SYNTAX:
# @genre:action,comedy Filter by genres
# @genre:!hentai Exclude genre
# @status:airing Status: airing, finished, upcoming, cancelled, hiatus
# @year:2024 Filter by year
# @season:winter winter, spring, summer, fall
# @format:tv,movie tv, movie, ova, ona, special
# @sort:score score, popularity, trending, date, title
# @score:>80 / @score:<50 Min/max score
# @onlist / @onlist:false Filter by list status
# @tag:isekai Filter by tags
import json
import sys
from pathlib import Path
from urllib import request
from urllib.error import URLError
# Import the filter parser
from _filter_parser import parse_filters
# --- Template Variables (Injected by Python) ---
GRAPHQL_ENDPOINT = "{GRAPHQL_ENDPOINT}"
SEARCH_RESULTS_FILE = Path("{SEARCH_RESULTS_FILE}")
LAST_QUERY_FILE = Path("{LAST_QUERY_FILE}")
AUTH_HEADER = "{AUTH_HEADER}"
# The GraphQL query is injected as a properly escaped JSON string
GRAPHQL_QUERY = "{GRAPHQL_QUERY}"
# --- Get Query from fzf ---
# fzf passes the current query as the first argument when using --bind change:reload
RAW_QUERY = sys.argv[1] if len(sys.argv) > 1 else ""
# Parse the query to extract filters and clean search text
QUERY, PARSED_FILTERS = parse_filters(RAW_QUERY)
# If query is empty and no filters, show help hint
if not RAW_QUERY.strip():
print("💡 Tip: Use @genre:action @status:airing for filters (type @help for syntax)")
sys.exit(0)
# Show filter help if requested
if RAW_QUERY.strip().lower() in ("@help", "@?", "@h"):
from _filter_parser import get_help_text
print(get_help_text())
sys.exit(0)
# If we only have filters (no search text), that's valid - we'll search with filters only
# But if we have neither query nor filters, we already showed the help hint above
def make_graphql_request(
endpoint: str, query: str, variables: dict, auth_token: str = ""
) -> tuple[dict | None, str | None]:
"""
Make a GraphQL request to the specified endpoint.
Args:
endpoint: GraphQL API endpoint URL
query: GraphQL query string
variables: Query variables as a dictionary
auth_token: Optional authorization token (Bearer token)
Returns:
Tuple of (Response JSON, error message) - one will be None
"""
payload = {"query": query, "variables": variables}
headers = {"Content-Type": "application/json", "User-Agent": "viu/1.0"}
if auth_token:
headers["Authorization"] = auth_token
try:
req = request.Request(
endpoint,
data=json.dumps(payload).encode("utf-8"),
headers=headers,
method="POST",
)
with request.urlopen(req, timeout=10) as response:
return json.loads(response.read().decode("utf-8")), None
except URLError as e:
return None, f"Network error: {e.reason}"
except json.JSONDecodeError as e:
return None, f"Invalid response: {e}"
except Exception as e:
return None, f"Request error: {e}"
def extract_title(media_item: dict) -> str:
"""
Extract the best available title from a media item.
Args:
media_item: Media object from GraphQL response
Returns:
Title string (english > romaji > native > "Unknown")
"""
title_obj = media_item.get("title", {})
return (
title_obj.get("english")
or title_obj.get("romaji")
or title_obj.get("native")
or "Unknown"
)
def main():
# Ensure parent directory exists
SEARCH_RESULTS_FILE.parent.mkdir(parents=True, exist_ok=True)
# Base GraphQL variables
variables = {
"type": "ANIME",
"per_page": 50,
"genre_not_in": ["Hentai"], # Default exclusion
}
# Add search query if provided
if QUERY:
variables["query"] = QUERY
# Apply parsed filters from the filter syntax
for key, value in PARSED_FILTERS.items():
# Handle array merging for _in and _not_in fields
if key.endswith("_in") or key.endswith("_not_in"):
if key in variables:
# Merge arrays, avoiding duplicates
existing = set(variables[key])
existing.update(value)
variables[key] = list(existing)
else:
variables[key] = value
else:
variables[key] = value
# Make the GraphQL request
response, error = make_graphql_request(
GRAPHQL_ENDPOINT, GRAPHQL_QUERY, variables, AUTH_HEADER
)
if error:
print(f"{error}")
# Also show what we tried to search, to aid debugging
print(f" Query: {QUERY or '(none)'}")
print(f" Filters: {json.dumps(PARSED_FILTERS) if PARSED_FILTERS else '(none)'}")
sys.exit(1)
if response is None:
print("❌ Search failed: No response received")
sys.exit(1)
# Check for GraphQL errors first (these come in the response body)
if "errors" in response:
errors = response["errors"]
if errors:
# Extract error messages
error_msgs = [e.get("message", str(e)) for e in errors]
print(f"❌ API Error: {'; '.join(error_msgs)}")
# Show variables for debugging
print(f" Filters used: {json.dumps(PARSED_FILTERS, indent=2) if PARSED_FILTERS else '(none)'}")
sys.exit(1)
# Save the raw response for later processing by dynamic_search.py
try:
with open(SEARCH_RESULTS_FILE, "w", encoding="utf-8") as f:
json.dump(response, f, ensure_ascii=False, indent=2)
# Also save the raw query so it can be restored when going back
with open(LAST_QUERY_FILE, "w", encoding="utf-8") as f:
f.write(RAW_QUERY)
except IOError as e:
print(f"❌ Failed to save results: {e}")
sys.exit(1)
# Navigate the response structure
data = response.get("data", {})
page = data.get("Page", {})
media_list = page.get("media", [])
if not media_list:
print("🔍 No results found")
if PARSED_FILTERS:
print(" Try adjusting your filters")
sys.exit(0)
# Output titles for fzf (one per line)
for media in media_list:
title = extract_title(media)
print(title)
if __name__ == "__main__":
try:
main()
except KeyboardInterrupt:
sys.exit(0)
except Exception as e:
print(f"❌ Unexpected error: {type(e).__name__}: {e}")
sys.exit(1)
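
The filter tokens documented in the header are turned into GraphQL variables by _filter_parser.parse_filters, which is not shown in this excerpt. Judging by how main() consumes PARSED_FILTERS (AniList-style keys, with list-valued keys ending in _in / _not_in merged against the defaults), a plausible parse looks like the sketch below; the exact key names and value normalisation are assumptions.

# Illustrative behaviour only; the real parser in _filter_parser may differ.
from _filter_parser import parse_filters

query, filters = parse_filters("frieren @genre:fantasy,drama @genre:!hentai @status:airing @score:>80")

# query   -> "frieren"                      (filter tokens stripped from the search text)
# filters -> {
#     "genre_in": ["Fantasy", "Drama"],     # @genre:a,b      inclusion list
#     "genre_not_in": ["Hentai"],           # @genre:!x       exclusion list
#     "status": "RELEASING",                # @status:airing  mapped to the AniList enum
#     "averageScore_greater": 80,           # @score:>80
# }
#
# main() then unions list-valued keys with its defaults, so the built-in
# genre_not_in = ["Hentai"] merges with any user-supplied exclusions before
# the variables are posted to GRAPHQL_ENDPOINT.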

View File

@@ -0,0 +1,9 @@
from .cli import cli as run_cli
import sys
import os
if sys.platform.startswith("win"):
os.environ.setdefault("PYTHONUTF8", "1")
__all__ = ["run_cli"]

249
viu_media/cli/cli.py Normal file
View File

@@ -0,0 +1,249 @@
import logging
import shutil
import sys
from typing import TYPE_CHECKING
import click
from click.core import ParameterSource
from ..core.config import AppConfig
from ..core.constants import CLI_NAME, USER_CONFIG, __version__
from .config import ConfigLoader
from .options import options_from_model
from .utils.exception import setup_exceptions_handler
from .utils.lazyloader import LazyGroup
from .utils.logging import setup_logging
if TYPE_CHECKING:
from typing import TypedDict
from typing_extensions import Unpack
class Options(TypedDict):
no_config: bool | None
trace: bool | None
dev: bool | None
log: bool | None
rich_traceback: bool | None
rich_traceback_theme: str
logger = logging.getLogger(__name__)
commands = {
"config": "config.config",
"search": "search.search",
"anilist": "anilist.anilist",
"download": "download.download",
"update": "update.update",
"registry": "registry.registry",
"worker": "worker.worker",
"queue": "queue.queue",
"completions": "completions.completions",
}
@click.group(
cls=LazyGroup,
root="viu_media.cli.commands",
invoke_without_command=True,
lazy_subcommands=commands,
context_settings=dict(auto_envvar_prefix=CLI_NAME),
)
@click.version_option(__version__, "--version")
@click.option("--no-config", is_flag=True, help="Don't load the user config file.")
@click.option(
"--trace", is_flag=True, help="Controls Whether to display tracebacks or not"
)
@click.option("--dev", is_flag=True, help="Controls Whether the app is in dev mode")
@click.option("--log", is_flag=True, help="Controls Whether to log")
@click.option(
"--rich-traceback",
is_flag=True,
help="Controls Whether to display a rich traceback",
)
@click.option(
"--rich-traceback-theme",
default="github-dark",
help="Controls Whether to display a rich traceback",
)
@options_from_model(AppConfig)
@click.pass_context
def cli(ctx: click.Context, **options: "Unpack[Options]"):
"""
The main entry point for the Viu CLI.
"""
setup_logging(options["log"])
setup_exceptions_handler(
options["trace"],
options["dev"],
options["rich_traceback"],
options["rich_traceback_theme"],
)
logger.info(f"Current Command: {' '.join(sys.argv)}")
cli_overrides = {}
param_lookup = {p.name: p for p in ctx.command.params}
for param_name, param_value in ctx.params.items():
source = ctx.get_parameter_source(param_name)
if source in (ParameterSource.ENVIRONMENT, ParameterSource.COMMANDLINE):
parameter = param_lookup.get(param_name)
if (
parameter
and hasattr(parameter, "model_name")
and hasattr(parameter, "field_name")
):
model_name = getattr(parameter, "model_name")
field_name = getattr(parameter, "field_name")
if model_name not in cli_overrides:
cli_overrides[model_name] = {}
cli_overrides[model_name][field_name] = param_value
loader = ConfigLoader(config_path=USER_CONFIG)
config = (
AppConfig.model_validate(cli_overrides)
if options["no_config"]
else loader.load(cli_overrides)
)
ctx.obj = config
if config.general.welcome_screen:
import time
from ..core.constants import APP_CACHE_DIR, USER_NAME, SUPPORT_PROJECT_URL
last_welcomed_at_file = APP_CACHE_DIR / ".last_welcome"
should_welcome = False
if last_welcomed_at_file.exists():
try:
last_welcomed_at = float(
last_welcomed_at_file.read_text(encoding="utf-8")
)
# runs once a month
if (time.time() - last_welcomed_at) > 30 * 24 * 3600:
should_welcome = True
except Exception as e:
logger.warning(f"Failed to read welcome screen timestamp: {e}")
else:
should_welcome = True
if should_welcome:
last_welcomed_at_file.write_text(str(time.time()), encoding="utf-8")
from rich.prompt import Confirm
if Confirm.ask(f"""\
[green]How are you, {USER_NAME} 🙂?
If you enjoy the project and would like to support it, you can buy me a coffee at {SUPPORT_PROJECT_URL}.
Would you like to open the support page? Select yes to continue — otherwise, enjoy your terminal-anime browsing experience 😁.[/]
You can disable this message by turning off the welcome_screen option in the config. It only appears once a month.
"""):
from webbrowser import open
open(SUPPORT_PROJECT_URL)
if config.general.show_new_release:
import time
from ..core.constants import APP_CACHE_DIR
last_release_file = APP_CACHE_DIR / ".last_release"
should_print_release_notes = False
if last_release_file.exists():
last_release = last_release_file.read_text(encoding="utf-8")
current_version = list(map(int, __version__.replace("v", "").split(".")))
last_saved_version = list(
map(int, last_release.replace("v", "").split("."))
)
if (
(current_version[0] > last_saved_version[0])
or (
current_version[1] > last_saved_version[1]
and current_version[0] == last_saved_version[0]
)
or (
current_version[2] > last_saved_version[2]
and current_version[0] == last_saved_version[0]
and current_version[1] == last_saved_version[1]
)
):
should_print_release_notes = True
else:
should_print_release_notes = True
if should_print_release_notes:
last_release_file.write_text(__version__, encoding="utf-8")
from .service.feedback import FeedbackService
from .utils.update import check_for_updates, print_release_json, update_app
from rich.prompt import Confirm
feedback = FeedbackService(config)
feedback.info("Getting release notes...")
is_latest, release_json = check_for_updates()
if Confirm.ask(
"Would you also like to update your config with the latest options and config notes"
):
import subprocess
_cli_cmd_name = "viu" if not shutil.which("viu-media") else "viu-media"
cmd = [_cli_cmd_name, "config", "--update"]
print(f"running '{' '.join(cmd)}'...")
subprocess.run(cmd)
if is_latest:
print_release_json(release_json)
else:
print_release_json(release_json)
print("It seems theres another update waiting for you as well 😁")
click.pause("Press Any Key To Proceed...")
if config.general.check_for_updates:
import time
from ..core.constants import APP_CACHE_DIR
last_updated_at_file = APP_CACHE_DIR / ".last_update"
should_check_for_update = False
if last_updated_at_file.exists():
try:
last_updated_at_time = float(
last_updated_at_file.read_text(encoding="utf-8")
)
if (
time.time() - last_updated_at_time
) > config.general.update_check_interval * 3600:
should_check_for_update = True
except Exception as e:
logger.warning(f"Failed to check for update: {e}")
else:
should_check_for_update = True
if should_check_for_update:
last_updated_at_file.write_text(str(time.time()), encoding="utf-8")
from .service.feedback import FeedbackService
from .utils.update import check_for_updates, print_release_json, update_app
feedback = FeedbackService(config)
feedback.info("Checking for updates...")
is_latest, release_json = check_for_updates()
if not is_latest:
from ..libs.selectors.selector import create_selector
selector = create_selector(config)
if release_json and selector.confirm(
"Theres an update available would you like to see the release notes before deciding to update?"
):
print_release_json(release_json)
selector.ask("Enter to continue...")
if selector.confirm("Would you like to update?"):
update_app()
if ctx.invoked_subcommand is None:
from .commands.anilist import cmd
ctx.invoke(cmd.anilist)
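
The group above defers importing each subcommand until it is actually invoked: lazy_subcommands maps a command name to a dotted "module.attribute" path relative to the root package, so a bare `viu --version` never pays for importing the download or registry machinery. The project's LazyGroup lives elsewhere in the tree; a minimal sketch of the pattern it appears to implement, in the spirit of Click's lazy-loading recipe, is:

# Minimal sketch of a lazy-loading click.Group; the project's LazyGroup may differ.
import importlib
import click

class LazyGroup(click.Group):
    def __init__(self, *args, root: str = "", lazy_subcommands: dict[str, str] | None = None, **kwargs):
        super().__init__(*args, **kwargs)
        self.root = root                                  # e.g. "viu_media.cli.commands"
        self.lazy_subcommands = lazy_subcommands or {}    # e.g. {"search": "search.search"}

    def list_commands(self, ctx: click.Context) -> list[str]:
        return sorted({*super().list_commands(ctx), *self.lazy_subcommands})

    def get_command(self, ctx: click.Context, cmd_name: str) -> click.Command | None:
        target = self.lazy_subcommands.get(cmd_name)
        if target is None:
            return super().get_command(ctx, cmd_name)
        module_path, attr = target.rsplit(".", 1)
        module = importlib.import_module(f"{self.root}.{module_path}")
        return getattr(module, attr)                      # the click.Command, imported on demand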

View File

@@ -18,7 +18,7 @@ commands = {
@click.group(
cls=LazyGroup,
name="anilist",
root="viu_cli.cli.commands.anilist.commands",
root="viu_media.cli.commands.anilist.commands",
invoke_without_command=True,
help="A beautiful interface that gives you access to a commplete streaming experience",
short_help="Access all streaming options",

View File

@@ -1,25 +1,72 @@
import click
import webbrowser
from pathlib import Path
import click
from .....core.config.model import AppConfig
def _get_token(feedback, selector, token_input: str | None) -> str | None:
"""
Retrieves the authentication token from a file path, a direct string, or an interactive prompt.
"""
if token_input:
path = Path(token_input)
if path.is_file():
try:
token = path.read_text().strip()
if not token:
feedback.error(f"Token file is empty: {path}")
return None
return token
except Exception as e:
feedback.error(f"Error reading token from file: {e}")
return None
return token_input
from .....core.constants import ANILIST_AUTH
open_success = webbrowser.open(ANILIST_AUTH, new=2)
if open_success:
feedback.info("Your browser has been opened to obtain an AniList token.")
feedback.info(
f"Or you can visit the site manually [magenta][link={ANILIST_AUTH}]here[/link][/magenta]."
)
else:
feedback.warning(
f"Failed to open the browser. Please visit the site manually [magenta][link={ANILIST_AUTH}]here[/link][/magenta]."
)
feedback.info(
"After authorizing, copy the token from the address bar and paste it below."
)
return selector.ask("Enter your AniList Access Token")
@click.command(help="Login to your AniList account to enable progress tracking.")
@click.option("--status", "-s", is_flag=True, help="Check current login status.")
@click.option("--logout", "-l", is_flag=True, help="Log out and erase credentials.")
@click.argument("token_input", required=False, type=str)
@click.pass_obj
def auth(config: AppConfig, status: bool, logout: bool):
"""Handles user authentication and credential management."""
from .....core.constants import ANILIST_AUTH
def auth(config: AppConfig, status: bool, logout: bool, token_input: str | None):
"""
Handles user authentication and credential management.
This command allows you to log in to your AniList account to enable
progress tracking and other features.
You can provide your authentication token in three ways:
1. Interactively: Run the command without arguments to open a browser
and be prompted to paste the token.
2. As an argument: Pass the token string directly to the command.
$ viu anilist auth "your_token_here"
3. As a file: Pass the path to a text file containing the token.
$ viu anilist auth /path/to/token.txt
"""
from .....libs.media_api.api import create_api_client
from .....libs.selectors.selector import create_selector
from ....service.auth import AuthService
from ....service.feedback import FeedbackService
auth_service = AuthService("anilist")
feedback = FeedbackService(config)
selector = create_selector(config)
feedback.clear_console()
if status:
user_data = auth_service.get_auth()
@@ -29,6 +76,11 @@ def auth(config: AppConfig, status: bool, logout: bool):
feedback.error("Not logged in.")
return
from .....libs.selectors.selector import create_selector
selector = create_selector(config)
feedback.clear_console()
if logout:
if selector.confirm("Are you sure you want to log out and erase your token?"):
auth_service.clear_user_profile()
@@ -40,25 +92,14 @@ def auth(config: AppConfig, status: bool, logout: bool):
f"You are already logged in as {auth_profile.user_profile.name}.Would you like to relogin"
):
return
api_client = create_api_client("anilist", config)
token = _get_token(feedback, selector, token_input)
open_success = webbrowser.open(ANILIST_AUTH, new=2)
if open_success:
feedback.info("Your browser has been opened to obtain an AniList token.")
feedback.info(f"or you can visit the site manually [magenta][link={ANILIST_AUTH}]here[/link][/magenta].")
else:
feedback.warning(
f"Failed to open the browser. Please visit the site manually [magenta][link={ANILIST_AUTH}]here[/link][/magenta]."
)
feedback.info(
"After authorizing, copy the token from the address bar and paste it below."
)
token = selector.ask("Enter your AniList Access Token")
if not token:
feedback.error("Login cancelled.")
if not token_input:
feedback.error("Login cancelled.")
return
api_client = create_api_client("anilist", config)
# Use the API client to validate the token and get profile info
profile = api_client.authenticate(token.strip())

Some files were not shown because too many files have changed in this diff.