Compare commits

...

53 Commits

Author SHA1 Message Date
Daniel Dietzler
4b422bd0f7 disable metrics in dev env 2023-12-31 19:50:10 +01:00
Daniel Dietzler
f33a662f48 open api 2023-12-28 19:49:39 +01:00
Daniel Dietzler
0232655da2 revert individual settings for metrics, now only enable/disable 2023-12-28 19:49:32 +01:00
Daniel Dietzler
ac4c57247e add icon indicating when metrics are being shared 2023-12-28 19:29:21 +01:00
Daniel Dietzler
fb01bd956f open api 2023-12-28 19:06:45 +01:00
Daniel Dietzler
902d4d0275 settings for metrics 2023-12-28 19:06:35 +01:00
Daniel Dietzler
db997f9173 queue metrics job every 24 hours 2023-12-23 22:09:51 +01:00
Daniel Dietzler
e9197cde67 collect more metrics, move everything related to metrics repo 2023-12-23 21:50:20 +01:00
Daniel Dietzler
874f707c92 initial sample implementation of metrics 2023-12-23 21:50:18 +01:00
Daniel Dietzler
8fdd3aaed1 fix(server): init library scanning on start up (#5951) 2023-12-23 20:46:42 +00:00
Alex
aaa7a613b2 fix(web): cannot open detail panel in public shared link (#5946)
* fix(web): cannot open detail panel in public shared link

* fix websocket auth message

* refactor
2023-12-23 10:07:12 -06:00
Ikko Eltociear Ashimine
45cf3291a2 docs: fix README_ja_JP.md (#5939)
minor fix
2023-12-23 09:20:04 -06:00
renovate[bot]
612590feda fix(deps): update dependency pillow to v10 [security] (#5944) 2023-12-23 14:10:05 +00:00
Mert
19ea0ead85 renovate: use in-range-only strategy for python (#5937) 2023-12-23 09:04:36 -05:00
Alex
e47e25e671 fix(server): access system config before database migration complete (#5912) 2023-12-21 12:52:49 -06:00
Mert
4dd7412a86 pin python (#5904) 2023-12-21 11:54:56 -05:00
Mert
cc2dc12f6c fix(server): run migrations after database checks (#5832)
* run migrations after checks

* optional migrations

* only run checks in server and e2e

* re-add migrations for microservices

* refactor

* move e2e init

* remove assert from migration

* update providers

* update microservices app service

* fixed logging

* refactored version check, added unit tests

* more version tests

* don't use mocks for sut

* refactor tests

* suggest image only if postgres is 14, 15 or 16

* review suggestions

* fixed regexp escape

* fix typing

* update migration
2023-12-21 10:06:26 -06:00
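For illustration, a minimal sketch of the kind of Postgres major-version check described in this commit ("suggest image only if postgres is 14, 15 or 16"), written against the standard `pg` client; the function name, log text, and suggested image tag are assumptions, not the actual Immich implementation:

```typescript
// Sketch only: read the server version and warn unless the major is 14, 15 or 16.
// Names and messages here are hypothetical.
import { Client } from 'pg';

const SUPPORTED_MAJORS = [14, 15, 16];

export async function checkPostgresVersion(connectionString: string): Promise<void> {
  const client = new Client({ connectionString });
  await client.connect();
  try {
    const { rows } = await client.query('SHOW server_version');
    const serverVersion: string = rows[0].server_version;
    const match = serverVersion.match(/^(\d+)/); // capture the leading major version number
    const major = match ? Number(match[1]) : undefined;
    if (major !== undefined && SUPPORTED_MAJORS.includes(major)) {
      console.log(`Postgres ${major} is supported; a matching tensorchord/pgvecto-rs:pg${major} image can be used.`);
    } else {
      console.warn(`Postgres ${serverVersion} is outside the tested range (14-16).`);
    }
  } finally {
    await client.end();
  }
}
```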
waclaw66
2790a46703 pin fix (#5909) 2023-12-21 09:28:45 -06:00
Mert
f602295bf9 chore(dev): move envs to image (#5906) 2023-12-21 09:28:23 -06:00
Mert
092a23fd7f feat(server,ml): remove image tagging (#5903)
* remove image tagging

* updated lock

* fixed tests, improved logging

* be nice

* fixed tests
2023-12-20 20:47:56 -05:00
shenlong
154292242f fix(mobile): use proper id for gallery_viewer hero attribute (#5894)
Co-authored-by: shenlong-tanwen <139912620+shalong-tanwen@users.noreply.github.com>
2023-12-20 11:23:17 -06:00
Jonathan Jogenfors
8295542941 feat(cli): Add existing assets to album and allow album name (#5838)
* Allow building and installing cli

* feat: add format fix

* docs: remove cli folder

* feat: use immich scoped package

* feat: rewrite cli readme

* docs: add info on running without building

* cleanup

* chore: remove import functionality from cli

* feat: add logout to cli

* docs: add todo for file format from server

* docs: add compilation step to cli

* fix: success message spacing

* feat: can create albums

* fix: add check step to cli

* fix: typos

* feat: pull file formats from server

* chore: use crawl service from server

* chore: fix lint

* docs: add cli documentation

* chore: rename ignore pattern

* chore: add version number to cli

* feat: use sdk

* fix: cleanup

* feat: album name on windows

* chore: remove skipped asset field

* feat: add more info to server-info command

* chore: cleanup

* wip

* chore: remove unneeded packages

* e2e test can start

* git ignore for geocode in cli

* add cli e2e to github actions

* can do e2e tests in the cli

* simplify e2e test

* cleanup

* set matrix strategy in workflow

* run npm ci in server

* choose different working directory

* check out submodules too

* increase test timeout

* set node version

* cli docker e2e tests

* fix cli docker file

* run cli e2e in correct folder

* set docker context

* correct docker build

* remove cli from dockerignore

* chore: fix docs links

* feat: add cli v2 milestone

* fix: set correct cli date

* remove submodule

* chore: add npmignore

* chore(cli): push to npm

* fix: server e2e

* run npm ci in server

* remove state from e2e

* run npm ci in server

* reshuffle docker compose files

* use new e2e composes in makefile

* increase test timeout to 10 minutes

* make github actions run makefile e2e tests

* cleanup github test names

* assert on server version

* chore: split cli e2e tests into one file per command

* chore: set cli release working dir

* chore: add repo url to npmjs

* chore: bump node setup to v4

* chore: normalize the github url

* check e2e code in lint

* fix lint

* test key login flow

* feat: allow configurable config dir

* fix session service tests

* create missing dir

* cleanup

* bump cli version to 2.0.4

* remove form-data

* feat: allow single files as argument

* add version option

* bump dependencies

* fix lint

* wip use axios as upload

* version bump

* cApiTALiZaTiON

* don't touch package lock

* wip: don't use job queues

* don't use make for cli e2e

* fix server e2e

* feat: always skip hashing when adding albums

* feat: create album with specific name

* check asset duplication before adding to album

* update documentation

* use correct check for when to create albums

---------

Co-authored-by: Alex <alex.tran1502@gmail.com>
Co-authored-by: Jason Rasmussen <jrasm91@gmail.com>
2023-12-20 09:51:53 -06:00
André Pinto
4505ebc315 fix(mobile): Fix pt-PT locale. Add missing pt-PT localizely entry (#5892) 2023-12-20 09:46:20 -06:00
Jan
19d66296fe chore(web): redirect share page redirect to base path (#5889) 2023-12-20 09:06:23 -06:00
martin
64176d2ff4 fix(web): multiple small issues on the web app (#5875) 2023-12-19 15:56:55 -06:00
Alex
cabc2d57dd Revert "chore(mobile): translation update (#5867)" (#5871)
This reverts commit 4e06ccd052.
2023-12-19 13:17:20 -06:00
Jonathan Jogenfors
5a3a2c7293 feat(cli): Allow uploading a single file (#5837)
* Allow building and installing cli

* feat: add format fix

* docs: remove cli folder

* feat: use immich scoped package

* feat: rewrite cli readme

* docs: add info on running without building

* cleanup

* chore: remove import functionality from cli

* feat: add logout to cli

* docs: add todo for file format from server

* docs: add compilation step to cli

* fix: success message spacing

* feat: can create albums

* fix: add check step to cli

* fix: typos

* feat: pull file formats from server

* chore: use crawl service from server

* chore: fix lint

* docs: add cli documentation

* chore: rename ignore pattern

* chore: add version number to cli

* feat: use sdk

* fix: cleanup

* feat: album name on windows

* chore: remove skipped asset field

* feat: add more info to server-info command

* chore: cleanup

* wip

* chore: remove unneeded packages

* e2e test can start

* git ignore for geocode in cli

* add cli e2e to github actions

* can do e2e tests in the cli

* simplify e2e test

* cleanup

* set matrix strategy in workflow

* run npm ci in server

* choose different working directory

* check out submodules too

* increase test timeout

* set node version

* cli docker e2e tests

* fix cli docker file

* run cli e2e in correct folder

* set docker context

* correct docker build

* remove cli from dockerignore

* chore: fix docs links

* feat: add cli v2 milestone

* fix: set correct cli date

* remove submodule

* chore: add npmignore

* chore(cli): push to npm

* fix: server e2e

* run npm ci in server

* remove state from e2e

* run npm ci in server

* reshuffle docker compose files

* use new e2e composes in makefile

* increase test timeout to 10 minutes

* make github actions run makefile e2e tests

* cleanup github test names

* assert on server version

* chore: split cli e2e tests into one file per command

* chore: set cli release working dir

* chore: add repo url to npmjs

* chore: bump node setup to v4

* chore: normalize the github url

* check e2e code in lint

* fix lint

* test key login flow

* feat: allow configurable config dir

* fix session service tests

* create missing dir

* cleanup

* bump cli version to 2.0.4

* remove form-data

* feat: allow single files as argument

* add version option

* bump dependencies

* fix lint

* wip use axios as upload

* version bump

* cApiTALiZaTiON

* don't touch package lock

* wip: don't use job queues

* don't use make for cli e2e

* fix server e2e

* feat: can upload single file

* fix upload options dto

---------

Co-authored-by: Alex <alex.tran1502@gmail.com>
Co-authored-by: Jason Rasmussen <jrasm91@gmail.com>
2023-12-19 13:15:11 -06:00
martin
7e216809f3 fix(server): remove shared link with removed asset (#5845) 2023-12-19 11:05:18 -06:00
martin
81af48af7b fix(web): open image in new tab with memories on firefox (#5847)
* fix: open image in new tab with memories on firefox

* don't use z-index

---------

Co-authored-by: Alex <alex.tran1502@gmail.com>
2023-12-19 11:01:22 -06:00
Alex
4e06ccd052 chore(mobile): translation update (#5867) 2023-12-19 10:45:11 -06:00
Mohamed BOUSSAID
234449f3c6 fix(server, web): Prevent the user from setting a future date of birth (#5803)
* Hide the person age if it is negative

* Add validation to prevent future birth dates

* Add comment

* Add test, Add birth date validation and update birth date modal

* Add birthDate validation in PersonService and SetBirthDateModal

* Running npm run format:fix

* Generating the migration file properly, and make the birthdate form logic simpler

* Make birthDate type only string

* Adding useLocationPin back
2023-12-19 10:07:38 -06:00
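As an illustration of the validation described above, a minimal NestJS-style sketch of a server-side guard against future birth dates; the DTO shape and function name are assumptions, not the actual PersonService code:

```typescript
// Sketch only: reject birth dates that lie in the future.
import { BadRequestException } from '@nestjs/common';

interface PersonUpdateDto {
  birthDate?: string | null; // ISO 8601 date string, e.g. '1990-05-01'
}

export function assertValidBirthDate(dto: PersonUpdateDto): void {
  if (!dto.birthDate) {
    return; // clearing or omitting the birth date is allowed
  }
  const birthDate = new Date(dto.birthDate);
  if (Number.isNaN(birthDate.getTime())) {
    throw new BadRequestException('Invalid birth date');
  }
  if (birthDate > new Date()) {
    throw new BadRequestException('Birth date cannot be in the future');
  }
}
```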
RenautMestdagh
95a7bf7fac Update README_nl_NL.md (#5840) 2023-12-19 10:02:25 -06:00
waclaw66
1c69dff967 feat(web): bigger dialog box of location change (#5862) 2023-12-19 09:49:09 -06:00
Jonathan Jogenfors
f4c5bdfa1c fix(cli): don't open too many files (#5841)
* Allow building and installing cli

* feat: add format fix

* docs: remove cli folder

* feat: use immich scoped package

* feat: rewrite cli readme

* docs: add info on running without building

* cleanup

* chore: remove import functionality from cli

* feat: add logout to cli

* docs: add todo for file format from server

* docs: add compilation step to cli

* fix: success message spacing

* feat: can create albums

* fix: add check step to cli

* fix: typos

* feat: pull file formats from server

* chore: use crawl service from server

* chore: fix lint

* docs: add cli documentation

* chore: rename ignore pattern

* chore: add version number to cli

* feat: use sdk

* fix: cleanup

* feat: album name on windows

* chore: remove skipped asset field

* feat: add more info to server-info command

* chore: cleanup

* wip

* chore: remove unneeded packages

* e2e test can start

* git ignore for geocode in cli

* add cli e2e to github actions

* can do e2e tests in the cli

* simplify e2e test

* cleanup

* set matrix strategy in workflow

* run npm ci in server

* choose different working directory

* check out submodules too

* increase test timeout

* set node version

* cli docker e2e tests

* fix cli docker file

* run cli e2e in correct folder

* set docker context

* correct docker build

* remove cli from dockerignore

* chore: fix docs links

* feat: add cli v2 milestone

* fix: set correct cli date

* remove submodule

* chore: add npmignore

* chore(cli): push to npm

* fix: server e2e

* run npm ci in server

* remove state from e2e

* run npm ci in server

* reshuffle docker compose files

* use new e2e composes in makefile

* increase test timeout to 10 minutes

* make github actions run makefile e2e tests

* cleanup github test names

* assert on server version

* chore: split cli e2e tests into one file per command

* chore: set cli release working dir

* chore: add repo url to npmjs

* chore: bump node setup to v4

* chore: normalize the github url

* check e2e code in lint

* fix lint

* test key login flow

* feat: allow configurable config dir

* fix session service tests

* create missing dir

* cleanup

* bump cli version to 2.0.4

* remove form-data

* feat: allow single files as argument

* add version option

* bump dependencies

* fix lint

* wip use axios as upload

* version bump

* cApiTALiZaTiON

* don't touch package lock

* wip: don't use job queues

* don't use make for cli e2e

* fix server e2e

* chore: remove old gha step

* add npm ci to server

* feat: use graceful-fs

---------

Co-authored-by: Alex <alex.tran1502@gmail.com>
Co-authored-by: Jason Rasmussen <jrasm91@gmail.com>
2023-12-19 09:40:29 -06:00
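The fix here boils down to the "feat: use graceful-fs" step: `graceful-fs` mirrors the `node:fs` API but queues `open()` calls instead of failing with EMFILE when too many files are open at once. A minimal sketch of the drop-in replacement (the file path is just a placeholder):

```typescript
// graceful-fs mirrors the node:fs API, so existing calls keep working.
import * as fs from 'graceful-fs';

const data = fs.readFileSync('photo.jpg'); // queued under load instead of throwing EMFILE
```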
Alex The Bot
b40859551b Version v1.91.4 2023-12-19 03:34:19 +00:00
Jonathan Jogenfors
4e9b96ff1a test(cli): e2e testing (#5101)
* Allow building and installing cli

* feat: add format fix

* docs: remove cli folder

* feat: use immich scoped package

* feat: rewrite cli readme

* docs: add info on running without building

* cleanup

* chore: remove import functionality from cli

* feat: add logout to cli

* docs: add todo for file format from server

* docs: add compilation step to cli

* fix: success message spacing

* feat: can create albums

* fix: add check step to cli

* fix: typos

* feat: pull file formats from server

* chore: use crawl service from server

* chore: fix lint

* docs: add cli documentation

* chore: rename ignore pattern

* chore: add version number to cli

* feat: use sdk

* fix: cleanup

* feat: album name on windows

* chore: remove skipped asset field

* feat: add more info to server-info command

* chore: cleanup

* wip

* chore: remove unneeded packages

* e2e test can start

* git ignore for geocode in cli

* add cli e2e to github actions

* can do e2e tests in the cli

* simplify e2e test

* cleanup

* set matrix strategy in workflow

* run npm ci in server

* choose different working directory

* check out submodules too

* increase test timeout

* set node version

* cli docker e2e tests

* fix cli docker file

* run cli e2e in correct folder

* set docker context

* correct docker build

* remove cli from dockerignore

* chore: fix docs links

* feat: add cli v2 milestone

* fix: set correct cli date

* remove submodule

* chore: add npmignore

* chore(cli): push to npm

* fix: server e2e

* run npm ci in server

* remove state from e2e

* run npm ci in server

* reshuffle docker compose files

* use new e2e composes in makefile

* increase test timeout to 10 minutes

* make github actions run makefile e2e tests

* cleanup github test names

* assert on server version

* chore: split cli e2e tests into one file per command

* chore: set cli release working dir

* chore: add repo url to npmjs

* chore: bump node setup to v4

* chore: normalize the github url

* check e2e code in lint

* fix lint

* test key login flow

* feat: allow configurable config dir

* fix session service tests

* create missing dir

* cleanup

* bump cli version to 2.0.4

* remove form-data

* feat: allow single files as argument

* add version option

* bump dependencies

* fix lint

* wip use axios as upload

* version bump

* cApiTALiZaTiON

* don't touch package lock

* wip: don't use job queues

* don't use make for cli e2e

* fix server e2e

* chore: remove old gha step

* add npm ci to server

---------

Co-authored-by: Alex <alex.tran1502@gmail.com>
Co-authored-by: Jason Rasmussen <jrasm91@gmail.com>
2023-12-18 20:29:26 -06:00
martin
baed16dab6 fix(web): shared link background color on dark mode (#5846) 2023-12-18 20:26:55 -06:00
Jon Howell
a7b4727c20 feat(docs): Add a linear quick-start guide (#5812)
* feat(docs): Add a linear quick-start guide

* prettier

* fix: format

* removed unused text

---------

Co-authored-by: Alex Tran <alex.tran1502@gmail.com>
2023-12-18 20:45:49 +00:00
Alex
9834693fab fix(web): access /search throw error (#5834) 2023-12-18 14:42:25 -06:00
shenlong
085dc6cd93 fix(mobile): use safe area for gallery_viewer bottom sheet (#5831)
Co-authored-by: shenlong-tanwen <139912620+shalong-tanwen@users.noreply.github.com>
2023-12-18 11:22:06 -06:00
Mert
de1514a441 chore(server): startup check for pgvecto.rs (#5815)
* startup check for pgvecto.rs

* prefilter after assertion

* formatting

* add assert to migration

* more specific import

* use runner
2023-12-18 10:38:25 -06:00
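For illustration, a hedged sketch of what such a startup assertion could look like with TypeORM, querying the standard `pg_available_extensions` catalog; the function name and error text are assumptions, not the actual Immich check:

```typescript
// Sketch only: fail fast at startup if the pgvecto.rs extension ("vectors")
// is not installed in the configured database.
import { DataSource } from 'typeorm';

export async function assertVectorsExtension(dataSource: DataSource): Promise<void> {
  const rows: { installed_version: string | null }[] = await dataSource.query(
    `SELECT installed_version FROM pg_available_extensions WHERE name = 'vectors'`,
  );
  const version = rows[0]?.installed_version;
  if (!version) {
    throw new Error('The pgvecto.rs extension (vectors) is not installed in this database.');
  }
  console.log(`Found pgvecto.rs (vectors) extension version ${version}`);
}
```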
Alex
fade8b627f chore(web): display places on a single row (#5825) 2023-12-18 10:34:25 -06:00
Jason Rasmussen
d3e1572229 fix(server): file sending and cache control (#5829)
* fix: file sending

* fix: tests
2023-12-18 10:33:46 -06:00
Alex
ffc31f034c chore(mobile): handle delete file error (#5827) 2023-12-18 09:54:42 -06:00
Alex
3beeffaaf0 fix(server): metadata search does not return all EXIF info (#5810)
* docs: update default config content

* fix(server): metadata search does not return all EXIF info

* remove console log

* generate sql

* Correct sql generation
2023-12-18 07:13:36 -06:00
Ferdinand Mütsch
b68800d45c chore(docs): add caddy reverse proxy config example (#5777) 2023-12-18 02:22:59 +00:00
Mert
b520955d0e fix(server): add more conditions to smart search (#5806)
* add more asset conditions

* update sql
2023-12-17 20:17:30 -06:00
Mert
6e7b3d6f24 fix(server): fix metadata search not working (#5800)
* don't require ml

* update e2e

* fixes

* fix e2e

* add additional conditions

* select all exif columns

* more fixes

* update sql
2023-12-17 20:16:08 -06:00
Alex
c45e8cc170 fix(web): cannot open map cluster (#5797) 2023-12-17 20:13:55 -06:00
Michael Manganiello
c6f56d9591 chore(server): Check activity permissions in bulk (#5775)
Modify Access repository, to evaluate `asset` permissions in bulk.
This is the last set of permission changes, to migrate all of them to
run in bulk!
Queries have been validated to match what they currently generate for single ids.

Queries:

* `activity` owner access:

```sql
-- Before
SELECT 1 AS "row_exists" FROM (SELECT 1 AS dummy_column) "dummy_table" WHERE EXISTS (
  SELECT 1
  FROM "activity" "ActivityEntity"
  WHERE
    "ActivityEntity"."id" = $1
    AND "ActivityEntity"."userId" = $2
)
LIMIT 1

-- After
SELECT "ActivityEntity"."id" AS "ActivityEntity_id"
FROM "activity" "ActivityEntity"
WHERE
  "ActivityEntity"."id" IN ($1)
  AND "ActivityEntity"."userId" = $2
```

* `activity` album owner access:

```sql
-- Before
SELECT 1 AS "row_exists" FROM (SELECT 1 AS dummy_column) "dummy_table" WHERE EXISTS (
  SELECT 1
  FROM "activity" "ActivityEntity"
    LEFT JOIN "albums" "ActivityEntity__ActivityEntity_album"
      ON "ActivityEntity__ActivityEntity_album"."id"="ActivityEntity"."albumId"
      AND "ActivityEntity__ActivityEntity_album"."deletedAt" IS NULL
  WHERE
    "ActivityEntity"."id" = $1
    AND "ActivityEntity__ActivityEntity_album"."ownerId" = $2
)
LIMIT 1

-- After
SELECT "ActivityEntity"."id" AS "ActivityEntity_id"
FROM "activity" "ActivityEntity"
  LEFT JOIN "albums" "ActivityEntity__ActivityEntity_album"
    ON "ActivityEntity__ActivityEntity_album"."id"="ActivityEntity"."albumId"
    AND "ActivityEntity__ActivityEntity_album"."deletedAt" IS NULL
WHERE
  "ActivityEntity"."id" IN ($1)
  AND "ActivityEntity__ActivityEntity_album"."ownerId" = $2
```

* `activity` create access:

```sql
-- Before
SELECT 1 AS "row_exists" FROM (SELECT 1 AS dummy_column) "dummy_table" WHERE EXISTS (
  SELECT 1
  FROM "albums" "AlbumEntity"
    LEFT JOIN "albums_shared_users_users" "AlbumEntity_AlbumEntity__AlbumEntity_sharedUsers"
      ON "AlbumEntity_AlbumEntity__AlbumEntity_sharedUsers"."albumsId"="AlbumEntity"."id"
    LEFT JOIN "users" "AlbumEntity__AlbumEntity_sharedUsers"
      ON "AlbumEntity__AlbumEntity_sharedUsers"."id"="AlbumEntity_AlbumEntity__AlbumEntity_sharedUsers"."usersId"
      AND "AlbumEntity__AlbumEntity_sharedUsers"."deletedAt" IS NULL
  WHERE
    (
      (
        "AlbumEntity"."id" = $1
        AND "AlbumEntity"."isActivityEnabled" = $2
        AND "AlbumEntity__AlbumEntity_sharedUsers"."id" = $3
      )
      OR (
        "AlbumEntity"."id" = $4
        AND "AlbumEntity"."isActivityEnabled" = $5
        AND "AlbumEntity"."ownerId" = $6
      )
    )
    AND "AlbumEntity"."deletedAt" IS NULL
)
LIMIT 1

-- After
SELECT "AlbumEntity"."id" AS "AlbumEntity_id"
FROM "albums" "AlbumEntity"
  LEFT JOIN "albums_shared_users_users" "AlbumEntity_AlbumEntity__AlbumEntity_sharedUsers"
    ON "AlbumEntity_AlbumEntity__AlbumEntity_sharedUsers"."albumsId"="AlbumEntity"."id"
  LEFT JOIN "users" "AlbumEntity__AlbumEntity_sharedUsers"
    ON "AlbumEntity__AlbumEntity_sharedUsers"."id"="AlbumEntity_AlbumEntity__AlbumEntity_sharedUsers"."usersId"
    AND "AlbumEntity__AlbumEntity_sharedUsers"."deletedAt" IS NULL
WHERE
  (
    (
      "AlbumEntity"."id" IN ($1)
      AND "AlbumEntity"."isActivityEnabled" = $2
      AND "AlbumEntity__AlbumEntity_sharedUsers"."id" = $3
    )
    OR (
      "AlbumEntity"."id" IN ($4)
      AND "AlbumEntity"."isActivityEnabled" = $5
      AND "AlbumEntity"."ownerId" = $6
    )
  )
  AND "AlbumEntity"."deletedAt" IS NULL
```
2023-12-17 12:10:21 -06:00
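To make the before/after concrete, a minimal TypeORM-style sketch of a bulk ownership check built on the `IN (...)` form above; the function signature and entity shape are assumptions, not the exact Immich AccessRepository code:

```typescript
// Sketch only: one query answers "which of these activity ids does this
// user own?" instead of one EXISTS query per id.
import { In, Repository } from 'typeorm';

interface ActivityEntity {
  id: string;
  userId: string;
}

export async function checkActivityOwnerAccess(
  repository: Repository<ActivityEntity>,
  userId: string,
  activityIds: Set<string>,
): Promise<Set<string>> {
  if (activityIds.size === 0) {
    return new Set();
  }
  const rows = await repository.find({
    select: { id: true },
    where: { id: In([...activityIds]), userId },
  });
  return new Set(rows.map((row) => row.id));
}
```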
Alex
691e20521d docs: update default config content (#5798) 2023-12-17 12:07:53 -06:00
Quek
27f8dd6040 doc: documentation of the Immich Flutter Architectural Pattern (#5748)
* Added Documentation of the Immich Flutter Architectural Pattern

* Update README.md

---------

Co-authored-by: Alex <alex.tran1502@gmail.com>
2023-12-17 17:51:03 +00:00
Mert
e3fa32ad23 fix(server): fix inconsistent explore queries (#5774)
* remove limits

* update sql
2023-12-17 11:04:35 -06:00
231 changed files with 5803 additions and 3629 deletions


@@ -1,5 +1,5 @@
.vscode/
cli/
design/
docker/
docs/
@@ -18,3 +18,8 @@ web/node_modules/
web/coverage/
web/.svelte-kit
web/build/
cli/node_modules
cli/.reverse-geocoding-dump/
cli/upload/
cli/dist/


@@ -21,7 +21,7 @@ jobs:
submodules: "recursive"
- name: Run e2e tests
run: docker compose -f ./docker/docker-compose.test.yml up --renew-anon-volumes --abort-on-container-exit --exit-code-from immich-server --remove-orphans --build
run: make test-server-e2e
doc-tests:
name: Docs
@@ -90,9 +90,13 @@ jobs:
- name: Checkout code
uses: actions/checkout@v4
- name: Run npm install
- name: Run npm install in cli
run: npm ci
- name: Run npm install in server
run: npm ci
working-directory: ./server
- name: Run linter
run: npm run lint
if: ${{ !cancelled() }}
@@ -109,6 +113,29 @@ jobs:
run: npm run test:cov
if: ${{ !cancelled() }}
cli-e2e-tests:
name: CLI (e2e)
runs-on: ubuntu-latest
defaults:
run:
working-directory: ./cli
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
submodules: "recursive"
- name: Run npm install in cli
run: npm ci
- name: Run npm install in server
run: npm ci
working-directory: ./server
- name: Run e2e tests
run: npm run test:e2e
web-unit-tests:
name: Web
runs-on: ubuntu-latest
@@ -182,7 +209,7 @@ jobs:
poetry run black --check app export
- name: Run mypy type checking
run: |
poetry run mypy --install-types --non-interactive --strict app/ export/
poetry run mypy --install-types --non-interactive --strict app/
- name: Run tests and coverage
run: |
poetry run pytest --cov app


@@ -16,8 +16,8 @@ stage:
pull-stage:
docker compose -f ./docker/docker-compose.staging.yml pull
test-e2e:
docker compose -f ./docker/docker-compose.test.yml up --renew-anon-volumes --abort-on-container-exit --exit-code-from immich-server --remove-orphans --build
test-server-e2e:
docker compose -f ./server/test/docker-compose.server-e2e.yml up --renew-anon-volumes --abort-on-container-exit --exit-code-from immich-server --remove-orphans --build
prod:
docker compose -f ./docker/docker-compose.prod.yml up --build -V --remove-orphans


@@ -102,7 +102,7 @@ Spec: Free-tier Oracle VM - Amsterdam - 2.4Ghz quad-core ARM64 CPU, 24GB RAM
私はこのプロジェクトにコミットしてきました。ドキュメントを更新し、新しい機能を追加し、バグを修正し続けるつもりですが、私ひとりではできません。だから、続けるためのモチベーションをさらに高めてくれる皆さんの助けが必要なのです。
[selfhosted.show - In the episode 'The-organization-must-いいえt-be-name is a Hostile Actor'](https://selfhosted.show/79?t=1418) のホストが言ったように、これはチームと私がやっていることの大規模な事業だ。そしていつの日か、フルタイムでこの仕事ができるようになりたいと思っています。
[selfhosted.show - In the episode 'The-organization-must-not-be-name is a Hostile Actor'](https://selfhosted.show/79?t=1418) のホストが言ったように、これはチームと私がやっていることの大規模な事業だ。そしていつの日か、フルタイムでこの仕事ができるようになりたいと思っています。
もし、あなたがこのプロジェクトに賛同し、このアプリを長く使い続けたいと思われるのであれば、以下のオプションから支援をご検討ください。


@@ -102,7 +102,7 @@ Spec: Free-tier Oracle VM - Amsterdam - 2.4Ghz quad-core ARM64 CPU, 24GB RAM
Ik ben trouw aan dit project en ik zal niet stoppen. Ik zal de documenten blijven bijwerken, nieuwe functies toevoegen en bugs oplossen. Maar ik kan het niet alleen. Ik heb dus jouw hulp nodig om mij extra motivatie te geven om door te gaan.
Als onze gastheren in de [selfhosted.show - In de aflevering 'The-organization-must-Neet-be-name is a Hostile Actor'](https://selfhosted.show/79?t=1418) zeiden, dit is een eNeerme onderneming van wat het team en ik doen. En ik zou dit graag fulltime willen doen, ik vraag jouw hulp om dat mogelijk te maken.
Als onze gastheren in de [selfhosted.show - In de aflevering 'The-organization-must-Neet-be-name is a Hostile Actor'](https://selfhosted.show/79?t=1418) zeiden, dit is een enorme onderneming van wat het team en ik doen. En ik zou dit graag fulltime willen doen, ik vraag jouw hulp om dat mogelijk te maken.
Als je denkt dat dit het juiste doel is en de app iets is dat je jezelf al heel lang ziet gebruiken, overweeg dan om het project te steunen met de onderstaande optie.

cli/.gitignore (vendored, 4 changed lines)

@@ -10,4 +10,6 @@ oclif.manifest.json
.vscode
.idea
/coverage/
/coverage/
.reverse-geocoding-dump/
upload/


@@ -1,4 +1,6 @@
**/*.spec.js
test/**
upload/**
.editorconfig
.eslintignore
.eslintrc.js

cli/Dockerfile (new file, 19 lines)

@@ -0,0 +1,19 @@
FROM ghcr.io/immich-app/base-server-dev:20231109 as test
WORKDIR /usr/src/app/server
COPY server/package.json server/package-lock.json ./
RUN npm ci
COPY ./server/ .
WORKDIR /usr/src/app/cli
COPY cli/package.json cli/package-lock.json ./
RUN npm ci
COPY ./cli/ .
FROM ghcr.io/immich-app/base-server-prod:20231109
VOLUME /usr/src/app/upload
EXPOSE 3001
ENTRYPOINT ["tini", "--", "/bin/sh"]

cli/package-lock.json (generated, 1932 changed lines): diff suppressed because it is too large


@@ -1,6 +1,6 @@
{
"name": "@immich/cli",
"version": "2.0.4",
"version": "2.0.5",
"description": "Command Line Interface (CLI) for Immich",
"main": "dist/index.js",
"bin": {
@@ -18,9 +18,11 @@
"commander": "^11.0.0",
"form-data": "^4.0.0",
"glob": "^10.3.1",
"graceful-fs": "^4.2.11",
"yaml": "^2.3.1"
},
"devDependencies": {
"@testcontainers/postgresql": "^10.4.0",
"@types/byte-size": "^8.1.0",
"@types/chai": "^4.3.5",
"@types/cli-progress": "^3.11.0",
@@ -37,6 +39,7 @@
"eslint-plugin-jest": "^27.2.2",
"eslint-plugin-prettier": "^5.0.0",
"eslint-plugin-unicorn": "^49.0.0",
"immich": "file:../server",
"jest": "^29.5.0",
"jest-extended": "^4.0.0",
"jest-message-util": "^29.5.0",
@@ -50,13 +53,15 @@
},
"scripts": {
"build": "tsc --project tsconfig.build.json",
"lint": "eslint \"src/**/*.ts\" --max-warnings 0",
"lint": "eslint \"src/**/*.ts\" \"test/**/*.ts\" --max-warnings 0",
"lint:fix": "npm run lint -- --fix",
"prepack": "npm run build",
"test": "jest",
"test:cov": "jest --coverage",
"format": "prettier --check .",
"format:fix": "prettier --write .",
"check": "tsc --noEmit"
"check": "tsc --noEmit",
"test:e2e": "NODE_OPTIONS='--experimental-vm-modules' jest --config test/e2e/jest-e2e.json --runInBand"
},
"jest": {
"clearMocks": true,
@@ -71,10 +76,15 @@
"^.+\\.ts$": "ts-jest"
},
"collectCoverageFrom": [
"<rootDir>/src/**/*.(t|j)s"
"<rootDir>/src/**/*.(t|j)s",
"!**/open-api/**"
],
"moduleNameMapper": {
"^@api(|/.*)$": "<rootDir>/src/api/$1"
"^@api(|/.*)$": "<rootDir>/src/api/$1",
"^@test(|/.*)$": "<rootDir>../server/test/$1",
"^@app/immich(|/.*)$": "<rootDir>../server/src/immich/$1",
"^@app/infra(|/.*)$": "<rootDir>../server/src/infra/$1",
"^@app/domain(|/.*)$": "<rootDir>../server/src/domain/$1"
},
"coverageDirectory": "./coverage",
"testEnvironment": "node"


@@ -4,7 +4,7 @@
* Immich
* Immich API
*
* The version of the OpenAPI document: 1.91.3
* The version of the OpenAPI document: 1.91.4
*
*
* NOTE: This class is auto generated by OpenAPI Generator (https://openapi-generator.tech).
@@ -373,12 +373,6 @@ export interface AllJobStatusResponseDto {
* @memberof AllJobStatusResponseDto
*/
'migration': JobStatusDto;
/**
*
* @type {JobStatusDto}
* @memberof AllJobStatusResponseDto
*/
'objectTagging': JobStatusDto;
/**
*
* @type {JobStatusDto}
@@ -1318,39 +1312,6 @@ export interface CheckExistingAssetsResponseDto {
*/
'existingIds': Array<string>;
}
/**
*
* @export
* @interface ClassificationConfig
*/
export interface ClassificationConfig {
/**
*
* @type {boolean}
* @memberof ClassificationConfig
*/
'enabled': boolean;
/**
*
* @type {number}
* @memberof ClassificationConfig
*/
'minScore': number;
/**
*
* @type {string}
* @memberof ClassificationConfig
*/
'modelName': string;
/**
*
* @type {ModelType}
* @memberof ClassificationConfig
*/
'modelType'?: ModelType;
}
/**
*
* @export
@@ -2015,7 +1976,6 @@ export const JobName = {
ThumbnailGeneration: 'thumbnailGeneration',
MetadataExtraction: 'metadataExtraction',
VideoConversion: 'videoConversion',
ObjectTagging: 'objectTagging',
RecognizeFaces: 'recognizeFaces',
SmartSearch: 'smartSearch',
BackgroundTask: 'backgroundTask',
@@ -2358,7 +2318,6 @@ export interface MergePersonDto {
*/
export const ModelType = {
ImageClassification: 'image-classification',
FacialRecognition: 'facial-recognition',
Clip: 'clip'
} as const;
@@ -3103,6 +3062,12 @@ export interface ServerFeaturesDto {
* @memberof ServerFeaturesDto
*/
'map': boolean;
/**
*
* @type {boolean}
* @memberof ServerFeaturesDto
*/
'metrics': boolean;
/**
*
* @type {boolean}
@@ -3139,12 +3104,6 @@ export interface ServerFeaturesDto {
* @memberof ServerFeaturesDto
*/
'sidecar': boolean;
/**
*
* @type {boolean}
* @memberof ServerFeaturesDto
*/
'tagImage': boolean;
/**
*
* @type {boolean}
@@ -3613,6 +3572,12 @@ export interface SystemConfigDto {
* @memberof SystemConfigDto
*/
'map': SystemConfigMapDto;
/**
*
* @type {SystemConfigMetricsDto}
* @memberof SystemConfigDto
*/
'metrics': SystemConfigMetricsDto;
/**
*
* @type {SystemConfigNewVersionCheckDto}
@@ -3803,12 +3768,6 @@ export interface SystemConfigJobDto {
* @memberof SystemConfigJobDto
*/
'migration': JobSettingsDto;
/**
*
* @type {JobSettingsDto}
* @memberof SystemConfigJobDto
*/
'objectTagging': JobSettingsDto;
/**
*
* @type {JobSettingsDto}
@@ -3911,12 +3870,6 @@ export interface SystemConfigLoggingDto {
* @interface SystemConfigMachineLearningDto
*/
export interface SystemConfigMachineLearningDto {
/**
*
* @type {ClassificationConfig}
* @memberof SystemConfigMachineLearningDto
*/
'classification': ClassificationConfig;
/**
*
* @type {CLIPConfig}
@@ -3967,6 +3920,19 @@ export interface SystemConfigMapDto {
*/
'lightStyle': string;
}
/**
*
* @export
* @interface SystemConfigMetricsDto
*/
export interface SystemConfigMetricsDto {
/**
*
* @type {boolean}
* @memberof SystemConfigMetricsDto
*/
'enabled': boolean;
}
/**
*
* @export
@@ -12751,6 +12717,109 @@ export class LibraryApi extends BaseAPI {
}
/**
* MetricsApi - axios parameter creator
* @export
*/
export const MetricsApiAxiosParamCreator = function (configuration?: Configuration) {
return {
/**
*
* @param {*} [options] Override http request option.
* @throws {RequiredError}
*/
getMetrics: async (options: AxiosRequestConfig = {}): Promise<RequestArgs> => {
const localVarPath = `/metrics`;
// use dummy base URL string because the URL constructor only accepts absolute URLs.
const localVarUrlObj = new URL(localVarPath, DUMMY_BASE_URL);
let baseOptions;
if (configuration) {
baseOptions = configuration.baseOptions;
}
const localVarRequestOptions = { method: 'GET', ...baseOptions, ...options};
const localVarHeaderParameter = {} as any;
const localVarQueryParameter = {} as any;
// authentication cookie required
// authentication api_key required
await setApiKeyToObject(localVarHeaderParameter, "x-api-key", configuration)
// authentication bearer required
// http bearer authentication required
await setBearerAuthToObject(localVarHeaderParameter, configuration)
setSearchParams(localVarUrlObj, localVarQueryParameter);
let headersFromBaseOptions = baseOptions && baseOptions.headers ? baseOptions.headers : {};
localVarRequestOptions.headers = {...localVarHeaderParameter, ...headersFromBaseOptions, ...options.headers};
return {
url: toPathString(localVarUrlObj),
options: localVarRequestOptions,
};
},
}
};
/**
* MetricsApi - functional programming interface
* @export
*/
export const MetricsApiFp = function(configuration?: Configuration) {
const localVarAxiosParamCreator = MetricsApiAxiosParamCreator(configuration)
return {
/**
*
* @param {*} [options] Override http request option.
* @throws {RequiredError}
*/
async getMetrics(options?: AxiosRequestConfig): Promise<(axios?: AxiosInstance, basePath?: string) => AxiosPromise<object>> {
const localVarAxiosArgs = await localVarAxiosParamCreator.getMetrics(options);
return createRequestFunction(localVarAxiosArgs, globalAxios, BASE_PATH, configuration);
},
}
};
/**
* MetricsApi - factory interface
* @export
*/
export const MetricsApiFactory = function (configuration?: Configuration, basePath?: string, axios?: AxiosInstance) {
const localVarFp = MetricsApiFp(configuration)
return {
/**
*
* @param {*} [options] Override http request option.
* @throws {RequiredError}
*/
getMetrics(options?: AxiosRequestConfig): AxiosPromise<object> {
return localVarFp.getMetrics(options).then((request) => request(axios, basePath));
},
};
};
/**
* MetricsApi - object-oriented interface
* @export
* @class MetricsApi
* @extends {BaseAPI}
*/
export class MetricsApi extends BaseAPI {
/**
*
* @param {*} [options] Override http request option.
* @throws {RequiredError}
* @memberof MetricsApi
*/
public getMetrics(options?: AxiosRequestConfig) {
return MetricsApiFp(this.configuration).getMetrics(options).then((request) => request(this.axios, this.basePath));
}
}
/**
* OAuthApi - axios parameter creator
* @export


@@ -4,7 +4,7 @@
* Immich
* Immich API
*
* The version of the OpenAPI document: 1.91.3
* The version of the OpenAPI document: 1.91.4
*
*
* NOTE: This class is auto generated by OpenAPI Generator (https://openapi-generator.tech).


@@ -4,7 +4,7 @@
* Immich
* Immich API
*
* The version of the OpenAPI document: 1.91.3
* The version of the OpenAPI document: 1.91.4
*
*
* NOTE: This class is auto generated by OpenAPI Generator (https://openapi-generator.tech).


@@ -4,7 +4,7 @@
* Immich
* Immich API
*
* The version of the OpenAPI document: 1.91.3
* The version of the OpenAPI document: 1.91.4
*
*
* NOTE: This class is auto generated by OpenAPI Generator (https://openapi-generator.tech).


@@ -4,7 +4,7 @@
* Immich
* Immich API
*
* The version of the OpenAPI document: 1.91.3
* The version of the OpenAPI document: 1.91.4
*
*
* NOTE: This class is auto generated by OpenAPI Generator (https://openapi-generator.tech).


@@ -1,10 +1,9 @@
import { ImmichApi } from '../api/client';
import path from 'node:path';
import { SessionService } from '../services/session.service';
import { LoginError } from '../cores/errors/login-error';
import { exit } from 'node:process';
import os from 'os';
import { ServerVersionResponseDto, UserResponseDto } from 'src/api/open-api';
import { BaseOptionsDto } from 'src/cores/dto/base-options-dto';
export abstract class BaseCommand {
protected sessionService!: SessionService;
@@ -12,14 +11,11 @@ export abstract class BaseCommand {
protected user!: UserResponseDto;
protected serverVersion!: ServerVersionResponseDto;
protected configDir;
protected authPath;
constructor() {
const userHomeDir = os.homedir();
this.configDir = path.join(userHomeDir, '.config/immich/');
this.sessionService = new SessionService(this.configDir);
this.authPath = path.join(this.configDir, 'auth.yml');
constructor(options: BaseOptionsDto) {
if (!options.config) {
throw new Error('Config directory is required');
}
this.sessionService = new SessionService(options.config);
}
public async connect(): Promise<void> {


@@ -2,7 +2,7 @@ import { Asset } from '../cores/models/asset';
import { CrawlService } from '../services';
import { UploadOptionsDto } from '../cores/dto/upload-options-dto';
import { CrawlOptionsDto } from '../cores/dto/crawl-options-dto';
import fs from 'node:fs';
import cliProgress from 'cli-progress';
import byteSize from 'byte-size';
import { BaseCommand } from '../cli/base-command';
@@ -15,8 +15,6 @@ export default class Upload extends BaseCommand {
public async run(paths: string[], options: UploadOptionsDto): Promise<void> {
await this.connect();
const deviceId = 'CLI';
const formatResponse = await this.immichApi.serverInfoApi.getSupportedMediaTypes();
const crawlService = new CrawlService(formatResponse.data.image, formatResponse.data.video);
@@ -24,15 +22,28 @@ export default class Upload extends BaseCommand {
crawlOptions.pathsToCrawl = paths;
crawlOptions.recursive = options.recursive;
crawlOptions.exclusionPatterns = options.exclusionPatterns;
crawlOptions.includeHidden = options.includeHidden;
const files: string[] = [];
for (const pathArgument of paths) {
const fileStat = await fs.promises.lstat(pathArgument);
if (fileStat.isFile()) {
files.push(pathArgument);
}
}
const crawledFiles: string[] = await crawlService.crawl(crawlOptions);
crawledFiles.push(...files);
if (crawledFiles.length === 0) {
console.log('No assets found, exiting');
return;
}
const assetsToUpload = crawledFiles.map((path) => new Asset(path, deviceId));
const assetsToUpload = crawledFiles.map((path) => new Asset(path));
const uploadProgress = new cliProgress.SingleBar(
{
@@ -51,6 +62,10 @@ export default class Upload extends BaseCommand {
// Compute total size first
await asset.process();
totalSize += asset.fileSize;
if (options.albumName) {
asset.albumName = options.albumName;
}
}
const existingAlbums = (await this.immichApi.albumApi.getAllAlbums()).data;
@@ -65,6 +80,10 @@ export default class Upload extends BaseCommand {
});
let skipUpload = false;
let skipAsset = false;
let existingAssetId: string | undefined = undefined;
if (!options.skipHash) {
const assetBulkUploadCheckDto = { assets: [{ id: asset.path, checksum: await asset.hash() }] };
@@ -73,14 +92,24 @@ export default class Upload extends BaseCommand {
});
skipUpload = checkResponse.data.results[0].action === 'reject';
const isDuplicate = checkResponse.data.results[0].reason === 'duplicate';
if (isDuplicate) {
existingAssetId = checkResponse.data.results[0].assetId;
}
skipAsset = skipUpload && !isDuplicate;
}
if (!skipUpload) {
if (!skipAsset) {
if (!options.dryRun) {
const formData = asset.getUploadFormData();
const res = await this.uploadAsset(formData);
if (!skipUpload) {
const formData = asset.getUploadFormData();
const res = await this.uploadAsset(formData);
existingAssetId = res.data.id;
}
if (options.album && asset.albumName) {
if ((options.album || options.albumName) && asset.albumName !== undefined) {
let album = existingAlbums.find((album) => album.albumName === asset.albumName);
if (!album) {
const res = await this.immichApi.albumApi.createAlbum({
@@ -90,7 +119,12 @@ export default class Upload extends BaseCommand {
existingAlbums.push(album);
}
await this.immichApi.albumApi.addAssetsToAlbum({ id: album.id, bulkIdsDto: { ids: [res.data.id] } });
if (existingAssetId) {
await this.immichApi.albumApi.addAssetsToAlbum({
id: album.id,
bulkIdsDto: { ids: [existingAssetId] },
});
}
}
}

cli/src/constants.ts (new file, 37 lines)

@@ -0,0 +1,37 @@
import pkg from '../package.json';
export interface ICLIVersion {
major: number;
minor: number;
patch: number;
}
export class CLIVersion implements ICLIVersion {
constructor(
public readonly major: number,
public readonly minor: number,
public readonly patch: number,
) {}
toString() {
return `${this.major}.${this.minor}.${this.patch}`;
}
toJSON() {
const { major, minor, patch } = this;
return { major, minor, patch };
}
static fromString(version: string): CLIVersion {
const regex = /(?:v)?(?<major>\d+)\.(?<minor>\d+)\.(?<patch>\d+)/i;
const matchResult = version.match(regex);
if (matchResult) {
const [, major, minor, patch] = matchResult.map(Number);
return new CLIVersion(major, minor, patch);
} else {
throw new Error(`Invalid version format: ${version}`);
}
}
}
export const cliVersion = CLIVersion.fromString(pkg.version);


@@ -0,0 +1,3 @@
export class BaseOptionsDto {
config?: string;
}


@@ -1,9 +1,10 @@
export class UploadOptionsDto {
recursive = false;
exclusionPatterns!: string[];
dryRun = false;
skipHash = false;
delete = false;
readOnly = true;
album = false;
recursive? = false;
exclusionPatterns?: string[] = [];
dryRun? = false;
skipHash? = false;
delete? = false;
album? = false;
albumName? = '';
includeHidden? = false;
}


@@ -2,10 +2,8 @@ export class LoginError extends Error {
constructor(message: string) {
super(message);
// assign the error class name in your custom error (as a shortcut)
this.name = this.constructor.name;
// capturing the stack trace keeps the reference to your error class
Error.captureStackTrace(this, this.constructor);
}
}


@@ -1,4 +1,4 @@
import * as fs from 'fs';
import * as fs from 'graceful-fs';
import { basename } from 'node:path';
import crypto from 'crypto';
import Os from 'os';
@@ -17,9 +17,8 @@ export class Asset {
fileSize!: number;
albumName?: string;
constructor(path: string, deviceId: string) {
constructor(path: string) {
this.path = path;
this.deviceId = deviceId;
}
async process() {
@@ -45,12 +44,11 @@ export class Asset {
if (!this.deviceAssetId) throw new Error('Device asset id not set');
if (!this.fileCreatedAt) throw new Error('File created at not set');
if (!this.fileModifiedAt) throw new Error('File modified at not set');
if (!this.deviceId) throw new Error('Device id not set');
const data: any = {
assetData: this.assetData as any,
deviceAssetId: this.deviceAssetId,
deviceId: this.deviceId,
deviceId: 'CLI',
fileCreatedAt: this.fileCreatedAt,
fileModifiedAt: this.fileModifiedAt,
isFavorite: String(false),


@@ -1,13 +1,23 @@
#! /usr/bin/env node
import { program, Option } from 'commander';
import { Option, Command } from 'commander';
import Upload from './commands/upload';
import ServerInfo from './commands/server-info';
import LoginKey from './commands/login/key';
import Logout from './commands/logout';
import { version } from '../package.json';
program.name('immich').description('Immich command line interface').version(version);
import path from 'node:path';
import os from 'os';
const userHomeDir = os.homedir();
const configDir = path.join(userHomeDir, '.config/immich/');
const program = new Command()
.name('immich')
.version(version)
.description('Command line interface for Immich')
.addOption(new Option('-d, --config', 'Configuration directory').env('IMMICH_CONFIG_DIR').default(configDir));
program
.command('upload')
@@ -16,11 +26,17 @@ program
.addOption(new Option('-r, --recursive', 'Recursive').env('IMMICH_RECURSIVE').default(false))
.addOption(new Option('-i, --ignore [paths...]', 'Paths to ignore').env('IMMICH_IGNORE_PATHS'))
.addOption(new Option('-h, --skip-hash', "Don't hash files before upload").env('IMMICH_SKIP_HASH').default(false))
.addOption(new Option('-i, --include-hidden', 'Include hidden folders').env('IMMICH_INCLUDE_HIDDEN').default(false))
.addOption(
new Option('-a, --album', 'Automatically create albums based on folder name')
.env('IMMICH_AUTO_CREATE_ALBUM')
.default(false),
)
.addOption(
new Option('-A, --album-name <name>', 'Add all assets to specified album')
.env('IMMICH_ALBUM_NAME')
.conflicts('album'),
)
.addOption(
new Option('-n, --dry-run', "Don't perform any actions, just show what will be done")
.env('IMMICH_DRY_RUN')
@@ -30,14 +46,14 @@ program
.argument('[paths...]', 'One or more paths to assets to be uploaded')
.action(async (paths, options) => {
options.exclusionPatterns = options.ignore;
await new Upload().run(paths, options);
await new Upload(program.opts()).run(paths, options);
});
program
.command('server-info')
.description('Display server information')
.action(async () => {
await new ServerInfo().run();
await new ServerInfo(program.opts()).run();
});
program
@@ -46,14 +62,14 @@ program
.argument('[instanceUrl]')
.argument('[apiKey]')
.action(async (paths, options) => {
await new LoginKey().run(paths, options);
await new LoginKey(program.opts()).run(paths, options);
});
program
.command('logout')
.description('Remove stored credentials')
.action(async () => {
await new Logout().run();
await new Logout(program.opts()).run();
});
program.parse(process.argv);


@@ -19,7 +19,7 @@ const tests: Test[] = [
files: {},
},
{
test: 'should crawl a single path',
test: 'should crawl a single folder',
options: {
pathsToCrawl: ['/photos/'],
},
@@ -27,6 +27,25 @@ const tests: Test[] = [
'/photos/image.jpg': true,
},
},
{
test: 'should crawl a single file',
options: {
pathsToCrawl: ['/photos/image.jpg'],
},
files: {
'/photos/image.jpg': true,
},
},
{
test: 'should crawl a single file and a folder',
options: {
pathsToCrawl: ['/photos/image.jpg', '/images/'],
},
files: {
'/photos/image.jpg': true,
'/images/image2.jpg': true,
},
},
{
test: 'should exclude by file extension',
options: {
@@ -54,6 +73,7 @@ const tests: Test[] = [
options: {
pathsToCrawl: ['/photos/'],
exclusionPatterns: ['**/raw/**'],
recursive: true,
},
files: {
'/photos/image.jpg': true,
@@ -98,6 +118,7 @@ const tests: Test[] = [
test: 'should crawl a single path',
options: {
pathsToCrawl: ['/photos/'],
recursive: true,
},
files: {
'/photos/image.jpg': true,


@@ -1,5 +1,6 @@
import { CrawlOptionsDto } from 'src/cores/dto/crawl-options-dto';
import { glob } from 'glob';
import * as fs from 'fs';
export class CrawlService {
private readonly extensions!: string[];
@@ -8,21 +9,57 @@ export class CrawlService {
this.extensions = image.concat(video).map((extension) => extension.replace('.', ''));
}
crawl(crawlOptions: CrawlOptionsDto): Promise<string[]> {
async crawl(crawlOptions: CrawlOptionsDto): Promise<string[]> {
const { pathsToCrawl, exclusionPatterns, includeHidden } = crawlOptions;
if (!pathsToCrawl) {
return Promise.resolve([]);
}
const base = pathsToCrawl.length === 1 ? pathsToCrawl[0] : `{${pathsToCrawl.join(',')}}`;
const extensions = `*{${this.extensions}}`;
const patterns: string[] = [];
const crawledFiles: string[] = [];
return glob(`${base}/**/${extensions}`, {
for await (const currentPath of pathsToCrawl) {
try {
const stats = await fs.promises.stat(currentPath);
if (stats.isFile() || stats.isSymbolicLink()) {
crawledFiles.push(currentPath);
} else {
patterns.push(currentPath);
}
} catch (error: any) {
if (error.code === 'ENOENT') {
patterns.push(currentPath);
} else {
throw error;
}
}
}
let searchPattern: string;
if (patterns.length === 1) {
searchPattern = patterns[0];
} else if (patterns.length === 0) {
return crawledFiles;
} else {
searchPattern = '{' + patterns.join(',') + '}';
}
if (crawlOptions.recursive) {
searchPattern = searchPattern + '/**/';
}
searchPattern = `${searchPattern}/*.{${this.extensions.join(',')}}`;
const globbedFiles = await glob(searchPattern, {
absolute: true,
nocase: true,
nodir: true,
dot: includeHidden,
ignore: exclusionPatterns,
});
const returnedFiles = crawledFiles.concat(globbedFiles);
returnedFiles.sort();
return returnedFiles;
}
}


@@ -1,8 +1,17 @@
import { SessionService } from './session.service';
import mockfs from 'mock-fs';
import fs from 'node:fs';
import yaml from 'yaml';
import { LoginError } from '../cores/errors/login-error';
import {
TEST_AUTH_FILE,
TEST_CONFIG_DIR,
TEST_IMMICH_API_KEY,
TEST_IMMICH_INSTANCE_URL,
createTestAuthFile,
deleteAuthFile,
readTestAuthFile,
spyOnConsole,
} from '../../test/cli-test-utils';
const mockPingServer = jest.fn(() => Promise.resolve({ data: { res: 'pong' } }));
const mockUserInfo = jest.fn(() => Promise.resolve({ data: { email: 'admin@example.com' } }));
@@ -22,74 +31,85 @@ jest.mock('../api/open-api', () => {
describe('SessionService', () => {
let sessionService: SessionService;
let consoleSpy: jest.SpyInstance;
beforeAll(() => {
// Write a dummy output before mock-fs to prevent some annoying errors
console.log();
consoleSpy = spyOnConsole();
});
beforeEach(() => {
const configDir = '/config';
sessionService = new SessionService(configDir);
deleteAuthFile();
sessionService = new SessionService(TEST_CONFIG_DIR);
});
afterEach(() => {
deleteAuthFile();
});
it('should connect to immich', async () => {
mockfs({
'/config/auth.yml': 'apiKey: pNussssKSYo5WasdgalvKJ1n9kdvaasdfbluPg\ninstanceUrl: https://test/api',
});
await createTestAuthFile(
JSON.stringify({
apiKey: TEST_IMMICH_API_KEY,
instanceUrl: TEST_IMMICH_INSTANCE_URL,
}),
);
await sessionService.connect();
expect(mockPingServer).toHaveBeenCalledTimes(1);
});
it('should error if no auth file exists', async () => {
mockfs();
await sessionService.connect().catch((error) => {
expect(error.message).toEqual('No auth file exist. Please login first');
});
});
it('should error if auth file is missing instance URl', async () => {
mockfs({
'/config/auth.yml': 'foo: pNussssKSYo5WasdgalvKJ1n9kdvaasdfbluPg\napiKey: https://test/api',
});
await createTestAuthFile(
JSON.stringify({
apiKey: TEST_IMMICH_API_KEY,
}),
);
await sessionService.connect().catch((error) => {
expect(error).toBeInstanceOf(LoginError);
expect(error.message).toEqual('Instance URL missing in auth config file /config/auth.yml');
expect(error.message).toEqual(`Instance URL missing in auth config file ${TEST_AUTH_FILE}`);
});
});
it('should error if auth file is missing api key', async () => {
mockfs({
'/config/auth.yml': 'instanceUrl: pNussssKSYo5WasdgalvKJ1n9kdvaasdfbluPg\nbar: https://test/api',
});
await sessionService.connect().catch((error) => {
expect(error).toBeInstanceOf(LoginError);
expect(error.message).toEqual('API key missing in auth config file /config/auth.yml');
});
await createTestAuthFile(
JSON.stringify({
instanceUrl: TEST_IMMICH_INSTANCE_URL,
}),
);
await expect(sessionService.connect()).rejects.toThrow(
new LoginError(`API key missing in auth config file ${TEST_AUTH_FILE}`),
);
});
it.skip('should create auth file when logged in', async () => {
mockfs();
it('should create auth file when logged in', async () => {
await sessionService.keyLogin(TEST_IMMICH_INSTANCE_URL, TEST_IMMICH_API_KEY);
await sessionService.keyLogin('https://test/api', 'pNussssKSYo5WasdgalvKJ1n9kdvaasdfbluPg');
const data: string = await fs.promises.readFile('/config/auth.yml', 'utf8');
const data: string = await readTestAuthFile();
const authConfig = yaml.parse(data);
expect(authConfig.instanceUrl).toBe('https://test/api');
expect(authConfig.apiKey).toBe('pNussssKSYo5WasdgalvKJ1n9kdvaasdfbluPg');
expect(authConfig.instanceUrl).toBe(TEST_IMMICH_INSTANCE_URL);
expect(authConfig.apiKey).toBe(TEST_IMMICH_API_KEY);
});
it('should delete auth file when logging out', async () => {
mockfs({
'/config/auth.yml': 'apiKey: pNussssKSYo5WasdgalvKJ1n9kdvaasdfbluPg\ninstanceUrl: https://test/api',
});
await createTestAuthFile(
JSON.stringify({
apiKey: TEST_IMMICH_API_KEY,
instanceUrl: TEST_IMMICH_INSTANCE_URL,
}),
);
await sessionService.logout();
await fs.promises.access('/auth.yml', fs.constants.F_OK).catch((error) => {
await fs.promises.access(TEST_AUTH_FILE, fs.constants.F_OK).catch((error) => {
expect(error.message).toContain('ENOENT');
});
});
afterEach(() => {
mockfs.restore();
expect(consoleSpy.mock.calls).toEqual([[`Removed auth file ${TEST_AUTH_FILE}`]]);
});
});


@@ -5,33 +5,39 @@ import { ImmichApi } from '../api/client';
import { LoginError } from '../cores/errors/login-error';
export class SessionService {
readonly configDir: string;
readonly configDir!: string;
readonly authPath!: string;
private api!: ImmichApi;
constructor(configDir: string) {
this.configDir = configDir;
this.authPath = path.join(this.configDir, 'auth.yml');
this.authPath = path.join(configDir, '/auth.yml');
}
public async connect(): Promise<ImmichApi> {
await fs.promises.access(this.authPath, fs.constants.F_OK).catch((error) => {
if (error.code === 'ENOENT') {
throw new LoginError('No auth file exist. Please login first');
let instanceUrl = process.env.IMMICH_INSTANCE_URL;
let apiKey = process.env.IMMICH_API_KEY;
if (!instanceUrl || !apiKey) {
await fs.promises.access(this.authPath, fs.constants.F_OK).catch((error) => {
if (error.code === 'ENOENT') {
throw new LoginError('No auth file exist. Please login first');
}
});
const data: string = await fs.promises.readFile(this.authPath, 'utf8');
const parsedConfig = yaml.parse(data);
instanceUrl = parsedConfig.instanceUrl;
apiKey = parsedConfig.apiKey;
if (!instanceUrl) {
throw new LoginError(`Instance URL missing in auth config file ${this.authPath}`);
}
});
const data: string = await fs.promises.readFile(this.authPath, 'utf8');
const parsedConfig = yaml.parse(data);
const instanceUrl: string = parsedConfig.instanceUrl;
const apiKey: string = parsedConfig.apiKey;
if (!instanceUrl) {
throw new LoginError('Instance URL missing in auth config file ' + this.authPath);
}
if (!apiKey) {
throw new LoginError('API key missing in auth config file ' + this.authPath);
if (!apiKey) {
throw new LoginError(`API key missing in auth config file ${this.authPath}`);
}
}
this.api = new ImmichApi(instanceUrl, apiKey);
@@ -59,10 +65,6 @@ export class SessionService {
}
}
if (!fs.existsSync(this.configDir)) {
console.error('waah');
}
fs.writeFileSync(this.authPath, yaml.stringify({ instanceUrl, apiKey }));
console.log('Wrote auth info to ' + this.authPath);
@@ -82,7 +84,7 @@ export class SessionService {
});
if (pingResponse.res !== 'pong') {
throw new Error('Unexpected ping reply');
throw new Error(`Could not parse response. Is Immich listening on ${this.api.apiConfiguration.instanceUrl}?`);
}
}
}


@@ -0,0 +1,38 @@
import { BaseOptionsDto } from 'src/cores/dto/base-options-dto';
import fs from 'node:fs';
import path from 'node:path';
export const TEST_CONFIG_DIR = '/tmp/immich/';
export const TEST_AUTH_FILE = path.join(TEST_CONFIG_DIR, 'auth.yml');
export const TEST_IMMICH_INSTANCE_URL = 'https://test/api';
export const TEST_IMMICH_API_KEY = 'pNussssKSYo5WasdgalvKJ1n9kdvaasdfbluPg';
export const CLI_BASE_OPTIONS: BaseOptionsDto = { config: TEST_CONFIG_DIR };
export const spyOnConsole = () => jest.spyOn(console, 'log').mockImplementation();
export const createTestAuthFile = async (contents: string) => {
if (!fs.existsSync(TEST_CONFIG_DIR)) {
// Create config folder if it doesn't exist
const created = await fs.promises.mkdir(TEST_CONFIG_DIR, { recursive: true });
if (!created) {
throw new Error(`Failed to create config folder ${TEST_CONFIG_DIR}`);
}
}
fs.writeFileSync(TEST_AUTH_FILE, contents);
};
export const readTestAuthFile = async (): Promise<string> => {
return await fs.promises.readFile(TEST_AUTH_FILE, 'utf8');
};
export const deleteAuthFile = () => {
try {
fs.unlinkSync(TEST_AUTH_FILE);
} catch (error: any) {
if (error.code !== 'ENOENT') {
throw error;
}
}
};


@@ -0,0 +1,24 @@
{
"moduleFileExtensions": ["js", "json", "ts"],
"modulePaths": ["<rootDir>"],
"rootDir": "../..",
"globalSetup": "<rootDir>/test/e2e/setup.ts",
"testEnvironment": "node",
"testRegex": ".e2e-spec.ts$",
"testTimeout": 6000000,
"transform": {
"^.+\\.(t|j)s$": "ts-jest"
},
"collectCoverageFrom": [
"<rootDir>/src/**/*.(t|j)s",
"!<rootDir>/src/**/*.spec.(t|s)s",
"!<rootDir>/src/infra/migrations/**"
],
"coverageDirectory": "./coverage",
"moduleNameMapper": {
"^@test(|/.*)$": "<rootDir>../server/test/$1",
"^@app/immich(|/.*)$": "<rootDir>../server/src/immich/$1",
"^@app/infra(|/.*)$": "<rootDir>../server/src/infra/$1",
"^@app/domain(|/.*)$": "<rootDir>/../server/src/domain/$1"
}
}


@@ -0,0 +1,48 @@
import { api } from '@test/api';
import { restoreTempFolder, testApp } from 'immich/test/test-utils';
import { LoginResponseDto } from 'src/api/open-api';
import { APIKeyCreateResponseDto } from '@app/domain';
import LoginKey from 'src/commands/login/key';
import { LoginError } from 'src/cores/errors/login-error';
import { CLI_BASE_OPTIONS, spyOnConsole } from 'test/cli-test-utils';
describe(`login-key (e2e)`, () => {
let server: any;
let admin: LoginResponseDto;
let apiKey: APIKeyCreateResponseDto;
let instanceUrl: string;
spyOnConsole();
beforeAll(async () => {
server = (await testApp.create()).getHttpServer();
if (!process.env.IMMICH_INSTANCE_URL) {
throw new Error('IMMICH_INSTANCE_URL environment variable not set');
} else {
instanceUrl = process.env.IMMICH_INSTANCE_URL;
}
});
afterAll(async () => {
await testApp.teardown();
await restoreTempFolder();
});
beforeEach(async () => {
await testApp.reset();
await restoreTempFolder();
await api.authApi.adminSignUp(server);
admin = await api.authApi.adminLogin(server);
apiKey = await api.apiKeyApi.createApiKey(server, admin.accessToken);
process.env.IMMICH_API_KEY = apiKey.secret;
});
it('should error when providing an invalid API key', async () => {
await expect(async () => await new LoginKey(CLI_BASE_OPTIONS).run(instanceUrl, 'invalid')).rejects.toThrow(
new LoginError(`Failed to connect to server ${instanceUrl}: Request failed with status code 401`),
);
});
it('should log in when providing the correct API key', async () => {
await new LoginKey(CLI_BASE_OPTIONS).run(instanceUrl, apiKey.secret);
});
});

View File

@@ -0,0 +1,42 @@
import { api } from '@test/api';
import { restoreTempFolder, testApp } from 'immich/test/test-utils';
import { LoginResponseDto } from 'src/api/open-api';
import ServerInfo from 'src/commands/server-info';
import { APIKeyCreateResponseDto } from '@app/domain';
import { CLI_BASE_OPTIONS, spyOnConsole } from 'test/cli-test-utils';
describe(`server-info (e2e)`, () => {
let server: any;
let admin: LoginResponseDto;
let apiKey: APIKeyCreateResponseDto;
const consoleSpy = spyOnConsole();
beforeAll(async () => {
server = (await testApp.create()).getHttpServer();
});
afterAll(async () => {
await testApp.teardown();
await restoreTempFolder();
});
beforeEach(async () => {
await testApp.reset();
await restoreTempFolder();
await api.authApi.adminSignUp(server);
admin = await api.authApi.adminLogin(server);
apiKey = await api.apiKeyApi.createApiKey(server, admin.accessToken);
process.env.IMMICH_API_KEY = apiKey.secret;
});
it('should show server version', async () => {
await new ServerInfo(CLI_BASE_OPTIONS).run();
expect(consoleSpy.mock.calls).toEqual([
[expect.stringMatching(new RegExp('Server is running version \\d+.\\d+.\\d+'))],
[expect.stringMatching('Supported image types: .*')],
[expect.stringMatching('Supported video types: .*')],
['Images: 0, Videos: 0, Total: 0'],
]);
});
});

43
cli/test/e2e/setup.ts Normal file
View File

@@ -0,0 +1,43 @@
import path from 'path';
import { PostgreSqlContainer } from '@testcontainers/postgresql';
import { access } from 'fs/promises';
export default async () => {
let IMMICH_TEST_ASSET_PATH: string = '';
if (process.env.IMMICH_TEST_ASSET_PATH === undefined) {
IMMICH_TEST_ASSET_PATH = path.normalize(`${__dirname}/../../../server/test/assets/`);
process.env.IMMICH_TEST_ASSET_PATH = IMMICH_TEST_ASSET_PATH;
} else {
IMMICH_TEST_ASSET_PATH = process.env.IMMICH_TEST_ASSET_PATH;
}
const directoryExists = async (dirPath: string) =>
await access(dirPath)
.then(() => true)
.catch(() => false);
if (!(await directoryExists(`${IMMICH_TEST_ASSET_PATH}/albums`))) {
throw new Error(
`Test assets not found. Please check out https://github.com/immich-app/test-assets into ${IMMICH_TEST_ASSET_PATH} before testing`,
);
}
if (process.env.DB_HOSTNAME === undefined) {
// DB hostname not set which likely means we're not running e2e through docker compose. Start a local postgres container.
const pg = await new PostgreSqlContainer('tensorchord/pgvecto-rs:pg14-v0.1.11')
.withExposedPorts(5432)
.withDatabase('immich')
.withUsername('postgres')
.withPassword('postgres')
.withReuse()
.start();
process.env.DB_URL = pg.getConnectionUri();
}
process.env.NODE_ENV = 'development';
process.env.IMMICH_TEST_ENV = 'true';
process.env.IMMICH_CONFIG_FILE = path.normalize(`${__dirname}/../../../server/test/e2e/immich-e2e-config.json`);
process.env.TZ = 'Z';
};

View File

@@ -0,0 +1,84 @@
import { api } from '@test/api';
import { IMMICH_TEST_ASSET_PATH, restoreTempFolder, testApp } from 'immich/test/test-utils';
import { LoginResponseDto } from 'src/api/open-api';
import Upload from 'src/commands/upload';
import { APIKeyCreateResponseDto } from '@app/domain';
import { CLI_BASE_OPTIONS, spyOnConsole } from 'test/cli-test-utils';
describe(`upload (e2e)`, () => {
let server: any;
let admin: LoginResponseDto;
let apiKey: APIKeyCreateResponseDto;
spyOnConsole();
beforeAll(async () => {
server = (await testApp.create()).getHttpServer();
});
afterAll(async () => {
await testApp.teardown();
await restoreTempFolder();
});
beforeEach(async () => {
await testApp.reset();
await restoreTempFolder();
await api.authApi.adminSignUp(server);
admin = await api.authApi.adminLogin(server);
apiKey = await api.apiKeyApi.createApiKey(server, admin.accessToken);
process.env.IMMICH_API_KEY = apiKey.secret;
});
it('should upload a folder recursively', async () => {
await new Upload(CLI_BASE_OPTIONS).run([`${IMMICH_TEST_ASSET_PATH}/albums/nature/`], { recursive: true });
const assets = await api.assetApi.getAllAssets(server, admin.accessToken);
expect(assets.length).toBeGreaterThan(4);
});
it('should not create a new album', async () => {
await new Upload(CLI_BASE_OPTIONS).run([`${IMMICH_TEST_ASSET_PATH}/albums/nature/`], { recursive: true });
const albums = await api.albumApi.getAllAlbums(server, admin.accessToken);
expect(albums.length).toEqual(0);
});
it('should create album from folder name', async () => {
await new Upload(CLI_BASE_OPTIONS).run([`${IMMICH_TEST_ASSET_PATH}/albums/nature/`], {
recursive: true,
album: true,
});
const albums = await api.albumApi.getAllAlbums(server, admin.accessToken);
expect(albums.length).toEqual(1);
const natureAlbum = albums[0];
expect(natureAlbum.albumName).toEqual('nature');
});
it('should add existing assets to album', async () => {
await new Upload(CLI_BASE_OPTIONS).run([`${IMMICH_TEST_ASSET_PATH}/albums/nature/`], {
recursive: true,
});
// Upload again, but this time add to album
await new Upload(CLI_BASE_OPTIONS).run([`${IMMICH_TEST_ASSET_PATH}/albums/nature/`], {
recursive: true,
album: true,
});
const albums = await api.albumApi.getAllAlbums(server, admin.accessToken);
expect(albums.length).toEqual(1);
const natureAlbum = albums[0];
expect(natureAlbum.albumName).toEqual('nature');
});
it('should upload to the specified album name', async () => {
await new Upload(CLI_BASE_OPTIONS).run([`${IMMICH_TEST_ASSET_PATH}/albums/nature/`], {
recursive: true,
albumName: 'testAlbum',
});
const albums = await api.albumApi.getAllAlbums(server, admin.accessToken);
expect(albums.length).toEqual(1);
const testAlbum = albums[0];
expect(testAlbum.albumName).toEqual('testAlbum');
});
});

3
cli/test/global-setup.js Normal file
View File

@@ -0,0 +1,3 @@
module.exports = async () => {
process.env.TZ = 'UTC';
};

View File

@@ -8,17 +8,24 @@
"experimentalDecorators": true,
"allowSyntheticDefaultImports": true,
"resolveJsonModule": true,
"target": "es2022",
"target": "es2021",
"moduleResolution": "node16",
"sourceMap": true,
"outDir": "./dist",
"incremental": true,
"skipLibCheck": true,
"esModuleInterop": true,
"rootDirs": ["src", "../server/src"],
"baseUrl": "./",
"paths": {
"@test": ["test"],
"@test/*": ["test/*"]
"@test": ["../server/test"],
"@test/*": ["../server/test/*"],
"@app/immich": ["../server/src/immich"],
"@app/immich/*": ["../server/src/immich/*"],
"@app/infra": ["../server/src/infra"],
"@app/infra/*": ["../server/src/infra/*"],
"@app/domain": ["../server/src/domain"],
"@app/domain/*": ["../server/src/domain/*"]
}
},
"exclude": ["dist", "node_modules", "upload"]

View File

@@ -12,6 +12,7 @@ x-server-build: &server-common
context: ../
dockerfile: server/Dockerfile
target: dev
restart: always
volumes:
- ../server:/usr/src/app
- ${UPLOAD_LOCATION}/photos:/usr/src/app/upload
@@ -19,8 +20,6 @@ x-server-build: &server-common
- /etc/localtime:/etc/localtime:ro
env_file:
- .env
environment:
- NODE_ENV=development
ulimits:
nofile:
soft: 1048576
@@ -87,8 +86,6 @@ services:
- model-cache:/cache
env_file:
- .env
environment:
- NODE_ENV=development
depends_on:
- database
restart: unless-stopped

View File

@@ -11,7 +11,6 @@ services:
# volumes:
# - /usr/lib/wsl:/usr/lib/wsl # If using VAAPI in WSL2
# environment:
# - NVIDIA_DRIVER_CAPABILITIES=all # If using NVIDIA GPU
# - LD_LIBRARY_PATH=/usr/lib/wsl/lib # If using VAAPI in WSL2
# - LIBVA_DRIVER_NAME=d3d12 # If using VAAPI in WSL2
# deploy: # Uncomment this section if using NVIDIA GPU

View File

@@ -56,10 +56,6 @@ Template changes will only apply to new assets. To retroactively apply the templ
This is fixed by running the storage migration job.
### Why is object detection not very good?
The default image tagging model is relatively small. You can change this for a larger model like `google/vit-base-patch16-224` by setting the model name under Settings > Machine Learning Settings > Image Tagging. You can then re-run the Image Tagging job to get improved tags.
### Why are there so many thumbnail generation jobs?
Immich generates three thumbnails for each asset (blurred, small, and large), as well as a thumbnail for each recognized face.

View File

@@ -28,3 +28,13 @@ server {
}
}
```
### Caddy example config
As an alternative to nginx, you can also use [Caddy](https://caddyserver.com/) as a reverse proxy (with automatic HTTPS configuration). Below is an example config.
```
immich.example.org {
reverse_proxy http://<snip>:2283
}
```

View File

@@ -73,7 +73,7 @@ The Immich Microservices image uses the same `Dockerfile` as the Immich Server,
- Thumbnail Generation
- Metadata Extraction
- Video Transcoding
- Object Tagging
- Smart Search
- Facial Recognition
- Storage Template Migration
- Sidecar (see [XMP Sidecars](/docs/features/xmp-sidecars.md))

View File

@@ -32,15 +32,12 @@ The default configuration looks like this:
"backgroundTask": {
"concurrency": 5
},
"clipEncoding": {
"smartSearch": {
"concurrency": 2
},
"metadataExtraction": {
"concurrency": 5
},
"objectTagging": {
"concurrency": 2
},
"recognizeFaces": {
"concurrency": 2
},
@@ -66,14 +63,13 @@ The default configuration looks like this:
"concurrency": 1
}
},
"logging": {
"enabled": true,
"level": "log"
},
"machineLearning": {
"enabled": true,
"url": "http://immich-machine-learning:3003",
"classification": {
"enabled": true,
"modelName": "microsoft/resnet-50",
"minScore": 0.9
},
"clip": {
"enabled": true,
"modelName": "ViT-B-32__openai"
@@ -88,7 +84,8 @@ The default configuration looks like this:
},
"map": {
"enabled": true,
"tileUrl": "https://tile.openstreetmap.org/{z}/{x}/{y}.png"
"lightStyle": "",
"darkStyle": ""
},
"reverseGeocoding": {
"enabled": true
@@ -133,9 +130,6 @@ The default configuration looks like this:
"enabled": true,
"cronExpression": "0 0 * * *"
}
},
"stylesheets": {
"css": ""
}
}
```

View File

@@ -1,5 +1,5 @@
---
sidebar_position: 4
sidebar_position: 5
---
# Help Me!

Binary file not shown (new image added, 2.5 KiB).

View File

@@ -1,5 +1,5 @@
---
sidebar_position: 2
sidebar_position: 3
---
# Logo

View File

@@ -0,0 +1,85 @@
---
sidebar_position: 2
---
# Quick Start
Here is a quick, no-choices path to install Immich and take it for a test drive.
Once you've tried it, perhaps you'll use one of the many other ways
to install and use it.
## Requirements
Check the [requirements page](../install/requirements) to get started.
## Install and launch via Docker Compose
Follow the [Docker Compose (Recommended)](../install/docker-compose) instructions
to install the server.
- Where random passwords are required, `pwgen` is a handy utility (see the example after this list).
- `UPLOAD_LOCATION` should be set to some new directory on the server
with free space.
- You may ignore "Step 4 - Upgrading".
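For example, you could generate a password with `pwgen -s 32 1` and point `UPLOAD_LOCATION` at an empty directory with free space. A hypothetical `.env` next to your `docker-compose.yml` might then contain lines like the following (illustrative values only; keep whatever other variables the install instructions provide):

```
UPLOAD_LOCATION=/srv/immich/library
DB_PASSWORD=<paste the generated password here>
```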
## Try the Web UI
import RegisterAdminUser from '../partials/_register-admin.md';
<RegisterAdminUser />
Try uploading a picture from your browser.
<img src={require('./img/upload-button.png').default} title="Upload button" />
## Try the Mobile UI
### Download the Mobile App
import MobileAppDownload from '../partials/_mobile-app-download.md';
<MobileAppDownload />
### Login to the Mobile App
import MobileAppLogin from '../partials/_mobile-app-login.md';
<MobileAppLogin />
In the mobile app, you should see the photo you uploaded from the web UI.
### Transfer Photos from your Mobile Device
import MobileAppBackup from '../partials/_mobile-app-backup.md';
<MobileAppBackup />
Depending on how many photos are on your mobile device, this backup may
take quite a while.
You can select the Jobs tab to see Immich processing your photos.
<img src={require('../guides/img/jobs-tab.png').default} title="Jobs tab" />
## Where to go from here?
You may decide you'd like to install the server a different way;
the Install category on the left menu provides many options.
You may decide you'd like to add the _rest_ of your photos from Google Photos,
even those not on your mobile device, via Google Takeout.
You can use [immich-go](https://github.com/simulot/immich-go) for this.
You may want to
[upload photos from your own archive](../features/command-line-interface).
You may want to incorporate an immutable archive of photos from an
[External Library](../features/libraries#external-libraries);
there's a [Guide](../guides/external-library) for that.
You may want your mobile device to
[back photos up to your server automatically](../features/automatic-backup).
You may want to back up the content of your Immich instance
along with other parts of your server; be sure to read about
[database backup](../administration/backup-and-restore).

View File

@@ -1,5 +1,5 @@
---
sidebar_position: 3
sidebar_position: 4
---
# Support The Project

View File

@@ -30,6 +30,8 @@ download:
locale_code: pl-PL
- file: mobile/assets/i18n/fi-FI.json
locale_code: fi-FI
- file: mobile/assets/i18n/pt-PT.json
locale_code: pt-PT
- file: mobile/assets/i18n/pt-BR.json
locale_code: pt-BR
- file: mobile/assets/i18n/cs-CZ.json

View File

@@ -1,6 +1,5 @@
# Immich Machine Learning
- Image classification
- CLIP embeddings
- Facial recognition

View File

@@ -59,3 +59,37 @@ def clip_preprocess_cfg() -> dict[str, Any]:
"resize_mode": "shortest",
"fill_color": 0,
}
@pytest.fixture(scope="session")
def clip_tokenizer_cfg() -> dict[str, Any]:
return {
"add_prefix_space": False,
"added_tokens_decoder": {
"49406": {
"content": "<|startoftext|>",
"lstrip": False,
"normalized": True,
"rstrip": False,
"single_word": False,
"special": True,
},
"49407": {
"content": "<|endoftext|>",
"lstrip": False,
"normalized": True,
"rstrip": False,
"single_word": False,
"special": True,
},
},
"bos_token": "<|startoftext|>",
"clean_up_tokenization_spaces": True,
"do_lower_case": True,
"eos_token": "<|endoftext|>",
"errors": "replace",
"model_max_length": 77,
"pad_token": "<|endoftext|>",
"tokenizer_class": "CLIPTokenizer",
"unk_token": "<|endoftext|>",
}

View File

@@ -6,7 +6,6 @@ from .base import InferenceModel
from .clip import MCLIPEncoder, OpenCLIPEncoder
from .constants import is_insightface, is_mclip, is_openclip
from .facial_recognition import FaceRecognizer
from .image_classification import ImageClassifier
def from_model_type(model_type: ModelType, model_name: str, **model_kwargs: Any) -> InferenceModel:
@@ -19,8 +18,6 @@ def from_model_type(model_type: ModelType, model_name: str, **model_kwargs: Any)
case ModelType.FACIAL_RECOGNITION:
if is_insightface(model_name):
return FaceRecognizer(model_name, **model_kwargs)
case ModelType.IMAGE_CLASSIFICATION:
return ImageClassifier(model_name, **model_kwargs)
case _:
raise ValueError(f"Unknown model type {model_type}")

View File

@@ -35,7 +35,7 @@ class InferenceModel(ABC):
)
log.debug(
(
f"Setting '{self.model_name}' execution providers to {self.providers}"
f"Setting '{self.model_name}' execution providers to {self.providers} "
"in descending order of preference"
),
)
@@ -55,7 +55,7 @@ class InferenceModel(ABC):
def download(self) -> None:
if not self.cached:
log.info(
(f"Downloading {self.model_type.replace('-', ' ')} model '{self.model_name}'." "This may take a while.")
f"Downloading {self.model_type.replace('-', ' ')} model '{self.model_name}'. This may take a while."
)
self._download()
@@ -63,7 +63,7 @@ class InferenceModel(ABC):
if self.loaded:
return
self.download()
log.info(f"Loading {self.model_type.replace('-', ' ')} model '{self.model_name}'")
log.info(f"Loading {self.model_type.replace('-', ' ')} model '{self.model_name}' to memory")
self._load()
self.loaded = True
@@ -119,11 +119,11 @@ class InferenceModel(ABC):
def clear_cache(self) -> None:
if not self.cache_dir.exists():
log.warn(
f"Attempted to clear cache for model '{self.model_name}' but cache directory does not exist.",
f"Attempted to clear cache for model '{self.model_name}', but cache directory does not exist",
)
return
if not rmtree.avoids_symlink_attacks:
raise RuntimeError("Attempted to clear cache, but rmtree is not safe on this platform.")
raise RuntimeError("Attempted to clear cache, but rmtree is not safe on this platform")
if self.cache_dir.is_dir():
log.info(f"Cleared cache directory for model '{self.model_name}'.")

View File

@@ -8,11 +8,11 @@ from typing import Any, Literal
import numpy as np
import onnxruntime as ort
from PIL import Image
from transformers import AutoTokenizer
from tokenizers import Encoding, Tokenizer
from app.config import clean_name, log
from app.models.transforms import crop, get_pil_resampling, normalize, resize, to_numpy
from app.schemas import ModelType, ndarray_f32, ndarray_i32, ndarray_i64
from app.schemas import ModelType, ndarray_f32, ndarray_i32
from .base import InferenceModel
@@ -40,6 +40,7 @@ class BaseCLIPEncoder(InferenceModel):
providers=self.providers,
provider_options=self.provider_options,
)
log.debug(f"Loaded clip text model '{self.model_name}'")
if self.mode == "vision" or self.mode is None:
log.debug(f"Loading clip vision model '{self.model_name}'")
@@ -50,6 +51,7 @@ class BaseCLIPEncoder(InferenceModel):
providers=self.providers,
provider_options=self.provider_options,
)
log.debug(f"Loaded clip vision model '{self.model_name}'")
def _predict(self, image_or_text: Image.Image | str) -> ndarray_f32:
if isinstance(image_or_text, bytes):
@@ -99,6 +101,14 @@ class BaseCLIPEncoder(InferenceModel):
def visual_path(self) -> Path:
return self.visual_dir / "model.onnx"
@property
def tokenizer_file_path(self) -> Path:
return self.textual_dir / "tokenizer.json"
@property
def tokenizer_cfg_path(self) -> Path:
return self.textual_dir / "tokenizer_config.json"
@property
def preprocess_cfg_path(self) -> Path:
return self.visual_dir / "preprocess_cfg.json"
@@ -107,6 +117,34 @@ class BaseCLIPEncoder(InferenceModel):
def cached(self) -> bool:
return self.textual_path.is_file() and self.visual_path.is_file()
@cached_property
def model_cfg(self) -> dict[str, Any]:
log.debug(f"Loading model config for CLIP model '{self.model_name}'")
model_cfg: dict[str, Any] = json.load(self.model_cfg_path.open())
log.debug(f"Loaded model config for CLIP model '{self.model_name}'")
return model_cfg
@cached_property
def tokenizer_file(self) -> dict[str, Any]:
log.debug(f"Loading tokenizer file for CLIP model '{self.model_name}'")
tokenizer_file: dict[str, Any] = json.load(self.tokenizer_file_path.open())
log.debug(f"Loaded tokenizer file for CLIP model '{self.model_name}'")
return tokenizer_file
@cached_property
def tokenizer_cfg(self) -> dict[str, Any]:
log.debug(f"Loading tokenizer config for CLIP model '{self.model_name}'")
tokenizer_cfg: dict[str, Any] = json.load(self.tokenizer_cfg_path.open())
log.debug(f"Loaded tokenizer config for CLIP model '{self.model_name}'")
return tokenizer_cfg
@cached_property
def preprocess_cfg(self) -> dict[str, Any]:
log.debug(f"Loading visual preprocessing config for CLIP model '{self.model_name}'")
preprocess_cfg: dict[str, Any] = json.load(self.preprocess_cfg_path.open())
log.debug(f"Loaded visual preprocessing config for CLIP model '{self.model_name}'")
return preprocess_cfg
class OpenCLIPEncoder(BaseCLIPEncoder):
def __init__(
@@ -121,8 +159,8 @@ class OpenCLIPEncoder(BaseCLIPEncoder):
def _load(self) -> None:
super()._load()
self.tokenizer = AutoTokenizer.from_pretrained(self.textual_dir)
self.sequence_length = self.model_cfg["text_cfg"]["context_length"]
context_length = self.model_cfg["text_cfg"]["context_length"]
pad_token = self.tokenizer_cfg["pad_token"]
self.size = (
self.preprocess_cfg["size"][0] if type(self.preprocess_cfg["size"]) == list else self.preprocess_cfg["size"]
@@ -131,16 +169,16 @@ class OpenCLIPEncoder(BaseCLIPEncoder):
self.mean = np.array(self.preprocess_cfg["mean"], dtype=np.float32)
self.std = np.array(self.preprocess_cfg["std"], dtype=np.float32)
log.debug(f"Loading tokenizer for CLIP model '{self.model_name}'")
self.tokenizer: Tokenizer = Tokenizer.from_file(self.tokenizer_file_path.as_posix())
pad_id = self.tokenizer.token_to_id(pad_token)
self.tokenizer.enable_padding(length=context_length, pad_token=pad_token, pad_id=pad_id)
self.tokenizer.enable_truncation(max_length=context_length)
log.debug(f"Loaded tokenizer for CLIP model '{self.model_name}'")
def tokenize(self, text: str) -> dict[str, ndarray_i32]:
input_ids: ndarray_i64 = self.tokenizer(
text,
max_length=self.sequence_length,
return_tensors="np",
return_attention_mask=False,
padding="max_length",
truncation=True,
).input_ids
return {"text": input_ids.astype(np.int32)}
tokens: Encoding = self.tokenizer.encode(text)
return {"text": np.array([tokens.ids], dtype=np.int32)}
def transform(self, image: Image.Image) -> dict[str, ndarray_f32]:
image = resize(image, self.size)
@@ -149,18 +187,11 @@ class OpenCLIPEncoder(BaseCLIPEncoder):
image_np = normalize(image_np, self.mean, self.std)
return {"image": np.expand_dims(image_np.transpose(2, 0, 1), 0)}
@cached_property
def model_cfg(self) -> dict[str, Any]:
model_cfg: dict[str, Any] = json.load(self.model_cfg_path.open())
return model_cfg
@cached_property
def preprocess_cfg(self) -> dict[str, Any]:
preprocess_cfg: dict[str, Any] = json.load(self.preprocess_cfg_path.open())
return preprocess_cfg
class MCLIPEncoder(OpenCLIPEncoder):
def tokenize(self, text: str) -> dict[str, ndarray_i32]:
tokens: dict[str, ndarray_i64] = self.tokenizer(text, return_tensors="np")
return {k: v.astype(np.int32) for k, v in tokens.items()}
tokens: Encoding = self.tokenizer.encode(text)
return {
"input_ids": np.array([tokens.ids], dtype=np.int32),
"attention_mask": np.array([tokens.attention_mask], dtype=np.int32),
}

View File

@@ -1,75 +0,0 @@
from io import BytesIO
from pathlib import Path
from typing import Any
from huggingface_hub import snapshot_download
from optimum.onnxruntime import ORTModelForImageClassification
from optimum.pipelines import pipeline
from PIL import Image
from transformers import AutoImageProcessor
from ..config import log
from ..schemas import ModelType
from .base import InferenceModel
class ImageClassifier(InferenceModel):
_model_type = ModelType.IMAGE_CLASSIFICATION
def __init__(
self,
model_name: str,
min_score: float = 0.9,
cache_dir: Path | str | None = None,
**model_kwargs: Any,
) -> None:
self.min_score = model_kwargs.pop("minScore", min_score)
super().__init__(model_name, cache_dir, **model_kwargs)
def _download(self) -> None:
snapshot_download(
cache_dir=self.cache_dir,
repo_id=self.model_name,
allow_patterns=["*.bin", "*.json", "*.txt"],
local_dir=self.cache_dir,
local_dir_use_symlinks=True,
)
def _load(self) -> None:
processor = AutoImageProcessor.from_pretrained(self.cache_dir, cache_dir=self.cache_dir)
model_path = self.cache_dir / "model.onnx"
model_kwargs = {
"cache_dir": self.cache_dir,
"provider": self.providers[0],
"provider_options": self.provider_options[0],
"session_options": self.sess_options,
}
if model_path.exists():
model = ORTModelForImageClassification.from_pretrained(self.cache_dir, **model_kwargs)
self.model = pipeline(self.model_type.value, model, feature_extractor=processor)
else:
log.info(
(
f"ONNX model not found in cache directory for '{self.model_name}'."
"Exporting optimized model for future use."
),
)
self.sess_options.optimized_model_filepath = model_path.as_posix()
self.model = pipeline(
self.model_type.value,
self.model_name,
model_kwargs=model_kwargs,
feature_extractor=processor,
)
def _predict(self, image: Image.Image | bytes) -> list[str]:
if isinstance(image, bytes):
image = Image.open(BytesIO(image))
predictions: list[dict[str, Any]] = self.model(image)
tags = [tag for pred in predictions for tag in pred["label"].split(", ") if pred["score"] >= self.min_score]
return tags
def configure(self, **model_kwargs: Any) -> None:
self.min_score = model_kwargs.pop("minScore", self.min_score)

View File

@@ -25,7 +25,6 @@ class BoundingBox(TypedDict):
class ModelType(StrEnum):
IMAGE_CLASSIFICATION = "image-classification"
CLIP = "clip"
FACIAL_RECOGNITION = "facial-recognition"

View File

@@ -17,42 +17,9 @@ from .models.base import PicklableSessionOptions
from .models.cache import ModelCache
from .models.clip import OpenCLIPEncoder
from .models.facial_recognition import FaceRecognizer
from .models.image_classification import ImageClassifier
from .schemas import ModelType
class TestImageClassifier:
classifier_preds = [
{"label": "that's an image alright", "score": 0.8},
{"label": "well it ends with .jpg", "score": 0.1},
{"label": "idk, im just seeing bytes", "score": 0.05},
{"label": "not sure", "score": 0.04},
{"label": "probably a virus", "score": 0.01},
]
def test_min_score(self, pil_image: Image.Image, mocker: MockerFixture) -> None:
mocker.patch.object(ImageClassifier, "load")
classifier = ImageClassifier("test_model_name", min_score=0.0)
assert classifier.min_score == 0.0
classifier.model = mock.Mock()
classifier.model.return_value = self.classifier_preds
all_labels = classifier.predict(pil_image)
classifier.min_score = 0.5
filtered_labels = classifier.predict(pil_image)
assert all_labels == [
"that's an image alright",
"well it ends with .jpg",
"idk",
"im just seeing bytes",
"not sure",
"probably a virus",
]
assert filtered_labels == ["that's an image alright"]
class TestCLIP:
embedding = np.random.rand(512).astype(np.float32)
cache_dir = Path("test_cache")
@@ -63,11 +30,13 @@ class TestCLIP:
mocker: MockerFixture,
clip_model_cfg: dict[str, Any],
clip_preprocess_cfg: Callable[[Path], dict[str, Any]],
clip_tokenizer_cfg: Callable[[Path], dict[str, Any]],
) -> None:
mocker.patch.object(OpenCLIPEncoder, "download")
mocker.patch.object(OpenCLIPEncoder, "model_cfg", clip_model_cfg)
mocker.patch.object(OpenCLIPEncoder, "preprocess_cfg", clip_preprocess_cfg)
mocker.patch("app.models.clip.AutoTokenizer.from_pretrained", autospec=True)
mocker.patch.object(OpenCLIPEncoder, "tokenizer_cfg", clip_tokenizer_cfg)
mocker.patch("app.models.clip.Tokenizer.from_file", autospec=True)
mocked = mocker.patch("app.models.clip.ort.InferenceSession", autospec=True)
mocked.return_value.run.return_value = [[self.embedding]]
@@ -85,11 +54,13 @@ class TestCLIP:
mocker: MockerFixture,
clip_model_cfg: dict[str, Any],
clip_preprocess_cfg: Callable[[Path], dict[str, Any]],
clip_tokenizer_cfg: Callable[[Path], dict[str, Any]],
) -> None:
mocker.patch.object(OpenCLIPEncoder, "download")
mocker.patch.object(OpenCLIPEncoder, "model_cfg", clip_model_cfg)
mocker.patch.object(OpenCLIPEncoder, "preprocess_cfg", clip_preprocess_cfg)
mocker.patch("app.models.clip.AutoTokenizer.from_pretrained", autospec=True)
mocker.patch.object(OpenCLIPEncoder, "tokenizer_cfg", clip_tokenizer_cfg)
mocker.patch("app.models.clip.Tokenizer.from_file", autospec=True)
mocked = mocker.patch("app.models.clip.ort.InferenceSession", autospec=True)
mocked.return_value.run.return_value = [[self.embedding]]
@@ -145,17 +116,15 @@ class TestFaceRecognition:
class TestCache:
async def test_caches(self, mock_get_model: mock.Mock) -> None:
model_cache = ModelCache()
await model_cache.get("test_model_name", ModelType.IMAGE_CLASSIFICATION)
await model_cache.get("test_model_name", ModelType.IMAGE_CLASSIFICATION)
await model_cache.get("test_model_name", ModelType.FACIAL_RECOGNITION)
await model_cache.get("test_model_name", ModelType.FACIAL_RECOGNITION)
assert len(model_cache.cache._cache) == 1
mock_get_model.assert_called_once()
async def test_kwargs_used(self, mock_get_model: mock.Mock) -> None:
model_cache = ModelCache()
await model_cache.get("test_model_name", ModelType.IMAGE_CLASSIFICATION, cache_dir="test_cache")
mock_get_model.assert_called_once_with(
ModelType.IMAGE_CLASSIFICATION, "test_model_name", cache_dir="test_cache"
)
await model_cache.get("test_model_name", ModelType.FACIAL_RECOGNITION, cache_dir="test_cache")
mock_get_model.assert_called_once_with(ModelType.FACIAL_RECOGNITION, "test_model_name", cache_dir="test_cache")
async def test_different_clip(self, mock_get_model: mock.Mock) -> None:
model_cache = ModelCache()
@@ -172,14 +141,14 @@ class TestCache:
@mock.patch("app.models.cache.OptimisticLock", autospec=True)
async def test_model_ttl(self, mock_lock_cls: mock.Mock, mock_get_model: mock.Mock) -> None:
model_cache = ModelCache(ttl=100)
await model_cache.get("test_model_name", ModelType.IMAGE_CLASSIFICATION)
await model_cache.get("test_model_name", ModelType.FACIAL_RECOGNITION)
mock_lock_cls.return_value.__aenter__.return_value.cas.assert_called_with(mock.ANY, ttl=100)
@mock.patch("app.models.cache.SimpleMemoryCache.expire")
async def test_revalidate(self, mock_cache_expire: mock.Mock, mock_get_model: mock.Mock) -> None:
model_cache = ModelCache(ttl=100, revalidate=True)
await model_cache.get("test_model_name", ModelType.IMAGE_CLASSIFICATION)
await model_cache.get("test_model_name", ModelType.IMAGE_CLASSIFICATION)
await model_cache.get("test_model_name", ModelType.FACIAL_RECOGNITION)
await model_cache.get("test_model_name", ModelType.FACIAL_RECOGNITION)
mock_cache_expire.assert_called_once_with(mock.ANY, 100)
@@ -188,23 +157,6 @@ class TestCache:
reason="More time-consuming since it deploys the app and loads models.",
)
class TestEndpoints:
def test_tagging_endpoint(
self, pil_image: Image.Image, responses: dict[str, Any], deployed_app: TestClient
) -> None:
byte_image = BytesIO()
pil_image.save(byte_image, format="jpeg")
response = deployed_app.post(
"http://localhost:3003/predict",
data={
"modelName": "microsoft/resnet-50",
"modelType": "image-classification",
"options": json.dumps({"minScore": 0.0}),
},
files={"image": byte_image.getvalue()},
)
assert response.status_code == 200
assert response.json() == responses["image-classification"]
def test_clip_image_endpoint(
self, pil_image: Image.Image, responses: dict[str, Any], deployed_app: TestClient
) -> None:

View File

@@ -12,7 +12,6 @@ byte_image = BytesIO()
@events.init_command_line_parser.add_listener
def _(parser: ArgumentParser) -> None:
parser.add_argument("--tag-model", type=str, default="microsoft/resnet-50")
parser.add_argument("--clip-model", type=str, default="ViT-B-32::openai")
parser.add_argument("--face-model", type=str, default="buffalo_l")
parser.add_argument(
@@ -54,18 +53,6 @@ class InferenceLoadTest(HttpUser):
self.data = byte_image.getvalue()
class ClassificationFormDataLoadTest(InferenceLoadTest):
@task
def classify(self) -> None:
data = [
("modelName", self.environment.parsed_options.clip_model),
("modelType", "clip"),
("options", json.dumps({"minScore": self.environment.parsed_options.tag_min_score})),
]
files = {"image": self.data}
self.client.post("/predict", data=data, files=files)
class CLIPTextFormDataLoadTest(InferenceLoadTest):
@task
def encode_text(self) -> None:

View File

@@ -5,8 +5,7 @@
"handlers": {
"console": {
"class": "app.config.CustomRichHandler",
"formatter": "rich",
"level": "INFO"
"formatter": "rich"
}
},
"loggers": {

File diff suppressed because it is too large.

View File

@@ -1,50 +1,40 @@
[tool.poetry]
name = "machine-learning"
version = "1.91.3"
version = "1.91.4"
description = ""
authors = ["Hau Tran <alex.tran1502@gmail.com>"]
readme = "README.md"
packages = [{include = "app"}]
[tool.poetry.dependencies]
python = "~3.11"
torch = [
{markers = "platform_machine == 'arm64' or platform_machine == 'aarch64'", version = "=2.1.0", source = "pypi"},
{markers = "platform_machine == 'amd64' or platform_machine == 'x86_64'", version = "=2.1.0", source = "pytorch-cpu"}
]
transformers = "^4.29.2"
python = ">=3.10,<3.12"
onnxruntime = "^1.15.0"
insightface = "^0.7.3"
opencv-python-headless = "^4.7.0.72"
pillow = "^9.5.0"
fastapi = "^0.95.2"
uvicorn = {extras = ["standard"], version = "^0.22.0"}
insightface = ">=0.7.3,<1.0"
opencv-python-headless = ">=4.7.0.72,<5.0"
pillow = ">=9.5.0,<11.0"
fastapi = ">=0.95.2,<1.0"
uvicorn = {extras = ["standard"], version = ">=0.22.0,<1.0"}
pydantic = "^1.10.8"
aiocache = "^0.12.1"
optimum = "^1.9.1"
rich = "^13.4.2"
ftfy = "^6.1.1"
aiocache = ">=0.12.1,<1.0"
rich = ">=13.4.2"
ftfy = ">=6.1.1"
setuptools = "^68.0.0"
python-multipart = "^0.0.6"
orjson = "^3.9.5"
safetensors = "0.3.2"
gunicorn = "^21.1.0"
python-multipart = ">=0.0.6,<1.0"
orjson = ">=3.9.5"
gunicorn = ">=21.1.0"
huggingface-hub = ">=0.20.1,<1.0"
tokenizers = ">=0.15.0,<1.0"
[tool.poetry.group.dev.dependencies]
mypy = "^1.3.0"
black = "^23.3.0"
pytest = "^7.3.1"
locust = "^2.15.1"
httpx = "^0.24.1"
pytest-asyncio = "^0.21.0"
pytest-cov = "^4.1.0"
ruff = "^0.0.272"
pytest-mock = "^3.11.1"
[[tool.poetry.source]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
priority = "explicit"
mypy = ">=1.3.0"
black = ">=23.3.0"
pytest = ">=7.3.1"
locust = ">=2.15.1"
httpx = ">=0.24.1"
pytest-asyncio = ">=0.21.0"
pytest-cov = ">=4.1.0"
ruff = ">=0.0.272"
pytest-mock = ">=3.11.1"
[build-system]
requires = ["poetry-core"]

View File

@@ -1 +1,54 @@
# Immich Mobile Application - Flutter
The Immich mobile app is built with Flutter, using the Isar database for local storage and Riverpod for state management. This structure keeps the codebase modular and maintainable, which allows for efficient development and robust performance.
## Setup
You must set up the Flutter toolchain on your machine before you can start development.
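Assuming the Flutter SDK is already installed, a first run might look roughly like this (a sketch only, not an exhaustive setup guide):

```
flutter doctor    # verify the toolchain is healthy
cd mobile
flutter pub get   # fetch dependencies
flutter run       # launch on a connected device or emulator
```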
## Immich-Flutter Directory Structure
Below are the directories inside the `lib` directory:
- `constants`: Stores essential constants used across the application, such as colors and locale.
- `extensions`: Extensions enhancing various existing functionalities within the app, such as asset_extensions.dart, string_extensions.dart, and more.
- `module_template`: Provides a template structure for different modules within the app, including subdivisions like models, providers, services, UI, and views.
- `models`: Placeholder for storing module-specific models.
- `providers`: Section to define module-specific Riverpod providers.
- `services`: Houses services tailored to the module's functionality.
- `ui`: Contains UI components and widgets for the module.
- `views`: Placeholder for module-specific views.
- `modules`: Organizes different functional modules of the app, each containing subdivisions for models, providers, services, UI, and views. This structure promotes modular development and scalability.
- `routing`: Includes guards like auth_guard.dart, backup_permission_guard.dart, and routers like router.dart and router.gr.dart for streamlined navigation and permission management.
- `shared`: Encapsulates shared functionality accessible across the application, such as caching mechanisms, common models, providers, services, UI components, and views.
- `utils`: A collection of utility classes and functions catering to different app functionalities, including async_mutex.dart, bytes_units.dart, debounce.dart, migration.dart, and more.
## Immich Architectural Pattern
The Immich Flutter app follows an architectural pattern inspired by Model-View-ViewModel (MVVM). Each module is organized into models, providers, services, UI, and views, which keeps development modular and enforces a clean separation of concerns.
Please use the `module_template` provided to create a new module.
### Architecture Breakdown
Below is how your code needs to be structured (a minimal sketch follows this list):
- Models: In Immich, Models are like the app's blueprint—they're essential for organizing and using information. Imagine them as containers that hold data the app needs to function. They also handle basic rules and logic for managing and interacting with this data across the app.
- Providers (Riverpod): Providers in Immich are a bit like traffic managers. They help different parts of the app communicate and share information effectively. They ensure that the right data gets to the right places at the right time. These providers use Riverpod, a tool that helps with managing and organizing how the app's information flows. Everything related to the state goes here.
- Services: Services are the helpful behind-the-scenes workers in Immich. They handle important tasks like handling network requests or managing other essential functions. These services work independently and focus on supporting the app's main functionalities.
- UI: In Immich, the UI layer focuses solely on how things look and feel, without worrying about the app's inner workings. Reusable widgets go here.
- Views: Views use Providers to get the information they need and to trigger actions, without dealing with the technical complexities behind the scenes. Flutter screens and pages normally go here.
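To make the breakdown concrete, here is a minimal, hypothetical sketch of how a module could be wired together in this pattern. The `Memory` model, `MemoryService`, and provider names below are illustrative only and are not part of the actual codebase.

```dart
import 'package:flutter/material.dart';
import 'package:hooks_riverpod/hooks_riverpod.dart';

// Model: a plain data container with no app logic.
class Memory {
  const Memory({required this.id, required this.title});

  final String id;
  final String title;
}

// Service: does the actual work, e.g. talking to the server or the Isar database.
class MemoryService {
  Future<List<Memory>> getMemories() async {
    // Hard-coded for the sketch; a real service would query the API or Isar.
    return const [Memory(id: '1', title: 'A year ago today')];
  }
}

// Providers: expose the service and its state to the rest of the app.
final memoryServiceProvider = Provider<MemoryService>((ref) => MemoryService());

final memoriesProvider = FutureProvider<List<Memory>>(
  (ref) => ref.watch(memoryServiceProvider).getMemories(),
);

// View: watches the provider and only cares about presentation.
class MemoriesPage extends ConsumerWidget {
  const MemoriesPage({super.key});

  @override
  Widget build(BuildContext context, WidgetRef ref) {
    final memories = ref.watch(memoriesProvider);
    return memories.when(
      data: (items) => ListView(
        children: [for (final memory in items) ListTile(title: Text(memory.title))],
      ),
      loading: () => const Center(child: CircularProgressIndicator()),
      error: (error, _) => Text('Failed to load memories: $error'),
    );
  }
}
```

The view never calls the service directly; it only watches the provider, which is what keeps presentation, state, and data access cleanly separated.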
## Contributing
Please refer to the [architecture documentation](https://immich.app/docs/developer/architecture/) when contributing to the mobile app!

View File

@@ -35,8 +35,8 @@ platform :android do
task: 'bundle',
build_type: 'Release',
properties: {
"android.injected.version.code" => 115,
"android.injected.version.name" => "1.91.3",
"android.injected.version.code" => 116,
"android.injected.version.name" => "1.91.4",
}
)
upload_to_play_store(skip_upload_apk: true, skip_upload_images: true, skip_upload_screenshots: true, aab: '../build/app/outputs/bundle/release/app-release.aab')

View File

@@ -19,7 +19,7 @@ platform :ios do
desc "iOS Beta"
lane :beta do
increment_version_number(
version_number: "1.91.3"
version_number: "1.91.4"
)
increment_build_number(
build_number: latest_testflight_build_number + 1,

View File

@@ -14,7 +14,7 @@ const List<Locale> locales = [
Locale('ja', 'JP'),
Locale('pl', 'PL'),
Locale('fi', 'FI'),
Locale('pt', 'PR'),
Locale('pt', 'PT'),
Locale('cs', 'CZ'),
Locale('uk', 'UA'),
Locale('ru', 'RU'),

View File

@@ -179,18 +179,18 @@ class GalleryViewerPage extends HookConsumerWidget {
barrierColor: Colors.transparent,
backgroundColor: Colors.transparent,
isScrollControlled: true,
useSafeArea: true,
context: context,
builder: (context) {
if (ref
.watch(appSettingsServiceProvider)
.getSetting<bool>(AppSettingsEnum.advancedTroubleshooting)) {
return AdvancedBottomSheet(assetDetail: asset());
}
return Padding(
padding: EdgeInsets.only(
bottom: MediaQuery.of(context).viewInsets.bottom,
bottom: MediaQuery.viewInsetsOf(context).bottom,
),
child: ExifBottomSheet(asset: asset()),
child: ref
.watch(appSettingsServiceProvider)
.getSetting<bool>(AppSettingsEnum.advancedTroubleshooting)
? AdvancedBottomSheet(assetDetail: asset())
: ExifBottomSheet(asset: asset()),
);
},
);
@@ -795,8 +795,8 @@ class GalleryViewerPage extends HookConsumerWidget {
imageProvider: provider,
heroAttributes: PhotoViewHeroAttributes(
tag: isFromDto
? '${a.remoteId}-$heroOffset'
: a.id + heroOffset,
? '${currentAsset.remoteId}-$heroOffset'
: currentAsset.id + heroOffset,
transitionOnUserGestures: true,
),
filterQuality: FilterQuality.high,
@@ -815,8 +815,8 @@ class GalleryViewerPage extends HookConsumerWidget {
handleSwipeUpDown(details),
heroAttributes: PhotoViewHeroAttributes(
tag: isFromDto
? '${a.remoteId}-$heroOffset'
: a.id + heroOffset,
? '${currentAsset.remoteId}-$heroOffset'
: currentAsset.id + heroOffset,
),
filterQuality: FilterQuality.high,
maxScale: 1.0,

View File

@@ -394,8 +394,12 @@ class BackupService {
continue;
} finally {
if (Platform.isIOS) {
file?.deleteSync();
livePhotoFile?.deleteSync();
try {
await file?.delete();
await livePhotoFile?.delete();
} catch (e) {
debugPrint("ERROR deleting file: ${e.toString()}");
}
}
}
}

View File

@@ -50,7 +50,6 @@ doc/CQMode.md
doc/ChangePasswordDto.md
doc/CheckExistingAssetsDto.md
doc/CheckExistingAssetsResponseDto.md
doc/ClassificationConfig.md
doc/Colorspace.md
doc/CreateAlbumDto.md
doc/CreateLibraryDto.md
@@ -90,6 +89,7 @@ doc/MapMarkerResponseDto.md
doc/MapTheme.md
doc/MemoryLaneResponseDto.md
doc/MergePersonDto.md
doc/MetricsApi.md
doc/ModelType.md
doc/OAuthApi.md
doc/OAuthAuthorizeResponseDto.md
@@ -146,6 +146,7 @@ doc/SystemConfigLibraryScanDto.md
doc/SystemConfigLoggingDto.md
doc/SystemConfigMachineLearningDto.md
doc/SystemConfigMapDto.md
doc/SystemConfigMetricsDto.md
doc/SystemConfigNewVersionCheckDto.md
doc/SystemConfigOAuthDto.md
doc/SystemConfigPasswordLoginDto.md
@@ -189,6 +190,7 @@ lib/api/authentication_api.dart
lib/api/face_api.dart
lib/api/job_api.dart
lib/api/library_api.dart
lib/api/metrics_api.dart
lib/api/o_auth_api.dart
lib/api/partner_api.dart
lib/api/person_api.dart
@@ -244,7 +246,6 @@ lib/model/bulk_ids_dto.dart
lib/model/change_password_dto.dart
lib/model/check_existing_assets_dto.dart
lib/model/check_existing_assets_response_dto.dart
lib/model/classification_config.dart
lib/model/clip_config.dart
lib/model/clip_mode.dart
lib/model/colorspace.dart
@@ -333,6 +334,7 @@ lib/model/system_config_library_scan_dto.dart
lib/model/system_config_logging_dto.dart
lib/model/system_config_machine_learning_dto.dart
lib/model/system_config_map_dto.dart
lib/model/system_config_metrics_dto.dart
lib/model/system_config_new_version_check_dto.dart
lib/model/system_config_o_auth_dto.dart
lib/model/system_config_password_login_dto.dart
@@ -408,7 +410,6 @@ test/bulk_ids_dto_test.dart
test/change_password_dto_test.dart
test/check_existing_assets_dto_test.dart
test/check_existing_assets_response_dto_test.dart
test/classification_config_test.dart
test/clip_config_test.dart
test/clip_mode_test.dart
test/colorspace_test.dart
@@ -451,6 +452,7 @@ test/map_marker_response_dto_test.dart
test/map_theme_test.dart
test/memory_lane_response_dto_test.dart
test/merge_person_dto_test.dart
test/metrics_api_test.dart
test/model_type_test.dart
test/o_auth_api_test.dart
test/o_auth_authorize_response_dto_test.dart
@@ -507,6 +509,7 @@ test/system_config_library_scan_dto_test.dart
test/system_config_logging_dto_test.dart
test/system_config_machine_learning_dto_test.dart
test/system_config_map_dto_test.dart
test/system_config_metrics_dto_test.dart
test/system_config_new_version_check_dto_test.dart
test/system_config_o_auth_dto_test.dart
test/system_config_password_login_dto_test.dart

View File

@@ -3,7 +3,7 @@ Immich API
This Dart package is automatically generated by the [OpenAPI Generator](https://openapi-generator.tech) project:
- API version: 1.91.3
- API version: 1.91.4
- Build package: org.openapitools.codegen.languages.DartClientCodegen
## Requirements
@@ -145,6 +145,7 @@ Class | Method | HTTP request | Description
*LibraryApi* | [**removeOfflineFiles**](doc//LibraryApi.md#removeofflinefiles) | **POST** /library/{id}/removeOffline |
*LibraryApi* | [**scanLibrary**](doc//LibraryApi.md#scanlibrary) | **POST** /library/{id}/scan |
*LibraryApi* | [**updateLibrary**](doc//LibraryApi.md#updatelibrary) | **PUT** /library/{id} |
*MetricsApi* | [**getMetrics**](doc//MetricsApi.md#getmetrics) | **GET** /metrics |
*OAuthApi* | [**finishOAuth**](doc//OAuthApi.md#finishoauth) | **POST** /oauth/callback |
*OAuthApi* | [**generateOAuthConfig**](doc//OAuthApi.md#generateoauthconfig) | **POST** /oauth/config |
*OAuthApi* | [**linkOAuthAccount**](doc//OAuthApi.md#linkoauthaccount) | **POST** /oauth/link |
@@ -252,7 +253,6 @@ Class | Method | HTTP request | Description
- [ChangePasswordDto](doc//ChangePasswordDto.md)
- [CheckExistingAssetsDto](doc//CheckExistingAssetsDto.md)
- [CheckExistingAssetsResponseDto](doc//CheckExistingAssetsResponseDto.md)
- [ClassificationConfig](doc//ClassificationConfig.md)
- [Colorspace](doc//Colorspace.md)
- [CreateAlbumDto](doc//CreateAlbumDto.md)
- [CreateLibraryDto](doc//CreateLibraryDto.md)
@@ -338,6 +338,7 @@ Class | Method | HTTP request | Description
- [SystemConfigLoggingDto](doc//SystemConfigLoggingDto.md)
- [SystemConfigMachineLearningDto](doc//SystemConfigMachineLearningDto.md)
- [SystemConfigMapDto](doc//SystemConfigMapDto.md)
- [SystemConfigMetricsDto](doc//SystemConfigMetricsDto.md)
- [SystemConfigNewVersionCheckDto](doc//SystemConfigNewVersionCheckDto.md)
- [SystemConfigOAuthDto](doc//SystemConfigOAuthDto.md)
- [SystemConfigPasswordLoginDto](doc//SystemConfigPasswordLoginDto.md)

View File

@@ -12,7 +12,6 @@ Name | Type | Description | Notes
**library_** | [**JobStatusDto**](JobStatusDto.md) | |
**metadataExtraction** | [**JobStatusDto**](JobStatusDto.md) | |
**migration** | [**JobStatusDto**](JobStatusDto.md) | |
**objectTagging** | [**JobStatusDto**](JobStatusDto.md) | |
**recognizeFaces** | [**JobStatusDto**](JobStatusDto.md) | |
**search** | [**JobStatusDto**](JobStatusDto.md) | |
**sidecar** | [**JobStatusDto**](JobStatusDto.md) | |

65
mobile/openapi/doc/MetricsApi.md generated Normal file
View File

@@ -0,0 +1,65 @@
# openapi.api.MetricsApi
## Load the API package
```dart
import 'package:openapi/api.dart';
```
All URIs are relative to */api*
Method | HTTP request | Description
------------- | ------------- | -------------
[**getMetrics**](MetricsApi.md#getmetrics) | **GET** /metrics |
# **getMetrics**
> Object getMetrics()
### Example
```dart
import 'package:openapi/api.dart';
// TODO Configure API key authorization: cookie
//defaultApiClient.getAuthentication<ApiKeyAuth>('cookie').apiKey = 'YOUR_API_KEY';
// uncomment below to setup prefix (e.g. Bearer) for API key, if needed
//defaultApiClient.getAuthentication<ApiKeyAuth>('cookie').apiKeyPrefix = 'Bearer';
// TODO Configure API key authorization: api_key
//defaultApiClient.getAuthentication<ApiKeyAuth>('api_key').apiKey = 'YOUR_API_KEY';
// uncomment below to setup prefix (e.g. Bearer) for API key, if needed
//defaultApiClient.getAuthentication<ApiKeyAuth>('api_key').apiKeyPrefix = 'Bearer';
// TODO Configure HTTP Bearer authorization: bearer
// Case 1. Use String Token
//defaultApiClient.getAuthentication<HttpBearerAuth>('bearer').setAccessToken('YOUR_ACCESS_TOKEN');
// Case 2. Use Function which generate token.
// String yourTokenGeneratorFunction() { ... }
//defaultApiClient.getAuthentication<HttpBearerAuth>('bearer').setAccessToken(yourTokenGeneratorFunction);
final api_instance = MetricsApi();
try {
final result = api_instance.getMetrics();
print(result);
} catch (e) {
print('Exception when calling MetricsApi->getMetrics: $e\n');
}
```
### Parameters
This endpoint does not need any parameter.
### Return type
[**Object**](Object.md)
### Authorization
[cookie](../README.md#cookie), [api_key](../README.md#api_key), [bearer](../README.md#bearer)
### HTTP request headers
- **Content-Type**: Not defined
- **Accept**: application/json
[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)

View File

@@ -12,13 +12,13 @@ Name | Type | Description | Notes
**configFile** | **bool** | |
**facialRecognition** | **bool** | |
**map** | **bool** | |
**metrics** | **bool** | |
**oauth** | **bool** | |
**oauthAutoLaunch** | **bool** | |
**passwordLogin** | **bool** | |
**reverseGeocoding** | **bool** | |
**search** | **bool** | |
**sidecar** | **bool** | |
**tagImage** | **bool** | |
**trash** | **bool** | |
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)

View File

@@ -14,6 +14,7 @@ Name | Type | Description | Notes
**logging** | [**SystemConfigLoggingDto**](SystemConfigLoggingDto.md) | |
**machineLearning** | [**SystemConfigMachineLearningDto**](SystemConfigMachineLearningDto.md) | |
**map** | [**SystemConfigMapDto**](SystemConfigMapDto.md) | |
**metrics** | [**SystemConfigMetricsDto**](SystemConfigMetricsDto.md) | |
**newVersionCheck** | [**SystemConfigNewVersionCheckDto**](SystemConfigNewVersionCheckDto.md) | |
**oauth** | [**SystemConfigOAuthDto**](SystemConfigOAuthDto.md) | |
**passwordLogin** | [**SystemConfigPasswordLoginDto**](SystemConfigPasswordLoginDto.md) | |

View File

@@ -12,7 +12,6 @@ Name | Type | Description | Notes
**library_** | [**JobSettingsDto**](JobSettingsDto.md) | |
**metadataExtraction** | [**JobSettingsDto**](JobSettingsDto.md) | |
**migration** | [**JobSettingsDto**](JobSettingsDto.md) | |
**objectTagging** | [**JobSettingsDto**](JobSettingsDto.md) | |
**recognizeFaces** | [**JobSettingsDto**](JobSettingsDto.md) | |
**search** | [**JobSettingsDto**](JobSettingsDto.md) | |
**sidecar** | [**JobSettingsDto**](JobSettingsDto.md) | |

View File

@@ -8,7 +8,6 @@ import 'package:openapi/api.dart';
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**classification** | [**ClassificationConfig**](ClassificationConfig.md) | |
**clip** | [**CLIPConfig**](CLIPConfig.md) | |
**enabled** | **bool** | |
**facialRecognition** | [**RecognitionConfig**](RecognitionConfig.md) | |

View File

@@ -1,4 +1,4 @@
# openapi.model.ClassificationConfig
# openapi.model.SystemConfigMetricsDto
## Load the model package
```dart
@@ -9,9 +9,6 @@ import 'package:openapi/api.dart';
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**enabled** | **bool** | |
**minScore** | **int** | |
**modelName** | **String** | |
**modelType** | [**ModelType**](ModelType.md) | | [optional]
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)

View File

@@ -37,6 +37,7 @@ part 'api/authentication_api.dart';
part 'api/face_api.dart';
part 'api/job_api.dart';
part 'api/library_api.dart';
part 'api/metrics_api.dart';
part 'api/o_auth_api.dart';
part 'api/partner_api.dart';
part 'api/person_api.dart';
@@ -88,7 +89,6 @@ part 'model/cq_mode.dart';
part 'model/change_password_dto.dart';
part 'model/check_existing_assets_dto.dart';
part 'model/check_existing_assets_response_dto.dart';
part 'model/classification_config.dart';
part 'model/colorspace.dart';
part 'model/create_album_dto.dart';
part 'model/create_library_dto.dart';
@@ -174,6 +174,7 @@ part 'model/system_config_library_scan_dto.dart';
part 'model/system_config_logging_dto.dart';
part 'model/system_config_machine_learning_dto.dart';
part 'model/system_config_map_dto.dart';
part 'model/system_config_metrics_dto.dart';
part 'model/system_config_new_version_check_dto.dart';
part 'model/system_config_o_auth_dto.dart';
part 'model/system_config_password_login_dto.dart';

59
mobile/openapi/lib/api/metrics_api.dart generated Normal file
View File

@@ -0,0 +1,59 @@
//
// AUTO-GENERATED FILE, DO NOT MODIFY!
//
// @dart=2.12
// ignore_for_file: unused_element, unused_import
// ignore_for_file: always_put_required_named_parameters_first
// ignore_for_file: constant_identifier_names
// ignore_for_file: lines_longer_than_80_chars
part of openapi.api;
class MetricsApi {
MetricsApi([ApiClient? apiClient]) : apiClient = apiClient ?? defaultApiClient;
final ApiClient apiClient;
/// Performs an HTTP 'GET /metrics' operation and returns the [Response].
Future<Response> getMetricsWithHttpInfo() async {
// ignore: prefer_const_declarations
final path = r'/metrics';
// ignore: prefer_final_locals
Object? postBody;
final queryParams = <QueryParam>[];
final headerParams = <String, String>{};
final formParams = <String, String>{};
const contentTypes = <String>[];
return apiClient.invokeAPI(
path,
'GET',
queryParams,
postBody,
headerParams,
formParams,
contentTypes.isEmpty ? null : contentTypes.first,
);
}
Future<Object?> getMetrics() async {
final response = await getMetricsWithHttpInfo();
if (response.statusCode >= HttpStatus.badRequest) {
throw ApiException(response.statusCode, await _decodeBodyBytes(response));
}
// When a remote server returns no body with a status of 204, we shall not decode it.
// At the time of writing this, `dart:convert` will throw an "Unexpected end of input"
// FormatException when trying to decode an empty string.
if (response.body.isNotEmpty && response.statusCode != HttpStatus.noContent) {
return await apiClient.deserializeAsync(await _decodeBodyBytes(response), 'Object',) as Object;
}
return null;
}
}

View File

@@ -263,8 +263,6 @@ class ApiClient {
return CheckExistingAssetsDto.fromJson(value);
case 'CheckExistingAssetsResponseDto':
return CheckExistingAssetsResponseDto.fromJson(value);
case 'ClassificationConfig':
return ClassificationConfig.fromJson(value);
case 'Colorspace':
return ColorspaceTypeTransformer().decode(value);
case 'CreateAlbumDto':
@@ -435,6 +433,8 @@ class ApiClient {
return SystemConfigMachineLearningDto.fromJson(value);
case 'SystemConfigMapDto':
return SystemConfigMapDto.fromJson(value);
case 'SystemConfigMetricsDto':
return SystemConfigMetricsDto.fromJson(value);
case 'SystemConfigNewVersionCheckDto':
return SystemConfigNewVersionCheckDto.fromJson(value);
case 'SystemConfigOAuthDto':

View File

@@ -17,7 +17,6 @@ class AllJobStatusResponseDto {
required this.library_,
required this.metadataExtraction,
required this.migration,
required this.objectTagging,
required this.recognizeFaces,
required this.search,
required this.sidecar,
@@ -35,8 +34,6 @@ class AllJobStatusResponseDto {
JobStatusDto migration;
JobStatusDto objectTagging;
JobStatusDto recognizeFaces;
JobStatusDto search;
@@ -57,7 +54,6 @@ class AllJobStatusResponseDto {
other.library_ == library_ &&
other.metadataExtraction == metadataExtraction &&
other.migration == migration &&
other.objectTagging == objectTagging &&
other.recognizeFaces == recognizeFaces &&
other.search == search &&
other.sidecar == sidecar &&
@@ -73,7 +69,6 @@ class AllJobStatusResponseDto {
(library_.hashCode) +
(metadataExtraction.hashCode) +
(migration.hashCode) +
(objectTagging.hashCode) +
(recognizeFaces.hashCode) +
(search.hashCode) +
(sidecar.hashCode) +
@@ -83,7 +78,7 @@ class AllJobStatusResponseDto {
(videoConversion.hashCode);
@override
String toString() => 'AllJobStatusResponseDto[backgroundTask=$backgroundTask, library_=$library_, metadataExtraction=$metadataExtraction, migration=$migration, objectTagging=$objectTagging, recognizeFaces=$recognizeFaces, search=$search, sidecar=$sidecar, smartSearch=$smartSearch, storageTemplateMigration=$storageTemplateMigration, thumbnailGeneration=$thumbnailGeneration, videoConversion=$videoConversion]';
String toString() => 'AllJobStatusResponseDto[backgroundTask=$backgroundTask, library_=$library_, metadataExtraction=$metadataExtraction, migration=$migration, recognizeFaces=$recognizeFaces, search=$search, sidecar=$sidecar, smartSearch=$smartSearch, storageTemplateMigration=$storageTemplateMigration, thumbnailGeneration=$thumbnailGeneration, videoConversion=$videoConversion]';
Map<String, dynamic> toJson() {
final json = <String, dynamic>{};
@@ -91,7 +86,6 @@ class AllJobStatusResponseDto {
json[r'library'] = this.library_;
json[r'metadataExtraction'] = this.metadataExtraction;
json[r'migration'] = this.migration;
json[r'objectTagging'] = this.objectTagging;
json[r'recognizeFaces'] = this.recognizeFaces;
json[r'search'] = this.search;
json[r'sidecar'] = this.sidecar;
@@ -114,7 +108,6 @@ class AllJobStatusResponseDto {
library_: JobStatusDto.fromJson(json[r'library'])!,
metadataExtraction: JobStatusDto.fromJson(json[r'metadataExtraction'])!,
migration: JobStatusDto.fromJson(json[r'migration'])!,
objectTagging: JobStatusDto.fromJson(json[r'objectTagging'])!,
recognizeFaces: JobStatusDto.fromJson(json[r'recognizeFaces'])!,
search: JobStatusDto.fromJson(json[r'search'])!,
sidecar: JobStatusDto.fromJson(json[r'sidecar'])!,
@@ -173,7 +166,6 @@ class AllJobStatusResponseDto {
'library',
'metadataExtraction',
'migration',
'objectTagging',
'recognizeFaces',
'search',
'sidecar',

View File

@@ -1,131 +0,0 @@
//
// AUTO-GENERATED FILE, DO NOT MODIFY!
//
// @dart=2.12
// ignore_for_file: unused_element, unused_import
// ignore_for_file: always_put_required_named_parameters_first
// ignore_for_file: constant_identifier_names
// ignore_for_file: lines_longer_than_80_chars
part of openapi.api;
class ClassificationConfig {
/// Returns a new [ClassificationConfig] instance.
ClassificationConfig({
required this.enabled,
required this.minScore,
required this.modelName,
this.modelType,
});
bool enabled;
int minScore;
String modelName;
///
/// Please note: This property should have been non-nullable! Since the specification file
/// does not include a default value (using the "default:" property), however, the generated
/// source code must fall back to having a nullable type.
/// Consider adding a "default:" property in the specification file to hide this note.
///
ModelType? modelType;
@override
bool operator ==(Object other) => identical(this, other) || other is ClassificationConfig &&
other.enabled == enabled &&
other.minScore == minScore &&
other.modelName == modelName &&
other.modelType == modelType;
@override
int get hashCode =>
// ignore: unnecessary_parenthesis
(enabled.hashCode) +
(minScore.hashCode) +
(modelName.hashCode) +
(modelType == null ? 0 : modelType!.hashCode);
@override
String toString() => 'ClassificationConfig[enabled=$enabled, minScore=$minScore, modelName=$modelName, modelType=$modelType]';
Map<String, dynamic> toJson() {
final json = <String, dynamic>{};
json[r'enabled'] = this.enabled;
json[r'minScore'] = this.minScore;
json[r'modelName'] = this.modelName;
if (this.modelType != null) {
json[r'modelType'] = this.modelType;
} else {
// json[r'modelType'] = null;
}
return json;
}
/// Returns a new [ClassificationConfig] instance and imports its values from
/// [value] if it's a [Map], null otherwise.
// ignore: prefer_constructors_over_static_methods
static ClassificationConfig? fromJson(dynamic value) {
if (value is Map) {
final json = value.cast<String, dynamic>();
return ClassificationConfig(
enabled: mapValueOfType<bool>(json, r'enabled')!,
minScore: mapValueOfType<int>(json, r'minScore')!,
modelName: mapValueOfType<String>(json, r'modelName')!,
modelType: ModelType.fromJson(json[r'modelType']),
);
}
return null;
}
static List<ClassificationConfig> listFromJson(dynamic json, {bool growable = false,}) {
final result = <ClassificationConfig>[];
if (json is List && json.isNotEmpty) {
for (final row in json) {
final value = ClassificationConfig.fromJson(row);
if (value != null) {
result.add(value);
}
}
}
return result.toList(growable: growable);
}
static Map<String, ClassificationConfig> mapFromJson(dynamic json) {
final map = <String, ClassificationConfig>{};
if (json is Map && json.isNotEmpty) {
json = json.cast<String, dynamic>(); // ignore: parameter_assignments
for (final entry in json.entries) {
final value = ClassificationConfig.fromJson(entry.value);
if (value != null) {
map[entry.key] = value;
}
}
}
return map;
}
// maps a json object with a list of ClassificationConfig-objects as value to a dart map
static Map<String, List<ClassificationConfig>> mapListFromJson(dynamic json, {bool growable = false,}) {
final map = <String, List<ClassificationConfig>>{};
if (json is Map && json.isNotEmpty) {
// ignore: parameter_assignments
json = json.cast<String, dynamic>();
for (final entry in json.entries) {
map[entry.key] = ClassificationConfig.listFromJson(entry.value, growable: growable,);
}
}
return map;
}
/// The list of required keys that must be present in a JSON.
static const requiredKeys = <String>{
'enabled',
'minScore',
'modelName',
};
}

View File

@@ -26,7 +26,6 @@ class JobName {
static const thumbnailGeneration = JobName._(r'thumbnailGeneration');
static const metadataExtraction = JobName._(r'metadataExtraction');
static const videoConversion = JobName._(r'videoConversion');
static const objectTagging = JobName._(r'objectTagging');
static const recognizeFaces = JobName._(r'recognizeFaces');
static const smartSearch = JobName._(r'smartSearch');
static const backgroundTask = JobName._(r'backgroundTask');
@@ -41,7 +40,6 @@ class JobName {
thumbnailGeneration,
metadataExtraction,
videoConversion,
objectTagging,
recognizeFaces,
smartSearch,
backgroundTask,
@@ -91,7 +89,6 @@ class JobNameTypeTransformer {
case r'thumbnailGeneration': return JobName.thumbnailGeneration;
case r'metadataExtraction': return JobName.metadataExtraction;
case r'videoConversion': return JobName.videoConversion;
case r'objectTagging': return JobName.objectTagging;
case r'recognizeFaces': return JobName.recognizeFaces;
case r'smartSearch': return JobName.smartSearch;
case r'backgroundTask': return JobName.backgroundTask;

View File

@@ -23,13 +23,11 @@ class ModelType {
String toJson() => value;
static const imageClassification = ModelType._(r'image-classification');
static const facialRecognition = ModelType._(r'facial-recognition');
static const clip = ModelType._(r'clip');
/// List of all possible values in this [enum][ModelType].
static const values = <ModelType>[
imageClassification,
facialRecognition,
clip,
];
@@ -70,7 +68,6 @@ class ModelTypeTypeTransformer {
ModelType? decode(dynamic data, {bool allowNull = true}) {
if (data != null) {
switch (data) {
case r'image-classification': return ModelType.imageClassification;
case r'facial-recognition': return ModelType.facialRecognition;
case r'clip': return ModelType.clip;
default:

View File

@@ -17,13 +17,13 @@ class ServerFeaturesDto {
required this.configFile,
required this.facialRecognition,
required this.map,
required this.metrics,
required this.oauth,
required this.oauthAutoLaunch,
required this.passwordLogin,
required this.reverseGeocoding,
required this.search,
required this.sidecar,
required this.tagImage,
required this.trash,
});
@@ -35,6 +35,8 @@ class ServerFeaturesDto {
bool map;
bool metrics;
bool oauth;
bool oauthAutoLaunch;
@@ -47,8 +49,6 @@ class ServerFeaturesDto {
bool sidecar;
bool tagImage;
bool trash;
@override
@@ -57,13 +57,13 @@ class ServerFeaturesDto {
other.configFile == configFile &&
other.facialRecognition == facialRecognition &&
other.map == map &&
other.metrics == metrics &&
other.oauth == oauth &&
other.oauthAutoLaunch == oauthAutoLaunch &&
other.passwordLogin == passwordLogin &&
other.reverseGeocoding == reverseGeocoding &&
other.search == search &&
other.sidecar == sidecar &&
other.tagImage == tagImage &&
other.trash == trash;
@override
@@ -73,17 +73,17 @@ class ServerFeaturesDto {
(configFile.hashCode) +
(facialRecognition.hashCode) +
(map.hashCode) +
(metrics.hashCode) +
(oauth.hashCode) +
(oauthAutoLaunch.hashCode) +
(passwordLogin.hashCode) +
(reverseGeocoding.hashCode) +
(search.hashCode) +
(sidecar.hashCode) +
(tagImage.hashCode) +
(trash.hashCode);
@override
String toString() => 'ServerFeaturesDto[clipEncode=$clipEncode, configFile=$configFile, facialRecognition=$facialRecognition, map=$map, oauth=$oauth, oauthAutoLaunch=$oauthAutoLaunch, passwordLogin=$passwordLogin, reverseGeocoding=$reverseGeocoding, search=$search, sidecar=$sidecar, tagImage=$tagImage, trash=$trash]';
String toString() => 'ServerFeaturesDto[clipEncode=$clipEncode, configFile=$configFile, facialRecognition=$facialRecognition, map=$map, metrics=$metrics, oauth=$oauth, oauthAutoLaunch=$oauthAutoLaunch, passwordLogin=$passwordLogin, reverseGeocoding=$reverseGeocoding, search=$search, sidecar=$sidecar, trash=$trash]';
Map<String, dynamic> toJson() {
final json = <String, dynamic>{};
@@ -91,13 +91,13 @@ class ServerFeaturesDto {
json[r'configFile'] = this.configFile;
json[r'facialRecognition'] = this.facialRecognition;
json[r'map'] = this.map;
json[r'metrics'] = this.metrics;
json[r'oauth'] = this.oauth;
json[r'oauthAutoLaunch'] = this.oauthAutoLaunch;
json[r'passwordLogin'] = this.passwordLogin;
json[r'reverseGeocoding'] = this.reverseGeocoding;
json[r'search'] = this.search;
json[r'sidecar'] = this.sidecar;
json[r'tagImage'] = this.tagImage;
json[r'trash'] = this.trash;
return json;
}
@@ -114,13 +114,13 @@ class ServerFeaturesDto {
configFile: mapValueOfType<bool>(json, r'configFile')!,
facialRecognition: mapValueOfType<bool>(json, r'facialRecognition')!,
map: mapValueOfType<bool>(json, r'map')!,
metrics: mapValueOfType<bool>(json, r'metrics')!,
oauth: mapValueOfType<bool>(json, r'oauth')!,
oauthAutoLaunch: mapValueOfType<bool>(json, r'oauthAutoLaunch')!,
passwordLogin: mapValueOfType<bool>(json, r'passwordLogin')!,
reverseGeocoding: mapValueOfType<bool>(json, r'reverseGeocoding')!,
search: mapValueOfType<bool>(json, r'search')!,
sidecar: mapValueOfType<bool>(json, r'sidecar')!,
tagImage: mapValueOfType<bool>(json, r'tagImage')!,
trash: mapValueOfType<bool>(json, r'trash')!,
);
}
@@ -173,13 +173,13 @@ class ServerFeaturesDto {
'configFile',
'facialRecognition',
'map',
'metrics',
'oauth',
'oauthAutoLaunch',
'passwordLogin',
'reverseGeocoding',
'search',
'sidecar',
'tagImage',
'trash',
};
}
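
The metrics flag is now exposed through ServerFeaturesDto, so a client can tell whether the server is sharing metrics and gate UI on it. A minimal sketch, assuming the generated ServerInfoApi.getServerFeatures() call exists elsewhere in this client (it is not part of this diff) and that the default ApiClient is already authenticated:

import 'package:openapi/api.dart';

// Sketch only: ServerInfoApi/getServerFeatures are assumed from the rest of
// the generated client; credential setup is also assumed.
Future<void> showMetricsIndicatorIfEnabled() async {
  final features = await ServerInfoApi().getServerFeatures();
  if (features?.metrics == true) {
    // e.g. surface an indicator in the client UI
    print('metrics sharing is enabled on this server');
  }
}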

View File

@@ -19,6 +19,7 @@ class SystemConfigDto {
required this.logging,
required this.machineLearning,
required this.map,
required this.metrics,
required this.newVersionCheck,
required this.oauth,
required this.passwordLogin,
@@ -41,6 +42,8 @@ class SystemConfigDto {
SystemConfigMapDto map;
SystemConfigMetricsDto metrics;
SystemConfigNewVersionCheckDto newVersionCheck;
SystemConfigOAuthDto oauth;
@@ -65,6 +68,7 @@ class SystemConfigDto {
other.logging == logging &&
other.machineLearning == machineLearning &&
other.map == map &&
other.metrics == metrics &&
other.newVersionCheck == newVersionCheck &&
other.oauth == oauth &&
other.passwordLogin == passwordLogin &&
@@ -83,6 +87,7 @@ class SystemConfigDto {
(logging.hashCode) +
(machineLearning.hashCode) +
(map.hashCode) +
(metrics.hashCode) +
(newVersionCheck.hashCode) +
(oauth.hashCode) +
(passwordLogin.hashCode) +
@@ -93,7 +98,7 @@ class SystemConfigDto {
(trash.hashCode);
@override
String toString() => 'SystemConfigDto[ffmpeg=$ffmpeg, job=$job, library_=$library_, logging=$logging, machineLearning=$machineLearning, map=$map, newVersionCheck=$newVersionCheck, oauth=$oauth, passwordLogin=$passwordLogin, reverseGeocoding=$reverseGeocoding, storageTemplate=$storageTemplate, theme=$theme, thumbnail=$thumbnail, trash=$trash]';
String toString() => 'SystemConfigDto[ffmpeg=$ffmpeg, job=$job, library_=$library_, logging=$logging, machineLearning=$machineLearning, map=$map, metrics=$metrics, newVersionCheck=$newVersionCheck, oauth=$oauth, passwordLogin=$passwordLogin, reverseGeocoding=$reverseGeocoding, storageTemplate=$storageTemplate, theme=$theme, thumbnail=$thumbnail, trash=$trash]';
Map<String, dynamic> toJson() {
final json = <String, dynamic>{};
@@ -103,6 +108,7 @@ class SystemConfigDto {
json[r'logging'] = this.logging;
json[r'machineLearning'] = this.machineLearning;
json[r'map'] = this.map;
json[r'metrics'] = this.metrics;
json[r'newVersionCheck'] = this.newVersionCheck;
json[r'oauth'] = this.oauth;
json[r'passwordLogin'] = this.passwordLogin;
@@ -128,6 +134,7 @@ class SystemConfigDto {
logging: SystemConfigLoggingDto.fromJson(json[r'logging'])!,
machineLearning: SystemConfigMachineLearningDto.fromJson(json[r'machineLearning'])!,
map: SystemConfigMapDto.fromJson(json[r'map'])!,
metrics: SystemConfigMetricsDto.fromJson(json[r'metrics'])!,
newVersionCheck: SystemConfigNewVersionCheckDto.fromJson(json[r'newVersionCheck'])!,
oauth: SystemConfigOAuthDto.fromJson(json[r'oauth'])!,
passwordLogin: SystemConfigPasswordLoginDto.fromJson(json[r'passwordLogin'])!,
@@ -189,6 +196,7 @@ class SystemConfigDto {
'logging',
'machineLearning',
'map',
'metrics',
'newVersionCheck',
'oauth',
'passwordLogin',
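
SystemConfigDto now carries the metrics settings as well, so the opt-in can be toggled through the regular system-config endpoints. A rough sketch, assuming the generated SystemConfigApi exposes getConfig()/updateConfig() in the same style as the rest of this client:

import 'package:openapi/api.dart';

// Sketch only: getConfig()/updateConfig() are assumed and not shown in this
// diff; error handling and authentication setup are omitted.
Future<void> enableMetrics() async {
  final api = SystemConfigApi();
  final config = await api.getConfig();
  if (config == null) {
    return;
  }
  config.metrics.enabled = true; // opt in to sharing metrics
  await api.updateConfig(config);
}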

View File

@@ -17,7 +17,6 @@ class SystemConfigJobDto {
required this.library_,
required this.metadataExtraction,
required this.migration,
required this.objectTagging,
required this.recognizeFaces,
required this.search,
required this.sidecar,
@@ -35,8 +34,6 @@ class SystemConfigJobDto {
JobSettingsDto migration;
JobSettingsDto objectTagging;
JobSettingsDto recognizeFaces;
JobSettingsDto search;
@@ -57,7 +54,6 @@ class SystemConfigJobDto {
other.library_ == library_ &&
other.metadataExtraction == metadataExtraction &&
other.migration == migration &&
other.objectTagging == objectTagging &&
other.recognizeFaces == recognizeFaces &&
other.search == search &&
other.sidecar == sidecar &&
@@ -73,7 +69,6 @@ class SystemConfigJobDto {
(library_.hashCode) +
(metadataExtraction.hashCode) +
(migration.hashCode) +
(objectTagging.hashCode) +
(recognizeFaces.hashCode) +
(search.hashCode) +
(sidecar.hashCode) +
@@ -83,7 +78,7 @@ class SystemConfigJobDto {
(videoConversion.hashCode);
@override
String toString() => 'SystemConfigJobDto[backgroundTask=$backgroundTask, library_=$library_, metadataExtraction=$metadataExtraction, migration=$migration, objectTagging=$objectTagging, recognizeFaces=$recognizeFaces, search=$search, sidecar=$sidecar, smartSearch=$smartSearch, storageTemplateMigration=$storageTemplateMigration, thumbnailGeneration=$thumbnailGeneration, videoConversion=$videoConversion]';
String toString() => 'SystemConfigJobDto[backgroundTask=$backgroundTask, library_=$library_, metadataExtraction=$metadataExtraction, migration=$migration, recognizeFaces=$recognizeFaces, search=$search, sidecar=$sidecar, smartSearch=$smartSearch, storageTemplateMigration=$storageTemplateMigration, thumbnailGeneration=$thumbnailGeneration, videoConversion=$videoConversion]';
Map<String, dynamic> toJson() {
final json = <String, dynamic>{};
@@ -91,7 +86,6 @@ class SystemConfigJobDto {
json[r'library'] = this.library_;
json[r'metadataExtraction'] = this.metadataExtraction;
json[r'migration'] = this.migration;
json[r'objectTagging'] = this.objectTagging;
json[r'recognizeFaces'] = this.recognizeFaces;
json[r'search'] = this.search;
json[r'sidecar'] = this.sidecar;
@@ -114,7 +108,6 @@ class SystemConfigJobDto {
library_: JobSettingsDto.fromJson(json[r'library'])!,
metadataExtraction: JobSettingsDto.fromJson(json[r'metadataExtraction'])!,
migration: JobSettingsDto.fromJson(json[r'migration'])!,
objectTagging: JobSettingsDto.fromJson(json[r'objectTagging'])!,
recognizeFaces: JobSettingsDto.fromJson(json[r'recognizeFaces'])!,
search: JobSettingsDto.fromJson(json[r'search'])!,
sidecar: JobSettingsDto.fromJson(json[r'sidecar'])!,
@@ -173,7 +166,6 @@ class SystemConfigJobDto {
'library',
'metadataExtraction',
'migration',
'objectTagging',
'recognizeFaces',
'search',
'sidecar',

View File

@@ -13,15 +13,12 @@ part of openapi.api;
class SystemConfigMachineLearningDto {
/// Returns a new [SystemConfigMachineLearningDto] instance.
SystemConfigMachineLearningDto({
required this.classification,
required this.clip,
required this.enabled,
required this.facialRecognition,
required this.url,
});
ClassificationConfig classification;
CLIPConfig clip;
bool enabled;
@@ -32,7 +29,6 @@ class SystemConfigMachineLearningDto {
@override
bool operator ==(Object other) => identical(this, other) || other is SystemConfigMachineLearningDto &&
other.classification == classification &&
other.clip == clip &&
other.enabled == enabled &&
other.facialRecognition == facialRecognition &&
@@ -41,18 +37,16 @@ class SystemConfigMachineLearningDto {
@override
int get hashCode =>
// ignore: unnecessary_parenthesis
(classification.hashCode) +
(clip.hashCode) +
(enabled.hashCode) +
(facialRecognition.hashCode) +
(url.hashCode);
@override
String toString() => 'SystemConfigMachineLearningDto[classification=$classification, clip=$clip, enabled=$enabled, facialRecognition=$facialRecognition, url=$url]';
String toString() => 'SystemConfigMachineLearningDto[clip=$clip, enabled=$enabled, facialRecognition=$facialRecognition, url=$url]';
Map<String, dynamic> toJson() {
final json = <String, dynamic>{};
json[r'classification'] = this.classification;
json[r'clip'] = this.clip;
json[r'enabled'] = this.enabled;
json[r'facialRecognition'] = this.facialRecognition;
@@ -68,7 +62,6 @@ class SystemConfigMachineLearningDto {
final json = value.cast<String, dynamic>();
return SystemConfigMachineLearningDto(
classification: ClassificationConfig.fromJson(json[r'classification'])!,
clip: CLIPConfig.fromJson(json[r'clip'])!,
enabled: mapValueOfType<bool>(json, r'enabled')!,
facialRecognition: RecognitionConfig.fromJson(json[r'facialRecognition'])!,
@@ -120,7 +113,6 @@ class SystemConfigMachineLearningDto {
/// The list of required keys that must be present in a JSON.
static const requiredKeys = <String>{
'classification',
'clip',
'enabled',
'facialRecognition',

View File

@@ -0,0 +1,98 @@
//
// AUTO-GENERATED FILE, DO NOT MODIFY!
//
// @dart=2.12
// ignore_for_file: unused_element, unused_import
// ignore_for_file: always_put_required_named_parameters_first
// ignore_for_file: constant_identifier_names
// ignore_for_file: lines_longer_than_80_chars
part of openapi.api;
class SystemConfigMetricsDto {
/// Returns a new [SystemConfigMetricsDto] instance.
SystemConfigMetricsDto({
required this.enabled,
});
bool enabled;
@override
bool operator ==(Object other) => identical(this, other) || other is SystemConfigMetricsDto &&
other.enabled == enabled;
@override
int get hashCode =>
// ignore: unnecessary_parenthesis
(enabled.hashCode);
@override
String toString() => 'SystemConfigMetricsDto[enabled=$enabled]';
Map<String, dynamic> toJson() {
final json = <String, dynamic>{};
json[r'enabled'] = this.enabled;
return json;
}
/// Returns a new [SystemConfigMetricsDto] instance and imports its values from
/// [value] if it's a [Map], null otherwise.
// ignore: prefer_constructors_over_static_methods
static SystemConfigMetricsDto? fromJson(dynamic value) {
if (value is Map) {
final json = value.cast<String, dynamic>();
return SystemConfigMetricsDto(
enabled: mapValueOfType<bool>(json, r'enabled')!,
);
}
return null;
}
static List<SystemConfigMetricsDto> listFromJson(dynamic json, {bool growable = false,}) {
final result = <SystemConfigMetricsDto>[];
if (json is List && json.isNotEmpty) {
for (final row in json) {
final value = SystemConfigMetricsDto.fromJson(row);
if (value != null) {
result.add(value);
}
}
}
return result.toList(growable: growable);
}
static Map<String, SystemConfigMetricsDto> mapFromJson(dynamic json) {
final map = <String, SystemConfigMetricsDto>{};
if (json is Map && json.isNotEmpty) {
json = json.cast<String, dynamic>(); // ignore: parameter_assignments
for (final entry in json.entries) {
final value = SystemConfigMetricsDto.fromJson(entry.value);
if (value != null) {
map[entry.key] = value;
}
}
}
return map;
}
// maps a json object with a list of SystemConfigMetricsDto-objects as value to a dart map
static Map<String, List<SystemConfigMetricsDto>> mapListFromJson(dynamic json, {bool growable = false,}) {
final map = <String, List<SystemConfigMetricsDto>>{};
if (json is Map && json.isNotEmpty) {
// ignore: parameter_assignments
json = json.cast<String, dynamic>();
for (final entry in json.entries) {
map[entry.key] = SystemConfigMetricsDto.listFromJson(entry.value, growable: growable,);
}
}
return map;
}
/// The list of required keys that must be present in a JSON.
static const requiredKeys = <String>{
'enabled',
};
}
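
Since SystemConfigMetricsDto wraps a single enabled flag, its JSON handling is trivial; the snippet below just exercises the generated toJson/fromJson shown above:

import 'package:openapi/api.dart';

void main() {
  // Round-trip the new metrics config DTO through JSON.
  final config = SystemConfigMetricsDto(enabled: true);
  final json = config.toJson();                        // {'enabled': true}
  final restored = SystemConfigMetricsDto.fromJson(json);
  print(restored?.enabled);                            // true
}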

View File

@@ -36,11 +36,6 @@ void main() {
// TODO
});
// JobStatusDto objectTagging
test('to test the property `objectTagging`', () async {
// TODO
});
// JobStatusDto recognizeFaces
test('to test the property `recognizeFaces`', () async {
// TODO

View File

@@ -0,0 +1,26 @@
//
// AUTO-GENERATED FILE, DO NOT MODIFY!
//
// @dart=2.12
// ignore_for_file: unused_element, unused_import
// ignore_for_file: always_put_required_named_parameters_first
// ignore_for_file: constant_identifier_names
// ignore_for_file: lines_longer_than_80_chars
import 'package:openapi/api.dart';
import 'package:test/test.dart';
/// tests for MetricsApi
void main() {
// final instance = MetricsApi();
group('tests for MetricsApi', () {
//Future<Object> getMetrics() async
test('test getMetrics', () async {
// TODO
});
});
}
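
The generated MetricsApi wraps the new GET /metrics endpoint (see the OpenAPI spec changes further down). A minimal call sketch, assuming the default ApiClient has already been configured with valid credentials (api key or bearer token):

import 'package:openapi/api.dart';

Future<void> printMetrics() async {
  // Sketch only: credential setup on the client is assumed.
  final metrics = await MetricsApi().getMetrics(); // GET /metrics, returns a raw Object
  print(metrics);
}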

View File

@@ -36,6 +36,11 @@ void main() {
// TODO
});
// bool metrics
test('to test the property `metrics`', () async {
// TODO
});
// bool oauth
test('to test the property `oauth`', () async {
// TODO
@@ -66,11 +71,6 @@ void main() {
// TODO
});
// bool tagImage
test('to test the property `tagImage`', () async {
// TODO
});
// bool trash
test('to test the property `trash`', () async {
// TODO

View File

@@ -46,6 +46,11 @@ void main() {
// TODO
});
// SystemConfigMetricsDto metrics
test('to test the property `metrics`', () async {
// TODO
});
// SystemConfigNewVersionCheckDto newVersionCheck
test('to test the property `newVersionCheck`', () async {
// TODO

View File

@@ -36,11 +36,6 @@ void main() {
// TODO
});
// JobSettingsDto objectTagging
test('to test the property `objectTagging`', () async {
// TODO
});
// JobSettingsDto recognizeFaces
test('to test the property `recognizeFaces`', () async {
// TODO

View File

@@ -16,11 +16,6 @@ void main() {
// final instance = SystemConfigMachineLearningDto();
group('test SystemConfigMachineLearningDto', () {
// ClassificationConfig classification
test('to test the property `classification`', () async {
// TODO
});
// CLIPConfig clip
test('to test the property `clip`', () async {
// TODO

View File

@@ -11,31 +11,16 @@
import 'package:openapi/api.dart';
import 'package:test/test.dart';
// tests for ClassificationConfig
// tests for SystemConfigMetricsDto
void main() {
// final instance = ClassificationConfig();
// final instance = SystemConfigMetricsDto();
group('test ClassificationConfig', () {
group('test SystemConfigMetricsDto', () {
// bool enabled
test('to test the property `enabled`', () async {
// TODO
});
// int minScore
test('to test the property `minScore`', () async {
// TODO
});
// String modelName
test('to test the property `modelName`', () async {
// TODO
});
// ModelType modelType
test('to test the property `modelType`', () async {
// TODO
});
});

View File

@@ -2,7 +2,7 @@ name: immich_mobile
description: Immich - selfhosted backup media file on mobile phone
publish_to: "none"
version: 1.91.3+115
version: 1.91.4+116
isar_version: &isar_version 3.1.0+1
environment:

View File

@@ -42,7 +42,7 @@
{
"matchFileNames": ["machine-learning/**"],
"groupName": "machine-learning",
"matchUpdateTypes": ["minor", "patch"],
"rangeStrategy": "in-range-only",
"schedule": "on tuesday"
},
{

View File

@@ -10,7 +10,10 @@ RUN npm ci && \
rm -rf node_modules/@img/sharp-libvips* && \
rm -rf node_modules/@img/sharp-linuxmusl-x64
COPY server .
ENV PATH="${PATH}:/usr/src/app/bin"
ENV PATH="${PATH}:/usr/src/app/bin" \
NODE_ENV=development \
NVIDIA_DRIVER_CAPABILITIES=all \
NVIDIA_VISIBLE_DEVICES=all
ENTRYPOINT ["tini", "--", "/bin/sh"]
@@ -34,7 +37,9 @@ RUN npm run build
FROM ghcr.io/immich-app/base-server-prod:20231214@sha256:b214f86683fde081b09beed2d7bfc28bec55c829751ccf2e02ad7dd18293f5e0
WORKDIR /usr/src/app
ENV NODE_ENV=production
ENV NODE_ENV=production \
NVIDIA_DRIVER_CAPABILITIES=all \
NVIDIA_VISIBLE_DEVICES=all
COPY --from=prod /usr/src/app/node_modules ./node_modules
COPY --from=prod /usr/src/app/dist ./dist
COPY --from=prod /usr/src/app/bin ./bin

View File

@@ -3716,6 +3716,38 @@
]
}
},
"/metrics": {
"get": {
"operationId": "getMetrics",
"parameters": [],
"responses": {
"200": {
"content": {
"application/json": {
"schema": {
"type": "object"
}
}
},
"description": ""
}
},
"security": [
{
"bearer": []
},
{
"cookie": []
},
{
"api_key": []
}
],
"tags": [
"Metrics"
]
}
},
"/oauth/authorize": {
"post": {
"operationId": "startOAuth",
@@ -6188,7 +6220,7 @@
"info": {
"title": "Immich",
"description": "Immich API",
"version": "1.91.3",
"version": "1.91.4",
"contact": {}
},
"tags": [],
@@ -6479,9 +6511,6 @@
"migration": {
"$ref": "#/components/schemas/JobStatusDto"
},
"objectTagging": {
"$ref": "#/components/schemas/JobStatusDto"
},
"recognizeFaces": {
"$ref": "#/components/schemas/JobStatusDto"
},
@@ -6508,7 +6537,6 @@
"thumbnailGeneration",
"metadataExtraction",
"videoConversion",
"objectTagging",
"smartSearch",
"storageTemplateMigration",
"migration",
@@ -7201,28 +7229,6 @@
],
"type": "object"
},
"ClassificationConfig": {
"properties": {
"enabled": {
"type": "boolean"
},
"minScore": {
"type": "integer"
},
"modelName": {
"type": "string"
},
"modelType": {
"$ref": "#/components/schemas/ModelType"
}
},
"required": [
"minScore",
"enabled",
"modelName"
],
"type": "object"
},
"Colorspace": {
"enum": [
"srgb",
@@ -7819,7 +7825,6 @@
"thumbnailGeneration",
"metadataExtraction",
"videoConversion",
"objectTagging",
"recognizeFaces",
"smartSearch",
"backgroundTask",
@@ -8090,7 +8095,6 @@
},
"ModelType": {
"enum": [
"image-classification",
"facial-recognition",
"clip"
],
@@ -8656,6 +8660,9 @@
"map": {
"type": "boolean"
},
"metrics": {
"type": "boolean"
},
"oauth": {
"type": "boolean"
},
@@ -8674,9 +8681,6 @@
"sidecar": {
"type": "boolean"
},
"tagImage": {
"type": "boolean"
},
"trash": {
"type": "boolean"
}
@@ -8686,14 +8690,14 @@
"configFile",
"facialRecognition",
"map",
"metrics",
"trash",
"reverseGeocoding",
"oauth",
"oauthAutoLaunch",
"passwordLogin",
"sidecar",
"search",
"tagImage"
"search"
],
"type": "object"
},
@@ -9059,6 +9063,9 @@
"map": {
"$ref": "#/components/schemas/SystemConfigMapDto"
},
"metrics": {
"$ref": "#/components/schemas/SystemConfigMetricsDto"
},
"newVersionCheck": {
"$ref": "#/components/schemas/SystemConfigNewVersionCheckDto"
},
@@ -9089,6 +9096,7 @@
"logging",
"machineLearning",
"map",
"metrics",
"newVersionCheck",
"oauth",
"passwordLogin",
@@ -9191,9 +9199,6 @@
"migration": {
"$ref": "#/components/schemas/JobSettingsDto"
},
"objectTagging": {
"$ref": "#/components/schemas/JobSettingsDto"
},
"recognizeFaces": {
"$ref": "#/components/schemas/JobSettingsDto"
},
@@ -9220,7 +9225,6 @@
"thumbnailGeneration",
"metadataExtraction",
"videoConversion",
"objectTagging",
"smartSearch",
"storageTemplateMigration",
"migration",
@@ -9275,9 +9279,6 @@
},
"SystemConfigMachineLearningDto": {
"properties": {
"classification": {
"$ref": "#/components/schemas/ClassificationConfig"
},
"clip": {
"$ref": "#/components/schemas/CLIPConfig"
},
@@ -9294,7 +9295,6 @@
"required": [
"enabled",
"url",
"classification",
"clip",
"facialRecognition"
],
@@ -9319,6 +9319,17 @@
],
"type": "object"
},
"SystemConfigMetricsDto": {
"properties": {
"enabled": {
"type": "boolean"
}
},
"required": [
"enabled"
],
"type": "object"
},
"SystemConfigNewVersionCheckDto": {
"properties": {
"enabled": {

View File

@@ -1,12 +1,12 @@
{
"name": "immich",
"version": "1.91.3",
"version": "1.91.4",
"lockfileVersion": 2,
"requires": true,
"packages": {
"": {
"name": "immich",
"version": "1.91.3",
"version": "1.91.4",
"license": "UNLICENSED",
"dependencies": {
"@babel/runtime": "^7.22.11",

View File

@@ -1,6 +1,6 @@
{
"name": "immich",
"version": "1.91.3",
"version": "1.91.4",
"description": "",
"author": "",
"private": true,

View File

@@ -140,6 +140,20 @@ export class AccessCore {
private async checkAccessOther(auth: AuthDto, permission: Permission, ids: Set<string>) {
switch (permission) {
// uses album id
case Permission.ACTIVITY_CREATE:
return await this.repository.activity.checkCreateAccess(auth.user.id, ids);
// uses activity id
case Permission.ACTIVITY_DELETE: {
const isOwner = await this.repository.activity.checkOwnerAccess(auth.user.id, ids);
const isAlbumOwner = await this.repository.activity.checkAlbumOwnerAccess(
auth.user.id,
setDifference(ids, isOwner),
);
return setUnion(isOwner, isAlbumOwner);
}
case Permission.ASSET_READ: {
const isOwner = await this.repository.asset.checkOwnerAccess(auth.user.id, ids);
const isAlbum = await this.repository.asset.checkAlbumAccess(auth.user.id, setDifference(ids, isOwner));
@@ -249,41 +263,16 @@ export class AccessCore {
return await this.repository.person.checkOwnerAccess(auth.user.id, ids);
case Permission.PERSON_CREATE:
return this.repository.person.hasFaceOwnerAccess(auth.user.id, ids);
return this.repository.person.checkFaceOwnerAccess(auth.user.id, ids);
case Permission.PERSON_REASSIGN:
return this.repository.person.hasFaceOwnerAccess(auth.user.id, ids);
return this.repository.person.checkFaceOwnerAccess(auth.user.id, ids);
case Permission.PARTNER_UPDATE:
return await this.repository.partner.checkUpdateAccess(auth.user.id, ids);
}
const allowedIds = new Set();
for (const id of ids) {
const hasAccess = await this.hasOtherAccess(auth, permission, id);
if (hasAccess) {
allowedIds.add(id);
}
}
return allowedIds;
}
// TODO: Migrate logic to checkAccessOther to evaluate permissions in bulk.
private async hasOtherAccess(auth: AuthDto, permission: Permission, id: string) {
switch (permission) {
// uses album id
case Permission.ACTIVITY_CREATE:
return await this.repository.activity.hasCreateAccess(auth.user.id, id);
// uses activity id
case Permission.ACTIVITY_DELETE:
return (
(await this.repository.activity.hasOwnerAccess(auth.user.id, id)) ||
(await this.repository.activity.hasAlbumOwnerAccess(auth.user.id, id))
);
default:
return false;
return new Set();
}
}
}

Some files were not shown because too many files have changed in this diff.