Compare commits

...

271 Commits

Author SHA1 Message Date
Karolin Varner
b483612cb7 feat(protocol): Hash-based retransmission mechanism
See the updated whitepaper for details.

Fixes: #331
2024-12-07 12:36:40 +01:00
Karolin Varner
a30805f8a0 feat(nix): Dev shell environment with full rust installation
$ nix develop .#fullEnv

This environment contains extra utilities such as the rust language
server.
2024-12-07 12:36:40 +01:00
Karolin Varner
a9b0a90ab5 chore: Update affiliations in whitepaper 2024-12-07 12:36:40 +01:00
Karolin Varner
2bc138e614 chore: Add more info on denial of service mitigation to whitepaper changelog 2024-12-07 12:36:40 +01:00
Karolin Varner
f97781039f build(deps): bump clap from 4.5.22 to 4.5.23 (#519) 2024-12-06 17:00:00 +01:00
dependabot[bot]
5eda161cf2 build(deps): bump clap from 4.5.22 to 4.5.23
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.22 to 4.5.23.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.22...clap_complete-v4.5.23)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-12-05 23:53:58 +00:00
Karolin Varner
a473fe6d9b build(deps): bump clap from 4.5.21 to 4.5.22 (#518) 2024-12-04 11:40:33 +01:00
dependabot[bot]
e2c46f1ff0 build(deps): bump clap from 4.5.21 to 4.5.22
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.21 to 4.5.22.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.21...clap_complete-v4.5.22)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-12-04 11:40:25 +01:00
Karolin Varner
c8b804b39d build(deps): bump tokio from 1.41.1 to 1.42.0 (#517) 2024-12-04 11:40:14 +01:00
dependabot[bot]
e56798b04c build(deps): bump tokio from 1.41.1 to 1.42.0
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.41.1 to 1.42.0.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.41.1...tokio-1.42.0)

---
updated-dependencies:
- dependency-name: tokio
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-12-04 11:40:04 +01:00
Karolin Varner
b76d18e3c8 build(deps): bump anyhow from 1.0.93 to 1.0.94 (#516) 2024-12-04 11:39:54 +01:00
dependabot[bot]
a9792c3143 build(deps): bump anyhow from 1.0.93 to 1.0.94
Bumps [anyhow](https://github.com/dtolnay/anyhow) from 1.0.93 to 1.0.94.
- [Release notes](https://github.com/dtolnay/anyhow/releases)
- [Commits](https://github.com/dtolnay/anyhow/compare/1.0.93...1.0.94)

---
updated-dependencies:
- dependency-name: anyhow
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-12-03 23:26:40 +00:00
Karolin Varner
cb2c1c12ee Dev/karo/docs and unit tests (#512) 2024-11-30 16:30:48 +01:00
Karolin Varner
08514d69e5 feat: Expand Rosenpass unix socket API documentation 2024-11-30 16:17:56 +01:00
Karolin Varner
baf2d68070 build(deps): bump mio from 1.0.2 to 1.0.3 (#511) 2024-11-30 14:34:43 +01:00
dependabot[bot]
cc7f7a4b4d build(deps): bump mio from 1.0.2 to 1.0.3
Bumps [mio](https://github.com/tokio-rs/mio) from 1.0.2 to 1.0.3.
- [Release notes](https://github.com/tokio-rs/mio/releases)
- [Changelog](https://github.com/tokio-rs/mio/blob/master/CHANGELOG.md)
- [Commits](https://github.com/tokio-rs/mio/compare/v1.0.2...v1.0.3)

---
updated-dependencies:
- dependency-name: mio
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-30 14:34:20 +01:00
Karolin Varner
5b701631b5 build(deps): bump libc from 0.2.166 to 0.2.167 (#510) 2024-11-30 14:34:12 +01:00
dependabot[bot]
402158b706 build(deps): bump libc from 0.2.166 to 0.2.167
Bumps [libc](https://github.com/rust-lang/libc) from 0.2.166 to 0.2.167.
- [Release notes](https://github.com/rust-lang/libc/releases)
- [Changelog](https://github.com/rust-lang/libc/blob/0.2.167/CHANGELOG.md)
- [Commits](https://github.com/rust-lang/libc/compare/0.2.166...0.2.167)

---
updated-dependencies:
- dependency-name: libc
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-30 14:34:05 +01:00
Karolin Varner
e95636bf70 build(deps): bump postcard from 1.1.0 to 1.1.1 (#509) 2024-11-30 14:33:55 +01:00
dependabot[bot]
744e2bcf3e build(deps): bump postcard from 1.1.0 to 1.1.1
Bumps [postcard](https://github.com/jamesmunns/postcard) from 1.1.0 to 1.1.1.
- [Release notes](https://github.com/jamesmunns/postcard/releases)
- [Changelog](https://github.com/jamesmunns/postcard/blob/main/CHANGELOG.md)
- [Commits](https://github.com/jamesmunns/postcard/compare/postcard/v1.1.0...postcard/v1.1.1)

---
updated-dependencies:
- dependency-name: postcard
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-29 23:31:34 +00:00
Karolin Varner
8c82ca18fb fix: API should be disabled by default 2024-11-29 18:42:15 +01:00
Karolin Varner
208e79c3a7 build(deps): bump postcard from 1.0.10 to 1.1.0 (#507) 2024-11-29 08:50:55 +01:00
dependabot[bot]
6ee023c9e9 build(deps): bump postcard from 1.0.10 to 1.1.0
Bumps [postcard](https://github.com/jamesmunns/postcard) from 1.0.10 to 1.1.0.
- [Release notes](https://github.com/jamesmunns/postcard/releases)
- [Changelog](https://github.com/jamesmunns/postcard/blob/main/CHANGELOG.md)
- [Commits](https://github.com/jamesmunns/postcard/compare/v1.0.10...postcard/v1.1.0)

---
updated-dependencies:
- dependency-name: postcard
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-28 23:12:57 +00:00
Karolin Varner
6f75d34934 build(deps): bump tempfile from 3.13.0 to 3.14.0 (#489) 2024-11-28 21:13:59 +01:00
dependabot[bot]
6b364a35dc build(deps): bump tempfile from 3.13.0 to 3.14.0
Bumps [tempfile](https://github.com/Stebalien/tempfile) from 3.13.0 to 3.14.0.
- [Changelog](https://github.com/Stebalien/tempfile/blob/master/CHANGELOG.md)
- [Commits](https://github.com/Stebalien/tempfile/compare/v3.13.0...v3.14.0)

---
updated-dependencies:
- dependency-name: tempfile
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-28 21:13:46 +01:00
Karolin Varner
2b6d10f0aa build(deps): bump clap from 4.5.20 to 4.5.21 (#494) 2024-11-28 21:13:37 +01:00
dependabot[bot]
cb380b89d1 build(deps): bump clap from 4.5.20 to 4.5.21
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.20 to 4.5.21.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.20...clap_complete-v4.5.21)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-28 21:13:12 +01:00
Karolin Varner
f703933e7f build(deps): bump clap_complete from 4.5.37 to 4.5.38 (#495) 2024-11-28 21:13:05 +01:00
dependabot[bot]
d02a5d2eb7 build(deps): bump clap_complete from 4.5.37 to 4.5.38
Bumps [clap_complete](https://github.com/clap-rs/clap) from 4.5.37 to 4.5.38.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.37...clap_complete-v4.5.38)

---
updated-dependencies:
- dependency-name: clap_complete
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-28 21:12:49 +01:00
Karolin Varner
c7273e6f88 build(deps): bump codecov/codecov-action from 4 to 5 (#497) 2024-11-28 21:12:21 +01:00
dependabot[bot]
85eca49a5b build(deps): bump codecov/codecov-action from 4 to 5
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 4 to 5.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v4...v5)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-28 21:11:19 +01:00
dependabot[bot]
9943f1336b build(deps): bump rustix from 0.38.40 to 0.38.41
Bumps [rustix](https://github.com/bytecodealliance/rustix) from 0.38.40 to 0.38.41.
- [Release notes](https://github.com/bytecodealliance/rustix/releases)
- [Changelog](https://github.com/bytecodealliance/rustix/blob/main/CHANGELOG.md)
- [Commits](https://github.com/bytecodealliance/rustix/compare/v0.38.40...v0.38.41)

---
updated-dependencies:
- dependency-name: rustix
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-28 21:11:08 +01:00
Karolin Varner
bb2a0732cc refactor: replace rustix with std where possible
Merge pull request #490 from mkroening/rustix
2024-11-28 21:01:34 +01:00
Karolin Varner
1275b992a0 Merge branch 'main' into rustix 2024-11-28 21:01:07 +01:00
Karolin Varner
196767964f Fix docstring warnings
Merge pull request #479 from aparcar/docstrings
2024-11-28 20:59:53 +01:00
Karolin Varner
d4e9770fe6 Merge branch 'main' into docstrings 2024-11-28 20:59:31 +01:00
Karolin Varner
8e2f6991d1 Rename mio.connection.shoud_close (typo in function name)
Merge pull request #501 from PD3P/mio-connection-typo
2024-11-28 20:58:07 +01:00
dependabot[bot]
af0db88939 build(deps): bump libc from 0.2.165 to 0.2.166 (#505)
Bumps [libc](https://github.com/rust-lang/libc) from 0.2.165 to 0.2.166.
- [Release notes](https://github.com/rust-lang/libc/releases)
- [Changelog](https://github.com/rust-lang/libc/blob/0.2.166/CHANGELOG.md)
- [Commits](https://github.com/rust-lang/libc/compare/0.2.165...0.2.166)

---
updated-dependencies:
- dependency-name: libc
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-28 19:00:49 +01:00
dependabot[bot]
6601742903 build(deps): bump libc from 0.2.162 to 0.2.165 (#503)
Bumps [libc](https://github.com/rust-lang/libc) from 0.2.162 to 0.2.165.
- [Release notes](https://github.com/rust-lang/libc/releases)
- [Changelog](https://github.com/rust-lang/libc/blob/0.2.165/CHANGELOG.md)
- [Commits](https://github.com/rust-lang/libc/compare/0.2.162...0.2.165)

---
updated-dependencies:
- dependency-name: libc
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-26 20:54:02 +01:00
Philipp Dresselmann
9436281350 Docs: Add cargo test arguments in CONTRIBUTING.md (#502)
Presumably, this should match the command used in the CI workflow and not skip any features?
2024-11-25 14:52:51 +01:00
Philipp Dresselmann
f3399907b9 chore(API): Rename mio.connection.shoud_close
Technically a breaking change... Hopefully that's not a problem here?
2024-11-22 09:43:33 +01:00
dependabot[bot]
0cea8c5eff build(deps): bump rustix from 0.38.39 to 0.38.40
Bumps [rustix](https://github.com/bytecodealliance/rustix) from 0.38.39 to 0.38.40.
- [Release notes](https://github.com/bytecodealliance/rustix/releases)
- [Changelog](https://github.com/bytecodealliance/rustix/blob/main/CHANGELOG.md)
- [Commits](https://github.com/bytecodealliance/rustix/compare/v0.38.39...v0.38.40)

---
updated-dependencies:
- dependency-name: rustix
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-14 11:33:23 +01:00
dependabot[bot]
5b3f4da23e build(deps): bump serde from 1.0.214 to 1.0.215
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.214 to 1.0.215.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.214...v1.0.215)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-14 11:33:08 +01:00
dependabot[bot]
c13badb697 build(deps): bump thiserror from 1.0.68 to 1.0.69
Bumps [thiserror](https://github.com/dtolnay/thiserror) from 1.0.68 to 1.0.69.
- [Release notes](https://github.com/dtolnay/thiserror/releases)
- [Commits](https://github.com/dtolnay/thiserror/compare/1.0.68...1.0.69)

---
updated-dependencies:
- dependency-name: thiserror
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-13 14:06:25 +01:00
dependabot[bot]
cc7757a0db build(deps): bump serial_test from 3.1.1 to 3.2.0
Bumps [serial_test](https://github.com/palfrey/serial_test) from 3.1.1 to 3.2.0.
- [Release notes](https://github.com/palfrey/serial_test/releases)
- [Commits](https://github.com/palfrey/serial_test/compare/v3.1.1...v3.2.0)

---
updated-dependencies:
- dependency-name: serial_test
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-13 11:48:07 +01:00
Paul Spooren
d267916445 docs(cli): Improve help text
This commit does multiple things at once to improve the user experience:
* Always start with an upper-case letter, no mixing
* Hide the deprecated `keygen` command; it still works if called
* Extend and rework some documentation texts
* Drop the false `log_level` text; it contains a logic error
* Wrap all documentation at 80 chars

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-11-13 11:36:16 +01:00
Martin Kröning
03bc89a582 build(rosenpass): only enable rustix for experimental API
Signed-off-by: Martin Kröning <martin.kroening@eonerc.rwth-aachen.de>
2024-11-11 11:14:33 +01:00
Martin Kröning
19b31bcdf0 refactor(mio): close FDs via std instead of rustix
Signed-off-by: Martin Kröning <martin.kroening@eonerc.rwth-aachen.de>
2024-11-11 11:14:33 +01:00
Martin Kröning
939d216027 refactor: import FD traits from std instead of rustix
Signed-off-by: Martin Kröning <martin.kroening@eonerc.rwth-aachen.de>
2024-11-11 11:14:33 +01:00
Paul Spooren
05fbaff2dc docs(to): fix docstrings and add examples
Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-11-08 12:49:10 +01:00
Paul Spooren
1d1c0e9da7 chore(examples): add examples to docs
Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-11-08 11:17:43 +01:00
Paul Spooren
e19b724673 docs(typenum): fix docstring warnings
Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-11-08 11:17:43 +01:00
Paul Spooren
f879ad5020 docs(result): fix docstring warnings
Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-11-08 11:17:43 +01:00
Paul Spooren
29e7087cb5 docs(mem): fix docstring warnings
Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-11-08 11:17:43 +01:00
Paul Spooren
637a08d222 docs(io): fix docstring warnings
Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-11-08 11:17:43 +01:00
Paul Spooren
6416c247f4 docs(fd): fix docstring warnings
Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-11-08 11:17:43 +01:00
Paul Spooren
4b3b7e41e4 docs(util): fix docstring warnings
Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-11-08 11:17:43 +01:00
Paul Spooren
325fb915f0 docs(result): add docstring and examples
Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-11-08 11:17:43 +01:00
Paul Spooren
43cb0c09c5 docs(length_prefix_encoding): fix docstring warnings
Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-11-08 11:17:43 +01:00
Paul Spooren
0836a2eb28 docs(zerocopy): fix docstring warnings
Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-11-08 11:17:43 +01:00
Paul Spooren
ca7df013d5 docs(option): fix docstring warnings
Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-11-08 11:17:43 +01:00
Paul Spooren
1209d68718 docs(zeroize): fix docstring warnings
Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-11-08 11:17:43 +01:00
Paul Spooren
8806494899 docs(mio): fix docstring warnings
Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-11-08 11:17:43 +01:00
dependabot[bot]
582d27351a build(deps): bump libfuzzer-sys from 0.4.7 to 0.4.8
Bumps [libfuzzer-sys](https://github.com/rust-fuzz/libfuzzer) from 0.4.7 to 0.4.8.
- [Changelog](https://github.com/rust-fuzz/libfuzzer/blob/main/CHANGELOG.md)
- [Commits](https://github.com/rust-fuzz/libfuzzer/compare/0.4.7...0.4.8)

---
updated-dependencies:
- dependency-name: libfuzzer-sys
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-08 10:50:37 +01:00
dependabot[bot]
61136d79eb build(deps): bump tokio from 1.41.0 to 1.41.1
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.41.0 to 1.41.1.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.41.0...tokio-1.41.1)

---
updated-dependencies:
- dependency-name: tokio
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-08 10:50:27 +01:00
dependabot[bot]
71bd406201 build(deps): bump libc from 0.2.161 to 0.2.162
Bumps [libc](https://github.com/rust-lang/libc) from 0.2.161 to 0.2.162.
- [Release notes](https://github.com/rust-lang/libc/releases)
- [Changelog](https://github.com/rust-lang/libc/blob/0.2.162/CHANGELOG.md)
- [Commits](https://github.com/rust-lang/libc/compare/0.2.161...0.2.162)

---
updated-dependencies:
- dependency-name: libc
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-08 10:50:17 +01:00
Paul Spooren
ce63cf534a Merge pull request #485 from rosenpass/dependabot/github_actions/actions/checkout-4
build(deps): bump actions/checkout from 3 to 4
2024-11-08 10:47:58 +01:00
dependabot[bot]
d3ff19bdb9 build(deps): bump actions/checkout from 3 to 4
Bumps [actions/checkout](https://github.com/actions/checkout) from 3 to 4.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-07 23:45:49 +00:00
Paul Spooren
3b6d0822d6 Merge pull request #468 from aparcar/hello-config 2024-11-07 15:14:00 +01:00
Paul Spooren
533afea129 Merge pull request #453 from aparcar/boot_race 2024-11-07 15:13:38 +01:00
Paul Spooren
da5b281b96 ci: add regression test for boot race condition
If two instances start up at the same time, they end up with different
keys on both ends. Test this with different delays of 2 (working), 1
(flaky) and 0 (broken) seconds.

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-11-07 14:38:31 +01:00
Paul Spooren
b9e873e534 feat(config): Implement todos from validate function
Readability of public/secret keys can be checked by simply loading the
key and thereby also checking that it's actually valid.

A user should either define `key_out` or a valid WireGuard peer (made of
`device` and `peer`). If neither is defined, let the user know that this
function will never do any good.

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-11-07 14:34:57 +01:00
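
For illustration, a minimal sketch of the validation rule this commit describes, with hypothetical struct and field names (only `key_out`, `device` and `peer` come from the commit message; the actual rosenpass config types differ):

    use std::path::PathBuf;

    struct WireGuardPeer {
        device: String,
        peer: String,
    }

    struct PeerConfig {
        public_key: PathBuf,
        key_out: Option<PathBuf>,
        wireguard: Option<WireGuardPeer>,
    }

    impl PeerConfig {
        fn validate(&self) -> Result<(), String> {
            // Readability (and validity) of the public key is checked by
            // simply trying to load it.
            std::fs::read(&self.public_key)
                .map_err(|e| format!("cannot read public key {:?}: {e}", self.public_key))?;

            // A peer needs either a `key_out` path or a full WireGuard peer
            // (`device` + `peer`); otherwise the exchange has no effect.
            if self.key_out.is_none() && self.wireguard.is_none() {
                return Err(
                    "peer defines neither `key_out` nor a WireGuard peer; \
                     the key exchange would have no effect"
                        .to_string(),
                );
            }
            Ok(())
        }
    }

    fn main() {
        let cfg = PeerConfig {
            public_key: PathBuf::from("peer-public-key"),
            key_out: None,
            wireguard: None,
        };
        // Validation reports the first problem it finds (here, the missing key file).
        println!("{:?}", cfg.validate());
    }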
dependabot[bot]
a3b339b180 build(deps): bump actions/checkout from 3 to 4
Bumps [actions/checkout](https://github.com/actions/checkout) from 3 to 4.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-07 14:33:23 +01:00
Paul Spooren
b4347c1382 feat(cli): Print downstream error of config validation
The incredibly helpful error message would never reach the end user.
Attach it to the upper-layer print to help users fix the issue.

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-11-07 14:23:46 +01:00
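
A generic sketch of the idea behind this commit, using anyhow (which the project already depends on); the function names are illustrative, not the actual rosenpass CLI code:

    use anyhow::{bail, Context, Result};

    fn validate_config(_path: &str) -> Result<()> {
        // Stand-in for the detailed config validation.
        bail!("peer defines neither `key_out` nor a WireGuard peer");
    }

    fn load_config(path: &str) -> Result<()> {
        // Keep the helpful downstream message in the error chain...
        validate_config(path).with_context(|| format!("invalid configuration file {path}"))
    }

    fn main() {
        if let Err(e) = load_config("rosenpass.toml") {
            // ...and print the whole chain ({:#}) so it reaches the end user.
            eprintln!("error: {e:#}");
        }
    }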
Paul Spooren
0745019e10 docs(cli): Create commented config file
The previous `gen-config` output contained no comments and was partly
misleading, e.g. the `pre_shared_key` is actually a path and not the
key itself. Mark things that are optional.

To keep things in sync, add a test that verifies that the configuration
is actually valid.

While at it, use 127.0.0.1 as the peer address instead of a fictitious
domain which would break the tests.

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-11-07 14:23:46 +01:00
dependabot[bot]
2369006342 build(deps): bump actionsx/prettier from 2 to 3
Bumps [actionsx/prettier](https://github.com/actionsx/prettier) from 2 to 3.
- [Release notes](https://github.com/actionsx/prettier/releases)
- [Commits](https://github.com/actionsx/prettier/compare/v2...v3)

---
updated-dependencies:
- dependency-name: actionsx/prettier
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-07 14:17:32 +01:00
dependabot[bot]
0fa6176d06 build(deps): bump arbitrary from 1.3.2 to 1.4.1
Bumps [arbitrary](https://github.com/rust-fuzz/arbitrary) from 1.3.2 to 1.4.1.
- [Changelog](https://github.com/rust-fuzz/arbitrary/blob/main/CHANGELOG.md)
- [Commits](https://github.com/rust-fuzz/arbitrary/compare/v1.3.2...v1.4.1)

---
updated-dependencies:
- dependency-name: arbitrary
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-06 16:17:02 +01:00
dependabot[bot]
22bdeaf8f1 build(deps): bump anyhow from 1.0.91 to 1.0.93
Bumps [anyhow](https://github.com/dtolnay/anyhow) from 1.0.91 to 1.0.93.
- [Release notes](https://github.com/dtolnay/anyhow/releases)
- [Commits](https://github.com/dtolnay/anyhow/compare/1.0.91...1.0.93)

---
updated-dependencies:
- dependency-name: anyhow
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-06 15:44:55 +01:00
dependabot[bot]
5731272844 build(deps): bump actions/cache from 3 to 4
Bumps [actions/cache](https://github.com/actions/cache) from 3 to 4.
- [Release notes](https://github.com/actions/cache/releases)
- [Changelog](https://github.com/actions/cache/blob/main/RELEASES.md)
- [Commits](https://github.com/actions/cache/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/cache
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-06 15:13:43 +01:00
dependabot[bot]
bc7cef9de0 build(deps): bump peaceiris/actions-gh-pages from 3 to 4
Bumps [peaceiris/actions-gh-pages](https://github.com/peaceiris/actions-gh-pages) from 3 to 4.
- [Release notes](https://github.com/peaceiris/actions-gh-pages/releases)
- [Changelog](https://github.com/peaceiris/actions-gh-pages/blob/main/CHANGELOG.md)
- [Commits](https://github.com/peaceiris/actions-gh-pages/compare/v3...v4)

---
updated-dependencies:
- dependency-name: peaceiris/actions-gh-pages
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-06 15:13:22 +01:00
dependabot[bot]
4cdcc35c3e build(deps): bump cachix/install-nix-action from 21 to 30
Bumps [cachix/install-nix-action](https://github.com/cachix/install-nix-action) from 21 to 30.
- [Release notes](https://github.com/cachix/install-nix-action/releases)
- [Commits](https://github.com/cachix/install-nix-action/compare/v21...v30)

---
updated-dependencies:
- dependency-name: cachix/install-nix-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-06 15:12:58 +01:00
dependabot[bot]
a8f1292cbf build(deps): bump cachix/cachix-action from 12 to 15
Bumps [cachix/cachix-action](https://github.com/cachix/cachix-action) from 12 to 15.
- [Release notes](https://github.com/cachix/cachix-action/releases)
- [Commits](https://github.com/cachix/cachix-action/compare/v12...v15)

---
updated-dependencies:
- dependency-name: cachix/cachix-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-06 15:12:38 +01:00
dependabot[bot]
ae5c5ed2b4 build(deps): bump softprops/action-gh-release from 1 to 2
Bumps [softprops/action-gh-release](https://github.com/softprops/action-gh-release) from 1 to 2.
- [Release notes](https://github.com/softprops/action-gh-release/releases)
- [Changelog](https://github.com/softprops/action-gh-release/blob/master/CHANGELOG.md)
- [Commits](https://github.com/softprops/action-gh-release/compare/v1...v2)

---
updated-dependencies:
- dependency-name: softprops/action-gh-release
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-06 15:12:11 +01:00
Paul Spooren
c483452a6a ci(dependabot): check for GitHub action updates
We already use Dependabot for cargo updates, use it for GitHub action
updates, too. Right now we see warnings every now and then because Node
wants another upgrade or some checkout stuff is about to be deprecated.

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-11-06 13:43:09 +01:00
dependabot[bot]
4ce331d299 build(deps): bump serde from 1.0.213 to 1.0.214
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.213 to 1.0.214.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.213...v1.0.214)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-06 13:29:18 +01:00
dependabot[bot]
d81eb7e2ed build(deps): bump thiserror from 1.0.65 to 1.0.68
Bumps [thiserror](https://github.com/dtolnay/thiserror) from 1.0.65 to 1.0.68.
- [Release notes](https://github.com/dtolnay/thiserror/releases)
- [Commits](https://github.com/dtolnay/thiserror/compare/1.0.65...1.0.68)

---
updated-dependencies:
- dependency-name: thiserror
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-06 13:22:56 +01:00
dependabot[bot]
61043500ba build(deps): bump rustix from 0.38.37 to 0.38.39
Bumps [rustix](https://github.com/bytecodealliance/rustix) from 0.38.37 to 0.38.39.
- [Release notes](https://github.com/bytecodealliance/rustix/releases)
- [Changelog](https://github.com/bytecodealliance/rustix/blob/main/CHANGELOG.md)
- [Commits](https://github.com/bytecodealliance/rustix/compare/v0.38.37...v0.38.39)

---
updated-dependencies:
- dependency-name: rustix
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-06 13:22:15 +01:00
dependabot[bot]
9c4752559d build(deps): bump clap_complete from 4.5.35 to 4.5.37
Bumps [clap_complete](https://github.com/clap-rs/clap) from 4.5.35 to 4.5.37.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.35...clap_complete-v4.5.37)

---
updated-dependencies:
- dependency-name: clap_complete
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-06 10:39:38 +01:00
dependabot[bot]
6aec7acdb8 build(deps): bump clap_complete from 4.5.29 to 4.5.35
Bumps [clap_complete](https://github.com/clap-rs/clap) from 4.5.29 to 4.5.35.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.29...clap_complete-v4.5.35)

---
updated-dependencies:
- dependency-name: clap_complete
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-28 12:22:13 +01:00
dependabot[bot]
337cc1b4b4 build(deps): bump clap_mangen from 0.2.23 to 0.2.24
Bumps [clap_mangen](https://github.com/clap-rs/clap) from 0.2.23 to 0.2.24.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_mangen-v0.2.23...clap_mangen-v0.2.24)

---
updated-dependencies:
- dependency-name: clap_mangen
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-28 11:18:35 +01:00
Karolin Varner
387a266a49 chore: Dependency updates
Merge branch 'dev/karo/updates'
2024-10-24 17:30:52 +02:00
dependabot[bot]
179970b905 build(deps): bump thiserror from 1.0.64 to 1.0.65
Bumps [thiserror](https://github.com/dtolnay/thiserror) from 1.0.64 to 1.0.65.
- [Release notes](https://github.com/dtolnay/thiserror/releases)
- [Commits](https://github.com/dtolnay/thiserror/compare/1.0.64...1.0.65)

---
updated-dependencies:
- dependency-name: thiserror
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-24 17:30:32 +02:00
dependabot[bot]
8b769e04c1 build(deps): bump anyhow from 1.0.89 to 1.0.91
Bumps [anyhow](https://github.com/dtolnay/anyhow) from 1.0.89 to 1.0.91.
- [Release notes](https://github.com/dtolnay/anyhow/releases)
- [Commits](https://github.com/dtolnay/anyhow/compare/1.0.89...1.0.91)

---
updated-dependencies:
- dependency-name: anyhow
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-24 17:29:48 +02:00
dependabot[bot]
810bdf5519 build(deps): bump tokio from 1.40.0 to 1.41.0
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.40.0 to 1.41.0.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.40.0...tokio-1.41.0)

---
updated-dependencies:
- dependency-name: tokio
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-24 17:29:23 +02:00
dependabot[bot]
d3a666bea0 build(deps): bump serde from 1.0.210 to 1.0.213
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.210 to 1.0.213.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.210...v1.0.213)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-24 17:28:47 +02:00
Karolin Varner
2b8f780584 Unit tests & api doc
Merge pull request #458 from rosenpass/dev/karo/docs_and_unit_tests
2024-10-24 17:25:56 +02:00
Karolin Varner
6aea3c0c1f chore: Documentation and unit tests for rosenpass_util::io 2024-10-24 14:01:20 +02:00
Karolin Varner
e4fdfcae08 chore: Documentation and unit tests for rosenpass_util::functional 2024-10-24 14:01:20 +02:00
Karolin Varner
48e629fff7 feat: sideffect/mutating should take FnMut over Fn 2024-10-24 14:01:20 +02:00
Karolin Varner
6321bb36fc chore: Formatting 2024-10-24 14:01:20 +02:00
Karolin Varner
2f9ff487ba chore: Unused import 2024-10-24 14:01:20 +02:00
Karolin Varner
c0c06cd1dc chore: Wrong formatting for module doc 2024-10-24 14:01:20 +02:00
Karolin Varner
e9772effa6 chore: Documentation and unit tests for rosenpass_util::file 2024-10-24 14:01:20 +02:00
Karolin Varner
cf68f15674 chore: Documentation and unit tests for rosenpass_util::fd 2024-10-24 14:01:20 +02:00
Karolin Varner
dd5d45cdc9 chore: Documentation and unit tests for rosenpass_util::controlflow 2024-10-24 14:01:20 +02:00
Karolin Varner
17a6aed8a6 feat(cli): Automatically generate man page
Merge pull request #434 from aparcar/lil-cli-ng
2024-10-24 13:59:31 +02:00
Paul Spooren
3f9926e353 feat(cli): Automatically generate man page
Instead of using a static one, generate it via clap_mangen. To generate
the manpage run `rosenpass --generate-manpage <folder>`.

Right now clap does not support flattening of generated manpages,
meaning that each subcommand is explained in its own file. To add extra
sections to the main file `rosenpass.1`, it's rewritten after the
initial creation.

Once clap supports flattened man pages, the `generate_to` call can be
removed and all subcommands will be added to the `rosenpass.1` file.

This implementation allows downstream manpage generation to stay
unchanged even after switching from multiple manpages to a flattened
one.

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-10-22 10:06:47 +02:00
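
As a rough illustration, a sketch of man page generation with clap_mangen, using a stand-in command definition (the real rosenpass CLI and its extra man page sections are more elaborate):

    use clap::Command;
    use std::path::Path;

    fn generate_manpage(out_dir: &Path) -> std::io::Result<()> {
        let cmd = Command::new("rosenpass")
            .about("Post-quantum-secure key exchange")
            .subcommand(Command::new("exchange").about("Run the key exchange"));

        // Render the top-level page. The commit above additionally uses
        // clap_mangen::generate_to to emit one page per subcommand, and
        // rewrites rosenpass.1 afterwards to append extra sections.
        let man = clap_mangen::Man::new(cmd);
        let mut buffer: Vec<u8> = Vec::new();
        man.render(&mut buffer)?;
        std::fs::write(out_dir.join("rosenpass.1"), buffer)
    }

    fn main() -> std::io::Result<()> {
        generate_manpage(Path::new("."))
    }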
dependabot[bot]
f4ab2ac891 build(deps): bump libc from 0.2.159 to 0.2.161 (#449)
Bumps [libc](https://github.com/rust-lang/libc) from 0.2.159 to 0.2.161.
- [Release notes](https://github.com/rust-lang/libc/releases)
- [Changelog](https://github.com/rust-lang/libc/blob/0.2.161/CHANGELOG.md)
- [Commits](https://github.com/rust-lang/libc/compare/0.2.159...0.2.161)

---
updated-dependencies:
- dependency-name: libc
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-18 19:09:05 +02:00
Karolin Varner
de51c1005f Merge pull request #447 from rosenpass/dev/update-cargolock
chore: update Cargo.lock
2024-10-14 15:43:35 +02:00
wucke13
1e2cd589b1 chore: update Cargo.lock 2024-10-14 15:42:40 +02:00
Karolin Varner
02bc485d97 Merge pull request #446 from rosenpass/dev/karo/docs_and_unit_tests
Documentation and unit tests
2024-10-14 15:42:13 +02:00
Karolin Varner
3ae52b9824 chore: Documentation and unit tests for crate rosenpass-util::build 2024-10-13 19:22:14 +02:00
Karolin Varner
cbf361206b chore: Documentation and unit tests for crate rosenpass-util::b64 2024-10-13 17:21:30 +02:00
Karolin Varner
398da99df2 chore: Documentation and unit tests for crate rosenpass-constant-time 2024-10-13 16:58:20 +02:00
Karolin Varner
acfbb67abe chore: Documentation and unit tests for crate rosenpass-oqs 2024-10-13 16:34:50 +02:00
Karolin Varner
c407b8b006 chore(rosenpass): Set version to 0.3.0-dev
Merge pull request #436 from aparcar/0.2.2-dev
2024-10-10 11:36:14 +02:00
Paul Spooren
bc7213d8c0 chore(rosenpass): Set version to 0.3.0-dev
The latest release left the `main` branch on 0.2.1

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-10-10 11:35:33 +02:00
dependabot[bot]
69e68aad2c build(deps): bump clap from 4.5.19 to 4.5.20
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.19 to 4.5.20.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.19...clap_complete-v4.5.20)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-10 11:34:22 +02:00
Karolin Varner
9b07f5803b fix(rosenpass): fix compilation without API
Merge pull request #443 from mkroening/no-api
2024-10-10 11:33:33 +02:00
Martin Kröning
5ce572b739 fix(rosenpass): fix compilation without API
Signed-off-by: Martin Kröning <martin.kroening@eonerc.rwth-aachen.de>
2024-10-09 12:52:18 +02:00
wucke13
d9f8fa0092 refactor(flake.nix): externalize pkgs, add overlay
This splits the complexity of the `flake.nix` into multiple files. At
the same time, naersk is removed, causing much slower builds for
cross-compiled packages, at the benefit of simpler nix expressions and
generally better cross compilation compatibility for cross-compiled
and static builds.

This partially addresses the points mentioned in #412.
2024-10-08 17:30:08 +02:00
dependabot[bot]
a5208795f6 build(deps): bump futures from 0.3.30 to 0.3.31
Bumps [futures](https://github.com/rust-lang/futures-rs) from 0.3.30 to 0.3.31.
- [Release notes](https://github.com/rust-lang/futures-rs/releases)
- [Changelog](https://github.com/rust-lang/futures-rs/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rust-lang/futures-rs/compare/0.3.30...0.3.31)

---
updated-dependencies:
- dependency-name: futures
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-08 14:27:46 +02:00
Karolin Varner
0959148305 ci: add concurrency option to skip in progress
Merge pull request #432 from aparcar/con
2024-10-03 16:48:02 +02:00
Paul Spooren
f2bc3a8b64 ci: Rename regression workflow to "Regression"
No magic here, this is likely a copy & paste error. The problem is that
one workflow called "QC" (regressions.yml) cancels out the other "QC"
(qc.yaml).

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-10-03 16:47:49 +02:00
Paul Spooren
06529df2c0 ci: add concurrency option to skip in progress
Instead of running outdated CI jobs, skip them automatically.

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-10-03 16:47:49 +02:00
Karolin Varner
128c77f77a ci: Skip Nix build of aarch64 since it takes forever
Merge pull request #433 from aparcar/no-arm-ci
2024-10-03 16:47:09 +02:00
Karolin Varner
501cc9bb05 Merge branch 'main' into no-arm-ci 2024-10-03 16:46:36 +02:00
dependabot[bot]
9ad5277a90 build(deps): bump clap from 4.5.18 to 4.5.19
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.18 to 4.5.19.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.18...clap_complete-v4.5.19)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-02 18:45:26 +02:00
Paul Spooren
0cbcaeaf98 ci: Skip Nix build of aarch64 since it takes forever
It takes more than 6 hours, i.e. it fails the CI. Drop it for now and
hope to have it enabled again later.

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-10-01 14:18:50 +02:00
Paul Spooren
687ef3f6f8 docs: Correct protocol retransmission unit/vars
Those are seconds, not ms; also, it's BEGIN, not BEG.

While over there, drop the variable `RETRANSMIT_ABORT`, which was never
used anywhere in the code, and drop an outdated TODO comment.

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-10-01 14:08:44 +02:00
Paul Spooren
b0706354d3 chore: Format all Cargo.toml files
Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-10-01 11:22:45 +01:00
dependabot[bot]
c1e86daec8 build(deps): bump libc from 0.2.158 to 0.2.159 (#429)
Bumps [libc](https://github.com/rust-lang/libc) from 0.2.158 to 0.2.159.
- [Release notes](https://github.com/rust-lang/libc/releases)
- [Changelog](https://github.com/rust-lang/libc/blob/0.2.159/CHANGELOG.md)
- [Commits](https://github.com/rust-lang/libc/compare/0.2.158...0.2.159)

---
updated-dependencies:
- dependency-name: libc
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-25 20:02:31 +02:00
dependabot[bot]
18a286e688 build(deps): bump thiserror from 1.0.63 to 1.0.64 (#428)
Bumps [thiserror](https://github.com/dtolnay/thiserror) from 1.0.63 to 1.0.64.
- [Release notes](https://github.com/dtolnay/thiserror/releases)
- [Commits](https://github.com/dtolnay/thiserror/compare/1.0.63...1.0.64)

---
updated-dependencies:
- dependency-name: thiserror
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-24 17:28:42 +02:00
dependabot[bot]
cb92313391 build(deps): bump clap from 4.5.17 to 4.5.18 (#427)
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.17 to 4.5.18.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.17...clap_complete-v4.5.18)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-21 09:26:54 +02:00
dependabot[bot]
5cd30b4c13 build(deps): bump anyhow from 1.0.88 to 1.0.89 (#425)
Bumps [anyhow](https://github.com/dtolnay/anyhow) from 1.0.88 to 1.0.89.
- [Release notes](https://github.com/dtolnay/anyhow/releases)
- [Commits](https://github.com/dtolnay/anyhow/compare/1.0.88...1.0.89)

---
updated-dependencies:
- dependency-name: anyhow
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-17 17:41:27 +02:00
dependabot[bot]
76d8d38744 build(deps): bump rustix from 0.38.36 to 0.38.37
Bumps [rustix](https://github.com/bytecodealliance/rustix) from 0.38.36 to 0.38.37.
- [Release notes](https://github.com/bytecodealliance/rustix/releases)
- [Changelog](https://github.com/bytecodealliance/rustix/blob/main/CHANGELOG.md)
- [Commits](https://github.com/bytecodealliance/rustix/compare/v0.38.36...v0.38.37)

---
updated-dependencies:
- dependency-name: rustix
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-09-12 19:01:42 +02:00
dependabot[bot]
f63f0bbc2e build(deps): bump anyhow from 1.0.87 to 1.0.88
Bumps [anyhow](https://github.com/dtolnay/anyhow) from 1.0.87 to 1.0.88.
- [Release notes](https://github.com/dtolnay/anyhow/releases)
- [Commits](https://github.com/dtolnay/anyhow/compare/1.0.87...1.0.88)

---
updated-dependencies:
- dependency-name: anyhow
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-09-12 19:01:31 +02:00
Karolin Varner
4a449e6502 chore: drop copy & paste doc error in protocol.rs
Merge pull request #422 from aparcar/cos1
2024-09-10 18:02:49 +02:00
Karolin Varner
1e6d2df004 Merge branch 'main' into cos1 2024-09-10 18:02:25 +02:00
Paul Spooren
3fa9aadda2 chore: drop copy & paste doc error in protocol.rs
There seems to be a paste typo in the docs; drop it to reduce confusion.

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-09-10 12:39:57 +02:00
dependabot[bot]
0c79a4ce95 build(deps): bump serde from 1.0.209 to 1.0.210 (#420)
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.209 to 1.0.210.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.209...v1.0.210)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-09 16:22:00 +02:00
dependabot[bot]
036960b5b1 build(deps): bump anyhow from 1.0.86 to 1.0.87 (#421)
Bumps [anyhow](https://github.com/dtolnay/anyhow) from 1.0.86 to 1.0.87.
- [Release notes](https://github.com/dtolnay/anyhow/releases)
- [Commits](https://github.com/dtolnay/anyhow/compare/1.0.86...1.0.87)

---
updated-dependencies:
- dependency-name: anyhow
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-09 15:01:01 +02:00
dependabot[bot]
e7258849cb build(deps): bump rustix from 0.38.35 to 0.38.36 (#419)
Bumps [rustix](https://github.com/bytecodealliance/rustix) from 0.38.35 to 0.38.36.
- [Release notes](https://github.com/bytecodealliance/rustix/releases)
- [Commits](https://github.com/bytecodealliance/rustix/compare/v0.38.35...v0.38.36)

---
updated-dependencies:
- dependency-name: rustix
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-06 09:53:51 +02:00
dependabot[bot]
8c88f68990 build(deps): bump clap from 4.5.16 to 4.5.17
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.16 to 4.5.17.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.16...clap_complete-v4.5.17)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-09-05 10:24:56 +02:00
dependabot[bot]
cf20536576 build(deps): bump tokio from 1.39.3 to 1.40.0
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.39.3 to 1.40.0.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.39.3...tokio-1.40.0)

---
updated-dependencies:
- dependency-name: tokio
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-31 11:30:00 +02:00
dependabot[bot]
72e18e3ec2 build(deps): bump derive_builder from 0.20.0 to 0.20.1
Bumps [derive_builder](https://github.com/colin-kiegel/rust-derive-builder) from 0.20.0 to 0.20.1.
- [Release notes](https://github.com/colin-kiegel/rust-derive-builder/releases)
- [Commits](https://github.com/colin-kiegel/rust-derive-builder/compare/v0.20.0...v0.20.1)

---
updated-dependencies:
- dependency-name: derive_builder
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-30 08:56:28 +02:00
dependabot[bot]
6040156a0e build(deps): bump rustix from 0.38.34 to 0.38.35 (#414)
Bumps [rustix](https://github.com/bytecodealliance/rustix) from 0.38.34 to 0.38.35.
- [Release notes](https://github.com/bytecodealliance/rustix/releases)
- [Commits](https://github.com/bytecodealliance/rustix/compare/v0.38.34...v0.38.35)

---
updated-dependencies:
- dependency-name: rustix
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-08-28 20:39:53 +02:00
dependabot[bot]
d3b318b413 build(deps): bump stacker from 0.1.16 to 0.1.17 (#415)
Bumps [stacker](https://github.com/rust-lang/stacker) from 0.1.16 to 0.1.17.
- [Commits](https://github.com/rust-lang/stacker/compare/stacker-0.1.16...stacker-0.1.17)

---
updated-dependencies:
- dependency-name: stacker
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-08-28 20:39:25 +02:00
dependabot[bot]
3a49345138 build(deps): bump serde from 1.0.208 to 1.0.209 (#413)
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.208 to 1.0.209.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.208...v1.0.209)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-08-27 20:02:59 +02:00
dependabot[bot]
4ec7813259 build(deps): bump stacker from 0.1.15 to 0.1.16
Bumps [stacker](https://github.com/rust-lang/stacker) from 0.1.15 to 0.1.16.
- [Commits](https://github.com/rust-lang/stacker/compare/stacker-0.1.15...stacker-0.1.16)

---
updated-dependencies:
- dependency-name: stacker
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-22 16:54:13 +02:00
dependabot[bot]
db31da14d3 build(deps): bump postcard from 1.0.9 to 1.0.10
Bumps [postcard](https://github.com/jamesmunns/postcard) from 1.0.9 to 1.0.10.
- [Release notes](https://github.com/jamesmunns/postcard/releases)
- [Changelog](https://github.com/jamesmunns/postcard/blob/main/CHANGELOG.md)
- [Commits](https://github.com/jamesmunns/postcard/compare/v1.0.9...v1.0.10)

---
updated-dependencies:
- dependency-name: postcard
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-22 16:54:00 +02:00
Karolin Varner
4c20efc8a8 Merge: fix(API): Tests failing on mac
Merge pull request #409 from rosenpass/dev/karo/macos-fix
2024-08-21 13:46:53 +02:00
Karolin Varner
c81d484294 fix(API): Tests failing on mac 2024-08-21 12:48:45 +02:00
dependabot[bot]
cc578169d6 build(deps): bump postcard from 1.0.8 to 1.0.9 (#408)
Bumps [postcard](https://github.com/jamesmunns/postcard) from 1.0.8 to 1.0.9.
- [Release notes](https://github.com/jamesmunns/postcard/releases)
- [Changelog](https://github.com/jamesmunns/postcard/blob/main/CHANGELOG.md)
- [Commits](https://github.com/jamesmunns/postcard/compare/v1.0.8...v1.0.9)

---
updated-dependencies:
- dependency-name: postcard
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-08-21 08:12:56 +02:00
dependabot[bot]
91527702f1 build(deps): bump tokio from 1.39.2 to 1.39.3 (#407)
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.39.2 to 1.39.3.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.39.2...tokio-1.39.3)

---
updated-dependencies:
- dependency-name: tokio
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-08-21 08:12:39 +02:00
dependabot[bot]
0179f1c673 build(deps): bump libc from 0.2.156 to 0.2.158 (#406)
Bumps [libc](https://github.com/rust-lang/libc) from 0.2.156 to 0.2.158.
- [Release notes](https://github.com/rust-lang/libc/releases)
- [Changelog](https://github.com/rust-lang/libc/blob/0.2.158/CHANGELOG.md)
- [Commits](https://github.com/rust-lang/libc/compare/0.2.156...0.2.158)

---
updated-dependencies:
- dependency-name: libc
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-08-21 08:12:05 +02:00
Karolin Varner
2238919657 Merge: fd/time: add tests, docs, cleanups
Merge pull request #405 from aparcar/fd-tests-cleanup
2024-08-19 17:52:42 +02:00
Paul Spooren
d913e19883 test: add tests for controlflow
While at it, fix the label handling and fix a typo in continue_if, where
a `break` falsely replaced a `continue`.

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-08-19 17:24:38 +02:00
Paul Spooren
1555d0897b feat(ord): drop obsolete RTX_BUFFER_SIZE and usize_max
The RTX_BUFFER_SIZE function is not used anywhere in the code, and when
dropping it, usize_max (the const version of max()) becomes obsolete, too.

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-08-19 17:24:37 +02:00
Paul Spooren
abdbf8f3da feat(util/time): cleanup, document and add tests
Drop the unused `dur` function; it is not referenced anywhere in the code.

Document both Timebase and Timebase::now().

Add tests.

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-08-19 17:24:16 +02:00
Paul Spooren
9f78531979 tests: cleanup fd.rs tests
Trigger the internal assert of owned.rs instead of writing our own. To
correctly test it, use the `should_panic` macro.

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-08-19 17:24:16 +02:00
Karolin Varner
624d8d2f44 Merge: API: Close connections after errors & use mio::Token based polling
Merge pull request #404 from rosenpass/dev/karo/api_remove_connection
2024-08-19 15:03:46 +02:00
Karolin Varner
9bbf9433e6 fix(API): Be polite and kill child processes in api integration tests 2024-08-19 00:31:01 +02:00
Karolin Varner
77760d71df feat(API): Use mio::Token based polling
Instead of polling every single IO source to collect events, poll only
those specific IO sources mio tells us about.
2024-08-19 00:31:01 +02:00
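
A minimal, generic sketch of mio::Token-based polling with a placeholder UDP socket; this is not the actual rosenpass AppServer event loop, and it assumes mio with the "net" and "os-poll" features:

    use mio::net::UdpSocket;
    use mio::{Events, Interest, Poll, Token};
    use std::time::Duration;

    const NETWORK: Token = Token(0);

    fn main() -> std::io::Result<()> {
        let mut poll = Poll::new()?;
        let mut events = Events::with_capacity(128);

        let mut socket = UdpSocket::bind("127.0.0.1:0".parse().unwrap())?;
        poll.registry()
            .register(&mut socket, NETWORK, Interest::READABLE)?;

        // Wait up to one second; only sources whose tokens show up in
        // `events` are handled, instead of checking every IO source.
        poll.poll(&mut events, Some(Duration::from_secs(1)))?;
        for event in events.iter() {
            match event.token() {
                NETWORK => {
                    let mut buf = [0u8; 1500];
                    if let Ok((len, peer)) = socket.recv_from(&mut buf) {
                        println!("received {len} bytes from {peer}");
                    }
                }
                _ => {}
            }
        }
        Ok(())
    }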
Karolin Varner
53e560191f feat(API): Close API connections after error 2024-08-19 00:31:01 +02:00
Karolin Varner
93cd266c68 Merge API Endpoint: AddPskBroker
Merge pull request #403 from rosenpass/dev/karo/api-add-psk-broker
2024-08-17 22:25:21 +02:00
Karolin Varner
594f894206 feat(API): AddPskBroker endpoint 2024-08-17 15:30:10 +02:00
Karolin Varner
a831e01a5c chore: Utilities to check for unix domain stream sockets 2024-08-17 15:30:10 +02:00
dependabot[bot]
0884641d64 build(deps): bump libc from 0.2.155 to 0.2.156
Bumps [libc](https://github.com/rust-lang/libc) from 0.2.155 to 0.2.156.
- [Release notes](https://github.com/rust-lang/libc/releases)
- [Changelog](https://github.com/rust-lang/libc/blob/0.2.156/CHANGELOG.md)
- [Commits](https://github.com/rust-lang/libc/compare/0.2.155...0.2.156)

---
updated-dependencies:
- dependency-name: libc
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-17 10:54:06 +02:00
dependabot[bot]
ae85d0ed2b build(deps): bump clap from 4.5.15 to 4.5.16
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.15 to 4.5.16.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.15...clap_complete-v4.5.16)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-16 17:28:51 +02:00
Karolin Varner
163f66f20e Merge – API Feature: Adding listen sockets
Merge pull request #395 from rosenpass/dev/karo/api-add-listen-socket
2024-08-16 17:16:44 +02:00
Paul Spooren
3caff91515 rosenpass: fallback for empty api section in config
The [api] section is newly added and causes existing installations to
break since they lack the configuration options. Instead, use a serde
default function.

Signed-off-by: Paul Spooren <mail@aparcar.org>
Co-authored-by: Karolin Varner <karo@cupdev.net>
2024-08-16 14:37:42 +02:00
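
A minimal sketch of such a serde default fallback, assuming illustrative struct and field names plus the serde (derive) and toml crates; the real rosenpass config differs:

    use serde::Deserialize;

    #[derive(Debug, Default, Deserialize)]
    struct ApiConfig {
        #[serde(default)]
        listen_path: Vec<String>,
    }

    #[derive(Debug, Deserialize)]
    struct Config {
        public_key: String,
        secret_key: String,
        // Older config files have no [api] section at all; fall back to
        // ApiConfig::default() instead of failing to parse.
        #[serde(default)]
        api: ApiConfig,
    }

    fn main() {
        // A pre-existing config without an [api] section still parses.
        let cfg: Config = toml::from_str(
            r#"
            public_key = "rosenpass-public-key"
            secret_key = "rosenpass-secret-key"
            "#,
        )
        .expect("config without [api] section should still parse");
        println!("{cfg:?}");
    }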
Karolin Varner
24eebe29a1 feat(API): AddListenSocket endpoint 2024-08-16 14:37:42 +02:00
Karolin Varner
1d2fa7d038 feat(api): API Feature – Add server keys via API
Merge pull request #392 from rosenpass/dev/karo/api-supply-server-keys
2024-08-16 11:22:46 +02:00
Karolin Varner
edf1e774c1 feat(API): SupplyKeypair endpoint 2024-08-16 11:13:34 +02:00
Karolin Varner
7a31b57227 chore(API): Infrastructure to use endpoints with fd. passing 2024-08-16 08:39:27 +02:00
Karolin Varner
d5a8c85abe chore(API): Specifying a keypair should be opt. at startup
…so we can specify it later using the API.
2024-08-16 08:34:07 +02:00
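A purely hypothetical sketch of the idea (none of these names are taken from the real codebase): the keypair becomes optional at construction time and can be supplied later, for example from the SupplyKeypair API endpoint.

    struct Keypair {
        public: Vec<u8>,
        secret: Vec<u8>,
    }

    struct Server {
        keypair: Option<Keypair>,
    }

    impl Server {
        fn new(keypair: Option<Keypair>) -> Self {
            Self { keypair }
        }

        /// Supply the keypair after startup (e.g. via the API).
        fn supply_keypair(&mut self, keypair: Keypair) -> Result<(), &'static str> {
            if self.keypair.is_some() {
                return Err("keypair already set");
            }
            self.keypair = Some(keypair);
            Ok(())
        }

        fn ready(&self) -> bool {
            self.keypair.is_some()
        }
    }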
Karolin Varner
48f7ff93e3 chore(API, AppServer): Deal with CryptoServer being uninit.
Before this, we would just raise an error.
2024-08-16 08:34:07 +02:00
Karolin Varner
5f6c36e773 chore(AppServer): Decouple AppServer from CryptoServer::timebase 2024-08-16 08:34:07 +02:00
Karolin Varner
7b3b7612cf chore(api): API should have access to AppServer
The borrow checker does not approve, hence there are many shenanigans
with extension traits.
2024-08-16 08:34:07 +02:00
Karolin Varner
c1704b1464 fix(API): Wrong response size set 2024-08-16 08:34:07 +02:00
dependabot[bot]
2785aaf783 build(deps): bump serde from 1.0.207 to 1.0.208
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.207 to 1.0.208.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.207...v1.0.208)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-16 08:30:08 +02:00
Karolin Varner
15002a74cc Merge: Experimental PSK Broker Support
Merge pull request #376 from pqcfox/feat/netlink-broker-cli

Add broker support to Rosenpass using `MioBrokerClient` (backport of dev/broker-architecture)
2024-08-16 08:26:15 +02:00
Karolin Varner
0fe2d9825b fix: Remove ineffectual broker integration test 2024-08-16 00:35:46 +02:00
Karolin Varner
ab805dae75 fix: libc & rustix are causing problems in CI for unknown reasons 2024-08-16 00:35:46 +02:00
Karolin Varner
08653c3338 chore: clippy 2024-08-16 00:35:46 +02:00
Karolin Varner
520c8c6eaa chore: Feature naming scheme fully applied
experimental_broker_api -> experiment_broker_api
2024-08-15 22:47:20 +02:00
Karolin Varner
258efe408c fix: PSK broker integration did not work
This commit resolves multiple issues with the PSK broker integration.

- The manual testing procedure never actually exercised the brokers,
  because the outfile option was used; this kept the broker issues
  hidden.
- The manual testing procedure omitted checking whether a PSK was
  actually sent to WireGuard at all. This was fixed by writing an
  entirely new manual integration testing shell script that can serve
  as a blueprint for future integration tests.
- Many parts of the PSK broker code did not report (log) errors
  accurately; error logging was added.
- BrokerServer set message.payload.return_code to the msg_type value,
  which led to crashes.
- The PSK broker commands all failed to set the memfd policy, which led
  to immediate crashes once secrets were actually allocated.
- The MioBrokerClient IO state machine was broken and its design was
  too obtuse to debug. The state machine returned the length prefix as
  a message instead of interpreting it as part of the framing; it
  seems the code was integrated but never actually tested. This was
  fixed by rewriting the entire state machine code using the new
  LengthPrefixEncoder/Decoder facilities (a conceptual sketch of this
  framing follows below). A write buffer that was not being flushed is
  now handled by flushing the buffer in blocking-io mode.
2024-08-15 22:47:20 +02:00
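A conceptual sketch of length-prefixed framing, as referenced above (this is not the actual LengthPrefixEncoder/Decoder API, only the underlying idea): the writer emits the length before the payload and flushes, and the reader consumes both the prefix and the body before yielding a message, never treating the prefix itself as a message.

    use std::io::{self, Read, Write};

    fn write_frame<W: Write>(mut w: W, payload: &[u8]) -> io::Result<()> {
        let len = u64::try_from(payload.len()).expect("payload too large");
        w.write_all(&len.to_le_bytes())?;
        w.write_all(payload)?;
        // Flushing matters: an unflushed write buffer was one of the bugs above.
        w.flush()
    }

    fn read_frame<R: Read>(mut r: R) -> io::Result<Vec<u8>> {
        let mut len_buf = [0u8; 8];
        r.read_exact(&mut len_buf)?;
        let len = u64::from_le_bytes(len_buf) as usize;
        let mut payload = vec![0u8; len];
        r.read_exact(&mut payload)?;
        Ok(payload)
    }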
Karolin Varner
fd0f35b279 chore: gen-key subcommand should show canonical paths 2024-08-15 22:12:02 +02:00
Karolin Varner
8808ed5dbc fix: Quiet log level should be warn 2024-08-15 09:43:25 +02:00
Karolin Varner
6fc45cab53 chore: prettier 2024-08-15 08:55:13 +02:00
Katherine Watson
1f7196e473 doc: Add documentation for testing 2024-08-14 19:49:00 -07:00
Katherine Watson
c359b87d0c chore: Convert broker interface setup to use mio's UnixStream where possible 2024-08-14 19:03:45 -07:00
Katherine Watson
355b48169b chore: Make MiobrokerClient import conditional 2024-08-14 19:03:45 -07:00
Katherine Watson
274d245bed chore: Unify enable_wg_broker and enable_broker_api features 2024-08-14 19:03:45 -07:00
Katherine Watson
065b0fcc8a feat: Add enable_wg_broker feature using MioBrokerClient
doc: Add documentation for new methods and arguments

fix: Require new psk_broker_spawn flag to use broker without extra parameters, to make all-features cargo test pass

fix: Fix MioBrokerClient buffer size to allow room for length prefix

fix: Fix remaining issue with panic
2024-08-14 19:03:44 -07:00
dependabot[bot]
191fb10663 build(deps): bump mio from 1.0.1 to 1.0.2
Bumps [mio](https://github.com/tokio-rs/mio) from 1.0.1 to 1.0.2.
- [Release notes](https://github.com/tokio-rs/mio/releases)
- [Changelog](https://github.com/tokio-rs/mio/blob/master/CHANGELOG.md)
- [Commits](https://github.com/tokio-rs/mio/compare/v1.0.1...v1.0.2)

---
updated-dependencies:
- dependency-name: mio
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-14 09:28:27 +02:00
dependabot[bot]
3faa84117f build(deps): bump tokio from 1.39.1 to 1.39.2
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.39.1 to 1.39.2.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.39.1...tokio-1.39.2)

---
updated-dependencies:
- dependency-name: tokio
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-13 13:14:15 +02:00
dependabot[bot]
fda75a0184 build(deps): bump serde from 1.0.204 to 1.0.207
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.204 to 1.0.207.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.204...v1.0.207)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-13 13:14:03 +02:00
dependabot[bot]
96b1f6c0d3 build(deps): bump procspawn from 1.0.0 to 1.0.1 (#390)
Bumps [procspawn](https://github.com/mitsuhiko/procspawn) from 1.0.0 to 1.0.1.
- [Changelog](https://github.com/mitsuhiko/procspawn/blob/master/CHANGELOG.md)
- [Commits](https://github.com/mitsuhiko/procspawn/compare/1.0.0...1.0.1)

---
updated-dependencies:
- dependency-name: procspawn
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-08-13 08:15:57 +02:00
dependabot[bot]
fb73c68626 build(deps): bump tempfile from 3.10.1 to 3.11.0 (#387)
Bumps [tempfile](https://github.com/Stebalien/tempfile) from 3.10.1 to 3.11.0.
- [Changelog](https://github.com/Stebalien/tempfile/blob/master/CHANGELOG.md)
- [Commits](https://github.com/Stebalien/tempfile/compare/v3.10.1...v3.11.0)

---
updated-dependencies:
- dependency-name: tempfile
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-08-13 08:15:46 +02:00
dependabot[bot]
42b0e23695 build(deps): bump clap from 4.5.13 to 4.5.15 (#397)
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.13 to 4.5.15.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.13...clap_complete-v4.5.15)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-08-13 08:13:06 +02:00
Karolin Varner
c58f832727 Merge pull request #391 from aparcar/pb
add test cases for util modules
2024-08-12 16:26:01 +02:00
Paul Spooren
7b6a9eebc1 ci: test full workspace with codecov
Previously only the default members were checked for coverage.

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-08-12 12:10:47 +02:00
Paul Spooren
4554dc4bb3 ci: drop codecov token
It is not needed to generate results for pull requests.

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-08-12 11:44:33 +02:00
Paul Spooren
465c6beaab ci: switch to codecov action v4 branch
Instead of using a specific version, use the v4 branch, which stays
API-compatible.

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-08-12 11:43:26 +02:00
Paul Spooren
1853e0a3c0 feat: add test case and check fd value
Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-08-12 11:37:15 +02:00
Benjamin Lipp
245d4d1a0f feat: add tests for util file.rs
Co-authored-by: Paul Spooren <mail@aparcar.org>
2024-08-12 11:37:15 +02:00
Karolin Varner
d5d15cd9bc Merge Rosenpass API infrastructure
Pull request #388 from rosenpass/dev/karo/api
2024-08-08 22:02:04 +02:00
Katherine Watson
9fd3df67ed chore: Fix typos and add various comments 2024-08-07 23:11:13 -07:00
Karolin Varner
6d47169a5c feat: Set CLOEXEC flag on claimed fds and mask them
Masking the file descriptors (by replacing them with a file descriptor pointing to /dev/null)
mitigates use-after-free (on file descriptor) attacks. In case some
piece of code still holds a reference to the file descriptor, that
file descriptor now merely holds a reference to /dev/null.

Otherwise, the file descriptor might be reused and the reference
could now mistakenly point to all sorts of – potentially more harmful – files, such as memfd_secret
file descriptors, storing our secret keys.
2024-08-05 16:16:09 +02:00
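A conceptual sketch of the claiming step (not the project's actual utility; the function name is made up): the fd is duplicated to a fresh number with close-on-exec already set, and the original number is then pointed at /dev/null so stale copies of it become harmless.

    use std::fs::File;
    use std::os::fd::{AsRawFd, FromRawFd, OwnedFd, RawFd};

    fn claim_fd(raw: RawFd) -> std::io::Result<OwnedFd> {
        // Duplicate to a new descriptor with the close-on-exec flag set.
        let claimed = unsafe { libc::fcntl(raw, libc::F_DUPFD_CLOEXEC, 0) };
        if claimed < 0 {
            return Err(std::io::Error::last_os_error());
        }
        // Mask the original number: it now refers to /dev/null instead of the
        // original resource (e.g. a memfd holding key material).
        let devnull = File::open("/dev/null")?;
        if unsafe { libc::dup2(devnull.as_raw_fd(), raw) } < 0 {
            return Err(std::io::Error::last_os_error());
        }
        Ok(unsafe { OwnedFd::from_raw_fd(claimed) })
    }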
Karolin Varner
4bcd38a4ea feat: Infrastructure for the Rosenpass API 2024-08-03 16:51:18 +02:00
Karolin Varner
730a03957a feat: A variety of utilities in preparation for implementing the API 2024-08-03 16:50:21 +02:00
Karolin Varner
ea071f5363 feat: Convenience functions and traits to automatically handle ErrorKind::{Interrupt, WouldBlock} 2024-08-03 16:49:02 +02:00
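A rough sketch of what such a convenience helper might look like (the real trait names and signatures are likely different): retry on Interrupted, and map WouldBlock to "no result yet" so non-blocking callers can simply return to polling.

    use std::io::{self, ErrorKind};

    fn ignore_wouldblock<T>(mut op: impl FnMut() -> io::Result<T>) -> io::Result<Option<T>> {
        loop {
            match op() {
                Ok(v) => return Ok(Some(v)),
                Err(e) if e.kind() == ErrorKind::Interrupted => continue, // retry
                Err(e) if e.kind() == ErrorKind::WouldBlock => return Ok(None), // poll again later
                Err(e) => return Err(e),
            }
        }
    }

    // e.g.: let maybe_n = ignore_wouldblock(|| socket.recv(&mut buf))?;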
Karolin Varner
3063d3e4c2 feat: Convenience traits to get the ErrorKind of an io error for match clauses 2024-08-03 16:48:25 +02:00
Karolin Varner
1bf0eed90a feat: Convenience function to just call a function 2024-08-03 16:46:48 +02:00
Karolin Varner
138e6b6553 chore: Crate documentation indentation (purely cosmetic) 2024-08-03 16:32:02 +02:00
Karolin Varner
2dde0a2b47 chore: Refactor integration_tests (purely cosmetic) 2024-08-03 16:31:19 +02:00
Karolin Varner
3cc3b6009f chore: Move CliCommand::run -> CliArgs::run; do not mutate the configuration
This way CliArgs::run has access to all command line parameters.
Avoided mutating the CliArgs (or rather CliCommand) structure here,
because doing so is simply bad style. There is no good reason why
this function should mutate CliCommand, other than a bit of
convenience.
2024-08-03 16:29:19 +02:00
Karolin Varner
1ab457ed37 fix: Print stack trace for errors propagated to the main function 2024-08-03 15:50:14 +02:00
Karolin Varner
c9c266fe7c fix: Flush stdout after printing key update notification
Otherwise, the notification might not be delivered due to buffering.
2024-08-03 15:50:14 +02:00
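A minimal illustration of the fix (the message text is a placeholder, not the actual output format): print the notification, then flush explicitly, since a buffered stdout (for instance when piped into another process) may otherwise hold the line back indefinitely.

    use std::io::{self, Write};

    fn notify_key_update(peer: &str) -> io::Result<()> {
        let mut out = io::stdout();
        writeln!(out, "key updated for peer {peer}")?;
        out.flush() // without this, the line may sit in the buffer
    }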
Karolin Varner
8d3c8790fe chore: Reorganize memfd secret policy
- Policy is now set in main.rs, not cli.rs.
- Feature is called experiment_memfd_secret, not enable_memfd_alloc

This also fixes the last remaining warnings.
2024-08-03 15:17:09 +02:00
Karolin Varner
648a94ead8 chore: Clippy fixes on wireguard-broker 2024-08-03 15:02:49 +02:00
Karolin Varner
54ac5eecdb chore: Warnings & clippy hints 2024-08-03 14:13:03 +02:00
Karolin Varner
40c5bbd167 chore: Ensure that rustAnalyzer is installed in dev environment 2024-08-03 14:06:19 +02:00
Karolin Varner
a4b8fc2226 chore: Move memcmp test API doc to test memcmp test module 2024-08-03 14:05:22 +02:00
Karolin Varner
37f7b3e4e9 fix: Consistently use feature flag experiment_libcrux
Before this, some parts of the code used an incorrect feature flag
name, preventing libcrux from being used.
2024-08-03 14:03:31 +02:00
Karolin Varner
deafc1c1af chore: Style adjustments – Cargo.toml 2024-08-03 14:03:31 +02:00
Karolin Varner
6bbe85a57b chore: Remove unnecessary imports 2024-08-03 13:59:55 +02:00
Karolin Varner
e70c5b33a8 chore: Ignore vscode directory 2024-08-03 13:35:31 +02:00
dependabot[bot]
25fdfef4d0 build(deps): bump clap from 4.5.11 to 4.5.13 (#384)
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.11 to 4.5.13.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.11...v4.5.13)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-08-01 09:47:20 +02:00
dependabot[bot]
6ab8fafe59 build(deps): bump clap from 4.5.9 to 4.5.11
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.9 to 4.5.11.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.9...v4.5.11)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-07-29 14:28:22 +02:00
dependabot[bot]
c1aacf76b8 build(deps): bump mio from 0.8.11 to 1.0.1 (#380)
Bumps [mio](https://github.com/tokio-rs/mio) from 0.8.11 to 1.0.1.
- [Release notes](https://github.com/tokio-rs/mio/releases)
- [Changelog](https://github.com/tokio-rs/mio/blob/master/CHANGELOG.md)
- [Commits](https://github.com/tokio-rs/mio/commits)

---
updated-dependencies:
- dependency-name: mio
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-27 15:59:48 +02:00
dependabot[bot]
1bcaf5781f build(deps): bump tokio from 1.38.1 to 1.39.1
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.38.1 to 1.39.1.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.38.1...tokio-1.39.1)

---
updated-dependencies:
- dependency-name: tokio
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-07-25 19:05:30 +02:00
Paul Spooren
de60e5f8f0 Docs: run prettier over CONTRIBUTING.md
... or else the CI fails on all PRs

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-07-25 15:56:54 +02:00
Alice Bowman
b50ddda151 Documentation: pointed to website documentation in readme 2024-07-23 10:46:52 +02:00
Alice Bowman
7282fba3b3 Docs: migrated cooking recipe from wiki 2024-07-23 10:41:44 +02:00
dependabot[bot]
0cca389f10 build(deps): bump thiserror from 1.0.62 to 1.0.63 (#371)
Bumps [thiserror](https://github.com/dtolnay/thiserror) from 1.0.62 to 1.0.63.
- [Release notes](https://github.com/dtolnay/thiserror/releases)
- [Commits](https://github.com/dtolnay/thiserror/compare/1.0.62...1.0.63)

---
updated-dependencies:
- dependency-name: thiserror
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-18 14:29:08 +02:00
Karolin Varner
8a08d49215 Merge pull request #370 from rosenpass/dependabot/cargo/tokio-1.38.1
build(deps): bump tokio from 1.38.0 to 1.38.1
2024-07-17 08:35:06 +02:00
dependabot[bot]
8637bc7884 build(deps): bump tokio from 1.38.0 to 1.38.1
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.38.0 to 1.38.1.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.38.0...tokio-1.38.1)

---
updated-dependencies:
- dependency-name: tokio
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-07-16 23:32:14 +00:00
dependabot[bot]
4412c2bdd1 build(deps): bump thiserror from 1.0.61 to 1.0.62 (#366)
Bumps [thiserror](https://github.com/dtolnay/thiserror) from 1.0.61 to 1.0.62.
- [Release notes](https://github.com/dtolnay/thiserror/releases)
- [Commits](https://github.com/dtolnay/thiserror/compare/1.0.61...1.0.62)

---
updated-dependencies:
- dependency-name: thiserror
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-12 14:28:18 +02:00
Karolin Varner
ecc815dd8e Merge pull request #363 from aparcar/regression-ci
Regression CI and fixup
2024-07-10 21:09:16 +02:00
Paul Spooren
b7d7c03e35 Merge branch 'main' into regression-ci 2024-07-10 20:06:33 +02:00
Paul Spooren
f6320c3c35 ci: fixup regression test
Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-07-10 18:57:45 +02:00
Karolin Varner
19f7905bc9 Merge pull request #362 from rosenpass/dev/karo/libcrux_chacha20poly1305
feat: Experimental support for encryption using libcrux
2024-07-10 15:08:31 +02:00
Karolin Varner
9b5b7ee620 Merge pull request #338 from aparcar/no-unused
drop unused import of WG_B64_LEN
2024-07-10 15:04:35 +02:00
dependabot[bot]
4fdd271de7 build(deps): bump clap from 4.5.8 to 4.5.9 (#365)
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.8 to 4.5.9.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/v4.5.8...v4.5.9)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-10 14:17:45 +02:00
dependabot[bot]
860e65965a build(deps): bump serde from 1.0.203 to 1.0.204 (#364)
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.203 to 1.0.204.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.203...v1.0.204)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-09 09:08:54 +02:00
Prabhpreet Dua
87144233da Prettier 2024-07-08 13:54:26 +02:00
Prabhpreet Dua
d0a6e99a1f feat: Regression CI based on misc/generate_configs.py 2024-07-08 13:54:26 +02:00
Paul Spooren
79b634fadf drop unused import of WG_B64_LEN
This causes warnings

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-07-08 13:48:00 +02:00
Karolin Varner
99ac3c0902 feat: Experimental support for encryption using libcrux
Libcrux is a library of formally verified implementations of
cryptographic primitives. It uses multiple back ends, one of which is
libjade, a cryptographic library written in the Jasmin assembly
language for high-assurance cryptographic implementations.

To use it, compile with the experiment_libcrux feature enabled:

    cargo build --features experiment_libcrux
2024-07-03 21:46:40 +02:00
dependabot[bot]
010c14dadf build(deps): bump zerocopy from 0.7.34 to 0.7.35 (#361)
Bumps [zerocopy](https://github.com/google/zerocopy) from 0.7.34 to 0.7.35.
- [Release notes](https://github.com/google/zerocopy/releases)
- [Changelog](https://github.com/google/zerocopy/blob/main/CHANGELOG.md)
- [Commits](https://github.com/google/zerocopy/commits)

---
updated-dependencies:
- dependency-name: zerocopy
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-03 11:08:42 +02:00
dependabot[bot]
45b6132312 build(deps): bump clap from 4.5.7 to 4.5.8 (#360)
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.7 to 4.5.8.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.7...v4.5.8)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-06-29 20:18:42 +02:00
dependabot[bot]
77f9fd38f3 build(deps): bump log from 0.4.21 to 0.4.22 (#359)
Bumps [log](https://github.com/rust-lang/log) from 0.4.21 to 0.4.22.
- [Release notes](https://github.com/rust-lang/log/releases)
- [Changelog](https://github.com/rust-lang/log/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rust-lang/log/compare/0.4.21...0.4.22)

---
updated-dependencies:
- dependency-name: log
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-06-29 20:17:25 +02:00
Karolin Varner
775ed86adc Merge pull request #356 from pqcfox/hotfix/fix-kyber-encaps-fuzz-test
Fix Kyber encapsulation fuzz test shared key length to make test pass
2024-06-28 16:59:05 +02:00
Katherine Watson
40377dce1f fix: Fix shared_secret length in Kyber encaps fuzz test 2024-06-27 09:17:05 -07:00
Karolin Varner
19293471e8 Merge pull request #357 from rosenpass/dev/cve/new_name
meta: Use my new name
2024-06-27 11:15:52 +02:00
Clara Engler
cc5877dd83 meta: Use my new name 2024-06-27 10:30:34 +02:00
Karolin Varner
ebb591aa6f Merge pull request #354 from pqcfox/hotfix/fix-static-kem-branch-errors
Fix CI after merge of branch introducing PublicBox
2024-06-25 08:57:50 +02:00
Katherine Watson
07146d9914 fix: update handle_msg.rs fuzz test and handshake.rs bench to use PublicBox 2024-06-21 18:21:33 -07:00
Karolin Varner
cd04dbc4eb Move static KEM public key to new PublicBox struct 2024-06-21 13:06:05 +02:00
Katherine Watson
cc22165dc4 chore: Ensure punctuation is consistent in doc comments 2024-06-17 20:53:19 -07:00
Katherine Watson
8496571765 test: Modify existing tests to cover load/store for PublicBox as well 2024-06-17 20:49:40 -07:00
Katherine Watson
ee3a1f580e Refactor PublicBox to reuse Public code and minimize stack overhead 2024-06-17 20:49:40 -07:00
Katherine Watson
89584645c3 Migrate PublicBox to above tests 2024-06-17 20:49:40 -07:00
Katherine Watson
3286e49370 Replace &* incantations with .deref() 2024-06-17 20:49:40 -07:00
Karolin Varner
100d7b6e1c chore: Simplify some dereferencing incantations in PublicBox 2024-06-17 20:49:40 -07:00
Katherine Watson
921b2bfc39 Fix comments in PublicBox impl to refer to PublicBox 2024-06-17 20:49:40 -07:00
Katherine Watson
a18658847c Move static KEM public key to new PublicBox struct 2024-06-17 20:49:40 -07:00
Alice Michaela Bowman
bdad414c90 Add cargo-test runner for macos x86-64 (#348)
* added cargo-test runner for macos x86-64
---------

Co-authored-by: Prabhpreet Dua <615318+prabhpreet@users.noreply.github.com>
2024-06-17 15:48:19 +02:00
137 changed files with 11225 additions and 2065 deletions

14
.ci/boot_race/a.toml Normal file

@@ -0,0 +1,14 @@
public_key = "rp-a-public-key"
secret_key = "rp-a-secret-key"
listen = ["127.0.0.1:9999"]
verbosity = "Verbose"
[api]
listen_path = []
listen_fd = []
stream_fd = []
[[peers]]
public_key = "rp-b-public-key"
endpoint = "127.0.0.1:9998"
key_out = "rp-b-key-out.txt"

14
.ci/boot_race/b.toml Normal file

@@ -0,0 +1,14 @@
public_key = "rp-b-public-key"
secret_key = "rp-b-secret-key"
listen = ["127.0.0.1:9998"]
verbosity = "Verbose"
[api]
listen_path = []
listen_fd = []
stream_fd = []
[[peers]]
public_key = "rp-a-public-key"
endpoint = "127.0.0.1:9999"
key_out = "rp-a-key-out.txt"

48
.ci/boot_race/run.sh Normal file

@@ -0,0 +1,48 @@
#!/bin/bash
iterations="$1"
sleep_time="$2"
config_a="$3"
config_b="$4"
PWD="$(pwd)"
EXEC="$PWD/target/release/rosenpass"
i=0
while [ "$i" -ne "$iterations" ]; do
echo "=> Iteration $i"
# flush the PSK files
echo "A" >rp-a-key-out.txt
echo "B" >rp-b-key-out.txt
# start the two instances
echo "Starting instance A"
"$EXEC" exchange-config "$config_a" &
PID_A=$!
sleep "$sleep_time"
echo "Starting instance B"
"$EXEC" exchange-config "$config_b" &
PID_B=$!
# give the key exchange some time to complete
sleep 3
# kill the instances
kill $PID_A
kill $PID_B
# compare the keys
if cmp -s rp-a-key-out.txt rp-b-key-out.txt; then
echo "Keys match"
else
echo "::warning title=Key Exchange Race Condition::The key exchange resulted in different keys. Delay was ${sleep_time}s."
# TODO: set this to 1 when the race condition is fixed
exit 0
fi
# give the instances some time to shut down
sleep 2
i=$((i + 1))
done

33
.ci/run-regression.sh Executable file

@@ -0,0 +1,33 @@
#!/usr/bin/env bash
iterations="$1"
sleep_time="$2"
PWD="$(pwd)"
EXEC="$PWD/target/release/rosenpass"
LOGS="$PWD/output/logs"
mkdir -p "$LOGS"
run_command() {
local file=$1
local log_file="$2"
("$EXEC" exchange-config "$file" 2>&1 | tee -a "$log_file") &
echo $!
}
pids=()
(cd output/dut && run_command "configs/dut-$iterations.toml" "$LOGS/dut.log")
for (( x=0; x<iterations; x++ )); do
(cd output/ate && run_command "configs/ate-$x.toml" "$LOGS/ate-$x.log") & pids+=($!)
done
sleep "$sleep_time"
lsof -i :9999 | awk 'NR!=1 {print $2}' | xargs kill
for (( x=0; x<iterations; x++ )); do
port=$((x + 50000))
lsof -i :$port | awk 'NR!=1 {print $2}' | xargs kill
done


@@ -4,3 +4,7 @@ updates:
directory: "/"
schedule:
interval: "daily"
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "daily"


@@ -13,10 +13,10 @@ jobs:
steps:
- name: Checkout code
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Clone rosenpass-website repository
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
repository: rosenpass/rosenpass-website
ref: main


@@ -6,6 +6,11 @@ on:
push:
branches:
- main
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
i686-linux---default:
name: Build i686-linux.default
@@ -14,11 +19,11 @@ jobs:
needs:
- i686-linux---rosenpass
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -30,11 +35,11 @@ jobs:
- ubuntu-latest
needs: []
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -47,11 +52,11 @@ jobs:
needs:
- i686-linux---rosenpass
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -62,11 +67,11 @@ jobs:
runs-on:
- ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -79,11 +84,11 @@ jobs:
needs:
- x86_64-darwin---rosenpass
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -98,11 +103,11 @@ jobs:
- x86_64-darwin---rp
- x86_64-darwin---rosenpass-oci-image
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -114,11 +119,11 @@ jobs:
- macos-13
needs: []
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -130,11 +135,11 @@ jobs:
- macos-13
needs: []
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -147,11 +152,11 @@ jobs:
needs:
- x86_64-darwin---rosenpass
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -162,11 +167,11 @@ jobs:
runs-on:
- macos-13
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -179,11 +184,11 @@ jobs:
needs:
- x86_64-linux---rosenpass
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -196,11 +201,11 @@ jobs:
needs:
- x86_64-linux---proverif-patched
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -212,11 +217,11 @@ jobs:
- ubuntu-latest
needs: []
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -231,51 +236,51 @@ jobs:
- x86_64-linux---rosenpass-static-oci-image
- x86_64-linux---rp-static
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
- name: Build
run: nix build .#packages.x86_64-linux.release-package --print-build-logs
aarch64-linux---release-package:
name: Build aarch64-linux.release-package
runs-on:
- ubuntu-latest
needs:
- aarch64-linux---rosenpass-oci-image
- aarch64-linux---rosenpass
- aarch64-linux---rp
steps:
- run: |
DEBIAN_FRONTEND=noninteractive
sudo apt-get update -q -y && sudo apt-get install -q -y qemu-system-aarch64 qemu-efi binfmt-support qemu-user-static
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
with:
nix_path: nixpkgs=channel:nixos-unstable
extra_nix_config: |
system = aarch64-linux
- uses: cachix/cachix-action@v12
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
- name: Build
run: nix build .#packages.aarch64-linux.release-package --print-build-logs
# aarch64-linux---release-package:
# name: Build aarch64-linux.release-package
# runs-on:
# - ubuntu-latest
# needs:
# - aarch64-linux---rosenpass-oci-image
# - aarch64-linux---rosenpass
# - aarch64-linux---rp
# steps:
# - run: |
# DEBIAN_FRONTEND=noninteractive
# sudo apt-get update -q -y && sudo apt-get install -q -y qemu-system-aarch64 qemu-efi binfmt-support qemu-user-static
# - uses: actions/checkout@v4
# - uses: cachix/install-nix-action@v30
# with:
# nix_path: nixpkgs=channel:nixos-unstable
# extra_nix_config: |
# system = aarch64-linux
# - uses: cachix/cachix-action@v15
# with:
# name: rosenpass
# authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
# - name: Build
# run: nix build .#packages.aarch64-linux.release-package --print-build-logs
x86_64-linux---rosenpass:
name: Build x86_64-linux.rosenpass
runs-on:
- ubuntu-latest
needs: []
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -290,13 +295,13 @@ jobs:
- run: |
DEBIAN_FRONTEND=noninteractive
sudo apt-get update -q -y && sudo apt-get install -q -y qemu-system-aarch64 qemu-efi binfmt-support qemu-user-static
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
extra_nix_config: |
system = aarch64-linux
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -311,13 +316,13 @@ jobs:
- run: |
DEBIAN_FRONTEND=noninteractive
sudo apt-get update -q -y && sudo apt-get install -q -y qemu-system-aarch64 qemu-efi binfmt-support qemu-user-static
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
extra_nix_config: |
system = aarch64-linux
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -330,11 +335,11 @@ jobs:
needs:
- x86_64-linux---rosenpass
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -350,13 +355,13 @@ jobs:
- run: |
DEBIAN_FRONTEND=noninteractive
sudo apt-get update -q -y && sudo apt-get install -q -y qemu-system-aarch64 qemu-efi binfmt-support qemu-user-static
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
extra_nix_config: |
system = aarch64-linux
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -368,11 +373,11 @@ jobs:
- ubuntu-latest
needs: []
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -384,11 +389,11 @@ jobs:
- ubuntu-latest
needs: []
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -401,11 +406,11 @@ jobs:
needs:
- x86_64-linux---rosenpass-static
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -417,11 +422,11 @@ jobs:
- ubuntu-latest
needs: []
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -432,11 +437,11 @@ jobs:
runs-on:
- ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -447,11 +452,11 @@ jobs:
runs-on: ubuntu-latest
if: ${{ github.ref == 'refs/heads/main' }}
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -460,7 +465,7 @@ jobs:
- name: Build
run: nix build .#packages.x86_64-linux.whitepaper --print-build-logs
- name: Deploy PDF artifacts
uses: peaceiris/actions-gh-pages@v3
uses: peaceiris/actions-gh-pages@v4
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
publish_dir: result/


@@ -4,6 +4,10 @@ on:
push:
branches: [main]
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
permissions:
checks: write
contents: read
@@ -12,8 +16,8 @@ jobs:
prettier:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actionsx/prettier@v2
- uses: actions/checkout@v4
- uses: actionsx/prettier@v3
with:
args: --check .
@@ -21,7 +25,7 @@ jobs:
name: Shellcheck
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- name: Run ShellCheck
uses: ludeeus/action-shellcheck@master
@@ -29,15 +33,15 @@ jobs:
name: Rust Format
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- name: Run Rust Formatting Script
run: bash format_rust_code.sh --mode check
cargo-bench:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/cache@v3
- uses: actions/checkout@v4
- uses: actions/cache@v4
with:
path: |
~/.cargo/bin/
@@ -57,16 +61,14 @@ jobs:
steps:
- name: Install mandoc
run: sudo apt-get install -y mandoc
- uses: actions/checkout@v3
- name: Check rosenpass.1
run: doc/check.sh doc/rosenpass.1
- uses: actions/checkout@v4
- name: Check rp.1
run: doc/check.sh doc/rp.1
cargo-audit:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- uses: actions-rs/audit-check@v1
with:
token: ${{ secrets.GITHUB_TOKEN }}
@@ -74,8 +76,8 @@ jobs:
cargo-clippy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/cache@v3
- uses: actions/checkout@v4
- uses: actions/cache@v4
with:
path: |
~/.cargo/bin/
@@ -93,8 +95,8 @@ jobs:
cargo-doc:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/cache@v3
- uses: actions/checkout@v4
- uses: actions/cache@v4
with:
path: |
~/.cargo/bin/
@@ -110,10 +112,15 @@ jobs:
- run: RUSTDOCFLAGS="-D warnings" cargo doc --no-deps --document-private-items
cargo-test:
runs-on: ubuntu-latest
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-latest, macos-13]
# - ubuntu is x86-64
# - macos-13 is also x86-64 architecture
steps:
- uses: actions/checkout@v3
- uses: actions/cache@v3
- uses: actions/checkout@v4
- uses: actions/cache@v4
with:
path: |
~/.cargo/bin/
@@ -131,8 +138,8 @@ jobs:
runs-on:
- ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/cache@v3
- uses: actions/checkout@v4
- uses: actions/cache@v4
with:
path: |
~/.cargo/bin/
@@ -141,10 +148,10 @@ jobs:
~/.cargo/git/db/
target/
key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
- uses: cachix/install-nix-action@v21
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -153,8 +160,8 @@ jobs:
cargo-fuzz:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/cache@v3
- uses: actions/checkout@v4
- uses: actions/cache@v4
with:
path: |
~/.cargo/bin/
@@ -186,17 +193,20 @@ jobs:
codecov:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- run: rustup component add llvm-tools-preview
- run: |
cargo install cargo-llvm-cov || true
cargo llvm-cov --lcov --output-path coverage.lcov
cargo llvm-cov \
--workspace\
--all-features \
--lcov \
--output-path coverage.lcov
# If using tarapulin
#- run: cargo install cargo-tarpaulin
#- run: cargo tarpaulin --out Xml
- name: Upload coverage reports to Codecov
uses: codecov/codecov-action@v4.0.1
uses: codecov/codecov-action@v5
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: ./coverage.lcov
verbose: true

37
.github/workflows/regressions.yml vendored Normal file

@@ -0,0 +1,37 @@
name: Regressions
on:
pull_request:
push:
branches: [main]
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
permissions:
checks: write
contents: read
jobs:
multi-peer:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- run: cargo build --bin rosenpass --release
- run: python misc/generate_configs.py
- run: chmod +x .ci/run-regression.sh
- run: .ci/run-regression.sh 100 20
- run: |
[ $(ls -1 output/ate/out | wc -l) -eq 100 ]
boot-race:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- run: cargo build --bin rosenpass --release
- run: chmod +x .ci/boot_race/run.sh
- run: cargo run --release --bin rosenpass gen-keys .ci/boot_race/a.toml
- run: cargo run --release --bin rosenpass gen-keys .ci/boot_race/b.toml
- run: .ci/boot_race/run.sh 5 2 .ci/boot_race/a.toml .ci/boot_race/b.toml
- run: .ci/boot_race/run.sh 5 1 .ci/boot_race/a.toml .ci/boot_race/b.toml
- run: .ci/boot_race/run.sh 5 0 .ci/boot_race/a.toml .ci/boot_race/b.toml


@@ -11,18 +11,18 @@ jobs:
runs-on:
- ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
- name: Build release
run: nix build .#release-package --print-build-logs
- name: Release
uses: softprops/action-gh-release@v1
uses: softprops/action-gh-release@v2
with:
draft: ${{ contains(github.ref_name, 'rc') }}
prerelease: ${{ contains(github.ref_name, 'alpha') || contains(github.ref_name, 'beta') }}
@@ -32,18 +32,18 @@ jobs:
runs-on:
- macos-13
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
- name: Build release
run: nix build .#release-package --print-build-logs
- name: Release
uses: softprops/action-gh-release@v1
uses: softprops/action-gh-release@v2
with:
draft: ${{ contains(github.ref_name, 'rc') }}
prerelease: ${{ contains(github.ref_name, 'alpha') || contains(github.ref_name, 'beta') }}
@@ -53,18 +53,18 @@ jobs:
runs-on:
- ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
- name: Build release
run: nix build .#release-package --print-build-logs
- name: Release
uses: softprops/action-gh-release@v1
uses: softprops/action-gh-release@v2
with:
draft: ${{ contains(github.ref_name, 'rc') }}
prerelease: ${{ contains(github.ref_name, 'alpha') || contains(github.ref_name, 'beta') }}

5
.gitignore vendored

@@ -20,3 +20,8 @@ _markdown_*
**/result
**/result-*
.direnv
# Visual studio code
.vscode
/output

1
.mailmap Normal file

@@ -0,0 +1 @@
Clara Engler <cve@cve.cx> <me@emilengler.com>


@@ -1,4 +1,5 @@
.direnv/
flake.lock
papers/whitepaper.md
target/
src/usage.md
target/

38
CONTRIBUTING.md Normal file

@@ -0,0 +1,38 @@
**Making a new Release of Rosenpass — Cooking Recipe**
If you have to change a file, do what it takes to get the change as commit on the main branch, then **start from step 0**.
If any other issue occurs
0. Make sure you are in the root directory of the project
- `cd "$(git rev-parse --show-toplevel)"`
1. Make sure you locally checked out the head of the main branch
- `git stash --include-untracked && git checkout main && git pull`
2. Make sure all tests pass
- `cargo test --workspace --all-features`
3. Make sure the current version in `rosenpass/Cargo.toml` matches that in the [last release on GitHub](https://github.com/rosenpass/rosenpass/releases)
- Only normal releases count, release candidates and draft releases can be ignored
4. Pick the kind of release that you want to make (`major`, `minor`, `patch`, `rc`, ...)
- See `cargo release --help` for more information on the available release types
- Pick `rc` if in doubt
5. Try to release a new version
- `cargo release rc --package rosenpass`
- An issue was reported? Go fix it, start again with step 0!
6. Actually make the release
- `cargo release rc --package rosenpass --execute`
- Tentatively wait for any interactions, such as entering ssh keys etc.
- You may be asked for your ssh key multiple times!
**Frequently Asked Questions (FAQ)**
- You have untracked files, which `cargo release` complains about?
- `git stash --include-untracked`
- You cannot push to crates.io because you are not logged in?
- Follow the steps displayed in [`cargo login`](https://doc.rust-lang.org/cargo/commands/cargo-login.html)
- How is the release page added to [GitHub Releases](https://github.com/rosenpass/rosenpass/releases) itself?
- Our CI Pipeline will create the release, once `cargo release` pushed the new version tag to the repo. The new release should pop up almost immediately in [GitHub Releases](https://github.com/rosenpass/rosenpass/releases) after the [Actions/Release](https://github.com/rosenpass/rosenpass/actions/workflows/release.yaml) pipeline started.
- No new release pops up in the `Release` sidebar element on the [main page](https://github.com/rosenpass/rosenpass)
- Did you push a `rc` release? This view only shows non-draft release, but `rc` releases are considered as draft. See [Releases](https://github.com/rosenpass/rosenpass/releases) page to see all (including draft!) releases.
- The release page was created on GitHub, but there are no assets/artifacts other than the source code tar ball/zip?
- The artifacts are generated and pushed automatically to the release, but this takes some time (a couple of minutes). You can check the respective CI pipeline: [Actions/Release](https://github.com/rosenpass/rosenpass/actions/workflows/release.yaml), which should start immediately after `cargo release` pushed the new release tag to the repo. The release artifacts only are added later to the release, once all jobs in bespoke pipeline finished.
- How are the release artifacts generated, and what are they?
- The release artifacts are built using one Nix derivation per platform, `nix build .#release-package`. It contains both statically linked versions of `rosenpass` itself and OCI container images.

1070
Cargo.lock generated

File diff suppressed because it is too large


@@ -32,24 +32,28 @@ rosenpass-secret-memory = { path = "secret-memory" }
rosenpass-oqs = { path = "oqs" }
rosenpass-wireguard-broker = { path = "wireguard-broker" }
doc-comment = "0.3.3"
base64ct = {version = "1.6.0", default-features=false}
base64ct = { version = "1.6.0", default-features = false }
zeroize = "1.8.1"
memoffset = "0.9.1"
thiserror = "1.0.61"
thiserror = "1.0.69"
paste = "1.0.15"
env_logger = "0.10.2"
toml = "0.7.8"
static_assertions = "1.1.0"
allocator-api2 = "0.2.14"
memsec = { git="https://github.com/rosenpass/memsec.git" ,rev="aceb9baee8aec6844125bd6612f92e9a281373df", features = [ "alloc_ext", ] }
memsec = { git = "https://github.com/rosenpass/memsec.git", rev = "aceb9baee8aec6844125bd6612f92e9a281373df", features = [
"alloc_ext",
] }
rand = "0.8.5"
typenum = "1.17.0"
log = { version = "0.4.21" }
clap = { version = "4.5.7", features = ["derive"] }
serde = { version = "1.0.203", features = ["derive"] }
arbitrary = { version = "1.3.2", features = ["derive"] }
anyhow = { version = "1.0.86", features = ["backtrace", "std"] }
mio = { version = "0.8.11", features = ["net", "os-poll"] }
log = { version = "0.4.22" }
clap = { version = "4.5.23", features = ["derive"] }
clap_mangen = "0.2.24"
clap_complete = "4.5.38"
serde = { version = "1.0.215", features = ["derive"] }
arbitrary = { version = "1.4.1", features = ["derive"] }
anyhow = { version = "1.0.94", features = ["backtrace", "std"] }
mio = { version = "1.0.3", features = ["net", "os-poll"] }
oqs-sys = { version = "0.9.1", default-features = false, features = [
'classic_mceliece',
'kyber',
@@ -59,24 +63,30 @@ chacha20poly1305 = { version = "0.10.1", default-features = false, features = [
"std",
"heapless",
] }
zerocopy = { version = "0.7.34", features = ["derive"] }
zerocopy = { version = "0.7.35", features = ["derive"] }
home = "0.5.9"
derive_builder = "0.20.0"
tokio = { version = "1.38", features = ["macros", "rt-multi-thread"] }
postcard= {version = "1.0.8", features = ["alloc"]}
derive_builder = "0.20.1"
tokio = { version = "1.42", features = ["macros", "rt-multi-thread"] }
postcard = { version = "1.1.1", features = ["alloc"] }
libcrux = { version = "0.0.2-pre.2" }
hex-literal = { version = "0.4.1" }
hex = { version = "0.4.3" }
heck = { version = "0.5.0" }
libc = { version = "0.2" }
uds = { git = "https://github.com/rosenpass/uds" }
#Dev dependencies
serial_test = "3.1.1"
serial_test = "3.2.0"
tempfile = "3"
stacker = "0.1.15"
stacker = "0.1.17"
libfuzzer-sys = "0.4"
test_bin = "0.4.0"
criterion = "0.4.0"
allocator-api2-tests = "0.2.15"
procspawn = {version = "1.0.0", features= ["test-support"]}
procspawn = { version = "1.0.1", features = ["test-support"] }
#Broker dependencies (might need cleanup or changes)
wireguard-uapi = "3.0.0"
wireguard-uapi = { version = "3.0.0", features = ["xplatform"] }
command-fds = "0.2.3"
rustix = { version = "0.38.27", features = ["net"] }
rustix = { version = "0.38.41", features = ["net", "fs"] }


@@ -10,8 +10,6 @@
//! The [KEM] Trait describes the basic API offered by a Key Encapsulation
//! Mechanism. Two implementations for it are provided, [StaticKEM] and [EphemeralKEM].
use std::result::Result;
/// Key Encapsulation Mechanism
///
/// The KEM interface defines three operations: Key generation, key encapsulation and key


@@ -9,6 +9,9 @@ homepage = "https://rosenpass.eu/"
repository = "https://github.com/rosenpass/rosenpass"
readme = "readme.md"
[features]
experiment_libcrux = ["dep:libcrux"]
[dependencies]
anyhow = { workspace = true }
rosenpass-to = { workspace = true }
@@ -20,3 +23,4 @@ static_assertions = { workspace = true }
zeroize = { workspace = true }
chacha20poly1305 = { workspace = true }
blake2 = { workspace = true }
libcrux = { workspace = true, optional = true }


@@ -2,7 +2,7 @@ use anyhow::Result;
use rosenpass_secret_memory::Secret;
use rosenpass_to::To;
use crate::subtle::incorrect_hmac_blake2b as hash;
use crate::keyed_hash as hash;
pub use hash::KEY_LEN;


@@ -7,9 +7,25 @@ const_assert!(KEY_LEN == aead::KEY_LEN);
const_assert!(KEY_LEN == xaead::KEY_LEN);
const_assert!(KEY_LEN == hash_domain::KEY_LEN);
/// Keyed hashing
///
/// This should only be used for implementation details; anything with relevance
/// to the cryptographic protocol should use the facilities in [hash_domain], (though
/// hash domain uses this module internally)
pub mod keyed_hash {
pub use crate::subtle::incorrect_hmac_blake2b::{
hash, KEY_LEN, KEY_MAX, KEY_MIN, OUT_MAX, OUT_MIN,
};
}
/// Authenticated encryption with associated data
pub mod aead {
#[cfg(not(feature = "experiment_libcrux"))]
pub use crate::subtle::chacha20poly1305_ietf::{decrypt, encrypt, KEY_LEN, NONCE_LEN, TAG_LEN};
#[cfg(feature = "experiment_libcrux")]
pub use crate::subtle::chacha20poly1305_ietf_libcrux::{
decrypt, encrypt, KEY_LEN, NONCE_LEN, TAG_LEN,
};
}
/// Authenticated encryption with associated data with a constant nonce


@@ -0,0 +1,60 @@
use rosenpass_to::ops::copy_slice;
use rosenpass_to::To;
use zeroize::Zeroize;
pub const KEY_LEN: usize = 32; // Grrrr! Libcrux, please provide me these constants.
pub const TAG_LEN: usize = 16;
pub const NONCE_LEN: usize = 12;
#[inline]
pub fn encrypt(
ciphertext: &mut [u8],
key: &[u8],
nonce: &[u8],
ad: &[u8],
plaintext: &[u8],
) -> anyhow::Result<()> {
let (ciphertext, mac) = ciphertext.split_at_mut(ciphertext.len() - TAG_LEN);
use libcrux::aead as C;
let crux_key = C::Key::Chacha20Poly1305(C::Chacha20Key(key.try_into().unwrap()));
let crux_iv = C::Iv(nonce.try_into().unwrap());
copy_slice(plaintext).to(ciphertext);
let crux_tag = libcrux::aead::encrypt(&crux_key, ciphertext, crux_iv, ad).unwrap();
copy_slice(crux_tag.as_ref()).to(mac);
match crux_key {
C::Key::Chacha20Poly1305(mut k) => k.0.zeroize(),
_ => panic!(),
}
Ok(())
}
#[inline]
pub fn decrypt(
plaintext: &mut [u8],
key: &[u8],
nonce: &[u8],
ad: &[u8],
ciphertext: &[u8],
) -> anyhow::Result<()> {
let (ciphertext, mac) = ciphertext.split_at(ciphertext.len() - TAG_LEN);
use libcrux::aead as C;
let crux_key = C::Key::Chacha20Poly1305(C::Chacha20Key(key.try_into().unwrap()));
let crux_iv = C::Iv(nonce.try_into().unwrap());
let crux_tag = C::Tag::from_slice(mac).unwrap();
copy_slice(ciphertext).to(plaintext);
libcrux::aead::decrypt(&crux_key, plaintext, crux_iv, ad, &crux_tag).unwrap();
match crux_key {
C::Key::Chacha20Poly1305(mut k) => k.0.zeroize(),
_ => panic!(),
}
Ok(())
}


@@ -1,4 +1,7 @@
pub mod blake2b;
#[cfg(not(feature = "experiment_libcrux"))]
pub mod chacha20poly1305_ietf;
#[cfg(feature = "experiment_libcrux")]
pub mod chacha20poly1305_ietf_libcrux;
pub mod incorrect_hmac_blake2b;
pub mod xchacha20poly1305_ietf;


@@ -1,7 +1,15 @@
//! Constant-time comparison
use core::ptr;
/// Little endian memcmp version of quinier/memsec
/// https://github.com/quininer/memsec/blob/bbc647967ff6d20d6dccf1c85f5d9037fcadd3b0/src/lib.rs#L30
///
/// # Panic & Safety
///
/// Both input arrays must be at least of the indicated length.
///
/// See [std::ptr::read_volatile] on safety.
#[inline(never)]
pub unsafe fn memcmp_le(b1: *const u8, b2: *const u8, len: usize) -> i32 {
let mut res = 0;
@@ -13,6 +21,16 @@ pub unsafe fn memcmp_le(b1: *const u8, b2: *const u8, len: usize) -> i32 {
((res - 1) >> 8) + (res >> 8) + 1
}
#[test]
pub fn memcmp_le_test() {
// use rosenpass_constant_time::memcmp_le;
let a = [0, 1, 0, 0];
let b = [0, 0, 0, 1];
assert_eq!(-1, unsafe { memcmp_le(a.as_ptr(), b.as_ptr(), 4) });
assert_eq!(0, unsafe { memcmp_le(a.as_ptr(), a.as_ptr(), 4) });
assert_eq!(1, unsafe { memcmp_le(b.as_ptr(), a.as_ptr(), 4) });
}
/// compares two slices of memory content and returns an integer indicating the relationship between
/// the slices
///
@@ -32,6 +50,28 @@ pub unsafe fn memcmp_le(b1: *const u8, b2: *const u8, len: usize) -> i32 {
/// ## Tests
/// For discussion on how to ensure the constant-time execution of this function, see
/// <https://github.com/rosenpass/rosenpass/issues/232>
///
/// # Examples
///
/// ```rust
/// use rosenpass_constant_time::compare;
/// let a = [0, 1, 0, 0];
/// let b = [0, 0, 0, 1];
/// assert_eq!(-1, compare(&a, &b));
/// assert_eq!(0, compare(&a, &a));
/// assert_eq!(1, compare(&b, &a));
/// ```
///
/// # Panic
///
/// This function will panic if the input arrays are of different lengths.
///
/// ```should_panic
/// use rosenpass_constant_time::compare;
/// let a = [0, 1, 0];
/// let b = [0, 0, 0, 1];
/// compare(&a, &b);
/// ```
#[inline]
pub fn compare(a: &[u8], b: &[u8]) -> i32 {
assert!(a.len() == b.len());

View File

@@ -1,3 +1,5 @@
//! Incrementing numbers
use core::hint::black_box;
/// Interpret the given slice as a little-endian unsigned integer

View File

@@ -1,3 +1,5 @@
#![warn(missing_docs)]
#![warn(clippy::missing_docs_in_private_items)]
//! constant-time implementations of some primitives
//!
//! Rosenpass internal library providing basic constant-time operations.

View File

@@ -1,3 +1,5 @@
//! memcmp
/// compares two slices of memory content and returns whether they are equal
///
/// ## Leaks
@@ -8,17 +10,27 @@
/// The execution time of the function grows approximately linearly with the length of the input. This is
/// considered safe.
///
/// ## Tests
/// [`tests::memcmp_runs_in_constant_time`] runs a statistical test verifying that the equality of
/// the two input parameters does not correlate with the run time.
/// ## Examples
///
/// For discussion on how to (further) ensure the constant-time execution of this function,
/// see <https://github.com/rosenpass/rosenpass/issues/232>
/// ```rust
/// use rosenpass_constant_time::memcmp;
/// let a = [0, 0, 0, 0];
/// let b = [0, 0, 0, 1];
/// let c = [0, 0, 0];
/// assert!(memcmp(&a, &a));
/// assert!(!memcmp(&a, &b));
/// assert!(!memcmp(&a, &c));
/// ```
#[inline]
pub fn memcmp(a: &[u8], b: &[u8]) -> bool {
a.len() == b.len() && unsafe { memsec::memeq(a.as_ptr(), b.as_ptr(), a.len()) }
}
/// [tests::memcmp_runs_in_constant_time] runs a statistical test verifying that the equality of
/// the two input parameters does not correlate with the run time.
///
/// For discussion on how to (further) ensure the constant-time execution of this function,
/// see <https://github.com/rosenpass/rosenpass/issues/232>
#[cfg(all(test, feature = "constant_time_tests"))]
mod tests {
use super::*;

View File

@@ -1,3 +1,5 @@
//! xor
use core::hint::black_box;
use rosenpass_to::{with_destination, To};

View File

@@ -1,114 +0,0 @@
.Dd $Mdocdate$
.Dt ROSENPASS 1
.Os
.Sh NAME
.Nm rosenpass
.Nd builds post-quantum-secure VPNs
.Sh SYNOPSIS
.Nm
.Op COMMAND
.Op Ar OPTIONS ...
.Op Ar ARGS ...
.Sh DESCRIPTION
.Nm
performs cryptographic key exchanges that are secure against quantum computers
and then outputs the keys.
These keys can then be passed to various services, such as WireGuard or other
VPN services, as pre-shared keys to achieve security against attackers with
quantum computers.
.Pp
This is a research project and quantum computers are not thought to become
practical in fewer than ten years.
If you are not specifically tasked with developing post-quantum secure systems,
you probably do not need this tool.
.Ss COMMANDS
.Bl -tag -width Ds
.It Ar gen-keys --secret-key <file-path> --public-key <file-path>
Generate a keypair to use in the exchange command later.
Send the public-key file to your communication partner and keep the private-key
file secret!
.It Ar exchange private-key <file-path> public-key <file-path> [ OPTIONS ] PEERS
Start a process to exchange keys with the specified peers.
You should specify at least one peer.
.Pp
Its
.Ar OPTIONS
are as follows:
.Bl -tag -width Ds
.It Ar listen <ip>[:<port>]
Instructs
.Nm
to listen on the specified interface and port.
By default,
.Nm
will listen on all interfaces and select a random port.
.It Ar verbose
Extra logging.
.El
.El
.Ss PEER
Each
.Ar PEER
is defined as follows:
.Qq peer public-key <file-path> [endpoint <ip>[:<port>]] [preshared-key <file-path>] [outfile <file-path>] [wireguard <dev> <peer> <extra_params>]
.Pp
Providing a
.Ar PEER
instructs
.Nm
to exchange keys with the given peer and write the resulting PSK into the given
output file.
You must either specify the outfile or wireguard output option.
.Pp
The parameters of
.Ar PEER
are as follows:
.Bl -tag -width Ds
.It Ar endpoint <ip>[:<port>]
Specifies the address where the peer can be reached.
This will be automatically updated after the first successful key exchange with
the peer.
If this is unspecified, the peer must initiate the connection.
.It Ar preshared-key <file-path>
You may specify a pre-shared key which will be mixed into the final secret.
.It Ar outfile <file-path>
You may specify a file to write the exchanged keys to.
If this option is specified,
.Nm
will write a notification to standard out every time the key is updated.
.It Ar wireguard <dev> <peer> <extra_params>
This allows you to directly specify a wireguard peer to deploy the
pre-shared-key to.
You may specify extra parameters you would pass to
.Qq wg set
besides the preshared-key parameter which is used by
.Nm .
This makes it possible to add peers entirely from
.Nm .
.El
.Sh EXIT STATUS
.Ex -std
.Sh SEE ALSO
.Xr rp 1 ,
.Xr wg 1
.Rs
.%A Karolin Varner
.%A Benjamin Lipp
.%A Wanja Zaeske
.%A Lisa Schmidt
.%D 2023
.%T Rosenpass
.%U https://rosenpass.eu/whitepaper.pdf
.Re
.Sh STANDARDS
This tool is the reference implementation of the Rosenpass protocol, as
specified within the whitepaper referenced above.
.Sh AUTHORS
Rosenpass was created by Karolin Varner, Benjamin Lipp, Wanja Zaeske,
Marei Peischl, Stephan Ajuvo, and Lisa Schmidt.
.Pp
This manual page was written by
.An Emil Engler
.Sh BUGS
The bugs are tracked at
.Lk https://github.com/rosenpass/rosenpass/issues .

View File

@@ -113,7 +113,7 @@ Rosenpass was created by Karolin Varner, Benjamin Lipp, Wanja Zaeske,
Marei Peischl, Stephan Ajuvo, and Lisa Schmidt.
.Pp
This manual page was written by
.An Emil Engler
.An Clara Engler
.Sh BUGS
The bugs are tracked at
.Lk https://github.com/rosenpass/rosenpass/issues .

flake.lock generated
View File

@@ -2,15 +2,17 @@
"nodes": {
"fenix": {
"inputs": {
"nixpkgs": ["nixpkgs"],
"nixpkgs": [
"nixpkgs"
],
"rust-analyzer-src": "rust-analyzer-src"
},
"locked": {
"lastModified": 1712298178,
"narHash": "sha256-590fpCPXYAkaAeBz/V91GX4/KGzPObdYtqsTWzT6AhI=",
"lastModified": 1728282832,
"narHash": "sha256-I7AbcwGggf+CHqpyd/9PiAjpIBGTGx5woYHqtwxaV7I=",
"owner": "nix-community",
"repo": "fenix",
"rev": "569b5b5781395da08e7064e825953c548c26af76",
"rev": "1ec71be1f4b8f3105c5d38da339cb061fefc43f4",
"type": "github"
},
"original": {
@@ -24,11 +26,11 @@
"systems": "systems"
},
"locked": {
"lastModified": 1710146030,
"narHash": "sha256-SZ5L6eA7HJ/nmkzGG7/ISclqe6oZdOZTNoesiInkXPQ=",
"lastModified": 1726560853,
"narHash": "sha256-X6rJYSESBVr3hBoH0WbKE5KvhPU5bloyZ2L4K60/fPQ=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "b1d9ab70662946ef0850d488da1c9019f3a9752a",
"rev": "c1dfcf08411b08f6b8615f7d8971a2bfa81d5e8a",
"type": "github"
},
"original": {
@@ -37,36 +39,18 @@
"type": "github"
}
},
"naersk": {
"inputs": {
"nixpkgs": ["nixpkgs"]
},
"locked": {
"lastModified": 1698420672,
"narHash": "sha256-/TdeHMPRjjdJub7p7+w55vyABrsJlt5QkznPYy55vKA=",
"owner": "nix-community",
"repo": "naersk",
"rev": "aeb58d5e8faead8980a807c840232697982d47b9",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "naersk",
"type": "github"
}
},
"nixpkgs": {
"locked": {
"lastModified": 1712168706,
"narHash": "sha256-XP24tOobf6GGElMd0ux90FEBalUtw6NkBSVh/RlA6ik=",
"lastModified": 1728193676,
"narHash": "sha256-PbDWAIjKJdlVg+qQRhzdSor04bAPApDqIv2DofTyynk=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "1487bdea619e4a7a53a4590c475deabb5a9d1bfb",
"rev": "ecbc1ca8ffd6aea8372ad16be9ebbb39889e55b6",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixos-23.11",
"ref": "nixos-24.05",
"repo": "nixpkgs",
"type": "github"
}
@@ -75,18 +59,17 @@
"inputs": {
"fenix": "fenix",
"flake-utils": "flake-utils",
"naersk": "naersk",
"nixpkgs": "nixpkgs"
}
},
"rust-analyzer-src": {
"flake": false,
"locked": {
"lastModified": 1712156296,
"narHash": "sha256-St7ZQrkrr5lmQX9wC1ZJAFxL8W7alswnyZk9d1se3Us=",
"lastModified": 1728249780,
"narHash": "sha256-J269DvCI5dzBmPrXhAAtj566qt0b22TJtF3TIK+tMsI=",
"owner": "rust-lang",
"repo": "rust-analyzer",
"rev": "8e581ac348e223488622f4d3003cb2bd412bf27e",
"rev": "2b750da1a1a2c1d2c70896108d7096089842d877",
"type": "github"
},
"original": {

flake.nix
View File

@@ -1,12 +1,8 @@
{
inputs = {
nixpkgs.url = "github:NixOS/nixpkgs/nixos-23.11";
nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";
flake-utils.url = "github:numtide/flake-utils";
# for quicker rust builds
naersk.url = "github:nix-community/naersk";
naersk.inputs.nixpkgs.follows = "nixpkgs";
# for rust nightly with llvm-tools-preview
fenix.url = "github:nix-community/fenix";
fenix.inputs.nixpkgs.follows = "nixpkgs";
@@ -15,6 +11,15 @@
outputs = { self, nixpkgs, flake-utils, ... }@inputs:
nixpkgs.lib.foldl (a: b: nixpkgs.lib.recursiveUpdate a b) { } [
#
### Export the overlay.nix from this flake ###
#
{
overlays.default = import ./overlay.nix;
}
#
### Actual Rosenpass Package and Docker Container Images ###
#
@@ -30,310 +35,39 @@
]
(system:
let
scoped = (scope: scope.result);
lib = nixpkgs.lib;
# normal nixpkgs
pkgs = import nixpkgs {
inherit system;
};
# parsed Cargo.toml
cargoToml = builtins.fromTOML (builtins.readFile ./rosenpass/Cargo.toml);
# source files relevant for rust
src = scoped rec {
# File suffixes to include
extensions = [
"lock"
"rs"
"toml"
];
# Files to explicitly include
files = [
"to/README.md"
];
src = ./.;
filter = (path: type: scoped rec {
inherit (lib) any id removePrefix hasSuffix;
anyof = (any id);
basename = baseNameOf (toString path);
relative = removePrefix (toString src + "/") (toString path);
result = anyof [
(type == "directory")
(any (ext: hasSuffix ".${ext}" basename) extensions)
(any (file: file == relative) files)
];
});
result = pkgs.lib.sources.cleanSourceWith { inherit src filter; };
};
# a function to generate a nix derivation for rosenpass against any
# given set of nixpkgs
rosenpassDerivation = p:
let
# whether we want to build a statically linked binary
isStatic = p.targetPlatform.isStatic;
# the rust target of `p`
target = p.rust.toRustTargetSpec p.targetPlatform;
# convert a string to shout case
shout = string: builtins.replaceStrings [ "-" ] [ "_" ] (pkgs.lib.toUpper string);
# suitable Rust toolchain
toolchain = with inputs.fenix.packages.${system}; combine [
stable.cargo
stable.rustc
targets.${target}.stable.rust-std
];
# naersk with a custom toolchain
naersk = pkgs.callPackage inputs.naersk {
cargo = toolchain;
rustc = toolchain;
};
# used to trick the build.rs into believing that CMake was run **again**
fakecmake = pkgs.writeScriptBin "cmake" ''
#! ${pkgs.stdenv.shell} -e
true
'';
in
naersk.buildPackage
{
# metadata and source
name = cargoToml.package.name;
version = cargoToml.package.version;
inherit src;
cargoBuildOptions = x: x ++ [ "-p" "rosenpass" ];
cargoTestOptions = x: x ++ [ "-p" "rosenpass" ];
doCheck = true;
nativeBuildInputs = with pkgs; [
p.stdenv.cc
cmake # for oqs build in the oqs-sys crate
mandoc # for the built-in manual
removeReferencesTo
rustPlatform.bindgenHook # for C-bindings in the crypto libs
];
buildInputs = with p; [ bash ];
override = x: {
preBuild =
# nix defaults to building for aarch64 _without_ the armv8-a crypto
# extensions, but liboqs depends on these
(lib.optionalString (system == "aarch64-linux") ''
NIX_CFLAGS_COMPILE="$NIX_CFLAGS_COMPILE -march=armv8-a+crypto"
''
);
# fortify is only compatible with dynamic linking
hardeningDisable = lib.optional isStatic "fortify";
};
overrideMain = x: {
# CMake detects that it was served a _foreign_ target dir, and CMake
# would be executed again upon the second build step of naersk.
# By adding our specially optimized CMake version, we reduce the cost
# of recompilation by 99 % while avoiding any CMake errors.
nativeBuildInputs = [ (lib.hiPrio fakecmake) ] ++ x.nativeBuildInputs;
# make sure that libc is linked, under musl this is not the case per
# default
preBuild = (lib.optionalString isStatic ''
NIX_CFLAGS_COMPILE="$NIX_CFLAGS_COMPILE -lc"
'');
};
# We want to build for a specific target...
CARGO_BUILD_TARGET = target;
# ... which might require a non-default linker:
"CARGO_TARGET_${shout target}_LINKER" =
let
inherit (p.stdenv) cc;
in
"${cc}/bin/${cc.targetPrefix}cc";
meta = with pkgs.lib;
{
inherit (cargoToml.package) description homepage;
license = with licenses; [ mit asl20 ];
maintainers = [ maintainers.wucke13 ];
platforms = platforms.all;
};
} // (lib.mkIf isStatic {
# otherwise pkg-config tries to link non-existent dynamic libs
# documented here: https://docs.rs/pkg-config/latest/pkg_config/
PKG_CONFIG_ALL_STATIC = true;
# tell rust to build everything statically linked
CARGO_BUILD_RUSTFLAGS = "-C target-feature=+crt-static";
});
# a function to generate a nix derivation for the rp helper against any
# given set of nixpkgs
rpDerivation = p:
let
# whether we want to build a statically linked binary
isStatic = p.targetPlatform.isStatic;
# the rust target of `p`
target = p.rust.toRustTargetSpec p.targetPlatform;
# convert a string to shout case
shout = string: builtins.replaceStrings [ "-" ] [ "_" ] (pkgs.lib.toUpper string);
# suitable Rust toolchain
toolchain = with inputs.fenix.packages.${system}; combine [
stable.cargo
stable.rustc
targets.${target}.stable.rust-std
];
# naersk with a custom toolchain
naersk = pkgs.callPackage inputs.naersk {
cargo = toolchain;
rustc = toolchain;
};
# used to trick the build.rs into believing that CMake was run **again**
fakecmake = pkgs.writeScriptBin "cmake" ''
#! ${pkgs.stdenv.shell} -e
true
'';
in
naersk.buildPackage
{
# metadata and source
name = cargoToml.package.name;
version = cargoToml.package.version;
inherit src;
cargoBuildOptions = x: x ++ [ "-p" "rp" ];
cargoTestOptions = x: x ++ [ "-p" "rp" ];
doCheck = true;
nativeBuildInputs = with pkgs; [
p.stdenv.cc
cmake # for oqs build in the oqs-sys crate
mandoc # for the built-in manual
removeReferencesTo
rustPlatform.bindgenHook # for C-bindings in the crypto libs
];
buildInputs = with p; [ bash ];
override = x: {
preBuild =
# nix defaults to building for aarch64 _without_ the armv8-a crypto
# extensions, but liboqs depends on these
(lib.optionalString (system == "aarch64-linux") ''
NIX_CFLAGS_COMPILE="$NIX_CFLAGS_COMPILE -march=armv8-a+crypto"
''
);
# fortify is only compatible with dynamic linking
hardeningDisable = lib.optional isStatic "fortify";
};
overrideMain = x: {
# CMake detects that it was served a _foreign_ target dir, and CMake
# would be executed again upon the second build step of naersk.
# By adding our specially optimized CMake version, we reduce the cost
# of recompilation by 99 % while avoiding any CMake errors.
nativeBuildInputs = [ (lib.hiPrio fakecmake) ] ++ x.nativeBuildInputs;
# make sure that libc is linked, under musl this is not the case per
# default
preBuild = (lib.optionalString isStatic ''
NIX_CFLAGS_COMPILE="$NIX_CFLAGS_COMPILE -lc"
'');
};
# We want to build for a specific target...
CARGO_BUILD_TARGET = target;
# ... which might require a non-default linker:
"CARGO_TARGET_${shout target}_LINKER" =
let
inherit (p.stdenv) cc;
in
"${cc}/bin/${cc.targetPrefix}cc";
meta = with pkgs.lib;
{
inherit (cargoToml.package) description homepage;
license = with licenses; [ mit asl20 ];
maintainers = [ maintainers.wucke13 ];
platforms = platforms.all;
};
} // (lib.mkIf isStatic {
# otherwise pkg-config tries to link non-existent dynamic libs
# documented here: https://docs.rs/pkg-config/latest/pkg_config/
PKG_CONFIG_ALL_STATIC = true;
# tell rust to build everything statically linked
CARGO_BUILD_RUSTFLAGS = "-C target-feature=+crt-static";
});
# a function to generate a docker image based of rosenpass
rosenpassOCI = name: pkgs.dockerTools.buildImage rec {
inherit name;
copyToRoot = pkgs.buildEnv {
name = "image-root";
paths = [ self.packages.${system}.${name} ];
pathsToLink = [ "/bin" ];
};
config.Cmd = [ "/bin/rosenpass" ];
# apply our own overlay, overriding/inserting our packages as defined in ./pkgs
overlays = [ self.overlays.default ];
};
in
rec {
packages = rec {
default = rosenpass;
rosenpass = rosenpassDerivation pkgs;
rp = rpDerivation pkgs;
rosenpass-oci-image = rosenpassOCI "rosenpass";
{
packages = {
default = pkgs.rosenpass;
rosenpass = pkgs.rosenpass;
rosenpass-oci-image = pkgs.rosenpass-oci-image;
rp = pkgs.rp;
# derivation for the release
release-package =
let
version = cargoToml.package.version;
package =
if pkgs.hostPlatform.isLinux then
packages.rosenpass-static
else packages.rosenpass;
rp =
if pkgs.hostPlatform.isLinux then
packages.rp-static
else packages.rp;
oci-image =
if pkgs.hostPlatform.isLinux then
packages.rosenpass-static-oci-image
else packages.rosenpass-oci-image;
in
pkgs.runCommandNoCC "lace-result" { }
''
mkdir {bin,$out}
tar -cvf $out/rosenpass-${system}-${version}.tar \
-C ${package} bin/rosenpass \
-C ${rp} bin/rp
cp ${oci-image} \
$out/rosenpass-oci-image-${system}-${version}.tar.gz
'';
} // (if pkgs.stdenv.isLinux then rec {
rosenpass-static = rosenpassDerivation pkgs.pkgsStatic;
rp-static = rpDerivation pkgs.pkgsStatic;
rosenpass-static-oci-image = rosenpassOCI "rosenpass-static";
} else { });
release-package = pkgs.release-package;
# for good measure, we also offer to cross compile to Linux on Arm
aarch64-linux-rosenpass-static =
pkgs.pkgsCross.aarch64-multiplatform.pkgsStatic.rosenpass;
aarch64-linux-rp-static = pkgs.pkgsCross.aarch64-multiplatform.pkgsStatic.rp;
}
//
# We only offer static builds for linux, as this is not supported on OS X
(nixpkgs.lib.attrsets.optionalAttrs pkgs.stdenv.isLinux {
rosenpass-static = pkgs.pkgsStatic.rosenpass;
rosenpass-static-oci-image = pkgs.pkgsStatic.rosenpass-oci-image;
rp-static = pkgs.pkgsStatic.rp;
});
}
))
#
### Linux specifics ###
#
@@ -341,88 +75,60 @@
let
pkgs = import nixpkgs {
inherit system;
# apply our own overlay, overriding/inserting our packages as defined in ./pkgs
overlays = [ self.overlays.default ];
};
packages = self.packages.${system};
in
{
#
### Whitepaper ###
#
packages.whitepaper =
let
tlsetup = (pkgs.texlive.combine {
inherit (pkgs.texlive) scheme-basic acmart amsfonts ccicons
csquotes csvsimple doclicense fancyvrb fontspec gobble
koma-script ifmtarg latexmk lm markdown mathtools minted noto
nunito pgf soul unicode-math lualatex-math paralist
gitinfo2 eso-pic biblatex biblatex-trad biblatex-software
xkeyval xurl xifthen biber;
});
in
pkgs.stdenvNoCC.mkDerivation {
name = "whitepaper";
src = ./papers;
nativeBuildInputs = with pkgs; [
ncurses # tput
python3Packages.pygments
tlsetup # custom tex live scheme
which
];
buildPhase = ''
export HOME=$(mktemp -d)
latexmk -r tex/CI.rc
'';
installPhase = ''
mkdir -p $out
mv *.pdf readme.md $out/
'';
};
#
### Reading materials ###
#
packages.whitepaper = pkgs.whitepaper;
#
### Proof and Proof Tools ###
#
packages.proverif-patched = pkgs.proverif.overrideAttrs (old: {
postInstall = ''
install -D -t $out/lib cryptoverif.pvl
'';
});
packages.proof-proverif = pkgs.stdenv.mkDerivation {
name = "rosenpass-proverif-proof";
version = "unstable";
src = pkgs.lib.sources.sourceByRegex ./. [
"analyze.sh"
"marzipan(/marzipan.awk)?"
"analysis(/.*)?"
];
nativeBuildInputs = [ pkgs.proverif pkgs.graphviz ];
CRYPTOVERIF_LIB = packages.proverif-patched + "/lib/cryptoverif.pvl";
installPhase = ''
mkdir -p $out
bash analyze.sh -color -html $out
'';
};
packages.proverif-patched = pkgs.proverif-patched;
packages.proof-proverif = pkgs.proof-proverif;
#
### Devshells ###
#
devShells.default = pkgs.mkShell {
inherit (packages.proof-proverif) CRYPTOVERIF_LIB;
inputsFrom = [ packages.default ];
inherit (pkgs.proof-proverif) CRYPTOVERIF_LIB;
inputsFrom = [ pkgs.rosenpass ];
nativeBuildInputs = with pkgs; [
cmake # override the fakecmake from the main step above
cargo-release
clippy
rustfmt
nodePackages.prettier
nushell # for the .ci/gen-workflow-files.nu script
proverif-patched
];
};
# TODO: Write this as a patched version of the default environment
devShells.fullEnv = pkgs.mkShell {
inherit (pkgs.proof-proverif) CRYPTOVERIF_LIB;
inputsFrom = [ pkgs.rosenpass ];
nativeBuildInputs = with pkgs; [
cargo-release
rustfmt
packages.proverif-patched
nodePackages.prettier
nushell # for the .ci/gen-workflow-files.nu script
proverif-patched
inputs.fenix.packages.${system}.complete.toolchain
pkgs.cargo-llvm-cov
];
};
devShells.coverage = pkgs.mkShell {
inputsFrom = [ packages.default ];
nativeBuildInputs = with pkgs; [ inputs.fenix.packages.${system}.complete.toolchain cargo-llvm-cov ];
inputsFrom = [ pkgs.rosenpass ];
nativeBuildInputs = [
inputs.fenix.packages.${system}.complete.toolchain
pkgs.cargo-llvm-cov
];
};

View File

@@ -4,6 +4,9 @@ version = "0.0.1"
publish = false
edition = "2021"
[features]
experiment_libcrux = ["rosenpass-ciphers/experiment_libcrux"]
[package.metadata]
cargo-fuzz = true
@@ -81,4 +84,4 @@ doc = false
name = "fuzz_vec_secret_alloc_memfdsec_mallocfb"
path = "fuzz_targets/vec_secret_alloc_memfdsec_mallocfb.rs"
test = false
doc = false
doc = false

View File

@@ -15,8 +15,7 @@ pub struct Input {
}
fuzz_target!(|input: Input| {
let mut ciphertext: Vec<u8> = Vec::with_capacity(input.plaintext.len() + 16);
ciphertext.resize(input.plaintext.len() + 16, 0);
let mut ciphertext = vec![0u8; input.plaintext.len() + 16];
aead::encrypt(
ciphertext.as_mut_slice(),

View File

@@ -7,14 +7,14 @@ use rosenpass::protocol::CryptoServer;
use rosenpass_cipher_traits::Kem;
use rosenpass_ciphers::kem::StaticKem;
use rosenpass_secret_memory::policy::*;
use rosenpass_secret_memory::Secret;
use rosenpass_secret_memory::{PublicBox, Secret};
use std::sync::Once;
static ONCE: Once = Once::new();
fuzz_target!(|rx_buf: &[u8]| {
ONCE.call_once(secret_policy_use_only_malloc_secrets);
let sk = Secret::from_slice(&[0; StaticKem::SK_LEN]);
let pk = Secret::from_slice(&[0; StaticKem::PK_LEN]);
let pk = PublicBox::from_slice(&[0; StaticKem::PK_LEN]);
let mut cs = CryptoServer::new(sk, pk);
let mut tx_buf = [0; 10240];

View File

@@ -14,7 +14,7 @@ pub struct Input {
fuzz_target!(|input: Input| {
let mut ciphertext = [0u8; EphemeralKem::CT_LEN];
let mut shared_secret = [0u8; EphemeralKem::SK_LEN];
let mut shared_secret = [0u8; EphemeralKem::SHK_LEN];
EphemeralKem::encaps(&mut shared_secret, &mut ciphertext, &input.pk).unwrap();
});

View File

@@ -0,0 +1,13 @@
secret_key = "peer_a.rp.sk"
public_key = "peer_a.rp.pk"
listen = ["[::1]:46127"]
verbosity = "Verbose"
[api]
listen_path = []
listen_fd = []
stream_fd = []
[[peers]]
public_key = "peer_b.rp.pk"
device = "rpPskBrkTestA"

View File

@@ -0,0 +1,14 @@
secret_key = "peer_b.rp.sk"
public_key = "peer_b.rp.pk"
listen = []
verbosity = "Verbose"
[api]
listen_path = []
listen_fd = []
stream_fd = []
[[peers]]
public_key = "peer_a.rp.pk"
endpoint = "[::1]:46127"
device = "rpPskBrkTestB"

View File

@@ -0,0 +1,215 @@
#! /bin/bash
set -e -o pipefail
enquote() {
while (( "$#" > 1)); do
printf "%q " "$1"
shift
done
if (("$#" > 0)); then
printf "%q" "$1"
fi
}
CLEANUP_HOOKS=()
hook_cleanup() {
local hook
set +e +o pipefail
for hook in "${CLEANUP_HOOKS[@]}"; do
eval "${hook}"
done
}
cleanup() {
CLEANUP_HOOKS=("$(enquote exc_with_ctx cleanup "$@")" "${CLEANUP_HOOKS[@]}")
}
cleanup_eval() {
cleanup eval "$*"
}
stderr() {
echo >&2 "$@"
}
log() {
local level; level="$1"; shift || fatal "USAGE: log LVL MESSAGE.."
stderr "[${level}]" "$@"
}
info() {
log "INFO" "$@"
}
debug() {
log "DEBUG" "$@"
}
fatal() {
log "FATAL" "$@"
exit 1
}
assert() {
local msg; msg="$1"; shift || fatal "USAGE: assert MESSAGE COMMAND.."
"$@" || fatal "${msg}"
}
abs_dir() {
local dir; dir="$1"; shift || fatal "USAGE: abs_dir DIR"
(
cd "${dir}"
pwd -P
)
}
exc_with_ctx() {
local ctx; ctx="$1"; shift || fatal "USAGE: exc_with_ctx CONTEXT COMMAND.."
if [[ -z "${ctx}" ]]; then
info '$' "$@"
else
info "${ctx}\$" "$@"
fi
"$@"
}
exc() {
exc_with_ctx "" "$@"
}
exc_eval() {
exc eval "$*"
}
exc_eval_with_ctx() {
local ctx; ctx="$1"; shift || fatal "USAGE: exc_eval_with_ctx CONTEXT EVAL_COMMAND.."
exc_with_ctx "eval:${ctx}" "$*"
}
exc_as_user() {
exc sudo -u "${SUDO_USER}" "$@"
}
exc_eval_as_user() {
exc_as_user bash -c "$*"
}
fork_eval_as_user() {
exc sudo -u "${SUDO_USER}" bash -c "$*" &
local pid; pid="$!"
cleanup wait "${pid}"
cleanup pkill -2 -P "${pid}" # Reverse ordering
}
info_success() {
stderr
stderr
if [[ "${SUCCESS}" = 1 ]]; then
stderr " Test was a success!"
else
stderr " !!! TEST WAS A FAILURE!!!"
fi
stderr
}
main() {
assert "Use as root with sudo" [ "$(id -u)" -eq 0 ]
assert "Use as root with sudo" [ -n "${SUDO_UID}" ]
assert "SUDO_UID is 0; refusing to build as root" [ "${SUDO_UID}" -ne 0 ]
cleanup info_success
trap hook_cleanup EXIT
SCRIPT="$0"
CFG_TEMPLATE_DIR="$(abs_dir "$(dirname "${SCRIPT}")")"
REPO="$(abs_dir "${CFG_TEMPLATE_DIR}/../..")"
BINS="${REPO}/target/debug"
# Create temp dir
TMP_DIR="/tmp/rosenpass-psk-broker-test-$(date +%s)-$(uuidgen)"
cleanup rm -rf "${TMP_DIR}"
exc_as_user mkdir -p "${TMP_DIR}"
# Copy config
CFG_DIR="${TMP_DIR}/cfg"
exc_as_user cp -R "${CFG_TEMPLATE_DIR}" "${CFG_DIR}"
exc umask 077
exc cd "${REPO}"
local build_cmd; build_cmd=(cargo build --workspace --color=always --all-features --bins --profile dev)
if test -e "${BINS}/rosenpass-wireguard-broker-privileged" -a -e "${BINS}/rosenpass"; then
info "Found the binaries rosenpass-wireguard-broker-privileged and rosenpass." \
"Run following commands as a regular user to recompile the binaries with the right options" \
"in case of an error:" '$' "${build_cmd[@]}"
else
exc_as_user "${build_cmd[@]}"
fi
exc sudo setcap CAP_NET_ADMIN=+eip "${BINS}/rosenpass-wireguard-broker-privileged"
exc cd "${CFG_DIR}"
exc_eval_as_user "wg genkey > peer_a.wg.sk"
exc_eval_as_user "wg pubkey < peer_a.wg.sk > peer_a.wg.pk"
exc_eval_as_user "wg genkey > peer_b.wg.sk"
exc_eval_as_user "wg pubkey < peer_b.wg.sk > peer_b.wg.pk"
exc_eval_as_user "wg genpsk > peer_a_invalid.psk"
exc_eval_as_user "wg genpsk > peer_b_invalid.psk"
exc_eval_as_user "echo $(enquote "peer = \"$(cat peer_b.wg.pk)\"") >> peer_a.rp.config"
exc_eval_as_user "echo $(enquote "peer = \"$(cat peer_a.wg.pk)\"") >> peer_b.rp.config"
exc_as_user "${BINS}"/rosenpass gen-keys peer_a.rp.config
exc_as_user "${BINS}"/rosenpass gen-keys peer_b.rp.config
cleanup ip l del dev rpPskBrkTestA
cleanup ip l del dev rpPskBrkTestB
exc ip l add dev rpPskBrkTestA type wireguard
exc ip l add dev rpPskBrkTestB type wireguard
exc wg set rpPskBrkTestA \
listen-port 46125 \
private-key peer_a.wg.sk \
peer "$(cat peer_b.wg.pk)" \
endpoint 'localhost:46126' \
preshared-key peer_a_invalid.psk \
allowed-ips fe80::2/64
exc wg set rpPskBrkTestB \
listen-port 46126 \
private-key peer_b.wg.sk \
peer "$(cat peer_a.wg.pk)" \
endpoint 'localhost:46125' \
preshared-key peer_b_invalid.psk \
allowed-ips fe80::1/64
exc ip l set rpPskBrkTestA up
exc ip l set rpPskBrkTestB up
exc ip a add fe80::1/64 dev rpPskBrkTestA
exc ip a add fe80::2/64 dev rpPskBrkTestB
fork_eval_as_user "\
RUST_LOG='info' \
PATH=$(enquote "${REPO}/target/debug:${PATH}") \
$(enquote "${BINS}/rosenpass") --psk-broker-spawn \
exchange-config peer_a.rp.config"
fork_eval_as_user "\
RUST_LOG='info' \
PATH=$(enquote "${REPO}/target/debug:${PATH}") \
$(enquote "${BINS}/rosenpass-wireguard-broker-socket-handler") \
--listen-path broker.sock"
fork_eval_as_user "\
RUST_LOG='info' \
PATH=$(enquote "$PWD/target/debug:${PATH}") \
$(enquote "${BINS}/rosenpass") --psk-broker-path broker.sock \
exchange-config peer_b.rp.config"
exc_as_user ping -c 2 -w 10 fe80::1%rpPskBrkTestA
exc_as_user ping -c 2 -w 10 fe80::2%rpPskBrkTestB
exc_as_user ping -c 2 -w 10 fe80::2%rpPskBrkTestA
exc_as_user ping -c 2 -w 10 fe80::1%rpPskBrkTestB
SUCCESS=1
}
main "$@"

View File

@@ -1,14 +1,14 @@
from pathlib import Path
from subprocess import run
import os
config = dict(
peer_counts=[1, 5, 10, 50, 100, 500],
peer_count_max=100,
ate_ip="192.168.2.1",
dut_ip="192.168.2.4",
ate_ip="127.0.0.1",
dut_ip="127.0.0.1",
dut_port=9999,
path_to_rosenpass_bin="/Users/user/src/rosenppass/rosenpass/target/debug/rosenpass",
path_to_rosenpass_bin=os.getcwd() + "/target/release/rosenpass",
)
print(config)

View File

@@ -14,3 +14,7 @@ rosenpass-cipher-traits = { workspace = true }
rosenpass-util = { workspace = true }
oqs-sys = { workspace = true }
paste = { workspace = true }
[dev-dependencies]
rosenpass-secret-memory = { workspace = true }
rosenpass-constant-time = { workspace = true }

View File

@@ -1,9 +1,42 @@
//! Generic helpers for declaring bindings to liboqs kems
/// Generate bindings to a liboqs-provided KEM
macro_rules! oqs_kem {
($name:ident) => { ::paste::paste!{
#[doc = "Bindings for ::oqs_sys::kem::" [<"OQS_KEM" _ $name:snake>] "_*"]
mod [< $name:snake >] {
use rosenpass_cipher_traits::Kem;
use rosenpass_util::result::Guaranteed;
#[doc = "Bindings for ::oqs_sys::kem::" [<"OQS_KEM" _ $name:snake>] "_*"]
#[doc = ""]
#[doc = "# Examples"]
#[doc = ""]
#[doc = "```rust"]
#[doc = "use std::borrow::{Borrow, BorrowMut};"]
#[doc = "use rosenpass_cipher_traits::Kem;"]
#[doc = "use rosenpass_oqs::" $name:camel " as MyKem;"]
#[doc = "use rosenpass_secret_memory::{Secret, Public};"]
#[doc = ""]
#[doc = "rosenpass_secret_memory::secret_policy_try_use_memfd_secrets();"]
#[doc = ""]
#[doc = "// Recipient generates secret key, transfers pk to sender"]
#[doc = "let mut sk = Secret::<{ MyKem::SK_LEN }>::zero();"]
#[doc = "let mut pk = Public::<{ MyKem::PK_LEN }>::zero();"]
#[doc = "MyKem::keygen(sk.secret_mut(), pk.borrow_mut());"]
#[doc = ""]
#[doc = "// Sender generates ciphertext and local shared key, sends ciphertext to recipient"]
#[doc = "let mut shk_enc = Secret::<{ MyKem::SHK_LEN }>::zero();"]
#[doc = "let mut ct = Public::<{ MyKem::CT_LEN }>::zero();"]
#[doc = "MyKem::encaps(shk_enc.secret_mut(), ct.borrow_mut(), pk.borrow());"]
#[doc = ""]
#[doc = "// Recipient decapsulates ciphertext"]
#[doc = "let mut shk_dec = Secret::<{ MyKem::SHK_LEN }>::zero();"]
#[doc = "MyKem::decaps(shk_dec.secret_mut(), sk.secret(), ct.borrow());"]
#[doc = ""]
#[doc = "// Both parties end up with the same shared key"]
#[doc = "assert!(rosenpass_constant_time::compare(shk_enc.secret_mut(), shk_dec.secret_mut()) == 0);"]
#[doc = "```"]
pub enum [< $name:camel >] {}
/// # Panic & Safety

View File

@@ -1,3 +1,8 @@
#![warn(missing_docs)]
#![warn(clippy::missing_docs_in_private_items)]
//! Bindings for liboqs used in Rosenpass
/// Call into a libOQS function
macro_rules! oqs_call {
($name:path, $($args:expr),*) => {{
use oqs_sys::common::OQS_STATUS::*;

overlay.nix Normal file
View File

@@ -0,0 +1,39 @@
final: prev: {
#
### Actual rosenpass software ###
#
rosenpass = final.callPackage ./pkgs/rosenpass.nix { };
rosenpass-oci-image = final.callPackage ./pkgs/rosenpass-oci-image.nix { };
rp = final.callPackage ./pkgs/rosenpass.nix { package = "rp"; };
release-package = final.callPackage ./pkgs/release-package.nix { };
#
### Appendix ###
#
proverif-patched = prev.proverif.overrideAttrs (old: {
postInstall = ''
install -D -t $out/lib cryptoverif.pvl
'';
});
proof-proverif = final.stdenv.mkDerivation {
name = "rosenpass-proverif-proof";
version = "unstable";
src = final.lib.sources.sourceByRegex ./. [
"analyze.sh"
"marzipan(/marzipan.awk)?"
"analysis(/.*)?"
];
nativeBuildInputs = [ final.proverif final.graphviz ];
CRYPTOVERIF_LIB = final.proverif-patched + "/lib/cryptoverif.pvl";
installPhase = ''
mkdir -p $out
bash analyze.sh -color -html $out
'';
};
whitepaper = final.callPackage ./pkgs/whitepaper.nix { };
}

View File

@@ -2,8 +2,8 @@
template: rosenpass
title: Rosenpass
author:
- Karolin Varner = Independent Researcher
- Benjamin Lipp = Max Planck Institute for Security and Privacy (MPI-SP)
- Karolin Varner = Rosenpass e.V., Max Planck Institute for Security and Privacy (MPI-SP)
- Benjamin Lipp = Rosenpass e.V., Max Planck Institute for Security and Privacy (MPI-SP)
- Wanja Zaeske
- Lisa Schmidt = {Scientific Illustrator \\url{mullana.de}}
- Prabhpreet Dua
@@ -383,9 +383,18 @@ fn load_biscuit(nct) {
"biscuit additional data",
spkr, sidi, sidr);
let pt : Biscuit = XAEAD::dec(k, n, ct, ad);
// Find the peer and apply retransmission protection
lookup_peer(pt.peerid);
assert(pt.biscuit_no <= peer.biscuit_used);
// In December 2024, the InitConf retransmission mechanism was redesigned
// in a backwards-compatible way. See the changelog.
//
// -- 2024-11-30, Karolin Varner
if (protocol_version!(< "0.3.0")) {
// Ensure that the biscuit is used only once
assert(pt.biscuit_no <= peer.biscuit_used);
}
// Restore the chaining key
ck ← pt.ck;
@@ -501,7 +510,7 @@ LAST_UNDER_LOAD_WINDOW = 1 //seconds
The initiator deals with packet loss by storing the messages it sends to the responder and retransmitting them at randomized, exponentially increasing intervals until it gets a response. Receiving RespHello terminates retransmission of InitHello. A Data or EmptyData message serves as acknowledgement of receiving InitConf and terminates its retransmission.
The responder does not need to do anything special to handle RespHello retransmission if the RespHello package is lost, the initiator retransmits InitHello and the responder can generate another RespHello package from that. InitConf retransmission needs to be handled specifically in the responder code because accepting an InitConf retransmission would reset the live session including the nonce counter, which would cause nonce reuse. Implementations must detect the case that `biscuit_no = biscuit_used` in ICR5, skip execution of ICR6 and ICR7, and just transmit another EmptyData package to confirm that the initiator can stop transmitting InitConf.
The responder uses a less complex form of the same mechanism: the responder never retransmits RespHello; instead, it generates a new RespHello message if InitHello is retransmitted. Responder confirmation messages for completed handshakes (EmptyData) are retransmitted by storing the most recent InitConf messages (or their hashes) and caching the associated EmptyData messages. Through this cache, InitConf retransmission is detected and the associated EmptyData message is retransmitted.
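For illustration only (this sketch is not part of the specification, and its constants are placeholders), the initiator's randomized, exponentially increasing retransmission schedule can be pictured as follows:

```rust
use rand::Rng;
use std::time::Duration;

/// Illustrative only: exponentially growing delay with random jitter,
/// capped at an upper bound. The concrete timing constants used by
/// Rosenpass differ.
fn retransmission_delay(attempt: u32) -> Duration {
    let base_ms: u64 = 500; // placeholder base interval
    let cap_ms: u64 = 10_000; // placeholder upper bound
    let exp_ms = base_ms.saturating_mul(1u64 << attempt.min(10)).min(cap_ms);
    let jitter_ms = rand::thread_rng().gen_range(0..=exp_ms / 2);
    Duration::from_millis(exp_ms + jitter_ms)
}
```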
### Interaction with cookie reply system
@@ -515,6 +524,76 @@ When the responder is under load and it receives an InitConf message, the messag
# Changelog
### 0.3.x
#### 2024-10-30 InitConf retransmission updates
\vspace{0.5em}
Author: Karolin Varner
Issue: [#331](https://github.com/rosenpass/rosenpass/issues/331)
PR: [#513](https://github.com/rosenpass/rosenpass/pull/513)
\vspace{0.5em}
We redesign the InitConf retransmission mechanism to use a hash table. This avoids the need for the InitConf handling code to account for InitConf retransmission specifically and moves the retransmission logic into less-sensitive code.
Previously, we would specifically account for InitConf retransmission in the InitConf handling code by checking the biscuit number: if the biscuit number was higher than any previously seen biscuit number, then this had to be a new key exchange being completed; if the biscuit number was exactly the highest seen biscuit number, then the InitConf message was interpreted as an InitConf retransmission. In this case, an entirely new EmptyData (responder confirmation) message was generated as confirmation that InitConf had been received and that the initiator could now cease opportunistic retransmission of InitConf.
This mechanism was a bit brittle, even leading to a very minor but still relevant security issue, necessitating the release of Rosenpass maintenance version 0.2.2 with a [fix for the problem](https://github.com/rosenpass/rosenpass/pull/329). We had processed the InitConf message, correctly identifying that InitConf was a retransmission, but we failed to pass this information on to the rest of the code base, leading to double emission of the same "hey, we have a new cryptographic session key" event if the `outfile` option was used to integrate Rosenpass into some external application. If this event was used anywhere to reset a nonce, then this could have led to nonce misuse, although this is not an issue for the use with WireGuard.
By removing all retransmission handling code from the cryptographic protocol, we are taking structural measures to exclude the possibility of similar issues.
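As a rough illustration of the new mechanism (a sketch with hypothetical types, not the actual Rosenpass implementation), the responder-side cache can be pictured like this:

```rust
use std::collections::HashMap;

/// Hash-keyed retransmission cache: the responder remembers a hash of each
/// recently processed InitConf message together with the EmptyData
/// confirmation it produced for it.
struct InitConfCache {
    cache: HashMap<[u8; 32], Vec<u8>>, // hash(InitConf) -> cached EmptyData
}

impl InitConfCache {
    /// Returns the confirmation message and whether this was a retransmission.
    fn respond(
        &mut self,
        init_conf_hash: [u8; 32],
        process_init_conf: impl FnOnce() -> Vec<u8>,
    ) -> (Vec<u8>, bool) {
        if let Some(cached) = self.cache.get(&init_conf_hash) {
            // Retransmission: resend the cached EmptyData without touching
            // the live session or the nonce counter.
            return (cached.clone(), true);
        }
        // New handshake completion: run the normal InitConf processing and
        // cache the resulting EmptyData for future retransmissions.
        let msg = process_init_conf();
        self.cache.insert(init_conf_hash, msg.clone());
        (msg, false)
    }
}
```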
- In section "Dealing With Package Loss" we replace
\begin{quote}
The responder does not need to do anything special to handle RespHello retransmission if the RespHello package is lost, the initiator retransmits InitHello and the responder can generate another RespHello package from that. InitConf retransmission needs to be handled specifically in the responder code because accepting an InitConf retransmission would reset the live session including the nonce counter, which would cause nonce reuse. Implementations must detect the case that `biscuit_no = biscuit_used` in ICR5, skip execution of ICR6 and ICR7, and just transmit another EmptyData package to confirm that the initiator can stop transmitting InitConf.
\end{quote}
by
\begin{quote}
The responder uses a less complex form of the same mechanism: the responder never retransmits RespHello; instead, it generates a new RespHello message if InitHello is retransmitted. Responder confirmation messages for completed handshakes (EmptyData) are retransmitted by storing the most recent InitConf messages (or their hashes) and caching the associated EmptyData messages. Through this cache, InitConf retransmission is detected and the associated EmptyData message is retransmitted.
\end{quote}
- In function `load_biscuit` we replace
``` {=tex}
\begin{quote}
\begin{minted}{pseudorust}
assert(pt.biscuit_no <= peer.biscuit_used);
\end{minted}
\end{quote}
```
by
``` {=tex}
\begin{quote}
\begin{minted}{pseudorust}
// In December 2024, the InitConf retransmission mechanism was redesigned
// in a backwards-compatible way. See the changelog.
//
// -- 2024-11-30, Karolin Varner
if (protocol_version!(< "0.3.0")) {
// Ensure that the biscuit is used only once
assert(pt.biscuit_no <= peer.biscuit_used);
}
\end{minted}
\end{quote}
```
#### 2024-04-16 Denial of Service Mitigation
\vspace{0.5em}
Author: Prabhpreet Dua
Issue: [#137](https://github.com/rosenpass/rosenpass/issues/137)
PR: [#142](https://github.com/rosenpass/rosenpass/pull/142)
\vspace{0.5em}
- Added denial of service mitigation using the WireGuard cookie mechanism
- Added section "Denial of Service Mitigation and Cookies" and modified "Dealing with Packet Loss" for the DoS cookie mechanism
\printbibliography

pkgs/release-package.nix Normal file
View File

@@ -0,0 +1,27 @@
{ lib, stdenvNoCC, runCommandNoCC, pkgsStatic, rosenpass, rosenpass-oci-image, rp } @ args:
let
version = rosenpass.version;
# select static packages on Linux, default packages otherwise
package =
if stdenvNoCC.hostPlatform.isLinux then
pkgsStatic.rosenpass
else args.rosenpass;
rp =
if stdenvNoCC.hostPlatform.isLinux then
pkgsStatic.rp
else args.rp;
oci-image =
if stdenvNoCC.hostPlatform.isLinux then
pkgsStatic.rosenpass-oci-image
else args.rosenpass-oci-image;
in
runCommandNoCC "lace-result" { } ''
mkdir {bin,$out}
tar -cvf $out/rosenpass-${stdenvNoCC.hostPlatform.system}-${version}.tar \
-C ${package} bin/rosenpass \
-C ${rp} bin/rp
cp ${oci-image} \
$out/rosenpass-oci-image-${stdenvNoCC.hostPlatform.system}-${version}.tar.gz
''

View File

@@ -0,0 +1,11 @@
{ dockerTools, buildEnv, rosenpass }:
dockerTools.buildImage {
name = rosenpass.name + "-oci";
copyToRoot = buildEnv {
name = "image-root";
paths = [ rosenpass ];
pathsToLink = [ "/bin" ];
};
config.Cmd = [ "/bin/rosenpass" ];
}

pkgs/rosenpass.nix Normal file
View File

@@ -0,0 +1,78 @@
{ lib, stdenv, rustPlatform, cmake, mandoc, removeReferencesTo, bash, package ? "rosenpass" }:
let
# whether we want to build a statically linked binary
isStatic = stdenv.targetPlatform.isStatic;
scoped = (scope: scope.result);
# source files relevant for rust
src = scoped rec {
# File suffixes to include
extensions = [
"lock"
"rs"
"toml"
];
# Files to explicitly include
files = [
"to/README.md"
];
src = ../.;
filter = (path: type: scoped rec {
inherit (lib) any id removePrefix hasSuffix;
anyof = (any id);
basename = baseNameOf (toString path);
relative = removePrefix (toString src + "/") (toString path);
result = anyof [
(type == "directory")
(any (ext: hasSuffix ".${ext}" basename) extensions)
(any (file: file == relative) files)
];
});
result = lib.sources.cleanSourceWith { inherit src filter; };
};
# parsed Cargo.toml
cargoToml = builtins.fromTOML (builtins.readFile (src + "/rosenpass/Cargo.toml"));
in
rustPlatform.buildRustPackage {
name = cargoToml.package.name;
version = cargoToml.package.version;
inherit src;
cargoBuildOptions = [ "--package" package ];
cargoTestOptions = [ "--package" package ];
doCheck = true;
cargoLock = {
lockFile = src + "/Cargo.lock";
outputHashes = {
"memsec-0.6.3" = "sha256-4ri+IEqLd77cLcul3lZrmpDKj4cwuYJ8oPRAiQNGeLw=";
"uds-0.4.2" = "sha256-qlxr/iJt2AV4WryePIvqm/8/MK/iqtzegztNliR93W8=";
};
};
nativeBuildInputs = [
stdenv.cc
cmake # for oqs build in the oqs-sys crate
mandoc # for the built-in manual
removeReferencesTo
rustPlatform.bindgenHook # for C-bindings in the crypto libs
];
buildInputs = [ bash ];
hardeningDisable = lib.optional isStatic "fortify";
meta = {
inherit (cargoToml.package) description homepage;
license = with lib.licenses; [ mit asl20 ];
maintainers = [ lib.maintainers.wucke13 ];
platforms = lib.platforms.all;
};
}

pkgs/whitepaper.nix Normal file
View File

@@ -0,0 +1,29 @@
{ stdenvNoCC, texlive, ncurses, python3Packages, which }:
let
customTexLiveSetup = (texlive.combine {
inherit (texlive) acmart amsfonts biber biblatex biblatex-software
biblatex-trad ccicons csquotes csvsimple doclicense eso-pic fancyvrb
fontspec gitinfo2 gobble ifmtarg koma-script latexmk lm lualatex-math
markdown mathtools minted noto nunito paralist pgf scheme-basic soul
unicode-math upquote xifthen xkeyval xurl;
});
in
stdenvNoCC.mkDerivation {
name = "whitepaper";
src = ../papers;
nativeBuildInputs = [
ncurses # tput
python3Packages.pygments
customTexLiveSetup # custom tex live scheme
which
];
buildPhase = ''
export HOME=$(mktemp -d)
latexmk -r tex/CI.rc
'';
installPhase = ''
mkdir -p $out
mv *.pdf readme.md $out/
'';
}

View File

@@ -66,6 +66,8 @@ A wrapper script provides instant feedback about which queries execute as expect
# Getting Rosenpass
Documentation and installation guides can be found at the [Rosenpass website](https://rosenpass.eu/docs).
Rosenpass is packaged for more and more distributions, maybe also for the distribution of your choice?
[![Packaging status](https://repology.org/badge/vertical-allrepos/rosenpass.svg)](https://repology.org/project/rosenpass/versions)

View File

@@ -1,6 +1,6 @@
[package]
name = "rosenpass"
version = "0.2.1"
version = "0.3.0-dev"
authors = ["Karolin Varner <karo@cupdev.net>", "wucke13 <wucke13@gmail.com>"]
edition = "2021"
license = "MIT OR Apache-2.0"
@@ -13,6 +13,19 @@ readme = "readme.md"
name = "rosenpass"
path = "src/main.rs"
[[bin]]
name = "rosenpass-gen-ipc-msg-types"
path = "src/bin/gen-ipc-msg-types.rs"
required-features = ["experiment_api", "internal_bin_gen_ipc_msg_types"]
[[test]]
name = "api-integration-tests"
required-features = ["experiment_api", "internal_testing"]
[[test]]
name = "api-integration-tests-api-setup"
required-features = ["experiment_api", "internal_testing"]
[[bench]]
name = "handshake"
harness = false
@@ -34,12 +47,21 @@ env_logger = { workspace = true }
serde = { workspace = true }
toml = { workspace = true }
clap = { workspace = true }
clap_complete = { workspace = true }
clap_mangen = { workspace = true }
mio = { workspace = true }
rand = { workspace = true }
zerocopy = { workspace = true }
home = { workspace = true }
derive_builder = {workspace = true}
rosenpass-wireguard-broker = {workspace = true}
derive_builder = { workspace = true }
rosenpass-wireguard-broker = { workspace = true }
zeroize = { workspace = true }
hex-literal = { workspace = true, optional = true }
hex = { workspace = true, optional = true }
heck = { workspace = true, optional = true }
command-fds = { workspace = true, optional = true }
rustix = { workspace = true, optional = true }
uds = { workspace = true, optional = true, features = ["mio_1xx"] }
[build-dependencies]
anyhow = { workspace = true }
@@ -48,9 +70,22 @@ anyhow = { workspace = true }
criterion = { workspace = true }
test_bin = { workspace = true }
stacker = { workspace = true }
serial_test = {workspace = true}
procspawn = {workspace = true}
serial_test = { workspace = true }
procspawn = { workspace = true }
tempfile = { workspace = true }
rustix = { workspace = true }
[features]
enable_broker_api = ["rosenpass-wireguard-broker/enable_broker_api"]
enable_memfd_alloc = []
default = []
experiment_memfd_secret = ["rosenpass-wireguard-broker/experiment_memfd_secret"]
experiment_libcrux = ["rosenpass-ciphers/experiment_libcrux"]
experiment_api = [
"hex-literal",
"uds",
"command-fds",
"rustix",
"rosenpass-util/experiment_file_descriptor_passing",
"rosenpass-wireguard-broker/experiment_api",
]
internal_testing = []
internal_bin_gen_ipc_msg_types = ["hex", "heck"]

View File

@@ -1,5 +1,6 @@
use anyhow::Result;
use rosenpass::protocol::{CryptoServer, HandleMsgResult, MsgBuf, PeerPtr, SPk, SSk, SymKey};
use std::ops::DerefMut;
use rosenpass_cipher_traits::Kem;
use rosenpass_ciphers::kem::StaticKem;
@@ -40,7 +41,7 @@ fn hs(ini: &mut CryptoServer, res: &mut CryptoServer) -> Result<()> {
fn keygen() -> Result<(SSk, SPk)> {
let (mut sk, mut pk) = (SSk::zero(), SPk::zero());
StaticKem::keygen(sk.secret_mut(), pk.secret_mut())?;
StaticKem::keygen(sk.secret_mut(), pk.deref_mut())?;
Ok((sk, pk))
}

View File

@@ -1,52 +0,0 @@
use anyhow::bail;
use anyhow::Result;
use std::env;
use std::fs::File;
use std::io::Write;
use std::path::PathBuf;
use std::process::Command;
/// Invokes a troff compiler to compile a manual page
fn render_man(compiler: &str, man: &str) -> Result<String> {
let out = Command::new(compiler).args(["-Tascii", man]).output()?;
if !out.status.success() {
bail!("{} returned an error", compiler);
}
Ok(String::from_utf8(out.stdout)?)
}
/// Generates the manual page
fn generate_man() -> String {
// This function is purposely stupid and redundant
let man = render_man("mandoc", "./doc/rosenpass.1");
if let Ok(man) = man {
return man;
}
let man = render_man("groff", "./doc/rosenpass.1");
if let Ok(man) = man {
return man;
}
"Cannot render manual page. Please visit https://rosenpass.eu/docs/manuals/\n".into()
}
fn man() {
let out_dir = PathBuf::from(env::var("OUT_DIR").unwrap());
let man = generate_man();
let path = out_dir.join("rosenpass.1.ascii");
let mut file = File::create(&path).unwrap();
file.write_all(man.as_bytes()).unwrap();
println!("cargo:rustc-env=ROSENPASS_MAN={}", path.display());
}
fn main() {
// For now, rerun the build script every time, as the build script
// is not very expensive right now.
println!("cargo:rerun-if-changed=./");
man();
}

View File

@@ -0,0 +1,341 @@
// Note: This is business logic; tested through the integration tests in
// rosenpass/tests/
use std::{borrow::BorrowMut, collections::VecDeque, os::fd::OwnedFd};
use anyhow::Context;
use rosenpass_to::{ops::copy_slice, To};
use rosenpass_util::{
fd::FdIo,
functional::{run, ApplyExt},
io::ReadExt,
mem::DiscardResultExt,
mio::UnixStreamExt,
result::OkExt,
};
use rosenpass_wireguard_broker::brokers::mio_client::MioBrokerClient;
use crate::{
api::{add_listen_socket_response_status, add_psk_broker_response_status},
app_server::AppServer,
protocol::BuildCryptoServer,
};
use super::{supply_keypair_response_status, Server as ApiServer};
/// Stores the state of the API handler.
///
/// This is used in the context [ApiHandlerContext]; [ApiHandlerContext] exposes both
/// the [AppServer] and the API handler state.
///
/// [ApiHandlerContext] is what actually contains the API handler functions.
#[derive(Debug)]
pub struct ApiHandler {
_dummy: (),
}
impl ApiHandler {
/// Construct an [Self]
#[allow(clippy::new_without_default)]
pub fn new() -> Self {
Self { _dummy: () }
}
}
/// The implementation of the API requires access both to its own state [ApiHandler] and to the
/// [AppServer] the API is supposed to operate on.
///
/// This trait provides both; it implements a pattern that allows multiple, **potentially
/// overlapping** mutable references to be passed to the API handler functions.
///
/// This relatively complex scheme is chosen to appease the borrow checker: We want flexibility
/// with regard to where the [ApiHandler] is stored and we need a mutable reference to
/// [ApiHandler]. We also need a mutable reference to [AppServer]. Achieving this by using the
/// direct method would be impossible because the [ApiHandler] is actually stored somewhere inside
/// [AppServer]. The borrow checker does not allow this.
///
/// What we have instead is, in practice, a reference to [AppServer] and a function (as part of
/// the trait) that extracts an [ApiHandler] reference from [AppServer], which the borrow checker
/// does allow. A benefit of using a trait here is that we could, if desired, also store the
/// [ApiHandler] outside [AppServer]; where the handler lives depends entirely on the trait
/// implementation.
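///
/// A minimal, self-contained sketch of the same pattern, using hypothetical
/// stand-in types rather than the real [AppServer]/[ApiHandler]:
///
/// ```
/// struct Handler { counter: u64 }
/// struct Server { handler: Handler }
///
/// trait HandlerContext {
///     fn handler_mut(&mut self) -> &mut Handler;
/// }
///
/// impl HandlerContext for Server {
///     fn handler_mut(&mut self) -> &mut Handler { &mut self.handler }
/// }
///
/// fn bump(ctx: &mut impl HandlerContext) {
///     // The accessor borrows `ctx` only for the duration of this statement,
///     // so further accessors can be called afterwards without conflicts.
///     ctx.handler_mut().counter += 1;
/// }
/// ```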
pub trait ApiHandlerContext {
/// Retrieve the [ApiHandler]
fn api_handler(&self) -> &ApiHandler;
/// Retrieve the [AppServer]
fn app_server(&self) -> &AppServer;
/// Retrieve the [ApiHandler]
fn api_handler_mut(&mut self) -> &mut ApiHandler;
/// Retrieve the [AppServer]
fn app_server_mut(&mut self) -> &mut AppServer;
}
/// This is the Error raised by [ApiServer::supply_keypair]; it contains both
/// the underlying error message as well as the status value
/// returned by the API.
///
/// [ApiServer::supply_keypair] generally constructs a [Self] by using one of the
/// utility functions from [SupplyKeypairErrorExt].
#[derive(thiserror::Error, Debug)]
#[error("Error in SupplyKeypair")]
struct SupplyKeypairError {
/// The status code communicated via the Rosenpass API
status: u128,
/// The underlying error that caused the Rosenpass API level Error
#[source]
cause: anyhow::Error,
}
trait SupplyKeypairErrorExt<T> {
/// Imbue any Error (that can be represented as [anyhow::Error]) with
/// an arbitrary error code
fn e_custom(self, status: u128) -> Result<T, SupplyKeypairError>;
/// Imbue any Error (that can be represented as [anyhow::Error]) with
/// the [supply_keypair_response_status::INTERNAL_ERROR] error code
fn einternal(self) -> Result<T, SupplyKeypairError>;
/// Imbue any Error (that can be represented as [anyhow::Error]) with
/// the [supply_keypair_response_status::KEYPAIR_ALREADY_SUPPLIED] error code
fn ealready_supplied(self) -> Result<T, SupplyKeypairError>;
/// Imbue any Error (that can be represented as [anyhow::Error]) with
/// the [supply_keypair_response_status::INVALID_REQUEST] error code
fn einvalid_req(self) -> Result<T, SupplyKeypairError>;
}
impl<T, E: Into<anyhow::Error>> SupplyKeypairErrorExt<T> for Result<T, E> {
fn e_custom(self, status: u128) -> Result<T, SupplyKeypairError> {
self.map_err(|e| SupplyKeypairError {
status,
cause: e.into(),
})
}
fn einternal(self) -> Result<T, SupplyKeypairError> {
self.e_custom(supply_keypair_response_status::INTERNAL_ERROR)
}
fn ealready_supplied(self) -> Result<T, SupplyKeypairError> {
self.e_custom(supply_keypair_response_status::KEYPAIR_ALREADY_SUPPLIED)
}
fn einvalid_req(self) -> Result<T, SupplyKeypairError> {
self.e_custom(supply_keypair_response_status::INVALID_REQUEST)
}
}
impl<T> ApiServer for T
where
T: ?Sized + ApiHandlerContext,
{
fn ping(
&mut self,
req: &super::PingRequest,
_req_fds: &mut VecDeque<OwnedFd>,
res: &mut super::PingResponse,
) -> anyhow::Result<()> {
let (req, res) = (&req.payload, &mut res.payload);
copy_slice(&req.echo).to(&mut res.echo);
Ok(())
}
fn supply_keypair(
&mut self,
req: &super::SupplyKeypairRequest,
req_fds: &mut VecDeque<OwnedFd>,
res: &mut super::SupplyKeypairResponse,
) -> anyhow::Result<()> {
let outcome: Result<(), SupplyKeypairError> = run(|| {
// Acquire the file descriptors
let mut sk_io = FdIo(
req_fds
.front()
.context("First file descriptor, secret key, missing.")
.einvalid_req()?,
);
let mut pk_io = FdIo(
req_fds
.get(1)
.context("Second file descriptor, public key, missing.")
.einvalid_req()?,
);
// Actually read the secrets
let mut sk = crate::protocol::SSk::zero();
sk_io.read_exact_til_end(sk.secret_mut()).einvalid_req()?;
let mut pk = crate::protocol::SPk::zero();
pk_io.read_exact_til_end(pk.borrow_mut()).einvalid_req()?;
// Retrieve the construction site
let construction_site = self.app_server_mut().crypto_site.borrow_mut();
// Retrieve the builder
use rosenpass_util::build::ConstructionSite as C;
let maybe_builder = match construction_site {
C::Builder(builder) => Some(builder),
C::Product(_) => None,
C::Void => {
return Err(anyhow::Error::msg("CryptoServer construction side is void"))
.einternal();
}
};
// Retrieve a reference to the keypair
let Some(BuildCryptoServer {
ref mut keypair, ..
}) = maybe_builder
else {
return Err(anyhow::Error::msg("CryptoServer already built")).ealready_supplied();
};
// Supply the keypair to the CryptoServer
keypair
.insert(crate::protocol::Keypair { sk, pk })
.discard_result();
// Actually construct the CryptoServer
construction_site
.erect()
.map_err(|e| anyhow::Error::msg(format!("Error erecting the CryptoServer {e:?}")))
.einternal()?;
Ok(())
});
// Handle errors
use supply_keypair_response_status as status;
let status = match outcome {
Ok(()) => status::OK,
Err(e) => {
let lvl = match e.status {
status::INTERNAL_ERROR => log::Level::Warn,
_ => log::Level::Debug,
};
log::log!(
lvl,
"Error while processing API Request.\n Request: {:?}\n Error: {:?}",
req,
e.cause
);
if e.status == status::INTERNAL_ERROR {
return Err(e.cause);
}
e.status
}
};
res.payload.status = status;
Ok(())
}
fn add_listen_socket(
&mut self,
_req: &super::boilerplate::AddListenSocketRequest,
req_fds: &mut VecDeque<OwnedFd>,
res: &mut super::boilerplate::AddListenSocketResponse,
) -> anyhow::Result<()> {
// Retrieve file descriptor
let sock_res = run(|| -> anyhow::Result<mio::net::UdpSocket> {
let sock = req_fds
.pop_front()
.context("Invalid request socket missing.")?;
// TODO: We need this outside of Linux as well
#[cfg(target_os = "linux")]
rosenpass_util::fd::GetSocketProtocol::demand_udp_socket(&sock)?;
let sock = std::net::UdpSocket::from(sock);
sock.set_nonblocking(true)?;
mio::net::UdpSocket::from_std(sock).ok()
});
let sock = match sock_res {
Ok(sock) => sock,
Err(e) => {
log::debug!("Error processing AddListenSocket API request: {e:?}");
res.payload.status = add_listen_socket_response_status::INVALID_REQUEST;
return Ok(());
}
};
// Register socket
let reg_result = self.app_server_mut().register_listen_socket(sock);
if let Err(internal_error) = reg_result {
log::warn!("Internal error processing AddListenSocket API request: {internal_error:?}");
res.payload.status = add_listen_socket_response_status::INTERNAL_ERROR;
return Ok(());
};
res.payload.status = add_listen_socket_response_status::OK;
Ok(())
}
fn add_psk_broker(
&mut self,
_req: &super::boilerplate::AddPskBrokerRequest,
req_fds: &mut VecDeque<OwnedFd>,
res: &mut super::boilerplate::AddPskBrokerResponse,
) -> anyhow::Result<()> {
// Retrieve file descriptor
let sock_res = run(|| {
let sock = req_fds
.pop_front()
.context("Invalid request socket missing.")?;
mio::net::UnixStream::from_fd(sock)
});
// Handle errors
let sock = match sock_res {
Ok(sock) => sock,
Err(e) => {
log::debug!(
"Request found to be invalid while processing AddPskBroker API request: {e:?}"
);
res.payload.status = add_psk_broker_response_status::INVALID_REQUEST;
return Ok(());
}
};
// Register Socket
let client = Box::new(MioBrokerClient::new(sock));
// Workaround: The broker code is currently impressively overcomplicated. Brokers are
// stored in a hash map but the hash map key used is just a counter so a vector could
// have been used. Broker configuration is abstracted, different peers can have different
// brokers but there is no facility to add multiple brokers in practice. The broker index
// uses a `Public` wrapper without actually holding any cryptographic data. Even the broker
// configuration uses a trait abstraction for no discernible reason and a lot of the code
// introduces pointless, single-field wrapper structs.
// We should use an implement-what-is-actually-needed strategy next time.
// The Broker code needs to be slimmed down, the right direction to go is probably to
// just add event and capability support to the API and use the API to deliver OSK events.
//
// For now, we just replace the latest broker.
let erase_ptr = {
use crate::app_server::BrokerStorePtr;
//
use rosenpass_secret_memory::Public;
use zerocopy::AsBytes;
(self.app_server().brokers.store.len() - 1)
.apply(|x| x as u64)
.apply(|x| Public::from_slice(x.as_bytes()))
.apply(BrokerStorePtr)
};
let register_result = run(|| {
let srv = self.app_server_mut();
srv.unregister_broker(erase_ptr)?;
srv.register_broker(client)
});
if let Err(e) = register_result {
log::warn!("Internal error while processing AddPskBroker API request: {e:?}");
res.payload.status = add_psk_broker_response_status::INTERNAL_ERROR;
return Ok(());
}
res.payload.status = add_psk_broker_response_status::OK;
Ok(())
}
}
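// Illustrative sketch, not part of the original diff: how an API client might prepare
// the single UDP socket file descriptor that the AddListenSocket handler above expects
// at the front of `req_fds`. The fd-passing transport itself (SCM_RIGHTS over the API
// unix socket) is assumed and not shown here.
fn example_prepare_listen_socket_fd() -> std::io::Result<std::os::fd::OwnedFd> {
let sock = std::net::UdpSocket::bind("127.0.0.1:0")?; // kernel-assigned port
sock.set_nonblocking(true)?; // the handler re-asserts non-blocking mode on its side too
Ok(sock.into()) // hand ownership of the descriptor to the caller
}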


@@ -0,0 +1,222 @@
use zerocopy::{ByteSlice, Ref};
use rosenpass_util::zerocopy::{RefMaker, ZerocopySliceExt};
use super::{
PingRequest, PingResponse, RawMsgType, RefMakerRawMsgTypeExt, RequestMsgType, RequestRef,
ResponseMsgType, ResponseRef, SupplyKeypairRequest, SupplyKeypairResponse,
};
pub trait ByteSliceRefExt: ByteSlice {
fn msg_type_maker(self) -> RefMaker<Self, RawMsgType> {
self.zk_ref_maker()
}
fn msg_type(self) -> anyhow::Result<Ref<Self, RawMsgType>> {
self.zk_parse()
}
fn msg_type_from_prefix(self) -> anyhow::Result<Ref<Self, RawMsgType>> {
self.zk_parse_prefix()
}
fn msg_type_from_suffix(self) -> anyhow::Result<Ref<Self, RawMsgType>> {
self.zk_parse_suffix()
}
fn request_msg_type(self) -> anyhow::Result<RequestMsgType> {
self.msg_type_maker().parse_request_msg_type()
}
fn request_msg_type_from_prefix(self) -> anyhow::Result<RequestMsgType> {
self.msg_type_maker()
.from_prefix()?
.parse_request_msg_type()
}
fn request_msg_type_from_suffix(self) -> anyhow::Result<RequestMsgType> {
self.msg_type_maker()
.from_suffix()?
.parse_request_msg_type()
}
fn response_msg_type(self) -> anyhow::Result<ResponseMsgType> {
self.msg_type_maker().parse_response_msg_type()
}
fn response_msg_type_from_prefix(self) -> anyhow::Result<ResponseMsgType> {
self.msg_type_maker()
.from_prefix()?
.parse_response_msg_type()
}
fn response_msg_type_from_suffix(self) -> anyhow::Result<ResponseMsgType> {
self.msg_type_maker()
.from_suffix()?
.parse_response_msg_type()
}
fn parse_request(self) -> anyhow::Result<RequestRef<Self>> {
RequestRef::parse(self)
}
fn parse_request_from_prefix(self) -> anyhow::Result<RequestRef<Self>> {
RequestRef::parse_from_prefix(self)
}
fn parse_request_from_suffix(self) -> anyhow::Result<RequestRef<Self>> {
RequestRef::parse_from_suffix(self)
}
fn parse_response(self) -> anyhow::Result<ResponseRef<Self>> {
ResponseRef::parse(self)
}
fn parse_response_from_prefix(self) -> anyhow::Result<ResponseRef<Self>> {
ResponseRef::parse_from_prefix(self)
}
fn parse_response_from_suffix(self) -> anyhow::Result<ResponseRef<Self>> {
ResponseRef::parse_from_suffix(self)
}
fn ping_request_maker(self) -> RefMaker<Self, PingRequest> {
self.zk_ref_maker()
}
fn ping_request(self) -> anyhow::Result<Ref<Self, PingRequest>> {
self.zk_parse()
}
fn ping_request_from_prefix(self) -> anyhow::Result<Ref<Self, PingRequest>> {
self.zk_parse_prefix()
}
fn ping_request_from_suffix(self) -> anyhow::Result<Ref<Self, PingRequest>> {
self.zk_parse_suffix()
}
fn ping_response_maker(self) -> RefMaker<Self, PingResponse> {
self.zk_ref_maker()
}
fn ping_response(self) -> anyhow::Result<Ref<Self, PingResponse>> {
self.zk_parse()
}
fn ping_response_from_prefix(self) -> anyhow::Result<Ref<Self, PingResponse>> {
self.zk_parse_prefix()
}
fn ping_response_from_suffix(self) -> anyhow::Result<Ref<Self, PingResponse>> {
self.zk_parse_suffix()
}
fn supply_keypair_request(self) -> anyhow::Result<Ref<Self, SupplyKeypairRequest>> {
self.zk_parse()
}
fn supply_keypair_request_from_prefix(self) -> anyhow::Result<Ref<Self, SupplyKeypairRequest>> {
self.zk_parse_prefix()
}
fn supply_keypair_request_from_suffix(self) -> anyhow::Result<Ref<Self, SupplyKeypairRequest>> {
self.zk_parse_suffix()
}
fn supply_keypair_response_maker(self) -> RefMaker<Self, SupplyKeypairResponse> {
self.zk_ref_maker()
}
fn supply_keypair_response(self) -> anyhow::Result<Ref<Self, SupplyKeypairResponse>> {
self.zk_parse()
}
fn supply_keypair_response_from_prefix(
self,
) -> anyhow::Result<Ref<Self, SupplyKeypairResponse>> {
self.zk_parse_prefix()
}
fn supply_keypair_response_from_suffix(
self,
) -> anyhow::Result<Ref<Self, SupplyKeypairResponse>> {
self.zk_parse_suffix()
}
fn add_listen_socket_request(self) -> anyhow::Result<Ref<Self, super::AddListenSocketRequest>> {
self.zk_parse()
}
fn add_listen_socket_request_from_prefix(
self,
) -> anyhow::Result<Ref<Self, super::AddListenSocketRequest>> {
self.zk_parse_prefix()
}
fn add_listen_socket_request_from_suffix(
self,
) -> anyhow::Result<Ref<Self, super::AddListenSocketRequest>> {
self.zk_parse_suffix()
}
fn add_listen_socket_response_maker(self) -> RefMaker<Self, super::AddListenSocketResponse> {
self.zk_ref_maker()
}
fn add_listen_socket_response(
self,
) -> anyhow::Result<Ref<Self, super::AddListenSocketResponse>> {
self.zk_parse()
}
fn add_listen_socket_response_from_prefix(
self,
) -> anyhow::Result<Ref<Self, super::AddListenSocketResponse>> {
self.zk_parse_prefix()
}
fn add_listen_socket_response_from_suffix(
self,
) -> anyhow::Result<Ref<Self, super::AddListenSocketResponse>> {
self.zk_parse_suffix()
}
fn add_psk_broker_request(self) -> anyhow::Result<Ref<Self, super::AddPskBrokerRequest>> {
self.zk_parse()
}
fn add_psk_broker_request_from_prefix(
self,
) -> anyhow::Result<Ref<Self, super::AddPskBrokerRequest>> {
self.zk_parse_prefix()
}
fn add_psk_broker_request_from_suffix(
self,
) -> anyhow::Result<Ref<Self, super::AddPskBrokerRequest>> {
self.zk_parse_suffix()
}
fn add_psk_broker_response_maker(self) -> RefMaker<Self, super::AddPskBrokerResponse> {
self.zk_ref_maker()
}
fn add_psk_broker_response(self) -> anyhow::Result<Ref<Self, super::AddPskBrokerResponse>> {
self.zk_parse()
}
fn add_psk_broker_response_from_prefix(
self,
) -> anyhow::Result<Ref<Self, super::AddPskBrokerResponse>> {
self.zk_parse_prefix()
}
fn add_psk_broker_response_from_suffix(
self,
) -> anyhow::Result<Ref<Self, super::AddPskBrokerResponse>> {
self.zk_parse_suffix()
}
}
impl<B: ByteSlice> ByteSliceRefExt for B {}
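// Illustrative sketch, not part of the original diff: with the blanket impl above, any
// zerocopy ByteSlice can be parsed directly. Assuming `buf` starts with a serialized
// request (message type tag followed by the payload), classification looks like this:
fn example_classify_request(buf: &[u8]) -> anyhow::Result<&'static str> {
Ok(match buf.parse_request_from_prefix()? {
RequestRef::Ping(_) => "ping",
RequestRef::SupplyKeypair(_) => "supply keypair",
RequestRef::AddListenSocket(_) => "add listen socket",
RequestRef::AddPskBroker(_) => "add psk broker",
})
}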


@@ -0,0 +1,29 @@
use zerocopy::{ByteSliceMut, Ref};
use rosenpass_util::zerocopy::RefMaker;
use super::RawMsgType;
pub trait Message {
type Payload;
type MessageClass: Into<RawMsgType>;
const MESSAGE_TYPE: Self::MessageClass;
fn from_payload(payload: Self::Payload) -> Self;
fn init(&mut self);
fn setup<B: ByteSliceMut>(buf: B) -> anyhow::Result<Ref<B, Self>>;
}
pub trait ZerocopyResponseMakerSetupMessageExt<B, T> {
fn setup_msg(self) -> anyhow::Result<Ref<B, T>>;
}
impl<B, T> ZerocopyResponseMakerSetupMessageExt<B, T> for RefMaker<B, T>
where
B: ByteSliceMut,
T: Message,
{
fn setup_msg(self) -> anyhow::Result<Ref<B, T>> {
T::setup(self.into_buf())
}
}
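// Illustrative sketch, not part of the original diff: `setup_msg` zeroizes the target
// buffer and stamps the correct message type, preparing a typed message in place. It
// assumes a buffer of exactly the message size; `from_prefix()`/`from_suffix()` on the
// RefMaker can be used to carve a larger buffer down first.
fn example_setup_ping_response<B: ByteSliceMut>(buf: B) -> anyhow::Result<Ref<B, super::PingResponse>> {
use rosenpass_util::zerocopy::ZerocopySliceExt;
buf.zk_ref_maker().setup_msg()
}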


@@ -0,0 +1,162 @@
use hex_literal::hex;
use rosenpass_util::zerocopy::RefMaker;
use zerocopy::ByteSlice;
use crate::RosenpassError::{self, InvalidApiMessageType};
pub type RawMsgType = u128;
// constants generated by gen-ipc-msg-types:
// hash domain hash of: Rosenpass IPC API -> Rosenpass Protocol Server -> Ping Request
pub const PING_REQUEST: RawMsgType =
RawMsgType::from_le_bytes(hex!("2397 3ecc c441 704d 0b02 ea31 45d3 4999"));
// hash domain hash of: Rosenpass IPC API -> Rosenpass Protocol Server -> Ping Response
pub const PING_RESPONSE: RawMsgType =
RawMsgType::from_le_bytes(hex!("4ec7 f6f0 2bbc ba64 48f1 da14 c7cf 0260"));
// hash domain hash of: Rosenpass IPC API -> Rosenpass Protocol Server -> Supply Keypair Request
const SUPPLY_KEYPAIR_REQUEST: RawMsgType =
RawMsgType::from_le_bytes(hex!("ac91 a5a6 4f4b 21d0 ac7f 9b55 74f7 3529"));
// hash domain hash of: Rosenpass IPC API -> Rosenpass Protocol Server -> Supply Keypair Response
const SUPPLY_KEYPAIR_RESPONSE: RawMsgType =
RawMsgType::from_le_bytes(hex!("f2dc 49bd e261 5f10 40b7 3c16 ec61 edb9"));
// hash domain hash of: Rosenpass IPC API -> Rosenpass Protocol Server -> Add Listen Socket Request
const ADD_LISTEN_SOCKET_REQUEST: RawMsgType =
RawMsgType::from_le_bytes(hex!("3f21 434f 87cc a08c 02c4 61e4 0816 c7da"));
// hash domain hash of: Rosenpass IPC API -> Rosenpass Protocol Server -> Add Listen Socket Response
const ADD_LISTEN_SOCKET_RESPONSE: RawMsgType =
RawMsgType::from_le_bytes(hex!("45d5 0f0d 93f0 6105 98f2 9469 5dfd 5f36"));
// hash domain hash of: Rosenpass IPC API -> Rosenpass Protocol Server -> Add Psk Broker Request
const ADD_PSK_BROKER_REQUEST: RawMsgType =
RawMsgType::from_le_bytes(hex!("d798 b8dc bd61 5cab 8df1 c63d e4eb a2d1"));
// hash domain hash of: Rosenpass IPC API -> Rosenpass Protocol Server -> Add Psk Broker Response
const ADD_PSK_BROKER_RESPONSE: RawMsgType =
RawMsgType::from_le_bytes(hex!("bd25 e418 ffb0 6930 248b 217e 2fae e353"));
pub trait MessageAttributes {
fn message_size(&self) -> usize;
}
#[derive(Hash, PartialEq, Eq, PartialOrd, Ord, Debug, Clone, Copy)]
pub enum RequestMsgType {
Ping,
SupplyKeypair,
AddListenSocket,
AddPskBroker,
}
#[derive(Hash, PartialEq, Eq, PartialOrd, Ord, Debug, Clone, Copy)]
pub enum ResponseMsgType {
Ping,
SupplyKeypair,
AddListenSocket,
AddPskBroker,
}
impl MessageAttributes for RequestMsgType {
fn message_size(&self) -> usize {
match self {
Self::Ping => std::mem::size_of::<super::PingRequest>(),
Self::SupplyKeypair => std::mem::size_of::<super::SupplyKeypairRequest>(),
Self::AddListenSocket => std::mem::size_of::<super::AddListenSocketRequest>(),
Self::AddPskBroker => std::mem::size_of::<super::AddPskBrokerRequest>(),
}
}
}
impl MessageAttributes for ResponseMsgType {
fn message_size(&self) -> usize {
match self {
Self::Ping => std::mem::size_of::<super::PingResponse>(),
Self::SupplyKeypair => std::mem::size_of::<super::SupplyKeypairResponse>(),
Self::AddListenSocket => std::mem::size_of::<super::AddListenSocketResponse>(),
Self::AddPskBroker => std::mem::size_of::<super::AddPskBrokerResponse>(),
}
}
}
impl TryFrom<RawMsgType> for RequestMsgType {
type Error = RosenpassError;
fn try_from(value: RawMsgType) -> Result<Self, Self::Error> {
use RequestMsgType as E;
Ok(match value {
self::PING_REQUEST => E::Ping,
self::SUPPLY_KEYPAIR_REQUEST => E::SupplyKeypair,
self::ADD_LISTEN_SOCKET_REQUEST => E::AddListenSocket,
self::ADD_PSK_BROKER_REQUEST => E::AddPskBroker,
_ => return Err(InvalidApiMessageType(value)),
})
}
}
impl From<RequestMsgType> for RawMsgType {
fn from(val: RequestMsgType) -> Self {
use RequestMsgType as E;
match val {
E::Ping => self::PING_REQUEST,
E::SupplyKeypair => self::SUPPLY_KEYPAIR_REQUEST,
E::AddListenSocket => self::ADD_LISTEN_SOCKET_REQUEST,
E::AddPskBroker => self::ADD_PSK_BROKER_REQUEST,
}
}
}
impl TryFrom<RawMsgType> for ResponseMsgType {
type Error = RosenpassError;
fn try_from(value: RawMsgType) -> Result<Self, Self::Error> {
use ResponseMsgType as E;
Ok(match value {
self::PING_RESPONSE => E::Ping,
self::SUPPLY_KEYPAIR_RESPONSE => E::SupplyKeypair,
self::ADD_LISTEN_SOCKET_RESPONSE => E::AddListenSocket,
self::ADD_PSK_BROKER_RESPONSE => E::AddPskBroker,
_ => return Err(InvalidApiMessageType(value)),
})
}
}
impl From<ResponseMsgType> for RawMsgType {
fn from(val: ResponseMsgType) -> Self {
use ResponseMsgType as E;
match val {
E::Ping => self::PING_RESPONSE,
E::SupplyKeypair => self::SUPPLY_KEYPAIR_RESPONSE,
E::AddListenSocket => self::ADD_LISTEN_SOCKET_RESPONSE,
E::AddPskBroker => self::ADD_PSK_BROKER_RESPONSE,
}
}
}
pub trait RawMsgTypeExt {
fn into_request_msg_type(self) -> Result<RequestMsgType, RosenpassError>;
fn into_response_msg_type(self) -> Result<ResponseMsgType, RosenpassError>;
}
impl RawMsgTypeExt for RawMsgType {
fn into_request_msg_type(self) -> Result<RequestMsgType, RosenpassError> {
self.try_into()
}
fn into_response_msg_type(self) -> Result<ResponseMsgType, RosenpassError> {
self.try_into()
}
}
pub trait RefMakerRawMsgTypeExt {
fn parse_request_msg_type(self) -> anyhow::Result<RequestMsgType>;
fn parse_response_msg_type(self) -> anyhow::Result<ResponseMsgType>;
}
impl<B: ByteSlice> RefMakerRawMsgTypeExt for RefMaker<B, RawMsgType> {
fn parse_request_msg_type(self) -> anyhow::Result<RequestMsgType> {
Ok(self.parse()?.read().try_into()?)
}
fn parse_response_msg_type(self) -> anyhow::Result<ResponseMsgType> {
Ok(self.parse()?.read().try_into()?)
}
}
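// Illustrative sketch, not part of the original diff: on the wire a message type is a
// little-endian u128 hash-domain value; converting it into one of the enums above is
// fallible and rejects unknown values.
fn example_decode_type(raw: [u8; 16]) -> Result<RequestMsgType, RosenpassError> {
RawMsgType::from_le_bytes(raw).into_request_msg_type()
}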


@@ -0,0 +1,17 @@
mod byte_slice_ext;
mod message_trait;
mod message_type;
mod payload;
mod request_ref;
mod request_response;
mod response_ref;
mod server;
pub use byte_slice_ext::*;
pub use message_trait::*;
pub use message_type::*;
pub use payload::*;
pub use request_ref::*;
pub use request_response::*;
pub use response_ref::*;
pub use server::*;


@@ -0,0 +1,353 @@
use rosenpass_util::zerocopy::ZerocopyMutSliceExt;
use zerocopy::{AsBytes, ByteSliceMut, FromBytes, FromZeroes, Ref};
use super::{Message, RawMsgType, RequestMsgType, ResponseMsgType};
/// Size required to fit any message in binary form
pub const MAX_REQUEST_LEN: usize = 2500; // TODO fix this
pub const MAX_RESPONSE_LEN: usize = 2500; // TODO fix this
pub const MAX_REQUEST_FDS: usize = 2;
#[repr(packed)]
#[derive(Debug, Copy, Clone, Hash, AsBytes, FromBytes, FromZeroes, PartialEq, Eq)]
pub struct Envelope<M: AsBytes + FromBytes> {
/// Which message this is
pub msg_type: RawMsgType,
/// The actual payload
pub payload: M,
}
pub type RequestEnvelope<M> = Envelope<M>;
pub type ResponseEnvelope<M> = Envelope<M>;
#[repr(packed)]
#[derive(Debug, Copy, Clone, Hash, AsBytes, FromBytes, FromZeroes, PartialEq, Eq)]
pub struct PingRequestPayload {
/// Randomly generated connection id
pub echo: [u8; 256],
}
pub type PingRequest = RequestEnvelope<PingRequestPayload>;
impl PingRequest {
pub fn new(echo: [u8; 256]) -> Self {
Self::from_payload(PingRequestPayload { echo })
}
}
impl Message for PingRequest {
type Payload = PingRequestPayload;
type MessageClass = RequestMsgType;
const MESSAGE_TYPE: Self::MessageClass = RequestMsgType::Ping;
fn from_payload(payload: Self::Payload) -> Self {
Self {
msg_type: Self::MESSAGE_TYPE.into(),
payload,
}
}
fn setup<B: ByteSliceMut>(buf: B) -> anyhow::Result<Ref<B, Self>> {
let mut r: Ref<B, Self> = buf.zk_zeroized()?;
r.init();
Ok(r)
}
fn init(&mut self) {
self.msg_type = Self::MESSAGE_TYPE.into();
}
}
#[repr(packed)]
#[derive(Debug, Copy, Clone, Hash, AsBytes, FromBytes, FromZeroes, PartialEq, Eq)]
pub struct PingResponsePayload {
/// Randomly generated connection id
pub echo: [u8; 256],
}
pub type PingResponse = ResponseEnvelope<PingResponsePayload>;
impl PingResponse {
pub fn new(echo: [u8; 256]) -> Self {
Self::from_payload(PingResponsePayload { echo })
}
}
impl Message for PingResponse {
type Payload = PingResponsePayload;
type MessageClass = ResponseMsgType;
const MESSAGE_TYPE: Self::MessageClass = ResponseMsgType::Ping;
fn from_payload(payload: Self::Payload) -> Self {
Self {
msg_type: Self::MESSAGE_TYPE.into(),
payload,
}
}
fn setup<B: ByteSliceMut>(buf: B) -> anyhow::Result<Ref<B, Self>> {
let mut r: Ref<B, Self> = buf.zk_zeroized()?;
r.init();
Ok(r)
}
fn init(&mut self) {
self.msg_type = Self::MESSAGE_TYPE.into();
}
}
#[repr(packed)]
#[derive(Debug, Copy, Clone, Hash, AsBytes, FromBytes, FromZeroes, PartialEq, Eq)]
pub struct SupplyKeypairRequestPayload {}
pub type SupplyKeypairRequest = RequestEnvelope<SupplyKeypairRequestPayload>;
impl Default for SupplyKeypairRequest {
fn default() -> Self {
Self::new()
}
}
impl SupplyKeypairRequest {
pub fn new() -> Self {
Self::from_payload(SupplyKeypairRequestPayload {})
}
}
impl Message for SupplyKeypairRequest {
type Payload = SupplyKeypairRequestPayload;
type MessageClass = RequestMsgType;
const MESSAGE_TYPE: Self::MessageClass = RequestMsgType::SupplyKeypair;
fn from_payload(payload: Self::Payload) -> Self {
Self {
msg_type: Self::MESSAGE_TYPE.into(),
payload,
}
}
fn setup<B: ByteSliceMut>(buf: B) -> anyhow::Result<Ref<B, Self>> {
let mut r: Ref<B, Self> = buf.zk_zeroized()?;
r.init();
Ok(r)
}
fn init(&mut self) {
self.msg_type = Self::MESSAGE_TYPE.into();
}
}
pub mod supply_keypair_response_status {
pub const OK: u128 = 0;
pub const KEYPAIR_ALREADY_SUPPLIED: u128 = 1;
// TODO: This is not actually part of the API. Remove.
pub const INTERNAL_ERROR: u128 = 2;
pub const INVALID_REQUEST: u128 = 3;
/// TODO: Deprecated, remove
pub const IO_ERROR: u128 = 4;
}
#[repr(packed)]
#[derive(Debug, Copy, Clone, Hash, AsBytes, FromBytes, FromZeroes, PartialEq, Eq)]
pub struct SupplyKeypairResponsePayload {
pub status: u128,
}
pub type SupplyKeypairResponse = ResponseEnvelope<SupplyKeypairResponsePayload>;
impl SupplyKeypairResponse {
pub fn new(status: u128) -> Self {
Self::from_payload(SupplyKeypairResponsePayload { status })
}
}
impl Message for SupplyKeypairResponse {
type Payload = SupplyKeypairResponsePayload;
type MessageClass = ResponseMsgType;
const MESSAGE_TYPE: Self::MessageClass = ResponseMsgType::SupplyKeypair;
fn from_payload(payload: Self::Payload) -> Self {
Self {
msg_type: Self::MESSAGE_TYPE.into(),
payload,
}
}
fn setup<B: ByteSliceMut>(buf: B) -> anyhow::Result<Ref<B, Self>> {
let mut r: Ref<B, Self> = buf.zk_zeroized()?;
r.init();
Ok(r)
}
fn init(&mut self) {
self.msg_type = Self::MESSAGE_TYPE.into();
}
}
#[repr(packed)]
#[derive(Debug, Copy, Clone, Hash, AsBytes, FromBytes, FromZeroes, PartialEq, Eq)]
pub struct AddListenSocketRequestPayload {}
pub type AddListenSocketRequest = RequestEnvelope<AddListenSocketRequestPayload>;
impl Default for AddListenSocketRequest {
fn default() -> Self {
Self::new()
}
}
impl AddListenSocketRequest {
pub fn new() -> Self {
Self::from_payload(AddListenSocketRequestPayload {})
}
}
impl Message for AddListenSocketRequest {
type Payload = AddListenSocketRequestPayload;
type MessageClass = RequestMsgType;
const MESSAGE_TYPE: Self::MessageClass = RequestMsgType::AddListenSocket;
fn from_payload(payload: Self::Payload) -> Self {
Self {
msg_type: Self::MESSAGE_TYPE.into(),
payload,
}
}
fn setup<B: ByteSliceMut>(buf: B) -> anyhow::Result<Ref<B, Self>> {
let mut r: Ref<B, Self> = buf.zk_zeroized()?;
r.init();
Ok(r)
}
fn init(&mut self) {
self.msg_type = Self::MESSAGE_TYPE.into();
}
}
pub mod add_listen_socket_response_status {
pub const OK: u128 = 0;
pub const INVALID_REQUEST: u128 = 1;
pub const INTERNAL_ERROR: u128 = 2;
}
#[repr(packed)]
#[derive(Debug, Copy, Clone, Hash, AsBytes, FromBytes, FromZeroes, PartialEq, Eq)]
pub struct AddListenSocketResponsePayload {
pub status: u128,
}
pub type AddListenSocketResponse = ResponseEnvelope<AddListenSocketResponsePayload>;
impl AddListenSocketResponse {
pub fn new(status: u128) -> Self {
Self::from_payload(AddListenSocketResponsePayload { status })
}
}
impl Message for AddListenSocketResponse {
type Payload = AddListenSocketResponsePayload;
type MessageClass = ResponseMsgType;
const MESSAGE_TYPE: Self::MessageClass = ResponseMsgType::AddListenSocket;
fn from_payload(payload: Self::Payload) -> Self {
Self {
msg_type: Self::MESSAGE_TYPE.into(),
payload,
}
}
fn setup<B: ByteSliceMut>(buf: B) -> anyhow::Result<Ref<B, Self>> {
let mut r: Ref<B, Self> = buf.zk_zeroized()?;
r.init();
Ok(r)
}
fn init(&mut self) {
self.msg_type = Self::MESSAGE_TYPE.into();
}
}
#[repr(packed)]
#[derive(Debug, Copy, Clone, Hash, AsBytes, FromBytes, FromZeroes, PartialEq, Eq)]
pub struct AddPskBrokerRequestPayload {}
pub type AddPskBrokerRequest = RequestEnvelope<AddPskBrokerRequestPayload>;
impl Default for AddPskBrokerRequest {
fn default() -> Self {
Self::new()
}
}
impl AddPskBrokerRequest {
pub fn new() -> Self {
Self::from_payload(AddPskBrokerRequestPayload {})
}
}
impl Message for AddPskBrokerRequest {
type Payload = AddPskBrokerRequestPayload;
type MessageClass = RequestMsgType;
const MESSAGE_TYPE: Self::MessageClass = RequestMsgType::AddPskBroker;
fn from_payload(payload: Self::Payload) -> Self {
Self {
msg_type: Self::MESSAGE_TYPE.into(),
payload,
}
}
fn setup<B: ByteSliceMut>(buf: B) -> anyhow::Result<Ref<B, Self>> {
let mut r: Ref<B, Self> = buf.zk_zeroized()?;
r.init();
Ok(r)
}
fn init(&mut self) {
self.msg_type = Self::MESSAGE_TYPE.into();
}
}
pub mod add_psk_broker_response_status {
pub const OK: u128 = 0;
pub const INVALID_REQUEST: u128 = 1;
pub const INTERNAL_ERROR: u128 = 2;
}
#[repr(packed)]
#[derive(Debug, Copy, Clone, Hash, AsBytes, FromBytes, FromZeroes, PartialEq, Eq)]
pub struct AddPskBrokerResponsePayload {
pub status: u128,
}
pub type AddPskBrokerResponse = ResponseEnvelope<AddPskBrokerResponsePayload>;
impl AddPskBrokerResponse {
pub fn new(status: u128) -> Self {
Self::from_payload(AddPskBrokerResponsePayload { status })
}
}
impl Message for AddPskBrokerResponse {
type Payload = AddPskBrokerResponsePayload;
type MessageClass = ResponseMsgType;
const MESSAGE_TYPE: Self::MessageClass = ResponseMsgType::AddPskBroker;
fn from_payload(payload: Self::Payload) -> Self {
Self {
msg_type: Self::MESSAGE_TYPE.into(),
payload,
}
}
fn setup<B: ByteSliceMut>(buf: B) -> anyhow::Result<Ref<B, Self>> {
let mut r: Ref<B, Self> = buf.zk_zeroized()?;
r.init();
Ok(r)
}
fn init(&mut self) {
self.msg_type = Self::MESSAGE_TYPE.into();
}
}
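// Illustrative sketch, not part of the original diff: every concrete message is an
// Envelope whose type tag is fixed by its Message impl, so building and serializing a
// request is a plain zerocopy operation.
fn example_encode_ping_request() -> Vec<u8> {
let req = PingRequest::new([0u8; 256]); // echo payload; a real client would randomize this
req.as_bytes().to_vec() // wire form: 16-byte type tag followed by the payload
}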


@@ -0,0 +1,146 @@
use anyhow::ensure;
use zerocopy::{ByteSlice, ByteSliceMut, Ref};
use super::{ByteSliceRefExt, MessageAttributes, PingRequest, RequestMsgType};
struct RequestRefMaker<B> {
buf: B,
msg_type: RequestMsgType,
}
impl<B: ByteSlice> RequestRef<B> {
pub fn parse(buf: B) -> anyhow::Result<Self> {
RequestRefMaker::new(buf)?.parse()
}
pub fn parse_from_prefix(buf: B) -> anyhow::Result<Self> {
RequestRefMaker::new(buf)?.from_prefix()?.parse()
}
pub fn parse_from_suffix(buf: B) -> anyhow::Result<Self> {
RequestRefMaker::new(buf)?.from_suffix()?.parse()
}
pub fn message_type(&self) -> RequestMsgType {
match self {
Self::Ping(_) => RequestMsgType::Ping,
Self::SupplyKeypair(_) => RequestMsgType::SupplyKeypair,
Self::AddListenSocket(_) => RequestMsgType::AddListenSocket,
Self::AddPskBroker(_) => RequestMsgType::AddPskBroker,
}
}
}
impl<B> From<Ref<B, PingRequest>> for RequestRef<B> {
fn from(v: Ref<B, PingRequest>) -> Self {
Self::Ping(v)
}
}
impl<B> From<Ref<B, super::SupplyKeypairRequest>> for RequestRef<B> {
fn from(v: Ref<B, super::SupplyKeypairRequest>) -> Self {
Self::SupplyKeypair(v)
}
}
impl<B> From<Ref<B, super::AddListenSocketRequest>> for RequestRef<B> {
fn from(v: Ref<B, super::AddListenSocketRequest>) -> Self {
Self::AddListenSocket(v)
}
}
impl<B> From<Ref<B, super::AddPskBrokerRequest>> for RequestRef<B> {
fn from(v: Ref<B, super::AddPskBrokerRequest>) -> Self {
Self::AddPskBroker(v)
}
}
impl<B: ByteSlice> RequestRefMaker<B> {
fn new(buf: B) -> anyhow::Result<Self> {
let msg_type = buf.deref().request_msg_type_from_prefix()?;
Ok(Self { buf, msg_type })
}
fn target_size(&self) -> usize {
self.msg_type.message_size()
}
fn parse(self) -> anyhow::Result<RequestRef<B>> {
Ok(match self.msg_type {
RequestMsgType::Ping => RequestRef::Ping(self.buf.ping_request()?),
RequestMsgType::SupplyKeypair => {
RequestRef::SupplyKeypair(self.buf.supply_keypair_request()?)
}
RequestMsgType::AddListenSocket => {
RequestRef::AddListenSocket(self.buf.add_listen_socket_request()?)
}
RequestMsgType::AddPskBroker => {
RequestRef::AddPskBroker(self.buf.add_psk_broker_request()?)
}
})
}
#[allow(clippy::wrong_self_convention)]
fn from_prefix(self) -> anyhow::Result<Self> {
self.ensure_fit()?;
let point = self.target_size();
let Self { buf, msg_type } = self;
let (buf, _) = buf.split_at(point);
Ok(Self { buf, msg_type })
}
#[allow(clippy::wrong_self_convention)]
fn from_suffix(self) -> anyhow::Result<Self> {
self.ensure_fit()?;
let point = self.buf.len() - self.target_size();
let Self { buf, msg_type } = self;
let (buf, _) = buf.split_at(point);
Ok(Self { buf, msg_type })
}
pub fn ensure_fit(&self) -> anyhow::Result<()> {
let have = self.buf.len();
let need = self.target_size();
ensure!(
need <= have,
"Buffer is undersized at {have} bytes (need {need} bytes)!"
);
Ok(())
}
}
pub enum RequestRef<B> {
Ping(Ref<B, PingRequest>),
SupplyKeypair(Ref<B, super::SupplyKeypairRequest>),
AddListenSocket(Ref<B, super::AddListenSocketRequest>),
AddPskBroker(Ref<B, super::AddPskBrokerRequest>),
}
impl<B> RequestRef<B>
where
B: ByteSlice,
{
pub fn bytes(&self) -> &[u8] {
match self {
Self::Ping(r) => r.bytes(),
Self::SupplyKeypair(r) => r.bytes(),
Self::AddListenSocket(r) => r.bytes(),
Self::AddPskBroker(r) => r.bytes(),
}
}
}
impl<B> RequestRef<B>
where
B: ByteSliceMut,
{
pub fn bytes_mut(&mut self) -> &mut [u8] {
match self {
Self::Ping(r) => r.bytes_mut(),
Self::SupplyKeypair(r) => r.bytes_mut(),
Self::AddListenSocket(r) => r.bytes_mut(),
Self::AddPskBroker(r) => r.bytes_mut(),
}
}
}


@@ -0,0 +1,190 @@
use rosenpass_util::zerocopy::{
RefMaker, ZerocopyEmancipateExt, ZerocopyEmancipateMutExt, ZerocopySliceExt,
};
use zerocopy::{ByteSlice, ByteSliceMut, Ref};
use super::{Message, PingRequest, PingResponse};
use super::{RequestRef, ResponseRef, ZerocopyResponseMakerSetupMessageExt};
pub trait RequestMsg: Sized + Message {
type ResponseMsg: ResponseMsg;
fn zk_response_maker<B: ByteSlice>(buf: B) -> RefMaker<B, Self::ResponseMsg> {
buf.zk_ref_maker()
}
fn setup_response<B: ByteSliceMut>(buf: B) -> anyhow::Result<Ref<B, Self::ResponseMsg>> {
Self::zk_response_maker(buf).setup_msg()
}
fn setup_response_from_prefix<B: ByteSliceMut>(
buf: B,
) -> anyhow::Result<Ref<B, Self::ResponseMsg>> {
Self::zk_response_maker(buf).from_prefix()?.setup_msg()
}
fn setup_response_from_suffix<B: ByteSliceMut>(
buf: B,
) -> anyhow::Result<Ref<B, Self::ResponseMsg>> {
Self::zk_response_maker(buf).from_suffix()?.setup_msg()
}
}
pub trait ResponseMsg: Message {
type RequestMsg: RequestMsg;
}
impl RequestMsg for PingRequest {
type ResponseMsg = PingResponse;
}
impl ResponseMsg for PingResponse {
type RequestMsg = PingRequest;
}
impl RequestMsg for super::SupplyKeypairRequest {
type ResponseMsg = super::SupplyKeypairResponse;
}
impl ResponseMsg for super::SupplyKeypairResponse {
type RequestMsg = super::SupplyKeypairRequest;
}
impl RequestMsg for super::AddListenSocketRequest {
type ResponseMsg = super::AddListenSocketResponse;
}
impl ResponseMsg for super::AddListenSocketResponse {
type RequestMsg = super::AddListenSocketRequest;
}
impl RequestMsg for super::AddPskBrokerRequest {
type ResponseMsg = super::AddPskBrokerResponse;
}
impl ResponseMsg for super::AddPskBrokerResponse {
type RequestMsg = super::AddPskBrokerRequest;
}
pub type PingPair<B1, B2> = (Ref<B1, PingRequest>, Ref<B2, PingResponse>);
pub type SupplyKeypairPair<B1, B2> = (
Ref<B1, super::SupplyKeypairRequest>,
Ref<B2, super::SupplyKeypairResponse>,
);
pub type AddListenSocketPair<B1, B2> = (
Ref<B1, super::AddListenSocketRequest>,
Ref<B2, super::AddListenSocketResponse>,
);
pub type AddPskBrokerPair<B1, B2> = (
Ref<B1, super::AddPskBrokerRequest>,
Ref<B2, super::AddPskBrokerResponse>,
);
pub enum RequestResponsePair<B1, B2> {
Ping(PingPair<B1, B2>),
SupplyKeypair(SupplyKeypairPair<B1, B2>),
AddListenSocket(AddListenSocketPair<B1, B2>),
AddPskBroker(AddPskBrokerPair<B1, B2>),
}
impl<B1, B2> From<PingPair<B1, B2>> for RequestResponsePair<B1, B2> {
fn from(v: PingPair<B1, B2>) -> Self {
RequestResponsePair::Ping(v)
}
}
impl<B1, B2> From<SupplyKeypairPair<B1, B2>> for RequestResponsePair<B1, B2> {
fn from(v: SupplyKeypairPair<B1, B2>) -> Self {
RequestResponsePair::SupplyKeypair(v)
}
}
impl<B1, B2> From<AddListenSocketPair<B1, B2>> for RequestResponsePair<B1, B2> {
fn from(v: AddListenSocketPair<B1, B2>) -> Self {
RequestResponsePair::AddListenSocket(v)
}
}
impl<B1, B2> From<AddPskBrokerPair<B1, B2>> for RequestResponsePair<B1, B2> {
fn from(v: AddPskBrokerPair<B1, B2>) -> Self {
RequestResponsePair::AddPskBroker(v)
}
}
impl<B1, B2> RequestResponsePair<B1, B2>
where
B1: ByteSlice,
B2: ByteSlice,
{
pub fn both(&self) -> (RequestRef<&[u8]>, ResponseRef<&[u8]>) {
match self {
Self::Ping((req, res)) => {
let req = RequestRef::Ping(req.emancipate());
let res = ResponseRef::Ping(res.emancipate());
(req, res)
}
Self::SupplyKeypair((req, res)) => {
let req = RequestRef::SupplyKeypair(req.emancipate());
let res = ResponseRef::SupplyKeypair(res.emancipate());
(req, res)
}
Self::AddListenSocket((req, res)) => {
let req = RequestRef::AddListenSocket(req.emancipate());
let res = ResponseRef::AddListenSocket(res.emancipate());
(req, res)
}
Self::AddPskBroker((req, res)) => {
let req = RequestRef::AddPskBroker(req.emancipate());
let res = ResponseRef::AddPskBroker(res.emancipate());
(req, res)
}
}
}
pub fn request(&self) -> RequestRef<&[u8]> {
self.both().0
}
pub fn response(&self) -> ResponseRef<&[u8]> {
self.both().1
}
}
impl<B1, B2> RequestResponsePair<B1, B2>
where
B1: ByteSliceMut,
B2: ByteSliceMut,
{
pub fn both_mut(&mut self) -> (RequestRef<&mut [u8]>, ResponseRef<&mut [u8]>) {
match self {
Self::Ping((req, res)) => {
let req = RequestRef::Ping(req.emancipate_mut());
let res = ResponseRef::Ping(res.emancipate_mut());
(req, res)
}
Self::SupplyKeypair((req, res)) => {
let req = RequestRef::SupplyKeypair(req.emancipate_mut());
let res = ResponseRef::SupplyKeypair(res.emancipate_mut());
(req, res)
}
Self::AddListenSocket((req, res)) => {
let req = RequestRef::AddListenSocket(req.emancipate_mut());
let res = ResponseRef::AddListenSocket(res.emancipate_mut());
(req, res)
}
Self::AddPskBroker((req, res)) => {
let req = RequestRef::AddPskBroker(req.emancipate_mut());
let res = ResponseRef::AddPskBroker(res.emancipate_mut());
(req, res)
}
}
}
pub fn request_mut(&mut self) -> RequestRef<&mut [u8]> {
self.both_mut().0
}
pub fn response_mut(&mut self) -> ResponseRef<&mut [u8]> {
self.both_mut().1
}
}
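// Illustrative sketch, not part of the original diff: RequestMsg ties each request type
// to its response type, so a handler can allocate the matching response at the front of
// an output buffer without naming the response type explicitly.
fn example_setup_response_buffer<B: ByteSliceMut>(out: B) -> anyhow::Result<Ref<B, PingResponse>> {
<PingRequest as RequestMsg>::setup_response_from_prefix(out)
}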


@@ -0,0 +1,147 @@
// TODO: This is copied verbatim from RequestRef…not pretty
use anyhow::ensure;
use zerocopy::{ByteSlice, ByteSliceMut, Ref};
use super::{ByteSliceRefExt, MessageAttributes, PingResponse, ResponseMsgType};
struct ResponseRefMaker<B> {
buf: B,
msg_type: ResponseMsgType,
}
impl<B: ByteSlice> ResponseRef<B> {
pub fn parse(buf: B) -> anyhow::Result<Self> {
ResponseRefMaker::new(buf)?.parse()
}
pub fn parse_from_prefix(buf: B) -> anyhow::Result<Self> {
ResponseRefMaker::new(buf)?.from_prefix()?.parse()
}
pub fn parse_from_suffix(buf: B) -> anyhow::Result<Self> {
ResponseRefMaker::new(buf)?.from_suffix()?.parse()
}
pub fn message_type(&self) -> ResponseMsgType {
match self {
Self::Ping(_) => ResponseMsgType::Ping,
Self::SupplyKeypair(_) => ResponseMsgType::SupplyKeypair,
Self::AddListenSocket(_) => ResponseMsgType::AddListenSocket,
Self::AddPskBroker(_) => ResponseMsgType::AddPskBroker,
}
}
}
impl<B> From<Ref<B, PingResponse>> for ResponseRef<B> {
fn from(v: Ref<B, PingResponse>) -> Self {
Self::Ping(v)
}
}
impl<B> From<Ref<B, super::SupplyKeypairResponse>> for ResponseRef<B> {
fn from(v: Ref<B, super::SupplyKeypairResponse>) -> Self {
Self::SupplyKeypair(v)
}
}
impl<B> From<Ref<B, super::AddListenSocketResponse>> for ResponseRef<B> {
fn from(v: Ref<B, super::AddListenSocketResponse>) -> Self {
Self::AddListenSocket(v)
}
}
impl<B> From<Ref<B, super::AddPskBrokerResponse>> for ResponseRef<B> {
fn from(v: Ref<B, super::AddPskBrokerResponse>) -> Self {
Self::AddPskBroker(v)
}
}
impl<B: ByteSlice> ResponseRefMaker<B> {
fn new(buf: B) -> anyhow::Result<Self> {
let msg_type = buf.deref().response_msg_type_from_prefix()?;
Ok(Self { buf, msg_type })
}
fn target_size(&self) -> usize {
self.msg_type.message_size()
}
fn parse(self) -> anyhow::Result<ResponseRef<B>> {
Ok(match self.msg_type {
ResponseMsgType::Ping => ResponseRef::Ping(self.buf.ping_response()?),
ResponseMsgType::SupplyKeypair => {
ResponseRef::SupplyKeypair(self.buf.supply_keypair_response()?)
}
ResponseMsgType::AddListenSocket => {
ResponseRef::AddListenSocket(self.buf.add_listen_socket_response()?)
}
ResponseMsgType::AddPskBroker => {
ResponseRef::AddPskBroker(self.buf.add_psk_broker_response()?)
}
})
}
#[allow(clippy::wrong_self_convention)]
fn from_prefix(self) -> anyhow::Result<Self> {
self.ensure_fit()?;
let point = self.target_size();
let Self { buf, msg_type } = self;
let (buf, _) = buf.split_at(point);
Ok(Self { buf, msg_type })
}
#[allow(clippy::wrong_self_convention)]
fn from_suffix(self) -> anyhow::Result<Self> {
self.ensure_fit()?;
let point = self.buf.len() - self.target_size();
let Self { buf, msg_type } = self;
let (buf, _) = buf.split_at(point);
Ok(Self { buf, msg_type })
}
pub fn ensure_fit(&self) -> anyhow::Result<()> {
let have = self.buf.len();
let need = self.target_size();
ensure!(
need <= have,
"Buffer is undersized at {have} bytes (need {need} bytes)!"
);
Ok(())
}
}
pub enum ResponseRef<B> {
Ping(Ref<B, PingResponse>),
SupplyKeypair(Ref<B, super::SupplyKeypairResponse>),
AddListenSocket(Ref<B, super::AddListenSocketResponse>),
AddPskBroker(Ref<B, super::AddPskBrokerResponse>),
}
impl<B> ResponseRef<B>
where
B: ByteSlice,
{
pub fn bytes(&self) -> &[u8] {
match self {
Self::Ping(r) => r.bytes(),
Self::SupplyKeypair(r) => r.bytes(),
Self::AddListenSocket(r) => r.bytes(),
Self::AddPskBroker(r) => r.bytes(),
}
}
}
impl<B> ResponseRef<B>
where
B: ByteSliceMut,
{
pub fn bytes_mut(&mut self) -> &mut [u8] {
match self {
Self::Ping(r) => r.bytes_mut(),
Self::SupplyKeypair(r) => r.bytes_mut(),
Self::AddListenSocket(r) => r.bytes_mut(),
Self::AddPskBroker(r) => r.bytes_mut(),
}
}
}


@@ -0,0 +1,159 @@
use super::{ByteSliceRefExt, Message, PingRequest, PingResponse, RequestRef, RequestResponsePair};
use std::{collections::VecDeque, os::fd::OwnedFd};
use zerocopy::{ByteSlice, ByteSliceMut};
pub trait Server {
/// This implements the handler for the [crate::api::RequestMsgType::Ping] API message
///
/// It merely takes a buffer and returns that same buffer.
fn ping(
&mut self,
req: &PingRequest,
req_fds: &mut VecDeque<OwnedFd>,
res: &mut PingResponse,
) -> anyhow::Result<()>;
/// Supply the cryptographic server keypair through file descriptor passing in the API
///
/// This implements the handler for the [crate::api::RequestMsgType::SupplyKeypair] API message.
///
/// # File descriptors
///
/// 1. The secret key (size must match exactly); the file descriptor must be backed by either
/// of
/// - file-system file
/// - [memfd](https://man.archlinux.org/man/memfd.2.en)
/// - [memfd_secret](https://man.archlinux.org/man/memfd_secret.2.en)
/// 2. The public key (size must match exactly); the file descriptor must be backed by either
/// of
/// - file-system file
/// - [memfd](https://man.archlinux.org/man/memfd.2.en)
/// - [memfd_secret](https://man.archlinux.org/man/memfd_secret.2.en)
///
/// # API Return Status
///
/// 1. [crate::api::supply_keypair_response_status::OK] - Indicates success
/// 2. [crate::api::supply_keypair_response_status::KEYPAIR_ALREADY_SUPPLIED] The endpoint was used but
/// the server already has server keys
/// 3. [crate::api::supply_keypair_response_status::INVALID_REQUEST] Malformed request; could be:
/// - Missing file descriptors for the secret or public key
/// - File descriptors contain data of invalid length
/// - Invalid file descriptor type
///
/// # Description
///
/// At startup, if no server keys are specified in the rosenpass configuration, and if the API
/// is enabled, the Rosenpass process waits for server keys to be supplied via the API. Until
/// then, any messages for the Rosenpass cryptographic protocol are ignored and dropped, since
/// all cryptographic operations require access to the server keys.
///
/// Both private and public keys are specified through file descriptors and both are read from
/// their respective file descriptors into process memory. A file-descriptor-based transport is
/// used because of the excessive size of Classic McEliece public keys (100 kB and up).
///
/// The file descriptors for the keys need not be backed by a file on disk. You can supply a
/// [memfd](https://man.archlinux.org/man/memfd.2.en) or [memfd_secret](https://man.archlinux.org/man/memfd_secret.2.en)
/// backed file descriptor if the server keys are not backed by a file system file.
fn supply_keypair(
&mut self,
req: &super::SupplyKeypairRequest,
req_fds: &mut VecDeque<OwnedFd>,
res: &mut super::SupplyKeypairResponse,
) -> anyhow::Result<()>;
/// Supply a new UDP listen socket through file descriptor passing via the API
///
/// This implements the handler for the [crate::api::RequestMsgType::AddListenSocket] API message.
///
/// # File descriptors
///
/// 1. The listen socket; must be backed by a UDP network listen socket
///
/// # API Return Status
///
/// 1. [crate::api::add_listen_socket_response_status::OK] - Indicates success
/// 2. [crate::api::add_listen_socket_response_status::INVALID_REQUEST] Malformed request; could be:
/// - Missing file descriptor for the listen socket
/// - Invalid file descriptor type
/// 3. [crate::api::add_listen_socket_response_status::INTERNAL_ERROR] Some other, non-fatal error
/// occurred. Check the logs for details.
///
/// # Description
///
/// This endpoint allows you to supply a UDP listen socket; it will be used to perform
/// cryptographic key exchanges via the Rosenpass protocol.
fn add_listen_socket(
&mut self,
req: &super::AddListenSocketRequest,
req_fds: &mut VecDeque<OwnedFd>,
res: &mut super::AddListenSocketResponse,
) -> anyhow::Result<()>;
fn add_psk_broker(
&mut self,
req: &super::AddPskBrokerRequest,
req_fds: &mut VecDeque<OwnedFd>,
res: &mut super::AddPskBrokerResponse,
) -> anyhow::Result<()>;
fn dispatch<ReqBuf, ResBuf>(
&mut self,
p: &mut RequestResponsePair<ReqBuf, ResBuf>,
req_fds: &mut VecDeque<OwnedFd>,
) -> anyhow::Result<()>
where
ReqBuf: ByteSlice,
ResBuf: ByteSliceMut,
{
match p {
RequestResponsePair::Ping((req, res)) => self.ping(req, req_fds, res),
RequestResponsePair::SupplyKeypair((req, res)) => {
self.supply_keypair(req, req_fds, res)
}
RequestResponsePair::AddListenSocket((req, res)) => {
self.add_listen_socket(req, req_fds, res)
}
RequestResponsePair::AddPskBroker((req, res)) => self.add_psk_broker(req, req_fds, res),
}
}
fn handle_message<ReqBuf, ResBuf>(
&mut self,
req: ReqBuf,
req_fds: &mut VecDeque<OwnedFd>,
res: ResBuf,
) -> anyhow::Result<usize>
where
ReqBuf: ByteSlice,
ResBuf: ByteSliceMut,
{
let req = req.parse_request_from_prefix()?;
// TODO: This is not pretty; This match should be moved into RequestRef
let mut pair = match req {
RequestRef::Ping(req) => {
let mut res = res.ping_response_from_prefix()?;
res.init();
RequestResponsePair::Ping((req, res))
}
RequestRef::SupplyKeypair(req) => {
let mut res = res.supply_keypair_response_from_prefix()?;
res.init();
RequestResponsePair::SupplyKeypair((req, res))
}
RequestRef::AddListenSocket(req) => {
let mut res = res.add_listen_socket_response_from_prefix()?;
res.init();
RequestResponsePair::AddListenSocket((req, res))
}
RequestRef::AddPskBroker(req) => {
let mut res = res.add_psk_broker_response_from_prefix()?;
res.init();
RequestResponsePair::AddPskBroker((req, res))
}
};
self.dispatch(&mut pair, req_fds)?;
let res_len = pair.response().bytes().len();
Ok(res_len)
}
}
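// Illustrative sketch, not part of the original diff: driving the Server trait with plain
// in-memory buffers for the ping case (no file descriptors involved). `srv` is assumed to
// be some type implementing Server.
fn example_ping_roundtrip<S: Server>(srv: &mut S) -> anyhow::Result<usize> {
use zerocopy::AsBytes;
let req = PingRequest::new([0u8; 256]);
let mut res_buf = [0u8; super::MAX_RESPONSE_LEN];
let mut fds = VecDeque::new();
// Returns the number of response bytes written to the front of `res_buf`.
srv.handle_message(req.as_bytes(), &mut fds, &mut res_buf[..])
}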

40
rosenpass/src/api/cli.rs Normal file

@@ -0,0 +1,40 @@
use std::path::PathBuf;
use clap::Args;
use crate::config::Rosenpass as RosenpassConfig;
use super::config::ApiConfig;
#[cfg(feature = "experiment_api")]
#[derive(Args, Debug)]
pub struct ApiCli {
/// Where in the file-system to create the unix socket the rosenpass API will be listening for
/// connections on.
#[arg(long)]
api_listen_path: Vec<PathBuf>,
/// When rosenpass is called from another process, the calling process can open and bind the
/// unix socket for the Rosenpass API itself and pass it to this process. In Rust this can be
/// achieved using the [command-fds](https://docs.rs/command-fds/latest/command_fds/) crate.
#[arg(long)]
api_listen_fd: Vec<i32>,
/// When rosenpass is called from another process, the calling process can establish the unix
/// socket connection for the API itself, for instance using the `socketpair(2)` system call.
#[arg(long)]
api_stream_fd: Vec<i32>,
}
impl ApiCli {
pub fn apply_to_config(&self, cfg: &mut RosenpassConfig) -> anyhow::Result<()> {
self.apply_to_api_config(&mut cfg.api)
}
pub fn apply_to_api_config(&self, cfg: &mut ApiConfig) -> anyhow::Result<()> {
cfg.listen_path.extend_from_slice(&self.api_listen_path);
cfg.listen_fd.extend_from_slice(&self.api_listen_fd);
cfg.stream_fd.extend_from_slice(&self.api_stream_fd);
Ok(())
}
}


@@ -0,0 +1,49 @@
use std::path::PathBuf;
use mio::net::UnixListener;
use rosenpass_util::mio::{UnixListenerExt, UnixStreamExt};
use serde::{Deserialize, Serialize};
use crate::app_server::AppServer;
#[derive(Debug, Serialize, Deserialize, Default, Clone)]
pub struct ApiConfig {
/// Where in the file-system to create the unix socket the rosenpass API will be listening for
/// connections on
pub listen_path: Vec<PathBuf>,
/// When rosenpass is called from another process, the calling process can open and bind the
/// unix socket for the Rosenpass API itself and pass it to this process. In Rust this can be
/// achieved using the [command-fds](https://docs.rs/command-fds/latest/command_fds/) crate.
pub listen_fd: Vec<i32>,
/// When rosenpass is called from another process, the calling process can establish the unix
/// socket connection for the API itself, for instance using the `socketpair(2)` system call.
pub stream_fd: Vec<i32>,
}
impl ApiConfig {
pub fn apply_to_app_server(&self, srv: &mut AppServer) -> anyhow::Result<()> {
for path in self.listen_path.iter() {
srv.add_api_listener(UnixListener::bind(path)?)?;
}
for fd in self.listen_fd.iter() {
srv.add_api_listener(UnixListenerExt::claim_fd(*fd)?)?;
}
for fd in self.stream_fd.iter() {
srv.add_api_connection(UnixStreamExt::claim_fd(*fd)?)?;
}
Ok(())
}
pub fn count_api_sources(&self) -> usize {
self.listen_path.len() + self.listen_fd.len() + self.stream_fd.len()
}
pub fn has_api_sources(&self) -> bool {
self.count_api_sources() > 0
}
}
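// Illustrative sketch, not part of the original diff: an ApiConfig assembled in code; in
// practice it is deserialized from the rosenpass configuration file or filled in from the
// CLI flags in api/cli.rs. The socket path is hypothetical.
fn example_api_config() -> ApiConfig {
ApiConfig {
listen_path: vec![PathBuf::from("/run/rosenpass/api.sock")],
listen_fd: vec![],
stream_fd: vec![],
}
}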


@@ -0,0 +1,321 @@
use std::borrow::{Borrow, BorrowMut};
use std::collections::VecDeque;
use std::os::fd::OwnedFd;
use mio::net::UnixStream;
use rosenpass_secret_memory::Secret;
use rosenpass_util::mio::ReadWithFileDescriptors;
use rosenpass_util::{
io::{IoResultKindHintExt, TryIoResultKindHintExt},
length_prefix_encoding::{
decoder::{self as lpe_decoder, LengthPrefixDecoder},
encoder::{self as lpe_encoder, LengthPrefixEncoder},
},
mio::interest::RW as MIO_RW,
};
use zeroize::Zeroize;
use crate::api::MAX_REQUEST_FDS;
use crate::{api::Server, app_server::AppServer};
use super::super::{ApiHandler, ApiHandlerContext};
#[derive(Debug)]
struct SecretBuffer<const N: usize>(pub Secret<N>);
impl<const N: usize> SecretBuffer<N> {
fn new() -> Self {
Self(Secret::zero())
}
}
impl<const N: usize> Borrow<[u8]> for SecretBuffer<N> {
fn borrow(&self) -> &[u8] {
self.0.secret()
}
}
impl<const N: usize> BorrowMut<[u8]> for SecretBuffer<N> {
fn borrow_mut(&mut self) -> &mut [u8] {
self.0.secret_mut()
}
}
// TODO: Unfortunately, zerocopy is quite particular about alignment, hence the 4096
type ReadBuffer = LengthPrefixDecoder<SecretBuffer<4096>>;
type WriteBuffer = LengthPrefixEncoder<SecretBuffer<4096>>;
type ReadFdBuffer = VecDeque<OwnedFd>;
#[derive(Debug)]
struct MioConnectionBuffers {
read_buffer: ReadBuffer,
write_buffer: WriteBuffer,
read_fd_buffer: ReadFdBuffer,
}
#[derive(Debug)]
pub struct MioConnection {
io: UnixStream,
mio_token: mio::Token,
invalid_read: bool,
buffers: Option<MioConnectionBuffers>,
api_handler: ApiHandler,
}
impl MioConnection {
pub fn new(app_server: &mut AppServer, mut io: UnixStream) -> std::io::Result<Self> {
let mio_token = app_server.mio_token_dispenser.dispense();
app_server
.mio_poll
.registry()
.register(&mut io, mio_token, MIO_RW)?;
let invalid_read = false;
let read_buffer = LengthPrefixDecoder::new(SecretBuffer::new());
let write_buffer = LengthPrefixEncoder::from_buffer(SecretBuffer::new());
let read_fd_buffer = VecDeque::new();
let buffers = Some(MioConnectionBuffers {
read_buffer,
write_buffer,
read_fd_buffer,
});
let api_state = ApiHandler::new();
Ok(Self {
io,
mio_token,
invalid_read,
buffers,
api_handler: api_state,
})
}
pub fn should_close(&self) -> bool {
let exhausted = self
.buffers
.as_ref()
.map(|b| b.write_buffer.exhausted())
.unwrap_or(false);
self.invalid_read && exhausted
}
pub fn close(mut self, app_server: &mut AppServer) -> anyhow::Result<()> {
app_server.mio_poll.registry().deregister(&mut self.io)?;
Ok(())
}
pub fn mio_token(&self) -> mio::Token {
self.mio_token
}
}
pub trait MioConnectionContext {
fn mio_connection(&self) -> &MioConnection;
fn app_server(&self) -> &AppServer;
fn mio_connection_mut(&mut self) -> &mut MioConnection;
fn app_server_mut(&mut self) -> &mut AppServer;
fn poll(&mut self) -> anyhow::Result<()> {
macro_rules! short {
($e:expr) => {
match $e {
None => return Ok(()),
Some(()) => {}
}
};
}
// All of these functions return an error, None ("operation incomplete"),
// or Some(()) ("operation complete, keep processing")
short!(self.flush_write_buffer()?); // Flush last message
short!(self.recv()?); // Receive new message
short!(self.handle_incoming_message()?); // Process new message with API
short!(self.flush_write_buffer()?); // Begin flushing response
Ok(())
}
fn handle_incoming_message(&mut self) -> anyhow::Result<Option<()>> {
self.with_buffers_stolen(|this, bufs| {
// Acquire request & response. Caller is responsible to make sure
// that read buffer holds a message and that write buffer is cleared.
// Hence the unwraps and assertions
assert!(bufs.write_buffer.exhausted());
let req = bufs.read_buffer.message().unwrap().unwrap();
let req_fds = &mut bufs.read_fd_buffer;
let res = bufs.write_buffer.buffer_bytes_mut();
// Call API handler
// Transitive trait implementations: MioConnectionContext -> ApiHandlerContext -> Server
let response_len = this.handle_message(req, req_fds, res)?;
bufs.write_buffer
.restart_write_with_new_message(response_len)?;
bufs.read_buffer.zeroize(); // clear for new message to read
bufs.read_fd_buffer.clear();
Ok(Some(()))
})
}
fn flush_write_buffer(&mut self) -> anyhow::Result<Option<()>> {
if self.write_buf_mut().exhausted() {
return Ok(Some(()));
}
use lpe_encoder::WriteToIoReturn as Ret;
use std::io::ErrorKind as K;
loop {
let conn = self.mio_connection_mut();
let bufs = conn.buffers.as_mut().unwrap();
let sock = &conn.io;
let write_buf = &mut bufs.write_buffer;
match write_buf.write_to_stdio(sock).io_err_kind_hint() {
// Done
Ok(Ret { done: true, .. }) => {
write_buf.zeroize(); // clear for new message to write
break Ok(Some(()));
}
// Would block
Ok(Ret {
bytes_written: 0, ..
}) => break Ok(None),
Err((_e, K::WouldBlock)) => break Ok(None),
// Just continue
Ok(_) => continue, /* Ret { bytes_written > 0, done = false } acc. to previous cases*/
Err((_e, K::Interrupted)) => continue,
// Other errors
Err((e, _ek)) => Err(e)?,
}
}
}
fn recv(&mut self) -> anyhow::Result<Option<()>> {
if !self.write_buf_mut().exhausted() || self.mio_connection().invalid_read {
return Ok(None);
}
use lpe_decoder::{ReadFromIoError as E, ReadFromIoReturn as Ret};
use std::io::ErrorKind as K;
loop {
let conn = self.mio_connection_mut();
let bufs = conn.buffers.as_mut().unwrap();
let read_buf = &mut bufs.read_buffer;
let read_fd_buf = &mut bufs.read_fd_buffer;
let sock = &conn.io;
let fd_passing_sock = ReadWithFileDescriptors::<MAX_REQUEST_FDS, UnixStream, _, _>::new(
sock,
read_fd_buf,
);
match read_buf
.read_from_stdio(fd_passing_sock)
.try_io_err_kind_hint()
{
// We actually received a proper message
// (Impl below match to appease borrow checker)
Ok(Ret {
message: Some(_msg),
..
}) => break Ok(Some(())),
// Message does not fit in buffer
Err((e @ E::MessageTooLargeError { .. }, _)) => {
log::warn!("Received message on API that was too big to fit in our buffers; \
looks like the client is broken. No further messages from this client will be processed.\n\
Error: {e:?}");
conn.invalid_read = true; // closed later by mio_manager
break Ok(None);
}
// Would block
Ok(Ret { bytes_read: 0, .. }) => break Ok(None),
Err((_, Some(K::WouldBlock))) => break Ok(None),
// Just keep going
Ok(Ret { bytes_read: _, .. }) => continue,
Err((_, Some(K::Interrupted))) => continue,
// Other IO Error (just pass on to the caller)
Err((E::IoError(e), _)) => {
log::warn!(
"IO error while trying to read message from API socket. \
The connection is broken. No further messages from this client will be processed.\n\
Error: {e:?}"
);
conn.invalid_read = true; // closed later by mio_manager
break Err(e.into());
}
};
}
}
fn mio_token(&self) -> mio::Token {
self.mio_connection().mio_token()
}
fn should_close(&self) -> bool {
self.mio_connection().should_close()
}
}
trait MioConnectionContextPrivate: MioConnectionContext {
fn steal_buffers(&mut self) -> MioConnectionBuffers {
self.mio_connection_mut().buffers.take().unwrap()
}
fn return_buffers(&mut self, buffers: MioConnectionBuffers) {
let opt = &mut self.mio_connection_mut().buffers;
assert!(opt.is_none());
let _ = opt.insert(buffers);
}
fn with_buffers_stolen<R, F: FnOnce(&mut Self, &mut MioConnectionBuffers) -> R>(
&mut self,
f: F,
) -> R {
let mut bufs = self.steal_buffers();
let res = f(self, &mut bufs);
self.return_buffers(bufs);
res
}
fn write_buf_mut(&mut self) -> &mut WriteBuffer {
self.mio_connection_mut()
.buffers
.as_mut()
.unwrap()
.write_buffer
.borrow_mut()
}
}
impl<T> MioConnectionContextPrivate for T where T: ?Sized + MioConnectionContext {}
impl<T> ApiHandlerContext for T
where
T: ?Sized + MioConnectionContext,
{
fn api_handler(&self) -> &ApiHandler {
&self.mio_connection().api_handler
}
fn app_server(&self) -> &AppServer {
MioConnectionContext::app_server(self)
}
fn api_handler_mut(&mut self) -> &mut ApiHandler {
&mut self.mio_connection_mut().api_handler
}
fn app_server_mut(&mut self) -> &mut AppServer {
MioConnectionContext::app_server_mut(self)
}
}


@@ -0,0 +1,173 @@
use std::{borrow::BorrowMut, io};
use mio::net::{UnixListener, UnixStream};
use rosenpass_util::{
functional::ApplyExt, io::nonblocking_handle_io_errors, mio::interest::RW as MIO_RW,
};
use crate::app_server::{AppServer, AppServerIoSource};
use super::{MioConnection, MioConnectionContext};
#[derive(Default, Debug)]
pub struct MioManager {
listeners: Vec<UnixListener>,
connections: Vec<Option<MioConnection>>,
}
#[derive(Debug, PartialEq, Eq, Copy, Clone)]
pub enum MioManagerIoSource {
Listener(usize),
Connection(usize),
}
impl MioManager {
pub fn new() -> Self {
Self::default()
}
}
struct MioConnectionFocus<'a, T: ?Sized + MioManagerContext> {
ctx: &'a mut T,
conn_idx: usize,
}
impl<'a, T: ?Sized + MioManagerContext> MioConnectionFocus<'a, T> {
fn new(ctx: &'a mut T, conn_idx: usize) -> Self {
Self { ctx, conn_idx }
}
}
pub trait MioManagerContext {
fn mio_manager(&self) -> &MioManager;
fn mio_manager_mut(&mut self) -> &mut MioManager;
fn app_server(&self) -> &AppServer;
fn app_server_mut(&mut self) -> &mut AppServer;
fn add_listener(&mut self, mut listener: UnixListener) -> io::Result<()> {
let srv = self.app_server_mut();
let mio_token = srv.mio_token_dispenser.dispense();
srv.mio_poll
.registry()
.register(&mut listener, mio_token, MIO_RW)?;
let io_source = self
.mio_manager()
.listeners
.len()
.apply(MioManagerIoSource::Listener)
.apply(AppServerIoSource::MioManager);
self.mio_manager_mut().listeners.push(listener);
self.app_server_mut()
.register_io_source(mio_token, io_source);
Ok(())
}
fn add_connection(&mut self, connection: UnixStream) -> io::Result<()> {
let connection = MioConnection::new(self.app_server_mut(), connection)?;
let mio_token = connection.mio_token();
let conns: &mut Vec<Option<MioConnection>> =
self.mio_manager_mut().connections.borrow_mut();
// Reuse a vacated connection slot if one exists, otherwise append; this keeps the
// indices of existing connections stable for their registered IO sources.
let idx = conns
.iter()
.position(|slot| slot.is_none())
.unwrap_or(conns.len());
if idx == conns.len() {
conns.push(Some(connection));
} else {
conns[idx] = Some(connection);
}
let io_source = idx
.apply(MioManagerIoSource::Connection)
.apply(AppServerIoSource::MioManager);
self.app_server_mut()
.register_io_source(mio_token, io_source);
Ok(())
}
fn poll_particular(&mut self, io_source: MioManagerIoSource) -> anyhow::Result<()> {
use MioManagerIoSource as S;
match io_source {
S::Listener(idx) => self.accept_from(idx)?,
S::Connection(idx) => self.poll_particular_connection(idx)?,
};
Ok(())
}
fn poll(&mut self) -> anyhow::Result<()> {
self.accept_connections()?;
self.poll_connections()?;
Ok(())
}
fn accept_connections(&mut self) -> io::Result<()> {
for idx in 0..self.mio_manager_mut().listeners.len() {
self.accept_from(idx)?;
}
Ok(())
}
fn accept_from(&mut self, idx: usize) -> io::Result<()> {
// Accept connections until the socket would block or returns another error
// TODO: This currently only adds connections--we eventually need the ability to remove
// them as well, see the note in connection.rs
loop {
match nonblocking_handle_io_errors(|| self.mio_manager().listeners[idx].accept())? {
None => break,
Some((conn, _addr)) => {
self.add_connection(conn)?;
}
};
}
Ok(())
}
fn poll_connections(&mut self) -> anyhow::Result<()> {
for idx in 0..self.mio_manager().connections.len() {
self.poll_particular_connection(idx)?;
}
Ok(())
}
fn poll_particular_connection(&mut self, idx: usize) -> anyhow::Result<()> {
if self.mio_manager().connections[idx].is_none() {
return Ok(());
}
let mut conn = MioConnectionFocus::new(self, idx);
conn.poll()?;
if conn.should_close() {
let conn = self.mio_manager_mut().connections[idx].take().unwrap();
let mio_token = conn.mio_token();
if let Err(e) = conn.close(self.app_server_mut()) {
log::warn!("Error while closing API connection {e:?}");
};
self.app_server_mut().unregister_io_source(mio_token);
}
Ok(())
}
}
impl<T: ?Sized + MioManagerContext> MioConnectionContext for MioConnectionFocus<'_, T> {
fn mio_connection(&self) -> &MioConnection {
self.ctx.mio_manager().connections[self.conn_idx]
.as_ref()
.unwrap()
}
fn app_server(&self) -> &AppServer {
self.ctx.app_server()
}
fn mio_connection_mut(&mut self) -> &mut MioConnection {
self.ctx.mio_manager_mut().connections[self.conn_idx]
.as_mut()
.unwrap()
}
fn app_server_mut(&mut self) -> &mut AppServer {
self.ctx.app_server_mut()
}
}


@@ -0,0 +1,5 @@
mod connection;
mod manager;
pub use connection::*;
pub use manager::*;

9
rosenpass/src/api/mod.rs Normal file

@@ -0,0 +1,9 @@
mod api_handler;
mod boilerplate;
pub use api_handler::*;
pub use boilerplate::*;
pub mod cli;
pub mod config;
pub mod mio;


@@ -1,5 +1,6 @@
use anyhow::bail;
use anyhow::Context;
use anyhow::Result;
use derive_builder::Builder;
use log::{error, info, warn};
@@ -7,7 +8,14 @@ use mio::Interest;
use mio::Token;
use rosenpass_secret_memory::Public;
use rosenpass_secret_memory::Secret;
use rosenpass_util::build::ConstructionSite;
use rosenpass_util::file::StoreValueB64;
use rosenpass_util::functional::run;
use rosenpass_util::functional::ApplyExt;
use rosenpass_util::io::IoResultKindHintExt;
use rosenpass_util::io::SubstituteForIoErrorKindExt;
use rosenpass_util::option::SomeExt;
use rosenpass_util::result::OkExt;
use rosenpass_wireguard_broker::WireguardBrokerMio;
use rosenpass_wireguard_broker::{WireguardBrokerCfg, WG_KEY_LEN};
use zerocopy::AsBytes;
@@ -15,8 +23,12 @@ use zerocopy::AsBytes;
use std::cell::Cell;
use std::collections::HashMap;
use std::collections::VecDeque;
use std::fmt::Debug;
use std::io;
use std::io::stdout;
use std::io::ErrorKind;
use std::io::Write;
use std::net::Ipv4Addr;
use std::net::Ipv6Addr;
use std::net::SocketAddr;
@@ -28,6 +40,7 @@ use std::slice;
use std::time::Duration;
use std::time::Instant;
use crate::protocol::BuildCryptoServer;
use crate::protocol::HostIdentification;
use crate::{
config::Verbosity,
@@ -63,7 +76,7 @@ pub struct MioTokenDispenser {
}
impl MioTokenDispenser {
fn dispense(&mut self) -> Token {
pub fn dispense(&mut self) -> Token {
let r = self.counter;
self.counter += 1;
Token(r)
@@ -72,7 +85,7 @@ impl MioTokenDispenser {
#[derive(Debug, Default)]
pub struct BrokerStore {
store: HashMap<
pub store: HashMap<
Public<BROKER_ID_BYTES>,
Box<dyn WireguardBrokerMio<Error = anyhow::Error, MioError = anyhow::Error>>,
>,
@@ -138,15 +151,28 @@ pub struct AppServerTest {
pub termination_handler: Option<std::sync::mpsc::Receiver<()>>,
}
#[derive(Debug, PartialEq, Eq, Copy, Clone)]
pub enum AppServerIoSource {
Socket(usize),
PskBroker(Public<BROKER_ID_BYTES>),
#[cfg(feature = "experiment_api")]
MioManager(crate::api::mio::MioManagerIoSource),
}
const EVENT_CAPACITY: usize = 20;
/// Holds the state of the application, namely the external IO
///
/// Responsible for file IO, network IO
// TODO add user control via unix domain socket and stdin/stdout
#[derive(Debug)]
pub struct AppServer {
pub crypt: CryptoServer,
pub crypto_site: ConstructionSite<BuildCryptoServer, CryptoServer>,
pub sockets: Vec<mio::net::UdpSocket>,
pub events: mio::Events,
pub short_poll_queue: VecDeque<mio::event::Event>,
pub performed_long_poll: bool,
pub io_source_index: HashMap<mio::Token, AppServerIoSource>,
pub mio_poll: mio::Poll,
pub mio_token_dispenser: MioTokenDispenser,
pub brokers: BrokerStore,
@@ -159,6 +185,8 @@ pub struct AppServer {
pub unpolled_count: usize,
pub last_update_time: Instant,
pub test_helpers: Option<AppServerTest>,
#[cfg(feature = "experiment_api")]
pub api_manager: crate::api::mio::MioManager,
}
/// A socket pointer is an index assigned to a socket;
@@ -507,15 +535,14 @@ impl HostPathDiscoveryEndpoint {
impl AppServer {
pub fn new(
sk: SSk,
pk: SPk,
keypair: Option<(SSk, SPk)>,
addrs: Vec<SocketAddr>,
verbosity: Verbosity,
test_helpers: Option<AppServerTest>,
) -> anyhow::Result<Self> {
// setup mio
let mio_poll = mio::Poll::new()?;
let events = mio::Events::with_capacity(20);
let events = mio::Events::with_capacity(EVENT_CAPACITY);
let mut mio_token_dispenser = MioTokenDispenser::default();
// bind each SocketAddr to a socket
@@ -590,22 +617,30 @@ impl AppServer {
}
// register all sockets to mio
for socket in sockets.iter_mut() {
mio_poll.registry().register(
socket,
mio_token_dispenser.dispense(),
Interest::READABLE,
)?;
let mut io_source_index = HashMap::new();
for (idx, socket) in sockets.iter_mut().enumerate() {
let mio_token = mio_token_dispenser.dispense();
mio_poll
.registry()
.register(socket, mio_token, Interest::READABLE)?;
let prev = io_source_index.insert(mio_token, AppServerIoSource::Socket(idx));
assert!(prev.is_none());
}
// TODO use mio::net::UnixStream together with std::os::unix::net::UnixStream for Linux
let crypto_site = match keypair {
Some((sk, pk)) => ConstructionSite::from_product(CryptoServer::new(sk, pk)),
None => ConstructionSite::new(BuildCryptoServer::empty()),
};
Ok(Self {
crypt: CryptoServer::new(sk, pk),
crypto_site,
peers: Vec::new(),
verbosity,
sockets,
events,
short_poll_queue: Default::default(),
performed_long_poll: false,
io_source_index,
mio_poll,
mio_token_dispenser,
brokers: BrokerStore::default(),
@@ -616,48 +651,78 @@ impl AppServer {
unpolled_count: 0,
last_update_time: Instant::now(),
test_helpers,
#[cfg(feature = "experiment_api")]
api_manager: crate::api::mio::MioManager::default(),
})
}
pub fn crypto_server(&self) -> anyhow::Result<&CryptoServer> {
self.crypto_site
.product_ref()
.context("Cryptography handler not initialized")
}
pub fn crypto_server_mut(&mut self) -> anyhow::Result<&mut CryptoServer> {
self.crypto_site
.product_mut()
.context("Cryptography handler not initialized")
}
pub fn verbose(&self) -> bool {
matches!(self.verbosity, Verbosity::Verbose)
}
pub fn register_listen_socket(&mut self, mut sock: mio::net::UdpSocket) -> anyhow::Result<()> {
let mio_token = self.mio_token_dispenser.dispense();
self.mio_poll
.registry()
.register(&mut sock, mio_token, mio::Interest::READABLE)?;
let io_source = self.sockets.len().apply(AppServerIoSource::Socket);
self.sockets.push(sock);
self.register_io_source(mio_token, io_source);
Ok(())
}
pub fn register_io_source(&mut self, token: mio::Token, io_source: AppServerIoSource) {
let prev = self.io_source_index.insert(token, io_source);
assert!(prev.is_none());
}
pub fn unregister_io_source(&mut self, token: mio::Token) {
let value = self.io_source_index.remove(&token);
assert!(value.is_some(), "Removed IO source that does not exist");
}
pub fn register_broker(
&mut self,
broker: Box<dyn WireguardBrokerMio<Error = anyhow::Error, MioError = anyhow::Error>>,
) -> Result<BrokerStorePtr> {
let ptr = Public::from_slice((self.brokers.store.len() as u64).as_bytes());
if self.brokers.store.insert(ptr, broker).is_some() {
bail!("Broker already registered");
}
let mio_token = self.mio_token_dispenser.dispense();
let io_source = ptr.apply(AppServerIoSource::PskBroker);
//Register broker
self.brokers
.store
.get_mut(&ptr)
.ok_or(anyhow::format_err!("Broker wasn't added to registry"))?
.register(
self.mio_poll.registry(),
self.mio_token_dispenser.dispense(),
)?;
.register(self.mio_poll.registry(), mio_token)?;
self.register_io_source(mio_token, io_source);
Ok(BrokerStorePtr(ptr))
}
pub fn unregister_broker(&mut self, ptr: BrokerStorePtr) -> Result<()> {
//Unregister broker
self.brokers
.store
.get_mut(&ptr.0)
.ok_or_else(|| anyhow::anyhow!("Broker not found"))?
.unregister(self.mio_poll.registry())?;
//Remove broker from store
self.brokers
let mut broker = self
.brokers
.store
.remove(&ptr.0)
.ok_or_else(|| anyhow::anyhow!("Broker not found"))?;
.context("Broker not found")?;
self.unregister_io_source(broker.mio_token().unwrap());
broker.unregister(self.mio_poll.registry())?;
Ok(())
}
@@ -669,8 +734,13 @@ impl AppServer {
broker_peer: Option<BrokerPeer>,
hostname: Option<String>,
) -> anyhow::Result<AppPeerPtr> {
let PeerPtr(pn) = self.crypt.add_peer(psk, pk)?;
let PeerPtr(pn) = match &mut self.crypto_site {
ConstructionSite::Void => bail!("Crypto server construction site is void"),
ConstructionSite::Builder(builder) => builder.add_peer(psk, pk),
ConstructionSite::Product(srv) => srv.add_peer(psk, pk)?,
};
assert!(pn == self.peers.len());
let initial_endpoint = hostname
.map(Endpoint::discovery_from_hostname)
.transpose()?;
@@ -712,7 +782,7 @@ impl AppServer {
);
if tries_left > 0 {
error!("re-initializing networking in {sleep}! {tries_left} tries left.");
std::thread::sleep(self.crypt.timebase.dur(sleep));
std::thread::sleep(Duration::from_secs_f64(sleep));
continue;
}
@@ -755,16 +825,31 @@ impl AppServer {
}
}
match self.poll(&mut *rx)? {
#[allow(clippy::redundant_closure_call)]
SendInitiation(peer) => tx_maybe_with!(peer, || self
.crypt
enum CryptoSrv {
Avail,
Missing,
}
let poll_result = self.poll(&mut *rx)?;
let have_crypto = match self.crypto_site.is_available() {
true => CryptoSrv::Avail,
false => CryptoSrv::Missing,
};
#[allow(clippy::redundant_closure_call)]
match (have_crypto, poll_result) {
(CryptoSrv::Missing, SendInitiation(_)) => {}
(CryptoSrv::Avail, SendInitiation(peer)) => tx_maybe_with!(peer, || self
.crypto_server_mut()?
.initiate_handshake(peer.lower(), &mut *tx))?,
#[allow(clippy::redundant_closure_call)]
SendRetransmission(peer) => tx_maybe_with!(peer, || self
.crypt
(CryptoSrv::Missing, SendRetransmission(_)) => {}
(CryptoSrv::Avail, SendRetransmission(peer)) => tx_maybe_with!(peer, || self
.crypto_server_mut()?
.retransmit_handshake(peer.lower(), &mut *tx))?,
DeleteKey(peer) => {
(CryptoSrv::Missing, DeleteKey(_)) => {}
(CryptoSrv::Avail, DeleteKey(peer)) => {
self.output_key(peer, Stale, &SymKey::random())?;
// There was a loss of connection apparently; restart host discovery
@@ -778,12 +863,15 @@ impl AppServer {
);
}
ReceivedMessage(len, endpoint) => {
(CryptoSrv::Missing, ReceivedMessage(_, _)) => {}
(CryptoSrv::Avail, ReceivedMessage(len, endpoint)) => {
let msg_result = match self.under_load {
DoSOperation::UnderLoad => {
self.handle_msg_under_load(&endpoint, &rx[..len], &mut *tx)
}
DoSOperation::Normal => self.crypt.handle_msg(&rx[..len], &mut *tx),
DoSOperation::Normal => {
self.crypto_server_mut()?.handle_msg(&rx[..len], &mut *tx)
}
};
match msg_result {
Err(ref e) => {
@@ -811,7 +899,8 @@ impl AppServer {
ap.get_app_mut(self).current_endpoint = Some(endpoint);
// TODO: Maybe we should rather call the key "rosenpass output"?
self.output_key(ap, Exchanged, &self.crypt.osk(p)?)?;
let osk = &self.crypto_server_mut()?.osk(p)?;
self.output_key(ap, Exchanged, osk)?;
}
}
}
@@ -827,9 +916,9 @@ impl AppServer {
tx: &mut [u8],
) -> Result<crate::protocol::HandleMsgResult> {
match endpoint {
Endpoint::SocketBoundAddress(socket) => {
self.crypt.handle_msg_under_load(rx, &mut *tx, socket)
}
Endpoint::SocketBoundAddress(socket) => self
.crypto_server_mut()?
.handle_msg_under_load(rx, &mut *tx, socket),
Endpoint::Discovery(_) => {
anyhow::bail!("Host-path discovery is not supported under load")
}
@@ -842,7 +931,7 @@ impl AppServer {
why: KeyOutputReason,
key: &SymKey,
) -> anyhow::Result<()> {
let peerid = peer.lower().get(&self.crypt).pidt()?;
let peerid = peer.lower().get(self.crypto_server()?).pidt()?;
if self.verbose() {
let msg = match why {
@@ -870,10 +959,14 @@ impl AppServer {
// this is intentionally writing to stdout instead of stderr, because
// it is meant to allow external detection of a successful key-exchange
println!(
let stdout = stdout();
let mut stdout = stdout.lock();
writeln!(
stdout,
"output-key peer {} key-file {of:?} {why}",
peerid.fmt_b64::<MAX_B64_PEER_ID_SIZE>()
);
)?;
stdout.flush()?;
}
peer.set_psk(self, key)?;
@@ -884,17 +977,32 @@ impl AppServer {
pub fn poll(&mut self, rx_buf: &mut [u8]) -> anyhow::Result<AppPollResult> {
use crate::protocol::PollResult as C;
use AppPollResult as A;
loop {
return Ok(match self.crypt.poll()? {
C::DeleteKey(PeerPtr(no)) => A::DeleteKey(AppPeerPtr(no)),
C::SendInitiation(PeerPtr(no)) => A::SendInitiation(AppPeerPtr(no)),
C::SendRetransmission(PeerPtr(no)) => A::SendRetransmission(AppPeerPtr(no)),
C::Sleep(timeout) => match self.try_recv(rx_buf, timeout)? {
Some((len, addr)) => A::ReceivedMessage(len, addr),
None => continue,
},
});
}
let res = loop {
// Call CryptoServer's poll (if available)
let crypto_poll = self
.crypto_site
.product_mut()
.map(|crypto| crypto.poll())
.transpose()?;
// Map crypto server's poll result to our poll result
let io_poll_timeout = match crypto_poll {
Some(C::DeleteKey(PeerPtr(no))) => break A::DeleteKey(AppPeerPtr(no)),
Some(C::SendInitiation(PeerPtr(no))) => break A::SendInitiation(AppPeerPtr(no)),
Some(C::SendRetransmission(PeerPtr(no))) => {
break A::SendRetransmission(AppPeerPtr(no))
}
Some(C::Sleep(timeout)) => timeout, // No event from crypto-server, do IO
None => crate::protocol::UNENDING, // Crypto server is uninitialized, do IO
};
// Perform IO (look for a message)
if let Some((len, addr)) = self.try_recv(rx_buf, io_poll_timeout)? {
break A::ReceivedMessage(len, addr);
}
};
Ok(res)
}
/// Tries to receive a new message
@@ -932,22 +1040,33 @@ impl AppServer {
// readiness event seems to be good enough™ for now.
// only poll if we drained all sockets before
if self.all_sockets_drained {
//Non blocked polling
self.mio_poll
.poll(&mut self.events, Some(Duration::from_secs(0)))?;
if self.events.iter().peekable().peek().is_none() {
// if there are no events, then add to blocking poll count
self.blocking_polls_count += 1;
//Execute blocking poll
self.mio_poll.poll(&mut self.events, Some(timeout))?;
} else {
self.non_blocking_polls_count += 1;
run(|| -> anyhow::Result<()> {
if !self.all_sockets_drained || !self.short_poll_queue.is_empty() {
self.unpolled_count += 1;
return Ok(());
}
} else {
self.unpolled_count += 1;
}
self.perform_mio_poll_and_register_events(Duration::from_secs(0))?; // Non-blocking poll
if !self.short_poll_queue.is_empty() {
// Got some events in non-blocking mode
self.non_blocking_polls_count += 1;
return Ok(());
}
if !self.performed_long_poll {
// go perform a full long poll before we enter blocking poll mode
// to make sure our experimental short poll feature did not miss any events
// due to being buggy.
return Ok(());
}
// Perform and register blocking poll
self.blocking_polls_count += 1;
self.perform_mio_poll_and_register_events(timeout)?;
self.performed_long_poll = false;
Ok(())
})?;
if let Some(AppServerTest {
enable_dos_permanently: true,
@@ -982,26 +1101,58 @@ impl AppServer {
}
}
// Focused polling i.e. actually using mio::Token is experimental for now.
// The reason for this is that we need to figure out how to integrate load detection
// and focused polling for one. Mio event-based polling also does not play nice with
// the current function signature and its reentrant design which is focused around receiving UDP socket packets
// for processing by the crypto protocol server.
// Besides that, there are also some parts of the code which intentionally block
// despite available data. This is the correct behavior; e.g. api::mio::Connection blocks
// further reads from its unix socket until the write buffer is flushed. In other words
// the connection handler makes sure that there is a buffer to put the response in
// before reading further requests.
// The potential problem with this behavior is that we end up ignoring instructions from
// epoll() to read from the particular sockets, so epoll will return information about that
// particular blocked file descriptor every call. We have only so many event slots and
// in theory, the event array could fill up entirely with intentionally blocked sockets.
// We need to figure out how to deal with this situation.
// Mio uses epoll in level-triggered mode, so we could handle taint-tracking for ignored
// sockets ourselves. The facilities are available in epoll and Mio, but we need to figure out how mio uses those
// facilities and how we can integrate them here.
// This will involve rewriting a lot of IO code and we should probably have integration
// tests before we approach that.
//
// This hybrid approach is not without merit though; the short poll implementation covers
// all our IO sources, so under contention, rosenpass should generally not hit the long
// poll mode below. We keep short polling and calling epoll() in non-blocking mode (timeout
// of zero) until we run out of IO events to process. Then, just before we would perform a
// blocking poll, we go through all available IO sources to see if we missed anything.
{
while let Some(ev) = self.short_poll_queue.pop_front() {
if let Some(v) = self.try_recv_from_mio_token(buf, ev.token())? {
return Ok(Some(v));
}
}
}
// drain all sockets
let mut would_block_count = 0;
for (sock_no, socket) in self.sockets.iter_mut().enumerate() {
match socket.recv_from(buf) {
Ok((n, addr)) => {
for sock_no in 0..self.sockets.len() {
match self
.try_recv_from_listen_socket(buf, sock_no)
.io_err_kind_hint()
{
Ok(None) => continue,
Ok(Some(v)) => {
// at least one socket was not drained...
self.all_sockets_drained = false;
return Ok(Some((
n,
Endpoint::SocketBoundAddress(SocketBoundEndpoint::new(
SocketPtr(sock_no),
addr,
)),
)));
return Ok(Some(v));
}
Err(e) if e.kind() == ErrorKind::WouldBlock => {
Err((_, ErrorKind::WouldBlock)) => {
would_block_count += 1;
}
// TODO if one socket continuously returns an error, then we never poll, thus we never wait for a timeout, thus we have a spin-lock
Err(e) => return Err(e.into()),
Err((e, _)) => return Err(e)?,
}
}
@@ -1013,6 +1164,127 @@ impl AppServer {
broker.process_poll()?;
}
// API poll
#[cfg(feature = "experiment_api")]
{
use crate::api::mio::MioManagerContext;
MioManagerFocus(self).poll()?;
}
self.performed_long_poll = true;
Ok(None)
}
fn perform_mio_poll_and_register_events(&mut self, timeout: Duration) -> io::Result<()> {
self.mio_poll.poll(&mut self.events, Some(timeout))?;
// Fill the short poll buffer with the acquired events
self.events
.iter()
.cloned()
.for_each(|v| self.short_poll_queue.push_back(v));
Ok(())
}
fn try_recv_from_mio_token(
&mut self,
buf: &mut [u8],
token: mio::Token,
) -> anyhow::Result<Option<(usize, Endpoint)>> {
let io_source = match self.io_source_index.get(&token) {
Some(io_source) => *io_source,
None => {
log::warn!("No IO source assiociated with mio token ({token:?}). Polling using mio tokens directly is an experimental feature and IO handler should recover when all available io sources are polled. This is a developer error. Please report it.");
return Ok(None);
}
};
self.try_recv_from_io_source(buf, io_source)
}
fn try_recv_from_io_source(
&mut self,
buf: &mut [u8],
io_source: AppServerIoSource,
) -> anyhow::Result<Option<(usize, Endpoint)>> {
match io_source {
AppServerIoSource::Socket(idx) => self
.try_recv_from_listen_socket(buf, idx)
.substitute_for_ioerr_wouldblock(None)?
.ok(),
AppServerIoSource::PskBroker(key) => self
.brokers
.store
.get_mut(&key)
.with_context(|| format!("No PSK broker under key {key:?}"))?
.process_poll()
.map(|_| None),
#[cfg(feature = "experiment_api")]
AppServerIoSource::MioManager(mmio_src) => {
use crate::api::mio::MioManagerContext;
MioManagerFocus(self)
.poll_particular(mmio_src)
.map(|_| None)
}
}
}
fn try_recv_from_listen_socket(
&mut self,
buf: &mut [u8],
idx: usize,
) -> io::Result<Option<(usize, Endpoint)>> {
use std::io::ErrorKind as K;
let (n, addr) = loop {
match self.sockets[idx].recv_from(buf).io_err_kind_hint() {
Ok(v) => break v,
Err((_, K::Interrupted)) => continue,
Err((e, _)) => return Err(e)?,
}
};
SocketPtr(idx)
.apply(|sp| SocketBoundEndpoint::new(sp, addr))
.apply(Endpoint::SocketBoundAddress)
.apply(|ep| (n, ep))
.some()
.ok()
}
#[cfg(feature = "experiment_api")]
pub fn add_api_connection(&mut self, connection: mio::net::UnixStream) -> std::io::Result<()> {
use crate::api::mio::MioManagerContext;
MioManagerFocus(self).add_connection(connection)
}
#[cfg(feature = "experiment_api")]
pub fn add_api_listener(&mut self, listener: mio::net::UnixListener) -> std::io::Result<()> {
use crate::api::mio::MioManagerContext;
MioManagerFocus(self).add_listener(listener)
}
}
#[cfg(feature = "experiment_api")]
struct MioManagerFocus<'a>(&'a mut AppServer);
#[cfg(feature = "experiment_api")]
impl crate::api::mio::MioManagerContext for MioManagerFocus<'_> {
fn mio_manager(&self) -> &crate::api::mio::MioManager {
&self.0.api_manager
}
fn mio_manager_mut(&mut self) -> &mut crate::api::mio::MioManager {
&mut self.0.api_manager
}
fn app_server(&self) -> &AppServer {
self.0
}
fn app_server_mut(&mut self) -> &mut AppServer {
self.0
}
}

View File

@@ -0,0 +1,92 @@
use anyhow::{Context, Result};
use heck::ToShoutySnakeCase;
use rosenpass_ciphers::{hash_domain::HashDomain, KEY_LEN};
fn calculate_hash_value(hd: HashDomain, values: &[&str]) -> Result<[u8; KEY_LEN]> {
match values.split_first() {
Some((head, tail)) => calculate_hash_value(hd.mix(head.as_bytes())?, tail),
None => Ok(hd.into_value()),
}
}
fn print_literal(path: &[&str]) -> Result<()> {
let val = calculate_hash_value(HashDomain::zero(), path)?;
let (last, prefix) = path.split_last().context("developer error!")?;
let var_name = last.to_shouty_snake_case();
print!("// hash domain hash of: ");
for n in prefix.iter() {
print!("{n} -> ");
}
println!("{last}");
let c = hex::encode(val)
.chars()
.collect::<Vec<char>>()
.chunks_exact(4)
.map(|chunk| chunk.iter().collect::<String>())
.collect::<Vec<_>>();
println!("const {var_name} : RawMsgType = RawMsgType::from_le_bytes(hex!(\"{} {} {} {} {} {} {} {}\"));",
c[0], c[1], c[2], c[3], c[4], c[5], c[6], c[7]);
Ok(())
}
#[derive(Debug, Clone)]
enum Tree {
Branch(String, Vec<Tree>),
Leaf(String),
}
impl Tree {
fn name(&self) -> &str {
match self {
Self::Branch(name, _) => name,
Self::Leaf(name) => name,
}
}
fn gen_code_inner(&self, prefix: &[&str]) -> Result<()> {
let mut path = prefix.to_owned();
path.push(self.name());
match self {
Self::Branch(_, ref children) => {
for c in children.iter() {
c.gen_code_inner(&path)?
}
}
Self::Leaf(_) => print_literal(&path)?,
};
Ok(())
}
fn gen_code(&self) -> Result<()> {
self.gen_code_inner(&[])
}
}
fn main() -> Result<()> {
let tree = Tree::Branch(
"Rosenpass IPC API".to_owned(),
vec![Tree::Branch(
"Rosenpass Protocol Server".to_owned(),
vec![
Tree::Leaf("Ping Request".to_owned()),
Tree::Leaf("Ping Response".to_owned()),
Tree::Leaf("Supply Keypair Request".to_owned()),
Tree::Leaf("Supply Keypair Response".to_owned()),
Tree::Leaf("Add Listen Socket Request".to_owned()),
Tree::Leaf("Add Listen Socket Response".to_owned()),
Tree::Leaf("Add Psk Broker Request".to_owned()),
Tree::Leaf("Add Psk Broker Response".to_owned()),
],
)],
);
println!("type RawMsgType = u128;");
println!();
tree.gen_code()
}

View File

@@ -1,15 +1,13 @@
use anyhow::{bail, ensure};
use anyhow::{bail, ensure, Context};
use clap::{Parser, Subcommand};
use rosenpass_cipher_traits::Kem;
use rosenpass_ciphers::kem::StaticKem;
use rosenpass_secret_memory::file::StoreSecret;
use rosenpass_secret_memory::{
secret_policy_try_use_memfd_secrets, secret_policy_use_only_malloc_secrets,
};
use rosenpass_util::file::{LoadValue, LoadValueB64};
use rosenpass_util::file::{LoadValue, LoadValueB64, StoreValue};
use rosenpass_wireguard_broker::brokers::native_unix::{
NativeUnixBroker, NativeUnixBrokerConfigBaseBuilder, NativeUnixBrokerConfigBaseBuilderError,
};
use std::ops::DerefMut;
use std::path::PathBuf;
use crate::app_server::AppServerTest;
@@ -18,27 +16,94 @@ use crate::protocol::{SPk, SSk, SymKey};
use super::config;
#[cfg(feature = "experiment_api")]
use {
command_fds::{CommandFdExt, FdMapping},
log::{error, info},
mio::net::UnixStream,
rosenpass_util::fd::claim_fd,
rosenpass_wireguard_broker::brokers::mio_client::MioBrokerClient,
rosenpass_wireguard_broker::WireguardBrokerMio,
rustix::net::{socketpair, AddressFamily, SocketFlags, SocketType},
std::os::fd::AsRawFd,
std::os::unix::net,
std::process::Command,
std::thread,
};
/// enum representing a choice of interface to a WireGuard broker
#[derive(Debug)]
pub enum BrokerInterface {
Socket(PathBuf),
FileDescriptor(i32),
SocketPair,
}
/// struct holding all CLI arguments for `clap` crate to parse
#[derive(Parser, Debug)]
#[command(author, version, about, long_about)]
#[command(author, version, about, long_about, arg_required_else_help = true)]
pub struct CliArgs {
/// lowest log level to show log messages at higher levels will be omitted
/// Lowest log level to show
#[arg(long = "log-level", value_name = "LOG_LEVEL", group = "log-level")]
log_level: Option<log::LevelFilter>,
/// show verbose log output sets log level to "debug"
/// Show verbose log output; sets log level to "debug"
#[arg(short, long, group = "log-level")]
verbose: bool,
/// show no log output sets log level to "error"
/// Show no log output; sets log level to "error"
#[arg(short, long, group = "log-level")]
quiet: bool,
#[command(flatten)]
#[cfg(feature = "experiment_api")]
api: crate::api::cli::ApiCli,
/// Path of the `wireguard_psk` broker socket to connect to
#[cfg(feature = "experiment_api")]
#[arg(long, group = "psk-broker-specs")]
psk_broker_path: Option<PathBuf>,
/// File descriptor of the `wireguard_psk` broker socket to connect to
///
/// When this command is called from another process, the parent process can
/// open and bind the Unix socket for the PSK broker connection itself and
/// pass it to this process; in Rust this can be achieved using the
/// [command-fds](https://docs.rs/command-fds/latest/command_fds/) crate
/// (see the sketch after this struct).
#[cfg(feature = "experiment_api")]
#[arg(long, group = "psk-broker-specs")]
psk_broker_fd: Option<i32>,
/// Spawn a PSK broker locally using a socket pair
#[cfg(feature = "experiment_api")]
#[arg(short, long, group = "psk-broker-specs")]
psk_broker_spawn: bool,
#[command(subcommand)]
pub command: CliCommand,
pub command: Option<CliCommand>,
/// Generate man pages for the CLI
///
/// This option is used to generate man pages for Rosenpass in the specified
/// directory and exit.
#[clap(long, value_name = "out_dir")]
pub generate_manpage: Option<PathBuf>,
/// Generate completion file for a shell
///
/// This option is used to generate completion files for the specified shell
#[clap(long, value_name = "shell")]
pub print_completions: Option<clap_complete::Shell>,
}
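
A sketch of the parent-process side hinted at in the `psk_broker_fd` doc comment above: the parent creates a socket pair, keeps one end for itself (to act as, or relay to, the PSK broker) and maps the other end to file descriptor 3 in the spawned rosenpass process. The binary name, config path and the choice of fd 3 are illustrative; the `FdMapping` usage mirrors the socket-handler spawn further down in this file.

use anyhow::Result;
use command_fds::{CommandFdExt, FdMapping};
use std::os::fd::AsRawFd;
use std::os::unix::net::UnixStream;
use std::process::{Child, Command};

fn spawn_rosenpass_with_broker_fd() -> Result<Child> {
    // One end stays with the parent, the other is inherited by rosenpass as fd 3.
    // In a real parent, `_ours` would be kept and served using the broker protocol.
    let (_ours, theirs) = UnixStream::pair()?;
    let child = Command::new("rosenpass")
        .args([
            "--psk-broker-fd",
            "3",
            "exchange-config",
            "/etc/rosenpass/example.toml", // hypothetical config path
        ])
        .fd_mappings(vec![FdMapping {
            parent_fd: theirs.as_raw_fd(),
            child_fd: 3,
        }])?
        .spawn()?;
    Ok(child)
}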
impl CliArgs {
pub fn apply_to_config(&self, _cfg: &mut config::Rosenpass) -> anyhow::Result<()> {
#[cfg(feature = "experiment_api")]
self.api.apply_to_config(_cfg)?;
Ok(())
}
/// returns the log level filter set by CLI args
/// returns `None` if the user did not specify any log level filter via CLI
///
@@ -50,32 +115,54 @@ impl CliArgs {
return Some(log::LevelFilter::Info);
}
if self.quiet {
return Some(log::LevelFilter::Error);
return Some(log::LevelFilter::Warn);
}
if let Some(level_filter) = self.log_level {
return Some(level_filter);
}
None
}
#[cfg(feature = "experiment_api")]
/// returns the broker interface set by CLI args
/// returns `None` if no broker interface was specified via CLI
pub fn get_broker_interface(&self) -> Option<BrokerInterface> {
if let Some(path_ref) = self.psk_broker_path.as_ref() {
Some(BrokerInterface::Socket(path_ref.to_path_buf()))
} else if let Some(fd) = self.psk_broker_fd {
Some(BrokerInterface::FileDescriptor(fd))
} else if self.psk_broker_spawn {
Some(BrokerInterface::SocketPair)
} else {
None
}
}
#[cfg(not(feature = "experiment_api"))]
/// returns the broker interface set by CLI args
/// returns `None` if the `experiment_api` feature isn't enabled
pub fn get_broker_interface(&self) -> Option<BrokerInterface> {
None
}
}
/// represents a command specified via CLI
#[derive(Subcommand, Debug)]
pub enum CliCommand {
/// Start Rosenpass in server mode and carry on with the key exchange
/// Start Rosenpass key exchanges based on a configuration file
///
/// This will parse the configuration file and perform the key exchange
/// with the specified peers. If a peer's endpoint is specified, this
/// Rosenpass instance will try to initiate a key exchange with the peer,
/// otherwise only initiation attempts from the peer will be responded to.
/// This will parse the configuration file and perform key exchanges with
/// the specified peers. If a peer's endpoint is specified, this Rosenpass
/// instance will try to initiate a key exchange with the peer; otherwise,
/// only initiation attempts from other peers will be responded to.
ExchangeConfig { config_file: PathBuf },
/// Start in daemon mode, performing key exchanges
/// Start Rosenpass key exchanges based on command line arguments
///
/// The configuration is read from the command line. The `peer` token
/// always separates multiple peers, e. g. if the token `peer` appears
/// in the WIREGUARD_EXTRA_ARGS it is not put into the WireGuard arguments
/// but instead a new peer is created.
/// The configuration is read from the command line. The `peer` token always
/// separates multiple peers, e.g., if the token `peer` appears in the
/// WIREGUARD_EXTRA_ARGS, it is not put into the WireGuard arguments but
/// instead a new peer is created.
/* Explanation: `first_arg` and `rest_of_args` are combined into one
* `Vec<String>`. They are only used to trick clap into displaying some
* guidance on the CLI usage.
@@ -104,7 +191,10 @@ pub enum CliCommand {
config_file: Option<PathBuf>,
},
/// Generate a demo config file
/// Generate a demo config file for Rosenpass
///
/// The generated config file will contain a single peer and all common
/// options.
GenConfig {
config_file: PathBuf,
@@ -113,19 +203,19 @@ pub enum CliCommand {
force: bool,
},
/// Generate the keys mentioned in a configFile
/// Generate secret & public key for Rosenpass
///
/// Generates secret- & public-key to their destination. If a config file
/// is provided then the key file destination is taken from there.
/// Otherwise the
/// Generates secret & public key to their destination. If a config file is
/// provided then the key file destination is taken from there, otherwise
/// the destination is taken from the CLI arguments.
GenKeys {
config_file: Option<PathBuf>,
/// where to write public-key to
/// Where to write public key to
#[clap(short, long)]
public_key: Option<PathBuf>,
/// where to write secret-key to
/// Where to write secret key to
#[clap(short, long)]
secret_key: Option<PathBuf>,
@@ -134,67 +224,57 @@ pub enum CliCommand {
force: bool,
},
/// Deprecated - use gen-keys instead
/// Validate a configuration file
///
/// This command will validate the configuration file and print any errors
/// it finds. If the configuration file is valid, it will print a success message.
/// Defined secret & public keys are checked for existence and validity.
Validate { config_files: Vec<PathBuf> },
/// DEPRECATED - use the gen-keys command instead
#[allow(rustdoc::broken_intra_doc_links)]
#[allow(rustdoc::invalid_html_tags)]
#[command(hide = true)]
Keygen {
// NOTE yes, the legacy keygen argument initially really accepted "privet-key", not "secret-key"!
// NOTE yes, the legacy keygen argument initially really accepted
// "private-key", not "secret-key"!
/// public-key <PATH> private-key <PATH>
args: Vec<String>,
},
/// Validate a configuration
Validate { config_files: Vec<PathBuf> },
/// Show the rosenpass manpage
// TODO make this the default, but only after the manpage has been adjusted once the CLI stabilizes
Man,
}
impl CliCommand {
/// runs the command specified via CLI
impl CliArgs {
/// Runs the command specified via CLI
///
/// ## TODO
/// - This method consumes the [`CliArgs`] value. It might be wise to use a reference...
pub fn run(self, test_helpers: Option<AppServerTest>) -> anyhow::Result<()> {
//Specify secret policy
#[cfg(feature = "enable_memfd_alloc")]
secret_policy_try_use_memfd_secrets();
#[cfg(not(feature = "enable_memfd_alloc"))]
secret_policy_use_only_malloc_secrets();
pub fn run(
self,
broker_interface: Option<BrokerInterface>,
test_helpers: Option<AppServerTest>,
) -> anyhow::Result<()> {
use CliCommand::*;
match self {
Man => {
let man_cmd = std::process::Command::new("man")
.args(["1", "rosenpass"])
.status();
if !(man_cmd.is_ok() && man_cmd.unwrap().success()) {
println!(include_str!(env!("ROSENPASS_MAN")));
}
}
GenConfig { config_file, force } => {
match &self.command {
Some(GenConfig { config_file, force }) => {
ensure!(
force || !config_file.exists(),
*force || !config_file.exists(),
"config file {config_file:?} already exists"
);
config::Rosenpass::example_config().store(config_file)?;
std::fs::write(config_file, config::EXAMPLE_CONFIG)?;
}
// Deprecated - use gen-keys instead
Keygen { args } => {
Some(Keygen { args }) => {
log::warn!("The 'keygen' command is deprecated. Please use the 'gen-keys' command instead.");
let mut public_key: Option<PathBuf> = None;
let mut secret_key: Option<PathBuf> = None;
// Manual arg parsing, since clap wants to prefix flags with "--"
let mut args = args.into_iter();
let mut args = args.iter();
loop {
match (args.next().as_deref(), args.next()) {
match (args.next().map(|x| x.as_str()), args.next()) {
(Some("private-key"), Some(opt)) | (Some("secret-key"), Some(opt)) => {
secret_key = Some(opt.into());
}
@@ -218,12 +298,12 @@ impl CliCommand {
generate_and_save_keypair(secret_key.unwrap(), public_key.unwrap())?;
}
GenKeys {
Some(GenKeys {
config_file,
public_key,
secret_key,
force,
} => {
}) => {
// figure out where the key file is specified, in the config file or directly as flag?
let (pkf, skf) = match (config_file, public_key, secret_key) {
(Some(config_file), _, _) => {
@@ -233,10 +313,13 @@ impl CliCommand {
);
let config = config::Rosenpass::load(config_file)?;
let keypair = config
.keypair
.context("Config file present, but no keypair is specified.")?;
(config.public_key, config.secret_key)
(keypair.public_key, keypair.secret_key)
}
(_, Some(pkf), Some(skf)) => (pkf, skf),
(_, Some(pkf), Some(skf)) => (pkf.clone(), skf.clone()),
_ => {
bail!("either a config-file or both public-key and secret-key file are required")
}
@@ -246,12 +329,14 @@ impl CliCommand {
let mut problems = vec![];
if !force && pkf.is_file() {
problems.push(format!(
"public-key file {pkf:?} exist, refusing to overwrite it"
"public-key file {:?} exists, refusing to overwrite",
std::fs::canonicalize(&pkf)?,
));
}
if !force && skf.is_file() {
problems.push(format!(
"secret-key file {skf:?} exist, refusing to overwrite it"
"secret-key file {:?} exists, refusing to overwrite",
std::fs::canonicalize(&skf)?,
));
}
if !problems.is_empty() {
@@ -262,48 +347,57 @@ impl CliCommand {
generate_and_save_keypair(skf, pkf)?;
}
ExchangeConfig { config_file } => {
Some(ExchangeConfig { config_file }) => {
ensure!(
config_file.exists(),
"config file '{config_file:?}' does not exist"
);
let config = config::Rosenpass::load(config_file)?;
let mut config = config::Rosenpass::load(config_file)?;
config.validate()?;
Self::event_loop(config, test_helpers)?;
self.apply_to_config(&mut config)?;
config.check_usefullness()?;
Self::event_loop(config, broker_interface, test_helpers)?;
}
Exchange {
Some(Exchange {
first_arg,
mut rest_of_args,
rest_of_args,
config_file,
} => {
rest_of_args.insert(0, first_arg);
}) => {
let mut rest_of_args = rest_of_args.clone();
rest_of_args.insert(0, first_arg.clone());
let args = rest_of_args;
let mut config = config::Rosenpass::parse_args(args)?;
if let Some(p) = config_file {
config.store(&p)?;
config.config_file_path = p;
config.store(p)?;
config.config_file_path.clone_from(p);
}
config.validate()?;
Self::event_loop(config, test_helpers)?;
self.apply_to_config(&mut config)?;
config.check_usefullness()?;
Self::event_loop(config, broker_interface, test_helpers)?;
}
Validate { config_files } => {
Some(Validate { config_files }) => {
for file in config_files {
match config::Rosenpass::load(&file) {
match config::Rosenpass::load(file) {
Ok(config) => {
eprintln!("{file:?} is valid TOML and conforms to the expected schema");
match config.validate() {
Ok(_) => eprintln!("{file:?} has passed all logical checks"),
Err(_) => eprintln!("{file:?} contains logical errors"),
Err(err) => eprintln!("{file:?} contains logical errors: '{err}'"),
}
}
Err(e) => eprintln!("{file:?} is not valid: {e}"),
}
}
}
&None => {} // clap prints help if no command is given
}
Ok(())
@@ -311,23 +405,34 @@ impl CliCommand {
fn event_loop(
config: config::Rosenpass,
broker_interface: Option<BrokerInterface>,
test_helpers: Option<AppServerTest>,
) -> anyhow::Result<()> {
const MAX_PSK_SIZE: usize = 1000;
// load own keys
let sk = SSk::load(&config.secret_key)?;
let pk = SPk::load(&config.public_key)?;
let keypair = config
.keypair
.as_ref()
.map(|kp| -> anyhow::Result<_> {
let sk = SSk::load(&kp.secret_key)?;
let pk = SPk::load(&kp.public_key)?;
Ok((sk, pk))
})
.transpose()?;
// start an application server
let mut srv = std::boxed::Box::<AppServer>::new(AppServer::new(
sk,
pk,
config.listen,
keypair,
config.listen.clone(),
config.verbosity,
test_helpers,
)?);
let broker_store_ptr = srv.register_broker(Box::new(NativeUnixBroker::new()))?;
config.apply_to_app_server(&mut srv)?;
let broker = Self::create_broker(broker_interface)?;
let broker_store_ptr = srv.register_broker(broker)?;
fn cfg_err_map(e: NativeUnixBrokerConfigBaseBuilderError) -> anyhow::Error {
anyhow::Error::msg(format!("NativeUnixBrokerConfigBaseBuilderError: {:?}", e))
@@ -364,13 +469,102 @@ impl CliCommand {
srv.event_loop()
}
#[cfg(feature = "experiment_api")]
fn create_broker(
broker_interface: Option<BrokerInterface>,
) -> Result<
Box<dyn WireguardBrokerMio<MioError = anyhow::Error, Error = anyhow::Error>>,
anyhow::Error,
> {
if let Some(interface) = broker_interface {
let socket = Self::get_broker_socket(interface)?;
Ok(Box::new(MioBrokerClient::new(socket)))
} else {
Ok(Box::new(NativeUnixBroker::new()))
}
}
#[cfg(not(feature = "experiment_api"))]
fn create_broker(
_broker_interface: Option<BrokerInterface>,
) -> Result<Box<NativeUnixBroker>, anyhow::Error> {
Ok(Box::new(NativeUnixBroker::new()))
}
#[cfg(feature = "experiment_api")]
fn get_broker_socket(broker_interface: BrokerInterface) -> Result<UnixStream, anyhow::Error> {
// Connect to the psk broker unix socket if one was specified
// OR OTHERWISE spawn the psk broker and use socketpair(2) to connect with them
match broker_interface {
BrokerInterface::Socket(broker_path) => Ok(UnixStream::connect(broker_path)?),
BrokerInterface::FileDescriptor(broker_fd) => {
// mio::net::UnixStream doesn't implement From<OwnedFd>, so we have to go through std
let sock = net::UnixStream::from(claim_fd(broker_fd)?);
sock.set_nonblocking(true)?;
Ok(UnixStream::from_std(sock))
}
BrokerInterface::SocketPair => {
// Form a socketpair for communicating to the broker
let (ours, theirs) = socketpair(
AddressFamily::UNIX,
SocketType::STREAM,
SocketFlags::empty(),
None,
)?;
// Setup our end of the socketpair
let ours = net::UnixStream::from(ours);
ours.set_nonblocking(true)?;
// Start the PSK broker
let mut child = Command::new("rosenpass-wireguard-broker-socket-handler")
.args(["--stream-fd", "3"])
.fd_mappings(vec![FdMapping {
parent_fd: theirs.as_raw_fd(),
child_fd: 3,
}])?
.spawn()?;
// Handle the PSK broker crashing
thread::spawn(move || {
let status = child.wait();
if let Ok(status) = status {
if status.success() {
// Maybe they are doing double forking?
info!("PSK broker exited.");
} else {
error!("PSK broker exited with an error ({status:?})");
}
} else {
error!("Wait on PSK broker process failed ({status:?})");
}
});
Ok(UnixStream::from_std(ours))
}
}
}
}
/// generate secret and public keys, store in files according to the paths passed as arguments
fn generate_and_save_keypair(secret_key: PathBuf, public_key: PathBuf) -> anyhow::Result<()> {
let mut ssk = crate::protocol::SSk::random();
let mut spk = crate::protocol::SPk::random();
StaticKem::keygen(ssk.secret_mut(), spk.secret_mut())?;
StaticKem::keygen(ssk.secret_mut(), spk.deref_mut())?;
ssk.store_secret(secret_key)?;
spk.store(public_key)
}
#[cfg(feature = "internal_testing")]
pub mod testing {
use super::*;
pub fn generate_and_save_keypair(
secret_key: PathBuf,
public_key: PathBuf,
) -> anyhow::Result<()> {
super::generate_and_save_keypair(secret_key, public_key)
}
}

View File

@@ -6,7 +6,8 @@
//! ## TODO
//! - support `~` in <https://github.com/rosenpass/rosenpass/issues/237>
//! - provide tooling to create config file from shell <https://github.com/rosenpass/rosenpass/issues/247>
use crate::protocol::{SPk, SSk};
use rosenpass_util::file::LoadValue;
use std::{
collections::HashSet,
fs,
@@ -19,13 +20,28 @@ use anyhow::{bail, ensure};
use rosenpass_util::file::{fopen_w, Visibility};
use serde::{Deserialize, Serialize};
use crate::app_server::AppServer;
#[cfg(feature = "experiment_api")]
fn empty_api_config() -> crate::api::config::ApiConfig {
crate::api::config::ApiConfig {
listen_path: Vec::new(),
listen_fd: Vec::new(),
stream_fd: Vec::new(),
}
}
#[derive(Debug, Serialize, Deserialize)]
pub struct Rosenpass {
/// path to the public key file
pub public_key: PathBuf,
// TODO: Raise error if secret key or public key alone is set during deserialization
// SEE: https://github.com/serde-rs/serde/issues/2793
#[serde(flatten)]
pub keypair: Option<Keypair>,
/// path to the secret key file
pub secret_key: PathBuf,
/// Location of the API listen sockets
#[cfg(feature = "experiment_api")]
#[serde(default = "empty_api_config")]
pub api: crate::api::config::ApiConfig,
/// list of [`SocketAddr`] to listen on
///
@@ -52,9 +68,29 @@ pub struct Rosenpass {
pub config_file_path: PathBuf,
}
#[derive(Debug, Deserialize, Serialize, PartialEq, Eq, Clone)]
pub struct Keypair {
/// path to the public key file
pub public_key: PathBuf,
/// path to the secret key file
pub secret_key: PathBuf,
}
impl Keypair {
pub fn new<Pk: AsRef<Path>, Sk: AsRef<Path>>(public_key: Pk, secret_key: Sk) -> Self {
let public_key = public_key.as_ref().to_path_buf();
let secret_key = secret_key.as_ref().to_path_buf();
Self {
public_key,
secret_key,
}
}
}
/// ## TODO
/// - replace this type with [`log::LevelFilter`], also see <https://github.com/rosenpass/rosenpass/pull/246>
#[derive(Debug, PartialEq, Eq, Serialize, Deserialize)]
#[derive(Debug, PartialEq, Eq, Serialize, Deserialize, Copy, Clone)]
pub enum Verbosity {
Quiet,
Verbose,
@@ -107,6 +143,12 @@ pub struct WireGuard {
pub extra_params: Vec<String>,
}
impl Default for Rosenpass {
fn default() -> Self {
Self::empty()
}
}
impl Rosenpass {
/// load configuration from a TOML file
///
@@ -122,8 +164,10 @@ impl Rosenpass {
// resolve `~` (see https://github.com/rosenpass/rosenpass/issues/237)
use util::resolve_path_with_tilde;
resolve_path_with_tilde(&mut config.public_key);
resolve_path_with_tilde(&mut config.secret_key);
if let Some(ref mut keypair) = config.keypair {
resolve_path_with_tilde(&mut keypair.public_key);
resolve_path_with_tilde(&mut keypair.secret_key);
}
for peer in config.peers.iter_mut() {
resolve_path_with_tilde(&mut peer.public_key);
if let Some(ref mut psk) = &mut peer.pre_shared_key {
@@ -157,25 +201,43 @@ impl Rosenpass {
self.store(&self.config_file_path)
}
/// Validate a configuration
///
/// ## TODO
/// - check that files do not just exist but are also readable
/// - warn if neither out_key nor exchange_command of a peer is defined (v.i.)
pub fn validate(&self) -> anyhow::Result<()> {
// check the public key file exists
ensure!(
self.public_key.is_file(),
"could not find public-key file {:?}: no such file",
self.public_key
);
pub fn apply_to_app_server(&self, _srv: &mut AppServer) -> anyhow::Result<()> {
#[cfg(feature = "experiment_api")]
self.api.apply_to_app_server(_srv)?;
Ok(())
}
// check the secret-key file exists
ensure!(
self.secret_key.is_file(),
"could not find secret-key file {:?}: no such file",
self.secret_key
);
/// Validate a configuration
pub fn validate(&self) -> anyhow::Result<()> {
if let Some(ref keypair) = self.keypair {
// check the public key file exists
ensure!(
keypair.public_key.is_file(),
"could not find public-key file {:?}: no such file. Consider running `rosenpass gen-keys` to generate a new keypair.",
keypair.public_key
);
// check the public-key file is a valid key
ensure!(
SPk::load(&keypair.public_key).is_ok(),
"could not load public-key file {:?}: invalid key",
keypair.public_key
);
// check the secret-key file exists
ensure!(
keypair.secret_key.is_file(),
"could not find secret-key file {:?}: no such file. Consider running `rosenpass gen-keys` to generate a new keypair.",
keypair.secret_key
);
// check the secret-key file is a valid key
ensure!(
SSk::load(&keypair.secret_key).is_ok(),
"could not load public-key file {:?}: invalid key",
keypair.secret_key
);
}
for (i, peer) in self.peers.iter().enumerate() {
// check peer's public-key file exists
@@ -185,6 +247,13 @@ impl Rosenpass {
peer.public_key
);
// check peer's public-key file is a valid key
ensure!(
SPk::load(&peer.public_key).is_ok(),
"peer {i} public-key file {:?} is invalid",
peer.public_key
);
// check endpoint is usable
if let Some(addr) = peer.endpoint.as_ref() {
ensure!(
@@ -194,18 +263,57 @@ impl Rosenpass {
);
}
// TODO warn if neither out_key nor exchange_command is defined
// check if `key_out` or `device` and `peer` are defined
if peer.key_out.is_none() {
if let Some(wg) = &peer.wg {
if wg.device.is_empty() || wg.peer.is_empty() {
ensure!(
false,
"peer {i} has neither `key_out` nor valid wireguard config defined"
);
}
} else {
ensure!(
false,
"peer {i} has neither `key_out` nor valid wireguard config defined"
);
}
}
}
Ok(())
}
pub fn check_usefullness(&self) -> anyhow::Result<()> {
#[cfg(not(feature = "experiment_api"))]
ensure!(self.keypair.is_some(), "Server keypair missing.");
#[cfg(feature = "experiment_api")]
ensure!(
self.keypair.is_some() || self.api.has_api_sources(),
"{}{}",
"Specify a server keypair or some API connections to configure the keypair with.",
"Without a keypair, rosenpass can not operate."
);
Ok(())
}
pub fn empty() -> Self {
Self::new(None)
}
pub fn from_sk_pk<Sk: AsRef<Path>, Pk: AsRef<Path>>(sk: Sk, pk: Pk) -> Self {
Self::new(Some(Keypair::new(pk, sk)))
}
/// Creates a new configuration
pub fn new<P1: AsRef<Path>, P2: AsRef<Path>>(public_key: P1, secret_key: P2) -> Self {
pub fn new(keypair: Option<Keypair>) -> Self {
Self {
public_key: PathBuf::from(public_key.as_ref()),
secret_key: PathBuf::from(secret_key.as_ref()),
keypair,
listen: vec![],
#[cfg(feature = "experiment_api")]
api: crate::api::config::ApiConfig::default(),
verbosity: Verbosity::Quiet,
peers: vec![],
config_file_path: PathBuf::new(),
@@ -228,7 +336,7 @@ impl Rosenpass {
/// from chaotic args
/// Quest: the grammar is undecidable, what do we do here?
pub fn parse_args(args: Vec<String>) -> anyhow::Result<Self> {
let mut config = Self::new("", "");
let mut config = Self::new(Some(Keypair::new("", "")));
#[derive(Debug, Hash, PartialEq, Eq)]
enum State {
@@ -289,7 +397,7 @@ impl Rosenpass {
already_set.insert(OwnPublicKey),
"public-key was already set"
);
config.public_key = pk.into();
config.keypair.as_mut().unwrap().public_key = pk.into();
Own
}
(OwnSecretKey, sk, None) => {
@@ -297,7 +405,7 @@ impl Rosenpass {
already_set.insert(OwnSecretKey),
"secret-key was already set"
);
config.secret_key = sk.into();
config.keypair.as_mut().unwrap().secret_key = sk.into();
Own
}
(OwnListen, l, None) => {
@@ -416,45 +524,146 @@ impl Rosenpass {
}
}
impl Rosenpass {
/// Generate an example configuration
pub fn example_config() -> Self {
let peer = RosenpassPeer {
public_key: "/path/to/rp-peer-public-key".into(),
endpoint: Some("my-peer.test:9999".into()),
key_out: Some("/path/to/rp-key-out.txt".into()),
pre_shared_key: Some("additional pre shared key".into()),
wg: Some(WireGuard {
device: "wirgeguard device e.g. wg0".into(),
peer: "wireguard public key".into(),
extra_params: vec!["passed to".into(), "wg set".into()],
}),
};
Self {
public_key: "/path/to/rp-public-key".into(),
secret_key: "/path/to/rp-secret-key".into(),
peers: vec![peer],
..Self::new("", "")
}
}
}
impl Default for Verbosity {
fn default() -> Self {
Self::Quiet
}
}
pub static EXAMPLE_CONFIG: &str = r###"public_key = "/path/to/rp-public-key"
secret_key = "/path/to/rp-secret-key"
listen = []
verbosity = "Verbose"
[[peers]]
# Commented out fields are optional
public_key = "/path/to/rp-peer-public-key"
endpoint = "127.0.0.1:9998"
# pre_shared_key = "/path/to/preshared-key"
# Choose to store the key in a file via `key_out` or pass it to WireGuard by
# defining `device` and `peer`. You may choose to do both.
key_out = "/path/to/rp-key-out.txt" # path to store the key
# device = "wg0" # WireGuard interface
# peer = "RULdRAtUw7SFfVfGD..." # WireGuard public key
# extra_params = [] # passed to WireGuard `wg set`
"###;
#[cfg(test)]
mod test {
use super::*;
use std::net::IpAddr;
use std::{borrow::Borrow, net::IpAddr};
fn toml_des<S: Borrow<str>>(s: S) -> Result<toml::Table, toml::de::Error> {
toml::from_str(s.borrow())
}
fn toml_ser<S: Serialize>(s: S) -> Result<toml::Table, toml::ser::Error> {
toml::Table::try_from(s)
}
fn assert_toml<L: Serialize, R: Borrow<str>>(l: L, r: R, info: &str) -> anyhow::Result<()> {
fn lines_prepend(prefix: &str, s: &str) -> anyhow::Result<String> {
use std::fmt::Write;
let mut buf = String::new();
for line in s.lines() {
writeln!(&mut buf, "{prefix}{line}")?;
}
Ok(buf)
}
let l = toml_ser(l)?;
let r = toml_des(r.borrow())?;
ensure!(
l == r,
"{}{}TOML value mismatch.\n Have:\n{}\n Expected:\n{}",
info,
if info.is_empty() { "" } else { ": " },
lines_prepend(" ", &toml::to_string_pretty(&l)?)?,
lines_prepend(" ", &toml::to_string_pretty(&r)?)?
);
Ok(())
}
fn assert_toml_round<'de, L: Serialize + Deserialize<'de>, R: Borrow<str>>(
l: L,
r: R,
) -> anyhow::Result<()> {
let l = toml_ser(l)?;
assert_toml(&l, r.borrow(), "Straight deserialization")?;
let l: L = l.try_into().unwrap();
let l = toml_ser(l).unwrap();
assert_toml(l, r.borrow(), "Roundtrip deserialization")?;
Ok(())
}
fn split_str(s: &str) -> Vec<String> {
s.split(' ').map(|s| s.to_string()).collect()
}
#[test]
fn toml_serialization() -> anyhow::Result<()> {
#[cfg(feature = "experiment_api")]
assert_toml_round(
Rosenpass::empty(),
r#"
listen = []
verbosity = "Quiet"
peers = []
[api]
listen_path = []
listen_fd = []
stream_fd = []
"#,
)?;
#[cfg(not(feature = "experiment_api"))]
assert_toml_round(
Rosenpass::empty(),
r#"
listen = []
verbosity = "Quiet"
peers = []
"#,
)?;
#[cfg(feature = "experiment_api")]
assert_toml_round(
Rosenpass::from_sk_pk("/my/sk", "/my/pk"),
r#"
public_key = "/my/pk"
secret_key = "/my/sk"
listen = []
verbosity = "Quiet"
peers = []
[api]
listen_path = []
listen_fd = []
stream_fd = []
"#,
)?;
#[cfg(not(feature = "experiment_api"))]
assert_toml_round(
Rosenpass::from_sk_pk("/my/sk", "/my/pk"),
r#"
public_key = "/my/pk"
secret_key = "/my/sk"
listen = []
verbosity = "Quiet"
peers = []
"#,
)?;
Ok(())
}
#[test]
fn test_simple_cli_parse() {
let args = split_str(
@@ -465,8 +674,10 @@ mod test {
let config = Rosenpass::parse_args(args).unwrap();
assert_eq!(config.public_key, PathBuf::from("/my/public-key"));
assert_eq!(config.secret_key, PathBuf::from("/my/secret-key"));
assert_eq!(
config.keypair,
Some(Keypair::new("/my/public-key", "/my/secret-key"))
);
assert_eq!(config.verbosity, Verbosity::Verbose);
assert_eq!(
&config.listen,
@@ -495,8 +706,10 @@ mod test {
let config = Rosenpass::parse_args(args).unwrap();
assert_eq!(config.public_key, PathBuf::from("/my/public-key"));
assert_eq!(config.secret_key, PathBuf::from("/my/secret-key"));
assert_eq!(
config.keypair,
Some(Keypair::new("/my/public-key", "/my/secret-key"))
);
assert_eq!(config.verbosity, Verbosity::Verbose);
assert!(&config.listen.is_empty());
assert_eq!(

View File

@@ -1,3 +1,5 @@
#[cfg(feature = "experiment_api")]
pub mod api;
pub mod app_server;
pub mod cli;
pub mod config;
@@ -11,4 +13,8 @@ pub enum RosenpassError {
BufferSizeMismatch,
#[error("invalid message type")]
InvalidMessageType(u8),
#[error("invalid API message type")]
InvalidApiMessageType(u128),
#[error("could not parse API message")]
InvalidApiMessage,
}

View File

@@ -1,13 +1,59 @@
use clap::CommandFactory;
use clap::Parser;
use clap_mangen::roff::{roman, Roff};
use log::error;
use rosenpass::cli::CliArgs;
use std::process::exit;
fn print_custom_man_section(section: &str, text: &str, file: &mut std::fs::File) {
let mut roff = Roff::default();
roff.control("SH", [section]);
roff.text([roman(text)]);
let _ = roff.to_writer(file);
}
/// Catches errors, prints them through the logger, then exits
pub fn main() {
// parse CLI arguments
let args = CliArgs::parse();
if let Some(shell) = args.print_completions {
let mut cmd = CliArgs::command();
clap_complete::generate(shell, &mut cmd, "rosenpass", &mut std::io::stdout());
return;
}
if let Some(out_dir) = args.generate_manpage {
std::fs::create_dir_all(&out_dir).expect("Failed to create man pages directory");
let cmd = CliArgs::command();
let man = clap_mangen::Man::new(cmd.clone());
let _ = clap_mangen::generate_to(cmd, &out_dir);
let file_path = out_dir.join("rosenpass.1");
let mut file = std::fs::File::create(file_path).expect("Failed to create man page file");
let _ = man.render_title(&mut file);
let _ = man.render_name_section(&mut file);
let _ = man.render_synopsis_section(&mut file);
let _ = man.render_subcommands_section(&mut file);
let _ = man.render_options_section(&mut file);
print_custom_man_section("EXIT STATUS", EXIT_STATUS_MAN, &mut file);
print_custom_man_section("SEE ALSO", SEE_ALSO_MAN, &mut file);
print_custom_man_section("STANDARDS", STANDARDS_MAN, &mut file);
print_custom_man_section("AUTHORS", AUTHORS_MAN, &mut file);
print_custom_man_section("BUGS", BUGS_MAN, &mut file);
return;
}
{
use rosenpass_secret_memory as SM;
#[cfg(feature = "experiment_memfd_secret")]
SM::secret_policy_try_use_memfd_secrets();
#[cfg(not(feature = "experiment_memfd_secret"))]
SM::secret_policy_use_only_malloc_secrets();
}
// init logging
{
let mut log_builder = env_logger::Builder::from_default_env(); // sets log level filter from environment (or defaults)
@@ -26,11 +72,30 @@ pub fn main() {
// error!("error dummy");
}
match args.command.run(None) {
let broker_interface = args.get_broker_interface();
match args.run(broker_interface, None) {
Ok(_) => {}
Err(e) => {
error!("{e}");
error!("{e:?}");
exit(1);
}
}
}
static EXIT_STATUS_MAN: &str = r"
The rosenpass utility exits 0 on success, and >0 if an error occurs.";
static SEE_ALSO_MAN: &str = r"
rp(1), wg(1)
Karolin Varner, Benjamin Lipp, Wanja Zaeske, and Lisa Schmidt, Rosenpass, https://rosenpass.eu/whitepaper.pdf, 2023.";
static STANDARDS_MAN: &str = r"
This tool is the reference implementation of the Rosenpass protocol, as
specified within the whitepaper referenced above.";
static AUTHORS_MAN: &str = r"
Rosenpass was created by Karolin Varner, Benjamin Lipp, Wanja Zaeske, Marei
Peischl, Stephan Ajuvo, and Lisa Schmidt.";
static BUGS_MAN: &str = r"
The bugs are tracked at https://github.com/rosenpass/rosenpass/issues.";

View File

@@ -21,8 +21,11 @@ pub const MAC_SIZE: usize = 16;
pub const COOKIE_SIZE: usize = 16;
pub const SID_LEN: usize = 4;
pub type MsgEnvelopeMac = [u8; 16];
pub type MsgEnvelopeCookie = MsgEnvelopeMac;
#[repr(packed)]
#[derive(AsBytes, FromBytes, FromZeroes)]
#[derive(AsBytes, FromBytes, FromZeroes, Clone)]
pub struct Envelope<M: AsBytes + FromBytes> {
/// [MsgType] of this message
pub msg_type: u8,
@@ -32,9 +35,9 @@ pub struct Envelope<M: AsBytes + FromBytes> {
pub payload: M,
/// Message Authentication Code (mac) over all bytes until (exclusive)
/// `mac` itself
pub mac: [u8; 16],
pub mac: MsgEnvelopeMac,
/// Currently unused, TODO: do something with this
pub cookie: [u8; 16],
pub cookie: MsgEnvelopeCookie,
}
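
The zerocopy derives (plus the new `Clone`) are what allow these wire structs to be treated as plain byte buffers. A minimal sketch of the resulting round trip, assuming the zerocopy 0.7-style API used elsewhere in this changeset; the message-type value is a placeholder:

use zerocopy::{AsBytes, FromZeroes, Ref};

fn envelope_roundtrip_sketch() {
    // Zero-initialize an envelope and view it as raw bytes for sending.
    let mut msg: Envelope<EmptyData> = Envelope::new_zeroed();
    msg.msg_type = 42; // placeholder message type discriminant
    let wire: &[u8] = msg.as_bytes();

    // Receiving side: reinterpret the bytes in place, without copying.
    let parsed = Ref::<&[u8], Envelope<EmptyData>>::new(wire).expect("wrong buffer size");
    assert_eq!(parsed.msg_type, 42);
}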
#[repr(packed)]
@@ -70,7 +73,7 @@ pub struct RespHello {
}
#[repr(packed)]
#[derive(AsBytes, FromBytes, FromZeroes)]
#[derive(AsBytes, FromBytes, FromZeroes, Debug)]
pub struct InitConf {
/// Copied from InitHello
pub sidi: [u8; 4],
@@ -83,7 +86,7 @@ pub struct InitConf {
}
#[repr(packed)]
#[derive(AsBytes, FromBytes, FromZeroes)]
#[derive(AsBytes, FromBytes, FromZeroes, Clone, Copy)]
pub struct EmptyData {
/// Copied from RespHello
pub sid: [u8; 4],

View File

@@ -0,0 +1,127 @@
use rosenpass_util::{
build::Build,
mem::{DiscardResultExt, SwapWithDefaultExt},
result::ensure_or,
};
use thiserror::Error;
use super::{CryptoServer, PeerPtr, SPk, SSk, SymKey};
#[derive(Debug, Clone)]
pub struct Keypair {
pub sk: SSk,
pub pk: SPk,
}
// TODO: We need a named tuple derive
impl Keypair {
pub fn new(sk: SSk, pk: SPk) -> Self {
Self { sk, pk }
}
pub fn zero() -> Self {
Self::new(SSk::zero(), SPk::zero())
}
pub fn random() -> Self {
Self::new(SSk::random(), SPk::random())
}
pub fn from_parts(parts: (SSk, SPk)) -> Self {
Self::new(parts.0, parts.1)
}
pub fn into_parts(self) -> (SSk, SPk) {
(self.sk, self.pk)
}
}
#[derive(Error, Debug)]
#[error("PSK already set in BuildCryptoServer")]
pub struct PskAlreadySet;
#[derive(Error, Debug)]
#[error("Keypair already set in BuildCryptoServer")]
pub struct KeypairAlreadySet;
#[derive(Error, Debug)]
#[error("Can not construct CryptoServer: Missing keypair")]
pub struct MissingKeypair;
#[derive(Debug, Default)]
pub struct BuildCryptoServer {
pub keypair: Option<Keypair>,
pub peers: Vec<PeerParams>,
}
impl Build<CryptoServer> for BuildCryptoServer {
type Error = anyhow::Error;
fn build(self) -> Result<CryptoServer, Self::Error> {
let Some(Keypair { sk, pk }) = self.keypair else {
return Err(MissingKeypair)?;
};
let mut srv = CryptoServer::new(sk, pk);
for (idx, PeerParams { psk, pk }) in self.peers.into_iter().enumerate() {
let PeerPtr(idx2) = srv.add_peer(psk, pk)?;
assert!(idx == idx2, "Peer id changed during CryptoServer construction from {idx} to {idx2}. This is a developer error.")
}
Ok(srv)
}
}
#[derive(Debug)]
pub struct PeerParams {
pub psk: Option<SymKey>,
pub pk: SPk,
}
impl BuildCryptoServer {
pub fn new(keypair: Option<Keypair>, peers: Vec<PeerParams>) -> Self {
Self { keypair, peers }
}
pub fn empty() -> Self {
Self::new(None, Vec::new())
}
pub fn from_parts(parts: (Option<Keypair>, Vec<PeerParams>)) -> Self {
Self {
keypair: parts.0,
peers: parts.1,
}
}
pub fn take_parts(&mut self) -> (Option<Keypair>, Vec<PeerParams>) {
(self.keypair.take(), self.peers.swap_with_default())
}
pub fn into_parts(mut self) -> (Option<Keypair>, Vec<PeerParams>) {
self.take_parts()
}
pub fn with_keypair(&mut self, keypair: Keypair) -> Result<&mut Self, KeypairAlreadySet> {
ensure_or(self.keypair.is_none(), KeypairAlreadySet)?;
self.keypair.insert(keypair).discard_result();
Ok(self)
}
pub fn with_added_peer(&mut self, psk: Option<SymKey>, pk: SPk) -> &mut Self {
// TODO: Check here already whether peer was already added
self.peers.push(PeerParams { psk, pk });
self
}
pub fn add_peer(&mut self, psk: Option<SymKey>, pk: SPk) -> PeerPtr {
let id = PeerPtr(self.peers.len());
self.with_added_peer(psk, pk);
id
}
pub fn emancipate(&mut self) -> Self {
Self::from_parts(self.take_parts())
}
}
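
A sketch of how the builder above might be driven, assuming the surrounding protocol types (SSk, SPk, SymKey, CryptoServer, Keypair, PeerPtr) are in scope as in this module: peers can be queued before the keypair is available (for example when it will arrive later via the API) and keep their indices once the real CryptoServer is built.

use rosenpass_util::build::Build;

// Hypothetical driver, purely illustrative.
fn deferred_construction_sketch(sk: SSk, pk: SPk, peer_pk: SPk) -> anyhow::Result<CryptoServer> {
    let mut site = BuildCryptoServer::empty();

    // Queue a peer while the server keypair is still unknown.
    let PeerPtr(idx) = site.add_peer(None, peer_pk);
    assert_eq!(idx, 0);

    // Once the keypair becomes available, supply it and build the CryptoServer;
    // the queued peer keeps index 0 (checked by the assert in build()).
    site.with_keypair(Keypair::new(sk, pk))?;
    site.build()
}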

View File

@@ -0,0 +1,6 @@
mod build_crypto_server;
#[allow(clippy::module_inception)]
mod protocol;
pub use build_crypto_server::*;
pub use protocol::*;

View File

@@ -19,6 +19,7 @@
//! [CryptoServer].
//!
//! ```
//! use std::ops::DerefMut;
//! use rosenpass_secret_memory::policy::*;
//! use rosenpass_cipher_traits::Kem;
//! use rosenpass_ciphers::kem::StaticKem;
@@ -32,11 +33,11 @@
//!
//! // initialize secret and public key for peer a ...
//! let (mut peer_a_sk, mut peer_a_pk) = (SSk::zero(), SPk::zero());
//! StaticKem::keygen(peer_a_sk.secret_mut(), peer_a_pk.secret_mut())?;
//! StaticKem::keygen(peer_a_sk.secret_mut(), peer_a_pk.deref_mut())?;
//!
//! // ... and for peer b
//! let (mut peer_b_sk, mut peer_b_pk) = (SSk::zero(), SPk::zero());
//! StaticKem::keygen(peer_b_sk.secret_mut(), peer_b_pk.secret_mut())?;
//! StaticKem::keygen(peer_b_sk.secret_mut(), peer_b_pk.deref_mut())?;
//!
//! // initialize server and a pre-shared key
//! let psk = SymKey::random();
@@ -69,8 +70,11 @@
//! # }
//! ```
use std::borrow::Borrow;
use std::convert::Infallible;
use std::fmt::Debug;
use std::mem::size_of;
use std::ops::Deref;
use std::{
collections::hash_map::{
Entry::{Occupied, Vacant},
@@ -86,22 +90,21 @@ use memoffset::span_of;
use rosenpass_cipher_traits::Kem;
use rosenpass_ciphers::hash_domain::{SecretHashDomain, SecretHashDomainNamespace};
use rosenpass_ciphers::kem::{EphemeralKem, StaticKem};
use rosenpass_ciphers::keyed_hash;
use rosenpass_ciphers::{aead, xaead, KEY_LEN};
use rosenpass_constant_time as constant_time;
use rosenpass_secret_memory::{Public, Secret};
use rosenpass_util::{cat, mem::cpy_min, ord::max_usize, time::Timebase};
use rosenpass_secret_memory::{Public, PublicBox, Secret};
use rosenpass_to::ops::copy_slice;
use rosenpass_to::To;
use rosenpass_util::functional::ApplyExt;
use rosenpass_util::mem::DiscardResultExt;
use rosenpass_util::{cat, mem::cpy_min, time::Timebase};
use zerocopy::{AsBytes, FromBytes, Ref};
use crate::{hash_domains, msgs::*, RosenpassError};
// CONSTANTS & SETTINGS //////////////////////////
/// Size required to fit any message in binary form
pub const RTX_BUFFER_SIZE: usize = max_usize(
size_of::<Envelope<InitHello>>(),
size_of::<Envelope<InitConf>>(),
);
/// A type for time, e.g. for backoff before re-tries
pub type Timing = f64;
@@ -138,11 +141,10 @@ pub const PEER_COOKIE_VALUE_EPOCH: Timing = 120.0;
// decryption for a second epoch
pub const BISCUIT_EPOCH: Timing = 300.0;
// Retransmission pub constants; will retransmit for up to _ABORT ms; starting with a delay of
// _DELAY_BEG ms and increasing the delay exponentially by a factor of
// _DELAY_GROWTH up to _DELAY_END. An additional jitter factor of ±_DELAY_JITTER
// is added.
pub const RETRANSMIT_ABORT: Timing = 120.0;
// Retransmission pub constants; will retransmit for up to _ABORT seconds;
// starting with a delay of _DELAY_BEGIN seconds and increasing the delay
// exponentially by a factor of _DELAY_GROWTH up to _DELAY_END.
// An additional jitter factor of ±_DELAY_JITTER is added.
pub const RETRANSMIT_DELAY_GROWTH: Timing = 2.0;
pub const RETRANSMIT_DELAY_BEGIN: Timing = 0.5;
pub const RETRANSMIT_DELAY_END: Timing = 10.0;
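// Editor's sketch (not part of the diff): the backoff schedule described in the
// comment above, i.e. delay_n = min(DELAY_END, DELAY_BEGIN * GROWTH^n) scaled by a
// random jitter factor; the exact formula used by the retransmission code appears
// further below in the IniHsPtr handling.
fn approx_retransmit_delay_sketch(tx_count: i32, jitter: f64) -> Timing {
    (RETRANSMIT_DELAY_BEGIN * RETRANSMIT_DELAY_GROWTH.powi(tx_count))
        .min(RETRANSMIT_DELAY_END)
        * jitter // jitter assumed to lie in [1.0, 2.0)
}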
@@ -163,7 +165,7 @@ pub fn has_happened(ev: Timing, now: Timing) -> bool {
// DATA STRUCTURES & BASIC TRAITS & ACCESSORS ////
pub type SPk = Secret<{ StaticKem::PK_LEN }>; // Just Secret<> instead of Public<> so it gets allocated on the heap
pub type SPk = PublicBox<{ StaticKem::PK_LEN }>;
pub type SSk = Secret<{ StaticKem::SK_LEN }>;
pub type EPk = Public<{ EphemeralKem::PK_LEN }>;
pub type ESk = Secret<{ EphemeralKem::SK_LEN }>;
@@ -205,6 +207,7 @@ pub struct CryptoServer {
// Peer/Handshake DB
pub peers: Vec<Peer>,
pub index: HashMap<IndexKey, PeerNo>,
pub known_response_hasher: KnownResponseHasher,
// Tick handling
pub peer_poll_off: usize,
@@ -234,6 +237,7 @@ pub type BiscuitKey = CookieStore<KEY_LEN>;
pub enum IndexKey {
Peer(PeerId),
Sid(SessionId),
KnownInitConfResponse(KnownResponseHash),
}
#[derive(Debug)]
@@ -244,6 +248,7 @@ pub struct Peer {
pub session: Option<Session>,
pub handshake: Option<InitiatorHandshake>,
pub initiation_requested: bool,
pub known_init_conf_response: Option<KnownInitConfResponse>,
}
impl Peer {
@@ -255,6 +260,7 @@ impl Peer {
session: None,
initiation_requested: false,
handshake: None,
known_init_conf_response: None,
}
}
}
@@ -313,6 +319,50 @@ pub struct InitiatorHandshake {
pub cookie_value: CookieStore<COOKIE_VALUE_LEN>,
}
pub struct KnownResponse<ResponseType: AsBytes + FromBytes> {
received_at: Timing,
request_mac: KnownResponseHash,
response: Envelope<ResponseType>,
}
impl<ResponseType: AsBytes + FromBytes> Debug for KnownResponse<ResponseType> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.debug_struct("KnownResponse")
.field("received_at", &self.received_at)
.field("request_mac", &self.request_mac)
.field("response", &"...")
.finish()
}
}
pub type KnownInitConfResponse = KnownResponse<EmptyData>;
pub type KnownResponseHash = Public<16>;
#[derive(Debug)]
pub struct KnownResponseHasher {
pub key: SymKey,
}
impl KnownResponseHasher {
fn new() -> Self {
Self {
key: SymKey::random(),
}
}
/// # Panic & Safety
///
/// Panics in case of a problem with the underlying hash function
pub fn hash<Msg: AsBytes + FromBytes>(&self, msg: &Envelope<Msg>) -> KnownResponseHash {
let data = &msg.as_bytes()[span_of!(Envelope<Msg>, msg_type..cookie)];
let hash = keyed_hash::hash(self.key.secret(), data)
.to_this(Public::<32>::zero)
.unwrap();
Public::from_slice(&hash[0..16]) // truncate to 16 bytes
}
}
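// Editor's sketch (not part of the diff): how the hasher above keys the
// known-response cache. `srv` is a CryptoServer and `req` an incoming
// Envelope<InitConf>; the truncated 16-byte digest doubles as the IndexKey.
fn cache_lookup_sketch(srv: &CryptoServer, req: &Envelope<InitConf>) -> Option<PeerNo> {
    let tag: KnownResponseHash = srv.known_response_hasher.hash(req);
    let key = IndexKey::KnownInitConfResponse(tag);
    srv.index.get(&key).copied()
}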
#[derive(Debug)]
pub struct Session {
// Metadata
@@ -374,6 +424,12 @@ pub struct IniHsPtr(pub usize);
#[derive(Copy, Clone, PartialEq, Eq, PartialOrd, Ord, Debug)]
pub struct SessionPtr(pub usize);
/// Valid index to [CryptoServer::peers] cookie value
pub struct PeerCookieValuePtr(usize); // TODO: Change
/// Valid index to [CryptoServer::peers] known init conf response
pub struct KnownInitConfResponsePtr(PeerNo);
/// Valid index to [CryptoServer::biscuit_keys]
#[derive(Copy, Clone, PartialEq, Eq, PartialOrd, Ord, Debug)]
pub struct BiscuitKeyPtr(pub usize);
@@ -382,14 +438,17 @@ pub struct BiscuitKeyPtr(pub usize);
#[derive(Copy, Clone, PartialEq, Eq, PartialOrd, Ord, Debug)]
pub struct ServerCookieSecretPtr(pub usize);
/// Valid index to [CryptoServer::peers] cookie value
pub struct PeerCookieValuePtr(usize);
impl PeerPtr {
/// # Panic & Safety
///
/// The function panics if the peer referenced by this PeerPtr does not exist.
pub fn get<'a>(&self, srv: &'a CryptoServer) -> &'a Peer {
&srv.peers[self.0]
}
/// # Panic & Safety
///
/// The function panics if the peer referenced by this PeerPtr does not exist.
pub fn get_mut<'a>(&self, srv: &'a mut CryptoServer) -> &'a mut Peer {
&mut srv.peers[self.0]
}
@@ -405,6 +464,10 @@ impl PeerPtr {
pub fn cv(&self) -> PeerCookieValuePtr {
PeerCookieValuePtr(self.0)
}
pub fn known_init_conf_response(&self) -> KnownInitConfResponsePtr {
KnownInitConfResponsePtr(self.0)
}
}
impl IniHsPtr {
@@ -513,6 +576,118 @@ impl PeerCookieValuePtr {
}
}
impl KnownInitConfResponsePtr {
pub fn peer(&self) -> PeerPtr {
PeerPtr(self.0)
}
/// # Panic & Safety
///
/// The function panics if the peer referenced by this KnownInitConfResponsePtr does not exist.
pub fn get<'a>(&self, srv: &'a CryptoServer) -> Option<&'a KnownInitConfResponse> {
self.peer().get(srv).known_init_conf_response.as_ref()
}
/// # Panic & Safety
///
/// The function panics if the peer referenced by this KnownInitConfResponsePtr does not exist.
pub fn get_mut<'a>(&self, srv: &'a mut CryptoServer) -> Option<&'a mut KnownInitConfResponse> {
self.peer().get_mut(srv).known_init_conf_response.as_mut()
}
/// # Panic & Safety
///
/// The function panics if
///
/// - the peer referenced by this KnownInitConfResponsePtr does not exist
/// - the peer contains a KnownInitConfResponse (i.e. if [Peer::known_init_conf_response] is Some(...)), but the index to this KnownInitConfResponsePtr is missing (i.e. there is no appropriate index
/// value in [CryptoServer::index])
pub fn remove(&self, srv: &mut CryptoServer) -> Option<KnownInitConfResponse> {
let peer = self.peer();
let val = peer.get_mut(srv).known_init_conf_response.take()?;
let lookup_key = IndexKey::KnownInitConfResponse(val.request_mac);
srv.index.remove(&lookup_key).unwrap();
Some(val)
}
pub fn insert(&self, srv: &mut CryptoServer, known_response: KnownInitConfResponse) {
self.remove(srv).discard_result();
let index_key = IndexKey::KnownInitConfResponse(known_response.request_mac);
self.peer().get_mut(srv).known_init_conf_response = Some(known_response);
// There is a question here whether we should just discard the result…or panic if the
// result is Some(...).
//
// The result being anything other than None should never occur:
// - If we have never seen this InitConf message, then the result should be None and no value should
// have been written. This is fine.
// - If we have seen this message before, we should already have responded with the
// known answer, which would also be fine.
// - If we have never seen this InitConf message before but the hashes are the same,
// this would constitute a collision on our hash function. The cryptography
// (collision resistance of our keyed hash) makes this practically impossible;
// if it did happen, it would be bad, but we could not detect it here.
if srv.index.insert(index_key, self.0).is_some() {
log::warn!(
r#"
Replaced a cached message in the InitConf known-response table
for network retransmission handling. This should never happen and is
probably a bug. Please report seeing this message at the following location:
https://github.com/rosenpass/rosenpass/issues
"#
);
}
}
pub fn lookup_for_request_msg(
srv: &CryptoServer,
req: &Envelope<InitConf>,
) -> Option<KnownInitConfResponsePtr> {
let index_key = Self::index_key_for_msg(srv, req);
let peer_no = *srv.index.get(&index_key)?;
Some(Self(peer_no))
}
pub fn lookup_response_for_request_msg<'a>(
srv: &'a CryptoServer,
req: &Envelope<InitConf>,
) -> Option<&'a Envelope<EmptyData>> {
Self::lookup_for_request_msg(srv, req)?
.get(srv)
.map(|v| &v.response)
}
pub fn insert_for_request_msg(
srv: &mut CryptoServer,
peer: PeerPtr,
req: &Envelope<InitConf>,
res: Envelope<EmptyData>,
) {
let ptr = peer.known_init_conf_response();
ptr.insert(
srv,
KnownInitConfResponse {
received_at: srv.timebase.now(),
request_mac: Self::index_key_hash_for_msg(srv, req),
response: res,
},
);
}
pub fn index_key_hash_for_msg(
srv: &CryptoServer,
req: &Envelope<InitConf>,
) -> KnownResponseHash {
srv.known_response_hasher.hash(req)
}
pub fn index_key_for_msg(srv: &CryptoServer, req: &Envelope<InitConf>) -> IndexKey {
Self::index_key_hash_for_msg(srv, req).apply(IndexKey::KnownInitConfResponse)
}
}
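// Editor's sketch (not part of the diff): the intended round trip through the
// known-response cache, assuming `srv`, `peer`, an InitConf envelope `req` and
// the matching EmptyData envelope `resp` are in scope.
fn cache_roundtrip_sketch(
    srv: &mut CryptoServer,
    peer: PeerPtr,
    req: &Envelope<InitConf>,
    resp: Envelope<EmptyData>,
) {
    KnownInitConfResponsePtr::insert_for_request_msg(srv, peer, req, resp);
    // A retransmitted copy of the same request now resolves to the cached response
    let replay = KnownInitConfResponsePtr::lookup_response_for_request_msg(srv, req);
    assert!(replay.is_some());
}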
// DATABASE //////////////////////////////////////
impl CryptoServer {
@@ -530,6 +705,7 @@ impl CryptoServer {
biscuit_keys: [CookieStore::new(), CookieStore::new()],
peers: Vec::new(),
index: HashMap::new(),
known_response_hasher: KnownResponseHasher::new(),
peer_poll_off: 0,
cookie_secrets: [CookieStore::new(), CookieStore::new()],
}
@@ -548,7 +724,7 @@ impl CryptoServer {
pub fn pidm(&self) -> Result<PeerId> {
Ok(Public::new(
hash_domains::peerid()?
.mix(self.spkm.secret())?
.mix(self.spkm.deref())?
.into_value()))
}
@@ -568,6 +744,7 @@ impl CryptoServer {
biscuit_used: BiscuitId::zero(),
session: None,
handshake: None,
known_init_conf_response: None,
initiation_requested: false,
};
let peerid = peer.pidt()?;
@@ -700,6 +877,7 @@ impl Peer {
biscuit_used: BiscuitId::zero(),
session: None,
handshake: None,
known_init_conf_response: None,
initiation_requested: false,
}
}
@@ -708,7 +886,7 @@ impl Peer {
pub fn pidt(&self) -> Result<PeerId> {
Ok(Public::new(
hash_domains::peerid()?
.mix(self.spkt.secret())?
.mix(self.spkt.deref())?
.into_value()))
}
}
@@ -857,6 +1035,25 @@ impl Mortal for PeerCookieValuePtr {
}
}
impl Mortal for KnownInitConfResponsePtr {
fn created_at(&self, srv: &CryptoServer) -> Option<Timing> {
let t = self.get(srv)?.received_at;
if t < 0.0 {
None
} else {
Some(t)
}
}
fn retire_at(&self, srv: &CryptoServer) -> Option<Timing> {
self.die_at(srv)
}
fn die_at(&self, srv: &CryptoServer) -> Option<Timing> {
self.created_at(srv).map(|t| t + REKEY_AFTER_TIME_RESPONDER)
}
}
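// Editor's sketch (not part of the diff): with the Mortal impl above, a cached
// InitConf response lives exactly REKEY_AFTER_TIME_RESPONDER seconds from the
// moment it was received; polling (see the Pollable impl below) erases it then.
fn cache_deadline_sketch(srv: &CryptoServer, peer: PeerPtr) -> Option<Timing> {
    peer.known_init_conf_response().die_at(srv)
}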
/// Trait extension to the [Mortal] Trait, that enables nicer access to timing
/// information
trait MortalExt: Mortal {
@@ -1017,7 +1214,7 @@ impl CryptoServer {
let cookie_value = active_cookie_value.unwrap();
let cookie_key = hash_domains::cookie_key()?
.mix(self.spkm.secret())?
.mix(self.spkm.deref())?
.into_value();
let mut msg_out = truncating_cast_into::<CookieReply>(tx_buf)?;
@@ -1120,10 +1317,38 @@ impl CryptoServer {
ensure!(msg_in.check_seal(self)?, seal_broken);
let mut msg_out = truncating_cast_into::<Envelope<EmptyData>>(tx_buf)?;
let (peer, if_exchanged) =
self.handle_init_conf(&msg_in.payload, &mut msg_out.payload)?;
// Check if we have a cached response
let peer = match KnownInitConfResponsePtr::lookup_for_request_msg(self, &msg_in) {
// Cached response; copy out of cache
Some(cached) => {
let peer = cached.peer();
let cached = cached
.get(self)
.map(|v| v.response.borrow())
// Invalid state! The index lists a cached response for this peer, but the cache entry is missing
.unwrap();
copy_slice(cached.as_bytes()).to(msg_out.as_bytes_mut());
peer
}
// No cached response, actually call cryptographic handler
None => {
let peer = self.handle_init_conf(&msg_in.payload, &mut msg_out.payload)?;
KnownInitConfResponsePtr::insert_for_request_msg(
self,
peer,
&msg_in,
msg_out.clone(),
);
exchanged = true;
peer
}
};
len = self.seal_and_commit_msg(peer, MsgType::EmptyData, &mut msg_out)?;
exchanged = if_exchanged;
peer
}
Ok(MsgType::EmptyData) => {
@@ -1407,7 +1632,8 @@ impl Pollable for PeerPtr {
PollResult::SendInitiation(*self)
},
)
.poll_child(srv, &hs) // Defer to the handshake for polling (retransmissions)
.poll_child(srv, &hs)? // Defer to the handshake for polling (retransmissions)
.poll_child(srv, &self.known_init_conf_response())
}
}
@@ -1422,6 +1648,15 @@ impl Pollable for IniHsPtr {
}
}
impl Pollable for KnownInitConfResponsePtr {
fn poll(&self, srv: &mut CryptoServer) -> Result<PollResult> {
begin_poll()
// Erase stale cache
.sched(self.life_left(srv), void_poll(|| self.remove(srv)))
.ok()
}
}
// MESSAGE RETRANSMISSION ////////////////////////
impl CryptoServer {
@@ -1477,7 +1712,7 @@ impl IniHsPtr {
.min(ih.tx_count as f64),
)
* RETRANSMIT_DELAY_JITTER
* (rand::random::<f64>() + 1.0); // TODO: Replace with the rand crate
* (rand::random::<f64>() + 1.0);
ih.tx_count += 1;
Ok(())
}
@@ -1509,7 +1744,7 @@ where
/// Calculate the message authentication code (`mac`) and also append cookie value
pub fn seal(&mut self, peer: PeerPtr, srv: &CryptoServer) -> Result<()> {
let mac = hash_domains::mac()?
.mix(peer.get(srv).spkt.secret())?
.mix(peer.get(srv).spkt.deref())?
.mix(&self.as_bytes()[span_of!(Self, msg_type..mac)])?;
self.mac.copy_from_slice(mac.into_value()[..16].as_ref());
self.seal_cookie(peer, srv)?;
@@ -1536,7 +1771,7 @@ where
/// Check the message authentication code
pub fn check_seal(&self, srv: &CryptoServer) -> Result<bool> {
let expected = hash_domains::mac()?
.mix(srv.spkm.secret())?
.mix(srv.spkm.deref())?
.mix(&self.as_bytes()[span_of!(Self, msg_type..mac)])?;
Ok(constant_time::memcmp(
&self.mac,
@@ -1641,7 +1876,7 @@ impl HandshakeState {
// calculate ad contents
let ad = hash_domains::biscuit_ad()?
.mix(srv.spkm.secret())?
.mix(srv.spkm.deref())?
.mix(self.sidi.as_slice())?
.mix(self.sidr.as_slice())?
.into_value();
@@ -1676,7 +1911,7 @@ impl HandshakeState {
// Calculate additional data fields
let ad = hash_domains::biscuit_ad()?
.mix(srv.spkm.secret())?
.mix(srv.spkm.deref())?
.mix(sidi.as_slice())?
.mix(sidr.as_slice())?
.into_value();
@@ -1706,15 +1941,6 @@ impl HandshakeState {
.find_peer(pid) // TODO: FindPeer should return a Result<()>
.with_context(|| format!("Could not decode biscuit for peer {pid:?}: No such peer."))?;
// Defense against replay attacks; implementations may accept
// the most recent biscuit no again (bn = peer.bn_{prev}) which
// indicates retransmission
// TODO: Handle retransmissions without involving the crypto code
ensure!(
constant_time::compare(&biscuit.biscuit_no, &*peer.get(srv).biscuit_used) >= 0,
"Rejecting biscuit: Outdated biscuit number"
);
Ok((peer, no, hs))
}
@@ -1763,7 +1989,7 @@ impl CryptoServer {
let mut hs = InitiatorHandshake::zero_with_timestamp(self);
// IHI1
hs.core.init(peer.get(self).spkt.secret())?;
hs.core.init(peer.get(self).spkt.deref())?;
// IHI2
hs.core.sidi.randomize();
@@ -1780,7 +2006,7 @@ impl CryptoServer {
hs.core
.encaps_and_mix::<StaticKem, { StaticKem::SHK_LEN }>(
ih.sctr.as_mut_slice(),
peer.get(self).spkt.secret(),
peer.get(self).spkt.deref(),
)?;
// IHI6
@@ -1789,7 +2015,7 @@ impl CryptoServer {
// IHI7
hs.core
.mix(self.spkm.secret())?
.mix(self.spkm.deref())?
.mix(peer.get(self).psk.secret())?;
// IHI8
@@ -1807,7 +2033,7 @@ impl CryptoServer {
core.sidi = SessionId::from_slice(&ih.sidi);
// IHR1
core.init(self.spkm.secret())?;
core.init(self.spkm.deref())?;
// IHR4
core.mix(&ih.sidi)?.mix(&ih.epki)?;
@@ -1815,7 +2041,7 @@ impl CryptoServer {
// IHR5
core.decaps_and_mix::<StaticKem, { StaticKem::SHK_LEN }>(
self.sskm.secret(),
self.spkm.secret(),
self.spkm.deref(),
&ih.sctr,
)?;
@@ -1828,7 +2054,7 @@ impl CryptoServer {
};
// IHR7
core.mix(peer.get(self).spkt.secret())?
core.mix(peer.get(self).spkt.deref())?
.mix(peer.get(self).psk.secret())?;
// IHR8
@@ -1848,7 +2074,7 @@ impl CryptoServer {
// RHR5
core.encaps_and_mix::<StaticKem, { StaticKem::SHK_LEN }>(
&mut rh.scti,
peer.get(self).spkt.secret(),
peer.get(self).spkt.deref(),
)?;
// RHR6
@@ -1909,14 +2135,14 @@ impl CryptoServer {
// RHI4
core.decaps_and_mix::<EphemeralKem, { EphemeralKem::SHK_LEN }>(
hs!().eski.secret(),
&*hs!().epki,
hs!().epki.deref(),
&rh.ecti,
)?;
// RHI5
core.decaps_and_mix::<StaticKem, { StaticKem::SHK_LEN }>(
self.sskm.secret(),
self.spkm.secret(),
self.spkm.deref(),
&rh.scti,
)?;
@@ -1952,12 +2178,7 @@ impl CryptoServer {
Ok(peer)
}
pub fn handle_init_conf(
&mut self,
ic: &InitConf,
rc: &mut EmptyData,
) -> Result<(PeerPtr, bool)> {
let mut exchanged = false;
pub fn handle_init_conf(&mut self, ic: &InitConf, rc: &mut EmptyData) -> Result<PeerPtr> {
// (peer, bn) ← LoadBiscuit(InitConf.biscuit)
// ICR1
let (peer, biscuit_no, mut core) = HandshakeState::load_biscuit(
@@ -1977,20 +2198,23 @@ impl CryptoServer {
core.decrypt_and_mix(&mut [0u8; 0], &ic.auth)?;
// ICR5
if constant_time::compare(&*biscuit_no, &*peer.get(self).biscuit_used) > 0 {
// ICR6
peer.get_mut(self).biscuit_used = biscuit_no;
// Defense against replay attacks; implementations may accept
// the most recent biscuit no again (bn = peer.bn_{prev}) which
// indicates retransmission
ensure!(
constant_time::compare(&*biscuit_no, &*peer.get(self).biscuit_used) > 0,
"Rejecting biscuit: Outdated biscuit number"
);
// ICR7
peer.session()
.insert(self, core.enter_live(self, HandshakeRole::Responder)?)?;
// TODO: This should be part of the protocol specification.
// Abort any ongoing handshake from initiator role
peer.hs().take(self);
// ICR6
peer.get_mut(self).biscuit_used = biscuit_no;
// Only exchange a key on a new biscuit number; avoid duplicate key exchanges on retransmitted InitConf messages
exchanged = true;
}
// ICR7
peer.session()
.insert(self, core.enter_live(self, HandshakeRole::Responder)?)?;
// TODO: This should be part of the protocol specification.
// Abort any ongoing handshake from initiator role
peer.hs().take(self);
// TODO: Implementing RP should be possible without touching the live session stuff
// TODO: I fear that this may lead to race conditions; the acknowledgement may be
@@ -2014,8 +2238,7 @@ impl CryptoServer {
// Send ack Implementing sending the empty acknowledgement here
// instead of a generic PeerPtr::send(&Server, Option<&[u8]>) -> Either<EmptyData, Data>
// because data transmission is a stub currently. This software is supposed to be used
// as a key exchange service feeding a PSK into some classical (i.e. non post quantum)
// because data transmission is a stub currently.
let ses = peer
.session()
.get_mut(self)
@@ -2029,7 +2252,7 @@ impl CryptoServer {
let k = ses.txkm.secret();
aead::encrypt(&mut rc.auth, k, &n, &[], &[])?; // ct, k, n, ad, pt
Ok((peer, exchanged))
Ok(peer)
}
pub fn handle_resp_conf(&mut self, rc: &EmptyData) -> Result<PeerPtr> {
@@ -2113,7 +2336,7 @@ impl CryptoServer {
),
}?;
let spkt = peer.get(self).spkt.secret();
let spkt = peer.get(self).spkt.deref();
let cookie_key = hash_domains::cookie_key()?.mix(spkt)?.into_value();
let cookie_value = peer.cv().update_mut(self).unwrap();
@@ -2146,10 +2369,11 @@ fn truncating_cast_into_nomut<T: FromBytes>(buf: &[u8]) -> Result<Ref<&[u8], T>,
#[cfg(test)]
mod test {
use std::{net::SocketAddrV4, thread::sleep, time::Duration};
use std::{borrow::BorrowMut, net::SocketAddrV4, ops::DerefMut, thread::sleep, time::Duration};
use super::*;
use serial_test::serial;
use zerocopy::FromZeroes;
struct VecHostIdentifier(Vec<u8>);
@@ -2255,7 +2479,7 @@ mod test {
fn keygen() -> Result<(SSk, SPk)> {
// TODO: Copied from the benchmark; deduplicate
let (mut sk, mut pk) = (SSk::zero(), SPk::zero());
StaticKem::keygen(sk.secret_mut(), pk.secret_mut())?;
StaticKem::keygen(sk.secret_mut(), pk.deref_mut())?;
Ok((sk, pk))
}
@@ -2565,4 +2789,186 @@ mod test {
.is_err());
});
}
#[test]
fn init_conf_retransmission() -> anyhow::Result<()> {
rosenpass_secret_memory::secret_policy_try_use_memfd_secrets();
fn keypair() -> anyhow::Result<(SSk, SPk)> {
let (mut sk, mut pk) = (SSk::zero(), SPk::zero());
StaticKem::keygen(sk.secret_mut(), pk.deref_mut())?;
Ok((sk, pk))
}
fn proc_initiation(
srv: &mut CryptoServer,
peer: PeerPtr,
) -> anyhow::Result<Envelope<InitHello>> {
let mut buf = MsgBuf::zero();
srv.initiate_handshake(peer, buf.as_mut_slice())?
.discard_result();
let msg = truncating_cast_into::<Envelope<InitHello>>(buf.borrow_mut())?;
Ok(msg.read())
}
fn proc_msg<Rx: AsBytes + FromBytes, Tx: AsBytes + FromBytes>(
srv: &mut CryptoServer,
rx: &Envelope<Rx>,
) -> anyhow::Result<Envelope<Tx>> {
let mut buf = MsgBuf::zero();
srv.handle_msg(rx.as_bytes(), buf.as_mut_slice())?
.resp
.context("Failed to produce RespHello message")?
.discard_result();
let msg = truncating_cast_into::<Envelope<Tx>>(buf.borrow_mut())?;
Ok(msg.read())
}
fn proc_init_hello(
srv: &mut CryptoServer,
ih: &Envelope<InitHello>,
) -> anyhow::Result<Envelope<RespHello>> {
proc_msg::<InitHello, RespHello>(srv, ih)
}
fn proc_resp_hello(
srv: &mut CryptoServer,
rh: &Envelope<RespHello>,
) -> anyhow::Result<Envelope<InitConf>> {
proc_msg::<RespHello, InitConf>(srv, rh)
}
fn proc_init_conf(
srv: &mut CryptoServer,
rh: &Envelope<InitConf>,
) -> anyhow::Result<Envelope<EmptyData>> {
proc_msg::<InitConf, EmptyData>(srv, rh)
}
fn poll(srv: &mut CryptoServer) -> anyhow::Result<()> {
// Discard all events; just apply the side effects
while !matches!(srv.poll()?, PollResult::Sleep(_)) {}
Ok(())
}
// TODO: Implement Clone on our message types
fn clone_msg<Msg: AsBytes + FromBytes>(msg: &Msg) -> anyhow::Result<Msg> {
Ok(truncating_cast_into_nomut::<Msg>(msg.as_bytes())?.read())
}
fn break_payload<Msg: AsBytes + FromBytes>(
srv: &mut CryptoServer,
peer: PeerPtr,
msg: &Envelope<Msg>,
) -> anyhow::Result<Envelope<Msg>> {
let mut msg = clone_msg(msg)?;
msg.as_bytes_mut()[memoffset::offset_of!(Envelope<Msg>, payload)] ^= 0x01;
msg.seal(peer, srv)?; // Recalculate seal; we do not want to focus on "seal broken" errs
Ok(msg)
}
fn time_travel_forward(srv: &mut CryptoServer, secs: f64) {
let dur = std::time::Duration::from_secs_f64(secs);
srv.timebase.0 = srv.timebase.0.checked_sub(dur).unwrap();
}
fn check_faulty_proc_init_conf(srv: &mut CryptoServer, ic_broken: &Envelope<InitConf>) {
let mut buf = MsgBuf::zero();
let res = srv.handle_msg(ic_broken.as_bytes(), buf.as_mut_slice());
assert!(res.is_err());
}
fn check_retransmission(
srv: &mut CryptoServer,
ic: &Envelope<InitConf>,
ic_broken: &Envelope<InitConf>,
rc: &Envelope<EmptyData>,
) -> anyhow::Result<()> {
// Processing the same InitConf message again leads to retransmission (i.e. exactly the
// same output)
let rc_dup = proc_init_conf(srv, ic)?;
assert_eq!(rc.as_bytes(), rc_dup.as_bytes());
// Though if we directly call handle_init_conf() we get an error, since
// retransmission is not handled by the cryptographic code itself
let mut discard_resp_conf = EmptyData::new_zeroed();
let res = srv.handle_init_conf(&ic.payload, &mut discard_resp_conf);
assert!(res.is_err());
// Obviously, a broken InitConf message should still be rejected
check_faulty_proc_init_conf(srv, ic_broken);
Ok(())
}
let (ska, pka) = keypair()?;
let (skb, pkb) = keypair()?;
// initialize server and a pre-shared key
let mut a = CryptoServer::new(ska, pka.clone());
let mut b = CryptoServer::new(skb, pkb.clone());
// introduce peers to each other
let b_peer = a.add_peer(None, pkb)?;
let a_peer = b.add_peer(None, pka)?;
// Execute protocol up till the responder confirmation (EmptyData)
let ih1 = proc_initiation(&mut a, b_peer)?;
let rh1 = proc_init_hello(&mut b, &ih1)?;
let ic1 = proc_resp_hello(&mut a, &rh1)?;
let rc1 = proc_init_conf(&mut b, &ic1)?;
// Modified version of ic1 and rc1, for tests that require it
let ic1_broken = break_payload(&mut a, b_peer, &ic1)?;
assert_ne!(ic1.as_bytes(), ic1_broken.as_bytes());
// Modified version of rc1, for tests that require it
let rc1_broken = break_payload(&mut b, a_peer, &rc1)?;
assert_ne!(rc1.as_bytes(), rc1_broken.as_bytes());
// Retransmission works as designed
check_retransmission(&mut b, &ic1, &ic1_broken, &rc1)?;
// Even with a couple of poll operations in between (polling clears the cache
// after a two-minute timeout, which we should never hit in this test)
for _ in 0..4 {
poll(&mut b)?;
check_retransmission(&mut b, &ic1, &ic1_broken, &rc1)?;
}
// We can even validate that the data is coming out of the cache by changing the cache
// to use our broken messages. It does not matter that these messages are cryptographically
// broken since we insert them manually into the cache
// a_peer.known_init_conf_response()
KnownInitConfResponsePtr::insert_for_request_msg(
&mut b,
a_peer,
&ic1_broken,
rc1_broken.clone(),
);
check_retransmission(&mut b, &ic1_broken, &ic1, &rc1_broken)?;
// Let's reset to the correct message though
KnownInitConfResponsePtr::insert_for_request_msg(&mut b, a_peer, &ic1, rc1.clone());
// Again, nothing changes after calling poll
poll(&mut b)?;
check_retransmission(&mut b, &ic1, &ic1_broken, &rc1)?;
// Except if we jump forward in time past the point where the responder would
// start rekeying; in that case the automatic timeout fires and the cache is cleared
time_travel_forward(&mut b, REKEY_AFTER_TIME_RESPONDER);
// As long as we do not call poll, everything is fine
check_retransmission(&mut b, &ic1, &ic1_broken, &rc1)?;
// But after we do, the response is gone and cannot be recreated
// since the biscuit is stale
poll(&mut b)?;
check_faulty_proc_init_conf(&mut b, &ic1); // ic1 is now effectively broken
assert!(b.peers[0].known_init_conf_response.is_none()); // The cache is gone
Ok(())
}
}

View File

@@ -0,0 +1,332 @@
use std::{
borrow::Borrow,
io::{BufRead, BufReader, Write},
os::unix::net::UnixStream,
process::Stdio,
thread::sleep,
time::Duration,
};
use anyhow::{bail, Context};
use command_fds::{CommandFdExt, FdMapping};
use hex_literal::hex;
use rosenpass::api::{
self, add_listen_socket_response_status, add_psk_broker_response_status,
supply_keypair_response_status,
};
use rosenpass_util::{
b64::B64Display,
file::LoadValueB64,
io::IoErrorKind,
length_prefix_encoding::{decoder::LengthPrefixDecoder, encoder::LengthPrefixEncoder},
mem::{DiscardResultExt, MoveExt},
mio::WriteWithFileDescriptors,
zerocopy::ZerocopySliceExt,
};
use std::os::fd::{AsFd, AsRawFd};
use tempfile::TempDir;
use zerocopy::AsBytes;
use rosenpass::protocol::SymKey;
struct KillChild(std::process::Child);
impl Drop for KillChild {
fn drop(&mut self) {
self.0.kill().discard_result();
self.0.wait().discard_result()
}
}
#[test]
fn api_integration_api_setup() -> anyhow::Result<()> {
rosenpass_secret_memory::policy::secret_policy_use_only_malloc_secrets();
let dir = TempDir::with_prefix("rosenpass-api-integration-test")?;
macro_rules! tempfile {
($($lst:expr),+) => {{
let mut buf = dir.path().to_path_buf();
$(buf.push($lst);)*
buf
}}
}
let peer_a_endpoint = "[::1]:0";
let peer_a_listen = std::net::UdpSocket::bind(peer_a_endpoint)?;
let peer_a_endpoint = format!("{}", peer_a_listen.local_addr()?);
let peer_a_keypair = config::Keypair::new(tempfile!("a.pk"), tempfile!("a.sk"));
let peer_b_osk = tempfile!("b.osk");
let peer_b_wg_device = "mock_device";
let peer_b_wg_peer_id = hex!(
"
93 0f ee 77 0c 6b 54 7e 13 5f 13 92 21 97 26 53
7d 77 4a 6a 0f 6c eb 1a dd 6e 5b c4 1b 92 cd 99
"
);
use rosenpass::config;
let peer_a = config::Rosenpass {
config_file_path: tempfile!("a.config"),
keypair: None,
listen: vec![], // TODO: This could collide by accident
verbosity: config::Verbosity::Verbose,
api: api::config::ApiConfig {
listen_path: vec![tempfile!("a.sock")],
listen_fd: vec![],
stream_fd: vec![],
},
peers: vec![config::RosenpassPeer {
public_key: tempfile!("b.pk"),
key_out: None,
endpoint: None,
pre_shared_key: None,
wg: Some(config::WireGuard {
device: peer_b_wg_device.to_string(),
peer: format!("{}", peer_b_wg_peer_id.fmt_b64::<8129>()),
extra_params: vec![],
}),
}],
};
let peer_b_keypair = config::Keypair::new(tempfile!("b.pk"), tempfile!("b.sk"));
let peer_b = config::Rosenpass {
config_file_path: tempfile!("b.config"),
keypair: Some(peer_b_keypair.clone()),
listen: vec![],
verbosity: config::Verbosity::Verbose,
api: api::config::ApiConfig {
listen_path: vec![tempfile!("b.sock")],
listen_fd: vec![],
stream_fd: vec![],
},
peers: vec![config::RosenpassPeer {
public_key: tempfile!("a.pk"),
key_out: Some(peer_b_osk.clone()),
endpoint: Some(peer_a_endpoint.to_owned()),
pre_shared_key: None,
wg: None,
}],
};
// Generate the keys
rosenpass::cli::testing::generate_and_save_keypair(
peer_a_keypair.secret_key.clone(),
peer_a_keypair.public_key.clone(),
)?;
rosenpass::cli::testing::generate_and_save_keypair(
peer_b_keypair.secret_key.clone(),
peer_b_keypair.public_key.clone(),
)?;
// Write the configuration files
peer_a.commit()?;
peer_b.commit()?;
let (deliberate_fail_api_client, deliberate_fail_api_server) =
std::os::unix::net::UnixStream::pair()?;
let deliberate_fail_child_fd = 3;
// Start peer a
let _proc_a = KillChild(
std::process::Command::new(env!("CARGO_BIN_EXE_rosenpass"))
.args(["--api-stream-fd", &deliberate_fail_child_fd.to_string()])
.fd_mappings(vec![FdMapping {
parent_fd: deliberate_fail_api_server.move_here().as_raw_fd(),
child_fd: 3,
}])?
.args([
"exchange-config",
peer_a.config_file_path.to_str().context("")?,
])
.stdin(Stdio::null())
.stdout(Stdio::null())
.spawn()?,
);
// Start peer b
let mut proc_b = KillChild(
std::process::Command::new(env!("CARGO_BIN_EXE_rosenpass"))
.args([
"exchange-config",
peer_b.config_file_path.to_str().context("")?,
])
.stdin(Stdio::null())
.stderr(Stdio::null())
.stdout(Stdio::piped())
.spawn()?,
);
// Acquire stdout
let mut out_b = BufReader::new(proc_b.0.stdout.take().context("")?).lines();
// Now connect to the peers
let api_path = peer_a.api.listen_path[0].as_path();
// Wait for the socket to be created
let mut attempt = 0;
while !api_path.exists() {
sleep(Duration::from_millis(200));
attempt += 1;
assert!(
attempt < 50,
"API socket failed to appear even after 10 seconds"
);
}
let api = UnixStream::connect(api_path)?;
let (psk_broker_sock, psk_broker_server_sock) = UnixStream::pair()?;
// Send AddListenSocket request
{
let fd = peer_a_listen.as_fd();
let mut fds = vec![&fd].into();
let mut api = WriteWithFileDescriptors::<UnixStream, _, _, _>::new(&api, &mut fds);
LengthPrefixEncoder::from_message(api::AddListenSocketRequest::new().as_bytes())
.write_all_to_stdio(&mut api)?;
assert!(fds.is_empty(), "Failed to write all file descriptors");
std::mem::forget(peer_a_listen);
}
// Read response
{
let mut decoder = LengthPrefixDecoder::new([0u8; api::MAX_RESPONSE_LEN]);
let res = decoder.read_all_from_stdio(&api)?;
let res = res.zk_parse::<api::AddListenSocketResponse>()?;
assert_eq!(
*res,
api::AddListenSocketResponse::new(add_listen_socket_response_status::OK)
);
}
// Deliberately break API connection given via FD; this checks that the
// API connections are closed when invalid data is received and it also
// implicitly checks that other connections are unaffected
{
use std::io::ErrorKind as K;
let client = deliberate_fail_api_client;
let err = loop {
if let Err(e) = client.borrow().write(&[0xffu8; 16]) {
break e;
}
};
// NotConnected happens on Mac
assert!(matches!(
err.io_error_kind(),
K::ConnectionReset | K::BrokenPipe | K::NotConnected
));
}
// Send SupplyKeypairRequest
{
use rustix::fs::{open, Mode, OFlags};
let sk = open(peer_a_keypair.secret_key, OFlags::RDONLY, Mode::empty())?;
let pk = open(peer_a_keypair.public_key, OFlags::RDONLY, Mode::empty())?;
let mut fds = vec![&sk, &pk].into();
let mut api = WriteWithFileDescriptors::<UnixStream, _, _, _>::new(&api, &mut fds);
LengthPrefixEncoder::from_message(api::SupplyKeypairRequest::new().as_bytes())
.write_all_to_stdio(&mut api)?;
assert!(fds.is_empty(), "Failed to write all file descriptors");
}
// Read response
{
let mut decoder = LengthPrefixDecoder::new([0u8; api::MAX_RESPONSE_LEN]);
let res = decoder.read_all_from_stdio(&api)?;
let res = res.zk_parse::<api::SupplyKeypairResponse>()?;
assert_eq!(
*res,
api::SupplyKeypairResponse::new(supply_keypair_response_status::OK)
);
}
// Send AddPskBroker request
{
let mut fds = vec![psk_broker_server_sock.as_fd()].into();
let mut api = WriteWithFileDescriptors::<UnixStream, _, _, _>::new(&api, &mut fds);
LengthPrefixEncoder::from_message(api::AddPskBrokerRequest::new().as_bytes())
.write_all_to_stdio(&mut api)?;
assert!(fds.is_empty(), "Failed to write all file descriptors");
}
// Read response
{
let mut decoder = LengthPrefixDecoder::new([0u8; api::MAX_RESPONSE_LEN]);
let res = decoder.read_all_from_stdio(&api)?;
let res = res.zk_parse::<api::AddPskBrokerResponse>()?;
assert_eq!(
*res,
api::AddPskBrokerResponse::new(add_psk_broker_response_status::OK)
);
}
// Wait for the keys to successfully exchange a key
let mut attempt = 0;
loop {
// Read OSK generated by A
let osk_a = {
use rosenpass_wireguard_broker::api::msgs as M;
type SetPskReqPkg = M::Envelope<M::SetPskRequest>;
type SetPskResPkg = M::Envelope<M::SetPskResponse>;
// Receive request
let mut decoder = LengthPrefixDecoder::new([0u8; M::REQUEST_MSG_BUFFER_SIZE]);
let req = decoder.read_all_from_stdio(&psk_broker_sock)?;
let req = req.zk_parse::<SetPskReqPkg>()?;
assert_eq!(req.msg_type, M::MsgType::SetPsk as u8);
assert_eq!(req.payload.peer_id, peer_b_wg_peer_id);
assert_eq!(req.payload.iface()?, peer_b_wg_device);
// Send response
let res = SetPskResPkg {
msg_type: M::MsgType::SetPsk as u8,
reserved: [0u8; 3],
payload: M::SetPskResponse {
return_code: M::SetPskResponseReturnCode::Success as u8,
},
};
LengthPrefixEncoder::from_message(res.as_bytes())
.write_all_to_stdio(&psk_broker_sock)?;
SymKey::from_slice(&req.payload.psk)
};
// Read OSK generated by B
let osk_b = {
let line = out_b.next().context("")??;
let words = line.split(' ').collect::<Vec<_>>();
// FIXED FIXED PEER-ID FIXED FILENAME STATUS
// output-key peer KZqXTZ4l2aNnkJtLPhs4D8JxHTGmRSL9w3Qr+X8JxFk= key-file "client-A-osk" exchanged
let peer_id = words
.get(2)
.with_context(|| format!("Bad rosenpass output: `{line}`"))?;
assert_eq!(
line,
format!(
"output-key peer {peer_id} key-file \"{}\" exchanged",
peer_b_osk.to_str().context("")?
)
);
SymKey::load_b64::<64, _>(peer_b_osk.clone())?
};
// TODO: This may be flaky. Both rosenpass instances are not guaranteed to produce
// the same number of output events; they merely guarantee eventual consistency of OSK.
// Ideally, we should use tokio to read any number of generated OSKs and indicate
// success on consensus
match osk_a.secret() == osk_b.secret() {
true => break,
false if attempt > 10 => bail!("Peers did not produce a matching key even after ten attempts. Something is wrong with the key exchange!"),
false => {},
};
attempt += 1;
}
Ok(())
}

View File

@@ -0,0 +1,194 @@
use std::{
io::{BufRead, BufReader},
net::ToSocketAddrs,
os::unix::net::UnixStream,
process::Stdio,
};
use anyhow::{bail, Context};
use rosenpass::api;
use rosenpass_to::{ops::copy_slice_least_src, To};
use rosenpass_util::{
file::LoadValueB64,
length_prefix_encoding::{decoder::LengthPrefixDecoder, encoder::LengthPrefixEncoder},
};
use rosenpass_util::{mem::DiscardResultExt, zerocopy::ZerocopySliceExt};
use tempfile::TempDir;
use zerocopy::AsBytes;
use rosenpass::protocol::SymKey;
struct KillChild(std::process::Child);
impl Drop for KillChild {
fn drop(&mut self) {
self.0.kill().discard_result();
self.0.wait().discard_result()
}
}
#[test]
fn api_integration_test() -> anyhow::Result<()> {
rosenpass_secret_memory::policy::secret_policy_use_only_malloc_secrets();
let dir = TempDir::with_prefix("rosenpass-api-integration-test")?;
macro_rules! tempfile {
($($lst:expr),+) => {{
let mut buf = dir.path().to_path_buf();
$(buf.push($lst);)*
buf
}}
}
let peer_a_endpoint = "[::1]:61423";
let peer_a_osk = tempfile!("a.osk");
let peer_b_osk = tempfile!("b.osk");
use rosenpass::config;
let peer_a_keypair = config::Keypair::new(tempfile!("a.pk"), tempfile!("a.sk"));
let peer_a = config::Rosenpass {
config_file_path: tempfile!("a.config"),
keypair: Some(peer_a_keypair.clone()),
listen: peer_a_endpoint.to_socket_addrs()?.collect(), // TODO: This could collide by accident
verbosity: config::Verbosity::Verbose,
api: api::config::ApiConfig {
listen_path: vec![tempfile!("a.sock")],
listen_fd: vec![],
stream_fd: vec![],
},
peers: vec![config::RosenpassPeer {
public_key: tempfile!("b.pk"),
key_out: Some(peer_a_osk.clone()),
endpoint: None,
pre_shared_key: None,
wg: None,
}],
};
let peer_b_keypair = config::Keypair::new(tempfile!("b.pk"), tempfile!("b.sk"));
let peer_b = config::Rosenpass {
config_file_path: tempfile!("b.config"),
keypair: Some(peer_b_keypair.clone()),
listen: vec![],
verbosity: config::Verbosity::Verbose,
api: api::config::ApiConfig {
listen_path: vec![tempfile!("b.sock")],
listen_fd: vec![],
stream_fd: vec![],
},
peers: vec![config::RosenpassPeer {
public_key: tempfile!("a.pk"),
key_out: Some(peer_b_osk.clone()),
endpoint: Some(peer_a_endpoint.to_owned()),
pre_shared_key: None,
wg: None,
}],
};
// Generate the keys
rosenpass::cli::testing::generate_and_save_keypair(
peer_a_keypair.secret_key.clone(),
peer_a_keypair.public_key.clone(),
)?;
rosenpass::cli::testing::generate_and_save_keypair(
peer_b_keypair.secret_key.clone(),
peer_b_keypair.public_key.clone(),
)?;
// Write the configuration files
peer_a.commit()?;
peer_b.commit()?;
// Start peer a
let mut proc_a = KillChild(
std::process::Command::new(env!("CARGO_BIN_EXE_rosenpass"))
.args([
"exchange-config",
peer_a.config_file_path.to_str().context("")?,
])
.stdin(Stdio::null())
.stdout(Stdio::piped())
.spawn()?,
);
// Start peer b
let mut proc_b = KillChild(
std::process::Command::new(env!("CARGO_BIN_EXE_rosenpass"))
.args([
"exchange-config",
peer_b.config_file_path.to_str().context("")?,
])
.stdin(Stdio::null())
.stdout(Stdio::piped())
.spawn()?,
);
// Acquire stdout
let mut out_a = BufReader::new(proc_a.0.stdout.take().context("")?).lines();
let mut out_b = BufReader::new(proc_b.0.stdout.take().context("")?).lines();
// Wait for the keys to successfully exchange a key
let mut attempt = 0;
loop {
let line_a = out_a.next().context("")??;
let line_b = out_b.next().context("")??;
let words_a = line_a.split(' ').collect::<Vec<_>>();
let words_b = line_b.split(' ').collect::<Vec<_>>();
// FIXED FIXED PEER-ID FIXED FILENAME STATUS
// output-key peer KZqXTZ4l2aNnkJtLPhs4D8JxHTGmRSL9w3Qr+X8JxFk= key-file "client-A-osk" exchanged
let peer_a_id = words_b
.get(2)
.with_context(|| format!("Bad rosenpass output: `{line_b}`"))?;
let peer_b_id = words_a
.get(2)
.with_context(|| format!("Bad rosenpass output: `{line_a}`"))?;
assert_eq!(
line_a,
format!(
"output-key peer {peer_b_id} key-file \"{}\" exchanged",
peer_a_osk.to_str().context("")?
)
);
assert_eq!(
line_b,
format!(
"output-key peer {peer_a_id} key-file \"{}\" exchanged",
peer_b_osk.to_str().context("")?
)
);
// Read OSKs
let osk_a = SymKey::load_b64::<64, _>(peer_a_osk.clone())?;
let osk_b = SymKey::load_b64::<64, _>(peer_b_osk.clone())?;
match osk_a.secret() == osk_b.secret() {
true => break,
false if attempt > 10 => bail!("Peers did not produce a matching key even after ten attempts. Something is wrong with the key exchange!"),
false => {},
};
attempt += 1;
}
// Now connect to the peers
let api_a = UnixStream::connect(&peer_a.api.listen_path[0])?;
let api_b = UnixStream::connect(&peer_b.api.listen_path[0])?;
for conn in ([api_a, api_b]).iter() {
let mut echo = [0u8; 256];
copy_slice_least_src("Hello World".as_bytes()).to(&mut echo);
let req = api::PingRequest::new(echo);
LengthPrefixEncoder::from_message(req.as_bytes()).write_all_to_stdio(conn)?;
let mut decoder = LengthPrefixDecoder::new([0u8; api::MAX_RESPONSE_LEN]);
let res = decoder.read_all_from_stdio(conn)?;
let res = res.zk_parse::<api::PingResponse>()?;
assert_eq!(*res, api::PingResponse::new(echo));
}
Ok(())
}

View File

@@ -1,3 +1,4 @@
use std::fs::File;
use std::{
fs,
net::UdpSocket,
@@ -5,9 +6,10 @@ use std::{
sync::{Arc, Mutex},
time::Duration,
};
use tempfile::tempdir;
use clap::Parser;
use rosenpass::{app_server::AppServerTestBuilder, cli::CliArgs};
use rosenpass::{app_server::AppServerTestBuilder, cli::CliArgs, config::EXAMPLE_CONFIG};
use rosenpass_secret_memory::{Public, Secret};
use rosenpass_wireguard_broker::{WireguardBrokerMio, WG_KEY_LEN, WG_PEER_LEN};
use serial_test::serial;
@@ -15,9 +17,19 @@ use std::io::Write;
const BIN: &str = "rosenpass";
fn setup_tests() {
use rosenpass_secret_memory as SM;
#[cfg(feature = "experiment_memfd_secret")]
SM::secret_policy_try_use_memfd_secrets();
#[cfg(not(feature = "experiment_memfd_secret"))]
SM::secret_policy_use_only_malloc_secrets();
}
// check that we can generate keys
#[test]
fn generate_keys() {
setup_tests();
let tmpdir = PathBuf::from(env!("CARGO_TARGET_TMPDIR")).join("keygen");
fs::create_dir_all(&tmpdir).unwrap();
@@ -94,14 +106,11 @@ fn run_server_client_exchange(
.unwrap();
std::thread::spawn(move || {
cli.command
.run(Some(
server_test_builder
.termination_handler(Some(server_terminate_rx))
.build()
.unwrap(),
))
let test_helpers = server_test_builder
.termination_handler(Some(server_terminate_rx))
.build()
.unwrap();
cli.run(None, Some(test_helpers)).unwrap();
});
let cli = CliArgs::try_parse_from(
@@ -112,14 +121,11 @@ fn run_server_client_exchange(
.unwrap();
std::thread::spawn(move || {
cli.command
.run(Some(
client_test_builder
.termination_handler(Some(client_terminate_rx))
.build()
.unwrap(),
))
let test_helpers = client_test_builder
.termination_handler(Some(client_terminate_rx))
.build()
.unwrap();
cli.run(None, Some(test_helpers)).unwrap();
});
// give them some time to do the key exchange under load
@@ -130,10 +136,51 @@ fn run_server_client_exchange(
client_terminate.send(()).unwrap();
}
// verify that EXAMPLE_CONFIG is correct
#[test]
fn check_example_config() {
setup_tests();
setup_logging();
let tmp_dir = tempdir().unwrap();
let config_path = tmp_dir.path().join("config.toml");
let mut config_file = File::create(config_path.to_owned()).unwrap();
config_file
.write_all(
EXAMPLE_CONFIG
.replace("/path/to", tmp_dir.path().to_str().unwrap())
.as_bytes(),
)
.unwrap();
let output = test_bin::get_test_bin(BIN)
.args(["gen-keys"])
.arg(&config_path)
.output()
.expect("EXAMPLE_CONFIG not valid");
fs::copy(
tmp_dir.path().join("rp-public-key"),
tmp_dir.path().join("rp-peer-public-key"),
)
.unwrap();
let output = test_bin::get_test_bin(BIN)
.args(["validate"])
.arg(&config_path)
.output()
.expect("EXAMPLE_CONFIG not valid");
let stderr = String::from_utf8_lossy(&output.stderr);
assert!(stderr.contains("has passed all logical checks"));
}
// check that we can exchange keys
#[test]
#[serial]
fn check_exchange_under_normal() {
setup_tests();
setup_logging();
let tmpdir = PathBuf::from(env!("CARGO_TARGET_TMPDIR")).join("exchange");
@@ -206,6 +253,7 @@ fn check_exchange_under_normal() {
#[test]
#[serial]
fn check_exchange_under_dos() {
setup_tests();
setup_logging();
//Generate binary with responder with feature integration_test
@@ -283,9 +331,11 @@ struct MockBrokerInner {
interface: Option<String>,
}
#[allow(dead_code)]
#[derive(Debug, Default)]
struct MockBroker {
inner: Arc<Mutex<MockBrokerInner>>,
mio_token: Option<mio::Token>,
}
impl WireguardBrokerMio for MockBroker {
@@ -294,8 +344,9 @@ impl WireguardBrokerMio for MockBroker {
fn register(
&mut self,
_registry: &mio::Registry,
_token: mio::Token,
token: mio::Token,
) -> Result<(), Self::MioError> {
self.mio_token = Some(token);
Ok(())
}
@@ -304,8 +355,13 @@ impl WireguardBrokerMio for MockBroker {
}
fn unregister(&mut self, _registry: &mio::Registry) -> Result<(), Self::MioError> {
self.mio_token = None;
Ok(())
}
fn mio_token(&self) -> Option<mio::Token> {
self.mio_token
}
}
impl rosenpass_wireguard_broker::WireGuardBroker for MockBroker {
@@ -321,7 +377,7 @@ impl rosenpass_wireguard_broker::WireGuardBroker for MockBroker {
if let Ok(ref mut mutex) = lock {
**mutex = MockBrokerInner {
psk: Some(config.psk.clone()),
peer_id: Some(config.peer_id.clone()),
peer_id: Some(*config.peer_id),
interface: Some(std::str::from_utf8(config.interface).unwrap().to_string()),
};
break Ok(());

View File

@@ -20,9 +20,9 @@ rosenpass-ciphers = { workspace = true }
rosenpass-cipher-traits = { workspace = true }
rosenpass-secret-memory = { workspace = true }
rosenpass-util = { workspace = true }
rosenpass-wireguard-broker = {workspace = true}
rosenpass-wireguard-broker = { workspace = true }
tokio = {workspace = true}
tokio = { workspace = true }
[target.'cfg(any(target_os = "linux", target_os = "freebsd"))'.dependencies]
ctrlc-async = "3.2"
@@ -35,8 +35,9 @@ netlink-packet-generic = "0.3"
netlink-packet-wireguard = "0.2"
[dev-dependencies]
tempfile = {workspace = true}
stacker = {workspace = true}
tempfile = { workspace = true }
stacker = { workspace = true }
[features]
enable_memfd_alloc = []
experiment_memfd_secret = []
experiment_libcrux = ["rosenpass-ciphers/experiment_libcrux"]

View File

@@ -2,7 +2,9 @@ use std::{net::SocketAddr, path::PathBuf};
use anyhow::Result;
#[cfg(any(target_os = "linux", target_os = "freebsd"))]
use crate::key::WG_B64_LEN;
#[derive(Default)]
pub struct ExchangePeer {
pub public_keys_dir: PathBuf,
@@ -186,8 +188,7 @@ pub async fn exchange(options: ExchangeOptions) -> Result<()> {
let pk = SPk::load(&pqpk)?;
let mut srv = Box::new(AppServer::new(
sk,
pk,
Some((sk, pk)),
if let Some(listen) = options.listen {
vec![listen]
} else {

View File

@@ -1,11 +1,12 @@
use std::{
fs::{self, DirBuilder},
ops::DerefMut,
os::unix::fs::{DirBuilderExt, PermissionsExt},
path::Path,
};
use anyhow::{anyhow, Result};
use rosenpass_util::file::{LoadValueB64, StoreValueB64};
use rosenpass_util::file::{LoadValueB64, StoreValue, StoreValueB64};
use zeroize::Zeroize;
use rosenpass::protocol::{SPk, SSk};
@@ -56,8 +57,8 @@ pub fn genkey(private_keys_dir: &Path) -> Result<()> {
if !pqsk_path.exists() && !pqpk_path.exists() {
let mut pqsk = SSk::random();
let mut pqpk = SPk::random();
StaticKem::keygen(pqsk.secret_mut(), pqpk.secret_mut())?;
pqpk.store_secret(pqpk_path)?;
StaticKem::keygen(pqsk.secret_mut(), pqpk.deref_mut())?;
pqpk.store(pqpk_path)?;
pqsk.store_secret(pqsk_path)?;
} else {
eprintln!(

View File

@@ -11,9 +11,9 @@ mod key;
#[tokio::main]
async fn main() {
#[cfg(feature = "enable_memfd_alloc")]
#[cfg(feature = "experiment_memfd_secret")]
policy::secret_policy_try_use_memfd_secrets();
#[cfg(not(feature = "enable_memfd_alloc"))]
#[cfg(not(feature = "experiment_memfd_secret"))]
policy::secret_policy_use_only_malloc_secrets();
let cli = match Cli::parse(std::env::args().peekable()) {

View File

@@ -21,6 +21,6 @@ log = { workspace = true }
[dev-dependencies]
allocator-api2-tests = { workspace = true }
tempfile = {workspace = true}
base64ct = {workspace = true}
procspawn = {workspace = true}
tempfile = { workspace = true }
base64ct = { workspace = true }
procspawn = { workspace = true }

View File

@@ -6,6 +6,7 @@ pub mod alloc;
mod public;
pub use crate::public::Public;
pub use crate::public::PublicBox;
mod secret;
pub use crate::secret::Secret;

View File

@@ -172,12 +172,154 @@ impl<const N: usize> StoreValueB64Writer for Public<N> {
}
}
#[derive(Clone, Hash, PartialEq, Eq, PartialOrd, Ord)]
#[repr(transparent)]
pub struct PublicBox<const N: usize> {
pub inner: Box<Public<N>>,
}
impl<const N: usize> PublicBox<N> {
/// Create a new [PublicBox] from a byte slice
pub fn from_slice(value: &[u8]) -> Self {
Self {
inner: Box::new(Public::from_slice(value)),
}
}
/// Create a new [PublicBox] from a byte array
pub fn new(value: [u8; N]) -> Self {
Self {
inner: Box::new(Public::new(value)),
}
}
/// Create a zero initialized [PublicBox]
pub fn zero() -> Self {
Self {
inner: Box::new(Public::zero()),
}
}
/// Create a random initialized [PublicBox]
pub fn random() -> Self {
Self {
inner: Box::new(Public::random()),
}
}
/// Randomize all bytes in an existing [PublicBox]
pub fn randomize(&mut self) {
self.inner.randomize()
}
}
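// Editor's sketch (not part of the diff): PublicBox mirrors Public but keeps its
// byte array on the heap, which is why large KEM public keys such as SPk use it.
// The size 32 below is arbitrary and only for illustration.
fn public_box_sketch() {
    let mut pk = PublicBox::<32>::zero();
    pk.randomize();
    assert_eq!(pk.len(), 32); // Deref to [u8; 32] gives array/slice access
}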
impl<const N: usize> Randomize for PublicBox<N> {
fn try_fill<R: Rng + ?Sized>(&mut self, rng: &mut R) -> Result<(), rand::Error> {
self.inner.try_fill(rng)
}
}
impl<const N: usize> fmt::Debug for PublicBox<N> {
fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
debug_crypto_array(&**self, fmt)
}
}
impl<const N: usize> Deref for PublicBox<N> {
type Target = [u8; N];
fn deref(&self) -> &[u8; N] {
self.inner.deref()
}
}
impl<const N: usize> DerefMut for PublicBox<N> {
fn deref_mut(&mut self) -> &mut [u8; N] {
self.inner.deref_mut()
}
}
impl<const N: usize> Borrow<[u8]> for PublicBox<N> {
fn borrow(&self) -> &[u8] {
self.deref()
}
}
impl<const N: usize> BorrowMut<[u8]> for PublicBox<N> {
fn borrow_mut(&mut self) -> &mut [u8] {
self.deref_mut()
}
}
impl<const N: usize> LoadValue for PublicBox<N> {
type Error = anyhow::Error;
// This is implemented separately from Public to avoid allocating too much stack memory
fn load<P: AsRef<Path>>(path: P) -> anyhow::Result<Self> {
let mut p = Self::random();
fopen_r(path)?.read_exact_to_end(p.deref_mut())?;
Ok(p)
}
}
impl<const N: usize> StoreValue for PublicBox<N> {
type Error = anyhow::Error;
fn store<P: AsRef<Path>>(&self, path: P) -> anyhow::Result<()> {
self.inner.store(path)
}
}
impl<const N: usize> LoadValueB64 for PublicBox<N> {
type Error = anyhow::Error;
// This is implemented separately from Public to avoid allocating too much stack memory
fn load_b64<const F: usize, P: AsRef<Path>>(path: P) -> Result<Self, Self::Error>
where
Self: Sized,
{
// A vector is used here to ensure heap allocation without copy from stack
let mut f = vec![0u8; F];
let mut v = PublicBox::zero();
let p = path.as_ref();
let len = fopen_r(p)?
.read_slice_to_end(&mut f)
.with_context(|| format!("Could not load file {p:?}"))?;
b64_decode(&f[0..len], v.deref_mut())
.with_context(|| format!("Could not decode base64 file {p:?}"))?;
Ok(v)
}
}
impl<const N: usize> StoreValueB64 for PublicBox<N> {
type Error = anyhow::Error;
fn store_b64<const F: usize, P: AsRef<Path>>(&self, path: P) -> anyhow::Result<()> {
self.inner.store_b64::<F, P>(path)
}
}
impl<const N: usize> StoreValueB64Writer for PublicBox<N> {
type Error = anyhow::Error;
fn store_b64_writer<const F: usize, W: std::io::Write>(
&self,
writer: W,
) -> Result<(), Self::Error> {
self.inner.store_b64_writer::<F, W>(writer)
}
}
#[cfg(test)]
mod tests {
#[cfg(test)]
#[allow(clippy::module_inception)]
mod tests {
use crate::Public;
use crate::{Public, PublicBox};
use rosenpass_util::{
b64::b64_encode,
file::{
@@ -185,32 +327,35 @@ mod tests {
Visibility,
},
};
use std::{fs, os::unix::fs::PermissionsExt};
use std::{fs, ops::Deref, os::unix::fs::PermissionsExt};
use tempfile::tempdir;
/// test loading a public from an example file, and then storing it again in a different file
#[test]
fn test_public_load_store() {
const N: usize = 100;
/// Number of bytes in payload for load and store tests
const N: usize = 100;
/// Convenience function for running a load/store test
fn run_load_store_test<
T: LoadValue<Error = anyhow::Error>
+ StoreValue<Error = anyhow::Error>
+ Deref<Target = [u8; N]>,
>() {
// Generate original random bytes
let original_bytes: [u8; N] = [rand::random(); N];
// Create a temporary directory
let temp_dir = tempdir().unwrap();
// Store the original public to an example file in the temporary directory
// Store the original bytes to an example file in the temporary directory
let example_file = temp_dir.path().join("example_file");
std::fs::write(example_file.clone(), &original_bytes).unwrap();
std::fs::write(&example_file, original_bytes).unwrap();
// Load the public from the example file
// Load the value from the example file into our generic type
let loaded_public = T::load(&example_file).unwrap();
let loaded_public = Public::load(&example_file).unwrap();
// Check that the loaded value matches the original bytes
assert_eq!(loaded_public.deref(), &original_bytes);
// Check that the loaded public matches the original bytes
assert_eq!(&loaded_public.value, &original_bytes);
// Store the loaded public to a different file in the temporary directory
// Store the loaded value to a different file in the temporary directory
let new_file = temp_dir.path().join("new_file");
loaded_public.store(&new_file).unwrap();
@@ -224,10 +369,13 @@ mod tests {
assert_eq!(new_file_contents, original_file_contents);
}
/// test loading a base64 encoded public from an example file, and then storing it again in a different file
#[test]
fn test_public_load_store_base64() {
const N: usize = 100;
/// Convenience function for running a base64 load/store test
fn run_base64_load_store_test<
T: LoadValueB64<Error = anyhow::Error>
+ StoreValueB64<Error = anyhow::Error>
+ StoreValueB64Writer<Error = anyhow::Error>
+ Deref<Target = [u8; N]>,
>() {
// Generate original random bytes
let original_bytes: [u8; N] = [rand::random(); N];
// Create a temporary directory
@@ -238,9 +386,9 @@ mod tests {
std::fs::write(&example_file, encoded_public).unwrap();
// Load the public from the example file
let loaded_public = Public::load_b64::<{ N * 2 }, _>(&example_file).unwrap();
let loaded_public = T::load_b64::<{ N * 2 }, _>(&example_file).unwrap();
// Check that the loaded public matches the original bytes
assert_eq!(&loaded_public.value, &original_bytes);
assert_eq!(loaded_public.deref(), &original_bytes);
// Store the loaded public to a different file in the temporary directory
let new_file = temp_dir.path().join("new_file");
@@ -253,7 +401,7 @@ mod tests {
// Check that the contents of the new file match the original file
assert_eq!(new_file_contents, original_file_contents);
//Check new file permissions are public
// Check new file permissions are public
let metadata = fs::metadata(&new_file).unwrap();
assert_eq!(metadata.permissions().mode() & 0o000777, 0o644);
@@ -271,9 +419,35 @@ mod tests {
// Check that the contents of the new file match the original file
assert_eq!(new_file_contents, original_file_contents);
//Check new file permissions are public
// Check new file permissions are public
let metadata = fs::metadata(&new_file).unwrap();
assert_eq!(metadata.permissions().mode() & 0o000777, 0o644);
}
/// Test loading a [Public] from an example file, and then storing it again in a new file
#[test]
fn test_public_load_store() {
run_load_store_test::<Public<N>>();
}
/// Test loading a [PublicBox] from an example file, and then storing it again in a new file
#[test]
fn test_public_box_load_store() {
run_load_store_test::<PublicBox<N>>();
}
/// Test loading a base64-encoded [Public] from an example file, and then storing it again
/// in a different file
#[test]
fn test_public_load_store_base64() {
run_base64_load_store_test::<Public<N>>();
}
/// Test loading a base64-encoded [PublicBox] from an example file, and then storing it
/// again in a different file
#[test]
fn test_public_box_load_store_base64() {
run_base64_load_store_test::<PublicBox<N>>();
}
}
}

View File

@@ -1,6 +1,5 @@
use std::cell::RefCell;
use std::collections::HashMap;
use std::convert::TryInto;
use std::fmt;
use std::ops::{Deref, DerefMut};
use std::path::Path;
@@ -387,7 +386,7 @@ mod test {
// Store the original secret to an example file in the temporary directory
let example_file = temp_dir.path().join("example_file");
std::fs::write(example_file.clone(), &original_bytes).unwrap();
std::fs::write(&example_file, original_bytes).unwrap();
// Load the secret from the example file
let loaded_secret = Secret::load(&example_file).unwrap();

View File

@@ -1,3 +1,5 @@
#![warn(missing_docs)]
#![recursion_limit = "256"]
#![doc = include_str!(concat!(env!("CARGO_MANIFEST_DIR"), "/README.md"))]
#[cfg(doctest)]

View File

@@ -5,23 +5,70 @@ use crate::CondenseBeside;
pub struct Beside<Val, Ret>(pub Val, pub Ret);
impl<Val, Ret> Beside<Val, Ret> {
/// Get an immutable reference to the destination value
///
/// # Example
/// ```
/// use rosenpass_to::Beside;
///
/// let beside = Beside(1, 2);
/// assert_eq!(beside.dest(), &1);
/// ```
pub fn dest(&self) -> &Val {
&self.0
}
/// Get an immutable reference to the return value
///
/// # Example
/// ```
/// use rosenpass_to::Beside;
///
/// let beside = Beside(1, 2);
/// assert_eq!(beside.ret(), &2);
/// ```
pub fn ret(&self) -> &Ret {
&self.1
}
/// Get a mutable reference to the destination value
///
/// # Example
/// ```
/// use rosenpass_to::Beside;
///
/// let mut beside = Beside(1, 2);
/// *beside.dest_mut() = 3;
/// assert_eq!(beside.dest(), &3);
/// ```
pub fn dest_mut(&mut self) -> &mut Val {
&mut self.0
}
/// Get a mutable reference to the return value
///
/// # Example
/// ```
/// use rosenpass_to::Beside;
///
/// let mut beside = Beside(1, 2);
/// *beside.ret_mut() = 3;
/// assert_eq!(beside.ret(), &3);
/// ```
pub fn ret_mut(&mut self) -> &mut Ret {
&mut self.1
}
/// Perform beside condensation. See [CondenseBeside]
///
/// # Example
/// ```
/// use rosenpass_to::Beside;
/// use rosenpass_to::CondenseBeside;
///
/// let beside = Beside(1, ());
/// assert_eq!(beside.condense(), 1);
/// ```
pub fn condense(self) -> <Ret as CondenseBeside<Val>>::Condensed
where
Ret: CondenseBeside<Val>,

View File

@@ -7,8 +7,10 @@
/// The function [Beside::condense()](crate::Beside::condense) is a shorthand for using the
/// condense trait.
pub trait CondenseBeside<Val> {
/// The type that results from condensation.
type Condensed;
/// Takes ownership of `self` and condenses it with the given value.
fn condense(self, ret: Val) -> Self::Condensed;
}
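// Editor's sketch (not part of the diff): the shape of a CondenseBeside impl,
// mirroring the `Beside(1, ()).condense() == 1` doc example above. The crate is
// expected to ship an impl along these lines; shown purely for illustration.
impl<Val> CondenseBeside<Val> for () {
    type Condensed = Val;
    fn condense(self, ret: Val) -> Val {
        ret
    }
}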

View File

@@ -1,6 +1,7 @@
/// Helper performing explicit unsized coercion.
/// Used by the [to](crate::to()) function.
pub trait DstCoercion<Dst: ?Sized> {
/// Performs an explicit coercion to the destination type.
fn coerce_dest(&mut self) -> &mut Dst;
}
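// A brief sketch of a custom coercion, assuming only the trait declared above;
// the `Buffer` type is hypothetical. It mirrors the common case of a sized
// array handing out its unsized slice form so that slice-based destinations
// can write into it.
#[cfg(test)]
mod dst_coercion_sketch {
    use super::DstCoercion;

    /// Hypothetical owner of a fixed-size byte buffer.
    struct Buffer([u8; 4]);

    impl DstCoercion<[u8]> for Buffer {
        fn coerce_dest(&mut self) -> &mut [u8] {
            // Unsized coercion from `&mut [u8; 4]` to `&mut [u8]`.
            &mut self.0
        }
    }

    #[test]
    fn buffer_coerces_to_byte_slice() {
        let mut buf = Buffer([0; 4]);
        buf.coerce_dest()[0] = 42;
        assert_eq!(buf.0, [42, 0, 0, 0]);
    }
}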

View File

@@ -6,8 +6,8 @@
//! - `Dst: ?Sized`; (e.g. [u8]) The target to write to
//! - `Out: Sized = &mut Dst`; (e.g. &mut [u8]) A reference to the target to write to
//! - `Coercable: ?Sized + DstCoercion<Dst>`; (e.g. `[u8]`, `[u8; 16]`) Some value that
//! destination coercion can be applied to. Usually either `Dst` itself (e.g. `[u8]`) or some sized variant of
//! `Dst` (e.g. `[u8; 64]`).
//! destination coercion can be applied to. Usually either `Dst` itself (e.g. `[u8]`) or some sized variant of
//! `Dst` (e.g. `[u8; 64]`).
//! - `Ret: Sized`; (anything) must be `CondenseBeside<_>` if condensing is to be applied. The ordinary return value of a function with an output
//! - `Val: Sized + BorrowMut<Dst>`; (e.g. [u8; 16]) Some owned storage that can be borrowed as `Dst`
//! - `Condensed: Sized = CondenseBeside<Val>::Condensed`; (e.g. [u8; 16], Result<[u8; 16]>) The combination of Val and Ret after condensing was applied (`Beside<Val, Ret>::condense()`/`Ret::condense(v)` for all `v : Val`).
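// A compact sketch tying these parameters together, assuming `with_destination`
// and the `To` trait are available at the crate root as the links above suggest;
// the function and buffer names are illustrative. Here `Dst = [u8]`,
// `Val = [u8; 5]` and `Ret = ()`.
#[cfg(test)]
mod destination_sketch {
    use crate::{with_destination, To};

    /// A function with a destination: it fills a caller-provided byte buffer.
    fn fill_with(byte: u8) -> impl To<[u8], ()> {
        with_destination(move |out: &mut [u8]| out.fill(byte))
    }

    #[test]
    fn writes_into_caller_buffer() {
        let mut buf = [0u8; 5];
        // `&mut [u8; 5]` coerces to `&mut [u8]`, the `Dst` of the returned `To`.
        fill_with(7).to(&mut buf);
        assert_eq!(buf, [7u8; 5]);
    }
}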

View File

@@ -1,13 +1,16 @@
use crate::{Beside, CondenseBeside};
use std::borrow::BorrowMut;
// The To trait is the core of the to crate; most functions with destinations will either return
// an object that is an instance of this trait or they will return `-> impl To<Destination,
// Return_value>`.
//
// A quick way to implement a function with destination is to use the
// [with_destination(|param: &mut Type| ...)] higher order function.
/// The To trait is the core of the to crate; most functions with destinations will either return
/// an object that is an instance of this trait or they will return `-> impl To<Destination,
/// Return_value>`.
///
/// A quick way to implement a function with destination is to use the
/// [with_destination(|param: &mut Type| ...)] higher order function.
pub trait To<Dst: ?Sized, Ret>: Sized {
/// Writes self to the destination `out` and returns a value of type `Ret`.
///
/// This is the core method that must be implemented by all types implementing `To`.
fn to(self, out: &mut Dst) -> Ret;
/// Generate a destination on the fly with a lambda.

View File

@@ -1,20 +1,38 @@
use crate::To;
use std::marker::PhantomData;
/// A struct that wraps a closure and implements the `To` trait
///
/// This allows passing closures that operate on a destination type `Dst`
/// and return `Ret`.
///
/// # Type Parameters
/// * `Dst` - The destination type the closure operates on
/// * `Ret` - The return type of the closure
/// * `Fun` - The closure type that implements `FnOnce(&mut Dst) -> Ret`
struct ToClosure<Dst, Ret, Fun>
where
Dst: ?Sized,
Fun: FnOnce(&mut Dst) -> Ret,
{
/// The function to call.
fun: Fun,
/// Phantom data to hold the destination type
_val: PhantomData<Box<Dst>>,
}
/// Implementation of the `To` trait for ToClosure
///
/// This enables calling the wrapped closure with a destination reference.
impl<Dst, Ret, Fun> To<Dst, Ret> for ToClosure<Dst, Ret, Fun>
where
Dst: ?Sized,
Fun: FnOnce(&mut Dst) -> Ret,
{
/// Execute the wrapped closure with the given destination
///
/// # Arguments
/// * `out` - Mutable reference to the destination
fn to(self, out: &mut Dst) -> Ret {
(self.fun)(out)
}
@@ -22,6 +40,14 @@ where
/// Used to create a function with destination.
///
/// Creates a wrapper that implements the `To` trait for a closure that
/// operates on a destination type.
///
/// # Type Parameters
/// * `Dst` - The destination type the closure operates on
/// * `Ret` - The return type of the closure
/// * `Fun` - The closure type that implements `FnOnce(&mut Dst) -> Ret`
///
/// See the tutorial in [readme.md].
pub fn with_destination<Dst, Ret, Fun>(fun: Fun) -> impl To<Dst, Ret>
where

View File

@@ -16,5 +16,14 @@ base64ct = { workspace = true }
anyhow = { workspace = true }
typenum = { workspace = true }
static_assertions = { workspace = true }
rustix = {workspace = true}
zeroize = {workspace = true}
rustix = { workspace = true }
zeroize = { workspace = true }
zerocopy = { workspace = true }
thiserror = { workspace = true }
mio = { workspace = true }
tempfile = { workspace = true }
uds = { workspace = true, optional = true, features = ["mio_1xx"] }
[features]
experiment_file_descriptor_passing = ["uds"]

View File

@@ -1,8 +1,13 @@
//! Utilities for working with Base64
use base64ct::{Base64, Decoder as B64Reader, Encoder as B64Writer};
use zeroize::Zeroize;
use std::fmt::Display;
/// Formatter that displays its input as base64.
///
/// Use through [B64Display].
pub struct B64DisplayHelper<'a, const F: usize>(&'a [u8]);
impl<const F: usize> Display for B64DisplayHelper<'_, F> {
@@ -15,7 +20,25 @@ impl<const F: usize> Display for B64DisplayHelper<'_, F> {
}
}
/// Extension trait that can be used to display values as Base64
///
/// # Examples
///
/// ```
/// use rosenpass_util::b64::B64Display;
///
/// let a = vec![0,1,2,3,4,5];
/// assert_eq!(
/// format!("{}", a.fmt_b64::<10>()), // Maximum size of the encoded buffer
/// "AAECAwQF",
/// );
/// ```
pub trait B64Display {
/// Display this value as base64
///
/// # Examples
///
/// See [B64Display].
fn fmt_b64<const F: usize>(&self) -> B64DisplayHelper<F>;
}
@@ -31,6 +54,11 @@ impl<T: AsRef<[u8]>> B64Display for T {
}
}
/// Decode a base64-encoded value
///
/// # Examples
///
/// See [b64_encode].
pub fn b64_decode(input: &[u8], output: &mut [u8]) -> anyhow::Result<()> {
let mut reader = B64Reader::<Base64>::new(input).map_err(|e| anyhow::anyhow!(e))?;
match reader.decode(output) {
@@ -49,6 +77,23 @@ pub fn b64_decode(input: &[u8], output: &mut [u8]) -> anyhow::Result<()> {
}
}
/// Encode a value as base64.
///
/// ```
/// use rosenpass_util::b64::{b64_encode, b64_decode};
///
/// let bytes = b"Hello World";
///
/// let mut encoder_buffer = [0u8; 64];
/// let encoded = b64_encode(bytes, &mut encoder_buffer)?;
///
/// let mut bytes_decoded = [0u8; 11];
/// b64_decode(encoded.as_bytes(), &mut bytes_decoded)?;
/// assert_eq!(bytes, &bytes_decoded);
///
/// Ok::<(), anyhow::Error>(())
/// ```
///
pub fn b64_encode<'o>(input: &[u8], output: &'o mut [u8]) -> anyhow::Result<&'o str> {
let mut writer = B64Writer::<Base64>::new(output).map_err(|e| anyhow::anyhow!(e))?;
writer.encode(input).map_err(|e| anyhow::anyhow!(e))?;

675
util/src/build.rs Normal file
View File

@@ -0,0 +1,675 @@
//! Lazy construction of values
use crate::{
functional::ApplyExt,
mem::{SwapWithDefaultExt, SwapWithExt},
};
/// Errors returned by [ConstructionSite::erect]
#[derive(thiserror::Error, Debug, Eq, PartialEq)]
pub enum ConstructionSiteErectError<E> {
/// Attempted to erect an empty construction site
#[error("Construction site is void")]
IsVoid,
/// Attempted to erect a construction that is already standing
#[error("Construction is already built")]
AlreadyBuilt,
/// Other error
#[error("Other construction site error {0:?}")]
Other(#[from] E),
}
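// A small sketch of the `Other` variant in action, assuming only the items
// defined in this file; the always-failing `Broken` builder is hypothetical.
// A builder error surfaces through `Other` and leaves the site void.
#[cfg(test)]
mod erect_error_sketch {
    use super::{Build, ConstructionSite, ConstructionSiteErectError};

    /// Hypothetical builder that never succeeds.
    struct Broken;

    impl Build<()> for Broken {
        type Error = &'static str;

        fn build(self) -> Result<(), Self::Error> {
            Err("always fails")
        }
    }

    #[test]
    fn failing_builder_reports_other_and_voids_the_site() {
        let mut site = ConstructionSite::new(Broken);
        assert_eq!(
            site.erect(),
            Err(ConstructionSiteErectError::Other("always fails"))
        );
        // After a failed build the construction site is left empty.
        assert!(site.is_void());
    }
}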
/// A type that can build some other type
///
/// # Examples
///
/// ```
/// use rosenpass_util::build::Build;
/// use anyhow::{Context, Result};
///
/// #[derive(Eq, PartialEq, Debug)]
/// struct Person {
/// pub fav_pokemon: String,
/// pub fav_number: u8,
/// }
///
/// #[derive(Default, Clone)]
/// struct PersonBuilder {
/// pub fav_pokemon: Option<String>,
/// pub fav_number: Option<u8>,
/// }
///
/// impl Build<Person> for &PersonBuilder {
/// type Error = anyhow::Error;
///
/// fn build(self) -> Result<Person, Self::Error> {
/// let fav_pokemon = self.fav_pokemon.clone().context("Missing fav pokemon")?;
/// let fav_number = self.fav_number.context("Missing fav number")?;
/// Ok(Person {
/// fav_pokemon,
/// fav_number,
/// })
/// }
/// }
///
/// let mut person_builder = PersonBuilder::default();
/// assert!(person_builder.build().is_err());
///
/// person_builder.fav_pokemon = Some("Krabby".to_owned());
/// person_builder.fav_number = Some(0);
/// assert_eq!(
/// person_builder.build().unwrap(),
/// Person {
/// fav_pokemon: "Krabby".to_owned(),
/// fav_number: 0
/// }
/// );
/// ```
pub trait Build<T>: Sized {
/// Error returned by the builder
type Error;
/// Build the type
///
/// # Examples
///
/// See [Self].
fn build(self) -> Result<T, Self::Error>;
}
/// A type that can be incrementally built from a type that can [Build] it
///
/// This is similar to an [Option], where [Self::Void] is [Option::None],
/// [Self::Product] is [Option::Some], except that there is a third
/// intermediate state [Self::Builder] that represents a Some/Product value
/// in the process of being made.
///
/// # Examples
///
/// ```
/// use std::borrow::Borrow;
/// use rosenpass_util::build::{ConstructionSite, Build};
/// use anyhow::{Context, Result};
///
/// #[derive(Eq, PartialEq, Debug)]
/// struct Person {
/// pub fav_pokemon: String,
/// pub fav_number: u8,
/// }
///
/// #[derive(Eq, PartialEq, Default, Clone, Debug)]
/// struct PersonBuilder {
/// pub fav_pokemon: Option<String>,
/// pub fav_number: Option<u8>,
/// }
///
/// impl Build<Person> for &PersonBuilder {
/// type Error = anyhow::Error;
///
/// fn build(self) -> Result<Person, Self::Error> {
/// let fav_pokemon = self.fav_pokemon.clone().context("Missing fav pokemon")?;
/// let fav_number = self.fav_number.context("Missing fav number")?;
/// Ok(Person {
/// fav_pokemon,
/// fav_number,
/// })
/// }
/// }
///
/// impl Build<Person> for PersonBuilder {
/// type Error = anyhow::Error;
///
/// fn build(self) -> Result<Person, Self::Error> {
/// self.borrow().build()
/// }
/// }
///
/// // Allocate the construction site
/// let mut site = ConstructionSite::void();
///
/// // Start construction
/// site = ConstructionSite::Builder(PersonBuilder::default());
///
/// // Use the builder to build the value
/// site.builder_mut().unwrap().fav_pokemon = Some("Krabby".to_owned());
/// site.builder_mut().unwrap().fav_number = Some(0);
///
/// // Use `erect` to call Build::build
/// site.erect().unwrap();
///
/// assert_eq!(
/// site,
/// ConstructionSite::Product(Person {
/// fav_pokemon: "Krabby".to_owned(),
/// fav_number: 0
/// }),
/// );
/// ```
#[derive(Debug, Eq, PartialEq, Clone)]
pub enum ConstructionSite<Builder, T>
where
Builder: Build<T>,
{
/// The site is empty
Void,
/// The site is being built
Builder(Builder),
/// The site has been built and is now finished
Product(T),
}
/// Initializes the construction site as [ConstructionSite::Void]
impl<Builder, T> Default for ConstructionSite<Builder, T>
where
Builder: Build<T>,
{
fn default() -> Self {
Self::Void
}
}
impl<Builder, T> ConstructionSite<Builder, T>
where
Builder: Build<T>,
{
/// Initializes the construction site as [ConstructionSite::Void]
///
/// # Examples
///
/// See [Self].
///
/// ```
/// use rosenpass_util::build::{ConstructionSite, Build};
///
/// #[derive(Debug, Eq, PartialEq, Clone, Copy)]
/// struct House;
/// #[derive(Debug, Eq, PartialEq, Clone, Copy)]
/// struct Builder;
///
/// impl Build<House> for Builder {
/// type Error = anyhow::Error;
///
/// fn build(self) -> Result<House, Self::Error> {
/// Ok(House)
/// }
/// }
///
/// assert_eq!(
/// ConstructionSite::<Builder, House>::void(),
/// ConstructionSite::Void,
/// );
/// ```
pub fn void() -> Self {
Self::Void
}
/// Initialize the construction site from its builder
///
/// # Examples
///
///
/// ```
/// use rosenpass_util::build::{ConstructionSite, Build};
///
/// #[derive(Debug, Eq, PartialEq, Clone, Copy)]
/// struct House;
/// #[derive(Debug, Eq, PartialEq, Clone, Copy)]
/// struct Builder;
///
/// impl Build<House> for Builder {
/// type Error = anyhow::Error;
///
/// fn build(self) -> Result<House, Self::Error> {
/// Ok(House)
/// }
/// }
///
/// assert_eq!(
/// ConstructionSite::<Builder, House>::new(Builder),
/// ConstructionSite::Builder(Builder),
/// );
/// ```
pub fn new(builder: Builder) -> Self {
Self::Builder(builder)
}
/// Initialize the construction site from its product
///
/// # Examples
///
///
/// ```
/// use rosenpass_util::build::{ConstructionSite, Build};
///
/// #[derive(Debug, Eq, PartialEq, Clone, Copy)]
/// struct House;
/// #[derive(Debug, Eq, PartialEq, Clone, Copy)]
/// struct Builder;
///
/// impl Build<House> for Builder {
/// type Error = anyhow::Error;
///
/// fn build(self) -> Result<House, Self::Error> {
/// Ok(House)
/// }
/// }
///
/// assert_eq!(
/// ConstructionSite::<Builder, House>::from_product(House),
/// ConstructionSite::Product(House),
/// );
/// ```
pub fn from_product(value: T) -> Self {
Self::Product(value)
}
/// Extract the construction site and replace it with [Self::Void]
///
/// # Examples
///
///
/// ```
/// use rosenpass_util::build::{ConstructionSite, Build};
///
/// #[derive(Debug, Eq, PartialEq, Clone, Copy)]
/// struct House;
/// #[derive(Debug, Eq, PartialEq, Clone, Copy)]
/// struct Builder;
///
/// impl Build<House> for Builder {
/// type Error = anyhow::Error;
///
/// fn build(self) -> Result<House, Self::Error> {
/// Ok(House)
/// }
/// }
///
/// let mut a = ConstructionSite::<Builder, House>::from_product(House);
/// let a_backup = a.clone();
///
/// let b = a.take();
/// assert_eq!(a, ConstructionSite::void());
/// assert_eq!(b, ConstructionSite::Product(House));
/// ```
pub fn take(&mut self) -> Self {
self.swap_with_default()
}
/// Apply the given function to Self, temporarily converting
/// the mutable reference into an owned value.
///
/// This is useful if you have some function that needs to modify
/// the construction site as an owned value but all you have is a reference.
///
/// # Examples
///
///
/// ```
/// use rosenpass_util::build::{ConstructionSite, Build};
///
/// #[derive(Debug, Eq, PartialEq, Clone, Copy)]
/// struct House(u32);
/// #[derive(Debug, Eq, PartialEq, Clone, Copy)]
/// struct Builder(u32);
///
/// impl Build<House> for Builder {
/// type Error = anyhow::Error;
///
/// fn build(self) -> Result<House, Self::Error> {
/// Ok(House(self.0))
/// }
/// }
///
/// #[derive(Debug, PartialEq, Eq)]
/// enum FancyMatchState {
/// New,
/// Built,
/// Increment,
/// };
///
/// fn fancy_match(site: &mut ConstructionSite<Builder, House>, def: u32) -> FancyMatchState {
/// site.modify_taken_with_return(|site| {
/// use ConstructionSite as C;
/// use FancyMatchState as F;
/// let (prod, state) = match site {
/// C::Void => (House(def), F::New),
/// C::Builder(b) => (b.build().unwrap(), F::Built),
/// C::Product(House(v)) => (House(v + 1), F::Increment),
/// };
/// let prod = ConstructionSite::from_product(prod);
/// (prod, state)
/// })
/// }
///
/// let mut a = ConstructionSite::void();
/// let r = fancy_match(&mut a, 42);
/// assert_eq!(a, ConstructionSite::Product(House(42)));
/// assert_eq!(r, FancyMatchState::New);
///
/// let mut a = ConstructionSite::new(Builder(13));
/// let r = fancy_match(&mut a, 42);
/// assert_eq!(a, ConstructionSite::Product(House(13)));
/// assert_eq!(r, FancyMatchState::Built);
///
/// let r = fancy_match(&mut a, 42);
/// assert_eq!(a, ConstructionSite::Product(House(14)));
/// assert_eq!(r, FancyMatchState::Increment);
/// ```
pub fn modify_taken_with_return<R, F>(&mut self, f: F) -> R
where
F: FnOnce(Self) -> (Self, R),
{
let (site, res) = self.take().apply(f);
self.swap_with(site);
res
}
/// Apply the given function to Self, temporarily converting
/// the mutable reference into an owned value.
///
/// This is useful if you have some function that needs to modify
/// the construction site as an owned value but all you have is a reference.
///
/// # Examples
///
/// ```
/// use rosenpass_util::build::{ConstructionSite, Build};
///
/// #[derive(Debug, Eq, PartialEq, Clone, Copy)]
/// struct House(u32);
/// #[derive(Debug, Eq, PartialEq, Clone, Copy)]
/// struct Builder(u32);
///
/// impl Build<House> for Builder {
/// type Error = anyhow::Error;
///
/// fn build(self) -> Result<House, Self::Error> {
/// Ok(House(self.0))
/// }
/// }
///
/// fn fancy_match(site: &mut ConstructionSite<Builder, House>, def: u32) {
/// site.modify_taken(|site| {
/// use ConstructionSite as C;
/// let prod = match site {
/// C::Void => House(def),
/// C::Builder(b) => b.build().unwrap(),
/// C::Product(House(v)) => House(v + 1),
/// };
/// ConstructionSite::from_product(prod)
/// })
/// }
///
/// let mut a = ConstructionSite::void();
/// fancy_match(&mut a, 42);
/// assert_eq!(a, ConstructionSite::Product(House(42)));
///
/// let mut a = ConstructionSite::new(Builder(13));
/// fancy_match(&mut a, 42);
/// assert_eq!(a, ConstructionSite::Product(House(13)));
///
/// fancy_match(&mut a, 42);
/// assert_eq!(a, ConstructionSite::Product(House(14)));
/// ```
pub fn modify_taken<F>(&mut self, f: F)
where
F: FnOnce(Self) -> Self,
{
self.take().apply(f).swap_with_mut(self)
}
/// If this construction site contains [Self::Builder], call the inner [Build]'s [Build::build]
/// and have the construction site contain a product.
///
/// # Examples
///
/// See [Self].
///
/// ```
/// use rosenpass_util::build::{ConstructionSite, Build, ConstructionSiteErectError};
/// use std::convert::Infallible;
///
/// #[derive(Debug, Eq, PartialEq, Clone, Copy)]
/// struct House;
/// #[derive(Debug, Eq, PartialEq, Clone, Copy)]
/// struct Builder;
///
/// impl Build<House> for Builder {
/// type Error = Infallible;
///
/// fn build(self) -> Result<House, Self::Error> {
/// Ok(House)
/// }
/// }
///
/// let mut a = ConstructionSite::<Builder, House>::void();
/// assert_eq!(a.erect(), Err(ConstructionSiteErectError::IsVoid));
/// assert_eq!(a, ConstructionSite::void());
///
/// let mut a = ConstructionSite::<Builder, House>::from_product(House);
/// assert_eq!(a.erect(), Err(ConstructionSiteErectError::AlreadyBuilt));
/// assert_eq!(a, ConstructionSite::from_product(House));
///
/// let mut a = ConstructionSite::<Builder, House>::new(Builder);
/// a.erect().unwrap();
/// assert_eq!(a, ConstructionSite::from_product(House));
/// ```
#[allow(clippy::result_unit_err)]
pub fn erect(&mut self) -> Result<(), ConstructionSiteErectError<Builder::Error>> {
self.modify_taken_with_return(|site| {
let builder = match site {
site @ Self::Void => return (site, Err(ConstructionSiteErectError::IsVoid)),
site @ Self::Product(_) => {
return (site, Err(ConstructionSiteErectError::AlreadyBuilt))
}
Self::Builder(builder) => builder,
};
let product = match builder.build() {
Err(e) => {
return (Self::void(), Err(ConstructionSiteErectError::Other(e)));
}
Ok(p) => p,
};
(Self::from_product(product), Ok(()))
})
}
/// Returns `true` if the construction site is [`Void`].
///
/// [`Void`]: ConstructionSite::Void
///
/// # Examples
///
/// ```
/// use rosenpass_util::build::{ConstructionSite, Build};
///
/// #[derive(Debug, Eq, PartialEq, Clone, Copy)]
/// struct House;
/// #[derive(Debug, Eq, PartialEq, Clone, Copy)]
/// struct Builder;
///
/// impl Build<House> for Builder {
/// type Error = anyhow::Error;
///
/// fn build(self) -> Result<House, Self::Error> {
/// Ok(House)
/// }
/// }
///
/// type Site = ConstructionSite<Builder, House>;
///
/// assert_eq!(Site::Void.is_void(), true);
/// assert_eq!(Site::Builder(Builder).is_void(), false);
/// assert_eq!(Site::Product(House).is_void(), false);
/// ```
#[must_use]
pub fn is_void(&self) -> bool {
matches!(self, Self::Void)
}
/// Returns `true` if construction is in progress, i.e. the site holds a [`Builder`].
///
/// [`Builder`]: ConstructionSite::Builder
///
/// # Examples
///
/// ```
/// use rosenpass_util::build::{ConstructionSite, Build};
///
/// #[derive(Debug, Eq, PartialEq, Clone, Copy)]
/// struct House;
/// #[derive(Debug, Eq, PartialEq, Clone, Copy)]
/// struct Builder;
///
/// impl Build<House> for Builder {
/// type Error = anyhow::Error;
///
/// fn build(self) -> Result<House, Self::Error> {
/// Ok(House)
/// }
/// }
///
/// type Site = ConstructionSite<Builder, House>;
///
/// assert_eq!(Site::Void.in_progress(), false);
/// assert_eq!(Site::Builder(Builder).in_progress(), true);
/// assert_eq!(Site::Product(House).in_progress(), false);
/// ```
#[must_use]
pub fn in_progress(&self) -> bool {
matches!(self, Self::Builder(..))
}
/// Returns `true` if the construction site holds a finished [`Product`].
///
/// [`Product`]: ConstructionSite::Product
///
/// # Examples
///
/// ```
/// use rosenpass_util::build::{ConstructionSite, Build};
///
/// #[derive(Debug, Eq, PartialEq, Clone, Copy)]
/// struct House;
/// #[derive(Debug, Eq, PartialEq, Clone, Copy)]
/// struct Builder;
///
/// impl Build<House> for Builder {
/// type Error = anyhow::Error;
///
/// fn build(self) -> Result<House, Self::Error> {
/// Ok(House)
/// }
/// }
///
/// type Site = ConstructionSite<Builder, House>;
///
/// assert_eq!(Site::Void.is_available(), false);
/// assert_eq!(Site::Builder(Builder).is_available(), false);
/// assert_eq!(Site::Product(House).is_available(), true);
/// ```
#[must_use]
pub fn is_available(&self) -> bool {
matches!(self, Self::Product(..))
}
/// Returns the value of [Self::Builder]
///
/// # Examples
///
/// ```
/// use rosenpass_util::build::{ConstructionSite, Build};
///
/// #[derive(Debug, Eq, PartialEq, Clone, Copy)]
/// struct House;
/// #[derive(Debug, Eq, PartialEq, Clone, Copy)]
/// struct Builder;
///
/// impl Build<House> for Builder {
/// type Error = anyhow::Error;
///
/// fn build(self) -> Result<House, Self::Error> {
/// Ok(House)
/// }
/// }
///
/// type Site = ConstructionSite<Builder, House>;
///
/// assert_eq!(Site::Void.into_builder(), None);
/// assert_eq!(Site::Builder(Builder).into_builder(), Some(Builder));
/// assert_eq!(Site::Product(House).into_builder(), None);
/// ```
pub fn into_builder(self) -> Option<Builder> {
use ConstructionSite as S;
match self {
S::Builder(v) => Some(v),
_ => None,
}
}
/// Returns the value of [Self::Builder] as a reference
///
/// # Examples
///
/// See [Self::into_builder].
pub fn builder_ref(&self) -> Option<&Builder> {
use ConstructionSite as S;
match self {
S::Builder(v) => Some(v),
_ => None,
}
}
/// Returns the value of [Self::Builder] as a mutable reference
///
/// # Examples
///
/// Similar to [Self::into_builder].
pub fn builder_mut(&mut self) -> Option<&mut Builder> {
use ConstructionSite as S;
match self {
S::Builder(v) => Some(v),
_ => None,
}
}
/// Returns the value of [Self::Product]
///
/// # Examples
///
/// Similar to [Self::into_builder].
pub fn into_product(self) -> Option<T> {
use ConstructionSite as S;
match self {
S::Product(v) => Some(v),
_ => None,
}
}
/// Returns the value of [Self::Product] as a reference
///
/// # Examples
///
/// Similar to [Self::into_builder].
pub fn product_ref(&self) -> Option<&T> {
use ConstructionSite as S;
match self {
S::Product(v) => Some(v),
_ => None,
}
}
/// Returns the value of [Self::Product] as a mutable reference
///
/// # Examples
///
/// Similar to [Self::into_builder].
pub fn product_mut(&mut self) -> Option<&mut T> {
use ConstructionSite as S;
match self {
S::Product(v) => Some(v),
_ => None,
}
}
}
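// A compact usage sketch of the accessor family above, reusing the hypothetical
// `House`/`Builder` pair from the doc examples; the module and test names are
// illustrative.
#[cfg(test)]
mod accessor_usage_sketch {
    use super::{Build, ConstructionSite};

    #[derive(Debug, Eq, PartialEq, Clone, Copy)]
    struct House;
    #[derive(Debug, Eq, PartialEq, Clone, Copy)]
    struct Builder;

    impl Build<House> for Builder {
        type Error = std::convert::Infallible;

        fn build(self) -> Result<House, Self::Error> {
            Ok(House)
        }
    }

    type Site = ConstructionSite<Builder, House>;

    #[test]
    fn accessors_mirror_the_active_variant() {
        // Owned extraction of the builder only succeeds in the `Builder` state.
        assert_eq!(Site::new(Builder).into_builder(), Some(Builder));

        let mut site = Site::new(Builder);
        assert_eq!(site.builder_ref(), Some(&Builder));
        assert_eq!(site.builder_mut(), Some(&mut Builder));
        assert_eq!(site.product_ref(), None);

        // Once the product is erected, only the product accessors return `Some`.
        site.erect().unwrap();
        assert_eq!(site.builder_ref(), None);
        assert_eq!(site.product_ref(), Some(&House));
        assert_eq!(site.product_mut(), Some(&mut House));
        assert_eq!(site.into_product(), Some(House));
    }
}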

156
util/src/controlflow.rs Normal file
View File

@@ -0,0 +1,156 @@
//! A collection of control flow utility macros
#[macro_export]
/// A simple for loop to repeat a $body a number of times
///
/// # Examples
///
/// ```
/// use rosenpass_util::repeat;
/// let mut sum = 0;
/// repeat!(10, {
/// sum += 1;
/// });
/// assert_eq!(sum, 10);
/// ```
macro_rules! repeat {
($times:expr, $body:expr) => {
for _ in 0..($times) {
$body
}
};
}
#[macro_export]
/// Return unless the condition $cond is true, with return value $val, if given.
///
/// # Examples
///
/// ```
/// use rosenpass_util::return_unless;
/// fn test_fn() -> i32 {
/// return_unless!(true, 1);
/// 0
/// }
/// assert_eq!(test_fn(), 0);
/// fn test_fn2() -> i32 {
/// return_unless!(false, 1);
/// 0
/// }
/// assert_eq!(test_fn2(), 1);
/// ```
macro_rules! return_unless {
($cond:expr) => {
if !($cond) {
return;
}
};
($cond:expr, $val:expr) => {
if !($cond) {
return $val;
}
};
}
#[macro_export]
/// Return if the condition $cond is true, with return value $val, if given.
///
/// # Examples
///
/// ```
/// use rosenpass_util::return_if;
/// fn test_fn() -> i32 {
/// return_if!(true, 1);
/// 0
/// }
/// assert_eq!(test_fn(), 1);
/// fn test_fn2() -> i32 {
/// return_if!(false, 1);
/// 0
/// }
/// assert_eq!(test_fn2(), 0);
/// ```
macro_rules! return_if {
($cond:expr) => {
if $cond {
return;
}
};
($cond:expr, $val:expr) => {
if $cond {
return $val;
}
};
}
#[macro_export]
/// Break if the condition $cond is true, out of the loop with label $val, if given.
///
/// # Examples
///
/// ```
/// use rosenpass_util::break_if;
/// let mut sum = 0;
/// for i in 0..10 {
/// break_if!(i == 5);
/// sum += 1;
/// }
/// assert_eq!(sum, 5);
/// let mut sum = 0;
/// 'one: for _ in 0..10 {
/// for j in 0..20 {
/// break_if!(j == 5, 'one);
/// sum += 1;
/// }
/// }
/// assert_eq!(sum, 5);
/// ```
macro_rules! break_if {
($cond:expr) => {
if $cond {
break;
}
};
($cond:expr, $val:tt) => {
if $cond {
break $val;
}
};
}
#[macro_export]
/// Continue if the condition is true, in the loop with label $val, if given.
///
/// # Examples
///
/// ```
/// use rosenpass_util::continue_if;
/// let mut sum = 0;
/// for i in 0..10 {
/// continue_if!(i == 5);
/// sum += 1;
/// }
/// assert_eq!(sum, 9);
/// let mut sum = 0;
/// 'one: for i in 0..10 {
/// continue_if!(i == 5, 'one);
/// sum += 1;
/// }
/// assert_eq!(sum, 9);
/// ```
macro_rules! continue_if {
($cond:expr) => {
if $cond {
continue;
}
};
($cond:expr, $val:tt) => {
if $cond {
continue $val;
}
};
}

Some files were not shown because too many files have changed in this diff.