Compare commits

76 Commits

Author SHA1 Message Date
dependabot[bot]
9cdf3d33e4 build(deps): bump cachix/cachix-action from 15 to 16
Bumps [cachix/cachix-action](https://github.com/cachix/cachix-action) from 15 to 16.
- [Release notes](https://github.com/cachix/cachix-action/releases)
- [Commits](https://github.com/cachix/cachix-action/compare/v15...v16)

---
updated-dependencies:
- dependency-name: cachix/cachix-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-08 15:32:50 +00:00
Karolin Varner
2e17779447 chore(deps): bump anyhow from 1.0.96 to 1.0.98 (#690) 2025-08-08 17:30:36 +02:00
Rosenpass CI Bot
75763bf27d Regenerate cargo vet exemptions 2025-08-07 23:45:10 +00:00
dependabot[bot]
83ad7652bc chore(deps): bump anyhow from 1.0.96 to 1.0.98
Bumps [anyhow](https://github.com/dtolnay/anyhow) from 1.0.96 to 1.0.98.
- [Release notes](https://github.com/dtolnay/anyhow/releases)
- [Commits](https://github.com/dtolnay/anyhow/compare/1.0.96...1.0.98)

---
updated-dependencies:
- dependency-name: anyhow
  dependency-version: 1.0.98
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-07 23:44:29 +00:00
Karolin Varner
76a8a39560 fix: Benchmarks should run on ubicloud runners 2025-08-07 20:00:05 +02:00
Karolin Varner
de72e4a2a1 Use serde for JSON-encoding benchmark data (#667) 2025-08-07 16:40:16 +02:00
Karolin Varner
f0467ea28b chore(deps): bump actions/download-artifact from 4 to 5 (#686) 2025-08-07 16:04:46 +02:00
dependabot[bot]
15a4dfa03b chore(deps): bump actions/download-artifact from 4 to 5
Dependabot couldn't find the original pull request head commit, cd15f7d879f6ecb6179eb8f559b55553968eccfe.
2025-08-07 16:04:29 +02:00
Karolin Varner
1a8713a26f chore(deps): bump log from 0.4.26 to 0.4.27 (#681) 2025-08-07 16:04:01 +02:00
Rosenpass CI Bot
2694f4a86b Regenerate cargo vet exemptions 2025-08-07 16:03:32 +02:00
dependabot[bot]
b905c0aa06 chore(deps): bump log from 0.4.26 to 0.4.27
Bumps [log](https://github.com/rust-lang/log) from 0.4.26 to 0.4.27.
- [Release notes](https://github.com/rust-lang/log/releases)
- [Changelog](https://github.com/rust-lang/log/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rust-lang/log/compare/0.4.26...0.4.27)

---
updated-dependencies:
- dependency-name: log
  dependency-version: 0.4.27
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-07 16:03:32 +02:00
Karolin Varner
4f2519fb9c fix: Compiling rp should be disabled on mac (#688) 2025-08-07 16:02:40 +02:00
Karolin Varner
72e6542958 fix: Compiling rp should be disabled on mac 2025-08-07 12:45:02 +02:00
Jan Winkelmann (keks)
1e6e17e094 bump version of serde_json in supply chain exception 2025-08-06 17:58:38 +02:00
Jan Winkelmann (keks)
8e7fd174e8 nix fmt 2025-08-06 17:58:38 +02:00
Jan Winkelmann (keks)
7908359eab Use serde for JSON-encoding benchmark data 2025-08-06 17:58:38 +02:00
Karolin Varner
15ae4b4ae5 Fix signal handling in rp and rosenpass (#685) 2025-08-06 15:59:49 +02:00
Karolin Varner
b5107c77d8 chore(rp): Docs fix 2025-08-04 08:44:15 +02:00
Karolin Varner
335584b187 fix: clippy fix (remove warnings) 2025-08-04 08:44:15 +02:00
Karolin Varner
3c0e167347 fix(rosenpass): Integrate signal handlers with mio
With this commit, rosenpass uses a signal handler based on the signal-hook-mio crate.

Even though no rosenpass-rp code is touched in this commit, this also
fixes the signal handling in rosenpass-rp. The way rosenpass is
integrated into rp is a bit of a hack – rp directly embeds rosenpass in
the same process (though on a dedicated thread). For this reason, rp now
simply inherits rosenpass' signal handlers: on a signal, the rosenpass
event_loop() terminates, and the main loop of `rp`, which spends most of
its time waiting for rosenpass itself to finish, exits as well.

Unfortunately, this means we are not using signalfd(2)[^0]; the
signal-hook-mio crate appears to use a pipe-based mechanism to deliver
events to mio instead.

This may not be such a bad thing, as signalfd has some severe drawbacks
with respect to subprocesses and masked signals[^1].

Fixes: #385 (https://github.com/rosenpass/rosenpass/issues/385)
Fixes: #522 (https://github.com/rosenpass/rosenpass/issues/522)
Fixes: #678 (https://github.com/rosenpass/rosenpass/pull/678)

[^0]: https://unixism.net/2021/02/making-signals-less-painful-under-linux/
[^1]: https://ldpreload.com/blog/signalfd-is-useless?reposted-on-request
2025-08-04 08:44:15 +02:00
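The commit above notes that signal-hook-mio delivers signals to mio through a pipe-based mechanism rather than signalfd(2). A minimal sketch of that self-pipe idea, using only the standard library (the names `wake_on_byte`, `write_end`, and `read_end` are illustrative, and a real signal handler must be async-signal-safe and registered via sigaction):

```rust
use std::io::{Read, Write};
use std::os::unix::net::UnixStream;
use std::thread;

// Sketch of the self-pipe pattern: the "handler" side writes one wake-up
// byte, the event-loop side blocks on the read end instead of signalfd(2).
fn wake_on_byte() -> u8 {
    // The write end lives with the signal handler, the read end would be
    // registered with the event loop (mio, epoll, ...).
    let (mut write_end, mut read_end) = UnixStream::pair().unwrap();

    // Stand-in for a signal handler firing.
    let handler = thread::spawn(move || {
        write_end.write_all(&[1u8]).unwrap();
    });

    // Stand-in for the event loop waking up when the "signal" arrives.
    let mut buf = [0u8; 1];
    read_exact_into(&mut read_end, &mut buf);
    handler.join().unwrap();
    buf[0]
}

// Small helper so the read is explicit about filling the whole buffer.
fn read_exact_into(stream: &mut UnixStream, buf: &mut [u8]) {
    stream.read_exact(buf).unwrap();
}

fn main() {
    println!("woke up on byte {}", wake_on_byte());
}
```

The pipe trades signalfd's subprocess and signal-mask pitfalls for one extra file descriptor and a byte per signal, which matches the trade-off the commit message describes.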
Karolin Varner
6f6fdef542 chore(rp): Rename crate rp -> rosenpass-rp 2025-08-04 08:44:15 +02:00
Karolin Varner
c839126e29 chore(rp): Move remaining sync io in exchange() into spawn_blocking 2025-08-04 08:44:15 +02:00
Karolin Varner
a1698f36a6 fix(rp): Start the proper rosenpass server on a dedicated thread
We should not block the tokio executor indefinitely.
2025-08-04 08:44:15 +02:00
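The pattern behind this commit – run the long-lived blocking server on its own OS thread so it cannot stall the async executor, while the async side merely waits for it to finish – can be sketched with the standard library alone (the names `run_blocking_server` and the channel message are illustrative, not rp's actual API; the real code would await a completion signal from a tokio task):

```rust
use std::sync::mpsc;
use std::thread;

// Stand-in for a server event loop that blocks for its whole lifetime
// and reports completion over a channel when it exits.
fn run_blocking_server(done: mpsc::Sender<&'static str>) {
    // ... imagine a blocking event loop running here ...
    done.send("server finished").unwrap();
}

fn main() {
    let (tx, rx) = mpsc::channel();
    // Dedicated thread: the executor's worker threads stay free.
    let server = thread::spawn(move || run_blocking_server(tx));

    // The async side would await a oneshot here; blocking in plain main
    // is fine because main is not an executor thread.
    let msg = rx.recv().unwrap();
    server.join().unwrap();
    println!("{msg}");
}
```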
Karolin Varner
2d6550da0f chore(rp): Simplify peer configuration code 2025-08-04 08:44:15 +02:00
Karolin Varner
bae336d633 fix(rp): Make sure that the WG SK is erased ASAP 2025-08-04 08:44:15 +02:00
Karolin Varner
6c929f7ddc chore(rp): Simplify error handling in exchange() 2025-08-04 08:44:15 +02:00
Karolin Varner
41eb620751 chore(rp): Simplify code to setup Rosenpass AppServer 2025-08-04 08:44:15 +02:00
Karolin Varner
8561aaf137 chore(rp): Move functionality to set wg sk and port into function 2025-08-04 08:44:15 +02:00
Karolin Varner
f0ee7a33c9 chore(rp): Make sure genetlink is cleaned up 2025-08-04 08:44:15 +02:00
Karolin Varner
1d4a70f863 fix(rp): Use async commands to set up ip addr
We don't want to block the tokio runtime.
2025-08-04 08:44:15 +02:00
Karolin Varner
f4e8e4314b chore: Use RAII for erasing the WireGuard device in rp
For now, this disables correct handling of program termination – not
because the RAII does not work, but because we still need to implement a
proper signal handling concept.

We also removed some teardown handlers that are not covered by RAII,
such as removing the routes we set up; removing the WireGuard device
takes care of this anyway.
2025-08-04 08:44:15 +02:00
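The RAII idea in this commit – tie device removal to a guard's destructor so teardown runs on every exit path – can be sketched as follows. The guard type, device name, and the atomic flag standing in for the actual device state are all illustrative; the real code talks to netlink:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Stand-in for "does the WireGuard device currently exist?".
static DEVICE_UP: AtomicBool = AtomicBool::new(false);

// Hypothetical guard: creating it "creates" the device, dropping it
// "removes" the device again.
struct WgDeviceGuard {
    name: &'static str,
}

impl WgDeviceGuard {
    fn create(name: &'static str) -> Self {
        DEVICE_UP.store(true, Ordering::SeqCst); // stand-in for netlink setup
        WgDeviceGuard { name }
    }
}

impl Drop for WgDeviceGuard {
    fn drop(&mut self) {
        // Removing the device also tears down its routes, which is why
        // the separate route-cleanup handlers could be dropped.
        let _ = self.name;
        DEVICE_UP.store(false, Ordering::SeqCst);
    }
}

fn main() {
    {
        let _dev = WgDeviceGuard::create("rosenpass0");
        assert!(DEVICE_UP.load(Ordering::SeqCst));
    } // guard dropped here: device removed even on early returns or panics
    assert!(!DEVICE_UP.load(Ordering::SeqCst));
    println!("device cleaned up");
}
```

As the commit message notes, Drop alone is not enough for termination by signal – a process killed by an unhandled signal never runs destructors, which is why the follow-up signal-handling work was needed.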
Karolin Varner
1b9be7519b chore: Unnecessary string clone in rp 2025-08-04 08:44:15 +02:00
Karolin Varner
c689f8e78a feat(rp): Enable logging 2025-08-04 08:44:15 +02:00
Karolin Varner
edcbf290fc chore: Use default error handler in rp main() 2025-08-04 08:44:15 +02:00
Karolin Varner
31a5dbe420 feat: Janitor, utilities for cleaning up with tokio 2025-08-04 08:44:15 +02:00
Karolin Varner
a85f9b8e63 chore: Better error handling in link_create_and_up in rp 2025-08-03 15:15:14 +02:00
Karolin Varner
21ea526435 chore: Restructure imports in rosenpass_rp::exchange 2025-08-03 15:15:14 +02:00
Karolin Varner
35e956e340 fix: Simplify structure of rp::exchange
Before this commit, there was a submodule rp::exchange::netlink, and
there were platform checks printing error messages on systems other
than FreeBSD and Linux.

Neither is really necessary. If the application won't compile on other
systems, it won't work anyway; and if it happens to work, there is no
reason to give users a spurious error message.
2025-08-03 15:15:14 +02:00
Karolin Varner
3371d7f00f chore: Clippy fixes for rp crate 2025-08-03 15:15:14 +02:00
Karolin Varner
3f2a9bb96b chore(deps): bump tokio from 1.44.2 to 1.46.1 (#679) 2025-07-31 12:22:35 +02:00
Rosenpass CI Bot
8dfa67a2dd Regenerate cargo vet exemptions 2025-07-30 23:45:24 +00:00
dependabot[bot]
f31d635df8 chore(deps): bump tokio from 1.44.2 to 1.46.1
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.44.2 to 1.46.1.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.44.2...tokio-1.46.1)

---
updated-dependencies:
- dependency-name: tokio
  dependency-version: 1.46.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-07-30 23:44:49 +00:00
Karolin Varner
75702dfc03 chore(deps): bump clap_mangen from 0.2.24 to 0.2.27 (#657) 2025-07-30 16:13:12 +02:00
Rosenpass CI Bot
3af479a27e Regenerate cargo vet exemptions 2025-07-29 15:20:29 +00:00
dependabot[bot]
e76e5b253f chore(deps): bump clap_mangen from 0.2.24 to 0.2.27
Dependabot couldn't find the original pull request head commit, 518c533e040c5dd92156f84f8c20cffb9c7eacf6.
2025-07-29 15:19:47 +00:00
Karolin Varner
0d944afbd8 Add another checkout step for the supply-chain action in case of a dependabot PR (#677) 2025-07-29 17:18:03 +02:00
Karolin Varner
8d81be56f3 fix: Re-trigger CI when cargo vet exemptions are regenerated for Dependabot PRs
Co-authored-by: David Niehues <niehues@utilacy.com>
2025-07-29 17:16:11 +02:00
Karolin Varner
16b3914c46 Make the CI restart once cargo-vet exemptions for dependabot have been pushed (new iteration) (#674) 2025-07-29 15:52:31 +02:00
David Niehues
ae060f7cfb fixes to PR 2025-07-29 15:39:23 +02:00
David Niehues
afa6212264 fix(CI+dependabot): adapt the supply-chain workflow for cargo-vet to work with dependabot, i.e. regenerate exemptions for dependabot and restart the CI afterwards 2025-07-29 15:22:43 +02:00
David Niehues
3c744c253b fix(CI+dependabot): add instructions on how to set up a repository to work with the supply-chain+dependabot accommodations 2025-07-29 15:22:43 +02:00
Karolin Varner
53e6553c8b fix(rosenpass): Fix the error message if the secret key is invalid (#669) 2025-07-29 14:15:22 +02:00
David Niehues
4cd2cdfcff fix(rosenpass): Fix the error message if the secret key is invalid 2025-07-29 14:14:36 +02:00
Karolin Varner
3e03e47935 fix: Regression caused by benchmarks (#670) 2025-07-09 19:20:15 +02:00
Karolin Varner
7003671cde fix: Regression caused by benchmarks
CI keeps failing for external pull requests as GH's permission
model was not fully accounted for
2025-07-09 10:08:05 +02:00
Karolin Varner
91fc50c1e1 Specify WireGuard OSK as a protocol extension & allow for custom OSK domain separators (#664) 2025-07-07 12:05:19 +02:00
Karolin Varner
b1a7d94295 feat: Support for custom osk (output key) domain separators in Rosenpass app
This allows for custom protocol extensions with custom domain
separators to be used without modifying the Rosenpass source code
2025-06-25 19:48:29 +02:00
Karolin Varner
48b7bb2f14 feat(whitepaper): Introduce protocol extensions & specify WG integration as one 2025-06-25 19:48:29 +02:00
Karolin Varner
77e3682820 chore: Whitespace issues in the whitepaper 2025-06-25 19:48:29 +02:00
Karolin Varner
8bad02bcda feat: Disallow unknown fields in rosenpass and rp configuration 2025-06-25 19:48:29 +02:00
Karolin Varner
864407f90b chore: Fix module documentation for app_server 2025-06-25 19:38:51 +02:00
Karolin Varner
4deee59e90 chore: Restructure imports in various places 2025-06-25 19:38:51 +02:00
Karolin Varner
c82ed332f6 Start splitting protocol.rs into multiple files (#655) 2025-06-24 14:50:52 +02:00
Karolin Varner
5ced547a07 chore: PeerIndex split from protocol.rs 2025-06-24 14:01:31 +02:00
Karolin Varner
bdaedc4e2a chore: CookieStore split from protocol.rs 2025-06-24 14:01:31 +02:00
Karolin Varner
4e77e67f10 chore: Split utils for zerocopy in protocol into own file 2025-06-24 14:01:31 +02:00
Karolin Varner
f33c3a6928 chore: Split protocol testutils into own file 2025-06-24 14:01:31 +02:00
Karolin Varner
348650d507 chore: protocol::test should not import super::* 2025-06-24 14:01:31 +02:00
Karolin Varner
c318cf7bac chore: Split protocol tests into own file 2025-06-24 14:01:31 +02:00
Karolin Varner
d9a6430472 chore: Remove unused type SymHash 2025-06-24 14:01:31 +02:00
Karolin Varner
9656fa7025 chore: Split basic types from protocol.rs into own file 2025-06-24 14:01:31 +02:00
Karolin Varner
53ddad30f1 fix: Incorrect reference in protocol.rs
REKEY_TIMEOUT is not used at all
2025-06-24 14:01:31 +02:00
Karolin Varner
7e8e502bca chore: Split constants from protocol.rs into own file 2025-06-24 14:01:31 +02:00
Karolin Varner
d81649c1d1 chore: Restructure imports in protocol.rs 2025-06-24 14:01:31 +02:00
Karolin Varner
da642186f2 chore: Move timing related thing out of protocol.rs 2025-06-24 14:01:31 +02:00
Karolin Varner
ad6d053015 fix: Missing imports (CI Failure on Main) (#663) 2025-06-24 12:35:43 +02:00
67 changed files with 3885 additions and 1978 deletions

@@ -3,11 +3,6 @@ secret_key = "rp-a-secret-key"
listen = ["127.0.0.1:9999"]
verbosity = "Verbose"
[api]
listen_path = []
listen_fd = []
stream_fd = []
[[peers]]
public_key = "rp-b-public-key"
endpoint = "127.0.0.1:9998"

@@ -3,11 +3,6 @@ secret_key = "rp-b-secret-key"
listen = ["127.0.0.1:9998"]
verbosity = "Verbose"
[api]
listen_path = []
listen_fd = []
stream_fd = []
[[peers]]
public_key = "rp-a-public-key"
endpoint = "127.0.0.1:9999"

@@ -4,7 +4,7 @@ permissions:
contents: write
on:
pull_request:
#pull_request:
push:
env:
@@ -21,7 +21,7 @@ jobs:
matrix:
system: ["x86_64-linux", "i686-linux"]
runs-on: ubuntu-latest
runs-on: ubicloud-standard-2
defaults:
run:
shell: bash
@@ -93,7 +93,7 @@ jobs:
ciphers-primitives-bench-status:
if: ${{ always() }}
needs: [prim-benchmark]
runs-on: ubuntu-latest
runs-on: ubicloud-standard-2
steps:
- name: Successful
if: ${{ !(contains(needs.*.result, 'failure')) }}

@@ -4,7 +4,7 @@ permissions:
contents: write
on:
pull_request:
#pull_request:
push:
env:
@@ -21,7 +21,7 @@ jobs:
matrix:
system: ["x86_64-linux", "i686-linux"]
runs-on: ubuntu-latest
runs-on: ubicloud-standard-2
defaults:
run:
shell: bash
@@ -80,7 +80,7 @@ jobs:
ciphers-protocol-bench-status:
if: ${{ always() }}
needs: [proto-benchmark]
runs-on: ubuntu-latest
runs-on: ubicloud-standard-2
steps:
- name: Successful
if: ${{ !(contains(needs.*.result, 'failure')) }}

@@ -255,7 +255,7 @@ jobs:
target: [rp, rosenpass]
steps:
- name: Download digests
uses: actions/download-artifact@v4
uses: actions/download-artifact@v5
with:
path: ${{ runner.temp }}/digests
pattern: digests-${{ matrix.target }}-*

@@ -23,7 +23,7 @@ jobs:
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v15
- uses: cachix/cachix-action@v16
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -42,7 +42,7 @@ jobs:
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v15
- uses: cachix/cachix-action@v16
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -58,7 +58,7 @@ jobs:
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v15
- uses: cachix/cachix-action@v16
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -74,7 +74,7 @@ jobs:
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v15
- uses: cachix/cachix-action@v16
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -91,7 +91,7 @@ jobs:
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v15
- uses: cachix/cachix-action@v16
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -106,7 +106,7 @@ jobs:
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v15
- uses: cachix/cachix-action@v16
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}

@@ -23,7 +23,7 @@ jobs:
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v15
- uses: cachix/cachix-action@v16
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -39,7 +39,7 @@ jobs:
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v15
- uses: cachix/cachix-action@v16
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -56,7 +56,7 @@ jobs:
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v15
- uses: cachix/cachix-action@v16
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -71,7 +71,7 @@ jobs:
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v15
- uses: cachix/cachix-action@v16
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -88,7 +88,7 @@ jobs:
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v15
- uses: cachix/cachix-action@v16
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -105,7 +105,7 @@ jobs:
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v15
- uses: cachix/cachix-action@v16
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -121,7 +121,7 @@ jobs:
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v15
- uses: cachix/cachix-action@v16
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -140,7 +140,7 @@ jobs:
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v15
- uses: cachix/cachix-action@v16
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -164,7 +164,7 @@ jobs:
# nix_path: nixpkgs=channel:nixos-unstable
# extra_nix_config: |
# system = aarch64-linux
# - uses: cachix/cachix-action@v15
# - uses: cachix/cachix-action@v16
# with:
# name: rosenpass
# authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -180,7 +180,7 @@ jobs:
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v15
- uses: cachix/cachix-action@v16
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -201,7 +201,7 @@ jobs:
nix_path: nixpkgs=channel:nixos-unstable
extra_nix_config: |
system = aarch64-linux
- uses: cachix/cachix-action@v15
- uses: cachix/cachix-action@v16
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -222,7 +222,7 @@ jobs:
nix_path: nixpkgs=channel:nixos-unstable
extra_nix_config: |
system = aarch64-linux
- uses: cachix/cachix-action@v15
- uses: cachix/cachix-action@v16
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -239,7 +239,7 @@ jobs:
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v15
- uses: cachix/cachix-action@v16
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -261,7 +261,7 @@ jobs:
nix_path: nixpkgs=channel:nixos-unstable
extra_nix_config: |
system = aarch64-linux
- uses: cachix/cachix-action@v15
- uses: cachix/cachix-action@v16
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -277,7 +277,7 @@ jobs:
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v15
- uses: cachix/cachix-action@v16
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -293,7 +293,7 @@ jobs:
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v15
- uses: cachix/cachix-action@v16
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -310,7 +310,7 @@ jobs:
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v15
- uses: cachix/cachix-action@v16
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -326,7 +326,7 @@ jobs:
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v15
- uses: cachix/cachix-action@v16
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -341,7 +341,7 @@ jobs:
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v15
- uses: cachix/cachix-action@v16
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -356,7 +356,7 @@ jobs:
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v15
- uses: cachix/cachix-action@v16
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}

@@ -162,7 +162,7 @@ jobs:
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v15
- uses: cachix/cachix-action@v16
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}

@@ -13,7 +13,7 @@ jobs:
steps:
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
- uses: cachix/cachix-action@v15
- uses: cachix/cachix-action@v16
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -32,7 +32,7 @@ jobs:
steps:
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
- uses: cachix/cachix-action@v15
- uses: cachix/cachix-action@v16
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -53,7 +53,7 @@ jobs:
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v15
- uses: cachix/cachix-action@v16
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -71,7 +71,7 @@ jobs:
steps:
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
- uses: cachix/cachix-action@v15
- uses: cachix/cachix-action@v16
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}

@@ -28,10 +28,10 @@ jobs:
~/.cargo/registry/cache/
~/.cache/cargo-supply-chain/
key: cargo-supply-chain-cache
- name: Install stable toolchain # Cargo-supply-chain is incompatible with older versions
- name: Install nightly toolchain
run: |
rustup toolchain install stable
rustup default stable
rustup toolchain install nightly
rustup override set nightly
- uses: actions/cache@v4
with:
path: ${{ runner.tool_cache }}/cargo-supply-chain
@@ -39,7 +39,7 @@ jobs:
- name: Add the tool cache directory to the search path
run: echo "${{ runner.tool_cache }}/cargo-supply-chain/bin" >> $GITHUB_PATH
- name: Ensure that the tool cache is populated with the cargo-supply-chain binary
run: cargo +stable install --root ${{ runner.tool_cache }}/cargo-supply-chain cargo-supply-chain
run: cargo install --root ${{ runner.tool_cache }}/cargo-supply-chain cargo-supply-chain
- name: Update data for cargo-supply-chain
run: cargo supply-chain update
- name: Generate cargo-supply-chain report about publishers
@@ -54,6 +54,8 @@ jobs:
contents: write
steps:
- uses: actions/checkout@v4
with:
token: ${{ secrets.GITHUB_TOKEN }}
- uses: actions/cache@v4
with:
path: |
@@ -61,10 +63,10 @@ jobs:
~/.cargo/registry/index/
~/.cargo/registry/cache/
key: cargo-vet-cache
- name: Install stable toolchain # Since we are running/compiling cargo-vet, we should rely on the stable toolchain.
- name: Install nightly toolchain
run: |
rustup toolchain install stable
rustup default stable
rustup toolchain install nightly
rustup override set nightly
- uses: actions/cache@v4
with:
path: ${{ runner.tool_cache }}/cargo-vet
@@ -72,24 +74,104 @@ jobs:
- name: Add the tool cache directory to the search path
run: echo "${{ runner.tool_cache }}/cargo-vet/bin" >> $GITHUB_PATH
- name: Ensure that the tool cache is populated with the cargo-vet binary
run: cargo +stable install --root ${{ runner.tool_cache }}/cargo-vet cargo-vet
- name: Regenerate vet exemptions for dependabot PRs
if: github.actor == 'dependabot[bot]' # Run only for Dependabot PRs
run: cargo vet regenerate exemptions
- name: Check for changes in case of dependabot PR
if: github.actor == 'dependabot[bot]' # Run only for Dependabot PRs
run: git diff --exit-code || echo "Changes detected, committing..."
- name: Commit and push changes for dependabot PRs
if: success() && github.actor == 'dependabot[bot]'
run: cargo install --root ${{ runner.tool_cache }}/cargo-vet cargo-vet
- name: Check which event triggered this CI run, a push or a pull request.
run: |
git fetch origin ${{ github.head_ref }}
git switch ${{ github.head_ref }}
git config --global user.name "github-actions[bot]"
git config --global user.email "github-actions@github.com"
git add supply-chain/*
git commit -m "Regenerate cargo vet exemptions"
git push origin ${{ github.head_ref }}
EVENT_NAME="${{ github.event_name }}"
IS_PR="false"
IS_PUSH="false"
if [[ "$EVENT_NAME" == "pull_request" ]]; then
echo "This CI run was triggered in the context of a pull request."
IS_PR="true"
elif [[ "$EVENT_NAME" == "push" ]]; then
echo "This CI run was triggered in the context of a push."
IS_PUSH="true"
else
echo "ERROR: This CI run was not triggered in the context of a pull request or a push. Exiting with error."
exit 1
fi
echo "IS_PR=$IS_PR" >> $GITHUB_ENV
echo "IS_PUSH=$IS_PUSH" >> $GITHUB_ENV
shell: bash
- name: Check if last commit was by Dependabot
run: |
# Depending on the trigger for, the relevant commit has to be deduced differently.
if [[ "$IS_PR" == true ]]; then
# This is the commit ID for the last commit to the head branch of the pull request.
# If we used github.sha here instead, it would point to a merge commit between the PR and the main branch, which is only created for the CI run.
SHA="${{ github.event.pull_request.head.sha }}"
REF="${{ github.head_ref }}"
elif [[ "$IS_PUSH" == "true" ]]; then
SHA="${{ github.sha }}" # This is the last commit to the branch.
REF=${GITHUB_REF#refs/heads/}
else
echo "ERROR: This action only supports pull requests and push events as triggers. Exiting with error."
exit 1
fi
echo "Commit SHA is $SHA"
echo "Branch is $REF"
echo "REF=$REF" >> $GITHUB_ENV
COMMIT_AUTHOR=$(gh api repos/${{ github.repository }}/commits/$SHA --jq .author.login) # .author.login might be null, but for dependabot it will always be there and cannot be spoofed in contrast to .commit.author.name
echo "The author of the last commit is $COMMIT_AUTHOR"
if [[ "$COMMIT_AUTHOR" == "dependabot[bot]" ]]; then
echo "The last commit was made by dependabot"
LAST_COMMIT_IS_BY_DEPENDABOT=true
else
echo "The last commit was made by $COMMIT_AUTHOR not by dependabot"
LAST_COMMIT_IS_BY_DEPENDABOT=false
fi
echo "LAST_COMMIT_IS_BY_DEPENDABOT=$LAST_COMMIT_IS_BY_DEPENDABOT" >> $GITHUB_ENV
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
shell: bash
- name: Check if the last commit's message ends in "--regenerate-exemptions"
run: |
# Get commit message
COMMIT_MESSAGE=$(git log -1 --pretty=format:"%s")
if [[ "$COMMIT_MESSAGE" == *"--regenerate-exemptions" ]]; then
echo "The last commit message ends in --regenerate-exemptions"
REGEN_EXEMP=true
else
echo "The last commit message does not end in --regenerate-exemptions"
REGEN_EXEMP=false
fi
echo "REGEN_EXEMP=$REGEN_EXEMP" >> $GITHUB_ENV
shell: bash
- name: Check if the CI run happens in the context of a dependabot PR # Even if a PR is created by dependabot, the last commit can, and often should be, the regeneration of the cargo vet exemptions. It could also be from an individual making manual changes.
run: |
IN_DEPENDABOT_PR_CONTEXT="false"
if [[ $IS_PR == "true" && "${{ github.event.pull_request.user.login }}" == "dependabot[bot]" ]]; then
IN_DEPENDABOT_PR_CONTEXT="true"
echo "This CI run is in the context of PR by dependabot."
else
echo "This CI run is NOT in the context of PR by dependabot."
IN_DEPENDABOT_PR_CONTEXT="false"
fi
echo "IN_DEPENDABOT_PR_CONTEXT=$IN_DEPENDABOT_PR_CONTEXT" >> $GITHUB_ENV
shell: bash
- uses: actions/checkout@v4
if: env.IN_DEPENDABOT_PR_CONTEXT == 'true'
with:
token: ${{ secrets.CI_BOT_PAT }}
- name: In case of a dependabot PR, ensure that we are not in a detached HEAD state
if: env.IN_DEPENDABOT_PR_CONTEXT == 'true'
run: |
git fetch origin $REF # ensure that we are up to date.
git switch $REF # ensure that we are NOT in a detached HEAD state. This is important for the commit action in the end
shell: bash
- name: Regenerate cargo vet exemptions if we are in the context of a PR created by dependabot and the last commit is by dependabot or a regeneration of cargo vet exemptions was explicitly requested.
if: env.IN_DEPENDABOT_PR_CONTEXT == 'true' && (env.LAST_COMMIT_IS_BY_DEPENDABOT == 'true' || env.REGEN_EXEMP=='true') # Run only for Dependabot PRs or if specifically requested
run: cargo vet regenerate exemptions
- name: Commit and push changes if we are in the context of a PR created by dependabot and the last commit is by dependabot or a regeneration of cargo vet exemptions was explicitly requested.
if: env.IN_DEPENDABOT_PR_CONTEXT == 'true' && (env.LAST_COMMIT_IS_BY_DEPENDABOT == 'true' || env.REGEN_EXEMP=='true')
uses: stefanzweifel/git-auto-commit-action@v6
with:
commit_message: Regenerate cargo vet exemptions
commit_user_name: rosenpass-ci-bot[bot]
commit_user_email: noreply@rosenpass.eu
commit_author: Rosenpass CI Bot <noreply@rosenpass.eu>
env:
GITHUB_TOKEN: ${{ secrets.CI_BOT_PAT }}
- name: Invoke cargo-vet
run: cargo vet --locked

Cargo.lock generated

@@ -110,9 +110,9 @@ dependencies = [
[[package]]
name = "anyhow"
version = "1.0.96"
version = "1.0.98"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6b964d184e89d9b6b67dd2715bc8e74cf3107fb2b529990c90cf517326150bf4"
checksum = "e16d2d3311acee920a9eb8d33b8cbc1787ce4a264e85f964c2404b969bdcd487"
dependencies = [
"backtrace",
]
@@ -408,9 +408,9 @@ checksum = "f46ad14479a25103f283c0f10005961cf086d8dc42205bb44c46ac563475dca6"
[[package]]
name = "clap_mangen"
version = "0.2.24"
version = "0.2.29"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fbae9cbfdc5d4fa8711c09bd7b83f644cb48281ac35bf97af3e47b0675864bdf"
checksum = "27b4c3c54b30f0d9adcb47f25f61fcce35c4dd8916638c6b82fbd5f4fb4179e2"
dependencies = [
"clap",
"roff",
@@ -1153,6 +1153,17 @@ dependencies = [
"generic-array",
]
[[package]]
name = "io-uring"
version = "0.7.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d93587f37623a1a17d94ef2bc9ada592f5465fe7732084ab7beefabe5c77c0c4"
dependencies = [
"bitflags 2.8.0",
"cfg-if",
"libc",
]
[[package]]
name = "ipc-channel"
version = "0.18.3"
@@ -1246,9 +1257,9 @@ checksum = "830d08ce1d1d941e6b30645f1a0eb5643013d835ce3779a5fc208261dbe10f55"
[[package]]
name = "libc"
version = "0.2.169"
version = "0.2.174"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b5aba8db14291edd000dfcc4d620c7ebfb122c613afb886ca8803fa4e128a20a"
checksum = "1171693293099992e19cddea4e8b849964e9846f4acee11b3948bcc337be8776"
[[package]]
name = "libcrux"
@@ -1408,7 +1419,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fc2f4eb4bc735547cfed7c0a4922cbd04a4655978c09b54f1f7b228750664c34"
dependencies = [
"cfg-if",
"windows-targets 0.48.5",
"windows-targets 0.52.6",
]
[[package]]
@@ -1429,9 +1440,9 @@ dependencies = [
[[package]]
name = "log"
version = "0.4.26"
version = "0.4.27"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "30bde2b3dc3671ae49d8e2e9f044c7c005836e7a023ee57cffa25ab82764bb9e"
checksum = "13dc2df351e3202783a1fe0d44375f7295ffb4049267b0f3018346dc122a1d94"
[[package]]
name = "memchr"
@@ -2057,8 +2068,10 @@ dependencies = [
"rosenpass-wireguard-broker",
"rustix",
"serde",
"serde_json",
"serial_test",
"signal-hook",
"signal-hook-mio",
"stacker",
"static_assertions",
"tempfile",
@@ -2141,6 +2154,38 @@ dependencies = [
"rosenpass-util",
]
[[package]]
name = "rosenpass-rp"
version = "0.2.1"
dependencies = [
"anyhow",
"base64ct",
"ctrlc-async",
"env_logger",
"futures",
"futures-util",
"genetlink",
"libc",
"log",
"netlink-packet-core",
"netlink-packet-generic",
"netlink-packet-wireguard",
"rosenpass",
"rosenpass-cipher-traits",
"rosenpass-ciphers",
"rosenpass-secret-memory",
"rosenpass-util",
"rosenpass-wireguard-broker",
"rtnetlink",
"serde",
"stacker",
"tempfile",
"tokio",
"toml",
"x25519-dalek",
"zeroize",
]
[[package]]
name = "rosenpass-secret-memory"
version = "0.1.0"
@@ -2173,11 +2218,13 @@ dependencies = [
"anyhow",
"base64ct",
"libcrux-test-utils",
"log",
"mio",
"rustix",
"static_assertions",
"tempfile",
"thiserror 1.0.69",
"tokio",
"typenum",
"uds",
"zerocopy 0.7.35",
@@ -2208,35 +2255,6 @@ dependencies = [
"zerocopy 0.7.35",
]
[[package]]
name = "rp"
version = "0.2.1"
dependencies = [
"anyhow",
"base64ct",
"ctrlc-async",
"futures",
"futures-util",
"genetlink",
"netlink-packet-core",
"netlink-packet-generic",
"netlink-packet-wireguard",
"rosenpass",
"rosenpass-cipher-traits",
"rosenpass-ciphers",
"rosenpass-secret-memory",
"rosenpass-util",
"rosenpass-wireguard-broker",
"rtnetlink",
"serde",
"stacker",
"tempfile",
"tokio",
"toml",
"x25519-dalek",
"zeroize",
]
[[package]]
name = "rtnetlink"
version = "0.14.1"
@@ -2359,9 +2377,9 @@ dependencies = [
[[package]]
name = "serde_json"
version = "1.0.139"
version = "1.0.140"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "44f86c3acccc9c65b153fe1b85a3be07fe5515274ec9f0653b4a0875731c72a6"
checksum = "20068b6e96dc6c9bd23e01df8827e6c7e1f2fddd43c21810382803c136b99373"
dependencies = [
"itoa",
"memchr",
@@ -2421,14 +2439,25 @@ checksum = "0fda2ff0d084019ba4d7c6f371c95d8fd75ce3524c3cb8fb653a3023f6323e64"
[[package]]
name = "signal-hook"
version = "0.3.17"
version = "0.3.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8621587d4798caf8eb44879d42e56b9a93ea5dcd315a6487c357130095b62801"
checksum = "d881a16cf4426aa584979d30bd82cb33429027e42122b169753d6ef1085ed6e2"
dependencies = [
"libc",
"signal-hook-registry",
]
[[package]]
name = "signal-hook-mio"
version = "0.2.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "34db1a06d485c9142248b7a054f034b349b212551f3dfd19c94d45a754a217cd"
dependencies = [
"libc",
"mio",
"signal-hook",
]
[[package]]
name = "signal-hook-registry"
version = "1.4.2"
@@ -2461,12 +2490,12 @@ checksum = "7fcf8323ef1faaee30a44a340193b1ac6814fd9b7b4e88e9d4519a3e4abe1cfd"
[[package]]
name = "socket2"
version = "0.5.8"
version = "0.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c970269d99b64e60ec3bd6ad27270092a5394c4e309314b18ae3fe575695fbe8"
checksum = "233504af464074f9d066d7b5416c5f9b894a5862a6506e306f7b816cdd6f1807"
dependencies = [
"libc",
"windows-sys 0.52.0",
"windows-sys 0.59.0",
]
[[package]]
@@ -2630,20 +2659,22 @@ dependencies = [
[[package]]
name = "tokio"
version = "1.44.2"
version = "1.47.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e6b88822cbe49de4185e3a4cbf8321dd487cf5fe0c5c65695fef6346371e9c48"
checksum = "43864ed400b6043a4757a25c7a64a8efde741aed79a056a2fb348a406701bb35"
dependencies = [
"backtrace",
"bytes",
"io-uring",
"libc",
"mio",
"parking_lot",
"pin-project-lite",
"signal-hook-registry",
"slab",
"socket2",
"tokio-macros",
"windows-sys 0.52.0",
"windows-sys 0.59.0",
]
[[package]]

View File

@@ -46,14 +46,16 @@ memsec = { git = "https://github.com/rosenpass/memsec.git", rev = "aceb9baee8aec
] }
rand = "0.8.5"
typenum = "1.17.0"
log = { version = "0.4.22" }
log = { version = "0.4.27" }
clap = { version = "4.5.23", features = ["derive"] }
clap_mangen = "0.2.24"
clap_mangen = "0.2.29"
clap_complete = "4.5.40"
serde = { version = "1.0.217", features = ["derive"] }
arbitrary = { version = "1.4.1", features = ["derive"] }
anyhow = { version = "1.0.95", features = ["backtrace", "std"] }
anyhow = { version = "1.0.98", features = ["backtrace", "std"] }
mio = { version = "1.0.3", features = ["net", "os-poll"] }
signal-hook-mio = { version = "0.2.4", features = ["support-v1_0"] }
signal-hook = "0.3.17"
oqs-sys = { version = "0.9.1", default-features = false, features = [
'classic_mceliece',
'kyber',
@@ -67,7 +69,7 @@ chacha20poly1305 = { version = "0.10.1", default-features = false, features = [
zerocopy = { version = "0.7.35", features = ["derive"] }
home = "=0.5.9" # 5.11 requires rustc 1.81
derive_builder = "0.20.1"
tokio = { version = "1.42", features = ["macros", "rt-multi-thread"] }
tokio = { version = "1.46", features = ["macros", "rt-multi-thread"] }
postcard = { version = "1.1.1", features = ["alloc"] }
libcrux = { version = "0.0.2-pre.2" }
libcrux-chacha20poly1305 = { version = "0.0.2-beta.3" }
@@ -79,7 +81,6 @@ hex = { version = "0.4.3" }
heck = { version = "0.5.0" }
libc = { version = "0.2" }
uds = { git = "https://github.com/rosenpass/uds" }
signal-hook = "0.3.17"
lazy_static = "1.5"
#Dev dependencies
@@ -91,6 +92,7 @@ test_bin = "0.4.0"
criterion = "0.5.1"
allocator-api2-tests = "0.2.15"
procspawn = { version = "1.0.1", features = ["test-support"] }
serde_json = { version = "1.0.140" }
#Broker dependencies (might need cleanup or changes)
wireguard-uapi = { version = "3.0.0", features = ["xplatform"] }

View File

@@ -40,7 +40,7 @@ pub struct InferKeyedHash<Static, const KEY_LEN: usize, const HASH_LEN: usize>
where
Static: KeyedHash<KEY_LEN, HASH_LEN>,
{
pub _phantom_keyed_hasher: PhantomData<*const Static>,
pub _phantom_keyed_hasher: PhantomData<Static>,
}
impl<Static, const KEY_LEN: usize, const HASH_LEN: usize> InferKeyedHash<Static, KEY_LEN, HASH_LEN>

View File

@@ -83,6 +83,33 @@ impl HashDomain {
Ok(Self(new_key, self.1))
}
/// Version of [Self::mix] that accepts an iterator and mixes all values from the iterator into
/// this hash domain.
///
/// # Examples
///
/// ```rust
/// use rosenpass_ciphers::{hash_domain::HashDomain, KeyedHash};
///
/// let hasher = HashDomain::zero(KeyedHash::keyed_shake256());
/// assert_eq!(
/// hasher.clone().mix(b"Hello")?.mix(b"World")?.into_value(),
/// hasher.clone().mix_many([b"Hello", b"World"])?.into_value()
/// );
///
/// Ok::<(), anyhow::Error>(())
/// ```
pub fn mix_many<I, T>(mut self, it: I) -> Result<Self>
where
I: IntoIterator<Item = T>,
T: AsRef<[u8]>,
{
for e in it {
self = self.mix(e.as_ref())?;
}
Ok(self)
}
/// Creates a new [SecretHashDomain] by mixing in a new key `v`
/// by calling [SecretHashDomain::invoke_primitive] with this
/// [HashDomain]'s key as `k` and `v` as `d`.
@@ -161,6 +188,46 @@ impl SecretHashDomain {
Self::invoke_primitive(self.0.secret(), v, self.1)
}
/// Version of [Self::mix] that accepts an iterator and mixes all values from the iterator into
/// this hash domain.
///
/// # Examples
///
/// ```rust
/// use rosenpass_ciphers::{hash_domain::HashDomain, KeyedHash};
///
/// rosenpass_secret_memory::secret_policy_use_only_malloc_secrets();
///
/// let hasher = HashDomain::zero(KeyedHash::keyed_shake256());
/// assert_eq!(
/// hasher
/// .clone()
/// .turn_secret()
/// .mix(b"Hello")?
/// .mix(b"World")?
/// .into_secret()
/// .secret(),
/// hasher
/// .clone()
/// .turn_secret()
/// .mix_many([b"Hello", b"World"])?
/// .into_secret()
/// .secret(),
/// );
/// Ok::<(), anyhow::Error>(())
/// ```
pub fn mix_many<I, T>(mut self, it: I) -> Result<Self>
where
I: IntoIterator<Item = T>,
T: AsRef<[u8]>,
{
for e in it {
self = self.mix(e.as_ref())?;
}
Ok(self)
}
/// Creates a new [SecretHashDomain] by mixing in a new key `v`
/// by calling [SecretHashDomain::invoke_primitive] with the key of this
/// [HashDomainNamespace] as `k` and `v` as `d`.

View File

@@ -2,6 +2,7 @@
\usepackage{amssymb}
\usepackage{mathtools}
\usepackage{fontspec}
\usepackage{dirtytalk}
%font fallback
\directlua{luaotfload.add_fallback

View File

@@ -8,13 +8,15 @@ author:
- Lisa Schmidt = {Scientific Illustrator \\url{mullana.de}}
- Prabhpreet Dua
abstract: |
Rosenpass is used to create post-quantum-secure VPNs. Rosenpass computes a shared key, WireGuard (WG) [@wg] uses the shared key to establish a secure connection. Rosenpass can also be used without WireGuard, deriving post-quantum-secure symmetric keys for another application. The Rosenpass protocol builds on “Post-quantum WireGuard” (PQWG) [@pqwg] and improves it by using a cookie mechanism to provide security against state disruption attacks.
Rosenpass is a post-quantum-secure authenticated key exchange protocol. Its main practical use case is creating post-quantum-secure VPNs by combining WireGuard and Rosenpass.
The WireGuard implementation enjoys great trust from the cryptography community and has excellent performance characteristics. To preserve these features, the Rosenpass application runs side-by-side with WireGuard and supplies a new post-quantum-secure pre-shared key (PSK) every two minutes. WireGuard itself still performs the pre-quantum-secure key exchange and transfers any transport data with no involvement from Rosenpass at all.
In this combination, Rosenpass generates a post-quantum-secure shared key every two minutes that is then used by WireGuard (WG) [@wg] to establish a secure connection. Rosenpass can also be used without WireGuard, providing post-quantum-secure symmetric keys for other applications, as long as the other application accepts a pre-shared key and provides cryptographic security based on the pre-shared key alone.
The Rosenpass protocol builds on “Post-quantum WireGuard” (PQWG) [@pqwg] and improves it by using a cookie mechanism to provide security against state disruption attacks. From a cryptographic perspective, Rosenpass can be thought of as a post-quantum secure variant of the Noise IK[@noise] key exchange. \say{Noise IK} means that the protocol makes both parties authenticate themselves, but that the initiator knows before the protocol starts which other party they are communicating with. There is no negotiation step where the responder communicates their identity to the initiator.
The Rosenpass project consists of a protocol description, an implementation written in Rust, and a symbolic analysis of the protocol's security using ProVerif [@proverif]. We are working on a cryptographic security proof using CryptoVerif [@cryptoverif].
This document is a guide for engineers and researchers implementing the protocol; a scientific paper discussing the security properties of Rosenpass is work in progress.
This document is a guide for engineers and researchers implementing the protocol.
---
\enlargethispage{5mm}
@@ -31,7 +33,7 @@ abstract: |
# Security
Rosenpass inherits most security properties from Post-Quantum WireGuard (PQWG). The security properties mentioned here are covered by the symbolic analysis in the Rosenpass repository.
## Secrecy
Three key encapsulations using the keypairs `sski`/`spki`, `sskr`/`spkr`, and `eski`/`epki` provide secrecy (see Section \ref{variables} for an introduction of the variables). Their respective ciphertexts are called `scti`, `sctr`, and `ectr` and the resulting keys are called `spti`, `sptr`, `epti`. A single secure encapsulation is sufficient to provide secrecy. We use two different KEMs (Key Encapsulation Mechanisms; see Section \ref{skem}): Kyber and Classic McEliece.
@@ -154,16 +156,18 @@ Rosenpass uses two types of ID variables. See Figure \ref{img:HashingTree} for h
The first lower-case character indicates whether the variable is a session ID (`sid`) or a peer ID (`pid`). The final character indicates the role using the characters `i`, `r`, `m`, or `t`, for `initiator`, `responder`, `mine`, or `theirs` respectively.
### Symmetric Keys
Rosenpass uses two symmetric key variables `psk` and `osk` in its interface, and maintains the entire handshake state in a variable called the chaining key.
### Symmetric Keys {#symmetric-keys}
Rosenpass uses two main symmetric key variables `psk` and `osk` in its interface, and maintains the entire handshake state in a variable called the chaining key.
* `psk`: A pre-shared key that can be optionally supplied as input to Rosenpass.
* `osk`: The output shared key, generated by Rosenpass and supplied to WireGuard for use as its pre-shared key.
* `ck`: The chaining key.
* `osk`: The output shared key, generated by Rosenpass. The main use case is to supply the key to WireGuard for use as its pre-shared key.
* `ck`: The chaining key. This refers to various intermediate keys produced during the execution of the protocol, before the final `osk` is produced.
We mix all key material (e.g. `psk`) into the chaining key and derive symmetric keys such as `osk` from it. We authenticate public values by mixing them into the chaining key; in particular, we include the entire protocol transcript in the chaining key, i.e., all values transmitted over the network.
The protocol allows for multiple `osk`s to be generated; each of these keys is labeled with a domain separator to make sure different key usages are always given separate keys. The domain separator for using Rosenpass and WireGuard together is a token generated using the domain separator sequence `["rosenpass.eu", "wireguard psk"]` (see Fig. \ref{img:HashingTree}), as described in \ref{protocol-extension-wireguard-psk}. Third-parties using Rosenpass-keys for other purposes are asked to define their own protocol-extensions. Standard protocol extensions are described in \ref{protocol-extensions}.
We mix all key material (e.g. `psk`) into the chaining key, and derive symmetric keys such as `osk` from it. We authenticate public values by mixing them into the chaining key; in particular, we include the entire protocol transcript in the chaining key, i.e., all values transmitted over the network.
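The mixing-and-extraction flow described above can be sketched in pseudorust (helper names are illustrative, not the normative API; the concrete derivation is given by the hashing tree):

```pseudorust
// Sketch only: helper names are illustrative.
fn derive_osk() {
    // Mix key material and all transmitted values into the chaining key
    ck ← mix(ck, psk);
    for v in transcript_values {
        ck ← mix(ck, v);
    }
    // Derive a labeled output key, e.g. for the WireGuard PSK extension
    osk ← export_key("rosenpass.eu", "wireguard psk");
}
```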
## Hashes
@@ -182,7 +186,7 @@ Using one hash function for multiple purposes can cause real-world security issu
\setupimage{landscape,fullpage,label=img:HashingTree}
![Rosenpass Hashing Tree](graphics/rosenpass-wp-hashing-tree-rgb.svg)
Each tree node $\circ{}$ in Figure 3 represents the application of the keyed hash function, using the previous chaining key value as first parameter. The root of the tree is the zero key. In level one, the `PROTOCOL` identifier is applied to the zero key to generate a label unique across cryptographic protocols (unless the same label is deliberately used elsewhere). In level two, purpose identifiers are applied to the protocol label to generate labels to use with each separate hash function application within the Rosenpass protocol. The following layers contain the inputs used in each separate usage of the hash function: Beneath the identifiers `"mac"`, `"cookie"`, `"peer id"`, and `"biscuit additional data"` are hash functions or message authentication codes with a small number of inputs. The second, third, and fourth column in Figure 3 cover the long sequential branch beneath the identifier `"chaining key init"` representing the entire protocol execution, one column for each message processed during the handshake. The leaves beneath `"chaining key extract"` in the left column represent pseudo-random labels for use when extracting values from the chaining key during the protocol execution. These values such as `mix >` appear as outputs in the left column, and then as inputs `< mix` in the other three columns.
Each tree node $\circ{}$ in Figure \ref{img:HashingTree} represents the application of the keyed hash function, using the previous chaining key value as first parameter. The root of the tree is the zero key. In level one, the `PROTOCOL` identifier is applied to the zero key to generate a label unique across cryptographic protocols (unless the same label is deliberately used elsewhere). In level two, purpose identifiers are applied to the protocol label to generate labels to use with each separate hash function application within the Rosenpass protocol. The following layers contain the inputs used in each separate usage of the hash function: Beneath the identifiers `"mac"`, `"cookie"`, `"peer id"`, and `"biscuit additional data"` are hash functions or message authentication codes with a small number of inputs. The second, third, and fourth column in Figure \ref{img:HashingTree} cover the long sequential branch beneath the identifier `"chaining key init"` representing the entire protocol execution, one column for each message processed during the handshake. The leaves beneath `"chaining key extract"` in the left column represent pseudo-random labels for use when extracting values from the chaining key during the protocol execution. These values such as `mix >` appear as outputs in the left column, and then as inputs `< mix` in the other three columns.
The protocol identifier depends on the hash function used with the respective peer and is defined as follows if BLAKE2s [@rfc_blake2] is used:
@@ -289,7 +293,7 @@ fn lookup_session(sid);
The protocol framework used by Rosenpass allows arbitrarily many different keys to be extracted using labels for each key. The `extract_key` function is used to derive protocol-internal keys; its labels are under the “chaining key extract” node in Figure \ref{img:HashingTree}. The export key function is used to export application keys.
Third-party applications using the protocol are supposed to choose a unique label (e.g., their domain name) and use that as their own namespace for custom labels. The Rosenpass project itself uses the rosenpass.eu namespace.
Third-party applications using the protocol are supposed to define a protocol extension (see \ref{protocol-extensions}) and choose a globally unique label, such as their domain name for custom labels of their own. The Rosenpass project itself uses the `["rosenpass.eu"]` namespace in the WireGuard PSK protocol extension (see \ref{protocol-extension-wireguard-psk}).
Applications can cache or statically compile the pseudo-random label values into their binary to improve performance.
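A sketch of such caching (names are illustrative; the actual label values follow the hashing tree in Figure \ref{img:HashingTree}):

```pseudorust
// Sketch only: precompute the pseudo-random label once, e.g. at build time
static WG_PSK_LABEL ← hash_chain(ZERO_KEY,
    [PROTOCOL, "user", "rosenpass.eu", "wireguard psk"]);

fn export_wireguard_psk() {
    // At runtime, only the final keyed-hash application remains
    osk ← keyed_hash(ck, WG_PSK_LABEL);
}
```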
@@ -395,7 +399,7 @@ fn load_biscuit(nct) {
// In December 2024, the InitConf retransmission mechanism was redesigned
// in a backwards-compatible way. See the changelog.
//
// -- 2024-11-30, Karolin Varner
if (protocol_version!(< "0.3.0")) {
// Ensure that the biscuit is used only once
@@ -421,6 +425,18 @@ fn enter_live() {
txkr ← extract_key("responder payload encryption");
txnm ← 0;
txnt ← 0;
// Setup output keys for protocol extensions such as the
// WireGuard PSK protocol extension.
setup_osks();
}
```
The final step `setup_osks()` can be defined by protocol extensions (see \ref{protocol-extensions}) to set up `osk`s for custom use cases. By default, the WireGuard PSK (see \ref{protocol-extension-wireguard-psk}) is active.
```pseudorust
fn setup_osks() {
... // Defined by protocol extensions
}
```
@@ -448,11 +464,11 @@ ICR5 and ICR6 perform biscuit replay protection using the biscuit number. This i
### Denial of Service Mitigation and Cookies
Rosenpass derives its cookie-based DoS mitigation technique for a responder receiving InitHello messages from WireGuard [@wg].
When the responder is under load, it may choose to not process further InitHello handshake messages, but instead to respond with a cookie reply message (see Figure \ref{img:MessageTypes}).
The sender of the exchange then uses this cookie to resend the message and have it accepted by the receiver the next time.
For an initiator, Rosenpass ignores all messages when under load.
@@ -465,7 +481,7 @@ cookie_value = lhash("cookie-value", cookie_secret, initiator_host_info)[0..16]
cookie_encrypted = XAEAD(lhash("cookie-key", spkm), nonce, cookie_value, mac_peer)
```
where `cookie_secret` is a secret variable that changes every two minutes to a random value. Moreover, `lhash` is always instantiated with SHAKE256 when computing `cookie_value` for compatibility reasons. `initiator_host_info` is used to identify the initiator host and is implementation-specific for the client. The parameters used to identify the host must be carefully chosen to ensure there is a unique mapping, especially when using IPv4 and IPv6 addresses to identify the host (such as taking care of IPv6 link-local addresses). `cookie_value` is a truncated 16-byte value from the above hash operation. `mac_peer` is the `mac` field of the peer's handshake message to which this message is the reply.
#### Envelope `mac` Field
@@ -495,13 +511,13 @@ else {
Here, `seconds_since_update(peer.cookie_value)` is the amount of time in seconds elapsed since the last cookie was received, and `COOKIE_WIRE_DATA` are the message contents of all bytes of the retransmitted message prior to the `cookie` field.
The initiator can use an invalid value for the `cookie` field when the responder is not under load, and the responder must ignore this value.
However, when the responder is under load, it may reject InitHello messages with an invalid `cookie` value and issue a cookie reply message.
### Conditions to trigger DoS Mechanism
This whitepaper does not mandate any specific mechanism to detect responder contention (also referred to as the under-load condition) that would trigger use of the cookie mechanism.
For the reference implementation, Rosenpass has drawn inspiration from the Linux implementation of WireGuard. This implementation suggests that the receiver keep track of the number of messages it is processing at a given time.
On receiving an incoming message, if the length of the message queue to be processed exceeds a threshold `MAX_QUEUED_INCOMING_HANDSHAKES_THRESHOLD`, the client is considered under load and its state is stored as under load. In addition, the timestamp of the instant when the client was last under load is stored. When receiving subsequent messages, if the client is still in an under-load state, the client checks whether the time elapsed since it was last under load has exceeded `LAST_UNDER_LOAD_WINDOW` seconds. If this is the case, the client updates its state to normal operation and processes the message in a normal fashion.
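A sketch of this detection logic (constant and state names mirror the description above and are illustrative, not the reference implementation's actual identifiers):

```pseudorust
fn check_under_load(queue) {
    if queue.len() > MAX_QUEUED_INCOMING_HANDSHAKES_THRESHOLD {
        // Too many queued handshakes: enter the under-load state
        under_load ← true;
        last_under_load ← now();
    } else if under_load
            && now() - last_under_load > LAST_UNDER_LOAD_WINDOW {
        // The load spike has passed; resume normal processing
        under_load ← false;
    }
}
```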
@@ -520,23 +536,159 @@ The responder uses less complex form of the same mechanism: The responder never
### Interaction with cookie reply system
The cookie reply system does not interfere with the retransmission logic discussed above.
When the initiator is under load, it will ignore any incoming messages.
When a responder is under load and receives an InitHello handshake message, the InitHello message is discarded and a cookie reply message is sent. On receipt of the cookie reply message, the initiator stores the decrypted `cookie_value` and sets the `cookie` field in subsequently sent messages. As per the retransmission mechanism above, the initiator sends a retransmitted InitHello message with a valid `cookie` value appended. On receiving the retransmitted handshake message, the responder validates the `cookie` value and resumes the handshake process.
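The initiator's handling of a cookie reply can be sketched as follows (illustrative names; `lhash` and `XAEAD` as defined in the cookie computation above):

```pseudorust
fn on_cookie_reply(msg) {
    // Decrypt the cookie value; the key is derived from our public key spkm,
    // and mac_peer binds the reply to our earlier handshake message
    cookie_value ← XAEAD_decrypt(lhash("cookie-key", spkm),
        msg.nonce, msg.cookie_encrypted, mac_peer);
    peer.cookie_value ← cookie_value;
    // The retransmission mechanism now resends InitHello
    // with a valid cookie field
}
```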
When the responder is under load and receives an InitConf message, the message is processed directly without checking the validity of the cookie field.
# Protocol extensions {#protocol-extensions}
The main extension point for the Rosenpass protocol is to generate `osk`s (i.e., output shared keys, see Sec. \ref{symmetric-keys}) for purposes other than using them to secure WireGuard. By default, the Rosenpass application generates keys for the WireGuard PSK (see \ref{protocol-extension-wireguard-psk}). It would not be impossible to use the keys generated for WireGuard in other use cases, but this might lead to attacks [@oraclecloning]. Specifying a custom protocol extension in practice just means settling on alternative domain separators (see Sec. \ref{symmetric-keys}, Fig. \ref{img:HashingTree}).
## Using custom domain separators in the Rosenpass application
The Rosenpass application supports protocol extensions to change the OSK domain separator without modification of the source code.
The following example configuration file can be used to execute Rosenpass in outfile mode with custom domain separators.
In this mode, the Rosenpass application will write keys to the file specified with `key_out` and send notifications when new keys are exchanged via standard out.
This can be used to embed Rosenpass into third-party applications.
```toml
# peer-a.toml
public_key = "peer-a.pk"
secret_key = "peer-a.sk"
listen = ["[::1]:6789"]
verbosity = "Verbose"
[[peers]]
public_key = "peer-b.pk"
key_out = "peer-a.osk" # path to store the key
osk_organization = "myorg.com"
osk_label = ["My Custom Messenger app", "Backend VPN Example Subusecase"]
```
## Extension: WireGuard PSK {#protocol-extension-wireguard-psk}
The WireGuard PSK protocol extension is active by default; this is the mode where Rosenpass is used to provide post-quantum security for WireGuard. Hybrid security (i.e. redundant pre-quantum and post-quantum security) is achieved because WireGuard provides pre-quantum security, with or without Rosenpass.
This extension uses the `"rosenpass.eu"` namespace for user-labels and specifies a single additional user-label:
* `["rosenpass.eu", "wireguard psk"]`
The label's full domain separator is
* `[PROTOCOL, "user", "rosenpass.eu", "wireguard psk"]`
and can be seen in Figure \ref{img:HashingTree}.
We require two extra per-peer configuration variables:
* `wireguard_interface` — Name of a local network interface. Identifies the local WireGuard interface we are supplying a PSK to.
* `wireguard_peer` — A WireGuard public key. Identifies the particular WireGuard peer whose connection we are supplying PSKs for.
When creating the WireGuard interface for use with Rosenpass, the PSK used by WireGuard must be initialized to a random value; otherwise, WireGuard can establish an insecure key before Rosenpass has had a chance to exchange its own key.
```pseudorust
fn on_wireguard_setup() {
// We use a random PSK to make sure the other side will never
// have a matching PSK when the WireGuard interface is created.
//
// Never use a fixed value here as this would lead to an attack!
let fake_wireguard_psk = random_key();
// How the interface is created
let wg_peer = WireGuard::setup_peer()
.public_key(wireguard_peer)
... // Supply any custom peer configuration
.psk(fake_wireguard_psk);
// The random PSK must be supplied before the
// WireGuard interface comes up
WireGuard::setup_interface()
.name(wireguard_interface)
... // Supply any custom configuration
.add_peer(wg_peer)
.create();
}
```
Every time a key is successfully negotiated, we upload the key to WireGuard.
For this protocol extension, the `setup_osks()` function is thus defined as:
```pseudorust
fn setup_osks() {
// Generate WireGuard OSK (output shared key) from Rosenpass'
// perspective, respectively the PSK (preshared key) from
// WireGuard's perspective
let wireguard_psk = export_key("rosenpass.eu", "wireguard psk");
// Supply the PSK to WireGuard
WireGuard::get_interface(wireguard_interface)
.get_peer(wireguard_peer)
.set_psk(wireguard_psk);
}
```
The Rosenpass protocol uses key renegotiation, just like WireGuard.
If no new `osk` is produced within a set amount of time, the OSK generated by Rosenpass times out.
In this case, the WireGuard PSK must be overwritten with a random key.
This interaction is visualized in Figure \ref{img:ExtWireguardPSKHybridSecurity}.
```pseudorust
fn on_key_timeout() {
// Generate a random deliberately invalid WireGuard PSK.
// Never use a fixed value here as this would lead to an attack!
let fake_wireguard_psk = random_key();
// Securely erase the PSK currently used by WireGuard by
// overwriting it with the fake key we just generated.
WireGuard::get_interface(wireguard_interface)
.get_peer(wireguard_peer)
.set_psk(fake_wireguard_psk);
}
```
\setupimage{label=img:ExtWireguardPSKHybridSecurity,fullpage}
![Rosenpass + WireGuard: Hybrid Security](graphics/rosenpass-wireguard-hybrid-security.pdf)
# Changelog
### 0.3.x
#### 2025-06-24 Specifying the `osk` used for WireGuard as a protocol extension
\vspace{0.5em}
Author: Karolin Varner
PR: [#664](https://github.com/rosenpass/rosenpass/pull/664)
\vspace{0.5em}
We introduce the concept of protocol extensions to make the option of using Rosenpass for purposes other than encrypting WireGuard more explicit. This captures the status quo better and does not constitute a functional change to the protocol.
When we designed the Rosenpass protocol, we built it with support for alternative `osk`-labels in mind.
This is why we specified the domain separator for the `osk` to be `[PROTOCOL, "user", "rosenpass.eu", "wireguard psk"]`.
By choosing alternative values for the namespace (e.g. `"myorg.eu"` instead of `"rosenpass.eu"`) and the label (e.g. `"MyApp Symmetric Encryption"`), the protocol could easily accommodate alternative usage scenarios.
By introducing the concept of protocol extensions, we make this possibility explicit.
1. Reworded the abstract to make it clearer that Rosenpass can be used for other purposes than to secure WireGuard
2. Reworded Section Symmetric Keys, adding references to the new section on protocol extension
3. Added a `setup_osks()` function in section Hashes, to make the reference to protocol extensions explicit
4. Added a new section on protocol extensions and the standard extension for using Rosenpass with WireGuard
5. Added a new graphic to showcase how Rosenpass and WireGuard interact
6. Minor formatting and intra-document reference fixes
#### 2025-05-22 - SHAKE256 keyed hash
\vspace{0.5em}
Author: David Niehues
PR: [#653](https://github.com/rosenpass/rosenpass/pull/653)
\vspace{0.5em}
@@ -554,9 +706,11 @@ In order to maintain compatablity without introducing an explcit version number
\vspace{0.5em}
Author: Karolin Varner
Issue: [#331](https://github.com/rosenpass/rosenpass/issues/331)
PR: [#513](https://github.com/rosenpass/rosenpass/pull/513)
\vspace{0.5em}
@@ -574,7 +728,7 @@ By removing all retransmission handling code from the cryptographic protocol, we
The responder does not need to do anything special to handle RespHello retransmission: if the RespHello package is lost, the initiator retransmits InitHello and the responder can generate another RespHello package from that. InitConf retransmission needs to be handled specifically in the responder code because accepting an InitConf retransmission would reset the live session including the nonce counter, which would cause nonce reuse. Implementations must detect the case that `biscuit_no = biscuit_used` in ICR5, skip execution of ICR6 and ICR7, and just transmit another EmptyData package to confirm that the initiator can stop transmitting InitConf.
\end{quote}
by
\begin{quote}
The responder uses a less complex form of the same mechanism: the responder never retransmits RespHello; instead, it generates a new RespHello message whenever InitHello is retransmitted. Responder confirmation messages for completed handshakes (EmptyData messages) are retransmitted by storing the most recent InitConf messages (or their hashes) and caching the associated EmptyData messages. Through this cache, InitConf retransmission is detected and the associated EmptyData message is retransmitted.
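The caching mechanism described here can be sketched in a few lines of Rust. This is an illustrative sketch only, not the Rosenpass implementation: the type names (`RetransmissionCache`, `InitConfHash`, `EmptyDataMsg`) are hypothetical stand-ins for the real message types.

```rust
use std::collections::HashMap;

// Hypothetical stand-ins for the real Rosenpass message types.
type InitConfHash = [u8; 32];
type EmptyDataMsg = Vec<u8>;

/// Maps the hash of a received InitConf message to the EmptyData
/// confirmation that answered it.
#[derive(Default)]
struct RetransmissionCache {
    answers: HashMap<InitConfHash, EmptyDataMsg>,
}

impl RetransmissionCache {
    /// Called after a fresh InitConf was processed successfully.
    fn remember(&mut self, init_conf_hash: InitConfHash, empty_data: EmptyDataMsg) {
        self.answers.insert(init_conf_hash, empty_data);
    }

    /// Called for every incoming InitConf. A cache hit means this is a
    /// retransmission: resend the cached EmptyData instead of processing
    /// the message again (which would reset the session and risk nonce reuse).
    fn lookup(&self, init_conf_hash: &InitConfHash) -> Option<&EmptyDataMsg> {
        self.answers.get(init_conf_hash)
    }
}

fn main() {
    let mut cache = RetransmissionCache::default();
    cache.remember([7u8; 32], vec![1, 2, 3]);
    // Retransmission detected: the cached confirmation is resent.
    assert!(cache.lookup(&[7u8; 32]).is_some());
    // A fresh InitConf misses the cache and is processed normally.
    assert!(cache.lookup(&[0u8; 32]).is_none());
}
```

In practice such a cache would also need eviction of stale entries; the sketch omits that for brevity.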
@@ -597,7 +751,7 @@ By removing all retransmission handling code from the cryptographic protocol, we
\begin{minted}{pseudorust}
// In December 2024, the InitConf retransmission mechanism was redesigned
// in a backwards-compatible way. See the changelog.
//
// -- 2024-11-30, Karolin Varner
if (protocol_version!(< "0.3.0")) {
// Ensure that the biscuit is used only once
@@ -611,9 +765,11 @@ By removing all retransmission handling code from the cryptographic protocol, we
\vspace{0.5em}
Author: Prabhpreet Dua
Issue: [#137](https://github.com/rosenpass/rosenpass/issues/137)
PR: [#142](https://github.com/rosenpass/rosenpass/pull/142)
\vspace{0.5em}

View File

@@ -44,6 +44,7 @@ let
xifthen
xkeyval
xurl
dirtytalk
;
}
);

View File

@@ -64,6 +64,8 @@ clap = { workspace = true }
clap_complete = { workspace = true }
clap_mangen = { workspace = true }
mio = { workspace = true }
signal-hook = { workspace = true }
signal-hook-mio = { workspace = true }
rand = { workspace = true }
zerocopy = { workspace = true }
home = { workspace = true }
@@ -76,7 +78,6 @@ heck = { workspace = true, optional = true }
command-fds = { workspace = true, optional = true }
rustix = { workspace = true, optional = true }
uds = { workspace = true, optional = true, features = ["mio_1xx"] }
signal-hook = { workspace = true, optional = true }
libcrux-test-utils = { workspace = true, optional = true }
[build-dependencies]
@@ -90,9 +91,9 @@ serial_test = { workspace = true }
procspawn = { workspace = true }
tempfile = { workspace = true }
rustix = { workspace = true }
serde_json = { workspace = true }
[features]
#default = ["experiment_libcrux_all"]
experiment_cookie_dos_mitigation = []
experiment_memfd_secret = ["rosenpass-wireguard-broker/experiment_memfd_secret"]
experiment_libcrux_all = ["rosenpass-ciphers/experiment_libcrux_all"]
@@ -109,7 +110,6 @@ experiment_api = [
"rosenpass-util/experiment_file_descriptor_passing",
"rosenpass-wireguard-broker/experiment_api",
]
internal_signal_handling_for_coverage_reports = ["signal-hook"]
internal_testing = []
internal_bin_gen_ipc_msg_types = ["hex", "heck"]
trace_bench = ["rosenpass-util/trace_bench", "dep:libcrux-test-utils"]

View File

@@ -1,15 +1,16 @@
use anyhow::Result;
use rosenpass::protocol::{
CryptoServer, HandleMsgResult, MsgBuf, PeerPtr, ProtocolVersion, SPk, SSk, SymKey,
};
use std::ops::DerefMut;
use anyhow::Result;
use criterion::{black_box, criterion_group, criterion_main, Criterion};
use rosenpass_cipher_traits::primitives::Kem;
use rosenpass_ciphers::StaticKem;
use criterion::{black_box, criterion_group, criterion_main, Criterion};
use rosenpass_secret_memory::secret_policy_try_use_memfd_secrets;
use rosenpass::protocol::basic_types::{MsgBuf, SPk, SSk, SymKey};
use rosenpass::protocol::osk_domain_separator::OskDomainSeparator;
use rosenpass::protocol::{CryptoServer, HandleMsgResult, PeerPtr, ProtocolVersion};
fn handle(
tx: &mut CryptoServer,
msgb: &mut MsgBuf,
@@ -54,8 +55,18 @@ fn make_server_pair(protocol_version: ProtocolVersion) -> Result<(CryptoServer,
CryptoServer::new(ska, pka.clone()),
CryptoServer::new(skb, pkb.clone()),
);
a.add_peer(Some(psk.clone()), pkb, protocol_version.clone())?;
b.add_peer(Some(psk), pka, protocol_version)?;
a.add_peer(
Some(psk.clone()),
pkb,
protocol_version.clone(),
OskDomainSeparator::default(),
)?;
b.add_peer(
Some(psk),
pka,
protocol_version,
OskDomainSeparator::default(),
)?;
Ok((a, b))
}

View File

@@ -1,22 +1,23 @@
use std::{
collections::HashMap,
hint::black_box,
io::{self, Write},
ops::DerefMut,
time::{Duration, Instant},
};
use anyhow::Result;
use libcrux_test_utils::tracing::{EventType, Trace as _};
use rosenpass_cipher_traits::primitives::Kem;
use rosenpass_ciphers::StaticKem;
use rosenpass_secret_memory::secret_policy_try_use_memfd_secrets;
use rosenpass_util::trace_bench::RpEventType;
use rosenpass_util::trace_bench::RpEvent;
use rosenpass::protocol::{
CryptoServer, HandleMsgResult, MsgBuf, PeerPtr, ProtocolVersion, SPk, SSk, SymKey,
};
use rosenpass::protocol::basic_types::{MsgBuf, SPk, SSk, SymKey};
use rosenpass::protocol::osk_domain_separator::OskDomainSeparator;
use rosenpass::protocol::{CryptoServer, HandleMsgResult, PeerPtr, ProtocolVersion};
use serde::ser::SerializeStruct;
const ITERATIONS: usize = 100;
@@ -77,8 +78,18 @@ fn make_server_pair(protocol_version: ProtocolVersion) -> Result<(CryptoServer,
CryptoServer::new(ska, pka.clone()),
CryptoServer::new(skb, pkb.clone()),
);
a.add_peer(Some(psk.clone()), pkb, protocol_version.clone())?;
b.add_peer(Some(psk), pka, protocol_version)?;
a.add_peer(
Some(psk.clone()),
pkb,
protocol_version.clone(),
OskDomainSeparator::default(),
)?;
b.add_peer(
Some(psk),
pka,
protocol_version,
OskDomainSeparator::default(),
)?;
Ok((a, b))
}
@@ -117,15 +128,30 @@ fn main() {
(v02, &v03_with_marker[1..])
};
// Perform statistical analysis on both trace sections and write results as JSON
write_json_arrays(
&mut std::io::stdout(), // Write to standard output
vec![
("V02", statistical_analysis(trace_v02.to_vec())),
("V03", statistical_analysis(trace_v03.to_vec())),
],
)
.expect("error writing json data");
// Perform statistical analysis on both trace sections
let analysis_v02 = statistical_analysis(trace_v02);
let analysis_v03 = statistical_analysis(trace_v03);
// Transform analysis results to JSON-encodable data type
let stats_v02 = analysis_v02
.iter()
.map(|(label, agg_stat)| JsonAggregateStat {
protocol_version: "V02",
label,
agg_stat,
});
let stats_v03 = analysis_v03
.iter()
.map(|(label, agg_stat)| JsonAggregateStat {
protocol_version: "V03",
label,
agg_stat: &agg_stat,
});
// Write results as JSON
let stats_all: Vec<_> = stats_v02.chain(stats_v03).collect();
let stats_json = serde_json::to_string_pretty(&stats_all).expect("error encoding to json");
println!("{stats_json}");
}
/// Performs a simple statistical analysis:
@@ -133,7 +159,7 @@ fn main() {
/// - extracts durations of spans
/// - filters out empty bins
/// - calculates aggregate statistics (mean, std dev)
fn statistical_analysis(trace: Vec<RpEventType>) -> Vec<(&'static str, AggregateStat<Duration>)> {
fn statistical_analysis(trace: &[RpEvent]) -> Vec<(&'static str, AggregateStat<Duration>)> {
bin_events(trace)
.into_iter()
.map(|(label, spans)| (label, extract_span_durations(label, spans.as_slice())))
@@ -142,44 +168,6 @@ fn statistical_analysis(trace: Vec<RpEventType>) -> Vec<(&'static str, Aggregate
.collect()
}
/// Takes an iterator of ("protocol_version", iterator_of_stats) pairs and writes them
/// as a single flat JSON array to the provided writer.
///
/// # Arguments
/// * `w` - The writer to output JSON to (e.g., stdout, file).
/// * `item_groups` - An iterator producing tuples `(version, stats): (&'static str, II)`.
/// Here `II` is itself an iterator producing `(label, agg_stat): (&'static str, AggregateStat<Duration>)`,
/// where the label is the label of the span, e.g. "IHI2".
///
/// # Type Parameters
/// * `W` - A type that implements `std::io::Write`.
/// * `II` - An iterator type yielding (`&'static str`, `AggregateStat<Duration>`).
fn write_json_arrays<W: Write, II: IntoIterator<Item = (&'static str, AggregateStat<Duration>)>>(
w: &mut W,
item_groups: impl IntoIterator<Item = (&'static str, II)>,
) -> io::Result<()> {
// Flatten the groups into a single iterator of (protocol_version, label, stats)
let iter = item_groups.into_iter().flat_map(|(version, items)| {
items
.into_iter()
.map(move |(label, agg_stat)| (version, label, agg_stat))
});
let mut delim = ""; // Start with no delimiter
// Start the JSON array
write!(w, "[")?;
// Write the flattened statistics as JSON objects, separated by commas.
for (version, label, agg_stat) in iter {
write!(w, "{delim}")?; // Write delimiter (empty for first item, "," for subsequent)
agg_stat.write_json_ns(label, version, w)?; // Write the JSON object for the stat entry
delim = ","; // Set delimiter for the next iteration
}
// End the JSON array
write!(w, "]")
}
/// Used to group benchmark results in visualizations
enum RunTimeGroup {
/// For particularly long operations.
@@ -232,13 +220,13 @@ enum StatEntry {
/// Takes a flat list of events and organizes them into a HashMap where keys
/// are event labels and values are vectors of events with that label.
fn bin_events(events: Vec<RpEventType>) -> HashMap<&'static str, Vec<RpEventType>> {
fn bin_events(events: &[RpEvent]) -> HashMap<&'static str, Vec<RpEvent>> {
let mut spans = HashMap::<_, Vec<_>>::new();
for event in events {
// Get the vector for the event's label, or create a new one
let spans_for_label = spans.entry(event.label).or_default();
// Add the event to the vector
spans_for_label.push(event);
spans_for_label.push(event.clone());
}
spans
}
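The surrounding code bins events by label and then matches `SpanOpen`/`SpanClose` events to compute durations. A minimal std-only sketch of the matching step, under the simplifying assumptions that all events carry the same label and that interleaved spans close in LIFO order (the `Ev` type is a hypothetical simplification of the real trace event type):

```rust
use std::time::{Duration, Instant};

// Hypothetical, simplified trace event: a span open or close with a timestamp.
enum Ev {
    Open(Instant),
    Close(Instant),
}

/// Match open/close events (assumed to be in chronological order) and
/// return the duration of each completed span. Open timestamps are kept
/// on a stack, so nested spans are matched in LIFO order.
fn span_durations(events: &[Ev]) -> Vec<Duration> {
    let mut open_stack = Vec::new();
    let mut durations = Vec::new();
    for ev in events {
        match ev {
            Ev::Open(t) => open_stack.push(*t),
            Ev::Close(t) => {
                if let Some(start) = open_stack.pop() {
                    durations.push(t.duration_since(start));
                }
            }
        }
    }
    durations
}

fn main() {
    let t0 = Instant::now();
    let (t1, t2, t3) = (
        t0 + Duration::from_millis(10),
        t0 + Duration::from_millis(25),
        t0 + Duration::from_millis(40),
    );
    // Two interleaved spans: the inner one (t1..t2) closes first.
    let ds = span_durations(&[Ev::Open(t0), Ev::Open(t1), Ev::Close(t2), Ev::Close(t3)]);
    assert_eq!(ds, vec![Duration::from_millis(15), Duration::from_millis(40)]);
}
```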
@@ -246,7 +234,7 @@ fn bin_events(events: Vec<RpEventType>) -> HashMap<&'static str, Vec<RpEventType
/// Processes a list of events (assumed to be for the same label), matching
/// `SpanOpen` and `SpanClose` events to calculate the duration of each span.
/// It handles potentially interleaved spans correctly.
fn extract_span_durations(label: &str, events: &[RpEventType]) -> Vec<Duration> {
fn extract_span_durations(label: &str, events: &[RpEvent]) -> Vec<Duration> {
let mut processing_list: Vec<StatEntry> = vec![]; // List to track open spans and final durations
for entry in events {
@@ -306,6 +294,7 @@ fn extract_span_durations(label: &str, events: &[RpEventType]) -> Vec<Duration>
/// Stores the mean, standard deviation, relative standard deviation (sd/mean),
/// and the number of samples used for calculation.
#[derive(Debug)]
#[allow(dead_code)]
struct AggregateStat<T> {
/// Average duration.
mean_duration: T,
@@ -355,32 +344,33 @@ impl AggregateStat<Duration> {
sample_size,
}
}
}
/// Writes the statistics as a JSON object to the provided writer.
/// Includes metadata like label, protocol_version, OS, architecture, and run time group.
///
/// # Arguments
/// * `label` - The specific benchmark/span label.
/// * `protocol_version` - Version of the protocol that is benchmarked.
/// * `w` - The output writer (must implement `std::io::Write`).
fn write_json_ns(
&self,
label: &str,
protocol_version: &str,
w: &mut impl io::Write,
) -> io::Result<()> {
// Format the JSON string using measured values and environment constants
writeln!(
w,
r#"{{"name":"{name}", "unit":"ns/iter", "value":"{value}", "range":"± {range}", "protocol version":"{protocol_version}", "sample size":"{sample_size}", "operating system":"{os}", "architecture":"{arch}", "run time":"{run_time}"}}"#,
name = label, // Benchmark name
value = self.mean_duration.as_nanos(), // Mean duration in nanoseconds
range = self.sd_duration.as_nanos(), // Standard deviation in nanoseconds
sample_size = self.sample_size, // Number of samples
os = std::env::consts::OS, // Operating system
arch = std::env::consts::ARCH, // CPU architecture
run_time = run_time_group(label), // Run time group category (long, medium, etc.)
protocol_version = protocol_version // Overall protocol_version (e.g., protocol version)
)
struct JsonAggregateStat<'a, T> {
agg_stat: &'a AggregateStat<T>,
label: &'a str,
protocol_version: &'a str,
}
impl<'a> serde::Serialize for JsonAggregateStat<'a, Duration> {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: serde::Serializer,
{
let mut stat = serializer.serialize_struct("AggregateStat", 9)?;
stat.serialize_field("name", self.label)?;
stat.serialize_field("unit", "ns/iter")?;
stat.serialize_field("value", &self.agg_stat.mean_duration.as_nanos().to_string())?;
stat.serialize_field(
"range",
&format!("± {}", self.agg_stat.sd_duration.as_nanos()),
)?;
stat.serialize_field("protocol version", self.protocol_version)?;
stat.serialize_field("sample size", &self.agg_stat.sample_size)?;
stat.serialize_field("operating system", std::env::consts::OS)?;
stat.serialize_field("architecture", std::env::consts::ARCH)?;
stat.serialize_field("run time", &run_time_group(self.label).to_string())?;
stat.end()
}
}

View File

@@ -158,10 +158,10 @@ where
);
// Actually read the secrets
let mut sk = crate::protocol::SSk::zero();
let mut sk = crate::protocol::basic_types::SSk::zero();
sk_io.read_exact_til_end(sk.secret_mut()).einvalid_req()?;
let mut pk = crate::protocol::SPk::zero();
let mut pk = crate::protocol::basic_types::SPk::zero();
pk_io.read_exact_til_end(pk.borrow_mut()).einvalid_req()?;
// Retrieve the construction site

View File

@@ -8,6 +8,7 @@ use crate::app_server::AppServer;
/// Configuration options for the Rosenpass API
#[derive(Debug, Serialize, Deserialize, Default, Clone, PartialEq, Eq)]
#[serde(deny_unknown_fields)]
pub struct ApiConfig {
/// Where in the file-system to create the unix socket the rosenpass API will be listening for
/// connections on

View File

@@ -1,56 +1,37 @@
/// This contains the bulk of the rosenpass server IO handling code whereas
/// the actual cryptographic code lives in the [crate::protocol] module
use anyhow::bail;
//! This contains the bulk of the rosenpass server IO handling code whereas
//! the actual cryptographic code lives in the [crate::protocol] module
use anyhow::Context;
use anyhow::Result;
use std::collections::{HashMap, VecDeque};
use std::io::{stdout, ErrorKind, Write};
use std::net::{Ipv4Addr, Ipv6Addr, SocketAddr, SocketAddrV4, SocketAddrV6, ToSocketAddrs};
use std::time::{Duration, Instant};
use std::{cell::Cell, fmt::Debug, io, path::PathBuf, slice};
use mio::{Interest, Token};
use signal_hook_mio::v1_0 as signal_hook_mio;
use anyhow::{bail, Context, Result};
use derive_builder::Builder;
use log::{error, info, warn};
use mio::Interest;
use mio::Token;
use rosenpass_secret_memory::Public;
use rosenpass_secret_memory::Secret;
use rosenpass_util::build::ConstructionSite;
use rosenpass_util::file::StoreValueB64;
use rosenpass_util::functional::run;
use rosenpass_util::functional::ApplyExt;
use rosenpass_util::io::IoResultKindHintExt;
use rosenpass_util::io::SubstituteForIoErrorKindExt;
use rosenpass_util::option::SomeExt;
use rosenpass_util::result::OkExt;
use rosenpass_wireguard_broker::WireguardBrokerMio;
use rosenpass_wireguard_broker::{WireguardBrokerCfg, WG_KEY_LEN};
use zerocopy::AsBytes;
use std::cell::Cell;
use std::collections::HashMap;
use std::collections::VecDeque;
use std::fmt::Debug;
use std::io;
use std::io::stdout;
use std::io::ErrorKind;
use std::io::Write;
use std::net::Ipv4Addr;
use std::net::Ipv6Addr;
use std::net::SocketAddr;
use std::net::SocketAddrV4;
use std::net::SocketAddrV6;
use std::net::ToSocketAddrs;
use std::path::PathBuf;
use std::slice;
use std::time::Duration;
use std::time::Instant;
use crate::config::ProtocolVersion;
use crate::protocol::BuildCryptoServer;
use crate::protocol::HostIdentification;
use crate::{
config::Verbosity,
protocol::{CryptoServer, MsgBuf, PeerPtr, SPk, SSk, SymKey, Timing},
};
use rosenpass_util::attempt;
use rosenpass_util::b64::B64Display;
use rosenpass_util::fmt::debug::NullDebug;
use rosenpass_util::functional::{run, ApplyExt};
use rosenpass_util::io::{IoResultKindHintExt, SubstituteForIoErrorKindExt};
use rosenpass_util::{
b64::B64Display, build::ConstructionSite, file::StoreValueB64, result::OkExt,
};
use rosenpass_secret_memory::{Public, Secret};
use rosenpass_wireguard_broker::{WireguardBrokerCfg, WireguardBrokerMio, WG_KEY_LEN};
use crate::config::{ProtocolVersion, Verbosity};
use crate::protocol::basic_types::{MsgBuf, SPk, SSk, SymKey};
use crate::protocol::osk_domain_separator::OskDomainSeparator;
use crate::protocol::timing::Timing;
use crate::protocol::{BuildCryptoServer, CryptoServer, HostIdentification, PeerPtr};
/// The maximum size of a base64 encoded symmetric key (estimate)
pub const MAX_B64_KEY_SIZE: usize = 32 * 5 / 3;
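As a quick sanity check on this estimate (illustrative only, not part of the codebase): the exact padded Base64 length of `n` input bytes is `ceil(n/3) * 4`, so a 32-byte key encodes to 44 characters, comfortably below the `32 * 5 / 3 = 53` bound above.

```rust
/// Exact length of padded Base64 output for `n` input bytes.
fn b64_len_padded(n: usize) -> usize {
    n.div_ceil(3) * 4
}

fn main() {
    // A 32-byte symmetric key encodes to exactly 44 Base64 characters.
    assert_eq!(b64_len_padded(32), 44);
    // The 5/3 rule of thumb is a safe overestimate: 53 >= 44.
    assert!(32 * 5 / 3 >= b64_len_padded(32));
}
```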
@@ -151,7 +132,7 @@ pub struct BrokerStore {
/// The collection of WireGuard brokers. See [Self].
pub store: HashMap<
Public<BROKER_ID_BYTES>,
Box<dyn WireguardBrokerMio<Error = anyhow::Error, MioError = anyhow::Error>>,
Box<dyn WireguardBrokerMio<Error = anyhow::Error, MioError = anyhow::Error> + Send>,
>,
}
@@ -168,12 +149,12 @@ pub struct BrokerPeer {
///
/// This is woefully overengineered and there is very little reason why the broker
/// configuration should not live in the particular WireGuard broker.
peer_cfg: Box<dyn WireguardBrokerCfg>,
peer_cfg: Box<dyn WireguardBrokerCfg + Send>,
}
impl BrokerPeer {
/// Create a broker peer
pub fn new(ptr: BrokerStorePtr, peer_cfg: Box<dyn WireguardBrokerCfg>) -> Self {
pub fn new(ptr: BrokerStorePtr, peer_cfg: Box<dyn WireguardBrokerCfg + Send>) -> Self {
Self { ptr, peer_cfg }
}
@@ -308,12 +289,20 @@ pub enum AppServerIoSource {
Socket(usize),
/// IO source refers to a PSK broker in [AppServer::brokers]
PskBroker(Public<BROKER_ID_BYTES>),
/// IO source refers to our signal handlers
SignalHandler,
/// IO source refers to some IO sources used in the API;
/// see [AppServer::api_manager]
#[cfg(feature = "experiment_api")]
MioManager(crate::api::mio::MioManagerIoSource),
}
pub enum AppServerTryRecvResult {
None,
Terminate,
NetworkMessage(usize, Endpoint),
}
/// Number of epoll(7) events Rosenpass can receive at a time
const EVENT_CAPACITY: usize = 20;
@@ -354,6 +343,8 @@ pub struct AppServer {
/// MIO associates IO sources with numeric tokens. This struct takes care of generating these
/// tokens
pub mio_token_dispenser: MioTokenDispenser,
/// Mio-based handler for signals
pub signal_handler: NullDebug<signal_hook_mio::Signals>,
/// Helpers handling communication with WireGuard; these take a generated key and forward it to
/// WireGuard
pub brokers: BrokerStore,
@@ -379,16 +370,6 @@ pub struct AppServer {
/// Used by integration tests to force [Self] into DoS condition
/// and to terminate the AppServer after the test is complete
pub test_helpers: Option<AppServerTest>,
/// Helper for integration tests running rosenpass as a subprocess
/// to terminate properly upon receiving an appropriate system signal.
///
/// This is primarily needed for coverage testing, since llvm-cov does not
/// write coverage reports to disk when a process is stopped by the default
/// signal handler.
///
/// See <https://github.com/rosenpass/rosenpass/issues/385>
#[cfg(feature = "internal_signal_handling_for_coverage_reports")]
pub term_signal: terminate::TerminateRequested,
#[cfg(feature = "experiment_api")]
/// The Rosenpass unix socket API handler; this is an experimental
/// feature that can be used to embed Rosenpass in external applications
@@ -478,6 +459,8 @@ impl AppPeerPtr {
/// Instructs [AppServer::event_loop_without_error_handling] on how to proceed.
#[derive(Debug)]
pub enum AppPollResult {
/// Received request to terminate the application
Terminate,
/// Erase the key for a given peer. Corresponds to [crate::protocol::PollResult::DeleteKey]
DeleteKey(AppPeerPtr),
/// Send an initiation to the given peer. Corresponds to [crate::protocol::PollResult::SendInitiation]
@@ -824,10 +807,27 @@ impl AppServer {
verbosity: Verbosity,
test_helpers: Option<AppServerTest>,
) -> anyhow::Result<Self> {
// setup mio
// Setup Mio itself
let mio_poll = mio::Poll::new()?;
let events = mio::Events::with_capacity(EVENT_CAPACITY);
// And helpers to map mio tokens to internal event types
let mut mio_token_dispenser = MioTokenDispenser::default();
let mut io_source_index = HashMap::new();
// Setup signal handling
let signal_handler = attempt!({
let mut handler =
signal_hook_mio::Signals::new(signal_hook::consts::TERM_SIGNALS.iter())?;
let mio_token = mio_token_dispenser.dispense();
mio_poll
.registry()
.register(&mut handler, mio_token, Interest::READABLE)?;
let prev = io_source_index.insert(mio_token, AppServerIoSource::SignalHandler);
assert!(prev.is_none());
Ok(NullDebug(handler))
})
.context("Failed to set up signal (user triggered program termination) handler")?;
// bind each SocketAddr to a socket
let maybe_sockets: Result<Vec<_>, _> =
@@ -901,7 +901,6 @@ impl AppServer {
}
// register all sockets to mio
let mut io_source_index = HashMap::new();
for (idx, socket) in sockets.iter_mut().enumerate() {
let mio_token = mio_token_dispenser.dispense();
mio_poll
@@ -917,8 +916,6 @@ impl AppServer {
};
Ok(Self {
#[cfg(feature = "internal_signal_handling_for_coverage_reports")]
term_signal: terminate::TerminateRequested::new()?,
crypto_site,
peers: Vec::new(),
verbosity,
@@ -929,6 +926,7 @@ impl AppServer {
io_source_index,
mio_poll,
mio_token_dispenser,
signal_handler,
brokers: BrokerStore::default(),
all_sockets_drained: false,
under_load: DoSOperation::Normal,
@@ -999,7 +997,7 @@ impl AppServer {
/// Register a new WireGuard PSK broker
pub fn register_broker(
&mut self,
broker: Box<dyn WireguardBrokerMio<Error = anyhow::Error, MioError = anyhow::Error>>,
broker: Box<dyn WireguardBrokerMio<Error = anyhow::Error, MioError = anyhow::Error> + Send>,
) -> Result<BrokerStorePtr> {
let ptr = Public::from_slice((self.brokers.store.len() as u64).as_bytes());
if self.brokers.store.insert(ptr, broker).is_some() {
@@ -1036,6 +1034,7 @@ impl AppServer {
/// # Examples
///
/// See [Self::new].
#[allow(clippy::too_many_arguments)]
pub fn add_peer(
&mut self,
psk: Option<SymKey>,
@@ -1044,11 +1043,16 @@ impl AppServer {
broker_peer: Option<BrokerPeer>,
hostname: Option<String>,
protocol_version: ProtocolVersion,
osk_domain_separator: OskDomainSeparator,
) -> anyhow::Result<AppPeerPtr> {
let PeerPtr(pn) = match &mut self.crypto_site {
ConstructionSite::Void => bail!("Crypto server construction site is void"),
ConstructionSite::Builder(builder) => builder.add_peer(psk, pk, protocol_version),
ConstructionSite::Product(srv) => srv.add_peer(psk, pk, protocol_version.into())?,
ConstructionSite::Builder(builder) => {
builder.add_peer(psk, pk, protocol_version, osk_domain_separator)
}
ConstructionSite::Product(srv) => {
srv.add_peer(psk, pk, protocol_version.into(), osk_domain_separator)?
}
};
assert!(pn == self.peers.len());
@@ -1065,7 +1069,7 @@ impl AppServer {
Ok(AppPeerPtr(pn))
}
/// Main IO handler; this generally does not terminate
/// Main IO handler; this generally does not terminate other than through unix signals
///
/// # Examples
///
@@ -1082,23 +1086,6 @@ impl AppServer {
Err(e) => e,
};
#[cfg(feature = "internal_signal_handling_for_coverage_reports")]
{
let terminated_by_signal = err
.downcast_ref::<std::io::Error>()
.filter(|e| e.kind() == std::io::ErrorKind::Interrupted)
.filter(|_| self.term_signal.value())
.is_some();
if terminated_by_signal {
log::warn!(
"\
Terminated by signal; this signal handler is correct during coverage testing \
but should be otherwise disabled"
);
return Ok(());
}
}
// This should not happen…
failure_cnt = if msgs_processed > 0 {
0
@@ -1151,6 +1138,7 @@ impl AppServer {
use AppPollResult::*;
use KeyOutputReason::*;
// TODO: We should read from this using a mio channel
if let Some(AppServerTest {
termination_handler: Some(terminate),
..
@@ -1174,6 +1162,8 @@ impl AppServer {
#[allow(clippy::redundant_closure_call)]
match (have_crypto, poll_result) {
(_, Terminate) => return Ok(()),
(CryptoSrv::Missing, SendInitiation(_)) => {}
(CryptoSrv::Avail, SendInitiation(peer)) => tx_maybe_with!(peer, || self
.crypto_server_mut()?
@@ -1321,6 +1311,7 @@ impl AppServer {
pub fn poll(&mut self, rx_buf: &mut [u8]) -> anyhow::Result<AppPollResult> {
use crate::protocol::PollResult as C;
use AppPollResult as A;
use AppServerTryRecvResult as R;
let res = loop {
// Call CryptoServer's poll (if available)
let crypto_poll = self
@@ -1337,12 +1328,14 @@ impl AppServer {
break A::SendRetransmission(AppPeerPtr(no))
}
Some(C::Sleep(timeout)) => timeout, // No event from crypto-server, do IO
None => crate::protocol::UNENDING, // Crypto server is uninitialized, do IO
None => crate::protocol::timing::UNENDING, // Crypto server is uninitialized, do IO
};
// Perform IO (look for a message)
if let Some((len, addr)) = self.try_recv(rx_buf, io_poll_timeout)? {
break A::ReceivedMessage(len, addr);
match self.try_recv(rx_buf, io_poll_timeout)? {
R::None => {}
R::Terminate => break A::Terminate,
R::NetworkMessage(len, addr) => break A::ReceivedMessage(len, addr),
}
};
@@ -1360,12 +1353,12 @@ impl AppServer {
&mut self,
buf: &mut [u8],
timeout: Timing,
) -> anyhow::Result<Option<(usize, Endpoint)>> {
) -> anyhow::Result<AppServerTryRecvResult> {
let timeout = Duration::from_secs_f64(timeout);
// If there is no time to wait on IO, well then, let's not waste any time!
if timeout.is_zero() {
return Ok(None);
return Ok(AppServerTryRecvResult::None);
}
// NOTE when using mio::Poll, there are some particularities (taken from
@@ -1475,12 +1468,19 @@ impl AppServer {
// blocking poll, we go through all available IO sources to see if we missed anything.
{
while let Some(ev) = self.short_poll_queue.pop_front() {
if let Some(v) = self.try_recv_from_mio_token(buf, ev.token())? {
return Ok(Some(v));
match self.try_recv_from_mio_token(buf, ev.token())? {
AppServerTryRecvResult::None => continue,
res => return Ok(res),
}
}
}
// Drain operating system signals
match self.try_recv_from_signal_handler()? {
AppServerTryRecvResult::None => {} // Nop
res => return Ok(res),
}
// drain all sockets
let mut would_block_count = 0;
for sock_no in 0..self.sockets.len() {
@@ -1488,11 +1488,11 @@ impl AppServer {
.try_recv_from_listen_socket(buf, sock_no)
.io_err_kind_hint()
{
Ok(None) => continue,
Ok(Some(v)) => {
Ok(AppServerTryRecvResult::None) => continue,
Ok(res) => {
// at least one socket was not drained...
self.all_sockets_drained = false;
return Ok(Some(v));
return Ok(res);
}
Err((_, ErrorKind::WouldBlock)) => {
would_block_count += 1;
@@ -1520,12 +1520,24 @@ impl AppServer {
self.performed_long_poll = true;
Ok(None)
Ok(AppServerTryRecvResult::None)
}
/// Internal helper for [Self::try_recv]
fn perform_mio_poll_and_register_events(&mut self, timeout: Duration) -> io::Result<()> {
self.mio_poll.poll(&mut self.events, Some(timeout))?;
loop {
use std::io::ErrorKind as IOE;
match self
.mio_poll
.poll(&mut self.events, Some(timeout))
.io_err_kind_hint()
{
Ok(()) => break,
Err((_, IOE::Interrupted)) => continue,
Err((err, _)) => return Err(err),
}
}
// Fill the short poll buffer with the acquired events
self.events
.iter()
@@ -1539,12 +1551,12 @@ impl AppServer {
&mut self,
buf: &mut [u8],
token: mio::Token,
) -> anyhow::Result<Option<(usize, Endpoint)>> {
) -> anyhow::Result<AppServerTryRecvResult> {
let io_source = match self.io_source_index.get(&token) {
Some(io_source) => *io_source,
None => {
log::warn!("No IO source associated with mio token ({token:?}). Polling using mio tokens directly is an experimental feature and the IO handler should recover once all available IO sources are polled. This is a developer error; please report it.");
return Ok(None);
return Ok(AppServerTryRecvResult::None);
}
};
@@ -1556,11 +1568,13 @@ impl AppServer {
&mut self,
buf: &mut [u8],
io_source: AppServerIoSource,
) -> anyhow::Result<Option<(usize, Endpoint)>> {
) -> anyhow::Result<AppServerTryRecvResult> {
match io_source {
AppServerIoSource::SignalHandler => self.try_recv_from_signal_handler()?.ok(),
AppServerIoSource::Socket(idx) => self
.try_recv_from_listen_socket(buf, idx)
.substitute_for_ioerr_wouldblock(None)?
.substitute_for_ioerr_wouldblock(AppServerTryRecvResult::None)?
.ok(),
AppServerIoSource::PskBroker(key) => self
@@ -1569,7 +1583,7 @@ impl AppServer {
.get_mut(&key)
.with_context(|| format!("No PSK broker under key {key:?}"))?
.process_poll()
.map(|_| None),
.map(|_| AppServerTryRecvResult::None),
#[cfg(feature = "experiment_api")]
AppServerIoSource::MioManager(mmio_src) => {
@@ -1577,17 +1591,28 @@ impl AppServer {
MioManagerFocus(self)
.poll_particular(mmio_src)
.map(|_| None)
.map(|_| AppServerTryRecvResult::None)
}
}
}
/// Internal helper for [Self::try_recv]
fn try_recv_from_signal_handler(&mut self) -> io::Result<AppServerTryRecvResult> {
#[allow(clippy::never_loop)]
for signal in self.signal_handler.pending() {
log::debug!("Received operating system signal no {signal}.");
log::info!("Received termination request; exiting.");
return Ok(AppServerTryRecvResult::Terminate);
}
Ok(AppServerTryRecvResult::None)
}
/// Internal helper for [Self::try_recv]
fn try_recv_from_listen_socket(
&mut self,
buf: &mut [u8],
idx: usize,
) -> io::Result<Option<(usize, Endpoint)>> {
) -> io::Result<AppServerTryRecvResult> {
use std::io::ErrorKind as K;
let (n, addr) = loop {
match self.sockets[idx].recv_from(buf).io_err_kind_hint() {
@@ -1599,8 +1624,7 @@ impl AppServer {
SocketPtr(idx)
.apply(|sp| SocketBoundEndpoint::new(sp, addr))
.apply(Endpoint::SocketBoundAddress)
.apply(|ep| (n, ep))
.some()
.apply(|ep| AppServerTryRecvResult::NetworkMessage(n, ep))
.ok()
}
@@ -1652,48 +1676,3 @@ impl crate::api::mio::MioManagerContext for MioManagerFocus<'_> {
self.0
}
}
/// These signal handlers are used exclusively during coverage testing
/// to ensure that the llvm-cov can produce reports during integration tests
/// with multiple processes where subprocesses are terminated via kill(2).
///
/// llvm-cov does not support producing coverage reports when the process exits
/// through a signal, so this is necessary.
///
/// The functionality of exiting gracefully upon reception of a terminating signal
/// is desired for the production variant of Rosenpass, but we should make sure
/// to use a higher quality implementation; in particular, we should use signalfd(2).
///
#[cfg(feature = "internal_signal_handling_for_coverage_reports")]
mod terminate {
use signal_hook::flag::register as sig_register;
use std::sync::{
atomic::{AtomicBool, Ordering},
Arc,
};
/// Automatically register a signal handler for common termination signals;
/// whether one of these signals was issued can be polled using [Self::value].
///
/// The signal handler is not removed when this struct goes out of scope.
#[derive(Debug)]
pub struct TerminateRequested {
value: Arc<AtomicBool>,
}
impl TerminateRequested {
/// Register signal handlers watching for common termination signals
pub fn new() -> anyhow::Result<Self> {
let value = Arc::new(AtomicBool::new(false));
for sig in signal_hook::consts::TERM_SIGNALS.iter().copied() {
sig_register(sig, Arc::clone(&value))?;
}
Ok(Self { value })
}
/// Check whether a termination signal has been set
pub fn value(&self) -> bool {
self.value.load(Ordering::Relaxed)
}
}
}

View File

@@ -17,7 +17,7 @@ use std::path::PathBuf;
use crate::app_server::AppServerTest;
use crate::app_server::{AppServer, BrokerPeer};
use crate::protocol::{SPk, SSk, SymKey};
use crate::protocol::basic_types::{SPk, SSk, SymKey};
use super::config;
@@ -490,7 +490,8 @@ impl CliArgs {
cfg_peer.key_out,
broker_peer,
cfg_peer.endpoint.clone(),
cfg_peer.protocol_version.into(),
cfg_peer.protocol_version,
cfg_peer.osk_domain_separator.try_into()?,
)?;
}
@@ -514,7 +515,7 @@ impl CliArgs {
fn create_broker(
broker_interface: Option<BrokerInterface>,
) -> Result<
Box<dyn WireguardBrokerMio<MioError = anyhow::Error, Error = anyhow::Error>>,
Box<dyn WireguardBrokerMio<MioError = anyhow::Error, Error = anyhow::Error> + Send>,
anyhow::Error,
> {
if let Some(interface) = broker_interface {
@@ -607,8 +608,8 @@ impl CliArgs {
/// generate secret and public keys, store in files according to the paths passed as arguments
pub fn generate_and_save_keypair(secret_key: PathBuf, public_key: PathBuf) -> anyhow::Result<()> {
let mut ssk = crate::protocol::SSk::random();
let mut spk = crate::protocol::SPk::random();
let mut ssk = crate::protocol::basic_types::SSk::random();
let mut spk = crate::protocol::basic_types::SPk::random();
StaticKem.keygen(ssk.secret_mut(), spk.deref_mut())?;
ssk.store_secret(secret_key)?;
spk.store(public_key)

View File

@@ -7,20 +7,19 @@
//! - TODO: support `~` in <https://github.com/rosenpass/rosenpass/issues/237>
//! - TODO: provide tooling to create config file from shell <https://github.com/rosenpass/rosenpass/issues/247>
use crate::protocol::{SPk, SSk};
use rosenpass_util::file::LoadValue;
use std::{
collections::HashSet,
fs,
io::Write,
net::{Ipv4Addr, Ipv6Addr, SocketAddr, SocketAddrV4, SocketAddrV6, ToSocketAddrs},
path::{Path, PathBuf},
};
use std::net::{Ipv4Addr, Ipv6Addr, SocketAddr, SocketAddrV4, SocketAddrV6, ToSocketAddrs};
use std::path::{Path, PathBuf};
use std::{collections::HashSet, fs, io::Write};
use anyhow::{bail, ensure};
use rosenpass_util::file::{fopen_w, Visibility};
use serde::{Deserialize, Serialize};
use rosenpass_util::file::{fopen_w, LoadValue, Visibility};
use crate::protocol::basic_types::{SPk, SSk};
use crate::protocol::osk_domain_separator::OskDomainSeparator;
use crate::app_server::AppServer;
#[cfg(feature = "experiment_api")]
@@ -36,6 +35,7 @@ fn empty_api_config() -> crate::api::config::ApiConfig {
///
/// i.e. configuration for the `rosenpass exchange` and `rosenpass exchange-config` commands
#[derive(Debug, Serialize, Deserialize, PartialEq, Eq)]
#[serde(deny_unknown_fields)]
pub struct Rosenpass {
// TODO: Raise error if secret key or public key alone is set during deserialization
// SEE: https://github.com/serde-rs/serde/issues/2793
@@ -77,6 +77,7 @@ pub struct Rosenpass {
/// Public key and secret key locations.
#[derive(Debug, Deserialize, Serialize, PartialEq, Eq, Clone)]
#[serde(deny_unknown_fields)]
pub struct Keypair {
/// path to the public key file
pub public_key: PathBuf,
@@ -104,6 +105,7 @@ impl Keypair {
///
/// - TODO: replace this type with [`log::LevelFilter`], also see <https://github.com/rosenpass/rosenpass/pull/246>
#[derive(Debug, PartialEq, Eq, Serialize, Deserialize, Copy, Clone)]
#[serde(deny_unknown_fields)]
pub enum Verbosity {
Quiet,
Verbose,
@@ -111,6 +113,7 @@ pub enum Verbosity {
/// The protocol version to be used by a peer.
#[derive(Debug, PartialEq, Eq, Serialize, Deserialize, Copy, Clone, Default)]
#[serde(deny_unknown_fields)]
pub enum ProtocolVersion {
#[default]
V02,
@@ -119,6 +122,7 @@ pub enum ProtocolVersion {
/// Configuration data for a single Rosenpass peer
#[derive(Debug, Default, PartialEq, Eq, Serialize, Deserialize)]
#[serde(deny_unknown_fields)]
pub struct RosenpassPeer {
/// path to the public key of the peer
pub public_key: PathBuf,
@@ -150,10 +154,78 @@ pub struct RosenpassPeer {
#[serde(default)]
/// The protocol version to use for the exchange
pub protocol_version: ProtocolVersion,
/// Allows using a custom domain separator
#[serde(flatten)]
pub osk_domain_separator: RosenpassPeerOskDomainSeparator,
}
/// Configuration for [crate::protocol::osk_domain_separator::OskDomainSeparator]
///
/// Refer to its documentation for more information and examples of how to use this.
#[derive(Debug, Default, Clone, PartialEq, Eq, Serialize, Deserialize)]
#[serde(deny_unknown_fields)]
pub struct RosenpassPeerOskDomainSeparator {
/// If Rosenpass is used for purposes other than securing WireGuard,
/// a custom domain separator must be specified.
///
/// Use `osk_organization` to indicate the organization who specifies the use case
/// and `osk_label` for a specific purpose within that organization.
///
/// ```toml
/// [[peer]]
/// public_key = "my_public_key"
/// ...
/// osk_organization = "myorg.com"
/// osk_label = ["My Custom Messenger app"]
/// ```
pub osk_organization: Option<String>,
/// If Rosenpass is used for purposes other than securing WireGuard,
/// a custom domain separator must be specified.
///
/// Use `osk_organization` to indicate the organization who specifies the use case
/// and `osk_label` for a specific purpose within that organization.
///
/// ```toml
/// [[peer]]
/// public_key = "my_public_key"
/// ...
/// osk_organization = "myorg.com"
/// osk_label = ["My Custom Messenger app"]
/// ```
pub osk_label: Option<Vec<String>>,
}
impl RosenpassPeerOskDomainSeparator {
pub fn org_and_label(&self) -> anyhow::Result<Option<(&String, &Vec<String>)>> {
match (&self.osk_organization, &self.osk_label) {
(None, None) => Ok(None),
(Some(org), Some(label)) => Ok(Some((org, label))),
(Some(_), None) => bail!("Specified osk_organization but not osk_label in config file. You need to specify both, or none."),
(None, Some(_)) => bail!("Specified osk_label but not osk_organization in config file. You need to specify both, or none."),
}
}
pub fn validate(&self) -> anyhow::Result<()> {
let _org_and_label: Option<(_, _)> = self.org_and_label()?;
Ok(())
}
}
impl TryFrom<RosenpassPeerOskDomainSeparator> for OskDomainSeparator {
type Error = anyhow::Error;
fn try_from(val: RosenpassPeerOskDomainSeparator) -> anyhow::Result<Self> {
match val.org_and_label()? {
None => Ok(OskDomainSeparator::default()),
Some((org, label)) => Ok(OskDomainSeparator::custom_utf8(org, label)),
}
}
}
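The "both fields or neither" rule enforced by `org_and_label` reduces to a match on a pair of `Option`s. The following is a sketch of that validation logic with plain std types (the function name and `Result<_, String>` error type are simplifications of the real `anyhow`-based code):

```rust
// Sketch of the "both or neither" validation used by
// RosenpassPeerOskDomainSeparator::org_and_label, reduced to std types.
fn org_and_label<'a>(
    org: &'a Option<String>,
    label: &'a Option<Vec<String>>,
) -> Result<Option<(&'a String, &'a Vec<String>)>, String> {
    match (org, label) {
        (None, None) => Ok(None),
        (Some(o), Some(l)) => Ok(Some((o, l))),
        (Some(_), None) => Err("osk_organization set without osk_label".into()),
        (None, Some(_)) => Err("osk_label set without osk_organization".into()),
    }
}

fn main() {
    // Neither field set: fall back to the default domain separator.
    assert!(org_and_label(&None, &None).unwrap().is_none());
    // Both fields set: a custom separator is available.
    let org = Some("myorg.com".to_string());
    let label = Some(vec!["My Custom Messenger app".to_string()]);
    assert!(org_and_label(&org, &label).unwrap().is_some());
    // Only one field set: a configuration error.
    assert!(org_and_label(&org, &None).is_err());
}
```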
/// Information for supplying exchanged keys directly to WireGuard
#[derive(Debug, Default, PartialEq, Eq, Serialize, Deserialize)]
#[serde(deny_unknown_fields)]
pub struct WireGuard {
/// Name of the WireGuard interface to supply with pre-shared keys generated by the Rosenpass
/// key exchange
@@ -292,7 +364,7 @@ impl Rosenpass {
// check the secret-key file is a valid key
ensure!(
SSk::load(&keypair.secret_key).is_ok(),
"could not load public-key file {:?}: invalid key",
"could not load secret-key file {:?}: invalid key",
keypair.secret_key
);
}
@@ -337,6 +409,10 @@ impl Rosenpass {
);
}
}
if let Err(e) = peer.osk_domain_separator.validate() {
bail!("Invalid OSK domain separation configuration for peer {i}: {e}");
}
}
Ok(())

View File

@@ -295,25 +295,21 @@ hash_domain_ns!(
/// We do recommend that third parties base their specific domain separators
/// on an internet domain and/or mix in much more specific information.
///
/// We only really use this to derive an output key for wireguard; see [osk].
///
/// See [_ckextract].
///
/// # Examples
///
/// See the [module](self) documentation on how to use the hash domains in general.
_ckextract, _user, "user");
_ckextract, cke_user, "user");
hash_domain_ns!(
/// Chaining key domain separator for any rosenpass specific purposes.
///
/// We only really use this to derive an output key for wireguard; see [osk].
///
/// See [_ckextract].
///
/// # Examples
///
/// See the [module](self) documentation on how to use the hash domains in general.
_user, _rp, "rosenpass.eu");
cke_user, cke_user_rosenpass, "rosenpass.eu");
hash_domain!(
/// Chaining key domain separator for deriving the key sent to WireGuard.
///
@@ -325,4 +321,4 @@ hash_domain!(
/// Check out its source code!
///
/// See the [module](self) documentation on how to use the hash domains in general.
_rp, osk, "wireguard psk");
cke_user_rosenpass, ext_wireguard_psk_osk, "wireguard psk");

View File

@@ -0,0 +1,38 @@
//! Key types and other fundamental types used in the Rosenpass protocol
use rosenpass_cipher_traits::primitives::{Aead, Kem};
use rosenpass_ciphers::{EphemeralKem, StaticKem, XAead, KEY_LEN};
use rosenpass_secret_memory::{Public, PublicBox, Secret};
use crate::msgs::{BISCUIT_ID_LEN, MAX_MESSAGE_LEN, SESSION_ID_LEN};
/// Static public key
///
/// Using [PublicBox] instead of [Public] because Classic McEliece keys are very large.
pub type SPk = PublicBox<{ StaticKem::PK_LEN }>;
/// Static secret key
pub type SSk = Secret<{ StaticKem::SK_LEN }>;
/// Ephemeral public key
pub type EPk = Public<{ EphemeralKem::PK_LEN }>;
/// Ephemeral secret key
pub type ESk = Secret<{ EphemeralKem::SK_LEN }>;
/// Symmetric key
pub type SymKey = Secret<KEY_LEN>;
/// Variant of [SymKey] for use cases where the value is public
pub type PublicSymKey = [u8; 32];
/// Peer ID (derived from the public key, see the hash derivations in the [whitepaper](https://rosenpass.eu/whitepaper.pdf))
pub type PeerId = Public<KEY_LEN>;
/// Session ID
pub type SessionId = Public<SESSION_ID_LEN>;
/// Biscuit ID
pub type BiscuitId = Public<BISCUIT_ID_LEN>;
/// Nonce for use with random-nonce AEAD
pub type XAEADNonce = Public<{ XAead::NONCE_LEN }>;
/// Buffer capable of holding any Rosenpass protocol message
pub type MsgBuf = Public<MAX_MESSAGE_LEN>;
/// Server-local peer number; this is just the index in [super::CryptoServer::peers]
pub type PeerNo = usize;

View File

@@ -1,12 +1,14 @@
use super::{CryptoServer, PeerPtr, SPk, SSk, SymKey};
use crate::config::ProtocolVersion;
use rosenpass_util::{
build::Build,
mem::{DiscardResultExt, SwapWithDefaultExt},
result::ensure_or,
};
use thiserror::Error;
use rosenpass_util::mem::{DiscardResultExt, SwapWithDefaultExt};
use rosenpass_util::{build::Build, result::ensure_or};
use crate::config::ProtocolVersion;
use super::basic_types::{SPk, SSk, SymKey};
use super::osk_domain_separator::OskDomainSeparator;
use super::{CryptoServer, PeerPtr};
#[derive(Debug, Clone)]
/// A pair of matching public/secret keys used to launch the crypto server.
///
@@ -47,7 +49,8 @@ impl Keypair {
/// # Example
///
/// ```rust
/// use rosenpass::protocol::{Keypair, SSk, SPk};
/// use rosenpass::protocol::basic_types::{SSk, SPk};
/// use rosenpass::protocol::Keypair;
///
/// // We have to define the security policy before using Secrets.
/// use rosenpass_secret_memory::secret_policy_use_only_malloc_secrets;
@@ -66,12 +69,13 @@ impl Keypair {
/// Creates a new "empty" key pair. All bytes are initialized to zero.
///
/// See [SSk:zero()][crate::protocol::SSk::zero] and [SPk:zero()][crate::protocol::SPk::zero], respectively.
/// See [SSk:zero()][SSk::zero] and [SPk:zero()][SPk::zero], respectively.
///
/// # Example
///
/// ```rust
/// use rosenpass::protocol::{Keypair, SSk, SPk};
/// use rosenpass::protocol::basic_types::{SSk, SPk};
/// use rosenpass::protocol::Keypair;
///
/// // We have to define the security policy before using Secrets.
/// use rosenpass_secret_memory::secret_policy_use_only_malloc_secrets;
@@ -90,7 +94,7 @@ impl Keypair {
/// Creates a new (securely-)random key pair. The mechanism is described in [rosenpass_secret_memory::Secret].
///
/// See [SSk:random()][crate::protocol::SSk::random] and [SPk:random()][crate::protocol::SPk::random], respectively.
/// See [SSk:random()][SSk::random] and [SPk:random()][SPk::random], respectively.
pub fn random() -> Self {
Self::new(SSk::random(), SPk::random())
}
@@ -127,7 +131,7 @@ pub struct MissingKeypair;
///
/// There are multiple ways of creating a crypto server:
///
/// 1. Provide the key pair at initialization time (using [CryptoServer::new][crate::protocol::CryptoServer::new])
/// 1. Provide the key pair at initialization time (using [CryptoServer::new][CryptoServer::new])
/// 2. Provide the key pair at a later time (using [BuildCryptoServer::empty])
///
/// With BuildCryptoServer, you can gradually configure parameters as they become available.
@@ -145,19 +149,23 @@ pub struct MissingKeypair;
///
/// ```rust
/// use rosenpass_util::build::Build;
/// use rosenpass::protocol::{BuildCryptoServer, Keypair, PeerParams, SPk, SymKey};
/// use rosenpass_secret_memory::secret_policy_use_only_malloc_secrets;
///
/// use rosenpass::config::ProtocolVersion;
///
/// use rosenpass::protocol::basic_types::{SPk, SymKey};
/// use rosenpass::protocol::{BuildCryptoServer, Keypair, PeerParams};
/// use rosenpass::protocol::osk_domain_separator::OskDomainSeparator;
///
/// // We have to define the security policy before using Secrets.
/// use rosenpass_secret_memory::secret_policy_use_only_malloc_secrets;
/// secret_policy_use_only_malloc_secrets();
///
/// let keypair = Keypair::random();
/// let peer1 = PeerParams { psk: Some(SymKey::random()), pk: SPk::random(), protocol_version: ProtocolVersion::V02 };
/// let peer2 = PeerParams { psk: None, pk: SPk::random(), protocol_version: ProtocolVersion::V02 };
/// let peer1 = PeerParams { psk: Some(SymKey::random()), pk: SPk::random(), protocol_version: ProtocolVersion::V02, osk_domain_separator: OskDomainSeparator::default() };
/// let peer2 = PeerParams { psk: None, pk: SPk::random(), protocol_version: ProtocolVersion::V02, osk_domain_separator: OskDomainSeparator::default() };
///
/// let mut builder = BuildCryptoServer::new(Some(keypair.clone()), vec![peer1]);
/// builder.add_peer(peer2.psk.clone(), peer2.pk, ProtocolVersion::V02);
/// builder.add_peer(peer2.psk.clone(), peer2.pk, ProtocolVersion::V02, OskDomainSeparator::default());
///
/// let server = builder.build().expect("build failed");
/// assert_eq!(server.peers.len(), 2);
@@ -187,16 +195,17 @@ impl Build<CryptoServer> for BuildCryptoServer {
let mut srv = CryptoServer::new(sk, pk);
for (
idx,
PeerParams {
for (idx, params) in self.peers.into_iter().enumerate() {
let PeerParams {
psk,
pk,
protocol_version,
},
) in self.peers.into_iter().enumerate()
{
let PeerPtr(idx2) = srv.add_peer(psk, pk, protocol_version.into())?;
osk_domain_separator,
} = params;
let PeerPtr(idx2) =
srv.add_peer(psk, pk, protocol_version.into(), osk_domain_separator)?;
assert!(idx == idx2, "Peer id changed during CryptoServer construction from {idx} to {idx2}. This is a developer error.")
}
@@ -205,13 +214,13 @@ impl Build<CryptoServer> for BuildCryptoServer {
}
#[derive(Debug)]
/// Cryptographic key(s) identifying the connected [peer][crate::protocol::Peer] ("client")
/// Cryptographic key(s) identifying the connected [peer][super::Peer] ("client")
/// for a given session that is being managed by the crypto server.
///
/// Each peer must be identified by a [public key (SPk)][crate::protocol::SPk].
/// Optionally, a [symmetric key (SymKey)][crate::protocol::SymKey]
/// Each peer must be identified by a [public key (SPk)][SPk].
/// Optionally, a [symmetric key (SymKey)][SymKey]
/// can be provided when setting up the connection.
/// For more information on the intended usage and security considerations, see [Peer::psk][crate::protocol::Peer::psk] and [Peer::spkt][crate::protocol::Peer::spkt].
/// For more information on the intended usage and security considerations, see [Peer::psk][super::Peer::psk] and [Peer::spkt][super::Peer::spkt].
pub struct PeerParams {
/// Pre-shared (symmetric) encryption keys that should be used with this peer.
pub psk: Option<SymKey>,
@@ -219,6 +228,7 @@ pub struct PeerParams {
pub pk: SPk,
/// The used protocol version.
pub protocol_version: ProtocolVersion,
pub osk_domain_separator: OskDomainSeparator,
}
impl BuildCryptoServer {
@@ -317,13 +327,16 @@ impl BuildCryptoServer {
///
/// ```rust
/// use rosenpass::config::ProtocolVersion;
///
/// use rosenpass_util::build::Build;
/// use rosenpass::protocol::basic_types::{SymKey, SPk};
/// use rosenpass::protocol::{BuildCryptoServer, Keypair};
/// use rosenpass::protocol::osk_domain_separator::OskDomainSeparator;
///
/// // We have to define the security policy before using Secrets.
/// use rosenpass_secret_memory::secret_policy_use_only_malloc_secrets;
/// secret_policy_use_only_malloc_secrets();
///
/// use rosenpass_util::build::Build;
/// use rosenpass::protocol::{BuildCryptoServer, Keypair, SymKey, SPk};
///
/// // Deferred initialization: Create builder first, add some peers later
/// let keypair_option = Some(Keypair::random());
/// let mut builder = BuildCryptoServer::new(keypair_option, Vec::new());
@@ -335,7 +348,7 @@ impl BuildCryptoServer {
/// // Now we've found a peer that should be added to the configuration
/// let pre_shared_key = SymKey::random();
/// let public_key = SPk::random();
/// builder.with_added_peer(Some(pre_shared_key.clone()), public_key.clone(), ProtocolVersion::V02);
/// builder.with_added_peer(Some(pre_shared_key.clone()), public_key.clone(), ProtocolVersion::V02, OskDomainSeparator::default());
///
/// // New server instances will then start with the peer being registered already
/// let server = builder.build().expect("build failed");
@@ -350,12 +363,14 @@ impl BuildCryptoServer {
psk: Option<SymKey>,
pk: SPk,
protocol_version: ProtocolVersion,
osk_domain_separator: OskDomainSeparator,
) -> &mut Self {
// TODO: Check here already whether peer was already added
self.peers.push(PeerParams {
psk,
pk,
protocol_version,
osk_domain_separator,
});
self
}
@@ -366,9 +381,10 @@ impl BuildCryptoServer {
psk: Option<SymKey>,
pk: SPk,
protocol_version: ProtocolVersion,
osk_domain_separator: OskDomainSeparator,
) -> PeerPtr {
let id = PeerPtr(self.peers.len());
self.with_added_peer(psk, pk, protocol_version);
self.with_added_peer(psk, pk, protocol_version, osk_domain_separator);
id
}
@@ -381,19 +397,23 @@ impl BuildCryptoServer {
/// Extracting the server configuration from a builder:
///
/// ```rust
/// // We have to define the security policy before using Secrets.
/// use rosenpass_util::build::Build;
/// use rosenpass_secret_memory::secret_policy_use_only_malloc_secrets;
///
/// use rosenpass::config::ProtocolVersion;
/// use rosenpass::hash_domains::protocol;
/// use rosenpass_secret_memory::secret_policy_use_only_malloc_secrets;
/// secret_policy_use_only_malloc_secrets();
///
/// use rosenpass_util::build::Build;
/// use rosenpass::protocol::{BuildCryptoServer, Keypair, SymKey, SPk};
/// use rosenpass::protocol::basic_types::{SymKey, SPk};
/// use rosenpass::protocol::{BuildCryptoServer, Keypair};
/// use rosenpass::protocol::osk_domain_separator::OskDomainSeparator;
///
/// // We have to define the security policy before using Secrets.
/// secret_policy_use_only_malloc_secrets();
///
/// let keypair = Keypair::random();
/// let peer_pk = SPk::random();
/// let mut builder = BuildCryptoServer::new(Some(keypair.clone()), vec![]);
/// builder.add_peer(None, peer_pk, ProtocolVersion::V02);
/// builder.add_peer(None, peer_pk, ProtocolVersion::V02, OskDomainSeparator::default());
///
/// // Extract configuration parameters from the decomissioned builder
/// let (keypair_option, peers) = builder.take_parts();

View File

@@ -0,0 +1,64 @@
//! Constants and configuration values used in the rosenpass core protocol
use crate::msgs::MAC_SIZE;
use super::timing::Timing;
/// Time after which the responder attempts to rekey the session
///
/// From the wireguard paper: rekey every two minutes,
/// discard the key if no rekey is achieved within three
pub const REKEY_AFTER_TIME_RESPONDER: Timing = 120.0;
/// Time after which the initiator attempts to rekey the session.
///
/// This happens ten seconds after [REKEY_AFTER_TIME_RESPONDER], so
/// parties would usually switch roles after every handshake.
///
/// From the wireguard paper: rekey every two minutes,
/// discard the key if no rekey is achieved within three
pub const REKEY_AFTER_TIME_INITIATOR: Timing = 130.0;
/// Time after which either party rejects the current key.
///
/// At this point a new key should have been negotiated.
///
/// Rejection happens 50-60 seconds after key renegotiation
/// to allow for a graceful handover.
/// From the wireguard paper: rekey every two minutes,
/// discard the key if no rekey is achieved within three
pub const REJECT_AFTER_TIME: Timing = 180.0;
/// The length of the `cookie_secret` in the [whitepaper](https://rosenpass.eu/whitepaper.pdf)
pub const COOKIE_SECRET_LEN: usize = MAC_SIZE;
/// The life time of the `cookie_secret` in the [whitepaper](https://rosenpass.eu/whitepaper.pdf)
pub const COOKIE_SECRET_EPOCH: Timing = 120.0;
/// Length of a cookie value (see info about the cookie mechanism in the [whitepaper](https://rosenpass.eu/whitepaper.pdf))
pub const COOKIE_VALUE_LEN: usize = MAC_SIZE;
/// Time after which to delete a cookie, as the initiator, for a certain peer (see info about the cookie mechanism in the [whitepaper](https://rosenpass.eu/whitepaper.pdf))
pub const PEER_COOKIE_VALUE_EPOCH: Timing = 120.0;
/// Seconds until the biscuit key is changed; we issue biscuits
/// using one biscuit key for one epoch and store the biscuit for
/// decryption for a second epoch
///
/// The biscuit mechanism is used to make sure the responder is stateless in our protocol.
pub const BISCUIT_EPOCH: Timing = 300.0;
/// The initiator opportunistically retransmits their messages; it applies an increasing delay
/// between each retransmission. This is the factor by which the delay grows after each
/// retransmission.
pub const RETRANSMIT_DELAY_GROWTH: Timing = 2.0;
/// The initiator opportunistically retransmits their messages; it applies an increasing delay
/// between each retransmission. This is the initial delay between retransmissions.
pub const RETRANSMIT_DELAY_BEGIN: Timing = 0.5;
/// The initiator opportunistically retransmits their messages; it applies an increasing delay
/// between each retransmission. This is the maximum delay between retransmissions.
pub const RETRANSMIT_DELAY_END: Timing = 10.0;
/// The initiator opportunistically retransmits their messages; it applies an increasing delay
/// between each retransmission. This is the jitter (randomness) applied to the retransmission
/// delay.
pub const RETRANSMIT_DELAY_JITTER: Timing = 0.5;
/// This is the maximum delay that can separate two events for us to consider the events to have
/// happened at the same time.
pub const EVENT_GRACE: Timing = 0.0025;
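The retransmission constants above imply a simple backoff schedule. This sketch (the function name `retransmit_delay` is illustrative, not from the crate) shows the delay growing by `RETRANSMIT_DELAY_GROWTH` per attempt from `RETRANSMIT_DELAY_BEGIN`, capped at `RETRANSMIT_DELAY_END`; jitter is omitted for clarity:

```rust
// Backoff schedule implied by the retransmission constants: start at
// RETRANSMIT_DELAY_BEGIN, multiply by RETRANSMIT_DELAY_GROWTH after each
// attempt, and cap at RETRANSMIT_DELAY_END. Jitter is omitted here.
const RETRANSMIT_DELAY_GROWTH: f64 = 2.0;
const RETRANSMIT_DELAY_BEGIN: f64 = 0.5;
const RETRANSMIT_DELAY_END: f64 = 10.0;

fn retransmit_delay(attempt: u32) -> f64 {
    let raw = RETRANSMIT_DELAY_BEGIN * RETRANSMIT_DELAY_GROWTH.powi(attempt as i32);
    raw.min(RETRANSMIT_DELAY_END)
}

fn main() {
    assert_eq!(retransmit_delay(0), 0.5); // first retransmission after 0.5 s
    assert_eq!(retransmit_delay(1), 1.0); // doubles each attempt
    assert_eq!(retransmit_delay(10), 10.0); // capped at RETRANSMIT_DELAY_END
}
```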

View File

@@ -0,0 +1,98 @@
//! Cryptographic key management for cookies and biscuits used in the protocol
//!
//! Cookies in general are conceptually similar to browser cookies;
//! i.e. mechanisms for storing information in the party connected to over the network.
//!
//! In our case specifically we refer to any mechanisms in the Rosenpass protocol
//! where a peer stores some information in the other party that is cryptographically
//! protected using a temporary, randomly generated key. This file contains the mechanisms
//! used to store the secret keys.
//!
//! We have two cookie-mechanisms in particular:
//!
//! - Rosenpass "biscuits" — the mechanism used to make sure the Rosenpass protocol is stateless
//! with respect to the responder
//! - WireGuard's cookie mechanism to enable proof of IP ownership; Rosenpass has experimental
//! support for this mechanism
//!
//! The CookieStore type is also used to store cookie secrets sent from the responder to the
//! initiator. This is a bad design and we should separate out this functionality.
//!
//! TODO: CookieStore should not be used for cookie secrets sent from responder to initiator.
//! TODO: Move cookie lifetime management functionality into here
use rosenpass_ciphers::KEY_LEN;
use rosenpass_secret_memory::Secret;
use super::{constants::COOKIE_SECRET_LEN, timing::Timing};
/// Container for storing cookie secrets like [BiscuitKey] or [CookieSecret].
///
/// This is really just a secret key and a timestamp of creation. Concrete
/// usages (such as for the biscuit key) impose a time limit on how long
/// a key may be used; the creation time is used to enforce that limit.
///
/// # Examples
///
/// ```
/// use rosenpass_util::time::Timebase;
/// use rosenpass::protocol::{timing::BCE, basic_types::SymKey, cookies::CookieStore};
///
/// rosenpass_secret_memory::secret_policy_try_use_memfd_secrets();
///
/// let fixed_secret = SymKey::random();
/// let timebase = Timebase::default();
///
/// let mut store = CookieStore::<32>::new();
/// assert_ne!(store.value.secret(), SymKey::zero().secret());
/// assert_eq!(store.created_at, BCE);
///
/// let time_before_call = timebase.now();
/// store.update(&timebase, fixed_secret.secret());
/// assert_eq!(store.value.secret(), fixed_secret.secret());
/// assert!(store.created_at < timebase.now());
/// assert!(store.created_at > time_before_call);
///
/// // Same as new()
/// store.erase();
/// assert_ne!(store.value.secret(), SymKey::zero().secret());
/// assert_eq!(store.created_at, BCE);
///
/// let secret_before_call = store.value.clone();
/// let time_before_call = timebase.now();
/// store.randomize(&timebase);
/// assert_ne!(store.value.secret(), secret_before_call.secret());
/// assert!(store.created_at < timebase.now());
/// assert!(store.created_at > time_before_call);
/// ```
#[derive(Debug)]
pub struct CookieStore<const N: usize> {
/// Time of creation of the secret key
pub created_at: Timing,
/// The secret key
pub value: Secret<N>,
}
/// Stores the cookie secret, which is used to create a rotating cookie value
///
/// Concrete value is in [super::CryptoServer::cookie_secrets].
///
/// The pointer type is [super::ServerCookieSecretPtr].
pub type CookieSecret = CookieStore<COOKIE_SECRET_LEN>;
/// Storage for our biscuit keys.
///
/// The biscuit keys encrypt what we call "biscuits".
/// These biscuits contain the responder state for a particular handshake. By moving
/// state into these biscuits, we make sure the responder is stateless.
///
/// A Biscuit is like a fancy cookie. To avoid state disruption attacks,
/// the responder doesn't store state. Instead the state is stored in a
/// Biscuit, that is encrypted using the [BiscuitKey] which is only known to
/// the Responder. Thus secrecy of the Responder state is not violated, still
/// the responder can avoid storing this state.
///
/// Concrete value is in [super::CryptoServer::biscuit_keys].
///
/// The pointer type is [super::BiscuitKeyPtr].
pub type BiscuitKey = CookieStore<KEY_LEN>;
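The lifetime logic around `CookieStore` amounts to pairing a secret with its creation time and rotating once an epoch has elapsed. A minimal std-only sketch, with `Instant` standing in for the crate's `Timebase`/`Timing` machinery and `Store`/`expired` as hypothetical names:

```rust
use std::time::{Duration, Instant};

// A secret plus its creation time; callers enforce an epoch after which the
// value must be rotated (cf. COOKIE_SECRET_EPOCH / BISCUIT_EPOCH).
struct Store {
    created_at: Instant,
    value: [u8; 32],
}

impl Store {
    fn new(value: [u8; 32]) -> Self {
        Self { created_at: Instant::now(), value }
    }
    // True once the key has outlived its epoch and must be replaced.
    fn expired(&self, epoch: Duration) -> bool {
        self.created_at.elapsed() >= epoch
    }
}

fn main() {
    let store = Store::new([0u8; 32]);
    // Freshly created, so still within a 120 s epoch.
    assert!(!store.expired(Duration::from_secs(120)));
    // A zero-length epoch is immediately expired.
    assert!(store.expired(Duration::ZERO));
    let _ = store.value;
}
```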

View File

@@ -0,0 +1,45 @@
//! Quick lookup of values in [super::CryptoServer]
use std::collections::HashMap;
use super::basic_types::{PeerId, PeerNo, SessionId};
use super::KnownResponseHash;
/// Maps various keys to peer (numbers).
///
/// See:
/// - [super::CryptoServer::index]
/// - [super::CryptoServer::peers]
/// - [PeerNo]
/// - [super::PeerPtr]
/// - [super::Peer]
pub type PeerIndex = HashMap<PeerIndexKey, PeerNo>;
/// We maintain various indices in [super::CryptoServer::index], mapping some key to a particular
/// [PeerNo], i.e. to an index in [super::CryptoServer::peers]. These are the possible index keys.
#[derive(Hash, PartialEq, Eq, PartialOrd, Ord, Debug)]
pub enum PeerIndexKey {
/// Lookup of a particular peer given the [PeerId], i.e. a value derived from the peers public
/// key as created by [super::CryptoServer::pidm] or [super::Peer::pidt].
///
/// The peer id is used by the initiator to tell the responder about its identity in
/// [crate::msgs::InitHello].
///
/// See also the pointer types [super::PeerPtr].
Peer(PeerId),
/// Lookup of a particular session id.
///
/// This is used to look up both established sessions (see
/// [super::CryptoServer::lookup_session]) and ongoing handshakes (see [super::CryptoServer::lookup_handshake]).
///
/// Lookup of a peer to get an established session or a handshake is sufficient, because a peer
/// contains a limited number of sessions and handshakes ([super::Peer::session] and [super::Peer::handshake] respectively).
///
/// See also the pointer types [super::IniHsPtr] and [super::SessionPtr].
Sid(SessionId),
/// Lookup of a cached response ([crate::msgs::Envelope]<[crate::msgs::EmptyData]>) to an [crate::msgs::InitConf] (i.e.
/// [crate::msgs::Envelope]<[crate::msgs::InitConf]>) message.
///
/// See [super::KnownInitConfResponsePtr] on how this value is maintained.
KnownInitConfResponse(KnownResponseHash),
}
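The multi-key index pattern used by `PeerIndex` — one `HashMap` whose enum key unifies several lookup kinds, all resolving to a peer number — can be sketched with std types alone. The byte arrays here are stand-ins for the real `PeerId`/`SessionId` types, and `build_index` is a hypothetical helper:

```rust
use std::collections::HashMap;

// One map, several kinds of key: the enum discriminant distinguishes
// lookup-by-peer-id from lookup-by-session-id, as PeerIndexKey does.
#[derive(Hash, PartialEq, Eq, Debug)]
enum IndexKey {
    Peer([u8; 32]),
    Sid([u8; 4]),
}

type PeerNo = usize;

fn build_index() -> HashMap<IndexKey, PeerNo> {
    let mut index = HashMap::new();
    // Register peer number 0 under both its peer id and one session id.
    index.insert(IndexKey::Peer([1; 32]), 0);
    index.insert(IndexKey::Sid([9, 9, 9, 9]), 0);
    index
}

fn main() {
    let index = build_index();
    // Both keys resolve to the same peer number.
    assert_eq!(index.get(&IndexKey::Peer([1; 32])), Some(&0));
    assert_eq!(index.get(&IndexKey::Sid([9, 9, 9, 9])), Some(&0));
    assert_eq!(index.get(&IndexKey::Sid([0, 0, 0, 0])), None);
}
```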

View File

@@ -24,12 +24,15 @@
//!
//! ```
//! use std::ops::DerefMut;
//!
//! use rosenpass_secret_memory::policy::*;
//! use rosenpass_cipher_traits::primitives::Kem;
//! use rosenpass_ciphers::StaticKem;
//! use rosenpass::{
//! protocol::{SSk, SPk, MsgBuf, PeerPtr, CryptoServer, SymKey},
//! };
//!
//! use rosenpass::protocol::basic_types::{SSk, SPk, MsgBuf, SymKey};
//! use rosenpass::protocol::{PeerPtr, CryptoServer};
//! use rosenpass::protocol::osk_domain_separator::OskDomainSeparator;
//!
//! # fn main() -> anyhow::Result<()> {
//! // Set security policy for storing secrets
//!
@@ -50,8 +53,8 @@
//! let mut b = CryptoServer::new(peer_b_sk, peer_b_pk.clone());
//!
//! // introduce peers to each other
//! a.add_peer(Some(psk.clone()), peer_b_pk, ProtocolVersion::V03)?;
//! b.add_peer(Some(psk), peer_a_pk, ProtocolVersion::V03)?;
//! a.add_peer(Some(psk.clone()), peer_b_pk, ProtocolVersion::V03, OskDomainSeparator::default())?;
//! b.add_peer(Some(psk), peer_a_pk, ProtocolVersion::V03, OskDomainSeparator::default())?;
//!
//! // declare buffers for message exchange
//! let (mut a_buf, mut b_buf) = (MsgBuf::zero(), MsgBuf::zero());
@@ -76,8 +79,20 @@
//! ```
mod build_crypto_server;
pub use build_crypto_server::*;
pub mod basic_types;
pub mod constants;
pub mod cookies;
pub mod index;
pub mod osk_domain_separator;
pub mod testutils;
pub mod timing;
pub mod zerocopy;
#[allow(clippy::module_inception)]
mod protocol;
pub use build_crypto_server::*;
pub use protocol::*;
#[cfg(test)]
mod test;

View File

@@ -0,0 +1,91 @@
//! Management of domain separators for the OSK (output key) in the rosenpass protocol
//!
//! The domain separator is there to ensure that keys are bound to the purpose they are used for.
//!
//! See the whitepaper section on protocol extensions for more details on how this is used.
//!
//! # See also
//!
//! - [crate::protocol::Peer]
//! - [crate::protocol::CryptoServer::add_peer]
//! - [crate::protocol::CryptoServer::osk]
//!
//! # Examples
//!
//! There are some basic examples of using custom domain separators in the examples of
//! [super::CryptoServer::poll]. Look for the test function `test_osk_label_mismatch()`
//! in particular.
use rosenpass_ciphers::subtle::keyed_hash::KeyedHash;
use rosenpass_util::result::OkExt;
use crate::hash_domains;
use super::basic_types::PublicSymKey;
/// The OSK (output shared key) domain separator to use for a specific peer
///
#[derive(Clone, PartialEq, Eq, Debug, PartialOrd, Ord, Default)]
pub enum OskDomainSeparator {
/// By default we use the domain separator that indicates that the resulting keys
/// are used by WireGuard to establish a connection
#[default]
ExtensionWireguardPsk,
/// Used for user-defined domain separators
Custom {
/// A globally unique string identifying the vendor or group who defines this domain
/// separator (we use our domain ourselves "rosenpass.eu")
namespace: Vec<u8>,
/// Any custom labels within that namespace. Could be descriptive prose.
labels: Vec<Vec<u8>>,
},
}
impl OskDomainSeparator {
/// Construct [OskDomainSeparator::ExtensionWireguardPsk]
pub fn for_wireguard_psk() -> Self {
Self::ExtensionWireguardPsk
}
/// Construct [OskDomainSeparator::Custom] from strings
pub fn custom_utf8<I, T>(namespace: &str, label: I) -> Self
where
I: IntoIterator<Item = T>,
T: AsRef<str>,
{
let namespace = namespace.as_bytes().to_owned();
let labels = label
.into_iter()
.map(|e| e.as_ref().as_bytes().to_owned())
.collect::<Vec<_>>();
Self::Custom { namespace, labels }
}
/// Variant of [Self::custom_utf8] that takes just one label (instead of a sequence)
pub fn custom_utf8_single_label(namespace: &str, label: &str) -> Self {
Self::custom_utf8(namespace, std::iter::once(label))
}
/// The domain separator is not just an encoded string; instead it uses
/// [rosenpass_ciphers::hash_domain::HashDomain], starting from [hash_domains::cke_user].
///
/// This means that the domain separator is really a sequence of multiple different domain
/// separators, each of which is allowed to be quite long. This is very useful as it allows
/// users to specify complex, prosaic domain separators. To ensure that this does not
/// create extra overhead when the protocol is executed, the sequence of strings is
/// compressed into a single, fixed-length hash of all the inputs. This hash could be created
/// at program startup and cached.
///
/// This function generates this fixed-length hash.
pub fn compress_with(&self, hash_choice: KeyedHash) -> anyhow::Result<PublicSymKey> {
use OskDomainSeparator as O;
match &self {
O::ExtensionWireguardPsk => hash_domains::ext_wireguard_psk_osk(hash_choice),
O::Custom { namespace, labels } => hash_domains::cke_user(hash_choice)?
.mix(namespace)?
.mix_many(labels)?
.into_value()
.ok(),
}
}
}
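The idea behind `compress_with` — folding a variable-length `(namespace, labels)` sequence into one fixed-length digest — can be illustrated with the standard library. Note the stand-in: the real code chains keyed hash domains via `HashDomain`; `DefaultHasher` here is only a placeholder to show the sequence collapsing into a single value, and `compress` is a hypothetical name:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Fold a namespace plus an arbitrary number of labels into one fixed-length
// value. DefaultHasher is a stand-in for the protocol's keyed hash domains.
fn compress(namespace: &[u8], labels: &[Vec<u8>]) -> u64 {
    let mut h = DefaultHasher::new();
    namespace.hash(&mut h);
    for label in labels {
        label.hash(&mut h);
    }
    h.finish()
}

fn main() {
    let a = compress(b"rosenpass.eu", &[b"wireguard psk".to_vec()]);
    let b = compress(b"myorg.com", &[b"My Custom Messenger app".to_vec()]);
    // Different domain separators yield different compressed values...
    assert_ne!(a, b);
    // ...and the same inputs always compress to the same value, so the
    // digest can be computed once at startup and cached.
    assert_eq!(a, compress(b"rosenpass.eu", &[b"wireguard psk".to_vec()]));
}
```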

File diff suppressed because it is too large

View File

@@ -0,0 +1,703 @@
use std::{borrow::BorrowMut, fmt::Display, net::SocketAddrV4, ops::DerefMut};
use anyhow::{Context, Result};
use serial_test::serial;
use zerocopy::{AsBytes, FromBytes, FromZeroes};
use rosenpass_cipher_traits::primitives::Kem;
use rosenpass_ciphers::StaticKem;
use rosenpass_secret_memory::Public;
use rosenpass_util::mem::DiscardResultExt;
use crate::msgs::{EmptyData, Envelope, InitConf, InitHello, MsgType, RespHello, MAX_MESSAGE_LEN};
use super::basic_types::{MsgBuf, SPk, SSk, SymKey};
use super::constants::REKEY_AFTER_TIME_RESPONDER;
use super::osk_domain_separator::OskDomainSeparator;
use super::zerocopy::{truncating_cast_into, truncating_cast_into_nomut};
use super::{
CryptoServer, HandleMsgResult, HostIdentification, KnownInitConfResponsePtr, PeerPtr,
PollResult, ProtocolVersion,
};
struct VecHostIdentifier(Vec<u8>);
impl HostIdentification for VecHostIdentifier {
fn encode(&self) -> &[u8] {
&self.0
}
}
impl Display for VecHostIdentifier {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{:?}", self.0)
}
}
impl From<Vec<u8>> for VecHostIdentifier {
fn from(v: Vec<u8>) -> Self {
VecHostIdentifier(v)
}
}
fn setup_logging() {
use std::io::Write;
let mut log_builder = env_logger::Builder::from_default_env(); // sets log level filter from environment (or defaults)
log_builder.filter_level(log::LevelFilter::Info);
log_builder.format_timestamp_nanos();
log_builder.format(|buf, record| {
let ts_format = buf.timestamp_nanos().to_string();
writeln!(buf, "{}: {}", &ts_format[14..], record.args())
});
let _ = log_builder.try_init();
}
#[test]
#[serial]
fn handles_incorrect_size_messages_v02() {
handles_incorrect_size_messages(ProtocolVersion::V02)
}
#[test]
#[serial]
fn handles_incorrect_size_messages_v03() {
handles_incorrect_size_messages(ProtocolVersion::V03)
}
/// Ensure that the protocol implementation can deal with truncated
/// messages and with overlong messages.
///
/// This test performs a complete handshake between two randomly generated
/// servers; instead of delivering each message correctly right away, messages
/// of every length from zero through about 1.2 times the correct message size are delivered first.
///
/// Producing an error is expected on each of these messages.
///
/// Finally the correct message is delivered and the same process
/// starts again in the other direction.
///
/// Through all this, the handshake should still successfully terminate;
/// i.e. an exchanged key must be produced in both servers.
fn handles_incorrect_size_messages(protocol_version: ProtocolVersion) {
setup_logging();
rosenpass_secret_memory::secret_policy_try_use_memfd_secrets();
stacker::grow(8 * 1024 * 1024, || {
const OVERSIZED_MESSAGE: usize = ((MAX_MESSAGE_LEN as f32) * 1.2) as usize;
type MsgBufPlus = Public<OVERSIZED_MESSAGE>;
const PEER0: PeerPtr = PeerPtr(0);
let (mut me, mut they) = make_server_pair(protocol_version).unwrap();
let (mut msgbuf, mut resbuf) = (MsgBufPlus::zero(), MsgBufPlus::zero());
// Process the entire handshake
let mut msglen = Some(me.initiate_handshake(PEER0, &mut *resbuf).unwrap());
while let Some(l) = msglen {
std::mem::swap(&mut me, &mut they);
std::mem::swap(&mut msgbuf, &mut resbuf);
msglen = test_incorrect_sizes_for_msg(&mut me, &*msgbuf, l, &mut *resbuf);
}
assert_eq!(
me.osk(PEER0).unwrap().secret(),
they.osk(PEER0).unwrap().secret()
);
});
}
/// Used in handles_incorrect_size_messages() to first deliver many truncated
/// and overlong messages; finally, the correct message is delivered and the response
/// returned.
fn test_incorrect_sizes_for_msg(
srv: &mut CryptoServer,
msgbuf: &[u8],
msglen: usize,
resbuf: &mut [u8],
) -> Option<usize> {
resbuf.fill(0);
for l in 0..(((msglen as f32) * 1.2) as usize) {
if l == msglen {
continue;
}
let res = srv.handle_msg(&msgbuf[..l], resbuf);
assert!(res.is_err()); // handle_msg should raise an error
assert!(!resbuf.iter().any(|x| *x != 0)); // resbuf should not have been changed
}
// Apply the proper handle_msg operation
srv.handle_msg(&msgbuf[..msglen], resbuf).unwrap().resp
}
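The length-sweep pattern used by the two helpers above can be shown in isolation. In this std-only sketch, a toy parser stands in for `CryptoServer::handle_msg`; `MSG_LEN` and the parser itself are illustrative assumptions, not the real protocol sizes.

```rust
// Assumed fixed message size for the sketch (the real sizes come from
// the rosenpass message structs).
const MSG_LEN: usize = 16;

// Stand-in for handle_msg: accept only the exact message length.
fn handle_msg(msg: &[u8]) -> Result<(), &'static str> {
    if msg.len() != MSG_LEN {
        return Err("truncated or overlong message");
    }
    Ok(())
}

fn main() {
    // Sweep lengths from zero to about 1.2 times the correct size
    let buf = vec![0u8; MSG_LEN * 12 / 10];
    for l in 0..buf.len() {
        if l == MSG_LEN {
            continue; // the correct length is delivered last
        }
        assert!(handle_msg(&buf[..l]).is_err());
    }
    // Finally, the correctly sized message is accepted
    assert!(handle_msg(&buf[..MSG_LEN]).is_ok());
}
```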
fn keygen() -> Result<(SSk, SPk)> {
// TODO: Copied from the benchmark; deduplicate
let (mut sk, mut pk) = (SSk::zero(), SPk::zero());
StaticKem.keygen(sk.secret_mut(), pk.deref_mut())?;
Ok((sk, pk))
}
fn make_server_pair(protocol_version: ProtocolVersion) -> Result<(CryptoServer, CryptoServer)> {
// TODO: Copied from the benchmark; deduplicate
let psk = SymKey::random();
let ((ska, pka), (skb, pkb)) = (keygen()?, keygen()?);
let (mut a, mut b) = (
CryptoServer::new(ska, pka.clone()),
CryptoServer::new(skb, pkb.clone()),
);
a.add_peer(
Some(psk.clone()),
pkb,
protocol_version.clone(),
OskDomainSeparator::default(),
)?;
b.add_peer(
Some(psk),
pka,
protocol_version,
OskDomainSeparator::default(),
)?;
Ok((a, b))
}
#[test]
#[serial]
fn test_regular_exchange_v02() {
test_regular_exchange(ProtocolVersion::V02)
}
#[test]
#[serial]
fn test_regular_exchange_v03() {
test_regular_exchange(ProtocolVersion::V03)
}
fn test_regular_exchange(protocol_version: ProtocolVersion) {
setup_logging();
rosenpass_secret_memory::secret_policy_try_use_memfd_secrets();
stacker::grow(8 * 1024 * 1024, || {
type MsgBufPlus = Public<MAX_MESSAGE_LEN>;
let (mut a, mut b) = make_server_pair(protocol_version).unwrap();
let mut a_to_b_buf = MsgBufPlus::zero();
let mut b_to_a_buf = MsgBufPlus::zero();
let ip_a: SocketAddrV4 = "127.0.0.1:8080".parse().unwrap();
let mut ip_addr_port_a = ip_a.ip().octets().to_vec();
ip_addr_port_a.extend_from_slice(&ip_a.port().to_be_bytes());
let _ip_b: SocketAddrV4 = "127.0.0.1:8081".parse().unwrap();
let init_hello_len = a.initiate_handshake(PeerPtr(0), &mut *a_to_b_buf).unwrap();
let init_msg_type: MsgType = a_to_b_buf.value[0].try_into().unwrap();
assert_eq!(init_msg_type, MsgType::InitHello);
//B handles InitHello, sends RespHello
let HandleMsgResult { resp, .. } = b
.handle_msg(&a_to_b_buf.as_slice()[..init_hello_len], &mut *b_to_a_buf)
.unwrap();
let resp_hello_len = resp.unwrap();
let resp_msg_type: MsgType = b_to_a_buf.value[0].try_into().unwrap();
assert_eq!(resp_msg_type, MsgType::RespHello);
let HandleMsgResult {
resp,
exchanged_with,
} = a
.handle_msg(&b_to_a_buf[..resp_hello_len], &mut *a_to_b_buf)
.unwrap();
let init_conf_len = resp.unwrap();
let init_conf_msg_type: MsgType = a_to_b_buf.value[0].try_into().unwrap();
assert_eq!(exchanged_with, Some(PeerPtr(0)));
assert_eq!(init_conf_msg_type, MsgType::InitConf);
//B handles InitConf, sends EmptyData
let HandleMsgResult {
resp: _,
exchanged_with,
} = b
.handle_msg(&a_to_b_buf.as_slice()[..init_conf_len], &mut *b_to_a_buf)
.unwrap();
let empty_data_msg_type: MsgType = b_to_a_buf.value[0].try_into().unwrap();
assert_eq!(exchanged_with, Some(PeerPtr(0)));
assert_eq!(empty_data_msg_type, MsgType::EmptyData);
});
}
#[test]
#[serial]
fn test_regular_init_conf_retransmit_v02() {
test_regular_init_conf_retransmit(ProtocolVersion::V02)
}
#[test]
#[serial]
fn test_regular_init_conf_retransmit_v03() {
test_regular_init_conf_retransmit(ProtocolVersion::V03)
}
fn test_regular_init_conf_retransmit(protocol_version: ProtocolVersion) {
setup_logging();
rosenpass_secret_memory::secret_policy_try_use_memfd_secrets();
stacker::grow(8 * 1024 * 1024, || {
type MsgBufPlus = Public<MAX_MESSAGE_LEN>;
let (mut a, mut b) = make_server_pair(protocol_version).unwrap();
let mut a_to_b_buf = MsgBufPlus::zero();
let mut b_to_a_buf = MsgBufPlus::zero();
let ip_a: SocketAddrV4 = "127.0.0.1:8080".parse().unwrap();
let mut ip_addr_port_a = ip_a.ip().octets().to_vec();
ip_addr_port_a.extend_from_slice(&ip_a.port().to_be_bytes());
let _ip_b: SocketAddrV4 = "127.0.0.1:8081".parse().unwrap();
let init_hello_len = a.initiate_handshake(PeerPtr(0), &mut *a_to_b_buf).unwrap();
let init_msg_type: MsgType = a_to_b_buf.value[0].try_into().unwrap();
assert_eq!(init_msg_type, MsgType::InitHello);
//B handles InitHello, sends RespHello
let HandleMsgResult { resp, .. } = b
.handle_msg(&a_to_b_buf.as_slice()[..init_hello_len], &mut *b_to_a_buf)
.unwrap();
let resp_hello_len = resp.unwrap();
let resp_msg_type: MsgType = b_to_a_buf.value[0].try_into().unwrap();
assert_eq!(resp_msg_type, MsgType::RespHello);
//A handles RespHello, sends InitConf, exchanges keys
let HandleMsgResult {
resp,
exchanged_with,
} = a
.handle_msg(&b_to_a_buf[..resp_hello_len], &mut *a_to_b_buf)
.unwrap();
let init_conf_len = resp.unwrap();
let init_conf_msg_type: MsgType = a_to_b_buf.value[0].try_into().unwrap();
assert_eq!(exchanged_with, Some(PeerPtr(0)));
assert_eq!(init_conf_msg_type, MsgType::InitConf);
//B handles InitConf, sends EmptyData
let HandleMsgResult {
resp: _,
exchanged_with,
} = b
.handle_msg(&a_to_b_buf.as_slice()[..init_conf_len], &mut *b_to_a_buf)
.unwrap();
let empty_data_msg_type: MsgType = b_to_a_buf.value[0].try_into().unwrap();
assert_eq!(exchanged_with, Some(PeerPtr(0)));
assert_eq!(empty_data_msg_type, MsgType::EmptyData);
//B handles InitConf again, sends EmptyData
let HandleMsgResult {
resp: _,
exchanged_with,
} = b
.handle_msg(&a_to_b_buf.as_slice()[..init_conf_len], &mut *b_to_a_buf)
.unwrap();
let empty_data_msg_type: MsgType = b_to_a_buf.value[0].try_into().unwrap();
assert!(exchanged_with.is_none());
assert_eq!(empty_data_msg_type, MsgType::EmptyData);
});
}
#[test]
#[serial]
#[cfg(feature = "experiment_cookie_dos_mitigation")]
fn cookie_reply_mechanism_responder_under_load_v02() {
cookie_reply_mechanism_initiator_bails_on_message_under_load(ProtocolVersion::V02)
}
#[test]
#[serial]
#[cfg(feature = "experiment_cookie_dos_mitigation")]
fn cookie_reply_mechanism_responder_under_load_v03() {
cookie_reply_mechanism_initiator_bails_on_message_under_load(ProtocolVersion::V03)
}
#[cfg(feature = "experiment_cookie_dos_mitigation")]
fn cookie_reply_mechanism_responder_under_load(protocol_version: ProtocolVersion) {
use std::{thread::sleep, time::Duration};
use super::{Lifecycle, MortalExt};
setup_logging();
rosenpass_secret_memory::secret_policy_try_use_memfd_secrets();
stacker::grow(8 * 1024 * 1024, || {
type MsgBufPlus = Public<MAX_MESSAGE_LEN>;
let (mut a, mut b) = make_server_pair(protocol_version.clone()).unwrap();
let mut a_to_b_buf = MsgBufPlus::zero();
let mut b_to_a_buf = MsgBufPlus::zero();
let ip_a: SocketAddrV4 = "127.0.0.1:8080".parse().unwrap();
let mut ip_addr_port_a = ip_a.ip().octets().to_vec();
ip_addr_port_a.extend_from_slice(&ip_a.port().to_be_bytes());
let _ip_b: SocketAddrV4 = "127.0.0.1:8081".parse().unwrap();
let init_hello_len = a.initiate_handshake(PeerPtr(0), &mut *a_to_b_buf).unwrap();
let socket_addr_a = std::net::SocketAddr::V4(ip_a);
let mut ip_addr_port_a = match socket_addr_a.ip() {
std::net::IpAddr::V4(ipv4) => ipv4.octets().to_vec(),
std::net::IpAddr::V6(ipv6) => ipv6.octets().to_vec(),
};
ip_addr_port_a.extend_from_slice(&socket_addr_a.port().to_be_bytes());
let ip_addr_port_a: VecHostIdentifier = ip_addr_port_a.into();
//B handles handshake under load, should send cookie reply message with invalid cookie
let HandleMsgResult { resp, .. } = b
.handle_msg_under_load(
&a_to_b_buf.as_slice()[..init_hello_len],
&mut *b_to_a_buf,
&ip_addr_port_a,
)
.unwrap();
let cookie_reply_len = resp.unwrap();
//A handles cookie reply message
a.handle_msg(&b_to_a_buf[..cookie_reply_len], &mut *a_to_b_buf)
.unwrap();
assert_eq!(PeerPtr(0).cv().lifecycle(&a), Lifecycle::Young);
let expected_cookie_value =
crate::hash_domains::cookie_value(protocol_version.keyed_hash())
.unwrap()
.mix(
b.active_or_retired_cookie_secrets()[0]
.unwrap()
.get(&b)
.value
.secret(),
)
.unwrap()
.mix(ip_addr_port_a.encode())
.unwrap()
.into_value()[..16]
.to_vec();
assert_eq!(
PeerPtr(0).cv().get(&a).map(|x| &x.value.secret()[..]),
Some(&expected_cookie_value[..])
);
let retx_init_hello_len = loop {
match a.poll().unwrap() {
PollResult::SendRetransmission(peer) => {
break a.retransmit_handshake(peer, &mut *a_to_b_buf).unwrap();
}
PollResult::Sleep(time) => {
sleep(Duration::from_secs_f64(time));
}
_ => {}
}
};
let retx_msg_type: MsgType = a_to_b_buf.value[0].try_into().unwrap();
assert_eq!(retx_msg_type, MsgType::InitHello);
//B handles retransmitted message
let HandleMsgResult { resp, .. } = b
.handle_msg_under_load(
&a_to_b_buf.as_slice()[..retx_init_hello_len],
&mut *b_to_a_buf,
&ip_addr_port_a,
)
.unwrap();
let _resp_hello_len = resp.unwrap();
let resp_msg_type: MsgType = b_to_a_buf.value[0].try_into().unwrap();
assert_eq!(resp_msg_type, MsgType::RespHello);
});
}
#[test]
#[serial]
#[cfg(feature = "experiment_cookie_dos_mitigation")]
fn cookie_reply_mechanism_initiator_bails_on_message_under_load_v02() {
cookie_reply_mechanism_initiator_bails_on_message_under_load(ProtocolVersion::V02)
}
#[test]
#[serial]
#[cfg(feature = "experiment_cookie_dos_mitigation")]
fn cookie_reply_mechanism_initiator_bails_on_message_under_load_v03() {
cookie_reply_mechanism_initiator_bails_on_message_under_load(ProtocolVersion::V03)
}
#[cfg(feature = "experiment_cookie_dos_mitigation")]
fn cookie_reply_mechanism_initiator_bails_on_message_under_load(protocol_version: ProtocolVersion) {
setup_logging();
rosenpass_secret_memory::secret_policy_try_use_memfd_secrets();
stacker::grow(8 * 1024 * 1024, || {
type MsgBufPlus = Public<MAX_MESSAGE_LEN>;
let (mut a, mut b) = make_server_pair(protocol_version).unwrap();
let mut a_to_b_buf = MsgBufPlus::zero();
let mut b_to_a_buf = MsgBufPlus::zero();
let ip_a: SocketAddrV4 = "127.0.0.1:8080".parse().unwrap();
let mut ip_addr_port_a = ip_a.ip().octets().to_vec();
ip_addr_port_a.extend_from_slice(&ip_a.port().to_be_bytes());
let ip_b: SocketAddrV4 = "127.0.0.1:8081".parse().unwrap();
//A initiates handshake
let init_hello_len = a.initiate_handshake(PeerPtr(0), &mut *a_to_b_buf).unwrap();
//B handles InitHello message, should respond with RespHello
let HandleMsgResult { resp, .. } = b
.handle_msg(&a_to_b_buf.as_slice()[..init_hello_len], &mut *b_to_a_buf)
.unwrap();
let resp_hello_len = resp.unwrap();
let resp_msg_type: MsgType = b_to_a_buf.value[0].try_into().unwrap();
assert_eq!(resp_msg_type, MsgType::RespHello);
let socket_addr_b = std::net::SocketAddr::V4(ip_b);
let mut ip_addr_port_b = [0u8; 18];
let mut ip_addr_port_b_len = 0;
match socket_addr_b.ip() {
std::net::IpAddr::V4(ipv4) => {
ip_addr_port_b[0..4].copy_from_slice(&ipv4.octets());
ip_addr_port_b_len += 4;
}
std::net::IpAddr::V6(ipv6) => {
ip_addr_port_b[0..16].copy_from_slice(&ipv6.octets());
ip_addr_port_b_len += 16;
}
};
ip_addr_port_b[ip_addr_port_b_len..ip_addr_port_b_len + 2]
.copy_from_slice(&socket_addr_b.port().to_be_bytes());
ip_addr_port_b_len += 2;
let ip_addr_port_b: VecHostIdentifier =
ip_addr_port_b[..ip_addr_port_b_len].to_vec().into();
//A handles RespHello message under load, should not send cookie reply
assert!(a
.handle_msg_under_load(
&b_to_a_buf[..resp_hello_len],
&mut *a_to_b_buf,
&ip_addr_port_b
)
.is_err());
});
}
#[test]
fn init_conf_retransmission_v02() -> Result<()> {
init_conf_retransmission(ProtocolVersion::V02)
}
#[test]
fn init_conf_retransmission_v03() -> Result<()> {
init_conf_retransmission(ProtocolVersion::V03)
}
fn init_conf_retransmission(protocol_version: ProtocolVersion) -> anyhow::Result<()> {
rosenpass_secret_memory::secret_policy_try_use_memfd_secrets();
fn keypair() -> Result<(SSk, SPk)> {
let (mut sk, mut pk) = (SSk::zero(), SPk::zero());
StaticKem.keygen(sk.secret_mut(), pk.deref_mut())?;
Ok((sk, pk))
}
fn proc_initiation(srv: &mut CryptoServer, peer: PeerPtr) -> Result<Envelope<InitHello>> {
let mut buf = MsgBuf::zero();
srv.initiate_handshake(peer, buf.as_mut_slice())?
.discard_result();
let msg = truncating_cast_into::<Envelope<InitHello>>(buf.borrow_mut())?;
Ok(msg.read())
}
fn proc_msg<Rx: AsBytes + FromBytes, Tx: AsBytes + FromBytes>(
srv: &mut CryptoServer,
rx: &Envelope<Rx>,
) -> anyhow::Result<Envelope<Tx>> {
let mut buf = MsgBuf::zero();
srv.handle_msg(rx.as_bytes(), buf.as_mut_slice())?
.resp
.context("Failed to produce the response message")?
.discard_result();
let msg = truncating_cast_into::<Envelope<Tx>>(buf.borrow_mut())?;
Ok(msg.read())
}
fn proc_init_hello(
srv: &mut CryptoServer,
ih: &Envelope<InitHello>,
) -> anyhow::Result<Envelope<RespHello>> {
proc_msg::<InitHello, RespHello>(srv, ih)
}
fn proc_resp_hello(
srv: &mut CryptoServer,
rh: &Envelope<RespHello>,
) -> anyhow::Result<Envelope<InitConf>> {
proc_msg::<RespHello, InitConf>(srv, rh)
}
fn proc_init_conf(
srv: &mut CryptoServer,
rh: &Envelope<InitConf>,
) -> anyhow::Result<Envelope<EmptyData>> {
proc_msg::<InitConf, EmptyData>(srv, rh)
}
fn poll(srv: &mut CryptoServer) -> anyhow::Result<()> {
// Discard all events; just apply the side effects
while !matches!(srv.poll()?, PollResult::Sleep(_)) {}
Ok(())
}
// TODO: Implement Clone on our message types
fn clone_msg<Msg: AsBytes + FromBytes>(msg: &Msg) -> anyhow::Result<Msg> {
Ok(truncating_cast_into_nomut::<Msg>(msg.as_bytes())?.read())
}
fn break_payload<Msg: AsBytes + FromBytes>(
srv: &mut CryptoServer,
peer: PeerPtr,
msg: &Envelope<Msg>,
) -> anyhow::Result<Envelope<Msg>> {
let mut msg = clone_msg(msg)?;
msg.as_bytes_mut()[memoffset::offset_of!(Envelope<Msg>, payload)] ^= 0x01;
msg.seal(peer, srv)?; // Recalculate seal; we do not want to focus on "seal broken" errs
Ok(msg)
}
fn check_faulty_proc_init_conf(srv: &mut CryptoServer, ic_broken: &Envelope<InitConf>) {
let mut buf = MsgBuf::zero();
let res = srv.handle_msg(ic_broken.as_bytes(), buf.as_mut_slice());
assert!(res.is_err());
}
// We use this as a closure in order to use the protocol_version variable in it.
let check_retransmission = |srv: &mut CryptoServer,
ic: &Envelope<InitConf>,
ic_broken: &Envelope<InitConf>,
rc: &Envelope<EmptyData>|
-> Result<()> {
// Processing the same InitConf package again leads to retransmission (i.e. exactly the
// same output)
let rc_dup = proc_init_conf(srv, ic)?;
assert_eq!(rc.as_bytes(), rc_dup.as_bytes());
// Though if we directly call handle_init_conf() we get an error since
// retransmission is not being handled by the cryptographic code
let mut discard_resp_conf = EmptyData::new_zeroed();
let res = srv.handle_init_conf(
&ic.payload,
&mut discard_resp_conf,
protocol_version.clone().keyed_hash(),
);
assert!(res.is_err());
// Obviously, a broken InitConf message should still be rejected
check_faulty_proc_init_conf(srv, ic_broken);
Ok(())
};
let (ska, pka) = keypair()?;
let (skb, pkb) = keypair()?;
// initialize server and a pre-shared key
let mut a = CryptoServer::new(ska, pka.clone());
let mut b = CryptoServer::new(skb, pkb.clone());
// introduce peers to each other
let b_peer = a.add_peer(
None,
pkb,
protocol_version.clone(),
OskDomainSeparator::default(),
)?;
let a_peer = b.add_peer(
None,
pka,
protocol_version.clone(),
OskDomainSeparator::default(),
)?;
// Execute protocol up till the responder confirmation (EmptyData)
let ih1 = proc_initiation(&mut a, b_peer)?;
let rh1 = proc_init_hello(&mut b, &ih1)?;
let ic1 = proc_resp_hello(&mut a, &rh1)?;
let rc1 = proc_init_conf(&mut b, &ic1)?;
// Modified version of ic1 and rc1, for tests that require it
let ic1_broken = break_payload(&mut a, b_peer, &ic1)?;
assert_ne!(ic1.as_bytes(), ic1_broken.as_bytes());
// Modified version of rc1, for tests that require it
let rc1_broken = break_payload(&mut b, a_peer, &rc1)?;
assert_ne!(rc1.as_bytes(), rc1_broken.as_bytes());
// Retransmission works as designed
check_retransmission(&mut b, &ic1, &ic1_broken, &rc1)?;
// Even with a couple of poll operations in between (polling clears the cache
// after a two-minute timeout; we should never hit this timeout in this
// test)
for _ in 0..4 {
poll(&mut b)?;
check_retransmission(&mut b, &ic1, &ic1_broken, &rc1)?;
}
// We can even validate that the data is coming out of the cache by changing the cache
// to use our broken messages. It does not matter that these messages are cryptographically
// broken since we insert them manually into the cache
// a_peer.known_init_conf_response()
KnownInitConfResponsePtr::insert_for_request_msg(
&mut b,
a_peer,
&ic1_broken,
rc1_broken.clone(),
);
check_retransmission(&mut b, &ic1_broken, &ic1, &rc1_broken)?;
// Let's reset to the correct message though
KnownInitConfResponsePtr::insert_for_request_msg(&mut b, a_peer, &ic1, rc1.clone());
// Again, nothing changes after calling poll
poll(&mut b)?;
check_retransmission(&mut b, &ic1, &ic1_broken, &rc1)?;
// Except if we jump forward in time, past the point where the responder
// starts to initiate rekeying; in this case, the automatic timeout is triggered and the cache is cleared
super::testutils::time_travel_forward(&mut b, REKEY_AFTER_TIME_RESPONDER);
// As long as we do not call poll, everything is fine
check_retransmission(&mut b, &ic1, &ic1_broken, &rc1)?;
// But after we do, the response is gone and cannot be recreated
// since the biscuit is stale
poll(&mut b)?;
check_faulty_proc_init_conf(&mut b, &ic1); // ic1 is now effectively broken
assert!(b.peers[0].known_init_conf_response.is_none()); // The cache is gone
Ok(())
}


@@ -0,0 +1,54 @@
//! Helpers used in tests
use std::ops::DerefMut;
use rosenpass_cipher_traits::primitives::Kem;
use rosenpass_ciphers::StaticKem;
use super::{
basic_types::{SPk, SSk},
osk_domain_separator::OskDomainSeparator,
CryptoServer, PeerPtr, ProtocolVersion,
};
/// Helper for tests and examples
pub struct ServerForTesting {
pub peer: PeerPtr,
pub peer_keys: (SSk, SPk),
pub srv: CryptoServer,
}
/// TODO: Document that the protocol version is only used for creating the peer for testing
impl ServerForTesting {
pub fn new(protocol_version: ProtocolVersion) -> anyhow::Result<Self> {
let (mut sskm, mut spkm) = (SSk::zero(), SPk::zero());
StaticKem.keygen(sskm.secret_mut(), spkm.deref_mut())?;
let mut srv = CryptoServer::new(sskm, spkm);
let (mut sskt, mut spkt) = (SSk::zero(), SPk::zero());
StaticKem.keygen(sskt.secret_mut(), spkt.deref_mut())?;
let peer = srv.add_peer(
None,
spkt.clone(),
protocol_version,
OskDomainSeparator::default(),
)?;
let peer_keys = (sskt, spkt);
Ok(ServerForTesting {
peer,
peer_keys,
srv,
})
}
pub fn tuple(self) -> (PeerPtr, (SSk, SPk), CryptoServer) {
(self.peer, self.peer_keys, self.srv)
}
}
/// Time travel forward in time
pub fn time_travel_forward(srv: &mut CryptoServer, secs: f64) {
let dur = std::time::Duration::from_secs_f64(secs);
srv.timebase.0 = srv.timebase.0.checked_sub(dur).unwrap();
}


@@ -0,0 +1,46 @@
//! Time-keeping related utilities for the Rosenpass protocol
use super::constants::EVENT_GRACE;
/// A type for time, e.g. for backoff before re-tries
pub type Timing = f64;
/// Magic time stamp to indicate some object is ancient; "Before Common Era"
///
/// This is for instance used as a magic time stamp indicating age when some
/// cryptographic object certainly needs to be refreshed.
///
/// Using this instead of Timing::MIN or Timing::INFINITY to avoid floating
/// point math weirdness.
pub const BCE: Timing = -3600.0 * 24.0 * 356.0 * 10_000.0;
/// Magic time stamp to indicate that some process is not time-limited
///
/// Actually it's eight hours; this is intentional to avoid weirdness
/// regarding unexpectedly large numbers in system APIs, as this is < i16::MAX
pub const UNENDING: Timing = 3600.0 * 8.0;
/// An event `ev` has happened relative to a point in time `now`
/// if `ev` does not lie in the future relative to `now`.
///
/// An event lies in the future relative to `now` if it
/// does not lie in the past or present.
///
/// An event `ev` lies in the past if `ev < now`. It lies in the
/// present if the absolute difference between `ev` and `now` is
/// smaller than [EVENT_GRACE].
///
/// Think of this as `ev <= now` with an [EVENT_GRACE] tolerance applied.
///
/// # Examples
///
/// ```
/// use rosenpass::protocol::{timing::has_happened, constants::EVENT_GRACE};
/// assert!(has_happened(EVENT_GRACE * -1.0, 0.0));
/// assert!(has_happened(0.0, 0.0));
/// assert!(has_happened(EVENT_GRACE * 0.999, 0.0));
/// assert!(!has_happened(EVENT_GRACE * 1.001, 0.0));
/// ```
pub fn has_happened(ev: Timing, now: Timing) -> bool {
(ev - now) < EVENT_GRACE
}


@@ -0,0 +1,21 @@
//! Helpers for working with the zerocopy crate
use std::mem::size_of;
use zerocopy::{FromBytes, Ref};
use crate::RosenpassError;
/// Used to parse a network message using [zerocopy], with mutable access to the buffer
pub fn truncating_cast_into<T: FromBytes>(
buf: &mut [u8],
) -> Result<Ref<&mut [u8], T>, RosenpassError> {
Ref::new(&mut buf[..size_of::<T>()]).ok_or(RosenpassError::BufferSizeMismatch)
}
/// Variant of [truncating_cast_into] that takes an immutable buffer
pub fn truncating_cast_into_nomut<T: FromBytes>(
buf: &[u8],
) -> Result<Ref<&[u8], T>, RosenpassError> {
Ref::new(&buf[..size_of::<T>()]).ok_or(RosenpassError::BufferSizeMismatch)
}


@@ -15,7 +15,7 @@ use rosenpass::api::{
supply_keypair_response_status,
};
use rosenpass::config::ProtocolVersion;
use rosenpass::protocol::SymKey;
use rosenpass::protocol::basic_types::SymKey;
use rosenpass_util::{
b64::B64Display,
file::LoadValueB64,
@@ -105,7 +105,8 @@ fn api_integration_api_setup(protocol_version: ProtocolVersion) -> anyhow::Resul
peer: format!("{}", peer_b_wg_peer_id.fmt_b64::<8129>()),
extra_params: vec![],
}),
protocol_version: protocol_version.clone(),
protocol_version: protocol_version,
osk_domain_separator: Default::default(),
}],
};
@@ -126,7 +127,8 @@ fn api_integration_api_setup(protocol_version: ProtocolVersion) -> anyhow::Resul
endpoint: Some(peer_a_endpoint.to_owned()),
pre_shared_key: None,
wg: None,
protocol_version: protocol_version.clone(),
protocol_version: protocol_version,
osk_domain_separator: Default::default(),
}],
};


@@ -17,7 +17,7 @@ use tempfile::TempDir;
use zerocopy::AsBytes;
use rosenpass::config::ProtocolVersion;
use rosenpass::protocol::SymKey;
use rosenpass::protocol::basic_types::SymKey;
struct KillChild(std::process::Child);
@@ -82,7 +82,8 @@ fn api_integration_test(protocol_version: ProtocolVersion) -> anyhow::Result<()>
endpoint: None,
pre_shared_key: None,
wg: None,
protocol_version: protocol_version.clone(),
protocol_version: protocol_version,
osk_domain_separator: Default::default(),
}],
};
@@ -103,7 +104,8 @@ fn api_integration_test(protocol_version: ProtocolVersion) -> anyhow::Result<()>
endpoint: Some(peer_a_endpoint.to_owned()),
pre_shared_key: None,
wg: None,
protocol_version: protocol_version.clone(),
protocol_version: protocol_version,
osk_domain_separator: Default::default(),
}],
};


@@ -1,21 +1,14 @@
use std::{
net::SocketAddr,
ops::DerefMut,
str::FromStr,
sync::mpsc,
thread::{self, sleep},
time::Duration,
};
use std::thread::{self, sleep};
use std::{net::SocketAddr, ops::DerefMut, str::FromStr, sync::mpsc, time::Duration};
use rosenpass::config::ProtocolVersion;
use rosenpass::{
app_server::{AppServer, AppServerTest, MAX_B64_KEY_SIZE},
protocol::{SPk, SSk, SymKey},
};
use rosenpass_cipher_traits::primitives::Kem;
use rosenpass_ciphers::StaticKem;
use rosenpass_util::{file::LoadValueB64, functional::run, mem::DiscardResultExt, result::OkExt};
use rosenpass::app_server::{AppServer, AppServerTest, MAX_B64_KEY_SIZE};
use rosenpass::protocol::basic_types::{SPk, SSk, SymKey};
use rosenpass::{config::ProtocolVersion, protocol::osk_domain_separator::OskDomainSeparator};
#[test]
fn key_exchange_with_app_server_v02() -> anyhow::Result<()> {
key_exchange_with_app_server(ProtocolVersion::V02)
@@ -69,7 +62,8 @@ fn key_exchange_with_app_server(protocol_version: ProtocolVersion) -> anyhow::Re
outfile,
broker_peer,
hostname,
protocol_version.clone(),
protocol_version,
OskDomainSeparator::default(),
)?;
srv.app_srv.event_loop()


@@ -144,7 +144,7 @@ fn check_example_config() {
let tmp_dir = tempdir().unwrap();
let config_path = tmp_dir.path().join("config.toml");
let mut config_file = File::create(config_path.to_owned()).unwrap();
let mut config_file = File::create(&config_path).unwrap();
config_file
.write_all(


@@ -9,29 +9,49 @@ use rosenpass_cipher_traits::primitives::Kem;
use rosenpass_ciphers::StaticKem;
use rosenpass_util::result::OkExt;
use rosenpass::protocol::{
testutils::time_travel_forward, CryptoServer, HostIdentification, MsgBuf, PeerPtr, PollResult,
ProtocolVersion, SPk, SSk, SymKey, Timing, UNENDING,
};
use rosenpass::protocol::basic_types::{MsgBuf, SPk, SSk, SymKey};
use rosenpass::protocol::osk_domain_separator::OskDomainSeparator;
use rosenpass::protocol::testutils::time_travel_forward;
use rosenpass::protocol::timing::{Timing, UNENDING};
use rosenpass::protocol::{CryptoServer, HostIdentification, PeerPtr, PollResult, ProtocolVersion};
// TODO: Most of the utility functions in here should probably be moved to
// rosenpass::protocol::testutils;
#[test]
fn test_successful_exchange_with_poll_v02() -> anyhow::Result<()> {
test_successful_exchange_with_poll(ProtocolVersion::V02)
test_successful_exchange_with_poll(ProtocolVersion::V02, OskDomainSeparator::default())
}
#[test]
fn test_successful_exchange_with_poll_v03() -> anyhow::Result<()> {
test_successful_exchange_with_poll(ProtocolVersion::V03)
test_successful_exchange_with_poll(ProtocolVersion::V03, OskDomainSeparator::default())
}
fn test_successful_exchange_with_poll(protocol_version: ProtocolVersion) -> anyhow::Result<()> {
#[test]
fn test_successful_exchange_with_poll_v02_custom_domain_separator() -> anyhow::Result<()> {
test_successful_exchange_with_poll(
ProtocolVersion::V02,
OskDomainSeparator::custom_utf8_single_label("example.org", "Example Label"),
)
}
#[test]
fn test_successful_exchange_with_poll_v03_custom_domain_separator() -> anyhow::Result<()> {
test_successful_exchange_with_poll(
ProtocolVersion::V03,
OskDomainSeparator::custom_utf8_single_label("example.org", "Example Label"),
)
}
fn test_successful_exchange_with_poll(
protocol_version: ProtocolVersion,
osk_domain_separator: OskDomainSeparator,
) -> anyhow::Result<()> {
// Set security policy for storing secrets; choose the one that is faster for testing
rosenpass_secret_memory::policy::secret_policy_use_only_malloc_secrets();
let mut sim = RosenpassSimulator::new(protocol_version)?;
let mut sim = RosenpassSimulator::new(protocol_version, osk_domain_separator)?;
sim.poll_loop(150)?; // Poll 75 times
let transcript = sim.transcript;
@@ -104,7 +124,7 @@ fn test_successful_exchange_under_packet_loss(
rosenpass_secret_memory::policy::secret_policy_use_only_malloc_secrets();
// Create the simulator
let mut sim = RosenpassSimulator::new(protocol_version)?;
let mut sim = RosenpassSimulator::new(protocol_version, OskDomainSeparator::default())?;
// Make sure the servers are set to under load condition
sim.srv_a.under_load = true;
@@ -181,6 +201,94 @@ fn test_successful_exchange_under_packet_loss(
Ok(())
}
#[test]
fn test_osk_label_mismatch() -> anyhow::Result<()> {
// Set security policy for storing secrets; choose the one that is faster for testing
rosenpass_secret_memory::policy::secret_policy_use_only_malloc_secrets();
let ds_wg = OskDomainSeparator::for_wireguard_psk();
let ds_custom1 = OskDomainSeparator::custom_utf8("example.com", ["Example Label"]);
let ds_custom2 =
OskDomainSeparator::custom_utf8("example.com", ["Example Label", "Second Token"]);
// Create the simulator
let mut sim = RosenpassSimulator::new(ProtocolVersion::V03, ds_custom1.clone())?;
assert_eq!(sim.srv_a.srv.peers[0].osk_domain_separator, ds_custom1);
assert_eq!(sim.srv_b.srv.peers[0].osk_domain_separator, ds_custom1);
// Deliberately produce a label mismatch
sim.srv_b.srv.peers[0].osk_domain_separator = ds_custom2.clone();
assert_eq!(sim.srv_a.srv.peers[0].osk_domain_separator, ds_custom1);
assert_eq!(sim.srv_b.srv.peers[0].osk_domain_separator, ds_custom2);
// Perform the key exchanges
for _ in 0..300 {
let ev = sim.poll()?;
assert!(!matches!(ev, TranscriptEvent::CompletedExchange(_)),
"We deliberately provoked a mismatch in OSK domain separator, but still saw a successfully completed key exchange");
// Wait for a key exchange that failed with a KeyMismatch event
let (osk_a_custom1, osk_b_custom2) = match ev {
TranscriptEvent::FailedExchangeWithKeyMismatch(osk_a, osk_b) => {
(osk_a.clone(), osk_b.clone())
}
_ => continue,
};
// The OSKs have been produced through the call to the function CryptoServer::osk(…)
assert_eq!(
sim.srv_a.srv.osk(PeerPtr(0))?.secret(),
osk_a_custom1.secret()
);
assert_eq!(
sim.srv_b.srv.osk(PeerPtr(0))?.secret(),
osk_b_custom2.secret()
);
// They are not matching (obviously)
assert_ne!(osk_a_custom1.secret(), osk_b_custom2.secret());
// We can manually generate OSKs with matching labels
let osk_a_custom2 = sim
.srv_a
.srv
.osk_with_domain_separator(PeerPtr(0), &ds_custom2)?;
let osk_b_custom1 = sim
.srv_b
.srv
.osk_with_domain_separator(PeerPtr(0), &ds_custom1)?;
let osk_a_wg = sim
.srv_a
.srv
.osk_with_domain_separator(PeerPtr(0), &ds_wg)?;
let osk_b_wg = sim
.srv_b
.srv
.osk_with_domain_separator(PeerPtr(0), &ds_wg)?;
// The key exchange may have failed for some other reason; in that case we expect a
// successful-but-label-mismatch exchange later in the protocol
if osk_a_custom1.secret() != osk_b_custom1.secret() {
continue;
}
// But if one of the labeled key pairs matches, all should match
assert_eq!(osk_a_custom2.secret(), osk_b_custom2.secret());
assert_eq!(osk_a_wg.secret(), osk_b_wg.secret());
// But the three keys do not match each other
assert_ne!(osk_a_custom1.secret(), osk_a_custom2.secret());
assert_ne!(osk_a_custom1.secret(), osk_a_wg.secret());
assert_ne!(osk_a_custom2.secret(), osk_a_wg.secret());
// The test succeeded
return Ok(());
}
panic!("Test did not succeed even after allowing for a large number of communication rounds");
}
type MessageType = u8;
/// Records the events that are produced by Rosenpass
@@ -193,6 +301,7 @@ enum TranscriptEvent {
event: ServerEvent,
},
CompletedExchange(SymKey),
FailedExchangeWithKeyMismatch(SymKey, SymKey),
}
#[derive(Debug)]
@@ -292,7 +401,10 @@ struct SimulatorServer {
impl RosenpassSimulator {
/// Set up the simulator
fn new(protocol_version: ProtocolVersion) -> anyhow::Result<Self> {
fn new(
protocol_version: ProtocolVersion,
osk_domain_separator: OskDomainSeparator,
) -> anyhow::Result<Self> {
// Set up the first server
let (mut peer_a_sk, mut peer_a_pk) = (SSk::zero(), SPk::zero());
StaticKem.keygen(peer_a_sk.secret_mut(), peer_a_pk.deref_mut())?;
@@ -305,8 +417,18 @@ impl RosenpassSimulator {
// Generate a PSK and introduce the Peers to each other.
let psk = SymKey::random();
let peer_a = srv_a.add_peer(Some(psk.clone()), peer_b_pk, protocol_version.clone())?;
let peer_b = srv_b.add_peer(Some(psk), peer_a_pk, protocol_version.clone())?;
let peer_a = srv_a.add_peer(
Some(psk.clone()),
peer_b_pk,
protocol_version.clone(),
osk_domain_separator.clone(),
)?;
let peer_b = srv_b.add_peer(
Some(psk),
peer_a_pk,
protocol_version.clone(),
osk_domain_separator.clone(),
)?;
// Set up the individual server data structures
let srv_a = SimulatorServer::new(srv_a, peer_b);
@@ -566,10 +688,18 @@ impl ServerPtr {
None => return Ok(()),
};
// Make sure the OSK of server A always comes first
let (osk_a, osk_b) = match self == ServerPtr::A {
true => (osk, other_osk),
false => (other_osk, osk),
};
// Issue the successful exchange event if the OSKs are equal;
// be careful to use constant time comparison for things like this!
if rosenpass_constant_time::memcmp(osk.secret(), other_osk.secret()) {
self.enqueue_upcoming_poll_event(sim, TE::CompletedExchange(osk));
if rosenpass_constant_time::memcmp(osk_a.secret(), osk_b.secret()) {
self.enqueue_upcoming_poll_event(sim, TE::CompletedExchange(osk_a));
} else {
self.enqueue_upcoming_poll_event(sim, TE::FailedExchangeWithKeyMismatch(osk_a, osk_b));
}
Ok(())


@@ -1,5 +1,5 @@
[package]
name = "rp"
name = "rosenpass-rp"
version = "0.2.1"
edition = "2021"
license = "MIT OR Apache-2.0"
@@ -8,7 +8,9 @@ homepage = "https://rosenpass.eu/"
repository = "https://github.com/rosenpass/rosenpass"
rust-version = "1.77.0"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[[bin]]
name = "rp"
path = "src/main.rs"
[dependencies]
anyhow = { workspace = true }
@@ -17,12 +19,15 @@ serde = { workspace = true }
toml = { workspace = true }
x25519-dalek = { workspace = true, features = ["static_secrets"] }
zeroize = { workspace = true }
libc = { workspace = true }
log = { workspace = true }
env_logger = { workspace = true }
rosenpass = { workspace = true }
rosenpass-ciphers = { workspace = true }
rosenpass-cipher-traits = { workspace = true }
rosenpass-secret-memory = { workspace = true }
rosenpass-util = { workspace = true }
rosenpass-util = { workspace = true, features = ["tokio"] }
rosenpass-wireguard-broker = { workspace = true }
tokio = { workspace = true }


@@ -1,20 +1,68 @@
use anyhow::Error;
use serde::Deserialize;
use std::future::Future;
use std::ops::DerefMut;
use std::pin::Pin;
use std::sync::Arc;
use std::{net::SocketAddr, path::PathBuf, process::Command};
use std::any::type_name;
use std::{borrow::Borrow, net::SocketAddr, path::PathBuf};
use tokio::process::Command;
use anyhow::{bail, ensure, Context, Result};
use futures_util::TryStreamExt as _;
use serde::Deserialize;
#[cfg(any(target_os = "linux", target_os = "freebsd"))]
use crate::key::WG_B64_LEN;
use anyhow::Result;
use rosenpass::config::ProtocolVersion;
use rosenpass::{
app_server::{AppServer, BrokerPeer},
config::Verbosity,
protocol::{
basic_types::{SPk, SSk, SymKey},
osk_domain_separator::OskDomainSeparator,
},
};
use rosenpass_secret_memory::Secret;
use rosenpass_util::file::{LoadValue as _, LoadValueB64};
use rosenpass_util::functional::{ApplyExt, MutatingExt};
use rosenpass_util::result::OkExt;
use rosenpass_util::tokio::janitor::{spawn_cleanup_job, try_spawn_daemon};
use rosenpass_wireguard_broker::brokers::native_unix::{
NativeUnixBroker, NativeUnixBrokerConfigBaseBuilder,
};
use tokio::task::spawn_blocking;
use crate::key::WG_B64_LEN;
/// Extra-special measure to structure imports from the various
/// netlink related crates used in [super]
mod netlink {
/// Re-exports from [::netlink_packet_core]
pub mod core {
pub use ::netlink_packet_core::{NetlinkMessage, NLM_F_ACK, NLM_F_REQUEST};
}
/// Re-exports from [::rtnetlink]
pub mod rtnl {
pub use ::rtnetlink::Error;
pub use ::rtnetlink::Handle;
}
/// Re-exports from [::genetlink] and [::netlink_packet_generic]
pub mod genl {
pub use ::genetlink::GenetlinkHandle as Handle;
pub use ::netlink_packet_generic::GenlMessage as Message;
}
/// Re-exports from [::netlink_packet_wireguard]
pub mod wg {
pub use ::netlink_packet_wireguard::constants::WG_KEY_LEN as KEY_LEN;
pub use ::netlink_packet_wireguard::nlas::WgDeviceAttrs as DeviceAttrs;
pub use ::netlink_packet_wireguard::{Wireguard, WireguardCmd};
}
}
type WgSecretKey = Secret<{ netlink::wg::KEY_LEN }>;
/// Defines a peer for the rosenpass connection: a directory for storing public keys,
/// optionally the IP address and port of the endpoint, a keep-alive interval,
/// and a list of allowed IPs for the peer.
#[derive(Default, Deserialize)]
#[serde(deny_unknown_fields)]
pub struct ExchangePeer {
/// Directory where public keys are stored
pub public_keys_dir: PathBuf,
@@ -31,6 +79,7 @@ pub struct ExchangePeer {
/// Options for the exchange operation of the `rp` binary.
#[derive(Default, Deserialize)]
#[serde(deny_unknown_fields)]
pub struct ExchangeOptions {
/// Whether the cli output should be verbose.
pub verbose: bool,
@@ -41,283 +90,401 @@ pub struct ExchangeOptions {
pub dev: Option<String>,
/// The IP-address rosenpass should run under.
pub ip: Option<String>,
/// The IP-address and port that the rosenpass [AppServer](rosenpass::app_server::AppServer)
/// The IP-address and port that the rosenpass [AppServer]
/// should use.
pub listen: Option<SocketAddr>,
/// Other peers a connection should be initialized to
pub peers: Vec<ExchangePeer>,
}
#[cfg(not(any(target_os = "linux", target_os = "freebsd")))]
pub async fn exchange(_: ExchangeOptions) -> Result<()> {
use anyhow::anyhow;
Err(anyhow!(
"Your system {} is not yet supported. We are happy to receive patches to address this :)",
std::env::consts::OS
))
/// Manage the lifetime of WireGuard devices used for rp
#[derive(Debug, Default)]
struct WireGuardDeviceImpl {
// TODO: Can we merge these two somehow?
rtnl_netlink_handle_cache: Option<netlink::rtnl::Handle>,
genl_netlink_handle_cache: Option<netlink::genl::Handle>,
/// Handle and name of the device
device: Option<(u32, String)>,
}
#[cfg(any(target_os = "linux", target_os = "freebsd"))]
mod netlink {
use anyhow::Result;
use futures_util::{StreamExt as _, TryStreamExt as _};
use genetlink::GenetlinkHandle;
use netlink_packet_core::{NLM_F_ACK, NLM_F_REQUEST};
use netlink_packet_wireguard::nlas::WgDeviceAttrs;
use rtnetlink::Handle;
impl WireGuardDeviceImpl {
fn take(&mut self) -> WireGuardDeviceImpl {
Self::default().mutating(|nu| std::mem::swap(self, nu))
}
async fn open(&mut self, device_name: String) -> anyhow::Result<()> {
let mut rtnl_link = self.rtnl_netlink_handle()?.link();
let device_name_ref = &device_name;
// Make sure that there is no device called `device_name` before we start
rtnl_link
.get()
.match_name(device_name.to_owned())
.execute()
// Count the number of occurrences
.try_fold(0, |acc, _val| async move {
Ok(acc + 1)
}).await
// Extract the error's raw system error code
.map_err(|e| {
use netlink::rtnl::Error as E;
match e {
E::NetlinkError(msg) => {
let raw_code = -msg.raw_code();
(E::NetlinkError(msg), Some(raw_code))
},
_ => (e, None),
}
})
.apply(|r| {
match r {
// No such device, which is exactly what we are expecting
Ok(0) | Err((_, Some(libc::ENODEV))) => Ok(()),
// Device already exists
Ok(_) => bail!("\
Trying to create a network device for Rosenpass under the name \"{device_name}\", \
but at least one device under that name already exists."),
// Other error
Err((e, _)) => bail!(e),
}
})?;
// Add the link, equivalent to `ip link add <link_name> type wireguard`.
rtnl_link
.add()
.wireguard(device_name.to_owned())
.execute()
.await?;
log::info!("Created network device!");
// Retrieve a handle for the newly created device
let device_handle = rtnl_link
.get()
.match_name(device_name.to_owned())
.execute()
.err_into::<anyhow::Error>()
.try_fold(Option::None, |acc, val| async move {
ensure!(acc.is_none(), "\
Created a network device for Rosenpass under the name \"{device_name_ref}\", \
but upon trying to determine the handle for the device using name-based lookup, we received multiple handles. \
We checked beforehand whether the device already exists. \
This should not happen. Unsure how to proceed. Terminating.");
Ok(Some(val))
}).await?
.with_context(|| format!("\
Created a network device for Rosenpass under the name \"{device_name}\", \
but upon trying to determine the handle for the device using name-based lookup, we received no handle. \
This should not happen. Unsure how to proceed. Terminating."))?
.apply(|msg| msg.header.index);
// Now we can actually start to mark the device as initialized.
// Note that if the handle retrieval above does not work, the destructor
// will not run and the device will not be erased.
// This is, for now, the desired behavior as we need the handle to erase
// the device anyway.
self.device = Some((device_handle, device_name));
// Activate the link, equivalent to `ip link set dev <DEV> up`.
rtnl_link.set(device_handle).up().execute().await?;
Ok(())
}
async fn close(mut self) {
// Check if the device is properly initialized and retrieve the device info
let (device_handle, device_name) = match self.device.take() {
Some(val) => val,
// Nothing to do, not yet properly initialized
None => return,
};
// Erase the network device; the rest of the function is just error handling
let res = async move {
self.rtnl_netlink_handle()?
.link()
.del(device_handle)
.execute()
.await?;
log::debug!("Erased network interface!");
anyhow::Ok(())
}
.await;
// Here we test if the error needs printing at all
let res = 'do_print: {
// Short-circuit if the deletion was successful
let err = match res {
Ok(()) => break 'do_print Ok(()),
Err(err) => err,
};
// Extract the rtnetlink error, so we can inspect it
let err = match err.downcast::<netlink::rtnl::Error>() {
Ok(rtnl_err) => rtnl_err,
Err(other_err) => break 'do_print Err(other_err),
};
// TODO: This is a bit brittle: in the rtnetlink error enum,
// E::NetlinkError looks like a sort of "unknown error" case. If they explicitly
// add support for the "no such device" error or the other ones we check for in
// this block, then this code may no longer filter these errors
// Extract the raw netlink error code
use netlink::rtnl::Error as E;
let error_code = match err {
E::NetlinkError(ref msg) => -msg.raw_code(),
err => break 'do_print Err(err.into()),
};
// Check whether it's just the "no such device" error
#[allow(clippy::single_match)]
match error_code {
libc::ENODEV => break 'do_print Ok(()),
_ => {}
}
// Otherwise, we just print the error
Err(err.into())
};
if let Err(err) = res {
log::warn!("Could not remove network device `{device_name}`: {err:?}");
}
}
pub async fn add_ip_address(&self, addr: &str) -> anyhow::Result<()> {
// TODO: Migrate to using netlink
Command::new("ip")
.args(["address", "add", addr, "dev", self.name()?])
.status()
.await?;
Ok(())
}
pub fn is_open(&self) -> bool {
self.device.is_some()
}
pub fn maybe_name(&self) -> Option<&str> {
self.device.as_ref().map(|slot| slot.1.borrow())
}
/// Return the raw handle for this device
pub fn maybe_raw_handle(&self) -> Option<u32> {
self.device.as_ref().map(|slot| slot.0)
}
pub fn name(&self) -> anyhow::Result<&str> {
self.maybe_name()
.with_context(|| format!("{} has not been initialized!", type_name::<Self>()))
}
/// Return the raw handle for this device
pub fn raw_handle(&self) -> anyhow::Result<u32> {
self.maybe_raw_handle()
.with_context(|| format!("{} has not been initialized!", type_name::<Self>()))
}
pub async fn set_private_key_and_listen_addr(
&mut self,
wgsk: &WgSecretKey,
listen_port: Option<u16>,
) -> anyhow::Result<()> {
use netlink as nl;
// The attributes to set
// TODO: This exposes the secret key; we should probably run this in a separate process
// or on a separate stack and have zeroizing allocator globally.
let mut attrs = vec![
nl::wg::DeviceAttrs::IfIndex(self.raw_handle()?),
nl::wg::DeviceAttrs::PrivateKey(*wgsk.secret()),
];
// Optional listen port for WireGuard
if let Some(port) = listen_port {
attrs.push(nl::wg::DeviceAttrs::ListenPort(port));
}
// The netlink request we are trying to send
let req = nl::wg::Wireguard {
cmd: nl::wg::WireguardCmd::SetDevice,
nlas: attrs,
};
// Boilerplate; wrap the request into more structures
let req = req
.apply(nl::genl::Message::from_payload)
.apply(nl::core::NetlinkMessage::from)
.mutating(|req| {
req.header.flags = nl::core::NLM_F_REQUEST | nl::core::NLM_F_ACK;
});
// Send the request
self.genl_netlink_handle()?
.request(req)
.await?
// Collect all errors (let try_fold do all the work)
.try_fold((), |_, _| async move { Ok(()) })
.await?;
Ok(())
}
fn take_rtnl_netlink_handle(&mut self) -> Result<netlink::rtnl::Handle> {
if let Some(handle) = self.rtnl_netlink_handle_cache.take() {
Ok(handle)
} else {
let (connection, handle, _) = rtnetlink::new_connection()?;
// Making sure that the connection has a chance to terminate before the
// application exits
try_spawn_daemon(async move {
connection.await;
Ok(())
})?;
Ok(handle)
}
}
fn rtnl_netlink_handle(&mut self) -> Result<&mut netlink::rtnl::Handle> {
let netlink_handle = self.take_rtnl_netlink_handle()?;
self.rtnl_netlink_handle_cache.insert(netlink_handle).ok()
}
fn take_genl_netlink_handle(&mut self) -> Result<netlink::genl::Handle> {
if let Some(handle) = self.genl_netlink_handle_cache.take() {
Ok(handle)
} else {
let (connection, handle, _) = genetlink::new_connection()?;
// Making sure that the connection has a chance to terminate before the
// application exits
try_spawn_daemon(async move {
connection.await;
Ok(())
})?;
Ok(handle)
}
}
fn genl_netlink_handle(&mut self) -> Result<&mut netlink::genl::Handle> {
let netlink_handle = self.take_genl_netlink_handle()?;
self.genl_netlink_handle_cache.insert(netlink_handle).ok()
}
}
struct WireGuardDevice {
_impl: WireGuardDeviceImpl,
}
impl WireGuardDevice {
/// Creates a network link named `link_name` via netlink and changes its state to up.
/// Returns the index of the interface in the list of interfaces, or an error if
/// creating the link or changing its state to up fails.
pub async fn link_create_and_up(rtnetlink: &Handle, link_name: String) -> Result<u32> {
// Add the link, equivalent to `ip link add <link_name> type wireguard`.
rtnetlink
.link()
.add()
.wireguard(link_name.clone())
.execute()
.await?;
pub async fn create_device(device_name: String) -> Result<Self> {
let mut _impl = WireGuardDeviceImpl::default();
_impl.open(device_name).await?;
assert!(_impl.is_open()); // Sanity check
Ok(WireGuardDevice { _impl })
}
// Retrieve the link to be able to up it, equivalent to `ip link show` and then
// using the link shown that is identified by `link_name`.
let link = rtnetlink
.link()
.get()
.match_name(link_name.clone())
.execute()
.into_stream()
.into_future()
pub fn name(&self) -> &str {
self._impl.name().unwrap()
}
/// Return the raw handle for this device
#[allow(dead_code)]
pub fn raw_handle(&self) -> u32 {
self._impl.raw_handle().unwrap()
}
pub async fn add_ip_address(&self, addr: &str) -> anyhow::Result<()> {
self._impl.add_ip_address(addr).await
}
pub async fn set_private_key_and_listen_addr(
&mut self,
wgsk: &WgSecretKey,
listen_port: Option<u16>,
) -> anyhow::Result<()> {
self._impl
.set_private_key_and_listen_addr(wgsk, listen_port)
.await
.0
.unwrap()?;
// Up the link, equivalent to `ip link set dev <DEV> up`.
rtnetlink
.link()
.set(link.header.index)
.up()
.execute()
.await?;
Ok(link.header.index)
}
/// Deletes a link using rtnetlink. The link is specified using its index in the list of links.
pub async fn link_cleanup(rtnetlink: &Handle, index: u32) -> Result<()> {
rtnetlink.link().del(index).execute().await?;
Ok(())
}
/// Deletes a link using rtnetlink. The link is specified using its index in the list of links.
/// In contrast to [link_cleanup], this function creates a new socket connection to netlink and
/// *ignores errors* that occur during deletion.
pub async fn link_cleanup_standalone(index: u32) -> Result<()> {
let (connection, rtnetlink, _) = rtnetlink::new_connection()?;
tokio::spawn(connection);
// We don't care if this fails, as the device may already have been auto-cleaned up.
let _ = rtnetlink.link().del(index).execute().await;
Ok(())
}
/// This replicates the functionality of the `wg set` command line tool.
///
/// It sets the specified WireGuard attributes of the indexed device by
/// communicating with WireGuard's generic netlink interface, like the
/// `wg` tool does.
pub async fn wg_set(
genetlink: &mut GenetlinkHandle,
index: u32,
mut attr: Vec<WgDeviceAttrs>,
) -> Result<()> {
use futures_util::StreamExt as _;
use netlink_packet_core::{NetlinkMessage, NetlinkPayload};
use netlink_packet_generic::GenlMessage;
use netlink_packet_wireguard::{Wireguard, WireguardCmd};
// Scope our `set` command to only the device of the specified index.
attr.insert(0, WgDeviceAttrs::IfIndex(index));
// Construct the WireGuard-specific netlink packet
let wgc = Wireguard {
cmd: WireguardCmd::SetDevice,
nlas: attr,
};
// Construct final message.
let genl = GenlMessage::from_payload(wgc);
let mut nlmsg = NetlinkMessage::from(genl);
nlmsg.header.flags = NLM_F_REQUEST | NLM_F_ACK;
// Send and wait for the ACK or error.
let (res, _) = genetlink.request(nlmsg).await?.into_future().await;
if let Some(res) = res {
let res = res?;
if let NetlinkPayload::Error(err) = res.payload {
return Err(err.to_io().into());
}
}
Ok(())
}
}
/// A wrapper for a list of cleanup handlers that can be used in an asynchronous context
/// to clean up after the usage of rosenpass or if the `rp` binary is interrupted with ctrl+c
/// or a `SIGINT` signal in general.
#[derive(Clone)]
#[cfg(any(target_os = "linux", target_os = "freebsd"))]
struct CleanupHandlers(
Arc<::futures::lock::Mutex<Vec<Pin<Box<dyn Future<Output = Result<(), Error>> + Send>>>>>,
);
#[cfg(any(target_os = "linux", target_os = "freebsd"))]
impl CleanupHandlers {
/// Creates a new list of [CleanupHandlers].
fn new() -> Self {
CleanupHandlers(Arc::new(::futures::lock::Mutex::new(vec![])))
}
/// Enqueues a new cleanup handler in the form of a [Future].
async fn enqueue(&self, handler: Pin<Box<dyn Future<Output = Result<(), Error>> + Send>>) {
self.0.lock().await.push(Box::pin(handler))
}
/// Runs all cleanup handlers. Following the documentation of [futures::future::try_join_all]:
/// If any cleanup handler returns an error then all other cleanup handlers will be canceled and
/// an error will be returned immediately. If all cleanup handlers complete successfully,
/// however, then the returned future will succeed with a Vec of all the successful results.
async fn run(self) -> Result<Vec<()>, Error> {
futures::future::try_join_all(self.0.lock().await.deref_mut()).await
impl Drop for WireGuardDevice {
fn drop(&mut self) {
let _impl = self._impl.take();
spawn_cleanup_job(async move {
_impl.close().await;
Ok(())
});
}
}
/// Sets up the rosenpass link and wireguard and configures both with the configuration specified by
/// `options`.
#[cfg(any(target_os = "linux", target_os = "freebsd"))]
pub async fn exchange(options: ExchangeOptions) -> Result<()> {
use std::fs;
// Load the server parameter files
let wgsk = options.private_keys_dir.join("wgsk");
let rpsk = options.private_keys_dir.join("pqsk");
let rppk = options.private_keys_dir.join("pqpk");
let (wgsk, rpsk, rppk) = spawn_blocking(move || {
let wgsk = WgSecretKey::load_b64::<WG_B64_LEN, _>(wgsk)?;
let rpsk = SSk::load(rpsk)?;
let rppk = SPk::load(rppk)?;
anyhow::Ok((wgsk, rpsk, rppk))
})
.await??;
use anyhow::anyhow;
use netlink_packet_wireguard::{constants::WG_KEY_LEN, nlas::WgDeviceAttrs};
use rosenpass::{
app_server::{AppServer, BrokerPeer},
config::Verbosity,
protocol::{SPk, SSk, SymKey},
};
use rosenpass_secret_memory::Secret;
use rosenpass_util::file::{LoadValue as _, LoadValueB64};
use rosenpass_wireguard_broker::brokers::native_unix::{
NativeUnixBroker, NativeUnixBrokerConfigBaseBuilder, NativeUnixBrokerConfigBaseBuilderError,
};
// Setup the WireGuard device
let device = options.dev.as_deref().unwrap_or("rosenpass0");
let mut device = WireGuardDevice::create_device(device.to_owned()).await?;
let (connection, rtnetlink, _) = rtnetlink::new_connection()?;
tokio::spawn(connection);
// Assign WG secret key & port
device
.set_private_key_and_listen_addr(&wgsk, options.listen.map(|ip| ip.port() + 1))
.await?;
std::mem::drop(wgsk);
let link_name = options.dev.clone().unwrap_or("rosenpass0".to_string());
let link_index = netlink::link_create_and_up(&rtnetlink, link_name.clone()).await?;
// Set up a list of (initially empty) cleanup handlers that are run when
// ctrl-c is hit (or, more generally, a `SIGINT` signal is received) and always at the end.
let cleanup_handlers = CleanupHandlers::new();
let final_cleanup_handlers = (&cleanup_handlers).clone();
cleanup_handlers
.enqueue(Box::pin(async move {
netlink::link_cleanup_standalone(link_index).await
}))
.await;
ctrlc_async::set_async_handler(async move {
final_cleanup_handlers
.run()
.await
.expect("Failed to clean up");
})?;
// Run `ip address add <ip> dev <dev>` and enqueue `ip address del <ip> dev <dev>` as a cleanup.
if let Some(ip) = options.ip {
let dev = options.dev.clone().unwrap_or("rosenpass0".to_string());
Command::new("ip")
.arg("address")
.arg("add")
.arg(ip.clone())
.arg("dev")
.arg(dev.clone())
.status()
.expect("failed to configure ip");
cleanup_handlers
.enqueue(Box::pin(async move {
Command::new("ip")
.arg("address")
.arg("del")
.arg(ip)
.arg("dev")
.arg(dev)
.status()
.expect("failed to remove ip");
Ok(())
}))
.await;
// Assign the public IP address for the interface
if let Some(ref ip) = options.ip {
device.add_ip_address(ip).await?;
}
// Deploy the classic wireguard private key.
let (connection, mut genetlink, _) = genetlink::new_connection()?;
tokio::spawn(connection);
let wgsk_path = options.private_keys_dir.join("wgsk");
let wgsk = Secret::<WG_KEY_LEN>::load_b64::<WG_B64_LEN, _>(wgsk_path)?;
let mut attr: Vec<WgDeviceAttrs> = Vec::with_capacity(2);
attr.push(WgDeviceAttrs::PrivateKey(*wgsk.secret()));
if let Some(listen) = options.listen {
if listen.port() == u16::MAX {
return Err(anyhow!("You may not use {} as the listen port.", u16::MAX));
}
attr.push(WgDeviceAttrs::ListenPort(listen.port() + 1));
}
netlink::wg_set(&mut genetlink, link_index, attr).await?;
// set up the rosenpass AppServer
let pqsk = options.private_keys_dir.join("pqsk");
let pqpk = options.private_keys_dir.join("pqpk");
let sk = SSk::load(&pqsk)?;
let pk = SPk::load(&pqpk)?;
let mut srv = Box::new(AppServer::new(
Some((sk, pk)),
if let Some(listen) = options.listen {
vec![listen]
} else {
Vec::with_capacity(0)
},
if options.verbose {
Verbosity::Verbose
} else {
Verbosity::Quiet
Some((rpsk, rppk)),
Vec::from_iter(options.listen),
match options.verbose {
true => Verbosity::Verbose,
false => Verbosity::Quiet,
},
None,
)?);
let broker_store_ptr = srv.register_broker(Box::new(NativeUnixBroker::new()))?;
fn cfg_err_map(e: NativeUnixBrokerConfigBaseBuilderError) -> anyhow::Error {
anyhow::Error::msg(format!("NativeUnixBrokerConfigBaseBuilderError: {:?}", e))
}
// Configure everything per peer.
for peer in options.peers {
let wgpk = peer.public_keys_dir.join("wgpk");
// TODO: Some of this is sync but should be async
let wgpk = peer
.public_keys_dir
.join("wgpk")
.apply(tokio::fs::read_to_string)
.await?;
let pqpk = peer.public_keys_dir.join("pqpk");
let psk = peer.public_keys_dir.join("psk");
let (pqpk, psk) = spawn_blocking(move || {
let pqpk = SPk::load(pqpk)?;
let psk = psk
.exists()
.then(|| SymKey::load_b64::<WG_B64_LEN, _>(psk))
.transpose()?;
anyhow::Ok((pqpk, psk))
})
.await??;
let mut extra_params: Vec<String> = Vec::with_capacity(6);
if let Some(endpoint) = peer.endpoint {
@@ -337,11 +504,11 @@ pub async fn exchange(options: ExchangeOptions) -> Result<()> {
}
let peer_cfg = NativeUnixBrokerConfigBaseBuilder::default()
.peer_id_b64(&fs::read_to_string(wgpk)?)?
.interface(link_name.clone())
.peer_id_b64(&wgpk)?
.interface(device.name().to_owned())
.extra_params_ser(&extra_params)?
.build()
.map_err(cfg_err_map)?;
.with_context(|| format!("Could not configure broker to supply keys from Rosenpass to WireGuard for peer {wgpk}."))?;
let broker_peer = Some(BrokerPeer::new(
broker_store_ptr.clone(),
@@ -349,64 +516,26 @@ pub async fn exchange(options: ExchangeOptions) -> Result<()> {
));
srv.add_peer(
if psk.exists() {
Some(SymKey::load_b64::<WG_B64_LEN, _>(psk))
} else {
None
}
.transpose()?,
SPk::load(&pqpk)?,
psk,
pqpk,
None,
broker_peer,
peer.endpoint.map(|x| x.to_string()),
peer.protocol_version,
OskDomainSeparator::for_wireguard_psk(),
)?;
// Configure routes, equivalent to `ip route replace <allowed_ips> dev <dev>` and set up
// the cleanup as `ip route del <allowed_ips>`.
if let Some(allowed_ips) = peer.allowed_ips {
Command::new("ip")
.arg("route")
.arg("replace")
.arg(allowed_ips.clone())
.arg("dev")
.arg(options.dev.clone().unwrap_or("rosenpass0".to_string()))
.args(["route", "replace", &allowed_ips, "dev", device.name()])
.status()
.expect("failed to configure route");
cleanup_handlers
.enqueue(Box::pin(async move {
Command::new("ip")
.arg("route")
.arg("del")
.arg(allowed_ips)
.status()
.expect("failed to remove ip");
Ok(())
}))
.await;
.await
.with_context(|| format!("Could not configure routes for peer {wgpk}"))?;
}
}
let out = srv.event_loop();
netlink::link_cleanup(&rtnetlink, link_index).await?;
match out {
Ok(_) => Ok(()),
Err(e) => {
// Check if the returned error is actually EINTR, in which case, the run actually
// succeeded.
let is_ok = if let Some(e) = e.root_cause().downcast_ref::<std::io::Error>() {
matches!(e.kind(), std::io::ErrorKind::Interrupted)
} else {
false
};
if is_ok {
Ok(())
} else {
Err(e)
}
}
}
log::info!("Starting to perform rosenpass key exchanges!");
spawn_blocking(move || srv.event_loop()).await?
}


@@ -9,7 +9,7 @@ use anyhow::{anyhow, Result};
use rosenpass_util::file::{LoadValueB64, StoreValue, StoreValueB64};
use zeroize::Zeroize;
use rosenpass::protocol::{SPk, SSk};
use rosenpass::protocol::basic_types::{SPk, SSk};
use rosenpass_cipher_traits::primitives::Kem;
use rosenpass_ciphers::StaticKem;
use rosenpass_secret_memory::{file::StoreSecret as _, Public, Secret};
@@ -118,7 +118,7 @@ pub fn pubkey(private_keys_dir: &Path, public_keys_dir: &Path) -> Result<()> {
mod tests {
use std::fs;
use rosenpass::protocol::{SPk, SSk};
use rosenpass::protocol::basic_types::{SPk, SSk};
use rosenpass_secret_memory::secret_policy_try_use_memfd_secrets;
use rosenpass_secret_memory::Secret;
use rosenpass_util::file::LoadValue;


@@ -1,59 +1,19 @@
use std::{fs, process::exit};
use cli::{Cli, Command};
use exchange::exchange;
use key::{genkey, pubkey};
use rosenpass_secret_memory::policy;
#[cfg(any(target_os = "linux", target_os = "freebsd"))]
mod cli;
#[cfg(any(target_os = "linux", target_os = "freebsd"))]
mod exchange;
#[cfg(any(target_os = "linux", target_os = "freebsd"))]
mod key;
#[tokio::main]
async fn main() {
#[cfg(feature = "experiment_memfd_secret")]
policy::secret_policy_try_use_memfd_secrets();
#[cfg(not(feature = "experiment_memfd_secret"))]
policy::secret_policy_use_only_malloc_secrets();
#[cfg(any(target_os = "linux", target_os = "freebsd"))]
mod main_supported_platforms;
let cli = match Cli::parse(std::env::args().peekable()) {
Ok(cli) => cli,
Err(err) => {
eprintln!("{}", err);
exit(1);
}
};
let command = cli.command.unwrap();
let res = match command {
Command::GenKey { private_keys_dir } => genkey(&private_keys_dir),
Command::PubKey {
private_keys_dir,
public_keys_dir,
} => pubkey(&private_keys_dir, &public_keys_dir),
Command::Exchange(mut options) => {
options.verbose = cli.verbose;
exchange(options).await
}
Command::ExchangeConfig { config_file } => {
let s: String = fs::read_to_string(config_file).expect("cannot read config");
let mut options: exchange::ExchangeOptions =
toml::from_str::<exchange::ExchangeOptions>(&s).expect("cannot parse config");
options.verbose = options.verbose || cli.verbose;
exchange(options).await
}
Command::Help => {
println!("Usage: rp [verbose] genkey|pubkey|exchange [ARGS]...");
Ok(())
}
};
match res {
Ok(_) => {}
Err(err) => {
eprintln!("An error occurred: {}", err);
exit(1);
}
}
#[cfg(any(target_os = "linux", target_os = "freebsd"))]
fn main() -> anyhow::Result<()> {
main_supported_platforms::main()
}
#[cfg(not(any(target_os = "linux", target_os = "freebsd")))]
fn main() {
panic!("Unfortunately, the rp command is currently not supported on your platform. See https://github.com/rosenpass/rosenpass/issues/689 for more information and discussion.")
}


@@ -0,0 +1,58 @@
use std::{fs, process::exit};
use rosenpass_util::tokio::janitor::ensure_janitor;
use rosenpass_secret_memory::policy;
use crate::cli::{Cli, Command};
use crate::exchange::{exchange, ExchangeOptions};
use crate::key::{genkey, pubkey};
#[tokio::main]
pub async fn main() -> anyhow::Result<()> {
#[cfg(feature = "experiment_memfd_secret")]
policy::secret_policy_try_use_memfd_secrets();
#[cfg(not(feature = "experiment_memfd_secret"))]
policy::secret_policy_use_only_malloc_secrets();
ensure_janitor(async move { main_impl().await }).await
}
async fn main_impl() -> anyhow::Result<()> {
let cli = match Cli::parse(std::env::args().peekable()) {
Ok(cli) => cli,
Err(err) => {
eprintln!("{}", err);
exit(1);
}
};
// init logging
// TODO: Taken from rosenpass; we should deduplicate the code.
env_logger::Builder::from_default_env().init(); // sets log level filter from environment (or defaults)
let command = cli.command.unwrap();
match command {
Command::GenKey { private_keys_dir } => genkey(&private_keys_dir),
Command::PubKey {
private_keys_dir,
public_keys_dir,
} => pubkey(&private_keys_dir, &public_keys_dir),
Command::Exchange(mut options) => {
options.verbose = cli.verbose;
exchange(options).await
}
Command::ExchangeConfig { config_file } => {
let s: String = fs::read_to_string(config_file).expect("cannot read config");
let mut options: ExchangeOptions =
toml::from_str::<ExchangeOptions>(&s).expect("cannot parse config");
options.verbose = options.verbose || cli.verbose;
exchange(options).await
}
Command::Help => {
println!("Usage: rp [verbose] genkey|pubkey|exchange [ARGS]...");
Ok(())
}
}
}


@@ -1,5 +1,6 @@
use std::process::Command;
#[cfg(any(target_os = "linux", target_os = "freebsd"))]
#[test]
fn smoketest() -> anyhow::Result<()> {
let tmpdir = tempfile::tempdir()?;


@@ -379,10 +379,7 @@ impl<const N: usize> StoreSecret for Secret<N> {
#[cfg(test)]
mod test {
use crate::{
secret_policy_try_use_memfd_secrets, secret_policy_use_only_malloc_secrets,
test_spawn_process_provided_policies,
};
use crate::{secret_policy_use_only_malloc_secrets, test_spawn_process_provided_policies};
use super::*;
use std::{fs, os::unix::fs::PermissionsExt};

supply-chain-CI.md

@@ -0,0 +1,25 @@
# Continuous Integration for supply chain protection
This repository's CI uses non-standard mechanisms to harmonize the usage of `dependabot` together with [`cargo vet`](https://mozilla.github.io/cargo-vet/). Since cargo-vet audits for new crate versions are rarely available immediately after dependabot bumps a version,
the exemptions for `cargo vet` have to be regenerated for each pull request opened by dependabot. Making this work requires some one-time CI setup. The required steps are as follows:
1. Create a new user on GitHub. For the purpose of these instructions, we will assume that its mail address is `ci@example.com` and that its username is `ci-bot`. Protect this user account as you would any other user account that you intend to give write permissions to. For example, set up MFA and protect the email address of the user. Make sure to verify the e-mail address.
2. Add `ci-bot` as a member of your organization with write access to the repository.
3. In your organization, go to "Settings" -> "Personal Access tokens" -> "Settings". There select "Allow access via fine-grained personal access tokens" and save. Depending on your preferences either choose "Require administrator approval" or "Do not require administrator approval".
4. Create a new personal access token as `ci-bot` for the rosenpass repository. That is, in the settings for `ci-bot`, select "Developer settings" -> "Personal Access tokens" -> "Fine-grained tokens". Then click on "Generate new token". Enter a name of your choosing and choose an expiration date that you feel comfortable with. A shorter expiration period will require more manual management by you but is more secure than a longer one. Select your organization as the resource owner and select the rosenpass repository as the repository. Under "Repository permissions", grant "Read and write" access to the "Contents" permission for the token. Grant no other permissions to the token, except for the read-only access to the "Metadata" permission, which is mandatory. Then generate the token and copy it for the next steps.
5. If you chose "Require administrator approval" in step 3, approve the fine-grained access token by going, as an organization administrator, to "Settings" -> "Personal Access tokens" -> "Pending requests" and granting the request.
6. Now, with your account that has administrative permissions for the repository, open the settings page for the repository and select "Secrets and variables" -> "Actions" and click "New repository secret". In the name field enter "CI_BOT_PAT". This name is mandatory, since it is explicitly referenced in the supply-chain workflow. Below, enter the token that was generated in step 4.
7. Analogously to step 6, open the settings page for the repository and select "Secrets and variables" -> "Dependabot" and click "New repository secret". In the name field enter "CI_BOT_PAT". This name is mandatory, since it is explicitly referenced in the supply-chain workflow. Below, enter the token that was generated in step 4.
## What this does
For the `cargo vet` check in the CI for dependabot, the `cargo vet` exemptions have to be regenerated automatically, because otherwise this CI job will always fail for dependabot PRs. After the exemptions have been regenerated, they need to be committed and pushed to the PR. This invalidates the CI run that pushed the commit, so that it no longer shows up in the PR, but it does not trigger a new CI run. This is a [protection by GitHub](https://docs.github.com/en/actions/security-for-github-actions/security-guides/automatic-token-authentication#using-the-github_token-in-a-workflow) to prevent infinite loops. However, in this case it prevents us from having a proper CI run for dependabot PRs. The solution is to execute the `push` operation with a personal access token.
## Preventing infinite loops
The CI is configured to avoid infinite loops by only regenerating and pushing the `cargo vet` exemptions if the CI run belongs to a PR opened by dependabot, and not for any other pushes or pull requests. In addition, one of the following conditions has to be met:
- The last commit was performed by dependabot
- The last commit message ends in `--regenerate-exemptions`
In summary, the exemptions are only regenerated in the context of pull requests opened by dependabot where the last commit was performed by dependabot or the last commit message ends in `--regenerate-exemptions`.
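The gating logic described above could be sketched as a GitHub Actions fragment roughly like the following. This is purely illustrative: job and step names, the exact expressions, and the checkout details are assumptions, not the repository's actual workflow; only the `CI_BOT_PAT` secret name and the `cargo vet regenerate exemptions` command come from this document.

```yaml
# Hypothetical sketch; names and expressions are assumptions for illustration.
jobs:
  regenerate-exemptions:
    # Only for PRs opened by dependabot, never for ordinary pushes/PRs
    if: github.event_name == 'pull_request' && github.actor == 'dependabot[bot]'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          token: ${{ secrets.CI_BOT_PAT }}  # PAT so the pushed commit re-triggers CI
      - name: Decide whether to regenerate
        id: gate
        run: |
          author="$(git log -1 --pretty=%an)"
          msg="$(git log -1 --pretty=%B)"
          # Last commit by dependabot, or message ends in --regenerate-exemptions
          if [ "$author" = "dependabot[bot]" ] || [ "${msg%--regenerate-exemptions}" != "$msg" ]; then
            echo "run=true" >> "$GITHUB_OUTPUT"
          fi
      - name: Regenerate, commit and push exemptions
        if: steps.gate.outputs.run == 'true'
        run: |
          cargo vet regenerate exemptions
          git commit -am "Regenerate cargo vet exemptions"
          git push
```

Note that `$( )` command substitution strips trailing newlines, so the suffix check on the full commit message body works as intended.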


@@ -74,7 +74,7 @@ version = "3.0.7"
criteria = "safe-to-deploy"
[[exemptions.anyhow]]
version = "1.0.96"
version = "1.0.98"
criteria = "safe-to-deploy"
[[exemptions.atomic-polyfill]]
@@ -142,7 +142,7 @@ version = "0.7.4"
criteria = "safe-to-deploy"
[[exemptions.clap_mangen]]
version = "0.2.24"
version = "0.2.29"
criteria = "safe-to-deploy"
[[exemptions.cmake]]
@@ -257,10 +257,6 @@ criteria = "safe-to-deploy"
version = "0.10.2"
criteria = "safe-to-deploy"
[[exemptions.fastrand]]
version = "2.3.0"
criteria = "safe-to-deploy"
[[exemptions.findshlibs]]
version = "0.10.2"
criteria = "safe-to-run"
@@ -285,10 +281,6 @@ criteria = "safe-to-deploy"
version = "0.2.15"
criteria = "safe-to-deploy"
[[exemptions.gimli]]
version = "0.31.1"
criteria = "safe-to-deploy"
[[exemptions.hash32]]
version = "0.2.1"
criteria = "safe-to-deploy"
@@ -341,6 +333,10 @@ criteria = "safe-to-deploy"
version = "2.1.0"
criteria = "safe-to-deploy"
[[exemptions.io-uring]]
version = "0.7.9"
criteria = "safe-to-deploy"
[[exemptions.ipc-channel]]
version = "0.18.3"
criteria = "safe-to-run"
@@ -370,7 +366,7 @@ version = "1.3.0"
criteria = "safe-to-deploy"
[[exemptions.libc]]
version = "0.2.169"
version = "0.2.174"
criteria = "safe-to-deploy"
[[exemptions.libcrux]]
@@ -529,10 +525,6 @@ criteria = "safe-to-deploy"
version = "1.0.15"
criteria = "safe-to-deploy"
[[exemptions.pin-project-lite]]
version = "0.2.16"
criteria = "safe-to-deploy"
[[exemptions.pkg-config]]
version = "0.3.31"
criteria = "safe-to-deploy"
@@ -581,14 +573,6 @@ criteria = "safe-to-deploy"
version = "0.9.0"
criteria = "safe-to-deploy"
[[exemptions.rand_chacha]]
version = "0.9.0"
criteria = "safe-to-deploy"
[[exemptions.rand_core]]
version = "0.9.3"
criteria = "safe-to-deploy"
[[exemptions.redox_syscall]]
version = "0.5.9"
criteria = "safe-to-deploy"
@@ -630,7 +614,7 @@ version = "3.0.7"
criteria = "safe-to-run"
[[exemptions.serde_json]]
version = "1.0.139"
version = "1.0.140"
criteria = "safe-to-deploy"
[[exemptions.serde_spanned]]
@@ -646,7 +630,11 @@ version = "3.2.0"
criteria = "safe-to-run"
[[exemptions.signal-hook]]
version = "0.3.17"
version = "0.3.18"
criteria = "safe-to-deploy"
[[exemptions.signal-hook-mio]]
version = "0.2.4"
criteria = "safe-to-deploy"
[[exemptions.signal-hook-registry]]
@@ -658,7 +646,7 @@ version = "0.4.9"
criteria = "safe-to-deploy"
[[exemptions.socket2]]
version = "0.5.8"
version = "0.6.0"
criteria = "safe-to-deploy"
[[exemptions.spin]]
@@ -702,7 +690,7 @@ version = "2.0.11"
criteria = "safe-to-deploy"
[[exemptions.tokio]]
version = "1.44.2"
version = "1.47.0"
criteria = "safe-to-deploy"
[[exemptions.tokio-macros]]
@@ -733,10 +721,6 @@ criteria = "safe-to-deploy"
version = "1.0.17"
criteria = "safe-to-deploy"
[[exemptions.utf8parse]]
version = "0.2.2"
criteria = "safe-to-deploy"
[[exemptions.uuid]]
version = "1.14.0"
criteria = "safe-to-deploy"
@@ -847,7 +831,7 @@ criteria = "safe-to-deploy"
[[exemptions.windows-targets]]
version = "0.48.5"
criteria = "safe-to-deploy"
criteria = "safe-to-run"
[[exemptions.windows-targets]]
version = "0.52.6"
@@ -859,7 +843,7 @@ criteria = "safe-to-deploy"
[[exemptions.windows_aarch64_gnullvm]]
version = "0.48.5"
criteria = "safe-to-deploy"
criteria = "safe-to-run"
[[exemptions.windows_aarch64_gnullvm]]
version = "0.52.6"
@@ -871,7 +855,7 @@ criteria = "safe-to-deploy"
[[exemptions.windows_aarch64_msvc]]
version = "0.48.5"
criteria = "safe-to-deploy"
criteria = "safe-to-run"
[[exemptions.windows_aarch64_msvc]]
version = "0.52.6"
@@ -883,7 +867,7 @@ criteria = "safe-to-deploy"
[[exemptions.windows_i686_gnu]]
version = "0.48.5"
criteria = "safe-to-deploy"
criteria = "safe-to-run"
[[exemptions.windows_i686_gnu]]
version = "0.52.6"
@@ -899,7 +883,7 @@ criteria = "safe-to-deploy"
[[exemptions.windows_i686_msvc]]
version = "0.48.5"
criteria = "safe-to-deploy"
criteria = "safe-to-run"
[[exemptions.windows_i686_msvc]]
version = "0.52.6"
@@ -911,7 +895,7 @@ criteria = "safe-to-deploy"
[[exemptions.windows_x86_64_gnu]]
version = "0.48.5"
criteria = "safe-to-deploy"
criteria = "safe-to-run"
[[exemptions.windows_x86_64_gnu]]
version = "0.52.6"
@@ -923,7 +907,7 @@ criteria = "safe-to-deploy"
[[exemptions.windows_x86_64_gnullvm]]
version = "0.48.5"
criteria = "safe-to-deploy"
criteria = "safe-to-run"
[[exemptions.windows_x86_64_gnullvm]]
version = "0.52.6"
@@ -935,7 +919,7 @@ criteria = "safe-to-deploy"
[[exemptions.windows_x86_64_msvc]]
version = "0.48.5"
criteria = "safe-to-deploy"
criteria = "safe-to-run"
[[exemptions.windows_x86_64_msvc]]
version = "0.52.6"


@@ -35,7 +35,7 @@ who = "Alex Crichton <alex@alexcrichton.com>"
criteria = "safe-to-deploy"
user-id = 73222 # wasmtime-publish
start = "2023-01-01"
end = "2025-05-08"
end = "2026-06-03"
notes = """
The Bytecode Alliance uses the `wasmtime-publish` crates.io account to automate
publication of this crate from CI. This repository requires all PRs are reviewed
@@ -144,6 +144,21 @@ who = "Dan Gohman <dev@sunfishcode.online>"
criteria = "safe-to-deploy"
delta = "0.3.9 -> 0.3.10"
[[audits.bytecode-alliance.audits.fastrand]]
who = "Alex Crichton <alex@alexcrichton.com>"
criteria = "safe-to-deploy"
delta = "2.0.0 -> 2.0.1"
notes = """
This update had a few doc updates but no otherwise-substantial source code
updates.
"""
[[audits.bytecode-alliance.audits.fastrand]]
who = "Alex Crichton <alex@alexcrichton.com>"
criteria = "safe-to-deploy"
delta = "2.1.1 -> 2.3.0"
notes = "Minor refactoring, nothing new."
[[audits.bytecode-alliance.audits.futures]]
who = "Joel Dice <joel.dice@gmail.com>"
criteria = "safe-to-deploy"
@@ -190,6 +205,18 @@ who = "Pat Hickey <pat@moreproductive.org>"
criteria = "safe-to-deploy"
delta = "0.3.28 -> 0.3.31"
[[audits.bytecode-alliance.audits.gimli]]
who = "Alex Crichton <alex@alexcrichton.com>"
criteria = "safe-to-deploy"
delta = "0.29.0 -> 0.31.0"
notes = "Various updates here and there, nothing too major, what you'd expect from a DWARF parsing crate."
[[audits.bytecode-alliance.audits.gimli]]
who = "Alex Crichton <alex@alexcrichton.com>"
criteria = "safe-to-deploy"
delta = "0.31.0 -> 0.31.1"
notes = "No fundamentally new `unsafe` code, some small refactoring of existing code. Lots of changes in tests, not as many changes in the rest of the crate. More dwarf!"
[[audits.bytecode-alliance.audits.heck]]
who = "Alex Crichton <alex@alexcrichton.com>"
criteria = "safe-to-deploy"
@@ -207,6 +234,12 @@ who = "Dan Gohman <dev@sunfishcode.online>"
criteria = "safe-to-deploy"
delta = "1.0.11 -> 1.0.14"
[[audits.bytecode-alliance.audits.log]]
who = "Alex Crichton <alex@alexcrichton.com>"
criteria = "safe-to-deploy"
delta = "0.4.22 -> 0.4.27"
notes = "Lots of minor updates to macros and such, nothing touching `unsafe`"
[[audits.bytecode-alliance.audits.miniz_oxide]]
who = "Alex Crichton <alex@alexcrichton.com>"
criteria = "safe-to-deploy"
@@ -249,6 +282,12 @@ criteria = "safe-to-deploy"
version = "1.0.0"
notes = "I am the author of this crate."
[[audits.bytecode-alliance.audits.pin-project-lite]]
who = "Alex Crichton <alex@alexcrichton.com>"
criteria = "safe-to-deploy"
delta = "0.2.13 -> 0.2.14"
notes = "No substantive changes in this update"
[[audits.bytecode-alliance.audits.pin-utils]]
who = "Pat Hickey <phickey@fastly.com>"
criteria = "safe-to-deploy"
@@ -301,6 +340,12 @@ criteria = "safe-to-deploy"
version = "1.0.40"
notes = "Found no unsafe or ambient capabilities used"
[[audits.embark-studios.audits.utf8parse]]
who = "Johan Andersson <opensource@embark-studios.com>"
criteria = "safe-to-deploy"
version = "0.2.1"
notes = "Single unsafe usage that looks sound, no ambient capabilities"
[[audits.fermyon.audits.oorandom]]
who = "Radu Matei <radu.matei@fermyon.com>"
criteria = "safe-to-run"
@@ -411,6 +456,16 @@ delta = "1.0.1 -> 1.0.2"
notes = "No changes to any .rs files or Rust code."
aggregated-from = "https://chromium.googlesource.com/chromium/src/+/main/third_party/rust/chromium_crates_io/supply-chain/audits.toml?format=TEXT"
[[audits.google.audits.fastrand]]
who = "George Burgess IV <gbiv@google.com>"
criteria = "safe-to-deploy"
version = "1.9.0"
notes = """
`does-not-implement-crypto` is certified because this crate explicitly says
that the RNG here is not cryptographically secure.
"""
aggregated-from = "https://chromium.googlesource.com/chromiumos/third_party/rust_crates/+/refs/heads/main/cargo-vet/audits.toml?format=TEXT"
[[audits.google.audits.glob]]
who = "George Burgess IV <gbiv@google.com>"
criteria = "safe-to-deploy"
@@ -524,20 +579,6 @@ describe in the review doc.
"""
aggregated-from = "https://chromium.googlesource.com/chromium/src/+/main/third_party/rust/chromium_crates_io/supply-chain/audits.toml?format=TEXT"
[[audits.google.audits.log]]
who = "Lukasz Anforowicz <lukasza@chromium.org>"
criteria = "safe-to-deploy"
delta = "0.4.22 -> 0.4.25"
notes = "No impact on `unsafe` usage in `lib.rs`."
aggregated-from = "https://chromium.googlesource.com/chromium/src/+/main/third_party/rust/chromium_crates_io/supply-chain/audits.toml?format=TEXT"
[[audits.google.audits.log]]
who = "Daniel Cheng <dcheng@chromium.org>"
criteria = "safe-to-deploy"
delta = "0.4.25 -> 0.4.26"
notes = "Only trivial code and documentation changes."
aggregated-from = "https://chromium.googlesource.com/chromium/src/+/main/third_party/rust/chromium_crates_io/supply-chain/audits.toml?format=TEXT"
[[audits.google.audits.nom]]
who = "danakj@chromium.org"
criteria = "safe-to-deploy"
@@ -554,6 +595,20 @@ version = "0.1.46"
notes = "Contains no unsafe"
aggregated-from = "https://chromium.googlesource.com/chromium/src/+/main/third_party/rust/chromium_crates_io/supply-chain/audits.toml?format=TEXT"
[[audits.google.audits.pin-project-lite]]
who = "David Koloski <dkoloski@google.com>"
criteria = "safe-to-deploy"
version = "0.2.9"
notes = "Reviewed on https://fxrev.dev/824504"
aggregated-from = "https://fuchsia.googlesource.com/fuchsia/+/refs/heads/main/third_party/rust_crates/supply-chain/audits.toml?format=TEXT"
[[audits.google.audits.pin-project-lite]]
who = "David Koloski <dkoloski@google.com>"
criteria = "safe-to-deploy"
delta = "0.2.9 -> 0.2.13"
notes = "Audited at https://fxrev.dev/946396"
aggregated-from = "https://fuchsia.googlesource.com/fuchsia/+/refs/heads/main/third_party/rust_crates/supply-chain/audits.toml?format=TEXT"
[[audits.google.audits.proc-macro-error-attr]]
who = "George Burgess IV <gbiv@google.com>"
criteria = "safe-to-deploy"
@@ -708,6 +763,24 @@ For more detailed unsafe review notes please see https://crrev.com/c/6362797
"""
aggregated-from = "https://chromium.googlesource.com/chromium/src/+/main/third_party/rust/chromium_crates_io/supply-chain/audits.toml?format=TEXT"
[[audits.google.audits.rand_chacha]]
who = "Lukasz Anforowicz <lukasza@chromium.org>"
criteria = "safe-to-deploy"
version = "0.3.1"
notes = """
For more detailed unsafe review notes please see https://crrev.com/c/6362797
"""
aggregated-from = "https://chromium.googlesource.com/chromium/src/+/main/third_party/rust/chromium_crates_io/supply-chain/audits.toml?format=TEXT"
[[audits.google.audits.rand_core]]
who = "Lukasz Anforowicz <lukasza@chromium.org>"
criteria = "safe-to-deploy"
version = "0.6.4"
notes = """
For more detailed unsafe review notes please see https://crrev.com/c/6362797
"""
aggregated-from = "https://chromium.googlesource.com/chromium/src/+/main/third_party/rust/chromium_crates_io/supply-chain/audits.toml?format=TEXT"
[[audits.google.audits.regex-syntax]]
who = "Manish Goregaokar <manishearth@google.com>"
criteria = "safe-to-deploy"
@@ -1158,12 +1231,12 @@ version = "0.3.0"
[[audits.isrg.audits.rand_chacha]]
who = "David Cook <dcook@divviup.org>"
criteria = "safe-to-deploy"
version = "0.3.1"
delta = "0.3.1 -> 0.9.0"
[[audits.isrg.audits.rand_core]]
who = "David Cook <dcook@divviup.org>"
criteria = "safe-to-deploy"
version = "0.6.3"
delta = "0.6.4 -> 0.9.3"
[[audits.isrg.audits.rayon]]
who = "Brandon Pitman <bran@bran.land>"
@@ -1379,6 +1452,25 @@ criteria = "safe-to-deploy"
delta = "0.3.1 -> 0.3.3"
aggregated-from = "https://hg.mozilla.org/mozilla-central/raw-file/tip/supply-chain/audits.toml"
[[audits.mozilla.audits.fastrand]]
who = "Mike Hommey <mh+mozilla@glandium.org>"
criteria = "safe-to-deploy"
delta = "1.9.0 -> 2.0.0"
aggregated-from = "https://hg.mozilla.org/mozilla-central/raw-file/tip/supply-chain/audits.toml"
[[audits.mozilla.audits.fastrand]]
who = "Mike Hommey <mh+mozilla@glandium.org>"
criteria = "safe-to-deploy"
delta = "2.0.1 -> 2.1.0"
aggregated-from = "https://hg.mozilla.org/mozilla-central/raw-file/tip/supply-chain/audits.toml"
[[audits.mozilla.audits.fastrand]]
who = "Chris Martin <cmartin@mozilla.com>"
criteria = "safe-to-deploy"
delta = "2.1.0 -> 2.1.1"
notes = "Fairly trivial changes, no chance of security regression."
aggregated-from = "https://hg.mozilla.org/mozilla-central/raw-file/tip/supply-chain/audits.toml"
[[audits.mozilla.audits.fnv]]
who = "Bobby Holley <bobbyholley@gmail.com>"
criteria = "safe-to-deploy"
@@ -1409,6 +1501,23 @@ documentation.
"""
aggregated-from = "https://hg.mozilla.org/mozilla-central/raw-file/tip/supply-chain/audits.toml"
[[audits.mozilla.audits.gimli]]
who = "Alex Franchuk <afranchuk@mozilla.com>"
criteria = "safe-to-deploy"
version = "0.30.0"
notes = """
Unsafe code blocks are sound. Minimal dependencies used. No use of
side-effectful std functions.
"""
aggregated-from = "https://hg.mozilla.org/mozilla-central/raw-file/tip/supply-chain/audits.toml"
[[audits.mozilla.audits.gimli]]
who = "Chris Martin <cmartin@mozilla.com>"
criteria = "safe-to-deploy"
delta = "0.30.0 -> 0.29.0"
notes = "No unsafe code, mostly algorithms and parsing. Very unlikely to cause security issues."
aggregated-from = "https://hg.mozilla.org/mozilla-central/raw-file/tip/supply-chain/audits.toml"
[[audits.mozilla.audits.hex]]
who = "Simon Friedberger <simon@mozilla.com>"
criteria = "safe-to-deploy"
@@ -1428,11 +1537,15 @@ delta = "1.0.0 -> 0.1.2"
notes = "Small refactor of some simple iterator logic, no unsafe code or capabilities."
aggregated-from = "https://hg.mozilla.org/mozilla-central/raw-file/tip/supply-chain/audits.toml"
[[audits.mozilla.audits.rand_core]]
who = "Mike Hommey <mh+mozilla@glandium.org>"
[[audits.mozilla.audits.pin-project-lite]]
who = "Nika Layzell <nika@thelayzells.com>"
criteria = "safe-to-deploy"
delta = "0.6.3 -> 0.6.4"
aggregated-from = "https://hg.mozilla.org/mozilla-central/raw-file/tip/supply-chain/audits.toml"
delta = "0.2.14 -> 0.2.16"
notes = """
Only functional change is to work around a bug in the negative_impls feature
(https://github.com/taiki-e/pin-project/issues/340#issuecomment-2432146009)
"""
aggregated-from = "https://raw.githubusercontent.com/mozilla/cargo-vet/main/supply-chain/audits.toml"
[[audits.mozilla.audits.rayon]]
who = "Josh Stone <jistone@redhat.com>"
@@ -1491,6 +1604,12 @@ criteria = "safe-to-deploy"
delta = "1.0.43 -> 1.0.69"
aggregated-from = "https://raw.githubusercontent.com/mozilla/glean/main/supply-chain/audits.toml"
[[audits.mozilla.audits.utf8parse]]
who = "Nika Layzell <nika@thelayzells.com>"
criteria = "safe-to-deploy"
delta = "0.2.1 -> 0.2.2"
aggregated-from = "https://raw.githubusercontent.com/mozilla/cargo-vet/main/supply-chain/audits.toml"
[[audits.mozilla.audits.zeroize]]
who = "Benjamin Beurdouche <beurdouche@mozilla.com>"
criteria = "safe-to-deploy"


@@ -25,7 +25,15 @@ mio = { workspace = true }
tempfile = { workspace = true }
uds = { workspace = true, optional = true, features = ["mio_1xx"] }
libcrux-test-utils = { workspace = true, optional = true }
tokio = { workspace = true, optional = true, features = [
"macros",
"rt-multi-thread",
"sync",
"time",
] }
log = { workspace = true }
[features]
experiment_file_descriptor_passing = ["uds"]
trace_bench = ["dep:libcrux-test-utils"]
tokio = ["dep:tokio"]


@@ -561,7 +561,7 @@ mod tests {
let mut file = FdIo(open_nullfd()?);
let mut buf = [0; 10];
assert!(matches!(file.read(&mut buf), Ok(0) | Err(_)));
assert!(matches!(file.write(&buf), Err(_)));
assert!(file.write(&buf).is_err());
Ok(())
}

util/src/fmt/debug.rs

@@ -0,0 +1,82 @@
//! Helpers for string formatting with the debug formatter; extensions for [std::fmt::Debug]
use std::any::type_name;
use std::borrow::{Borrow, BorrowMut};
use std::ops::{Deref, DerefMut};
/// Debug formatter which just prints the type name;
/// used to wrap values which do not support the Debug
/// trait themselves
///
/// # Examples
///
/// ```rust
/// use rosenpass_util::fmt::debug::NullDebug;
///
/// // Does not implement debug
/// struct NoDebug;
///
/// #[derive(Debug)]
/// struct ShouldSupportDebug {
/// #[allow(dead_code)]
/// no_debug: NullDebug<NoDebug>,
/// }
///
/// let val = ShouldSupportDebug {
/// no_debug: NullDebug(NoDebug),
/// };
/// ```
pub struct NullDebug<T>(pub T);
impl<T> std::fmt::Debug for NullDebug<T> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.write_str("NullDebug<")?;
f.write_str(type_name::<T>())?;
f.write_str(">")?;
Ok(())
}
}
impl<T> From<T> for NullDebug<T> {
fn from(value: T) -> Self {
NullDebug(value)
}
}
impl<T> Deref for NullDebug<T> {
type Target = T;
fn deref(&self) -> &Self::Target {
self.0.borrow()
}
}
impl<T> DerefMut for NullDebug<T> {
fn deref_mut(&mut self) -> &mut Self::Target {
self.0.borrow_mut()
}
}
impl<T> Borrow<T> for NullDebug<T> {
fn borrow(&self) -> &T {
self.deref()
}
}
impl<T> BorrowMut<T> for NullDebug<T> {
fn borrow_mut(&mut self) -> &mut T {
self.deref_mut()
}
}
impl<T> AsRef<T> for NullDebug<T> {
fn as_ref(&self) -> &T {
self.deref()
}
}
impl<T> AsMut<T> for NullDebug<T> {
fn as_mut(&mut self) -> &mut T {
self.deref_mut()
}
}

util/src/fmt/mod.rs

@@ -0,0 +1,3 @@
//! Helpers for string formatting; extensions for [std::fmt]
pub mod debug;


@@ -618,7 +618,7 @@ mod tests {
#[test]
fn test_lpe_error_conversion_downcast_invalid() {
let pos_error = PositionOutOfBufferBounds;
let sanity_error = SanityError::PositionOutOfBufferBounds(pos_error.into());
let sanity_error = SanityError::PositionOutOfBufferBounds(pos_error);
match MessageLenSanityError::try_from(sanity_error) {
Ok(_) => panic!("Conversion should always fail (incompatible enum variant)"),
Err(err) => assert!(matches!(err, PositionOutOfBufferBounds)),


@@ -14,6 +14,7 @@ pub mod controlflow;
pub mod fd;
/// File system operations and handling.
pub mod file;
pub mod fmt;
/// Functional programming utilities.
pub mod functional;
/// Input/output operations.
@@ -30,6 +31,8 @@ pub mod option;
pub mod result;
/// Time and duration utilities.
pub mod time;
#[cfg(feature = "tokio")]
pub mod tokio;
/// Trace benchmarking utilities
#[cfg(feature = "trace_bench")]
pub mod trace_bench;


@@ -302,6 +302,6 @@ mod test_forgetting {
drop_was_called.store(false, SeqCst);
let forgetting = Forgetting::new(SetFlagOnDrop(drop_was_called.clone()));
drop(forgetting);
assert_eq!(drop_was_called.load(SeqCst), false);
assert!(!drop_was_called.load(SeqCst));
}
}


@@ -39,7 +39,7 @@ use crate::fd::{claim_fd_inplace, IntoStdioErr};
/// &io_stream,
/// &mut read_fd_buffer,
/// );
////
///
/// // Simulated reads; the actual operations will depend on the protocol (implementation details)
/// let mut recv_buffer = Vec::<u8>::new();
/// let bytes_read = fd_passing_sock.read(&mut recv_buffer[..]).expect("error reading from socket");

util/src/tokio/janitor.rs

@@ -0,0 +1,618 @@
//! Facilities to spawn tasks that will be reliably executed
//! before the current tokio context finishes.
//!
//! Asynchronous applications often need to manage multiple parallel tasks.
//! Tokio supports spawning these tasks with [tokio::task::spawn], but when the
//! tokio event loop exits, all lingering background tasks will be aborted.
//!
//! Tokio supports managing multiple parallel tasks, all of which should exit successfully, through
//! [tokio::task::JoinSet]. This is a useful and very explicit API. To launch a background job,
//! user code needs to be aware of which JoinSet to use, so this can lead to a JoinSet needing to
//! be handed around in many parts of the application.
//!
//! This level of explicitness avoids bugs, but it can be cumbersome to use and it can introduce a
//! [function coloring](https://morestina.net/1686/rust-async-is-colored) issue;
//! creating a strong distinction between functions which have access
//! to a JoinSet (one color) and those that have not (the other color). Functions with the color
//! that has access to a JoinSet can call those functions that do not need access, but not the
//! other way around. This can make refactoring quite difficult: your refactor needs to use a
//! function that requires a JoinSet? Then have fun spending quite a bit of time recoloring
//! possibly many parts of your code base.
//!
//! This module solves this issue by essentially registering a central [JoinSet] through ambient
//! (semi-global), task-local variables. The mechanism to register this task-local JoinSet is
//! [tokio::task_local].
//!
//! # Error-handling
//!
//! The janitor accepts daemons/cleanup jobs which return an [anyhow::Error].
//! When any daemon returns an error, then the entire janitor will immediately exit with a failure
//! without awaiting the other registered tasks.
//!
//! The janitor can generally produce errors in three scenarios:
//!
//! - A daemon panics
//! - A daemon returns an error
//! - An internal error
//!
//! When [enter_janitor]/[ensure_janitor] is used to set up a janitor, these functions will always
//! panic in case of a janitor error. **This also means that these functions panic if any daemon
//! returns an error**.
//!
//! You can explicitly handle janitor errors through [try_enter_janitor]/[try_ensure_janitor].
//!
//! # Examples
//!
#![doc = "```ignore"]
#![doc = include_str!("../../tests/janitor.rs")]
#![doc = "```"]
use std::any::type_name;
use std::future::Future;
use anyhow::{bail, Context};
use tokio::task::{AbortHandle, JoinError, JoinHandle, JoinSet};
use tokio::task_local;
use tokio::sync::mpsc::unbounded_channel as janitor_channel;
use crate::tokio::local_key::LocalKeyExt;
/// Type for the message queue from [JanitorClient]/[JanitorSupervisor] to [JanitorAgent]: Receiving side
type JanitorQueueRx = tokio::sync::mpsc::UnboundedReceiver<JanitorTicket>;
/// Type for the message queue from [JanitorClient]/[JanitorSupervisor] to [JanitorAgent]: Sending side
type JanitorQueueTx = tokio::sync::mpsc::UnboundedSender<JanitorTicket>;
/// Type for the message queue from [JanitorClient]/[JanitorSupervisor] to [JanitorAgent]: Sending side, Weak reference
type WeakJanitorQueueTx = tokio::sync::mpsc::WeakUnboundedSender<JanitorTicket>;
/// Type of the return value for jobs submitted to [spawn_daemon]/[spawn_cleanup_job]
type CleanupJobResult = anyhow::Result<()>;
/// Handle by which we internally refer to cleanup jobs submitted by [spawn_daemon]/[spawn_cleanup_job]
/// to the current [JanitorAgent]
type CleanupJob = JoinHandle<CleanupJobResult>;
task_local! {
/// Handle to the current [JanitorAgent]; this is where [ensure_janitor]/[enter_janitor]
/// register the newly created janitor
static CURRENT_JANITOR: JanitorClient;
}
/// Messages supported by [JanitorAgent]
#[derive(Debug)]
enum JanitorTicket {
/// This message transmits a new cleanup job to the [JanitorAgent]
CleanupJob(CleanupJob),
}
/// Represents the background task which actually manages cleanup jobs.
///
/// This is what is started by [enter_janitor]/[ensure_janitor]
/// and what receives the messages sent by [JanitorSupervisor]/[JanitorClient]
#[derive(Debug)]
struct JanitorAgent {
/// Background tasks currently registered with this agent.
///
/// This contains two types of tasks:
///
/// 1. Background jobs launched through [enter_janitor]/[ensure_janitor]
/// 2. A single task waiting for new [JanitorTicket]s being transmitted from a [JanitorSupervisor]/[JanitorClient]
tasks: JoinSet<AgentInternalEvent>,
/// Whether this [JanitorAgent] will ever receive new [JanitorTicket]s
///
/// Communication between [JanitorAgent] and [JanitorSupervisor]/[JanitorClient] uses a message
/// queue (see [JanitorQueueTx]/[JanitorQueueRx]/[WeakJanitorQueueTx]), but you may notice that
/// the Agent does not actually contain a field storing the message queue.
/// Instead, to appease the borrow checker, the message queue is moved into the internal
/// background task (see [Self::tasks]) that waits for new [JanitorTicket]s.
///
/// Since our state machine still needs to know whether that queue is closed, we maintain this
/// flag.
///
/// See [AgentInternalEvent::TicketQueueClosed].
ticket_queue_closed: bool,
}
/// These are the return values (events) returned by [JanitorAgent] internal tasks (see
/// [JanitorAgent::tasks]).
#[derive(Debug)]
enum AgentInternalEvent {
/// Notifies the [JanitorAgent] state machine that a cleanup job finished successfully
///
/// Sent by genuine background tasks registered through [enter_janitor]/[ensure_janitor].
CleanupJobSuccessful,
/// Notifies the [JanitorAgent] state machine that a cleanup job finished with a tokio
/// [JoinError].
///
/// Sent by genuine background tasks registered through [enter_janitor]/[ensure_janitor].
CleanupJobJoinError(JoinError),
/// Notifies the [JanitorAgent] state machine that a cleanup job returned an error.
///
/// Sent by genuine background tasks registered through [enter_janitor]/[ensure_janitor].
CleanupJobReturnedError(anyhow::Error),
/// Notifies the [JanitorAgent] state machine that a new cleanup job was received through the
/// ticket queue.
///
/// Sent by the background task managing the ticket queue.
ReceivedCleanupJob(JanitorQueueRx, CleanupJob),
/// Notifies the [JanitorAgent] state machine that the ticket queue was closed
/// and no further cleanup jobs will be received.
///
/// Sent by the background task managing the ticket queue.
///
/// See [JanitorAgent::ticket_queue_closed].
TicketQueueClosed,
}
impl JanitorAgent {
/// Create a new agent. Start with [Self::start].
fn new() -> Self {
let tasks = JoinSet::new();
let ticket_queue_closed = false;
Self {
tasks,
ticket_queue_closed,
}
}
/// Main entry point for the [JanitorAgent]. Launches the background task and returns a [JanitorSupervisor]
/// which can be used to send tickets to the agent and to wait for agent termination.
pub async fn start() -> JanitorSupervisor {
let (queue_tx, queue_rx) = janitor_channel();
let join_handle = tokio::spawn(async move { Self::new().event_loop(queue_rx).await });
JanitorSupervisor::new(join_handle, queue_tx)
}
/// Event loop, processing events from the ticket queue and from [Self::tasks]
async fn event_loop(&mut self, queue_rx: JanitorQueueRx) -> anyhow::Result<()> {
// Seed the internal task list with a single task to receive tickets from the ticket queue
self.spawn_internal_recv_ticket_task(queue_rx).await;
// Process all incoming events until handle_one_event indicates there are
// no more events to process
while self.handle_one_event().await?.is_some() {}
Ok(())
}
/// Process events from [Self::tasks] (and by proxy from the ticket queue)
///
/// This is the agent's main state machine.
async fn handle_one_event(&mut self) -> anyhow::Result<Option<()>> {
use AgentInternalEvent as E;
match (self.tasks.join_next().await, self.ticket_queue_closed) {
// Normal, successful operation
// CleanupJob exited successfully, no action necessary
(Some(Ok(E::CleanupJobSuccessful)), _) => Ok(Some(())),
// New cleanup job scheduled, add to task list and wait for another task
(Some(Ok(E::ReceivedCleanupJob(queue_rx, job))), _) => {
self.spawn_internal_recv_ticket_task(queue_rx).await;
self.spawn_internal_cleanup_task(job).await;
Ok(Some(()))
}
// Ticket queue is closed; now we are just waiting for the remaining cleanup jobs
// to terminate
(Some(Ok(E::TicketQueueClosed)), _) => {
self.ticket_queue_closed = true;
Ok(Some(()))
}
// No more tasks in the task manager and the ticket queue is already closed.
// This just means we are done and can finally terminate the janitor agent
(Option::None, true) => Ok(None),
// Error handling
// User callback errors
// Some cleanup job returned an error as a result
(Some(Ok(E::CleanupJobReturnedError(err))), _) => Err(err).with_context(|| {
format!("Error in cleanup job handled by {}", type_name::<Self>())
}),
// JoinError produced by the user task: The user task was cancelled.
(Some(Ok(E::CleanupJobJoinError(err))), _) if err.is_cancelled() => Err(err).with_context(|| {
format!(
"Error in cleanup job handled by {me}; the cleanup task was cancelled. \
This should not happen and likely indicates a developer error in {me}.",
me = type_name::<Self>()
)
}),
// JoinError produced by the user task: The user task panicked
(Some(Ok(E::CleanupJobJoinError(err))), _) => Err(err).with_context(|| {
format!(
"Error in cleanup job handled by {}; looks like the cleanup task panicked.",
type_name::<Self>()
)
}),
// Internal errors: Internal task error
// JoinError produced by JoinSet::join_next(): The internal task was cancelled
(Some(Err(err)), _) if err.is_cancelled() => Err(err).with_context(|| {
format!(
"Internal error in {me}; internal async task was cancelled. \
This is probably a developer error in {me}.",
me = type_name::<Self>()
)
}),
// JoinError produced by JoinSet::join_next(): The internal task panicked
(Some(Err(err)), _) => Err(err).with_context(|| {
format!(
"Internal error in {me}; internal async task panicked. \
This is probably a developer error in {me}.",
me = type_name::<Self>()
)
}),
// Internal errors: State machine failure
// No tasks left, but ticket queue was not drained
(Option::None, false) => bail!("Internal error in {me}::handle_one_event(); \
there are no more internal tasks active, but the ticket queue was not drained. \
The {me}::handle_one_event() code is deliberately designed to never leave the internal task set empty; \
instead, there should always be one task to receive new cleanup jobs from the task queue unless the task \
queue has been closed. \
This is probably a developer error.",
me = type_name::<Self>())
}
}
/// Used by [Self::event_loop] and [Self::handle_one_event] to start the internal
/// task waiting for tickets on the ticket queue.
async fn spawn_internal_recv_ticket_task(
&mut self,
mut queue_rx: JanitorQueueRx,
) -> AbortHandle {
self.tasks.spawn(async {
use AgentInternalEvent as E;
use JanitorTicket as T;
let ticket = queue_rx.recv().await;
match ticket {
Some(T::CleanupJob(job)) => E::ReceivedCleanupJob(queue_rx, job),
Option::None => E::TicketQueueClosed,
}
})
}
/// Used by [Self::event_loop] and [Self::handle_one_event] to register
/// background deamons/cleanup jobs submitted via [JanitorTicket]
async fn spawn_internal_cleanup_task(&mut self, job: CleanupJob) -> AbortHandle {
self.tasks.spawn(async {
use AgentInternalEvent as E;
match job.await {
Ok(Ok(())) => E::CleanupJobSuccessful,
Ok(Err(e)) => E::CleanupJobReturnedError(e),
Err(e) => E::CleanupJobJoinError(e),
}
})
}
}
/// Client for [JanitorAgent]. Allows for [JanitorTicket]s (background jobs)
/// to be transmitted to the current [JanitorAgent].
///
/// This is stored in [CURRENT_JANITOR] as a task-local variable.
#[derive(Debug)]
struct JanitorClient {
/// Queue we can use to send messages to the current janitor
queue_tx: WeakJanitorQueueTx,
}
impl JanitorClient {
/// Create a new client. Use through [JanitorSupervisor::get_client]
fn new(queue_tx: WeakJanitorQueueTx) -> Self {
Self { queue_tx }
}
/// Has the associated [JanitorAgent] shut down?
pub fn is_closed(&self) -> bool {
self.queue_tx
.upgrade()
.map(|channel| channel.is_closed())
// If the sender can no longer be upgraded, the agent has already shut down
.unwrap_or(true)
}
/// Spawn a new cleanup job/daemon with the associated [JanitorAgent].
///
/// Used internally by [spawn_daemon]/[spawn_cleanup_job].
pub fn spawn_cleanup_task<F>(&self, future: F) -> Result<(), TrySpawnCleanupJobError>
where
F: Future<Output = anyhow::Result<()>> + Send + 'static,
{
let background_task = tokio::spawn(future);
self.queue_tx
.upgrade()
.ok_or(TrySpawnCleanupJobError::ActiveJanitorTerminating)?
.send(JanitorTicket::CleanupJob(background_task))
.map_err(|_| TrySpawnCleanupJobError::ActiveJanitorTerminating)
}
}
/// Client for [JanitorAgent]. Allows waiting for [JanitorAgent] termination as well as creating
/// [JanitorClient]s, which in turn can be used to submit background daemons/termination jobs
/// to the agent.
#[derive(Debug)]
struct JanitorSupervisor {
/// Represents the tokio task associated with the [JanitorAgent].
///
/// We use this to wait for [JanitorAgent] termination in [enter_janitor]/[ensure_janitor]
agent_join_handle: CleanupJob,
/// Queue we can use to send messages to the current janitor
queue_tx: JanitorQueueTx,
}
impl JanitorSupervisor {
/// Create a new janitor supervisor. Use through [JanitorAgent::start]
pub fn new(agent_join_handle: CleanupJob, queue_tx: JanitorQueueTx) -> Self {
Self {
agent_join_handle,
queue_tx,
}
}
/// Create a [JanitorClient] for submitting background daemons/cleanup jobs
pub fn get_client(&self) -> JanitorClient {
JanitorClient::new(self.queue_tx.clone().downgrade())
}
/// Wait for [JanitorAgent] termination
pub async fn terminate_janitor(self) -> anyhow::Result<()> {
std::mem::drop(self.queue_tx);
self.agent_join_handle.await?
}
}
/// Return value of [try_enter_janitor].
#[derive(Debug)]
pub struct EnterJanitorResult<T, E> {
/// The result produced by the janitor itself.
///
/// This may contain an error if one of the background daemons/cleanup tasks returned an error,
/// panicked, or in case there is an internal error in the janitor.
pub janitor_result: anyhow::Result<()>,
/// Contains the result of the future passed to [try_enter_janitor].
pub callee_result: Result<T, E>,
}
impl<T, E> EnterJanitorResult<T, E> {
/// Create a new result from its components
pub fn new(janitor_result: anyhow::Result<()>, callee_result: Result<T, E>) -> Self {
Self {
janitor_result,
callee_result,
}
}
/// Turn this named type into a tuple
pub fn into_tuple(self) -> (anyhow::Result<()>, Result<T, E>) {
(self.janitor_result, self.callee_result)
}
/// Panic if [Self::janitor_result] contains an error; returning [Self::callee_result]
/// otherwise.
///
/// If this panics and both [Self::janitor_result] and [Self::callee_result] contain an error,
/// this will print both errors.
pub fn unwrap_janitor_result(self) -> Result<T, E>
where
E: std::fmt::Debug,
{
let me: EnsureJanitorResult<T, E> = self.into();
me.unwrap_janitor_result()
}
/// Panic if [Self::janitor_result] or [Self::callee_result] contain an error,
/// returning the Ok value of [Self::callee_result].
///
/// If this panics and both [Self::janitor_result] and [Self::callee_result] contain an error,
/// this will print both errors.
pub fn unwrap(self) -> T
where
E: std::fmt::Debug,
{
let me: EnsureJanitorResult<T, E> = self.into();
me.unwrap()
}
}
/// Return value of [try_ensure_janitor]. The only difference compared to [EnterJanitorResult]
/// is that [Self::janitor_result] contains None in case an ambient janitor had already existed.
#[derive(Debug)]
pub struct EnsureJanitorResult<T, E> {
/// See [EnterJanitorResult::janitor_result]
///
/// This is:
///
/// - `None` if a pre-existing ambient janitor was used
/// - `Some(Ok(()))` if a new janitor had to be created and it exited successfully
/// - `Some(Err(...))` if a new janitor had to be created and it exited with an error
pub janitor_result: Option<anyhow::Result<()>>,
/// See [EnterJanitorResult::callee_result]
pub callee_result: Result<T, E>,
}
impl<T, E> EnsureJanitorResult<T, E> {
/// See [EnterJanitorResult::new]
pub fn new(janitor_result: Option<anyhow::Result<()>>, callee_result: Result<T, E>) -> Self {
Self {
janitor_result,
callee_result,
}
}
/// Sets up a [EnsureJanitorResult] with [EnsureJanitorResult::janitor_result] = None.
pub fn from_callee_result(callee_result: Result<T, E>) -> Self {
Self::new(None, callee_result)
}
/// Turn this named type into a tuple
pub fn into_tuple(self) -> (Option<anyhow::Result<()>>, Result<T, E>) {
(self.janitor_result, self.callee_result)
}
/// See [EnterJanitorResult::unwrap_janitor_result]
///
/// If [Self::janitor_result] is None, this won't panic.
pub fn unwrap_janitor_result(self) -> Result<T, E>
where
E: std::fmt::Debug,
{
match self.into_tuple() {
(Some(Ok(())) | None, res) => res,
(Some(Err(err)), Ok(_)) => panic!(
"Callee in enter_janitor()/ensure_janitor() was successful, \
but the janitor or some of its daemons failed: {err:?}"
),
(Some(Err(jerr)), Err(cerr)) => panic!(
"Both the callee and the janitor or \
some of its daemons failed in enter_janitor()/ensure_janitor():\n\
\n\
Janitor/Daemon error: {jerr:?}\n\
\n\
Callee error: {cerr:?}"
),
}
}
/// See [EnterJanitorResult::unwrap]
///
/// If [Self::janitor_result] is None, this is not considered a failure.
pub fn unwrap(self) -> T
where
E: std::fmt::Debug,
{
match self.unwrap_janitor_result() {
Ok(val) => val,
Err(err) => panic!(
"The janitor and its daemons in enter_janitor()/ensure_janitor() were successful, \
but the callee itself failed: {err:?}"
),
}
}
}
impl<T, E> From<EnterJanitorResult<T, E>> for EnsureJanitorResult<T, E> {
fn from(val: EnterJanitorResult<T, E>) -> Self {
EnsureJanitorResult::new(Some(val.janitor_result), val.callee_result)
}
}
/// Non-panicking version of [enter_janitor].
pub async fn try_enter_janitor<T, E, F>(future: F) -> EnterJanitorResult<T, E>
where
T: 'static,
F: Future<Output = Result<T, E>> + 'static,
{
let janitor_handle = JanitorAgent::start().await;
let callee_result = CURRENT_JANITOR
.scope(janitor_handle.get_client(), future)
.await;
let janitor_result = janitor_handle.terminate_janitor().await;
EnterJanitorResult::new(janitor_result, callee_result)
}
/// Non-panicking version of [ensure_janitor]
pub async fn try_ensure_janitor<T, E, F>(future: F) -> EnsureJanitorResult<T, E>
where
T: 'static,
F: Future<Output = Result<T, E>> + 'static,
{
match CURRENT_JANITOR.is_set() {
true => EnsureJanitorResult::from_callee_result(future.await),
false => try_enter_janitor(future).await.into(),
}
}
/// Register a janitor that can be used to register background daemons/cleanup jobs **only within
/// the future passed to this**.
///
/// The function will wait for both the given future and all background jobs registered with the
/// janitor to terminate.
///
/// For a version that does not panic, see [try_enter_janitor].
pub async fn enter_janitor<T, E, F>(future: F) -> Result<T, E>
where
T: 'static,
E: std::fmt::Debug,
F: Future<Output = Result<T, E>> + 'static,
{
try_enter_janitor(future).await.unwrap_janitor_result()
}
/// Variant of [enter_janitor] that will first check if a janitor already exists.
/// A new janitor is only set up if no janitor has been previously registered.
pub async fn ensure_janitor<T, E, F>(future: F) -> Result<T, E>
where
T: 'static,
E: std::fmt::Debug,
F: Future<Output = Result<T, E>> + 'static,
{
try_ensure_janitor(future).await.unwrap_janitor_result()
}
/// Error returned by [try_spawn_cleanup_job]
#[derive(thiserror::Error, Debug)]
pub enum TrySpawnCleanupJobError {
/// No active janitor exists
#[error("No janitor registered. Did the developer forget to call enter_janitor(…) or ensure_janitor(…)?")]
NoActiveJanitor,
/// The currently active janitor is in the process of terminating
#[error("There is a registered janitor, but it is currently in the process of terminating and won't accept new tasks.")]
ActiveJanitorTerminating,
}
/// Check whether a janitor has been set up with [enter_janitor]/[ensure_janitor]
pub fn has_active_janitor() -> bool {
CURRENT_JANITOR
.try_with(|client| !client.is_closed())
.unwrap_or(false)
}
/// Non-panicking variant of [spawn_cleanup_job].
///
/// This function is available under two names; see [spawn_cleanup_job] for details about this:
///
/// 1. [try_spawn_cleanup_job]
/// 2. [try_spawn_daemon]
pub fn try_spawn_cleanup_job<F>(future: F) -> Result<(), TrySpawnCleanupJobError>
where
F: Future<Output = anyhow::Result<()>> + Send + 'static,
{
CURRENT_JANITOR
.try_with(|client| client.spawn_cleanup_task(future))
.map_err(|_| TrySpawnCleanupJobError::NoActiveJanitor)??;
Ok(())
}
/// Register a cleanup job or a daemon with the current janitor registered through
/// [enter_janitor]/[ensure_janitor]:
///
/// This function is available under two names:
///
/// 1. [spawn_cleanup_job]
/// 2. [spawn_daemon]
///
/// The first name should be used in destructors and to spawn cleanup actions which immediately
/// begin their task.
///
/// The second name should be used for any other tasks; e.g. when the janitor setup is used to
/// manage multiple parallel jobs, all of which must be waited for.
pub fn spawn_cleanup_job<F>(future: F)
where
F: Future<Output = anyhow::Result<()>> + Send + 'static,
{
if let Err(e) = try_spawn_cleanup_job(future) {
panic!("Could not spawn cleanup job/daemon: {e:?}");
}
}
pub use spawn_cleanup_job as spawn_daemon;
pub use try_spawn_cleanup_job as try_spawn_daemon;


@@ -0,0 +1,13 @@
//! Helpers for [tokio::task::LocalKey]
/// Extension trait for [tokio::task::LocalKey]
pub trait LocalKeyExt {
/// Check whether a tokio LocalKey is set
fn is_set(&'static self) -> bool;
}
impl<T: 'static> LocalKeyExt for tokio::task::LocalKey<T> {
fn is_set(&'static self) -> bool {
self.try_with(|_| ()).is_ok()
}
}

util/src/tokio/mod.rs Normal file

@@ -0,0 +1,4 @@
//! Tokio-related utilities
pub mod janitor;
pub mod local_key;


@@ -10,7 +10,7 @@ static TRACE: OnceLock<RpTrace> = OnceLock::new();
pub type RpTrace = tracing::MutexTrace<&'static str, Instant>;
/// The trace event type used to trace Rosenpass for performance measurement.
pub type RpEventType = tracing::TraceEvent<&'static str, Instant>;
pub type RpEvent = tracing::TraceEvent<&'static str, Instant>;
// Re-export to make the functionality available so that callers don't need to also depend
// directly on [`libcrux_test_utils`].

util/tests/janitor.rs Normal file

@@ -0,0 +1,85 @@
#![cfg(feature = "tokio")]
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::time::Duration;
use tokio::time::sleep;
use rosenpass_util::tokio::janitor::{enter_janitor, spawn_cleanup_job, try_spawn_daemon};
#[tokio::test]
async fn janitor_demo() -> anyhow::Result<()> {
let count = Arc::new(AtomicUsize::new(0));
// Make sure the program has access to an ambient janitor
{
let count = count.clone();
enter_janitor(async move {
let _drop_guard = AsyncDropDemo::new(count.clone()).await;
// Start a background job
{
let count = count.clone();
try_spawn_daemon(async move {
for _ in 0..17 {
count.fetch_add(1, Ordering::Relaxed);
sleep(Duration::from_micros(200)).await;
}
Ok(())
})?;
}
// Start another
{
let count = count.clone();
try_spawn_daemon(async move {
for _ in 0..6 {
count.fetch_add(100, Ordering::Relaxed);
sleep(Duration::from_micros(800)).await;
}
Ok(())
})?;
}
// Note how this function just starts a couple background jobs, but exits immediately
anyhow::Ok(())
})
}
.await?;
// At this point, all background jobs have finished, now we can check the result of all our
// additions
assert_eq!(count.load(Ordering::Acquire), 41617);
Ok(())
}
/// Demo of how janitor can be used to implement async destructors
struct AsyncDropDemo {
count: Arc<AtomicUsize>,
}
impl AsyncDropDemo {
async fn new(count: Arc<AtomicUsize>) -> Self {
count.fetch_add(1000, Ordering::Relaxed);
sleep(Duration::from_micros(50)).await;
AsyncDropDemo { count }
}
}
impl Drop for AsyncDropDemo {
fn drop(&mut self) {
let count = self.count.clone();
// Drop cannot return a Result, so this necessarily uses the panicking variant;
// we use the spawn_cleanup_job name because it makes more semantic sense in this context
spawn_cleanup_job(async move {
for _ in 0..4 {
count.fetch_add(10000, Ordering::Relaxed);
sleep(Duration::from_micros(800)).await;
}
Ok(())
})
}
}