Compare commits

...

190 Commits

Author SHA1 Message Date
wucke13
96bc5bfb2e feat: add first draft of DoS avoiding log
The concept is simple: Log messages are only emitted if the current log
level allows for it __and__ if the log message was caused by a trusted
party. The less trusted a party is, the less likely it is to cause
log messages. For example, error messages about broken input received
from an untrusted party are to be silently ignored, so as not to allow
**anyone** to cause massive amounts of log messages.
2023-12-23 01:41:28 +01:00
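A minimal sketch of the gating idea described in this commit; the names and the `Trust` type are illustrative assumptions, not the actual rosenpass-log API:

```rust
use log::Level;

// Hypothetical trust classification of the party that caused the message.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum Trust {
    Untrusted,
    Authenticated,
    Local,
}

/// Emit a message only if the log level allows it *and* the causing party is
/// at least as trusted as `required` — untrusted peers cannot force output.
fn log_gated(level: Level, trust: Trust, required: Trust, msg: &str) {
    if trust >= required && log::log_enabled!(level) {
        log::log!(level, "{}", msg);
    }
}

fn main() {
    env_logger::init(); // reads RUST_LOG
    // Broken input from an unauthenticated peer: silently dropped.
    log_gated(Level::Error, Trust::Untrusted, Trust::Authenticated, "bad packet");
    // The same event caused by a trusted party is logged normally.
    log_gated(Level::Error, Trust::Local, Trust::Authenticated, "bad packet");
}
```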
wucke13
d84efa7422 Merge pull request #197 from guhitb/main
Add backwards compatibility for keygen command
2023-12-21 11:28:25 +01:00
user
61ef5b92bb fix: add deprecated keygen command
This allows users to use the old keygen command, while being informed
about its deprecation.
2023-12-20 16:03:47 +01:00
wucke13
184cff0e5e Merge pull request #196 from rosenpass/dev/fix-65
fix: remove OSFONTDIR var from whitepaper build
2023-12-03 14:01:25 +01:00
wucke13
9819148b6f fix: remove OSFONTDIR var from whitepaper build
Fixes #65. I checked with `pdffonts` that the whitepaper still has all fonts embedded.
2023-12-03 13:27:47 +01:00
Morgan Hill
3a0ebd2cbc feat: Add fuzzing for libsodium allocator 2023-12-02 14:14:05 +01:00
Karolin Varner
1eefb5f263 fix: Guaranteed results typo 2023-12-02 12:21:41 +01:00
Karolin Varner
d45e24e9b6 feat: Move lenses into library 2023-12-02 12:21:41 +01:00
Karolin Varner
972e82b35f chore: Move kems out of rosenpass crate 2023-12-02 10:42:13 +01:00
Karolin Varner
101c9bf4b3 feat: Add an internal library for guaranteed results
This is helpful for functions that have to return a result to
implement some interface but that do not actually need to return
a result value.
2023-12-02 10:42:13 +01:00
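One way the "guaranteed result" idea can be sketched (this is an assumption about the intent, not the actual internal library): use an uninhabited error type so a function can satisfy a `Result`-returning interface while the type system records that it never fails.

```rust
use std::convert::Infallible;

// Hypothetical interface that forces a Result, e.g. for symmetry with
// fallible implementations elsewhere.
trait Step {
    type Error;
    fn run(&self, input: u32) -> Result<u32, Self::Error>;
}

struct AddOne;

impl Step for AddOne {
    type Error = Infallible; // guaranteed: no value of this type exists
    fn run(&self, input: u32) -> Result<u32, Self::Error> {
        Ok(input + 1)
    }
}

fn main() {
    // unwrap() can never panic here, since Err(Infallible) cannot be constructed.
    let v = AddOne.run(41).unwrap();
    println!("{v}");
}
```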
Marei (peiTeX)
955d57ea49 fix output of authorlist to support unlimited authors 2023-12-01 20:25:58 +01:00
Karolin Varner
838f700a74 chore: Upgrade dependencies 2023-12-01 18:43:32 +01:00
Karolin Varner
5448cdc565 feat: Use the rand crate for random values instead of sodium 2023-12-01 18:37:33 +01:00
Karolin Varner
77cd8a9fd1 feat: Move prftree into ciphers crate
- Use a new nomenclature for these functions based on the idea of a hash
  domain (as in domain separation); this makes much more sense
- Remove the ciphers::hash export; we did not even export a hash
  function in the purest sense of the word. This gets us around the
  difficulty of figuring out what we should call the underlying
  primitive
2023-12-01 18:36:46 +01:00
Karolin Varner
0f89ab7976 chore: Shorten fuzzing runtime to make sure the CI finishes quickly 2023-12-01 18:30:16 +01:00
Karolin Varner
70fa9bd6d7 feat: Wrap sodium_malloc as a custom allocator
This lets us get rid of quite a few unsafe blocks.
2023-12-01 18:29:53 +01:00
Karolin Varner
85a61808de feat: Use the zeroize crate for zeroization 2023-12-01 18:11:05 +01:00
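For reference, the zeroize crate this commit switches to is used roughly like this (a generic example, not rosenpass code):

```rust
use zeroize::Zeroize;

fn main() {
    let mut key = [0x42u8; 32];
    // ... use the key material ...
    key.zeroize(); // overwrite; the write is guaranteed not to be optimized away
    assert_eq!(key, [0u8; 32]);
}
```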
Karolin Varner
cf132bca11 chore: Move rest of coloring.rs into secret-memory crate
Also removes the StoreSecret trait from cli.rs as it was
redundant.
2023-12-01 18:11:05 +01:00
Karolin Varner
7bda010a9b chore: Move Public and debug_crypto_array into secret-memory crate 2023-12-01 18:11:05 +01:00
Olaf Pichler
36089fd37f Added example for additional PSK 2023-12-01 15:44:42 +01:00
Olaf Pichler
31d43accd5 #172 removed exchange_command 2023-12-01 15:44:42 +01:00
Olaf Pichler
205c301012 Added indications that file paths are used 2023-12-01 15:44:42 +01:00
Olaf Pichler
d014095469 Added indication that exchange_command is not used 2023-12-01 15:44:42 +01:00
Olaf Pichler
7cece82119 added WireGuard config example to gen-config 2023-12-01 15:44:42 +01:00
Ezhil Shanmugham
284ebb261f fix: enabled fuzzing 2023-12-01 11:43:37 +01:00
Jemilu Mohammed
ba224a2200 add default member
add shared dependencies to workspace dependencies

all package level dependencies now rely on workspace
2023-11-30 18:44:28 +01:00
Jemilu Mohammed
ca35e47d2a manage features in workspaces cargo.toml file 2023-11-30 18:44:28 +01:00
Jemilu Mohammed
181154b470 move external dependencies to workspace level 2023-11-30 18:44:28 +01:00
Karolin Varner
cc8c13e121 chore: Remove lprf.rs (dead code) 2023-11-30 11:26:24 +01:00
Karolin Varner
40861cc2ea fix: Nix flake failing due to rosenpass-to
README.md was missing; added it to the list of source files
2023-11-29 11:36:28 +01:00
Karolin Varner
09aa0e027e chore: Move hashing functions into sodium/ciphers crate
This finishes the last step of removing sodium.rs from the rosenpass crate
itself and also removes the NOTHING and NONCE0 constants.

Hashing functions now use destination parameters;
rosenpass_constant_time::xor now does too.
2023-11-29 11:36:28 +01:00
Morgan Hill
d44793e07f Remove unwrap from fuzz targets that return errors
When fuzzing we are interested in what happens inside the target function,
not necessarily what it returns. Functions returning errors on bogus
input is generally desired behaviour.
2023-11-29 11:36:07 +01:00
Karolin Varner
d539be3142 feat: Rosenpass-to for nicely handling destination parameters 2023-11-26 11:18:47 +01:00
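A hedged sketch of the "destination parameter" style referred to here (the actual rosenpass-to API may differ): instead of returning fresh buffers, functions write into caller-provided storage, which avoids copies and keeps control over where secrets live.

```rust
/// Derive bytes into a caller-provided destination instead of allocating.
/// (Placeholder logic; a real implementation would run a KDF or hash.)
fn derive_into(dst: &mut [u8], seed: u8) {
    for (i, b) in dst.iter_mut().enumerate() {
        *b = seed.wrapping_add(i as u8);
    }
}

fn main() {
    // The caller decides where the output lives, e.g. inside secret memory.
    let mut out = [0u8; 16];
    derive_into(&mut out, 7);
    assert_eq!(out[0], 7);
}
```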
Morgan Hill
a49254a021 feat(fuzzing): Add initial set of fuzzing targets
These targets can be used with rust nightly and cargo-fuzz to fuzz
several bits of Rosenpass's API. Fuzzing is an automated way of
exploring code paths that may not be hit in unit tests or normal
operation. For example the `handle_msg` target exposed the DoS condition
fixed in 0.2.1.

The other targets focus on the FFI with libsodium and liboqs.

Co-authored-by: Karolin Varner <karo@cupdev.net>
2023-11-26 11:05:19 +01:00
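The shape of such a cargo-fuzz target, as a generic illustration (the target name and body are not the actual ones in fuzz/):

```rust
// fuzz/fuzz_targets/fuzz_parse.rs — built and run on a nightly toolchain via
// `cargo fuzz run fuzz_parse`.
#![no_main]
use libfuzzer_sys::fuzz_target;

fuzz_target!(|data: &[u8]| {
    // Arbitrary bytes go into the function under test. Returning an error on
    // bogus input is fine; only panics, crashes or sanitizer findings count.
    let _ = std::str::from_utf8(data);
});
```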
Karolin Varner
86300ca936 chore: Use naming scheme without rosenpass- for crates 2023-11-26 10:38:24 +01:00
Karolin Varner
3ddf736b60 chore: Move xchacha20 implementation out of rosenpass::sodium 2023-11-26 10:38:24 +01:00
Karolin Varner
c64e721c2f chore: Move chacha20 implementation out of rosenpass::sodium
Introduces a new crate for selected ciphers which references
a cipher implementation in the rosenpass-sodium crate.
2023-11-26 10:38:24 +01:00
Karolin Varner
4c51ead078 chore: Move libsodium's helper function into their own namespace 2023-11-26 10:38:24 +01:00
Karolin Varner
c5c34523f3 chore: Move libsodium's memzero, randombytes fns into rosenpass-sodium 2023-11-26 10:38:24 +01:00
Karolin Varner
6553141637 chore: Move libsodium's increment into rosenpass-sodium crate 2023-11-26 10:38:24 +01:00
Karolin Varner
a3de526db8 chore: Move libsodium's compare into rosenpass-sodium crate 2023-11-26 10:38:24 +01:00
Karolin Varner
5da0e4115e chore: Move memcmp into rosenpass-sodium crate 2023-11-26 10:38:24 +01:00
Karolin Varner
99634d9702 chore: Move sodium init integration into rosenpass-sodium crate 2023-11-26 10:38:24 +01:00
Karolin Varner
46156fcb29 fix: Setup cargo fmt to check the entire workspace 2023-11-26 10:38:24 +01:00
Karolin Varner
e50542193f chore: Move file utils into coloring or the util crate 2023-11-26 10:38:24 +01:00
Karolin Varner
3db9755580 chore: move functional utils into utils library 2023-11-26 10:38:24 +01:00
Karolin Varner
556dbd2600 chore: move time utils into util crate 2023-11-26 10:38:24 +01:00
Karolin Varner
6cd42ebf50 chore: move max_usize into util crate 2023-11-26 10:38:24 +01:00
Karolin Varner
a220c11e67 chore: Move xor_into, copying and base64 utils into own crates 2023-11-26 10:38:24 +01:00
Emil Engler
c9cef05b29 doc: Add bibliography to the manual page
Fixes #153
2023-11-26 09:51:11 +01:00
wucke13
0b4b1279cf chore: Release rosenpass version 0.2.1 2023-11-18 23:16:22 +01:00
wucke13
44264a7bb6 chore: Release rosenpass version 0.2.1-rc.3 2023-11-18 22:58:57 +01:00
wucke13
b095bdaa7c refine ab085998bb
This commit refines the above by making cargo release emit no prefix for release tags even if only a single package is released.
2023-11-18 22:57:53 +01:00
wucke13
9597e485bf chore: Release rosenpass version 0.2.1-rc.2 2023-11-18 22:48:35 +01:00
wucke13
ab085998bb add new trigger for release workflow
The change to a multi-crate cargo workspace makes `cargo release` behave differently. Now it prefixes the release tags (e.g. `v0.2.0`) with the package name, so for example `rosenpass-v0.2.0`. This change adds the prefixed tag names as an additional trigger for the release workflow.
2023-11-18 22:43:47 +01:00
wucke13
3901e668cb chore: Release rosenpass version 0.2.1-rc.1 2023-11-18 22:30:46 +01:00
wucke13
b7444bf9b4 add readme link to rosenpass package 2023-11-18 22:25:05 +01:00
Benjamin Lipp
0051cbd48e doc: Add unit test for xor_into 2023-11-15 14:32:19 +01:00
Karolin Varner
27746781c0 fix: Doctest should pass buffers of correct length to handle_msg 2023-11-12 14:42:23 +01:00
Karolin Varner
93439858d1 fix crash on undersized buffers going through the lenses
Co-authored-by: wucke13 <wucke13@gmail.com>
2023-11-12 14:42:23 +01:00
wucke13
1223048b48 Merge pull request #148 from rosenpass/dev/wucke13-update-lock-files
update lock files
2023-11-12 13:44:10 +01:00
wucke13
932bde39cc flake.lock: Update
Flake lock file updates:

• Updated input 'fenix':
    'github:nix-community/fenix/add522038f2a32aa1263c8d3c81e1ea2265cc4e1' (2023-08-23)
  → 'github:nix-community/fenix/81ab0b4f7ae9ebb57daa0edf119c4891806e4d3a' (2023-11-12)
• Updated input 'fenix/rust-analyzer-src':
    'github:rust-lang/rust-analyzer/9e3bf69ad3c736893b285f47f4d014ae1aed1cb0' (2023-08-22)
  → 'github:rust-lang/rust-analyzer/5fcf5289e726785d20d3aa4d13d90a43ed248e83' (2023-11-11)
• Updated input 'flake-utils':
    'github:numtide/flake-utils/919d646de7be200f3bf08cb76ae1f09402b6f9b4' (2023-07-11)
  → 'github:numtide/flake-utils/ff7b65b44d01cf9ba6a71320833626af21126384' (2023-09-12)
• Updated input 'naersk':
    'github:nix-community/naersk/78789c30d64dea2396c9da516bbcc8db3a475207' (2023-08-18)
  → 'github:nix-community/naersk/aeb58d5e8faead8980a807c840232697982d47b9' (2023-10-27)
• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/78287547942dd8e8afff0ae47fb8e2553db79d7e' (2023-08-08)
  → 'github:NixOS/nixpkgs/34bdaaf1f0b7fb6d9091472edc968ff10a8c2857' (2023-11-01)
2023-11-12 13:29:22 +01:00
wucke13
1d9e62e56b update Cargo.lock 2023-11-12 13:29:06 +01:00
wucke13
3af722a066 Merge pull request #143 from rosenpass/dev/refactor-rp-to-workspace
refactor rp to workspace
2023-11-12 13:27:31 +01:00
wucke13
df60b0bfc3 refine source filter
In particular, replace the error-prone sourceByRegex filter for the rosenpass derivation with a simple file suffix filter.
2023-11-12 13:16:34 +01:00
wucke13
6274c6fcdd add workspace Cargo.toml 2023-11-12 13:16:34 +01:00
wucke13
cd00f023fb move the current rosenpass codebase into a subdir
This is preparation to make the rosenpass repo a workspace, which can contain multiple crates.
2023-11-12 13:16:34 +01:00
Karolin Varner
13563237cb chore: rustfmt 2023-11-08 22:05:30 +01:00
Karolin Varner
447a4f7a44 fix: Restore benchmarks to working order 2023-11-08 22:05:30 +01:00
wucke13
6bac6a59ff Merge pull request #141 from rosenpass/dev/engler/app_server
app_server: Replace `is_ok()` by `if let`
2023-10-19 23:09:05 +02:00
Emil Engler
e5e04c6d95 app_server: Replace is_ok() by if let
This commit replaces an `is_ok()` call with a call to `if let`, thereby
fixing a clippy warning.
2023-10-19 13:54:13 +02:00
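The pattern behind this change, shown on a generic example (not the actual app_server code):

```rust
fn parse_port(s: &str) -> Result<u16, std::num::ParseIntError> {
    s.parse()
}

fn main() {
    let input = "443";

    // Before: check with is_ok(), then parse again and unwrap — clippy flags this style.
    if parse_port(input).is_ok() {
        println!("port {}", parse_port(input).unwrap());
    }

    // After: bind the success value directly with if let.
    if let Ok(port) = parse_port(input) {
        println!("port {port}");
    }
}
```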
Emil Engler
15ce25ccd2 Merge pull request #140 from rosenpass/AliceOrunitia-patch-1
Update rosenpass.1
2023-10-19 13:51:28 +02:00
Alice Michaela Bowman
1b383d494c Update rosenpass.1
Small grammatical changes.
2023-10-19 12:44:56 +02:00
Emil Engler
605b6463ff Merge pull request #134 from rosenpass/dev/engler/stack
Follow-ups to the stack increasements
2023-10-06 10:55:14 +02:00
Ashish SHUKLA
04eb86af87 cli: move wg exit status check to thread 2023-10-06 08:27:43 +02:00
Ashish SHUKLA
bf850e3072 cli: handle the exit status of wg process 2023-10-06 08:27:43 +02:00
Ashish SHUKLA
dd39936220 cli: reap spawned wireguard child
Fixes #132
2023-10-06 08:27:43 +02:00
wucke13
b15f17133f Merge pull request #135 from lorenzleutgeb/patch-1
config: Default `WireGuard::extra_params` to empty `Vec`
2023-09-28 19:59:15 +02:00
Lorenz Leutgeb
b50820ecc0 config: Default WireGuard::extra_params to empty Vec
Otherwise, omitting `extra_params` in the configuration file will result in a `WireGuard` configuration object of `None`, even though not specifying `extra_params` is sane.
2023-09-28 11:16:38 +02:00
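The fix sketched with serde's `#[serde(default)]` attribute; the surrounding struct and the `device` field are simplified assumptions, only `extra_params` mirrors the commit:

```rust
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct WireGuard {
    device: String,
    #[serde(default)] // missing in the TOML file => empty Vec, not an error
    extra_params: Vec<String>,
}

fn main() {
    let wg: WireGuard = toml::from_str(r#"device = "wg0""#).unwrap();
    assert!(wg.extra_params.is_empty());
    println!("{wg:?}");
}
```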
Emil Engler
f323839967 test: Fix wrong comment
This commit fixes a wrong comment claiming that the unit tests use a
stack size of 16 MB, when in fact they only use 8 MiB.
2023-09-28 07:58:17 +02:00
Emil Engler
6e15c38254 flake: Remove redundant stack increase
This commit removes the setting of `RUST_MIN_STACK` by the Nix
development shell, because the tests now set the stack size on their
own.

See #128
2023-10-06 10:43:41 +02:00
Emil Engler
b7a76849b7 test: Ensure 8MiB of stack size for key generation
This commit ensures that the call to `StaticKEM::keygen` has a stack of
8MiB.

Especially on Darwin systems, this commit is necessary in order to
prevent a stack overflow, as these systems only provide stack sizes of
roughly 500 KB, which is way too small for a Classic McEliece key.

Fixes #118
2023-09-22 16:30:00 +02:00
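One way to guarantee a large stack for a single call is a dedicated thread, roughly like this (`keygen_work` is a stand-in for the stack-hungry key generation; the actual change may be implemented differently):

```rust
use std::thread;

fn keygen_work() {
    // stand-in for the stack-hungry Classic McEliece key generation
}

fn main() {
    thread::Builder::new()
        .stack_size(8 * 1024 * 1024) // 8 MiB, independent of the platform default
        .spawn(keygen_work)
        .expect("spawning keygen thread failed")
        .join()
        .expect("keygen thread panicked");
}
```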
Emil Engler
d2d72143b5 Merge pull request #126 from rosenpass/dev/engler/unsafe
Remove some `unsafe`s
2023-09-18 07:20:04 -10:00
Emil Engler
1135cd7bbb util: Remove unsafe from store_secret 2023-09-14 10:36:53 +02:00
Emil Engler
51f04f749f cli: Remove unsafe from store_secret
This commit removes the `unsafe` block from the `store_secret` function,
as I see no reason why we should have one here.
2023-09-14 10:34:07 +02:00
Emil Engler
37d1326481 Merge pull request #123 from rosenpass/dev/engler/unsafe
cli: Move `StaticKEM::keygen` out of `unsafe`
2023-09-13 18:09:28 +02:00
Emil Engler
d0a84294aa cli: Move StaticKEM::keygen out of unsafe
This commit moves the `StaticKEM::keygen` call out of an `unsafe` call,
because the function is not unsafe.
2023-09-13 16:36:35 +02:00
wucke13
a98f64c17d Merge pull request #119 from rosenpass/dev/engler/clippy
Fix all clippy warnings
2023-09-07 12:25:47 +02:00
Emil Engler
d6a7ebe88f clippy: Allow false positive with redundancies
This commit allows a redundant closure call with regard to clippy
warnings, as it is a false positive in our case.
2023-09-06 17:40:34 +02:00
Emil Engler
212336728c build: Fix clippy warnings in build.rs
This commit fixes the clippy warnings in `build.rs`, by making use of
the `if let` language feature.
2023-09-06 17:32:26 +02:00
Emil Engler
f48a923dbf refactor: Remove redundant references
This commit removes redundant references, noted by clippy.
2023-09-06 17:31:56 +02:00
Emil Engler
7b5d0f7d66 Merge pull request #117 from rosenpass/dev/engler/rp-ip
doc: Clarify the assumptions about the server
2023-09-06 17:20:27 +02:00
Emil Engler
1e37f89e83 doc: Clarify the assumptions about the server
This commit clarifies the assumptions about the server/responder in the
`rp.1` manual page, by specifying an IP and open UDP ports that the rest
of this tutorial is going to assume.

Reported-by: Robert Clausecker <fuzxxl@gmail.com>

Fixes #116
2023-09-06 14:25:48 +02:00
wucke13
b997238f42 chore: Release rosenpass version 0.2.0 2023-09-05 19:33:50 +02:00
wucke13
d915e63445 bump versions 2023-08-29 23:48:48 +02:00
wucke13
53d7996dd3 Merge pull request #111 from rosenpass/dev/bsd-port-for-rp-script
add freebsd support, prepare for other BSDs
2023-08-29 23:39:10 +02:00
wucke13
47b4d394ef small fixups for rp script 2023-08-29 23:32:14 +02:00
Emil Engler
578d9e2eb5 Merge pull request #114 from rosenpass/dev/update-deps
cargo: Update outdated dependencies
2023-08-25 11:50:33 +02:00
wucke13
d6b83a4a0b add freebsd support, prepare for other BSDs 2023-08-23 18:20:17 +02:00
Emil Engler
959cd50ef6 Merge pull request #113 from rosenpass/dev/ci/shellcheck 2023-08-23 16:09:05 +02:00
Emil Engler
6025623aad cargo: Update outdated dependencies 2023-08-23 14:44:11 +02:00
Emil Engler
5a67b4708a ci: Perform a shellcheck 2023-08-23 14:39:38 +02:00
wucke13
45145cdd9b Merge pull request #110 from rosenpass/dev/update-oqs-and-flake
Dev/update oqs and flake
2023-08-23 12:59:44 +02:00
wucke13
66e696fea3 flake.lock: Update
Flake lock file updates:

• Updated input 'fenix':
    'github:nix-community/fenix/6e6a94c4d0cac4821b6452fbae46609b89a8ddcf' (2023-06-09)
  → 'github:nix-community/fenix/add522038f2a32aa1263c8d3c81e1ea2265cc4e1' (2023-08-23)
• Updated input 'fenix/rust-analyzer-src':
    'github:rust-lang/rust-analyzer/9c03aa1ac2e67051db83a85baf3cfee902e4dd84' (2023-06-08)
  → 'github:rust-lang/rust-analyzer/9e3bf69ad3c736893b285f47f4d014ae1aed1cb0' (2023-08-22)
• Updated input 'flake-utils':
    'github:numtide/flake-utils/a1720a10a6cfe8234c0e93907ffe81be440f4cef' (2023-05-31)
  → 'github:numtide/flake-utils/919d646de7be200f3bf08cb76ae1f09402b6f9b4' (2023-07-11)
• Updated input 'naersk':
    'github:nix-community/naersk/88cd22380154a2c36799fe8098888f0f59861a15' (2023-03-23)
  → 'github:nix-community/naersk/78789c30d64dea2396c9da516bbcc8db3a475207' (2023-08-18)
• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/81ed90058a851eb73be835c770e062c6938c8a9e' (2023-06-08)
  → 'github:NixOS/nixpkgs/78287547942dd8e8afff0ae47fb8e2553db79d7e' (2023-08-08)
2023-08-23 11:33:20 +02:00
wucke13
91d0592ad6 update oqs-sys from 0.7.2 to 0.8.0 2023-08-23 11:32:51 +02:00
Emil Engler
8ff9b53365 cli: include a static compiled manual page
This commit re-introduces a static and pre-compiled version of the
manual page back into the source code, in case that an installed version
cannot be found on the host system.
2023-08-21 14:05:34 +02:00
Marek Küthe
067a839d4b rp: Defaults to dual-stack
Currently, if no IP address is given, it only listens on IPv6 by default. This commit makes it listen dual-stack - i.e. IPv4 and IPv6 - by default.

Signed-off-by: Marek Küthe <m.k@mk16.de>
2023-08-21 14:04:46 +02:00
Marek Küthe
38835fb0f8 Readme: Add mirrors
Signed-off-by: Marek Küthe <m.k@mk16.de>
2023-08-21 13:59:34 +02:00
wucke13
a2b177470c Merge pull request #101 from rosenpass/dev/fix-ci
add .gitlab-ci.yml
2023-07-01 00:03:52 +02:00
wucke13
1c1e38e2f7 add .gitlab-ci.yml
This gitlab-ci.yml solely is there to enable mirroring to
https://gitlab.com/rosenpass/rosenpass
2023-06-30 23:54:40 +02:00
wucke13
46383bdc4d Merge pull request #99 from rosenpass/dev/fix-ci
add smoke test for devshell and test without nix
2023-06-30 22:31:10 +02:00
wucke13
2805d686e6 default pinpointed macos-13, update nix action
This resolves an error with the Darwin-based builds, where the install
fails. Pinpointing the macOS version will prevent random failures in
the future --- now we have to opt in to potential breaking changes when
a new macOS release is added to the GitHub Actions runners.

relevant error message:

```console
...
---- Reminders -----------------------------------------------------------------
[ 1 ]
Nix won't work in active shell sessions until you restart them.

Could not set environment: 150: Operation not permitted while System Integrity Protection is engaged
Error: Process completed with exit code 150.
```

fixes #100
2023-06-30 22:17:35 +02:00
wucke13
b274519bad add smoke test for devshell and test without nix
This commit adds two new jobs. One checks that `cargo test` runs
through, and second one checking that `cargo test` inside the nix
devshell runs through as well.

fixes #98
2023-06-30 21:23:04 +02:00
wucke13
3086c7fb93 Merge pull request #97 from rosenpass/engler/cargo-build-hotfix
fix devshell bug introduced in #90
2023-06-30 21:08:40 +02:00
wucke13
d21e3af1bb fix broken devShell
The use of a fake CMake in the main step of the Rosenpass build removed the real CMake from the devShell, essentially breaking cargo build from within it. This commit fixes that by explicitly placing the real CMake in the devShell's nativeBuildInputs.
2023-06-30 21:03:32 +02:00
wucke13
b0332971df Merge pull request #89 from rosenpass/dev/update-flake
update flake.lock
2023-06-14 20:33:58 +02:00
wucke13
be508b486a refine CI further
- include default jobs
- clean up generator script
- fix wrong dependency estimation for release-package
2023-06-14 19:12:44 +02:00
wucke13
4314a0915a fix tex build after update 2023-06-14 18:56:12 +02:00
wucke13
0d2ca37bbb flake.lock: Update
Flake lock file updates:

• Updated input 'fenix':
    'github:nix-community/fenix/d8067f4d1d3d30732703209bec5ca7d62aaececc' (2023-01-20)
  → 'github:nix-community/fenix/6e6a94c4d0cac4821b6452fbae46609b89a8ddcf' (2023-06-09)
• Updated input 'fenix/rust-analyzer-src':
    'github:rust-lang/rust-analyzer/6e52c64031825920983515b9e975e93232739f7f' (2023-01-19)
  → 'github:rust-lang/rust-analyzer/9c03aa1ac2e67051db83a85baf3cfee902e4dd84' (2023-06-08)
• Updated input 'flake-utils':
    'github:numtide/flake-utils/5aed5285a952e0b949eb3ba02c12fa4fcfef535f' (2022-11-02)
  → 'github:numtide/flake-utils/a1720a10a6cfe8234c0e93907ffe81be440f4cef' (2023-05-31)
• Added input 'flake-utils/systems':
    'github:nix-systems/default/da67096a3b9bf56a91d16901293e51ba5b49a27e' (2023-04-09)
• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/2dea8991d89b9f1e78d874945f78ca15f6954289' (2023-01-06)
  → 'github:NixOS/nixpkgs/81ed90058a851eb73be835c770e062c6938c8a9e' (2023-06-08)
• Updated input 'nixpkgs-unstable':
    'github:NixOS/nixpkgs/1bddde315297c092712b0ef03d9def7a474b28ae' (2023-02-15)
  → 'github:NixOS/nixpkgs/ba0f52d80375147840b83f1511599fbe333be3ad' (2023-06-09)
2023-06-14 18:56:12 +02:00
wucke13
7b69afabbc Merge pull request #90 from rosenpass/dev/overhaul-ci
CLI improvements and CI refinery
2023-06-14 18:18:01 +02:00
wucke13
e24172d9b5 move if on upload pdf job in CI 2023-06-10 23:10:59 +02:00
wucke13
d01c96c1de add i686 system
This still excludes static builds due to a bug in oqs-sys.
Once oqs-sys is bumped to use liboqs 0.8, full 32-bit x86 support is viable.
2023-06-10 16:00:12 +02:00
wucke13
4a3b59fd15 refine cli of exchange command
This implements feedback from #87 on ambiguities of the CLI
2023-06-10 04:03:55 +02:00
wucke13
11d60bcced add GH-Actions based CI with cachix 2023-06-10 03:44:02 +02:00
wucke13
73a8489232 add private-key argument to cli parser
- fixes #72
2023-06-09 22:20:24 +02:00
Karolin Varner
2ac2c84c71 Trigger Website upload CI 2023-06-02 09:28:33 +02:00
Karolin Varner
a0f79478cc Trigger upload-doc CI job 2023-06-02 09:28:33 +02:00
Karolin Varner
7e6985fdc6 fix: Revert spell correction zeroized -> zeroed
This is an established term.
2023-06-01 11:51:27 +02:00
Steffen Vogel
b958eacaae fix: Typos in Rust code, readme and man pages 2023-06-01 11:51:27 +02:00
Karolin Varner
397a776c55 fix: Race condition due to concurrent handshake
After establishing a session in responder role, the peer
should abort ongoing handshakes in initiator role.

Also adds an extra wait period before creating an
initiation if peer had been the initiator in the previous
handshake. This makes sure that unless there are huge latencies,
there are no concurrent handshakes in the first place.

Fixes: #43
2023-05-26 11:46:00 +02:00
Karolin Varner
19fe7360d2 fix: Git directory detection should not print an error if we are not in a git repo 2023-05-26 11:46:00 +02:00
Karolin Varner
b29720b0c6 fix: Formatting 2023-05-23 22:26:56 +02:00
Karolin Varner
78e32a6f14 fix: Show cargo fmt errors 2023-05-23 22:26:56 +02:00
Karolin Varner
5f78857ff5 fix: Show warnings from git directory detection 2023-05-23 11:36:20 +02:00
Karolin Varner
69f62673a5 fix: Reintroduce ability to actually supply wireguard with keys
Regression introduced in b99d072879
due to forgetfulness
2023-05-23 11:26:01 +02:00
Karolin Varner
097fd0332d chore: Upgrade crate dependencies 2023-05-23 11:24:39 +02:00
Mullana
303c5a569c "key chaining..." to "chaining key..." (fixed) 2023-05-23 08:58:24 +02:00
Karolin Varner
7aa48b95af fix: Escape uses of angle brackets and pointy brackets in documentation
This was a regression introduced in b99d072879
which went unnoticed because of the broken CI

https://github.com/rosenpass/rosenpass/issues/62
2023-05-23 08:54:53 +02:00
Karolin Varner
229224d078 fix: Restore QC/doc CI job to operation
https://github.com/rosenpass/rosenpass/issues/62
https://github.com/rust-lang/rust/issues/108378
2023-05-23 08:54:53 +02:00
Karolin Varner
e12cd18a42 fix: Disable broken CI jobs
These are cross compilation static build jobs
which are nice to have but non-essential.

https://github.com/rosenpass/rosenpass/issues/62
2023-05-23 08:54:53 +02:00
Mullana
0b1a00a32e key chaining..." to "chaining key... (svg) 2023-05-23 00:30:00 +02:00
Mullana
7c3cd1acf6 "key chaining..." to "chaining key..." 2023-05-23 00:23:29 +02:00
Karolin Varner
3856d774ff chore: Move slides into their own repo 2023-05-22 11:43:31 +02:00
Karolin Varner
62fab066d4 feat: Restart host discovery on connection loss
This will retry other sockets and the host-name given on the
command-line when a connection loss is detected.
2023-05-22 11:42:51 +02:00
Karolin Varner
9469b62f58 fix: Host-path discovery
When rosenpass is started, we either know no peer address or we know a
hostname. How to contact this hostname may not be entirely clear because
we now have multiple sockets we could send on and DNS may return
multiple addresses.

To robustly handle host path discovery, we try each
socket-ip-combination in a round robin fashion; the struct stores the
offset of the last used combination internally and will continue
with the next combination on every call.
2023-05-22 11:42:51 +02:00
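A hedged sketch of that round-robin bookkeeping (the real struct in rosenpass differs; this only shows the offset-and-wrap-around idea):

```rust
use std::net::SocketAddr;

struct HostPathDiscovery {
    sockets: Vec<usize>,    // indices into the server's listen sockets
    addrs: Vec<SocketAddr>, // addresses DNS returned for the peer's hostname
    offset: usize,          // last (socket, address) combination we tried
}

impl HostPathDiscovery {
    /// Hand out the next socket/address combination, wrapping around.
    fn next_combination(&mut self) -> (usize, SocketAddr) {
        let total = self.sockets.len() * self.addrs.len();
        debug_assert!(total > 0);
        self.offset = (self.offset + 1) % total;
        let socket = self.sockets[self.offset / self.addrs.len()];
        let addr = self.addrs[self.offset % self.addrs.len()];
        (socket, addr)
    }
}

fn main() {
    let mut d = HostPathDiscovery {
        sockets: vec![0, 1],
        addrs: vec!["192.0.2.1:9999".parse().unwrap()],
        offset: 0,
    };
    for _ in 0..3 {
        println!("{:?}", d.next_combination());
    }
}
```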
Karolin Varner
f8bea94330 fix: Always send messages to a peer using the socket they contacted us with
To implement this it was necessary to introduce an `Endpoint` abstraction
over SocketAddr's that includes the information which socket was used.
2023-05-22 11:42:51 +02:00
Karolin Varner
f3c343c472 fix: Handle the various possible dual-stack configurations reliably 2023-05-22 11:42:51 +02:00
Karolin Varner
7154af52f9 chore: Indicate that the listen parameter can be given multiple times in the help 2023-05-22 11:42:51 +02:00
Karolin Varner
e03fed404f chore: Cleanup unneccesary debug output 2023-05-22 11:42:51 +02:00
Karolin Varner
42798699e4 fix: Adjust the rp(1) script to support the new rosenpass(1) command line parameters
The previous commit still introduces breaking changes;
this means we are now developing a 1.x.x version instead
of a 0.x.x version. We will create a 0.x.x development branch
where we might backport some of the features we are introducing now
2023-05-22 11:42:51 +02:00
wucke13
b99d072879 major rewrite of application server & frontend
- adds TOML-based configuration files
  - with example configurations in config-examples
- reimplements the arcane CLI argument parser as an automaton
- adds a new CLI focused around configuration files
- moves all file utility stuff from `main.rs` to `util.rs`
- moves all AppServer stuff to dedicated `app_server.rs`
- add mio for multi-listen-socket support (should fix #27)
- consistency: rename private to secret
2023-05-22 11:42:51 +02:00
wucke13
d5b2a9414f Merge pull request #53 from emilengler/invoke-man
invoke `man(1)` when requesting help
2023-04-27 09:32:43 +02:00
Emil Engler
13cc7e05ed invoke man(1) when requesting help
This commit invokes `man(1)` when requesting help and emits the built-in
manual, if the manual page is not found on the system.
2023-04-25 14:54:30 +02:00
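What "invoke man(1) and fall back to the built-in manual" can look like, as a sketch (the `MANPAGE` constant is a hypothetical placeholder for the embedded page, not the actual implementation):

```rust
use std::process::Command;

const MANPAGE: &str = "ROSENPASS(1) ..."; // hypothetical embedded copy of the page

fn show_help() {
    let shown = Command::new("man")
        .args(["1", "rosenpass"])
        .status()
        .map(|status| status.success())
        .unwrap_or(false);
    if !shown {
        // man(1) is missing or the page is not installed: emit the built-in manual
        print!("{MANPAGE}");
    }
}

fn main() {
    show_help();
}
```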
wucke13
096c811491 Merge pull request #58 from AliceOrunitia/alice/doc-upload
Alice/doc upload
2023-04-24 11:47:29 +02:00
wucke13
cefe9ce762 Merge pull request #59 from rosenpass/dev/use-naersk
move to naersk + fenix
2023-04-23 22:05:05 +02:00
wucke13
378fddb645 fix or exclude failing CI actions
Due to https://github.com/open-quantum-safe/liboqs-rust/issues/202 it is not
yet possible to build the static Rosenpass version for `i686`. The CI actions
which fail for this reason have been excluded for now. Furthermore, some of
the workflow names have been shortened for a better overview.
2023-04-23 17:19:31 +02:00
wucke13
695ef6a769 replace pkgs.rustPlatform with naersk + fenix
Now that fenix + naersk are used, we don't have the problem of hour-long
builds of a `pkgsStatic.rustc` running in qemu-aarch64. Thus, we can now
finally add these without a big penalty in CI runtime. In addition to
that, the i686 target is added as well.
2023-04-23 00:03:31 +02:00
Alice Bowman
b4d74d64f7 feat(website): upload man pages to website 2023-04-22 15:32:49 +02:00
Emil Engler
0456ded6b9 doc: add a manual page for rp(1) 2023-04-15 18:05:23 +02:00
wucke13
838fd19694 Merge pull request #52 from rosenpass/dev/new-release
chore: Release rosenpass version 0.1.2-rc.4
2023-04-14 09:40:33 +02:00
wucke13
94d57f2f87 chore: Release rosenpass version 0.1.2-rc.4 2023-04-13 19:52:09 +02:00
Emil Engler
279b3c49fc doc: add rosenpass.1 manual page
This commit adds a manual page for the rosenpass(1) utility written in
mdoc(7).
2023-04-11 20:00:02 +02:00
wucke13
9c40c77f71 Merge pull request #42 from rosenpass/dev/fix-#41
fix #41
2023-04-09 18:18:19 +02:00
wucke13
c79dffa627 fix #41
Adds a check for empty messages as well as a unit test verifying that
empty messages are handled as desired.
2023-04-09 17:54:51 +02:00
wucke13
b8f19c5510 remove multimatch macro and fix typo 2023-04-09 17:52:41 +02:00
wucke13
f459b91abf fix documentation 2023-04-09 17:52:41 +02:00
wucke13
801ce4cd34 add check for broken documentation to qc workflow 2023-04-09 17:52:41 +02:00
wucke13
a36da78bc8 Merge pull request #38 from rosenpass/dev/fix-small-todos
improve documentation
2023-04-05 16:54:05 +02:00
wucke13
df02f616bf remove code format snowflakes
this also enables the `cargo fmt` check in the flake
2023-04-05 16:35:31 +02:00
wucke13
87b08bcee1 rename SKEM -> StaticKEM & EKEM -> EphemeralKEM 2023-04-05 16:35:26 +02:00
wucke13
897fa3daf6 improve documentation
- fix key-exchange doctest example
- add more info on the CryptoServer struct
- add more doc-strings
2023-04-04 22:13:23 +02:00
wucke13
953b861b4c add rustfmt::skip attributes on _special_ code
related to https://github.com/rust-lang/rustfmt/issues/4306
2023-04-04 22:13:23 +02:00
wucke13
1a61a99575 rename protocol::Server -> protocol::CryptoServer 2023-04-04 22:13:12 +02:00
Karolin Varner
25a7a0736b feat(papers): Reorder RWPQC slides 2023-03-24 18:09:21 +09:00
Marei (peiTeX)
844e9b3c7e support abstract only documents 2023-03-22 15:39:54 +09:00
Karolin Varner
a723951c71 feat(papers): CrossFyre 2023 Submission abstract 2023-03-22 15:39:54 +09:00
Marei (peiTeX)
be9ac58bf9 enlarge images 2023-03-20 23:49:02 +09:00
Marei (peiTeX)
75853159fe fix enquote 2023-03-20 23:49:02 +09:00
Marei (peiTeX)
95aba257fd fix node alignment 2023-03-20 23:49:02 +09:00
Karolin Varner
34d0bab5c5 feat(papers): Add RWPQC 23 slides 2023-03-20 23:49:02 +09:00
Mullana
91d1986126 transparent background for key exchange CMYK PDF 2023-03-20 11:58:32 +01:00
Mullana
319785cf6e Transparent background for key exchange RGB PDF 2023-03-20 11:50:29 +01:00
Marei (peiTeX)
df5a6125cd small layout adjustments 2023-03-17 17:44:04 +01:00
Marei (peiTeX)
80697e6189 relative postioning in tikzpictures 2023-03-17 17:44:04 +01:00
Marei (peiTeX)
6212153c48 choose rgb images for slides 2023-03-17 17:44:04 +01:00
Marei (peiTeX)
4645ed5569 rule to rosenpass-pink 2023-03-17 17:44:04 +01:00
Karolin Varner
2aeb9067e2 feat(papers): Add YRCS talk slides 2023-03-17 17:44:04 +01:00
Benjamin Lipp
c64917fe2e Add LaTeX beamer template for talk 2023-03-17 17:44:04 +01:00
Karolin Varner
a011cc1e1c fix(whitepaper): Rollback adding an article to state, acknowledgement and replay
All of these are abstract so these are – in my view – zero articles.
https://www.toppr.com/guides/english/articles/omission-of-the-article
2023-03-09 07:57:31 +01:00
timothy mellor
ad75d2218c Proofreading for the whitepaper 2023-03-09 07:57:31 +01:00
124 changed files with 8779 additions and 3208 deletions

200
.ci/gen-workflow-files.nu Executable file

@@ -0,0 +1,200 @@
#!/usr/bin/env nu
use log *
# cd to git root
cd (git rev-parse --show-toplevel)
# check if a subject depends on a potential dependency
def depends [
subject:string # package to examine
maybe_dep:string # maybe a dependency of subject
] {
not ( nix why-depends --quiet --derivation $subject $maybe_dep | is-empty )
}
# get attribute names of the attribute set
def get-attr-names [
expr: # nix expression to get attrNames of
] {
nix eval --json $expr --apply builtins.attrNames | from json
}
def job-id [
system:string,
derivation:string,
] {
$"($system)---($derivation)"
}
# map from nixos system to github runner type
let systems_map = {
# aarch64-darwin
# aarch64-linux
i686-linux: ubuntu-latest,
x86_64-darwin: macos-13,
x86_64-linux: ubuntu-latest
}
let targets = (get-attr-names ".#packages"
| par-each {|system| { $system : (get-attr-names $".#packages.($system)") } }
| reduce {|it, acc| $acc | merge $it }
)
mut cachix_workflow = {
name: "Nix",
permissions: {contents: write},
on: {
pull_request: null,
push: {branches: [main]}
},
jobs: {},
}
mut release_workflow = {
name: "Release",
permissions: {contents: write},
on: { push: {tags: ["v*"]} },
jobs: {},
}
let runner_setup = [
{
uses: "actions/checkout@v3"
}
{
uses: "cachix/install-nix-action@v22",
with: { nix_path: "nixpkgs=channel:nixos-unstable" }
}
{
uses: "cachix/cachix-action@v12",
with: {
name: rosenpass,
authToken: "${{ secrets.CACHIX_AUTH_TOKEN }}"
}
}
]
for system in ($targets | columns) {
if ($systems_map | get -i $system | is-empty) {
log info $"skipping ($system), since there are no GH-Actions runners for it"
continue
}
# lookup the correct runner for $system
let runs_on = [ ($systems_map | get $system) ]
# add jobs for all derivations
let derivations = ($targets | get $system)
for derivation in $derivations {
if ($system == "i686-linux") and ($derivation | str contains "static") {
log info $"skipping ($system).($derivation), due to liboqs 0.8 not present in oqs-sys"
continue
}
if ($system == "i686-linux") and ($derivation | str contains "release-package") {
log info $"skipping ($system).($derivation), due to liboqs 0.8 not present in oqs-sys"
continue
}
# job_id for GH-Actions
let id = ( job-id $system $derivation )
# name displayed
let name = $"($system).($derivation)"
# collection of dependencies
# TODO currently only considers dependencies on the same $system
let needs = ($derivations
| filter {|it| $it != $derivation and $it != "default" } # filter out self and default
| par-each {|it| {
name: $it, # the other derivation
# does self depend on $it?
needed: (depends $".#packages.($system).($derivation)" $".#packages.($system).($it)")
} }
| filter {|it| $it.needed}
| each {|it| job-id $system $it.name}
)
mut new_job = {
name: $"Build ($name)",
"runs-on": $runs_on,
needs: $needs,
steps: ($runner_setup | append [
{
name: Build,
run: $"nix build .#packages.($system).($derivation) --print-build-logs"
}
])
}
$cachix_workflow.jobs = ($cachix_workflow.jobs | insert $id $new_job )
}
# add check job
$cachix_workflow.jobs = ($cachix_workflow.jobs | insert $"($system)---check" {
name: $"Run Nix checks on ($system)",
"runs-on": $runs_on,
steps: ($runner_setup | append {
name: Check,
run: "nix flake check . --print-build-logs"
})
})
# add release job
$release_workflow.jobs = ($release_workflow.jobs | insert $"($system)---release" {
name: $"Build release artifacts for ($system)",
"runs-on": $runs_on,
steps: ($runner_setup | append [
{
name: "Build release",
run: "nix build .#release-package --print-build-logs"
}
{
name: Release,
uses: "softprops/action-gh-release@v1",
with: {
draft: "${{ contains(github.ref_name, 'rc') }}",
prerelease: "${{ contains(github.ref_name, 'alpha') || contains(github.ref_name, 'beta') }}",
files: "result/*"
}
}
])
})
}
# add whitepaper job with upload
let system = "x86_64-linux"
$cachix_workflow.jobs = ($cachix_workflow.jobs | insert $"($system)---whitepaper-upload" {
name: $"Upload whitepaper ($system)",
"runs-on": ($systems_map | get $system),
"if": "${{ github.ref == 'refs/heads/main' }}",
steps: ($runner_setup | append [
{
name: "Git add git sha and commit",
run: "cd papers && ./tex/gitinfo2.sh && git add gitHeadInfo.gin"
}
{
name: Build,
run: $"nix build .#packages.($system).whitepaper --print-build-logs"
}
{
name: "Deploy PDF artifacts",
uses: "peaceiris/actions-gh-pages@v3",
with: {
github_token: "${{ secrets.GITHUB_TOKEN }}",
publish_dir: result/,
publish_branch: papers-pdf,
force_orphan: true
}
}
])
})
log info "saving nix-cachix workflow"
$cachix_workflow | to yaml | save --force .github/workflows/nix.yaml
$release_workflow | to yaml | save --force .github/workflows/release.yaml
log info "prettify generated yaml"
prettier -w .github/workflows/

49
.github/workflows/doc-upload.yml vendored Normal file

@@ -0,0 +1,49 @@
name: Update website docs
on:
push:
branches:
- main
paths:
- "doc/**"
jobs:
update-website:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v3
- name: Clone rosenpass-website repository
uses: actions/checkout@v3
with:
repository: rosenpass/rosenpass-website
ref: main
path: rosenpass-website
token: ${{ secrets.PRIVACC }}
- name: Copy docs to website repo
run: |
cp -R doc/* rosenpass-website/static/docs/
- name: Install mandoc
run: |
sudo apt-get update
sudo apt-get install -y mandoc
- name: Compile man pages to HTML
run: |
cd rosenpass-website/static/docs/
for file in *.1; do
mandoc -Thtml "$file" > "${file%.*}.html"
done
- name: Commit changes to website repo
uses: EndBug/add-and-commit@v9
with:
author_name: GitHub Actions
author_email: actions@github.com
message: Update docs
cwd: rosenpass-website/static/docs
github_token: ${{ secrets.PRIVACC }}


@@ -1,74 +1,346 @@
name: Nix Related Actions
name: Nix
permissions:
contents: write
on:
pull_request:
pull_request: null
push:
branches: [main]
branches:
- main
jobs:
build:
name: Build ${{ matrix.derivation }} on ${{ matrix.nix-system }}
i686-linux---default:
name: Build i686-linux.default
runs-on:
- nix
- ${{ matrix.nix-system }}
strategy:
fail-fast: false
matrix:
nix-system:
- x86_64-linux
# - aarch64-linux
derivation:
- rosenpass
- rosenpass-static
- rosenpass-oci-image
- rosenpass-static-oci-image
- proof-proverif
- whitepaper
- ubuntu-latest
needs:
- i686-linux---rosenpass
steps:
- uses: actions/checkout@v3
- name: Generate gitHeadInfo.gin for the whitepaper
if: ${{ matrix.derivation == 'whitepaper' }}
run: ( cd papers && ./tex/gitinfo2.sh && git add gitHeadInfo.gin )
- name: Build ${{ matrix.derivation }}@${{ matrix.nix-system }}
run: |
# build the package
nix build .#packages.${{ matrix.nix-system }}.${{ matrix.derivation }} --print-build-logs
# copy over the results
if [[ -f $(readlink --canonicalize result ) ]]; then
mkdir -- ${{ matrix.derivation }}-${{ matrix.nix-system }}
fi
cp --recursive -- $(readlink --canonicalize result) ${{ matrix.derivation }}-${{ matrix.nix-system }}
chmod --recursive ug+rw -- ${{ matrix.derivation }}-${{ matrix.nix-system }}
# add version information
git rev-parse --abbrev-ref HEAD > ${{ matrix.derivation }}-${{ matrix.nix-system }}/git-version
git rev-parse HEAD > ${{ matrix.derivation }}-${{ matrix.nix-system }}/git-sha
# override the `rp` script to keep compatible with non-nix systems
if [[ -f ${{ matrix.derivation }}-${{ matrix.nix-system }}/bin/rp ]]; then
cp --force -- rp ${{ matrix.derivation }}-${{ matrix.nix-system }}/bin/
fi
- name: Upload build results
uses: actions/upload-artifact@v3
- uses: cachix/install-nix-action@v22
with:
name: ${{ matrix.derivation }}-${{ matrix.nix-system }}
path: ${{ matrix.derivation }}-${{ matrix.nix-system }}
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
- name: Build
run: nix build .#packages.i686-linux.default --print-build-logs
i686-linux---rosenpass:
name: Build i686-linux.rosenpass
runs-on:
- ubuntu-latest
needs: []
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
- name: Build
run: nix build .#packages.i686-linux.rosenpass --print-build-logs
i686-linux---rosenpass-oci-image:
name: Build i686-linux.rosenpass-oci-image
runs-on:
- ubuntu-latest
needs:
- i686-linux---rosenpass
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
- name: Build
run: nix build .#packages.i686-linux.rosenpass-oci-image --print-build-logs
i686-linux---check:
name: Run Nix checks on i686-linux
runs-on:
- ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
- name: Check
run: nix flake check . --print-build-logs
x86_64-darwin---default:
name: Build x86_64-darwin.default
runs-on:
- macos-13
needs:
- x86_64-darwin---rosenpass
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
- name: Build
run: nix build .#packages.x86_64-darwin.default --print-build-logs
x86_64-darwin---release-package:
name: Build x86_64-darwin.release-package
runs-on:
- macos-13
needs:
- x86_64-darwin---rosenpass
- x86_64-darwin---rosenpass-oci-image
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
- name: Build
run: nix build .#packages.x86_64-darwin.release-package --print-build-logs
x86_64-darwin---rosenpass:
name: Build x86_64-darwin.rosenpass
runs-on:
- macos-13
needs: []
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
- name: Build
run: nix build .#packages.x86_64-darwin.rosenpass --print-build-logs
x86_64-darwin---rosenpass-oci-image:
name: Build x86_64-darwin.rosenpass-oci-image
runs-on:
- macos-13
needs:
- x86_64-darwin---rosenpass
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
- name: Build
run: nix build .#packages.x86_64-darwin.rosenpass-oci-image --print-build-logs
x86_64-darwin---check:
name: Run Nix checks on x86_64-darwin
runs-on:
- macos-13
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
- name: Check
run: nix flake check . --print-build-logs
x86_64-linux---default:
name: Build x86_64-linux.default
runs-on:
- ubuntu-latest
needs:
- x86_64-linux---rosenpass
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
- name: Build
run: nix build .#packages.x86_64-linux.default --print-build-logs
x86_64-linux---proof-proverif:
name: Build x86_64-linux.proof-proverif
runs-on:
- ubuntu-latest
needs:
- x86_64-linux---proverif-patched
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
- name: Build
run: nix build .#packages.x86_64-linux.proof-proverif --print-build-logs
x86_64-linux---proverif-patched:
name: Build x86_64-linux.proverif-patched
runs-on:
- ubuntu-latest
needs: []
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
- name: Build
run: nix build .#packages.x86_64-linux.proverif-patched --print-build-logs
x86_64-linux---release-package:
name: Build x86_64-linux.release-package
runs-on:
- ubuntu-latest
needs:
- x86_64-linux---rosenpass-static-oci-image
- x86_64-linux---rosenpass-static
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
- name: Build
run: nix build .#packages.x86_64-linux.release-package --print-build-logs
x86_64-linux---rosenpass:
name: Build x86_64-linux.rosenpass
runs-on:
- ubuntu-latest
needs: []
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
- name: Build
run: nix build .#packages.x86_64-linux.rosenpass --print-build-logs
x86_64-linux---rosenpass-oci-image:
name: Build x86_64-linux.rosenpass-oci-image
runs-on:
- ubuntu-latest
needs:
- x86_64-linux---rosenpass
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
- name: Build
run: nix build .#packages.x86_64-linux.rosenpass-oci-image --print-build-logs
x86_64-linux---rosenpass-static:
name: Build x86_64-linux.rosenpass-static
runs-on:
- ubuntu-latest
needs: []
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
- name: Build
run: nix build .#packages.x86_64-linux.rosenpass-static --print-build-logs
x86_64-linux---rosenpass-static-oci-image:
name: Build x86_64-linux.rosenpass-static-oci-image
runs-on:
- ubuntu-latest
needs:
- x86_64-linux---rosenpass-static
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
- name: Build
run: nix build .#packages.x86_64-linux.rosenpass-static-oci-image --print-build-logs
x86_64-linux---whitepaper:
name: Build x86_64-linux.whitepaper
runs-on:
- ubuntu-latest
needs: []
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
- name: Build
run: nix build .#packages.x86_64-linux.whitepaper --print-build-logs
x86_64-linux---check:
name: Run Nix checks on x86_64-linux
runs-on:
- ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
- name: Check
run: nix flake check . --print-build-logs
x86_64-linux---whitepaper-upload:
name: Upload whitepaper x86_64-linux
runs-on: ubuntu-latest
if: ${{ github.ref == 'refs/heads/main' }}
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
- name: Git add git sha and commit
run: cd papers && ./tex/gitinfo2.sh && git add gitHeadInfo.gin
- name: Build
run: nix build .#packages.x86_64-linux.whitepaper --print-build-logs
- name: Deploy PDF artifacts
if: ${{ matrix.derivation == 'whitepaper' && github.ref == 'refs/heads/main' }}
uses: peaceiris/actions-gh-pages@v3
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
publish_dir: ${{ matrix.derivation }}-${{ matrix.nix-system }}
publish_dir: result/
publish_branch: papers-pdf
force_orphan: true
checks:
name: Run Nix checks
runs-on: nixos
needs: build
steps:
- uses: actions/checkout@v3
- name: Run Checks
run: nix flake check . --print-build-logs


@@ -1,4 +1,4 @@
name: Quality Control
name: QC
on:
pull_request:
push:
@@ -12,15 +12,31 @@ jobs:
prettier:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- uses: actionsx/prettier@v2
with:
args: --check .
shellcheck:
name: Shellcheck
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Run ShellCheck
uses: ludeeus/action-shellcheck@master
cargo-audit:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions-rs/audit-check@v1
with:
token: ${{ secrets.GITHUB_TOKEN }}
cargo-clippy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- uses: actions/cache@v3
with:
path: |
@@ -31,17 +47,104 @@ jobs:
target/
key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
- run: rustup component add clippy
- name: Install xmllint
- name: Install libsodium
run: sudo apt-get install -y libsodium-dev
- uses: actions-rs/clippy-check@v1
with:
token: ${{ secrets.GITHUB_TOKEN }}
args: --all-features
cargo-audit:
cargo-doc:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v1
- uses: actions-rs/audit-check@v1
- uses: actions/checkout@v3
- uses: actions/cache@v3
with:
token: ${{ secrets.GITHUB_TOKEN }}
path: |
~/.cargo/bin/
~/.cargo/registry/index/
~/.cargo/registry/cache/
~/.cargo/git/db/
target/
key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
- run: rustup component add clippy
- name: Install libsodium
run: sudo apt-get install -y libsodium-dev
# `--no-deps` used as a workaround for a rust compiler bug. See:
# - https://github.com/rosenpass/rosenpass/issues/62
# - https://github.com/rust-lang/rust/issues/108378
- run: RUSTDOCFLAGS="-D warnings" cargo doc --no-deps --document-private-items
cargo-test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/cache@v3
with:
path: |
~/.cargo/bin/
~/.cargo/registry/index/
~/.cargo/registry/cache/
~/.cargo/git/db/
target/
key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
- name: Install libsodium
run: sudo apt-get install -y libsodium-dev
# liboqs requires quite a lot of stack memory, thus we adjust
# the default stack size picked for new threads (which is used
# by `cargo test`) to be _big enough_. Setting it to 8 MiB
- run: RUST_MIN_STACK=8388608 cargo test
cargo-test-nix-devshell-x86_64-linux:
runs-on:
- ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/cache@v3
with:
path: |
~/.cargo/bin/
~/.cargo/registry/index/
~/.cargo/registry/cache/
~/.cargo/git/db/
target/
key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
- uses: cachix/install-nix-action@v21
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
- run: nix develop --command cargo test
cargo-fuzz:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/cache@v3
with:
path: |
~/.cargo/bin/
~/.cargo/registry/index/
~/.cargo/registry/cache/
~/.cargo/git/db/
target/
key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
- name: Install libsodium
run: sudo apt-get install -y libsodium-dev
- name: Install nightly toolchain
run: |
rustup toolchain install nightly
rustup default nightly
- name: Install cargo-fuzz
run: cargo install cargo-fuzz
- name: Run fuzzing
run: |
cargo fuzz run fuzz_aead_enc_into -- -max_total_time=5
cargo fuzz run fuzz_blake2b -- -max_total_time=5
cargo fuzz run fuzz_handle_msg -- -max_total_time=5
ulimit -s 8192000 && RUST_MIN_STACK=33554432000 && cargo fuzz run fuzz_kyber_encaps -- -max_total_time=5
cargo fuzz run fuzz_mceliece_encaps -- -max_total_time=5
cargo fuzz run fuzz_box_sodium_alloc -- -max_total_time=5
cargo fuzz run fuzz_vec_sodium_alloc -- -max_total_time=5


@@ -3,28 +3,69 @@ permissions:
contents: write
on:
push:
tags: ["v*"]
tags:
- v*
jobs:
release:
name: Release for ${{ matrix.nix-system }}
i686-linux---release:
name: Build release artifacts for i686-linux
runs-on:
- nix
- ${{ matrix.nix-system }}
strategy:
fail-fast: false
matrix:
nix-system:
- x86_64-linux
# - aarch64-linux
- ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Build release-package for ${{ matrix.nix-system }}
- uses: cachix/install-nix-action@v22
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
- name: Build release
run: nix build .#release-package --print-build-logs
- name: Release
uses: softprops/action-gh-release@v1
with:
draft: ${{ contains(github.ref_name, 'rc') }}
prerelease: ${{ contains(github.ref_name, 'alpha') || contains(github.ref_name, 'beta') }}
files: |
result/*
files: result/*
x86_64-darwin---release:
name: Build release artifacts for x86_64-darwin
runs-on:
- macos-13
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
- name: Build release
run: nix build .#release-package --print-build-logs
- name: Release
uses: softprops/action-gh-release@v1
with:
draft: ${{ contains(github.ref_name, 'rc') }}
prerelease: ${{ contains(github.ref_name, 'alpha') || contains(github.ref_name, 'beta') }}
files: result/*
x86_64-linux---release:
name: Build release artifacts for x86_64-linux
runs-on:
- ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
- name: Build release
run: nix build .#release-package --print-build-logs
- name: Release
uses: softprops/action-gh-release@v1
with:
draft: ${{ contains(github.ref_name, 'rc') }}
prerelease: ${{ contains(github.ref_name, 'alpha') || contains(github.ref_name, 'beta') }}
files: result/*

17
.gitlab-ci.yml Normal file

@@ -0,0 +1,17 @@
# TODO use CI_JOB_TOKEN once https://gitlab.com/groups/gitlab-org/-/epics/6310 is fixed
pull-from-gh:
only: ["schedules"]
variables:
REMOTE: "https://github.com/rosenpass/rosenpass.git"
LOCAL: " git@gitlab.com:rosenpass/rosenpass.git"
GIT_STRATEGY: none
before_script:
- mkdir ~/.ssh/
- echo "$SSH_KNOWN_HOSTS" > ~/.ssh/known_hosts
- echo "$REPO_SSH_KEY" > ~/.ssh/id_ed25519
- chmod 600 --recursive ~/.ssh/
- git config --global user.email "ci@gitlab.com"
- git config --global user.name "CI"
script:
- git clone --mirror $REMOTE rosenpass
- cd rosenpass && git push --mirror $LOCAL

1271
Cargo.lock generated

File diff suppressed because it is too large


@@ -1,35 +1,61 @@
[package]
name = "rosenpass"
version = "0.1.1"
authors = ["Karolin Varner <karo@cupdev.net>", "wucke13 <wucke13@gmail.com>"]
edition = "2021"
license = "MIT OR Apache-2.0"
description = "Build post-quantum-secure VPNs with WireGuard!"
homepage = "https://rosenpass.eu/"
repository = "https://github.com/rosenpass/rosenpass"
readme = "readme.md"
[workspace]
resolver = "2"
[[bench]]
name = "handshake"
harness = false
members = [
"rosenpass",
"cipher-traits",
"ciphers",
"util",
"constant-time",
"sodium",
"oqs",
"to",
"fuzz",
"secret-memory",
"lenses",
"rosenpass-log",
]
[dependencies]
anyhow = { version = "1.0.52", features = ["backtrace"] }
base64 = "0.13.0"
clap = { version = "3.0.0", features = ["yaml"] }
static_assertions = "1.1.0"
memoffset = "0.6.5"
libsodium-sys-stable = { version = "1.19.26", features = ["use-pkg-config"] }
oqs-sys = { version = "0.7.1", default-features = false, features = ['classic_mceliece', 'kyber'] }
lazy_static = "1.4.0"
thiserror = "1.0.38"
paste = "1.0.11"
log = { version = "0.4.17", optional = true }
env_logger = { version = "0.10.0", optional = true }
default-members = [
"rosenpass"
]
[dev-dependencies]
criterion = "0.3.5"
[workspace.metadata.release]
# ensure that adding `--package` as argument to `cargo release` still creates version tags in the form of `vx.y.z`
tag-prefix = ""
[workspace.dependencies]
rosenpass = { path = "rosenpass" }
rosenpass-util = { path = "util" }
rosenpass-constant-time = { path = "constant-time" }
rosenpass-sodium = { path = "sodium" }
rosenpass-cipher-traits = { path = "cipher-traits" }
rosenpass-ciphers = { path = "ciphers" }
rosenpass-to = { path = "to" }
rosenpass-secret-memory = { path = "secret-memory" }
rosenpass-oqs = { path = "oqs" }
rosenpass-lenses = { path = "lenses" }
criterion = "0.4.0"
test_bin = "0.4.0"
[features]
default = ["log", "env_logger"]
libfuzzer-sys = "0.4"
stacker = "0.1.15"
doc-comment = "0.3.3"
base64 = "0.21.5"
zeroize = "1.7.0"
memoffset = "0.9.0"
lazy_static = "1.4.0"
thiserror = "1.0.50"
paste = "1.0.14"
env_logger = "0.10.1"
toml = "0.7.8"
static_assertions = "1.1.0"
allocator-api2 = "0.2.16"
rand = "0.8.5"
log = { version = "0.4.20" }
clap = { version = "4.4.10", features = ["derive"] }
serde = { version = "1.0.193", features = ["derive"] }
arbitrary = { version = "1.3.2", features = ["derive"] }
anyhow = { version = "1.0.75", features = ["backtrace", "std"] }
mio = { version = "0.8.9", features = ["net", "os-poll"] }
libsodium-sys-stable = { version = "1.20.4", features = ["use-pkg-config"] }
oqs-sys = { version = "0.8", default-features = false, features = ['classic_mceliece', 'kyber'] }

12
cipher-traits/Cargo.toml Normal file
View File

@@ -0,0 +1,12 @@
[package]
name = "rosenpass-cipher-traits"
authors = ["Karolin Varner <karo@cupdev.net>", "wucke13 <wucke13@gmail.com>"]
version = "0.1.0"
edition = "2021"
license = "MIT OR Apache-2.0"
description = "Rosenpass internal traits for cryptographic primitives"
homepage = "https://rosenpass.eu/"
repository = "https://github.com/rosenpass/rosenpass"
readme = "readme.md"
[dependencies]

5
cipher-traits/readme.md Normal file
View File

@@ -0,0 +1,5 @@
# Rosenpass internal cipher traits
Rosenpass internal library providing traits for cryptographic primitives.
This is an internal library; no guarantee is made about its API at this point in time.

47
cipher-traits/src/kem.rs Normal file
View File

@@ -0,0 +1,47 @@
//! Traits and implementations for Key Encapsulation Mechanisms (KEMs)
//!
//! KEMs are the interface provided by almost all post-quantum
//! secure key exchange mechanisms.
//!
//! Conceptually KEMs are akin to public-key encryption, but instead of encrypting
//! arbitrary data, KEMs are limited to the transmission of keys, randomly chosen
//! during encapsulation.
//!
//! The [Kem] trait describes the basic API offered by a Key Encapsulation
//! Mechanism. Two implementations for it are provided, [StaticKEM] and [EphemeralKEM].
use std::result::Result;
/// Key Encapsulation Mechanism
///
/// The KEM interface defines three operations: Key generation, key encapsulation and key
/// decapsulation.
pub trait Kem {
type Error;
/// Secret key length
const SK_LEN: usize;
/// Public Key length
const PK_LEN: usize;
/// Ciphertext length
const CT_LEN: usize;
/// Shared Secret length
const SHK_LEN: usize;
/// Generate a keypair consisting of secret key (`sk`) and public key (`pk`)
///
/// `keygen() -> sk, pk`
fn keygen(sk: &mut [u8], pk: &mut [u8]) -> Result<(), Self::Error>;
/// From a public key (`pk`), generate a shared key (`shk`, for local use)
/// and a cipher text (`ct`, to be sent to the owner of the `pk`).
///
/// `encaps(pk) -> shk, ct`
fn encaps(shk: &mut [u8], ct: &mut [u8], pk: &[u8]) -> Result<(), Self::Error>;
/// From a secret key (`sk`) and a cipher text (`ct`) derive a shared key
/// (`shk`)
///
/// `decaps(sk, ct) -> shk`
fn decaps(shk: &mut [u8], sk: &[u8], ct: &[u8]) -> Result<(), Self::Error>;
}
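A minimal sketch of how a consuming crate might drive this trait for any implementor `K` (the round-trip helper below is illustrative and not part of the crate):
use rosenpass_cipher_traits::Kem;
// Illustrative only: generate a keypair, encapsulate, then decapsulate again.
fn kem_roundtrip<K: Kem>() -> Result<(), K::Error> {
    let (mut sk, mut pk) = (vec![0u8; K::SK_LEN], vec![0u8; K::PK_LEN]);
    K::keygen(&mut sk, &mut pk)?;
    // Sender side: derive a shared key and a ciphertext from the public key.
    let (mut shk_tx, mut ct) = (vec![0u8; K::SHK_LEN], vec![0u8; K::CT_LEN]);
    K::encaps(&mut shk_tx, &mut ct, &pk)?;
    // Receiver side: recover the same shared key from secret key and ciphertext.
    let mut shk_rx = vec![0u8; K::SHK_LEN];
    K::decaps(&mut shk_rx, &sk, &ct)?;
    assert_eq!(shk_tx, shk_rx);
    Ok(())
}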

2
cipher-traits/src/lib.rs Normal file
View File

@@ -0,0 +1,2 @@
mod kem;
pub use kem::Kem;

20
ciphers/Cargo.toml Normal file
View File

@@ -0,0 +1,20 @@
[package]
name = "rosenpass-ciphers"
authors = ["Karolin Varner <karo@cupdev.net>", "wucke13 <wucke13@gmail.com>"]
version = "0.1.0"
edition = "2021"
license = "MIT OR Apache-2.0"
description = "Rosenpass internal ciphers and other cryptographic primitives used by rosenpass."
homepage = "https://rosenpass.eu/"
repository = "https://github.com/rosenpass/rosenpass"
readme = "readme.md"
[dependencies]
anyhow = { workspace = true }
rosenpass-sodium = { workspace = true }
rosenpass-to = { workspace = true }
rosenpass-constant-time = { workspace = true }
rosenpass-secret-memory = { workspace = true }
rosenpass-oqs = { workspace = true }
static_assertions = { workspace = true }
zeroize = { workspace = true }

5
ciphers/readme.md Normal file
View File

@@ -0,0 +1,5 @@
# Rosenpass internal cryptographic primitives
Ciphers and other cryptographic primitives used by rosenpass.
This is an internal library; no guarantee is made about its API at this point in time.

109
ciphers/src/hash_domain.rs Normal file
View File

@@ -0,0 +1,109 @@
use anyhow::Result;
use rosenpass_secret_memory::Secret;
use rosenpass_to::To;
use crate::subtle::incorrect_hmac_blake2b as hash;
pub use hash::KEY_LEN;
// TODO Use a proper Dec interface
#[derive(Clone, Debug)]
pub struct HashDomain([u8; KEY_LEN]);
#[derive(Clone, Debug)]
pub struct HashDomainNamespace([u8; KEY_LEN]);
#[derive(Clone, Debug)]
pub struct SecretHashDomain(Secret<KEY_LEN>);
#[derive(Clone, Debug)]
pub struct SecretHashDomainNamespace(Secret<KEY_LEN>);
impl HashDomain {
pub fn zero() -> Self {
Self([0u8; KEY_LEN])
}
pub fn dup(self) -> HashDomainNamespace {
HashDomainNamespace(self.0)
}
pub fn turn_secret(self) -> SecretHashDomain {
SecretHashDomain(Secret::from_slice(&self.0))
}
// TODO: Protocol! Use domain separation to ensure that
pub fn mix(self, v: &[u8]) -> Result<Self> {
Ok(Self(hash::hash(&self.0, v).collect::<[u8; KEY_LEN]>()?))
}
pub fn mix_secret<const N: usize>(self, v: Secret<N>) -> Result<SecretHashDomain> {
SecretHashDomain::invoke_primitive(&self.0, v.secret())
}
pub fn into_value(self) -> [u8; KEY_LEN] {
self.0
}
}
impl HashDomainNamespace {
pub fn mix(&self, v: &[u8]) -> Result<HashDomain> {
Ok(HashDomain(
hash::hash(&self.0, v).collect::<[u8; KEY_LEN]>()?,
))
}
pub fn mix_secret<const N: usize>(&self, v: Secret<N>) -> Result<SecretHashDomain> {
SecretHashDomain::invoke_primitive(&self.0, v.secret())
}
}
impl SecretHashDomain {
pub fn invoke_primitive(k: &[u8], d: &[u8]) -> Result<SecretHashDomain> {
let mut r = SecretHashDomain(Secret::zero());
hash::hash(k, d).to(r.0.secret_mut())?;
Ok(r)
}
pub fn zero() -> Self {
Self(Secret::zero())
}
pub fn dup(self) -> SecretHashDomainNamespace {
SecretHashDomainNamespace(self.0)
}
pub fn danger_from_secret(k: Secret<KEY_LEN>) -> Self {
Self(k)
}
pub fn mix(self, v: &[u8]) -> Result<SecretHashDomain> {
Self::invoke_primitive(self.0.secret(), v)
}
pub fn mix_secret<const N: usize>(self, v: Secret<N>) -> Result<SecretHashDomain> {
Self::invoke_primitive(self.0.secret(), v.secret())
}
pub fn into_secret(self) -> Secret<KEY_LEN> {
self.0
}
pub fn into_secret_slice(mut self, v: &[u8], dst: &[u8]) -> Result<()> {
hash::hash(v, dst).to(self.0.secret_mut())
}
}
impl SecretHashDomainNamespace {
pub fn mix(&self, v: &[u8]) -> Result<SecretHashDomain> {
SecretHashDomain::invoke_primitive(self.0.secret(), v)
}
pub fn mix_secret<const N: usize>(&self, v: Secret<N>) -> Result<SecretHashDomain> {
SecretHashDomain::invoke_primitive(self.0.secret(), v.secret())
}
// TODO: This entire API is not very nice; we need this for biscuits, but
// it might be better to extract a special "biscuit"
// labeled subkey and reinitialize the chain with this
pub fn danger_into_secret(self) -> Secret<KEY_LEN> {
self.0
}
}
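A short sketch of the intended chaining pattern, using made-up labels (the protocol itself defines its own fixed label set):
use anyhow::Result;
use rosenpass_ciphers::hash_domain::{HashDomain, KEY_LEN};
// Illustrative only: derive two independent subkeys under a common namespace.
fn derive_subkeys() -> Result<([u8; KEY_LEN], [u8; KEY_LEN])> {
    let ns = HashDomain::zero().mix(b"example protocol")?.dup();
    let handshake = ns.mix(b"handshake")?.into_value();
    let biscuit = ns.mix(b"biscuit")?.into_value();
    Ok((handshake, biscuit))
}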

29
ciphers/src/lib.rs Normal file
View File

@@ -0,0 +1,29 @@
use static_assertions::const_assert;
pub mod subtle;
pub const KEY_LEN: usize = 32;
const_assert!(KEY_LEN == aead::KEY_LEN);
const_assert!(KEY_LEN == xaead::KEY_LEN);
const_assert!(KEY_LEN == hash_domain::KEY_LEN);
/// Authenticated encryption with associated data
pub mod aead {
pub use rosenpass_sodium::aead::chacha20poly1305_ietf::{
decrypt, encrypt, KEY_LEN, NONCE_LEN, TAG_LEN,
};
}
/// Authenticated encryption with associated data with a constant nonce
pub mod xaead {
pub use rosenpass_sodium::aead::xchacha20poly1305_ietf::{
decrypt, encrypt, KEY_LEN, NONCE_LEN, TAG_LEN,
};
}
pub mod hash_domain;
pub mod kem {
pub use rosenpass_oqs::ClassicMceliece460896 as StaticKem;
pub use rosenpass_oqs::Kyber512 as EphemeralKem;
}

View File

@@ -0,0 +1,44 @@
use anyhow::ensure;
use rosenpass_constant_time::xor;
use rosenpass_sodium::hash::blake2b;
use rosenpass_to::{ops::copy_slice, with_destination, To};
use zeroize::Zeroizing;
pub const KEY_LEN: usize = 32;
pub const KEY_MIN: usize = KEY_LEN;
pub const KEY_MAX: usize = KEY_LEN;
pub const OUT_MIN: usize = blake2b::OUT_MIN;
pub const OUT_MAX: usize = blake2b::OUT_MAX;
/// This is a woefully incorrect implementation of hmac_blake2b.
/// See <https://github.com/rosenpass/rosenpass/issues/68#issuecomment-1563612222>
///
/// It accepts 32 byte keys, exclusively.
///
/// This will be replaced, likely by Keccak, at some point soon.
/// <https://github.com/rosenpass/rosenpass/pull/145>
#[inline]
pub fn hash<'a>(key: &'a [u8], data: &'a [u8]) -> impl To<[u8], anyhow::Result<()>> + 'a {
const IPAD: [u8; KEY_LEN] = [0x36u8; KEY_LEN];
const OPAD: [u8; KEY_LEN] = [0x5Cu8; KEY_LEN];
with_destination(|out: &mut [u8]| {
// Not bothering with padding; the implementation
// uses appropriately sized keys.
ensure!(key.len() == KEY_LEN);
type Key = Zeroizing<[u8; KEY_LEN]>;
let mut tmp_key = Key::default();
copy_slice(key).to(tmp_key.as_mut());
xor(&IPAD).to(tmp_key.as_mut());
let mut outer_data = Key::default();
blake2b::hash(tmp_key.as_ref(), data).to(outer_data.as_mut())?;
copy_slice(key).to(tmp_key.as_mut());
xor(&OPAD).to(tmp_key.as_mut());
blake2b::hash(tmp_key.as_ref(), outer_data.as_ref()).to(out)?;
Ok(())
})
}
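A usage sketch from a consuming crate (the helper below and its signature are illustrative; keys must be exactly KEY_LEN bytes):
use anyhow::Result;
use rosenpass_ciphers::subtle::incorrect_hmac_blake2b as hash;
use rosenpass_to::To;
// Illustrative only: compute a 32-byte keyed hash into a stack buffer.
fn keyed_hash(key: &[u8; hash::KEY_LEN], data: &[u8]) -> Result<[u8; hash::KEY_LEN]> {
    let mut out = [0u8; hash::KEY_LEN];
    hash::hash(key, data).to(&mut out)?;
    Ok(out)
}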

View File

@@ -0,0 +1 @@
pub mod incorrect_hmac_blake2b;

2
config-examples/.gitignore vendored Normal file
View File

@@ -0,0 +1,2 @@
peer-*-*-key
peer-*-out

View File

@@ -0,0 +1,18 @@
public_key = "peer-a-public-key"
secret_key = "peer-a-secret-key"
listen = ["[::]:10001"]
verbosity = "Quiet"
[[peers]]
public_key = "peer-b-public-key"
endpoint = "localhost:10002"
key_out = "peer-a-rp-out-key"
# exchange_command = [
# "wg",
# "set",
# "wg0",
# "peer",
# "<PEER_ID>",
# "preshared-key",
# "/dev/stdin",
# ]

View File

@@ -0,0 +1,18 @@
public_key = "peer-b-public-key"
secret_key = "peer-b-secret-key"
listen = ["[::]:10002"]
verbosity = "Quiet"
[[peers]]
public_key = "peer-a-public-key"
endpoint = "localhost:10001"
key_out = "peer-b-rp-out-key"
# exchange_command = [
# "wg",
# "set",
# "wg0",
# "peer",
# "<PEER_ID>",
# "preshared-key",
# "/dev/stdin",
# ]

15
constant-time/Cargo.toml Normal file
View File

@@ -0,0 +1,15 @@
[package]
name = "rosenpass-constant-time"
version = "0.1.0"
authors = ["Karolin Varner <karo@cupdev.net>", "wucke13 <wucke13@gmail.com>"]
edition = "2021"
license = "MIT OR Apache-2.0"
description = "Rosenpass internal utilities for constant time crypto implementations"
homepage = "https://rosenpass.eu/"
repository = "https://github.com/rosenpass/rosenpass"
readme = "readme.md"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
rosenpass-to = { workspace = true }

5
constant-time/readme.md Normal file
View File

@@ -0,0 +1,5 @@
# Rosenpass constant time library
Rosenpass internal library providing basic constant-time operations.
This is an internal library; no guarantee is made about its API at this point in time.

26
constant-time/src/lib.rs Normal file
View File

@@ -0,0 +1,26 @@
use rosenpass_to::{with_destination, To};
/// Xors the source into the destination
///
/// # Examples
///
/// ```
/// use rosenpass_constant_time::xor;
/// use rosenpass_to::To;
/// assert_eq!(
/// xor(b"world").to_this(|| b"hello".to_vec()),
/// b"\x1f\n\x1e\x00\x0b");
/// ```
///
/// # Panics
///
/// If source and destination are of different sizes.
#[inline]
pub fn xor<'a>(src: &'a [u8]) -> impl To<[u8], ()> + 'a {
with_destination(|dst: &mut [u8]| {
assert!(src.len() == dst.len());
for (dv, sv) in dst.iter_mut().zip(src.iter()) {
*dv ^= *sv;
}
})
}

114
doc/rosenpass.1 Normal file
View File

@@ -0,0 +1,114 @@
.Dd $Mdocdate$
.Dt ROSENPASS 1
.Os
.Sh NAME
.Nm rosenpass
.Nd builds post-quantum-secure VPNs
.Sh SYNOPSIS
.Nm
.Op COMMAND
.Op Ar OPTIONS ...
.Op Ar ARGS ...
.Sh DESCRIPTION
.Nm
performs cryptographic key exchanges that are secure against quantum-computers
and then outputs the keys.
These keys can then be passed to various services, such as WireGuard or other
VPN services, as pre-shared keys to achieve security against attackers with
quantum computers.
.Pp
This is a research project and quantum computers are not thought to become
practical in fewer than ten years.
If you are not specifically tasked with developing post-quantum secure systems,
you probably do not need this tool.
.Ss COMMANDS
.Bl -tag -width Ds
.It Ar keygen private-key <file-path> public-key <file-path>
Generate a keypair to use in the exchange command later.
Send the public-key file to your communication partner and keep the private-key
file secret!
.It Ar exchange private-key <file-path> public-key <file-path> [ OPTIONS ] PEERS
Start a process to exchange keys with the specified peers.
You should specify at least one peer.
.Pp
Its
.Ar OPTIONS
are as follows:
.Bl -tag -width Ds
.It Ar listen <ip>[:<port>]
Instructs
.Nm
to listen on the specified interface and port.
By default,
.Nm
will listen on all interfaces and select a random port.
.It Ar verbose
Extra logging.
.El
.El
.Ss PEER
Each
.Ar PEER
is defined as follows:
.Qq peer public-key <file-path> [endpoint <ip>[:<port>]] [preshared-key <file-path>] [outfile <file-path>] [wireguard <dev> <peer> <extra_params>]
.Pp
Providing a
.Ar PEER
instructs
.Nm
to exchange keys with the given peer and write the resulting PSK into the given
output file.
You must either specify the outfile or wireguard output option.
.Pp
The parameters of
.Ar PEER
are as follows:
.Bl -tag -width Ds
.It Ar endpoint <ip>[:<port>]
Specifies the address where the peer can be reached.
This will be automatically updated after the first successful key exchange with
the peer.
If this is unspecified, the peer must initiate the connection.
.It Ar preshared-key <file-path>
You may specify a pre-shared key which will be mixed into the final secret.
.It Ar outfile <file-path>
You may specify a file to write the exchanged keys to.
If this option is specified,
.Nm
will write a notification to standard out every time the key is updated.
.It Ar wireguard <dev> <peer> <extra_params>
This allows you to directly specify a wireguard peer to deploy the
pre-shared-key to.
You may specify extra parameters you would pass to
.Qq wg set
besides the preshared-key parameter which is used by
.Nm .
This makes it possible to add peers entirely from
.Nm .
.El
.Sh EXIT STATUS
.Ex -std
.Sh SEE ALSO
.Xr rp 1 ,
.Xr wg 1
.Rs
.%A Karolin Varner
.%A Benjamin Lipp
.%A Wanja Zaeske
.%A Lisa Schmidt
.%D 2023
.%T Rosenpass
.%U https://rosenpass.eu/whitepaper.pdf
.Re
.Sh STANDARDS
This tool is the reference implementation of the Rosenpass protocol, as
specified within the whitepaper referenced above.
.Sh AUTHORS
Rosenpass was created by Karolin Varner, Benjamin Lipp, Wanja Zaeske,
Marei Peischl, Stephan Ajuvo, and Lisa Schmidt.
.Pp
This manual page was written by
.An Emil Engler
.Sh BUGS
The bugs are tracked at
.Lk https://github.com/rosenpass/rosenpass/issues .

119
doc/rp.1 Normal file
View File

@@ -0,0 +1,119 @@
.Dd $Mdocdate$
.Dt RP 1
.Os
.Sh NAME
.Nm rp
.Nd high-level interface to rosenpass
.Sh SYNOPSIS
.Nm
.Op Ar explain
.Op Ar verbose
.Ar genkey Ar ... | Ar pubkey ... | Ar exchange ...
.Nm
.Op ...
.Ar genkey PRIVATE_KEYS_DIR
.Nm
.Op ...
.Ar pubkey Ar PRIVATE_KEYS_DIR Ar PUBLIC_KEYS_DIR
.Nm
.Op ...
.\" Splitting this across several lines
.Ar exchange Ar PRIVATE_KEYS_DIR
.Op dev <device>
.Op listen <ip>:<port>
.\" Because the peer argument is complicated, it would be heel to represent it
.\" in mdoc... Using an ugly hack instead, thereby losing semantic.
[peer PUBLIC_KEYS_DIR [endpoint <ip>:<port>] [persistent-keepalive <interval>]
[allowed-ips <ip1>/<cidr1>[,<ip2>/<cidr2>] ...]] ...
.Sh DESCRIPTION
The
.Nm
program
is used to build a VPN with WireGuard and Rosenpass.
.Pp
The optional
.Op explain
and
.Op verbose
options can be used to obtain further help or to enable a detailed view on the
operations, respectively.
.Ss COMMANDS
.Bl -tag -width Ds
.It Ar genkey Ar PRIVATE_KEYS_DIR
Creates a new directory with appropriate permissions and generates all the
necessary private keys required for a peer to participate in a rosenpass
connection.
.It Ar pubkey Ar PRIVATE_KEYS_DIR Ar PUBLIC_KEYS_DIR
Creates a fresh directory at
.Ar PUBLIC_KEYS_DIR ,
which contains the extracted public keys from the private keys generated by
.Ar genkey
and located inside
.Ar PRIVATE_KEYS_DIR .
.It Ar exchange Ar PRIVATE_KEYS_DIR [dev <device>] [listen <ip>:<port>] [PEERS]
Starts the VPN on interface
.Ar device ,
listening on the provided IP and port combination, allowing connections from
.Ar PEERS .
.El
.Sh EXIT STATUS
.Ex -std
.Sh EXAMPLES
In this example, we will assume that the server has an interface bound to
192.168.0.1, that accepts incoming connections on port 9999/UDP for Rosenpass
and port 10000/UDP for WireGuard.
.Pp
To create a VPN connection, start by generating secret keys on both hosts.
.Bd -literal -offset indent
rp genkey server.rosenpass-secret
rp genkey client.rosenpass-secret
.Ed
.Pp
Extract the public keys:
.Bd -literal -offset indent
rp pubkey server.rosenpass-secret server.rosenpass-public
rp pubkey client.rosenpass-secret client.rosenpass-public
.Ed
.Pp
Copy the
.Qq -public
directories to the other peers and then start the VPN.
On the server:
.Bd -literal -offset indent
sudo rp exchange server.rosenpass-secret dev rosenpass0 listen 192.168.0.1:9999 \\
peer client.rosenpass-public allowed-ips fe80::/64
.Ed
.Pp
On the client:
.Bd -literal -offset indent
sudo rp exchange client.rosenpass-secret dev rosenpass0 \\
peer server.rosenpass-public endpoint 192.168.0.1:9999 allowed-ips fe80::/64
.Ed
.Pp
Assign IP addresses:
.Bd -literal -offset indent
sudo ip a add fe80::1/64 dev rosenpass0 # Server
sudo ip a add fe80::2/64 dev rosenpass0 # Client
.Ed
.Pp
Test the connection by pinging the server on the client machine:
.Bd -literal -offset indent
ping fe80::1%rosenpass0 # Client
.Ed
.Pp
You can watch how rosenpass replaces the WireGuard PSK with the following:
.Bd -literal -offset indent
watch -n 0.2 'wg show all; wg show all preshared-keys'
.Ed
.Sh SEE ALSO
.Xr rosenpass 1 ,
.Xr wg 1
.Sh AUTHORS
Rosenpass was created by Karolin Varner, Benjamin Lipp, Wanja Zaeske,
Marei Peischl, Stephan Ajuvo, and Lisa Schmidt.
.Pp
This manual page was written by
.An Emil Engler
.Sh BUGS
The bugs are tracked at
.Lk https://github.com/rosenpass/rosenpass/issues .

81
flake.lock generated
View File

@@ -8,11 +8,11 @@
"rust-analyzer-src": "rust-analyzer-src"
},
"locked": {
"lastModified": 1674240251,
"narHash": "sha256-AVMmf/CtcGensTZmMicToDpOwySEGNKYgRPC7lu3m8w=",
"lastModified": 1699770036,
"narHash": "sha256-bZmI7ytPAYLpyFNgj5xirDkKuAniOkj1xHdv5aIJ5GM=",
"owner": "nix-community",
"repo": "fenix",
"rev": "d8067f4d1d3d30732703209bec5ca7d62aaececc",
"rev": "81ab0b4f7ae9ebb57daa0edf119c4891806e4d3a",
"type": "github"
},
"original": {
@@ -22,12 +22,15 @@
}
},
"flake-utils": {
"inputs": {
"systems": "systems"
},
"locked": {
"lastModified": 1667395993,
"narHash": "sha256-nuEHfE/LcWyuSWnS8t12N1wc105Qtau+/OdUAjtQ0rA=",
"lastModified": 1694529238,
"narHash": "sha256-zsNZZGTGnMOf9YpHKJqMSsa0dXbfmxeoJ7xHlrt+xmY=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "5aed5285a952e0b949eb3ba02c12fa4fcfef535f",
"rev": "ff7b65b44d01cf9ba6a71320833626af21126384",
"type": "github"
},
"original": {
@@ -36,13 +39,33 @@
"type": "github"
}
},
"naersk": {
"inputs": {
"nixpkgs": [
"nixpkgs"
]
},
"locked": {
"lastModified": 1698420672,
"narHash": "sha256-/TdeHMPRjjdJub7p7+w55vyABrsJlt5QkznPYy55vKA=",
"owner": "nix-community",
"repo": "naersk",
"rev": "aeb58d5e8faead8980a807c840232697982d47b9",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "naersk",
"type": "github"
}
},
"nixpkgs": {
"locked": {
"lastModified": 1672968032,
"narHash": "sha256-26Jns3GmHem44a06UN5Rj/KOD9qNJThyQrom02Ijur8=",
"lastModified": 1698846319,
"narHash": "sha256-4jyW/dqFBVpWFnhl0nvP6EN4lP7/ZqPxYRjl6var0Oc=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "2dea8991d89b9f1e78d874945f78ca15f6954289",
"rev": "34bdaaf1f0b7fb6d9091472edc968ff10a8c2857",
"type": "github"
},
"original": {
@@ -50,37 +73,22 @@
"type": "indirect"
}
},
"nixpkgs-unstable": {
"locked": {
"lastModified": 1676496762,
"narHash": "sha256-GFAxjaTgh8KJ8q7BYaI4EVGI5K98ooW70fG/83rSb08=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "1bddde315297c092712b0ef03d9def7a474b28ae",
"type": "github"
},
"original": {
"owner": "NixOS",
"repo": "nixpkgs",
"type": "github"
}
},
"root": {
"inputs": {
"fenix": "fenix",
"flake-utils": "flake-utils",
"nixpkgs": "nixpkgs",
"nixpkgs-unstable": "nixpkgs-unstable"
"naersk": "naersk",
"nixpkgs": "nixpkgs"
}
},
"rust-analyzer-src": {
"flake": false,
"locked": {
"lastModified": 1674162026,
"narHash": "sha256-iY0bxoVE7zAZmp0BB/m5hZW5pWHUfgntDvc1m2zyt/U=",
"lastModified": 1699715108,
"narHash": "sha256-yPozsobJU55gj+szgo4Lpcg1lHvGQYAT6Y4MrC80mWE=",
"owner": "rust-lang",
"repo": "rust-analyzer",
"rev": "6e52c64031825920983515b9e975e93232739f7f",
"rev": "5fcf5289e726785d20d3aa4d13d90a43ed248e83",
"type": "github"
},
"original": {
@@ -89,6 +97,21 @@
"repo": "rust-analyzer",
"type": "github"
}
},
"systems": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
}
},
"root": "root",

216
flake.nix
View File

@@ -1,8 +1,11 @@
{
inputs = {
nixpkgs-unstable.url = "github:NixOS/nixpkgs";
flake-utils.url = "github:numtide/flake-utils";
# for quicker rust builds
naersk.url = "github:nix-community/naersk";
naersk.inputs.nixpkgs.follows = "nixpkgs";
# for rust nightly with llvm-tools-preview
fenix.url = "github:nix-community/fenix";
fenix.inputs.nixpkgs.follows = "nixpkgs";
@@ -19,12 +22,16 @@
"aarch64-linux"
# unsupported, best-effort
"i686-linux"
"x86_64-darwin"
"aarch64-darwin"
# "x86_64-windows"
]
(system:
let
scoped = (scope: scope.result);
lib = nixpkgs.lib;
# normal nixpkgs
pkgs = import nixpkgs {
inherit system;
@@ -47,14 +54,41 @@
)
];
};
# parsed Cargo.toml
cargoToml = builtins.fromTOML (builtins.readFile ./Cargo.toml);
cargoToml = builtins.fromTOML (builtins.readFile ./rosenpass/Cargo.toml);
# source files relevant for rust
src = pkgs.lib.sourceByRegex ./. [
"Cargo\\.(toml|lock)"
"(src|benches)(/.*\\.(rs|md))?"
"rp"
];
src = scoped rec {
# File suffices to include
extensions = [
"lock"
"rs"
"toml"
];
# Files to explicitly include
files = [
"to/README.md"
];
src = ./.;
filter = (path: type: scoped rec {
inherit (lib) any id removePrefix hasSuffix;
anyof = (any id);
basename = baseNameOf (toString path);
relative = removePrefix (toString src + "/") (toString path);
result = anyof [
(type == "directory")
(any (ext: hasSuffix ".${ext}" basename) extensions)
(any (file: file == relative) files)
];
});
result = pkgs.lib.sources.cleanSourceWith { inherit src filter; };
};
# builds a bin path for all dependencies for the `rp` shellscript
rpBinPath = p: with p; lib.makeBinPath [
coreutils
@@ -62,60 +96,119 @@
gawk
wireguard-tools
];
# a function to generate a nix derivation for rosenpass against any
# given set of nixpkgs
rpDerivation = p:
let
isStatic = p.stdenv.hostPlatform.isStatic;
in
p.rustPlatform.buildRustPackage {
# metadata and source
pname = cargoToml.package.name;
version = cargoToml.package.version;
inherit src;
cargoLock = {
lockFile = src + "/Cargo.lock";
# whether we want to build a statically linked binary
isStatic = p.targetPlatform.isStatic;
# the rust target of `p`
target = p.rust.toRustTargetSpec p.targetPlatform;
# convert a string to shout case
shout = string: builtins.replaceStrings [ "-" ] [ "_" ] (pkgs.lib.toUpper string);
# suitable Rust toolchain
toolchain = with inputs.fenix.packages.${system}; combine [
stable.cargo
stable.rustc
targets.${target}.stable.rust-std
];
# naersk with a custom toolchain
naersk = pkgs.callPackage inputs.naersk {
cargo = toolchain;
rustc = toolchain;
};
nativeBuildInputs = with pkgs; [
cmake # for oqs build in the oqs-sys crate
makeWrapper # for the rp shellscript
pkg-config # let libsodium-sys-stable find libsodium
removeReferencesTo
rustPlatform.bindgenHook # for C-bindings in the crypto libs
];
buildInputs = with p; [ bash libsodium ];
# used to trick the build.rs into believing that CMake was run **again**
fakecmake = pkgs.writeScriptBin "cmake" ''
#! ${pkgs.stdenv.shell} -e
true
'';
in
naersk.buildPackage
{
# metadata and source
name = cargoToml.package.name;
version = cargoToml.package.version;
inherit src;
cargoBuildOptions = x: x ++ [ "-p" "rosenpass" ];
cargoTestOptions = x: x ++ [ "-p" "rosenpass" ];
doCheck = true;
nativeBuildInputs = with pkgs; [
p.stdenv.cc
cmake # for oqs build in the oqs-sys crate
mandoc # for the built-in manual
makeWrapper # for the rp shellscript
pkg-config # let libsodium-sys-stable find libsodium
removeReferencesTo
rustPlatform.bindgenHook # for C-bindings in the crypto libs
];
buildInputs = with p; [ bash libsodium ];
override = x: {
preBuild =
# nix defaults to building for aarch64 _without_ the armv8-a crypto
# extensions, but liboqs depends on these
(lib.optionalString (system == "aarch64-linux") ''
NIX_CFLAGS_COMPILE="$NIX_CFLAGS_COMPILE -march=armv8-a+crypto"
''
);
# fortify is only compatible with dynamic linking
hardeningDisable = lib.optional isStatic "fortify";
};
overrideMain = x: {
# CMake detects that it was served a _foreign_ target dir, and CMake
# would be executed again upon the second build step of naersk.
# By adding our specially optimized CMake version, we reduce the cost
# of recompilation by 99 % while avoiding any CMake errors.
nativeBuildInputs = [ (lib.hiPrio fakecmake) ] ++ x.nativeBuildInputs;
# make sure that libc is linked, under musl this is not the case per
# default
preBuild = (lib.optionalString isStatic ''
NIX_CFLAGS_COMPILE="$NIX_CFLAGS_COMPILE -lc"
'');
preInstall = ''
install -D ${./rp} $out/bin/rp
wrapProgram $out/bin/rp --prefix PATH : "${ rpBinPath p }"
'';
};
# We want to build for a specific target...
CARGO_BUILD_TARGET = target;
# ... which might require a non-default linker:
"CARGO_TARGET_${shout target}_LINKER" =
let
inherit (p.stdenv) cc;
in
"${cc}/bin/${cc.targetPrefix}cc";
meta = with pkgs.lib;
{
inherit (cargoToml.package) description homepage;
license = with licenses; [ mit asl20 ];
maintainers = [ maintainers.wucke13 ];
platforms = platforms.all;
};
} // (lib.mkIf isStatic {
# otherwise pkg-config tries to link non-existent dynamic libs
# documented here: https://docs.rs/pkg-config/latest/pkg_config/
PKG_CONFIG_ALL_STATIC = true;
# nix defaults to building for aarch64 _without_ the armv8-a
# crypto extensions, but liboqs depends on these
preBuild =
if system == "aarch64-linux" then ''
NIX_CFLAGS_COMPILE="$NIX_CFLAGS_COMPILE -march=armv8-a+crypto"
'' else "";
preInstall = ''
install -D rp $out/bin/rp
wrapProgram $out/bin/rp --prefix PATH : "${ rpBinPath p }"
'';
# nix propagated the *.dev outputs of buildInputs for static
# builds, but that is nonsense for an executables-only package
postFixup =
if isStatic then ''
remove-references-to -t ${p.bash.dev} -t ${p.libsodium.dev} \
$out/nix-support/propagated-build-inputs
'' else "";
meta = with pkgs.lib; {
inherit (cargoToml.package) description homepage;
license = with licenses; [ mit asl20 ];
maintainers = [ maintainers.wucke13 ];
platforms = platforms.all;
};
};
# tell rust to build everything statically linked
CARGO_BUILD_RUSTFLAGS = "-C target-feature=+crt-static";
});
# a function to generate a docker image based of rosenpass
rosenpassOCI = name: pkgs.dockerTools.buildImage rec {
inherit name;
@@ -178,14 +271,11 @@
#
packages.whitepaper =
let
pkgs = import inputs.nixpkgs-unstable {
inherit system;
};
tlsetup = (pkgs.texlive.combine {
inherit (pkgs.texlive) scheme-basic acmart amsfonts ccicons
csquotes csvsimple doclicense fancyvrb fontspec gobble
koma-script ifmtarg latexmk lm markdown mathtools minted noto
nunito pgf soul soulutf8 unicode-math lualatex-math
nunito pgf soul unicode-math lualatex-math paralist
gitinfo2 eso-pic biblatex biblatex-trad biblatex-software
xkeyval xurl xifthen biber;
});
@@ -201,7 +291,6 @@
];
buildPhase = ''
export HOME=$(mktemp -d)
export OSFONTDIR="$(kpsewhich --var-value TEXMF)/fonts/{opentype/public/nunito,truetype/google/noto}"
latexmk -r tex/CI.rc
'';
installPhase = ''
@@ -222,7 +311,7 @@
packages.proof-proverif = pkgs.stdenv.mkDerivation {
name = "rosenpass-proverif-proof";
version = "unstable";
src = pkgs.lib.sourceByRegex ./. [
src = pkgs.lib.sources.sourceByRegex ./. [
"analyze.sh"
"marzipan(/marzipan.awk)?"
"analysis(/.*)?"
@@ -243,6 +332,7 @@
inherit (packages.proof-proverif) CRYPTOVERIF_LIB;
inputsFrom = [ packages.default ];
nativeBuildInputs = with pkgs; [
cmake # override the fakecmake from the main step above
cargo-release
clippy
nodePackages.prettier
@@ -257,12 +347,10 @@
checks = {
# Blocked by https://github.com/rust-lang/rustfmt/issues/4306
# @dakoraa wants a coding style suitable for her accessible coding setup
# cargo-fmt = pkgs.runCommand "check-cargo-fmt"
# { inherit (devShells.default) nativeBuildInputs buildInputs; } ''
# cargo fmt --manifest-path=${src}/Cargo.toml --check > $out
# '';
cargo-fmt = pkgs.runCommand "check-cargo-fmt"
{ inherit (self.devShells.${system}.default) nativeBuildInputs buildInputs; } ''
cargo fmt --manifest-path=${./.}/Cargo.toml --check --all && touch $out
'';
nixpkgs-fmt = pkgs.runCommand "check-nixpkgs-fmt"
{ nativeBuildInputs = [ pkgs.nixpkgs-fmt ]; } ''
nixpkgs-fmt --check ${./.} && touch $out
@@ -272,6 +360,8 @@
cd ${./.} && prettier --check . && touch $out
'';
};
formatter = pkgs.nixpkgs-fmt;
}))
];
}

4
fuzz/.gitignore vendored Normal file
View File

@@ -0,0 +1,4 @@
target
corpus
artifacts
coverage

1286
fuzz/Cargo.lock generated Normal file

File diff suppressed because it is too large

61
fuzz/Cargo.toml Normal file
View File

@@ -0,0 +1,61 @@
[package]
name = "rosenpass-fuzzing"
version = "0.0.1"
publish = false
edition = "2021"
[package.metadata]
cargo-fuzz = true
[dependencies]
arbitrary = { workspace = true }
libfuzzer-sys = { workspace = true }
stacker = { workspace = true }
rosenpass-secret-memory = { workspace = true }
rosenpass-sodium = { workspace = true }
rosenpass-ciphers = { workspace = true }
rosenpass-cipher-traits = { workspace = true }
rosenpass-to = { workspace = true }
rosenpass = { workspace = true }
[[bin]]
name = "fuzz_handle_msg"
path = "fuzz_targets/handle_msg.rs"
test = false
doc = false
[[bin]]
name = "fuzz_blake2b"
path = "fuzz_targets/blake2b.rs"
test = false
doc = false
[[bin]]
name = "fuzz_aead_enc_into"
path = "fuzz_targets/aead_enc_into.rs"
test = false
doc = false
[[bin]]
name = "fuzz_mceliece_encaps"
path = "fuzz_targets/mceliece_encaps.rs"
test = false
doc = false
[[bin]]
name = "fuzz_kyber_encaps"
path = "fuzz_targets/kyber_encaps.rs"
test = false
doc = false
[[bin]]
name = "fuzz_box_sodium_alloc"
path = "fuzz_targets/box_sodium_alloc.rs"
test = false
doc = false
[[bin]]
name = "fuzz_vec_sodium_alloc"
path = "fuzz_targets/vec_sodium_alloc.rs"
test = false
doc = false

View File

@@ -0,0 +1,32 @@
#![no_main]
extern crate arbitrary;
extern crate rosenpass;
use libfuzzer_sys::fuzz_target;
use rosenpass_ciphers::aead;
use rosenpass_sodium::init as sodium_init;
#[derive(arbitrary::Arbitrary, Debug)]
pub struct Input {
pub key: [u8; 32],
pub nonce: [u8; 12],
pub ad: Box<[u8]>,
pub plaintext: Box<[u8]>,
}
fuzz_target!(|input: Input| {
sodium_init().unwrap();
let mut ciphertext: Vec<u8> = Vec::with_capacity(input.plaintext.len() + 16);
ciphertext.resize(input.plaintext.len() + 16, 0);
aead::encrypt(
ciphertext.as_mut_slice(),
&input.key,
&input.nonce,
&input.ad,
&input.plaintext,
)
.unwrap();
});

View File

@@ -0,0 +1,22 @@
#![no_main]
extern crate arbitrary;
extern crate rosenpass;
use libfuzzer_sys::fuzz_target;
use rosenpass_sodium::{hash::blake2b, init as sodium_init};
use rosenpass_to::To;
#[derive(arbitrary::Arbitrary, Debug)]
pub struct Blake2b {
pub key: [u8; 32],
pub data: Box<[u8]>,
}
fuzz_target!(|input: Blake2b| {
sodium_init().unwrap();
let mut out = [0u8; 32];
blake2b::hash(&input.key, &input.data).to(&mut out).unwrap();
});

View File

@@ -0,0 +1,12 @@
#![no_main]
use libfuzzer_sys::fuzz_target;
use rosenpass_sodium::{
alloc::{Alloc as SodiumAlloc, Box as SodiumBox},
init,
};
fuzz_target!(|data: &[u8]| {
let _ = init();
let _ = SodiumBox::new_in(data, SodiumAlloc::new());
});

View File

@@ -0,0 +1,21 @@
#![no_main]
extern crate rosenpass;
use libfuzzer_sys::fuzz_target;
use rosenpass::protocol::CryptoServer;
use rosenpass_secret_memory::Secret;
use rosenpass_sodium::init as sodium_init;
fuzz_target!(|rx_buf: &[u8]| {
sodium_init().unwrap();
let sk = Secret::from_slice(&[0; 13568]);
let pk = Secret::from_slice(&[0; 524160]);
let mut cs = CryptoServer::new(sk, pk);
let mut tx_buf = [0; 10240];
// We expect errors while fuzzing therefore we do not check the result.
let _ = cs.handle_msg(rx_buf, &mut tx_buf);
});

View File

@@ -0,0 +1,20 @@
#![no_main]
extern crate arbitrary;
extern crate rosenpass;
use libfuzzer_sys::fuzz_target;
use rosenpass_cipher_traits::Kem;
use rosenpass_ciphers::kem::EphemeralKem;
#[derive(arbitrary::Arbitrary, Debug)]
pub struct Input {
pub pk: [u8; 800],
}
fuzz_target!(|input: Input| {
let mut ciphertext = [0u8; 768];
let mut shared_secret = [0u8; 32];
EphemeralKem::encaps(&mut shared_secret, &mut ciphertext, &input.pk).unwrap();
});

View File

@@ -0,0 +1,15 @@
#![no_main]
extern crate rosenpass;
use libfuzzer_sys::fuzz_target;
use rosenpass_cipher_traits::Kem;
use rosenpass_ciphers::kem::StaticKem;
fuzz_target!(|input: [u8; StaticKem::PK_LEN]| {
let mut ciphertext = [0u8; 188];
let mut shared_secret = [0u8; 32];
// We expect errors while fuzzing therefore we do not check the result.
let _ = StaticKem::encaps(&mut shared_secret, &mut ciphertext, &input);
});

View File

@@ -0,0 +1,13 @@
#![no_main]
use libfuzzer_sys::fuzz_target;
use rosenpass_sodium::{
alloc::{Alloc as SodiumAlloc, Vec as SodiumVec},
init,
};
fuzz_target!(|data: &[u8]| {
let _ = init();
let mut vec = SodiumVec::new_in(SodiumAlloc::new());
vec.extend_from_slice(data);
});

16
lenses/Cargo.toml Normal file
View File

@@ -0,0 +1,16 @@
[package]
name = "rosenpass-lenses"
version = "0.1.0"
authors = ["Karolin Varner <karo@cupdev.net>", "wucke13 <wucke13@gmail.com>"]
edition = "2021"
license = "MIT OR Apache-2.0"
description = "Rosenpass internal library for parsing binary data securely"
homepage = "https://rosenpass.eu/"
repository = "https://github.com/rosenpass/rosenpass"
readme = "readme.md"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
paste = { workspace = true }
thiserror = { workspace = true }

3
lenses/readme.md Normal file
View File

@@ -0,0 +1,3 @@
# Rosenpass internal binary parsing library
This is an internal library; no guarantee is made about its API at this point in time.

206
lenses/src/lib.rs Normal file
View File

@@ -0,0 +1,206 @@
use std::result::Result;
/// Common trait shared by all Lenses
pub trait LenseView {
const LEN: usize;
}
/// Error during lense creation
#[derive(thiserror::Error, Debug, Eq, PartialEq, Clone)]
pub enum LenseError {
#[error("buffer size mismatch")]
BufferSizeMismatch,
}
pub type LenseResult<T> = Result<T, LenseError>;
impl LenseError {
pub fn ensure_exact_buffer_size(len: usize, required: usize) -> LenseResult<()> {
(len == required)
.then_some(())
.ok_or(LenseError::BufferSizeMismatch)
}
pub fn ensure_sufficient_buffer_size(len: usize, required: usize) -> LenseResult<()> {
(len >= required)
.then_some(())
.ok_or(LenseError::BufferSizeMismatch)
}
}
/// A macro to create data lenses.
#[macro_export]
macro_rules! lense(
// prefix @ offset ; optional meta ; field name : field length, ...
(token_muncher_ref @ $offset:expr ; $( $attr:meta )* ; $field:ident : $len:expr $(, $( $tail:tt )+ )?) => {
::paste::paste!{
#[allow(rustdoc::broken_intra_doc_links)]
$( #[ $attr ] )*
///
#[doc = lense!(maybe_docstring_link $len)]
/// bytes long
pub fn $field(&self) -> &__ContainerType::Output {
&self.0[$offset .. $offset + $len]
}
/// The bytes until the
#[doc = lense!(maybe_docstring_link Self::$field)]
/// field
pub fn [< until_ $field >](&self) -> &__ContainerType::Output {
&self.0[0 .. $offset]
}
// if the tail exists, consume it as well
$(
lense!{token_muncher_ref @ $offset + $len ; $( $tail )+ }
)?
}
};
// prefix @ offset ; optional meta ; field name : field length, ...
(token_muncher_mut @ $offset:expr ; $( $attr:meta )* ; $field:ident : $len:expr $(, $( $tail:tt )+ )?) => {
::paste::paste!{
#[allow(rustdoc::broken_intra_doc_links)]
$( #[ $attr ] )*
///
#[doc = lense!(maybe_docstring_link $len)]
/// bytes long
pub fn [< $field _mut >](&mut self) -> &mut __ContainerType::Output {
&mut self.0[$offset .. $offset + $len]
}
// if the tail exists, consume it as well
$(
lense!{token_muncher_mut @ $offset + $len ; $( $tail )+ }
)?
}
};
// switch that yields literals unchanged, but creates docstring links to
// constants
// TODO the doc string link doesn't work if $x is taken from a generic,
(maybe_docstring_link $x:literal) => (stringify!($x));
(maybe_docstring_link $x:expr) => (stringify!([$x]));
// struct name < optional generics > := optional doc string field name : field length, ...
($type:ident $( < $( $generic:ident ),+ > )? := $( $( #[ $attr:meta ] )* $field:ident : $len:expr ),+) => (::paste::paste!{
#[allow(rustdoc::broken_intra_doc_links)]
/// A data lense to manipulate byte slices.
///
/// # Fields
///
$(
/// - `
#[doc = stringify!($field)]
/// `:
#[doc = lense!(maybe_docstring_link $len)]
/// bytes
)+
pub struct $type<__ContainerType $(, $( $generic ),+ )? > (
__ContainerType,
// The phantom data is required, since all generics declared on a
// type need to be used on the type.
// https://doc.rust-lang.org/stable/error_codes/E0392.html
$( $( ::core::marker::PhantomData<$generic> ),+ )?
);
impl<__ContainerType $(, $( $generic: LenseView ),+ )? > $type<__ContainerType $(, $( $generic ),+ )? >{
$(
/// Size in bytes of the field `
#[doc = stringify!($field)]
/// `
pub const fn [< $field _len >]() -> usize{
$len
}
)+
/// Verify that `len` exactly holds [Self]
pub fn check_size(len: usize) -> ::rosenpass_lenses::LenseResult<()> {
::rosenpass_lenses::LenseError::ensure_exact_buffer_size(len, $( $len + )+ 0)
}
}
// read-only accessor functions
impl<'a, __ContainerType $(, $( $generic: LenseView ),+ )?> $type<&'a __ContainerType $(, $( $generic ),+ )?>
where
__ContainerType: std::ops::Index<std::ops::Range<usize>> + ?Sized,
{
lense!{token_muncher_ref @ 0 ; $( $( $attr )* ; $field : $len ),+ }
/// View into all bytes belonging to this Lense
pub fn all_bytes(&self) -> &__ContainerType::Output {
&self.0[0..Self::LEN]
}
}
// mutable accessor functions
impl<'a, __ContainerType $(, $( $generic: LenseView ),+ )?> $type<&'a mut __ContainerType $(, $( $generic ),+ )?>
where
__ContainerType: std::ops::IndexMut<std::ops::Range<usize>> + ?Sized,
{
lense!{token_muncher_ref @ 0 ; $( $( $attr )* ; $field : $len ),+ }
lense!{token_muncher_mut @ 0 ; $( $( $attr )* ; $field : $len ),+ }
/// View into all bytes belonging to this Lense
pub fn all_bytes(&self) -> &__ContainerType::Output {
&self.0[0..Self::LEN]
}
/// View into all bytes belonging to this Lense
pub fn all_bytes_mut(&mut self) -> &mut __ContainerType::Output {
&mut self.0[0..Self::LEN]
}
}
// lense trait, allowing us to know the implementing lenses size
impl<__ContainerType $(, $( $generic: LenseView ),+ )? > LenseView for $type<__ContainerType $(, $( $generic ),+ )? >{
/// Number of bytes required to store this type in binary format
const LEN: usize = $( $len + )+ 0;
}
/// Extension trait to allow checked creation of a lense over
/// some byte slice that contains a
#[doc = lense!(maybe_docstring_link $type)]
pub trait [< $type Ext >] {
type __ContainerType;
/// Create a lense to the byte slice
fn [< $type:snake >] $(< $($generic : LenseView),* >)? (self) -> ::rosenpass_lenses::LenseResult< $type<Self::__ContainerType, $( $($generic),+ )? >>;
/// Create a lense to the byte slice, automatically truncating oversized buffers
fn [< $type:snake _ truncating >] $(< $($generic : LenseView),* >)? (self) -> ::rosenpass_lenses::LenseResult< $type<Self::__ContainerType, $( $($generic),+ )? >>;
}
impl<'a> [< $type Ext >] for &'a [u8] {
type __ContainerType = &'a [u8];
fn [< $type:snake >] $(< $($generic : LenseView),* >)? (self) -> ::rosenpass_lenses::LenseResult< $type<Self::__ContainerType, $( $($generic),+ )? >> {
$type::<Self::__ContainerType, $( $($generic),+ )? >::check_size(self.len())?;
Ok($type ( self, $( $( ::core::marker::PhantomData::<$generic> ),+ )? ))
}
fn [< $type:snake _ truncating >] $(< $($generic : LenseView),* >)? (self) -> ::rosenpass_lenses::LenseResult< $type<Self::__ContainerType, $( $($generic),+ )? >> {
let required_size = $( $len + )+ 0;
::rosenpass_lenses::LenseError::ensure_sufficient_buffer_size(self.len(), required_size)?;
[< $type Ext >]::[< $type:snake >](&self[..required_size])
}
}
impl<'a> [< $type Ext >] for &'a mut [u8] {
type __ContainerType = &'a mut [u8];
fn [< $type:snake >] $(< $($generic : LenseView),* >)? (self) -> ::rosenpass_lenses::LenseResult< $type<Self::__ContainerType, $( $($generic),+ )? >> {
$type::<Self::__ContainerType, $( $($generic),+ )? >::check_size(self.len())?;
Ok($type ( self, $( $( ::core::marker::PhantomData::<$generic> ),+ )? ))
}
fn [< $type:snake _ truncating >] $(< $($generic : LenseView),* >)? (self) -> ::rosenpass_lenses::LenseResult< $type<Self::__ContainerType, $( $($generic),+ )? >> {
let required_size = $( $len + )+ 0;
::rosenpass_lenses::LenseError::ensure_sufficient_buffer_size(self.len(), required_size)?;
[< $type Ext >]::[< $type:snake >](&mut self[..required_size])
}
}
});
);
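A sketch of how a lense might be declared and used, with a made-up Envelope layout (the invoking crate needs rosenpass-lenses and paste as dependencies, since the generated code refers to both):
use rosenpass_lenses::{lense, LenseResult, LenseView};
// Made-up layout: a 1-byte message type followed by a 4-byte payload.
lense!(Envelope := msg_type: 1, payload: 4);
// Illustrative only: borrow a buffer through the generated EnvelopeExt trait.
fn inspect(buf: &[u8]) -> LenseResult<()> {
    let env = buf.envelope()?;
    assert_eq!(env.msg_type().len(), 1);
    assert_eq!(env.payload().len(), 4);
    Ok(())
}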

16
oqs/Cargo.toml Normal file
View File

@@ -0,0 +1,16 @@
[package]
name = "rosenpass-oqs"
authors = ["Karolin Varner <karo@cupdev.net>", "wucke13 <wucke13@gmail.com>"]
version = "0.1.0"
edition = "2021"
license = "MIT OR Apache-2.0"
description = "Rosenpass internal bindings to liboqs"
homepage = "https://rosenpass.eu/"
repository = "https://github.com/rosenpass/rosenpass"
readme = "readme.md"
[dependencies]
rosenpass-cipher-traits = { workspace = true }
rosenpass-util = { workspace = true }
oqs-sys = { workspace = true }
paste = { workspace = true }

5
oqs/readme.md Normal file
View File

@@ -0,0 +1,5 @@
# Rosenpass internal liboqs bindings
Rosenpass internal library providing bindings to liboqs.
This is an internal library; no guarantee is made about its API at this point in time.

80
oqs/src/kem_macro.rs Normal file
View File

@@ -0,0 +1,80 @@
macro_rules! oqs_kem {
($name:ident) => { ::paste::paste!{
mod [< $name:snake >] {
use rosenpass_cipher_traits::Kem;
use rosenpass_util::result::Guaranteed;
pub enum [< $name:camel >] {}
/// # Panic & Safety
///
/// This trait impl calls unsafe [oqs_sys] functions that write to byte
/// slices only identified using raw pointers. It must be ensured that the raw
/// pointers point into byte slices of sufficient length, to avoid UB through
/// overwriting of arbitrary data. This is ensured through assertions in the
/// implementation.
///
/// __Note__: This requirement is stricter than necessary; it would suffice
/// to only check that the buffers are big enough, allowing them to be even
/// bigger. However, from a correctness point of view it does not make sense to
/// allow bigger buffers.
impl Kem for [< $name:camel >] {
type Error = ::std::convert::Infallible;
const SK_LEN: usize = ::oqs_sys::kem::[<OQS_KEM _ $name:snake _ length_secret_key >] as usize;
const PK_LEN: usize = ::oqs_sys::kem::[<OQS_KEM _ $name:snake _ length_public_key >] as usize;
const CT_LEN: usize = ::oqs_sys::kem::[<OQS_KEM _ $name:snake _ length_ciphertext >] as usize;
const SHK_LEN: usize = ::oqs_sys::kem::[<OQS_KEM _ $name:snake _ length_shared_secret >] as usize;
fn keygen(sk: &mut [u8], pk: &mut [u8]) -> Guaranteed<()> {
assert_eq!(sk.len(), Self::SK_LEN);
assert_eq!(pk.len(), Self::PK_LEN);
unsafe {
oqs_call!(
::oqs_sys::kem::[< OQS_KEM _ $name:snake _ keypair >],
pk.as_mut_ptr(),
sk.as_mut_ptr()
);
}
Ok(())
}
fn encaps(shk: &mut [u8], ct: &mut [u8], pk: &[u8]) -> Guaranteed<()> {
assert_eq!(shk.len(), Self::SHK_LEN);
assert_eq!(ct.len(), Self::CT_LEN);
assert_eq!(pk.len(), Self::PK_LEN);
unsafe {
oqs_call!(
::oqs_sys::kem::[< OQS_KEM _ $name:snake _ encaps >],
ct.as_mut_ptr(),
shk.as_mut_ptr(),
pk.as_ptr()
);
}
Ok(())
}
fn decaps(shk: &mut [u8], sk: &[u8], ct: &[u8]) -> Guaranteed<()> {
assert_eq!(shk.len(), Self::SHK_LEN);
assert_eq!(sk.len(), Self::SK_LEN);
assert_eq!(ct.len(), Self::CT_LEN);
unsafe {
oqs_call!(
::oqs_sys::kem::[< OQS_KEM _ $name:snake _ decaps >],
shk.as_mut_ptr(),
ct.as_ptr(),
sk.as_ptr()
);
}
Ok(())
}
}
}
pub use [< $name:snake >] :: [< $name:camel >];
}}
}

21
oqs/src/lib.rs Normal file
View File

@@ -0,0 +1,21 @@
macro_rules! oqs_call {
($name:path, $($args:expr),*) => {{
use oqs_sys::common::OQS_STATUS::*;
match $name($($args),*) {
OQS_SUCCESS => {}, // nop
OQS_EXTERNAL_LIB_ERROR_OPENSSL => {
panic!("OpenSSL error in liboqs' {}.", stringify!($name));
},
OQS_ERROR => {
panic!("Unknown error in liboqs' {}.", stringify!($name));
}
}
}};
($name:ident) => { oqs_call!($name, ) };
}
#[macro_use]
mod kem_macro;
oqs_kem!(kyber_512);
oqs_kem!(classic_mceliece_460896);
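A consuming-crate sketch of the generated KEM types, sizing buffers from the associated constants (illustrative only; error handling is elided and the demo function is not part of the crate):
use rosenpass_cipher_traits::Kem;
use rosenpass_oqs::Kyber512;
// Illustrative only: generate a Kyber-512 keypair and encapsulate against it.
fn kyber_encaps_demo() {
    let (mut sk, mut pk) = (vec![0u8; Kyber512::SK_LEN], vec![0u8; Kyber512::PK_LEN]);
    Kyber512::keygen(&mut sk, &mut pk).unwrap();
    let (mut shk, mut ct) = (vec![0u8; Kyber512::SHK_LEN], vec![0u8; Kyber512::CT_LEN]);
    Kyber512::encaps(&mut shk, &mut ct, &pk).unwrap();
}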

Binary files changed (not shown): two images added (122 KiB and 227 KiB), one binary file changed, and one image modified (725 KiB before and after).

View File

@@ -1345,7 +1345,7 @@
<g transform="matrix(1,0,0,1,420.66,-1031.32)">
<g transform="matrix(31.25,0,0,31.25,1431.32,1459.33)">
</g>
<text x="1179.63px" y="1459.33px" style="font-family:'Nunito-Medium', 'Nunito';font-weight:500;font-size:31.25px;">&quot;k<tspan x="1207.79px 1224.25px " y="1459.33px 1459.33px ">ey</tspan> chaining init&quot;</text>
<text x="1179.63px" y="1459.33px" style="font-family:'Nunito-Medium', 'Nunito';font-weight:500;font-size:31.25px;">&quot;chaining k<tspan x="1334px 1350.47px " y="1459.33px 1459.33px ">ey</tspan> init&quot;</text>
</g>
</g>
<g transform="matrix(0.389246,0,0,0.136584,299.374,1166.87)">
@@ -1437,7 +1437,7 @@
<g transform="matrix(0.99675,0,0,0.996238,-597.124,-172.692)">
<g transform="matrix(31.25,0,0,31.25,1492.94,1459.33)">
</g>
<text x="1187.16px" y="1459.33px" style="font-family:'Nunito-Medium', 'Nunito';font-weight:500;font-size:31.25px;">&quot;k<tspan x="1215.32px 1231.79px " y="1459.33px 1459.33px ">ey</tspan> chaining e<tspan x="1398.88px " y="1459.33px ">x</tspan>tr<tspan x="1437.88px " y="1459.33px ">a</tspan>ct&quot;</text>
<text x="1187.16px" y="1459.33px" style="font-family:'Nunito-Medium', 'Nunito';font-weight:500;font-size:31.25px;">&quot;chaining k<tspan x="1341.54px 1358px " y="1459.33px 1459.33px ">ey</tspan> e<tspan x="1398.88px " y="1459.33px ">x</tspan>tr<tspan x="1437.88px " y="1459.33px ">a</tspan>ct&quot;</text>
</g>
<g transform="matrix(0.99675,0,0,0.996238,-380.054,-779.158)">
<g transform="matrix(31.25,0,0,31.25,1463.54,1459.33)">


View File

@@ -79,6 +79,8 @@
letter-csv .initial:n = ,
letter-content .tl_set:N = \l_letter_csv_content_tl,
letter-content .initial:n=,
tableofcontents .bool_gset:N = \g__ptxcd_tableofcontents_bool,
tableofcontents .initial:n = true,
}
\tl_new:N \l__markdown_sequence_tl

View File

@@ -171,8 +171,17 @@ version={4.0},
\ExplSyntaxOn
\SetTemplatePreamble{
\hypersetup{pdftitle=\inserttitle,pdfauthor=The~Rosenpass~Project}
\title{\vspace*{-2.5cm}\includegraphics[width=4cm]{RosenPass-Logo}}
\author{\csname insertauthor\endcsname}
\exp_args:NV\tl_if_eq:nnTF \inserttitle{Rosenpass} {
\title{\vspace*{-2.5cm}\includegraphics[width=4cm]{RosenPass-Logo}}
} {
\titlehead{\centerline{\includegraphics[width=4cm]{RosenPass-Logo}}}
\title{\inserttitle}
}
\ifx\csname insertauthor\endcsname\relax
\author{}
\else
\author{\parbox{\linewidth}{\centering\insertauthor}}
\fi
\subject{\csname insertsubject\endcsname}
\date{\vspace{-1cm}}
}
@@ -374,29 +383,28 @@ version={4.0},
}
}
}
\makeatother
\ExplSyntaxOff
% end of namepartpicturesetup
\newcommand{\captionbox}[1]{{\setlength{\fboxsep}{.5ex}\colorbox{rosenpass-gray}{#1}}}
\makeatletter
\renewenvironment{abstract}{
\small
\begin{center}\normalfont\sectfont\nobreak\abstractname\@endparpenalty\@M\end{center}%
}{
\par
}
\makeatother
\SetTemplateBegin{
\maketitle
\begin{abstract}
\noindent\csname insertabstract\endcsname
\end{abstract}
\tableofcontents
\bool_if:NT \g__ptxcd_tableofcontents_bool \tableofcontents
\clearpage
}
\makeatother
\ExplSyntaxOff
\SetTemplateEnd{
}
\SetTemplateEnd{}

View File

@@ -7,13 +7,13 @@ author:
- Wanja Zaeske
- Lisa Schmidt = {Scientific Illustrator \\url{mullana.de}}
abstract: |
Rosenpass is used to create post-quantum-secure VPNs. Rosenpass computes a shared key, WireGuard (WG) [@wg] uses the shared key to establish a secure connection. Rosenpass can also be used without WireGuard, deriving post-quantum-secure symmetric keys for some other application. The Rosenpass protocol builds on “Post-quantum WireGuard” (PQWG) [@pqwg] and improves it by using a cookie mechanism to provide security against state disruption attacks.
Rosenpass is used to create post-quantum-secure VPNs. Rosenpass computes a shared key, WireGuard (WG) [@wg] uses the shared key to establish a secure connection. Rosenpass can also be used without WireGuard, deriving post-quantum-secure symmetric keys for another application. The Rosenpass protocol builds on “Post-quantum WireGuard” (PQWG) [@pqwg] and improves it by using a cookie mechanism to provide security against state disruption attacks.
The WireGuard implementation enjoys great trust from the cryptography community and has excellent performance characteristics. To preserve these features, the Rosenpass application runs side-by-side with WireGuard and supplies a new post-quantum-secure pre-shared key (PSK) every two minutes. WireGuard itself still performs the pre-quantum-secure key exchange and transfers any transport data with no involvement from Rosenpass at all.
The Rosenpass project consists of a protocol description, an implementation written in Rust, and a symbolic analysis of the protocol's security using ProVerif [@proverif]. We are working on a cryptographic security proof using CryptoVerif [@cryptoverif].
This document is a guide to engineers and researchers implementing the protocol; a scientific paper discussing the security properties of Rosenpass is work in progress.
This document is a guide for engineers and researchers implementing the protocol; a scientific paper discussing the security properties of Rosenpass is work in progress.
---
\enlargethispage{5mm}
@@ -169,7 +169,7 @@ Rosenpass uses a cryptographic hash function for multiple purposes:
* Computing the cookie to guard against denial of service attacks. This is a feature adopted from WireGuard, but not yet included in the implementation of Rosenpass.
* Computing the peer ID
* Key derivation during and after the handshake
* Computing the additional data for the biscuit encryption, to prove some privacy for its contents
* Computing the additional data for the biscuit encryption, to provide some privacy for its contents
Using one hash function for multiple purposes can cause real-world security issues and even key recovery attacks [@oraclecloning]. We choose a tree-based domain separation scheme based on a keyed hash function (the previously introduced primitive `hash`) to make sure all our hash function calls can be seen as distinct.
@@ -237,7 +237,7 @@ For each peer, the server stores:
The initiator stores the following local state for each ongoing handshake:
* A reference to the peer structure
* A state indicator to keep track of the message expected from the responder next
* A state indicator to keep track of the next message expected from the responder
* `sidi` Initiator session ID
* `sidr` Responder session ID
* `ck` The chaining key

View File

@@ -14,14 +14,14 @@ This repository contains
## Getting started
First, [install rosenpass](#Getting-Rosenpass). Then, check out the help funtions of `rp` & `rosenpass`:
First, [install rosenpass](#Getting-Rosenpass). Then, check out the help functions of `rp` & `rosenpass`:
```sh
rp help
rosenpass help
```
Follow [quickstart instructions](https://rosenpass.eu/#start) to get a VPN up and running.
Follow [quick start instructions](https://rosenpass.eu/#start) to get a VPN up and running.
## Software architecture
@@ -54,7 +54,7 @@ We are working on a cryptographic proof of security, but we already provide a sy
(manual) $ ./analyze.sh
```
The analysis is implemented according to modern software engineering principles: Using the C preprocessor, we where able to split the analysis into multiple files and uses some metaprogramming to avoid repetition.
The analysis is implemented according to modern software engineering principles: Using the C preprocessor, we were able to split the analysis into multiple files and use some meta programming to avoid repetition.
The code uses a variety of optimizations to speed up analysis such as using secret functions to model trusted/malicious setup. We split the model into two separate entry points which can be analyzed in parallel. Each is much faster than both models combined.
A wrapper script provides instant feedback about which queries execute as expected in color: A red cross if a query fails and a green check if it succeeds.
@@ -62,15 +62,22 @@ A wrapper script provides instant feedback about which queries execute as expect
[^libsodium]: https://doc.libsodium.org/
[^wg]: https://www.wireguard.com/
[^pqwg]: https://eprint.iacr.org/2020/379
[^pqwg-statedis]: Unless supplied with a pre-shared-key, but this defeates the purpose of a key exchange protocol
[^pqwg-statedis]: Unless supplied with a pre-shared-key, but this defeats the purpose of a key exchange protocol
[^wg-statedis]: https://lists.zx2c4.com/pipermail/wireguard/2021-August/006916.htmlA
# Getting Rosenpass
Rosenpass is packaged for more and more distros, maybe also for the distro of your choice?
Rosenpass is packaged for more and more distributions, maybe also for the distribution of your choice?
[![Packaging status](https://repology.org/badge/vertical-allrepos/rosenpass.svg)](https://repology.org/project/rosenpass/versions)
# Mirrors
Don't want to use GitHub or only have an IPv6 connection? Rosenpass has set up two mirrors for this:
- [NotABug](https://notabug.org/rosenpass/rosenpass)
- [GitLab](https://gitlab.com/rosenpass/rosenpass/)
# Supported by
Funded through <a href="https://nlnet.nl/">NLNet</a> with financial support from the European Commission's <a href="https://nlnet.nl/assure">NGI Assure</a> program.

9
rosenpass-log/Cargo.toml Normal file
View File

@@ -0,0 +1,9 @@
[package]
name = "rosenpass-log"
version = "0.1.0"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
log.workspace = true

110
rosenpass-log/src/lib.rs Normal file
View File

@@ -0,0 +1,110 @@
#![allow(unused_macros)]
/// Whenever a log event occurs, the cause of the event must be decided on. This cause will then
/// be used to decide whether an actual log message is emitted. The goal is to prevent especially
/// external, unauthorized entities from causing excessive logging, which otherwise might open the
/// door to denial-of-service attacks
pub enum Cause {
/// An unauthorized entity triggered this event via the network
///
/// Example: an InitHello message in the rosenpass protocol
UnauthorizedNetwork,
/// An authorized entity triggered this event via the network
///
/// Example: a handshake was successful (which asserts the peer is authorized)
AuthorizedNetwork,
/// A local entity like rosenpassctl triggered this event
///
/// Example: the broker adds a new peer
LocalNetwork,
/// The user caused this event
///
/// Examples:
/// - The process was started
/// - Ctrl+C was used to send SIGINT
User,
/// The developer wanted this in the log!
Developer,
}
// Rationale: All events are to be displayed if trace-level logging is configured
macro_rules! trace {
($cause:expr, $($tail:tt)* ) => {{
use crate::Cause::*;
match $cause {
UnauthorizedNetwork | AuthorizedNetwork | LocalNetwork | User | Developer => {
::log::trace!($($tail)*);
}
}
}}
}
// Rationale: All events are to be displayed if debug-level logging is configured
macro_rules! debug {
($cause:expr, $($tail:tt)* ) => {{
use crate::Cause::*;
match $cause {
UnauthorizedNetwork | AuthorizedNetwork | LocalNetwork | User | Developer => {
::log::debug!($($tail)*);
}
}
}}
}
// Rationale: Only authorized causes shall be able to emit info messages
macro_rules! info {
($cause:expr, $($tail:tt)* ) => {{
use crate::Cause::*;
match $cause {
UnauthorizedNetwork => {},
AuthorizedNetwork | LocalNetwork | User | Developer => {
::log::info!($($tail)*);
}
}
}}
}
// Rationale: Only authorized causes shall be able to emit warning messages
macro_rules! warn {
($cause:expr, $($tail:tt)* ) => {{
use crate::Cause::*;
match $cause {
UnauthorizedNetwork => {},
AuthorizedNetwork | LocalNetwork | User | Developer => {
::log::warn!($($tail)*);
}
}
}}
}
// Rationale: Only local sources shall be able to cause errors to be displayed
macro_rules! error {
($cause:expr, $($tail:tt)* ) => {{
use crate::Cause::*;
match $cause {
UnauthorizedNetwork | AuthorizedNetwork => {},
LocalNetwork | User | Developer => {
::log::error!($($tail)*);
}
}
}}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn expand_all_macros() {
use Cause::*;
trace!(UnauthorizedNetwork, "beep");
debug!(UnauthorizedNetwork, "boop");
info!(LocalNetwork, "tock");
warn!(LocalNetwork, "möp");
error!(User, "knirsch");
}
}
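A minimal usage sketch, assuming the macros above are reachable by the caller (they are currently crate-internal and not `#[macro_export]`ed); the function and the message-type check below are made up for illustration:

// Hypothetical caller in the same module, placed after the macro definitions.
fn on_datagram(buf: &[u8]) {
    if buf.first() != Some(&0x81) {
        // An unauthorized sender cannot force an error-level entry;
        // this arm of error! expands to nothing at all.
        error!(Cause::UnauthorizedNetwork, "unexpected message type");
        return;
    }
    // Only once the peer counts as authorized are info-level messages allowed.
    info!(Cause::AuthorizedNetwork, "handshake message accepted");
}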

45
rosenpass/Cargo.toml Normal file
View File

@@ -0,0 +1,45 @@
[package]
name = "rosenpass"
version = "0.2.1"
authors = ["Karolin Varner <karo@cupdev.net>", "wucke13 <wucke13@gmail.com>"]
edition = "2021"
license = "MIT OR Apache-2.0"
description = "Build post-quantum-secure VPNs with WireGuard!"
homepage = "https://rosenpass.eu/"
repository = "https://github.com/rosenpass/rosenpass"
readme = "readme.md"
[[bench]]
name = "handshake"
harness = false
[dependencies]
rosenpass-util = { workspace = true }
rosenpass-constant-time = { workspace = true }
rosenpass-sodium = { workspace = true }
rosenpass-ciphers = { workspace = true }
rosenpass-cipher-traits = { workspace = true }
rosenpass-to = { workspace = true }
rosenpass-secret-memory = { workspace = true }
rosenpass-lenses = { workspace = true }
anyhow = { workspace = true }
static_assertions = { workspace = true }
memoffset = { workspace = true }
libsodium-sys-stable = { workspace = true }
thiserror = { workspace = true }
paste = { workspace = true }
log = { workspace = true }
env_logger = { workspace = true }
serde = { workspace = true }
toml = { workspace = true }
clap = { workspace = true }
mio = { workspace = true }
rand = { workspace = true }
[build-dependencies]
anyhow = { workspace = true }
[dev-dependencies]
criterion = { workspace = true }
test_bin = { workspace = true }
stacker = { workspace = true }

View File

@@ -1,17 +1,18 @@
use anyhow::Result;
use rosenpass::pqkem::KEM;
use rosenpass::{
pqkem::{CCAKEM, KEM},
protocol::{CcaPk, CcaSk, HandleMsgResult, MsgBuf, PeerPtr, Server, SymKey},
pqkem::StaticKEM,
protocol::{CryptoServer, HandleMsgResult, MsgBuf, PeerPtr, SPk, SSk, SymKey},
sodium::sodium_init,
};
use criterion::{black_box, criterion_group, criterion_main, Criterion};
fn handle(
tx: &mut Server,
tx: &mut CryptoServer,
msgb: &mut MsgBuf,
msgl: usize,
rx: &mut Server,
rx: &mut CryptoServer,
resb: &mut MsgBuf,
) -> Result<(Option<SymKey>, Option<SymKey>)> {
let HandleMsgResult {
@@ -30,7 +31,7 @@ fn handle(
Ok((txk, rxk.or(xch)))
}
fn hs(ini: &mut Server, res: &mut Server) -> Result<()> {
fn hs(ini: &mut CryptoServer, res: &mut CryptoServer) -> Result<()> {
let (mut inib, mut resb) = (MsgBuf::zero(), MsgBuf::zero());
let sz = ini.initiate_handshake(PeerPtr(0), &mut *inib)?;
let (kini, kres) = handle(ini, &mut inib, sz, res, &mut resb)?;
@@ -38,16 +39,19 @@ fn hs(ini: &mut Server, res: &mut Server) -> Result<()> {
Ok(())
}
fn keygen() -> Result<(CcaSk, CcaPk)> {
let (mut sk, mut pk) = (CcaSk::zero(), CcaPk::zero());
CCAKEM::keygen(sk.secret_mut(), pk.secret_mut())?;
fn keygen() -> Result<(SSk, SPk)> {
let (mut sk, mut pk) = (SSk::zero(), SPk::zero());
StaticKEM::keygen(sk.secret_mut(), pk.secret_mut())?;
Ok((sk, pk))
}
fn make_server_pair() -> Result<(Server, Server)> {
fn make_server_pair() -> Result<(CryptoServer, CryptoServer)> {
let psk = SymKey::random();
let ((ska, pka), (skb, pkb)) = (keygen()?, keygen()?);
let (mut a, mut b) = (Server::new(ska, pka.clone()), Server::new(skb, pkb.clone()));
let (mut a, mut b) = (
CryptoServer::new(ska, pka.clone()),
CryptoServer::new(skb, pkb.clone()),
);
a.add_peer(Some(psk.clone()), pkb)?;
b.add_peer(Some(psk), pka)?;
Ok((a, b))
@@ -58,12 +62,12 @@ fn criterion_benchmark(c: &mut Criterion) {
let (mut a, mut b) = make_server_pair().unwrap();
c.bench_function("cca_secret_alloc", |bench| {
bench.iter(|| {
CcaSk::zero();
SSk::zero();
})
});
c.bench_function("cca_public_alloc", |bench| {
bench.iter(|| {
CcaPk::zero();
SPk::zero();
})
});
c.bench_function("keygen", |bench| {

53
rosenpass/build.rs Normal file
View File

@@ -0,0 +1,53 @@
use anyhow::bail;
use anyhow::Result;
use std::env;
use std::fs::File;
use std::io::Write;
use std::path::PathBuf;
use std::process::Command;
/// Invokes a troff compiler to compile a manual page
fn render_man(compiler: &str, man: &str) -> Result<String> {
let out = Command::new(compiler).args(["-Tascii", man]).output()?;
if !out.status.success() {
bail!("{} returned an error", compiler);
}
Ok(String::from_utf8(out.stdout)?)
}
/// Generates the manual page
fn generate_man() -> String {
// This function is purposely stupid and redundant
let man = render_man("mandoc", "./doc/rosenpass.1");
if let Ok(man) = man {
return man;
}
let man = render_man("groff", "./doc/rosenpass.1");
if let Ok(man) = man {
return man;
}
// TODO: Link to online manual here
"Cannot render manual page\n".into()
}
fn man() {
let out_dir = PathBuf::from(env::var("OUT_DIR").unwrap());
let man = generate_man();
let path = out_dir.join("rosenpass.1.ascii");
let mut file = File::create(&path).unwrap();
file.write_all(man.as_bytes()).unwrap();
println!("cargo:rustc-env=ROSENPASS_MAN={}", path.display());
}
fn main() {
// For now, rerun the build script every time, as the build script
// is not very expensive right now.
println!("cargo:rerun-if-changed=./");
man();
}

1
rosenpass/readme.md Symbolic link
View File

@@ -0,0 +1 @@
../readme.md

739
rosenpass/src/app_server.rs Normal file
View File

@@ -0,0 +1,739 @@
use anyhow::bail;
use anyhow::Result;
use log::{debug, error, info, warn};
use mio::Interest;
use mio::Token;
use rosenpass_util::file::fopen_w;
use std::cell::Cell;
use std::io::Write;
use std::io::ErrorKind;
use std::net::Ipv4Addr;
use std::net::Ipv6Addr;
use std::net::SocketAddr;
use std::net::SocketAddrV4;
use std::net::SocketAddrV6;
use std::net::ToSocketAddrs;
use std::path::PathBuf;
use std::process::Command;
use std::process::Stdio;
use std::slice;
use std::thread;
use std::time::Duration;
use crate::{
config::Verbosity,
protocol::{CryptoServer, MsgBuf, PeerPtr, SPk, SSk, SymKey, Timing},
};
use rosenpass_util::attempt;
use rosenpass_util::b64::{b64_writer, fmt_b64};
const IPV4_ANY_ADDR: Ipv4Addr = Ipv4Addr::new(0, 0, 0, 0);
const IPV6_ANY_ADDR: Ipv6Addr = Ipv6Addr::new(0, 0, 0, 0, 0, 0, 0, 0);
fn ipv4_any_binding() -> SocketAddr {
// addr, port
SocketAddr::V4(SocketAddrV4::new(IPV4_ANY_ADDR, 0))
}
fn ipv6_any_binding() -> SocketAddr {
// addr, port, flowinfo, scope_id
SocketAddr::V6(SocketAddrV6::new(IPV6_ANY_ADDR, 0, 0, 0))
}
#[derive(Default, Debug)]
pub struct AppPeer {
pub outfile: Option<PathBuf>,
pub outwg: Option<WireguardOut>, // TODO make this a generic command
pub initial_endpoint: Option<Endpoint>,
pub current_endpoint: Option<Endpoint>,
}
impl AppPeer {
pub fn endpoint(&self) -> Option<&Endpoint> {
self.current_endpoint
.as_ref()
.or(self.initial_endpoint.as_ref())
}
}
#[derive(Default, Debug)]
pub struct WireguardOut {
// impl KeyOutput
pub dev: String,
pub pk: String,
pub extra_params: Vec<String>,
}
/// Holds the state of the application, namely the external IO
///
/// Responsible for file IO, network IO
// TODO add user control via unix domain socket and stdin/stdout
#[derive(Debug)]
pub struct AppServer {
pub crypt: CryptoServer,
pub sockets: Vec<mio::net::UdpSocket>,
pub events: mio::Events,
pub mio_poll: mio::Poll,
pub peers: Vec<AppPeer>,
pub verbosity: Verbosity,
pub all_sockets_drained: bool,
}
/// A socket pointer is an index assigned to a socket;
/// right now the index is just the sockets index in AppServer::sockets.
///
/// Holding this as a reference instead of an &mut UdpSocket is useful
/// to deal with the borrow checker, because otherwise we could not refer
/// to a socket and another member of AppServer at the same time.
#[derive(Debug)]
pub struct SocketPtr(pub usize);
impl SocketPtr {
pub fn get<'a>(&self, srv: &'a AppServer) -> &'a mio::net::UdpSocket {
&srv.sockets[self.0]
}
pub fn get_mut<'a>(&self, srv: &'a mut AppServer) -> &'a mut mio::net::UdpSocket {
&mut srv.sockets[self.0]
}
pub fn send_to(&self, srv: &AppServer, buf: &[u8], addr: SocketAddr) -> anyhow::Result<()> {
self.get(srv).send_to(buf, addr)?;
Ok(())
}
}
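A standalone sketch (hypothetical types, not this crate's API) of the borrow-checker motivation mentioned above: keeping an index lets us touch other fields of the server while still holding the handle, whereas storing a `&mut UdpSocket` would borrow the whole struct.

// Standalone illustration of the index-handle pattern.
struct Srv { sockets: Vec<u32>, peers: Vec<String> }

fn demo(srv: &mut Srv) {
    let ptr = 0usize;                  // analogous to SocketPtr(0)
    srv.peers.push("peer".into());     // other fields remain freely mutable
    let _sock = &srv.sockets[ptr];     // the handle is resolved only when needed
}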
/// Index based pointer to a Peer
#[derive(Debug, Copy, Clone)]
pub struct AppPeerPtr(pub usize);
impl AppPeerPtr {
/// Takes an index based handle and returns the actual peer
pub fn lift(p: PeerPtr) -> Self {
Self(p.0)
}
/// Returns an index based handle to one Peer
pub fn lower(&self) -> PeerPtr {
PeerPtr(self.0)
}
pub fn get_app<'a>(&self, srv: &'a AppServer) -> &'a AppPeer {
&srv.peers[self.0]
}
pub fn get_app_mut<'a>(&self, srv: &'a mut AppServer) -> &'a mut AppPeer {
&mut srv.peers[self.0]
}
}
#[derive(Debug)]
pub enum AppPollResult {
DeleteKey(AppPeerPtr),
SendInitiation(AppPeerPtr),
SendRetransmission(AppPeerPtr),
ReceivedMessage(usize, Endpoint),
}
#[derive(Debug)]
pub enum KeyOutputReason {
Exchanged,
Stale,
}
/// Represents a communication partner rosenpass may be sending packets to
///
/// Generally, at the start of Rosenpass either no address or a hostname is known;
/// later, when we actually start to receive RespHello packets, we know the specific address
/// and socket to use with a peer
#[derive(Debug)]
pub enum Endpoint {
/// Rosenpass supports multiple sockets, so we include the information
/// which socket an address can be reached on. This probably does not
/// make much of a difference in most setups where two sockets are just
/// used to enable dual stack operation; it does make a difference in
/// more complex use cases.
///
/// For instance, it enables using multiple interfaces with overlapping
/// IP spaces, such as listening on a private IP network and a public IP
/// at the same time. It also ensures we reply from the same port the RespHello
/// was sent to when listening on multiple ports on the same interface. This
/// may be required for some arcane firewall setups.
SocketBoundAddress {
/// The socket the address can be reached under; this is generally
/// determined when we actually receive a RespHello message
socket: SocketPtr,
/// Just the address
addr: SocketAddr,
},
// A host name or IP address; storing the hostname here instead of an
// IP address makes sure that we look up the hostname whenever we try
// to make a connection; this may be beneficial in some setups where a hostname
// at first cannot be resolved but becomes resolvable later.
Discovery(HostPathDiscoveryEndpoint),
}
impl Endpoint {
/// Start discovery from some addresses
pub fn discovery_from_addresses(addresses: Vec<SocketAddr>) -> Self {
Endpoint::Discovery(HostPathDiscoveryEndpoint::from_addresses(addresses))
}
/// Start endpoint discovery from a hostname
pub fn discovery_from_hostname(hostname: String) -> anyhow::Result<Self> {
let host = HostPathDiscoveryEndpoint::lookup(hostname)?;
Ok(Endpoint::Discovery(host))
}
// Restart discovery; joining two sources of (potential) addresses
//
// This is used when the connection to an endpoint is lost in order
// to include the addresses specified on the command line and the
// address last used in the discovery process
pub fn discovery_from_multiple_sources(
a: Option<&Endpoint>,
b: Option<&Endpoint>,
) -> Option<Self> {
let sources = match (a, b) {
(Some(e), None) | (None, Some(e)) => e.addresses().iter().chain(&[]),
(Some(e1), Some(e2)) => e1.addresses().iter().chain(e2.addresses()),
(None, None) => return None,
};
let lower_size_bound = sources.size_hint().0;
let mut dedup = std::collections::HashSet::with_capacity(lower_size_bound);
let mut addrs = Vec::with_capacity(lower_size_bound);
for a in sources {
if dedup.insert(a) {
addrs.push(*a);
}
}
Some(Self::discovery_from_addresses(addrs))
}
pub fn send(&self, srv: &AppServer, buf: &[u8]) -> anyhow::Result<()> {
use Endpoint::*;
match self {
SocketBoundAddress { socket, addr } => socket.send_to(srv, buf, *addr),
Discovery(host) => host.send_scouting(srv, buf),
}
}
fn addresses(&self) -> &[SocketAddr] {
use Endpoint::*;
match self {
SocketBoundAddress { addr, .. } => slice::from_ref(addr),
Discovery(host) => host.addresses(),
}
}
}
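A hedged sketch of how discovery_from_multiple_sources merges address lists when discovery is restarted; the addresses are placeholders, the calls are the ones defined above:

// Illustration only; uses documentation-range example addresses.
fn merge_example() {
    use std::net::SocketAddr;
    let a: SocketAddr = "192.0.2.1:9999".parse().unwrap();
    let b: SocketAddr = "192.0.2.2:9999".parse().unwrap();
    let current = Endpoint::discovery_from_addresses(vec![a]);
    let initial = Endpoint::discovery_from_addresses(vec![a, b]);
    // The last used address comes first, the configured ones follow,
    // and the duplicate `a` is dropped.
    let merged = Endpoint::discovery_from_multiple_sources(Some(&current), Some(&initial));
    assert!(merged.is_some());
}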
/// Handles host-path discovery
///
/// When rosenpass is started, we either know no peer address
/// or we know a hostname. How to contact this hostname may not
/// be entirely clear for two reasons:
///
/// 1. We have multiple sockets; only a subset of those may be able to contact the host
/// 2. DNS resolution can return multiple addresses
///
/// We could just use the first working socket and the first address returned, but this
/// may be error prone: Some of the sockets may appear to be able to contact the host,
/// but the packets will be dropped. Some of the addresses may appear to be reachable
/// but the packets could be lost.
///
/// In contrast to TCP, UDP has no mechanism to ensure packets actually arrive.
///
/// To robustly handle host path discovery, we try each socket-ip-combination in a round
/// robin fashion; the struct stores the offset of the last used combination internally
/// and will continue with the next combination on every call.
///
/// Retransmission handling will continue normally; i.e. increasing the distance between
/// retransmissions on every retransmission, until it is long enough to bore a human. Therefore,
/// it is important to avoid having a large number of sockets drop packets not just for efficiency
/// but to avoid latency issues too.
///
// TODO: We might consider adjusting the retransmission handling to account for host-path discovery
#[derive(Debug)]
pub struct HostPathDiscoveryEndpoint {
scouting_state: Cell<(usize, usize)>, // addr_off, sock_off
addresses: Vec<SocketAddr>,
}
impl HostPathDiscoveryEndpoint {
pub fn from_addresses(addresses: Vec<SocketAddr>) -> Self {
let scouting_state = Cell::new((0, 0));
Self {
addresses,
scouting_state,
}
}
/// Lookup a hostname
pub fn lookup(hostname: String) -> anyhow::Result<Self> {
Ok(Self {
addresses: ToSocketAddrs::to_socket_addrs(&hostname)?.collect(),
scouting_state: Cell::new((0, 0)),
})
}
pub fn addresses(&self) -> &Vec<SocketAddr> {
&self.addresses
}
fn insert_next_scout_offset(&self, srv: &AppServer, addr_no: usize, sock_no: usize) {
self.scouting_state.set((
(addr_no + 1) % self.addresses.len(),
(sock_no + 1) % srv.sockets.len(),
));
}
/// Attempt to reach the host
///
/// Will round-robin-try different socket-ip-combinations on each call.
pub fn send_scouting(&self, srv: &AppServer, buf: &[u8]) -> anyhow::Result<()> {
let (addr_off, sock_off) = self.scouting_state.get();
let mut addrs = (self.addresses)
.iter()
.enumerate()
.cycle()
.skip(addr_off)
.take(self.addresses.len());
let mut sockets = (srv.sockets)
.iter()
.enumerate()
.cycle()
.skip(sock_off)
.take(srv.sockets.len());
for (addr_no, addr) in addrs.by_ref() {
for (sock_no, sock) in sockets.by_ref() {
let res = sock.send_to(buf, *addr);
let err = match res {
Ok(_) => {
self.insert_next_scout_offset(srv, addr_no, sock_no);
return Ok(());
}
Err(e) => e,
};
// TODO: replace this by
// e.kind() == io::ErrorKind::NetworkUnreachable
// once https://github.com/rust-lang/rust/issues/86442 lands
let ignore = err
.to_string()
.starts_with("Address family not supported by protocol");
if !ignore {
warn!("Socket #{} refusing to send to {}: ", sock_no, addr);
}
}
}
bail!("Unable to send message: All sockets returned errors.")
}
}
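A standalone sketch (not crate code) of the offset rotation that insert_next_scout_offset performs, assuming the first tried combination succeeds on every call; with three addresses and two sockets every socket-address pair eventually serves as the starting point:

// Illustration of the (addr_off, sock_off) rotation for 3 addresses and 2 sockets.
fn main() {
    let (addrs, socks) = (3usize, 2usize);
    let mut state = (0usize, 0usize);
    for call in 0..6 {
        println!("call {call}: scouting starts at address {} via socket {}", state.0, state.1);
        state = ((state.0 + 1) % addrs, (state.1 + 1) % socks);
    }
    // Prints the starting points (0,0), (1,1), (2,0), (0,1), (1,0), (2,1).
}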
impl AppServer {
pub fn new(
sk: SSk,
pk: SPk,
addrs: Vec<SocketAddr>,
verbosity: Verbosity,
) -> anyhow::Result<Self> {
// setup mio
let mio_poll = mio::Poll::new()?;
let events = mio::Events::with_capacity(8);
// bind each SocketAddr to a socket
let maybe_sockets: Result<Vec<_>, _> =
addrs.into_iter().map(mio::net::UdpSocket::bind).collect();
let mut sockets = maybe_sockets?;
// When no socket is specified, rosenpass should open one port on all
// available interfaces on a best-effort basis. Here are the cases of how this can possibly go:
//
// Some operating systems (such as Linux [^linux] and FreeBSD [^freebsd])
// support using IPv6 sockets to handle IPv4 connections; on these systems
// binding to the `[::]:0` address will typically open a dual-stack
// socket. Some other systems such as OpenBSD [^openbsd] do not support this feature.
//
// Dual-stack systems provide a flag to enable or disable this
// behavior, the IPV6_V6ONLY flag. OpenBSD supports this flag
// read-only. MIO[^mio] provides a way to read this flag but not
// to write it.
//
// - One dual-stack IPv6 socket, if the operating system supports dual-stack sockets and
// correctly reports this
// - One IPv6 socket and one IPv4 socket if the operating system does not support dual-stack
// sockets or disables them by default assuming this is also correctly reported
// - One IPv6 socket and no IPv4 socket if IPv6 socket is not dual-stack and opening
// the IPv6 socket fails
// - One IPv4 socket and no IPv6 socket if opening the IPv6 socket fails
// - One dual-stack IPv6 socket and a redundant IPv4 socket if dual-stack sockets are
// supported but the operating system does not correctly report this (specifically,
// if the only_v6() call raises an error)
// - Rosenpass exits if no socket could be opened
//
// [^freebsd]: https://man.freebsd.org/cgi/man.cgi?query=ip6&sektion=4&manpath=FreeBSD+6.0-RELEASE
// [^openbsd]: https://man.openbsd.org/ip6.4
// [^linux]: https://man7.org/linux/man-pages/man7/ipv6.7.html
// [^mio]: https://docs.rs/mio/0.8.6/mio/net/struct.UdpSocket.html#method.only_v6
if sockets.is_empty() {
macro_rules! try_register_socket {
($title:expr, $binding:expr) => {{
let r = mio::net::UdpSocket::bind($binding);
match r {
Ok(sock) => {
sockets.push(sock);
Some(sockets.len() - 1)
}
Err(e) => {
warn!("Could not bind to {} socket: {}", $title, e);
None
}
}
}};
}
let v6 = try_register_socket!("IPv6", ipv6_any_binding());
let need_v4 = match v6.map(|no| sockets[no].only_v6()) {
Some(Ok(v)) => v,
None => true,
Some(Err(e)) => {
warn!("Unable to detect whether the IPv6 socket supports dual-stack operation: {}", e);
true
}
};
if need_v4 {
try_register_socket!("IPv4", ipv4_any_binding());
}
}
if sockets.is_empty() {
bail!("No sockets to listen on!")
}
// register all sockets to mio
for (i, socket) in sockets.iter_mut().enumerate() {
mio_poll
.registry()
.register(socket, Token(i), Interest::READABLE)?;
}
// TODO use mio::net::UnixStream together with std::os::unix::net::UnixStream for Linux
Ok(Self {
crypt: CryptoServer::new(sk, pk),
peers: Vec::new(),
verbosity,
sockets,
events,
mio_poll,
all_sockets_drained: false,
})
}
pub fn verbose(&self) -> bool {
matches!(self.verbosity, Verbosity::Verbose)
}
pub fn add_peer(
&mut self,
psk: Option<SymKey>,
pk: SPk,
outfile: Option<PathBuf>,
outwg: Option<WireguardOut>,
hostname: Option<String>,
) -> anyhow::Result<AppPeerPtr> {
let PeerPtr(pn) = self.crypt.add_peer(psk, pk)?;
assert!(pn == self.peers.len());
let initial_endpoint = hostname
.map(Endpoint::discovery_from_hostname)
.transpose()?;
let current_endpoint = None;
self.peers.push(AppPeer {
outfile,
outwg,
initial_endpoint,
current_endpoint,
});
Ok(AppPeerPtr(pn))
}
pub fn listen_loop(&mut self) -> anyhow::Result<()> {
const INIT_SLEEP: f64 = 0.01;
const MAX_FAILURES: i32 = 10;
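// For orientation, the retry schedule these constants imply (assuming no
// message is processed in between, see below): failure #1 sleeps 0.01 s,
// #2 sleeps 0.02 s, #5 sleeps 0.16 s, #10 sleeps 5.12 s, and on failure #11
// the loop bails with "too many network failures".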
let mut failure_cnt = 0;
loop {
let msgs_processed = 0usize;
let err = match self.event_loop() {
Ok(()) => return Ok(()),
Err(e) => e,
};
// This should not happen…
failure_cnt = if msgs_processed > 0 {
0
} else {
failure_cnt + 1
};
let sleep = INIT_SLEEP * 2.0f64.powf(f64::from(failure_cnt - 1));
let tries_left = MAX_FAILURES - (failure_cnt - 1);
error!(
"unexpected error after processing {} messages: {:?} {}",
msgs_processed,
err,
err.backtrace()
);
if tries_left > 0 {
error!("re-initializing networking in {sleep}! {tries_left} tries left.");
std::thread::sleep(self.crypt.timebase.dur(sleep));
continue;
}
bail!("too many network failures");
}
}
pub fn event_loop(&mut self) -> anyhow::Result<()> {
let (mut rx, mut tx) = (MsgBuf::zero(), MsgBuf::zero());
/// if socket address for peer is known, call closure
/// assumes that closure leaves a message in `tx`
/// assumes that closure returns the length of message in bytes
macro_rules! tx_maybe_with {
($peer:expr, $fn:expr) => {
attempt!({
let p = $peer;
if p.get_app(self).endpoint().is_some() {
let len = $fn()?;
let ep: &Endpoint = p.get_app(self).endpoint().unwrap();
ep.send(self, &tx[..len])?;
}
Ok(())
})
};
}
loop {
use crate::protocol::HandleMsgResult;
use AppPollResult::*;
use KeyOutputReason::*;
match self.poll(&mut *rx)? {
#[allow(clippy::redundant_closure_call)]
SendInitiation(peer) => tx_maybe_with!(peer, || self
.crypt
.initiate_handshake(peer.lower(), &mut *tx))?,
#[allow(clippy::redundant_closure_call)]
SendRetransmission(peer) => tx_maybe_with!(peer, || self
.crypt
.retransmit_handshake(peer.lower(), &mut *tx))?,
DeleteKey(peer) => {
self.output_key(peer, Stale, &SymKey::random())?;
// There was a loss of connection apparently; restart host discovery
// starting from the last used address but including all the initially
// specified addresses
// TODO: We could do this preemptively, before any connection loss actually occurs.
let p = peer.get_app_mut(self);
p.current_endpoint = Endpoint::discovery_from_multiple_sources(
p.current_endpoint.as_ref(),
p.initial_endpoint.as_ref(),
);
}
ReceivedMessage(len, endpoint) => {
match self.crypt.handle_msg(&rx[..len], &mut *tx) {
Err(ref e) => {
self.verbose().then(|| {
info!(
"error processing incoming message from {:?}: {:?} {}",
endpoint,
e,
e.backtrace()
);
});
}
Ok(HandleMsgResult {
resp,
exchanged_with,
..
}) => {
if let Some(len) = resp {
endpoint.send(self, &tx[0..len])?;
}
if let Some(p) = exchanged_with {
let ap = AppPeerPtr::lift(p);
ap.get_app_mut(self).current_endpoint = Some(endpoint);
// TODO: Maybe we should rather call the key "rosenpass output"?
self.output_key(ap, Exchanged, &self.crypt.osk(p)?)?;
}
}
}
}
};
}
}
pub fn output_key(
&self,
peer: AppPeerPtr,
why: KeyOutputReason,
key: &SymKey,
) -> anyhow::Result<()> {
let peerid = peer.lower().get(&self.crypt).pidt()?;
let ap = peer.get_app(self);
if self.verbose() {
let msg = match why {
KeyOutputReason::Exchanged => "Exchanged key with peer",
KeyOutputReason::Stale => "Erasing outdated key from peer",
};
info!("{} {}", msg, fmt_b64(&*peerid));
}
if let Some(of) = ap.outfile.as_ref() {
// This might leave some fragments of the secret on the stack;
// in practice this is likely not a problem because the stack likely
// will be overwritten by something else soon but this is not exactly
// guaranteed. It would be possible to remedy this, but since the secret
// data will linger in the linux page cache anyways with the current
// implementation, going to great length to erase the secret here is
// not worth it right now.
b64_writer(fopen_w(of)?).write_all(key.secret())?;
let why = match why {
KeyOutputReason::Exchanged => "exchanged",
KeyOutputReason::Stale => "stale",
};
// this is intentionally writing to stdout instead of stderr, because
// it is meant to allow external detection of a successful key-exchange
println!(
"output-key peer {} key-file {of:?} {why}",
fmt_b64(&*peerid)
);
}
if let Some(owg) = ap.outwg.as_ref() {
let mut child = Command::new("wg")
.arg("set")
.arg(&owg.dev)
.arg("peer")
.arg(&owg.pk)
.arg("preshared-key")
.arg("/dev/stdin")
.stdin(Stdio::piped())
.args(&owg.extra_params)
.spawn()?;
b64_writer(child.stdin.take().unwrap()).write_all(key.secret())?;
thread::spawn(move || {
let status = child.wait();
if let Ok(status) = status {
if status.success() {
debug!("successfully passed psk to wg")
} else {
error!("could not pass psk to wg {:?}", status)
}
} else {
error!("wait failed: {:?}", status)
}
});
}
Ok(())
}
pub fn poll(&mut self, rx_buf: &mut [u8]) -> anyhow::Result<AppPollResult> {
use crate::protocol::PollResult as C;
use AppPollResult as A;
loop {
return Ok(match self.crypt.poll()? {
C::DeleteKey(PeerPtr(no)) => A::DeleteKey(AppPeerPtr(no)),
C::SendInitiation(PeerPtr(no)) => A::SendInitiation(AppPeerPtr(no)),
C::SendRetransmission(PeerPtr(no)) => A::SendRetransmission(AppPeerPtr(no)),
C::Sleep(timeout) => match self.try_recv(rx_buf, timeout)? {
Some((len, addr)) => A::ReceivedMessage(len, addr),
None => continue,
},
});
}
}
/// Tries to receive a new message
///
/// - might wait for a duration up to `timeout`
/// - returns immediately if an error occurs
/// - returns immediately if a new message is received
pub fn try_recv(
&mut self,
buf: &mut [u8],
timeout: Timing,
) -> anyhow::Result<Option<(usize, Endpoint)>> {
let timeout = Duration::from_secs_f64(timeout);
// if there is no time to wait on IO, well, then, let's not waste any time!
if timeout.is_zero() {
return Ok(None);
}
// NOTE when using mio::Poll, there are some particularities (taken from
// https://docs.rs/mio/latest/mio/struct.Poll.html):
//
// - poll() might return readiness, even if nothing is ready
// - in this case, a WouldBlock error is returned from actual IO operations
// - after receiving readiness for a source, it must be drained until a WouldBlock
// is received
//
// This would usually require us to maintain the drainage status of each socket;
// a socket would only become drained when it returned WouldBlock and only
// non-drained when receiving a readiness event from mio for it. Then, only the
// ready sockets should be worked on, ideally without requiring an O(n) search
// through all sockets for checking their drained status. However, our use-case
// is primarily having one or two sockets (if IPv4 and IPv6 IF_ANY listen is
// desired on a non-dual-stack OS), thus just checking every socket after any
// readiness event seems to be good enough™ for now.
// only poll if we drained all sockets before
if self.all_sockets_drained {
self.mio_poll.poll(&mut self.events, Some(timeout))?;
}
let mut would_block_count = 0;
for (sock_no, socket) in self.sockets.iter_mut().enumerate() {
match socket.recv_from(buf) {
Ok((n, addr)) => {
// at least one socket was not drained...
self.all_sockets_drained = false;
return Ok(Some((
n,
Endpoint::SocketBoundAddress {
socket: SocketPtr(sock_no),
addr,
},
)));
}
Err(e) if e.kind() == ErrorKind::WouldBlock => {
would_block_count += 1;
}
// TODO if one socket continuously returns an error, then we never poll, thus we never wait for a timeout, thus we have a spin-lock
Err(e) => return Err(e.into()),
}
}
// if each socket returned WouldBlock, then we drained them all at least once indeed
self.all_sockets_drained = would_block_count == self.sockets.len();
Ok(None)
}
}

295
rosenpass/src/cli.rs Normal file
View File

@@ -0,0 +1,295 @@
use anyhow::{bail, ensure};
use clap::Parser;
use rosenpass_cipher_traits::Kem;
use rosenpass_ciphers::kem::StaticKem;
use rosenpass_secret_memory::file::StoreSecret;
use rosenpass_util::file::{LoadValue, LoadValueB64};
use std::path::PathBuf;
use crate::app_server;
use crate::app_server::AppServer;
use crate::protocol::{SPk, SSk, SymKey};
use super::config;
#[derive(Parser, Debug)]
#[command(author, version, about, long_about)]
pub enum Cli {
/// Start Rosenpass in server mode and carry on with the key exchange
///
/// This will parse the configuration file and perform the key exchange
/// with the specified peers. If a peer's endpoint is specified, this
/// Rosenpass instance will try to initiate a key exchange with the peer,
/// otherwise only initiation attempts from the peer will be responded to.
ExchangeConfig { config_file: PathBuf },
/// Start in daemon mode, performing key exchanges
///
/// The configuration is read from the command line. The `peer` token
/// always separates multiple peers, e.g. if the token `peer` appears
/// in the WIREGUARD_EXTRA_ARGS it is not put into the WireGuard arguments
/// but instead a new peer is created.
/* Explanation: `first_arg` and `rest_of_args` are combined into one
* `Vec<String>`. They are only used to trick clap into displaying some
* guidance on the CLI usage.
*/
#[allow(rustdoc::broken_intra_doc_links)]
#[allow(rustdoc::invalid_html_tags)]
Exchange {
/// public-key <PATH> secret-key <PATH> [listen <ADDR>:<PORT>]... [verbose]
#[clap(value_name = "OWN_CONFIG")]
first_arg: String,
/// peer public-key <PATH> [ENDPOINT] [PSK] [OUTFILE] [WG]
///
/// ENDPOINT := endpoint <HOST/IP>:<PORT>
///
/// PSK := preshared-key <PATH>
///
/// OUTFILE := outfile <PATH>
///
/// WG := wireguard <WIREGUARD_DEV> <WIREGUARD_PEER> [WIREGUARD_EXTRA_ARGS]...
#[clap(value_name = "PEERS")]
rest_of_args: Vec<String>,
/// Save the parsed configuration to a file before starting the daemon
#[clap(short, long)]
config_file: Option<PathBuf>,
},
/// Generate a demo config file
GenConfig {
config_file: PathBuf,
/// Forcefully overwrite existing config file
#[clap(short, long)]
force: bool,
},
/// Generate the keys mentioned in a configFile
///
/// Generates secret- & public-key to their destination. If a config file
/// is provided then the key file destination is taken from there.
/// Otherwise the
GenKeys {
config_file: Option<PathBuf>,
/// where to write public-key to
#[clap(short, long)]
public_key: Option<PathBuf>,
/// where to write secret-key to
#[clap(short, long)]
secret_key: Option<PathBuf>,
/// Forcefully overwrite public- & secret-key file
#[clap(short, long)]
force: bool,
},
/// Deprecated - use gen-keys instead
#[allow(rustdoc::broken_intra_doc_links)]
#[allow(rustdoc::invalid_html_tags)]
Keygen {
// NOTE yes, the legacy keygen argument initially really accepted "private-key", not "secret-key"!
/// public-key <PATH> private-key <PATH>
args: Vec<String>,
},
/// Validate a configuration
Validate { config_files: Vec<PathBuf> },
/// Show the rosenpass manpage
// TODO make this the default, but only after the manpage has been adjusted once the CLI stabilizes
Man,
}
impl Cli {
pub fn run() -> anyhow::Result<()> {
let cli = Self::parse();
use Cli::*;
match cli {
Man => {
let man_cmd = std::process::Command::new("man")
.args(["1", "rosenpass"])
.status();
if !(man_cmd.is_ok() && man_cmd.unwrap().success()) {
println!(include_str!(env!("ROSENPASS_MAN")));
}
}
GenConfig { config_file, force } => {
ensure!(
force || !config_file.exists(),
"config file {config_file:?} already exists"
);
config::Rosenpass::example_config().store(config_file)?;
}
// Deprecated - use gen-keys instead
Keygen { args } => {
log::warn!("The 'keygen' command is deprecated. Please use the 'gen-keys' command instead.");
let mut public_key: Option<PathBuf> = None;
let mut secret_key: Option<PathBuf> = None;
// Manual arg parsing, since clap wants to prefix flags with "--"
let mut args = args.into_iter();
loop {
match (args.next().as_ref().map(String::as_str), args.next()) {
(Some("private-key"), Some(opt)) | (Some("secret-key"), Some(opt)) => {
secret_key = Some(opt.into());
}
(Some("public-key"), Some(opt)) => {
public_key = Some(opt.into());
}
(Some(flag), _) => {
bail!("Unknown option `{}`", flag);
}
(_, _) => break,
};
}
if secret_key.is_none() {
bail!("private-key is required");
}
if public_key.is_none() {
bail!("public-key is required");
}
generate_and_save_keypair(secret_key.unwrap(), public_key.unwrap())?;
}
GenKeys {
config_file,
public_key,
secret_key,
force,
} => {
// figure out where the key file is specified, in the config file or directly as flag?
let (pkf, skf) = match (config_file, public_key, secret_key) {
(Some(config_file), _, _) => {
ensure!(
config_file.exists(),
"config file {config_file:?} does not exist"
);
let config = config::Rosenpass::load(config_file)?;
(config.public_key, config.secret_key)
}
(_, Some(pkf), Some(skf)) => (pkf, skf),
_ => {
bail!("either a config-file or both public-key and secret-key file are required")
}
};
// check that we are not overriding something unintentionally
let mut problems = vec![];
if !force && pkf.is_file() {
problems.push(format!(
"public-key file {pkf:?} exist, refusing to overwrite it"
));
}
if !force && skf.is_file() {
problems.push(format!(
"secret-key file {skf:?} exist, refusing to overwrite it"
));
}
if !problems.is_empty() {
bail!(problems.join("\n"));
}
// generate the keys and store them in files
generate_and_save_keypair(skf, pkf)?;
}
ExchangeConfig { config_file } => {
ensure!(
config_file.exists(),
"config file '{config_file:?}' does not exist"
);
let config = config::Rosenpass::load(config_file)?;
config.validate()?;
Self::event_loop(config)?;
}
Exchange {
first_arg,
mut rest_of_args,
config_file,
} => {
rest_of_args.insert(0, first_arg);
let args = rest_of_args;
let mut config = config::Rosenpass::parse_args(args)?;
if let Some(p) = config_file {
config.store(&p)?;
config.config_file_path = p;
}
config.validate()?;
Self::event_loop(config)?;
}
Validate { config_files } => {
for file in config_files {
match config::Rosenpass::load(&file) {
Ok(config) => {
eprintln!("{file:?} is valid TOML and conforms to the expected schema");
match config.validate() {
Ok(_) => eprintln!("{file:?} is passed all logical checks"),
Err(_) => eprintln!("{file:?} contains logical errors"),
}
}
Err(e) => eprintln!("{file:?} is not valid: {e}"),
}
}
}
}
Ok(())
}
fn event_loop(config: config::Rosenpass) -> anyhow::Result<()> {
// load own keys
let sk = SSk::load(&config.secret_key)?;
let pk = SPk::load(&config.public_key)?;
// start an application server
let mut srv = std::boxed::Box::<AppServer>::new(AppServer::new(
sk,
pk,
config.listen,
config.verbosity,
)?);
for cfg_peer in config.peers {
srv.add_peer(
// psk, pk, outfile, outwg, tx_addr
cfg_peer.pre_shared_key.map(SymKey::load_b64).transpose()?,
SPk::load(&cfg_peer.public_key)?,
cfg_peer.key_out,
cfg_peer.wg.map(|cfg| app_server::WireguardOut {
dev: cfg.device,
pk: cfg.peer,
extra_params: cfg.extra_params,
}),
cfg_peer.endpoint.clone(),
)?;
}
srv.event_loop()
}
}
/// generate secret and public keys, store in files according to the paths passed as arguments
fn generate_and_save_keypair(secret_key: PathBuf, public_key: PathBuf) -> anyhow::Result<()> {
let mut ssk = crate::protocol::SSk::random();
let mut spk = crate::protocol::SPk::random();
StaticKem::keygen(ssk.secret_mut(), spk.secret_mut())?;
ssk.store_secret(secret_key)?;
spk.store_secret(public_key)
}

444
rosenpass/src/config.rs Normal file
View File

@@ -0,0 +1,444 @@
use std::{
collections::HashSet,
fs,
io::Write,
net::{Ipv4Addr, Ipv6Addr, SocketAddr, SocketAddrV4, SocketAddrV6, ToSocketAddrs},
path::{Path, PathBuf},
};
use anyhow::{bail, ensure};
use rosenpass_util::file::fopen_w;
use serde::{Deserialize, Serialize};
#[derive(Debug, Serialize, Deserialize)]
pub struct Rosenpass {
pub public_key: PathBuf,
pub secret_key: PathBuf,
pub listen: Vec<SocketAddr>,
#[serde(default)]
pub verbosity: Verbosity,
pub peers: Vec<RosenpassPeer>,
#[serde(skip)]
pub config_file_path: PathBuf,
}
#[derive(Debug, PartialEq, Eq, Serialize, Deserialize)]
pub enum Verbosity {
Quiet,
Verbose,
}
#[derive(Debug, Default, PartialEq, Eq, Serialize, Deserialize)]
pub struct RosenpassPeer {
pub public_key: PathBuf,
pub endpoint: Option<String>,
pub pre_shared_key: Option<PathBuf>,
#[serde(default)]
pub key_out: Option<PathBuf>,
// TODO make this field only available on binary builds, not on library builds
#[serde(flatten)]
pub wg: Option<WireGuard>,
}
#[derive(Debug, Default, PartialEq, Eq, Serialize, Deserialize)]
pub struct WireGuard {
pub device: String,
pub peer: String,
#[serde(default)]
pub extra_params: Vec<String>,
}
impl Rosenpass {
/// Load a config file from a file path
///
/// no validation is conducted
pub fn load<P: AsRef<Path>>(p: P) -> anyhow::Result<Self> {
let mut config: Self = toml::from_str(&fs::read_to_string(&p)?)?;
config.config_file_path = p.as_ref().to_owned();
Ok(config)
}
/// Write a config to a file
pub fn store<P: AsRef<Path>>(&self, p: P) -> anyhow::Result<()> {
let serialized_config =
toml::to_string_pretty(&self).expect("unable to serialize the default config");
fs::write(p, serialized_config)?;
Ok(())
}
/// Commit the configuration to where it came from, overwriting the original file
pub fn commit(&self) -> anyhow::Result<()> {
let mut f = fopen_w(&self.config_file_path)?;
f.write_all(toml::to_string_pretty(&self)?.as_bytes())?;
self.store(&self.config_file_path)
}
/// Validate a configuration
pub fn validate(&self) -> anyhow::Result<()> {
// check the public-key file exists
ensure!(
self.public_key.is_file(),
"public-key file {:?} does not exist",
self.public_key
);
// check the secret-key file exists
ensure!(
self.secret_key.is_file(),
"secret-key file {:?} does not exist",
self.secret_key
);
for (i, peer) in self.peers.iter().enumerate() {
// check peer's public-key file exists
ensure!(
peer.public_key.is_file(),
"peer {i} public-key file {:?} does not exist",
peer.public_key
);
// check endpoint is usable
if let Some(addr) = peer.endpoint.as_ref() {
ensure!(
addr.to_socket_addrs().is_ok(),
"peer {i} endpoint {} can not be parsed to a socket address",
addr
);
}
// TODO warn if neither out_key nor exchange_command is defined
}
Ok(())
}
/// Creates a new configuration
pub fn new<P1: AsRef<Path>, P2: AsRef<Path>>(public_key: P1, secret_key: P2) -> Self {
Self {
public_key: PathBuf::from(public_key.as_ref()),
secret_key: PathBuf::from(secret_key.as_ref()),
listen: vec![],
verbosity: Verbosity::Quiet,
peers: vec![],
config_file_path: PathBuf::new(),
}
}
/// Add IPv4 __and__ IPv6 IF_ANY address to the listen interfaces
pub fn add_if_any(&mut self, port: u16) {
let ipv4_any = SocketAddr::V4(SocketAddrV4::new(Ipv4Addr::new(0, 0, 0, 0), port));
let ipv6_any = SocketAddr::V6(SocketAddrV6::new(
Ipv6Addr::new(0, 0, 0, 0, 0, 0, 0, 0),
port,
0,
0,
));
self.listen.push(ipv4_any);
self.listen.push(ipv6_any);
}
/// from chaotic args
/// Question: the grammar is undecidable, what do we do here?
pub fn parse_args(args: Vec<String>) -> anyhow::Result<Self> {
let mut config = Self::new("", "");
#[derive(Debug, Hash, PartialEq, Eq)]
enum State {
Own,
OwnPublicKey,
OwnSecretKey,
OwnListen,
Peer,
PeerPsk,
PeerPublicKey,
PeerEndpoint,
PeerOutfile,
PeerWireguardDev,
PeerWireguardPeer,
PeerWireguardExtraArgs,
}
let mut already_set = HashSet::new();
// TODO idea: use config.peers.len() to give index of peer with conflicting argument
use State::*;
let mut state = Own;
let mut current_peer = None;
let p_exists = "a peer should exist by now";
let wg_exists = "a peer wireguard should exist by now";
for arg in args {
state = match (state, arg.as_str(), &mut current_peer) {
(Own, "public-key", None) => OwnPublicKey,
(Own, "secret-key", None) => OwnSecretKey,
(Own, "private-key", None) => {
log::warn!(
"the private-key argument is deprecated, please use secret-key instead"
);
OwnSecretKey
}
(Own, "listen", None) => OwnListen,
(Own, "verbose", None) => {
config.verbosity = Verbosity::Verbose;
Own
}
(Own, "peer", None) => {
ensure!(
already_set.contains(&OwnPublicKey),
"public-key file must be set"
);
ensure!(
already_set.contains(&OwnSecretKey),
"secret-key file must be set"
);
already_set.clear();
current_peer = Some(RosenpassPeer::default());
Peer
}
(OwnPublicKey, pk, None) => {
ensure!(
already_set.insert(OwnPublicKey),
"public-key was already set"
);
config.public_key = pk.into();
Own
}
(OwnSecretKey, sk, None) => {
ensure!(
already_set.insert(OwnSecretKey),
"secret-key was already set"
);
config.secret_key = sk.into();
Own
}
(OwnListen, l, None) => {
already_set.insert(OwnListen); // multiple listen directives are allowed
for socket_addr in l.to_socket_addrs()? {
config.listen.push(socket_addr);
}
Own
}
(Peer | PeerWireguardExtraArgs, "peer", maybe_peer @ Some(_)) => {
// TODO check current peer
// commit current peer, create a new one
config.peers.push(maybe_peer.take().expect(p_exists));
already_set.clear();
current_peer = Some(RosenpassPeer::default());
Peer
}
(Peer, "public-key", Some(_)) => PeerPublicKey,
(Peer, "endpoint", Some(_)) => PeerEndpoint,
(Peer, "preshared-key", Some(_)) => PeerPsk,
(Peer, "outfile", Some(_)) => PeerOutfile,
(Peer, "wireguard", Some(_)) => PeerWireguardDev,
(PeerPublicKey, pk, Some(peer)) => {
ensure!(
already_set.insert(PeerPublicKey),
"public-key was already set"
);
peer.public_key = pk.into();
Peer
}
(PeerEndpoint, e, Some(peer)) => {
ensure!(already_set.insert(PeerEndpoint), "endpoint was already set");
peer.endpoint = Some(e.to_owned());
Peer
}
(PeerPsk, psk, Some(peer)) => {
ensure!(already_set.insert(PeerEndpoint), "peer psk was already set");
peer.pre_shared_key = Some(psk.into());
Peer
}
(PeerOutfile, of, Some(peer)) => {
ensure!(
already_set.insert(PeerOutfile),
"peer outfile was already set"
);
peer.key_out = Some(of.into());
Peer
}
(PeerWireguardDev, dev, Some(peer)) => {
ensure!(
already_set.insert(PeerWireguardDev),
"peer wireguard-dev was already set"
);
assert!(peer.wg.is_none());
peer.wg = Some(WireGuard {
device: dev.to_string(),
..Default::default()
});
PeerWireguardPeer
}
(PeerWireguardPeer, p, Some(peer)) => {
ensure!(
already_set.insert(PeerWireguardPeer),
"peer wireguard-peer was already set"
);
peer.wg.as_mut().expect(wg_exists).peer = p.to_string();
PeerWireguardExtraArgs
}
(PeerWireguardExtraArgs, arg, Some(peer)) => {
peer.wg
.as_mut()
.expect(wg_exists)
.extra_params
.push(arg.to_string());
PeerWireguardExtraArgs
}
// error cases
(Own, x, None) => {
bail!("unrecognised argument {x}");
}
(Own | OwnPublicKey | OwnSecretKey | OwnListen, _, Some(_)) => {
panic!("current_peer is not None while in Own* state, this must never happen")
}
(State::Peer, arg, Some(_)) => {
bail!("unrecongnised argument {arg}");
}
(
Peer
| PeerEndpoint
| PeerOutfile
| PeerPublicKey
| PeerPsk
| PeerWireguardDev
| PeerWireguardPeer
| PeerWireguardExtraArgs,
_,
None,
) => {
panic!("got peer options but no peer was created")
}
};
}
if let Some(p) = current_peer {
// TODO ensure peer is propagated with sufficient information
config.peers.push(p);
}
Ok(config)
}
}
impl Rosenpass {
/// Generate an example configuration
pub fn example_config() -> Self {
let peer = RosenpassPeer {
public_key: "/path/to/rp-peer-public-key".into(),
endpoint: Some("my-peer.test:9999".into()),
key_out: Some("/path/to/rp-key-out.txt".into()),
pre_shared_key: Some("additional pre shared key".into()),
wg: Some(WireGuard {
device: "wirgeguard device e.g. wg0".into(),
peer: "wireguard public key".into(),
extra_params: vec!["passed to".into(), "wg set".into()],
}),
};
Self {
public_key: "/path/to/rp-public-key".into(),
secret_key: "/path/to/rp-secret-key".into(),
peers: vec![peer],
..Self::new("", "")
}
}
}
impl Default for Verbosity {
fn default() -> Self {
Self::Quiet
}
}
#[cfg(test)]
mod test {
use std::net::IpAddr;
use super::*;
fn split_str(s: &str) -> Vec<String> {
s.split(" ").map(|s| s.to_string()).collect()
}
#[test]
fn test_simple_cli_parse() {
let args = split_str(
"public-key /my/public-key secret-key /my/secret-key verbose \
listen 0.0.0.0:9999 peer public-key /peer/public-key endpoint \
peer.test:9999 outfile /peer/rp-out",
);
let config = Rosenpass::parse_args(args).unwrap();
assert_eq!(config.public_key, PathBuf::from("/my/public-key"));
assert_eq!(config.secret_key, PathBuf::from("/my/secret-key"));
assert_eq!(config.verbosity, Verbosity::Verbose);
assert_eq!(
&config.listen,
&vec![SocketAddr::new(IpAddr::V4(Ipv4Addr::new(0, 0, 0, 0)), 9999)]
);
assert_eq!(
config.peers,
vec![RosenpassPeer {
public_key: PathBuf::from("/peer/public-key"),
endpoint: Some("peer.test:9999".into()),
pre_shared_key: None,
key_out: Some(PathBuf::from("/peer/rp-out")),
..Default::default()
}]
)
}
#[test]
fn test_cli_parse_multiple_peers() {
let args = split_str(
"public-key /my/public-key secret-key /my/secret-key verbose \
peer public-key /peer-a/public-key endpoint \
peer.test:9999 outfile /peer-a/rp-out \
peer public-key /peer-b/public-key outfile /peer-b/rp-out",
);
let config = Rosenpass::parse_args(args).unwrap();
assert_eq!(config.public_key, PathBuf::from("/my/public-key"));
assert_eq!(config.secret_key, PathBuf::from("/my/secret-key"));
assert_eq!(config.verbosity, Verbosity::Verbose);
assert!(&config.listen.is_empty());
assert_eq!(
config.peers,
vec![
RosenpassPeer {
public_key: PathBuf::from("/peer-a/public-key"),
endpoint: Some("peer.test:9999".into()),
pre_shared_key: None,
key_out: Some(PathBuf::from("/peer-a/rp-out")),
..Default::default()
},
RosenpassPeer {
public_key: PathBuf::from("/peer-b/public-key"),
endpoint: None,
pre_shared_key: None,
key_out: Some(PathBuf::from("/peer-b/rp-out")),
..Default::default()
}
]
)
}
}

View File

@@ -0,0 +1,46 @@
//! Pseudo Random Functions (PRFs) with a tree-like label scheme which
//! ensures their uniqueness
use anyhow::Result;
use rosenpass_ciphers::{hash_domain::HashDomain, KEY_LEN};
// TODO Use labels that can serve as identifiers
macro_rules! hash_domain_ns {
($base:ident, $name:ident, $($lbl:expr),* ) => {
pub fn $name() -> Result<HashDomain> {
let t = $base()?;
$( let t = t.mix($lbl.as_bytes())?; )*
Ok(t)
}
}
}
macro_rules! hash_domain {
($base:ident, $name:ident, $($lbl:expr),* ) => {
pub fn $name() -> Result<[u8; KEY_LEN]> {
let t = $base()?;
$( let t = t.mix($lbl.as_bytes())?; )*
Ok(t.into_value())
}
}
}
pub fn protocol() -> Result<HashDomain> {
HashDomain::zero().mix("Rosenpass v1 mceliece460896 Kyber512 ChaChaPoly1305 BLAKE2s".as_bytes())
}
hash_domain_ns!(protocol, mac, "mac");
hash_domain_ns!(protocol, cookie, "cookie");
hash_domain_ns!(protocol, peerid, "peer id");
hash_domain_ns!(protocol, biscuit_ad, "biscuit additional data");
hash_domain_ns!(protocol, ckinit, "chaining key init");
hash_domain_ns!(protocol, _ckextract, "chaining key extract");
hash_domain!(_ckextract, mix, "mix");
hash_domain!(_ckextract, hs_enc, "handshake encryption");
hash_domain!(_ckextract, ini_enc, "initiator handshake encryption");
hash_domain!(_ckextract, res_enc, "responder handshake encryption");
hash_domain_ns!(_ckextract, _user, "user");
hash_domain_ns!(_user, _rp, "rosenpass.eu");
hash_domain!(_rp, osk, "wireguard psk");
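A hand-expanded form of one invocation above, as a hedged illustration of what the macros generate (renamed here to avoid clashing with the generated `mix`):

// Hand expansion of `hash_domain!(_ckextract, mix, "mix")`, for illustration.
pub fn mix_expanded() -> Result<[u8; KEY_LEN]> {
    let t = _ckextract()?; // start from the parent hash-domain namespace
    let t = t.mix("mix".as_bytes())?; // append this function's label
    Ok(t.into_value()) // collapse the domain into a KEY_LEN byte value
}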

24
rosenpass/src/lib.rs Normal file
View File

@@ -0,0 +1,24 @@
use rosenpass_lenses::LenseError;
pub mod app_server;
pub mod cli;
pub mod config;
pub mod hash_domains;
pub mod msgs;
pub mod protocol;
#[derive(thiserror::Error, Debug)]
pub enum RosenpassError {
#[error("buffer size mismatch")]
BufferSizeMismatch,
#[error("invalid message type")]
InvalidMessageType(u8),
}
impl From<LenseError> for RosenpassError {
fn from(value: LenseError) -> Self {
match value {
LenseError::BufferSizeMismatch => RosenpassError::BufferSizeMismatch,
}
}
}

23
rosenpass/src/main.rs Normal file
View File

@@ -0,0 +1,23 @@
use log::error;
use rosenpass::cli::Cli;
use rosenpass_util::attempt;
use std::process::exit;
/// Catches errors, prints them through the logger, then exits
pub fn main() {
// default to displaying warning and error log messages only
env_logger::Builder::from_env(env_logger::Env::default().default_filter_or("warn")).init();
let res = attempt!({
rosenpass_sodium::init()?;
Cli::run()
});
match res {
Ok(_) => {}
Err(e) => {
error!("{e}");
exit(1);
}
}
}

208
rosenpass/src/msgs.rs Normal file
View File

@@ -0,0 +1,208 @@
//! Data structures representing the messages going over the wire
//!
//! This module contains de-/serialization of the protocol's messages. That's kind
//! of a lie, since no actual ser/de happens. Instead, the structures offer views
//! into mutable byte slices (`&mut [u8]`), allowing to modify the fields of an
//! always serialized instance of the data in question. This is closely related
//! to the concept of lenses in functional programming; more on that here:
//! [https://sinusoid.es/misc/lager/lenses.pdf](https://sinusoid.es/misc/lager/lenses.pdf)
//!
//! # Example
//!
//! The following example uses the [`lense` macro](rosenpass_lenses::lense) to create a lense that
//! might be useful when dealing with UDP headers.
//!
//! ```
//! use rosenpass_lenses::{lense, LenseView};
//! use rosenpass::RosenpassError;
//! # fn main() -> Result<(), RosenpassError> {
//!
//! lense! {UdpDatagramHeader :=
//! source_port: 2,
//! dest_port: 2,
//! length: 2,
//! checksum: 2
//! }
//!
//! let mut buf = [0u8; 8];
//!
//! // read-only lense, no check of size:
//! let lense = UdpDatagramHeader(&buf);
//! assert_eq!(lense.checksum(), &[0, 0]);
//!
//! // mutable lense, runtime check of size
//! let mut lense = buf.as_mut().udp_datagram_header()?;
//! lense.source_port_mut().copy_from_slice(&53u16.to_be_bytes()); // some DNS, anyone?
//!
//! // the original buffer is still available
//! assert_eq!(buf, [0, 53, 0, 0, 0, 0, 0, 0]);
//!
//! // read-only lense, runtime check of size
//! let lense = buf.as_ref().udp_datagram_header()?;
//! assert_eq!(lense.source_port(), &[0, 53]);
//! # Ok(())
//! # }
//! ```
use super::RosenpassError;
use rosenpass_cipher_traits::Kem;
use rosenpass_ciphers::kem::{EphemeralKem, StaticKem};
use rosenpass_ciphers::{aead, xaead, KEY_LEN};
use rosenpass_lenses::{lense, LenseView};
// Macro magic ////////////////////////////////////////////////////////////////
lense! { Envelope<M> :=
/// [MsgType] of this message
msg_type: 1,
/// Reserved for future use
reserved: 3,
/// The actual payload
payload: M::LEN,
/// Message Authentication Code (mac) over all bytes until (exclusive)
/// `mac` itself
mac: 16,
/// Currently unused, TODO: do something with this
cookie: 16
}
lense! { InitHello :=
/// Randomly generated connection id
sidi: 4,
/// Kyber 512 Ephemeral Public Key
epki: EphemeralKem::PK_LEN,
/// Classic McEliece Ciphertext
sctr: StaticKem::CT_LEN,
/// Encrypted: 16 byte hash of McEliece initiator static key
pidic: aead::TAG_LEN + 32,
/// Encrypted TAI64N Time Stamp (against replay attacks)
auth: aead::TAG_LEN
}
lense! { RespHello :=
/// Randomly generated connection id
sidr: 4,
/// Copied from InitHello
sidi: 4,
/// Kyber 512 Ephemeral Ciphertext
ecti: EphemeralKem::CT_LEN,
/// Classic McEliece Ciphertext
scti: StaticKem::CT_LEN,
/// Empty encrypted message (just an auth tag)
auth: aead::TAG_LEN,
/// Responder's handshake state in encrypted form
biscuit: BISCUIT_CT_LEN
}
lense! { InitConf :=
/// Copied from InitHello
sidi: 4,
/// Copied from RespHello
sidr: 4,
/// Responder's handshake state in encrypted form
biscuit: BISCUIT_CT_LEN,
/// Empty encrypted message (just an auth tag)
auth: aead::TAG_LEN
}
lense! { EmptyData :=
/// Copied from RespHello
sid: 4,
/// Nonce
ctr: 8,
/// Empty encrypted message (just an auth tag)
auth: aead::TAG_LEN
}
lense! { Biscuit :=
/// H(spki), identifies the initiator
pidi: KEY_LEN,
/// The biscuit number (replay protection)
biscuit_no: 12,
/// Chaining key
ck: KEY_LEN
}
lense! { DataMsg :=
dummy: 4
}
lense! { CookieReply :=
dummy: 4
}
// Traits /////////////////////////////////////////////////////////////////////
pub trait WireMsg: std::fmt::Debug {
const MSG_TYPE: MsgType;
const MSG_TYPE_U8: u8 = Self::MSG_TYPE as u8;
const BYTES: usize;
}
// Constants //////////////////////////////////////////////////////////////////
pub const SESSION_ID_LEN: usize = 4;
pub const BISCUIT_ID_LEN: usize = 12;
pub const WIRE_ENVELOPE_LEN: usize = 1 + 3 + 16 + 16; // TODO verify this
/// Size required to fit any message in binary form
pub const MAX_MESSAGE_LEN: usize = 2500; // TODO fix this
/// Recognized message types
#[repr(u8)]
#[derive(Hash, PartialEq, Eq, PartialOrd, Ord, Debug, Clone, Copy)]
pub enum MsgType {
InitHello = 0x81,
RespHello = 0x82,
InitConf = 0x83,
EmptyData = 0x84,
DataMsg = 0x85,
CookieReply = 0x86,
}
impl TryFrom<u8> for MsgType {
type Error = RosenpassError;
fn try_from(value: u8) -> Result<Self, Self::Error> {
Ok(match value {
0x81 => MsgType::InitHello,
0x82 => MsgType::RespHello,
0x83 => MsgType::InitConf,
0x84 => MsgType::EmptyData,
0x85 => MsgType::DataMsg,
0x86 => MsgType::CookieReply,
_ => return Err(RosenpassError::InvalidMessageType(value)),
})
}
}
/// length in bytes of an unencrypted Biscuit (plain text)
pub const BISCUIT_PT_LEN: usize = Biscuit::<()>::LEN;
/// Length in bytes of an encrypted Biscuit (cipher text)
pub const BISCUIT_CT_LEN: usize = BISCUIT_PT_LEN + xaead::NONCE_LEN + xaead::TAG_LEN;
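// With KEY_LEN = 32 the plain text is 32 + 12 + 32 = 76 bytes (see the tests
// below); the cipher text additionally carries the XAEAD nonce and tag.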
#[cfg(test)]
mod test_constants {
use crate::msgs::{BISCUIT_CT_LEN, BISCUIT_PT_LEN};
use rosenpass_ciphers::{xaead, KEY_LEN};
#[test]
fn sodium_keysize() {
assert_eq!(KEY_LEN, 32);
}
#[test]
fn biscuit_pt_len() {
assert_eq!(BISCUIT_PT_LEN, 2 * KEY_LEN + 12);
}
#[test]
fn biscuit_ct_len() {
assert_eq!(
BISCUIT_CT_LEN,
BISCUIT_PT_LEN + xaead::NONCE_LEN + xaead::TAG_LEN
);
}
}

File diff suppressed because it is too large

View File

@@ -8,21 +8,21 @@ fn generate_keys() {
let tmpdir = PathBuf::from(env!("CARGO_TARGET_TMPDIR")).join("keygen");
fs::create_dir_all(&tmpdir).unwrap();
let priv_key_path = tmpdir.join("private-key");
let pub_key_path = tmpdir.join("public-key");
let secret_key_path = tmpdir.join("secret-key");
let public_key_path = tmpdir.join("public-key");
let output = test_bin::get_test_bin(BIN)
.args(["keygen", "private-key"])
.arg(&priv_key_path)
.arg("public-key")
.arg(&pub_key_path)
.args(["gen-keys", "--secret-key"])
.arg(&secret_key_path)
.arg("--public-key")
.arg(&public_key_path)
.output()
.expect("Failed to start {BIN}");
assert_eq!(String::from_utf8_lossy(&output.stdout), "");
assert!(priv_key_path.is_file());
assert!(pub_key_path.is_file());
assert!(secret_key_path.is_file());
assert!(public_key_path.is_file());
// cleanup
fs::remove_dir_all(&tmpdir).unwrap();
@@ -46,22 +46,22 @@ fn check_exchange() {
let tmpdir = PathBuf::from(env!("CARGO_TARGET_TMPDIR")).join("exchange");
fs::create_dir_all(&tmpdir).unwrap();
let priv_key_paths = [tmpdir.join("private-key-0"), tmpdir.join("private-key-1")];
let pub_key_paths = [tmpdir.join("public-key-0"), tmpdir.join("public-key-1")];
let secret_key_paths = [tmpdir.join("secret-key-0"), tmpdir.join("secret-key-1")];
let public_key_paths = [tmpdir.join("public-key-0"), tmpdir.join("public-key-1")];
let shared_key_paths = [tmpdir.join("shared-key-0"), tmpdir.join("shared-key-1")];
// generate key pairs
for (priv_key_path, pub_key_path) in priv_key_paths.iter().zip(pub_key_paths.iter()) {
for (secret_key_path, pub_key_path) in secret_key_paths.iter().zip(public_key_paths.iter()) {
let output = test_bin::get_test_bin(BIN)
.args(["keygen", "private-key"])
.arg(&priv_key_path)
.arg("public-key")
.args(["gen-keys", "--secret-key"])
.arg(&secret_key_path)
.arg("--public-key")
.arg(&pub_key_path)
.output()
.expect("Failed to start {BIN}");
assert_eq!(String::from_utf8_lossy(&output.stdout), "");
assert!(priv_key_path.is_file());
assert!(secret_key_path.is_file());
assert!(pub_key_path.is_file());
}
@@ -69,12 +69,12 @@ fn check_exchange() {
let port = find_udp_socket();
let listen_addr = format!("localhost:{port}");
let mut server = test_bin::get_test_bin(BIN)
.args(["exchange", "private-key"])
.arg(&priv_key_paths[0])
.args(["exchange", "secret-key"])
.arg(&secret_key_paths[0])
.arg("public-key")
.arg(&pub_key_paths[0])
.arg(&public_key_paths[0])
.args(["listen", &listen_addr, "verbose", "peer", "public-key"])
.arg(&pub_key_paths[1])
.arg(&public_key_paths[1])
.arg("outfile")
.arg(&shared_key_paths[0])
.stdout(Stdio::null())
@@ -82,14 +82,16 @@ fn check_exchange() {
.spawn()
.expect("Failed to start {BIN}");
std::thread::sleep(Duration::from_millis(500));
// start second process, the client
let mut client = test_bin::get_test_bin(BIN)
.args(["exchange", "private-key"])
.arg(&priv_key_paths[1])
.args(["exchange", "secret-key"])
.arg(&secret_key_paths[1])
.arg("public-key")
.arg(&pub_key_paths[1])
.arg(&public_key_paths[1])
.args(["verbose", "peer", "public-key"])
.arg(&pub_key_paths[0])
.arg(&public_key_paths[0])
.args(["endpoint", &listen_addr])
.arg("outfile")
.arg(&shared_key_paths[1])

69
rp
View File

@@ -43,6 +43,17 @@ dbg() {
echo >&2 "$@"
}
detect_git_dir() {
# https://stackoverflow.com/questions/3618078/pipe-only-stderr-through-a-filter
(
git -C "${scriptdir}" rev-parse --show-toplevel 3>&1 1>&2 2>&3 3>&- \
| sed '
/not a git repository/d;
s/^/WARNING: /'
) 3>&1 1>&2 2>&3 3>&-
}
# Cleanup subsystem (sigterm)
cleanup_init() {
@@ -141,9 +152,9 @@ genkey() {
umask 077
mkdir -p $(enquote "${skdir}")
wg genkey > $(enquote "${skdir}"/wgsk)
$(enquote "${binary}") keygen \\
private-key $(enquote "${skdir}"/pqsk) \\
public-key $(enquote "${skdir}"/pqpk)"
$(enquote "${binary}") gen-keys \\
-s $(enquote "${skdir}"/pqsk) \\
-p $(enquote "${skdir}"/pqpk)"
}
pubkey() {
@@ -186,7 +197,7 @@ exchange() {
lip="${listen%:*}";
lport="${listen/*:/}";
if [[ "$lip" = "$lport" ]]; then
lip="[0::0]"
lip="[::]"
fi
shift;;
-h | -help | --help | help) usage; return 0;;
@@ -198,15 +209,41 @@ exchange() {
fatal "Needs at least one peer specified"
fi
frag "
# Create the Wireguard interface
ip link add dev $(enquote "${dev}") type wireguard || true"
# os dependent setup
case "$OSTYPE" in
linux-*) # could be linux-gnu or linux-musl
frag "
# Create the WireGuard interface
ip link add dev $(enquote "${dev}") type wireguard || true"
cleanup "
ip link del dev $(enquote "${dev}") || true"
cleanup "
ip link del dev $(enquote "${dev}") || true"
frag "
ip link set dev $(enquote "${dev}") up"
frag "
ip link set dev $(enquote "${dev}") up"
;;
freebsd*)
frag "
# load the WireGuard kernel module
kldload -n if_wg || fatal 'Cannot load if_wg kernel module'"
frag "
# Create the WireGuard interface
ifconfig wg create name $(enquote "${dev}") || true"
cleanup "
ifconfig $(enquote "${dev}") destroy || true"
frag "
ifconfig $(enquote "${dev}") up"
;;
*)
fatal "Your system $OSTYPE is not yet supported. We are happy to receive patches to address this :)"
;;
esac
frag "
# Deploy the classic wireguard private key
@@ -225,7 +262,7 @@ exchange() {
frag_append "verbose"
fi
frag_append_esc " private-key $(enquote "${skdir}/pqsk")"
frag_append_esc " secret-key $(enquote "${skdir}/pqsk")"
frag_append_esc " public-key $(enquote "${skdir}/pqpk")"
if test -n "${lport}"; then
@@ -244,7 +281,7 @@ exchange() {
local arg; arg="$1"; shift
case "${arg}" in
peer) set -- "peer" "$@"; break;; # Next peer
endpoint) ip="${1%:*}"; port="${1/*:/}"; shift;;
endpoint) ip="${1%:*}"; port="${1##*:}"; shift;;
persistent-keepalive) keepalive="${1}"; shift;;
allowed-ips) allowedips="${1}"; shift;;
-h | -help | --help | help) usage; return 0;;
@@ -314,8 +351,10 @@ main() {
project_name="rosenpass"
verbose=0
scriptdir="$(dirname "${script}")"
gitdir="$(git -C "${scriptdir}" rev-parse --show-toplevel 2>/dev/null)" || true
nixdir="$(readlink -f result/bin/rp | grep -Pio '^/nix/store/[^/]+(?=/bin/[^/]+)')" || true
gitdir="$(detect_git_dir)" || true
if [[ -d /nix ]]; then
nixdir="$(readlink -f result/bin/rp | grep -Pio '^/nix/store/[^/]+(?=/bin/[^/]+)')" || true
fi
binary="$(find_rosenpass_binary)"
# Parse command

20
secret-memory/Cargo.toml Normal file

@@ -0,0 +1,20 @@
[package]
name = "rosenpass-secret-memory"
version = "0.1.0"
authors = ["Karolin Varner <karo@cupdev.net>", "wucke13 <wucke13@gmail.com>"]
edition = "2021"
license = "MIT OR Apache-2.0"
description = "Rosenpass internal utilities for storing secrets in memory"
homepage = "https://rosenpass.eu/"
repository = "https://github.com/rosenpass/rosenpass"
readme = "readme.md"
[dependencies]
anyhow = { workspace = true }
rosenpass-to = { workspace = true }
rosenpass-sodium = { workspace = true }
rosenpass-util = { workspace = true }
libsodium-sys-stable = { workspace = true }
lazy_static = { workspace = true }
zeroize = { workspace = true }
rand = { workspace = true }

5
secret-memory/readme.md Normal file

@@ -0,0 +1,5 @@
# Rosenpass secure memory library
Rosenpass internal library providing utilities for securely storing secret data in memory.
This is an internal library; no guarantee is made about its API at this point in time.
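For orientation, a minimal usage sketch based only on the API visible in this diff; the `anyhow` return type and the call to `rosenpass_sodium::init()` are assumptions mirroring the crate's tests, not documented requirements:

```rust
use rosenpass_secret_memory::{Public, Secret};

fn main() -> anyhow::Result<()> {
    // Guarded allocations go through libsodium, so it must be initialized first.
    rosenpass_sodium::init()?;

    // Secrets live in sodium_malloc'ed guard pages and are zeroized on drop.
    let psk: Secret<32> = Secret::random();
    assert_eq!(format!("{psk:?}"), "<SECRET>"); // Debug never prints the bytes

    // Public values are plain byte arrays; Debug prints them as hex.
    let peer_id: Public<16> = Public::from_slice(b"0123456789abcdef");
    assert_eq!(peer_id.value.len(), 16);
    Ok(())
}
```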

20
secret-memory/src/debug.rs Normal file

@@ -0,0 +1,20 @@
use std::fmt;
/// Writes the contents of an `&[u8]` as hexadecimal symbols to a [std::fmt::Formatter]
pub fn debug_crypto_array(v: &[u8], fmt: &mut fmt::Formatter) -> fmt::Result {
fmt.write_str("[{}]=")?;
if v.len() > 64 {
for byte in &v[..32] {
std::fmt::LowerHex::fmt(byte, fmt)?;
}
fmt.write_str("")?;
for byte in &v[v.len() - 32..] {
std::fmt::LowerHex::fmt(byte, fmt)?;
}
} else {
for byte in v {
std::fmt::LowerHex::fmt(byte, fmt)?;
}
}
Ok(())
}

7
secret-memory/src/file.rs Normal file

@@ -0,0 +1,7 @@
use std::path::Path;
pub trait StoreSecret {
type Error;
fn store_secret<P: AsRef<Path>>(&self, path: P) -> Result<(), Self::Error>;
}

9
secret-memory/src/lib.rs Normal file

@@ -0,0 +1,9 @@
pub mod debug;
pub mod file;
pub mod rand;
mod public;
pub use crate::public::Public;
mod secret;
pub use crate::secret::Secret;

112
secret-memory/src/public.rs Normal file

@@ -0,0 +1,112 @@
use crate::debug::debug_crypto_array;
use rand::{Fill as Randomize, Rng};
use rosenpass_to::{ops::copy_slice, To};
use rosenpass_util::file::{fopen_r, LoadValue, ReadExactToEnd, StoreValue};
use rosenpass_util::functional::mutating;
use std::borrow::{Borrow, BorrowMut};
use std::fmt;
use std::ops::{Deref, DerefMut};
use std::path::Path;
/// Contains information in the form of a byte array that may be known to the
/// public
// TODO: We should get rid of the Public type; just use a normal value
#[derive(Copy, Clone, Hash, PartialEq, Eq, PartialOrd, Ord)]
#[repr(transparent)]
pub struct Public<const N: usize> {
pub value: [u8; N],
}
impl<const N: usize> Public<N> {
/// Create a new [Public] from a byte slice
pub fn from_slice(value: &[u8]) -> Self {
copy_slice(value).to_this(|| Self::zero())
}
/// Create a new [Public] from a byte array
pub fn new(value: [u8; N]) -> Self {
Self { value }
}
/// Create a zero initialized [Public]
pub fn zero() -> Self {
Self { value: [0u8; N] }
}
/// Create a random initialized [Public]
pub fn random() -> Self {
mutating(Self::zero(), |r| r.randomize())
}
/// Randomize all bytes in an existing [Public]
pub fn randomize(&mut self) {
self.try_fill(&mut crate::rand::rng()).unwrap()
}
}
impl<const N: usize> Randomize for Public<N> {
fn try_fill<R: Rng + ?Sized>(&mut self, rng: &mut R) -> Result<(), rand::Error> {
self.value.try_fill(rng)
}
}
impl<const N: usize> fmt::Debug for Public<N> {
fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
debug_crypto_array(&self.value, fmt)
}
}
impl<const N: usize> Deref for Public<N> {
type Target = [u8; N];
fn deref(&self) -> &[u8; N] {
&self.value
}
}
impl<const N: usize> DerefMut for Public<N> {
fn deref_mut(&mut self) -> &mut [u8; N] {
&mut self.value
}
}
impl<const N: usize> Borrow<[u8; N]> for Public<N> {
fn borrow(&self) -> &[u8; N] {
&self.value
}
}
impl<const N: usize> BorrowMut<[u8; N]> for Public<N> {
fn borrow_mut(&mut self) -> &mut [u8; N] {
&mut self.value
}
}
impl<const N: usize> Borrow<[u8]> for Public<N> {
fn borrow(&self) -> &[u8] {
&self.value
}
}
impl<const N: usize> BorrowMut<[u8]> for Public<N> {
fn borrow_mut(&mut self) -> &mut [u8] {
&mut self.value
}
}
impl<const N: usize> LoadValue for Public<N> {
type Error = anyhow::Error;
fn load<P: AsRef<Path>>(path: P) -> anyhow::Result<Self> {
let mut v = Self::random();
fopen_r(path)?.read_exact_to_end(&mut *v)?;
Ok(v)
}
}
impl<const N: usize> StoreValue for Public<N> {
type Error = anyhow::Error;
fn store<P: AsRef<Path>>(&self, path: P) -> anyhow::Result<()> {
std::fs::write(path, **self)?;
Ok(())
}
}
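A short sketch of the load/store round-trip these `LoadValue`/`StoreValue` impls enable; the file name is purely illustrative:

```rust
use rosenpass_secret_memory::Public;
use rosenpass_util::file::{LoadValue, StoreValue};

fn roundtrip() -> anyhow::Result<()> {
    let original: Public<32> = Public::random();

    // store() writes the raw bytes to disk…
    original.store("example-public-key")?;

    // …and load() reads them back, erroring if the file is not exactly 32 bytes.
    let restored = Public::<32>::load("example-public-key")?;
    assert_eq!(original, restored);
    Ok(())
}
```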

5
secret-memory/src/rand.rs Normal file

@@ -0,0 +1,5 @@
pub type Rng = rand::rngs::ThreadRng;
pub fn rng() -> Rng {
rand::thread_rng()
}

237
secret-memory/src/secret.rs Normal file

@@ -0,0 +1,237 @@
use crate::file::StoreSecret;
use anyhow::Context;
use lazy_static::lazy_static;
use rand::{Fill as Randomize, Rng};
use rosenpass_sodium::alloc::{Alloc as SodiumAlloc, Box as SodiumBox, Vec as SodiumVec};
use rosenpass_util::{
b64::b64_reader,
file::{fopen_r, LoadValue, LoadValueB64, ReadExactToEnd},
functional::mutating,
};
use std::{collections::HashMap, convert::TryInto, fmt, path::Path, sync::Mutex};
use zeroize::{Zeroize, ZeroizeOnDrop};
// This might become a problem in library usage; it's effectively a memory
// leak which probably isn't a problem right now because most memory will
// be reused…
lazy_static! {
static ref SECRET_CACHE: Mutex<SecretMemoryPool> = Mutex::new(SecretMemoryPool::new());
}
/// Pool that stores secret memory allocations
///
/// Allocation of secret memory is expensive. Thus, this struct provides a
/// pool of secret memory, readily available to yield protected slices of
/// memory.
///
/// Further information about the protection in place can be found in the
/// [libsodium documentation](https://libsodium.gitbook.io/doc/memory_management#guarded-heap-allocations)
#[derive(Debug)] // TODO check on Debug derive, is that clever
struct SecretMemoryPool {
pool: HashMap<usize, Vec<SodiumBox<[u8]>>>,
}
impl SecretMemoryPool {
/// Create a new [SecretMemoryPool]
#[allow(clippy::new_without_default)]
pub fn new() -> Self {
Self {
pool: HashMap::new(),
}
}
/// Return secret back to the pool for future re-use
pub fn release<const N: usize>(&mut self, mut sec: SodiumBox<[u8; N]>) {
sec.zeroize();
// This conversion sequence is weird but at least it guarantees
// that the heap allocation is preserved according to the docs
let sec: SodiumVec<u8> = sec.into();
let sec: SodiumBox<[u8]> = sec.into();
self.pool.entry(N).or_default().push(sec);
}
/// Take protected memory from the pool, allocating new one if no suitable
/// chunk is found in the inventory.
///
/// The secret is guaranteed to be full of nullbytes
pub fn take<const N: usize>(&mut self) -> SodiumBox<[u8; N]> {
let entry = self.pool.entry(N).or_default();
match entry.pop() {
None => SodiumBox::new_in([0u8; N], SodiumAlloc::default()),
Some(sec) => sec.try_into().unwrap(),
}
}
}
/// Storage for a secret backed by [rosenpass_sodium::alloc::Alloc]
pub struct Secret<const N: usize> {
storage: Option<SodiumBox<[u8; N]>>,
}
impl<const N: usize> Secret<N> {
pub fn from_slice(slice: &[u8]) -> Self {
let mut new_self = Self::zero();
new_self.secret_mut().copy_from_slice(slice);
new_self
}
/// Returns a new [Secret] that is zero initialized
pub fn zero() -> Self {
// Using [SecretMemoryPool] here because this operation is expensive,
// yet it is used in hot loops
Self {
storage: Some(SECRET_CACHE.lock().unwrap().take()),
}
}
/// Returns a new [Secret] that is randomized
pub fn random() -> Self {
mutating(Self::zero(), |r| r.randomize())
}
/// Sets all data of an existing secret to random bytes
pub fn randomize(&mut self) {
self.try_fill(&mut crate::rand::rng()).unwrap()
}
/// Borrows the data
pub fn secret(&self) -> &[u8; N] {
self.storage.as_ref().unwrap()
}
/// Borrows the data mutably
pub fn secret_mut(&mut self) -> &mut [u8; N] {
self.storage.as_mut().unwrap()
}
}
impl<const N: usize> ZeroizeOnDrop for Secret<N> {}
impl<const N: usize> Zeroize for Secret<N> {
fn zeroize(&mut self) {
self.secret_mut().zeroize();
}
}
impl<const N: usize> Randomize for Secret<N> {
fn try_fill<R: Rng + ?Sized>(&mut self, rng: &mut R) -> Result<(), rand::Error> {
// Zeroize self first just to make sure the barriers from the zeroize crate take
// effect to prevent the compiler from optimizing this away.
// We should at some point replace this with our own barriers.
self.zeroize();
self.secret_mut().try_fill(rng)
}
}
impl<const N: usize> Drop for Secret<N> {
fn drop(&mut self) {
self.storage
.take()
.map(|sec| SECRET_CACHE.lock().unwrap().release(sec));
}
}
impl<const N: usize> Clone for Secret<N> {
fn clone(&self) -> Self {
Self::from_slice(self.secret())
}
}
/// The Debug implementation of [Secret] does not reveal the secret data;
/// instead a placeholder `<SECRET>` is used
impl<const N: usize> fmt::Debug for Secret<N> {
fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
fmt.write_str("<SECRET>")
}
}
impl<const N: usize> LoadValue for Secret<N> {
type Error = anyhow::Error;
fn load<P: AsRef<Path>>(path: P) -> anyhow::Result<Self> {
let mut v = Self::random();
let p = path.as_ref();
fopen_r(p)?
.read_exact_to_end(v.secret_mut())
.with_context(|| format!("Could not load file {p:?}"))?;
Ok(v)
}
}
impl<const N: usize> LoadValueB64 for Secret<N> {
type Error = anyhow::Error;
fn load_b64<P: AsRef<Path>>(path: P) -> anyhow::Result<Self> {
use std::io::Read;
let mut v = Self::random();
let p = path.as_ref();
// This might leave some fragments of the secret on the stack;
// in practice this is likely not a problem because the stack likely
// will be overwritten by something else soon but this is not exactly
// guaranteed. It would be possible to remedy this, but since the secret
// data will linger in the Linux page cache anyways with the current
// implementation, going to great length to erase the secret here is
// not worth it right now.
b64_reader(&mut fopen_r(p)?)
.read_exact(v.secret_mut())
.with_context(|| format!("Could not load base64 file {p:?}"))?;
Ok(v)
}
}
impl<const N: usize> StoreSecret for Secret<N> {
type Error = anyhow::Error;
fn store_secret<P: AsRef<Path>>(&self, path: P) -> anyhow::Result<()> {
std::fs::write(path, self.secret())?;
Ok(())
}
}
#[cfg(test)]
mod test {
use super::*;
/// check that we can alloc using the magic pool
#[test]
fn secret_memory_pool_take() {
rosenpass_sodium::init().unwrap();
const N: usize = 0x100;
let mut pool = SecretMemoryPool::new();
let secret: SodiumBox<[u8; N]> = pool.take();
assert_eq!(secret.as_ref(), &[0; N]);
}
/// check that a secret lives, even if its [SecretMemoryPool] is deleted
#[test]
fn secret_memory_pool_drop() {
rosenpass_sodium::init().unwrap();
const N: usize = 0x100;
let mut pool = SecretMemoryPool::new();
let secret: SodiumBox<[u8; N]> = pool.take();
std::mem::drop(pool);
assert_eq!(secret.as_ref(), &[0; N]);
}
/// check that a secret can be reborn, freshly initialized with zero
#[test]
fn secret_memory_pool_release() {
rosenpass_sodium::init().unwrap();
const N: usize = 1;
let mut pool = SecretMemoryPool::new();
let mut secret: SodiumBox<[u8; N]> = pool.take();
let old_secret_ptr = secret.as_ref().as_ptr();
secret.as_mut()[0] = 0x13;
pool.release(secret);
// now check that we get the same ptr
let new_secret: SodiumBox<[u8; N]> = pool.take();
assert_eq!(old_secret_ptr, new_secret.as_ref().as_ptr());
// and that the secret was zeroized
assert_eq!(new_secret.as_ref(), &[0; N]);
}
}
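To summarize the pooling behaviour exercised by these tests, a sketch of how `Secret` interacts with the global `SecretMemoryPool` (assuming `rosenpass_sodium::init()` has been called, as in the tests):

```rust
use rosenpass_secret_memory::Secret;
use zeroize::Zeroize;

fn pool_demo() {
    rosenpass_sodium::init().unwrap();

    // Secret::zero() takes a guarded allocation from the global pool.
    let mut key: Secret<32> = Secret::zero();
    key.secret_mut()[0] = 0x17;

    // Explicit zeroization is available at any time…
    key.zeroize();
    assert_eq!(key.secret(), &[0u8; 32]);

    // …and dropping a Secret zeroizes it and returns the allocation to the
    // pool, where the next Secret of the same size can pick it up again.
    drop(key);
    let reborn: Secret<32> = Secret::zero();
    assert_eq!(reborn.secret(), &[0u8; 32]);
}
```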

18
sodium/Cargo.toml Normal file

@@ -0,0 +1,18 @@
[package]
name = "rosenpass-sodium"
authors = ["Karolin Varner <karo@cupdev.net>", "wucke13 <wucke13@gmail.com>"]
version = "0.1.0"
edition = "2021"
license = "MIT OR Apache-2.0"
description = "Rosenpass internal bindings to libsodium"
homepage = "https://rosenpass.eu/"
repository = "https://github.com/rosenpass/rosenpass"
readme = "readme.md"
[dependencies]
rosenpass-util = { workspace = true }
rosenpass-to = { workspace = true }
anyhow = { workspace = true }
libsodium-sys-stable = { workspace = true }
log = { workspace = true }
allocator-api2 = { workspace = true }

5
sodium/readme.md Normal file

@@ -0,0 +1,5 @@
# Rosenpass internal libsodium bindings
Rosenpass internal library providing bindings to libsodium.
This is an internal library; no guarantee is made about its API at this point in time.
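A minimal sketch of initializing the bindings before using any of the modules below (mirroring what the tests in this diff do; the constant-time comparison is just an example call):

```rust
fn main() -> anyhow::Result<()> {
    // sodium_init() must run once before any other libsodium call;
    // calling it again later is harmless.
    rosenpass_sodium::init()?;

    // Constant-time equality check via sodium_memcmp.
    assert!(rosenpass_sodium::helpers::memcmp(b"abc", b"abc"));
    Ok(())
}
```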

63
sodium/src/aead/chacha20poly1305_ietf.rs Normal file

@@ -0,0 +1,63 @@
use libsodium_sys as libsodium;
use std::ffi::c_ulonglong;
use std::ptr::{null, null_mut};
pub const KEY_LEN: usize = libsodium::crypto_aead_chacha20poly1305_IETF_KEYBYTES as usize;
pub const TAG_LEN: usize = libsodium::crypto_aead_chacha20poly1305_IETF_ABYTES as usize;
pub const NONCE_LEN: usize = libsodium::crypto_aead_chacha20poly1305_IETF_NPUBBYTES as usize;
#[inline]
pub fn encrypt(
ciphertext: &mut [u8],
key: &[u8],
nonce: &[u8],
ad: &[u8],
plaintext: &[u8],
) -> anyhow::Result<()> {
assert!(ciphertext.len() == plaintext.len() + TAG_LEN);
assert!(key.len() == KEY_LEN);
assert!(nonce.len() == NONCE_LEN);
let mut clen: u64 = 0;
sodium_call!(
crypto_aead_chacha20poly1305_ietf_encrypt,
ciphertext.as_mut_ptr(),
&mut clen,
plaintext.as_ptr(),
plaintext.len() as c_ulonglong,
ad.as_ptr(),
ad.len() as c_ulonglong,
null(), // nsec is not used
nonce.as_ptr(),
key.as_ptr()
)?;
assert!(clen as usize == ciphertext.len());
Ok(())
}
#[inline]
pub fn decrypt(
plaintext: &mut [u8],
key: &[u8],
nonce: &[u8],
ad: &[u8],
ciphertext: &[u8],
) -> anyhow::Result<()> {
assert!(ciphertext.len() == plaintext.len() + TAG_LEN);
assert!(key.len() == KEY_LEN);
assert!(nonce.len() == NONCE_LEN);
let mut mlen: u64 = 0;
sodium_call!(
crypto_aead_chacha20poly1305_ietf_decrypt,
plaintext.as_mut_ptr(),
&mut mlen as *mut c_ulonglong,
null_mut(), // nsec is not used
ciphertext.as_ptr(),
ciphertext.len() as c_ulonglong,
ad.as_ptr(),
ad.len() as c_ulonglong,
nonce.as_ptr(),
key.as_ptr()
)?;
assert!(mlen as usize == plaintext.len());
Ok(())
}
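A hedged round-trip sketch for this module; the key, nonce and associated data are dummy values, and the buffer sizes follow directly from the constants and assertions above:

```rust
use rosenpass_sodium::aead::chacha20poly1305_ietf::{decrypt, encrypt, KEY_LEN, NONCE_LEN, TAG_LEN};

fn roundtrip() -> anyhow::Result<()> {
    rosenpass_sodium::init()?;

    let key = [0x55u8; KEY_LEN];
    let nonce = [0u8; NONCE_LEN]; // a real caller must never reuse a nonce per key
    let ad = b"header";
    let plaintext = b"hello rosenpass";

    // The ciphertext buffer must be exactly plaintext.len() + TAG_LEN bytes long.
    let mut ciphertext = vec![0u8; plaintext.len() + TAG_LEN];
    encrypt(&mut ciphertext, &key, &nonce, ad, plaintext)?;

    let mut decrypted = vec![0u8; plaintext.len()];
    decrypt(&mut decrypted, &key, &nonce, ad, &ciphertext)?;
    assert_eq!(&decrypted[..], &plaintext[..]);
    Ok(())
}
```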

2
sodium/src/aead/mod.rs Normal file

@@ -0,0 +1,2 @@
pub mod chacha20poly1305_ietf;
pub mod xchacha20poly1305_ietf;

63
sodium/src/aead/xchacha20poly1305_ietf.rs Normal file

@@ -0,0 +1,63 @@
use libsodium_sys as libsodium;
use std::ffi::c_ulonglong;
use std::ptr::{null, null_mut};
pub const KEY_LEN: usize = libsodium::crypto_aead_xchacha20poly1305_IETF_KEYBYTES as usize;
pub const TAG_LEN: usize = libsodium::crypto_aead_xchacha20poly1305_ietf_ABYTES as usize;
pub const NONCE_LEN: usize = libsodium::crypto_aead_xchacha20poly1305_IETF_NPUBBYTES as usize;
#[inline]
pub fn encrypt(
ciphertext: &mut [u8],
key: &[u8],
nonce: &[u8],
ad: &[u8],
plaintext: &[u8],
) -> anyhow::Result<()> {
assert!(ciphertext.len() == plaintext.len() + NONCE_LEN + TAG_LEN);
assert!(key.len() == libsodium::crypto_aead_xchacha20poly1305_IETF_KEYBYTES as usize);
let (n, ct) = ciphertext.split_at_mut(NONCE_LEN);
n.copy_from_slice(nonce);
let mut clen: u64 = 0;
sodium_call!(
crypto_aead_xchacha20poly1305_ietf_encrypt,
ct.as_mut_ptr(),
&mut clen,
plaintext.as_ptr(),
plaintext.len() as c_ulonglong,
ad.as_ptr(),
ad.len() as c_ulonglong,
null(), // nsec is not used
nonce.as_ptr(),
key.as_ptr()
)?;
assert!(clen as usize == ct.len());
Ok(())
}
#[inline]
pub fn decrypt(
plaintext: &mut [u8],
key: &[u8],
ad: &[u8],
ciphertext: &[u8],
) -> anyhow::Result<()> {
assert!(ciphertext.len() == plaintext.len() + NONCE_LEN + TAG_LEN);
assert!(key.len() == KEY_LEN);
let (n, ct) = ciphertext.split_at(NONCE_LEN);
let mut mlen: u64 = 0;
sodium_call!(
crypto_aead_xchacha20poly1305_ietf_decrypt,
plaintext.as_mut_ptr(),
&mut mlen as *mut c_ulonglong,
null_mut(), // nsec is not used
ct.as_ptr(),
ct.len() as c_ulonglong,
ad.as_ptr(),
ad.len() as c_ulonglong,
n.as_ptr(),
key.as_ptr()
)?;
assert!(mlen as usize == plaintext.len());
Ok(())
}
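Note that, unlike the module above, this variant writes the nonce into the first `NONCE_LEN` bytes of the ciphertext buffer, so `decrypt` takes no separate nonce argument. A hedged sketch of the resulting buffer layout:

```rust
use rosenpass_sodium::aead::xchacha20poly1305_ietf::{decrypt, encrypt, KEY_LEN, NONCE_LEN, TAG_LEN};

fn roundtrip() -> anyhow::Result<()> {
    rosenpass_sodium::init()?;

    let key = [0x55u8; KEY_LEN];
    let nonce = [0x24u8; NONCE_LEN]; // caller-chosen nonce, copied into the output
    let plaintext = b"hello rosenpass";

    // Output layout: nonce || ciphertext || tag
    let mut ciphertext = vec![0u8; NONCE_LEN + plaintext.len() + TAG_LEN];
    encrypt(&mut ciphertext, &key, &nonce, b"", plaintext)?;
    assert_eq!(&ciphertext[..NONCE_LEN], &nonce[..]);

    // decrypt() recovers the nonce from the buffer itself.
    let mut decrypted = vec![0u8; plaintext.len()];
    decrypt(&mut decrypted, &key, b"", &ciphertext)?;
    assert_eq!(&decrypted[..], &plaintext[..]);
    Ok(())
}
```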

95
sodium/src/alloc/allocator.rs Normal file

@@ -0,0 +1,95 @@
use allocator_api2::alloc::{AllocError, Allocator, Layout};
use libsodium_sys as libsodium;
use std::fmt;
use std::os::raw::c_void;
use std::ptr::NonNull;
#[derive(Clone, Default)]
struct AllocatorContents;
/// Memory allocation using sodium_malloc/sodium_free
#[derive(Clone, Default)]
pub struct Alloc {
_dummy_private_data: AllocatorContents,
}
impl Alloc {
pub fn new() -> Self {
Alloc {
_dummy_private_data: AllocatorContents,
}
}
}
unsafe impl Allocator for Alloc {
fn allocate(&self, layout: Layout) -> Result<NonNull<[u8]>, AllocError> {
// Call sodium allocator
let ptr = unsafe { libsodium::sodium_malloc(layout.size()) };
// Ensure the right allocation is used
let off = ptr.align_offset(layout.align());
if off != 0 {
log::error!("Allocation {layout:?} was requested but libsodium returned allocation \
with offset {off} from the requested alignment. Libsodium always allocates values \
at the end of a memory page for security reasons, custom alignments are not supported. \
You could try allocating an oversized value.");
return Err(AllocError);
}
// Convert to a pointer size
let ptr = core::ptr::slice_from_raw_parts_mut(ptr as *mut u8, layout.size());
// Conversion to a *const u8, then to a &[u8]
match NonNull::new(ptr) {
None => {
log::error!(
"Allocation {layout:?} was requested but libsodium returned a null pointer"
);
Err(AllocError)
}
Some(ret) => Ok(ret),
}
}
unsafe fn deallocate(&self, ptr: NonNull<u8>, _layout: Layout) {
unsafe {
libsodium::sodium_free(ptr.as_ptr() as *mut c_void);
}
}
}
impl fmt::Debug for Alloc {
fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
fmt.write_str("<libsodium based Rust allocator>")
}
}
#[cfg(test)]
mod test {
use super::*;
/// checks that we can allocate with libsodium
#[test]
fn sodium_allocation() {
crate::init().unwrap();
let alloc = Alloc::new();
sodium_allocation_impl::<0>(&alloc);
sodium_allocation_impl::<7>(&alloc);
sodium_allocation_impl::<8>(&alloc);
sodium_allocation_impl::<64>(&alloc);
sodium_allocation_impl::<999>(&alloc);
}
fn sodium_allocation_impl<const N: usize>(alloc: &Alloc) {
crate::init().unwrap();
let layout = Layout::new::<[u8; N]>();
let mem = alloc.allocate(layout).unwrap();
// https://libsodium.gitbook.io/doc/memory_management#guarded-heap-allocations
// promises us that allocated memory is initialized with the magic byte 0xDB
assert_eq!(unsafe { mem.as_ref() }, &[0xDBu8; N]);
let mem = NonNull::new(mem.as_ptr() as *mut u8).unwrap();
unsafe { alloc.deallocate(mem, layout) };
}
}

10
sodium/src/alloc/mod.rs Normal file

@@ -0,0 +1,10 @@
//! Access to sodium_malloc/sodium_free
mod allocator;
pub use allocator::Alloc;
/// A box backed by sodium_malloc
pub type Box<T> = allocator_api2::boxed::Box<T, Alloc>;
/// A vector backed by sodium_malloc
pub type Vec<T> = allocator_api2::vec::Vec<T, Alloc>;
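A short sketch of allocating guarded memory through these aliases, following the same pattern the secret-memory crate uses above:

```rust
use rosenpass_sodium::alloc::{Alloc, Box as SodiumBox};

fn guarded_alloc() {
    rosenpass_sodium::init().unwrap();

    // The value lives in a sodium_malloc'ed guard-page region and is
    // released with sodium_free when the Box is dropped.
    let key: SodiumBox<[u8; 32]> = SodiumBox::new_in([0u8; 32], Alloc::default());
    assert_eq!(key.as_ref(), &[0u8; 32]);
}
```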

31
sodium/src/hash/blake2b.rs Normal file

@@ -0,0 +1,31 @@
use libsodium_sys as libsodium;
use rosenpass_to::{with_destination, To};
use std::ffi::c_ulonglong;
use std::ptr::null;
pub const KEY_MIN: usize = libsodium::crypto_generichash_blake2b_KEYBYTES_MIN as usize;
pub const KEY_MAX: usize = libsodium::crypto_generichash_blake2b_KEYBYTES_MAX as usize;
pub const OUT_MIN: usize = libsodium::crypto_generichash_blake2b_BYTES_MIN as usize;
pub const OUT_MAX: usize = libsodium::crypto_generichash_blake2b_BYTES_MAX as usize;
#[inline]
pub fn hash<'a>(key: &'a [u8], data: &'a [u8]) -> impl To<[u8], anyhow::Result<()>> + 'a {
with_destination(|out: &mut [u8]| {
assert!(key.is_empty() || (KEY_MIN <= key.len() && key.len() <= KEY_MAX));
assert!(OUT_MIN <= out.len() && out.len() <= OUT_MAX);
let kptr = match key.len() {
// NULL key
0 => null(),
_ => key.as_ptr(),
};
sodium_call!(
crypto_generichash_blake2b,
out.as_mut_ptr(),
out.len(),
data.as_ptr(),
data.len() as c_ulonglong,
kptr,
key.len()
)
})
}
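A hedged sketch of driving this keyed BLAKE2b wrapper; it assumes the `rosenpass-to` crate's `To` trait exposes a `to()` method that runs the deferred computation into a caller-provided output slice, as the `to_this()` call elsewhere in this diff suggests:

```rust
use rosenpass_sodium::hash::blake2b;
use rosenpass_to::To;

fn hash_example() -> anyhow::Result<()> {
    rosenpass_sodium::init()?;

    let key = [0u8; blake2b::KEY_MIN];     // all-zero key, for illustration only
    let mut out = [0u8; blake2b::OUT_MAX];  // 64-byte BLAKE2b output

    // hash() returns a deferred computation; to() writes the digest into `out`.
    blake2b::hash(&key, b"some data").to(&mut out[..])?;
    Ok(())
}
```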

1
sodium/src/hash/mod.rs Normal file

@@ -0,0 +1 @@
pub mod blake2b;

28
sodium/src/helpers.rs Normal file

@@ -0,0 +1,28 @@
use libsodium_sys as libsodium;
use std::os::raw::c_void;
#[inline]
pub fn memcmp(a: &[u8], b: &[u8]) -> bool {
a.len() == b.len()
&& unsafe {
let r = libsodium::sodium_memcmp(
a.as_ptr() as *const c_void,
b.as_ptr() as *const c_void,
a.len(),
);
r == 0
}
}
#[inline]
pub fn compare(a: &[u8], b: &[u8]) -> i32 {
assert!(a.len() == b.len());
unsafe { libsodium::sodium_compare(a.as_ptr(), b.as_ptr(), a.len()) }
}
#[inline]
pub fn increment(v: &mut [u8]) {
unsafe {
libsodium::sodium_increment(v.as_mut_ptr(), v.len());
}
}
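These wrappers map directly onto libsodium's constant-time helpers; a short sketch using `increment` as a little-endian counter (e.g. for nonces) together with the comparison helpers:

```rust
use rosenpass_sodium::helpers::{compare, increment, memcmp};

fn helpers_demo() {
    rosenpass_sodium::init().unwrap();

    // sodium_increment treats the buffer as a little-endian counter.
    let mut ctr = [0xffu8, 0x00, 0x00, 0x00];
    increment(&mut ctr);
    assert_eq!(ctr, [0x00, 0x01, 0x00, 0x00]);

    // Constant-time equality and ordering; both inputs must be the same length.
    assert!(memcmp(&ctr, &[0x00u8, 0x01, 0x00, 0x00]));
    assert_eq!(compare(&[0x01u8, 0x00, 0x00, 0x00], &ctr), -1); // 1 < 256 little-endian
}
```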

21
sodium/src/lib.rs Normal file

@@ -0,0 +1,21 @@
use libsodium_sys as libsodium;
macro_rules! sodium_call {
($name:ident, $($args:expr),*) => { ::rosenpass_util::attempt!({
anyhow::ensure!(unsafe{libsodium::$name($($args),*)} > -1,
"Error in libsodium's {}.", stringify!($name));
Ok(())
})};
($name:ident) => { sodium_call!($name, ) };
}
#[inline]
pub fn init() -> anyhow::Result<()> {
log::trace!("initializing libsodium");
sodium_call!(sodium_init)
}
pub mod aead;
pub mod alloc;
pub mod hash;
pub mod helpers;


@@ -1,358 +0,0 @@
//! This module contains various types for dealing with secrets
//!
//! These types use type level coloring to make accidental leakage of secrets extra hard.
//!
use crate::{
sodium::{rng, zeroize},
util::{cpy, mutating},
};
use lazy_static::lazy_static;
use libsodium_sys as libsodium;
use std::{
collections::HashMap,
convert::TryInto,
fmt,
ops::{Deref, DerefMut},
os::raw::c_void,
ptr::null_mut,
sync::Mutex,
};
// This might become a problem in library usage; it's effectively a memory
// leak which probably isn't a problem right now because most memory will
// be reused…
lazy_static! {
static ref SECRET_CACHE: Mutex<SecretMemoryPool> = Mutex::new(SecretMemoryPool::new());
}
/// Pool that stores secret memory allocations
///
/// Allocation of secret memory is expensive. Thus, this struct provides a
/// pool of secret memory, readily available to yield protected slices of
/// memory.
///
/// Further information about the protection in place can be found in the
/// [libsodium documentation](https://libsodium.gitbook.io/doc/memory_management#guarded-heap-allocations)
#[derive(Debug)] // TODO check on Debug derive, is that clever
pub struct SecretMemoryPool {
pool: HashMap<usize, Vec<*mut c_void>>,
}
impl SecretMemoryPool {
/// Create a new [SecretMemoryPool]
#[allow(clippy::new_without_default)]
pub fn new() -> Self {
let pool = HashMap::new();
Self { pool }
}
/// Return secret back to the pool for future re-use
///
/// This consumes the [Secret], but its memory is re-used.
pub fn release<const N: usize>(&mut self, mut s: Secret<N>) {
unsafe {
self.release_by_ref(&mut s);
}
std::mem::forget(s);
}
/// Return secret back to the pool for future re-use, by slice
///
/// # Safety
///
/// After calling this function on a [Secret], the secret must never be
/// used again for anything.
unsafe fn release_by_ref<const N: usize>(&mut self, s: &mut Secret<N>) {
s.zeroize();
let Secret { ptr: secret } = s;
// don't call Secret::drop, that could cause a double free
self.pool.entry(N).or_default().push(*secret);
}
/// Take protected memory from the pool, allocating new one if no suitable
/// chunk is found in the inventory.
///
/// The secret is guaranteed to be full of nullbytes
///
/// # Safety
///
/// This function contains an unsafe call to [libsodium::sodium_malloc].
/// This call has no known safety invariants, thus nothing can go wrong™.
/// However, just like normal `malloc()` this can return a null ptr. Thus
/// the returned pointer is checked for null; causing the program to panic
/// if it is null.
pub fn take<const N: usize>(&mut self) -> Secret<N> {
let entry = self.pool.entry(N).or_default();
let secret = entry.pop().unwrap_or_else(|| {
let ptr = unsafe { libsodium::sodium_malloc(N) };
assert!(
!ptr.is_null(),
"libsodium::sodium_mallloc() returned a null ptr"
);
ptr
});
let mut s = Secret { ptr: secret };
s.zeroize();
s
}
}
impl Drop for SecretMemoryPool {
/// # Safety
///
/// The drop implementation frees the contained elements using
/// [libsodium::sodium_free]. This is safe as long as every `*mut c_void`
/// contained was initialized with a call to [libsodium::sodium_malloc]
fn drop(&mut self) {
for ptr in self.pool.drain().flat_map(|(_, x)| x.into_iter()) {
unsafe {
libsodium::sodium_free(ptr);
}
}
}
}
/// # Safety
///
/// No safety implications are known, since the `*mut c_void` in
/// [SecretMemoryPool] is essentially used like a `&mut u8`.
unsafe impl Send for SecretMemoryPool {}
/// Store for a secret
///
/// Uses memory allocated with [libsodium::sodium_malloc],
/// essentially can do the same things as `[u8; N].as_mut_ptr()`.
pub struct Secret<const N: usize> {
ptr: *mut c_void,
}
impl<const N: usize> Clone for Secret<N> {
fn clone(&self) -> Self {
let mut new = Self::zero();
new.secret_mut().clone_from_slice(self.secret());
new
}
}
impl<const N: usize> Drop for Secret<N> {
fn drop(&mut self) {
self.zeroize();
// the invariant that the [Secret] is not used after the
// `release_by_ref` call is guaranteed, since this is a drop implementation
unsafe { SECRET_CACHE.lock().unwrap().release_by_ref(self) };
self.ptr = null_mut();
}
}
impl<const N: usize> Secret<N> {
pub fn from_slice(slice: &[u8]) -> Self {
let mut new_self = Self::zero();
new_self.secret_mut().copy_from_slice(slice);
new_self
}
/// Returns a new [Secret] that is zero initialized
pub fn zero() -> Self {
// Using [SecretMemoryPool] here because this operation is expensive,
// yet it is used in hot loops
let s = SECRET_CACHE.lock().unwrap().take();
assert_eq!(s.secret(), &[0u8; N]);
s
}
/// Returns a new [Secret] that is randomized
pub fn random() -> Self {
mutating(Self::zero(), |r| r.randomize())
}
/// Sets all data of an existing secret to null bytes
pub fn zeroize(&mut self) {
zeroize(self.secret_mut());
}
/// Sets all data of an existing secret to random bytes
pub fn randomize(&mut self) {
rng(self.secret_mut());
}
/// Borrows the data
pub fn secret(&self) -> &[u8; N] {
// - calling `from_raw_parts` is safe, because `ptr` is initialized as
// an `N` byte allocation from the creation of `Secret` onwards. `ptr`
// stays valid over the full lifetime of `Secret`
//
// - calling unwrap is safe, because we can guarantee that the slice has
// exactly the required size `N` to create an array of `N` elements.
let ptr = self.ptr as *const u8;
let slice = unsafe { std::slice::from_raw_parts(ptr, N) };
slice.try_into().unwrap()
}
/// Borrows the data mutably
pub fn secret_mut(&mut self) -> &mut [u8; N] {
// the same safety argument as for `secret()` holds
let ptr = self.ptr as *mut u8;
let slice = unsafe { std::slice::from_raw_parts_mut(ptr, N) };
slice.try_into().unwrap()
}
}
/// The Debug implementation of [Secret] does not reveal the secret data;
/// instead a placeholder `<SECRET>` is used
impl<const N: usize> fmt::Debug for Secret<N> {
fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
fmt.write_str("<SECRET>")
}
}
/// Contains information in the form of a byte array that may be known to the
/// public
// TODO: We should get rid of the Public type; just use a normal value
#[derive(Copy, Clone, Hash, PartialEq, Eq, PartialOrd, Ord)]
#[repr(transparent)]
pub struct Public<const N: usize> {
pub value: [u8; N],
}
impl<const N: usize> Public<N> {
/// Create a new [Public] from a byte slice
pub fn from_slice(value: &[u8]) -> Self {
mutating(Self::zero(), |r| cpy(value, &mut r.value))
}
/// Create a new [Public] from a byte array
pub fn new(value: [u8; N]) -> Self {
Self { value }
}
/// Create a zero initialized [Public]
pub fn zero() -> Self {
Self { value: [0u8; N] }
}
/// Create a random initialized [Public]
pub fn random() -> Self {
mutating(Self::zero(), |r| r.randomize())
}
/// Randomize all bytes in an existing [Public]
pub fn randomize(&mut self) {
rng(&mut self.value);
}
}
/// Writes the contents of an `&[u8]` as hexadecimal symbols to a [std::fmt::Formatter]
pub fn debug_crypto_array(v: &[u8], fmt: &mut fmt::Formatter) -> fmt::Result {
fmt.write_str("[{}]=")?;
if v.len() > 64 {
for byte in &v[..32] {
std::fmt::LowerHex::fmt(byte, fmt)?;
}
fmt.write_str("")?;
for byte in &v[v.len() - 32..] {
std::fmt::LowerHex::fmt(byte, fmt)?;
}
} else {
for byte in v {
std::fmt::LowerHex::fmt(byte, fmt)?;
}
}
Ok(())
}
impl<const N: usize> fmt::Debug for Public<N> {
fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
debug_crypto_array(&self.value, fmt)
}
}
impl<const N: usize> Deref for Public<N> {
type Target = [u8; N];
fn deref(&self) -> &[u8; N] {
&self.value
}
}
impl<const N: usize> DerefMut for Public<N> {
fn deref_mut(&mut self) -> &mut [u8; N] {
&mut self.value
}
}
#[cfg(test)]
mod test {
use super::*;
/// https://libsodium.gitbook.io/doc/memory_management#guarded-heap-allocations
/// promises us that allocated memory is initialized with this magic byte
const SODIUM_MAGIC_BYTE: u8 = 0xdb;
/// must be called before any interaction with libsodium
fn init() {
unsafe { libsodium_sys::sodium_init() };
}
/// checks that we can allocate with libsodium
#[test]
fn sodium_malloc() {
init();
const N: usize = 8;
let ptr = unsafe { libsodium_sys::sodium_malloc(N) };
let mem = unsafe { std::slice::from_raw_parts(ptr as *mut u8, N) };
assert_eq!(mem, &[SODIUM_MAGIC_BYTE; N])
}
/// checks that we can free with libsodium
#[test]
fn sodium_free() {
init();
const N: usize = 8;
let ptr = unsafe { libsodium_sys::sodium_malloc(N) };
unsafe { libsodium_sys::sodium_free(ptr) }
}
/// check that we can alloc using the magic pool
#[test]
fn secret_memory_pool_take() {
init();
const N: usize = 0x100;
let mut pool = SecretMemoryPool::new();
let secret: Secret<N> = pool.take();
assert_eq!(secret.secret(), &[0; N]);
}
/// check that a secret lives, even if its [SecretMemoryPool] is deleted
#[test]
fn secret_memory_pool_drop() {
init();
const N: usize = 0x100;
let mut pool = SecretMemoryPool::new();
let secret: Secret<N> = pool.take();
std::mem::drop(pool);
assert_eq!(secret.secret(), &[0; N]);
}
/// check that a secret can be reborn, freshly initialized with zero
#[test]
fn secret_memory_pool_release() {
init();
const N: usize = 1;
let mut pool = SecretMemoryPool::new();
let mut secret: Secret<N> = pool.take();
let old_secret_ptr = secret.ptr;
secret.secret_mut()[0] = 0x13;
pool.release(secret);
// now check that we get the same ptr
let new_secret: Secret<N> = pool.take();
assert_eq!(old_secret_ptr, new_secret.ptr);
// and that the secret was zeroized
assert_eq!(new_secret.secret(), &[0; N]);
}
}


@@ -1,45 +0,0 @@
use {
crate::{prftree::PrfTree, sodium::KEY_SIZE},
anyhow::Result,
};
pub fn protocol() -> Result<PrfTree> {
PrfTree::zero().mix("Rosenpass v1 mceliece460896 Kyber512 ChaChaPoly1305 BLAKE2s".as_bytes())
}
// TODO Use labels that can serve as idents
macro_rules! prflabel {
($base:ident, $name:ident, $($lbl:expr),* ) => {
pub fn $name() -> Result<PrfTree> {
let t = $base()?;
$( let t = t.mix($lbl.as_bytes())?; )*
Ok(t)
}
}
}
prflabel!(protocol, mac, "mac");
prflabel!(protocol, cookie, "cookie");
prflabel!(protocol, peerid, "peer id");
prflabel!(protocol, biscuit_ad, "biscuit additional data");
prflabel!(protocol, ckinit, "chaining key init");
prflabel!(protocol, _ckextract, "chaining key extract");
macro_rules! prflabel_leaf {
($base:ident, $name:ident, $($lbl:expr),* ) => {
pub fn $name() -> Result<[u8; KEY_SIZE]> {
let t = $base()?;
$( let t = t.mix($lbl.as_bytes())?; )*
Ok(t.into_value())
}
}
}
prflabel_leaf!(_ckextract, mix, "mix");
prflabel_leaf!(_ckextract, hs_enc, "handshake encryption");
prflabel_leaf!(_ckextract, ini_enc, "initiator handshake encryption");
prflabel_leaf!(_ckextract, res_enc, "responder handshake encryption");
prflabel!(_ckextract, _user, "user");
prflabel!(_user, _rp, "rosenpass.eu");
prflabel_leaf!(_rp, osk, "wireguard psk");
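To make the label tree in this (removed) file easier to follow: each `prflabel!` invocation defines a function that mixes one more label into its parent, and `prflabel_leaf!` additionally collapses the result into a raw key. As a hypothetical sketch, `prflabel!(protocol, mac, "mac")` expands to roughly the following (`PrfTree` comes from the removed `prftree` module):

```rust
// Rough expansion sketch of `prflabel!(protocol, mac, "mac")`.
pub fn mac() -> anyhow::Result<PrfTree> {
    let t = protocol()?;              // start from the protocol root
    let t = t.mix("mac".as_bytes())?; // mix in the "mac" label
    Ok(t)
}
```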


@@ -1,56 +0,0 @@
#[macro_use]
pub mod util;
#[macro_use]
pub mod sodium;
pub mod coloring;
pub mod labeled_prf;
pub mod msgs;
pub mod pqkem;
pub mod prftree;
pub mod protocol;
#[derive(thiserror::Error, Debug)]
pub enum RosenpassError {
#[error("error in OQS")]
Oqs,
#[error("error from external library while calling OQS")]
OqsExternalLib,
#[error("buffer size mismatch, required {required_size} but only found {actual_size}")]
BufferSizeMismatch {
required_size: usize,
actual_size: usize,
},
#[error("invalid message type")]
InvalidMessageType(u8),
}
impl RosenpassError {
/// Helper function to check a buffer size
fn check_buffer_size(required_size: usize, actual_size: usize) -> Result<(), Self> {
if required_size != actual_size {
Err(Self::BufferSizeMismatch {
required_size,
actual_size,
})
} else {
Ok(())
}
}
}
/// Extension trait to attach function calls to foreign types.
trait RosenpassMaybeError {
/// Checks whether something is an error or not
fn to_rg_error(&self) -> Result<(), RosenpassError>;
}
impl RosenpassMaybeError for oqs_sys::common::OQS_STATUS {
fn to_rg_error(&self) -> Result<(), RosenpassError> {
use oqs_sys::common::OQS_STATUS;
match self {
OQS_STATUS::OQS_SUCCESS => Ok(()),
OQS_STATUS::OQS_ERROR => Err(RosenpassError::Oqs),
OQS_STATUS::OQS_EXTERNAL_LIB_ERROR_OPENSSL => Err(RosenpassError::OqsExternalLib),
}
}
}


@@ -1,106 +0,0 @@
//! The rosenpass protocol relies on a special type
//! of hash function for most of its hashing or
//! message authentication needs: an incrementable
//! pseudo random function.
//!
//! This is a generalization of a PRF operating
//! on a sequence of inputs instead of a single input.
//!
//! Like a Dec function, the Iprf features efficient
//! incrementability.
//!
//! You can also think of an Iprf as a Dec function with
//! a fixed size output.
//!
//! The idea behind an Iprf is that it can be efficiently
//! constructed from a Dec function as well as a PRF.
//!
//! TODO Base the construction on a proper Dec function
pub struct Iprf([u8; KEY_SIZE]);
pub struct IprfBranch([u8; KEY_SIZE]);
pub struct SecretIprf(Secret<KEY_SIZE>);
pub struct SecretIprfBranch(Secret<KEY_SIZE>);
pub fn prf_into(out: &mut [u8], key: &[u8], data: &[u8]) {
// TODO: The error handling with sodium is a scourge
hmac_into(out, key, data).unwrap()
}
pub fn prf(key: &[u8], data: &[u8]) -> [u8; KEY_SIZE]{
mutating([0u8; KEY_SIZE], |r| prf_into(r, key, data))
}
impl Iprf {
fn zero() -> Self {
Self([0u8; KEY_SIZE])
}
fn dup(self) -> IprfBranch {
IprfBranch(self.0)
}
// TODO: Protocol! Use domain separation to ensure that
fn mix(self, v: &[u8]) -> Self {
Self(prf(&self.0, v))
}
fn mix_secret<const N: usize>(self, v: Secret<N>) -> SecretIprf {
SecretIprf::prf_invoc(&self.0, v.secret())
}
fn into_value(self) -> [u8; KEY_SIZE] {
self.0
}
fn extract(self, v: &[u8], dst: &mut [u8]) {
prf_into(&self.0, v, dst)
}
}
impl IprfBranch {
fn mix(&self, v: &[u8]) -> Iprf {
Iprf(prf(self.0, v))
}
fn mix_secret<const N: usize>(&self, v: Secret<N>) -> SecretIprf {
SecretIprf::prf_incov(self.0, v.secret())
}
}
impl SecretIprf {
fn prf_invoc(k: &[u8], d: &[u8]) -> SecretIprf {
mutating(SecretIprf(Secret::zero()), |r|
prf_into(k, d, r.secret_mut()))
}
fn from_key(k: Secret<N>) -> SecretIprf {
Self(k)
}
fn mix(self, v: &[u8]) -> SecretIprf {
Self::prf_invoc(self.0.secret(), v)
}
fn mix_secret<const N: usize>(self, v: Secret<N>) -> SecretIprf {
Self::prf_invoc(self.0.secret(), v.secret())
}
fn into_secret(self) -> Secret<KEY_SIZE> {
self.0
}
fn into_secret_slice(self, v: &[u8], dst: &[u8]) {
prf_into(self.0.secret(), v, dst)
}
}
impl SecretIprfBranch {
fn mix(&self, v: &[u8]) -> SecretIprf {
SecretIprf::prf_invoc(self.0.secret(), v)
}
fn mix_secret<const N: usize>(&self, v: Secret<N>) -> SecretIprf {
SecretIprf::prf_invoc(self.0.secret(), v.secret())
}
}


@@ -1,646 +0,0 @@
use anyhow::{bail, ensure, Context, Result};
use log::{error, info};
use rosenpass::{
attempt,
coloring::{Public, Secret},
multimatch,
pqkem::{SKEM, KEM},
protocol::{SPk, SSk, MsgBuf, PeerPtr, Server as CryptoServer, SymKey, Timing},
sodium::sodium_init,
util::{b64_reader, b64_writer, fmt_b64},
};
use std::{
fs::{File, OpenOptions},
io::{ErrorKind, Read, Write},
net::{SocketAddr, ToSocketAddrs, UdpSocket},
path::Path,
process::{exit, Command, Stdio},
time::Duration,
};
/// Open a file writable
pub fn fopen_w<P: AsRef<Path>>(path: P) -> Result<File> {
Ok(OpenOptions::new()
.read(false)
.write(true)
.create(true)
.truncate(true)
.open(path)?)
}
/// Open a file readable
pub fn fopen_r<P: AsRef<Path>>(path: P) -> Result<File> {
Ok(OpenOptions::new()
.read(true)
.write(false)
.create(false)
.truncate(false)
.open(path)?)
}
pub trait ReadExactToEnd {
fn read_exact_to_end(&mut self, buf: &mut [u8]) -> Result<()>;
}
impl<R: Read> ReadExactToEnd for R {
fn read_exact_to_end(&mut self, buf: &mut [u8]) -> Result<()> {
let mut dummy = [0u8; 8];
self.read_exact(buf)?;
ensure!(self.read(&mut dummy)? == 0, "File too long!");
Ok(())
}
}
pub trait LoadValue {
fn load<P: AsRef<Path>>(path: P) -> Result<Self>
where
Self: Sized;
}
pub trait LoadValueB64 {
fn load_b64<P: AsRef<Path>>(path: P) -> Result<Self>
where
Self: Sized;
}
trait StoreValue {
fn store<P: AsRef<Path>>(&self, path: P) -> Result<()>;
}
trait StoreSecret {
unsafe fn store_secret<P: AsRef<Path>>(&self, path: P) -> Result<()>;
}
impl<T: StoreValue> StoreSecret for T {
unsafe fn store_secret<P: AsRef<Path>>(&self, path: P) -> Result<()> {
self.store(path)
}
}
impl<const N: usize> LoadValue for Secret<N> {
fn load<P: AsRef<Path>>(path: P) -> Result<Self> {
let mut v = Self::random();
let p = path.as_ref();
fopen_r(p)?
.read_exact_to_end(v.secret_mut())
.with_context(|| format!("Could not load file {p:?}"))?;
Ok(v)
}
}
impl<const N: usize> LoadValueB64 for Secret<N> {
fn load_b64<P: AsRef<Path>>(path: P) -> Result<Self> {
let mut v = Self::random();
let p = path.as_ref();
// This might leave some fragments of the secret on the stack;
// in practice this is likely not a problem because the stack likely
// will be overwritten by something else soon but this is not exactly
// guaranteed. It would be possible to remedy this, but since the secret
// data will linger in the linux page cache anyways with the current
// implementation, going to great length to erase the secret here is
// not worth it right now.
b64_reader(&mut fopen_r(p)?)
.read_exact(v.secret_mut())
.with_context(|| format!("Could not load base64 file {p:?}"))?;
Ok(v)
}
}
impl<const N: usize> StoreSecret for Secret<N> {
unsafe fn store_secret<P: AsRef<Path>>(&self, path: P) -> Result<()> {
std::fs::write(path, self.secret())?;
Ok(())
}
}
impl<const N: usize> LoadValue for Public<N> {
fn load<P: AsRef<Path>>(path: P) -> Result<Self> {
let mut v = Self::random();
fopen_r(path)?.read_exact_to_end(&mut *v)?;
Ok(v)
}
}
impl<const N: usize> StoreValue for Public<N> {
fn store<P: AsRef<Path>>(&self, path: P) -> Result<()> {
std::fs::write(path, **self)?;
Ok(())
}
}
macro_rules! bail_usage {
($args:expr, $($pt:expr),*) => {{
error!($($pt),*);
cmd_help()?;
exit(1);
}}
}
macro_rules! ensure_usage {
($args:expr, $ck:expr, $($pt:expr),*) => {{
if !$ck {
bail_usage!($args, $($pt),*);
}
}}
}
macro_rules! mandatory_opt {
($args:expr, $val:expr, $name:expr) => {{
ensure_usage!($args, $val.is_some(), "{0} option is mandatory", $name)
}};
}
pub struct ArgsWalker {
pub argv: Vec<String>,
pub off: usize,
}
impl ArgsWalker {
pub fn get(&self) -> Option<&str> {
self.argv.get(self.off).map(|s| s as &str)
}
pub fn prev(&mut self) -> Option<&str> {
assert!(self.off > 0);
self.off -= 1;
self.get()
}
#[allow(clippy::should_implement_trait)]
pub fn next(&mut self) -> Option<&str> {
assert!(self.todo() > 0);
self.off += 1;
self.get()
}
pub fn opt(&mut self, dst: &mut Option<String>) -> Result<()> {
let cmd = &self.argv[self.off - 1];
ensure_usage!(&self, self.todo() > 0, "Option {} takes a value", cmd);
ensure_usage!(&self, dst.is_none(), "Cannot set {} multiple times.", cmd);
*dst = Some(String::from(self.next().unwrap()));
Ok(())
}
fn todo(&self) -> usize {
self.argv.len() - self.off
}
}
#[derive(Default, Debug)]
pub struct WireguardOut {
// impl KeyOutput
dev: String,
pk: String,
extra_params: Vec<String>,
}
#[derive(Default, Debug)]
pub struct AppPeer {
pub outfile: Option<String>,
pub outwg: Option<WireguardOut>,
pub tx_addr: Option<SocketAddr>,
}
#[derive(Debug)]
pub enum Verbosity {
Quiet,
Verbose,
}
/// Holds the state of the application, namely the external IO
#[derive(Debug)]
pub struct AppServer {
pub crypt: CryptoServer,
pub sock: UdpSocket,
pub peers: Vec<AppPeer>,
pub verbosity: Verbosity,
}
/// Index based pointer to a Peer
#[derive(Debug)]
pub struct AppPeerPtr(pub usize);
impl AppPeerPtr {
/// Takes an index based handle and returns the actual peer
pub fn lift(p: PeerPtr) -> Self {
Self(p.0)
}
/// Returns an index based handle to one Peer
pub fn lower(&self) -> PeerPtr {
PeerPtr(self.0)
}
pub fn get_app<'a>(&self, srv: &'a AppServer) -> &'a AppPeer {
&srv.peers[self.0]
}
pub fn get_app_mut<'a>(&self, srv: &'a mut AppServer) -> &'a mut AppPeer {
&mut srv.peers[self.0]
}
}
#[derive(Debug)]
pub enum AppPollResult {
DeleteKey(AppPeerPtr),
SendInitiation(AppPeerPtr),
SendRetransmission(AppPeerPtr),
ReceivedMessage(usize, SocketAddr),
}
#[derive(Debug)]
pub enum KeyOutputReason {
Exchanged,
Stale,
}
/// Catches errors, prints them through the logger, then exits
pub fn main() {
env_logger::init();
match rosenpass_main() {
Ok(_) => {}
Err(e) => {
error!("{e}");
exit(1);
}
}
}
/// Entry point to the whole program
pub fn rosenpass_main() -> Result<()> {
sodium_init()?;
let mut args = ArgsWalker {
argv: std::env::args().collect(),
off: 0, // skipping executable path
};
// Command parsing
match args.next() {
Some("help") | Some("-h") | Some("-help") | Some("--help") => cmd_help()?,
Some("keygen") => cmd_keygen(args)?,
Some("exchange") => cmd_exchange(args)?,
Some(cmd) => bail_usage!(&args, "No such command {}", cmd),
None => bail_usage!(&args, "Expected a command!"),
};
Ok(())
}
/// Print the usage information
pub fn cmd_help() -> Result<()> {
eprint!(include_str!("usage.md"), env!("CARGO_BIN_NAME"));
Ok(())
}
/// Generate a keypair
pub fn cmd_keygen(mut args: ArgsWalker) -> Result<()> {
let mut sf: Option<String> = None;
let mut pf: Option<String> = None;
// Arg parsing
loop {
match args.next() {
Some("private-key") => args.opt(&mut sf)?,
Some("public-key") => args.opt(&mut pf)?,
Some(opt) => bail_usage!(&args, "Unknown option `{}`", opt),
None => break,
};
}
mandatory_opt!(&args, sf, "private-key");
mandatory_opt!(&args, pf, "private-key");
// Cmd
let (mut ssk, mut spk) = (SSk::random(), SPk::random());
unsafe {
SKEM::keygen(ssk.secret_mut(), spk.secret_mut())?;
ssk.store_secret(sf.unwrap())?;
spk.store_secret(pf.unwrap())?;
}
Ok(())
}
pub fn cmd_exchange(mut args: ArgsWalker) -> Result<()> {
// Argument parsing
let mut sf: Option<String> = None;
let mut pf: Option<String> = None;
let mut listen: Option<String> = None;
let mut verbosity = Verbosity::Quiet;
// Global parameters
loop {
match args.next() {
Some("private-key") => args.opt(&mut sf)?,
Some("public-key") => args.opt(&mut pf)?,
Some("listen") => args.opt(&mut listen)?,
Some("verbose") => {
verbosity = Verbosity::Verbose;
}
Some("peer") => {
args.prev();
break;
}
Some(opt) => bail_usage!(&args, "Unknown option `{}`", opt),
None => break,
};
}
mandatory_opt!(&args, sf, "private-key");
mandatory_opt!(&args, pf, "public-key");
let mut srv = std::boxed::Box::<AppServer>::new(AppServer::new(
// sk, pk, addr
SSk::load(&sf.unwrap())?,
SPk::load(&pf.unwrap())?,
listen.as_deref().unwrap_or("[0::0]:0"),
verbosity,
)?);
// Peer parameters
'_parseAllPeers: while args.todo() > 0 {
let mut pf: Option<String> = None;
let mut outfile: Option<String> = None;
let mut outwg: Option<WireguardOut> = None;
let mut endpoint: Option<String> = None;
let mut pskf: Option<String> = None;
args.next(); // skip "peer" starter itself
'parseOnePeer: loop {
match args.next() {
// Done with this peer
Some("peer") => {
args.prev();
break 'parseOnePeer;
}
None => break 'parseOnePeer,
// Options
Some("public-key") => args.opt(&mut pf)?,
Some("endpoint") => args.opt(&mut endpoint)?,
Some("preshared-key") => args.opt(&mut pskf)?,
Some("outfile") => args.opt(&mut outfile)?,
// Wireguard out
Some("wireguard") => {
ensure_usage!(
&args,
outwg.is_none(),
"Cannot set wireguard output for the same peer multiple times."
);
ensure_usage!(&args, args.todo() >= 2, "Option wireguard takes two values");
let dev = String::from(args.next().unwrap());
let pk = String::from(args.next().unwrap());
let wg = outwg.insert(WireguardOut {
dev,
pk,
extra_params: Vec::new(),
});
'_parseWgOutExtra: loop {
match args.next() {
Some("peer") => {
args.prev();
break 'parseOnePeer;
}
None => break 'parseOnePeer,
Some(xtra) => wg.extra_params.push(xtra.to_string()),
};
}
}
// Invalid
Some(opt) => bail_usage!(&args, "Unknown peer option `{}`", opt),
};
}
mandatory_opt!(&args, pf, "private-key");
ensure_usage!(
&args,
outfile.is_some() || outwg.is_some(),
"Either of the outfile or wireguard option is mandatory"
);
let tx_addr = endpoint
.map(|e| {
e.to_socket_addrs()?
.next()
.context("Expected address in endpoint parameter")
})
.transpose()?;
srv.add_peer(
// psk, pk, outfile, outwg, tx_addr
pskf.map(SymKey::load_b64).transpose()?,
SPk::load(&pf.unwrap())?,
outfile,
outwg,
tx_addr,
)?;
}
srv.listen_loop()
}
impl AppServer {
pub fn new<A: ToSocketAddrs>(
sk: SSk,
pk: SPk,
addr: A,
verbosity: Verbosity,
) -> Result<Self> {
Ok(Self {
crypt: CryptoServer::new(sk, pk),
sock: UdpSocket::bind(addr)?,
peers: Vec::new(),
verbosity,
})
}
pub fn verbose(&self) -> bool {
matches!(self.verbosity, Verbosity::Verbose)
}
pub fn add_peer(
&mut self,
psk: Option<SymKey>,
pk: SPk,
outfile: Option<String>,
outwg: Option<WireguardOut>,
tx_addr: Option<SocketAddr>,
) -> Result<AppPeerPtr> {
let PeerPtr(pn) = self.crypt.add_peer(psk, pk)?;
assert!(pn == self.peers.len());
self.peers.push(AppPeer {
outfile,
outwg,
tx_addr,
});
Ok(AppPeerPtr(pn))
}
pub fn listen_loop(&mut self) -> Result<()> {
const INIT_SLEEP: f64 = 0.01;
const MAX_FAILURES: i32 = 10;
let mut failure_cnt = 0;
loop {
let msgs_processed = 0usize;
let err = match self.event_loop() {
Ok(()) => return Ok(()),
Err(e) => e,
};
// This should not happen…
failure_cnt = if msgs_processed > 0 {
0
} else {
failure_cnt + 1
};
let sleep = INIT_SLEEP * 2.0f64.powf(f64::from(failure_cnt - 1));
let tries_left = MAX_FAILURES - (failure_cnt - 1);
error!(
"unexpected error after processing {} messages: {:?} {}",
msgs_processed,
err,
err.backtrace()
);
if tries_left > 0 {
error!("reinitializing networking in {sleep}! {tries_left} tries left.");
std::thread::sleep(self.crypt.timebase.dur(sleep));
continue;
}
bail!("too many network failures");
}
}
pub fn event_loop(&mut self) -> Result<()> {
let (mut rx, mut tx) = (MsgBuf::zero(), MsgBuf::zero());
macro_rules! tx_maybe_with {
($peer:expr, $fn:expr) => {
attempt!({
let p = $peer.get_app(self);
if let Some(addr) = p.tx_addr {
let len = $fn()?;
self.sock.send_to(&tx[..len], addr)?;
}
Ok(())
})
};
}
loop {
use rosenpass::protocol::HandleMsgResult;
use AppPollResult::*;
use KeyOutputReason::*;
match self.poll(&mut *rx)? {
SendInitiation(peer) => tx_maybe_with!(peer, || self
.crypt
.initiate_handshake(peer.lower(), &mut *tx))?,
SendRetransmission(peer) => tx_maybe_with!(peer, || self
.crypt
.retransmit_handshake(peer.lower(), &mut *tx))?,
DeleteKey(peer) => self.output_key(peer, Stale, &SymKey::random())?,
ReceivedMessage(len, addr) => {
multimatch!(self.crypt.handle_msg(&rx[..len], &mut *tx),
Err(ref e) =>
self.verbose().then(||
info!("error processing incoming message from {:?}: {:?} {}", addr, e, e.backtrace())),
Ok(HandleMsgResult { resp: Some(len), .. }) => {
self.sock.send_to(&tx[0..len], addr)?
},
Ok(HandleMsgResult { exchanged_with: Some(p), .. }) => {
let ap = AppPeerPtr::lift(p);
ap.get_app_mut(self).tx_addr = Some(addr);
// TODO: Maybe we should rather call the key "rosenpass output"?
self.output_key(ap, Exchanged, &self.crypt.osk(p)?)?;
}
);
}
};
}
}
pub fn output_key(&self, peer: AppPeerPtr, why: KeyOutputReason, key: &SymKey) -> Result<()> {
let peerid = peer.lower().get(&self.crypt).pidt()?;
let ap = peer.get_app(self);
if self.verbose() {
let msg = match why {
KeyOutputReason::Exchanged => "Exchanged key with peer",
KeyOutputReason::Stale => "Erasing outdated key from peer",
};
info!("{} {}", msg, fmt_b64(&*peerid));
}
if let Some(of) = ap.outfile.as_ref() {
// This might leave some fragments of the secret on the stack;
// in practice this is likely not a problem because the stack likely
// will be overwritten by something else soon but this is not exactly
// guaranteed. It would be possible to remedy this, but since the secret
// data will linger in the linux page cache anyways with the current
// implementation, going to great length to erase the secret here is
// not worth it right now.
b64_writer(fopen_w(of)?).write_all(key.secret())?;
let why = match why {
KeyOutputReason::Exchanged => "exchanged",
KeyOutputReason::Stale => "stale",
};
println!(
"output-key peer {} key-file {} {}",
fmt_b64(&*peerid),
of,
why
);
}
if let Some(owg) = ap.outwg.as_ref() {
let child = Command::new("wg")
.arg("set")
.arg(&owg.dev)
.arg("peer")
.arg(&owg.pk)
.arg("preshared-key")
.arg("/dev/stdin")
.stdin(Stdio::piped())
.args(&owg.extra_params)
.spawn()?;
b64_writer(child.stdin.unwrap()).write_all(key.secret())?;
}
Ok(())
}
pub fn poll(&mut self, rx_buf: &mut [u8]) -> Result<AppPollResult> {
use rosenpass::protocol::PollResult as C;
use AppPollResult as A;
loop {
return Ok(match self.crypt.poll()? {
C::DeleteKey(PeerPtr(no)) => A::DeleteKey(AppPeerPtr(no)),
C::SendInitiation(PeerPtr(no)) => A::SendInitiation(AppPeerPtr(no)),
C::SendRetransmission(PeerPtr(no)) => A::SendRetransmission(AppPeerPtr(no)),
C::Sleep(timeout) => match self.try_recv(rx_buf, timeout)? {
Some((len, addr)) => A::ReceivedMessage(len, addr),
None => continue,
},
});
}
}
pub fn try_recv(&self, buf: &mut [u8], timeout: Timing) -> Result<Option<(usize, SocketAddr)>> {
if timeout == 0.0 {
return Ok(None);
}
self.sock
.set_read_timeout(Some(Duration::from_secs_f64(timeout)))?;
match self.sock.recv_from(buf) {
Ok(x) => Ok(Some(x)),
Err(e) => match e.kind() {
ErrorKind::WouldBlock => Ok(None),
ErrorKind::TimedOut => Ok(None),
_ => Err(anyhow::Error::new(e)),
},
}
}
}


@@ -1,384 +0,0 @@
//! # Messages
//!
//! This module contains data structures that help in the
//! serialization/deserialization (ser/de) of messages. That's kind of a lie,
//! since no actual ser/de happens. Instead, the structures offer views into
//! mutable byte slices (`&mut [u8]`), allowing to modify the fields of an
//! always serialized instance of the data in question. This is closely related
//! to the concept of lenses in functional programming; more on that here:
//! [https://sinusoid.es/misc/lager/lenses.pdf](https://sinusoid.es/misc/lager/lenses.pdf)
//!
//! # Example
//!
//! The following example uses the [`data_lense` macro](crate::data_lense) to create a lense that
//! might be useful when dealing with UDP headers.
//!
//! ```
//! use rosenpass::{data_lense, RosenpassError, msgs::LenseView};
//! # fn main() -> Result<(), RosenpassError> {
//!
//! data_lense! {UdpDatagramHeader :=
//! source_port: 2,
//! dest_port: 2,
//! length: 2,
//! checksum: 2
//! }
//!
//! let mut buf = [0u8; 8];
//!
//! // read-only lense, no check of size:
//! let lense = UdpDatagramHeader(&buf);
//! assert_eq!(lense.checksum(), &[0, 0]);
//!
//! // mutable lense, runtime check of size
//! let mut lense = buf.as_mut().udp_datagram_header()?;
//! lense.source_port_mut().copy_from_slice(&53u16.to_be_bytes()); // some DNS, anyone?
//!
//! // the original buffer is still available
//! assert_eq!(buf, [0, 53, 0, 0, 0, 0, 0, 0]);
//!
//! // read-only lense, runtime check of size
//! let lense = buf.as_ref().udp_datagram_header()?;
//! assert_eq!(lense.source_port(), &[0, 53]);
//! # Ok(())
//! # }
//! ```
use super::RosenpassError;
use crate::{pqkem::*, sodium};
// Macro magic ////////////////////////////////////////////////////////////////
/// A macro to create data lenses. Refer to the [`msgs` mod](crate::msgs) for
/// an example and further elaboration
// TODO implement TryFrom<[u8]> and From<[u8; Self::len()]>
#[macro_export]
macro_rules! data_lense(
// prefix @ offset ; optional meta ; field name : field length, ...
(token_muncher_ref @ $offset:expr ; $( $attr:meta )* ; $field:ident : $len:expr $(, $( $tail:tt )+ )?) => {
::paste::paste!{
#[allow(rustdoc::broken_intra_doc_links)]
$( #[ $attr ] )*
///
#[doc = data_lense!(maybe_docstring_link $len)]
/// bytes long
pub fn $field(&self) -> &__ContainerType::Output {
&self.0[$offset .. $offset + $len]
}
/// The bytes until the
#[doc = data_lense!(maybe_docstring_link Self::$field)]
/// field
pub fn [< until_ $field >](&self) -> &__ContainerType::Output {
&self.0[0 .. $offset]
}
// if the tail exists, consume it as well
$(
data_lense!{token_muncher_ref @ $offset + $len ; $( $tail )+ }
)?
}
};
// prefix @ offset ; optional meta ; field name : field length, ...
(token_muncher_mut @ $offset:expr ; $( $attr:meta )* ; $field:ident : $len:expr $(, $( $tail:tt )+ )?) => {
::paste::paste!{
#[allow(rustdoc::broken_intra_doc_links)]
$( #[ $attr ] )*
///
#[doc = data_lense!(maybe_docstring_link $len)]
/// bytes long
pub fn [< $field _mut >](&mut self) -> &mut __ContainerType::Output {
&mut self.0[$offset .. $offset + $len]
}
// if the tail exists, consume it as well
$(
data_lense!{token_muncher_mut @ $offset + $len ; $( $tail )+ }
)?
}
};
// switch that yields literals unchanged, but creates docstring links to
// constants
// TODO the doc string link doesn't work if $x is taken from a generic,
(maybe_docstring_link $x:literal) => (stringify!($x));
(maybe_docstring_link $x:expr) => (stringify!([$x]));
// struct name < optional generics > := optional doc string field name : field length, ...
($type:ident $( < $( $generic:ident ),+ > )? := $( $( #[ $attr:meta ] )* $field:ident : $len:expr ),+) => (::paste::paste!{
#[allow(rustdoc::broken_intra_doc_links)]
/// A data lense to manipulate byte slices.
///
//// # Fields
///
$(
/// - `
#[doc = stringify!($field)]
/// `:
#[doc = data_lense!(maybe_docstring_link $len)]
/// bytes
)+
pub struct $type<__ContainerType $(, $( $generic ),+ )? > (
__ContainerType,
// The phantom data is required, since all generics declared on a
// type need to be used on the type.
// https://doc.rust-lang.org/stable/error_codes/E0392.html
$( $( ::core::marker::PhantomData<$generic> ),+ )?
);
impl<__ContainerType $(, $( $generic: LenseView ),+ )? > $type<__ContainerType $(, $( $generic ),+ )? >{
$(
/// Size in bytes of the field `
#[doc = stringify!($field)]
/// `
pub const fn [< $field _len >]() -> usize{
$len
}
)+
/// Verify that `len` is sufficiently long to hold [Self]
pub fn check_size(len: usize) -> Result<(), RosenpassError>{
let required_size = $( $len + )+ 0;
let actual_size = len;
if actual_size < required_size {
Err(RosenpassError::BufferSizeMismatch {
required_size,
actual_size,
})
}else{
Ok(())
}
}
}
// read-only accessor functions
impl<'a, __ContainerType $(, $( $generic: LenseView ),+ )?> $type<&'a __ContainerType $(, $( $generic ),+ )?>
where
__ContainerType: std::ops::Index<std::ops::Range<usize>> + ?Sized,
{
data_lense!{token_muncher_ref @ 0 ; $( $( $attr )* ; $field : $len ),+ }
/// View into all bytes belonging to this Lense
pub fn all_bytes(&self) -> &__ContainerType::Output {
&self.0[0..Self::LEN]
}
}
// mutable accessor functions
impl<'a, __ContainerType $(, $( $generic: LenseView ),+ )?> $type<&'a mut __ContainerType $(, $( $generic ),+ )?>
where
__ContainerType: std::ops::IndexMut<std::ops::Range<usize>> + ?Sized,
{
data_lense!{token_muncher_ref @ 0 ; $( $( $attr )* ; $field : $len ),+ }
data_lense!{token_muncher_mut @ 0 ; $( $( $attr )* ; $field : $len ),+ }
/// View into all bytes belonging to this Lense
pub fn all_bytes(&self) -> &__ContainerType::Output {
&self.0[0..Self::LEN]
}
/// View into all bytes belonging to this Lense
pub fn all_bytes_mut(&mut self) -> &mut __ContainerType::Output {
&mut self.0[0..Self::LEN]
}
}
// lense trait, allowing us to know the implementing lense's size
impl<__ContainerType $(, $( $generic: LenseView ),+ )? > LenseView for $type<__ContainerType $(, $( $generic ),+ )? >{
/// Number of bytes required to store this type in binary format
const LEN: usize = $( $len + )+ 0;
}
/// Extension trait to allow checked creation of a lense over
/// some byte slice that contains a
#[doc = data_lense!(maybe_docstring_link $type)]
pub trait [< $type Ext >] {
type __ContainerType;
/// Create a lense to the byte slice
fn [< $type:snake >] $(< $($generic),* >)? (self) -> Result< $type<Self::__ContainerType, $( $($generic),+ )? >, RosenpassError>;
}
impl<'a> [< $type Ext >] for &'a [u8] {
type __ContainerType = &'a [u8];
fn [< $type:snake >] $(< $($generic),* >)? (self) -> Result< $type<Self::__ContainerType, $( $($generic),+ )? >, RosenpassError> {
Ok($type ( self, $( $( ::core::marker::PhantomData::<$generic> ),+ )? ))
}
}
impl<'a> [< $type Ext >] for &'a mut [u8] {
type __ContainerType = &'a mut [u8];
fn [< $type:snake >] $(< $($generic),* >)? (self) -> Result< $type<Self::__ContainerType, $( $($generic),+ )? >, RosenpassError> {
Ok($type ( self, $( $( ::core::marker::PhantomData::<$generic> ),+ )? ))
}
}
});
);
/// Common trait shared by all Lenses
pub trait LenseView {
const LEN: usize;
}
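// The following is an illustrative sketch of what one `data_lense!` invocation
// provides, using a hypothetical two-field lense (`Example`) that is not part
// of this module; it exercises only items generated by the macro above.
#[cfg(test)]
mod data_lense_sketch {
    use super::*;

    data_lense! { Example :=
        /// First field
        head: 2,
        /// Second field
        body: 4
    }

    #[test]
    fn generated_api() {
        // Per-field length accessors and the total length from LenseView
        assert_eq!(Example::<()>::head_len(), 2);
        assert_eq!(Example::<()>::body_len(), 4);
        assert_eq!(Example::<()>::LEN, 6);
        // The generated extension trait adds a constructor on byte slices
        let buf = [0u8; 6];
        let lense = buf.as_slice().example().unwrap();
        assert_eq!(lense.head().len(), 2);
        assert_eq!(lense.until_body(), lense.head());
    }
}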
data_lense! { Envelope<M> :=
/// [MsgType] of this message
msg_type: 1,
/// Reserved for future use
reserved: 3,
/// The actual payload
payload: M::LEN,
/// Message Authentication Code (mac) over all bytes up to, but not including,
/// the `mac` field itself
mac: sodium::MAC_SIZE,
/// Currently unused, TODO: do something with this
cookie: sodium::MAC_SIZE
}
data_lense! { InitHello :=
/// Randomly generated connection id
sidi: 4,
/// Kyber 512 Ephemeral Public Key
epki: EKEM::PK_LEN,
/// Classic McEliece Ciphertext
sctr: SKEM::CT_LEN,
/// Encrypted: 16 byte hash of the initiator's static McEliece key
pidic: sodium::AEAD_TAG_LEN + 32,
/// Encrypted TAI64N Time Stamp (against replay attacks)
auth: sodium::AEAD_TAG_LEN
}
data_lense! { RespHello :=
/// Randomly generated connection id
sidr: 4,
/// Copied from InitHello
sidi: 4,
/// Kyber 512 Ephemeral Ciphertext
ecti: EKEM::CT_LEN,
/// Classic McEliece Ciphertext
scti: SKEM::CT_LEN,
/// Empty encrypted message (just an auth tag)
auth: sodium::AEAD_TAG_LEN,
/// Responder's handshake state in encrypted form
biscuit: BISCUIT_CT_LEN
}
data_lense! { InitConf :=
/// Copied from InitHello
sidi: 4,
/// Copied from RespHello
sidr: 4,
/// Responder's handshake state in encrypted form
biscuit: BISCUIT_CT_LEN,
/// Empty encrypted message (just an auth tag)
auth: sodium::AEAD_TAG_LEN
}
data_lense! { EmptyData :=
/// Copied from RespHello
sid: 4,
/// Nonce
ctr: 8,
/// Empty encrypted message (just an auth tag)
auth: sodium::AEAD_TAG_LEN
}
data_lense! { Biscuit :=
/// H(spki); identifies the initiator
pidi: sodium::KEY_SIZE,
/// The biscuit number (replay protection)
biscuit_no: 12,
/// Chaining key
ck: sodium::KEY_SIZE
}
data_lense! { DataMsg :=
dummy: 4
}
data_lense! { CookieReply :=
dummy: 4
}
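// A minimal usage sketch of the lenses defined above, assuming only the
// `EmptyData` lense and the `EmptyDataExt` extension trait generated by the
// macro; it writes the counter through a mutable lense and reads it back
// through a read-only one.
#[cfg(test)]
mod lense_usage_sketch {
    use super::*;

    #[test]
    fn empty_data_lense_access() {
        // Buffer exactly large enough for an EmptyData message:
        // sid (4 bytes) + ctr (8 bytes) + auth (one AEAD tag)
        let mut buf = vec![0u8; EmptyData::<()>::LEN];

        // Write the nonce through the mutable lense ...
        {
            let mut msg = buf.as_mut_slice().empty_data().unwrap();
            msg.ctr_mut().copy_from_slice(&7u64.to_le_bytes());
        }

        // ... and read it back through the read-only lense
        let msg = buf.as_slice().empty_data().unwrap();
        assert_eq!(msg.sid().len(), 4);
        assert_eq!(u64::from_le_bytes(msg.ctr().try_into().unwrap()), 7);
    }
}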
// Traits /////////////////////////////////////////////////////////////////////
pub trait WireMsg: std::fmt::Debug {
const MSG_TYPE: MsgType;
const MSG_TYPE_U8: u8 = Self::MSG_TYPE as u8;
const BYTES: usize;
}
// Constants //////////////////////////////////////////////////////////////////
pub const SESSION_ID_LEN: usize = 4;
pub const BISCUIT_ID_LEN: usize = 12;
pub const WIRE_ENVELOPE_LEN: usize = 1 + 3 + 16 + 16; // TODO verify this
/// Size required to fit any message in binary form
pub const MAX_MESSAGE_LEN: usize = 2500; // TODO fix this
/// Recognized message types
#[repr(u8)]
#[derive(Hash, PartialEq, Eq, PartialOrd, Ord, Debug, Clone, Copy)]
pub enum MsgType {
InitHello = 0x81,
RespHello = 0x82,
InitConf = 0x83,
EmptyData = 0x84,
DataMsg = 0x85,
CookieReply = 0x86,
}
impl TryFrom<u8> for MsgType {
type Error = RosenpassError;
fn try_from(value: u8) -> Result<Self, Self::Error> {
Ok(match value {
0x81 => MsgType::InitHello,
0x82 => MsgType::RespHello,
0x83 => MsgType::InitConf,
0x84 => MsgType::EmptyData,
0x85 => MsgType::DataMsg,
0x86 => MsgType::CookieReply,
_ => return Err(RosenpassError::InvalidMessageType(value)),
})
}
}
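// A small sketch of how a lense and `MsgType::try_from` might be combined when
// handling an incoming EmptyData packet; `empty_data_msg_type` is a
// hypothetical helper, not an established API of this module.
#[allow(dead_code)]
pub fn empty_data_msg_type(buf: &[u8]) -> Result<MsgType, RosenpassError> {
    // Make sure the buffer is long enough for an envelope carrying an
    // EmptyData payload, then view it through the generated lense
    Envelope::<(), EmptyData<()>>::check_size(buf.len())?;
    let envelope = buf.envelope::<EmptyData<()>>()?;
    // The first byte of the envelope encodes the message type
    MsgType::try_from(envelope.msg_type()[0])
}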
/// length in bytes of an unencrypted Biscuit (plain text)
pub const BISCUIT_PT_LEN: usize = Biscuit::<()>::LEN;
/// Length in bytes of an encrypted Biscuit (cipher text)
pub const BISCUIT_CT_LEN: usize = BISCUIT_PT_LEN + sodium::XAEAD_NONCE_LEN + sodium::XAEAD_TAG_LEN;
#[cfg(test)]
mod test_constants {
use crate::{
msgs::{BISCUIT_CT_LEN, BISCUIT_PT_LEN},
sodium,
};
#[test]
fn sodium_keysize() {
assert_eq!(sodium::KEY_SIZE, 32);
}
#[test]
fn biscuit_pt_len() {
assert_eq!(BISCUIT_PT_LEN, 2 * sodium::KEY_SIZE + 12);
}
#[test]
fn biscuit_ct_len() {
assert_eq!(
BISCUIT_CT_LEN,
BISCUIT_PT_LEN + sodium::XAEAD_NONCE_LEN + sodium::XAEAD_TAG_LEN
);
}
}


@@ -1,176 +0,0 @@
//! This module contains Traits and implementations for Key Encapsulation
//! Mechanisms (KEM). KEMs are the interface provided by almost all post-quantum
//! secure key exchange mechanisms.
//!
//! Conceptually KEMs are akin to public-key encryption, but instead of encrypting
//! arbitrary data, KEMs are limited to the transmission of keys, randomly chosen during
//! encapsulation.
//!
//! The [KEM] Trait describes the basic API offered by a Key Encapsulation
//! Mechanism. Two implementations for it are provided, [SKEM] and [EKEM].
use crate::{RosenpassError, RosenpassMaybeError};
/// Key Encapsulation Mechanism
///
/// The KEM interface defines three operations: Key generation, key encapsulation and key
/// decapsulation.
pub trait KEM {
/// Secret Key length
const SK_LEN: usize;
/// Public Key length
const PK_LEN: usize;
/// Ciphertext length
const CT_LEN: usize;
/// Shared Secret length
const SHK_LEN: usize;
/// Generate a keypair consisting of secret key (`sk`) and public key (`pk`)
///
/// `keygen() -> sk, pk`
fn keygen(sk: &mut [u8], pk: &mut [u8]) -> Result<(), RosenpassError>;
/// From a public key (`pk`), generate a shared key (`shk`, for local use)
/// and a cipher text (`ct`, to be sent to the owner of the `pk`).
///
/// `encaps(pk) -> shk, ct`
fn encaps(shk: &mut [u8], ct: &mut [u8], pk: &[u8]) -> Result<(), RosenpassError>;
/// From a secret key (`sk`) and a cipher text (`ct`) derive a shared key
/// (`shk`)
///
/// `decaps(sk, ct) -> shk`
fn decaps(shk: &mut [u8], sk: &[u8], ct: &[u8]) -> Result<(), RosenpassError>;
}
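// A sketch of how the three operations compose into a full exchange;
// `kem_round_trip` is a hypothetical helper, generic over any KEM
// implementation, with all buffers sized via the associated constants.
#[allow(dead_code)]
fn kem_round_trip<T: KEM>() -> Result<(), RosenpassError> {
    let mut sk = vec![0u8; T::SK_LEN];
    let mut pk = vec![0u8; T::PK_LEN];
    let mut ct = vec![0u8; T::CT_LEN];
    let mut shk_enc = vec![0u8; T::SHK_LEN];
    let mut shk_dec = vec![0u8; T::SHK_LEN];

    T::keygen(&mut sk, &mut pk)?; // sk, pk <- keygen()
    T::encaps(&mut shk_enc, &mut ct, &pk)?; // shk, ct <- encaps(pk)
    T::decaps(&mut shk_dec, &sk, &ct)?; // shk <- decaps(sk, ct)

    // Both parties now hold the same shared key
    assert_eq!(shk_enc, shk_dec);
    Ok(())
}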
/// A KEM that is secure against Chosen Ciphertext Attacks (CCA).
/// In the context of rosenpass this is used for static keys.
/// Uses [Classic McEliece](https://classic.mceliece.org/) 460896 from liboqs.
///
/// Classic McEliece is chosen because of its high security margin and its small
/// ciphertexts. The public keys are humongous, but (being static keys) they are never transmitted over
/// the wire, so this is not a big problem.
pub struct SKEM;
/// # Safety
///
/// This trait impl calls unsafe [oqs_sys] functions that write to byte
/// slices identified only by raw pointers. It must be ensured that the raw
/// pointers point into byte slices of sufficient length, to avoid UB through
/// overwriting of arbitrary data. This is checked in the following code before
/// the unsafe calls, and an early return with an Err occurs if the byte slice
/// size does not match the required size.
///
/// __Note__: This requirement is stricter than necessary; it would suffice
/// to only check that the buffers are big enough, allowing them to be even
/// bigger. However, from a correctness point of view it does not make sense to
/// allow bigger buffers.
impl KEM for SKEM {
const SK_LEN: usize = oqs_sys::kem::OQS_KEM_classic_mceliece_460896_length_secret_key as usize;
const PK_LEN: usize = oqs_sys::kem::OQS_KEM_classic_mceliece_460896_length_public_key as usize;
const CT_LEN: usize = oqs_sys::kem::OQS_KEM_classic_mceliece_460896_length_ciphertext as usize;
const SHK_LEN: usize =
oqs_sys::kem::OQS_KEM_classic_mceliece_460896_length_shared_secret as usize;
fn keygen(sk: &mut [u8], pk: &mut [u8]) -> Result<(), RosenpassError> {
RosenpassError::check_buffer_size(sk.len(), Self::SK_LEN)?;
RosenpassError::check_buffer_size(pk.len(), Self::PK_LEN)?;
unsafe {
oqs_sys::kem::OQS_KEM_classic_mceliece_460896_keypair(pk.as_mut_ptr(), sk.as_mut_ptr())
.to_rg_error()
}
}
fn encaps(shk: &mut [u8], ct: &mut [u8], pk: &[u8]) -> Result<(), RosenpassError> {
RosenpassError::check_buffer_size(shk.len(), Self::SHK_LEN)?;
RosenpassError::check_buffer_size(ct.len(), Self::CT_LEN)?;
RosenpassError::check_buffer_size(pk.len(), Self::PK_LEN)?;
unsafe {
oqs_sys::kem::OQS_KEM_classic_mceliece_460896_encaps(
ct.as_mut_ptr(),
shk.as_mut_ptr(),
pk.as_ptr(),
)
.to_rg_error()
}
}
fn decaps(shk: &mut [u8], sk: &[u8], ct: &[u8]) -> Result<(), RosenpassError> {
RosenpassError::check_buffer_size(shk.len(), Self::SHK_LEN)?;
RosenpassError::check_buffer_size(sk.len(), Self::SK_LEN)?;
RosenpassError::check_buffer_size(ct.len(), Self::CT_LEN)?;
unsafe {
oqs_sys::kem::OQS_KEM_classic_mceliece_460896_decaps(
shk.as_mut_ptr(),
ct.as_ptr(),
sk.as_ptr(),
)
.to_rg_error()
}
}
}
/// Implements a KEM that is secure against Chosen Plaintext Attacks (CPA).
/// In the context of rosenpass this is used for ephemeral keys.
/// Currently the implementation uses
/// [Kyber 512](https://openquantumsafe.org/liboqs/algorithms/kem/kyber) from liboqs.
///
/// This is used for ephemeral keys; since these are used only once, the first post-quantum
/// WireGuard paper argued that CPA security would be sufficient. Nonetheless we choose Kyber,
/// which provides CCA security, since there are no publicly vetted KEMs that provide
/// only CPA security.
pub struct EKEM;
/// # Safety
///
/// This trait impl calls unsafe [oqs_sys] functions that write to byte
/// slices identified only by raw pointers. It must be ensured that the raw
/// pointers point into byte slices of sufficient length, to avoid UB through
/// overwriting of arbitrary data. This is checked in the following code before
/// the unsafe calls, and an early return with an Err occurs if the byte slice
/// size does not match the required size.
///
/// __Note__: This requirement is stricter than necessary; it would suffice
/// to only check that the buffers are big enough, allowing them to be even
/// bigger. However, from a correctness point of view it does not make sense to
/// allow bigger buffers.
impl KEM for EKEM {
const SK_LEN: usize = oqs_sys::kem::OQS_KEM_kyber_512_length_secret_key as usize;
const PK_LEN: usize = oqs_sys::kem::OQS_KEM_kyber_512_length_public_key as usize;
const CT_LEN: usize = oqs_sys::kem::OQS_KEM_kyber_512_length_ciphertext as usize;
const SHK_LEN: usize = oqs_sys::kem::OQS_KEM_kyber_512_length_shared_secret as usize;
fn keygen(sk: &mut [u8], pk: &mut [u8]) -> Result<(), RosenpassError> {
RosenpassError::check_buffer_size(sk.len(), Self::SK_LEN)?;
RosenpassError::check_buffer_size(pk.len(), Self::PK_LEN)?;
unsafe {
oqs_sys::kem::OQS_KEM_kyber_512_keypair(pk.as_mut_ptr(), sk.as_mut_ptr())
.to_rg_error()
}
}
fn encaps(shk: &mut [u8], ct: &mut [u8], pk: &[u8]) -> Result<(), RosenpassError> {
RosenpassError::check_buffer_size(shk.len(), Self::SHK_LEN)?;
RosenpassError::check_buffer_size(ct.len(), Self::CT_LEN)?;
RosenpassError::check_buffer_size(pk.len(), Self::PK_LEN)?;
unsafe {
oqs_sys::kem::OQS_KEM_kyber_512_encaps(
ct.as_mut_ptr(),
shk.as_mut_ptr(),
pk.as_ptr(),
)
.to_rg_error()
}
}
fn decaps(shk: &mut [u8], sk: &[u8], ct: &[u8]) -> Result<(), RosenpassError> {
RosenpassError::check_buffer_size(shk.len(), Self::SHK_LEN)?;
RosenpassError::check_buffer_size(sk.len(), Self::SK_LEN)?;
RosenpassError::check_buffer_size(ct.len(), Self::CT_LEN)?;
unsafe {
oqs_sys::kem::OQS_KEM_kyber_512_decaps(
shk.as_mut_ptr(),
ct.as_ptr(),
sk.as_ptr(),
)
.to_rg_error()
}
}
}
