mirror of
https://github.com/trustedsec/hate_crack.git
synced 2026-03-12 21:23:05 -07:00
Merge pull request #79 from trustedsec/dev/fresh-install-fixes
Fresh install fixes, Lima VM E2E tests, princeprocessor submodule
81
.claude/agents/readme-documentarian.md
Normal file
@@ -0,0 +1,81 @@
---
name: readme-documentarian
description: "Use this agent when code changes have been made that could affect documentation, particularly README files. This includes after writing new features, modifying existing functionality, changing APIs, updating dependencies, altering project structure, or making any commit. The agent should be proactively invoked after meaningful code changes to ensure documentation stays in sync.\\n\\nExamples:\\n\\n- Example 1:\\n  user: \"Add a new CLI flag --verbose to the main script\"\\n  assistant: \"Here is the updated CLI code with the --verbose flag:\"\\n  <code changes made>\\n  assistant: \"Now let me use the readme-documentarian agent to check if the README needs to be updated to reflect this new CLI flag.\"\\n  <Task tool invoked with readme-documentarian agent>\\n\\n- Example 2:\\n  user: \"Refactor the authentication module to use OAuth2 instead of basic auth\"\\n  assistant: \"I've refactored the authentication module. Here are the changes:\"\\n  <code changes made>\\n  assistant: \"Since this is a significant change to how authentication works, let me use the readme-documentarian agent to update the documentation accordingly.\"\\n  <Task tool invoked with readme-documentarian agent>\\n\\n- Example 3:\\n  user: \"Add pandas as a dependency and use it in the data processing pipeline\"\\n  assistant: \"I've added pandas and integrated it into the data processing pipeline.\"\\n  <code changes made>\\n  assistant: \"A new dependency was added, so let me use the readme-documentarian agent to ensure the README reflects the updated requirements and usage.\"\\n  <Task tool invoked with readme-documentarian agent>\\n\\n- Example 4:\\n  user: \"Fix the bug where the config file path isn't resolved correctly\"\\n  assistant: \"Here's the bug fix:\"\\n  <code changes made>\\n  assistant: \"Let me use the readme-documentarian agent to check if any documentation references this config file behavior and needs a correction.\"\\n  <Task tool invoked with readme-documentarian agent>"
tools: Edit, Write, NotebookEdit, Read
model: haiku
color: green
---

You are an expert technical documentarian with deep experience in open-source project documentation, developer experience, and README best practices. You have an obsessive attention to detail when it comes to keeping documentation accurate and in sync with code.

## Core Responsibility

Your job is to review recent code changes and ensure all README files (and related documentation) accurately reflect the current state of the project. You are proactive - you do not wait to be asked. Whenever code changes are made, you audit the documentation.

## Workflow

1. **Identify what changed**: Read the recent code changes, diffs, or newly written code. Understand what was added, removed, or modified.

2. **Find all README files**: Search the project for all README.md (and README.rst, docs/, etc.) files at every level of the directory tree - not just the root.

3. **Audit documentation against changes**: For each relevant README, check whether the changes affect any of the following sections:
- Project description or overview
- Installation instructions
- Dependencies and requirements
- Usage examples and CLI flags/arguments
- API documentation
- Configuration options
- Environment variables
- Project structure descriptions
- Contributing guidelines
- Changelog or version notes
- Badge URLs or CI references
- License references

4. **Make precise updates**: If documentation is outdated, update it. Be surgical - change only what needs changing. Preserve the existing tone, style, and formatting conventions of the README.

5. **Report findings**: After your audit, provide a brief summary of what you checked and what (if anything) you updated.

## Documentation Standards

- Never use the em dash character. Always use the regular dash (-).
- Be concise and direct in documentation text. No filler.
- Use consistent heading levels and formatting with the existing document.
- Keep code examples runnable and accurate.
- If the project uses `uv` for Python package management, ensure installation instructions reference `uv` (not pip) unless pip instructions are also warranted.
- If the project uses `ruff`, `mypy`, `pytest`, or `pre-commit`, ensure these are accurately documented.
- Pin to the conventions already established in the README - do not impose a new style.

## Decision Framework

- **Update**: When code changes directly contradict or invalidate existing documentation.
- **Add**: When new features, flags, dependencies, or behaviors have no corresponding documentation.
- **Remove**: When documented features or behaviors no longer exist in the code.
- **Leave alone**: When changes are purely internal (refactors, performance tweaks, internal variable renames) with no user-facing impact. Do not make unnecessary edits.

## Edge Cases

- If you are unsure whether a change is user-facing, err on the side of checking and noting it in your summary rather than silently ignoring it.
- If a README references versioning, do not bump version numbers unless explicitly instructed - just flag it.
- If documentation references external links, do not validate them unless the change specifically involves URL updates.
- If there is no README at all and the project clearly needs one, flag this and offer to create one.

## Quality Assurance

Before finalizing any README update:
1. Re-read the full README to ensure your changes flow naturally with surrounding content.
2. Verify any code examples or commands you wrote are syntactically correct.
3. Ensure no orphaned references remain (e.g., referencing a removed feature elsewhere in the doc).
4. Confirm heading hierarchy is consistent.

## Update your agent memory

As you discover documentation patterns, project structure, naming conventions, and recurring documentation gaps, update your agent memory. This builds institutional knowledge across conversations. Write concise notes about what you found and where.

Examples of what to record:
- README structure and style conventions used in this project
- Which directories have their own README files
- Common documentation gaps you have previously identified and fixed
- Project-specific terminology and naming patterns
- Dependencies and tools the project uses (uv, ruff, mypy, pytest, pre-commit, etc.)
- Sections that frequently need updates when certain types of changes are made
23
.claude/hooks/doc-audit-trigger.sh
Executable file
@@ -0,0 +1,23 @@
#!/bin/bash
# PostToolUse hook: detects the documentation audit warning from prek post-commit
# hooks and injects a prompt for Claude to invoke the readme-documentarian agent.
set -euo pipefail

input=$(cat)

stdout=$(echo "$input" | python3 -c "
import sys, json
try:
    d = json.load(sys.stdin)
    resp = d.get('tool_response', {})
    if isinstance(resp, dict):
        print(resp.get('stdout', ''))
    else:
        print(str(resp))
except Exception:
    pass
")

if echo "$stdout" | grep -q '\[Documentation Audit\].*documentation was not updated'; then
    printf '{"additionalContext": "The post-commit documentation audit flagged that code changed but README.md was not updated. Invoke the readme-documentarian agent now to review the recent changes and update the documentation."}\n'
fi
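The JSON contract the hook relies on can be sketched in plain Python. The payload shape (a `tool_response` object carrying the Bash tool's captured `stdout`) and the grep pattern are taken directly from the script above; the sample audit message below is hypothetical.

```python
import json
import re

# Marker the hook greps for in the Bash tool's captured stdout (same regex as the script).
AUDIT_PATTERN = re.compile(r"\[Documentation Audit\].*documentation was not updated")

def extract_stdout(payload: str) -> str:
    """Mirror of the inline python3 -c snippet: pull stdout out of tool_response."""
    try:
        d = json.loads(payload)
    except Exception:
        return ""
    resp = d.get("tool_response", {})
    return resp.get("stdout", "") if isinstance(resp, dict) else str(resp)

def hook_output(payload: str) -> str:
    """Return the additionalContext JSON the hook would print, or an empty string."""
    if AUDIT_PATTERN.search(extract_stdout(payload)):
        return json.dumps({"additionalContext": "Invoke the readme-documentarian agent."})
    return ""

# Hypothetical PostToolUse payload after a `git commit` Bash call:
sample = json.dumps({
    "tool_response": {"stdout": "[Documentation Audit] WARNING: documentation was not updated"}
})
```

The `hook_output` branch corresponds to the `grep -q` test: the hook stays silent on non-matching output, so Claude only gets the extra context when the audit warning actually fired.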
15
.claude/settings.json
Normal file
@@ -0,0 +1,15 @@
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": ".claude/hooks/doc-audit-trigger.sh"
          }
        ]
      }
    ]
  }
}
@@ -1,5 +1,4 @@
.git
.gitmodules
.gitignore
.DS_Store
.idea
8
.gitmodules
vendored
@@ -10,3 +10,11 @@
	path = omen
	url = https://github.com/RUB-SysSec/OMEN.git
	ignore = dirty
[submodule "hashcat"]
	path = hashcat
	url = https://github.com/hashcat/hashcat.git
	ignore = dirty
[submodule "princeprocessor"]
	path = princeprocessor
	url = https://github.com/hashcat/princeprocessor.git
	ignore = dirty
@@ -7,6 +7,7 @@ RUN apt-get update \
    build-essential \
    ca-certificates \
    curl \
    git \
    gzip \
    hashcat \
    ocl-icd-libopencl1 \
71
Makefile
@@ -1,17 +1,27 @@
.DEFAULT_GOAL := submodules
.DEFAULT_GOAL := install
.PHONY: install reinstall update dev-install dev-reinstall clean hashcat-utils submodules submodules-pre vendor-assets clean-vendor test coverage lint check ruff ty

hashcat-utils: submodules
	$(MAKE) -C hashcat-utils

submodules:
	@# Initialize submodules when present
	@if [ -f .gitmodules ] && command -v git >/dev/null 2>&1; then \
	@# Initialize submodules only when inside a git repo (not in Docker/CI copies)
	@if [ -d .git ] && [ -f .gitmodules ] && command -v git >/dev/null 2>&1; then \
		git submodule update --init --recursive; \
	fi; \
	$(MAKE) submodules-pre; \
	if [ -f .gitmodules ] && command -v git >/dev/null 2>&1; then \
		for path in $$(git config --file .gitmodules --get-regexp path | awk '{print $$2}'); do \
			if [ "$$path" = "hashcat" ] && command -v hashcat >/dev/null 2>&1; then \
				echo "hashcat already installed in PATH, skipping submodule compilation"; \
				continue; \
			fi; \
			if [ "$$path" = "princeprocessor" ]; then \
				$(MAKE) -C "$$path/src" CFLAGS_LINUX64="-W -Wall -std=c99 -O2 -s -DLINUX"; \
				if [ -f "$$path/src/pp64.bin" ]; then cp "$$path/src/pp64.bin" "$$path/"; \
				elif [ -f "$$path/src/ppAppleArm64.bin" ]; then cp "$$path/src/ppAppleArm64.bin" "$$path/pp64.bin"; fi; \
				continue; \
			fi; \
			if [ -f "$$path/Makefile" ] || [ -f "$$path/makefile" ]; then \
				$(MAKE) -C "$$path"; \
			fi; \
@@ -22,6 +32,9 @@ submodules:
submodules-pre:
	@# Pre-step: basic sanity checks and file generation before building submodules.
	@# Ensure required directories exist (whether as submodules or vendored copies).
	@# hashcat is optional here: submodule is compiled if present, else PATH hashcat is used.
	@test -d hashcat || command -v hashcat >/dev/null 2>&1 || { \
		echo "Error: hashcat not found. Either initialize the hashcat submodule or install hashcat."; exit 1; }
	@test -d hashcat-utils || { echo "Error: missing required directory: hashcat-utils"; exit 1; }
	@test -d princeprocessor || { echo "Error: missing required directory: princeprocessor"; exit 1; }
	@test -d omen || { echo "Warning: missing directory: omen (OMEN attacks will not be available)"; }
@@ -35,7 +48,32 @@ vendor-assets:
		exit 1; \
	fi
	@echo "Syncing assets into package for uv tool install..."
	@rm -rf hate_crack/hashcat-utils hate_crack/princeprocessor hate_crack/omen
	@rm -rf hate_crack/hashcat hate_crack/hashcat-utils hate_crack/princeprocessor hate_crack/omen
	@mkdir -p hate_crack/hashcat
	@if [ -f hashcat/hashcat ]; then \
		echo "Vendoring compiled hashcat submodule binary..."; \
		cp hashcat/hashcat hate_crack/hashcat/hashcat; \
		[ -d hashcat/rules ] && cp -R hashcat/rules hate_crack/hashcat/rules || true; \
		[ -d hashcat/OpenCL ] && cp -R hashcat/OpenCL hate_crack/hashcat/OpenCL || true; \
		[ -d hashcat/modules ] && cp -R hashcat/modules hate_crack/hashcat/modules || true; \
	elif [ -f hashcat/hashcat.app ]; then \
		echo "Vendoring compiled hashcat submodule binary (macOS app)..."; \
		cp hashcat/hashcat.app hate_crack/hashcat/hashcat; \
		[ -d hashcat/rules ] && cp -R hashcat/rules hate_crack/hashcat/rules || true; \
		[ -d hashcat/OpenCL ] && cp -R hashcat/OpenCL hate_crack/hashcat/OpenCL || true; \
		[ -d hashcat/modules ] && cp -R hashcat/modules hate_crack/hashcat/modules || true; \
	elif command -v hashcat >/dev/null 2>&1; then \
		HASHCAT_PATH=$$(command -v hashcat); \
		echo "Using system hashcat from $$HASHCAT_PATH..."; \
		cp "$$HASHCAT_PATH" hate_crack/hashcat/hashcat; \
		HASHCAT_DIR=$$(dirname $$(realpath "$$HASHCAT_PATH")); \
		[ -d "$$HASHCAT_DIR/rules" ] && cp -R "$$HASHCAT_DIR/rules" hate_crack/hashcat/rules || true; \
		[ -d "$$HASHCAT_DIR/OpenCL" ] && cp -R "$$HASHCAT_DIR/OpenCL" hate_crack/hashcat/OpenCL || true; \
		[ -d "$$HASHCAT_DIR/modules" ] && cp -R "$$HASHCAT_DIR/modules" hate_crack/hashcat/modules || true; \
	else \
		echo "Error: hashcat not found. Either compile the hashcat submodule or install hashcat."; \
		exit 1; \
	fi
	@cp -R hashcat-utils hate_crack/
	@cp -R princeprocessor hate_crack/
	@if [ -d omen ]; then \
@@ -46,23 +84,31 @@ vendor-assets:
clean-vendor:
	@echo "Cleaning up vendored assets from working tree..."
	@rm -rf hate_crack/hashcat-utils hate_crack/princeprocessor hate_crack/omen
	@rm -rf hate_crack/hashcat hate_crack/hashcat-utils hate_crack/princeprocessor hate_crack/omen

install: submodules vendor-assets
	@echo "Detecting OS and installing dependencies..."
	@if [ "$(shell uname)" = "Darwin" ]; then \
		echo "Detected macOS"; \
		xcode-select -p >/dev/null 2>&1 || { \
			echo "Xcode Command Line Tools not found. Installing..."; \
			xcode-select --install; \
			echo "Re-run 'make' after the Xcode CLT installation completes."; \
			exit 1; \
		}; \
		command -v brew >/dev/null 2>&1 || { echo >&2 "Homebrew not found. Please install Homebrew first: https://brew.sh/"; exit 1; }; \
		brew install p7zip transmission-cli; \
	elif [ -f /etc/debian_version ]; then \
		echo "Detected Debian/Ubuntu"; \
		command -v gcc >/dev/null 2>&1 || { sudo apt-get update && sudo apt-get install -y build-essential; }; \
		sudo apt-get update; \
		sudo apt-get install -y p7zip-full transmission-cli; \
	else \
		echo "Unsupported OS. Please install dependencies manually."; \
		exit 1; \
	fi
	@uv tool install -e . --force --reinstall
	@command -v uv >/dev/null 2>&1 || { echo "uv not found. Installing uv..."; curl -LsSf https://astral.sh/uv/install.sh | sh; }
	@uv tool install -e .

update: submodules vendor-assets
	@uv tool install -e . --force --reinstall
@@ -78,16 +124,25 @@ dev-reinstall: uninstall dev-install
clean:
	-$(MAKE) -C hashcat-utils clean
	-$(MAKE) -C hashcat clean
	-@if [ -f .gitmodules ]; then git submodule deinit -f --all; fi
	rm -rf .pytest_cache .ruff_cache build dist *.egg-info
	rm -rf ~/.cache/uv
	find . -name "__pycache__" -type d -prune -exec rm -rf {} +

test:
	uv run pytest -v
	@# Auto-set HATE_CRACK_SKIP_INIT when hashcat-utils binaries are not built
	@if [ -z "$$HATE_CRACK_SKIP_INIT" ] && [ ! -f hate_crack/hashcat-utils/bin/expander.bin ] && [ ! -f hate_crack/hashcat-utils/bin/expander.app ]; then \
		echo "[test] hashcat-utils not built, setting HATE_CRACK_SKIP_INIT=1"; \
		export HATE_CRACK_SKIP_INIT=1; \
	fi; \
	HATE_CRACK_SKIP_INIT=$${HATE_CRACK_SKIP_INIT:-1} uv run pytest -v

coverage:
	uv run pytest --cov=hate_crack --cov-report=term-missing
	@if [ -z "$$HATE_CRACK_SKIP_INIT" ] && [ ! -f hate_crack/hashcat-utils/bin/expander.bin ] && [ ! -f hate_crack/hashcat-utils/bin/expander.app ]; then \
		echo "[coverage] hashcat-utils not built, setting HATE_CRACK_SKIP_INIT=1"; \
	fi; \
	HATE_CRACK_SKIP_INIT=$${HATE_CRACK_SKIP_INIT:-1} uv run pytest --cov=hate_crack --cov-report=term-missing

ruff:
	uv run ruff check hate_crack
190
README.md
@@ -19,37 +19,44 @@ The pytest workflow tests across Python 3.9-3.14 via a matrix build.
## Installation

### 1. Install hashcat
Get the latest hashcat binaries (https://hashcat.net/hashcat/)
### 1. Install hashcat (Optional)

```bash
git clone https://github.com/hashcat/hashcat.git
cd hashcat/
make
make install
```
Hashcat is included as a git submodule and will be compiled automatically. If you already have hashcat installed and in your PATH, the build step will skip the submodule compilation.

To manually install hashcat instead (e.g., system package or pre-built binary), ensure `hashcat` is available in your PATH or set `hcatPath` in `config.json`.

### 2. Download hate_crack

Clone with submodules (required for hashcat-utils, princeprocessor, and optionally omen):

```bash
git clone --recurse-submodules https://github.com/trustedsec/hate_crack.git
cd hate_crack
```

* Customize binary and wordlist paths in "config.json"
* The hashcat-utils repo is a submodule. If you didn't clone with --recurse-submodules then initialize with:
If you cloned without submodules, initialize them:

```bash
git submodule update --init --recursive
```

Then customize configuration in `config.json` if needed (wordlist paths, API keys, etc.). Most users can skip this step as default paths work out-of-the-box.

### 3. Install dependencies and hate_crack

The easiest way is to use `make install` which auto-detects your OS and installs:
The easiest way is to run `make` (or `make install`), which auto-detects your OS and installs:
- External dependencies (p7zip, transmission-cli)
- Submodules (hashcat-utils, princeprocessor, and optionally hashcat if not in PATH)
- The Python tool via uv

```bash
make install
make
```

This is idempotent - it skips tools already installed. To force a clean reinstall:

```bash
make reinstall
```

**Or install dependencies manually:**
@@ -101,38 +108,37 @@ This project depends on and is inspired by a number of external projects and ser
-------------------------------------------------------------------
## Usage
You can run hate_crack as a tool, as a script, or via `uv run`:

After installing with `make`, run hate_crack from anywhere:

```bash
uv run hate_crack.py
# or
uv run hate_crack.py <hash_file> <hash_type> [options]
hate_crack
# or with arguments:
hate_crack <hash_file> <hash_type> [options]
```

Alternatively, run via `uv`:

```bash
uv run hate_crack.py <hash_file> <hash_type>
```

### Run as a tool (recommended)
Install once from the repo root:

Install using `make` from the repository root - this builds submodules and bundles assets:

```bash
uv tool install .
cd /path/to/hate_crack
make
hate_crack
```

**Important:** The tool needs access to `hashcat-utils` and `princeprocessor` subdirectories from the hate_crack repository.

The tool will automatically search for these assets in:
- The directory that contains the hate_crack checkout (and includes `config.json`, `hashcat-utils/`, and `princeprocessor/`)
- The installed package (bundled by `make install`)
- Current working directory and parent directory
- `~/hate_crack`, `~/hate-crack`, or `~/.hate_crack`
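The lookup described above amounts to a first-match scan over candidate directories. The sketch below is an illustrative reconstruction of that behavior, not the project's actual resolver: the function names, the exact probe order, and the marker subdirectories checked are assumptions.

```python
import os

def candidate_dirs(cwd, home, package_dir):
    """Directories probed in order (illustrative ordering, per the list above)."""
    return [
        package_dir,                       # assets bundled into the installed package
        cwd,                               # current working directory
        os.path.dirname(cwd),              # parent directory
        os.path.join(home, "hate_crack"),
        os.path.join(home, "hate-crack"),
        os.path.join(home, ".hate_crack"),
    ]

def find_asset_root(dirs, exists=os.path.isdir):
    """Return the first directory containing the required asset subdirectories."""
    for d in dirs:
        if exists(os.path.join(d, "hashcat-utils")) and exists(os.path.join(d, "princeprocessor")):
            return d
    return None
```

Injecting `exists` makes the scan testable without touching the filesystem; the real code presumably just checks the disk directly.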
**Option 1 - Run from repository directory:**
```bash
cd /path/to/hate_crack
hate_crack <hash_file> <hash_type>
```

Run `make install` to install the tool with all assets bundled into the package.

**Note:** The `hcatPath` in `config.json` is for the hashcat binary location (optional if hashcat is in PATH), not for hate_crack assets.
**Note:** The `hcatPath` in `config.json` is for the hashcat binary location only (optional if hashcat is in PATH). Hate_crack assets (hashcat-utils, princeprocessor, omen) are loaded from the repository directory and bundled automatically by `make install`.
### Run as a script
The script uses a `uv` shebang. Make it executable and run:
@@ -172,59 +178,84 @@ cd /path/to/hate_crack # the repository checkout
make install
```

**Example config.json:**
**Default configuration (config.json.example):**

Most users can use defaults without customization:
- `hcatWordlists`: `./wordlists` (relative to repo root or HOME/.hate_crack)
- `rules_directory`: `./hashcat/rules` (includes submodule rules)
- `hcatTuning`: `` (empty string - no default tuning flags)

**Example config.json customizations:**
```json
{
  "hcatPath": "/usr/local/bin",          # Location of hashcat binary (or omit if in PATH)
  "hcatBin": "hashcat",                  # Hashcat binary name
  "hcatPath": "/usr/local/bin",          # Location of hashcat binary (optional, auto-detected from PATH)
  "hcatBin": "hashcat",                  # Hashcat binary name
  "hcatWordlists": "./wordlists",        # Dictionary wordlist directory (relative or absolute)
  "rules_directory": "./hashcat/rules",  # Rules directory (relative or absolute)
  "hcatTuning": "",                      # Additional hashcat flags (empty by default)
  ...
}
```

**Configuration loading:**
- Missing config keys are automatically backfilled from `config.json.example` on startup
- Config is searched in multiple locations: repo root, current working directory, `~/.hate_crack`, `/opt/hate_crack`
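The backfill behavior described above amounts to a dict merge where user values win and example values fill the gaps. A minimal sketch of that behavior (the real implementation lives in hate_crack and may differ; the function name is an assumption):

```python
def backfill_config(user_cfg, example_cfg):
    """Return the user config with any missing keys filled from config.json.example."""
    merged = dict(example_cfg)   # start from the example defaults
    merged.update(user_cfg)      # user-provided values take precedence
    return merged

# Illustrative values mirroring the defaults documented above:
example = {"hcatBin": "hashcat", "hcatTuning": "", "hcatWordlists": "./wordlists"}
user = {"hcatWordlists": "/opt/wordlists"}
cfg = backfill_config(user, example)
```

The order matters: copying the example first and updating with the user config second is what makes a key present in both resolve to the user's value.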
-------------------------------------------------------------------
### Makefile helpers
Install OS dependencies + tool (auto-detects macOS vs Debian/Ubuntu):
### Makefile Targets

**Default (full installation)** - builds submodules, vendors assets, installs dependencies and tool:

```bash
make
# or explicitly:
make install
```

Rebuild submodules and reinstall the tool (quick update after pulling changes):
This is idempotent - it skips tools already installed.

```bash
make update
```

Reinstall the Python tool in-place (keeps OS deps as-is):
**Force clean reinstall:**

```bash
make reinstall
```

Uninstall OS dependencies + tool:
**Quick update** - rebuilds submodules and reinstalls tool (after pulling changes):

```bash
make update
```

**Uninstall** - removes OS dependencies and tool:

```bash
make uninstall
```

Build hashcat-utils only:
**Build hashcat-utils only:**

```bash
make hashcat-utils
```

Clean build/test artifacts:

```bash
make clean
```

Run the test suite:
**Run tests** - automatically handles HATE_CRACK_SKIP_INIT when needed:

```bash
make test
```

**Coverage report:**

```bash
make coverage
```

**Clean build/test artifacts:**

```bash
make clean
```

-------------------------------------------------------------------
## Development
@@ -265,37 +296,57 @@ The project uses GitHub Actions to automatically run quality checks on every pus
### Running Linters and Type Checks

Before pushing changes, run these checks locally to catch issues early:
Before pushing changes, run these checks locally. Use `make lint` for everything, or run individual checks:

**Ruff (linting and formatting):**
```bash
.venv/bin/ruff check hate_crack
make ruff
# or manually:
uv run ruff check hate_crack
```

Auto-fix issues:
```bash
.venv/bin/ruff check --fix hate_crack
uv run ruff format hate_crack
uv run ruff check --fix hate_crack
```

**ty (type checking):**
```bash
.venv/bin/ty check hate_crack
make ty
# or manually:
uv run ty check hate_crack
```

**Run all checks together:**
```bash
.venv/bin/ruff check hate_crack && .venv/bin/ty check hate_crack && echo "✓ All checks passed"
make lint
```

### Running Tests

Tests auto-detect when submodules are not built and set `HATE_CRACK_SKIP_INIT=1` automatically.

```bash
.venv/bin/pytest
make test
```

Or run pytest directly:

```bash
uv run pytest -v
```

With coverage:

```bash
.venv/bin/pytest --cov=hate_crack
make coverage
```

Or with pytest:

```bash
uv run pytest --cov=hate_crack
```

### Git Hooks (prek)
@@ -580,9 +631,7 @@ Select a task:
```
-------------------------------------------------------------------
#### Quick Crack
* Runs a dictionary attack using all wordlists configured in your "hcatOptimizedWordlists" path
and optionally applies a rule that can be selected from a list by ID number. Multiple rules can be selected by using a
comma separated list, and chains can be created by using the '+' symbol.
Runs a dictionary attack using all wordlists configured in your `hcatWordlists` path and optionally applies rules. Multiple rules can be selected by comma-separated list, and chains can be created with the '+' symbol.

```
Which rule(s) would you like to run?
@@ -600,18 +649,18 @@ Choose wisely:
#### Extensive Pure_Hate Methodology Crack
Runs several attack methods provided by Martin Bos (formerly known as pure_hate)
Runs several attack methods provided by Martin Bos (formerly known as pure_hate):
* Brute Force Attack (7 characters)
* Dictionary Attack
  * All wordlists in "hcatOptimizedWordlists" with "best64.rule"
  * wordlists/rockyou.txt with "d3ad0ne.rule"
  * wordlists/rockyou.txt with "T0XlC.rule"
  * All wordlists in `hcatWordlists` with `best64.rule`
  * `rockyou.txt` with `d3ad0ne.rule`
  * `rockyou.txt` with `T0XlC.rule`
* Top Mask Attack (Target Time = 4 Hours)
* Fingerprint Attack
* Combinator Attack
* Hybrid Attack
* Extra - Just For Good Measure
  - Runs a dictionary attack using wordlists/rockyou.txt with chained "combinator.rule" and "InsidePro-PasswordsPro.rule" rules
  - Runs a dictionary attack using `rockyou.txt` with chained `combinator.rule` and `InsidePro-PasswordsPro.rule` rules

#### Brute Force Attack
Brute forces all characters with the choice of a minimum and maximum password length.
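In raw hashcat terms, the brute force described above corresponds to an `-a 3` mask attack over all characters (`?a`) with `--increment` bounded by the chosen lengths. The command builder below is an illustration of that mapping, not hate_crack's actual code:

```python
def brute_force_cmd(hash_file, hash_type, min_len, max_len):
    """Build a hashcat -a 3 (mask) command covering all chars from min_len to max_len."""
    mask = "?a" * max_len  # ?a = all printable chars; --increment walks shorter lengths
    return [
        "hashcat", "-m", str(hash_type), "-a", "3",
        "--increment",
        "--increment-min", str(min_len),
        "--increment-max", str(max_len),
        hash_file, mask,
    ]

# e.g. NTLM (-m 1000) hashes, lengths 1 through 7:
cmd = brute_force_cmd("hashes.txt", 1000, 1, 7)
```

`--increment`, `--increment-min`, and `--increment-max` are standard hashcat flags; the hash file name here is a placeholder.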
@@ -651,8 +700,7 @@ https://hashcat.net/events/p14-trondheim/prince-attack.pdf
Runs a PRINCE attack using wordlists/rockyou.txt

#### YOLO Combinator Attack
Runs a continuous combinator attack using random wordlists from the
optimized wordlists for the left and right sides.
Runs a continuous combinator attack using random wordlists from the configured wordlists directory for the left and right sides.
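The YOLO loop described above can be sketched as repeated `-a 1` (combinator) runs over randomly chosen left and right wordlists. Illustrative only; the function name and wordlist names are assumptions, and the real code loops continuously:

```python
import random

def yolo_round(hash_file, hash_type, wordlists, rng=random):
    """One round: pick random left and right wordlists for a hashcat -a 1 run."""
    left = rng.choice(wordlists)
    right = rng.choice(wordlists)   # left and right may coincide
    return ["hashcat", "-m", str(hash_type), "-a", "1", hash_file, left, right]

cmd = yolo_round("hashes.txt", 0, ["rockyou.txt", "top1000.txt"], rng=random.Random(1))
```

Passing a seeded `random.Random` makes a round reproducible for testing; in actual use the default module-level RNG would be fine.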
#### Middle Combinator Attack
https://jeffh.net/2018/04/26/combinator_methods/
@@ -678,10 +726,10 @@ https://jeffh.net/2018/04/26/combinator_methods/
#### Bandrel Methodology

* Prompts for input of comma separated names and then creates a pseudo hybrid attack by capitalizing the first letter
and adding up to six additional characters at the end. Each word is limited to a total of five minutes.
- Built in additional common words including seasons, months has been included as a customizable config.json entry
- The default five minute time limit is customizable via the config.json
Prompts for comma-separated names and creates a pseudo hybrid attack by capitalizing the first letter and adding up to six additional characters at the end. Each word is limited to a total of five minutes.

- Built-in common words (seasons, months) included as a customizable `config.json` entry (`bandrel_common_basedwords`)
- The default five-minute time limit is customizable via `bandrelmaxruntime` in `config.json`
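In raw hashcat terms this resembles one hybrid wordlist+mask run (`-a 6`) per name: capitalize the word, append up to six `?a` positions with `--increment`, and cap each run with `--runtime`. A sketch under those assumptions - not the project's actual code; the 300-second default stands in for `bandrelmaxruntime`:

```python
def bandrel_cmds(names, hash_file, hash_type, max_runtime=300):
    """One hybrid attack per name: Capitalized word + up to six appended chars."""
    cmds = []
    for name in names:
        word = name.strip().capitalize()
        # In practice the word would be written to a temp wordlist file;
        # it is shown inline here for clarity.
        cmds.append([
            "hashcat", "-m", str(hash_type), "-a", "6",
            "--increment", "--increment-min", "1", "--increment-max", "6",
            "--runtime", str(max_runtime),
            hash_file, word, "?a?a?a?a?a?a",
        ])
    return cmds

cmds = bandrel_cmds("winter, acme".split(","), "hashes.txt", 1000)
```

`--runtime` is the real hashcat flag for a per-run time cap in seconds, which is how a five-minute-per-word limit would be enforced.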
#### Loopback Attack
https://hashcat.net/wiki/doku.php?id=loopback_attack
26
TESTING.md
@@ -90,6 +90,7 @@ By default, external service checks are skipped. Enable them explicitly:
- `HATE_CRACK_RUN_LIVE_HASHVIEW_TESTS=1` — run live Hashview wordlist upload tests
- `HATE_CRACK_RUN_E2E=1` — run end-to-end local installation tests
- `HATE_CRACK_RUN_DOCKER_TESTS=1` — run Docker-based end-to-end tests
- `HATE_CRACK_RUN_LIMA_TESTS=1` — run Lima VM-based end-to-end tests (requires Lima installed)

When `HASHMOB_TEST_REAL` is enabled, tests will still skip if Hashmob returns errors like HTTP 523 (origin unreachable).
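These toggles follow the standard pytest skip-gating pattern. A hedged sketch of how such a gate is typically wired (the project's actual test code may differ; the helper name is an assumption):

```python
import os

def enabled(var):
    """True when an opt-in toggle like HATE_CRACK_RUN_LIMA_TESTS is set to '1'."""
    return os.environ.get(var) == "1"

# Typical usage at the top of a test module (pytest assumed):
#   import pytest
#   pytestmark = pytest.mark.skipif(
#       not enabled("HATE_CRACK_RUN_LIMA_TESTS"),
#       reason="set HATE_CRACK_RUN_LIMA_TESTS=1 to run Lima VM tests",
#   )
```

Module-level `pytestmark` gates every test in the file at collection time, so the expensive setup never runs unless the variable is explicitly set.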
@@ -110,6 +111,7 @@ Highlights:
7. UI menu options (all attack modes)
8. Hashcat-utils submodule verification
9. Docker and E2E installation tests (opt-in)
10. Lima VM installation tests (opt-in)

## Benefits
@@ -151,8 +153,32 @@ HATE_CRACK_RUN_E2E=1 uv run pytest tests/test_e2e_local_install.py -v
# Run Docker tests
HATE_CRACK_RUN_DOCKER_TESTS=1 uv run pytest tests/test_docker_script_install.py -v

# Run Lima VM tests
# Prerequisite: brew install lima
HATE_CRACK_RUN_LIMA_TESTS=1 uv run pytest tests/test_lima_vm_install.py -v
```

## Lima VM Tests

`tests/test_lima_vm_install.py` runs hate_crack inside a real Ubuntu 24.04 VM via [Lima](https://lima-vm.io/). Unlike the Docker tests, this exercises a real kernel and full Ubuntu userspace, giving higher confidence that installation works on the distros users actually run.

**Prerequisites:**

```bash
brew install lima
```

**Run:**

```bash
HATE_CRACK_RUN_LIMA_TESTS=1 uv run pytest tests/test_lima_vm_install.py -v
```

**Note:** The first run takes several minutes - the VM provision script runs `apt-get install` for hashcat and all build dependencies. Subsequent runs on the same machine are faster if Lima caches the base image.

The VM is created with a unique name per test session and deleted automatically in teardown. To verify cleanup: `limactl list`.
## Note on Real API Testing
|
||||
|
||||
While these mocked tests validate the code logic, you may still want to occasionally run integration tests against a real Hashview instance to ensure the API hasn't changed. The test files can be easily modified to toggle between mocked and real API calls if needed.
|
||||
|
||||
@@ -1,12 +1,11 @@
 {
   "hcatPath": "/path/to/hashcat",
   "hcatBin": "hashcat",
-  "hcatTuning": "--force --remove",
+  "hcatTuning": "",
   "hcatPotfilePath": "~/.hashcat/hashcat.potfile",
   "hcatDebugLogPath": "./hashcat_debug",
-  "hcatWordlists": "/Passwords/wordlists",
-  "hcatOptimizedWordlists": "/Passwords/optimized_wordlists",
-  "rules_directory": "/path/to/hashcat/rules",
+  "hcatWordlists": "./wordlists",
+  "rules_directory": "./hashcat/rules",
   "hcatDictionaryWordlist": ["rockyou.txt"],
   "hcatCombinationWordlist": ["rockyou.txt","rockyou.txt"],
   "hcatHybridlist": ["rockyou.txt"],
Submodule hashcat added at 2d71af3718
@@ -69,11 +69,11 @@ def quick_crack(ctx: Any) -> None:
     try:
         raw_choice = input(
             "\nEnter path of wordlist or wordlist directory (tab to autocomplete).\n"
-            f"Press Enter for default optimized wordlists [{ctx.hcatOptimizedWordlists}]: "
+            f"Press Enter for default wordlist directory [{ctx.hcatWordlists}]: "
        )
        raw_choice = raw_choice.strip()
        if raw_choice == "":
-            wordlist_choice = ctx.hcatOptimizedWordlists
+            wordlist_choice = ctx.hcatWordlists
        elif raw_choice.isdigit() and 1 <= int(raw_choice) <= len(wordlist_files):
            chosen = os.path.join(
                ctx.hcatWordlists, wordlist_files[int(raw_choice) - 1]
@@ -1,12 +1,11 @@
 {
   "hcatPath": "",
   "hcatBin": "hashcat",
-  "hcatTuning": "--force --remove",
+  "hcatTuning": "",
   "hcatPotfilePath": "~/.hashcat/hashcat.potfile",
   "hcatDebugLogPath": "./hashcat_debug",
-  "hcatWordlists": "/Passwords/wordlists",
-  "hcatOptimizedWordlists": "/Passwords/optimized_wordlists",
-  "rules_directory": "/path/to/hashcat/rules",
+  "hcatWordlists": "./wordlists",
+  "rules_directory": "./hashcat/rules",
   "hcatDictionaryWordlist": ["rockyou.txt"],
   "hcatCombinationWordlist": ["rockyou.txt","rockyou.txt"],
   "hcatHybridlist": ["rockyou.txt"],
@@ -184,6 +184,16 @@ if not os.path.isfile(defaults_path):
 with open(defaults_path) as defaults:
     default_config = json.load(defaults)

+_config_updated = False
+for _key, _value in default_config.items():
+    if _key not in config_parser:
+        config_parser[_key] = _value
+        print(f"[config] Added missing key '{_key}' with default value")
+        _config_updated = True
+if _config_updated:
+    with open(_config_path, "w") as _cf:
+        json.dump(config_parser, _cf, indent=2)
+
 try:
     hashview_url = config_parser["hashview_url"]
 except KeyError as e:
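The backfill loop in the hunk above copies any key that exists in the defaults file but is missing from the user's config, then rewrites the config only if something changed. A standalone sketch of that behavior, using hypothetical config dicts rather than the actual hate_crack config:

```python
import json


def backfill_defaults(config: dict, defaults: dict) -> bool:
    """Copy keys present in defaults but missing from config.

    Returns True if config was modified (so the caller knows to rewrite it).
    """
    updated = False
    for key, value in defaults.items():
        if key not in config:
            config[key] = value
            updated = True
    return updated


# Hypothetical user config missing a key the defaults provide:
defaults = {"hcatBin": "hashcat", "hcatTuning": ""}
config = {"hcatBin": "hashcat"}
changed = backfill_defaults(config, defaults)
print(changed, json.dumps(config))
```

A second call on the now-complete config returns False, which is what lets the real code skip the rewrite when nothing is missing.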
@@ -243,7 +253,10 @@ def ensure_binary(binary_path, build_dir=None, name=None):
 # NOTE: hcatPath is the hashcat install directory, NOT for hate_crack assets.
 # hashcat-utils and princeprocessor should ALWAYS use hate_path.
 hcatPath = config_parser.get("hcatPath", "")
-hcatBin = config_parser["hcatBin"]
+try:
+    hcatBin = config_parser["hcatBin"]
+except KeyError:
+    hcatBin = default_config["hcatBin"]
 # If hcatBin is not absolute and hcatPath is set, construct full path from hcatPath + hcatBin
 if not os.path.isabs(hcatBin) and hcatPath:
     _candidate = os.path.join(hcatPath, hcatBin)
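The resolution logic in that hunk amounts to: prefer the configured binary name, fall back to the packaged default, and only prepend the install directory when the name is relative. A minimal sketch under those assumptions (function name and dicts are hypothetical, not the hate_crack API):

```python
import os


def resolve_hashcat_bin(config: dict, defaults: dict, hcat_path: str) -> str:
    # Prefer the user's configured binary name, else the shipped default.
    hcat_bin = config.get("hcatBin", defaults["hcatBin"])
    # A relative name combined with a configured install dir becomes a full path;
    # an absolute name or an empty install dir is used as-is.
    if not os.path.isabs(hcat_bin) and hcat_path:
        return os.path.join(hcat_path, hcat_bin)
    return hcat_bin


print(resolve_hashcat_bin({}, {"hcatBin": "hashcat"}, "/opt/hashcat"))
```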
@@ -254,9 +267,20 @@ if not hcatPath:
     _which = shutil.which(hcatBin)
     if _which:
         hcatPath = os.path.dirname(os.path.realpath(_which))
-hcatTuning = config_parser["hcatTuning"]
-hcatWordlists = config_parser["hcatWordlists"]
-hcatOptimizedWordlists = config_parser["hcatOptimizedWordlists"]
+# Fall back to the vendored hashcat binary if not found via PATH or hcatPath
+if shutil.which(hcatBin) is None and not os.path.isfile(hcatBin):
+    _vendored_hcat = os.path.join(hate_path, "hashcat", "hashcat")
+    if os.path.isfile(_vendored_hcat) and os.access(_vendored_hcat, os.X_OK):
+        hcatBin = _vendored_hcat
+        hcatPath = os.path.join(hate_path, "hashcat")
+try:
+    hcatTuning = config_parser["hcatTuning"]
+except KeyError:
+    hcatTuning = default_config["hcatTuning"]
+try:
+    hcatWordlists = config_parser["hcatWordlists"]
+except KeyError:
+    hcatWordlists = "./wordlists"
 hcatRules: list[str] = []

@@ -302,27 +326,16 @@ rulesDirectory = os.path.expanduser(rulesDirectory)
 if not os.path.isabs(rulesDirectory):
     rulesDirectory = os.path.join(hate_path, rulesDirectory)

-# Normalize wordlist directories
+# Normalize wordlist directory
 hcatWordlists = os.path.expanduser(hcatWordlists)
 if not os.path.isabs(hcatWordlists):
     hcatWordlists = os.path.join(hate_path, hcatWordlists)
-hcatOptimizedWordlists = os.path.expanduser(hcatOptimizedWordlists)
-if not os.path.isabs(hcatOptimizedWordlists):
-    hcatOptimizedWordlists = os.path.join(hate_path, hcatOptimizedWordlists)
 if not os.path.isdir(hcatWordlists):
     fallback_wordlists = os.path.join(hate_path, "wordlists")
     if os.path.isdir(fallback_wordlists):
         print(f"[!] hcatWordlists directory not found: {hcatWordlists}")
         print(f"[!] Falling back to {fallback_wordlists}")
         hcatWordlists = fallback_wordlists
-if not os.path.isdir(hcatOptimizedWordlists):
-    fallback_optimized = os.path.join(hate_path, "optimized_wordlists")
-    if os.path.isdir(fallback_optimized):
-        print(
-            f"[!] hcatOptimizedWordlists directory not found: {hcatOptimizedWordlists}"
-        )
-        print(f"[!] Falling back to {fallback_optimized}")
-        hcatOptimizedWordlists = fallback_optimized

 try:
     maxruntime = config_parser["bandrelmaxruntime"]
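The normalization kept by this hunk is a two-step rule: expand `~`, then anchor any still-relative path under the project directory. Sketched as a helper (the function name is hypothetical; hate_crack inlines this logic):

```python
import os


def normalize_dir(path: str, base: str) -> str:
    """Expand ~ and anchor relative paths under the project base directory."""
    path = os.path.expanduser(path)
    if not os.path.isabs(path):
        path = os.path.join(base, path)
    return path


print(normalize_dir("./wordlists", "/opt/hate_crack"))
```

Absolute paths and `~`-prefixed paths pass through unchanged apart from expansion, which is why the shipped default `"./wordlists"` resolves inside the install directory.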
@@ -1198,9 +1211,9 @@ def hcatDictionary(hcatHashType, hcatHashFile):
     global hcatDictionaryCount
     global hcatProcess
     rule_best66 = get_rule_path("best66.rule")
-    optimized_lists = sorted(glob.glob(os.path.join(hcatOptimizedWordlists, "*")))
+    optimized_lists = sorted(glob.glob(os.path.join(hcatWordlists, "*")))
     if not optimized_lists:
-        optimized_lists = [os.path.join(hcatOptimizedWordlists, "*")]
+        optimized_lists = [os.path.join(hcatWordlists, "*")]
     cmd = [
         hcatBin,
         "-m",
@@ -1589,10 +1602,10 @@ def hcatYoloCombination(hcatHashType, hcatHashFile):
     global hcatProcess
     try:
         while 1:
-            hcatLeft = random.choice(os.listdir(hcatOptimizedWordlists))
-            hcatRight = random.choice(os.listdir(hcatOptimizedWordlists))
-            left_path = os.path.join(hcatOptimizedWordlists, hcatLeft)
-            right_path = os.path.join(hcatOptimizedWordlists, hcatRight)
+            hcatLeft = random.choice(os.listdir(hcatWordlists))
+            hcatRight = random.choice(os.listdir(hcatWordlists))
+            left_path = os.path.join(hcatWordlists, hcatLeft)
+            right_path = os.path.join(hcatWordlists, hcatRight)
             cmd = [
                 hcatBin,
                 "-m",
@@ -3575,7 +3588,7 @@ def main():
     global lmHashesFound
     global debug_mode
     global hashview_url, hashview_api_key
-    global hcatPath, hcatBin, hcatWordlists, hcatOptimizedWordlists, rulesDirectory
+    global hcatPath, hcatBin, hcatWordlists, rulesDirectory
     global pipalPath, maxruntime, bandrelbasewords
     global hcatPotfilePath

@@ -3812,7 +3825,6 @@ def main():
             hcatPath=hcatPath,
             hcatBin=hcatBin,
             hcatWordlists=hcatWordlists,
-            hcatOptimizedWordlists=hcatOptimizedWordlists,
             rules_directory=rulesDirectory,
             pipalPath=pipalPath,
             maxruntime=maxruntime,
@@ -3824,7 +3836,6 @@ def main():
        hcatPath = config.hcatPath
        hcatBin = config.hcatBin
        hcatWordlists = config.hcatWordlists
-        hcatOptimizedWordlists = config.hcatOptimizedWordlists
        rulesDirectory = config.rules_directory
        pipalPath = config.pipalPath
        maxruntime = config.maxruntime
lima/hate-crack-test.yaml (new file, 41 lines)
@@ -0,0 +1,41 @@
+# Lima VM configuration for hate_crack E2E testing
+# Ubuntu 24.04 LTS with hashcat and build dependencies pre-installed.
+# Usage: limactl start --name hate-crack-e2e lima/hate-crack-test.yaml
+
+images:
+  - location: "https://cloud-images.ubuntu.com/releases/24.04/release/ubuntu-24.04-server-cloudimg-amd64.img"
+    arch: "x86_64"
+  - location: "https://cloud-images.ubuntu.com/releases/24.04/release/ubuntu-24.04-server-cloudimg-arm64.img"
+    arch: "aarch64"
+
+cpus: 2
+memory: "4GiB"
+disk: "20GiB"
+
+# No host mounts - full isolation mirrors a real user install
+mounts: []
+
+provision:
+  - mode: system
+    script: |
+      #!/bin/bash
+      set -euo pipefail
+      export DEBIAN_FRONTEND=noninteractive
+      apt-get update -qq
+      apt-get install -y --no-install-recommends \
+        build-essential \
+        ca-certificates \
+        curl \
+        git \
+        gzip \
+        hashcat \
+        ocl-icd-libopencl1 \
+        pocl-opencl-icd \
+        p7zip-full \
+        transmission-cli
+
+  - mode: user
+    script: |
+      #!/bin/bash
+      set -euo pipefail
+      curl -LsSf https://astral.sh/uv/install.sh | sh
@@ -3,6 +3,7 @@ commands = [
     "uv run ruff check hate_crack",
     "uv run ty check hate_crack",
     "HATE_CRACK_SKIP_INIT=1 uv run pytest -q",
+    "HATE_CRACK_RUN_LIMA_TESTS=1 uv run pytest tests/test_lima_vm_install.py -v",
 ]

 [hooks.post-commit]
Submodule princeprocessor added at 4160061be7
@@ -1,55 +0,0 @@
-* v0.20 -> v0.21:
-
-- Exit if stdout is closed or has an error
-- Fix for "Bug --pw-min" issue
-- Print position when stopped
-- Allow wordlist as file parameter
-- Load only NUM words from input wordlist or use 0 to disable
-
-* v0.19 -> v0.20:
-
-- Add dupe suppression
-- Add a fake-GMP header using uint128_t macros. This is to replace dependency on GMP
-- Add --case-permute amplifier option, default is disabled
-- Fixed buffer overflow
-- Fixed accidental reverted changes
-- Fixed a bug where we actually couldn't correctly support output longer than 31, but 32 is supported
-- More memory savings: Use only the actual space needed for each word
-
-* v0.18 -> v0.19:
-
-- Fixed missing free() in shutdown section
-- Fixed wrong version number in source
-- Fixed discrepancies with logic and error messages
-- Added validation check pw-max > elem-cnt-max
-- Untie IN_LEN_* from PW_* to allow --pw-max > 16 without recompilation
-- If out of memory, tell how much we tried to allocate
-- Allow hex input for --skip and --limit
-- Optimized output performance
-
-* v0.17 -> v0.18:
-
-- Fixed major bug where all candidates are of the same length till chain changes
-
-* v0.16 -> v0.17:
-
-- Fixed download url for binaries in README
-- Fixed copy paste bug in input verification
-- Fixed bug where pw_orders is not sorted
-- Fixed memory leak
-- Removed O_BINARY for stderr
-- Removed some unused code
-- Renamed variables so that they match the meaning from the presentation slides
-- Optimized seeking performance
-- Optimized output performance
-
-* v0.15 -> v0.16:
-
-- Open Source the project
-- License is MIT
-- Moved repository to github: https://github.com/jsteube/princeprocessor
-- Added CHANGES
-- Added LICENSE
-- Added README.md
-- Changed default value for --pw-max from 24 to 16 for faster startup time
-
@@ -1,33 +0,0 @@
-The MIT License (MIT)
-
-Copyright (c) 2015 Jens Steube,
-Copyright (c) 2015 magnum
-
-Permission is hereby granted, free of charge, to any person obtaining a copy
-of this software and associated documentation files (the "Software"), to deal
-in the Software without restriction, including without limitation the rights
-to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-copies of the Software, and to permit persons to whom the Software is
-furnished to do so, subject to the following conditions:
-
-The above copyright notice and this permission notice shall be included in all
-copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-SOFTWARE.
-
-------
-
-malloc_tiny() and the hashed dupe suppression are based on code from John the
-Ripper password cracker:
-Copyright (c) 1996-99,2002-2003,2005-2006,2010-2012 by Solar Designer
-
-Redistribution and use in source and binary forms, with or without
-modification, are permitted.
-
-There's ABSOLUTELY NO WARRANTY, express or implied.
@@ -1,38 +0,0 @@
-princeprocessor
-==============
-
-Standalone password candidate generator using the PRINCE algorithm
-
-The name PRINCE is used as an acronym and stands for PRobability INfinite Chained Elements, which are the building blocks of the algorithm
-
-Brief description
---------------
-
-The princeprocessor is a password candidate generator and can be thought of as an advanced combinator attack. Rather than taking as input two different wordlists and then outputting all the possible two-word combinations, princeprocessor has only one input wordlist and builds "chains" of combined words. These chains can have 1 to N words from the input wordlist concatenated together. So, for example, if it is outputting guesses of length four, it could generate them using combinations from the input wordlist such as:
-
-- 4 letter word
-- 2 letter word + 2 letter word
-- 1 letter word + 3 letter word
-- 3 letter word + 1 letter word
-- 1 letter word + 1 letter word + 2 letter word
-- 1 letter word + 2 letter word + 1 letter word
-- 2 letter word + 1 letter word + 1 letter word
-- 1 letter word + 1 letter word + 1 letter word + 1 letter word
-
-Detailed description
---------------
-
-I'm going to write a detailed description in case I'm extremely bored. Until then, use the following resources:
-
-- My talk about princeprocessor at the Passwords^14 conference in Trondheim, Norway. Slides: https://hashcat.net/events/p14-trondheim/prince-attack.pdf
-- Thanks to Matt Weir, who made a nice analysis of princeprocessor. You can find the post on his blog: http://reusablesec.blogspot.de/2014/12/tool-deep-dive-prince.html
-
-Compile
---------------
-
-Simply run make
-
-Binary distribution
---------------
-
-Binaries for Linux, Windows and OSX: https://github.com/jsteube/princeprocessor/releases
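The chain structure listed in that README is exactly the set of ordered compositions of the target length. An illustrative sketch of the enumeration (this is not the princeprocessor source, just the combinatorics it describes):

```python
def length_compositions(total: int):
    """Yield every ordered way to write `total` as a sum of word lengths >= 1.

    Each tuple is one PRINCE "chain" shape: draw one word of each length
    from the input wordlist and concatenate them.
    """
    if total == 0:
        yield ()
        return
    for first in range(1, total + 1):
        for rest in length_compositions(total - first):
            yield (first,) + rest


# For length-4 candidates, this reproduces the README's eight bullet items:
for combo in length_compositions(4):
    print(combo)
```

There are 2^(n-1) compositions of n, so length 4 yields the eight shapes the README lists, from `(4,)` down to `(1, 1, 1, 1)`.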
@@ -1 +0,0 @@
-0
Binary file not shown.
Binary file not shown.
Binary file not shown.
File diff suppressed because it is too large.
File diff suppressed because it is too large.
@@ -50,7 +50,6 @@ def test_config_with_explicit_hashcat_path():
        "hcatBin": "hashcat",
        "hcatTuning": "--force",
        "hcatWordlists": "./wordlists",
-        "hcatOptimizedWordlists": "./optimized_wordlists",
        "rules_directory": "/opt/hashcat/rules",
        "hcatDictionaryWordlist": ["rockyou.txt"],
        "hcatCombinationWordlist": ["rockyou.txt"],
tests/test_lima_vm_install.py (new file, 156 lines)
@@ -0,0 +1,156 @@
+import os
+import shutil
+import subprocess
+import sys
+import uuid
+from pathlib import Path
+
+import pytest
+
+
+def _require_lima():
+    if os.environ.get("HATE_CRACK_RUN_LIMA_TESTS") != "1":
+        pytest.skip("Set HATE_CRACK_RUN_LIMA_TESTS=1 to run Lima VM tests.")
+    if shutil.which("limactl") is None:
+        pytest.skip("limactl not available")
+
+
+@pytest.fixture(scope="session")
+def lima_vm():
+    _require_lima()
+    repo_root = Path(__file__).resolve().parents[1]
+    vm_name = f"hate-crack-e2e-{uuid.uuid4().hex[:8]}"
+    yaml_path = str(repo_root / "lima" / "hate-crack-test.yaml")
+
+    try:
+        start = subprocess.run(
+            ["limactl", "start", "--name", vm_name, yaml_path],
+            capture_output=True,
+            text=True,
+            timeout=300,
+        )
+    except subprocess.TimeoutExpired as exc:
+        pytest.fail(f"limactl start timed out after {exc.timeout}s")
+
+    assert start.returncode == 0, (
+        f"limactl start failed. stdout={start.stdout} stderr={start.stderr}"
+    )
+
+    ssh_config = Path.home() / ".lima" / vm_name / "ssh.config"
+    # Use rsync directly to exclude large runtime-only directories that aren't
+    # needed for installation (wordlists, crack results, the hashcat binary -
+    # the VM has hashcat installed via apt).
+    rsync_cmd = [
+        "rsync", "-a", "--delete",
+        "--exclude=wordlists/",
+        "--exclude=hashcat/",
+        "--exclude=results/",
+        "--exclude=*.pot",
+        "--exclude=*.ntds",
+        "--exclude=*.ntds.*",
+        # Exclude host-compiled binaries so the VM always builds from source.
+        # Keep the bin/ dir itself (empty is fine); make clean recreates it anyway.
+        "--exclude=princeprocessor/*.bin",
+        "--exclude=princeprocessor/src/*.bin",
+        "--exclude=hashcat-utils/bin/*.bin",
+        "--exclude=hashcat-utils/bin/*.exe",
+        "--exclude=hashcat-utils/bin/*.app",
+        "-e", f"ssh -F {ssh_config}",
+        f"{repo_root}/",
+        f"lima-{vm_name}:/tmp/hate_crack/",
+    ]
+    try:
+        copy = subprocess.run(
+            rsync_cmd,
+            capture_output=True,
+            text=True,
+            timeout=120,
+        )
+    except subprocess.TimeoutExpired as exc:
+        pytest.fail(f"rsync copy timed out after {exc.timeout}s")
+
+    assert copy.returncode == 0, (
+        f"rsync copy failed. stdout={copy.stdout} stderr={copy.stderr}"
+    )
+
+    install_cmd = (
+        "cd /tmp/hate_crack && "
+        "make submodules vendor-assets && "
+        # Build the wheel directly (skips sdist) so freshly-compiled binaries
+        # in hate_crack/hashcat-utils/bin/ are included via package-data.
+        "rm -rf dist && "
+        "$HOME/.local/bin/uv build --wheel && "
+        "$HOME/.local/bin/uv tool install dist/hate_crack-*.whl && "
+        "make clean-vendor"
+    )
+    try:
+        install = subprocess.run(
+            ["limactl", "shell", vm_name, "--", "bash", "-lc", install_cmd],
+            capture_output=True,
+            text=True,
+            timeout=600,
+        )
+    except subprocess.TimeoutExpired as exc:
+        pytest.fail(f"Installation timed out after {exc.timeout}s")
+
+    assert install.returncode == 0, (
+        f"Installation failed. stdout={install.stdout} stderr={install.stderr}"
+    )
+
+    yield vm_name
+
+    try:
+        result = subprocess.run(
+            ["limactl", "delete", "--force", vm_name],
+            capture_output=True,
+            text=True,
+            timeout=60,
+        )
+        if result.returncode != 0:
+            print(
+                f"Warning: Failed to delete Lima VM {vm_name}. stderr={result.stderr}",
+                file=sys.stderr,
+            )
+    except Exception as e:
+        print(
+            f"Warning: Exception while deleting Lima VM {vm_name}: {e}",
+            file=sys.stderr,
+        )
+
+
+def _run_vm(vm_name, command, timeout=180):
+    try:
+        run = subprocess.run(
+            ["limactl", "shell", vm_name, "--", "bash", "-lc", command],
+            capture_output=True,
+            text=True,
+            timeout=timeout,
+        )
+    except subprocess.TimeoutExpired as exc:
+        pytest.fail(f"Lima VM command timed out after {exc.timeout}s")
+    return run
+
+
+def test_lima_vm_install_and_run(lima_vm):
+    run = _run_vm(
+        lima_vm,
+        "cd /tmp/hate_crack && $HOME/.local/bin/hate_crack --help && ./hate_crack.py --help",
+        timeout=120,
+    )
+    assert run.returncode == 0, (
+        f"Lima VM install/run failed. stdout={run.stdout} stderr={run.stderr}"
+    )
+
+
+def test_lima_hashcat_cracks_simple_password(lima_vm):
+    command = (
+        "set -euo pipefail; "
+        "printf 'password\\nletmein\\n123456\\n' > /tmp/wordlist.txt; "
+        "echo 5f4dcc3b5aa765d61d8327deb882cf99 > /tmp/hash.txt; "
+        "hashcat -m 0 -a 0 --potfile-disable -o /tmp/out.txt /tmp/hash.txt /tmp/wordlist.txt --quiet; "
+        "grep -q ':password' /tmp/out.txt"
+    )
+    run = _run_vm(lima_vm, command, timeout=180)
+    assert run.returncode == 0, (
+        f"Lima VM hashcat crack failed. stdout={run.stdout} stderr={run.stderr}"
+    )
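The crack test above hinges on `5f4dcc3b5aa765d61d8327deb882cf99` being the MD5 digest of `password` (hashcat mode 0 is raw MD5), which can be confirmed locally without a VM:

```python
import hashlib

# MD5 of the literal string "password", as planted in /tmp/hash.txt above.
digest = hashlib.md5(b"password").hexdigest()
print(digest)  # 5f4dcc3b5aa765d61d8327deb882cf99
```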