Calls `detect_username_hash_format` once during preprocessing in
`main()` (after NTLM/NetNTLM handling and before the potfile check) and
stores the result in `hcatUsernamePrefix`. A new helper
`_maybe_append_username_flag` is invoked from the universal
command-finalization chokepoint `_append_potfile_arg`, so every hashcat
command path automatically receives `--username` when a `user:hash`
file is detected. The helper avoids appending a duplicate flag when
`--username` is already present in `hcatTuning`.
Skips detection for modes 1000/5500/5600 because the existing NTLM
preprocessing already strips usernames into a bare `.nt` file.
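The shapes of the detector and the guard might look roughly like this; the function names mirror the ones above, but the bodies (sampling heuristic, regex, threshold) are illustrative assumptions, not the actual implementation:

```python
import re

# Hypothetical heuristic: a line looks like user:hash if it has a
# colon-separated username followed by hash-ish characters.
USER_HASH_RE = re.compile(r"^[^:\s]+:[0-9a-fA-F$./*]+$")

def detect_username_hash_format(hash_file, sample=25):
    """Return '--username' if most sampled lines look like user:hash."""
    try:
        with open(hash_file) as f:
            lines = [line.strip() for _, line in zip(range(sample), f) if line.strip()]
    except OSError:
        return ""
    if not lines:
        return ""
    matches = sum(1 for line in lines if USER_HASH_RE.match(line))
    return "--username" if matches / len(lines) > 0.5 else ""

def maybe_append_username_flag(cmd, username_prefix, tuning):
    """Append --username unless it is already present in cmd or tuning."""
    if username_prefix and "--username" not in tuning and "--username" not in cmd:
        return cmd + " " + username_prefix
    return cmd
```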
Also adds `hcatUsernamePrefix` to the proxy sync tuple in
`hate_crack.py` so tests that patch the root module propagate into
`hate_crack.main`.
Refs #107
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Resolves merge conflict with origin/main (keys 19-22 from main kept,
wordlist tools submenu added at key 80).
Fixes a test isolation bug in test_random_rules_attack.py where
load_cli_module() removed hate_crack.attacks from sys.modules, breaking
__globals__ references in wordlist_tools_submenu for downstream tests.
Also corrects a wrong menu key assertion (21, not 20) for generate_rules_crack.
Resolves merge conflict with origin/main (keys 19-21 used by ngram,
permute, random-rules). Combipow takes key 22.
- gzip support: decompress .gz wordlists to a temp file before passing
the path to combipow.bin (which requires a filename argument)
- UI line-counting uses gzip.open for .gz files
- Update tests to reference key 22 instead of 21
- Add _is_gzipped() magic-byte detector and _wordlist_path() context
manager that transparently decompresses gzip files to a temp path
- Apply gzip handling to hcatCombinator3 and hcatCombinatorX via
contextlib.ExitStack so compressed wordlists work without manual prep
- Add hcatNgramX() wrapper using ngramX.bin <corpus> <group_size> piped
to hashcat, with gzip auto-detection on the corpus file
- Add ngram_attack() handler in attacks.py with tab-autocomplete corpus
selection and configurable group size (default 3)
- Register attack as menu option 19 in both main.py and hate_crack.py
- Fix wordlist_optimizer.py: the .app extension used on macOS was wrong; use .bin
- Add tests/test_ngram_gzip.py covering ngram_attack handler, _is_gzipped,
and _wordlist_path context manager (temp file cleanup, plain passthrough)
Add menu option 21 (Combipow Passphrase Attack) using combipow.bin from
hashcat-utils. Generates all unique non-empty subset combinations from a
short wordlist and pipes them into hashcat stdin.
- hcatCombipow(hash_type, hash_file, wordlist, use_space_sep) in main.py
- combipow_crack handler in attacks.py validates the file, enforces the
63-line limit, prompts for a space separator, then delegates to hcatCombipow
- Wired into get_main_menu_items() and get_main_menu_options() in both
main.py and hate_crack.py
- 12 tests covering menu presence, handler delegation, line-count
enforcement, file validation, and subprocess flag construction
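A Python stand-in for what combipow.bin emits (candidate ordering in the real binary may differ). With 63 input lines there are 2**63 - 1 non-empty subsets, which presumably keeps the binary's subset counter within 64 bits, hence the line limit:

```python
from itertools import combinations

def combipow(words, sep=""):
    """Yield every unique non-empty subset of words joined by sep,
    mirroring combipow.bin's output (sketch; ordering may differ)."""
    for r in range(1, len(words) + 1):
        for combo in combinations(words, r):
            yield sep.join(combo)
```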
Adds Permutation Attack (menu option 19) that generates all character
permutations of each word in a targeted wordlist and pipes them to
hashcat via permute.bin from hashcat-utils.
- hcatPermute() in main.py: pipes permute.bin < wordlist | hashcat
- permute_crack() in attacks.py: prompts for single wordlist file with
factorial-growth warning, tab-autocomplete support
- Menu option 19 wired in both main.py and hate_crack.py
- hcatPermuteCount tracking alongside other count globals
- Tests: test_permute_attack.py (handler behavior) and
test_permute_wrapper.py (subprocess wiring)
- README: added entry in menu listing and attack descriptions
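As a rough Python equivalent of what permute.bin emits per input line (the binary's ordering and duplicate handling may differ), and why the factorial-growth warning exists:

```python
from itertools import permutations

def permute_word(word):
    """All unique character permutations of one word, roughly what
    permute.bin produces for each wordlist line (illustrative)."""
    return sorted({"".join(p) for p in permutations(word)})

# Factorial growth: an 8-character word with distinct characters already
# yields 8! = 40320 candidates, which is why the handler warns the user.
```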
- Add hcatGenerateRules() in main.py: runs generate-rules.bin to
produce N random rules, writes them to a temp file, runs hashcat
with -r against a chosen wordlist, cleans up on exit
- Add generate_rules_crack() handler in attacks.py with count
prompt (default 65536), wordlist picker with tab-completion,
input validation, and abort on invalid input
- Add dispatcher generate_rules_crack() in main.py and key "20"
in both main.py and hate_crack.py get_main_menu_options()
- Add ("20", "Random Rules Attack") to get_main_menu_items()
- Add tests: test_random_rules_attack.py (menu presence, handler
wiring), test_random_rules_wrapper.py (subprocess behavior,
cleanup, count passing, count tracking)
- Add key "20" to MENU_OPTION_TEST_CASES in test_ui_menu_options.py
- Update README: add option 20 to menu listing, add attack
description, add version history entry
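The temp-file lifecycle described above might be shaped like this (hypothetical helper name and signature; the generate-rules.bin invocation and count tracking are omitted):

```python
import os
import tempfile

def build_random_rules_cmd(hashcat_bin, hash_file, wordlist, rules):
    """Write generated rules to a temp file and build the hashcat -r
    command. Returns (cmd, rules_path); the caller removes rules_path
    once hashcat exits. Illustrative shape only."""
    fd, rules_path = tempfile.mkstemp(suffix=".rule")
    with os.fdopen(fd, "w") as f:
        f.write("\n".join(rules) + "\n")
    cmd = [hashcat_bin, hash_file, wordlist, "-r", rules_path]
    return cmd, rules_path
```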
Adds Rule File Tools submenu (menu option 81) with three operations:
- Clean: removes invalid/duplicate rules via cleanup-rules.bin
- Optimize: consolidates redundant operations via rules_optimize.bin
- Clean and optimize: both in sequence with temp file handling
Wired through the standard three-layer pattern: main.py utility
functions + dispatcher, attacks.py handlers + submenu, root
hate_crack.py menu registration.
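The clean-and-optimize sequencing with a temp file could be sketched as below; clean_cmd and optimize_cmd are taken as full argv lists because the exact CLIs of cleanup-rules.bin and rules_optimize.bin are not reproduced here:

```python
import subprocess
import tempfile

def clean_and_optimize(rules_in, rules_out, clean_cmd, optimize_cmd):
    """Run the clean step, then the optimize step, passing the
    intermediate result through a temp file that is removed
    automatically (hypothetical helper shape)."""
    with tempfile.NamedTemporaryFile(suffix=".rule") as tmp:
        with open(rules_in) as src, open(tmp.name, "w") as mid:
            subprocess.run(clean_cmd, stdin=src, stdout=mid, check=True)
        with open(tmp.name) as src, open(rules_out, "w") as dst:
            subprocess.run(optimize_cmd, stdin=src, stdout=dst, check=True)
```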
- Add three hashcat wrapper functions: hcatAdHocMask, hcatMarkovTrain, hcatMarkovBruteForce
- Add corresponding attack handlers in attacks.py with OMEN-style training flow
- Consolidate 4 combinator attacks (keys 10/11/12) into interactive sub-menu (key 6)
- Add key 17 for ad-hoc mask attack and key 18 for markov brute force
- Update both main.py and hate_crack.py menu systems
- Add comprehensive test coverage for new handlers and wrappers
- Training source picker supports cracked passwords or any wordlist
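The mask wrappers plausibly reduce to command construction like the following (assumed helper name and signature; --markov-hcstat2 is the real hashcat flag for a trained statistics file):

```python
def build_mask_attack_cmd(hashcat_bin, hash_type, hash_file, mask, hcstat2=None):
    """Build a hashcat -a 3 mask attack command, optionally pointing
    --markov-hcstat2 at trained statistics (illustrative shape of the
    hcatAdHocMask / hcatMarkovBruteForce wrappers)."""
    cmd = [hashcat_bin, "-m", str(hash_type), "-a", "3", hash_file, mask]
    if hcstat2:
        cmd += ["--markov-hcstat2", hcstat2]
    return cmd
```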
- Add optional startup version check against GitHub releases (check_for_updates config option)
- Add packaging dependency for version comparison
- Fix PassGPT OOM on MPS by capping batch size to 64 and setting memory watermark limits
- Fix PassGPT output having spaces between every character
- Hide PassGPT menu item (17) unless torch/transformers are installed
- Fix mypy errors in passgpt_generate.py with type: ignore comments
- Update README with version check docs, optional ML deps section, and PassGPT CLI options
- Add test_version_check.py with 8 tests covering update check behavior
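The commit uses the packaging library for the comparison; a simplified stand-in for what it performs (this sketch ignores pre-release suffixes, which packaging.version handles properly):

```python
def is_newer(latest_tag, current):
    """Compare a GitHub release tag (e.g. 'v1.2.3') against the
    running version. Simplified numeric-tuple comparison."""
    def parse(v):
        return tuple(int(p) for p in v.lstrip("v").split("."))
    return parse(latest_tag) > parse(current)
```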
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add PassGPT as attack mode 17, using a GPT-2 model trained on leaked
password datasets to generate candidate passwords. The generator pipes
candidates to hashcat via stdin, matching the existing OMEN pipe pattern.
- Add standalone generator module (python -m hate_crack.passgpt_generate)
- Add [ml] optional dependency group (torch, transformers)
- Add config keys: passgptModel, passgptMaxCandidates, passgptBatchSize
- Wire up menu entries in main.py, attacks.py, and hate_crack.py
- Auto-detect GPU (CUDA/MPS) with CPU fallback
- Add unit tests for pipe construction, handler, and ML deps check
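The OMEN-style stdin pipe pattern mentioned above amounts to chaining two processes; a minimal sketch (the real code also handles sessions, status output, and cleanup):

```python
import subprocess

def run_pipe_attack(generator_cmd, hashcat_cmd):
    """Pipe candidate passwords from a generator process into the
    consumer's stdin and return the consumer's exit code."""
    gen = subprocess.Popen(generator_cmd, stdout=subprocess.PIPE)
    consumer = subprocess.Popen(hashcat_cmd, stdin=gen.stdout)
    gen.stdout.close()  # let the generator see SIGPIPE if hashcat exits first
    consumer.wait()
    gen.wait()
    return consumer.returncode
```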
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add OMEN (Ordered Markov ENumerator) as a probability-ordered password
candidate generator. Trains n-gram models on leaked passwords via
createNG, then pipes candidates from enumNG into hashcat.
Also fix a pre-existing bug where ensure_binary() used quit(1) instead
of sys.exit(1): quit() closes stdin before raising SystemExit, which
caused "ValueError: I/O operation on closed file" when an optional
binary check failed and the program later called input().
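The corrected helper might look like this (illustrative signature; the lookup and messaging are assumptions, the sys.exit choice is the point of the fix):

```python
import shutil
import sys

def ensure_binary(name, required=False):
    """Check that a helper binary is on PATH. Uses sys.exit(1) rather
    than quit(1): quit() closes sys.stdin before raising SystemExit,
    so code that keeps running and later calls input() would hit
    'ValueError: I/O operation on closed file'."""
    path = shutil.which(name)
    if path is None and required:
        print(f"Required binary not found: {name}", file=sys.stderr)
        sys.exit(1)
    return path
```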
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Rename markov_attack → ollama_attack and hcatMarkov → hcatOllama across
menu, attacks, and tests. Remove candidate count prompts and cracked-output
default wordlist logic. Rename config keys (markov* → ollama*) and drop
ollamaUrl. Fix Dockerfile.test to use granular build steps.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add a new attack mode that uses a local LLM via Ollama to generate
password candidates, converts them into hashcat .hcstat2 Markov
statistics via hcstat2gen, and runs a Markov-enhanced mask attack.
Two generation sub-modes:
- Wordlist-based: feeds sample from an existing wordlist to the LLM
as pattern context (config-selectable default with Y/N override)
- Target-based: prompts for company name, industry, and location
for contextual password generation
Pipeline: Ollama API -> candidate file -> hcstat2gen -> LZMA compress
-> hashcat -a 3 --markov-hcstat2
Config additions: ollamaUrl, ollamaModel, markovCandidateCount,
markovWordlist. No new pip dependencies (uses stdlib urllib/lzma).
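The LZMA compression step of the pipeline can be sketched with the stdlib module named above (hypothetical helper; the exact container format hashcat's --markov-hcstat2 loader expects is a detail of the real implementation):

```python
import lzma

def compress_hcstat2(raw_path, out_path):
    """Compress hcstat2gen output with stdlib lzma, producing the
    statistics file handed to hashcat (sketch)."""
    with open(raw_path, "rb") as f:
        data = f.read()
    with open(out_path, "wb") as f:
        f.write(lzma.compress(data))
    return out_path
```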
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>