Compare commits


11 Commits
master ... mapa

Author SHA1 Message Date
Willi Ballenthin
7043aae69b plans: add chat sessions and transcripts exploring mapa problem statements 2026-03-17 12:21:35 +01:00
Willi Ballenthin
c8e277e8b4 mapa: html-map show function neighborhood
Fix html tooltip newline escaping
2026-03-17 09:58:30 +01:00
Willi Ballenthin
fe2373161d mapa: html-map assemblage ground truth 2026-03-16 20:34:53 +01:00
Willi Ballenthin
ac49d292cb mapa: consistent naming 2026-03-16 19:54:45 +01:00
Willi Ballenthin
877d8da73c mapa: add --output=html-map
Polish MAPA html split view
2026-03-16 19:54:45 +01:00
Willi Ballenthin
ae3a1b5dbf mapa: add assemblage ground truth rendering 2026-03-16 16:34:04 +01:00
Willi Ballenthin
22d90b157f mapa: add QS string tags

Vendor QS string databases and tag each string row with right-aligned
database-derived tags (#zlib, #winapi, #capa, #common, #code-junk,
etc.).

Tags are matched against raw strings before display trimming. The
visible tag policy suppresses #common when a more-specific tag is
present.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-16 15:58:24 +01:00
Willi Ballenthin
6db9e571eb plans: mapa QS string tags 2026-03-16 15:58:10 +01:00
Willi Ballenthin
a364659cc3 mapa: port from Lancelot/BinExport2 to IDALib/ida-domain
Replace the Lancelot/BinExport2 backend with an IDALib-only implementation
using ida-domain as the primary query surface.

New mapa/ package with five layers:
- model.py: backend-neutral dataclasses (MapaReport, MapaFunction, etc.)
- ida_db.py: database lifecycle with SHA-256 caching and flock guards
- collector.py: populates MapaReport from an open ida_domain.Database
- renderer.py: Rich-based text output from MapaReport
- cli.py: argument parsing, capa/assemblage loading, orchestration

Key behaviors preserved from the original:
- Report sections: meta, sections, libraries, functions (modules removed)
- Thunk chain resolution (depth 5, matching capa's THUNK_CHAIN_DEPTH_DELTA)
- Caller forwarding through thunks
- CFG stats with NOEXT|PREDS flags
- String extraction via data-reference chains (depth 10)
- Assemblage overlay and capa match attachment

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

mapa: suppress Lumina via IdaCommandOptions.plugin_options

Match capa's loader.py behavior: disable primary and secondary Lumina
servers by passing plugin_options through IdaCommandOptions, which maps
to IDA's -O switch. load_resources=True already provides -R.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

mapa: add __main__.py for python -m mapa invocation

scripts/mapa.py shadows the mapa package when run directly because
Python adds scripts/ to sys.path. The canonical invocation is now:

    python -m mapa <input_file> [options]

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

mapa: import idapro before ida_auto

idapro must be imported first because it mutates sys.path to make
ida_auto and other IDA modules available.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

mapa: guard against InvalidEAError in string/xref lookups

ida-domain raises InvalidEAError for unmapped addresses instead of
returning None. Guard data_refs_from_ea and strings.get_at calls
so the collector handles broken reference chains gracefully.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

mapa: change default/key theme color from black to blue

Black text is invisible on dark terminals. Use blue for function names,
keys, and values.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

mapa: use module.dll!func format for APIs and libraries

IDA strips .dll from PE import module names. Add it back so libraries
render as 'KERNEL32.dll' and API entries as 'KERNEL32.dll!CreateFileW'.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

mapa: lowercase module names in libraries and API entries

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

mapa: use FLOSS/capa regex-based string extraction instead of IDA string list

IDA's built-in string list has a minimum length threshold (~5 chars)
that silently drops short strings like "exec". Replace db.strings and
ida_bytes.get_strlit_contents with regex-based extraction from FLOSS/capa
that scans raw segment bytes for ASCII and UTF-16 LE strings (min 4 chars).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

mapa: simplify string extraction to on-demand via get_cstring_at

Replace upfront segment-scanning index with on-demand reads using
db.bytes.get_cstring_at, validated against FLOSS/capa printable ASCII
charset. The index approach missed mid-string references and did
unnecessary work scanning entire segments.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

mapa: add UTF-16 LE wide string extraction

Read raw bytes at data reference targets and check for both ASCII and
UTF-16 LE strings using FLOSS/capa printability heuristics. Neither
ida_domain's get_cstring_at nor get_string_at handles wide strings, so
we parse the byte patterns directly.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-16 15:04:26 +01:00
Willi Ballenthin
2456559bc0 plans: mapa idalib port 2026-03-16 12:15:28 +01:00
Willi Ballenthin
5dd1f49023 import codemap as mapa 2026-03-16 11:23:35 +01:00
57 changed files with 5986 additions and 3 deletions


@@ -89,6 +89,31 @@ To use capa as a library or integrate with another tool, see [doc/installation.m
**Documentation:** [Usage and tips](doc/usage.md) · [Installation](doc/installation.md) · [Limitations](doc/limitations.md) · [FAQ](doc/faq.md)
# mapa html map
mapa can render a standalone HTML overview of functions, string tags, and referenced strings.
```bash
python -m mapa binaries/01/16/mpbindump.exe --output html-map > report.html
```
On the `mpbindump.exe` sample, the generated report contains:
```text
doctype <!doctype html>
functions 1406
tags 12
strings 81
```
Use `--open` to write the report to a temporary file and open it in your browser.
```bash
python -m mapa binaries/01/16/mpbindump.exe --output html-map --open
```
The executable proof for this example is in `doc/demos/mapa-html-map.md`.
# capa Explorer Web
The [capa Explorer Web](https://mandiant.github.io/capa/explorer/) enables you to interactively explore capa results in your web browser. Besides the online version you can download a standalone HTML file for local offline usage.


@@ -490,10 +490,11 @@ def dumps_dynamic(extractor: DynamicFeatureExtractor) -> str:
     taddr = Address.from_capa(t.address)
     tfeatures = [
         ThreadFeature(
-            thread=taddr,
+            basic_block=taddr,
             address=Address.from_capa(addr),
             feature=feature_from_capa(feature),
-        )
+        )  # type: ignore
+        # Mypy is unable to recognise `basic_block` as an argument due to alias
         for feature, addr in extractor.extract_thread_features(p, t)
     ]


@@ -0,0 +1,34 @@
# mapa html-map demo
*2026-03-16T17:05:38Z by Showboat 0.6.1*
<!-- showboat-id: 1cf46a16-c3d7-459d-8593-c23080cb12f6 -->
Generate a standalone HTML report for a sample binary and summarize the report contents.
```bash
tmp=$(mktemp /tmp/mapa-html-map-XXXXXX.html)
PYTHONWARNINGS=ignore ./.venv/bin/python -m mapa binaries/01/16/mpbindump.exe --output html-map --quiet > "$tmp"
PYTHONWARNINGS=ignore /usr/bin/python3 - "$tmp" <<"PY"
import json
import re
import sys
from pathlib import Path
text = Path(sys.argv[1]).read_text()
match = re.search(r"<script type=\"application/json\" id=\"mapa-data\">(.*?)</script>", text, re.S)
data = json.loads(match.group(1))
print("doctype", text.splitlines()[0])
print("functions", len(data["functions"]))
print("tags", len(data["tags"]))
print("strings", len(data["strings"]))
PY
rm "$tmp"
```
```output
doctype <!doctype html>
functions 1406
tags 12
strings 81
```
To open the report directly in your browser, use `python -m mapa <sample> --output html-map --open`.


@@ -0,0 +1,105 @@
# Compilation Unit Boundary Evaluation
Ground truth data from assemblage: ~500K binaries with PDB-derived source file mappings, ~1000 functions each. This document describes how to turn that data into a repeatable evaluation pipeline for CU boundary detection.
## Ground truth definition
Each function in a binary has a source file path from debug info. Source files are either **impl files** (`.c`, `.cpp`, `.cc`, `.cxx`) or **headers** (`.h`, `.hpp`, extensionless STL headers like `vector`, `xmemory`, etc.). A compilation unit is defined by its impl file — header-sourced functions are template instantiations or inline expansions compiled into whichever CU includes them.
To assign each function to a CU:
1. Sort functions by start address.
2. For each function, if its source is an impl file, that's its CU.
3. Otherwise, forward-fill: inherit the CU of the most recent preceding impl function. If none exists (header functions at the start of the binary), backward-fill from the first impl file.
4. Ground truth boundaries = the set of gap indices where `CU[i] != CU[i-1]`.
This assignment was validated on 10 binaries (21K functions). Every ground truth boundary has an impl file on at least one side — there are zero boundaries where both flanking functions come from headers. Forward-fill produces CU fragmentation ratios of 1.0-1.2x (nearly perfectly contiguous CUs).
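The forward-/backward-fill procedure above can be sketched in a few lines of Python. This is an illustration, not the evaluation code itself; `assign_cus` and `boundaries` are hypothetical names, and impl-file detection by extension is the only assumption.

```python
from pathlib import PureWindowsPath

IMPL_EXTS = {".c", ".cpp", ".cc", ".cxx"}

def assign_cus(source_files: list[str]) -> list[str]:
    """Forward-fill CU assignment over functions sorted by start address.

    source_files[i] is the debug-info source path of the i-th function.
    Impl files define their own CU; header-sourced functions inherit the
    most recent preceding impl file, with backward-fill for headers that
    precede the first impl file."""
    cus: list[str | None] = []
    current: str | None = None
    for path in source_files:
        if PureWindowsPath(path).suffix.lower() in IMPL_EXTS:
            current = path  # an impl file starts (or continues) its own CU
        cus.append(current)
    first = next((c for c in cus if c is not None), None)
    return [c if c is not None else first for c in cus]

def boundaries(cus: list[str]) -> set[int]:
    """Ground truth boundaries: gap indices i where CU[i] != CU[i-1]."""
    return {i for i in range(1, len(cus)) if cus[i] != cus[i - 1]}
```

Note that extensionless STL headers like `vector` have no suffix and so never match `IMPL_EXTS`, which is exactly the intended behavior.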
## What the data looks like
Source file paths fall into classifiable categories based on filesystem structure. User code lives under `c:\assemblage\builds\HASH\PROJECT\*.cpp` (not under `external/`). Vendored libraries are at `...\PROJECT\external\LIBNAME\*`. MSVC STL headers are under `C:\Program Files\...\MSVC\...\include\` (e.g., `vector`, `xmemory`). CRT sources come from `D:\a\_work\...\vctools\crt\...` or `d:\th\minkernel\crts\...`.
Library identity is derivable from the path prefix. The assemblage build system makes external libraries explicit via the `external/LIBNAME/` convention.
### Template/STL interleaving
Functions from STL headers appear heavily interspersed within CUs. In SmxDecompiler.exe, 72% of functions come from MSVC STL headers, appearing in runs of up to 258 functions within a single CU. These are per-CU template instantiation copies (the same `std::_Verify_range` appears in 4 different CUs), not independent entities.
This means "source file changed" is a terrible boundary signal because it fires on every header-to-impl or header-to-header transition within a CU. On 3DSIFT.exe (heavy Eigen template use), source-file-change achieves only 3% precision. The actual CU boundary rate is low: only 5.9% of all inter-function gaps are real CU boundaries.
### CRT fragmentation
CRT code (from the ucrt static library) shows heavy fragmentation: dozens of tiny 1-3 function CUs interleaved by the linker. These are the hardest boundaries to detect and arguably the least useful to distinguish (separating `delete_scalar.cpp` from `new_array.cpp` has limited practical value). Consider collapsing all CRT functions into a single "CRT" meta-CU for scoring, or reporting CRT and non-CRT scores separately.
### Multi-fragment CUs
Some impl files appear in non-contiguous blocks (e.g., `inflate.c` interrupted by `infback.c`/`inffast.c` in zlib). This is uncommon in application code but frequent in CRT. For boundary scoring, treat each contiguous fragment as its own segment — the metric measures where transitions occur regardless of whether distant segments share a label.
## Metrics
### Primary: Boundary F1
For a predicted set of boundary gap indices vs the ground truth set, compute precision (fraction of predicted boundaries that are real), recall (fraction of real boundaries that are predicted), and F1 (harmonic mean). This directly measures the "where are the seams?" question. It's strict: a boundary predicted one function off from the true position scores as one FP + one FN.
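As a concrete sketch over two gap-index sets (a hypothetical helper, not the pipeline's actual code):

```python
def boundary_f1(pred: set[int], truth: set[int]) -> tuple[float, float, float]:
    """Precision, recall, F1 over boundary gap-index sets.

    Strict by design: a boundary predicted even one function off from
    the true position counts as one FP plus one FN."""
    tp = len(pred & truth)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(truth) if truth else 0.0
    denom = precision + recall
    return precision, recall, (2 * precision * recall / denom if denom else 0.0)
```

For example, predicting `{2, 5, 9}` against truth `{2, 5, 7}` scores the near-miss at 9 as both a false positive and a false negative.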
### Secondary: WindowDiff
Standard text segmentation metric, more forgiving of near-misses. Slides a window of size `k` across the sequence and counts positions where the number of boundaries within the window differs between prediction and ground truth. Lower is better. Use `k = avg_segment_length / 2`.
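A minimal WindowDiff implementation under the same gap-index convention (boundary `i` means a CU change between functions `i-1` and `i`); `window_diff` is an illustrative helper:

```python
def window_diff(pred: set[int], truth: set[int], n: int, k: int) -> float:
    """WindowDiff over a sequence of n functions with window size k.

    A boundary j (gap index) falls in the window starting at function i
    when i < j <= i + k; an error is any window where the predicted and
    true boundary counts differ. Lower is better; 0.0 is perfect."""
    def count(bounds: set[int], i: int) -> int:
        return sum(1 for j in bounds if i < j <= i + k)
    windows = range(n - k)
    errors = sum(1 for i in windows if count(pred, i) != count(truth, i))
    return errors / len(windows)
```

Unlike boundary F1, a prediction one gap off is only penalized in the windows that straddle the mismatch.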
### Library-level metrics
| Problem | Ground truth | Metric |
|---------|-------------|--------|
| CU boundaries | Impl-file assignment (above) | Boundary F1 + WindowDiff |
| Library clustering | Path-derived category (user-code, external:zlib, crt, msvc-stdlib, ...) | Adjusted Rand Index |
| User vs library | Binary classification from category | Per-function accuracy |
These are separate evaluations that can be run independently and solved in order: CU boundaries first, then library grouping, then user/library classification.
## Baseline results
Tested on the 10-binary sample (21K functions, 1238 CU boundaries):
| Heuristic | Precision | Recall | F1 range |
|-----------|-----------|--------|----------|
| No boundaries | - | 0.00 | 0.00 |
| Every source-file change | 0.03-0.79 | 1.00 | 0.06-0.88 |
| Category change | 0.11-0.50 | 0.02-0.28 | 0.04-0.29 |
| Impl-file stream tracking | 1.00 | 1.00 | 1.00 |
The impl-file oracle (skip headers, flag when the .c/.cpp changes) achieves perfect F1. This is the ceiling; it confirms that CU boundaries are exactly the impl-file transitions in the source data. The gap between the naive baselines and this oracle is the space we're trying to close using binary-level features.
## Evaluation pipeline
### Step 1: Ground truth extraction
From debug info, produce a normalized table per binary:
```
binary_hash | func_index | func_start | func_end | cu_id | library_category
```
`cu_id` is the impl file path (or a hash of it). `library_category` is derived from the path prefix. Store as Parquet or SQLite since the CSV format won't scale to 500K binaries.
Pre-compute per-binary boundary sets as arrays of gap indices.
### Step 2: Predictor interface
A predictor takes a binary's function list (start addresses and sizes only, no source info, no debug info) and returns a set of predicted boundary indices. The predictor can use any features extractable from the raw binary: inter-function gap sizes and padding patterns, call graph edge density across each gap, data reference patterns, string attribution, function naming (if symbols present), alignment byte patterns, etc.
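The interface can be pinned down with a structural `Protocol`; the names here (`FuncSpan`, `BoundaryPredictor`, `GapSizePredictor`) and the padding heuristic are illustrative, not part of the pipeline:

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass(frozen=True)
class FuncSpan:
    start: int  # function start VA
    size: int   # function size in bytes

class BoundaryPredictor(Protocol):
    def predict(self, funcs: list[FuncSpan]) -> set[int]:
        """Return predicted boundary gap indices (1 .. len(funcs)-1)."""
        ...

class GapSizePredictor:
    """Toy baseline: flag a boundary wherever the inter-function padding
    exceeds a threshold. Real predictors would layer call-graph and
    data-reference features behind the same interface."""

    def __init__(self, threshold: int = 16) -> None:
        self.threshold = threshold

    def predict(self, funcs: list[FuncSpan]) -> set[int]:
        return {
            i for i in range(1, len(funcs))
            if funcs[i].start - (funcs[i - 1].start + funcs[i - 1].size) > self.threshold
        }
```

Any object with a conforming `predict` method satisfies `BoundaryPredictor` without inheriting from it.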
### Step 3: Scoring
For each binary, compute boundary F1 between prediction and ground truth. Aggregate across the dataset:
Report both macro-average (mean F1 across binaries, treating each binary equally) and micro-average (pool all gaps, compute F1 once, biased toward larger binaries). Add per-category breakdowns (user code only, CRT only, etc.) to understand where the predictor succeeds and fails.
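A sketch of both aggregations (hypothetical helper; micro-averaging pools TP/FP/FN counts rather than pooling gap indices directly, which avoids index collisions across binaries):

```python
def f1_from_counts(tp: int, fp: int, fn: int) -> float:
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def aggregate(per_binary: list[tuple[set[int], set[int]]]) -> tuple[float, float]:
    """Macro- and micro-averaged boundary F1.

    per_binary holds one (predicted, truth) gap-index pair per binary.
    Macro averages per-binary F1 (each binary weighted equally); micro
    pools the counts, weighting larger binaries more heavily."""
    f1s = []
    tp = fp = fn = 0
    for pred, truth in per_binary:
        btp, bfp, bfn = len(pred & truth), len(pred - truth), len(truth - pred)
        f1s.append(f1_from_counts(btp, bfp, bfn))
        tp, fp, fn = tp + btp, fp + bfp, fn + bfn
    return sum(f1s) / len(f1s), f1_from_counts(tp, fp, fn)
```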
### Step 4: Iterate
Try different feature combinations and scoring strategies, compare F1. Per mapa-cu-ideas.md, the boundary scorer is the spine of the pipeline and every algorithm becomes a feature generator for gap scoring.
## Design decisions
Boundary F1 was chosen over clustering metrics (ARI, NMI) because those conflate boundary detection with label assignment. We want to decouple the two: find boundaries first, then label segments.
Forward-fill for CU assignment matches how compilation works. The compiler processes the impl file's own functions first, then template instantiations from included headers. Header functions belong to the most recent impl file. The 1.0-1.2x fragmentation ratio validates this model.
Header-to-header transitions are never counted as boundaries because the data shows zero CU boundaries where both flanking functions come from headers. Every CU has at least one impl file as its root. Template instantiations from different headers within the same CU are not separate CUs.

doc/plans/design.md Normal file

@@ -0,0 +1,100 @@
# mapa design
## Architecture
Seven layers, each in its own module under the `mapa/` package:
| Module | Responsibility |
|---|---|
| `mapa/model.py` | Backend-neutral dataclasses: `MapaReport`, `MapaMeta`, `MapaSection`, `MapaLibrary`, `MapaFunction`, `MapaCall`, `MapaCaller`, `MapaString`, `MapaProgramString`, `AssemblageRecord` |
| `mapa/assemblage.py` | Assemblage CSV loading, column validation, SHA-256 filtering, RVA-to-VA conversion, exact-row deduplication |
| `mapa/ida_db.py` | IDA database lifecycle: `resolve_database()`, `open_database_session()`, SHA-256 caching, flock-based concurrency guard |
| `mapa/collector.py` | Populates `MapaReport` from an open `ida_domain.Database`. All IDA queries live here. |
| `mapa/renderer.py` | Rich-based text rendering and plain single-function summary formatting from `MapaReport`. No IDA dependency. |
| `mapa/html_renderer.py` | Self-contained `html-map` rendering from `MapaReport`. No IDA dependency. |
| `mapa/cli.py` | Argument parsing, capa/assemblage loading, output-mode selection, `--open` temp-file/browser handling, orchestration |
`scripts/mapa.py` is a thin entry point that delegates to `mapa.cli.main()`.
The CLI validates output-mode combinations before analysis. For `--output html-map --open`, it renders the HTML once, writes it to a temporary `.html` file via `NamedTemporaryFile(delete=False)`, and opens the browser with `webbrowser.open(file://...)`.
## Database lifecycle
Modeled on `idals.py` from idawilli:
1. If input is `.i64`/`.idb`, use directly.
2. Otherwise, hash the file (MD5 + SHA-256), check `~/.cache/mandiant/mapa/<sha256>.i64`.
3. On cache miss: acquire advisory flock, create database via `Database.open()` with `IdaCommandOptions(auto_analysis=True, new_database=True, output_database=..., load_resources=True)`, wait for `ida_auto.auto_wait()`.
4. On cache hit or after creation: open read-only with `new_database=False, save_on_close=False`.
5. Concurrency guard: poll for `.nam` file disappearance + `fcntl.flock` on `<db>.lock` + TOCTOU re-check.
## Assemblage loading
Assemblage loading is deferred until after mapa opens the IDA database, because the effective sample SHA-256 may come from either the raw input file or the database metadata.
`mapa.assemblage.load_assemblage_records()`:
- reads the CSV with `csv.DictReader`
- requires `hash`, `name`, `start`, `end`, and `source_file`
- filters rows by sample SHA-256, case-insensitively
- converts `start` and `end` RVAs to VAs by adding `db.base_address`
- strips the trailing provenance suffix from `source_file` only at render time, via `AssemblageRecord.source_path`
- deduplicates exact duplicate rows while preserving CSV order for distinct ambiguous matches
The result is `dict[int, list[AssemblageRecord]]`, keyed by function start VA.
## Collector design
The collector builds several indexes before the main function loop:
- import_index: `dict[int, (module, name)]` from `db.imports.get_all_imports()`
- extern_addrs: `set[int]` from functions in XTRN segments
- thunk_targets: `dict[int, int]` via `_resolve_thunk_target()` — follows code refs then data refs, max depth 5, single-target chains only
- resolved_callers/callees: built by walking all non-thunk function flowcharts, resolving call targets through thunk chains, classifying as internal vs API
String extraction follows single data-reference chains from each instruction up to depth 10. The collector returns both the discovered string VA and the raw string value for each hit.
The collector stores string data in two shapes:
- `MapaFunction.strings` for the text report and tooltip summaries. These stay function-local and deduplicate by trimmed display value.
- `MapaReport.program_strings` for `html-map`. These are keyed by string VA, preserve duplicate display values at different addresses, merge tags across repeated references, and track the set of referencing function addresses.
Assemblage data is attached per function during collection. `MapaFunction.assemblage_records` carries zero or more `AssemblageRecord` values for the function start address. The collector does not use Assemblage to rename functions, callers, or callees.
## ida-domain API usage
Primary queries used:
- `db.functions` — iteration, `get_at()`, `get_name()`, `get_flags()`, `get_flowchart()`
- `db.segments.get_all()` — section enumeration
- `db.imports.get_all_modules()`, `get_all_imports()` — library/import enumeration
- `db.xrefs.code_refs_from_ea()`, `data_refs_from_ea()`, `calls_from_ea()` — call/thunk resolution
- `db.instructions.is_call_instruction()`, `get_mnemonic()` — instruction classification
- `db.heads.size()` — instruction byte size
- `FlowChart` with `FlowChartFlags.NOEXT | FlowChartFlags.PREDS` — CFG traversal
- `FunctionFlags.THUNK`, `FunctionFlags.LIB` — function classification
No legacy `ida_*` module calls are used. All queries go through `ida-domain`.
## Rendering
`mapa/renderer.py` prints the text report in function address order. For each function, it prints the IDA-derived header first and then any Assemblage annotations as `assemblage name:` and `assemblage file:` lines. When multiple distinct Assemblage rows map to one function start address, the renderer prints all of them in order.
The text renderer also exposes a plain single-function summary formatter used by `html-map` tooltips. The row order matches text mode: Assemblage lines, xrefs, CFG stats, capa matches, internal calls, APIs, and strings.
For source-file separators, mapa uses the first Assemblage record's normalized source path as the function's primary source path. The text renderer tracks the last seen non-empty primary path across the function list. Missing Assemblage data does not trigger a separator and does not reset that state. When a later function introduces a different primary path, the renderer prints a muted horizontal rule with `[ <path> ]` immediately before that function.
`mapa/html_renderer.py` renders a single self-contained HTML document. It emits a split view: a left function pane and a right string pane, both with independent scrolling. The panes are separated by a draggable vertical divider implemented with a small inline pointer-event handler. The renderer emits:
- one square per function in address order
- one program-string row per string VA in address order
- tag controls with visible function counts
- direction and depth controls for neighborhood traversal
- a small legend for heat/seed/dim semantics
- right-aligned visible tags in each string row
- inline JSON data for function summaries, direct tag memberships, direct string memberships, and caller/callee adjacency
- a single floating tooltip and a small inline script

That script resolves the active seed source from a hovered or locked function, tag, or string; chooses caller-only, callee-only, or undirected traversal; runs a bounded breadth-first search from each seed; sums geometric-decay contributions using per-seed shortest distance; and renders the result as a heat overlay with a distinct seed outline.
## String tagging
Vendored Quantum Strand string databases live under `mapa/string_tags/data/` in five families: OSS/CRT libraries (gzipped JSONL), expert rules (plain JSONL), Windows API names (gzipped text), global prevalence (gzipped JSONL + binary hash files), and junk-code strings (gzipped JSONL).
The `mapa/string_tags/` package has three modules:
- `model.py``StringTagMatch` and `StringTagResult` dataclasses
- `loaders.py` — file-format readers using `msgspec`, `gzip`, `hashlib`, and `importlib.resources`
- `tagger.py``StringTagger` class with `tag_string(raw) -> StringTagResult`, plus `load_default_tagger()` which lazily loads and caches all databases process-wide
The collector tags raw strings before `rstrip()` trimming. When two raw strings collapse to the same display value, their tags and match metadata are merged. `MapaString` carries `tags: tuple[str, ...]` and `tag_matches: tuple[StringTagMatch, ...]`.
The text renderer uses a Rich `Text`-based helper to right-align the visible tag column on `string:` rows. The HTML renderer reuses the same visible-tag policy, builds its top tag controls from those visible tags only, shows the distinct-function count for each visible tag, and renders the visible tags right-aligned in each program-string row. The visible tag policy suppresses `#common` when a more-specific tag is also present.
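The visible-tag policy reduces to a few lines; this sketch assumes only the suppression rule stated above and nothing else about `StringTagger` (the helper name `visible_tags` is hypothetical):

```python
def visible_tags(tags: tuple[str, ...]) -> tuple[str, ...]:
    """Suppress #common when any more-specific tag is also present."""
    specific = tuple(t for t in tags if t != "#common")
    return specific if specific else tags
```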

doc/plans/mapa-cu-ideas.md Normal file

@@ -0,0 +1,256 @@
> read problem-transcript.md and digest it, summarize it, restructure it and explain it.
>
> ok. now ultrathink. consider the problem deeply. try to give some insights, some novel perspective, ideas for techniques and algorithms. what strikes you?
I've been sitting with this for a while. Here's what strikes me, organized from the structural observations that feel most fundamental down to concrete algorithmic ideas.
------
## The Deep Structure: A Graph Pinned to a Line
The most mathematically distinctive thing about this problem is that you have a *graph* (the call graph, the data xref graph) whose nodes are *pinned to a one-dimensional manifold* (the address space). This is a very specific structure that most generic graph algorithms ignore entirely. It means you're not just doing community detection — you're doing community detection where the communities are *intervals*. That's a massive constraint that dramatically shrinks the search space.
Think about it: if you have 10,000 functions, the number of arbitrary partitions is astronomically large (the Bell number). But the number of ways to partition 10,000 linearly-ordered nodes into *contiguous groups* is exactly 2^9,999 — one independent bit per gap — and if you're looking for, say, 20 boundaries, it's just C(9999, 20): large, but tractable with dynamic programming. The contiguity assumption turns a combinatorially explosive problem into something closer to 1D segmentation, which is a much better-understood and more tractable class of problems.
## The Call Edge Length Distribution — A Fingerprint of Structure
Here's an idea I find really compelling. For every edge in the call graph, compute the *address distance* between caller and callee: `|addr(A) - addr(B)|`. Now plot the histogram of these edge lengths across the whole binary.
I'd expect this histogram to be multimodal. There should be a peak near zero representing intra-compilation-unit calls (functions calling their neighbors). There should be a broader hump at medium range representing intra-library cross-CU calls. And there should be long-range outliers representing cross-boundary calls — where user code calls into a library at a distant address.
The valleys between these modes are telling you the *characteristic scales* of the hierarchy. The first valley gives you the typical size of a compilation unit. The second gives you the typical size of a library. This is essentially a multi-scale analysis of the binary's structure, and you get it almost for free before running any clustering algorithm. You could use those characteristic distances as thresholds for later steps.
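A sketch of the histogram (log2-binning is a choice made here, not something the analysis prescribes):

```python
import math
from collections import Counter

def edge_length_histogram(call_edges, bin_width_log2: int = 1) -> dict:
    """Histogram of |addr(caller) - addr(callee)| over all call edges,
    binned on a log2 scale. Valleys between modes suggest characteristic
    compilation-unit and library sizes."""
    hist = Counter()
    for src, dst in call_edges:
        dist = abs(src - dst)
        if dist:
            hist[int(math.log2(dist)) // bin_width_log2] += 1
    return dict(sorted(hist.items()))
```

On a real binary the keys of interest are the sparse bins between the near-zero intra-CU peak and the long-range cross-library tail.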
## Min-Cut / Max-Flow for Boundary Detection
The transcript describes a "tension" along paths between known first-party anchors (like `main`) and known library anchors (like FLIRT-matched or string-anchored functions). This maps almost perfectly to a **minimum cut problem**.
Set it up like this: create a super-source *S* connected to all known first-party nodes, and a super-sink *T* connected to all known library nodes. The edges in the call graph get capacities. What should the capacity be? This is where you encode your priors — you could weight by *inverse address distance* (so cutting a long-range edge is cheap, cutting a short-range edge between neighbors is expensive), or by the number of shared data references, or both. Then the min-cut between S and T partitions the graph into first-party and third-party, and the *cut edges themselves are literally the API boundary* — the places where user code transitions into library code.
This is elegant because it naturally handles ambiguity. You don't have to hand-define a propagation rule; the flow algorithm figures out where the cheapest place to draw the line is. And you can add new anchors (from FLIRT, strings, human input) simply by connecting them to S or T, and the cut re-adjusts.
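A self-contained Edmonds-Karp sketch of that setup (assumes edge capacities have already been assigned, e.g. from inverse address distance; `min_cut` is illustrative, not an existing implementation):

```python
from collections import defaultdict, deque

def min_cut(edges: dict, sources: set, sinks: set) -> set:
    """Edmonds-Karp max-flow / min-cut over a capacitated call graph.

    edges maps (u, v) -> capacity (add both directions for undirected
    modeling). Returns the set of nodes on the source ("first-party")
    side; the saturated edges crossing the cut are the API boundary."""
    INF = float("inf")
    cap = defaultdict(float)
    adj = defaultdict(set)

    def add(u, v, c):
        cap[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)  # residual arc

    for (u, v), c in edges.items():
        add(u, v, c)
    S, T = object(), object()  # super-source / super-sink sentinels
    for s in sources:
        add(S, s, INF)
    for t in sinks:
        add(t, T, INF)

    def bfs():
        parent = {S: None}
        queue = deque([S])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 1e-12:
                    parent[v] = u
                    if v is T:
                        return parent
                    queue.append(v)
        return None

    while (parent := bfs()) is not None:
        v, bottleneck = T, INF
        while parent[v] is not None:   # find bottleneck capacity
            bottleneck = min(bottleneck, cap[(parent[v], v)])
            v = parent[v]
        v = T
        while parent[v] is not None:   # push flow along the augmenting path
            u = parent[v]
            cap[(u, v)] -= bottleneck
            cap[(v, u)] += bottleneck
            v = u

    reachable, queue = {S}, deque([S])  # residual reachability = source side
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in reachable and cap[(u, v)] > 1e-12:
                reachable.add(v)
                queue.append(v)
    return {n for n in reachable if n is not S and n is not T}
```

On a toy chain `a—b—c—d` where the cheap edge is `b—c`, anchoring `a` first-party and `d` library cuts exactly there.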
## The Genome Annotation Analogy
This problem is strikingly similar to **genome annotation** in bioinformatics. You have a long linear sequence (the binary). It contains functional regions (genes = functions). These regions cluster into operons and gene families (compilation units, libraries). You have some known markers (homolog matches ≈ FLIRT signatures). You want to segment the sequence into regions of different character, exploiting the fact that related elements are co-located.
The classic tool for this is the **Hidden Markov Model**. Imagine an HMM where the hidden states represent "this function belongs to first-party code," "this function belongs to library A, CU 1," etc. The observations at each state are the features of that function — its string references, its call graph characteristics, its instruction patterns. The transition probabilities encode the contiguity assumption: staying in the same CU is high probability, transitioning between CUs within a library is medium, jumping between libraries is low.
You'd train this in a semi-supervised way, anchoring some states with your ground truth, and letting Viterbi or forward-backward propagate through the rest. The HMM framework is natural here because the binary *is* a sequence, and the Markov property (what matters is what group you're currently in, not the full history) matches the linker's behavior well.
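The decoding step is ordinary Viterbi; a minimal log-space sketch over precomputed per-function emission scores (the two-state setup and all scores below are placeholders for the real feature model):

```python
def viterbi(obs_scores: list[list[float]], log_trans: list[list[float]],
            log_init: list[float]) -> list[int]:
    """Max-product decoding of the most likely hidden state sequence.

    obs_scores[t][s]: log-likelihood of function t's features under state s.
    log_trans[p][s]: transition score; a sticky diagonal encodes the
    contiguity prior (staying in a CU is cheap, jumping is expensive)."""
    n_states = len(log_init)
    score = [log_init[s] + obs_scores[0][s] for s in range(n_states)]
    back: list[list[int]] = []
    for t in range(1, len(obs_scores)):
        prev, score, ptr = score, [], []
        for s in range(n_states):
            best = max(range(n_states), key=lambda p: prev[p] + log_trans[p][s])
            score.append(prev[best] + log_trans[best][s] + obs_scores[t][s])
            ptr.append(best)
        back.append(ptr)
    path = [max(range(n_states), key=lambda s: score[s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]
```

With sticky transitions, weak per-function evidence is smoothed into contiguous runs, and the single optimal switch point is the predicted boundary.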
## Personalized PageRank as a Soft Classification
Here's a technique that handles the graph propagation cleanly. Run **personalized PageRank** (PPR) seeded from each anchor set. One PPR vector seeded from `main` gives you a "reachability-from-first-party" score for every node. Another seeded from known zlib functions gives a "zlib affinity" score. A third from OpenSSL anchors gives an "OpenSSL affinity" score.
Each function is then softly classified by which PPR vector dominates at that node. The beauty is that PPR naturally decays with graph distance and respects the connectivity structure — a tightly-knit library will "trap" the random walk, giving high PPR scores internally and low scores outside. The boundary between two regions is where the competing PPR scores cross over.
You could even visualize this as a heat map over the address space, and the boundaries would pop out visually.
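PPR needs nothing more than power iteration with the teleport distribution restricted to the seed set; a dependency-free sketch (the adjacency-dict representation is an assumption):

```python
def personalized_pagerank(adj: dict, seeds: set, damping: float = 0.85,
                          iters: int = 50) -> dict:
    """Power-iteration personalized PageRank.

    adj maps each node to a list of neighbors; the restart distribution
    is uniform over `seeds` instead of over all nodes, so rank mass
    decays with graph distance from the anchors."""
    restart = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in adj}
    rank = dict(restart)
    for _ in range(iters):
        nxt = {n: (1.0 - damping) * restart[n] for n in adj}
        for u, neighbors in adj.items():
            if not neighbors:
                continue
            share = damping * rank[u] / len(neighbors)
            for v in neighbors:
                nxt[v] += share
        rank = nxt
    return rank
```

Running this once per anchor set and assigning each function to the dominating vector gives the soft classification; the crossover nodes are the boundary candidates.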
## Exploiting the Bipartite Structure: Code ↔ Data
The transcript mentions that data locality mirrors code locality, but I think this signal is underexploited if you treat it as a separate, independent confirmation channel. Consider the *bipartite graph* between functions and the data items they reference. Two functions that reference overlapping or nearby data are likely from the same compilation unit. This gives you a similarity measure between functions that's *completely independent* of the call graph.
You could build a function-to-function similarity matrix where similarity = how much their referenced data overlaps or how close their referenced data is in the data section. Then cluster on this matrix. The resulting clusters should correspond to compilation units, and they give you a second opinion that you can reconcile with the call-graph-derived clusters.
When both signals agree, you have high confidence. Where they disagree, something interesting is happening — maybe a utility function shared across CUs, or a data section that got reordered.
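The simplest version of that similarity matrix is Jaccard overlap of referenced data addresses (a sketch; weighting by data-section proximity would be the natural refinement):

```python
from itertools import combinations

def data_similarity(data_refs: dict) -> dict:
    """Jaccard similarity between function pairs over referenced data.

    data_refs maps function address -> set of referenced data VAs.
    High overlap suggests the two functions share a compilation unit."""
    sims = {}
    for a, b in combinations(sorted(data_refs), 2):
        union = data_refs[a] | data_refs[b]
        sims[(a, b)] = len(data_refs[a] & data_refs[b]) / len(union) if union else 0.0
    return sims
```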
## Directed Graph Asymmetry as a Library Detector
Here's a subtlety the transcript alludes to but doesn't fully develop. Call graphs are *directed*, and the directionality matters. Library code has a specific shape: it gets **called from far away** (user code invokes the library's public API) but **calls nearby functions** (internal library routines). So a library entry point has high *in-degree from distant addresses* but low *out-degree to distant addresses*.
You could compute, for each function, two metrics: the average address distance of its callers, and the average address distance of its callees. Plot every function on this 2D plane. I'd expect clusters to emerge — first-party code has short-range callers and a mix of callees (some local, some long-range calls into libraries), while library internals have short-range everything. Library *entry points* are the distinctive ones: long-range callers, short-range callees. They're the "necks" between the communities.
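The two per-function metrics are trivial to compute once caller/callee address lists exist (`distance_profile` is a hypothetical name):

```python
from statistics import mean

def distance_profile(func_addr: int, caller_addrs: list[int],
                     callee_addrs: list[int]) -> tuple[float, float]:
    """Mean address distance to callers vs to callees for one function.

    Library entry points should stand out on this plane: long-range
    callers (user code calling in) but short-range callees (internal
    library routines)."""
    d_in = mean(abs(func_addr - a) for a in caller_addrs) if caller_addrs else 0.0
    d_out = mean(abs(func_addr - a) for a in callee_addrs) if callee_addrs else 0.0
    return d_in, d_out
```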
## The Bisection Oracle Strategy
The "git bisect" idea from the transcript is really interesting and I think it deserves formal treatment. If you have an expensive oracle (heavy analysis, LLM query, human analyst) and you want to minimize queries, this is an **active learning** problem.
Concretely: you have a sequence of functions ordered by address, and you know that somewhere in this sequence there's a boundary between first-party and library code. You believe the sequence is monotonic (once you cross from first-party to library, you don't come back — at least within a region). Binary search finds the boundary in O(log n) queries. But real binaries might have multiple interleaved regions, so you'd need a slightly more sophisticated approach — maybe something like the **multi-armed bandit** formulation, where each candidate boundary location has an uncertainty estimate, and you query the point that maximally reduces global uncertainty.
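Under the monotone single-boundary assumption, the bisection is a textbook binary search over the address-ordered function list. A sketch, with a stand-in oracle (real oracles would be the expensive analysis or analyst query):

```python
def find_boundary(funcs, is_library):
    """Binary-search the index of the first library function in an
    address-ordered list, assuming one monotone first-party -> library
    transition. Returns (boundary_index, oracle_query_count)."""
    lo, hi, queries = 0, len(funcs), 0
    while lo < hi:
        mid = (lo + hi) // 2
        queries += 1
        if is_library(funcs[mid]):
            hi = mid          # boundary is at mid or earlier
        else:
            lo = mid + 1      # boundary is strictly after mid
    return lo, queries

funcs = list(range(1000))      # stand-in for 1000 address-ordered functions
oracle = lambda f: f >= 730    # hypothetical: index 730 onward is library code
boundary, n_queries = find_boundary(funcs, oracle)
```

With 1000 functions this needs at most 10 oracle calls; the multi-region generalization would run this per uncertain segment rather than once globally.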
## Strongly Connected Components as Ground Truth CU Detectors
One more concrete trick: compute the **strongly connected components** of the call graph. Mutual recursion (A calls B, B calls A) almost always happens within a single compilation unit. So SCCs give you "for free" sets of functions that definitely belong together. These become seeds for clustering — merge each SCC into a single super-node, then run your boundary detection on the reduced graph.
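A self-contained sketch of the SCC collapse (in practice a graph library such as networkx, with `strongly_connected_components` and `condensation`, would do this; here is a pure-Python Kosaraju two-pass version on a toy call graph):

```python
def sccs(graph):
    """Strongly connected components of a call graph given as
    {function: set(callees)}, via Kosaraju's two-pass algorithm."""
    nodes = set(graph) | {m for outs in graph.values() for m in outs}

    def dfs(g, start, seen, out):
        # iterative DFS appending nodes in post-order (finish times)
        stack = [(start, iter(sorted(g.get(start, ()))))]
        seen.add(start)
        while stack:
            node, it = stack[-1]
            for m in it:
                if m not in seen:
                    seen.add(m)
                    stack.append((m, iter(sorted(g.get(m, ())))))
                    break
            else:
                stack.pop()
                out.append(node)

    order, seen = [], set()
    for n in sorted(nodes):
        if n not in seen:
            dfs(graph, n, seen, order)

    rev = {}
    for n, outs in graph.items():
        for m in outs:
            rev.setdefault(m, set()).add(n)

    comps, seen = [], set()
    for n in reversed(order):   # second pass on the reversed graph
        if n not in seen:
            comp = []
            dfs(rev, n, seen, comp)
            comps.append(frozenset(comp))
    return comps

# mutual recursion A <-> B collapses to one super-node; C stays alone
components = set(sccs({"A": {"B"}, "B": {"A"}, "C": {"A"}}))
```

Each resulting component becomes one super-node for the downstream boundary detection.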
------
## Synthesis
If I were designing the pipeline, I'd layer these techniques:
Start with **cheap deterministic anchors** — FLIRT matches, string attribution, `main` identification. Then use the **edge-length histogram** to estimate the characteristic scales. Run **SCC detection** to collapse definite-same-CU functions. Build edge weights from address proximity and shared data references, then use **min-cut** from known-first-party to known-library to find the primary boundary. Within each side, use **agglomerative clustering** on the address axis, breaking only at points where the call graph edge density drops off, to find compilation unit boundaries. Use **personalized PageRank** from each library's anchors for soft assignment. Finally, reconcile against the **data xref bipartite clustering** as independent validation.
The result would be a hierarchical segmentation of the address space with confidence scores, and the boundaries would be exactly the API transition points that a reverse engineer cares about.
---
These are good, and the interesting thing is where they overlap with what I already proposed — and more importantly, where the *gaps between them* reveal something deeper. Let me think through what these trigger.
------
> here are some additional ideas. do they cause you to have any new insights or ideas?
>
> To transition MAPA from a good heuristic tool to a cutting-edge analytical engine, here are four algorithms that fit perfectly into their architecture: Algorithm 1: Spatial-Graph Label Propagation (Solving the "Tension") Instead of manually traversing graphs, use Label Propagation.
>
> 1. Build an adjacency matrix of the functions.
> 2. Weight the edges by both call frequency AND physical memory proximity. (e.g., Function A and B are strongly linked if A calls B and they are 100 bytes apart).
> 3. Lock the "Anchor" nodes (Ground Truth).
> 4. Run the propagation algorithm. It will mathematically resolve the "tension" the speaker mentioned, assigning a probability score to every single function (e.g., "Function 0x40100 is 88% likely to be zlib"). Algorithm 2: Spectral Clustering for "Compilation Units" The speaker wants to identify individual compilation units (source files). Because linkers lay out source files contiguously, compilation units form ultra-dense "cliques" in the graph.
>
> - Use Spectral Graph Partitioning. By calculating the Laplacian of the call graph, the algorithm will naturally shatter the binary into discrete chunks.
> - Because compilation units are sequential in memory, you can penalize the clustering algorithm for grouping functions that are physically far apart. The output will be distinct, colored blocks perfectly mapping to the original .c or .cpp files. Algorithm 3: Minimum Cut / Maximum Flow for API Boundaries To find the exact boundary between user code and library code, treat the graph like a pipe network carrying water from main to the library strings.
> - Run a Min-Cut/Max-Flow algorithm. The algorithm will identify the "bottleneck" edges where water struggles to pass from the malware to the library.
> - Those bottlenecks are, almost by definition, the public API functions of the statically linked library.
## The Three Algorithms Are Solving Three Different Sub-Problems
The most important realization is that these three aren't competing approaches — they're a **layered pipeline** attacking different levels of the hierarchy. Min-cut finds the macro boundary (first-party vs. library). Spectral clustering finds the meso-structure (compilation units). Label propagation assigns soft identities (which library, what confidence). That ordering matters, and each stage's output feeds the next.
But there's something missing from this pipeline: none of them address **how to discover the right number of groups**. Min-cut gives you a bisection, but a real binary might have first-party code plus six libraries. Spectral clustering needs you to pick *k*. Label propagation needs seed labels. So the first real question is: **how many libraries are we even looking for?**
## The Eigengap Tells You How Many Libraries Exist
This is where the spectral approach becomes more powerful than the proposal suggests. Don't just use spectral clustering to *partition* — first use the **eigenvalue spectrum of the graph Laplacian** to *discover* the structure. Specifically, compute the first, say, 50 eigenvalues of the address-weighted call graph Laplacian and look for the **eigengap** — the largest jump between consecutive eigenvalues. If the first three eigenvalues are near zero and then the fourth jumps up, that's telling you the graph has roughly three natural communities. You've learned the number of libraries (plus first-party code) before doing any clustering at all.
Even better: the **eigenvectors** themselves, when plotted against function address, should look like step functions. Within a compilation unit, all functions have similar eigenvector values (because they're tightly coupled). At a boundary, the values jump. So you can literally do **change-point detection on the Fiedler vector** (the eigenvector corresponding to the second-smallest eigenvalue) plotted over the address axis, and the change points are your compilation unit boundaries. This fuses the spectral and spatial approaches in a way that's more principled than penalizing distance in the clustering objective.
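Assuming NumPy is available, the eigengap heuristic is a few lines. This sketch builds a toy weighted adjacency with two tight blocks joined by one weak edge and recovers k = 2 from the Laplacian spectrum:

```python
import numpy as np

def eigengap_k(adj, kmax=10):
    """Estimate community count from the largest gap in the smallest
    eigenvalues of the unnormalized Laplacian L = D - W."""
    lap = np.diag(adj.sum(axis=1)) - adj
    vals = np.sort(np.linalg.eigvalsh(lap))[:kmax]
    gaps = np.diff(vals)
    # a large gap after the k-th eigenvalue suggests k natural communities
    return int(np.argmax(gaps)) + 1

# two tight four-function blocks with one weak cross-boundary call edge
w = np.zeros((8, 8))
w[:4, :4] = 1.0
w[4:, 4:] = 1.0
np.fill_diagonal(w, 0.0)
w[3, 4] = w[4, 3] = 0.05
k = eigengap_k(w)
```

On a real binary the adjacency would be the address-weighted call graph, and one would use a sparse eigensolver (`scipy.sparse.linalg.eigsh`) for the smallest eigenpairs rather than a dense decomposition.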
## The Weighting Scheme Deserves More Thought
Algorithm 1 proposes weighting edges by call frequency AND memory proximity. This sounds right but the *combination function* matters enormously. Additive weighting (`w = α·calls + β·proximity`) treats them as interchangeable signals. Multiplicative weighting (`w = calls · f(proximity)`) treats proximity as a **prior** that modulates graph evidence. I think the multiplicative form is more correct.
Here's why: a call edge between two functions 50 bytes apart is *qualitatively different* from a call edge between two functions 500KB apart. The first is almost certainly intra-CU. The second is almost certainly cross-boundary. The call exists in both cases, but its *meaning* changes based on distance. A multiplicative weight captures this — a long-range call gets a small weight, so cutting it is cheap (it's probably a cross-boundary call), while cutting a short-range call is expensive.
Concretely, something like `w(e) = calls(e) · exp(-distance(e) / λ)` where λ is the characteristic CU size (which you estimated from the edge-length histogram I described earlier). This exponential decay naturally encodes "nearby calls bind tightly, distant calls bind weakly."
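The formula is tiny in code; the λ default of 0x1000 bytes here is an arbitrary placeholder for the histogram-estimated CU scale:

```python
import math

def edge_weight(calls, src, dst, lam=0x1000):
    """Multiplicative weighting: call-count evidence modulated by an
    exponential distance prior with characteristic CU size lam."""
    return calls * math.exp(-abs(dst - src) / lam)

near = edge_weight(1, 0x401000, 0x401050)   # 80 bytes apart: weight ~ calls
far = edge_weight(1, 0x401000, 0x481000)    # 512 KiB apart: weight ~ 0
```

Cutting `far` costs almost nothing, so min-cut naturally prefers to sever long-range calls, which is exactly the intended "cross-boundary calls are cheap to cut" behavior.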
## A Critique of Min-Cut That Leads Somewhere Interesting
The min-cut formulation as described has a subtle problem: it's **binary**. It partitions into first-party and library. But what if there are three libraries? You'd need to run it multiple times, or use multi-terminal cut, which gets expensive.
But here's what the min-cut framing *does* give you that's uniquely valuable: the **cut edges are the API boundary**, and the **capacity of the cut is a confidence measure**. A large min-cut value means the boundary is well-defined (lots of edges had to be severed). A small min-cut means the separation is tenuous — maybe you've drawn the line in the wrong place, or maybe the "library" is actually tightly integrated with user code.
This leads to a practical idea: **iterate**. Run min-cut to get a first-party/library partition. Then *within the library side*, run min-cut again between the two most distant anchor sets (say, zlib anchors vs. OpenSSL anchors). This recursive bisection via min-cut naturally produces the hierarchy. And at each level, the cut capacity tells you how confident you should be.
## A New Idea These Three Together Trigger: Diffusion Distance as a Universal Metric
All three algorithms are really measuring the same underlying thing from different angles: **how easily does information flow between two nodes?** Label propagation is a diffusion process. Spectral clustering uses the eigenvectors of the diffusion operator. Min-cut measures the bottleneck in flow.
This suggests a unifying abstraction: compute the **diffusion distance** between every pair of functions. Diffusion distance is the L2 distance between the rows of the matrix `exp(-t·L)` where L is the graph Laplacian and t is a time parameter. At small t, only very close (tightly connected, nearby) functions are similar. At large t, entire libraries blur into single points.
The beauty is that **different values of t reveal different levels of the hierarchy**. Small t resolves compilation units. Medium t resolves libraries. Large t resolves the first-party/library split. You get the entire hierarchical segmentation from a single mathematical object just by tuning a parameter.
You could visualize this as a dendrogram: at what "diffusion time" do two functions merge into the same cluster? Functions in the same CU merge almost immediately. Functions in the same library merge at medium time. Everything merges eventually. The dendrogram *is* the hierarchical classification you're after.
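A dense-matrix sketch of diffusion distance via the eigendecomposition of the symmetric Laplacian (fine at toy sizes; real binaries would need sparse or low-rank approximations). The toy graph is two tight three-function blocks joined by one weak call edge:

```python
import numpy as np

def diffusion_distances(adj, t):
    """Pairwise L2 distance between rows of the heat kernel exp(-t * L)."""
    lap = np.diag(adj.sum(axis=1)) - adj
    vals, vecs = np.linalg.eigh(lap)
    kernel = vecs @ np.diag(np.exp(-t * vals)) @ vecs.T
    return np.linalg.norm(kernel[:, None, :] - kernel[None, :, :], axis=-1)

w = np.zeros((6, 6))
w[:3, :3] = 1.0
w[3:, 3:] = 1.0
np.fill_diagonal(w, 0.0)
w[2, 3] = w[3, 2] = 0.05          # the weak cross-block edge

d_small = diffusion_distances(w, t=1.0)    # blocks have merged internally
d_large = diffusion_distances(w, t=100.0)  # the whole graph blurs together
```

At small t, intra-block distances are already near zero while the cross-block distance remains large; at large t even the cross-block distance collapses — the two merge times are exactly the two dendrogram levels described above.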
## One Final Practical Insight: Robustness to Bad Disassembly
All of these techniques assume the call graph is accurate, but in real binaries — especially malware — disassembly is imperfect. Indirect calls get missed. Tail calls look like jumps rather than calls. Obfuscated code deliberately breaks analysis.
This is where the **spatial signal acts as a safety net**. Even when the graph is incomplete or noisy, the address layout is always there and always reliable. If your spectral clustering says two functions belong in different groups but they're 20 bytes apart with no other functions between them, the spatial evidence should override. The address axis is your highest-confidence signal; the graph is the informative but noisy supplement. Any algorithm you build should be designed to degrade gracefully — if the graph were *completely* absent, you should still be able to do a passable job purely on address-range heuristics and data xref locality. The graph makes it better, but the 1D structure is the backbone.
---
> here are some further considerations. how should we integrate them into our list of strategies?
>
> 1. Features that seem underexploited.
>
> - 23a. Shared constant pools and tables.
> - 23b. Exception/unwind metadata neighborhoods.
> - 23c. Import-family fingerprints.
> - 23d. Repeated alignment and gap patterns between code blocks.
> - 23e. Data-section locality matching code-section locality.
> - 23f. Ego-network shape: not just who a function calls, but what its 1-hop and 2-hop neighborhood looks like.
> - 23g. Call-direction asymmetry across a candidate boundary.
>
> 2. A specific novel tactic: score gaps, not just nodes.
>
> - 24a. For each gap between adjacent functions in address order, compute “boundary likelihood.”
> - 24b. Features could include:
> - sudden change in seed evidence
> - reduced internal xrefs across the gap
> - padding/alignment pattern
> - change in referenced data neighborhood
> - change in import/API usage style
> - change in graph embedding neighborhood
> - 24c. Then solve for a sparse set of boundaries.
> - 24d. This is more direct than asking “what label is each function?”
>
> 3. Another promising tactic: recover “data islands” first.
>
> - 25a. Cluster strings/tables/rodata/data by address locality and xref neighborhoods.
> - 25b. Label those islands with higher confidence than code.
> - 25c. Then attach functions to the islands they reference.
> - 25d. Libraries often expose themselves through data long before their code is obvious.
>
> 4. A third tactic: active refinement near high-entropy frontiers.
>
> - 26a. Most of the binary may be easy.
> - 26b. The cost is concentrated near a few uncertain boundaries.
> - 26c. So only there should MAPA use expensive methods:
> - deeper semantic analysis
> - more complex embeddings
> - decompiler-derived features
> - analyst interaction
> - 26d. This is much better than spending equal effort on every function.
These four ideas aren't just additions to the list — they restructure how I think the whole pipeline should work. Let me explain why.
## The Paradigm Shift: Score Boundaries, Not Functions
Item 24 is the single most important idea in this entire conversation so far, and I think it should become the **primary framing** of the problem rather than a tactic alongside others.
Every algorithm we've discussed — label propagation, spectral clustering, min-cut, PPR, HMMs — has been asking "what label does each function get?" But that's the indirect question. The direct question is "where are the seams?" And those seams live *between* functions, in the gaps. This reframing matters practically because the number of gaps (N-1 for N functions) is fixed, the features per gap are rich and computable, and the output you actually want — a segmentation — is literally a sparse subset of those gaps marked as boundaries.
This turns the whole problem into **supervised or semi-supervised boundary classification on a 1D sequence of gaps**, which is vastly simpler and more tractable than graph community detection. You compute a feature vector for each gap, you have some gaps where you know a boundary exists (between a FLIRT-matched library function and an adjacent non-library function, for instance), and you train a classifier. Or you don't even need training — you define a scoring function and then solve for the *sparsest* set of boundaries that explains your evidence, which is essentially LASSO or L1-penalized segmentation.
The gap features listed in 24b are excellent, and I'd add a few more that fall out of the earlier discussion: the diffusion distance between the functions flanking the gap, the min-cut capacity if you sever the graph at that point, and the eigenvector discontinuity at that gap in the Fiedler vector. Every algorithm we previously discussed becomes a **feature generator for the gap scorer** rather than a standalone solution. That's the architectural insight — the boundary scorer is the spine, and everything else feeds into it.
## Data Islands as the Foundation Layer
Item 25 is the second most important idea, and it reorders the pipeline in a way I didn't previously consider. The argument is subtle and correct: data is *more reliably clusterable than code* for several reasons. Strings are human-readable and often uniquely attributable to a library. Data items don't have the ambiguity of indirect calls. The data section layout reflects compilation unit structure just as faithfully as the code section, but with less noise. And critically, data doesn't get obfuscated nearly as often as code does in malware — the strings still need to be there at runtime.
So the pipeline should be: cluster the data section first into "islands" using address locality and cross-reference structure. Label those islands using string content, constant patterns, and any other attributable features. Then project those labels back onto functions via data references. A function that references three strings in the "zlib island" is almost certainly zlib code, even if you've never seen its bytes before and FLIRT doesn't match it.
This is powerful because it provides anchors that are **denser and more confident** than what you get from FLIRT or code-side analysis alone. It also means the data-island clustering becomes the *first* stage of the pipeline, before any code-side graph analysis. You start from data, anchor the code, then refine.
## The Underexploited Features Suggest a Richer Feature Space for Gaps
Items 23a through 23g aren't just a wishlist — they fill specific blind spots in the gap scoring framework:
**23a (shared constant pools/tables)** and **23e (data-section locality)** feed directly into the data-island strategy. But they also provide a gap feature: if the functions on either side of a gap reference data from the *same* data island, the gap is unlikely to be a boundary. If they reference data from *different* islands, it probably is.
**23b (exception/unwind metadata)** is a particularly clever signal I hadn't considered. Exception handling tables are generated per-compilation-unit by most compilers, and the unwinder metadata groups functions together in a way that directly reflects CU boundaries. This is almost ground truth when it's present — you should parse `.pdata`/`.xdata` (on Windows) or `.eh_frame` (on Linux) and treat co-membership in the same unwind group as a hard constraint. Functions sharing an unwind info record are in the same CU, period.
**23c (import-family fingerprints)** is interesting because it captures *style*. First-party code that does file I/O will call a different set of APIs than zlib's compression internals. You're not just looking at individual API calls — you're looking at the *distribution* of API families a function touches. Two functions that both use memory allocation + string manipulation + crypto APIs probably belong to the same logical unit (maybe an OpenSSL subsystem), while a function that only touches file I/O is probably a different module. This becomes a gap feature: does the API usage style change across this gap?
**23d (alignment and padding patterns)** is a beautiful low-level signal. Compilers emit characteristic alignment between functions (16-byte alignment, `int3` padding on MSVC, `nop` sleds on GCC). But *linkers* insert different padding between compilation units — often larger gaps or different padding bytes. So the gap's physical characteristics (how many bytes of padding, what the padding looks like) are direct evidence of whether it's an intra-CU gap or an inter-CU gap. This is a nearly free feature that requires no graph analysis at all.
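A sketch of how cheap this feature is to extract (the function name is hypothetical; the byte values are the common x86 conventions named above — 0xCC `int3` on MSVC, 0x90 `nop` on GCC, zero fill from the linker):

```python
def gap_signature(gap):
    """Describe an inter-function gap: its length and dominant padding style."""
    if not gap:
        return 0, "none"
    styles = {0xCC: "int3", 0x90: "nop", 0x00: "zero"}
    dominant = max(set(gap), key=gap.count)  # most frequent byte value
    return len(gap), styles.get(dominant, "other")

small_pad = gap_signature(b"\xcc" * 14)    # small intra-CU alignment pad
large_pad = gap_signature(b"\x00" * 200)   # larger zero-filled inter-CU gap
```

The (length, style) pair then goes straight into the gap feature vector; "other" bytes (non-padding content in the gap) are themselves informative, often indicating jump tables or missed functions.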
**23f (ego-network shape)** is where things get really interesting algorithmically. Two functions might both have degree 5 in the call graph, but one's ego network is a tree (it calls 5 leaf functions that call nothing else — probably a dispatcher or API facade) while the other's ego network is a dense clique (its 5 callees all call each other — probably deep library internals). The *shape* of the local neighborhood, not just its size, is a fingerprint of where you are in the program's structure. You could compute small graph motif counts or local clustering coefficients and use those as features.
**23g (call-direction asymmetry)** completes the picture for boundary detection specifically. At a real API boundary, you expect calls going *into* the library (from the user-code side) but not calls going *out* of the library back into user code (libraries don't call their callers). So at a candidate boundary gap, you count directed edges crossing it in each direction. A strong asymmetry — many calls going deeper, few or none coming back — is a strong boundary signal. A gap where calls flow freely in both directions is probably internal to a single module.
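The asymmetry count is equally simple to compute. This sketch (edges and boundary address are hypothetical) counts directed call edges crossing a candidate boundary in each direction:

```python
def crossing_asymmetry(edges, boundary):
    """Count directed call edges (src_addr, dst_addr) crossing a candidate
    boundary address: (calls going above it, calls coming back below it)."""
    inward = sum(1 for src, dst in edges if src < boundary <= dst)
    outward = sum(1 for src, dst in edges if dst < boundary <= src)
    return inward, outward

# user code at low addresses calling a candidate library region above 0x8000
edges = [(0x1000, 0x9000), (0x1200, 0x9400), (0x9100, 0x9300), (0x1100, 0x1300)]
inward, outward = crossing_asymmetry(edges, 0x8000)
```

Here two calls go into the high region and none come back — the strong-asymmetry signature of a real API boundary; a near-equal split would suggest the boundary is internal to one module.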
## Active Refinement as the Final Stage, Not a Separate Strategy
Item 26 should be understood as the **execution strategy** for the whole pipeline, not a separate algorithm. The core insight is that uncertainty is spatially concentrated. After running the cheap, fast stages (data island clustering, gap feature computation, label propagation from anchors), most of the binary is probably classified with high confidence. What remains is a handful of ambiguous frontier zones — maybe 5-10% of the address space where two competing labels are close in score.
This is where you focus your expensive techniques: run the decompiler only on the ambiguous functions. Compute embeddings only for the frontier. Query the analyst only about the uncertain gaps. And do it in the "bisect" order — query the gap in the middle of the largest remaining uncertain region first, because its answer splits the ambiguity in half.
This means the pipeline architecture should explicitly support a **confidence map** over the address space. Every stage produces not just labels but uncertainty estimates. The uncertainty map guides where to spend further effort. And there's a natural stopping criterion: stop when the maximum remaining uncertainty drops below a threshold, or when the analyst says "good enough."
## The Revised Pipeline
Integrating everything, here's how I'd now structure it:
**Stage 0 — Feature extraction.** Parse the binary. Extract functions, call graph, data xrefs, strings, exception metadata, alignment patterns, import calls. Cheap and mechanical.
**Stage 1 — Data island clustering.** Cluster the data section by address locality and xref structure. Label islands using string content and constant pool attribution. This gives you the densest, most confident set of anchors.
**Stage 2 — Anchor projection.** Project data-island labels onto functions via xrefs. Combine with FLIRT matches and known entry points (`main`, etc.). Every function now has either a confident label, a tentative label, or no label.
**Stage 3 — Gap scoring.** For every inter-function gap, compute the full feature vector: padding pattern, data-island change, API style change, call-graph edge density across the gap, call-direction asymmetry, ego-network similarity of flanking functions, exception metadata boundary, eigenvector discontinuity. Score each gap's boundary likelihood.
**Stage 4 — Segmentation.** Solve for the sparse set of boundaries that best explains the gap scores and the anchor constraints. This could be L1-penalized changepoint detection, or an HMM where the gap scores inform transition probabilities.
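In the degenerate case where gap scores are treated as independent and each boundary pays a constant penalty, the penalized segmentation reduces to simple thresholding — a useful baseline before reaching for true changepoint solvers. A sketch with hypothetical scores:

```python
def segment(gap_scores, penalty):
    """With independent per-gap scores and a constant per-boundary penalty,
    maximizing sum(selected scores) - penalty * count means keeping exactly
    the gaps whose score exceeds the penalty."""
    return [i for i, score in enumerate(gap_scores) if score > penalty]

# hypothetical boundary-likelihood scores for seven inter-function gaps
scores = [0.1, 0.2, 0.9, 0.1, 0.05, 0.8, 0.2]
boundaries = segment(scores, penalty=0.5)
```

The interesting cases are precisely where this baseline fails — when anchor constraints or spatial smoothness couple neighboring gaps, which is what the HMM or L1-penalized formulations are for.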
**Stage 5 — Label assignment.** Within each segment, assign a library/CU label using the anchors, propagated labels, and graph community structure. This is where label propagation or spectral methods work *within* the already-established boundaries rather than trying to find them.
**Stage 6 — Active refinement.** Identify the highest-uncertainty boundaries. Apply expensive techniques — decompilation, semantic analysis, analyst queries — specifically at those points. Update the segmentation. Repeat until confidence is sufficient.
The key design principle is that boundary detection (stages 3-4) and label assignment (stage 5) are **decoupled**. You find where the seams are first, then figure out what's on each side. And the most reliable signals (data, metadata, physical layout) come first, with the noisier signals (graph topology, semantic analysis) used for refinement rather than discovery.


@@ -0,0 +1,83 @@
# mapa plan: html call-graph neighborhood view
This plan explores a larger-neighborhood view for the HTML map. Today the page shows only direct relationships: selecting a tag highlights the functions that reference strings carrying that tag, and selecting a string highlights the functions that reference that exact string. The proposed experiment is to push that signal backward through the call graph so the page can show the code region around those direct hits.
The real question is whether the larger neighborhood stays localized enough to be useful. If the overlay seeded by a `#zlib` string grows into one compact region when we include one, two, or three caller hops, that supports the idea that tagged strings can anchor a broader library region. If the overlay quickly spreads into generic wrappers and dispatchers, the direct-hit view is probably carrying most of the useful signal already.
## Current behavior
The collector already has most of the data needed for this experiment. `mapa/collector.py` resolves thunk targets, builds `resolved_callers` and `resolved_callees`, and stores those relationships on each `MapaFunction` as `callers` and `calls`. The report model therefore already contains a usable reverse call graph.
The HTML renderer does not use that graph today. `mapa/html_renderer.py` emits only the function tooltip text, a tag-to-function index, and string rows with their direct function indices. The browser script then applies three binary states: tag border on, string fill on, or dim. There is no traversal, no score, and no way to distinguish direct evidence from nearby code.
One detail matters for later weighting. `_collect_tag_entries()` collapses each tag to a set of functions. That is enough for the current binary view, but it drops multiplicity. If one function references several `#zlib` strings, the current page still treats that as one direct hit. That simplification is acceptable for a first neighborhood experiment, but it becomes relevant if we later want repeated string evidence to count more strongly.
## Proposed model
The first experiment should use caller-depth propagation only. For a selected tag or string, define the directly matching functions as depth 0. Then walk backward through callers. Depth 1 is the callers of direct-match functions, depth 2 is the callers of depth-1 functions, and so on. A bounded depth of 0 to 3, or at most 4, is enough for the experiment.
The overlay should weaken with distance. Direct matches stay strongest. Indirect matches fade with depth. A simple additive model is enough. Each direct-match function contributes a seed weight of 1.0. A caller at depth `d` receives `seed_weight * decay(d)`. If several matching paths reach the same function, add the contributions together. That gives the effect we want. A function that sits above several tagged descendants should stand out more than a function that reaches only one.
Two decay families are worth trying. Harmonic decay follows the original intuition of 1.0, 1/2, 1/3, 1/4. Geometric decay uses 1.0, 1/2, 1/4, 1/8. Harmonic keeps more mass at larger depths. Geometric is more conservative and is less likely to smear across the whole map through generic caller chains. The page should probably expose both and default to geometric.
For the first pass, the seed should be per direct-match function rather than per direct-match string. That keeps the experiment focused on graph expansion instead of mixing graph expansion with direct evidence density. If the idea looks useful, then a second pass can try per-string seeds or rarity-weighted seeds.
## What the page should show
The current page already has a useful split between tag selection and string selection. Tag selection uses the border and string selection uses the fill. That can stay. The propagation engine can compute two separate score arrays, one for the border channel and one for the fill channel. If both a tag and a string are active, both overlays remain visible.
The main challenge is interpretation. A single cumulative heat map can look compelling while hiding the reason a function is lit up. The user should be able to tell the difference between a direct match, a one-hop caller, and a function that receives several weak contributions through different paths. For that reason, the page should support both cumulative and exact-hop views. In cumulative mode, the page shows the total score up to the chosen depth. In exact-hop mode, the page shows only depth 0, or only depth 1, or only depth 2. If there is room, small multiples would be even better. Four aligned copies of the same function grid for depths 0 through 3 would make locality much easier to judge.
Direct matches should remain visually distinct from indirect ones. A direct seed should not disappear into the same faint channel as a weak propagated score. Tooltips should also report the exact score for the active overlay. Otherwise the view will be hard to debug.
## Recommended first implementation
The narrow version of this feature can stay almost entirely inside `mapa/html_renderer.py`. The collector already records the caller relationships needed for bounded backward traversal.
The renderer should emit one more structure in its inline JSON: `callersByIndex`, a reverse adjacency list keyed by function index. It can keep the existing direct seed data for tags and strings. The browser script can then compute scores on demand for the active selection. The data volume is small enough that this should stay cheap on typical samples.
The computation can stay simple. Initialize the direct-match functions as the depth-0 frontier. For each depth from 1 to `maxDepth`, push the previous frontier to its callers, multiply the step by the chosen decay factor, and add the contributions into a total score array. This counts bounded walks rather than strictly simple paths. With shallow depths, that is a reasonable approximation.
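A Python stand-in for that browser-side computation (the real version would be JavaScript over the proposed `callersByIndex` structure; this sketch implements geometric decay only — harmonic decay would replace the multiplicative step with a per-depth lookup table):

```python
def propagate(callers_by_index, seeds, max_depth=2, decay=0.5):
    """Bounded backward walk: each seed contributes 1.0 at depth 0, and a
    caller reached at depth d receives decay**d per contributing walk."""
    scores = {i: 1.0 for i in seeds}
    frontier = {i: 1.0 for i in seeds}  # raw walk counts, decayed via `step`
    step = 1.0
    for _ in range(max_depth):
        step *= decay
        nxt = {}
        for idx, mass in frontier.items():
            for caller in callers_by_index.get(idx, ()):
                nxt[caller] = nxt.get(caller, 0.0) + mass
        for idx, mass in nxt.items():
            scores[idx] = scores.get(idx, 0.0) + mass * step
        frontier = nxt
    return scores

# functions 2 and 3 are direct matches; 1 calls both; 0 calls 1
scores = propagate({2: [1], 3: [1], 1: [0]}, {2, 3})
```

Note that function 1 accumulates 0.5 from each of its two tagged callees — exactly the "function above several tagged descendants stands out" behavior the additive model is meant to produce.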
The first UI pass only needs a few controls: maximum caller depth, decay family, and view mode. The existing function grid and string list can remain in place.
A practical first cut would keep these defaults:
- caller-only propagation
- default depth of 2
- geometric decay
- additive scoring
- cumulative view, with exact-hop available as a switch
## Rendering considerations
The current 10-pixel square is large enough for a binary on/off signal, but it may be too small for subtle border intensity changes. Border alpha alone may be hard to read. A better approach is to keep direct tag hits as a solid border and render propagated tag scores with either a stronger border color ramp or a small glow. For strings, direct matches can keep the current solid fill while propagated scores use a fill alpha ramp. The exact styling can stay simple, but direct and indirect states should be separable at a glance.
The page should also keep dimming non-matches when any overlay is active. Otherwise weak propagated scores will be visually lost in large samples.
## Risks
The main risk is graph bleed. Generic wrappers, initialization code, dispatchers, shared error handlers, and utility helpers often sit one or two caller hops above many unrelated regions. Those functions can make the map look more connected than the underlying library region really is.
Cycles are another risk. Recursive or mutually recursive groups can accumulate score in ways that are mathematically consistent under a bounded-walk model but visually misleading. Strongly connected component collapsing is a possible future refinement, but it should not be part of the first pass.
These risks argue for conservative defaults. Caller-only propagation is easier to reason about than a bidirectional neighborhood. Depth should stay shallow. Geometric decay is a safer default than harmonic. Exact-hop inspection should be available so the user can see whether the first one or two shells are still localized.
## Variations worth testing
If the first pass looks promising, there are several obvious follow-ons.
One variation is seed definition. Compare per-function seeds, per-string seeds, and rarity-weighted seeds. The last option is appealing because `StringTagMatch` already preserves `global_count`, and rare strings are usually more diagnostic than common ones.
Another variation is degree normalization. Raw additive scoring favors functions that sit above many matching descendants. That is partly what we want, but it also rewards generic coordinator functions. A normalized variant could divide contributions by a degree term and ask a different question: how concentrated is the evidence in this function's neighborhood?
A thresholded view is also worth trying. Instead of showing a continuous score ramp, let the user set a minimum score and mark only functions above that threshold. That could make contiguous regions easier to spot.
A comparison mode would be useful as well. Showing direct-only and propagated views side by side would make it easy to see whether the larger neighborhood adds a coherent region or just noise.
## How to evaluate the idea
The function grid is address-ordered, so this experiment is really about locality in address space. The first thing to look for is whether shallow propagation expands a direct-hit cluster into a still-coherent region. Good test cases are binaries where we already expect a compact static-library region, such as zlib, OpenSSL, or sqlite3.
It would also help to add a few quantitative summaries. The page could report the smallest contiguous address span containing most of the score mass, how many separate spans remain above a threshold, and how those numbers change as depth increases. That would make the result less subjective.
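The first of those summaries reduces to a sliding window over the address-sorted scores. A hypothetical sketch (names are not from the mapa code):

```python
def smallest_span_with_mass(scores, fraction=0.9):
    """Shortest address window (max_addr - min_addr) whose functions
    hold at least `fraction` of the total score mass.

    scores: {function_address: score}. Returns None for empty input.
    """
    items = sorted(scores.items())  # (address, score), address-ordered
    total = sum(score for _, score in items)
    target = fraction * total
    best = None
    window_sum = 0.0
    lo = 0
    for hi, (addr, score) in enumerate(items):
        window_sum += score
        # shrink from the left while the window still meets the target
        while window_sum - items[lo][1] >= target:
            window_sum -= items[lo][1]
            lo += 1
        if window_sum >= target:
            span = addr - items[lo][0]
            if best is None or span < best:
                best = span
    return best
```

Reporting this number at several depths would show directly whether propagation keeps the mass compact or smears it.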
The concept is worth implementing as an experiment. The hard data is already present in the report model, and a conservative first pass can stay mostly inside the HTML renderer. If shallow caller propagation still yields compact regions for known libraries, then richer weighting models are worth exploring. If it smears immediately, that is still a useful result and tells us that the direct-hit view is already close to the limit of the available signal.


@@ -0,0 +1,502 @@
# mapa IDA/IDALib port plan
Goal: preserve the current `scripts/mapa.py` report while replacing the Lancelot/BinExport2 backend with IDALib. Use the `ida-domain` API for normal analysis where it cleanly maps to the needed data. Reuse the existing capa IDA backend as the semantic reference for imports, thunks, string resolution, function naming, and database bootstrap.
This revision adds concrete guidance from capa's existing IDA backend.
## Sources used
Primary sources:
- `scripts/mapa.py`
- https://github.com/HexRaysSA/ida-claude-plugins/blob/main/plugins/ida-plugin-development/skills/ida-domain-api/SKILL.md
- `capa/features/extractors/ida/idalib.py`
- `capa/features/extractors/ida/extractor.py`
- `capa/features/extractors/ida/helpers.py`
- `capa/features/extractors/ida/file.py`
- `capa/features/extractors/ida/function.py`
- `capa/features/extractors/ida/insn.py`
- `capa/features/extractors/ida/basicblock.py`
- `capa/ida/helpers.py`
- `capa/loader.py`
- `tests/fixtures.py`
- `tests/test_idalib_features.py`
- `capa/features/common.py`
- `idals.py` from https://github.com/williballenthin/idawilli/tree/master/idals
Domain API references:
- Overview: https://ida-domain.docs.hex-rays.com/llms.txt
- Getting started: https://ida-domain.docs.hex-rays.com/getting_started/index.md
- Examples: https://ida-domain.docs.hex-rays.com/examples/index.md
- Database: https://ida-domain.docs.hex-rays.com/ref/database/index.md
- Functions: https://ida-domain.docs.hex-rays.com/ref/functions/index.md
- Flowchart: https://ida-domain.docs.hex-rays.com/ref/flowchart/index.md
- Instructions: https://ida-domain.docs.hex-rays.com/ref/instructions/index.md
- Xrefs: https://ida-domain.docs.hex-rays.com/ref/xrefs/index.md
- Strings: https://ida-domain.docs.hex-rays.com/ref/strings/index.md
- Segments: https://ida-domain.docs.hex-rays.com/ref/segments/index.md
- Names: https://ida-domain.docs.hex-rays.com/ref/names/index.md
- Entries: https://ida-domain.docs.hex-rays.com/ref/entries/index.md
## Key correction after reviewing capa
The current `capa/features/extractors/ida/` backend is IDALib-capable, but it is not written against `ida-domain` today. It uses the classic IDA Python surface: `idapro`, `idaapi`, `idautils`, `idc`, `ida_bytes`, `ida_funcs`, `ida_segment`, and related modules.
That means the correct migration strategy is not "invent a fresh IDA collector from scratch". The correct strategy is:
- use capa's existing IDA backend as the behavioral spec and a source of proven heuristics
- implement the new collector against `ida-domain` wherever the needed API exists cleanly
- treat the existing legacy helpers as reference material, not as the default implementation path
- only introduce lower-level `ida_*` calls if the implementer can point to a concrete `ida-domain` gap and document it
This is especially important for:
- IDALib database bootstrap
- import and extern enumeration
- thunk-chain resolution
- string/data-reference chasing
- alternative function names from comments
- known IDA version caveats
## Current mapa output that must remain stable
The current script renders these sections, in this order:
- `meta`
- `modules`
- `sections`
- `libraries`
- `functions`
Accepted intentional change for the port: remove `modules` entirely.
Inside `functions`, it currently:
- iterates functions in address order
- prints `thunk ...` for thunk functions
- prints `function ...` for normal functions
- inserts source-file separators when the primary Assemblage source path changes, ignoring gaps with missing Assemblage data
- annotates functions with Assemblage source name and file when available
- forwards callers through thunk targets so callers of a thunk appear on the real target
- prints `B/E/I` as basic blocks / CFG edges / instructions plus total instruction bytes
- prints capa rule names attached to the function
- prints `calls:` for internal non-library callees
- prints `api:` for import/external/library callees
- prints `string:` for referenced strings
That output contract should stay stable unless a deliberate change is accepted and documented.
## What capa already gives you
There are three reusable assets.
The first is a proven `ida-domain` database-resolution and session-opening path in `../idawilli/idals/idals.py`. That code already does the part mapa needs most: accept either a raw sample or an existing `.i64` / `.idb`, hash raw inputs, cache analyzed databases by SHA-256, and guard concurrent access.
The second is capa's proven IDALib bootstrap path:
- `capa/features/extractors/ida/idalib.py`
- `capa/loader.py`
- `tests/fixtures.py`
The third is capa's proven semantic definition of the data mapa cares about:
- function enumeration: `capa/features/extractors/ida/extractor.py`
- segments, imports, externs: `capa/features/extractors/ida/file.py`, `helpers.py`
- callers and function names: `capa/features/extractors/ida/function.py`
- API calls, call targets, strings, mnemonics, offsets: `capa/features/extractors/ida/insn.py`
- CFG/basic blocks: `capa/features/extractors/ida/basicblock.py`, `helpers.py`
- hashes, architecture, imagebase, file-type helpers: `capa/ida/helpers.py`
The practical split is simple. Use `idals.py` as the model for database resolution, caching, and guarded open/close. Use capa's IDA backend as the model for analysis semantics and parity behavior. Use `ida-domain` as the primary query surface inside the collector. Do not depend on Lancelot anywhere in the new implementation.
## Important behavioral facts from capa's backend
1. IDALib bootstrap in capa is not a bare `Database.open(...)` call.
- It uses `capa.features.extractors.ida.idalib.has_idalib()` and `load_idalib()`.
- It then calls `idapro.open_database(..., run_auto_analysis=True, args="-Olumina:host=0.0.0.0 -Osecondary_lumina:host=0.0.0.0 -R")`.
- It disables console chatter with `idapro.enable_console_messages(False)`.
- It waits for analysis completion with `ida_auto.auto_wait()`.
2. Capa explicitly disables Lumina during IDALib analysis.
- Reason documented in `capa/loader.py`: Lumina can inject bad names or overwrite debug-info names.
- mapa should do the same unless there is a deliberate decision to trust Lumina.
3. Capa requests resource loading with `-R`.
- This matters for some file-scope extraction.
- `tests/test_idalib_features.py` notes that IDA 9.0 had resource-loading limitations under IDALib.
4. The existing `IdaFeatureExtractor.get_functions()` is not a direct drop-in for mapa.
- It calls `helpers.get_functions(skip_thunks=True, skip_libs=True)`.
- mapa must render thunk functions, so mapa needs its own full function inventory.
5. Capa already encodes the thunk semantics mapa needs.
- `THUNK_CHAIN_DEPTH_DELTA` is defined in `capa/features/common.py` as `5`.
- `capa/features/extractors/ida/insn.py:check_for_api_call()` follows code refs, then data refs, through thunk chains to resolve imports/externs.
- `capa/features/extractors/binexport2/__init__.py:BinExport2Analysis._compute_thunks()` shows the intended "single-target thunk chain" rule: only resolve through chains with exactly one callee per thunk hop.
6. Capa already encodes mapa-relevant string semantics.
- `helpers.find_data_reference_from_insn(insn, max_depth=10)` follows single data-reference chains.
- `helpers.find_string_at(ea)` looks for C strings and works around an IDA Unicode-decoding quirk.
- `insn.extract_insn_string_features()` and `extract_insn_bytes_features()` use that behavior.
7. Capa already has the import and extern logic mapa needs.
- `helpers.get_file_imports()` enumerates import modules and normalizes names.
- `helpers.get_file_externs()` enumerates functions from `SEG_XTRN` segments.
- `file.extract_file_import_names()` shows how capa treats name-vs-ordinal imports.
8. Capa already has alternative-name logic.
- `helpers.get_function_alternative_names()` parses comments that look like `Alternative name is 'foo'`.
- `function.extract_function_alternative_names()` exposes them as `FunctionName` features.
9. Capa already has the CFG behavior mapa should match.
- `helpers.get_function_blocks()` uses `idaapi.FlowChart(f, flags=(idaapi.FC_PREDS | idaapi.FC_NOEXT))`.
- The `NOEXT` part matters: it avoids useless external blocks contaminating B/E/I counts.
10. The test suite documents real version caveats.
- IDA 9.0 and 9.1 had some ELF symbol issues.
- IDA 9.0 under IDALib had resource-loading limitations.
- mapa validation should account for those when comparing outputs.
## Database resolution and caching pattern to copy from idals
`../idawilli/idals/idals.py` is the best starting point for the "raw file or existing database" problem. It already solves the user-visible behavior mapa needs.
Its pattern is:
- if the input suffix is `.i64` or `.idb`, use that database directly
- otherwise compute hashes for the raw file with `compute_file_hashes()` and use the SHA-256 as the cache key
- store the generated database in a common cache directory, currently `~/.cache/hex-rays/idals/<sha256>.i64`
- serialize access with `database_access_guard()`
- detect an already-open or unpacked database by watching for the companion `.nam` file
- use an advisory `flock` on `<db>.lock` to avoid concurrent writers
- after acquiring the lock, re-check `.nam` to close the TOCTOU hole
- on a cache miss, analyze the raw sample with `Database.open(..., IdaCommandOptions(auto_analysis=True, new_database=True, output_database=..., load_resources=True), save_on_close=True)`
- after the cached database exists, open it read-only with `open_database_session(..., auto_analysis=False)` and `save_on_close=False`
mapa should adopt that pattern with only minor changes:
- use the same SHA-256-keyed cache strategy
- keep the same locking protocol
- put the cache in a mapa-specific directory, or intentionally share the idals directory if reuse is desired
- expose the cache location as a small helper or constant so it can be documented and tested
- reuse the computed SHA-256 for the `meta` section instead of hashing the sample twice
There is one deliberate integration check to make here. `idals.py` uses `ida-domain`'s `Database.open(...)`, while capa's bootstrap path uses `idapro.open_database(...)` and disables Lumina explicitly. For mapa, prefer the `idals.py` open-and-cache pattern because it already handles the database lifecycle correctly. Then verify whether the `ida-domain` open path offers an equivalent way to suppress Lumina. If it does, use it. If it does not, decide whether that matters for mapa output or whether database creation should fall back to capa's `idapro.open_database(...)` path while cached-session opens keep the `idals.py` pattern.
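The guard portion of that pattern (lock file, advisory `flock`, post-lock `.nam` re-check) is compact enough to sketch. This is a hypothetical rendering of the idals protocol as described above, not a copy of `idals.py`:

```python
import fcntl
from contextlib import contextmanager
from pathlib import Path


@contextmanager
def database_access_guard(db_path: Path):
    """Serialize access to an on-disk IDA database.

    The companion .nam file appears while a database is unpacked/open,
    so seeing it after we hold the lock means another writer is active;
    re-checking after flock closes the TOCTOU hole.
    """
    lock_path = db_path.with_suffix(db_path.suffix + ".lock")
    nam_path = db_path.with_suffix(".nam")
    with open(lock_path, "w") as lock_file:
        fcntl.flock(lock_file, fcntl.LOCK_EX)  # advisory; blocks other guards
        try:
            if nam_path.exists():
                raise RuntimeError(f"database appears open elsewhere: {db_path}")
            yield db_path
        finally:
            fcntl.flock(lock_file, fcntl.LOCK_UN)
```

Whether the real implementation raises, waits, or retries on a present `.nam` is a policy choice to confirm against `idals.py`.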
## Recommended architecture
Do not port `scripts/mapa.py` by replacing each Lancelot query inline. Split it into four layers:
- CLI and argument parsing
- IDA bootstrap and environment setup
- report collection
- rendering
Use backend-neutral dataclasses for the report model:
- `MapaReport`
- `MapaMeta`
- `MapaSection`
- `MapaLibrary`
- `MapaFunction`
- `MapaCall`
- `MapaString`
- `AssemblageRecord`
The collector should have one primary data-access layer: `ida-domain` for functions, flowcharts, instructions, strings, names, segments, xrefs, and database lifecycle. Existing capa helpers remain useful as semantic references and regression oracles.
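A minimal sketch of what the backend-neutral dataclasses above might look like; the field names here are illustrative placeholders, not the final model:

```python
from dataclasses import dataclass, field


@dataclass
class MapaCall:
    target_ea: int
    name: str
    is_api: bool  # import/extern/library target vs. internal call


@dataclass
class MapaString:
    ea: int
    value: str


@dataclass
class MapaFunction:
    ea: int
    name: str
    is_thunk: bool = False
    blocks: int = 0
    edges: int = 0
    instructions: int = 0
    byte_size: int = 0
    callers: list[int] = field(default_factory=list)
    calls: list[MapaCall] = field(default_factory=list)
    strings: list[MapaString] = field(default_factory=list)
    capa_rules: list[str] = field(default_factory=list)
```

Keeping these free of IDA types is what lets the renderer and the overlay tests run without a database.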
## Best practical strategy
The implementation target is an IDALib-only collector with `ida-domain` as the primary API surface.
Concretely:
- use `ida-domain` for function inventory, instruction iteration, CFG stats, name lookup, segment listing, xref walking, and cached database open/create
- use the existing capa IDA code to understand the intended semantics for imports, externs, thunk resolution, data-reference chasing, and alternative names
- if the implementer discovers a real `ida-domain` gap, document the gap explicitly before introducing lower-level `ida_*` calls
That gives the next implementer a clear target: no Lancelot, no default hybrid backend, and no legacy helper dependency unless a concrete gap forces it.
## Concrete mapping from mapa fields to capa/backend logic
| mapa field/behavior | First source to consult | Recommended implementation |
|---|---|---|
| IDALib discovery | `capa/features/extractors/ida/idalib.py` | Reuse `has_idalib()` / `load_idalib()` logic if mapa needs to bootstrap `idapro` availability itself. |
| resolve/open DB | `../idawilli/idals/idals.py` | Use `resolve_database()` and `open_database_session()` as the primary pattern. |
| cache key and cache DB path | `../idawilli/idals/idals.py` | Hash raw inputs once and key cached databases by SHA-256. |
| Lumina suppression policy | `capa/loader.py`, `tests/fixtures.py` | Carry forward capa's disable-Lumina behavior if the chosen open path supports it. |
| sample hashes | `../idawilli/idals/idals.py`, `capa/ida/helpers.py`, `extractor.py` | Reuse the SHA-256 computed for cache lookup; prefer IDA-provided hashes when opening an existing database. |
| image base | `capa/ida/helpers.py` | Prefer IDA imagebase helper; use Domain API only if it exposes the same value clearly. |
| sections | `helpers.get_segments()`, `file.extract_file_section_names()` | Use `db.segments`; match capa's header-segment filtering rules if needed. |
| import modules/functions | `helpers.get_file_imports()` | Implement with `ida-domain` if the needed import data is exposed cleanly; otherwise use this helper as the semantic reference for normalization. |
| externs | `helpers.get_file_externs()` | Match this behavior with `ida-domain` if possible; if not, document the missing API and then fall back deliberately. |
| function inventory | `extractor.py`, `helpers.get_functions()` | Do not use extractor's default function list because it skips thunks/libs. Build a mapa-specific inventory with `ida-domain`. |
| callers | `function.extract_function_calls_to()` | Reproduce the same behavior with domain xrefs and compare against this helper during validation. |
| call targets | `insn.extract_function_calls_from()` | Reproduce the same behavior with domain xrefs and compare against this helper during validation. |
| API calls | `insn.extract_insn_api_features()` | Match the import/extern/thunk resolution semantics exposed by this function. |
| string refs | `helpers.find_data_reference_from_insn()`, `find_string_at()` | Match the same single-ref-chain behavior and max depth `10`. |
| function names | `function.extract_function_name()`, alternative-name helpers | Use normal name, demangled name, alternative names, and render Assemblage annotations separately without renaming the IDA function. |
| B/E/I stats | `helpers.get_function_blocks()` | Match `PREDS | NOEXT` semantics; use domain flowchart if possible. |
| function ordering | current `scripts/mapa.py` | Keep address order for deltas and rendering stability. |
## Step-by-step implementation plan
### 1. Freeze the current mapa output
Before editing code, save golden outputs from the current `scripts/mapa.py` for:
- a sample with normal internal calls and imports
- a sample with thunk-heavy call patterns
- a sample with capa and Assemblage overlays
These are the parity targets.
### 2. Add `resolve_database()` and `open_database_session()` helpers
Base these directly on `../idawilli/idals/idals.py`.
`resolve_database()` should:
- accept either a raw sample or an existing `.i64` / `.idb`
- return existing databases unchanged
- hash raw inputs once and use SHA-256 as the cache key
- place cached databases under the XDG cache root in `mandiant/mapa/`, i.e. `$XDG_CACHE_HOME/mandiant/mapa/` when set, else `~/.cache/mandiant/mapa/`
- guard cache creation with the same `.nam` + `flock` protocol from `database_access_guard()`
- analyze cache misses with `Database.open(..., IdaCommandOptions(auto_analysis=True, new_database=True, output_database=..., load_resources=True), save_on_close=True)`
- keep cache creation transparent in normal mode and only log cache details in verbose/debug mode
`open_database_session()` should:
- use the same guard before opening the database
- open cached or user-supplied databases with `new_database=False`
- default to `save_on_close=False`
- optionally run `ida_auto.auto_wait()` when `auto_analysis=True`
This should become mapa's primary database lifecycle.
Then add one capa-derived check on top: if the chosen open path can suppress Lumina, do so. If the `ida-domain` path cannot, verify whether that difference affects naming enough to justify a fallback to capa's `idapro.open_database(...)` path during cache creation.
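Under those requirements, the path-resolution half of `resolve_database()` might look like the following sketch. The names are hypothetical, and the cache-miss analysis step (the guarded `Database.open(...)`) is deliberately elided:

```python
import hashlib
import os
from pathlib import Path


def mapa_cache_dir() -> Path:
    """$XDG_CACHE_HOME/mandiant/mapa/ when set, else ~/.cache/mandiant/mapa/."""
    root = os.environ.get("XDG_CACHE_HOME") or str(Path.home() / ".cache")
    return Path(root) / "mandiant" / "mapa"


def resolve_database(sample: Path):
    """Return (database_path, sha256_or_None).

    Existing databases pass through unchanged with no hashing; raw
    samples are hashed once and keyed by SHA-256, so the digest can be
    reused later for the meta section instead of hashing twice.
    """
    if sample.suffix.lower() in (".i64", ".idb"):
        return sample, None
    sha256 = hashlib.sha256(sample.read_bytes()).hexdigest()
    return mapa_cache_dir() / f"{sha256}.i64", sha256
```

On a cache miss the returned path does not exist yet; creating it happens under the access guard, which this sketch omits.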
### 3. Introduce a backend-neutral report model
Before touching the collector logic, split `scripts/mapa.py` into:
- CLI
- collector
- renderer
- input-overlay parsing for capa JSON and Assemblage CSV
Keep the renderer stable. The collector should return value objects only.
### 4. Build a mapa-specific function inventory
Do not use `IdaFeatureExtractor.get_functions()` as-is, because it skips thunks and library functions.
Instead:
- enumerate all functions in address order with `ida-domain` if possible
- keep flags for `is_thunk`, `is_library`, and `is_external`
- retain enough metadata to render thunks, skip imports from the function list, and compute deltas
For parity, compare your inventory against:
- `helpers.get_functions(skip_thunks=False, skip_libs=False)`
- IDA function flags such as `FUNC_THUNK` and `FUNC_LIB`
### 5. Recreate import and extern logic using capa's semantics
For the `libraries` section and for `api:` classification, start from the behavior encoded in:
- `helpers.get_file_imports()`
- `helpers.get_file_externs()`
That behavior already handles:
- PE imports with `__imp_` prefixes
- ELF imports with `@@version` suffixes
- ordinal imports
- extern functions in `SEG_XTRN`
The implementation target remains `ida-domain`. The next implementer should reproduce this behavior there if the API surface is available. If a real gap appears, document the gap before introducing any fallback.
### 6. Implement thunk resolution with capa's exact semantics
Build one cached helper, for example `resolve_thunk_target(ea)`, and use it everywhere.
Behavior should match capa's existing semantics:
- maximum thunk-chain depth: `THUNK_CHAIN_DEPTH_DELTA == 5`
- follow code refs first, then data refs if needed
- only resolve through single-target chains
- stop on cycles, zero-target, or multi-target cases
- allow the final resolved target to be an import or extern
Use two existing code paths as references:
- `capa/features/extractors/ida/insn.py:check_for_api_call()`
- `capa/features/extractors/binexport2/__init__.py:BinExport2Analysis._compute_thunks()`
This helper must drive:
- caller forwarding
- `calls:` lines
- `api:` lines
- capa match attachment when a match lands in a thunk
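Under those rules the resolver reduces to a bounded single-target walk. A sketch with `is_thunk` and `single_callee` injected as stand-ins for the IDA queries (both names are hypothetical):

```python
THUNK_CHAIN_DEPTH_DELTA = 5  # same bound as capa/features/common.py


def resolve_thunk_target(ea, is_thunk, single_callee):
    """Follow a thunk chain while each hop is a thunk with exactly one
    callee; stop on depth, cycles, zero targets, or multiple targets.

    single_callee(ea) returns the sole callee, or None when the thunk
    has zero or multiple targets.
    """
    seen = {ea}
    cur = ea
    for _ in range(THUNK_CHAIN_DEPTH_DELTA):
        if not is_thunk(cur):
            break
        nxt = single_callee(cur)
        if nxt is None or nxt in seen:
            break  # ambiguous hop or cycle: keep the last good address
        seen.add(nxt)
        cur = nxt
    return cur
```

Caching this by `ea` keeps repeated lookups during caller forwarding cheap.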
### 7. Use capa features as references, not as the collector
Do not build mapa by instantiating `IdaFeatureExtractor()` and aggregating capa features into the final report. That would create a hidden second backend and blur the migration target.
Instead, query IDA directly through `ida-domain` and use the capa feature-extraction code as a reference when the intended semantics are unclear. The implementer should compare specific results against:
- `Characteristic("calls to")`
- `Characteristic("calls from")`
- `API`
- `String`
- `FunctionName`
- `Mnemonic`
This keeps the delivered collector IDALib-only while still giving the implementer a precise oracle for parity checks.
### 8. Recreate callers and callees
Use a precomputed normalized call graph. Do not compute callers ad hoc during rendering.
For each non-import function:
- walk its instructions
- identify call or jump-to-import patterns using the same logic as `extract_insn_api_features()`
- resolve thunk chains
- classify the resolved target as internal or API/import/extern
- record caller and callee relationships on resolved targets
For parity, verify against these capa semantics:
- function callers: `function.extract_function_calls_to()`
- outgoing calls: `insn.extract_function_calls_from()`
- API calls: `insn.extract_insn_api_features()`
Important detail: the existing helper treats both `call` and `jmp` as API-bearing instructions in some thunk/import cases. Do not assume that only `call` instructions matter.
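The normalized indexes can then be built in one pass over raw call edges. A hypothetical sketch; `resolve` stands in for the thunk resolver and `is_api` for import/extern/library classification:

```python
from collections import defaultdict


def build_call_graph(raw_edges, resolve, is_api):
    """raw_edges: (caller_ea, raw_target_ea) pairs from call/jmp sites.

    Thunk resolution happens before classification, so callers of a
    thunk are recorded on the real target (caller forwarding).
    """
    calls = defaultdict(set)      # internal callee sets per function
    api_calls = defaultdict(set)  # resolved import/extern/library targets
    callers = defaultdict(set)    # reverse edges keyed by resolved target
    for src, raw_dst in raw_edges:
        dst = resolve(raw_dst)
        if is_api(dst):
            api_calls[src].add(dst)
        else:
            calls[src].add(dst)
            callers[dst].add(src)
    return calls, api_calls, callers
```

Whether API targets should also accumulate callers is a rendering decision; this sketch keeps them out of the reverse index.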
### 9. Recreate B/E/I with capa's CFG semantics
For each rendered function:
- basic blocks: count basic blocks using the equivalent of `helpers.get_function_blocks()`
- edges: sum successors across those blocks
- instructions: count instructions across those blocks
- bytes: sum instruction sizes
The important parity rule is the CFG construction mode:
- match `idaapi.FlowChart(f, flags=(idaapi.FC_PREDS | idaapi.FC_NOEXT))`
If the Domain API flowchart differs, use it only if it can match the no-external-block behavior. Otherwise use a tiny legacy helper for block enumeration and keep everything else in the Domain API.
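Once block enumeration matches the `PREDS | NOEXT` construction, the B/E/I aggregation itself is backend-neutral. A sketch over pre-extracted per-block tuples (a hypothetical shape, assuming the tuples come from the no-external-blocks flowchart):

```python
def compute_bei(blocks):
    """blocks: (instruction_count, byte_size, successor_count) per basic
    block of one function's CFG.

    Returns (basic_blocks, edges, instructions, instruction_bytes).
    """
    b = len(blocks)
    e = sum(succs for _, _, succs in blocks)
    i = sum(insns for insns, _, _ in blocks)
    size = sum(nbytes for _, nbytes, _ in blocks)
    return b, e, i, size
```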
### 10. Recreate string extraction with capa's data-ref chasing
Do not just test `db.strings.get_at(xref.to_ea)` and stop. That will miss the semantics capa already uses.
Start from capa's behavior:
- follow a single data-reference chain from the instruction, up to depth `10`
- if the final target is a string, emit it
- otherwise it may be bytes, not a string
For mapa specifically:
- only render strings, not raw bytes
- deduplicate by rendered string value, matching the current script
- trim trailing whitespace the same way the current script does
Reference implementation:
- `helpers.find_data_reference_from_insn()`
- `helpers.find_string_at()`
- `insn.extract_insn_string_features()`
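The chain-following part is backend-neutral once the per-address reference query is injected. A sketch mirroring the depth-10 behavior; `single_data_ref` is a hypothetical stand-in for the IDA xref query:

```python
def chase_data_reference(ea, single_data_ref, max_depth=10):
    """Follow a chain of single data references, mirroring the intent of
    helpers.find_data_reference_from_insn(max_depth=10).

    single_data_ref(ea) returns the sole data reference from ea, or
    None when there are zero or multiple references.
    """
    seen = {ea}
    cur = ea
    for _ in range(max_depth):
        nxt = single_data_ref(cur)
        if nxt is None or nxt in seen:
            break
        seen.add(nxt)
        cur = nxt
    return cur
```

The final address is then tested with the string-at lookup; anything that is not a string is dropped rather than rendered as bytes.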
### 11. Reuse capa's name and alternative-name semantics
For the function display name, use this order:
- demangled name
- IDA function name
- alternative names from comments if they help and the main name is poor
- final fallback such as `sub_{ea:x}`
Render Assemblage source name and source file as annotations beneath the function header. Do not mutate the database just to apply Assemblage data.
Reference points:
- `function.extract_function_name()`
- `helpers.get_function_alternative_names()`
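A sketch of that ordering; the `sub_` prefix test for a poor main name is an assumption for illustration, not capa's exact heuristic:

```python
def display_name(ea, demangled=None, ida_name=None, alternatives=()):
    """Pick the function display name: demangled first, then the IDA
    name, then a comment-derived alternative when the IDA name is an
    auto-generated placeholder, then a sub_ fallback."""
    if demangled:
        return demangled
    is_auto = ida_name is None or ida_name.startswith("sub_")
    if ida_name and not is_auto:
        return ida_name
    for alt in alternatives:
        if alt:
            return alt
    return ida_name or f"sub_{ea:x}"
```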
### 12. Reattach capa matches by containing function
Keep the current capa JSON input format, but simplify the mapping logic.
Recommended algorithm:
- parse the capa JSON as today
- for each absolute match address, ask IDA for the containing function
- if that function is a thunk, resolve it through the thunk resolver
- attach the rule name to the resolved function start EA
- warn when no containing function exists
This is simpler than the current BinExport-specific mapping and aligns better with IDA's data model.
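That algorithm is only a few lines. A hypothetical sketch with the IDA lookups injected; `containing_function` returns a function start EA or None:

```python
def attach_capa_matches(matches, containing_function, resolve_thunk, warn):
    """matches: (absolute_ea, rule_name) pairs parsed from the capa JSON.

    Returns {function_start_ea: set_of_rule_names}, with matches that
    land in thunks forwarded to the resolved target.
    """
    by_function = {}
    for ea, rule in matches:
        fn = containing_function(ea)
        if fn is None:
            warn(f"capa match at {ea:#x} is outside any function")
            continue
        fn = resolve_thunk(fn)
        by_function.setdefault(fn, set()).add(rule)
    return by_function
```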
### 13. Rebuild top-level sections using capa-backed semantics
For `meta`:
- sample name: input path or IDA metadata
- hashes: prefer IDA-provided hash helpers in `capa/ida/helpers.py`
- architecture: reuse the logic in `capa/features/extractors/ida/global_.py`
- timestamp: define explicitly, because BinExport's old field is gone
For `sections`:
- use `ida-domain` segments if possible
- match capa's `skip_header_segments` behavior if needed
For `libraries`:
- use `helpers.get_file_imports()` and group/display import modules accordingly
For `modules`:
- remove the section entirely as an intentional interface change
- document the removal in the spec so future ports do not try to reintroduce BinExport-specific `module` semantics accidentally
### 14. Add tests using capa's existing IDALib pattern
Pure tests should cover:
- Assemblage parsing and RVA-to-VA mapping
- thunk-chain resolution
- import/extern normalization
- string de-duplication and trimming
- final rendering from a prebuilt `MapaReport`
Integration tests should reuse the same lifecycle mapa will use in production:
- resolve the input to an existing or cached database
- open it through the guarded session helper
- collect the mapa report
- compare key functions and sections against golden outputs
Use `tests/test_idalib_features.py` as the reference for version-specific skips and expectations, and use `../idawilli/idals/idals.py` as the reference for database resolution and guarded open/close behavior.
### 15. Validate parity and document deliberate differences
Compare the new output against the frozen Lancelot output on the supplied samples.
Verify specifically:
- function ordering
- thunk rendering
- thunk-forwarded callers
- internal vs API call classification
- libraries/imports section contents
- string extraction
- B/E/I counts
- Assemblage annotations and source-file separators
- capa attachment
Document every known delta. The likely ones are:
- function discovery differences between IDA and Lancelot
- the intentional removal of the `modules` section
- symbol differences across IDA versions, especially ELF on older 9.x
- resource-dependent differences on older IDALib versions
## Minimal implementation checklist
A good order of work is:
1. freeze current mapa outputs
2. add backend-neutral report dataclasses
3. add `resolve_database()` and `open_database_session()` helpers modeled on `idals.py`
4. implement the XDG cache path and quiet-by-default cache creation behavior
5. build a full mapa function inventory that includes thunks
6. port sections and metadata
7. implement import/extern classification to match capa semantics
8. implement the thunk resolver using capa's existing semantics
9. build normalized caller/callee/API indexes
10. port B/E/I using `PREDS | NOEXT`-equivalent CFG traversal
11. port string extraction using capa's data-ref-chain semantics
12. port Assemblage overlay handling
13. port capa JSON address-to-function attachment
14. remove the `modules` section and document the interface change
15. compare outputs against golden references
16. document any proven `ida-domain` gaps and any intentional differences in spec/design during implementation
## Resolved decisions for the implementation handoff
Record these in `spec.md` or `design.md` during implementation so the behavior stays stable.
- accepted inputs: raw binary and existing IDA databases
- cached databases live under the XDG cache root in `mandiant/mapa/`
- mapa may create and persist cached IDA databases automatically
- cache creation stays quiet in normal mode and only surfaces in verbose/debug logging
- Lumina stays disabled for now
- `meta.ts` becomes `datetime.now()`
- remove the `modules` section from the report
- the implementation target is IDALib only and all Lancelot dependencies should be removed
- assume `ida-domain` is sufficient unless the implementer can demonstrate a specific missing API; any lower-level fallback must be justified and documented


@@ -0,0 +1,169 @@
# mapa plan: vendor Quantum Strand string tags
This plan describes how to extend mapa so every rendered `string:` line can carry right-aligned tags from Quantum Strand's string databases. The implementation target is broader than the earlier draft. It should include the full set of useful database-backed tags now: open-source library tags, CRT tags, expert tags, winapi tags, global-prevalence tags, and junk-code tags. The feature is still strictly limited to database matches. It must not import Quantum Strand or FLOSS as a runtime library, and it must not pull in Quantum Strand's whole-file layout analysis, structure tagging, file offsets, encoding columns, or any other non-database context.
The implementer should work only in `mapa/`, `tests/`, and packaging/docs as needed. Nothing belongs under `capa/`. The sibling checkout at `../quantumstrand/` is only a research source and a place to copy vendored resources from once.
## What mapa should do when this lands
When mapa emits a string referenced by a function, the left side should stay in the current mapa style and the right side should show database-derived tags. The renderer should use Rich width calculations so the tag column stays visible and the string text is clipped first if necessary. The output should continue to be function-centric and concise.
A representative result looks like this:
```text
string: "invalid distance code" #zlib
string: "GetProcAddress" #winapi
string: "!This program cannot be run in DOS mode." #common
string: "CurrencyDispenser1" #capa
string: "_initterm" #winapi #code-junk
```
The model should retain richer match metadata than the text renderer shows. The renderer only needs visible tags. The report model should still remember which database family matched and any associated metadata that might matter later.
## Non-goals
This feature is not a Quantum Strand port. Do not bring over its file-layout tree, structure labels like `import table`, section box rendering, code-vs-data analysis, duplicate-string tagging, relocation tagging, xor-decoding tags, or hidden-string filtering. Those features solve a different problem. mapa already knows which function references a string and only needs database-backed context for that string.
mapa should not start suppressing strings based on database matches. Even if an upstream expert rule has `action: hide`, mapa should still render the string. Store the action in metadata if it exists, but do not use it to drop rows.
## Upstream resources to copy
All of the following resources should be vendored into mapa under a new package such as `mapa/string_tags/data/`.
Library and CRT databases:
```bash
mkdir -p mapa/string_tags/data/oss mapa/string_tags/data/crt
cp ../quantumstrand/floss/qs/db/data/oss/*.jsonl.gz mapa/string_tags/data/oss/
cp ../quantumstrand/floss/qs/db/data/crt/msvc_v143.jsonl.gz mapa/string_tags/data/crt/
```
This copies these library databases:
`brotli.jsonl.gz`, `bzip2.jsonl.gz`, `cryptopp.jsonl.gz`, `curl.jsonl.gz`, `detours.jsonl.gz`, `jemalloc.jsonl.gz`, `jsoncpp.jsonl.gz`, `kcp.jsonl.gz`, `liblzma.jsonl.gz`, `libpcap.jsonl.gz`, `libsodium.jsonl.gz`, `mbedtls.jsonl.gz`, `openssl.jsonl.gz`, `sqlite3.jsonl.gz`, `tomcrypt.jsonl.gz`, `wolfssl.jsonl.gz`, `zlib.jsonl.gz`, plus the CRT database `msvc_v143.jsonl.gz`.
Expert, winapi, prevalence, and junk-code databases:
```bash
mkdir -p mapa/string_tags/data/expert mapa/string_tags/data/winapi mapa/string_tags/data/gp
cp ../quantumstrand/floss/qs/db/data/expert/capa.jsonl mapa/string_tags/data/expert/
cp ../quantumstrand/floss/qs/db/data/winapi/apis.txt.gz mapa/string_tags/data/winapi/
cp ../quantumstrand/floss/qs/db/data/winapi/dlls.txt.gz mapa/string_tags/data/winapi/
cp ../quantumstrand/floss/qs/db/data/gp/gp.jsonl.gz mapa/string_tags/data/gp/
cp ../quantumstrand/floss/qs/db/data/gp/cwindb-native.jsonl.gz mapa/string_tags/data/gp/
cp ../quantumstrand/floss/qs/db/data/gp/cwindb-dotnet.jsonl.gz mapa/string_tags/data/gp/
cp ../quantumstrand/floss/qs/db/data/gp/junk-code.jsonl.gz mapa/string_tags/data/gp/
cp ../quantumstrand/floss/qs/db/data/gp/xaa-hashes.bin mapa/string_tags/data/gp/
cp ../quantumstrand/floss/qs/db/data/gp/yaa-hashes.bin mapa/string_tags/data/gp/
```
The implementor should also create `mapa/string_tags/SOURCES.md` and record the upstream repo path, upstream commit, copied files, and any code copied or rewritten from upstream. The research for this plan used upstream commit `73eb1541e896c065fc694ba7b01067f56871631b`.
## Upstream code to read before implementing
The useful Quantum Strand code is small. Before writing anything, read `../quantumstrand/floss/qs/db/oss.py`, `expert.py`, `gp.py`, `winapi.py`, `../quantumstrand/floss/qs/main.py`, and the tests `../quantumstrand/tests/test_oss_db.py`, `test_winapi_db.py`, `test_gp_db.py`, `test_qs.py`, and `test_qs_pma0101.py`.
The only part of `floss/qs/main.py` that should influence mapa design is the small tagging and Rich rendering logic. Leave the rest of that file behind.
## Behavior that must be preserved from Quantum Strand
Quantum Strand's database lookups are simple and should be preserved exactly.
The OSS and CRT databases are gzip-compressed JSONL files. Each line contains one `OpenSourceString` record with fields such as `string`, `library_name`, `library_version`, `file_path`, `function_name`, and `line_number`. Lookup is exact by `string`. A match emits tag `#<library_name>`. The CRT file uses `library_name: "msvc"`, so it emits `#msvc`.
The expert database file is plain `capa.jsonl`, not gzip-compressed despite what the readme says. Each record is an `ExpertRule` with `type`, `value`, `tag`, `action`, and descriptive metadata. Matching behavior follows `floss/qs/db/expert.py`: exact string match for `type == "string"`, substring search for `type == "substring"`, and `re.compile(rule.value).search(...)` for `type == "regex"`. A match emits `rule.tag`, which in the current vendored file is typically `#capa`.
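The three rule types reduce to a small dispatch, sketched here with a hypothetical `expert_rule_matches` helper mirroring the behavior described above:

```python
import re


def expert_rule_matches(rule: dict, s: str) -> bool:
    # Mirrors floss/qs/db/expert.py matching: exact match for "string",
    # substring containment for "substring", regex search for "regex".
    if rule["type"] == "string":
        return rule["value"] == s
    if rule["type"] == "substring":
        return rule["value"] in s
    if rule["type"] == "regex":
        return re.search(rule["value"], s) is not None
    return False
```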
The winapi database is two gzip-compressed text files. `dlls.txt.gz` is loaded into a lowercase set and matched against `string.lower()`. `apis.txt.gz` is loaded into a case-sensitive set and matched against the string verbatim. A match from either source emits `#winapi`.
The global-prevalence JSONL databases are `gp.jsonl.gz`, `cwindb-native.jsonl.gz`, and `cwindb-dotnet.jsonl.gz`. Quantum Strand loads them as `StringGlobalPrevalenceDatabase` and does exact string lookup. Any hit in any of those databases emits `#common`.
The junk-code JSONL database is `junk-code.jsonl.gz`. It has the same file format as the prevalence JSONL databases, but Quantum Strand treats it separately. Any hit emits `#code-junk`.
The hash databases are `xaa-hashes.bin` and `yaa-hashes.bin`. Each file is a flat sequence of 8-byte truncated MD5 digests. Quantum Strand computes `md5(string.encode("utf-8")).digest()[:8]` and checks membership in the set. A hit emits `#common`.
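The hash-file format and lookup can be sketched directly from that description (hypothetical helper names):

```python
import hashlib


def load_hash_db(data: bytes) -> set[bytes]:
    # The file is a flat sequence of 8-byte truncated MD5 digests.
    return {data[i : i + 8] for i in range(0, len(data), 8)}


def hash_db_hit(db: set[bytes], s: str) -> bool:
    # Membership of md5(string)[:8] in the set emits "#common".
    return hashlib.md5(s.encode("utf-8")).digest()[:8] in db
```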
These match rules are the core of the feature. They are much more important than matching Quantum Strand's internal class names.
## Recommended mapa package layout
Add a dedicated package under `mapa/`. A good layout is `mapa/string_tags/__init__.py`, `model.py`, `loaders.py`, `tagger.py`, a `data/` subtree, and `SOURCES.md`.
Do not copy upstream modules verbatim unless necessary. A mapa-local rewrite is cleaner because the code is short and mapa needs a narrower API than Quantum Strand.
`model.py` should define two small dataclasses. `StringTagMatch` should capture one concrete match with fields like `tag`, `source_family`, `source_name`, `kind`, and optional metadata such as `library_name`, `library_version`, `file_path`, `function_name`, `line_number`, `note`, `description`, `action`, `global_count`, `encoding`, and `location`. `StringTagResult` should hold the final sorted tag tuple plus the tuple of `StringTagMatch` entries.
`loaders.py` should own the file-format readers. It should use `gzip`, `hashlib`, `msgspec`, and `importlib.resources`. There is no reason to invent a new parser. This project already depends on `msgspec`, which is also what Quantum Strand uses for the JSONL formats.
`tagger.py` should own the process-wide cached tagger. A simple shape is `load_default_tagger()` plus an object with `tag_string(raw: str) -> StringTagResult`. The tagger should lazily load and cache the vendored databases once per process.
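One way to get the process-wide lazy cache is `functools.lru_cache` over the constructor. The database-list shape below is a stand-in for the real loaders:

```python
from functools import lru_cache
from typing import Callable, Sequence


class StringTagger:
    def __init__(self, databases: Sequence[tuple[str, Callable[[str], list[str]]]]):
        # databases: (source_name, lookup) pairs; each lookup returns tag names.
        self._databases = databases

    def tag_string(self, raw: str) -> tuple[str, ...]:
        tags: set[str] = set()
        for _name, lookup in self._databases:
            tags.update(lookup(raw))
        return tuple(sorted(tags))  # deterministic order for stable tests


@lru_cache(maxsize=1)
def load_default_tagger() -> StringTagger:
    # In mapa this would decode the vendored databases once; stubbed here.
    return StringTagger(
        [("demo", lambda s: ["#capa"] if s == "CurrencyDispenser1" else [])]
    )
```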
## Report-model changes
`mapa/model.py` should be extended so a rendered mapa string can carry tags and match metadata. The minimal change is to add `tags` and `tag_matches` to `MapaString`. The existing `value` field should remain the display string. If the implementor wants to preserve the exact raw string too, add a `raw_value` field. That is worthwhile because mapa currently trims trailing whitespace before storing the string, and exact-match databases should run against the untrimmed value.
The most important collector rule is this: match against the raw extracted string first, derive the display string second, and deduplicate on the display string only after the database matches have been computed. If two raw strings collapse to the same display value after `rstrip()`, their tags and metadata should be merged onto the single rendered `MapaString` entry.
## Collector guidance
The collector should keep its existing string-discovery behavior. This plan does not ask the implementor to revisit how mapa follows data references or how it discovers a string in IDA. Once `collect_report()` recovers a raw string, the new tagging pipeline begins.
A good implementation sequence inside `mapa/collector.py` is: recover `raw_value`, call the vendored tagger on `raw_value`, compute `display_value = raw_value.rstrip()`, skip empty display values, and then either create or update the `MapaString` entry for that display value. The update path should union tag names and append only unique `StringTagMatch` values. The final `MapaString.tags` should be sorted for stable rendering and stable tests.
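That sequence can be sketched as a small merge helper (hypothetical `merge_string` name; the tagger here returns `(tag, source)` pairs standing in for `StringTagMatch`):

```python
def merge_string(entries: dict, raw_value: str, tagger) -> None:
    matches = tagger(raw_value)        # 1) tag against the raw string first
    display = raw_value.rstrip()       # 2) derive the display value second
    if not display:                    # 3) skip empty display values
        return
    # 4) create or update the entry keyed on the display value
    entry = entries.setdefault(display, {"tags": set(), "matches": []})
    entry["tags"].update(tag for tag, _source in matches)
    for match in matches:
        if match not in entry["matches"]:  # append only unique matches
            entry["matches"].append(match)
```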
This is the one place where the current mapa behavior is most likely to cause silent misses. If the implementor tags only the trimmed string, exact-match results from Quantum Strand can be lost.
## Tag aggregation rules
The model should preserve all concrete matches, even when multiple databases emit the same visible tag. This matters most for `#common`, because a string may hit several prevalence databases and one or both hash databases. The visible tag list should deduplicate tag names, but the metadata should preserve every source that contributed.
The tagger should produce tags in a deterministic order. A simple stable order is alphabetical order on the tag name after aggregation. The metadata order should also be deterministic, for example by `(tag, source_family, source_name, library_name, note, value)`.
## Rendering guidance
Replace the current plain markup string for `string:` rows with a dedicated Rich `Text` builder. The implementor should read `render_string()` and related helpers in `../quantumstrand/floss/qs/main.py` and copy only the layout idea. The left side is the existing `string: "..."` text. The right side is the space-joined visible tag list. Width should come from Rich's own measurement.
A helper such as `Renderer.render_string_line(value: str, tags: Sequence[str]) -> Text` is sufficient. It should use `self.console.size.width - (self.indent * 2)` as the available width for the line content, build a `Text` object for the left side and another for the right side, reserve at least one separating space, and then align or truncate the left side so the right side stays visible. If the terminal is too narrow for that layout, fall back to a single-column form that still shows the tags.
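The column arithmetic is independent of Rich and can be sketched as a pure function (hypothetical name; the minimum-width threshold and fallback form are assumptions, and the real helper would build `Text` objects instead of plain strings):

```python
def layout_string_row(left: str, tags: list[str], width: int) -> str:
    # Right side is the space-joined visible tag list.
    right = " ".join(tags)
    if not right:
        return left[:width]
    available = width - len(right) - 1  # reserve one separating space
    if available < 8:
        # Too narrow for two columns: fall back to a single-column form
        # that still shows the tags.
        return f"{left[:width]}\n{right[:width]}"
    # Clip the left side first so the tag column stays at the right edge.
    return f"{left[:available].ljust(available)} {right}"
```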
mapa should adopt one Quantum Strand display rule because it reduces noise without hiding information: when a string has `#common` plus one or more more-specific tags, omit `#common` from the visible tag column but keep it in `tag_matches` and `MapaString.tags`. That is a rendering choice only. The underlying data should stay intact.
No string row should be hidden by tag policy. `#common` and `#code-junk` may be styled in a muted color. `#capa` may be highlighted. `#winapi` and library tags can use the default string-tag style unless the implementor finds a better minimal palette. The important behavior is visibility and stable alignment. Decorative styling is secondary.
## Recommended visible-tag policy
The rendered tag column should follow these rules.
Show all tags except `#common` when a more-specific tag is also present. Keep `#common` visible only if it is the only tag. Show `#code-junk` even when other tags are present because it communicates a different kind of context than `#common`. Show `#winapi`, `#capa`, and library tags directly. Do not invent mapa-specific aliases or rename the upstream tags.
This yields readable outputs such as `#winapi #code-junk`, `#capa`, `#zlib`, or `#common`. It avoids noisy combinations like `#common #winapi` on every common API name.
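The whole policy fits in one small function (hypothetical name), which also gives the alphabetical visible order:

```python
def visible_tags(tags: list[str]) -> list[str]:
    # Hide "#common" when any more-specific tag is present; keep it when
    # it is the only tag. "#code-junk", "#winapi", "#capa", and library
    # tags are always shown.
    ordered = sorted(set(tags))
    specific = [t for t in ordered if t != "#common"]
    return specific if specific else ordered
```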
## Packaging guidance
If mapa needs to work from an installed package, `pyproject.toml` will need changes because it currently only packages `capa*`. The implementor should include `mapa*` packages and package data under `mapa/string_tags/data/`. The loader should use `importlib.resources.files()` so it works both from a source checkout and an installed wheel.
Even if packaging is deferred, the code should still use `importlib.resources` because it centralizes the resource lookup and avoids hard-coded repository-relative paths.
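A minimal resource accessor looks like this; `mapa.string_tags.data` would be the real package, while the test below uses a stdlib package only because `mapa` is not importable here:

```python
from importlib import resources


def data_resource(package: str, name: str):
    # Resolves a packaged data file whether running from a source checkout
    # or an installed wheel; returns a Traversable that supports .open().
    return resources.files(package).joinpath(name)
```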
## Implementation steps for the handoff
The implementor should start by copying the resources, writing `mapa/string_tags/SOURCES.md`, and adding pure loader tests before touching mapa's collector or renderer. Then they should implement the small loader layer for the five upstream database families: OSS/CRT, expert, winapi, prevalence JSONL, and prevalence hash files. After that they should implement the aggregated tagger and add pure tagger tests using known literals from the vendored datasets.
Once the tagger is stable, they should extend `MapaString`, thread tagging through `mapa/collector.py`, and finally switch `mapa/renderer.py` to the Rich `Text`-based string-row helper. Only after all of that is working should they update packaging and installed-resource handling, because those changes are easier to verify when the core behavior already exists.
During implementation they should update `doc/plans/spec.md` and `doc/plans/design.md` to record the final user-visible behavior and the final module layout. The spec should say that `string:` rows may carry right-aligned database tags and should document the visible-tag policy. The design doc should say where the vendored databases live, how the loader is structured, and how the collector merges raw-string matches into deduplicated display strings.
## Concrete test plan
Most tests should avoid IDA. Start with pure loader and tagger tests. Known-good assertions from the upstream data include `"invalid distance code" -> #zlib`, `"IsolationAware function called after IsolationAwareCleanup" -> #msvc`, `"CurrencyDispenser1" -> #capa`, `"kernel32.dll" -> #winapi`, `"CreateFileA" -> #winapi`, and `"!This program cannot be run in DOS mode." -> #common`. `"_initterm"` is a useful mixed case because Quantum Strand's own tests show it as both `#winapi` and `#code-junk`.
Cover the expert database's three rule types: exact, substring, and regex. Add a hash-database test that emits `#common` even when the string is absent from the JSONL prevalence files. Add another case where several databases contribute the same visible tag and the metadata still records every contributing match.
Add renderer tests using a fixed-width Rich console. One test should show that an untagged string row still matches the old mapa format. Another should show that a tagged row keeps the tag column at the right edge. A narrow-width test should show that the string side is clipped first. Another should check that `#common` disappears from the visible tag list when a more-specific tag exists while remaining present in the underlying model.
Finally, add report tests in `tests/test_mapa.py` that build a small `MapaReport` directly. At least one string should carry a library tag, at least one should carry `#common`, and at least one should carry a multi-tag combination like `#winapi #code-junk`. None of these tests should require IDA.
## Performance and memory notes
Vendoring every requested database is still practical, but loading them all eagerly may have a noticeable startup cost. The tagger should therefore be cached process-wide and built lazily. Hash files should be read once into memory as sets of 8-byte digests. The string databases should be decoded once into in-memory maps. This is a good place to keep the code simple first and optimize only if startup becomes a measured problem.
The current compressed data footprint is modest for OSS, CRT, expert, and winapi. The prevalence family is the largest part of the set, especially the hash files. That is another reason to centralize loading and avoid repeated per-function or per-string initialization.
## Notes the implementor should not miss
`floss/qs/db/data/expert/readme.md` says the expert database is gzip-compressed, but the shipped file is plain `capa.jsonl`. Follow the code and the actual file on disk. `floss/qs/db/oss.py` includes the CRT file in `DEFAULT_PATHS`, so treat `#msvc` as part of the library tagging feature. Quantum Strand's `remove_false_positive_lib_strings()` should not be copied because its five-hit threshold is tuned for whole-file triage and fits mapa's per-function presentation poorly.
The main risk in this work is not the file formats. It is silent semantic drift during integration. The implementor should preserve Quantum Strand's exact query rules, tag against the raw string before trimming, keep all concrete matches in metadata, and only simplify at the renderer boundary.

File diff suppressed because one or more lines are too long

92
doc/plans/spec.md Normal file

@@ -0,0 +1,92 @@
# mapa specification
mapa renders a binary's function map as either a structured text report or a self-contained HTML document: metadata, sections, import libraries, and a per-function breakdown of callers, callees, API calls, strings, CFG stats, capa rule matches, and optional Assemblage ground truth.
## Invocation
```
python scripts/mapa.py <input_file> [--capa <capa.json>] [--assemblage <functions.csv>] [--output {text,html-map}] [--open] [--verbose] [--quiet]
```
`input_file` accepts raw binaries (PE, ELF), existing IDA databases (`.i64`, `.idb`), or any file IDA can analyze. For raw files, mapa automatically creates and caches an analyzed IDA database under the XDG cache root (`$XDG_CACHE_HOME/mandiant/mapa/` or `~/.cache/mandiant/mapa/`) keyed by the file's SHA-256 hash.
## Backend
IDALib only. All analysis uses `ida-domain` as the primary query API. The Lancelot/BinExport2 backend has been removed.
## Output modes
`text` is the default. It renders the existing structured terminal report to stdout.
`html-map` renders a single standalone HTML document to stdout. The page inlines all HTML, CSS, JavaScript, and data. It has a compact metadata summary, a tag control strip, and neighborhood controls at the top, then a split view below. The left pane contains the function grid and the right pane contains the program-string list.
The two panes scroll independently. A draggable vertical divider lets the user resize the panes horizontally. Function squares stay in function-address order and still use the naive left-to-right wrapping layout, but they now wrap within the current width of the left pane rather than the full page width.
Function squares are fixed small blocks laid out left-to-right and wrapped responsively within the left pane. Hovering a function square previews that function's neighborhood. Clicking a function square locks or unlocks that function as the active seed. Hovering a tag previews the neighborhood seeded by every function that directly references a string with that tag. Clicking a tag locks or unlocks that tag seed set. Hovering a string row previews the neighborhood seeded by every function that directly references that string. Clicking a string row locks or unlocks that string seed set. Only one active seed source is used at a time. Any locked function, tag, or string overrides hover previews from the other sources until it is cleared.
The neighborhood controls choose graph direction and traversal depth. Direction can be `callers`, `callees`, or `both`, with `both` as the default. Depth can be 1 through 4, with 3 as the default. For the active seed set, mapa runs a bounded breadth-first search from each seed over the selected direction mode, uses shortest distance from that seed, applies geometric decay `0.5 ** distance`, and adds the per-seed contributions together. Direct seed functions get an outline. Reached functions get a fill intensity scaled within the current view. Functions outside the active neighborhood dim.
The tag strip is sorted by descending distinct-function count, then tag name, and each control shows that count. The page also shows a small legend for heat, seed outline, and dim states. The string list shows each string's virtual address explicitly, preserves duplicate display values at different addresses, and shows visible tags right-aligned in each row. Function hover shows a tooltip containing the same single-function mapa summary content as text mode, plus the current heat, seed state, and shortest distance when a neighborhood is active. Top-level tag controls use only string tags. Capa rule names are not included there.
`--open` is only valid with `--output html-map`. In that mode, mapa writes the HTML to a temporary `.html` file, opens the user's local web browser on the corresponding `file://` URL, and does not write the HTML document to stdout.
The visible-tag policy is the same in both modes: hide `#common` when a more-specific tag is present, but keep it visible when it is the only tag.
## Report sections
The text report renders these sections in order:
1. meta — file name, SHA-256, architecture, timestamp
2. sections — memory segments with address, permissions (rwx), and size
3. libraries — import modules
4. functions — per-function detail in address order
### Functions section
Each function renders as either `thunk <name> @ <address>` or `function <name> @ <address>` followed by:
- source-file separator — a horizontal rule inserted before a function when its primary Assemblage source path differs from the last seen non-empty source path
- `assemblage name:` — source function name from Assemblage, when available
- `assemblage file:` — source file path from Assemblage, when available
- `xref:` — callers with direction arrow and function-order delta
- `B/E/I:` — basic blocks / CFG edges / instructions (total bytes)
- `capa:` — matched capa rule names
- `calls:` — internal non-library callees with direction and delta
- `api:` — import/external/library callees
- `string:` — referenced strings (deduplicated, whitespace-trimmed), with optional right-aligned database tags
Thunk functions show only the header plus any Assemblage lines.
### Assemblage overlay
When `--assemblage` is provided, mapa reads a CSV file and requires these columns: `hash`, `name`, `start`, `end`, and `source_file`.
Assemblage matching works like this:
- mapa resolves the sample SHA-256 from the input file or the opened IDA database.
- mapa keeps only CSV rows whose `hash` matches that SHA-256, case-insensitively.
- mapa treats `start` and `end` as RVAs and adds the IDA database base address to map them to function VAs.
- mapa does not rename functions, callers, or callees from Assemblage data. The displayed function header stays IDA-derived.
- mapa strips the trailing provenance suffix from `source_file` before rendering, for example `C:\src\foo.c (MD5: ...)` renders as `C:\src\foo.c`.
- Exact duplicate CSV rows are deduplicated. If multiple distinct Assemblage rows map to the same function address, mapa renders all of them in CSV order.
- For source-file separators, mapa uses the first Assemblage record's normalized `source_file` path as the function's primary source path.
- Missing Assemblage data does not start or end a source-file run. It does not trigger a separator and does not reset the last seen non-empty source path.
- When a later function has a different primary source path from the last seen non-empty source path, mapa inserts a separator immediately before that function.
## Deliberate interface changes from the Lancelot/BinExport2 version
- The `modules` section has been removed. BinExport2's module concept has no IDA equivalent.
## Decisions
- 2026-03-16: Lumina disabled during database creation via `IdaCommandOptions(plugin_options="lumina:host=0.0.0.0 -Osecondary_lumina:host=0.0.0.0")`, matching capa's `loader.py`. The `plugin_options` field maps to IDA's `-O` switch; embedding `-O` in the value for the second option works because `build_args()` concatenates it verbatim. Resource loading enabled via `load_resources=True` (maps to `-R`).
- 2026-03-16: Cache directory is `$XDG_CACHE_HOME/mandiant/mapa/` (or `~/.cache/mandiant/mapa/`). Separate from idals cache.
- 2026-03-16: `meta.ts` is `datetime.now(UTC).isoformat()` — no longer sourced from BinExport2.
- 2026-03-16: Thunk chain depth limit is 5 (matches capa's `THUNK_CHAIN_DEPTH_DELTA`).
- 2026-03-16: CFG stats use `FlowChartFlags.NOEXT | FlowChartFlags.PREDS` to match capa's block enumeration semantics.
- 2026-03-16: String extraction follows single data-reference chains up to depth 10, matching capa's `find_data_reference_from_insn`.
- 2026-03-16: String rows may carry right-aligned database tags derived from vendored Quantum Strand string databases. Tags include `#<library>` (e.g. `#zlib`, `#openssl`), `#msvc`, `#capa`, `#winapi`, `#common`, and `#code-junk`. Visible tag policy: `#common` is hidden when a more-specific tag is present; `#code-junk` is always shown. Tags are matched against the raw (untrimmed) string value. The underlying model preserves all match metadata even when the renderer suppresses a visible tag.
- 2026-03-16: Assemblage input is a CSV keyed by sample SHA-256. mapa matches rows by `hash`, converts `start`/`end` RVAs to VAs using the database base address, annotates functions with `assemblage name:` and `assemblage file:` lines, and does not override IDA-derived function names.
- 2026-03-16: `--output html-map` uses only string tags in the top control strip, sorts them by descending distinct-function count then name, shows those counts in the controls, and lists program strings by string VA with explicit addresses.
- 2026-03-16: `--output html-map` uses a split view with independently scrolling function and string panes, a draggable vertical divider, right-aligned visible tags in each string row, function/tag/string seed selection, caller/callee/both direction controls, depth controls from 1 to 4, and additive geometric-decay neighborhood heat based on bounded shortest-distance traversal.
- 2026-03-16: `--open` is valid only with `--output html-map`. It writes the HTML report to a temporary `.html` file and opens the local browser on that file instead of writing the HTML to stdout.

0
mapa/__init__.py Normal file

5
mapa/__main__.py Normal file

@@ -0,0 +1,5 @@
import sys

from mapa.cli import main

sys.exit(main())

57
mapa/assemblage.py Normal file

@@ -0,0 +1,57 @@
from __future__ import annotations
import csv
import logging
from collections import defaultdict
from pathlib import Path
from mapa.model import AssemblageRecord
logger = logging.getLogger(__name__)
REQUIRED_COLUMNS = frozenset({"hash", "name", "start", "end", "source_file"})


def validate_assemblage_columns(fieldnames: list[str] | None) -> None:
    columns = set(fieldnames or [])
    missing = sorted(REQUIRED_COLUMNS - columns)
    if missing:
        raise ValueError(
            f"assemblage CSV is missing required columns: {', '.join(missing)}"
        )


def load_assemblage_records(
    assemblage_path: Path,
    sample_sha256: str,
    base_address: int,
) -> dict[int, list[AssemblageRecord]]:
    if not sample_sha256:
        raise ValueError("sample sha256 is required to load assemblage data")
    normalized_sha256 = sample_sha256.lower()
    records_by_address: defaultdict[int, list[AssemblageRecord]] = defaultdict(list)
    seen_by_address: defaultdict[int, set[AssemblageRecord]] = defaultdict(set)
    with assemblage_path.open("rt", encoding="utf-8", newline="") as handle:
        reader = csv.DictReader(handle)
        validate_assemblage_columns(reader.fieldnames)
        for row in reader:
            row_hash = (row.get("hash") or "").strip().lower()
            if row_hash != normalized_sha256:
                continue
            record = AssemblageRecord.from_csv_row(row, base_address=base_address)
            seen = seen_by_address[record.address]
            if record in seen:
                continue
            seen.add(record)
            records_by_address[record.address].append(record)
    logger.debug(
        "loaded %d assemblage records for %s from %s",
        sum(len(records) for records in records_by_address.values()),
        normalized_sha256,
        assemblage_path,
    )
    return dict(records_by_address)

235
mapa/cli.py Normal file

@@ -0,0 +1,235 @@
from __future__ import annotations
import argparse
import json
import logging
import sys
import time
import webbrowser
from collections import defaultdict
from pathlib import Path
from tempfile import NamedTemporaryFile
from typing import Callable
from rich.console import Console
from rich.logging import RichHandler
from rich.theme import Theme
from mapa.assemblage import load_assemblage_records
from mapa.model import AssemblageRecord
logger = logging.getLogger("mapa")


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="mapa: binary function map")
    parser.add_argument(
        "input_file", type=Path, help="path to input file (binary, .i64, or .idb)"
    )
    parser.add_argument("--capa", type=Path, help="path to capa JSON results file")
    parser.add_argument("--assemblage", type=Path, help="path to Assemblage CSV file")
    parser.add_argument(
        "--output",
        choices=("text", "html-map"),
        default="text",
        help="output format",
    )
    parser.add_argument(
        "--open",
        action="store_true",
        help="when used with --output html-map, write to a temp file and open it in a browser",
    )
    parser.add_argument("--verbose", action="store_true", help="enable verbose logging")
    parser.add_argument(
        "--quiet", action="store_true", help="disable all output but errors"
    )
    return parser


def validate_output_options(output: str, open_report: bool) -> None:
    if open_report and output != "html-map":
        raise ValueError("--open requires --output html-map")


def write_temp_html_report(content: str, directory: Path | None = None) -> Path:
    temp_dir = None if directory is None else str(directory)
    with NamedTemporaryFile(
        mode="w",
        encoding="utf-8",
        suffix=".html",
        prefix="mapa-",
        delete=False,
        dir=temp_dir,
    ) as handle:
        handle.write(content)
    return Path(handle.name)


def open_html_report(
    content: str,
    opener: Callable[[str], bool] | None = None,
    directory: Path | None = None,
) -> Path:
    report_path = write_temp_html_report(content, directory=directory)
    browser_opener = webbrowser.open if opener is None else opener
    opened = browser_opener(report_path.as_uri())
    if not opened:
        raise RuntimeError(f"failed to open browser for {report_path}")
    return report_path


def _load_capa_matches(
    capa_path: Path,
    thunk_targets: dict[int, int],
) -> dict[int, set[str]]:
    """Load capa JSON and map matches to function addresses."""
    doc = json.loads(capa_path.read_text())
    functions_by_basic_block: dict[int, int] = {}
    for function in doc["meta"]["analysis"]["layout"]["functions"]:
        for basic_block in function["matched_basic_blocks"]:
            functions_by_basic_block[basic_block["address"]["value"]] = function[
                "address"
            ]["value"]
    matches_by_address: defaultdict[int, set[str]] = defaultdict(set)
    for rule_name, results in doc["rules"].items():
        for location, _ in results["matches"]:
            if location["type"] != "absolute":
                continue
            matches_by_address[location["value"]].add(rule_name)
    matches_by_function: defaultdict[int, set[str]] = defaultdict(set)
    for address, matches in matches_by_address.items():
        func_addr = functions_by_basic_block.get(address, address)
        if func_addr in thunk_targets:
            logger.debug(
                "forwarding capa matches from thunk 0x%x to 0x%x",
                func_addr,
                thunk_targets[func_addr],
            )
            func_addr = thunk_targets[func_addr]
        matches_by_function[func_addr].update(matches)
        for match in matches:
            logger.info("capa: 0x%x: %s", func_addr, match)
    return dict(matches_by_function)


def main(argv: list[str] | None = None) -> int:
    if argv is None:
        argv = sys.argv[1:]
    parser = build_parser()
    args = parser.parse_args(args=argv)
    try:
        validate_output_options(args.output, args.open)
    except ValueError as error:
        parser.error(str(error))
    stderr_console = Console(stderr=True)
    logging.basicConfig(
        level=logging.DEBUG
        if args.verbose
        else (logging.ERROR if args.quiet else logging.INFO),
        format="%(message)s",
        handlers=[
            RichHandler(
                console=stderr_console, show_path=False, rich_tracebacks=args.verbose
            )
        ],
    )
    from mapa.collector import collect_report
    from mapa.html_renderer import render_html_map
    from mapa.ida_db import open_database_session, resolve_database
    from mapa.renderer import render_report

    t0 = time.time()
    db_path, md5, sha256 = resolve_database(args.input_file)
    logger.debug("perf: resolve_database: %0.2fs", time.time() - t0)
    theme = Theme(
        {
            "decoration": "grey54",
            "title": "yellow",
            "key": "blue",
            "value": "blue",
            "default": "blue",
        },
        inherit=False,
    )
    console = Console(theme=theme, markup=False, emoji=False)
    t0 = time.time()
    with open_database_session(db_path) as db:
        logger.debug("perf: open_database: %0.2fs", time.time() - t0)
        base_address = db.base_address or 0
        effective_sha256 = sha256 or db.sha256 or ""
        assemblage_records_by_address: dict[int, list[AssemblageRecord]] = {}
        if args.assemblage:
            assemblage_records_by_address = load_assemblage_records(
                args.assemblage,
                sample_sha256=effective_sha256,
                base_address=base_address,
            )
        matches_by_function: dict[int, set[str]] = {}
        if args.capa:
            from ida_domain.functions import FunctionFlags

            from mapa.collector import (
                _build_extern_index,
                _build_import_index,
                _resolve_thunk_target,
            )

            import_index = _build_import_index(db)
            extern_addrs = _build_extern_index(db)
            thunk_targets: dict[int, int] = {}
            for func in db.functions:
                flags = db.functions.get_flags(func)
                if flags and FunctionFlags.THUNK in flags:
                    target = _resolve_thunk_target(
                        db, int(func.start_ea), import_index, extern_addrs
                    )
                    if target is not None:
                        thunk_targets[int(func.start_ea)] = target
            matches_by_function = _load_capa_matches(
                args.capa,
                thunk_targets,
            )
        t0 = time.time()
        report = collect_report(
            db,
            md5=md5,
            sha256=effective_sha256,
            matches_by_function=matches_by_function,
            assemblage_records_by_address=assemblage_records_by_address,
        )
        logger.debug("perf: collect_report: %0.2fs", time.time() - t0)
        t0 = time.time()
        if args.output == "html-map":
            html = render_html_map(report)
            if args.open:
                report_path = open_html_report(html)
                logger.info("opened html map: %s", report_path)
            else:
                sys.stdout.write(html)
        else:
            render_report(report, console)
        logger.debug("perf: render_report: %0.2fs", time.time() - t0)
    return 0


if __name__ == "__main__":
    sys.exit(main())

492
mapa/collector.py Normal file

@@ -0,0 +1,492 @@
from __future__ import annotations
import logging
from datetime import datetime, timezone
from ida_domain.database import Database
from ida_domain.flowchart import FlowChartFlags
from ida_domain.functions import FunctionFlags
from mapa.model import (
AssemblageRecord,
MapaCall,
MapaCaller,
MapaFunction,
MapaLibrary,
MapaMeta,
MapaProgramString,
MapaReport,
MapaSection,
MapaString,
)
from mapa.strings import (
MAX_STRING_READ,
extract_ascii_from_buf,
extract_utf16le_from_buf,
)
from mapa.string_tags.tagger import StringTagger, load_default_tagger
logger = logging.getLogger(__name__)
THUNK_CHAIN_DEPTH_DELTA = 5
def _get_permissions_string(perm: int) -> str:
return (
("r" if perm & 4 else "-")
+ ("w" if perm & 2 else "-")
+ ("x" if perm & 1 else "-")
)
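The permission encoding above follows the classic Unix octal convention (read=4, write=2, execute=1) applied to IDA's segment `perm` bitmask; the same mapping can be checked standalone:

```python
def get_permissions_string(perm: int) -> str:
    # map a segment permission bitmask (r=4, w=2, x=1) to "rwx" form
    return (
        ("r" if perm & 4 else "-")
        + ("w" if perm & 2 else "-")
        + ("x" if perm & 1 else "-")
    )

print(get_permissions_string(5))  # r-x (typical .text segment)
print(get_permissions_string(6))  # rw- (typical .data segment)
```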
def _collect_meta(db: Database, md5: str, sha256: str) -> MapaMeta:
name = db.path or ""
if not md5 and db.md5:
md5 = db.md5
if not sha256 and db.sha256:
sha256 = db.sha256
arch = db.architecture or ""
base_address = db.base_address or 0
timestamp = datetime.now(timezone.utc).isoformat()
return MapaMeta(
name=name,
sha256=sha256,
md5=md5,
arch=arch,
timestamp=timestamp,
base_address=base_address,
)
def _collect_sections(db: Database) -> list[MapaSection]:
sections: list[MapaSection] = []
for seg in db.segments.get_all():
sections.append(
MapaSection(
address=int(seg.start_ea),
size=int(seg.end_ea) - int(seg.start_ea),
perms=_get_permissions_string(int(seg.perm)),
name=db.segments.get_name(seg) or "",
)
)
return sections
def _normalize_module_name(name: str) -> str:
"""Normalize an import module name to include extension.
IDA strips .dll from PE import module names (e.g. 'KERNEL32' instead of
'KERNEL32.dll'). Add it back when the name has no extension.
"""
if "." not in name:
return f"{name}.dll".lower()
return name.lower()
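A quick standalone check of the normalization behavior (names without any dot gain a `.dll` suffix; everything is lowercased):

```python
def normalize_module_name(name: str) -> str:
    # IDA reports PE import modules without extension (e.g. "KERNEL32");
    # add ".dll" back when no extension is present, and lowercase for stability
    if "." not in name:
        return f"{name}.dll".lower()
    return name.lower()

print(normalize_module_name("KERNEL32"))    # kernel32.dll
print(normalize_module_name("msvcrt.dll"))  # msvcrt.dll
```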
def _collect_libraries(db: Database) -> list[MapaLibrary]:
libraries: list[MapaLibrary] = []
for module in db.imports.get_all_modules():
libraries.append(MapaLibrary(name=_normalize_module_name(module.name)))
return libraries
def _build_import_index(db: Database) -> dict[int, tuple[str, str]]:
"""Build address -> (module, function_name) for all imports."""
imports: dict[int, tuple[str, str]] = {}
for imp in db.imports.get_all_imports():
name = imp.name or f"ord{imp.ordinal}"
imports[int(imp.address)] = (_normalize_module_name(imp.module_name), name)
return imports
def _build_extern_index(db: Database) -> set[int]:
"""Collect addresses in XTRN segments."""
externs: set[int] = set()
for seg in db.segments.get_all():
seg_class = db.segments.get_class(seg)
if seg_class and seg_class.upper() == "XTRN":
for func in db.functions.get_between(int(seg.start_ea), int(seg.end_ea)):
externs.add(int(func.start_ea))
return externs
def _resolve_thunk_target(
db: Database,
ea: int,
import_index: dict[int, tuple[str, str]],
extern_addrs: set[int],
) -> int | None:
"""Follow thunk chains up to THUNK_CHAIN_DEPTH_DELTA hops.
Returns the final resolved address, or None if resolution fails.
"""
current = ea
for _ in range(THUNK_CHAIN_DEPTH_DELTA):
code_refs = list(db.xrefs.code_refs_from_ea(current, flow=False))
if len(code_refs) == 1:
target = int(code_refs[0])
if target in import_index or target in extern_addrs:
return target
target_func = db.functions.get_at(target)
if target_func:
flags = db.functions.get_flags(target_func)
if flags and FunctionFlags.THUNK in flags:
current = target
continue
return target
data_refs = list(db.xrefs.data_refs_from_ea(current))
if len(data_refs) == 1:
target = int(data_refs[0])
if target in import_index or target in extern_addrs:
return target
target_func = db.functions.get_at(target)
if target_func:
flags = db.functions.get_flags(target_func)
if flags and FunctionFlags.THUNK in flags:
current = target
continue
return target
break
return None
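Stripped of the IDA-specific queries, the chain walk reduces to following a single outgoing reference per hop, up to a fixed depth, until a known import address is reached. A simplified model (one reference per thunk, no function-flag checks, hypothetical addresses):

```python
from __future__ import annotations

THUNK_CHAIN_DEPTH_DELTA = 5

def resolve_thunk_target_toy(
    refs: dict[int, int], import_addrs: set[int], ea: int
) -> int | None:
    # follow the single reference out of each thunk until an import is hit
    current = ea
    for _ in range(THUNK_CHAIN_DEPTH_DELTA):
        if current not in refs:
            return None
        target = refs[current]
        if target in import_addrs:
            return target
        current = target
    return None

# thunk at 0x1000 jumps to 0x1010, which jumps to the IAT entry at 0x7000
refs = {0x1000: 0x1010, 0x1010: 0x7000}
print(hex(resolve_thunk_target_toy(refs, {0x7000}, 0x1000)))  # 0x7000
```

Chains longer than `THUNK_CHAIN_DEPTH_DELTA` hops resolve to `None`, matching the real function's bounded walk.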
def _find_string_at(db: Database, ea: int) -> str | None:
"""Read bytes at the given address and check for ASCII or UTF-16 LE string."""
try:
buf = db.bytes.get_bytes_at(ea, MAX_STRING_READ)
except Exception:
return None
if not buf:
return None
result = extract_ascii_from_buf(buf)
if result is not None:
return result
return extract_utf16le_from_buf(buf)
def _find_data_reference_string(
db: Database, insn_ea: int, max_depth: int = 10
) -> tuple[int, str] | None:
"""Follow single data-reference chains from an instruction to find a string."""
current = insn_ea
for _ in range(max_depth):
try:
data_refs = list(db.xrefs.data_refs_from_ea(current))
except Exception:
break
if len(data_refs) != 1:
break
target = int(data_refs[0])
if not db.is_valid_ea(target):
break
result = _find_string_at(db, target)
if result is not None:
return target, result
current = target
return None
def _merge_string_metadata(
tags: tuple[str, ...],
tag_matches: tuple,
new_tags: tuple[str, ...],
new_tag_matches: tuple,
) -> tuple[tuple[str, ...], tuple]:
merged_tags = tuple(sorted(set(tags) | set(new_tags)))
seen_match_keys = {match.sort_key for match in tag_matches}
unique_new = tuple(
match for match in new_tag_matches if match.sort_key not in seen_match_keys
)
return merged_tags, tag_matches + unique_new
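The merge keeps the tag set sorted and deduplicates matches by `sort_key`. `TagMatch` below is a hypothetical stand-in for the real match type, which only needs a `sort_key` attribute here:

```python
from typing import NamedTuple

class TagMatch(NamedTuple):
    # hypothetical stand-in: the merge relies only on .sort_key
    sort_key: tuple
    tag: str

def merge_string_metadata(tags, tag_matches, new_tags, new_tag_matches):
    merged_tags = tuple(sorted(set(tags) | set(new_tags)))
    seen = {match.sort_key for match in tag_matches}
    unique_new = tuple(m for m in new_tag_matches if m.sort_key not in seen)
    return merged_tags, tag_matches + unique_new

a = TagMatch((0, "zlib"), "#zlib")
dup = TagMatch((0, "zlib"), "#zlib")     # same sort_key: dropped on merge
b = TagMatch((1, "winapi"), "#winapi")
tags, matches = merge_string_metadata(("#zlib",), (a,), ("#winapi",), (dup, b))
print(tags)          # ('#winapi', '#zlib')
print(len(matches))  # 2
```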
def collect_report(
db: Database,
md5: str = "",
sha256: str = "",
matches_by_function: dict[int, set[str]] | None = None,
assemblage_records_by_address: dict[int, list[AssemblageRecord]] | None = None,
tagger: StringTagger | None = None,
) -> MapaReport:
"""Collect a complete mapa report from an open IDA database."""
if matches_by_function is None:
matches_by_function = {}
if assemblage_records_by_address is None:
assemblage_records_by_address = {}
if tagger is None:
tagger = load_default_tagger()
meta = _collect_meta(db, md5, sha256)
sections = _collect_sections(db)
libraries = _collect_libraries(db)
import_index = _build_import_index(db)
extern_addrs = _build_extern_index(db)
all_functions: list[tuple[int, object, bool, bool]] = []
for func in db.functions:
ea = int(func.start_ea)
flags = db.functions.get_flags(func)
is_thunk = flags is not None and FunctionFlags.THUNK in flags
is_lib = flags is not None and FunctionFlags.LIB in flags
all_functions.append((ea, func, is_thunk, is_lib))
all_functions.sort(key=lambda x: x[0])
func_address_to_order: dict[int, int] = {}
for i, (ea, _, _, _) in enumerate(all_functions):
func_address_to_order[ea] = i
thunk_targets: dict[int, int] = {}
for ea, func, is_thunk, _ in all_functions:
if is_thunk:
target = _resolve_thunk_target(db, ea, import_index, extern_addrs)
if target is not None:
thunk_targets[ea] = target
resolved_callers: dict[int, set[int]] = {}
resolved_callees: dict[int, list[tuple[int, bool]]] = {}
for ea, func, is_thunk, is_lib in all_functions:
if is_thunk or ea in import_index or ea in extern_addrs:
continue
fc = db.functions.get_flowchart(
func, flags=FlowChartFlags.NOEXT | FlowChartFlags.PREDS
)
if fc is None:
continue
seen_callees: set[int] = set()
callees: list[tuple[int, bool]] = []
for block in fc:
insns = block.get_instructions()
if insns is None:
continue
for insn in insns:
if not db.instructions.is_call_instruction(insn):
# also check for jumps to imports (thunk pattern)
mnem = db.instructions.get_mnemonic(insn)
if mnem and mnem.lower().startswith("jmp"):
call_targets = list(
db.xrefs.code_refs_from_ea(int(insn.ea), flow=False)
)
else:
continue
else:
call_targets = list(db.xrefs.calls_from_ea(int(insn.ea)))
if not call_targets:
call_targets = list(
db.xrefs.code_refs_from_ea(int(insn.ea), flow=False)
)
for target_ea in call_targets:
target_ea = int(target_ea)
resolved_target = target_ea
if target_ea in thunk_targets:
resolved_target = thunk_targets[target_ea]
if resolved_target in seen_callees:
continue
seen_callees.add(resolved_target)
is_api = (
resolved_target in import_index
or resolved_target in extern_addrs
)
callees.append((resolved_target, is_api))
if resolved_target not in resolved_callers:
resolved_callers[resolved_target] = set()
resolved_callers[resolved_target].add(ea)
resolved_callees[ea] = callees
mapa_functions: list[MapaFunction] = []
program_strings_by_address: dict[int, MapaProgramString] = {}
for ea, func, is_thunk, is_lib in all_functions:
if ea in import_index or ea in extern_addrs:
continue
name = db.functions.get_name(func) or f"sub_{ea:x}"
order = func_address_to_order[ea]
mf = MapaFunction(
address=ea,
name=name,
is_thunk=is_thunk,
is_library=is_lib,
assemblage_records=list(assemblage_records_by_address.get(ea, [])),
)
if is_thunk:
mapa_functions.append(mf)
continue
fc = db.functions.get_flowchart(
func, flags=FlowChartFlags.NOEXT | FlowChartFlags.PREDS
)
if fc is not None:
num_blocks = 0
num_edges = 0
num_insns = 0
total_bytes = 0
for block in fc:
num_blocks += 1
num_edges += block.count_successors()
insns = block.get_instructions()
if insns is None:
continue
for insn in insns:
num_insns += 1
insn_size = db.heads.size(int(insn.ea))
total_bytes += insn_size
mf.num_basic_blocks = num_blocks
mf.num_edges = num_edges
mf.num_instructions = num_insns
mf.total_instruction_bytes = total_bytes
for caller_ea in sorted(resolved_callers.get(ea, set())):
if caller_ea not in func_address_to_order:
continue
caller_order = func_address_to_order[caller_ea]
delta = caller_order - order
direction = "↑" if delta < 0 else "↓"
caller_func = db.functions.get_at(caller_ea)
caller_name = (
db.functions.get_name(caller_func)
if caller_func
else f"sub_{caller_ea:x}"
)
mf.callers.append(
MapaCaller(
name=caller_name or f"sub_{caller_ea:x}",
address=caller_ea,
delta=delta,
direction=direction,
)
)
for target_ea, is_api in resolved_callees.get(ea, []):
if is_api:
if target_ea in import_index:
module_name, func_name = import_index[target_ea]
api_name = f"{module_name}!{func_name}"
else:
target_func = db.functions.get_at(target_ea)
api_name = (
db.functions.get_name(target_func)
if target_func
else f"sub_{target_ea:x}"
)
api_name = api_name or f"sub_{target_ea:x}"
mf.apis.append(
MapaCall(
name=api_name,
address=target_ea,
is_api=True,
)
)
else:
if target_ea not in func_address_to_order:
continue
target_order = func_address_to_order[target_ea]
delta = target_order - order
direction = "↑" if delta < 0 else "↓"
target_func = db.functions.get_at(target_ea)
target_name = (
db.functions.get_name(target_func)
if target_func
else f"sub_{target_ea:x}"
)
mf.calls.append(
MapaCall(
name=target_name or f"sub_{target_ea:x}",
address=target_ea,
is_api=False,
delta=delta,
direction=direction,
)
)
if fc is not None:
seen_strings: dict[str, MapaString] = {}
# re-fetch the flowchart: the first iterator was consumed by the stats pass above
fc2 = db.functions.get_flowchart(
func, flags=FlowChartFlags.NOEXT | FlowChartFlags.PREDS
)
if fc2 is not None:
for block in fc2:
insns = block.get_instructions()
if insns is None:
continue
for insn in insns:
string_result = _find_data_reference_string(db, int(insn.ea))
if string_result is None:
continue
string_ea, raw = string_result
tag_result = tagger.tag_string(raw)
display = raw.rstrip()
if not display:
continue
if display in seen_strings:
existing = seen_strings[display]
existing.tags, existing.tag_matches = _merge_string_metadata(
existing.tags,
existing.tag_matches,
tag_result.tags,
tag_result.matches,
)
existing.address = min(existing.address, string_ea)
else:
ms = MapaString(
value=display,
address=string_ea,
tags=tuple(sorted(set(tag_result.tags))),
tag_matches=tag_result.matches,
)
seen_strings[display] = ms
mf.strings.append(ms)
if string_ea in program_strings_by_address:
existing_program_string = program_strings_by_address[string_ea]
existing_program_string.tags, existing_program_string.tag_matches = _merge_string_metadata(
existing_program_string.tags,
existing_program_string.tag_matches,
tag_result.tags,
tag_result.matches,
)
existing_program_string.function_addresses = tuple(
sorted(
set(existing_program_string.function_addresses)
| {ea}
)
)
else:
program_strings_by_address[string_ea] = MapaProgramString(
value=display,
address=string_ea,
tags=tuple(sorted(set(tag_result.tags))),
tag_matches=tag_result.matches,
function_addresses=(ea,),
)
mf.capa_matches = sorted(matches_by_function.get(ea, set()))
mapa_functions.append(mf)
return MapaReport(
meta=meta,
sections=sections,
libraries=libraries,
functions=mapa_functions,
program_strings=sorted(
program_strings_by_address.values(),
key=lambda string: string.address,
),
)

mapa/html_renderer.py Normal file

@@ -0,0 +1,553 @@
from __future__ import annotations
import json
from html import escape
from mapa.model import MapaProgramString, MapaReport
from mapa.renderer import _visible_tags, render_function_summary_text
def _to_json(value: object) -> str:
return json.dumps(value, separators=(",", ":")).replace("</", "<\\/")
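Escaping `</` matters because the JSON is inlined into a `<script type="application/json">` element, where a literal `</script>` inside a string value would terminate the element early. Since `\/` is a legal JSON escape for `/`, parsing the payload is unaffected:

```python
import json

def to_json_for_script(value: object) -> str:
    # "</" becomes "<\/" so embedded strings cannot close the <script>
    # element; "\/" is a valid JSON escape, so round-tripping is lossless
    return json.dumps(value, separators=(",", ":")).replace("</", "<\\/")

payload = to_json_for_script({"s": "</script><script>alert(1)</script>"})
assert "</script" not in payload
print(payload)
```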
def _collect_tag_entries(report: MapaReport) -> list[tuple[str, list[int]]]:
function_index_by_address = {
function.address: index for index, function in enumerate(report.functions)
}
tag_to_functions: dict[str, set[int]] = {}
for program_string in report.program_strings:
visible_tags = _visible_tags(program_string.tags)
if not visible_tags:
continue
function_indices = {
function_index_by_address[address]
for address in program_string.function_addresses
if address in function_index_by_address
}
if not function_indices:
continue
for tag in visible_tags:
tag_to_functions.setdefault(tag, set()).update(function_indices)
return sorted(
(
(tag, sorted(function_indices))
for tag, function_indices in tag_to_functions.items()
),
key=lambda item: (-len(item[1]), item[0]),
)
def _collect_function_graph(
report: MapaReport, function_index_by_address: dict[int, int]
) -> tuple[list[list[int]], list[list[int]]]:
callers_by_index: list[list[int]] = []
callees_by_index: list[list[int]] = []
for function in report.functions:
caller_indices = sorted(
{
function_index_by_address[caller.address]
for caller in function.callers
if caller.address in function_index_by_address
}
)
callee_indices = sorted(
{
function_index_by_address[call.address]
for call in function.calls
if call.address in function_index_by_address
}
)
callers_by_index.append(caller_indices)
callees_by_index.append(callee_indices)
return callers_by_index, callees_by_index
def _render_string_row(program_string: MapaProgramString, index: int) -> str:
visible_tags = _visible_tags(program_string.tags)
tag_text = " ".join(visible_tags)
tag_span = ""
if tag_text:
tag_span = f'<span class="string-tags">{escape(tag_text)}</span>'
return (
f'<button type="button" class="string-row" data-string-index="{index}" '
f'data-string-address="{escape(hex(program_string.address), quote=True)}" '
f'data-string-value="{escape(program_string.value, quote=True)}" '
f'data-string-tags="{escape(tag_text, quote=True)}">'
f'<span class="string-address">{escape(hex(program_string.address))}</span>'
f'<span class="string-value">{escape(program_string.value)}</span>'
f"{tag_span}"
"</button>"
)
def render_html_map(report: MapaReport) -> str:
tag_entries = _collect_tag_entries(report)
function_index_by_address = {
function.address: index for index, function in enumerate(report.functions)
}
callers_by_index, callees_by_index = _collect_function_graph(
report, function_index_by_address
)
program_strings = sorted(report.program_strings, key=lambda string: string.address)
data = {
"functions": [
{
"name": function.name,
"address": hex(function.address),
"label": f"{function.name} @ {hex(function.address)}",
"summary": render_function_summary_text(function),
}
for function in report.functions
],
"callersByIndex": callers_by_index,
"calleesByIndex": callees_by_index,
"tags": {tag: function_indices for tag, function_indices in tag_entries},
"strings": [
{
"address": hex(program_string.address),
"value": program_string.value,
"functionIndices": [
function_index_by_address[address]
for address in program_string.function_addresses
if address in function_index_by_address
],
}
for program_string in program_strings
],
}
style = """
*,*::before,*::after{box-sizing:border-box;margin:0;padding:0}
html,body{height:100%}
:root{--bg:#fff;--fg:#111;--muted:#666;--line:#cfcfcf;--fill:#d9d9d9;--heat:#2563eb;--seed:#f59e0b;--square:10px}
body{height:100vh;overflow:hidden;background:var(--bg);color:var(--fg);font:13px/1.4 ui-monospace,SFMono-Regular,Menlo,Consolas,monospace;padding:16px}
body.is-resizing{cursor:col-resize}
body.is-resizing *{user-select:none}
main{height:100%;min-height:0;display:flex;flex-direction:column;gap:16px}
section{display:flex;flex-direction:column;gap:8px}
.meta,.legend,.pane-header{color:var(--muted)}
.controls{display:flex;flex-wrap:wrap;gap:6px}
.control-row{display:flex;flex-wrap:wrap;align-items:center;gap:8px}
.control-label{color:var(--muted)}
.control{border:1px solid var(--line);background:transparent;color:inherit;padding:2px 6px;font:inherit;cursor:pointer}
.control.is-active{border-color:var(--heat);color:var(--heat)}
.control-count{color:var(--muted)}
.split-view{flex:1;min-height:0;display:flex;align-items:stretch}
.pane{min-height:0;display:flex;flex-direction:column;overflow:auto;border:1px solid var(--line);background:var(--bg)}
.pane-functions{flex:0 0 50%;min-width:18rem}
.pane-strings{flex:1 1 auto;min-width:18rem}
.pane-header{position:sticky;top:0;z-index:1;background:var(--bg);border-bottom:1px solid var(--line);padding:8px 10px}
.pane-body{display:flex;flex-direction:column;gap:8px;padding:8px 10px}
.splitter{position:relative;flex:0 0 12px;cursor:col-resize;touch-action:none}
.splitter::before{content:'';position:absolute;top:0;bottom:0;left:50%;width:1px;background:var(--line);transform:translateX(-50%)}
.splitter::after{content:'';position:absolute;top:50%;left:50%;width:3px;height:40px;border-left:1px solid var(--line);border-right:1px solid var(--line);transform:translate(-50%,-50%)}
.function-grid{display:flex;flex-wrap:wrap;gap:1px;align-content:flex-start}
.function-box{position:relative;z-index:0;width:var(--square);height:var(--square);border:1px solid var(--line);background:var(--fill);overflow:visible}
.function-box::before{content:'';position:absolute;inset:0;background:var(--heat);opacity:var(--heat-opacity,0);pointer-events:none}
.function-box::after{content:'';position:absolute;inset:-2px;border:1px solid var(--seed);opacity:0;pointer-events:none}
.function-box.is-seed::after{opacity:1}
.function-box.is-dim{opacity:.28}
.string-list{display:flex;flex-direction:column;gap:2px}
.string-row{display:flex;align-items:flex-start;gap:8px;width:100%;border:1px solid transparent;background:transparent;color:inherit;padding:3px 4px;font:inherit;text-align:left;cursor:pointer}
.string-row:hover,.string-row.is-active{border-color:var(--line)}
.string-address{color:var(--muted);white-space:nowrap;flex:0 0 auto}
.string-value{min-width:0;flex:1 1 auto;white-space:pre-wrap;word-break:break-word}
.string-tags{margin-left:auto;flex:0 0 auto;padding-left:8px;color:var(--muted);white-space:nowrap}
.tooltip{position:fixed;z-index:10;display:none;width:min(42rem,calc(100vw - 24px));max-height:calc(100vh - 24px);overflow:auto;border:1px solid var(--line);background:#fff;padding:8px;pointer-events:none;white-space:pre-wrap;box-shadow:0 2px 8px rgba(0,0,0,.08)}
.tooltip.is-visible{display:block}
h1,h2{font-size:inherit}
""".strip()
script = """
const data=JSON.parse(document.getElementById('mapa-data').textContent);
const splitView=document.getElementById('split-view');
const functionsPane=document.getElementById('functions-pane');
const splitter=document.getElementById('splitter');
const functionBoxes=[...document.querySelectorAll('.function-box')];
const tagControls=[...document.querySelectorAll('.tag-control')];
const stringRows=[...document.querySelectorAll('.string-row')];
const directionControls=[...document.querySelectorAll('.direction-control')];
const depthControls=[...document.querySelectorAll('.depth-control')];
const neighborhoodStatus=document.getElementById('neighborhood-status');
const tooltip=document.getElementById('tooltip');
const bothByIndex=data.callersByIndex.map((callers,index)=>[...new Set([...callers,...data.calleesByIndex[index]])]);
const functionCount=data.functions.length;
let hoveredFunction=null;
let lockedFunction=null;
let hoveredTag=null;
let lockedTag=null;
let hoveredString=null;
let lockedString=null;
let directionMode='both';
let maxDepth=3;
let activePointerId=null;
let tooltipFunctionIndex=null;
let currentNeighborhood=null;
const getDecayScore=(distance)=>0.5**distance;
const hasLockedSeed=()=>lockedFunction!==null||lockedTag!==null||lockedString!==null;
const getUniqueSeedIndices=(seedIndices)=>[...new Set(seedIndices)];
const clearHoveredSeeds=()=>{
hoveredFunction=null;
hoveredTag=null;
hoveredString=null;
};
const clearLockedSeeds=()=>{
lockedFunction=null;
lockedTag=null;
lockedString=null;
};
const buildSeedSource=(kind,key,label,seedIndices)=>{
const uniqueSeedIndices=getUniqueSeedIndices(seedIndices);
if(uniqueSeedIndices.length===0){return null;}
return {kind,key,label,seedIndices:uniqueSeedIndices};
};
const getFunctionSeedSource=(index)=>buildSeedSource('function',String(index),data.functions[index].label,[index]);
const getTagSeedSource=(tag)=>buildSeedSource('tag',tag,tag,data.tags[tag]||[]);
const getStringSeedSource=(index)=>{
const stringData=data.strings[index];
return buildSeedSource('string',String(index),`${stringData.value} @ ${stringData.address}`,stringData.functionIndices);
};
const getLockedSeedSource=()=>{
if(lockedFunction!==null){return getFunctionSeedSource(lockedFunction);}
if(lockedTag!==null){return getTagSeedSource(lockedTag);}
if(lockedString!==null){return getStringSeedSource(lockedString);}
return null;
};
const getHoveredSeedSource=()=>{
if(hoveredFunction!==null){return getFunctionSeedSource(hoveredFunction);}
if(hoveredTag!==null){return getTagSeedSource(hoveredTag);}
if(hoveredString!==null){return getStringSeedSource(hoveredString);}
return null;
};
const getActiveSeedSource=()=>getLockedSeedSource()??getHoveredSeedSource();
const getAdjacency=()=>{
if(directionMode==='callers'){return data.callersByIndex;}
if(directionMode==='callees'){return data.calleesByIndex;}
return bothByIndex;
};
const computeNeighborhoodState=(seedIndices)=>{
const adjacency=getAdjacency();
const uniqueSeedIndices=getUniqueSeedIndices(seedIndices);
const scores=new Float32Array(functionCount);
const bestDistances=new Array(functionCount).fill(null);
for(const seedIndex of uniqueSeedIndices){
const distances=new Array(functionCount).fill(-1);
const queue=[seedIndex];
distances[seedIndex]=0;
for(let queueIndex=0;queueIndex<queue.length;queueIndex++){
const functionIndex=queue[queueIndex];
const distance=distances[functionIndex];
if(distance===maxDepth){continue;}
for(const neighborIndex of adjacency[functionIndex]){
if(distances[neighborIndex]!==-1){continue;}
distances[neighborIndex]=distance+1;
queue.push(neighborIndex);
}
}
distances.forEach((distance,functionIndex)=>{
if(distance===-1){return;}
scores[functionIndex]+=getDecayScore(distance);
const bestDistance=bestDistances[functionIndex];
if(bestDistance===null||distance<bestDistance){bestDistances[functionIndex]=distance;}
});
}
let maxScore=0;
scores.forEach((score)=>{
if(score>maxScore){maxScore=score;}
});
return {
scores:Array.from(scores),
bestDistances,
seedIndices:uniqueSeedIndices,
seedSet:new Set(uniqueSeedIndices),
maxScore,
};
};
const renderTooltipText=(index)=>{
const summary=data.functions[index].summary;
if(currentNeighborhood===null){return summary;}
const lines=[`heat: ${currentNeighborhood.scores[index].toFixed(2)}`,`seed: ${currentNeighborhood.seedSet.has(index)?'yes':'no'}`];
const distance=currentNeighborhood.bestDistances[index];
if(distance!==null){lines.push(`distance: ${distance}`);}
return `${lines.join('\\n')}\\n\\n${summary}`;
};
const updateTooltip=()=>{
if(tooltipFunctionIndex===null){return;}
tooltip.textContent=renderTooltipText(tooltipFunctionIndex);
};
const updateStatus=()=>{
const activeSeedSource=getActiveSeedSource();
if(activeSeedSource===null){
neighborhoodStatus.textContent='hover or click a function, tag, or string';
return;
}
const seedCount=currentNeighborhood===null?0:currentNeighborhood.seedIndices.length;
neighborhoodStatus.textContent=`${activeSeedSource.kind} ${activeSeedSource.label} · direction ${directionMode} · depth ${maxDepth} · ${seedCount} seed${seedCount===1?'':'s'}`;
};
const updateView=()=>{
const activeSeedSource=getActiveSeedSource();
currentNeighborhood=activeSeedSource===null?null:computeNeighborhoodState(activeSeedSource.seedIndices);
const hasActive=currentNeighborhood!==null;
functionBoxes.forEach((box,index)=>{
const score=currentNeighborhood===null?0:currentNeighborhood.scores[index];
const heatOpacity=currentNeighborhood===null||currentNeighborhood.maxScore===0?0:score/currentNeighborhood.maxScore;
box.style.setProperty('--heat-opacity',heatOpacity.toFixed(3));
box.classList.toggle('is-seed',currentNeighborhood!==null&&currentNeighborhood.seedSet.has(index));
box.classList.toggle('is-dim',hasActive&&score===0);
});
tagControls.forEach((control)=>{
control.classList.toggle('is-active',activeSeedSource!==null&&activeSeedSource.kind==='tag'&&control.dataset.tag===activeSeedSource.key);
});
stringRows.forEach((row)=>{
row.classList.toggle('is-active',activeSeedSource!==null&&activeSeedSource.kind==='string'&&row.dataset.stringIndex===activeSeedSource.key);
});
directionControls.forEach((control)=>{
control.classList.toggle('is-active',control.dataset.direction===directionMode);
});
depthControls.forEach((control)=>{
control.classList.toggle('is-active',Number(control.dataset.depth)===maxDepth);
});
updateStatus();
updateTooltip();
};
const placeTooltip=(event)=>{
const offset=12;
let left=event.clientX+offset;
let top=event.clientY+offset;
const rect=tooltip.getBoundingClientRect();
if(left+rect.width>window.innerWidth-8){left=Math.max(8,window.innerWidth-rect.width-8);}
if(top+rect.height>window.innerHeight-8){top=Math.max(8,window.innerHeight-rect.height-8);}
tooltip.style.left=`${left}px`;
tooltip.style.top=`${top}px`;
};
const getPaneMinWidth=()=>parseFloat(getComputedStyle(document.documentElement).fontSize)*18;
const resizePanes=(clientX)=>{
const rect=splitView.getBoundingClientRect();
const splitterWidth=splitter.getBoundingClientRect().width;
const paneMinWidth=getPaneMinWidth();
const minLeft=rect.left+paneMinWidth;
const maxLeft=rect.right-paneMinWidth-splitterWidth;
if(maxLeft<=minLeft){functionsPane.style.flexBasis='50%';return;}
const clampedLeft=Math.min(maxLeft,Math.max(minLeft,clientX));
functionsPane.style.flexBasis=`${clampedLeft-rect.left}px`;
};
const clampPaneSize=()=>{
const basis=parseFloat(functionsPane.style.flexBasis);
if(Number.isFinite(basis)){resizePanes(splitView.getBoundingClientRect().left+basis);}
};
const stopResizing=(event)=>{
if(activePointerId===null||event.pointerId!==activePointerId){return;}
if(splitter.hasPointerCapture(event.pointerId)){splitter.releasePointerCapture(event.pointerId);}
activePointerId=null;
document.body.classList.remove('is-resizing');
};
functionBoxes.forEach((box,index)=>{
box.addEventListener('mouseenter',(event)=>{
tooltipFunctionIndex=index;
tooltip.classList.add('is-visible');
if(!hasLockedSeed()){hoveredFunction=index;}
updateView();
placeTooltip(event);
});
box.addEventListener('mousemove',placeTooltip);
box.addEventListener('mouseleave',()=>{
tooltip.classList.remove('is-visible');
tooltipFunctionIndex=null;
if(!hasLockedSeed()&&hoveredFunction===index){
hoveredFunction=null;
updateView();
}
});
box.addEventListener('click',()=>{
const functionIndex=Number(box.dataset.functionIndex);
if(lockedFunction===functionIndex){
lockedFunction=null;
}else{
clearLockedSeeds();
lockedFunction=functionIndex;
}
clearHoveredSeeds();
updateView();
});
});
tagControls.forEach((control)=>{
control.addEventListener('mouseenter',()=>{
if(hasLockedSeed()){return;}
hoveredTag=control.dataset.tag;
updateView();
});
control.addEventListener('mouseleave',()=>{
if(hasLockedSeed()||hoveredTag!==control.dataset.tag){return;}
hoveredTag=null;
updateView();
});
control.addEventListener('click',()=>{
const tag=control.dataset.tag;
if(lockedTag===tag){
lockedTag=null;
}else{
clearLockedSeeds();
lockedTag=tag;
}
clearHoveredSeeds();
updateView();
});
});
stringRows.forEach((row)=>{
row.addEventListener('mouseenter',()=>{
if(hasLockedSeed()){return;}
hoveredString=Number(row.dataset.stringIndex);
updateView();
});
row.addEventListener('mouseleave',()=>{
if(hasLockedSeed()||hoveredString!==Number(row.dataset.stringIndex)){return;}
hoveredString=null;
updateView();
});
row.addEventListener('click',()=>{
const stringIndex=Number(row.dataset.stringIndex);
if(lockedString===stringIndex){
lockedString=null;
}else{
clearLockedSeeds();
lockedString=stringIndex;
}
clearHoveredSeeds();
updateView();
});
});
directionControls.forEach((control)=>{
control.addEventListener('click',()=>{
directionMode=control.dataset.direction;
updateView();
});
});
depthControls.forEach((control)=>{
control.addEventListener('click',()=>{
maxDepth=Number(control.dataset.depth);
updateView();
});
});
splitter.addEventListener('pointerdown',(event)=>{
activePointerId=event.pointerId;
splitter.setPointerCapture(event.pointerId);
document.body.classList.add('is-resizing');
resizePanes(event.clientX);
event.preventDefault();
});
splitter.addEventListener('pointermove',(event)=>{if(activePointerId===event.pointerId){resizePanes(event.clientX);}});
splitter.addEventListener('pointerup',stopResizing);
splitter.addEventListener('pointercancel',stopResizing);
window.addEventListener('resize',clampPaneSize);
updateView();
""".strip()
parts: list[str] = [
"<!doctype html>",
'<html lang="en">',
"<head>",
'<meta charset="utf-8">',
f"<title>{escape(report.meta.name)} - mapa html map</title>",
"<style>",
style,
"</style>",
"</head>",
"<body>",
"<main>",
"<section>",
f"<div>{escape(report.meta.name)}</div>",
(
f'<div class="meta">sha256 {escape(report.meta.sha256)} · arch {escape(report.meta.arch)}'
f" · functions {len(report.functions)} · strings {len(program_strings)} · tags {len(tag_entries)}</div>"
),
"<h1>tags</h1>",
'<div class="controls" id="tag-controls">',
]
for tag, function_indices in tag_entries:
parts.append(
(
f'<button type="button" class="control tag-control" data-tag="{escape(tag, quote=True)}" '
f'data-count="{len(function_indices)}">{escape(tag)} '
f'<span class="control-count">({len(function_indices)})</span></button>'
)
)
parts.extend(
[
"</div>",
'<div class="control-row">',
'<span class="control-label">direction</span>',
'<div class="controls" id="direction-controls">',
'<button type="button" class="control direction-control" data-direction="callers">callers</button>',
'<button type="button" class="control direction-control" data-direction="callees">callees</button>',
'<button type="button" class="control direction-control is-active" data-direction="both">both</button>',
"</div>",
'<span class="control-label">depth</span>',
'<div class="controls" id="depth-controls">',
'<button type="button" class="control depth-control" data-depth="1">1</button>',
'<button type="button" class="control depth-control" data-depth="2">2</button>',
'<button type="button" class="control depth-control is-active" data-depth="3">3</button>',
'<button type="button" class="control depth-control" data-depth="4">4</button>',
"</div>",
"</div>",
'<div class="meta" id="neighborhood-status"></div>',
'<div class="legend">fill = neighborhood heat · outline = seed · dim = outside neighborhood</div>',
"</section>",
'<div class="split-view" id="split-view">',
'<section class="pane pane-functions" id="functions-pane">',
f'<div class="pane-header">functions ({len(report.functions)})</div>',
'<div class="pane-body">',
'<div class="function-grid" id="function-grid">',
]
)
for index, function in enumerate(report.functions):
parts.append(
(
f'<div class="function-box" data-function-index="{index}" '
f'data-function-address="{escape(hex(function.address), quote=True)}" '
f'aria-label="{escape(function.name, quote=True)}"></div>'
)
)
parts.extend(
[
"</div>",
"</div>",
"</section>",
'<div class="splitter" id="splitter" role="separator" aria-orientation="vertical" aria-label="resize panes"></div>',
'<section class="pane pane-strings" id="strings-pane">',
f'<div class="pane-header">strings ({len(program_strings)})</div>',
'<div class="pane-body">',
'<div class="string-list" id="string-list">',
]
)
for index, program_string in enumerate(program_strings):
parts.append(_render_string_row(program_string, index))
parts.extend(
[
"</div>",
"</div>",
"</section>",
"</div>",
"</main>",
'<div class="tooltip" id="tooltip"></div>',
f'<script type="application/json" id="mapa-data">{_to_json(data)}</script>',
"<script>",
script,
"</script>",
"</body>",
"</html>",
]
)
return "\n".join(parts)

mapa/ida_db.py (new file, 165 lines)

@@ -0,0 +1,165 @@
from __future__ import annotations
import contextlib
import fcntl
import hashlib
import logging
import os
import time
from pathlib import Path
from typing import Iterator
import idapro # must be first: mutates sys.path so ida_auto and ida_domain are importable
import ida_auto
from ida_domain.database import Database, IdaCommandOptions
logger = logging.getLogger(__name__)
DATABASE_ACCESS_TIMEOUT = 5.0
DATABASE_ANALYSIS_TIMEOUT = 120.0
DATABASE_POLL_INTERVAL = 0.25
def get_cache_dir() -> Path:
xdg = os.environ.get("XDG_CACHE_HOME")
if xdg:
base = Path(xdg)
else:
base = Path.home() / ".cache"
return base / "mandiant" / "mapa"
def compute_file_hashes(file_path: Path) -> tuple[str, str]:
"""Compute (md5, sha256) for a file.
Raises:
OSError: If the file cannot be read.
"""
md5_digest = hashlib.md5()
sha256_digest = hashlib.sha256()
with file_path.open("rb") as fh:
for chunk in iter(lambda: fh.read(65536), b""):
md5_digest.update(chunk)
sha256_digest.update(chunk)
return md5_digest.hexdigest(), sha256_digest.hexdigest()
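The chunked read above feeds both digests in a single pass over the file. A self-contained sketch of the same pattern, using an in-memory stream rather than a real file:

```python
import hashlib
import io

def hash_stream(fh, chunk_size: int = 65536) -> tuple[str, str]:
    # one pass over the stream, updating both digests per chunk
    md5_digest = hashlib.md5()
    sha256_digest = hashlib.sha256()
    for chunk in iter(lambda: fh.read(chunk_size), b""):
        md5_digest.update(chunk)
        sha256_digest.update(chunk)
    return md5_digest.hexdigest(), sha256_digest.hexdigest()

md5_hex, sha256_hex = hash_stream(io.BytesIO(b"hello"))
# md5_hex == "5d41402abc4b2a76b9719d911017c592"
```

Reading in 64 KiB chunks keeps memory bounded regardless of sample size, which matters when hashing large PE files before analysis.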
def _wait_for_repack(db_path: Path, timeout: float) -> None:
nam_path = db_path.with_suffix(".nam")
deadline = time.monotonic() + timeout
while nam_path.exists():
if time.monotonic() >= deadline:
raise RuntimeError(
f"Database {db_path} appears to be open in another program "
f"({nam_path} still exists after {timeout:.0f}s)."
)
time.sleep(DATABASE_POLL_INTERVAL)
@contextlib.contextmanager
def database_access_guard(db_path: Path, timeout: float) -> Iterator[None]:
"""Advisory guard that serialises access to an IDA database.
Uses .nam polling + flock on <db>.lock with TOCTOU re-check.
Raises:
RuntimeError: On timeout waiting for the database.
"""
_wait_for_repack(db_path, timeout)
lock_path = Path(str(db_path) + ".lock")
lock_fd = lock_path.open("w")
deadline = time.monotonic() + timeout
try:
while True:
try:
fcntl.flock(lock_fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
break
except OSError:
if time.monotonic() >= deadline:
raise RuntimeError(
f"Timed out waiting for lock on {db_path} after {timeout:.0f}s."
)
time.sleep(DATABASE_POLL_INTERVAL)
_wait_for_repack(db_path, max(0, deadline - time.monotonic()))
yield
finally:
fcntl.flock(lock_fd, fcntl.LOCK_UN)
lock_fd.close()
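The polling-flock pattern can be exercised on its own. This sketch (local names, POSIX-only since it uses `fcntl`) shows that a second acquisition from a different file descriptor times out while the first holds the lock:

```python
import contextlib
import fcntl
import time
from pathlib import Path

@contextlib.contextmanager
def flock_guard(lock_path: Path, timeout: float, poll: float = 0.05):
    # non-blocking flock retried until `timeout`, mirroring the guard above
    fd = lock_path.open("w")
    deadline = time.monotonic() + timeout
    try:
        while True:
            try:
                fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
                break
            except OSError:
                if time.monotonic() >= deadline:
                    raise RuntimeError(f"timed out waiting for {lock_path}")
                time.sleep(poll)
        yield
    finally:
        fcntl.flock(fd, fcntl.LOCK_UN)
        fd.close()
```

`flock` locks belong to the open file description, so even two descriptors within one process contend, which is what makes the guard effective against concurrent mapa invocations on the same cache entry.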
def resolve_database(file_path: Path) -> tuple[Path, str, str]:
"""Resolve an input path to an .i64/.idb database path.
Returns (db_path, md5, sha256). For existing databases, hashes are empty
strings (they'll be read from IDA metadata instead).
Raises:
RuntimeError: If analysis or caching fails.
"""
suffix = file_path.suffix.lower()
if suffix in {".i64", ".idb"}:
logger.debug("Using existing database: %s", file_path)
return file_path, "", ""
cache_dir = get_cache_dir()
cache_dir.mkdir(parents=True, exist_ok=True)
md5, sha256 = compute_file_hashes(file_path)
cache_path = cache_dir / f"{sha256}.i64"
if cache_path.exists():
logger.debug("Cache hit for %s -> %s", file_path, cache_path)
return cache_path, md5, sha256
logger.debug("Cache miss for %s; analyzing to %s", file_path, cache_path)
with database_access_guard(cache_path, timeout=DATABASE_ANALYSIS_TIMEOUT):
if cache_path.exists():
logger.debug("Cache populated while waiting for lock: %s", cache_path)
return cache_path, md5, sha256
logger.info("Analyzing %s (this may take a moment)...", file_path.name)
idapro.enable_console_messages(False)
ida_options = IdaCommandOptions(
auto_analysis=True,
new_database=True,
output_database=str(cache_path),
load_resources=True,
plugin_options="lumina:host=0.0.0.0 -Osecondary_lumina:host=0.0.0.0",
)
try:
with Database.open(str(file_path), ida_options, save_on_close=True):
ida_auto.auto_wait()
except Exception as exc:
raise RuntimeError(f"Analysis failed for {file_path}: {exc}") from exc
if not cache_path.exists():
raise RuntimeError(f"Analysis produced no database for {file_path}")
logger.debug("Analysis completed: %s", cache_path)
return cache_path, md5, sha256
@contextlib.contextmanager
def open_database_session(db_path: Path, auto_analysis: bool = False) -> Iterator[Database]:
"""Open a database session with advisory locking.
Raises:
RuntimeError: If opening fails or the database is locked.
"""
with database_access_guard(db_path, timeout=DATABASE_ACCESS_TIMEOUT):
ida_options = IdaCommandOptions(auto_analysis=auto_analysis, new_database=False)
logger.debug("Opening database session: %s (auto_analysis=%s)", db_path, auto_analysis)
idapro.enable_console_messages(False)
try:
database = Database.open(str(db_path), ida_options, save_on_close=False)
except Exception as exc:
raise RuntimeError(f"Failed to open {db_path}: {exc}") from exc
with database:
if auto_analysis:
ida_auto.auto_wait()
yield database
logger.debug("Closed database session: %s", db_path)

mapa/model.py (new file, 126 lines)

@@ -0,0 +1,126 @@
from __future__ import annotations
from collections.abc import Mapping
from dataclasses import dataclass, field
@dataclass(frozen=True)
class AssemblageRecord:
sha256: str
name: str
start_rva: int
end_rva: int
address: int
end_address: int
source_file: str
@property
def source_path(self) -> str:
if not self.source_file.endswith(")"):
return self.source_file
head, separator, _ = self.source_file.rpartition(" (")
if separator:
return head
return self.source_file
@classmethod
def from_csv_row(
cls, row: Mapping[str, str], base_address: int
) -> AssemblageRecord:
start_rva = int(row["start"], 0)
end_rva = int(row["end"], 0)
return cls(
sha256=row["hash"].strip().lower(),
name=row["name"].strip(),
start_rva=start_rva,
end_rva=end_rva,
address=base_address + start_rva,
end_address=base_address + end_rva,
source_file=row["source_file"].strip(),
)
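Note the base-0 `int` parsing: `from_csv_row` accepts both decimal and `0x`-prefixed offsets, so Assemblage CSVs may use either form for `start`/`end`:

```python
# int(s, 0) infers the base from the string's prefix
assert int("4096", 0) == 0x1000      # decimal
assert int("0x1000", 0) == 0x1000    # hex
assert int("0o10", 0) == 8           # octal also works, though unused here
```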
@dataclass
class MapaString:
value: str
address: int
tags: tuple[str, ...] = ()
tag_matches: tuple = ()
@dataclass
class MapaProgramString:
value: str
address: int
tags: tuple[str, ...] = ()
tag_matches: tuple = ()
function_addresses: tuple[int, ...] = ()
@dataclass
class MapaCall:
name: str
address: int
is_api: bool
delta: int = 0
direction: str = ""
@dataclass
class MapaCaller:
name: str
address: int
delta: int = 0
direction: str = ""
@dataclass
class MapaFunction:
address: int
name: str
is_thunk: bool = False
is_library: bool = False
num_basic_blocks: int = 0
num_edges: int = 0
num_instructions: int = 0
total_instruction_bytes: int = 0
callers: list[MapaCaller] = field(default_factory=list)
calls: list[MapaCall] = field(default_factory=list)
apis: list[MapaCall] = field(default_factory=list)
strings: list[MapaString] = field(default_factory=list)
capa_matches: list[str] = field(default_factory=list)
assemblage_records: list[AssemblageRecord] = field(default_factory=list)
@dataclass
class MapaSection:
address: int
size: int
perms: str
name: str = ""
@dataclass
class MapaLibrary:
name: str
is_static: bool = False
load_address: int | None = None
@dataclass
class MapaMeta:
name: str
sha256: str
md5: str = ""
arch: str = ""
timestamp: str = ""
base_address: int = 0
@dataclass
class MapaReport:
meta: MapaMeta
sections: list[MapaSection] = field(default_factory=list)
libraries: list[MapaLibrary] = field(default_factory=list)
functions: list[MapaFunction] = field(default_factory=list)
program_strings: list[MapaProgramString] = field(default_factory=list)

mapa/renderer.py (new file, 273 lines)

@@ -0,0 +1,273 @@
from __future__ import annotations
import contextlib
import rich.padding
from rich.console import Console
from rich.markup import escape
from rich.text import Text
from mapa.model import MapaFunction, MapaReport, MapaString
class Renderer:
def __init__(self, console: Console):
self.console: Console = console
self.indent: int = 0
@contextlib.contextmanager
def indenting(self):
self.indent += 1
try:
yield
finally:
self.indent -= 1
@staticmethod
def markup(s: str, **kwargs) -> Text:
escaped_args = {
k: (escape(v) if isinstance(v, str) else v) for k, v in kwargs.items()
}
return Text.from_markup(s.format(**escaped_args))
def print(self, renderable, **kwargs):
if not kwargs:
return self.console.print(
rich.padding.Padding(renderable, (0, 0, 0, self.indent * 2))
)
assert isinstance(renderable, str)
return self.print(self.markup(renderable, **kwargs))
def writeln(self, s: str):
self.print(s)
@contextlib.contextmanager
def section(self, name):
if isinstance(name, str):
self.print("[title]{name}", name=name)
elif isinstance(name, Text):
name = name.copy()
name.stylize_before(self.console.get_style("title"))
self.print(name)
else:
raise ValueError("unexpected section name")
with self.indenting():
yield
def _visible_tags(tags: tuple[str, ...]) -> list[str]:
tag_set = set(tags)
has_specific = any(t != "#common" for t in tag_set)
result = []
for t in tags:
if t == "#common" and has_specific:
continue
result.append(t)
return result
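Restated as a quick check, the visible-tag policy keeps `#common` only when it is the sole tag (names here are local to the sketch, not the module above):

```python
def visible_tags(tags: tuple[str, ...]) -> list[str]:
    # suppress #common whenever a more specific tag is present
    has_specific = any(t != "#common" for t in tags)
    return [t for t in tags if not (t == "#common" and has_specific)]

assert visible_tags(("#zlib", "#common")) == ["#zlib"]
assert visible_tags(("#common",)) == ["#common"]
assert visible_tags(("#winapi", "#capa")) == ["#winapi", "#capa"]
```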
def _render_string_line(o: Renderer, value: str, tags: list[str]) -> Text:
left = Text.from_markup(
'string: [decoration]"[/]{string}[decoration]"[/]'.format(
string=escape(value)
)
)
right = Text(" ".join(tags), style="dim")
available = o.console.size.width - (o.indent * 2)
min_gap = 1
right_len = right.cell_len
max_left = available - right_len - min_gap
if max_left < 12:
combined = left.copy()
combined.append(" ")
combined.append(right)
return combined
if left.cell_len > max_left:
left.truncate(max_left - 1, overflow="ellipsis")
padding = available - left.cell_len - right_len
combined = left.copy()
combined.append(" " * padding)
combined.append(right)
return combined
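The width arithmetic above can be sketched with plain strings (ignoring Rich's cell-width subtleties): pad the left side so the tag column ends exactly at the console width, and fall back to a single space when the window is too narrow:

```python
def right_align(left: str, right: str, width: int, min_gap: int = 1) -> str:
    # mirror of the layout rule above, using len() instead of Text.cell_len
    max_left = width - len(right) - min_gap
    if max_left < 12:
        return f"{left} {right}"           # too narrow: no alignment
    if len(left) > max_left:
        left = left[: max_left - 1] + "…"  # truncate with an ellipsis
    return left + " " * (width - len(left) - len(right)) + right

line = right_align('string: "GetProcAddress"', "#winapi", 40)
# len(line) == 40 and the tag is flush against the right edge
```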
def _get_primary_source_path(func) -> str | None:
if not func.assemblage_records:
return None
source_path = func.assemblage_records[0].source_path
if not source_path:
return None
return source_path
def _render_source_path_separator(o: Renderer, source_path: str) -> Text:
label = f"[ {source_path} ]"
available = max(0, o.console.size.width - (o.indent * 2))
if available <= len(label) + 2:
return Text(label, style="decoration")
rule_len = available - len(label) - 2
left_len = rule_len // 2
right_len = rule_len - left_len
rendered = Text("-" * left_len, style="decoration")
rendered.append(" ")
rendered.append(label, style="decoration")
rendered.append(" ")
rendered.append("-" * right_len, style="decoration")
return rendered
def _format_function_heading(func: MapaFunction) -> str:
kind = "thunk" if func.is_thunk else "function"
return f"{kind} {func.name} @ {hex(func.address)}"
def _iter_function_rows(func: MapaFunction):
for record in func.assemblage_records:
yield "assemblage", record
if func.is_thunk:
return
for caller in func.callers:
yield "caller", caller
yield "metrics", None
for match in func.capa_matches:
yield "capa", match
for call in func.calls:
yield "call", call
for api in func.apis:
yield "api", api
for string in func.strings:
yield "string", string
def _render_plain_string_line(string: MapaString) -> str:
visible_tags = _visible_tags(string.tags)
line = f'string: "{string.value}"'
if visible_tags:
line += f" {' '.join(visible_tags)}"
return line
def render_function_summary_text(func: MapaFunction) -> str:
lines = [_format_function_heading(func)]
for kind, value in _iter_function_rows(func):
if kind == "assemblage":
lines.append(f"assemblage name: {value.name}")
lines.append(f"assemblage file: {value.source_path}")
elif kind == "caller":
lines.append(
f"xref: {value.direction} {value.name} ({value.delta:+})"
)
elif kind == "metrics":
lines.append(
f"B/E/I: {func.num_basic_blocks} / {func.num_edges} / {func.num_instructions} ({func.total_instruction_bytes} bytes)"
)
elif kind == "capa":
lines.append(f"capa: {value}")
elif kind == "call":
lines.append(
f"calls: {value.direction} {value.name} ({value.delta:+})"
)
elif kind == "api":
lines.append(f"api: {value.name}")
elif kind == "string":
lines.append(_render_plain_string_line(value))
else:
raise ValueError(f"unexpected function row kind: {kind}")
return "\n".join(lines)
def render_report(report: MapaReport, console: Console) -> None:
o = Renderer(console)
with o.section("meta"):
o.writeln(f"name: {report.meta.name}")
o.writeln(f"sha256: {report.meta.sha256}")
o.writeln(f"arch: {report.meta.arch}")
o.writeln(f"ts: {report.meta.timestamp}")
with o.section("sections"):
for section in report.sections:
o.writeln(f"- {hex(section.address)} {section.perms} {hex(section.size)}")
with o.section("libraries"):
for lib in report.libraries:
static = " (static)" if lib.is_static else ""
addr = (
f" at {hex(lib.load_address)}" if lib.load_address is not None else ""
)
o.writeln(f"- {lib.name:<12s}{static}{addr}")
if not report.libraries:
o.writeln("(none)")
with o.section("functions"):
last_source_path: str | None = None
for func in report.functions:
source_path = _get_primary_source_path(func)
if source_path is not None:
if last_source_path is not None and source_path != last_source_path:
o.print(_render_source_path_separator(o, source_path))
last_source_path = source_path
with o.section(
o.markup(
"{function_kind} [default]{function_name}[/] [decoration]@ {function_address}[/]",
function_kind="thunk" if func.is_thunk else "function",
function_name=func.name,
function_address=hex(func.address),
)
):
for kind, value in _iter_function_rows(func):
if kind == "assemblage":
o.writeln(f"assemblage name: {value.name}")
o.writeln(f"assemblage file: {value.source_path}")
elif kind == "caller":
o.print(
"xref: [decoration]{direction}[/] {name} [decoration]({delta:+})[/]",
direction=value.direction,
name=value.name,
delta=value.delta,
)
elif kind == "metrics":
o.writeln(
f"B/E/I: {func.num_basic_blocks} / {func.num_edges} / {func.num_instructions} ({func.total_instruction_bytes} bytes)"
)
elif kind == "capa":
o.writeln(f"capa: {value}")
elif kind == "call":
o.print(
"calls: [decoration]{direction}[/] {name} [decoration]({delta:+})[/]",
direction=value.direction,
name=value.name,
delta=value.delta,
)
elif kind == "api":
o.print(
"api: {name}",
name=value.name,
)
elif kind == "string":
visible_tags = _visible_tags(value.tags)
if visible_tags:
o.print(_render_string_line(o, value.value, visible_tags))
else:
o.print(
'string: [decoration]"[/]{string}[decoration]"[/]',
string=value.value,
)
else:
raise ValueError(f"unexpected function row kind: {kind}")
if not func.is_thunk:
o.print("")


@@ -0,0 +1,25 @@
# Vendored Quantum Strand string databases
Upstream: `../quantumstrand/` (https://github.com/mandiant/flare-floss, Quantum Strand branch)
Upstream commit: `73eb1541e896c065fc694ba7b01067f56871631b`
## Copied data files
- `data/oss/*.jsonl.gz` — open-source library string databases
- `data/crt/msvc_v143.jsonl.gz` — MSVC CRT string database
- `data/expert/capa.jsonl` — expert tagging rules (plain JSONL, not gzipped)
- `data/winapi/apis.txt.gz` — Windows API function names
- `data/winapi/dlls.txt.gz` — Windows DLL names
- `data/gp/gp.jsonl.gz` — global prevalence strings
- `data/gp/cwindb-native.jsonl.gz` — CWinDB native prevalence strings
- `data/gp/cwindb-dotnet.jsonl.gz` — CWinDB .NET prevalence strings
- `data/gp/junk-code.jsonl.gz` — junk/compiler-generated code strings
- `data/gp/xaa-hashes.bin` — truncated MD5 hash set (8 bytes per entry)
- `data/gp/yaa-hashes.bin` — truncated MD5 hash set (8 bytes per entry)
## Code
The loader, tagger, and model code in this package are mapa-local rewrites
inspired by upstream modules `floss/qs/db/oss.py`, `expert.py`, `gp.py`,
`winapi.py`, and the tagging logic in `floss/qs/main.py`. No upstream code
was copied verbatim.
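For the `*-hashes.bin` files, the stated format (one truncated MD5, 8 bytes per entry) implies a loader along these lines; the names are illustrative, not upstream API:

```python
import hashlib

RECORD_SIZE = 8  # per the note above: truncated MD5, 8 bytes per entry

def load_hash_set(blob: bytes) -> set[bytes]:
    # slice the packed blob into fixed-width records, dropping any trailing partial
    return {
        blob[offset : offset + RECORD_SIZE]
        for offset in range(0, len(blob) - RECORD_SIZE + 1, RECORD_SIZE)
    }

# build a two-entry demo blob and query it the same way the tagger does
entries = [hashlib.md5(s.encode()).digest()[:RECORD_SIZE] for s in ("foo", "bar")]
hashes = load_hash_set(b"".join(entries))
assert hashlib.md5(b"foo").digest()[:RECORD_SIZE] in hashes
assert hashlib.md5(b"baz").digest()[:RECORD_SIZE] not in hashes
```

Truncating to 8 bytes trades a vanishing collision probability for a database half the size of full 16-byte MD5 digests.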


@@ -0,0 +1 @@
from __future__ import annotations

Binary file not shown.

File diff suppressed because it is too large.


mapa/string_tags/loaders.py (new file, 220 lines)

@@ -0,0 +1,220 @@
from __future__ import annotations
import gzip
import hashlib
import logging
import re
from importlib import resources
from typing import Literal
import msgspec
logger = logging.getLogger(__name__)
HASH_DIGEST_SIZE = 8
class OpenSourceString(msgspec.Struct):
string: str
library_name: str
library_version: str
file_path: str | None = None
function_name: str | None = None
line_number: int | None = None
class ExpertRule(msgspec.Struct):
type: Literal["string", "substring", "regex"]
value: str
tag: str
action: Literal["mute", "highlight", "hide"]
note: str
description: str
authors: list[str]
references: list[str]
class StringGlobalPrevalence(msgspec.Struct):
string: str
encoding: str = "unknown"
global_count: int = 0
location: str | None = None
class OssDatabase:
def __init__(self, entries: dict[str, OpenSourceString]):
self.entries = entries
def query(self, s: str) -> OpenSourceString | None:
return self.entries.get(s)
class ExpertDatabase:
def __init__(
self,
string_rules: dict[str, list[ExpertRule]],
substring_rules: list[ExpertRule],
regex_rules: list[tuple[ExpertRule, re.Pattern[str]]],
):
self.string_rules = string_rules
self.substring_rules = substring_rules
self.regex_rules = regex_rules
def query(self, s: str) -> list[ExpertRule]:
hits: list[ExpertRule] = []
for rule in self.string_rules.get(s, []):
hits.append(rule)
for rule in self.substring_rules:
if rule.value in s:
hits.append(rule)
for rule, pattern in self.regex_rules:
if pattern.search(s):
hits.append(rule)
return hits
class WinapiDatabase:
def __init__(self, dll_names: set[str], api_names: set[str]):
self.dll_names = dll_names
self.api_names = api_names
def query(self, s: str) -> bool:
return s.lower() in self.dll_names or s in self.api_names
class GpJsonlDatabase:
def __init__(self, entries: dict[str, list[StringGlobalPrevalence]]):
self.entries = entries
def query(self, s: str) -> list[StringGlobalPrevalence] | None:
return self.entries.get(s)
class GpHashDatabase:
def __init__(self, hashes: set[bytes]):
self.hashes = hashes
def query(self, s: str) -> bool:
digest = hashlib.md5(s.encode("utf-8")).digest()[:HASH_DIGEST_SIZE]
return digest in self.hashes
def _data_path():
return resources.files("mapa.string_tags") / "data"
def _read_gzip_lines(path) -> list[bytes]:
return gzip.decompress(path.read_bytes()).split(b"\n")
def load_oss_databases() -> list[OssDatabase]:
decoder = msgspec.json.Decoder(OpenSourceString)
dbs: list[OssDatabase] = []
for subdir in ("oss", "crt"):
data_dir = _data_path() / subdir
for child in sorted(data_dir.iterdir()):
if not child.name.endswith(".jsonl.gz"):
continue
entries: dict[str, OpenSourceString] = {}
for line in _read_gzip_lines(child):
line = line.strip()
if not line:
continue
record = decoder.decode(line)
entries[record.string] = record
dbs.append(OssDatabase(entries))
logger.debug("loaded OSS database %s/%s: %d entries", subdir, child.name, len(entries))
return dbs
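The `.jsonl.gz` handling reduces to gzip-decompress, split on newlines, decode each non-empty line. A stdlib-only sketch (using `json` in place of `msgspec`) round-trips a tiny database of the same shape as `OpenSourceString`:

```python
import gzip
import json

# build a two-record demo database
records = [
    {"string": "inflate", "library_name": "zlib", "library_version": "1.3"},
    {"string": "deflate", "library_name": "zlib", "library_version": "1.3"},
]
blob = gzip.compress(b"\n".join(json.dumps(r).encode() for r in records))

entries = {}
for line in gzip.decompress(blob).split(b"\n"):
    line = line.strip()
    if not line:
        continue
    record = json.loads(line)
    entries[record["string"]] = record  # last write wins, as in OssDatabase

assert entries["inflate"]["library_name"] == "zlib"
```

`msgspec` is used upstream for speed and schema validation; the parsing shape is the same either way.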
def load_expert_database() -> ExpertDatabase:
decoder = msgspec.json.Decoder(ExpertRule)
path = _data_path() / "expert" / "capa.jsonl"
string_rules: dict[str, list[ExpertRule]] = {}
substring_rules: list[ExpertRule] = []
regex_rules: list[tuple[ExpertRule, re.Pattern[str]]] = []
for line in path.read_text().splitlines():
line = line.strip()
if not line:
continue
rule = decoder.decode(line)
if rule.type == "string":
string_rules.setdefault(rule.value, []).append(rule)
elif rule.type == "substring":
substring_rules.append(rule)
elif rule.type == "regex":
try:
regex_rules.append((rule, re.compile(rule.value)))
except re.error:
logger.warning("invalid regex in expert rule: %s", rule.value)
logger.debug(
"loaded expert database: %d string, %d substring, %d regex rules",
len(string_rules),
len(substring_rules),
len(regex_rules),
)
return ExpertDatabase(string_rules, substring_rules, regex_rules)
def load_winapi_database() -> WinapiDatabase:
winapi_dir = _data_path() / "winapi"
dll_lines = _read_gzip_lines(winapi_dir / "dlls.txt.gz")
api_lines = _read_gzip_lines(winapi_dir / "apis.txt.gz")
dll_names = {line.decode("utf-8").strip().lower() for line in dll_lines if line.strip()}
api_names = {line.decode("utf-8").strip() for line in api_lines if line.strip()}
logger.debug("loaded winapi database: %d dlls, %d apis", len(dll_names), len(api_names))
return WinapiDatabase(dll_names, api_names)
def load_gp_jsonl_databases() -> list[GpJsonlDatabase]:
decoder = msgspec.json.Decoder(StringGlobalPrevalence)
gp_dir = _data_path() / "gp"
dbs: list[GpJsonlDatabase] = []
for name in ("gp.jsonl.gz", "cwindb-native.jsonl.gz", "cwindb-dotnet.jsonl.gz"):
entries: dict[str, list[StringGlobalPrevalence]] = {}
lines = _read_gzip_lines(gp_dir / name)
for i, line in enumerate(lines):
line = line.strip()
if not line:
continue
if i == 0:
# the first line is a metadata header, not a string record
continue
record = decoder.decode(line)
entries.setdefault(record.string, []).append(record)
dbs.append(GpJsonlDatabase(entries))
logger.debug("loaded GP JSONL database %s: %d entries", name, len(entries))
return dbs
def load_junk_code_database() -> GpJsonlDatabase:
decoder = msgspec.json.Decoder(StringGlobalPrevalence)
path = _data_path() / "gp" / "junk-code.jsonl.gz"
entries: dict[str, list[StringGlobalPrevalence]] = {}
lines = _read_gzip_lines(path)
for i, line in enumerate(lines):
line = line.strip()
if not line:
continue
if i == 0:
# the first line is a metadata header, not a string record
continue
record = decoder.decode(line)
entries.setdefault(record.string, []).append(record)
logger.debug("loaded junk-code database: %d entries", len(entries))
return GpJsonlDatabase(entries)
def load_gp_hash_databases() -> list[GpHashDatabase]:
gp_dir = _data_path() / "gp"
dbs: list[GpHashDatabase] = []
for name in ("xaa-hashes.bin", "yaa-hashes.bin"):
data = (gp_dir / name).read_bytes()
hashes: set[bytes] = set()
for offset in range(0, len(data), HASH_DIGEST_SIZE):
chunk = data[offset : offset + HASH_DIGEST_SIZE]
if len(chunk) == HASH_DIGEST_SIZE:
hashes.add(chunk)
dbs.append(GpHashDatabase(hashes))
logger.debug("loaded GP hash database %s: %d entries", name, len(hashes))
return dbs

mapa/string_tags/model.py (new file, 36 lines)

@@ -0,0 +1,36 @@
from __future__ import annotations
from dataclasses import field, dataclass
@dataclass
class StringTagMatch:
tag: str
source_family: str
source_name: str
kind: str = ""
library_name: str = ""
library_version: str = ""
file_path: str = ""
function_name: str = ""
line_number: int | None = None
note: str = ""
description: str = ""
action: str = ""
global_count: int | None = None
encoding: str = ""
location: str = ""
@property
def sort_key(self) -> tuple[str, str, str, str, str, str]:
return (self.tag, self.source_family, self.source_name, self.library_name, self.note, self.kind)
@dataclass
class StringTagResult:
tags: tuple[str, ...]
matches: tuple[StringTagMatch, ...] = field(default_factory=tuple)
@classmethod
def empty(cls) -> StringTagResult:
return cls(tags=(), matches=())

mapa/string_tags/tagger.py (new file, 135 lines)

@@ -0,0 +1,135 @@
from __future__ import annotations
import logging
from mapa.string_tags.model import StringTagMatch, StringTagResult
from mapa.string_tags.loaders import (
GpHashDatabase,
GpJsonlDatabase,
OssDatabase,
ExpertDatabase,
WinapiDatabase,
load_expert_database,
load_gp_hash_databases,
load_gp_jsonl_databases,
load_junk_code_database,
load_oss_databases,
load_winapi_database,
)
logger = logging.getLogger(__name__)
class StringTagger:
def __init__(
self,
oss_dbs: list[OssDatabase],
expert_db: ExpertDatabase,
winapi_db: WinapiDatabase,
gp_jsonl_dbs: list[GpJsonlDatabase],
gp_hash_dbs: list[GpHashDatabase],
junk_code_db: GpJsonlDatabase,
):
self.oss_dbs = oss_dbs
self.expert_db = expert_db
self.winapi_db = winapi_db
self.gp_jsonl_dbs = gp_jsonl_dbs
self.gp_hash_dbs = gp_hash_dbs
self.junk_code_db = junk_code_db
def tag_string(self, raw: str) -> StringTagResult:
matches: list[StringTagMatch] = []
for db in self.oss_dbs:
hit = db.query(raw)
if hit is not None:
matches.append(StringTagMatch(
tag=f"#{hit.library_name}",
source_family="oss",
source_name=hit.library_name,
kind="exact",
library_name=hit.library_name,
library_version=hit.library_version,
file_path=hit.file_path or "",
function_name=hit.function_name or "",
line_number=hit.line_number,
))
for rule in self.expert_db.query(raw):
matches.append(StringTagMatch(
tag=rule.tag,
source_family="expert",
source_name="capa",
kind=rule.type,
note=rule.note,
description=rule.description,
action=rule.action,
))
if self.winapi_db.query(raw):
matches.append(StringTagMatch(
tag="#winapi",
source_family="winapi",
source_name="winapi",
kind="exact",
))
for db in self.gp_jsonl_dbs:
entries = db.query(raw)
if entries:
for entry in entries:
matches.append(StringTagMatch(
tag="#common",
source_family="gp",
source_name="jsonl",
kind="exact",
global_count=entry.global_count,
encoding=entry.encoding,
location=entry.location or "",
))
for db in self.gp_hash_dbs:
if db.query(raw):
matches.append(StringTagMatch(
tag="#common",
source_family="gp",
source_name="hash",
kind="hash",
))
if self.junk_code_db.query(raw):
matches.append(StringTagMatch(
tag="#code-junk",
source_family="gp",
source_name="junk-code",
kind="exact",
))
if not matches:
return StringTagResult.empty()
tags = sorted(set(m.tag for m in matches))
matches.sort(key=lambda m: m.sort_key)
return StringTagResult(tags=tuple(tags), matches=tuple(matches))
_cached_tagger: StringTagger | None = None
def load_default_tagger() -> StringTagger:
global _cached_tagger
if _cached_tagger is not None:
return _cached_tagger
logger.debug("loading string tag databases...")
tagger = StringTagger(
oss_dbs=load_oss_databases(),
expert_db=load_expert_database(),
winapi_db=load_winapi_database(),
gp_jsonl_dbs=load_gp_jsonl_databases(),
gp_hash_dbs=load_gp_hash_databases(),
junk_code_db=load_junk_code_database(),
)
_cached_tagger = tagger
logger.debug("string tag databases loaded")
return tagger

mapa/strings.py (new file, 65 lines)

@@ -0,0 +1,65 @@
# String extraction derived from FLOSS via capa.
# https://github.com/mandiant/flare-floss
#
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
import string
PRINTABLE_BYTES = set(string.printable.encode("ascii"))
MIN_STRING_LENGTH = 4
MAX_STRING_READ = 2048
def is_printable_ascii(s: str) -> bool:
try:
return all(b in PRINTABLE_BYTES for b in s.encode("ascii"))
except UnicodeEncodeError:
return False
def extract_ascii_from_buf(buf: bytes) -> str | None:
"""Extract a null-terminated printable ASCII string from the start of a buffer."""
end = buf.find(b"\x00")
if end == -1:
end = len(buf)
if end < MIN_STRING_LENGTH:
return None
candidate = buf[:end]
if not all(b in PRINTABLE_BYTES for b in candidate):
return None
return candidate.decode("ascii")
def extract_utf16le_from_buf(buf: bytes) -> str | None:
"""Extract a null-terminated UTF-16 LE string from the start of a buffer."""
if len(buf) < MIN_STRING_LENGTH * 2:
return None
if buf[1] != 0:
return None
chars: list[int] = []
for i in range(0, len(buf) - 1, 2):
lo, hi = buf[i], buf[i + 1]
if lo == 0 and hi == 0:
break
if hi != 0 or lo not in PRINTABLE_BYTES:
return None
chars.append(lo)
if len(chars) < MIN_STRING_LENGTH:
return None
return bytes(chars).decode("ascii")


@@ -120,9 +120,12 @@ filterwarnings = [
version = {attr = "capa.version.__version__"}
[tool.setuptools.packages.find]
-include = ["capa*"]
+include = ["capa*", "mapa*"]
namespaces = false
[tool.setuptools.package-data]
"mapa.string_tags" = ["data/**/*"]
[project.optional-dependencies]
dev = [
# Dev and build dependencies are not relaxed because

scripts/mapa.py (new file, 20 lines)

@@ -0,0 +1,20 @@
#!/usr/bin/env python
# /// script
# requires-python = ">=3.12"
# dependencies = [
# "idapro",
# "ida-domain",
# "rich",
# ]
# ///
import sys
from pathlib import Path
ROOT = Path(__file__).resolve().parents[1]
if str(ROOT) not in sys.path:
sys.path.insert(0, str(ROOT))
from mapa.cli import main
if __name__ == "__main__":
sys.exit(main())

tests/test_mapa.py (new file, 653 lines)

@@ -0,0 +1,653 @@
from __future__ import annotations
from io import StringIO
import pytest
from rich.console import Console
from rich.theme import Theme
from mapa.assemblage import load_assemblage_records
from mapa.cli import (
build_parser,
open_html_report,
validate_output_options,
write_temp_html_report,
)
from mapa.html_renderer import render_html_map
from mapa.model import (
AssemblageRecord,
MapaCall,
MapaCaller,
MapaFunction,
MapaLibrary,
MapaMeta,
MapaProgramString,
MapaReport,
MapaSection,
MapaString,
)
from mapa.renderer import render_report
class TestAssemblageRecord:
def test_from_csv_row(self):
row = {
"hash": "ABC123",
"name": "foo",
"start": "0x1000",
"end": "0x1010",
"source_file": "src/main.c (line 42)",
}
record = AssemblageRecord.from_csv_row(row, base_address=0x400000)
assert record.sha256 == "abc123"
assert record.start_rva == 0x1000
assert record.end_rva == 0x1010
assert record.address == 0x401000
assert record.end_address == 0x401010
assert record.source_path == "src/main.c"
def test_source_path_without_suffix(self):
record = AssemblageRecord(
sha256="abc123",
name="bar",
start_rva=0,
end_rva=0,
address=0x400000,
end_address=0x400010,
source_file="src/bar.c",
)
assert record.source_path == "src/bar.c"
class TestAssemblageLoader:
def test_load_filters_by_sha256_converts_rva_and_dedupes(self, tmp_path):
csv_path = tmp_path / "assemblage.csv"
csv_path.write_text(
"file_name,path,hash,name,start,end,source_file\n"
"sample.exe,01/sample.exe,abc123,foo,4096,4112,src/foo.c (MD5: 11)\n"
"sample.exe,01/sample.exe,abc123,foo,4096,4112,src/foo.c (MD5: 11)\n"
"sample.exe,01/sample.exe,abc123,foo_alias,4096,4112,src/foo_alias.c (MD5: 22)\n"
"sample.exe,01/sample.exe,def456,skip,4096,4112,src/skip.c (MD5: 33)\n"
"sample.exe,01/sample.exe,abc123,bar,8192,8208,src/bar.c (MD5: 44)\n",
encoding="utf-8",
)
records = load_assemblage_records(
csv_path, sample_sha256="ABC123", base_address=0x400000
)
assert sorted(records) == [0x401000, 0x402000]
assert [record.name for record in records[0x401000]] == ["foo", "foo_alias"]
assert [record.source_path for record in records[0x401000]] == [
"src/foo.c",
"src/foo_alias.c",
]
assert records[0x402000][0].name == "bar"
assert records[0x402000][0].address == 0x402000
def test_load_requires_sha256(self, tmp_path):
csv_path = tmp_path / "assemblage.csv"
csv_path.write_text(
"file_name,path,hash,name,start,end,source_file\n",
encoding="utf-8",
)
with pytest.raises(ValueError, match="(?i)sha256"):
load_assemblage_records(csv_path, sample_sha256="", base_address=0x400000)
class TestRenderer:
@staticmethod
def _make_console() -> tuple[Console, StringIO]:
buf = StringIO()
theme = Theme(
{
"decoration": "grey54",
"title": "yellow",
"key": "black",
"value": "blue",
"default": "black",
},
inherit=False,
)
console = Console(
theme=theme,
markup=False,
emoji=False,
file=buf,
force_terminal=False,
width=120,
no_color=True,
)
return console, buf
@staticmethod
def _make_assemblage_record(
name: str, source_file: str, address: int = 0x1000
) -> AssemblageRecord:
return AssemblageRecord(
sha256="abc123",
name=name,
start_rva=address,
end_rva=address + 0x10,
address=address,
end_address=address + 0x10,
source_file=source_file,
)
def test_meta_section(self):
report = MapaReport(
meta=MapaMeta(
name="test.exe",
sha256="abc123",
arch="x86_64",
timestamp="2025-01-01T00:00:00",
),
)
console, buf = self._make_console()
render_report(report, console)
output = buf.getvalue()
assert "test.exe" in output
assert "abc123" in output
assert "x86_64" in output
def test_sections_rendered(self):
report = MapaReport(
meta=MapaMeta(name="t", sha256="s"),
sections=[
MapaSection(address=0x1000, size=0x2000, perms="r-x"),
],
)
console, buf = self._make_console()
render_report(report, console)
output = buf.getvalue()
assert "0x1000" in output
assert "r-x" in output
assert "0x2000" in output
def test_libraries_rendered(self):
report = MapaReport(
meta=MapaMeta(name="t", sha256="s"),
libraries=[MapaLibrary(name="KERNEL32.dll")],
)
console, buf = self._make_console()
render_report(report, console)
output = buf.getvalue()
assert "KERNEL32.dll" in output
def test_empty_libraries(self):
report = MapaReport(
meta=MapaMeta(name="t", sha256="s"),
)
console, buf = self._make_console()
render_report(report, console)
output = buf.getvalue()
assert "(none)" in output
def test_thunk_function_with_assemblage(self):
report = MapaReport(
meta=MapaMeta(name="t", sha256="s"),
functions=[
MapaFunction(
address=0x1000,
name="jmp_CreateFile",
is_thunk=True,
assemblage_records=[
self._make_assemblage_record(
"source_thunk", "src/thunk.c (MD5: 11)"
)
],
),
],
)
console, buf = self._make_console()
render_report(report, console)
output = buf.getvalue()
assert "thunk" in output
assert "jmp_CreateFile" in output
assert "assemblage name: source_thunk" in output
assert "assemblage file: src/thunk.c" in output
def test_function_with_calls_strings_and_assemblage(self):
report = MapaReport(
meta=MapaMeta(name="t", sha256="s"),
functions=[
MapaFunction(
address=0x1000,
name="main",
num_basic_blocks=3,
num_edges=4,
num_instructions=10,
total_instruction_bytes=42,
callers=[
MapaCaller(name="start", address=0x500, delta=-1, direction="")
],
calls=[
MapaCall(
name="helper",
address=0x2000,
is_api=False,
delta=1,
direction="",
)
],
apis=[MapaCall(name="CreateFileW", address=0x3000, is_api=True)],
strings=[MapaString(value="Hello World", address=0x4000)],
capa_matches=["write file"],
assemblage_records=[
self._make_assemblage_record(
"source_main", "src/main.c (MD5: 11)"
)
],
),
],
)
console, buf = self._make_console()
render_report(report, console)
output = buf.getvalue()
assert "function" in output
assert "main" in output
assert "assemblage name: source_main" in output
assert "assemblage file: src/main.c" in output
assert "3 / 4 / 10 (42 bytes)" in output
assert "xref:" in output
assert "start" in output
assert "calls:" in output
assert "helper" in output
assert "api:" in output
assert "CreateFileW" in output
assert "string:" in output
assert "Hello World" in output
assert "capa:" in output
assert "write file" in output
def test_ambiguous_assemblage_records_render_all(self):
report = MapaReport(
meta=MapaMeta(name="t", sha256="s"),
functions=[
MapaFunction(
address=0x1000,
name="ida_name",
assemblage_records=[
self._make_assemblage_record("foo", "src/foo.c (MD5: 11)"),
self._make_assemblage_record(
"foo_alias", "src/foo_alias.c (MD5: 22)"
),
],
),
],
)
console, buf = self._make_console()
render_report(report, console)
output = buf.getvalue()
assert "function ida_name @ 0x1000" in output
assert output.count("assemblage name:") == 2
assert output.count("assemblage file:") == 2
assert "foo_alias" in output
assert "src/foo_alias.c" in output
def test_source_file_separator_inserted_before_new_path(self):
report = MapaReport(
meta=MapaMeta(name="t", sha256="s"),
functions=[
MapaFunction(
address=0x1000,
name="a",
assemblage_records=[
self._make_assemblage_record("a", "src/a.c (MD5: 11)")
],
),
MapaFunction(
address=0x2000,
name="b",
assemblage_records=[
self._make_assemblage_record(
"b", "src/b.c (MD5: 22)", address=0x2000
)
],
),
],
)
console, buf = self._make_console()
render_report(report, console)
output = buf.getvalue()
assert "[ src/b.c ]" in output
assert "[ src/a.c ]" not in output
assert output.index("[ src/b.c ]") < output.index("function b @ 0x2000")
def test_missing_assemblage_data_does_not_force_split(self):
report = MapaReport(
meta=MapaMeta(name="t", sha256="s"),
functions=[
MapaFunction(
address=0x1000,
name="a1",
assemblage_records=[
self._make_assemblage_record("a1", "src/a.c (MD5: 11)")
],
),
MapaFunction(address=0x2000, name="unknown"),
MapaFunction(
address=0x3000,
name="a2",
assemblage_records=[
self._make_assemblage_record(
"a2", "src/a.c (MD5: 22)", address=0x3000
)
],
),
],
)
console, buf = self._make_console()
render_report(report, console)
output = buf.getvalue()
assert "[ src/a.c ]" not in output
def test_new_source_after_missing_gap_still_splits(self):
report = MapaReport(
meta=MapaMeta(name="t", sha256="s"),
functions=[
MapaFunction(
address=0x1000,
name="a",
assemblage_records=[
self._make_assemblage_record("a", "src/a.c (MD5: 11)")
],
),
MapaFunction(address=0x2000, name="unknown1"),
MapaFunction(address=0x3000, name="unknown2"),
MapaFunction(
address=0x4000,
name="b",
assemblage_records=[
self._make_assemblage_record(
"b", "src/b.c (MD5: 22)", address=0x4000
)
],
),
],
)
console, buf = self._make_console()
render_report(report, console)
output = buf.getvalue()
assert output.count("[ src/b.c ]") == 1
assert "[ src/a.c ]" not in output
assert output.index("[ src/b.c ]") < output.index("function b @ 0x4000")
class TestStringDedup:
def test_strings_deduped_in_model(self):
seen: set[str] = set()
strings = ["hello", "hello", "world", "hello"]
result = []
for s in strings:
stripped = s.rstrip()
if stripped and stripped not in seen:
seen.add(stripped)
result.append(stripped)
assert result == ["hello", "world"]
def test_string_rstrip(self):
s = "hello \n\t"
assert s.rstrip() == "hello"
class TestHtmlMapRenderer:
@staticmethod
def _make_assemblage_record(
name: str, source_file: str, address: int
) -> AssemblageRecord:
return AssemblageRecord(
sha256="abc123",
name=name,
start_rva=address,
end_rva=address + 0x10,
address=address,
end_address=address + 0x10,
source_file=source_file,
)
@staticmethod
def _make_report() -> MapaReport:
return MapaReport(
meta=MapaMeta(name="sample.exe", sha256="abc123", arch="x86_64"),
functions=[
MapaFunction(
address=0x1000,
name="entry",
calls=[
MapaCall(
name="worker",
address=0x2000,
is_api=False,
delta=1,
direction="",
)
],
strings=[MapaString(value="CreateFileW", address=0x3000, tags=("#common", "#winapi"))],
assemblage_records=[
TestHtmlMapRenderer._make_assemblage_record(
"entry_src", "src/core/a.c (MD5: 11)", 0x1000
)
],
),
MapaFunction(
address=0x2000,
name="worker",
callers=[
MapaCaller(
name="entry",
address=0x1000,
delta=-1,
direction="",
)
],
calls=[
MapaCall(
name="helper",
address=0x3000,
is_api=False,
delta=1,
direction="",
)
],
apis=[MapaCall(name="kernel32.dll!CreateFileW", address=0x5000, is_api=True)],
strings=[MapaString(value="inflate", address=0x2000, tags=("#zlib",))],
capa_matches=["write file"],
assemblage_records=[
TestHtmlMapRenderer._make_assemblage_record(
"worker_src", "src/core/a.c (MD5: 22)", 0x2000
)
],
),
MapaFunction(
address=0x3000,
name="helper",
callers=[
MapaCaller(
name="worker",
address=0x2000,
delta=-1,
direction="",
)
],
strings=[MapaString(value="normal", address=0x4000, tags=("#common",))],
assemblage_records=[
TestHtmlMapRenderer._make_assemblage_record(
"helper_src", "src/core/b.c (MD5: 33)", 0x3000
)
],
),
],
program_strings=[
MapaProgramString(
value="inflate",
address=0x4000,
tags=("#zlib",),
function_addresses=(0x2000,),
),
MapaProgramString(
value="CreateFileW",
address=0x3000,
tags=("#common", "#winapi"),
function_addresses=(0x1000,),
),
MapaProgramString(
value="normal",
address=0x5000,
tags=("#common",),
function_addresses=(0x3000,),
),
MapaProgramString(
value="inflate",
address=0x6000,
tags=("#zlib",),
function_addresses=(0x1000, 0x2000),
),
],
)
def test_cli_parser_accepts_html_map_output(self):
parser = build_parser()
args = parser.parse_args(["sample.exe", "--output", "html-map"])
assert args.output == "html-map"
def test_open_requires_html_map_output(self):
with pytest.raises(ValueError, match="--open requires --output html-map"):
validate_output_options("text", True)
def test_write_temp_html_report(self, tmp_path):
report_path = write_temp_html_report("<html>ok</html>", directory=tmp_path)
assert report_path.parent == tmp_path
assert report_path.read_text(encoding="utf-8") == "<html>ok</html>"
def test_open_html_report_writes_temp_file_and_uses_opener(self, tmp_path):
opened: list[str] = []
def open_url(url: str) -> bool:
opened.append(url)
return True
report_path = open_html_report(
"<html>ok</html>",
opener=open_url,
directory=tmp_path,
)
assert report_path.parent == tmp_path
assert report_path.read_text(encoding="utf-8") == "<html>ok</html>"
assert opened == [report_path.as_uri()]
def test_html_map_is_self_contained(self):
html = render_html_map(self._make_report())
assert "<style>" in html
assert "<script" in html
assert "function-grid" in html
assert 'class="split-view"' in html
assert 'role="separator"' in html
assert "<link " not in html
assert "<script src=" not in html
def test_html_map_uses_split_panes_and_resizer(self):
html = render_html_map(self._make_report())
assert 'id="functions-pane"' in html
assert 'id="strings-pane"' in html
assert 'id="splitter"' in html
assert "setPointerCapture" in html
assert "pointerdown" in html
def test_html_map_shows_tag_counts_and_legend(self):
html = render_html_map(self._make_report())
assert '#zlib <span class="control-count">(2)</span>' in html
assert '#common <span class="control-count">(1)</span>' in html
assert 'fill = neighborhood heat · outline = seed · dim = outside neighborhood' in html
def test_html_map_orders_tags_by_function_count_then_name(self):
html = render_html_map(self._make_report())
zlib = html.index('data-tag="#zlib"')
common = html.index('data-tag="#common"')
winapi = html.index('data-tag="#winapi"')
assert zlib < common < winapi
def test_html_map_orders_program_strings_by_address_and_shows_addresses(self):
html = render_html_map(self._make_report())
first = html.index('data-string-address="0x3000"')
second = html.index('data-string-address="0x4000"')
third = html.index('data-string-address="0x5000"')
fourth = html.index('data-string-address="0x6000"')
assert first < second < third < fourth
assert '0x3000' in html
assert '0x6000' in html
def test_html_map_string_rows_show_visible_tags(self):
html = render_html_map(self._make_report())
assert 'data-string-address="0x3000"' in html
assert 'data-string-tags="#winapi"' in html
assert 'data-string-address="0x4000"' in html
assert 'data-string-tags="#zlib"' in html
assert 'data-string-address="0x5000"' in html
assert 'data-string-tags="#common"' in html
assert 'class="string-tags">#winapi</span>' in html
def test_html_map_exposes_neighborhood_controls_and_call_graph_data(self):
html = render_html_map(self._make_report())
assert 'id="direction-controls"' in html
assert 'data-direction="callers"' in html
assert 'data-direction="callees"' in html
assert 'data-direction="both"' in html
assert 'id="depth-controls"' in html
assert 'data-depth="1"' in html
assert 'data-depth="3"' in html
assert '"callersByIndex":[[],[0],[1]]' in html
assert '"calleesByIndex":[[1],[2],[]]' in html
def test_html_map_function_boxes_support_neighborhood_hover_and_lock(self):
html = render_html_map(self._make_report())
assert 'data-function-index="0"' in html
assert 'let hoveredFunction=null;' in html
assert 'let lockedFunction=null;' in html
assert 'const getDecayScore=(distance)=>0.5**distance;' in html
assert "lines.join('\\n')" in html
assert 'function-box.is-seed' in html
assert "box.addEventListener('click'" in html
def test_html_map_preserves_duplicate_values_at_distinct_addresses(self):
html = render_html_map(self._make_report())
assert html.count('data-string-value="inflate"') == 2
def test_html_map_tooltip_contains_function_summary_text(self):
html = render_html_map(self._make_report())
assert "function worker @ 0x2000" in html
assert "assemblage file: src/core/a.c" in html
assert "api: kernel32.dll!CreateFileW" in html
assert 'string: \\\"inflate\\\" #zlib' in html
assert "capa: write file" in html
def test_html_map_visible_tag_policy_matches_text_renderer(self):
html = render_html_map(self._make_report())
assert 'data-tag="#winapi"' in html
assert 'data-tag="#common"' in html
common_control = html.index('data-tag="#common"')
createfile_row = html.index('data-string-address="0x3000"')
normal_row = html.index('data-string-address="0x5000"')
assert common_control < createfile_row < normal_row
def test_html_map_hides_common_control_when_only_hidden_common_exists(self):
report = MapaReport(
meta=MapaMeta(name="sample.exe", sha256="abc123"),
functions=[MapaFunction(address=0x1000, name="entry")],
program_strings=[
MapaProgramString(
value="CreateFileW",
address=0x3000,
tags=("#common", "#winapi"),
function_addresses=(0x1000,),
)
],
)
html = render_html_map(report)
assert 'data-tag="#winapi"' in html
assert 'data-tag="#common"' not in html

tests/test_string_tags.py (new file)
@@ -0,0 +1,367 @@
from __future__ import annotations
from io import StringIO
from rich.text import Text
from rich.theme import Theme
from rich.console import Console
from mapa.model import MapaFunction, MapaMeta, MapaReport, MapaString
from mapa.renderer import Renderer, _render_string_line, _visible_tags, render_report
from mapa.string_tags.model import StringTagMatch, StringTagResult
from mapa.string_tags.loaders import (
load_expert_database,
load_gp_hash_databases,
load_gp_jsonl_databases,
load_junk_code_database,
load_oss_databases,
load_winapi_database,
)
from mapa.string_tags.tagger import StringTagger, load_default_tagger
class TestOssLoader:
def test_zlib_string(self):
dbs = load_oss_databases()
found = False
for db in dbs:
hit = db.query("invalid distance code")
if hit is not None:
assert hit.library_name == "zlib"
found = True
break
assert found
def test_msvc_string(self):
dbs = load_oss_databases()
found = False
for db in dbs:
hit = db.query("IsolationAware function called after IsolationAwareCleanup")
if hit is not None:
assert hit.library_name == "msvc"
found = True
break
assert found
def test_miss(self):
dbs = load_oss_databases()
for db in dbs:
assert db.query("this string does not exist in any library") is None
class TestExpertLoader:
def test_exact_match(self):
db = load_expert_database()
hits = db.query("CurrencyDispenser1")
assert any(r.tag == "#capa" for r in hits)
def test_substring_match(self):
db = load_expert_database()
hits = db.query("something with CurrencyDispenser1 in it")
tags = {r.tag for r in hits}
assert "#capa" in tags or len(hits) == 0
def test_miss(self):
db = load_expert_database()
hits = db.query("completely unrelated string xyz123")
assert len(hits) == 0
class TestWinapiLoader:
def test_dll_case_insensitive(self):
db = load_winapi_database()
assert db.query("kernel32.dll")
assert db.query("KERNEL32.DLL")
assert db.query("Kernel32.dll")
def test_api_exact(self):
db = load_winapi_database()
assert db.query("CreateFileA")
def test_miss(self):
db = load_winapi_database()
assert not db.query("NotARealApiFunction12345")
class TestGpJsonlLoader:
def test_common_string(self):
dbs = load_gp_jsonl_databases()
found = False
for db in dbs:
if db.query("!This program cannot be run in DOS mode."):
found = True
break
assert found
def test_miss(self):
dbs = load_gp_jsonl_databases()
for db in dbs:
assert db.query("xyzzy_not_a_real_string_99999") is None
class TestGpHashLoader:
def test_loads(self):
dbs = load_gp_hash_databases()
assert len(dbs) == 2
for db in dbs:
assert len(db.hashes) > 0
class TestJunkCodeLoader:
def test_initterm(self):
db = load_junk_code_database()
assert db.query("_initterm") is not None
class TestTagger:
def test_zlib_tag(self):
tagger = load_default_tagger()
result = tagger.tag_string("invalid distance code")
assert "#zlib" in result.tags
def test_capa_tag(self):
tagger = load_default_tagger()
result = tagger.tag_string("CurrencyDispenser1")
assert "#capa" in result.tags
def test_winapi_tag(self):
tagger = load_default_tagger()
result = tagger.tag_string("CreateFileA")
assert "#winapi" in result.tags
def test_common_tag(self):
tagger = load_default_tagger()
result = tagger.tag_string("!This program cannot be run in DOS mode.")
assert "#common" in result.tags
def test_code_junk_tag(self):
tagger = load_default_tagger()
result = tagger.tag_string("_initterm")
assert "#code-junk" in result.tags
def test_multi_tag(self):
tagger = load_default_tagger()
result = tagger.tag_string("_initterm")
assert "#winapi" in result.tags or "#code-junk" in result.tags
def test_empty_string(self):
tagger = load_default_tagger()
result = tagger.tag_string("")
assert result.tags == ()
def test_no_match(self):
tagger = load_default_tagger()
result = tagger.tag_string("xyzzy_unique_test_string_42")
assert result.tags == ()
assert result.matches == ()
def test_tags_sorted(self):
tagger = load_default_tagger()
result = tagger.tag_string("_initterm")
assert result.tags == tuple(sorted(result.tags))
def test_metadata_preserved_for_multiple_common_sources(self):
tagger = load_default_tagger()
result = tagger.tag_string("!This program cannot be run in DOS mode.")
common_matches = [m for m in result.matches if m.tag == "#common"]
assert len(common_matches) >= 1
class TestVisibleTags:
def test_common_only(self):
assert _visible_tags(("#common",)) == ["#common"]
def test_common_with_specific(self):
assert _visible_tags(("#common", "#winapi")) == ["#winapi"]
def test_code_junk_kept_with_others(self):
tags = ("#code-junk", "#winapi")
visible = _visible_tags(tags)
assert "#code-junk" in visible
assert "#winapi" in visible
def test_empty(self):
assert _visible_tags(()) == []
def _make_console(width: int = 120) -> tuple[Console, StringIO]:
buf = StringIO()
theme = Theme(
{
"decoration": "grey54",
"title": "yellow",
"key": "black",
"value": "blue",
"default": "black",
},
inherit=False,
)
console = Console(
theme=theme,
markup=False,
emoji=False,
file=buf,
force_terminal=False,
width=width,
no_color=True,
)
return console, buf
class TestStringLineRenderer:
def test_untagged_matches_old_format(self):
report = MapaReport(
meta=MapaMeta(name="t", sha256="s"),
functions=[
MapaFunction(
address=0x1000,
name="main",
strings=[MapaString(value="Hello World", address=0x4000)],
),
],
)
console, buf = _make_console()
render_report(report, console)
output = buf.getvalue()
assert 'string:' in output
assert "Hello World" in output
def test_tagged_string_shows_tag_at_right(self):
report = MapaReport(
meta=MapaMeta(name="t", sha256="s"),
functions=[
MapaFunction(
address=0x1000,
name="main",
strings=[MapaString(value="invalid distance code", address=0x4000, tags=("#zlib",))],
),
],
)
console, buf = _make_console()
render_report(report, console)
output = buf.getvalue()
assert "#zlib" in output
assert "invalid distance code" in output
def test_tag_alignment_right_edge(self):
console, _ = _make_console(width=80)
o = Renderer(console)
line = _render_string_line(o, "test string", ["#zlib"])
assert line.plain.rstrip().endswith("#zlib")
def test_narrow_terminal_still_shows_tags(self):
console, _ = _make_console(width=30)
o = Renderer(console)
line = _render_string_line(o, "a very long string value here that exceeds width", ["#zlib"])
assert "#zlib" in line.plain
def test_common_hidden_when_specific_present(self):
report = MapaReport(
meta=MapaMeta(name="t", sha256="s"),
functions=[
MapaFunction(
address=0x1000,
name="main",
strings=[
MapaString(
value="CreateFileA",
address=0x4000,
tags=("#common", "#winapi"),
)
],
),
],
)
console, buf = _make_console()
render_report(report, console)
output = buf.getvalue()
assert "#winapi" in output
assert "#common" not in output
def test_common_shown_when_only_tag(self):
report = MapaReport(
meta=MapaMeta(name="t", sha256="s"),
functions=[
MapaFunction(
address=0x1000,
name="main",
strings=[
MapaString(
value="!This program cannot be run in DOS mode.",
address=0x4000,
tags=("#common",),
)
],
),
],
)
console, buf = _make_console()
render_report(report, console)
output = buf.getvalue()
assert "#common" in output
class TestReportModel:
def test_string_with_library_tag(self):
ms = MapaString(
value="invalid distance code",
address=0x1000,
tags=("#zlib",),
tag_matches=(
StringTagMatch(
tag="#zlib",
source_family="oss",
source_name="zlib",
library_name="zlib",
library_version="1.3.1",
),
),
)
assert ms.tags == ("#zlib",)
assert ms.tag_matches[0].library_name == "zlib"
def test_string_with_multi_tag(self):
ms = MapaString(
value="_initterm",
address=0x2000,
tags=("#code-junk", "#winapi"),
tag_matches=(
StringTagMatch(tag="#code-junk", source_family="gp", source_name="junk-code"),
StringTagMatch(tag="#winapi", source_family="winapi", source_name="winapi"),
),
)
assert "#code-junk" in ms.tags
assert "#winapi" in ms.tags
assert len(ms.tag_matches) == 2
def test_string_with_common_tag(self):
ms = MapaString(
value="!This program cannot be run in DOS mode.",
address=0x3000,
tags=("#common",),
)
assert ms.tags == ("#common",)
def test_report_with_tagged_strings(self):
report = MapaReport(
meta=MapaMeta(name="test.exe", sha256="abc"),
functions=[
MapaFunction(
address=0x1000,
name="main",
strings=[
MapaString(value="invalid distance code", address=0x2000, tags=("#zlib",)),
MapaString(value="CreateFileA", address=0x3000, tags=("#common", "#winapi")),
MapaString(value="_initterm", address=0x4000, tags=("#code-junk", "#winapi")),
],
),
],
)
assert len(report.functions[0].strings) == 3
all_tags = set()
for s in report.functions[0].strings:
all_tags.update(s.tags)
assert "#zlib" in all_tags
assert "#winapi" in all_tags
assert "#code-junk" in all_tags
assert "#common" in all_tags