Compare commits


15 Commits

Author SHA1 Message Date
Ana Maria Martinez Gomez
3831f1c104 extractors: Do not use generate_api_features
`generate_api_features` was merged with the implementation of
`generate_import_features` and replaced by `generate_symbol` in
commit 2b2656c2a3. Use the new function in the miasm backend
implementation.
2021-02-05 15:41:13 +01:00
Ana Maria Martinez Gomez
dc828e82b3 extractors: add required loc_db
Since the following PR, miasm requires a LocationDB to be passed into
object constructors instead of creating a new LocationDB internally:
https://github.com/cea-sec/miasm/pull/1274

This was not the case when I started the miasm backend
implementation. Adapt the code to work with this change, which also
means interacting with miasm in a better way.
2021-02-05 15:41:04 +01:00
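
For reference, a minimal sketch of the post-#1274 construction (the file path is a placeholder):

```python
from miasm.analysis.binary import Container
from miasm.analysis.machine import Machine
from miasm.core.locationdb import LocationDB

# the caller now owns the LocationDB and passes it to the Container,
# instead of miasm allocating one internally
loc_db = LocationDB()
with open("sample.exe_", "rb") as f:  # placeholder path
    container = Container.from_stream(f, loc_db)

machine = Machine(container.arch)
mdis = machine.dis_engine(container.bin_stream, loc_db=loc_db)
```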
Ana María Martínez Gómez
2e98ba990c tests: enable tests for miasm
Everything is red :( Some tests are failing due to features that are
not yet implemented. In addition, it looks like miasm has problems
disassembling some of the test files.
2021-02-03 15:07:31 +01:00
Ana María Martínez Gómez
d008fef23f extractors: enable miasm in Python3
Do not make miasm the default until we have ensured everything works as
it should.
2021-02-03 15:07:31 +01:00
Ana María Martínez Gómez
fe458c387a extractors: use block and feature offset function
`f` and `bb` in miasm are not integers. Introduce `block_offset()` and
`feature_offset()` in the extractors and use them in main to solve this.

Related to https://github.com/cea-sec/miasm/pull/1277
2021-02-03 12:50:56 +01:00
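
A minimal sketch of such a helper, assuming the block exposes its `loc_key` and the caller holds the `LocationDB` (the actual helper names and signatures in capa may differ):

```python
def block_offset(loc_db, block):
    # miasm identifies an AsmBlock by a loc_key rather than an integer
    # address; resolve it through the LocationDB to get a concrete offset
    return loc_db.get_location_offset(block.loc_key)
```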
Ana María Martínez Gómez
3e52c7de23 features: store mnemonics lower case
miasm extracts mnemonics capitalized while other backends extract them
lowercase. To ensure capa works with all of them, lowercase the value
in the `Mnemonic` constructor.
2021-02-03 12:50:56 +01:00
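
The normalization itself is trivial; a standalone sketch of what happens inside the constructor:

```python
def normalize_mnemonic(mnemonic):
    # miasm reports mnemonics uppercase (e.g. "XOR") while other backends
    # report them lowercase; lowercase so rules match either backend
    return mnemonic.lower()
```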
Ana María Martínez Gómez
2d1e7946e3 extractors: Implement extract_insn_mnemonic_features
Extract insn mnemonic features in miasm.
2021-02-03 12:50:56 +01:00
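
A hedged sketch of this extractor, assuming capa's `Mnemonic` feature class (with the lowercasing described above happening inside its constructor); miasm instructions expose the mnemonic as `.name` and the address as `.offset`:

```python
from capa.features.insn import Mnemonic  # capa's mnemonic feature class

def extract_insn_mnemonic_features(f, bb, insn):
    # capa extractors yield (feature, virtual address) pairs
    yield Mnemonic(insn.name), insn.offset
```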
Ana María Martínez Gómez
f2fe173ef3 extractors: Implement extract_insn_api_features
Extract insn API features in miasm.
2021-02-03 12:50:56 +01:00
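
One plausible approach using miasm primitives, sketched below: resolve direct call targets through the `LocationDB` and emit any names recorded for them. The extra `loc_db` parameter and the assumption that imports get named locations are simplifications, not capa's exact implementation:

```python
from capa.features.insn import API  # capa's API feature class

def extract_insn_api_features(f, bb, insn, loc_db):
    # only direct calls can name an API
    if not insn.is_subcall():
        return
    for dst in insn.getdstflow(loc_db):
        if not dst.is_loc():
            continue
        # imports typically resolve to named locations in the LocationDB
        for name in loc_db.get_location_names(dst.loc_key):
            yield API(name), insn.offset
```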
Ana María Martínez Gómez
b2fc52d390 extractors: implement miasm insn features template
Add a template for insn features. These features need some work and
there are many of them, so I'll introduce each independently in its
own commit.
2021-02-03 12:50:56 +01:00
Ana María Martínez Gómez
5ba4629c3c extractors: implement miasm function features
Add function features.
2021-02-03 12:50:56 +01:00
Ana María Martínez Gómez
4fc9c77791 extractors: implement miasm basic block features
Add basic block features.
2021-02-03 12:50:55 +01:00
Ana María Martínez Gómez
31ba9ee1b3 extractors: Implement get_basic_blocks in miasm
Implement `get_basic_blocks` in `MiasmFeatureExtractor`.
2021-02-03 12:50:55 +01:00
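
A sketch of the shape this takes, assuming the extractor keeps the disassembled CFG around (the real mapping from a function to its blocks is more involved):

```python
def get_basic_blocks(self, f):
    # yield the AsmBlocks of the disassembled CFG; a real implementation
    # would restrict this to the blocks reachable from function `f`
    for block in self.asmcfg.blocks:
        yield block
```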
Ana María Martínez Gómez
b4a808ac76 extractors: Implement get_functions in miasm
Implement `get_functions` in `MiasmFeatureExtractor`. It is a proof of
concept that simply considers every loc_key targeted by a call to be a
function. This is enough to test feature extraction against the
functions. A final version should include other function recognition
techniques and be ported to miasm.
2021-02-03 12:50:55 +01:00
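
Sketched below under the stated proof-of-concept assumption (every loc_key targeted by a call is a function entry); `asmcfg` is the disassembled `AsmCFG`, and ordering details are glossed over:

```python
def get_function_loc_keys(asmcfg, loc_db):
    # proof of concept: every call target counts as a function entry
    seen = set()
    for block in asmcfg.blocks:
        for insn in block.lines:
            if not insn.is_subcall():
                continue
            for dst in insn.getdstflow(loc_db):
                if dst.is_loc() and dst.loc_key not in seen:
                    seen.add(dst.loc_key)
                    yield dst.loc_key
```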
Ana María Martínez Gómez
0f030115d1 extractors: Implement cfg in miasm
Implement `_build_cfg()` in `MiasmFeatureExtractor`.

Co-authored-by: William Ballenthin <william.ballenthin@fireeye.com>
2021-02-03 12:50:55 +01:00
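
The heart of this is miasm's recursive-descent `dis_multiblock`; a minimal sketch (entry-point selection and error handling omitted):

```python
def build_cfg(container, machine, loc_db):
    # disassemble from the entry point, following control flow; the result
    # is an AsmCFG whose basic blocks are keyed by loc_key
    mdis = machine.dis_engine(container.bin_stream, loc_db=loc_db)
    return mdis.dis_multiblock(container.entry_point)
```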
Ana María Martínez Gómez
42573d8df2 extractors: implement miasm file features
Begin to implement the miasm backend. Add file features.

This implementation needs:
- https://github.com/cea-sec/miasm/pull/1273

Co-authored-by: William Ballenthin <william.ballenthin@fireeye.com>
2021-02-03 12:50:51 +01:00
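
File features are largely backend-independent. As one example, string features can come straight from the raw bytes; a sketch (capa's real implementation also handles UTF-16 and uses its own string helpers):

```python
import re

# printable-ASCII runs of at least four characters
ASCII_STRINGS = re.compile(rb"[ -~]{4,}")

def extract_file_strings(buf):
    # yield (string, file offset) pairs from the raw sample bytes
    for match in ASCII_STRINGS.finditer(buf):
        yield match.group().decode("ascii"), match.start()
```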
127 changed files with 5123 additions and 10606 deletions

.gitattributes vendored

@@ -1,9 +0,0 @@
# Set the default behavior, in case people don't have core.autocrlf set.
* text=auto
# Explicitly declare text files you want to always be normalized and converted
# to native line endings on checkout.
*.py text
*.yml text
*.md text
*.txt text

.github/CODE_OF_CONDUCT.md

@@ -1,46 +1,46 @@
# Contributor Covenant Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at [https://contributor-covenant.org/version/1/4][version]
[homepage]: https://contributor-covenant.org
[version]: https://contributor-covenant.org/version/1/4/

.github/CONTRIBUTING.md

@@ -1,197 +1,197 @@
# Contributing to Capa
First off, thanks for taking the time to contribute!
The following is a set of guidelines for contributing to capa and its packages, which are hosted in the [FireEye Organization](https://github.com/fireeye) on GitHub. These are mostly guidelines, not rules. Use your best judgment, and feel free to propose changes to this document in a pull request.
#### Table Of Contents
[Code of Conduct](#code-of-conduct)
[What should I know before I get started?](#what-should-i-know-before-i-get-started)
* [Capa and its Repositories](#capa-and-its-repositories)
* [Capa Design Decisions](#design-decisions)
[How Can I Contribute?](#how-can-i-contribute)
* [Reporting Bugs](#reporting-bugs)
* [Suggesting Enhancements](#suggesting-enhancements)
* [Your First Code Contribution](#your-first-code-contribution)
* [Pull Requests](#pull-requests)
[Styleguides](#styleguides)
* [Git Commit Messages](#git-commit-messages)
* [Python Styleguide](#python-styleguide)
* [Rules Styleguide](#rules-styleguide)
## Code of Conduct
This project and everyone participating in it is governed by the [Capa Code of Conduct](CODE_OF_CONDUCT.md). By participating, you are expected to uphold this code. Please report unacceptable behavior to the maintainers.
## What should I know before I get started?
### Capa and its repositories
We host the capa project as three GitHub repositories:
- [capa](https://github.com/fireeye/capa)
- [capa-rules](https://github.com/fireeye/capa-rules)
- [capa-testfiles](https://github.com/fireeye/capa-testfiles)
The command line tools, logic engine, and other Python source code are found in the `capa` repository.
This is the repository to fork when you want to enhance the features, performance, or user interface of capa.
Do *not* push rules directly to this repository, instead...
The standard rules contributed by the community are found in the `capa-rules` repository.
When you have an idea for a new rule, you should open a PR against `capa-rules`.
We keep `capa` and `capa-rules` separate to distinguish where ideas, bugs, and discussions should happen.
If you're writing yaml it probably goes in `capa-rules` and if you're writing Python it probably goes in `capa`.
Also, we encourage users to develop their own rule repositories, so we treat our default set of rules in the same way.
Test fixtures, such as malware samples and analysis workspaces, are found in the `capa-testfiles` repository.
These are files you'll need in order to run the linter (in `--thorough` mode) and full test suites;
however, they take up a lot of space (1GB+), so by keeping `capa-testfiles` separate,
a shallow checkout of `capa` and `capa-rules` doesn't take much bandwidth.
### Design Decisions
When we make a significant decision in how we maintain the project and what we can or cannot support,
we will document it in the [capa issues tracker](https://github.com/fireeye/capa/issues).
This is the best place to review our discussions about what/how/why we do things in the project.
If you have a question, check to see if it is documented there.
If it is *not* documented there, or you can't find an answer, please open an issue.
We'll link to existing issues when appropriate to keep discussions in one place.
## How Can I Contribute?
### Reporting Bugs
This section guides you through submitting a bug report for capa.
Following these guidelines helps maintainers and the community understand your report, reproduce the behavior, and find related reports.
Before creating bug reports, please check [this list](#before-submitting-a-bug-report)
as you might find out that you don't need to create one.
When you are creating a bug report, please [include as many details as possible](#how-do-i-submit-a-good-bug-report).
Fill out [the required template](./ISSUE_TEMPLATE/bug_report.md);
the information it asks for helps us resolve issues faster.
> **Note:** If you find a **Closed** issue that seems like it is the same thing that you're experiencing, open a new issue and include a link to the original issue in the body of your new one.
#### Before Submitting A Bug Report
* **Determine [which repository the problem should be reported in](#capa-and-its-repositories)**.
* **Perform a [cursory search](https://github.com/fireeye/capa/issues?q=is%3Aissue)** to see if the problem has already been reported. If it has **and the issue is still open**, add a comment to the existing issue instead of opening a new one.
#### How Do I Submit A (Good) Bug Report?
Bugs are tracked as [GitHub issues](https://guides.github.com/features/issues/).
After you've determined [which repository](#capa-and-its-repositories) your bug is related to,
create an issue on that repository and provide the following information by filling in
[the template](./ISSUE_TEMPLATE/bug_report.md).
Explain the problem and include additional details to help maintainers reproduce the problem:
* **Use a clear and descriptive title** for the issue to identify the problem.
* **Describe the exact steps which reproduce the problem** in as much detail as possible. For example, start by explaining how you started capa, e.g. which command exactly you used in the terminal, or how you started capa otherwise.
* **Provide specific examples to demonstrate the steps**. Include links to files or GitHub projects, or copy/pasteable snippets, which you use in those examples. If you're providing snippets in the issue, use [Markdown code blocks](https://help.github.com/articles/markdown-basics/#multiple-lines).
* **Describe the behavior you observed after following the steps** and point out what exactly is the problem with that behavior.
* **Explain which behavior you expected to see instead and why.**
* **Include screenshots and animated GIFs** which show you following the described steps and clearly demonstrate the problem. You can use [this tool](https://www.cockos.com/licecap/) to record GIFs on macOS and Windows, and [this tool](https://github.com/colinkeenan/silentcast) or [this tool](https://github.com/GNOME/byzanz) on Linux.
* **If you're reporting that capa crashed**, include the stack trace from the terminal. Include the stack trace in the issue in a [code block](https://help.github.com/articles/markdown-basics/#multiple-lines), a [file attachment](https://help.github.com/articles/file-attachments-on-issues-and-pull-requests/), or put it in a [gist](https://gist.github.com/) and provide a link to that gist.
* **If the problem wasn't triggered by a specific action**, describe what you were doing before the problem happened and share more information using the guidelines below.
Provide more context by answering these questions:
* **Did the problem start happening recently** (e.g. after updating to a new version of capa) or was this always a problem?
* If the problem started happening recently, **can you reproduce the problem in an older version of capa?** What's the most recent version in which the problem doesn't happen? You can download older versions of capa from [the releases page](https://github.com/fireeye/capa/releases).
* **Can you reliably reproduce the issue?** If not, provide details about how often the problem happens and under which conditions it normally happens.
* If the problem is related to working with files (e.g. opening and editing files), **does the problem happen for all files and projects or only some?** Does the problem happen only when working with local or remote files (e.g. on network drives), with files of a specific type (e.g. only JavaScript or Python files), with large files or files with very long lines, or with files in a specific encoding? Is there anything else special about the files you are using?
Include details about your configuration and environment:
* **Which version of capa are you using?** You can get the exact version by running `capa --version` in your terminal.
* **What's the name and version of the OS you're using**?
### Suggesting Enhancements
This section guides you through submitting an enhancement suggestion for capa, including completely new features and minor improvements to existing functionality. Following these guidelines helps maintainers and the community understand your suggestion and find related suggestions.
Before creating enhancement suggestions, please check [this list](#before-submitting-an-enhancement-suggestion) as you might find out that you don't need to create one. When you are creating an enhancement suggestion, please [include as many details as possible](#how-do-i-submit-a-good-enhancement-suggestion). Fill in [the template](./ISSUE_TEMPLATE/feature_request.md), including the steps that you imagine you would take if the feature you're requesting existed.
#### Before Submitting An Enhancement Suggestion
* **Determine [which repository the enhancement should be suggested in](#capa-and-its-repositories).**
* **Perform a [cursory search](https://github.com/fireeye/capa/issues?q=is%3Aissue)** to see if the enhancement has already been suggested. If it has, add a comment to the existing issue instead of opening a new one.
#### How Do I Submit A (Good) Enhancement Suggestion?
Enhancement suggestions are tracked as [GitHub issues](https://guides.github.com/features/issues/). After you've determined [which repository](#capa-and-its-repositories) your enhancement suggestion is related to, create an issue on that repository and provide the following information:
* **Use a clear and descriptive title** for the issue to identify the suggestion.
* **Provide a step-by-step description of the suggested enhancement** in as much detail as possible.
* **Provide specific examples to demonstrate the steps**. Include copy/pasteable snippets which you use in those examples, as [Markdown code blocks](https://help.github.com/articles/markdown-basics/#multiple-lines).
* **Describe the current behavior** and **explain which behavior you expected to see instead** and why.
* **Include screenshots and animated GIFs** which help you demonstrate the steps or point out the part of capa which the suggestion is related to. You can use [this tool](https://www.cockos.com/licecap/) to record GIFs on macOS and Windows, and [this tool](https://github.com/colinkeenan/silentcast) or [this tool](https://github.com/GNOME/byzanz) on Linux.
* **Explain why this enhancement would be useful** to most capa users and isn't something that can or should be implemented as an external tool that uses capa as a library.
* **Specify which version of capa you're using.** You can get the exact version by running `capa --version` in your terminal.
* **Specify the name and version of the OS you're using.**
### Your First Code Contribution
Unsure where to begin contributing to capa? You can start by looking through these `good-first-issue` and `rule-idea` issues:
* [good-first-issue](https://github.com/fireeye/capa/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22) - issues which should only require a few lines of code, and a test or two.
* [rule-idea](https://github.com/fireeye/capa-rules/issues?q=is%3Aissue+is%3Aopen+label%3A%22rule+idea%22) - issues that describe potential new rule ideas.
Both issue lists are sorted by total number of comments. While not perfect, the number of comments is a reasonable proxy for the impact a given change will have.
#### Local development
capa and all its resources can be developed locally.
For instructions on how to do this, see the "Method 3" section of the [installation guide](https://github.com/fireeye/capa/blob/master/doc/installation.md).
### Pull Requests
The process described here has several goals:
- Maintain capa's quality
- Fix problems that are important to users
- Engage the community in working toward the best possible capa
- Enable a sustainable system for capa's maintainers to review contributions
Please follow these steps to have your contribution considered by the maintainers:
1. Follow the [styleguides](#styleguides)
2. Update the CHANGELOG and add tests and documentation. If they are not needed, indicate that in [the PR template](pull_request_template.md).
3. After you submit your pull request, verify that all [status checks](https://help.github.com/articles/about-status-checks/) are passing <details><summary>What if the status checks are failing? </summary>If a status check is failing, and you believe that the failure is unrelated to your change, please leave a comment on the pull request explaining why you believe the failure is unrelated. A maintainer will re-run the status check for you. If we conclude that the failure was a false positive, then we will open an issue to track that problem with our status check suite.</details>
While the prerequisites above must be satisfied prior to having your pull request reviewed, the reviewer(s) may ask you to complete additional design work, tests, or other changes before your pull request can be ultimately accepted.
## Styleguides
### Git Commit Messages
* Use the present tense ("Add feature" not "Added feature")
* Use the imperative mood ("Move cursor to..." not "Moves cursor to...")
* Prefix the first line with the component in question ("rules: ..." or "render: ...")
* Reference issues and pull requests liberally after the first line
### Python Styleguide
All Python code must adhere to the style guide used by capa:
1. [PEP8](https://www.python.org/dev/peps/pep-0008/), with clarifications from
2. [Willi's style guide](https://docs.google.com/document/d/1iRpeg-w4DtibwytUyC_dDT7IGhNGBP25-nQfuBa-Fyk/edit?usp=sharing), formatted with
3. [isort](https://pypi.org/project/isort/) (with line width 120 and ordered by line length), and formatted with
4. [black](https://github.com/psf/black) (with line width 120), and formatted with
5. [dos2unix](https://linux.die.net/man/1/dos2unix)
Our CI pipeline will reformat and enforce the Python styleguide.
### Rules Styleguide
All (non-nursery) capa rules must:
1. pass the [linter](https://github.com/fireeye/capa/blob/master/scripts/lint.py), and
2. be formatted with [capafmt](https://github.com/fireeye/capa/blob/master/scripts/capafmt.py)
This ensures that all rules meet the same minimum level of quality and are structured in a consistent way.
Our CI pipeline will reformat and enforce the capa rules styleguide.

.github/ISSUE_TEMPLATE/bug_report.md

@@ -1,47 +1,47 @@
---
name: Bug report
about: Create a report to help us improve
---
<!--
# Is your bug report related to capa rules (for example a false positive)?
We use submodules to separate code, rules and test data. If your issue is related to capa rules, please report it at https://github.com/fireeye/capa-rules/issues.
# Have you checked that your issue isn't already filed?
Please search if there is a similar issue at https://github.com/fireeye/capa/issues. If there is already a similar issue, please add more details there instead of opening a new one.
# Have you read capa's Code of Conduct?
By filing an Issue, you are expected to comply with it, including treating everyone with respect: https://github.com/fireeye/capa/blob/master/.github/CODE_OF_CONDUCT.md
# Have you read capa's CONTRIBUTING guide?
It contains helpful information about how to contribute to capa. Check https://github.com/fireeye/capa/blob/master/.github/CONTRIBUTING.md#reporting-bugs
-->
### Description
<!-- Description of the issue -->
### Steps to Reproduce
<!-- 1. First Step -->
<!-- 2. Second Step -->
<!-- 3. and so on… -->
**Expected behavior:**
<!-- What you expect to happen -->
**Actual behavior:**
<!-- What actually happens -->
### Versions
<!-- You can get this information by copying and pasting the output of `capa --version` from the command line.
Please specify the component you're using (e.g. standalone tool or IDA Pro integration) and your Python version.
Also, please include the OS and what version of the OS you're running. -->
### Additional Information
<!-- Any additional information, configuration or data that might be necessary to reproduce the issue. -->

.github/ISSUE_TEMPLATE/feature_request.md

@@ -1,35 +1,35 @@
---
name: Feature request
about: Suggest an idea for capa
---
<!--
# Is your issue related to capa rules (for example an idea for a new rule)?
We use submodules to separate code, rules and test data. If your issue is related to capa rules, please report it at https://github.com/fireeye/capa-rules/issues.
# Have you checked that your issue isn't already filed?
Please search if there is a similar issue at https://github.com/fireeye/capa/issues. If there is already a similar issue, please add more details there instead of opening a new one.
# Have you read capa's Code of Conduct?
By filing an Issue, you are expected to comply with it, including treating everyone with respect: https://github.com/fireeye/capa/blob/master/.github/CODE_OF_CONDUCT.md
# Have you read capa's CONTRIBUTING guide?
It contains helpful information about how to contribute to capa. Check https://github.com/fireeye/capa/blob/master/.github/CONTRIBUTING.md#suggesting-enhancements
-->
### Summary
<!-- One paragraph explanation of the feature. -->
### Motivation
<!-- Why are we doing this? What use cases does it support? What is the expected outcome? -->
### Describe alternatives you've considered
<!-- A clear and concise description of the alternative solutions you've considered. -->
## Additional context
<!-- Add any other context or screenshots about the feature request here. -->

.github/mypy/mypy.ini vendored

@@ -1,76 +0,0 @@
[mypy]
[mypy-halo.*]
ignore_missing_imports = True
[mypy-tqdm.*]
ignore_missing_imports = True
[mypy-ruamel.*]
ignore_missing_imports = True
[mypy-networkx.*]
ignore_missing_imports = True
[mypy-pefile.*]
ignore_missing_imports = True
[mypy-viv_utils.*]
ignore_missing_imports = True
[mypy-flirt.*]
ignore_missing_imports = True
[mypy-smda.*]
ignore_missing_imports = True
[mypy-lief.*]
ignore_missing_imports = True
[mypy-idc.*]
ignore_missing_imports = True
[mypy-vivisect.*]
ignore_missing_imports = True
[mypy-envi.*]
ignore_missing_imports = True
[mypy-PE.*]
ignore_missing_imports = True
[mypy-idaapi.*]
ignore_missing_imports = True
[mypy-idautils.*]
ignore_missing_imports = True
[mypy-ida_bytes.*]
ignore_missing_imports = True
[mypy-ida_kernwin.*]
ignore_missing_imports = True
[mypy-ida_settings.*]
ignore_missing_imports = True
[mypy-ida_funcs.*]
ignore_missing_imports = True
[mypy-ida_loader.*]
ignore_missing_imports = True
[mypy-PyQt5.*]
ignore_missing_imports = True
[mypy-binaryninja.*]
ignore_missing_imports = True
[mypy-pytest.*]
ignore_missing_imports = True
[mypy-devtools.*]
ignore_missing_imports = True
[mypy-elftools.*]
ignore_missing_imports = True

.github/pull_request_template.md

@@ -1,22 +0,0 @@
<!--
Thank you for contributing to capa! <3
Please read capa's CONTRIBUTING guide if you haven't done so already.
It contains helpful information about how to contribute to capa. Check https://github.com/fireeye/capa/blob/master/.github/CONTRIBUTING.md
Please describe the changes in this pull request (PR). Include your motivation and context to help us review.
Please mention the issue your PR addresses (if any):
closes #issue_number
-->
### Checklist
<!-- CHANGELOG.md has a `master (unreleased)` section. Please add bug fixes, new features, breaking changes and anything else you think is worthwhile mentioning in the release notes to this file. -->
- [ ] No CHANGELOG update needed
<!-- Tests prove that your fix/feature works as expected and ensure it doesn't break in the future. -->
- [ ] No new tests needed
<!-- Please help us keep capa documentation up-to-date -->
- [ ] No documentation update needed

.github/pyinstaller/hooks/hook-capstone.py

@@ -1,5 +0,0 @@
# Copyright (C) 2020 FireEye, Inc. All Rights Reserved.
import PyInstaller.utils.hooks
# ref: https://groups.google.com/g/pyinstaller/c/amWi0-66uZI/m/miPoKfWjBAAJ
binaries = PyInstaller.utils.hooks.collect_dynamic_libs("capstone")

.github/pyinstaller/hooks/hook-vivisect.py

@@ -13,144 +13,3 @@ from PyInstaller.utils.hooks import copy_metadata
#
# ref: https://github.com/pyinstaller/pyinstaller/issues/1713#issuecomment-162682084
datas = copy_metadata("vivisect")
excludedimports = [
# viv gui requires these heavy libraries,
# but viv as a library doesn't.
# they shouldn't be installed in our configuration,
# but we'll ensure they don't slip in here (such as on developers' systems).
"PyQt5",
"qt5",
"pyqtwebengine",
# the above are imported by these viv modules.
# so really, we'd want to exclude these submodules of viv.
# but I don't think this works.
"vqt",
"vdb.qt",
"envi.qt",
# unused by capa
"pyasn1",
]
hiddenimports = [
# vivisect does manual/runtime importing of its modules,
# so declare the things that could be imported here.
"vivisect",
"vivisect.analysis",
"vivisect.analysis.amd64",
"vivisect.analysis.amd64",
"vivisect.analysis.amd64.emulation",
"vivisect.analysis.amd64.golang",
"vivisect.analysis.crypto",
"vivisect.analysis.crypto",
"vivisect.analysis.crypto.constants",
"vivisect.analysis.elf",
"vivisect.analysis.elf.elfplt",
"vivisect.analysis.elf.elfplt_late",
"vivisect.analysis.elf.libc_start_main",
"vivisect.analysis.generic",
"vivisect.analysis.generic",
"vivisect.analysis.generic.codeblocks",
"vivisect.analysis.generic.emucode",
"vivisect.analysis.generic.entrypoints",
"vivisect.analysis.generic.funcentries",
"vivisect.analysis.generic.impapi",
"vivisect.analysis.generic.mkpointers",
"vivisect.analysis.generic.pointers",
"vivisect.analysis.generic.pointertables",
"vivisect.analysis.generic.relocations",
"vivisect.analysis.generic.strconst",
"vivisect.analysis.generic.switchcase",
"vivisect.analysis.generic.thunks",
"vivisect.analysis.generic.noret",
"vivisect.analysis.i386",
"vivisect.analysis.i386",
"vivisect.analysis.i386.calling",
"vivisect.analysis.i386.golang",
"vivisect.analysis.i386.importcalls",
"vivisect.analysis.i386.instrhook",
"vivisect.analysis.i386.thunk_bx",
"vivisect.analysis.ms",
"vivisect.analysis.ms",
"vivisect.analysis.ms.hotpatch",
"vivisect.analysis.ms.localhints",
"vivisect.analysis.ms.msvc",
"vivisect.analysis.ms.msvcfunc",
"vivisect.analysis.ms.vftables",
"vivisect.analysis.pe",
"vivisect.impapi.posix.amd64",
"vivisect.impapi.posix.i386",
"vivisect.impapi.windows",
"vivisect.impapi.windows.amd64",
"vivisect.impapi.windows.i386",
"vivisect.impapi.winkern.i386",
"vivisect.impapi.winkern.amd64",
"vivisect.parsers.blob",
"vivisect.parsers.elf",
"vivisect.parsers.ihex",
"vivisect.parsers.macho",
"vivisect.parsers.pe",
"vivisect.storage",
"vivisect.storage.basicfile",
"vstruct.constants",
"vstruct.constants.ntstatus",
"vstruct.defs",
"vstruct.defs.arm7",
"vstruct.defs.bmp",
"vstruct.defs.dns",
"vstruct.defs.elf",
"vstruct.defs.gif",
"vstruct.defs.ihex",
"vstruct.defs.inet",
"vstruct.defs.java",
"vstruct.defs.kdcom",
"vstruct.defs.macho",
"vstruct.defs.macho.const",
"vstruct.defs.macho.fat",
"vstruct.defs.macho.loader",
"vstruct.defs.macho.stabs",
"vstruct.defs.minidump",
"vstruct.defs.pcap",
"vstruct.defs.pe",
"vstruct.defs.pptp",
"vstruct.defs.rar",
"vstruct.defs.swf",
"vstruct.defs.win32",
"vstruct.defs.windows",
"vstruct.defs.windows.win_5_1_i386",
"vstruct.defs.windows.win_5_1_i386.ntdll",
"vstruct.defs.windows.win_5_1_i386.ntoskrnl",
"vstruct.defs.windows.win_5_1_i386.win32k",
"vstruct.defs.windows.win_5_2_i386",
"vstruct.defs.windows.win_5_2_i386.ntdll",
"vstruct.defs.windows.win_5_2_i386.ntoskrnl",
"vstruct.defs.windows.win_5_2_i386.win32k",
"vstruct.defs.windows.win_6_1_amd64",
"vstruct.defs.windows.win_6_1_amd64.ntdll",
"vstruct.defs.windows.win_6_1_amd64.ntoskrnl",
"vstruct.defs.windows.win_6_1_amd64.win32k",
"vstruct.defs.windows.win_6_1_i386",
"vstruct.defs.windows.win_6_1_i386.ntdll",
"vstruct.defs.windows.win_6_1_i386.ntoskrnl",
"vstruct.defs.windows.win_6_1_i386.win32k",
"vstruct.defs.windows.win_6_1_wow64",
"vstruct.defs.windows.win_6_1_wow64.ntdll",
"vstruct.defs.windows.win_6_2_amd64",
"vstruct.defs.windows.win_6_2_amd64.ntdll",
"vstruct.defs.windows.win_6_2_amd64.ntoskrnl",
"vstruct.defs.windows.win_6_2_amd64.win32k",
"vstruct.defs.windows.win_6_2_i386",
"vstruct.defs.windows.win_6_2_i386.ntdll",
"vstruct.defs.windows.win_6_2_i386.ntoskrnl",
"vstruct.defs.windows.win_6_2_i386.win32k",
"vstruct.defs.windows.win_6_2_wow64",
"vstruct.defs.windows.win_6_2_wow64.ntdll",
"vstruct.defs.windows.win_6_3_amd64",
"vstruct.defs.windows.win_6_3_amd64.ntdll",
"vstruct.defs.windows.win_6_3_amd64.ntoskrnl",
"vstruct.defs.windows.win_6_3_i386",
"vstruct.defs.windows.win_6_3_i386.ntdll",
"vstruct.defs.windows.win_6_3_i386.ntoskrnl",
"vstruct.defs.windows.win_6_3_wow64",
"vstruct.defs.windows.win_6_3_wow64.ntdll",
]

.github/pyinstaller/pyinstaller.spec

@@ -16,10 +16,9 @@ with open('./capa/version.py', 'wb') as f:
# - commits since
# g------- git hash fragment
version = (subprocess.check_output(["git", "describe", "--always", "--tags", "--long"])
.decode("utf-8")
.strip()
.replace("tags/", ""))
f.write(("__version__ = '%s'" % version).encode("utf-8"))
f.write("__version__ = '%s'" % version)
a = Analysis(
# when invoking pyinstaller from the project root,
@@ -33,7 +32,6 @@ a = Analysis(
# this gets invoked from the directory of the spec file,
# i.e. ./.github/pyinstaller
('../../rules', 'rules'),
('../../sigs', 'sigs'),
# capa.render.default uses tabulate that depends on wcwidth.
# it seems wcwidth uses a json file `version.json`
@@ -43,6 +41,128 @@ a = Analysis(
# ref: https://stackoverflow.com/a/62278462/87207
(os.path.dirname(wcwidth.__file__), 'wcwidth')
],
hiddenimports=[
# vivisect does manual/runtime importing of its modules,
# so declare the things that could be imported here.
"vivisect",
"vivisect.analysis",
"vivisect.analysis.amd64",
"vivisect.analysis.amd64",
"vivisect.analysis.amd64.emulation",
"vivisect.analysis.amd64.golang",
"vivisect.analysis.crypto",
"vivisect.analysis.crypto",
"vivisect.analysis.crypto.constants",
"vivisect.analysis.elf",
"vivisect.analysis.elf",
"vivisect.analysis.elf.elfplt",
"vivisect.analysis.elf.libc_start_main",
"vivisect.analysis.generic",
"vivisect.analysis.generic",
"vivisect.analysis.generic.codeblocks",
"vivisect.analysis.generic.emucode",
"vivisect.analysis.generic.entrypoints",
"vivisect.analysis.generic.funcentries",
"vivisect.analysis.generic.impapi",
"vivisect.analysis.generic.mkpointers",
"vivisect.analysis.generic.pointers",
"vivisect.analysis.generic.pointertables",
"vivisect.analysis.generic.relocations",
"vivisect.analysis.generic.strconst",
"vivisect.analysis.generic.switchcase",
"vivisect.analysis.generic.thunks",
"vivisect.analysis.i386",
"vivisect.analysis.i386",
"vivisect.analysis.i386.calling",
"vivisect.analysis.i386.golang",
"vivisect.analysis.i386.importcalls",
"vivisect.analysis.i386.instrhook",
"vivisect.analysis.i386.thunk_bx",
"vivisect.analysis.ms",
"vivisect.analysis.ms",
"vivisect.analysis.ms.hotpatch",
"vivisect.analysis.ms.localhints",
"vivisect.analysis.ms.msvc",
"vivisect.analysis.ms.msvcfunc",
"vivisect.analysis.ms.vftables",
"vivisect.analysis.pe",
"vivisect.impapi.posix.amd64",
"vivisect.impapi.posix.i386",
"vivisect.impapi.windows",
"vivisect.impapi.windows.amd64",
"vivisect.impapi.windows.i386",
"vivisect.impapi.winkern.i386",
"vivisect.impapi.winkern.amd64",
"vivisect.parsers.blob",
"vivisect.parsers.elf",
"vivisect.parsers.ihex",
"vivisect.parsers.macho",
"vivisect.parsers.pe",
"vivisect.parsers.utils",
"vivisect.storage",
"vivisect.storage.basicfile",
"vstruct.constants",
"vstruct.constants.ntstatus",
"vstruct.defs",
"vstruct.defs.arm7",
"vstruct.defs.bmp",
"vstruct.defs.dns",
"vstruct.defs.elf",
"vstruct.defs.gif",
"vstruct.defs.ihex",
"vstruct.defs.inet",
"vstruct.defs.java",
"vstruct.defs.kdcom",
"vstruct.defs.macho",
"vstruct.defs.macho.const",
"vstruct.defs.macho.fat",
"vstruct.defs.macho.loader",
"vstruct.defs.macho.stabs",
"vstruct.defs.minidump",
"vstruct.defs.pcap",
"vstruct.defs.pe",
"vstruct.defs.pptp",
"vstruct.defs.rar",
"vstruct.defs.swf",
"vstruct.defs.win32",
"vstruct.defs.windows",
"vstruct.defs.windows.win_5_1_i386",
"vstruct.defs.windows.win_5_1_i386.ntdll",
"vstruct.defs.windows.win_5_1_i386.ntoskrnl",
"vstruct.defs.windows.win_5_1_i386.win32k",
"vstruct.defs.windows.win_5_2_i386",
"vstruct.defs.windows.win_5_2_i386.ntdll",
"vstruct.defs.windows.win_5_2_i386.ntoskrnl",
"vstruct.defs.windows.win_5_2_i386.win32k",
"vstruct.defs.windows.win_6_1_amd64",
"vstruct.defs.windows.win_6_1_amd64.ntdll",
"vstruct.defs.windows.win_6_1_amd64.ntoskrnl",
"vstruct.defs.windows.win_6_1_amd64.win32k",
"vstruct.defs.windows.win_6_1_i386",
"vstruct.defs.windows.win_6_1_i386.ntdll",
"vstruct.defs.windows.win_6_1_i386.ntoskrnl",
"vstruct.defs.windows.win_6_1_i386.win32k",
"vstruct.defs.windows.win_6_1_wow64",
"vstruct.defs.windows.win_6_1_wow64.ntdll",
"vstruct.defs.windows.win_6_2_amd64",
"vstruct.defs.windows.win_6_2_amd64.ntdll",
"vstruct.defs.windows.win_6_2_amd64.ntoskrnl",
"vstruct.defs.windows.win_6_2_amd64.win32k",
"vstruct.defs.windows.win_6_2_i386",
"vstruct.defs.windows.win_6_2_i386.ntdll",
"vstruct.defs.windows.win_6_2_i386.ntoskrnl",
"vstruct.defs.windows.win_6_2_i386.win32k",
"vstruct.defs.windows.win_6_2_wow64",
"vstruct.defs.windows.win_6_2_wow64.ntdll",
"vstruct.defs.windows.win_6_3_amd64",
"vstruct.defs.windows.win_6_3_amd64.ntdll",
"vstruct.defs.windows.win_6_3_amd64.ntoskrnl",
"vstruct.defs.windows.win_6_3_i386",
"vstruct.defs.windows.win_6_3_i386.ntdll",
"vstruct.defs.windows.win_6_3_i386.ntoskrnl",
"vstruct.defs.windows.win_6_3_wow64",
"vstruct.defs.windows.win_6_3_wow64.ntdll",
],
# when invoking pyinstaller from the project root,
# this gets run from the project root.
hookspath=['.github/pyinstaller/hooks'],
@@ -60,25 +180,6 @@ a = Analysis(
# since we don't spawn a notebook, we can safely remove these.
"IPython",
"ipywidgets",
# these are pulled in by networkx
# but we don't need to compute the strongly connected components.
"numpy",
"scipy",
"matplotlib",
"pandas",
"pytest",
# deps from viv that we don't use.
# this duplicates the entries in `hook-vivisect`,
# but works better this way.
"vqt",
"vdb.qt",
"envi.qt",
"PyQt5",
"qt5",
"pyqtwebengine",
"pyasn1"
])
a.binaries = a.binaries - TOC([
@@ -109,4 +210,5 @@ exe = EXE(pyz,
# a.datas,
# strip=None,
# upx=True,
# name='capa-dat')
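For orientation, a hedged, minimal sketch of how a PyInstaller spec ties together the `hiddenimports`, `hookspath`, and `excludes` seen in this diff; the module names and paths below are illustrative stand-ins, not capa's full spec:

```python
# -*- mode: python -*-
# minimal spec sketch (assumption: a trimmed stand-in, not capa's real spec).
# Analysis, PYZ, and EXE are injected into spec files by PyInstaller itself.
a = Analysis(
    ["capa/main.py"],
    # dynamically imported modules that static analysis cannot discover:
    hiddenimports=["vivisect.analysis.elf", "vstruct.defs.pe"],
    # project-local hooks, resolved relative to where pyinstaller is invoked:
    hookspath=[".github/pyinstaller/hooks"],
    # transitive dependencies that are never used at runtime:
    excludes=["IPython", "numpy", "scipy", "matplotlib", "PyQt5"],
)
pyz = PYZ(a.pure, a.zipped_data)
exe = EXE(pyz, a.scripts, a.binaries, a.zipfiles, a.datas, name="capa")
```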


@@ -1,116 +1,82 @@
name: build
on:
push:
branches: [master]
release:
types: [edited, published]
jobs:
build:
name: PyInstaller for ${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
matrix:
include:
- os: ubuntu-18.04
# use old linux so that the shared library versioning is more portable
artifact_name: capa
asset_name: linux
- os: windows-2019
artifact_name: capa.exe
asset_name: windows
- os: macos-10.15
artifact_name: capa
asset_name: macos
steps:
- name: Checkout capa
uses: actions/checkout@v2
with:
submodules: true
# using Python 3.8 to support running across multiple operating systems including Windows 7
- name: Set up Python 3.8
uses: actions/setup-python@v2
with:
python-version: 3.8
- if: matrix.os == 'ubuntu-18.04'
run: sudo apt-get install -y libyaml-dev
- name: Install PyInstaller
run: pip install 'pyinstaller==4.2'
- name: Install capa
run: pip install -e .
- name: Build standalone executable
run: pyinstaller .github/pyinstaller/pyinstaller.spec
- name: Does it run (PE)?
run: dist/capa "tests/data/Practical Malware Analysis Lab 01-01.dll_"
- name: Does it run (Shellcode)?
run: dist/capa "tests/data/499c2a85f6e8142c3f48d4251c9c7cd6.raw32"
- name: Does it run (ELF)?
run: dist/capa "tests/data/7351f8a40c5450557b24622417fc478d.elf_"
- uses: actions/upload-artifact@v2
with:
name: ${{ matrix.asset_name }}
path: dist/${{ matrix.artifact_name }}
test_run:
# test that binaries run on push to master
if: github.event_name == 'push'
name: Test run on ${{ matrix.os }}
runs-on: ${{ matrix.os }}
needs: [build]
strategy:
matrix:
include:
# OSs not already tested above
- os: ubuntu-18.04
artifact_name: capa
asset_name: linux
- os: ubuntu-20.04
artifact_name: capa
asset_name: linux
- os: windows-2016
artifact_name: capa.exe
asset_name: windows
steps:
- name: Download ${{ matrix.asset_name }}
uses: actions/download-artifact@v2
with:
name: ${{ matrix.asset_name }}
- name: Set executable flag
if: matrix.os != 'windows-2016'
run: chmod +x ${{ matrix.artifact_name }}
- name: Run capa
run: ./${{ matrix.artifact_name }} -h
zip_and_upload:
# upload zipped binaries to Release page
if: github.event_name == 'release'
name: zip and upload ${{ matrix.asset_name }}
runs-on: ubuntu-20.04
needs: [build]
strategy:
matrix:
include:
- asset_name: linux
artifact_name: capa
- asset_name: windows
artifact_name: capa.exe
- asset_name: macos
artifact_name: capa
steps:
- name: Download ${{ matrix.asset_name }}
uses: actions/download-artifact@v2
with:
name: ${{ matrix.asset_name }}
- name: Set executable flag
run: chmod +x ${{ matrix.artifact_name }}
- name: Set zip name
run: echo "zip_name=capa-${GITHUB_REF#refs/tags/}-${{ matrix.asset_name }}.zip" >> $GITHUB_ENV
- name: Zip ${{ matrix.artifact_name }} into ${{ env.zip_name }}
run: zip ${{ env.zip_name }} ${{ matrix.artifact_name }}
- name: Upload ${{ env.zip_name }} to GH Release
uses: svenstaro/upload-release-action@v2
with:
repo_token: ${{ secrets.GITHUB_TOKEN }}
file: ${{ env.zip_name }}
tag: ${{ github.ref }}
name: build
on:
release:
types: [edited, published]
jobs:
build:
name: PyInstaller for ${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
matrix:
include:
- os: ubuntu-16.04
# use old linux so that the shared library versioning is more portable
artifact_name: capa
asset_name: linux
- os: windows-latest
artifact_name: capa.exe
asset_name: windows
- os: macos-latest
artifact_name: capa
asset_name: macos
steps:
- name: Checkout capa
uses: actions/checkout@v2
with:
submodules: true
- name: Set up Python 2.7
uses: actions/setup-python@v2
with:
python-version: 2.7
- if: matrix.os == 'ubuntu-latest'
run: sudo apt-get install -y libyaml-dev
- if: matrix.os == 'windows-latest'
run: |
choco install vcredist2008
choco install --ignore-dependencies vcpython27
- name: Install PyInstaller
# pyinstaller 4 doesn't support Python 2.7
run: pip install 'pyinstaller==3.*'
- name: Install capa
run: pip install -e .
- name: Build standalone executable
run: pyinstaller .github/pyinstaller/pyinstaller.spec
- name: Does it run?
run: dist/capa "tests/data/Practical Malware Analysis Lab 01-01.dll_"
- uses: actions/upload-artifact@v2
with:
name: ${{ matrix.asset_name }}
path: dist/${{ matrix.artifact_name }}
zip:
name: zip ${{ matrix.asset_name }}
runs-on: ubuntu-latest
needs: build
strategy:
matrix:
include:
- asset_name: linux
artifact_name: capa
- asset_name: windows
artifact_name: capa.exe
- asset_name: macos
artifact_name: capa
steps:
- name: Download ${{ matrix.asset_name }}
uses: actions/download-artifact@v2
with:
name: ${{ matrix.asset_name }}
- name: Set executable flag
run: chmod +x ${{ matrix.artifact_name }}
- name: Set zip name
run: echo "zip_name=capa-${GITHUB_REF#refs/tags/}-${{ matrix.asset_name }}.zip" >> $GITHUB_ENV
- name: Zip ${{ matrix.artifact_name }} into ${{ env.zip_name }}
run: zip ${{ env.zip_name }} ${{ matrix.artifact_name }}
- name: Upload ${{ env.zip_name }} to GH Release
uses: svenstaro/upload-release-action@v2
with:
repo_token: ${{ secrets.GITHUB_TOKEN }}
file: ${{ env.zip_name }}
tag: ${{ github.ref }}


@@ -1,41 +0,0 @@
name: changelog
on:
# We need pull_request_target instead of pull_request because a write
# repository token is needed to add a review to a PR. DO NOT BUILD
# OR RUN UNTRUSTED CODE FROM PRs IN THIS ACTION
pull_request_target:
types: [opened, edited, synchronize]
jobs:
check_changelog:
# no need to check for dependency updates via dependabot
if: github.actor != 'dependabot[bot]' && github.actor != 'dependabot-preview[bot]'
runs-on: ubuntu-20.04
env:
NO_CHANGELOG: '[x] No CHANGELOG update needed'
steps:
- name: Get changed files
id: files
uses: Ana06/get-changed-files@v1.2
- name: check changelog updated
id: changelog_updated
env:
PR_BODY: ${{ github.event.pull_request.body }}
FILES: ${{ steps.files.outputs.modified }}
run: |
echo $FILES | grep -qF 'CHANGELOG.md' || echo $PR_BODY | grep -qiF "$NO_CHANGELOG"
- name: Reject pull request if no CHANGELOG update
if: ${{ always() && steps.changelog_updated.outcome == 'failure' }}
uses: Ana06/automatic-pull-request-review@v0.1.0
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
event: REQUEST_CHANGES
body: "Please add bug fixes, new features, breaking changes and anything else you think is worthwhile mentioning to the `master (unreleased)` section of CHANGELOG.md. If no CHANGELOG update is needed add the following to the PR description: `${{ env.NO_CHANGELOG }}`"
allow_duplicate: false
- name: Dismiss previous review if CHANGELOG update
uses: Ana06/automatic-pull-request-review@v0.1.0
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
event: DISMISS
body: "CHANGELOG updated or no update needed, thanks! :smile:"


@@ -1,30 +1,29 @@
# This workflow will upload a Python package using Twine when a release is created
# For more information see: https://help.github.com/en/actions/language-and-framework-guides/using-python-with-github-actions#publishing-to-package-registries
name: publish to pypi
on:
release:
types: [published]
jobs:
deploy:
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@v2
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: '3.6'
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install setuptools wheel twine
- name: Build and publish
env:
TWINE_USERNAME: ${{ secrets.PYPI_USERNAME }}
TWINE_PASSWORD: ${{ secrets.PYPI_PASSWORD }}
run: |
python setup.py sdist bdist_wheel
twine upload --skip-existing dist/*
# This workflow will upload a Python package using Twine when a release is created
# For more information see: https://help.github.com/en/actions/language-and-framework-guides/using-python-with-github-actions#publishing-to-package-registries
name: publish to pypi
on:
release:
types: [published]
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: '2.7'
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install setuptools wheel twine
- name: Build and publish
env:
TWINE_USERNAME: ${{ secrets.PYPI_USERNAME }}
TWINE_PASSWORD: ${{ secrets.PYPI_PASSWORD }}
run: |
python setup.py sdist bdist_wheel
twine upload --skip-existing dist/*


@@ -1,29 +0,0 @@
name: tag
on:
release:
types: [published]
jobs:
tag:
name: Tag capa rules
runs-on: ubuntu-20.04
steps:
- name: Checkout capa-rules
uses: actions/checkout@v2
with:
repository: fireeye/capa-rules
token: ${{ secrets.CAPA_TOKEN }}
- name: Tag capa-rules
run: |
# user information is needed to create annotated tags (with a message)
git config user.email 'capa-dev@fireeye.com'
git config user.name 'Capa Bot'
name=${{ github.event.release.tag_name }}
git tag $name -m "https://github.com/fireeye/capa/releases/$name"
- name: Push tag to capa-rules
uses: ad-m/github-push-action@master
with:
repository: fireeye/capa-rules
github_token: ${{ secrets.CAPA_TOKEN }}
tags: true


@@ -6,24 +6,9 @@ on:
pull_request:
branches: [ master ]
# save workspaces to speed up testing
env:
CAPA_SAVE_WORKSPACE: "True"
jobs:
changelog_format:
runs-on: ubuntu-20.04
steps:
- name: Checkout capa
uses: actions/checkout@v2
# The sync GH action in capa-rules relies on a single '- *$' in the CHANGELOG file
- name: Ensure CHANGELOG has '- *$'
run: |
number=$(grep '\- *$' CHANGELOG.md | wc -l)
if [ $number != 1 ]; then exit 1; fi
code_style:
runs-on: ubuntu-20.04
runs-on: ubuntu-latest
steps:
- name: Checkout capa
uses: actions/checkout@v2
@@ -32,18 +17,16 @@ jobs:
with:
python-version: 3.8
- name: Install dependencies
run: pip install -e .[dev]
run: pip install 'isort==5.*' black
- name: Lint with isort
run: isort --profile black --length-sort --line-width 120 -c .
- name: Lint with black
run: black -l 120 --check .
- name: Check types with mypy
run: mypy --config-file .github/mypy/mypy.ini capa/ scripts/ tests/
rule_linter:
runs-on: ubuntu-20.04
runs-on: ubuntu-latest
steps:
- name: Checkout capa with submodules
- name: Checkout capa with rules submodule
uses: actions/checkout@v2
with:
submodules: true
@@ -51,40 +34,37 @@ jobs:
uses: actions/setup-python@v2
with:
python-version: 3.8
# We don't need vivisect, so we can install capa using Python3
- name: Install capa
run: pip install -e .
- name: Run rule linter
run: python scripts/lint.py rules/
tests:
name: Tests in ${{ matrix.python-version }} on ${{ matrix.os }}
runs-on: ${{ matrix.os }}
name: Tests in ${{ matrix.python }}
runs-on: ubuntu-latest
needs: [code_style, rule_linter]
strategy:
fail-fast: false
matrix:
os: [ubuntu-20.04, windows-2019, macos-10.15]
# across all operating systems
python-version: [3.6, 3.9]
include:
# on Ubuntu run these as well
- os: ubuntu-20.04
python-version: 3.7
- os: ubuntu-20.04
python-version: 3.8
- python: 2.7
- python: 3.7
- python: 3.8
- python: 3.9.1
steps:
- name: Checkout capa with submodules
uses: actions/checkout@v2
with:
submodules: true
- name: Set up Python ${{ matrix.python-version }}
- name: Set up Python ${{ matrix.python }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
python-version: ${{ matrix.python }}
- name: Install pyyaml
if: matrix.os == 'ubuntu-20.04'
run: sudo apt-get install -y libyaml-dev
- name: Install capa
run: pip install -e .[dev]
- name: Run tests
run: pytest -v tests/
run: pytest tests/

File diff suppressed because it is too large


@@ -1,20 +1,14 @@
![capa](https://github.com/fireeye/capa/blob/master/.github/logo.png)
![capa](.github/logo.png)
[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/flare-capa)](https://pypi.org/project/flare-capa)
[![Last release](https://img.shields.io/github/v/release/fireeye/capa)](https://github.com/fireeye/capa/releases)
[![Number of rules](https://img.shields.io/badge/rules-633-blue.svg)](https://github.com/fireeye/capa-rules)
[![CI status](https://github.com/fireeye/capa/workflows/CI/badge.svg)](https://github.com/fireeye/capa/actions?query=workflow%3ACI+event%3Apush+branch%3Amaster)
[![Downloads](https://img.shields.io/github/downloads/fireeye/capa/total)](https://github.com/fireeye/capa/releases)
[![Number of rules](https://img.shields.io/badge/rules-455-blue.svg)](https://github.com/fireeye/capa-rules)
[![License](https://img.shields.io/badge/license-Apache--2.0-green.svg)](LICENSE.txt)
capa detects capabilities in executable files.
You run it against a PE, ELF, or shellcode file and it tells you what it thinks the program can do.
You run it against a PE file or shellcode and it tells you what it thinks the program can do.
For example, it might suggest that the file is a backdoor, is capable of installing services, or relies on HTTP to communicate.
Check out:
- the overview in our first [capa blog post](https://www.fireeye.com/blog/threat-research/2020/07/capa-automatically-identify-malware-capabilities.html)
- the major version 2.0 updates described in our [second blog post](https://www.fireeye.com/blog/threat-research/2021/07/capa-2-better-stronger-faster.html)
- the major version 3.0 (ELF support) described in the [third blog post](https://www.fireeye.com/blog/threat-research/2021/09/elfant-in-the-room-capa-v3.html)
Check out the overview in our first [capa blog post](https://www.fireeye.com/blog/threat-research/2020/07/capa-automatically-identify-malware-capabilities.html).
```
$ capa.exe suspicious.exe
@@ -68,9 +62,16 @@ $ capa.exe suspicious.exe
Download stable releases of the standalone capa binaries [here](https://github.com/fireeye/capa/releases). You can run the standalone binaries without installation. capa is a command line tool that should be run from the terminal.
To use capa as a library or integrate with another tool, see [doc/installation.md](https://github.com/fireeye/capa/blob/master/doc/installation.md) for further setup instructions.
<!--
Alternatively, you can fetch a nightly build of a standalone binary from one of the following links. These are built using the latest development branch.
- Windows 64bit: TODO
- Linux: TODO
- OSX: TODO
-->
For more information about how to use capa, see [doc/usage.md](https://github.com/fireeye/capa/blob/master/doc/usage.md).
To use capa as a library or integrate with another tool, see [doc/installation.md](doc/installation.md) for further setup instructions.
For more information about how to use capa, see [doc/usage.md](doc/usage.md).
# example
@@ -87,7 +88,7 @@ This is useful for at least two reasons:
- it shows where within the binary an experienced analyst might study with IDA Pro
```
$ capa.exe suspicious.exe -vv
λ capa.exe suspicious.exe -vv
...
execute shell command and capture output
namespace c2/shell
@@ -145,21 +146,19 @@ rule:
The [github.com/fireeye/capa-rules](https://github.com/fireeye/capa-rules) repository contains hundreds of standard library rules that are distributed with capa.
Please learn to write rules and contribute new entries as you find interesting techniques in malware.
If you use IDA Pro, then you can use the [capa explorer](https://github.com/fireeye/capa/tree/master/capa/ida/plugin) plugin.
capa explorer helps you identify interesting areas of a program and build new capa rules using features extracted directly from your IDA Pro database.
If you use IDA Pro, then you can use the [capa explorer IDA plugin](capa/ida/plugin/).
capa explorer lets you quickly identify and navigate to interesting areas of a program and dissect capa rule matches at
the assembly level.
![capa + IDA Pro integration](https://github.com/fireeye/capa/blob/master/doc/img/explorer_expanded.png)
![capa + IDA Pro integration](doc/img/ida_plugin_intro.gif)
# further information
## capa
- [Installation](https://github.com/fireeye/capa/blob/master/doc/installation.md)
- [Usage](https://github.com/fireeye/capa/blob/master/doc/usage.md)
- [Limitations](https://github.com/fireeye/capa/blob/master/doc/limitations.md)
- [Contributing Guide](https://github.com/fireeye/capa/blob/master/.github/CONTRIBUTING.md)
- [doc/installation](doc/installation.md)
- [doc/usage](doc/usage.md)
- [doc/limitations](doc/limitations.md)
- [Contributing Guide](.github/CONTRIBUTING.md)
## capa rules
- [capa-rules repository](https://github.com/fireeye/capa-rules)
- [capa-rules rule format](https://github.com/fireeye/capa-rules/blob/master/doc/format.md)
## capa testfiles
The [capa-testfiles repository](https://github.com/fireeye/capa-testfiles) contains the data we use to test capa's code and rules


@@ -8,23 +8,11 @@
import copy
import collections
from typing import Set, Dict, List, Tuple, Union, Mapping, Iterable
import capa.rules
import capa.features.common
from capa.features.common import Feature
# a collection of features and the locations at which they are found.
#
# used throughout matching as the context in which features are searched:
# to check if a feature exists, do: `Number(0x10) in features`.
# to collect the locations of a feature, do: `features[Number(0x10)]`
#
# aliased here so that the type can be documented and xref'd.
FeatureSet = Dict[Feature, Set[int]]
import capa.features
class Statement:
class Statement(object):
"""
superclass for structural nodes, such as and/or/not.
this exists to provide a default impl for `__str__` and `__repr__`,
@@ -45,7 +33,7 @@ class Statement:
def __repr__(self):
return str(self)
def evaluate(self, features: FeatureSet) -> "Result":
def evaluate(self, ctx):
"""
classes that inherit `Statement` must implement `evaluate`
@@ -62,7 +50,7 @@ class Statement:
yield self.child
if hasattr(self, "children"):
for child in getattr(self, "children"):
for child in self.children:
yield child
def replace_child(self, existing, new):
@@ -71,13 +59,12 @@ class Statement:
self.child = new
if hasattr(self, "children"):
children = getattr(self, "children")
for i, child in enumerate(children):
for i, child in enumerate(self.children):
if child is existing:
children[i] = new
self.children[i] = new
class Result:
class Result(object):
"""
represents the results of an evaluation of statements against features.
@@ -91,7 +78,7 @@ class Result:
we need this so that we can render the tree of expressions and their results.
"""
def __init__(self, success: bool, statement: Union[Statement, Feature], children: List["Result"], locations=None):
def __init__(self, success, statement, children, locations=None):
"""
args:
success (bool)
@@ -212,40 +199,38 @@ class Subscope(Statement):
raise ValueError("cannot evaluate a subscope directly!")
# mapping from rule name to list of: (location of match, result object)
#
# used throughout matching and rendering to collect the results
# of statement evaluation and their locations.
#
# to check if a rule matched, do: `"TCP client" in matches`.
# to find where a rule matched, do: `map(first, matches["TCP client"])`
# to see how a rule matched, do:
#
# for address, match_details in matches["TCP client"]:
# inspect(match_details)
#
# aliased here so that the type can be documented and xref'd.
MatchResults = Mapping[str, List[Tuple[int, Result]]]
def index_rule_matches(features: FeatureSet, rule: "capa.rules.Rule", locations: Iterable[int]):
def topologically_order_rules(rules):
"""
record into the given featureset that the given rule matched at the given locations.
order the given rules such that dependencies show up before dependents.
this means that as we match rules, we can add features for the matches, and these
will be matched by subsequent rules if they follow this order.
naively, this is just adding a MatchedRule feature;
however, we also want to record matches for the rule's namespaces.
updates `features` in-place. doesn't modify the remaining arguments.
assumes that the rule dependency graph is a DAG.
"""
features[capa.features.common.MatchedRule(rule.name)].update(locations)
namespace = rule.meta.get("namespace")
if namespace:
while namespace:
features[capa.features.common.MatchedRule(namespace)].update(locations)
namespace, _, _ = namespace.rpartition("/")
# we evaluate `rules` multiple times, so if it's a generator, realize it into a list.
rules = list(rules)
namespaces = capa.rules.index_rules_by_namespace(rules)
rules = {rule.name: rule for rule in rules}
seen = set([])
ret = []
def rec(rule):
if rule.name in seen:
return
for dep in rule.get_dependencies(namespaces):
rec(rules[dep])
ret.append(rule)
seen.add(rule.name)
for rule in rules.values():
rec(rule)
return ret
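To make the recursion above concrete, a hedged sketch with a hypothetical stand-in for `capa.rules.Rule` (`FakeRule` is invented for illustration and is not part of capa):

```python
# hypothetical stand-in: just enough surface for the ordering idea.
class FakeRule:
    def __init__(self, name, deps):
        self.name = name
        self.deps = deps

    def get_dependencies(self, namespaces):
        return self.deps

# "B" contains a `match: A` statement, so it depends on "A".
# the DFS visits B, descends into A, appends A, then appends B:
# topologically_order_rules([b, a]) would therefore yield [A, B].
a = FakeRule("A", [])
b = FakeRule("B", ["A"])
```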
def match(rules: List["capa.rules.Rule"], features: FeatureSet, va: int) -> Tuple[FeatureSet, MatchResults]:
def match(rules, features, va):
"""
Args:
rules (List[capa.rules.Rule]): these must already be ordered topologically by dependency.
@@ -253,11 +238,11 @@ def match(rules: List["capa.rules.Rule"], features: FeatureSet, va: int) -> Tupl
va (int): location of the features
Returns:
Tuple[FeatureSet, MatchResults]: two-tuple with entries:
- set of features used for matching (which may be a superset of the given `features` argument, due to rule match features), and
- mapping from rule name to [(location of match, result object)]
Tuple[List[capa.features.Feature], Dict[str, Tuple[int, capa.engine.Result]]]: two-tuple with entries:
- list of features used for matching (which may be greater than argument, due to rule match features), and
- mapping from rule name to (location of match, result object)
"""
results = collections.defaultdict(list) # type: MatchResults
results = collections.defaultdict(list)
# copy features so that we can modify it
# without affecting the caller (keep this function pure)
@@ -269,9 +254,12 @@ def match(rules: List["capa.rules.Rule"], features: FeatureSet, va: int) -> Tupl
res = rule.evaluate(features)
if res:
results[rule.name].append((va, res))
# we need to update the current `features`
# because subsequent iterations of this loop may use newly added features,
# such as rule or namespace matches.
index_rule_matches(features, rule, [va])
features[capa.features.MatchedRule(rule.name)].add(va)
namespace = rule.meta.get("namespace")
if namespace:
while namespace:
features[capa.features.MatchedRule(namespace)].add(va)
namespace, _, _ = namespace.rpartition("/")
return (features, results)
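Putting the pieces above together, a hedged sketch of driving `match()` by hand; the rule list is assumed to come from `capa.rules` and to already be ordered topologically:

```python
# hedged sketch: a FeatureSet is Dict[Feature, Set[int]], so membership and
# location lookup behave exactly as the comments above describe.
import collections

from capa.features.insn import Number  # assumption: Number lives in capa.features.insn

features = collections.defaultdict(set)
features[Number(0x10)].add(0x401000)    # Number(0x10) observed at 0x401000

assert Number(0x10) in features
assert features[Number(0x10)] == {0x401000}

# features, matches = capa.engine.match(rules, features, 0x401000)
# for rule_name, hits in matches.items():   # MatchResults
#     for va, result in hits:
#         print(hex(va), rule_name)
```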


@@ -0,0 +1,222 @@
# Copyright (C) 2020 FireEye, Inc. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at: [package root]/LICENSE.txt
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import re
import sys
import codecs
import logging
import capa.engine
logger = logging.getLogger(__name__)
MAX_BYTES_FEATURE_SIZE = 0x100
# thunks may be chained so we specify a delta to control the depth to which these chains are explored
THUNK_CHAIN_DEPTH_DELTA = 5
# identifiers for supported architectures names that tweak a feature
# for example, offset/x32
ARCH_X32 = "x32"
ARCH_X64 = "x64"
VALID_ARCH = (ARCH_X32, ARCH_X64)
def bytes_to_str(b):
if sys.version_info[0] >= 3:
return str(codecs.encode(b, "hex").decode("utf-8"))
else:
return codecs.encode(b, "hex")
def hex_string(h):
""" render hex string e.g. "0a40b1" as "0A 40 B1" """
return " ".join(h[i : i + 2] for i in range(0, len(h), 2)).upper()
class Feature(object):
def __init__(self, value, arch=None, description=None):
"""
Args:
value (any): the value of the feature, such as the number or string.
arch (str): one of the VALID_ARCH values, or None.
When None, then the feature applies to any architecture.
Modifies the feature name from `feature` to `feature/arch`, like `offset/x32`.
description (str): a human-readable description that explains the feature value.
"""
super(Feature, self).__init__()
if arch is not None:
if arch not in VALID_ARCH:
raise ValueError("arch '%s' must be one of %s" % (arch, VALID_ARCH))
self.name = self.__class__.__name__.lower() + "/" + arch
else:
self.name = self.__class__.__name__.lower()
self.value = value
self.arch = arch
self.description = description
def __hash__(self):
return hash((self.name, self.value, self.arch))
def __eq__(self, other):
return self.name == other.name and self.value == other.value and self.arch == other.arch
def get_value_str(self):
"""
render the value of this feature, for use by `__str__` and friends.
subclasses should override to customize the rendering.
Returns: any
"""
return self.value
def __str__(self):
if self.value is not None:
if self.description:
return "%s(%s = %s)" % (self.name, self.get_value_str(), self.description)
else:
return "%s(%s)" % (self.name, self.get_value_str())
else:
return "%s" % self.name
def __repr__(self):
return str(self)
def evaluate(self, ctx):
return capa.engine.Result(self in ctx, self, [], locations=ctx.get(self, []))
def freeze_serialize(self):
if self.arch is not None:
return (self.__class__.__name__, [self.value, {"arch": self.arch}])
else:
return (self.__class__.__name__, [self.value])
@classmethod
def freeze_deserialize(cls, args):
# as you can see below in code,
# if the last argument is a dictionary,
# consider it to be kwargs passed to the feature constructor.
if len(args) == 1:
return cls(*args)
elif isinstance(args[-1], dict):
kwargs = args[-1]
args = args[:-1]
return cls(*args, **kwargs)
class MatchedRule(Feature):
def __init__(self, value, description=None):
super(MatchedRule, self).__init__(value, description=description)
self.name = "match"
class Characteristic(Feature):
def __init__(self, value, description=None):
super(Characteristic, self).__init__(value, description=description)
class String(Feature):
def __init__(self, value, description=None):
super(String, self).__init__(value, description=description)
class Regex(String):
def __init__(self, value, description=None):
super(Regex, self).__init__(value, description=description)
pat = self.value[len("/") : -len("/")]
flags = re.DOTALL
if value.endswith("/i"):
pat = self.value[len("/") : -len("/i")]
flags |= re.IGNORECASE
try:
self.re = re.compile(pat, flags)
except re.error:
if value.endswith("/i"):
value = value[: -len("i")]
raise ValueError(
"invalid regular expression: %s it should use Python syntax, try it at https://pythex.org" % value
)
def evaluate(self, ctx):
for feature, locations in ctx.items():
if not isinstance(feature, (capa.features.String,)):
continue
# `re.search` finds a match anywhere in the given string
# which implies leading and/or trailing whitespace.
# using this mode is more convenient for rule authors,
# so that they don't have to prefix/suffix their terms like: /.*foo.*/.
if self.re.search(feature.value):
# unlike other features, we cannot put a reference to `self` directly in a `Result`.
# this is because `self` may match on many strings, so we can't stuff the matched value into it.
# instead, return a new instance that has a reference to both the regex and the matched value.
# see #262.
return capa.engine.Result(True, _MatchedRegex(self, feature.value), [], locations=locations)
return capa.engine.Result(False, _MatchedRegex(self, None), [])
def __str__(self):
return "regex(string =~ %s)" % self.value
class _MatchedRegex(Regex):
"""
this represents a specific instance of a regular expression feature match.
treat it the same as a `Regex` except it has the `match` field that contains the complete string that matched.
note: this type should only ever be constructed by `Regex.evaluate()`. it is not part of the public API.
"""
def __init__(self, regex, match):
"""
args:
regex (Regex): the regex feature that matches
match (string|None): the matching string or None if it doesn't match
"""
super(_MatchedRegex, self).__init__(regex.value, description=regex.description)
# we want this to collide with the name of `Regex` above,
# so that it works nicely with the renderers.
self.name = "regex"
# this may be None if the regex doesn't match
self.match = match
def __str__(self):
return 'regex(string =~ %s, matched = "%s")' % (self.value, self.match)
class StringFactory(object):
def __new__(cls, value, description=None):
if value.startswith("/") and (value.endswith("/") or value.endswith("/i")):
return Regex(value, description=description)
return String(value, description=description)
class Bytes(Feature):
def __init__(self, value, description=None):
super(Bytes, self).__init__(value, description=description)
def evaluate(self, ctx):
for feature, locations in ctx.items():
if not isinstance(feature, (capa.features.Bytes,)):
continue
if feature.value.startswith(self.value):
return capa.engine.Result(True, self, [], locations=locations)
return capa.engine.Result(False, self, [])
def get_value_str(self):
return hex_string(bytes_to_str(self.value))
def freeze_serialize(self):
return (self.__class__.__name__, [bytes_to_str(self.value).upper()])
@classmethod
def freeze_deserialize(cls, args):
return cls(*[codecs.decode(x, "hex") for x in args])
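A hedged round-trip through the freeze serialization contract above, using `Bytes` (the import path assumes this file is `capa.features`):

```python
# hedged sketch: Bytes serializes to an upper-case hex string and back.
from capa.features import Bytes

feature = Bytes(b"\x4d\x5a\x90\x00")
name, args = feature.freeze_serialize()      # ("Bytes", ["4D5A9000"])
restored = Bytes.freeze_deserialize(args)    # decodes the hex back to bytes
assert restored == feature                   # __eq__ compares name/value/arch
```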


@@ -6,7 +6,7 @@
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
from capa.features.common import Feature
from capa.features import Feature
class BasicBlock(Feature):


@@ -1,380 +0,0 @@
# Copyright (C) 2020 FireEye, Inc. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at: [package root]/LICENSE.txt
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import re
import codecs
import logging
import collections
from typing import Set, Dict, Union
import capa.engine
import capa.features
import capa.features.extractors.elf
logger = logging.getLogger(__name__)
MAX_BYTES_FEATURE_SIZE = 0x100
# thunks may be chained so we specify a delta to control the depth to which these chains are explored
THUNK_CHAIN_DEPTH_DELTA = 5
def bytes_to_str(b: bytes) -> str:
return str(codecs.encode(b, "hex").decode("utf-8"))
def hex_string(h: str) -> str:
"""render hex string e.g. "0a40b1" as "0A 40 B1" """
return " ".join(h[i : i + 2] for i in range(0, len(h), 2)).upper()
def escape_string(s: str) -> str:
"""escape special characters"""
s = repr(s)
if not s.startswith(('"', "'")):
# u'hello\r\nworld' -> hello\\r\\nworld
s = s[2:-1]
else:
# 'hello\r\nworld' -> hello\\r\\nworld
s = s[1:-1]
s = s.replace("\\'", "'") # repr() may escape "'" in some edge cases, remove
s = s.replace('"', '\\"') # repr() does not escape '"', add
return s
class Feature:
def __init__(self, value: Union[str, int, bytes], bitness=None, description=None):
"""
Args:
value (any): the value of the feature, such as the number or string.
bitness (str): one of the VALID_BITNESS values, or None.
When None, then the feature applies to any bitness.
Modifies the feature name from `feature` to `feature/bitness`, like `offset/x32`.
description (str): a human-readable description that explains the feature value.
"""
super(Feature, self).__init__()
if bitness is not None:
if bitness not in VALID_BITNESS:
raise ValueError("bitness '%s' must be one of %s" % (bitness, VALID_BITNESS))
self.name = self.__class__.__name__.lower() + "/" + bitness
else:
self.name = self.__class__.__name__.lower()
self.value = value
self.bitness = bitness
self.description = description
def __hash__(self):
return hash((self.name, self.value, self.bitness))
def __eq__(self, other):
return self.name == other.name and self.value == other.value and self.bitness == other.bitness
def get_value_str(self) -> str:
"""
render the value of this feature, for use by `__str__` and friends.
subclasses should override to customize the rendering.
Returns: any
"""
return str(self.value)
def __str__(self):
if self.value is not None:
if self.description:
return "%s(%s = %s)" % (self.name, self.get_value_str(), self.description)
else:
return "%s(%s)" % (self.name, self.get_value_str())
else:
return "%s" % self.name
def __repr__(self):
return str(self)
def evaluate(self, ctx: Dict["Feature", Set[int]]) -> "capa.engine.Result":
return capa.engine.Result(self in ctx, self, [], locations=ctx.get(self, []))
def freeze_serialize(self):
if self.bitness is not None:
return (self.__class__.__name__, [self.value, {"bitness": self.bitness}])
else:
return (self.__class__.__name__, [self.value])
@classmethod
def freeze_deserialize(cls, args):
# as you can see below in code,
# if the last argument is a dictionary,
# consider it to be kwargs passed to the feature constructor.
if len(args) == 1:
return cls(*args)
elif isinstance(args[-1], dict):
kwargs = args[-1]
args = args[:-1]
return cls(*args, **kwargs)
class MatchedRule(Feature):
def __init__(self, value: str, description=None):
super(MatchedRule, self).__init__(value, description=description)
self.name = "match"
class Characteristic(Feature):
def __init__(self, value: str, description=None):
super(Characteristic, self).__init__(value, description=description)
class String(Feature):
def __init__(self, value: str, description=None):
super(String, self).__init__(value, description=description)
class Substring(String):
def __init__(self, value: str, description=None):
super(Substring, self).__init__(value, description=description)
self.value = value
def evaluate(self, ctx):
# mapping from string value to list of locations.
# will unique the locations later on.
matches = collections.defaultdict(list)
for feature, locations in ctx.items():
if not isinstance(feature, (String,)):
continue
if not isinstance(feature.value, str):
# this is a programming error: String should only contain str
raise ValueError("unexpected feature value type")
if self.value in feature.value:
matches[feature.value].extend(locations)
if matches:
# finalize: defaultdict -> dict
# which makes json serialization easier
matches = dict(matches)
# collect all locations
locations = set()
for s in matches.keys():
matches[s] = list(set(matches[s]))
locations.update(matches[s])
# unlike other features, we cannot put a reference to `self` directly in a `Result`.
# this is because `self` may match on many strings, so we can't stuff the matched value into it.
# instead, return a new instance that has a reference to both the substring and the matched values.
return capa.engine.Result(True, _MatchedSubstring(self, matches), [], locations=locations)
else:
return capa.engine.Result(False, _MatchedSubstring(self, None), [])
def __str__(self):
return "substring(%s)" % self.value
class _MatchedSubstring(Substring):
"""
this represents specific match instances of a substring feature.
treat it the same as a `Substring` except it has the `matches` field that contains the complete strings that matched.
note: this type should only ever be constructed by `Substring.evaluate()`. it is not part of the public API.
"""
def __init__(self, substring: Substring, matches):
"""
args:
substring (Substring): the substring feature that matches.
matches (Dict[str, List[int]]|None): mapping from matching string to its locations.
"""
super(_MatchedSubstring, self).__init__(str(substring.value), description=substring.description)
# we want this to collide with the name of `Substring` above,
# so that it works nicely with the renderers.
self.name = "substring"
# this may be None if the substring doesn't match
self.matches = matches
def __str__(self):
return 'substring("%s", matches = %s)' % (
self.value,
", ".join(map(lambda s: '"' + s + '"', (self.matches or {}).keys())),
)
class Regex(String):
def __init__(self, value: str, description=None):
super(Regex, self).__init__(value, description=description)
self.value = value
pat = self.value[len("/") : -len("/")]
flags = re.DOTALL
if value.endswith("/i"):
pat = self.value[len("/") : -len("/i")]
flags |= re.IGNORECASE
try:
self.re = re.compile(pat, flags)
except re.error:
if value.endswith("/i"):
value = value[: -len("i")]
raise ValueError(
"invalid regular expression: %s it should use Python syntax, try it at https://pythex.org" % value
)
def evaluate(self, ctx):
# mapping from string value to list of locations.
# will unique the locations later on.
matches = collections.defaultdict(list)
for feature, locations in ctx.items():
if not isinstance(feature, (String,)):
continue
if not isinstance(feature.value, str):
# this is a programming error: String should only contain str
raise ValueError("unexpected feature value type")
# `re.search` finds a match anywhere in the given string
# which implies leading and/or trailing whitespace.
# using this mode is more convenient for rule authors,
# so that they don't have to prefix/suffix their terms like: /.*foo.*/.
if self.re.search(feature.value):
matches[feature.value].extend(locations)
if matches:
# finalize: defaultdict -> dict
# which makes json serialization easier
matches = dict(matches)
# collect all locations
locations = set()
for s in matches.keys():
matches[s] = list(set(matches[s]))
locations.update(matches[s])
# unlike other features, we cannot put a reference to `self` directly in a `Result`.
# this is because `self` may match on many strings, so we can't stuff the matched value into it.
# instead, return a new instance that has a reference to both the regex and the matched values.
# see #262.
return capa.engine.Result(True, _MatchedRegex(self, matches), [], locations=locations)
else:
return capa.engine.Result(False, _MatchedRegex(self, None), [])
def __str__(self):
return "regex(string =~ %s)" % self.value
class _MatchedRegex(Regex):
"""
this represents specific match instances of a regular expression feature.
treat it the same as a `Regex` except it has the `matches` field that contains the complete strings that matched.
note: this type should only ever be constructed by `Regex.evaluate()`. it is not part of the public API.
"""
def __init__(self, regex: Regex, matches):
"""
args:
regex (Regex): the regex feature that matches.
matches (Dict[str, List[int]]|None): mapping from matching string to its locations.
"""
super(_MatchedRegex, self).__init__(str(regex.value), description=regex.description)
# we want this to collide with the name of `Regex` above,
# so that it works nicely with the renderers.
self.name = "regex"
# this may be None if the regex doesn't match
self.matches = matches
def __str__(self):
return "regex(string =~ %s, matches = %s)" % (
self.value,
", ".join(map(lambda s: '"' + s + '"', (self.matches or {}).keys())),
)
class StringFactory:
def __new__(cls, value: str, description=None):
if value.startswith("/") and (value.endswith("/") or value.endswith("/i")):
return Regex(value, description=description)
return String(value, description=description)
class Bytes(Feature):
def __init__(self, value: bytes, description=None):
super(Bytes, self).__init__(value, description=description)
self.value = value
def evaluate(self, ctx):
for feature, locations in ctx.items():
if not isinstance(feature, (Bytes,)):
continue
if feature.value.startswith(self.value):
return capa.engine.Result(True, self, [], locations=locations)
return capa.engine.Result(False, self, [])
def get_value_str(self):
return hex_string(bytes_to_str(self.value))
def freeze_serialize(self):
return (self.__class__.__name__, [bytes_to_str(self.value).upper()])
@classmethod
def freeze_deserialize(cls, args):
return cls(*[codecs.decode(x, "hex") for x in args])
# identifiers for supported bitness names that tweak a feature
# for example, offset/x32
BITNESS_X32 = "x32"
BITNESS_X64 = "x64"
VALID_BITNESS = (BITNESS_X32, BITNESS_X64)
# other candidates here: https://docs.microsoft.com/en-us/windows/win32/debug/pe-format#machine-types
ARCH_I386 = "i386"
ARCH_AMD64 = "amd64"
VALID_ARCH = (ARCH_I386, ARCH_AMD64)
class Arch(Feature):
def __init__(self, value: str, description=None):
super(Arch, self).__init__(value, description=description)
self.name = "arch"
OS_WINDOWS = "windows"
OS_LINUX = "linux"
OS_MACOS = "macos"
VALID_OS = {os.value for os in capa.features.extractors.elf.OS}
VALID_OS.update({OS_WINDOWS, OS_LINUX, OS_MACOS})
class OS(Feature):
def __init__(self, value: str, description=None):
super(OS, self).__init__(value, description=description)
self.name = "os"
FORMAT_PE = "pe"
FORMAT_ELF = "elf"
VALID_FORMAT = (FORMAT_PE, FORMAT_ELF)
class Format(Feature):
def __init__(self, value: str, description=None):
super(Format, self).__init__(value, description=description)
self.name = "format"
def is_global_feature(feature):
"""
is this a feature that is extracted at every scope?
today, these are OS and arch features.
"""
return isinstance(feature, (OS, Arch))
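A hedged sketch of evaluating the string features above against a tiny feature set (the import path assumes this file is `capa.features.common`; a `Result` is truthy on success, as `match()` relies on with `if res:`):

```python
# hedged sketch: Substring checks containment; Regex compiles the /.../ form
# (optionally /i) and searches anywhere in each String feature's value.
import collections

from capa.features.common import Regex, String, Substring

features = collections.defaultdict(set)
features[String("C:\\Windows\\system32\\kernel32.dll")].add(0x401000)

assert Substring("system32").evaluate(features)        # containment match
assert Regex("/kernel32\\.dll/i").evaluate(features)   # case-insensitive search
```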


@@ -0,0 +1,294 @@
# Copyright (C) 2020 FireEye, Inc. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at: [package root]/LICENSE.txt
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import abc
from capa.helpers import oint
class FeatureExtractor(object):
"""
FeatureExtractor defines the interface for fetching features from a sample.
There may be multiple backends that support fetching features for capa.
For example, we use vivisect by default, but also want to support saving
and restoring features from a JSON file.
When we restore the features, we'd like to use exactly the same matching logic
to find matching rules.
Therefore, we can define a FeatureExtractor that provides features from the
serialized JSON file and do matching without a binary analysis pass.
Also, this provides a way to hook in an IDA backend.
This class is not instantiated directly; it is the base class for other implementations.
"""
__metaclass__ = abc.ABCMeta
def __init__(self):
#
# note: a subclass should define ctor parameters for its own use.
# for example, the Vivisect feature extractor might require the vw and/or path.
# this base class doesn't know what to do with that info, though.
#
super(FeatureExtractor, self).__init__()
def block_offset(self, bb):
return oint(bb)
def function_offset(self, f):
return oint(f)
@abc.abstractmethod
def get_base_address(self):
"""
fetch the preferred load address at which the sample was analyzed.
returns: int
"""
raise NotImplementedError()
@abc.abstractmethod
def extract_file_features(self):
"""
extract file-scope features.
example::
extractor = VivisectFeatureExtractor(vw, path)
for feature, va in extractor.get_file_features():
print('0x%x: %s', va, feature)
yields:
Tuple[capa.features.Feature, int]: feature and its location
"""
raise NotImplementedError()
@abc.abstractmethod
def get_functions(self):
"""
enumerate the functions and provide opaque values that will
subsequently be provided to `.extract_function_features()`, etc.
by "opaque value", we mean that this can be any object, as long as it
provides enough context to `.extract_function_features()`.
the opaque value should support casting to int (`__int__`) for the function start address.
yields:
any: the opaque function value.
"""
raise NotImplementedError()
@abc.abstractmethod
def extract_function_features(self, f):
"""
extract function-scope features.
the arguments are opaque values previously provided by `.get_functions()`, etc.
example::
extractor = VivisectFeatureExtractor(vw, path)
for function in extractor.get_functions():
for feature, va in extractor.extract_function_features(function):
print('0x%x: %s', va, feature)
args:
f [any]: an opaque value previously fetched from `.get_functions()`.
yields:
Tuple[capa.features.Feature, int]: feature and its location
"""
raise NotImplementedError()
@abc.abstractmethod
def get_basic_blocks(self, f):
"""
enumerate the basic blocks in the given function and provide opaque values that will
subsequently be provided to `.extract_basic_block_features()`, etc.
by "opaque value", we mean that this can be any object, as long as it
provides enough context to `.extract_basic_block_features()`.
the opaque value should support casting to int (`__int__`) for the basic block start address.
yields:
any: the opaque basic block value.
"""
raise NotImplementedError()
@abc.abstractmethod
def extract_basic_block_features(self, f, bb):
"""
extract basic block-scope features.
the arguments are opaque values previously provided by `.get_functions()`, etc.
example::
extractor = VivisectFeatureExtractor(vw, path)
for function in extractor.get_functions():
for bb in extractor.get_basic_blocks(function):
for feature, va in extractor.extract_basic_block_features(function, bb):
print('0x%x: %s', va, feature)
args:
f [any]: an opaque value previously fetched from `.get_functions()`.
bb [any]: an opaque value previously fetched from `.get_basic_blocks()`.
yields:
Tuple[capa.features.Feature, int]: feature and its location
"""
raise NotImplementedError()
@abc.abstractmethod
def get_instructions(self, f, bb):
"""
enumerate the instructions in the given basic block and provide opaque values that will
subsequently be provided to `.extract_insn_features()`, etc.
by "opaque value", we mean that this can be any object, as long as it
provides enough context to `.extract_insn_features()`.
the opaque value should support casting to int (`__int__`) for the instruction address.
yields:
any: the opaque instruction value.
"""
raise NotImplementedError()
@abc.abstractmethod
def extract_insn_features(self, f, bb, insn):
"""
extract instruction-scope features.
the arguments are opaque values previously provided by `.get_functions()`, etc.
example::
extractor = VivisectFeatureExtractor(vw, path)
for function in extractor.get_functions():
for bb in extractor.get_basic_blocks(function):
for insn in extractor.get_instructions(function, bb):
for feature, va in extractor.extract_insn_features(function, bb, insn):
print('0x%x: %s', va, feature)
args:
f [any]: an opaque value previously fetched from `.get_functions()`.
bb [any]: an opaque value previously fetched from `.get_basic_blocks()`.
insn [any]: an opaque value previously fetched from `.get_instructions()`.
yields:
Tuple[capa.features.Feature, int]: feature and its location
"""
raise NotImplementedError()
class NullFeatureExtractor(FeatureExtractor):
"""
An extractor that extracts some user-provided features.
The structure of the single parameter is demonstrated in the example below.
This is useful for testing, as we can provide expected values and see if matching works.
Also, this is how we represent features deserialized from a freeze file.
example::
extractor = NullFeatureExtractor({
'base address': 0x401000,
'file features': [
(0x402345, capa.features.Characteristic('embedded pe')),
],
'functions': {
0x401000: {
'features': [
(0x401000, capa.features.Characteristic('nzxor')),
],
'basic blocks': {
0x401000: {
'features': [
(0x401000, capa.features.Characteristic('tight-loop')),
],
'instructions': {
0x401000: {
'features': [
(0x401000, capa.features.Characteristic('nzxor')),
],
},
0x401002: ...
}
},
0x401005: ...
}
},
0x40200: ...
}
)
"""
def __init__(self, features):
super(NullFeatureExtractor, self).__init__()
self.features = features
def get_base_address(self):
return self.features["base address"]
def extract_file_features(self):
for p in self.features.get("file features", []):
va, feature = p
yield feature, va
def get_functions(self):
for va in sorted(self.features["functions"].keys()):
yield va
def extract_function_features(self, f):
for p in self.features.get("functions", {}).get(f, {}).get("features", []): # noqa: E127 line over-indented
va, feature = p
yield feature, va
def get_basic_blocks(self, f):
for va in sorted(
self.features.get("functions", {}) # noqa: E127 line over-indented
.get(f, {})
.get("basic blocks", {})
.keys()
):
yield va
def extract_basic_block_features(self, f, bb):
for p in (
self.features.get("functions", {}) # noqa: E127 line over-indented
.get(f, {})
.get("basic blocks", {})
.get(bb, {})
.get("features", [])
):
va, feature = p
yield feature, va
def get_instructions(self, f, bb):
for va in sorted(
self.features.get("functions", {}) # noqa: E127 line over-indented
.get(f, {})
.get("basic blocks", {})
.get(bb, {})
.get("instructions", {})
.keys()
):
yield va
def extract_insn_features(self, f, bb, insn):
for p in (
self.features.get("functions", {}) # noqa: E127 line over-indented
.get(f, {})
.get("basic blocks", {})
.get(bb, {})
.get("instructions", {})
.get(insn, {})
.get("features", [])
):
va, feature = p
yield feature, va
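A hedged sketch of instantiating the `NullFeatureExtractor` above with a minimal feature map (addresses and features invented for illustration; assumes `NullFeatureExtractor` and `capa.features.Characteristic` are importable as laid out here):

```python
# hedged sketch: mirrors the docstring's structure with a tiny feature map.
import capa.features

extractor = NullFeatureExtractor(
    {
        "base address": 0x401000,
        "file features": [
            (0x402345, capa.features.Characteristic("embedded pe")),
        ],
        "functions": {
            0x401000: {
                "features": [(0x401000, capa.features.Characteristic("nzxor"))],
                "basic blocks": {},
            },
        },
    }
)

assert extractor.get_base_address() == 0x401000
for feature, va in extractor.extract_file_features():
    print("0x%x: %s" % (va, feature))    # 0x402345: characteristic(embedded pe)
```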


@@ -1,337 +0,0 @@
# Copyright (C) 2020 FireEye, Inc. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at: [package root]/LICENSE.txt
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import abc
from typing import Tuple, Iterator, SupportsInt
from capa.features.common import Feature
# feature extractors may reference functions, BBs, insns by opaque handle values.
# the only requirement of these handles are that they support `__int__`,
# so that they can be rendered as addresses.
#
# these handles are only consumed by routines on
# the feature extractor from which they were created.
#
# int(FunctionHandle) -> function start address
# int(BBHandle) -> BasicBlock start address
# int(InsnHandle) -> instruction address
FunctionHandle = SupportsInt
BBHandle = SupportsInt
InsnHandle = SupportsInt
class FeatureExtractor:
"""
FeatureExtractor defines the interface for fetching features from a sample.
There may be multiple backends that support fetching features for capa.
For example, we use vivisect by default, but also want to support saving
and restoring features from a JSON file.
When we restore the features, we'd like to use exactly the same matching logic
to find matching rules.
Therefore, we can define a FeatureExtractor that provides features from the
serialized JSON file and do matching without a binary analysis pass.
Also, this provides a way to hook in an IDA backend.
This class is not instantiated directly; it is the base class for other implementations.
"""
__metaclass__ = abc.ABCMeta
def __init__(self):
#
# note: a subclass should define ctor parameters for its own use.
# for example, the Vivisect feature extractor might require the vw and/or path.
# this base class doesn't know what to do with that info, though.
#
super(FeatureExtractor, self).__init__()
@abc.abstractmethod
def get_base_address(self) -> int:
"""
fetch the preferred load address at which the sample was analyzed.
"""
raise NotImplementedError()
@abc.abstractmethod
def extract_global_features(self) -> Iterator[Tuple[Feature, int]]:
"""
extract features found at every scope ("global").
example::
extractor = VivisectFeatureExtractor(vw, path)
for feature, va in extractor.get_global_features():
print('0x%x: %s', va, feature)
yields:
Tuple[Feature, int]: feature and its location
"""
raise NotImplementedError()
@abc.abstractmethod
def extract_file_features(self) -> Iterator[Tuple[Feature, int]]:
"""
extract file-scope features.
example::
extractor = VivisectFeatureExtractor(vw, path)
for feature, va in extractor.get_file_features():
print('0x%x: %s', va, feature)
yields:
Tuple[Feature, int]: feature and its location
"""
raise NotImplementedError()
@abc.abstractmethod
def get_functions(self) -> Iterator[FunctionHandle]:
"""
enumerate the functions and provide opaque values that will
subsequently be provided to `.extract_function_features()`, etc.
"""
raise NotImplementedError()
def is_library_function(self, va: int) -> bool:
"""
is the given address a library function?
the backend may implement its own function matching algorithm, or none at all.
we accept a VA here, rather than function object, to handle addresses identified in instructions.
this information is used to:
- filter out matches in library functions (by default), and
- recognize when to fetch symbol names for called (non-API) functions
args:
va (int): the virtual address of a function.
returns:
bool: True if the given address is the start of a library function.
"""
return False
def get_function_name(self, va: int) -> str:
"""
fetch any recognized name for the given address.
this is only guaranteed to return a value when the given function is a recognized library function.
we accept a VA here, rather than function object, to handle addresses identified in instructions.
args:
va (int): the virtual address of a function.
returns:
str: the function name
raises:
KeyError: when the given function does not have a name.
"""
raise KeyError(va)
@abc.abstractmethod
def extract_function_features(self, f: FunctionHandle) -> Iterator[Tuple[Feature, int]]:
"""
extract function-scope features.
the arguments are opaque values previously provided by `.get_functions()`, etc.
example::
extractor = VivisectFeatureExtractor(vw, path)
for function in extractor.get_functions():
for feature, va in extractor.extract_function_features(function):
print('0x%x: %s', va, feature)
args:
f [FunctionHandle]: an opaque value previously fetched from `.get_functions()`.
yields:
Tuple[Feature, int]: feature and its location
"""
raise NotImplementedError()
@abc.abstractmethod
def get_basic_blocks(self, f: FunctionHandle) -> Iterator[BBHandle]:
"""
enumerate the basic blocks in the given function and provide opaque values that will
subsequently be provided to `.extract_basic_block_features()`, etc.
"""
raise NotImplementedError()
@abc.abstractmethod
def extract_basic_block_features(self, f: FunctionHandle, bb: BBHandle) -> Iterator[Tuple[Feature, int]]:
"""
extract basic block-scope features.
the arguments are opaque values previously provided by `.get_functions()`, etc.
example::
extractor = VivisectFeatureExtractor(vw, path)
for function in extractor.get_functions():
for bb in extractor.get_basic_blocks(function):
for feature, va in extractor.extract_basic_block_features(function, bb):
print('0x%x: %s', va, feature)
args:
f [FunctionHandle]: an opaque value previously fetched from `.get_functions()`.
bb [BBHandle]: an opaque value previously fetched from `.get_basic_blocks()`.
yields:
Tuple[Feature, int]: feature and its location
"""
raise NotImplementedError()
@abc.abstractmethod
def get_instructions(self, f: FunctionHandle, bb: BBHandle) -> Iterator[InsnHandle]:
"""
enumerate the instructions in the given basic block and provide opaque values that will
subsequently be provided to `.extract_insn_features()`, etc.
"""
raise NotImplementedError()
@abc.abstractmethod
def extract_insn_features(self, f: FunctionHandle, bb: BBHandle, insn: InsnHandle) -> Iterator[Tuple[Feature, int]]:
"""
extract instruction-scope features.
the arguments are opaque values previously provided by `.get_functions()`, etc.
example::
extractor = VivisectFeatureExtractor(vw, path)
for function in extractor.get_functions():
for bb in extractor.get_basic_blocks(function):
for insn in extractor.get_instructions(function, bb):
for feature, va in extractor.extract_insn_features(function, bb, insn):
print('0x%x: %s', va, feature)
args:
f [FunctionHandle]: an opaque value previously fetched from `.get_functions()`.
bb [BBHandle]: an opaque value previously fetched from `.get_basic_blocks()`.
insn [InsnHandle]: an opaque value previously fetched from `.get_instructions()`.
yields:
Tuple[Feature, int]: feature and its location
"""
raise NotImplementedError()
class NullFeatureExtractor(FeatureExtractor):
"""
An extractor that extracts some user-provided features.
The structure of the single parameter is demonstrated in the example below.
This is useful for testing, as we can provide expected values and see if matching works.
Also, this is how we represent features deserialized from a freeze file.
example::
extractor = NullFeatureExtractor({
            'base address': 0x401000,
'global features': [
(0x0, capa.features.Arch('i386')),
(0x0, capa.features.OS('linux')),
],
'file features': [
(0x402345, capa.features.Characteristic('embedded pe')),
],
'functions': {
0x401000: {
'features': [
(0x401000, capa.features.Characteristic('nzxor')),
],
'basic blocks': {
0x401000: {
'features': [
(0x401000, capa.features.Characteristic('tight-loop')),
],
'instructions': {
0x401000: {
'features': [
(0x401000, capa.features.Characteristic('nzxor')),
],
},
0x401002: ...
}
},
0x401005: ...
}
},
0x40200: ...
}
        })
"""
def __init__(self, features):
super(NullFeatureExtractor, self).__init__()
self.features = features
def get_base_address(self):
return self.features["base address"]
def extract_global_features(self):
for p in self.features.get("global features", []):
va, feature = p
yield feature, va
def extract_file_features(self):
for p in self.features.get("file features", []):
va, feature = p
yield feature, va
def get_functions(self):
for va in sorted(self.features["functions"].keys()):
yield va
def extract_function_features(self, f):
for p in self.features.get("functions", {}).get(f, {}).get("features", []): # noqa: E127 line over-indented
va, feature = p
yield feature, va
def get_basic_blocks(self, f):
for va in sorted(
self.features.get("functions", {}) # noqa: E127 line over-indented
.get(f, {})
.get("basic blocks", {})
.keys()
):
yield va
def extract_basic_block_features(self, f, bb):
for p in (
self.features.get("functions", {}) # noqa: E127 line over-indented
.get(f, {})
.get("basic blocks", {})
.get(bb, {})
.get("features", [])
):
va, feature = p
yield feature, va
def get_instructions(self, f, bb):
for va in sorted(
self.features.get("functions", {}) # noqa: E127 line over-indented
.get(f, {})
.get("basic blocks", {})
.get(bb, {})
.get("instructions", {})
.keys()
):
yield va
def extract_insn_features(self, f, bb, insn):
for p in (
self.features.get("functions", {}) # noqa: E127 line over-indented
.get(f, {})
.get("basic blocks", {})
.get(bb, {})
.get("instructions", {})
.get(insn, {})
.get("features", [])
):
va, feature = p
yield feature, va
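# a minimal usage sketch, reusing the feature classes from the class docstring above
# (hypothetical addresses, not from a real sample):
#
#     extractor = NullFeatureExtractor({
#         "base address": 0x401000,
#         "functions": {
#             0x401000: {"features": [(0x401000, capa.features.Characteristic("nzxor"))]},
#         },
#     })
#     for f in extractor.get_functions():
#         for feature, va in extractor.extract_function_features(f):
#             print("0x%x: %s" % (va, feature))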

View File

@@ -1,95 +0,0 @@
import io
import logging
import binascii
import contextlib
import pefile
import capa.features
import capa.features.extractors.elf
import capa.features.extractors.pefile
import capa.features.extractors.strings
from capa.features.common import OS, FORMAT_PE, FORMAT_ELF, OS_WINDOWS, Arch, Format, String
logger = logging.getLogger(__name__)
def extract_file_strings(buf, **kwargs):
"""
extract ASCII and UTF-16 LE strings from file
"""
for s in capa.features.extractors.strings.extract_ascii_strings(buf):
yield String(s.s), s.offset
for s in capa.features.extractors.strings.extract_unicode_strings(buf):
yield String(s.s), s.offset
def extract_format(buf):
if buf.startswith(b"MZ"):
yield Format(FORMAT_PE), 0x0
elif buf.startswith(b"\x7fELF"):
yield Format(FORMAT_ELF), 0x0
else:
# we likely end up here:
        # 1. handling a new file format (e.g. macho)
#
# for (1), this logic will need to be updated as the format is implemented.
logger.debug("unsupported file format: %s", binascii.hexlify(buf[:4]).decode("ascii"))
return
def extract_arch(buf):
if buf.startswith(b"MZ"):
yield from capa.features.extractors.pefile.extract_file_arch(pe=pefile.PE(data=buf))
elif buf.startswith(b"\x7fELF"):
with contextlib.closing(io.BytesIO(buf)) as f:
arch = capa.features.extractors.elf.detect_elf_arch(f)
if arch not in capa.features.common.VALID_ARCH:
logger.debug("unsupported arch: %s", arch)
return
yield Arch(arch), 0x0
else:
# we likely end up here:
# 1. handling shellcode, or
# 2. handling a new file format (e.g. macho)
#
# for (1) we can't do much - its shellcode and all bets are off.
# we could maybe accept a futher CLI argument to specify the arch,
# but i think this would be rarely used.
# rules that rely on arch conditions will fail to match on shellcode.
#
# for (2), this logic will need to be updated as the format is implemented.
logger.debug("unsupported file format: %s, will not guess Arch", binascii.hexlify(buf[:4]).decode("ascii"))
return
def extract_os(buf):
if buf.startswith(b"MZ"):
yield OS(OS_WINDOWS), 0x0
elif buf.startswith(b"\x7fELF"):
with contextlib.closing(io.BytesIO(buf)) as f:
os = capa.features.extractors.elf.detect_elf_os(f)
if os not in capa.features.common.VALID_OS:
logger.debug("unsupported os: %s", os)
return
yield OS(os), 0x0
else:
# we likely end up here:
# 1. handling shellcode, or
# 2. handling a new file format (e.g. macho)
#
# for (1) we can't do much - its shellcode and all bets are off.
# we could maybe accept a futher CLI argument to specify the OS,
# but i think this would be rarely used.
# rules that rely on OS conditions will fail to match on shellcode.
#
# for (2), this logic will need to be updated as the format is implemented.
logger.debug("unsupported file format: %s, will not guess OS", binascii.hexlify(buf[:4]).decode("ascii"))
return

View File

@@ -1,276 +0,0 @@
# Copyright (C) 2020 FireEye, Inc. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at: [package root]/LICENSE.txt
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import struct
import logging
from enum import Enum
from typing import BinaryIO
logger = logging.getLogger(__name__)
def align(v, alignment):
remainder = v % alignment
if remainder == 0:
return v
else:
return v + (alignment - remainder)
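# e.g. align(0x5, 0x4) == 0x8, while align(0x8, 0x4) == 0x8 (already aligned)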
class CorruptElfFile(ValueError):
pass
class OS(str, Enum):
HPUX = "hpux"
NETBSD = "netbsd"
LINUX = "linux"
HURD = "hurd"
_86OPEN = "86open"
SOLARIS = "solaris"
AIX = "aix"
IRIX = "irix"
FREEBSD = "freebsd"
TRU64 = "tru64"
MODESTO = "modesto"
OPENBSD = "openbsd"
OPENVMS = "openvms"
NSK = "nsk"
AROS = "aros"
FENIXOS = "fenixos"
CLOUD = "cloud"
SYLLABLE = "syllable"
NACL = "nacl"
def detect_elf_os(f: BinaryIO) -> str:
f.seek(0x0)
file_header = f.read(0x40)
    # we'll set this to the detected OS.
    # we prefer the first heuristic that hits,
    # but rather than short-circuiting,
    # we'll still parse out the remainder, for debugging.
ret = None
if not file_header.startswith(b"\x7fELF"):
raise CorruptElfFile("missing magic header")
ei_class, ei_data = struct.unpack_from("BB", file_header, 4)
logger.debug("ei_class: 0x%02x ei_data: 0x%02x", ei_class, ei_data)
if ei_class == 1:
bitness = 32
elif ei_class == 2:
bitness = 64
else:
raise CorruptElfFile("invalid ei_class: 0x%02x" % ei_class)
if ei_data == 1:
endian = "<"
elif ei_data == 2:
endian = ">"
else:
raise CorruptElfFile("not an ELF file: invalid ei_data: 0x%02x" % ei_data)
if bitness == 32:
(e_phoff,) = struct.unpack_from(endian + "I", file_header, 0x1C)
e_phentsize, e_phnum = struct.unpack_from(endian + "HH", file_header, 0x2A)
elif bitness == 64:
(e_phoff,) = struct.unpack_from(endian + "Q", file_header, 0x20)
e_phentsize, e_phnum = struct.unpack_from(endian + "HH", file_header, 0x36)
else:
raise NotImplementedError()
logger.debug("e_phoff: 0x%02x e_phentsize: 0x%02x e_phnum: %d", e_phoff, e_phentsize, e_phnum)
(ei_osabi,) = struct.unpack_from(endian + "B", file_header, 7)
OSABI = {
# via pyelftools: https://github.com/eliben/pyelftools/blob/0664de05ed2db3d39041e2d51d19622a8ef4fb0f/elftools/elf/enums.py#L35-L58
        # some candidates are commented out because they are not useful values,
# at least when guessing OSes
# 0: "SYSV", # too often used when OS is not SYSV
1: OS.HPUX,
2: OS.NETBSD,
3: OS.LINUX,
4: OS.HURD,
5: OS._86OPEN,
6: OS.SOLARIS,
7: OS.AIX,
8: OS.IRIX,
9: OS.FREEBSD,
10: OS.TRU64,
11: OS.MODESTO,
12: OS.OPENBSD,
13: OS.OPENVMS,
14: OS.NSK,
15: OS.AROS,
16: OS.FENIXOS,
17: OS.CLOUD,
        # 53: "SORTFIX", # i can't find any reference to this OS, i don't think it exists
# 64: "ARM_AEABI", # not an OS
# 97: "ARM", # not an OS
# 255: "STANDALONE", # not an OS
}
logger.debug("ei_osabi: 0x%02x (%s)", ei_osabi, OSABI.get(ei_osabi, "unknown"))
    # ei_osabi == 0 is commonly set even when the OS is not SYSV.
# other values are unused or unknown.
if ei_osabi in OSABI and ei_osabi != 0x0:
# subsequent strategies may overwrite this value
ret = OSABI[ei_osabi]
f.seek(e_phoff)
program_header_size = e_phnum * e_phentsize
program_headers = f.read(program_header_size)
if len(program_headers) != program_header_size:
logger.warning("failed to read program headers")
e_phnum = 0
# search for PT_NOTE sections that specify an OS
# for example, on Linux there is a GNU section with minimum kernel version
for i in range(e_phnum):
offset = i * e_phentsize
phent = program_headers[offset : offset + e_phentsize]
PT_NOTE = 0x4
(p_type,) = struct.unpack_from(endian + "I", phent, 0x0)
logger.debug("p_type: 0x%04x", p_type)
if p_type != PT_NOTE:
continue
if bitness == 32:
p_offset, _, _, p_filesz = struct.unpack_from(endian + "IIII", phent, 0x4)
elif bitness == 64:
p_offset, _, _, p_filesz = struct.unpack_from(endian + "QQQQ", phent, 0x8)
else:
raise NotImplementedError()
logger.debug("p_offset: 0x%02x p_filesz: 0x%04x", p_offset, p_filesz)
f.seek(p_offset)
note = f.read(p_filesz)
if len(note) != p_filesz:
logger.warning("failed to read note content")
continue
namesz, descsz, type_ = struct.unpack_from(endian + "III", note, 0x0)
name_offset = 0xC
desc_offset = name_offset + align(namesz, 0x4)
logger.debug("namesz: 0x%02x descsz: 0x%02x type: 0x%04x", namesz, descsz, type_)
name = note[name_offset : name_offset + namesz].partition(b"\x00")[0].decode("ascii")
logger.debug("name: %s", name)
if type_ != 1:
continue
if name == "GNU":
if descsz < 16:
continue
desc = note[desc_offset : desc_offset + descsz]
abi_tag, kmajor, kminor, kpatch = struct.unpack_from(endian + "IIII", desc, 0x0)
# via readelf: https://github.com/bminor/binutils-gdb/blob/c0e94211e1ac05049a4ce7c192c9d14d1764eb3e/binutils/readelf.c#L19635-L19658
# and here: https://github.com/bminor/binutils-gdb/blob/34c54daa337da9fadf87d2706d6a590ae1f88f4d/include/elf/common.h#L933-L939
GNU_ABI_TAG = {
0: OS.LINUX,
1: OS.HURD,
2: OS.SOLARIS,
3: OS.FREEBSD,
4: OS.NETBSD,
5: OS.SYLLABLE,
6: OS.NACL,
}
logger.debug("GNU_ABI_TAG: 0x%02x", abi_tag)
if abi_tag in GNU_ABI_TAG:
# update only if not set
# so we can get the debugging output of subsequent strategies
ret = GNU_ABI_TAG[abi_tag] if not ret else ret
logger.debug("abi tag: %s earliest compatible kernel: %d.%d.%d", ret, kmajor, kminor, kpatch)
elif name == "OpenBSD":
logger.debug("note owner: %s", "OPENBSD")
ret = OS.OPENBSD if not ret else ret
elif name == "NetBSD":
logger.debug("note owner: %s", "NETBSD")
ret = OS.NETBSD if not ret else ret
elif name == "FreeBSD":
logger.debug("note owner: %s", "FREEBSD")
ret = OS.FREEBSD if not ret else ret
# search for recognizable dynamic linkers (interpreters)
# for example, on linux, we see file paths like: /lib64/ld-linux-x86-64.so.2
for i in range(e_phnum):
offset = i * e_phentsize
phent = program_headers[offset : offset + e_phentsize]
PT_INTERP = 0x3
(p_type,) = struct.unpack_from(endian + "I", phent, 0x0)
if p_type != PT_INTERP:
continue
if bitness == 32:
p_offset, _, _, p_filesz = struct.unpack_from(endian + "IIII", phent, 0x4)
elif bitness == 64:
p_offset, _, _, p_filesz = struct.unpack_from(endian + "QQQQ", phent, 0x8)
else:
raise NotImplementedError()
f.seek(p_offset)
interp = f.read(p_filesz)
if len(interp) != p_filesz:
logger.warning("failed to read interp content")
continue
linker = interp.partition(b"\x00")[0].decode("ascii")
logger.debug("linker: %s", linker)
if "ld-linux" in linker:
# update only if not set
# so we can get the debugging output of subsequent strategies
ret = OS.LINUX if ret is None else ret
return ret.value if ret is not None else "unknown"
class Arch(str, Enum):
I386 = "i386"
AMD64 = "amd64"
def detect_elf_arch(f: BinaryIO) -> str:
f.seek(0x0)
file_header = f.read(0x40)
if not file_header.startswith(b"\x7fELF"):
raise CorruptElfFile("missing magic header")
(ei_data,) = struct.unpack_from("B", file_header, 5)
logger.debug("ei_data: 0x%02x", ei_data)
if ei_data == 1:
endian = "<"
elif ei_data == 2:
endian = ">"
else:
raise CorruptElfFile("not an ELF file: invalid ei_data: 0x%02x" % ei_data)
(ei_machine,) = struct.unpack_from(endian + "H", file_header, 0x12)
logger.debug("ei_machine: 0x%02x", ei_machine)
EM_386 = 0x3
EM_X86_64 = 0x3E
if ei_machine == EM_386:
return Arch.I386
elif ei_machine == EM_X86_64:
return Arch.AMD64
else:
        # not really unknown, but unsupported at the moment:
# https://github.com/eliben/pyelftools/blob/ab444d982d1849191e910299a985989857466620/elftools/elf/enums.py#L73
return "unknown"

View File

@@ -1,159 +0,0 @@
# Copyright (C) 2020 FireEye, Inc. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at: [package root]/LICENSE.txt
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import io
import logging
import contextlib
from typing import Tuple, Iterator
from elftools.elf.elffile import ELFFile, SymbolTableSection
import capa.features.extractors.common
from capa.features.file import Import, Section
from capa.features.common import OS, FORMAT_ELF, Arch, Format, Feature
from capa.features.extractors.elf import Arch as ElfArch
from capa.features.extractors.base_extractor import FeatureExtractor
logger = logging.getLogger(__name__)
def extract_file_import_names(elf, **kwargs):
# see https://github.com/eliben/pyelftools/blob/0664de05ed2db3d39041e2d51d19622a8ef4fb0f/scripts/readelf.py#L372
symbol_tables = [(idx, s) for idx, s in enumerate(elf.iter_sections()) if isinstance(s, SymbolTableSection)]
for section_index, section in symbol_tables:
if not isinstance(section, SymbolTableSection):
continue
if section["sh_entsize"] == 0:
logger.debug("Symbol table '%s' has a sh_entsize of zero!" % (section.name))
continue
logger.debug("Symbol table '%s' contains %s entries:" % (section.name, section.num_symbols()))
for nsym, symbol in enumerate(section.iter_symbols()):
if symbol.name and symbol.entry.st_info.type == "STT_FUNC":
# TODO symbol address
# TODO symbol version info?
yield Import(symbol.name), 0x0
def extract_file_section_names(elf, **kwargs):
for section in elf.iter_sections():
if section.name:
yield Section(section.name), section.header.sh_addr
elif section.is_null():
yield Section("NULL"), section.header.sh_addr
def extract_file_strings(buf, **kwargs):
yield from capa.features.extractors.common.extract_file_strings(buf)
def extract_file_os(elf, buf, **kwargs):
# our current approach does not always get an OS value, e.g. for packed samples
# for file limitation purposes, we're more lax here
try:
os = next(capa.features.extractors.common.extract_os(buf))
yield os
except StopIteration:
yield OS("unknown"), 0x0
def extract_file_format(**kwargs):
yield Format(FORMAT_ELF), 0x0
def extract_file_arch(elf, **kwargs):
# TODO merge with capa.features.extractors.elf.detect_elf_arch()
arch = elf.get_machine_arch()
if arch == "x86":
yield Arch(ElfArch.I386), 0x0
elif arch == "x64":
yield Arch(ElfArch.AMD64), 0x0
else:
logger.warning("unsupported architecture: %s", arch)
def extract_file_features(elf: ELFFile, buf: bytes) -> Iterator[Tuple[Feature, int]]:
for file_handler in FILE_HANDLERS:
for feature, va in file_handler(elf=elf, buf=buf): # type: ignore
yield feature, va
FILE_HANDLERS = (
# TODO extract_file_export_names,
extract_file_import_names,
extract_file_section_names,
extract_file_strings,
# no library matching
extract_file_format,
)
def extract_global_features(elf: ELFFile, buf: bytes) -> Iterator[Tuple[Feature, int]]:
for global_handler in GLOBAL_HANDLERS:
for feature, va in global_handler(elf=elf, buf=buf): # type: ignore
yield feature, va
GLOBAL_HANDLERS = (
extract_file_os,
extract_file_arch,
)
class ElfFeatureExtractor(FeatureExtractor):
def __init__(self, path: str):
super(ElfFeatureExtractor, self).__init__()
self.path = path
with open(self.path, "rb") as f:
self.elf = ELFFile(io.BytesIO(f.read()))
def get_base_address(self):
# virtual address of the first segment with type LOAD
for segment in self.elf.iter_segments():
if segment.header.p_type == "PT_LOAD":
return segment.header.p_vaddr
def extract_global_features(self):
with open(self.path, "rb") as f:
buf = f.read()
for feature, va in extract_global_features(self.elf, buf):
yield feature, va
def extract_file_features(self):
with open(self.path, "rb") as f:
buf = f.read()
for feature, va in extract_file_features(self.elf, buf):
yield feature, va
def get_functions(self):
raise NotImplementedError("ElfFeatureExtractor can only be used to extract file features")
def extract_function_features(self, f):
raise NotImplementedError("ElfFeatureExtractor can only be used to extract file features")
def get_basic_blocks(self, f):
raise NotImplementedError("ElfFeatureExtractor can only be used to extract file features")
def extract_basic_block_features(self, f, bb):
raise NotImplementedError("ElfFeatureExtractor can only be used to extract file features")
def get_instructions(self, f, bb):
raise NotImplementedError("ElfFeatureExtractor can only be used to extract file features")
def extract_insn_features(self, f, bb, insn):
raise NotImplementedError("ElfFeatureExtractor can only be used to extract file features")
def is_library_function(self, va):
raise NotImplementedError("ElfFeatureExtractor can only be used to extract file features")
def get_function_name(self, va):
raise NotImplementedError("ElfFeatureExtractor can only be used to extract file features")

View File

@@ -6,18 +6,23 @@
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import struct
import sys
import builtins
from typing import Tuple, Iterator
from capa.features.file import Import
from capa.features.insn import API
MIN_STACKSTRING_LEN = 8
def xor_static(data: bytes, i: int) -> bytes:
return bytes(c ^ i for c in data)
def xor_static(data, i):
if sys.version_info >= (3, 0):
return bytes(c ^ i for c in data)
else:
return "".join(chr(ord(c) ^ i) for c in data)
def is_aw_function(symbol: str) -> bool:
def is_aw_function(symbol):
"""
is the given function name an A/W function?
these are variants of functions that, on Windows, accept either a narrow or wide string.
@@ -33,7 +38,7 @@ def is_aw_function(symbol: str) -> bool:
return "a" <= symbol[-2] <= "z" or "0" <= symbol[-2] <= "9"
def is_ordinal(symbol: str) -> bool:
def is_ordinal(symbol):
"""
is the given symbol an ordinal that is prefixed by "#"?
"""
@@ -42,7 +47,7 @@ def is_ordinal(symbol: str) -> bool:
return False
def generate_symbols(dll: str, symbol: str) -> Iterator[str]:
def generate_symbols(dll, symbol):
"""
for a given dll and symbol name, generate variants.
we over-generate features to make matching easier.
@@ -68,11 +73,11 @@ def generate_symbols(dll: str, symbol: str) -> Iterator[str]:
yield symbol[:-1]
def all_zeros(bytez: bytes) -> bool:
def all_zeros(bytez):
return all(b == 0 for b in builtins.bytes(bytez))
def twos_complement(val: int, bits: int) -> int:
def twos_complement(val, bits):
"""
compute the 2's complement of int value val
@@ -85,49 +90,3 @@ def twos_complement(val: int, bits: int) -> int:
else:
# return positive value as is
return val
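# e.g. twos_complement(0xFFFFFFFE, 32) == -2, while twos_complement(0x2, 32) == 2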
def carve_pe(pbytes: bytes, offset: int = 0) -> Iterator[Tuple[int, int]]:
"""
Generate (offset, key) tuples of embedded PEs
Based on the version from vivisect:
https://github.com/vivisect/vivisect/blob/7be4037b1cecc4551b397f840405a1fc606f9b53/PE/carve.py#L19
And its IDA adaptation:
capa/features/extractors/ida/file.py
"""
mz_xor = [
(
xor_static(b"MZ", key),
xor_static(b"PE", key),
key,
)
for key in range(256)
]
pblen = len(pbytes)
todo = [(pbytes.find(mzx, offset), mzx, pex, key) for mzx, pex, key in mz_xor]
todo = [(off, mzx, pex, key) for (off, mzx, pex, key) in todo if off != -1]
while len(todo):
off, mzx, pex, key = todo.pop()
# The MZ header has one field we will check
# e_lfanew is at 0x3c
e_lfanew = off + 0x3C
if pblen < (e_lfanew + 4):
continue
newoff = struct.unpack("<I", xor_static(pbytes[e_lfanew : e_lfanew + 4], key))[0]
nextres = pbytes.find(mzx, off + 1)
if nextres != -1:
todo.append((nextres, mzx, pex, key))
peoff = off + newoff
if pblen < (peoff + 2):
continue
if pbytes[peoff : peoff + 2] == pex:
yield (off, key)
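# a contrived carving sketch (hypothetical buffer, not a real sample): build a fake
# XOR(0x42)-encoded MZ/PE stub whose e_lfanew field points at the "PE" marker, then carve:
#
#     blob = b"MZ" + b"\x00" * 0x3A + struct.pack("<I", 0x40) + b"PE"
#     buf = b"garbage" + xor_static(blob, 0x42)
#     list(carve_pe(buf))  # -> [(7, 0x42)]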

View File

@@ -0,0 +1,93 @@
# Copyright (C) 2020 FireEye, Inc. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at: [package root]/LICENSE.txt
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import sys
import types
import idaapi
import capa.features.extractors.ida.file
import capa.features.extractors.ida.insn
import capa.features.extractors.ida.function
import capa.features.extractors.ida.basicblock
from capa.features.extractors import FeatureExtractor
def get_ea(self):
""" """
if isinstance(self, (idaapi.BasicBlock, idaapi.func_t)):
return self.start_ea
if isinstance(self, idaapi.insn_t):
return self.ea
raise TypeError
def add_ea_int_cast(o):
"""
dynamically add a cast-to-int (`__int__`) method to the given object
that returns the value of the `.ea` property.
    this bit of skullduggery lets us cast viv-utils objects as ints.
the correct way of doing this is to update viv-utils (or subclass the objects here).
"""
if sys.version_info[0] >= 3:
setattr(o, "__int__", types.MethodType(get_ea, o))
else:
setattr(o, "__int__", types.MethodType(get_ea, o, type(o)))
return o
class IdaFeatureExtractor(FeatureExtractor):
def __init__(self):
super(IdaFeatureExtractor, self).__init__()
def get_base_address(self):
return idaapi.get_imagebase()
def extract_file_features(self):
for (feature, ea) in capa.features.extractors.ida.file.extract_features():
yield feature, ea
def get_functions(self):
import capa.features.extractors.ida.helpers as ida_helpers
# data structure shared across functions yielded here.
# useful for caching analysis relevant across a single workspace.
ctx = {}
# ignore library functions and thunk functions as identified by IDA
for f in ida_helpers.get_functions(skip_thunks=True, skip_libs=True):
setattr(f, "ctx", ctx)
yield add_ea_int_cast(f)
@staticmethod
def get_function(ea):
f = idaapi.get_func(ea)
setattr(f, "ctx", {})
return add_ea_int_cast(f)
def extract_function_features(self, f):
for (feature, ea) in capa.features.extractors.ida.function.extract_features(f):
yield feature, ea
def get_basic_blocks(self, f):
for bb in capa.features.extractors.ida.helpers.get_function_blocks(f):
yield add_ea_int_cast(bb)
def extract_basic_block_features(self, f, bb):
for (feature, ea) in capa.features.extractors.ida.basicblock.extract_features(f, bb):
yield feature, ea
def get_instructions(self, f, bb):
import capa.features.extractors.ida.helpers as ida_helpers
for insn in ida_helpers.get_instructions_in_range(bb.start_ea, bb.end_ea):
yield add_ea_int_cast(insn)
def extract_insn_features(self, f, bb, insn):
for (feature, ea) in capa.features.extractors.ida.insn.extract_features(f, bb, insn):
yield feature, ea

View File

@@ -6,13 +6,14 @@
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import sys
import string
import struct
import idaapi
import capa.features.extractors.ida.helpers
from capa.features.common import Characteristic
from capa.features import Characteristic
from capa.features.basicblock import BasicBlock
from capa.features.extractors.ida import helpers
from capa.features.extractors.helpers import MIN_STACKSTRING_LEN
@@ -38,11 +39,18 @@ def get_printable_len(op):
raise ValueError("Unhandled operand data type 0x%x." % op.dtype)
def is_printable_ascii(chars):
return all(c < 127 and chr(c) in string.printable for c in chars)
if sys.version_info[0] >= 3:
return all(c < 127 and chr(c) in string.printable for c in chars)
else:
return all(ord(c) < 127 and c in string.printable for c in chars)
def is_printable_utf16le(chars):
if all(c == 0x00 for c in chars[1::2]):
return is_printable_ascii(chars[::2])
if sys.version_info[0] >= 3:
if all(c == 0x00 for c in chars[1::2]):
return is_printable_ascii(chars[::2])
else:
if all(c == "\x00" for c in chars[1::2]):
return is_printable_ascii(chars[::2])
if is_printable_ascii(chars):
return idaapi.get_dtype_size(op.dtype)

View File

@@ -1,112 +0,0 @@
# Copyright (C) 2020 FireEye, Inc. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at: [package root]/LICENSE.txt
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import idaapi
import capa.ida.helpers
import capa.features.extractors.elf
import capa.features.extractors.ida.file
import capa.features.extractors.ida.insn
import capa.features.extractors.ida.global_
import capa.features.extractors.ida.function
import capa.features.extractors.ida.basicblock
from capa.features.extractors.base_extractor import FeatureExtractor
class FunctionHandle:
"""this acts like an idaapi.func_t but with __int__()"""
def __init__(self, inner):
self._inner = inner
def __int__(self):
return self.start_ea
def __getattr__(self, name):
return getattr(self._inner, name)
class BasicBlockHandle:
"""this acts like an idaapi.BasicBlock but with __int__()"""
def __init__(self, inner):
self._inner = inner
def __int__(self):
return self.start_ea
def __getattr__(self, name):
return getattr(self._inner, name)
class InstructionHandle:
"""this acts like an idaapi.insn_t but with __int__()"""
def __init__(self, inner):
self._inner = inner
def __int__(self):
return self.ea
def __getattr__(self, name):
return getattr(self._inner, name)
class IdaFeatureExtractor(FeatureExtractor):
def __init__(self):
super(IdaFeatureExtractor, self).__init__()
self.global_features = []
self.global_features.extend(capa.features.extractors.ida.global_.extract_os())
self.global_features.extend(capa.features.extractors.ida.global_.extract_arch())
def get_base_address(self):
return idaapi.get_imagebase()
def extract_global_features(self):
yield from self.global_features
def extract_file_features(self):
yield from capa.features.extractors.ida.file.extract_features()
def get_functions(self):
import capa.features.extractors.ida.helpers as ida_helpers
# data structure shared across functions yielded here.
# useful for caching analysis relevant across a single workspace.
ctx = {}
# ignore library functions and thunk functions as identified by IDA
for f in ida_helpers.get_functions(skip_thunks=True, skip_libs=True):
setattr(f, "ctx", ctx)
yield FunctionHandle(f)
@staticmethod
def get_function(ea):
f = idaapi.get_func(ea)
setattr(f, "ctx", {})
return FunctionHandle(f)
def extract_function_features(self, f):
yield from capa.features.extractors.ida.function.extract_features(f)
def get_basic_blocks(self, f):
import capa.features.extractors.ida.helpers as ida_helpers
for bb in ida_helpers.get_function_blocks(f):
yield BasicBlockHandle(bb)
def extract_basic_block_features(self, f, bb):
yield from capa.features.extractors.ida.basicblock.extract_features(f, bb)
def get_instructions(self, f, bb):
import capa.features.extractors.ida.helpers as ida_helpers
for insn in ida_helpers.get_instructions_in_range(bb.start_ea, bb.end_ea):
yield InstructionHandle(insn)
def extract_insn_features(self, f, bb, insn):
yield from capa.features.extractors.ida.insn.extract_features(f, bb, insn)

View File

@@ -11,13 +11,12 @@ import struct
import idc
import idaapi
import idautils
import ida_loader
import capa.features.extractors.helpers
import capa.features.extractors.strings
import capa.features.extractors.ida.helpers
from capa.features.file import Export, Import, Section, FunctionName
from capa.features.common import OS, FORMAT_PE, FORMAT_ELF, OS_WINDOWS, Format, String, Characteristic
from capa.features import String, Characteristic
from capa.features.file import Export, Import, Section
def check_segment_for_pe(seg):
@@ -79,7 +78,7 @@ def extract_file_embedded_pe():
def extract_file_export_names():
"""extract function exports"""
""" extract function exports """
for (_, _, ea, name) in idautils.Entries():
yield Export(name), ea
@@ -144,31 +143,8 @@ def extract_file_strings():
yield String(s.s), (seg.start_ea + s.offset)
def extract_file_function_names():
"""
extract the names of statically-linked library functions.
"""
for ea in idautils.Functions():
if idaapi.get_func(ea).flags & idaapi.FUNC_LIB:
name = idaapi.get_name(ea)
yield FunctionName(name), ea
def extract_file_format():
format_name = ida_loader.get_file_type_name()
if "PE" in format_name:
yield Format(FORMAT_PE), 0x0
elif "ELF64" in format_name:
yield Format(FORMAT_ELF), 0x0
elif "ELF32" in format_name:
yield Format(FORMAT_ELF), 0x0
else:
raise NotImplementedError("file format: %s", format_name)
def extract_features():
"""extract file features"""
""" extract file features """
for file_handler in FILE_HANDLERS:
for feature, va in file_handler():
yield feature, va
@@ -180,8 +156,6 @@ FILE_HANDLERS = (
extract_file_strings,
extract_file_section_names,
extract_file_embedded_pe,
extract_file_function_names,
extract_file_format,
)

View File

@@ -10,7 +10,7 @@ import idaapi
import idautils
import capa.features.extractors.ida.helpers
from capa.features.common import Characteristic
from capa.features import Characteristic
from capa.features.extractors import loops

View File

@@ -1,56 +0,0 @@
import logging
import contextlib
import idaapi
import ida_loader
import capa.ida.helpers
import capa.features.extractors.elf
from capa.features.common import OS, ARCH_I386, ARCH_AMD64, OS_WINDOWS, Arch
logger = logging.getLogger(__name__)
def extract_os():
format_name = ida_loader.get_file_type_name()
if "PE" in format_name:
yield OS(OS_WINDOWS), 0x0
elif "ELF" in format_name:
with contextlib.closing(capa.ida.helpers.IDAIO()) as f:
os = capa.features.extractors.elf.detect_elf_os(f)
yield OS(os), 0x0
else:
# we likely end up here:
# 1. handling shellcode, or
# 2. handling a new file format (e.g. macho)
#
# for (1) we can't do much - its shellcode and all bets are off.
# we could maybe accept a futher CLI argument to specify the OS,
# but i think this would be rarely used.
# rules that rely on OS conditions will fail to match on shellcode.
#
# for (2), this logic will need to be updated as the format is implemented.
logger.debug("unsupported file format: %s, will not guess OS", format_name)
return
def extract_arch():
info = idaapi.get_inf_structure()
if info.procName == "metapc" and info.is_64bit():
yield Arch(ARCH_AMD64), 0x0
elif info.procName == "metapc" and info.is_32bit():
yield Arch(ARCH_I386), 0x0
elif info.procName == "metapc":
logger.debug("unsupported architecture: non-32-bit nor non-64-bit intel")
return
else:
# we likely end up here:
# 1. handling a new architecture (e.g. aarch64)
#
# for (1), this logic will need to be updated as the format is implemented.
logger.debug("unsupported architecture: %s", info.procName)
return

View File

@@ -6,6 +6,9 @@
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import sys
import string
import idc
import idaapi
import idautils
@@ -20,7 +23,11 @@ def find_byte_sequence(start, end, seq):
end: max virtual address
seq: bytes to search e.g. b"\x01\x03"
"""
seq = " ".join(["%02x" % b for b in seq])
if sys.version_info[0] >= 3:
seq = " ".join(["%02x" % b for b in seq])
else:
seq = " ".join(["%02x" % ord(b) for b in seq])
while True:
ea = idaapi.find_binary(start, end, seq, 0, idaapi.SEARCH_DOWN)
if ea == idaapi.BADADDR:
@@ -76,7 +83,7 @@ def get_segment_buffer(seg):
def get_file_imports():
"""get file imports"""
""" get file imports """
imports = {}
for idx in range(idaapi.get_import_module_qty()):
@@ -113,7 +120,7 @@ def get_instructions_in_range(start, end):
def is_operand_equal(op1, op2):
"""compare two IDA op_t"""
""" compare two IDA op_t """
if op1.flags != op2.flags:
return False
@@ -139,7 +146,7 @@ def is_operand_equal(op1, op2):
def is_basic_block_equal(bb1, bb2):
"""compare two IDA BasicBlock"""
""" compare two IDA BasicBlock """
if bb1.start_ea != bb2.start_ea:
return False
@@ -153,7 +160,7 @@ def is_basic_block_equal(bb1, bb2):
def basic_block_size(bb):
"""calculate size of basic block"""
""" calculate size of basic block """
return bb.end_ea - bb.start_ea
@@ -171,7 +178,7 @@ def read_bytes_at(ea, count):
def find_string_at(ea, min=4):
"""check if ASCII string exists at a given virtual address"""
""" check if ASCII string exists at a given virtual address """
found = idaapi.get_strlit_contents(ea, -1, idaapi.STRTYPE_C)
if found and len(found) > min:
try:
@@ -225,23 +232,23 @@ def get_op_phrase_info(op):
def is_op_write(insn, op):
"""Check if an operand is written to (destination operand)"""
""" Check if an operand is written to (destination operand) """
return idaapi.has_cf_chg(insn.get_canon_feature(), op.n)
def is_op_read(insn, op):
"""Check if an operand is read from (source operand)"""
""" Check if an operand is read from (source operand) """
return idaapi.has_cf_use(insn.get_canon_feature(), op.n)
def is_op_offset(insn, op):
"""Check is an operand has been marked as an offset (by auto-analysis or manually)"""
""" Check is an operand has been marked as an offset (by auto-analysis or manually) """
flags = idaapi.get_flags(insn.ea)
return ida_bytes.is_off(flags, op.n)
def is_sp_modified(insn):
"""determine if instruction modifies SP, ESP, RSP"""
""" determine if instruction modifies SP, ESP, RSP """
for op in get_insn_ops(insn, target_ops=(idaapi.o_reg,)):
if op.reg == idautils.procregs.sp.reg and is_op_write(insn, op):
# register is stack and written
@@ -250,7 +257,7 @@ def is_sp_modified(insn):
def is_bp_modified(insn):
"""check if instruction modifies BP, EBP, RBP"""
""" check if instruction modifies BP, EBP, RBP """
for op in get_insn_ops(insn, target_ops=(idaapi.o_reg,)):
if op.reg == idautils.procregs.bp.reg and is_op_write(insn, op):
# register is base and written
@@ -259,12 +266,12 @@ def is_bp_modified(insn):
def is_frame_register(reg):
"""check if register is sp or bp"""
""" check if register is sp or bp """
return reg in (idautils.procregs.sp.reg, idautils.procregs.bp.reg)
def get_insn_ops(insn, target_ops=()):
"""yield op_t for instruction, filter on type if specified"""
""" yield op_t for instruction, filter on type if specified """
for op in insn.ops:
if op.type == idaapi.o_void:
# avoid looping all 6 ops if only subset exists
@@ -275,7 +282,7 @@ def get_insn_ops(insn, target_ops=()):
def is_op_stack_var(ea, index):
"""check if operand is a stack variable"""
""" check if operand is a stack variable """
return idaapi.is_stkvar(idaapi.get_flags(ea), index)
@@ -329,7 +336,7 @@ def is_basic_block_tight_loop(bb):
def find_data_reference_from_insn(insn, max_depth=10):
"""search for data reference from instruction, return address of instruction if no reference exists"""
""" search for data reference from instruction, return address of instruction if no reference exists """
depth = 0
ea = insn.ea
@@ -344,10 +351,6 @@ def find_data_reference_from_insn(insn, max_depth=10):
# break if circular reference
break
if not idaapi.is_mapped(data_refs[0]):
# break if address is not mapped
break
depth += 1
if depth > max_depth:
# break if max depth
@@ -372,5 +375,5 @@ def get_function_blocks(f):
def is_basic_block_return(bb):
"""check if basic block is return block"""
""" check if basic block is return block """
return bb.type == idaapi.fcb_ret

View File

@@ -12,38 +12,38 @@ import idautils
import capa.features.extractors.helpers
import capa.features.extractors.ida.helpers
from capa.features.insn import API, Number, Offset, Mnemonic
from capa.features.common import (
BITNESS_X32,
BITNESS_X64,
from capa.features import (
ARCH_X32,
ARCH_X64,
MAX_BYTES_FEATURE_SIZE,
THUNK_CHAIN_DEPTH_DELTA,
Bytes,
String,
Characteristic,
)
from capa.features.insn import API, Number, Offset, Mnemonic
# security cookie checks may perform non-zeroing XORs, these are expected within a certain
# byte range within the first and returning basic blocks, this helps to reduce FP features
SECURITY_COOKIE_BYTES_DELTA = 0x40
def get_bitness(ctx):
def get_arch(ctx):
"""
fetch the BITNESS_* constant for the currently open workspace.
fetch the ARCH_* constant for the currently open workspace.
via Tamir Bahar/@tmr232
https://reverseengineering.stackexchange.com/a/11398/17194
"""
if "bitness" not in ctx:
if "arch" not in ctx:
info = idaapi.get_inf_structure()
if info.is_64bit():
ctx["bitness"] = BITNESS_X64
ctx["arch"] = ARCH_X64
elif info.is_32bit():
ctx["bitness"] = BITNESS_X32
ctx["arch"] = ARCH_X32
else:
raise ValueError("unexpected bitness")
return ctx["bitness"]
raise ValueError("unexpected architecture")
return ctx["arch"]
def get_imports(ctx):
@@ -53,7 +53,10 @@ def get_imports(ctx):
def check_for_api_call(ctx, insn):
"""check instruction for API call"""
""" check instruction for API call """
if not insn.get_canon_mnem() in ("call", "jmp"):
return
info = ()
ref = insn.ea
@@ -92,29 +95,11 @@ def extract_insn_api_features(f, bb, insn):
example:
call dword [0x00473038]
"""
if not insn.get_canon_mnem() in ("call", "jmp"):
return
for api in check_for_api_call(f.ctx, insn):
dll, _, symbol = api.rpartition(".")
for name in capa.features.extractors.helpers.generate_symbols(dll, symbol):
yield API(name), insn.ea
# extract IDA/FLIRT recognized API functions
targets = tuple(idautils.CodeRefsFrom(insn.ea, False))
if not targets:
return
target = targets[0]
target_func = idaapi.get_func(target)
if not target_func or target_func.start_ea != target:
# not a function (start)
return
if target_func.flags & idaapi.FUNC_LIB:
name = idaapi.get_name(target_func.start_ea)
yield API(name), insn.ea
def extract_insn_number_features(f, bb, insn):
"""parse instruction number features
@@ -149,7 +134,7 @@ def extract_insn_number_features(f, bb, insn):
const = op.addr
yield Number(const), insn.ea
yield Number(const, bitness=get_bitness(f.ctx)), insn.ea
yield Number(const, arch=get_arch(f.ctx)), insn.ea
def extract_insn_bytes_features(f, bb, insn):
@@ -218,7 +203,7 @@ def extract_insn_offset_features(f, bb, insn):
op_off = capa.features.extractors.helpers.twos_complement(op_off, 32)
yield Offset(op_off), insn.ea
yield Offset(op_off, bitness=get_bitness(f.ctx)), insn.ea
yield Offset(op_off, arch=get_arch(f.ctx)), insn.ea
def contains_stack_cookie_keywords(s):
@@ -271,7 +256,7 @@ def bb_stack_cookie_registers(bb):
def is_nzxor_stack_cookie_delta(f, bb, insn):
"""check if nzxor exists within stack cookie delta"""
""" check if nzxor exists within stack cookie delta """
# security cookie check should use SP or BP
if not capa.features.extractors.ida.helpers.is_frame_register(insn.Op2.reg):
return False
@@ -294,7 +279,7 @@ def is_nzxor_stack_cookie_delta(f, bb, insn):
def is_nzxor_stack_cookie(f, bb, insn):
"""check if nzxor is related to stack cookie"""
""" check if nzxor is related to stack cookie """
if contains_stack_cookie_keywords(idaapi.get_cmt(insn.ea, False)):
# Example:
# xor ecx, ebp ; StackCookie
@@ -337,7 +322,7 @@ def extract_insn_mnemonic_features(f, bb, insn):
bb (IDA BasicBlock)
insn (IDA insn_t)
"""
yield Mnemonic(idc.print_insn_mnem(insn.ea)), insn.ea
yield Mnemonic(insn.get_canon_mnem()), insn.ea
def extract_insn_peb_access_characteristic_features(f, bb, insn):

View File

@@ -6,7 +6,7 @@
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import networkx
from networkx import nx
from networkx.algorithms.components import strongly_connected_components
@@ -20,6 +20,6 @@ def has_loop(edges, threshold=2):
returns:
bool
"""
g = networkx.DiGraph()
g = nx.DiGraph()
g.add_edges_from(edges)
return any(len(comp) >= threshold for comp in strongly_connected_components(g))
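# e.g. has_loop([(1, 2), (2, 1), (2, 3)]) is True: nodes 1 and 2 form a strongly
# connected component of size 2, which meets the default threshold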

View File

@@ -0,0 +1,107 @@
# Copyright (C) 2020 FireEye, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at: https://github.com/fireeye/capa/blob/master/LICENSE.txt
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import miasm.core.asmblock
import miasm.analysis.binary
import miasm.analysis.machine
import miasm.jitter.loader.pe
from miasm.core.locationdb import LocationDB
import capa.features.extractors.miasm.file
import capa.features.extractors.miasm.insn
import capa.features.extractors.miasm.function
import capa.features.extractors.miasm.basicblock
from capa.features.extractors import FeatureExtractor
class MiasmFeatureExtractor(FeatureExtractor):
def __init__(self, buf):
super(MiasmFeatureExtractor, self).__init__()
self.buf = buf
self.loc_db = LocationDB()
self.container = miasm.analysis.binary.Container.from_string(buf, self.loc_db)
self.pe = self.container.executable
self.machine = miasm.analysis.machine.Machine(self.container.arch)
self.cfg = self._build_cfg()
def get_base_address(self):
return self.container.entry_point
def extract_file_features(self):
for feature, va in capa.features.extractors.miasm.file.extract_file_features(self):
yield feature, va
    # TODO: Improve this function (it just considers every loc_key that is the target of a call to be a function), port to miasm
def get_functions(self):
"""
returns all loc_keys which are the argument of any call function
"""
functions = set()
for block in self.cfg.blocks:
for line in block.lines:
if line.is_subcall() and line.args[0].is_loc():
loc_key = line.args[0].loc_key
if loc_key not in functions:
functions.add(loc_key)
yield loc_key
def extract_function_features(self, loc_key):
for feature, va in capa.features.extractors.miasm.function.extract_features(self, loc_key):
yield feature, va
def block_offset(self, bb):
return bb.lines[0].offset
def function_offset(self, f):
return self.cfg.loc_key_to_block(f).lines[0].offset
def get_basic_blocks(self, loc_key):
"""
        get the basic blocks of the function represented by loc_key
"""
block = self.cfg.loc_key_to_block(loc_key)
disassembler = self.machine.dis_engine(self.container.bin_stream, loc_db=self.loc_db, follow_call=False)
cfg = disassembler.dis_multiblock(self.block_offset(block))
return cfg.blocks
def extract_basic_block_features(self, _, bb):
for feature, va in capa.features.extractors.miasm.basicblock.extract_features(bb):
yield feature, va
def get_instructions(self, _, bb):
return bb.lines
def extract_insn_features(self, f, bb, insn):
for feature, va in capa.features.extractors.miasm.insn.extract_features(self, f, bb, insn):
yield feature, va
def _get_entry_points(self):
entry_points = {self.get_base_address()}
for _, va in miasm.jitter.loader.pe.get_export_name_addr_list(self.pe):
entry_points.add(va)
return entry_points
    # This is more efficient than using the `blocks` argument in `dis_multiblock`
# See http://www.williballenthin.com/post/2020-01-12-miasm-part-2
# TODO: port this efficiency improvement to miasm
def _build_cfg(self):
loc_db = self.container.loc_db
disassembler = self.machine.dis_engine(self.container.bin_stream, follow_call=True, loc_db=loc_db)
job_done = set()
cfgs = {}
for va in self._get_entry_points():
cfgs[va] = disassembler.dis_multiblock(va, job_done=job_done)
        complete_cfg = miasm.core.asmblock.AsmCFG(loc_db)
        for cfg in cfgs.values():
            complete_cfg.merge(cfg)
        disassembler.apply_splitting(complete_cfg)
        return complete_cfg
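# a minimal usage sketch (hypothetical sample path):
#
#     with open("sample.exe", "rb") as fh:
#         extractor = MiasmFeatureExtractor(fh.read())
#     for f in extractor.get_functions():
#         for feature, va in extractor.extract_function_features(f):
#             print("0x%x: %s" % (va, feature))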

View File

@@ -0,0 +1,134 @@
# Copyright (C) 2020 FireEye, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at: https://github.com/fireeye/capa/blob/master/LICENSE.txt
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import sys
import string
import struct
from capa.features import Characteristic
from capa.features.basicblock import BasicBlock
from capa.features.extractors.helpers import MIN_STACKSTRING_LEN
# TODO: Avoid this duplication (this code is in __init__ as well)
def block_offset(bb):
return bb.lines[0].offset
def extract_bb_tight_loop(bb):
""" check basic block for tight loop indicators """
if any(c.loc_key == bb.loc_key for c in bb.bto):
yield Characteristic("tight loop"), block_offset(bb)
def is_mov_imm_to_stack(instr):
"""
    Return True if the instruction moves an immediate value onto the stack
"""
if not instr.name.startswith("MOV"):
return False
try:
dst, src = instr.args
except ValueError:
# not two operands
return False
if not src.is_int():
return False
if not dst.is_mem():
return False
# should detect things like `@8[ESP + 0x8]` and `EBP` and not fail in other cases
if any(register in str(dst) for register in ["EBP", "RBP", "ESP", "RSP"]):
return True
return False
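# e.g. a move of an immediate into stack memory, such as `MOV @32[EBP + 0xFFFFFFF8], 0x41414141`
# in miasm's rendering, passes all three checks, while `MOV EAX, 0x41414141` fails dst.is_mem()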
def is_printable_ascii(chars):
if sys.version_info >= (3, 0):
return all(c < 127 and chr(c) in string.printable for c in chars)
else:
return all(ord(c) < 127 and c in string.printable for c in chars)
def is_printable_utf16le(chars):
    if sys.version_info >= (3, 0):
        if all(c == 0x00 for c in chars[1::2]):
            return is_printable_ascii(chars[::2])
    else:
        if all(c == "\x00" for c in chars[1::2]):
            return is_printable_ascii(chars[::2])
def get_printable_len(insn):
"""
Return string length if all operand bytes are ascii or utf16-le printable
"""
dst, src = insn.args
    if not src.is_int():
        raise ValueError("unexpected operand type")
    if not dst.is_mem():
        raise ValueError("unexpected operand type")
if isinstance(src.arg, int):
val = src.arg
else:
val = src.arg.arg
size = (val.bit_length() + 7) // 8
if size == 0:
return 0
elif size == 1:
chars = struct.pack("<B", val)
elif size == 2:
chars = struct.pack("<H", val)
elif size == 4:
chars = struct.pack("<I", val)
    elif size == 8:
        chars = struct.pack("<Q", val)
    else:
        # immediates of other sizes (3, 5, 6 or 7 bytes) are not printable candidates
        return 0
if is_printable_ascii(chars):
return size
if is_printable_utf16le(chars):
        return size // 2
return 0
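# e.g. an immediate 0x6C6C6548 packs to b"Hell" (printable ASCII, length 4), while
# 0x006F006C packs to b"l\x00o\x00" ("lo" in UTF-16LE, length 2)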
def extract_stackstring(bb):
""" check basic block for stackstring indicators """
count = 0
for line in bb.lines:
if is_mov_imm_to_stack(line):
count += get_printable_len(line)
if count > MIN_STACKSTRING_LEN:
yield Characteristic("stack string"), block_offset(bb)
return
def extract_features(bb):
"""
extract features from the given basic block.
args:
bb (miasm.core.asmblock.AsmBlock): the basic block to process.
yields:
Feature, set[VA]: the features and their location found in this basic block.
"""
yield BasicBlock(), block_offset(bb)
for bb_handler in BASIC_BLOCK_HANDLERS:
for feature, va in bb_handler(bb):
yield feature, va
BASIC_BLOCK_HANDLERS = (
extract_bb_tight_loop,
extract_stackstring,
)

View File

@@ -0,0 +1,102 @@
# Copyright (C) 2020 FireEye, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at: https://github.com/fireeye/capa/blob/master/LICENSE.txt
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import re
import miasm.analysis.binary
import miasm.jitter.loader.pe
import capa.features.extractors.strings
from capa.features import String, Characteristic
from capa.features.file import Export, Import, Section
def extract_file_embedded_pe(extractor):
"""
extract embedded PE features
"""
buf = extractor.buf
for match in re.finditer(b"MZ", buf):
offset = match.start()
subcontainer = miasm.analysis.binary.ContainerPE.from_string(buf[offset:], loc_db=extractor.loc_db)
if isinstance(subcontainer, miasm.analysis.binary.ContainerPE):
yield Characteristic("embedded pe"), offset
def extract_file_export_names(extractor):
"""
extract file exports and their addresses
"""
for symbol, va in miasm.jitter.loader.pe.get_export_name_addr_list(extractor.pe):
# Only use func names and not ordinals
if isinstance(symbol, str):
yield Export(symbol), va
def extract_file_import_names(extractor):
"""
extract imported function names and their addresses
1. imports by ordinal:
- modulename.#ordinal
2. imports by name, results in two features to support importname-only matching:
- modulename.importname
- importname
"""
for ((dll, symbol), va_set) in miasm.jitter.loader.pe.get_import_address_pe(extractor.pe).items():
dll_name = dll[:-4] # Remove .dll
for va in va_set:
if isinstance(symbol, int):
yield Import("%s.#%s" % (dll_name, symbol)), va
else:
yield Import("%s.%s" % (dll_name, symbol)), va
yield Import(symbol), va
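# e.g. CreateFileA imported by name from kernel32.dll at a hypothetical VA yields both
# Import("kernel32.CreateFileA") and Import("CreateFileA"), while ordinal 42 imported
# from ws2_32.dll yields Import("ws2_32.#42")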
def extract_file_section_names(extractor):
"""
extract file sections and their addresses
"""
for section in extractor.pe.SHList.shlist:
name = section.name.partition(b"\x00")[0].decode("ascii")
va = section.addr
yield Section(name), va
def extract_file_strings(extractor):
"""
extract ASCII and UTF-16 LE strings from file
"""
for s in capa.features.extractors.strings.extract_ascii_strings(extractor.buf):
yield String(s.s), s.offset
for s in capa.features.extractors.strings.extract_unicode_strings(extractor.buf):
yield String(s.s), s.offset
def extract_file_features(extractor):
    """
    extract file features from the buffer and parsed binary held by the given extractor
    args:
        extractor (MiasmFeatureExtractor): wraps the binary content (buf) and the parsed PE container
    yields:
        Tuple[Feature, VA]: a feature and its location.
    """
for file_handler in FILE_HANDLERS:
for feature, va in file_handler(extractor):
yield feature, va
FILE_HANDLERS = (
extract_file_embedded_pe,
extract_file_export_names,
extract_file_import_names,
extract_file_section_names,
extract_file_strings,
)

View File

@@ -0,0 +1,50 @@
# Copyright (C) 2020 FireEye, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at: https://github.com/fireeye/capa/blob/master/LICENSE.txt
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
from capa.features import Characteristic
def extract_function_calls_to(extractor, loc_key):
for pred_key in extractor.cfg.predecessors(loc_key):
pred_block = extractor.cfg.loc_key_to_block(pred_key)
pred_insn = pred_block.get_subcall_instr()
if pred_insn and pred_insn.is_subcall():
dst = pred_insn.args[0]
if dst.is_loc() and dst.loc_key == loc_key:
yield Characteristic("calls to"), pred_insn.offset
def extract_function_loop(extractor, loc_key):
"""
returns if the function has a loop
"""
block = extractor.cfg.loc_key_to_block(loc_key)
disassembler = extractor.machine.dis_engine(
extractor.container.bin_stream, loc_db=extractor.loc_db, follow_call=False
)
offset = extractor.block_offset(block)
cfg = disassembler.dis_multiblock(offset)
if cfg.has_loop():
yield Characteristic("loop"), offset
def extract_features(extractor, loc_key):
"""
extract features from the given function.
    args:
        extractor (MiasmFeatureExtractor): the extractor whose CFG is queried
        loc_key (LocKey): LocKey which represents the beginning of the function
yields:
Feature, set[VA]: the features and their location found in this function.
"""
for func_handler in FUNCTION_HANDLERS:
for feature, va in func_handler(extractor, loc_key):
yield feature, va
FUNCTION_HANDLERS = (extract_function_calls_to, extract_function_loop)

View File

@@ -0,0 +1,126 @@
# Copyright (C) 2020 FireEye, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at: https://github.com/fireeye/capa/blob/master/LICENSE.txt
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import miasm.expression.expression
import miasm.jitter.loader.pe
import capa.features.extractors.helpers
from capa.features.insn import Mnemonic
# TODO: remove duplication (similar code in file.py)
# TODO: this function should be cached
def get_imports(pe):
imports = {}
for ((dll, symbol), va_set) in miasm.jitter.loader.pe.get_import_address_pe(pe).items():
dll_name = dll[:-4]
for va in va_set:
if isinstance(symbol, int):
imports[va] = "%s.#%s" % (dll_name, symbol)
else:
imports[va] = "%s.%s" % (dll_name, symbol)
return imports
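# e.g. get_imports may return {0x406000: "kernel32.CreateFileA", 0x406004: "ws2_32.#42"}
# (hypothetical VAs), which extract_insn_api_features below uses to resolve call targets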
def extract_insn_api_features(extractor, _f, _bb, insn):
"""parse API features from the given instruction."""
if insn.is_subcall():
arg = insn.args[0]
if isinstance(arg, miasm.expression.expression.ExprMem) and isinstance(
arg.ptr, miasm.expression.expression.ExprInt
):
target = int(arg.ptr)
imports = get_imports(extractor.pe)
if target in imports:
dll, _, symbol = imports[target].rpartition(".")
for feature in capa.features.extractors.helpers.generate_symbols(dll, symbol):
yield feature, insn.offset
def extract_insn_number_features(extractor, f, bb, insn):
"""parse number features from the given instruction."""
raise NotImplementedError()
def extract_insn_string_features(extractor, f, bb, insn):
"""parse string features from the given instruction."""
raise NotImplementedError()
def extract_insn_offset_features(extractor, f, bb, insn):
"""parse structure offset features from the given instruction."""
raise NotImplementedError()
def extract_insn_nzxor_characteristic_features(extractor, f, bb, insn):
"""
parse non-zeroing XOR instruction from the given instruction.
ignore expected non-zeroing XORs, e.g. security cookies.
"""
raise NotImplementedError()
def extract_insn_mnemonic_features(extractor, f, bb, insn):
"""parse mnemonic features from the given instruction."""
yield Mnemonic(insn.name), insn.offset
def extract_insn_peb_access_characteristic_features(extractor, f, bb, insn):
"""
parse peb access from the given function. fs:[0x30] on x86, gs:[0x60] on x64
"""
raise NotImplementedError()
def extract_insn_segment_access_features(extractor, f, bb, insn):
""" parse the instruction for access to fs or gs """
raise NotImplementedError()
def extract_insn_cross_section_cflow(extractor, f, bb, insn):
"""
inspect the instruction for a CALL or JMP that crosses section boundaries.
"""
raise NotImplementedError()
# this is a feature that's most relevant at the function scope,
# however, its most efficient to extract at the instruction scope.
def extract_function_calls_from(f, bb, insn):
raise NotImplementedError()
def extract_features(extractor, f, bb, insn):
"""
extract features from the given insn.
args:
extractor (MiasmFeatureExtractor)
        f (miasm.core.locationdb.LocKey): the function from which to extract features
bb (miasm.core.asmblock.AsmBlock): the basic block to process.
insn (Instruction): the instruction to process.
yields:
Feature, set[VA]: the features and their location found in this insn.
"""
for insn_handler in INSTRUCTION_HANDLERS:
for feature, va in insn_handler(extractor, f, bb, insn):
yield feature, va
INSTRUCTION_HANDLERS = (
extract_insn_api_features,
# extract_insn_number_features,
# extract_insn_string_features,
# extract_insn_bytes_features,
# extract_insn_offset_features,
# extract_insn_nzxor_characteristic_features,
extract_insn_mnemonic_features,
# extract_insn_peb_access_characteristic_features,
# extract_insn_cross_section_cflow,
# extract_insn_segment_access_features,
# extract_function_calls_from,
# extract_function_indirect_call_characteristic_features,
)
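A sketch of how this handler table is consumed, given extractor/function/block/instruction handles already produced by the extractor (names here are placeholders):
for feature, va in extract_features(extractor, f, bb, insn):
    print("0x%x: %s" % (va, feature))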


@@ -1,215 +0,0 @@
# Copyright (C) 2020 FireEye, Inc. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at: [package root]/LICENSE.txt
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import logging
import pefile
import capa.features.common
import capa.features.extractors
import capa.features.extractors.common
import capa.features.extractors.helpers
import capa.features.extractors.strings
from capa.features.file import Export, Import, Section
from capa.features.common import OS, ARCH_I386, FORMAT_PE, ARCH_AMD64, OS_WINDOWS, Arch, Format, Characteristic
from capa.features.extractors.base_extractor import FeatureExtractor
logger = logging.getLogger(__name__)
def extract_file_embedded_pe(buf, **kwargs):
for offset, _ in capa.features.extractors.helpers.carve_pe(buf, 1):
yield Characteristic("embedded pe"), offset
def extract_file_export_names(pe, **kwargs):
base_address = pe.OPTIONAL_HEADER.ImageBase
if hasattr(pe, "DIRECTORY_ENTRY_EXPORT"):
for export in pe.DIRECTORY_ENTRY_EXPORT.symbols:
if not export.name:
continue
try:
name = export.name.partition(b"\x00")[0].decode("ascii")
except UnicodeDecodeError:
continue
va = base_address + export.address
yield Export(name), va
def extract_file_import_names(pe, **kwargs):
"""
extract imported function names
1. imports by ordinal:
- modulename.#ordinal
2. imports by name, results in two features to support importname-only matching:
- modulename.importname
- importname
"""
if hasattr(pe, "DIRECTORY_ENTRY_IMPORT"):
for dll in pe.DIRECTORY_ENTRY_IMPORT:
try:
modname = dll.dll.partition(b"\x00")[0].decode("ascii")
except UnicodeDecodeError:
continue
# strip extension
modname = modname.rpartition(".")[0].lower()
for imp in dll.imports:
if imp.import_by_ordinal:
impname = "#%s" % imp.ordinal
else:
try:
impname = imp.name.partition(b"\x00")[0].decode("ascii")
except UnicodeDecodeError:
continue
for name in capa.features.extractors.helpers.generate_symbols(modname, impname):
yield Import(name), imp.address
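As the docstring above describes, a named import is expected to produce two features; roughly:
for name in capa.features.extractors.helpers.generate_symbols("kernel32", "CreateFileA"):
    print(name)
# kernel32.CreateFileA
# CreateFileA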
def extract_file_section_names(pe, **kwargs):
base_address = pe.OPTIONAL_HEADER.ImageBase
for section in pe.sections:
try:
name = section.Name.partition(b"\x00")[0].decode("ascii")
except UnicodeDecodeError:
continue
yield Section(name), base_address + section.VirtualAddress
def extract_file_strings(buf, **kwargs):
yield from capa.features.extractors.common.extract_file_strings(buf)
def extract_file_function_names(**kwargs):
"""
extract the names of statically-linked library functions.
"""
if False:
# using a `yield` here to force this to be a generator, not a function.
yield NotImplementedError("pefile doesn't have library matching")
return
def extract_file_os(**kwargs):
# assuming PE -> Windows
# though i suppose they're also used by UEFI
yield OS(OS_WINDOWS), 0x0
def extract_file_format(**kwargs):
yield Format(FORMAT_PE), 0x0
def extract_file_arch(pe, **kwargs):
if pe.FILE_HEADER.Machine == pefile.MACHINE_TYPE["IMAGE_FILE_MACHINE_I386"]:
yield Arch(ARCH_I386), 0x0
elif pe.FILE_HEADER.Machine == pefile.MACHINE_TYPE["IMAGE_FILE_MACHINE_AMD64"]:
yield Arch(ARCH_AMD64), 0x0
else:
logger.warning("unsupported architecture: %s", pefile.MACHINE_TYPE[pe.FILE_HEADER.Machine])
def extract_file_features(pe, buf):
"""
extract file features from given workspace
args:
pe (pefile.PE): the parsed PE
buf: the raw sample bytes
yields:
Tuple[Feature, VA]: a feature and its location.
"""
for file_handler in FILE_HANDLERS:
for feature, va in file_handler(pe=pe, buf=buf):
yield feature, va
FILE_HANDLERS = (
extract_file_embedded_pe,
extract_file_export_names,
extract_file_import_names,
extract_file_section_names,
extract_file_strings,
extract_file_function_names,
extract_file_format,
)
def extract_global_features(pe, buf):
"""
extract global features from given workspace
args:
pe (pefile.PE): the parsed PE
buf: the raw sample bytes
yields:
Tuple[Feature, VA]: a feature and its location.
"""
for handler in GLOBAL_HANDLERS:
for feature, va in handler(pe=pe, buf=buf):
yield feature, va
GLOBAL_HANDLERS = (
extract_file_os,
extract_file_arch,
)
class PefileFeatureExtractor(FeatureExtractor):
def __init__(self, path: str):
super(PefileFeatureExtractor, self).__init__()
self.path = path
self.pe = pefile.PE(path)
def get_base_address(self):
return self.pe.OPTIONAL_HEADER.ImageBase
def extract_global_features(self):
with open(self.path, "rb") as f:
buf = f.read()
yield from extract_global_features(self.pe, buf)
def extract_file_features(self):
with open(self.path, "rb") as f:
buf = f.read()
yield from extract_file_features(self.pe, buf)
def get_functions(self):
raise NotImplementedError("PefileFeatureExtractor can only be used to extract file features")
def extract_function_features(self, f):
raise NotImplementedError("PefileFeatureExtractor can only be used to extract file features")
def get_basic_blocks(self, f):
raise NotImplementedError("PefileFeatureExtractor can only be used to extract file features")
def extract_basic_block_features(self, f, bb):
raise NotImplementedError("PefileFeatureExtractor can only be used to extract file features")
def get_instructions(self, f, bb):
raise NotImplementedError("PefileFeatureExtractor can only be used to extract file features")
def extract_insn_features(self, f, bb, insn):
raise NotImplementedError("PefileFeatureExtractor can only be used to extract file features")
def is_library_function(self, va):
raise NotImplementedError("PefileFeatureExtractor can only be used to extract file features")
def get_function_name(self, va):
raise NotImplementedError("PefileFeatureExtractor can only be used to extract file features")
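A usage sketch ("sample.exe_" is a placeholder); anything beyond file or global scope raises, as shown above:
extractor = PefileFeatureExtractor("sample.exe_")
for feature, va in extractor.extract_file_features():
    print(hex(va), feature)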


@@ -0,0 +1,52 @@
import sys
import types
from smda.common.SmdaReport import SmdaReport
from smda.common.SmdaInstruction import SmdaInstruction
import capa.features.extractors.smda.file
import capa.features.extractors.smda.insn
import capa.features.extractors.smda.function
import capa.features.extractors.smda.basicblock
from capa.main import UnsupportedRuntimeError
from capa.features.extractors import FeatureExtractor
class SmdaFeatureExtractor(FeatureExtractor):
def __init__(self, smda_report: SmdaReport, path):
super(SmdaFeatureExtractor, self).__init__()
if sys.version_info < (3, 0):
raise UnsupportedRuntimeError("SMDA should only be used with Python 3.")
self.smda_report = smda_report
self.path = path
def get_base_address(self):
return self.smda_report.base_addr
def extract_file_features(self):
for feature, va in capa.features.extractors.smda.file.extract_features(self.smda_report, self.path):
yield feature, va
def get_functions(self):
for function in self.smda_report.getFunctions():
yield function
def extract_function_features(self, f):
for feature, va in capa.features.extractors.smda.function.extract_features(f):
yield feature, va
def get_basic_blocks(self, f):
for bb in f.getBlocks():
yield bb
def extract_basic_block_features(self, f, bb):
for feature, va in capa.features.extractors.smda.basicblock.extract_features(f, bb):
yield feature, va
def get_instructions(self, f, bb):
for smda_ins in bb.getInstructions():
yield smda_ins
def extract_insn_features(self, f, bb, insn):
for feature, va in capa.features.extractors.smda.insn.extract_features(f, bb, insn):
yield feature, va


@@ -1,7 +1,8 @@
import sys
import string
import struct
from capa.features.common import Characteristic
from capa.features import Characteristic
from capa.features.basicblock import BasicBlock
from capa.features.extractors.helpers import MIN_STACKSTRING_LEN
@@ -14,7 +15,7 @@ def _bb_has_tight_loop(f, bb):
def extract_bb_tight_loop(f, bb):
"""check basic block for tight loop indicators"""
""" check basic block for tight loop indicators """
if _bb_has_tight_loop(f, bb):
yield Characteristic("tight loop"), bb.offset
@@ -38,7 +39,7 @@ def get_operands(smda_ins):
def extract_stackstring(f, bb):
"""check basic block for stackstring indicators"""
""" check basic block for stackstring indicators """
if _bb_has_stackstring(f, bb):
yield Characteristic("stack string"), bb.offset
@@ -116,7 +117,7 @@ def extract_features(f, bb):
bb (smda.common.SmdaBasicBlock): the basic block to process.
yields:
Tuple[Feature, int]: the features and their location found in this basic block.
Feature, set[VA]: the features and their location found in this basic block.
"""
yield BasicBlock(), bb.offset
for bb_handler in BASIC_BLOCK_HANDLERS:


@@ -1,53 +0,0 @@
from smda.common.SmdaReport import SmdaReport
import capa.features.extractors.common
import capa.features.extractors.smda.file
import capa.features.extractors.smda.insn
import capa.features.extractors.smda.global_
import capa.features.extractors.smda.function
import capa.features.extractors.smda.basicblock
from capa.features.extractors.base_extractor import FeatureExtractor
class SmdaFeatureExtractor(FeatureExtractor):
def __init__(self, smda_report: SmdaReport, path):
super(SmdaFeatureExtractor, self).__init__()
self.smda_report = smda_report
self.path = path
with open(self.path, "rb") as f:
self.buf = f.read()
# pre-compute these because we'll yield them at *every* scope.
self.global_features = []
self.global_features.extend(capa.features.extractors.common.extract_os(self.buf))
self.global_features.extend(capa.features.extractors.smda.global_.extract_arch(self.smda_report))
def get_base_address(self):
return self.smda_report.base_addr
def extract_global_features(self):
yield from self.global_features
def extract_file_features(self):
yield from capa.features.extractors.smda.file.extract_features(self.smda_report, self.buf)
def get_functions(self):
for function in self.smda_report.getFunctions():
yield function
def extract_function_features(self, f):
yield from capa.features.extractors.smda.function.extract_features(f)
def get_basic_blocks(self, f):
for bb in f.getBlocks():
yield bb
def extract_basic_block_features(self, f, bb):
yield from capa.features.extractors.smda.basicblock.extract_features(f, bb)
def get_instructions(self, f, bb):
for smda_ins in bb.getInstructions():
yield smda_ins
def extract_insn_features(self, f, bb, insn):
yield from capa.features.extractors.smda.insn.extract_features(f, bb, insn)


@@ -1,37 +1,86 @@
import struct
# if we have SMDA we definitely have lief
import lief
import capa.features.extractors.common
import capa.features.extractors.helpers
import capa.features.extractors.strings
from capa.features import String, Characteristic
from capa.features.file import Export, Import, Section
from capa.features.common import String, Characteristic
def extract_file_embedded_pe(buf, **kwargs):
for offset, _ in capa.features.extractors.helpers.carve_pe(buf, 1):
def carve(pbytes, offset=0):
"""
Yield (offset, xor key) tuples of embedded PEs
Based on the version from vivisect:
https://github.com/vivisect/vivisect/blob/7be4037b1cecc4551b397f840405a1fc606f9b53/PE/carve.py#L19
And its IDA adaptation:
capa/features/extractors/ida/file.py
"""
mz_xor = [
(
capa.features.extractors.helpers.xor_static(b"MZ", i),
capa.features.extractors.helpers.xor_static(b"PE", i),
i,
)
for i in range(256)
]
pblen = len(pbytes)
todo = [(pbytes.find(mzx, offset), mzx, pex, i) for mzx, pex, i in mz_xor]
todo = [(off, mzx, pex, i) for (off, mzx, pex, i) in todo if off != -1]
while len(todo):
off, mzx, pex, i = todo.pop()
# The MZ header has one field we will check
# e_lfanew is at 0x3c
e_lfanew = off + 0x3C
if pblen < (e_lfanew + 4):
continue
newoff = struct.unpack("<I", capa.features.extractors.helpers.xor_static(pbytes[e_lfanew : e_lfanew + 4], i))[0]
nextres = pbytes.find(mzx, off + 1)
if nextres != -1:
todo.append((nextres, mzx, pex, i))
peoff = off + newoff
if pblen < (peoff + 2):
continue
if pbytes[peoff : peoff + 2] == pex:
yield (off, i)
def extract_file_embedded_pe(smda_report, file_path):
with open(file_path, "rb") as f:
fbytes = f.read()
for offset, i in carve(fbytes, 1):
yield Characteristic("embedded pe"), offset
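A sketch of `carve` on its own; offsets are relative to the buffer and the second element is the recovered XOR key ("sample.exe_" is a placeholder):
with open("sample.exe_", "rb") as f:
    data = f.read()
for offset, key in carve(data, 1):
    print("embedded pe at 0x%x, xor key 0x%02x" % (offset, key))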
def extract_file_export_names(buf, **kwargs):
lief_binary = lief.parse(buf)
def extract_file_export_names(smda_report, file_path):
lief_binary = lief.parse(file_path)
if lief_binary is not None:
for function in lief_binary.exported_functions:
yield Export(function.name), function.address
def extract_file_import_names(smda_report, buf):
def extract_file_import_names(smda_report, file_path):
# extract import table info via LIEF
lief_binary = lief.parse(buf)
lief_binary = lief.parse(file_path)
if not isinstance(lief_binary, lief.PE.Binary):
return
for imported_library in lief_binary.imports:
library_name = imported_library.name.lower()
library_name = library_name[:-4] if library_name.endswith(".dll") else library_name
for func in imported_library.entries:
va = func.iat_address + smda_report.base_addr
if func.name:
va = func.iat_address + smda_report.base_addr
for name in capa.features.extractors.helpers.generate_symbols(library_name, func.name):
yield Import(name), va
elif func.is_ordinal:
@@ -39,8 +88,8 @@ def extract_file_import_names(smda_report, buf):
yield Import(name), va
def extract_file_section_names(buf, **kwargs):
lief_binary = lief.parse(buf)
def extract_file_section_names(smda_report, file_path):
lief_binary = lief.parse(file_path)
if not isinstance(lief_binary, lief.PE.Binary):
return
if lief_binary and lief_binary.sections:
@@ -49,45 +98,35 @@ def extract_file_section_names(buf, **kwargs):
yield Section(section.name), base_address + section.virtual_address
def extract_file_strings(buf, **kwargs):
def extract_file_strings(smda_report, file_path):
"""
extract ASCII and UTF-16 LE strings from file
"""
for s in capa.features.extractors.strings.extract_ascii_strings(buf):
with open(file_path, "rb") as f:
b = f.read()
for s in capa.features.extractors.strings.extract_ascii_strings(b):
yield String(s.s), s.offset
for s in capa.features.extractors.strings.extract_unicode_strings(buf):
for s in capa.features.extractors.strings.extract_unicode_strings(b):
yield String(s.s), s.offset
def extract_file_function_names(smda_report, **kwargs):
"""
extract the names of statically-linked library functions.
"""
if False:
# using a `yield` here to force this to be a generator, not a function.
yield NotImplementedError("SMDA doesn't have library matching")
return
def extract_file_format(buf, **kwargs):
yield from capa.features.extractors.common.extract_format(buf)
def extract_features(smda_report, buf):
def extract_features(smda_report, file_path):
"""
extract file features from given workspace
args:
smda_report (smda.common.SmdaReport): a SmdaReport
buf: the raw bytes of the sample
file_path: path to the input file
yields:
Tuple[Feature, VA]: a feature and its location.
"""
for file_handler in FILE_HANDLERS:
for feature, va in file_handler(smda_report=smda_report, buf=buf):
result = file_handler(smda_report, file_path)
for feature, va in file_handler(smda_report, file_path):
yield feature, va
@@ -97,6 +136,4 @@ FILE_HANDLERS = (
extract_file_import_names,
extract_file_section_names,
extract_file_strings,
extract_file_function_names,
extract_file_format,
)


@@ -1,4 +1,4 @@
from capa.features.common import Characteristic
from capa.features import Characteristic
from capa.features.extractors import loops
@@ -28,7 +28,7 @@ def extract_features(f):
f (smda.common.SmdaFunction): the function from which to extract features
yields:
Tuple[Feature, int]: the features and their location found in this function.
Feature, set[VA]: the features and their location found in this function.
"""
for func_handler in FUNCTION_HANDLERS:
for feature, va in func_handler(f):


@@ -1,20 +0,0 @@
import logging
from capa.features.common import ARCH_I386, ARCH_AMD64, Arch
logger = logging.getLogger(__name__)
def extract_arch(smda_report):
if smda_report.architecture == "intel":
if smda_report.bitness == 32:
yield Arch(ARCH_I386), 0x0
elif smda_report.bitness == 64:
yield Arch(ARCH_AMD64), 0x0
else:
# we likely end up here:
# 1. handling a new architecture (e.g. aarch64)
#
# for (1), this logic will need to be updated as the format is implemented.
logger.debug("unsupported architecture: %s", smda_report.architecture)
return


@@ -5,16 +5,16 @@ import struct
from smda.common.SmdaReport import SmdaReport
import capa.features.extractors.helpers
from capa.features.insn import API, Number, Offset, Mnemonic
from capa.features.common import (
BITNESS_X32,
BITNESS_X64,
from capa.features import (
ARCH_X32,
ARCH_X64,
MAX_BYTES_FEATURE_SIZE,
THUNK_CHAIN_DEPTH_DELTA,
Bytes,
String,
Characteristic,
)
from capa.features.insn import API, Number, Offset, Mnemonic
# security cookie checks may perform non-zeroing XORs, these are expected within a certain
# byte range within the first and returning basic blocks, this helps to reduce FP features
@@ -23,12 +23,12 @@ PATTERN_HEXNUM = re.compile(r"[+\-] (?P<num>0x[a-fA-F0-9]+)")
PATTERN_SINGLENUM = re.compile(r"[+\-] (?P<num>[0-9])")
def get_bitness(smda_report):
def get_arch(smda_report):
if smda_report.architecture == "intel":
if smda_report.bitness == 32:
return BITNESS_X32
return ARCH_X32
elif smda_report.bitness == 64:
return BITNESS_X64
return ARCH_X64
else:
raise NotImplementedError
@@ -85,7 +85,7 @@ def extract_insn_number_features(f, bb, insn):
for operand in operands:
try:
yield Number(int(operand, 16)), insn.offset
yield Number(int(operand, 16), bitness=get_bitness(f.smda_report)), insn.offset
yield Number(int(operand, 16), arch=get_arch(f.smda_report)), insn.offset
except:
continue
@@ -97,7 +97,7 @@ def read_bytes(smda_report, va, num_bytes=None):
rva = va - smda_report.base_addr
if smda_report.buffer is None:
raise ValueError("buffer is empty")
return
buffer_end = len(smda_report.buffer)
max_bytes = num_bytes if num_bytes is not None else MAX_BYTES_FEATURE_SIZE
if rva + max_bytes > buffer_end:
@@ -228,7 +228,7 @@ def extract_insn_offset_features(f, bb, insn):
number = int(number_int.group("num"))
number = -1 * number if number_int.group().startswith("-") else number
yield Offset(number), insn.offset
yield Offset(number, bitness=get_bitness(f.smda_report)), insn.offset
yield Offset(number, arch=get_arch(f.smda_report)), insn.offset
def is_security_cookie(f, bb, insn):
@@ -293,7 +293,7 @@ def extract_insn_peb_access_characteristic_features(f, bb, insn):
def extract_insn_segment_access_features(f, bb, insn):
"""parse the instruction for access to fs or gs"""
""" parse the instruction for access to fs or gs """
operands = [o.strip() for o in insn.operands.split(",")]
for operand in operands:
if "fs:" in operand:
@@ -336,7 +336,7 @@ def extract_function_calls_from(f, bb, insn):
# mark as recursive
yield Characteristic("recursive call"), outref
if insn.offset in f.apirefs:
yield Characteristic("calls from"), insn.offset
yield Characteristic("calls from"), f.apirefs[insn.offset]
# this is a feature that's most relevant at the function or basic block scope,
@@ -370,7 +370,7 @@ def extract_features(f, bb, insn):
insn (smda.common.SmdaInstruction): the instruction to process.
yields:
Tuple[Feature, int]: the features and their location found in this insn.
Feature, set[VA]: the features and their location found in this insn.
"""
for insn_handler in INSTRUCTION_HANDLERS:
for feature, va in insn_handler(f, bb, insn):


@@ -0,0 +1,85 @@
# Copyright (C) 2020 FireEye, Inc. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at: [package root]/LICENSE.txt
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import types
import file
import insn
import function
import viv_utils
import basicblock
import capa.features.extractors
import capa.features.extractors.viv.file
import capa.features.extractors.viv.insn
import capa.features.extractors.viv.function
import capa.features.extractors.viv.basicblock
from capa.features.extractors import FeatureExtractor
__all__ = ["file", "function", "basicblock", "insn"]
def get_va(self):
try:
# vivisect type
return self.va
except AttributeError:
pass
raise TypeError()
def add_va_int_cast(o):
"""
dynamically add a cast-to-int (`__int__`) method to the given object
that returns the value of the `.va` property.
this bit of skullduggery lets us cast viv-utils objects as ints.
the correct way of doing this is to update viv-utils (or subclass the objects here).
"""
setattr(o, "__int__", types.MethodType(get_va, o, type(o)))
return o
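A sketch of the wrapper in use, with `vw` and `fva` as placeholders for a workspace and function address:
f = add_va_int_cast(viv_utils.Function(vw, fva))
print(hex(int(f)))  # int(f) now resolves to f.va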
class VivisectFeatureExtractor(FeatureExtractor):
def __init__(self, vw, path):
super(VivisectFeatureExtractor, self).__init__()
self.vw = vw
self.path = path
def get_base_address(self):
# assume there is only one file loaded into the vw
return list(self.vw.filemeta.values())[0]["imagebase"]
def extract_file_features(self):
for feature, va in capa.features.extractors.viv.file.extract_features(self.vw, self.path):
yield feature, va
def get_functions(self):
for va in sorted(self.vw.getFunctions()):
yield add_va_int_cast(viv_utils.Function(self.vw, va))
def extract_function_features(self, f):
for feature, va in capa.features.extractors.viv.function.extract_features(f):
yield feature, va
def get_basic_blocks(self, f):
for bb in f.basic_blocks:
yield add_va_int_cast(bb)
def extract_basic_block_features(self, f, bb):
for feature, va in capa.features.extractors.viv.basicblock.extract_features(f, bb):
yield feature, va
def get_instructions(self, f, bb):
for insn in bb.instructions:
yield add_va_int_cast(insn)
def extract_insn_features(self, f, bb, insn):
for feature, va in capa.features.extractors.viv.insn.extract_features(f, bb, insn):
yield feature, va


@@ -10,9 +10,9 @@ import string
import struct
import envi
import envi.archs.i386.disasm
import vivisect.const
from capa.features.common import Characteristic
from capa.features import Characteristic
from capa.features.basicblock import BasicBlock
from capa.features.extractors.helpers import MIN_STACKSTRING_LEN
@@ -37,7 +37,7 @@ def _bb_has_tight_loop(f, bb):
"""
if len(bb.instructions) > 0:
for bva, bflags in bb.instructions[-1].getBranches():
if bflags & envi.BR_COND:
if bflags & vivisect.envi.BR_COND:
if bva == bb.va:
return True
@@ -45,7 +45,7 @@ def _bb_has_tight_loop(f, bb):
def extract_bb_tight_loop(f, bb):
"""check basic block for tight loop indicators"""
""" check basic block for tight loop indicators """
if _bb_has_tight_loop(f, bb):
yield Characteristic("tight loop"), bb.va
@@ -68,12 +68,12 @@ def _bb_has_stackstring(f, bb):
def extract_stackstring(f, bb):
"""check basic block for stackstring indicators"""
""" check basic block for stackstring indicators """
if _bb_has_stackstring(f, bb):
yield Characteristic("stack string"), bb.va
def is_mov_imm_to_stack(instr: envi.archs.i386.disasm.i386Opcode) -> bool:
def is_mov_imm_to_stack(instr):
"""
Return if instruction moves immediate onto stack
"""
@@ -105,7 +105,7 @@ def is_mov_imm_to_stack(instr: envi.archs.i386.disasm.i386Opcode) -> bool:
return True
def get_printable_len(oper: envi.archs.i386.disasm.i386ImmOper) -> int:
def get_printable_len(oper):
"""
Return string length if all operand bytes are ascii or utf16-le printable
"""
@@ -117,30 +117,20 @@ def get_printable_len(oper: envi.archs.i386.disasm.i386ImmOper) -> int:
chars = struct.pack("<I", oper.imm)
elif oper.tsize == 8:
chars = struct.pack("<Q", oper.imm)
else:
raise ValueError("unexpected oper.tsize: %d" % (oper.tsize))
if is_printable_ascii(chars):
return oper.tsize
elif is_printable_utf16le(chars):
if is_printable_utf16le(chars):
return oper.tsize / 2
else:
return 0
return 0
def is_printable_ascii(chars: bytes) -> bool:
try:
chars_str = chars.decode("ascii")
except UnicodeDecodeError:
return False
else:
return all(c in string.printable for c in chars_str)
def is_printable_ascii(chars):
return all(ord(c) < 127 and c in string.printable for c in chars)
def is_printable_utf16le(chars: bytes) -> bool:
if all(c == b"\x00" for c in chars[1::2]):
def is_printable_utf16le(chars):
if all(c == "\x00" for c in chars[1::2]):
return is_printable_ascii(chars[::2])
return False
def extract_features(f, bb):
@@ -152,7 +142,7 @@ def extract_features(f, bb):
bb (viv_utils.BasicBlock): the basic block to process.
yields:
Tuple[Feature, int]: the features and their location found in this basic block.
Feature, set[VA]: the features and their location found in this basic block.
"""
yield BasicBlock(), bb.va
for bb_handler in BASIC_BLOCK_HANDLERS:


@@ -1,84 +0,0 @@
# Copyright (C) 2020 FireEye, Inc. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at: [package root]/LICENSE.txt
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import logging
import viv_utils
import viv_utils.flirt
import capa.features.extractors.common
import capa.features.extractors.viv.file
import capa.features.extractors.viv.insn
import capa.features.extractors.viv.global_
import capa.features.extractors.viv.function
import capa.features.extractors.viv.basicblock
from capa.features.extractors.base_extractor import FeatureExtractor
logger = logging.getLogger(__name__)
class InstructionHandle:
"""this acts like a vivisect.Opcode but with an __int__() method"""
def __init__(self, inner):
self._inner = inner
def __int__(self):
return self.va
def __getattr__(self, name):
return getattr(self._inner, name)
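A sketch of the handle in use, assuming a workspace `vw` and instruction address `va` are in hand:
insn = InstructionHandle(vw.parseOpcode(va))
print(hex(int(insn)), insn.mnem)  # __int__ uses .va; other attributes forward to the wrapped opcode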
class VivisectFeatureExtractor(FeatureExtractor):
def __init__(self, vw, path):
super(VivisectFeatureExtractor, self).__init__()
self.vw = vw
self.path = path
with open(self.path, "rb") as f:
self.buf = f.read()
# pre-compute these because we'll yield them at *every* scope.
self.global_features = []
self.global_features.extend(capa.features.extractors.common.extract_os(self.buf))
self.global_features.extend(capa.features.extractors.viv.global_.extract_arch(self.vw))
def get_base_address(self):
# assume there is only one file loaded into the vw
return list(self.vw.filemeta.values())[0]["imagebase"]
def extract_global_features(self):
yield from self.global_features
def extract_file_features(self):
yield from capa.features.extractors.viv.file.extract_features(self.vw, self.buf)
def get_functions(self):
for va in sorted(self.vw.getFunctions()):
yield viv_utils.Function(self.vw, va)
def extract_function_features(self, f):
yield from capa.features.extractors.viv.function.extract_features(f)
def get_basic_blocks(self, f):
return f.basic_blocks
def extract_basic_block_features(self, f, bb):
yield from capa.features.extractors.viv.basicblock.extract_features(f, bb)
def get_instructions(self, f, bb):
for insn in bb.instructions:
yield InstructionHandle(insn)
def extract_insn_features(self, f, bb, insn):
yield from capa.features.extractors.viv.insn.extract_features(f, bb, insn)
def is_library_function(self, va):
return viv_utils.flirt.is_library_function(self.vw, va)
def get_function_name(self, va):
return viv_utils.get_function_name(self.vw, va)
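A usage sketch, assuming viv-utils' workspace loader and a placeholder sample name:
vw = viv_utils.getWorkspace("sample.exe_", should_save=False)  # assumed viv-utils helper
extractor = VivisectFeatureExtractor(vw, "sample.exe_")
for feature, va in extractor.extract_file_features():
    print(hex(va), feature)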


@@ -7,28 +7,27 @@
# See the License for the specific language governing permissions and limitations under the License.
import PE.carve as pe_carve # vivisect PE
import viv_utils
import viv_utils.flirt
import capa.features.insn
import capa.features.extractors.common
import capa.features.extractors.helpers
import capa.features.extractors.strings
from capa.features.file import Export, Import, Section, FunctionName
from capa.features.common import String, Characteristic
from capa.features import String, Characteristic
from capa.features.file import Export, Import, Section
def extract_file_embedded_pe(buf, **kwargs):
for offset, _ in pe_carve.carve(buf, 1):
def extract_file_embedded_pe(vw, file_path):
with open(file_path, "rb") as f:
fbytes = f.read()
for offset, i in pe_carve.carve(fbytes, 1):
yield Characteristic("embedded pe"), offset
def extract_file_export_names(vw, **kwargs):
for va, _, name, _ in vw.getExports():
def extract_file_export_names(vw, file_path):
for va, etype, name, _ in vw.getExports():
yield Export(name), va
def extract_file_import_names(vw, **kwargs):
def extract_file_import_names(vw, file_path):
"""
extract imported function names
1. imports by ordinal:
@@ -39,7 +38,7 @@ def extract_file_import_names(vw, **kwargs):
"""
for va, _, _, tinfo in vw.getImports():
# vivisect source: tinfo = "%s.%s" % (libname, impname)
modname, impname = tinfo.split(".", 1)
modname, impname = tinfo.split(".")
if is_viv_ord_impname(impname):
# replace ord prefix with #
impname = "#%s" % impname[len("ord") :]
@@ -48,7 +47,7 @@ def extract_file_import_names(vw, **kwargs):
yield Import(name), va
def is_viv_ord_impname(impname: str) -> bool:
def is_viv_ord_impname(impname):
"""
return if import name matches vivisect's ordinal naming scheme `'ord%d' % ord`
"""
@@ -62,43 +61,39 @@ def is_viv_ord_impname(impname: str) -> bool:
return True
def extract_file_section_names(vw, **kwargs):
def extract_file_section_names(vw, file_path):
for va, _, segname, _ in vw.getSegments():
yield Section(segname), va
def extract_file_strings(buf, **kwargs):
yield from capa.features.extractors.common.extract_file_strings(buf)
def extract_file_function_names(vw, **kwargs):
def extract_file_strings(vw, file_path):
"""
extract the names of statically-linked library functions.
extract ASCII and UTF-16 LE strings from file
"""
for va in sorted(vw.getFunctions()):
if viv_utils.flirt.is_library_function(vw, va):
name = viv_utils.get_function_name(vw, va)
yield FunctionName(name), va
with open(file_path, "rb") as f:
b = f.read()
for s in capa.features.extractors.strings.extract_ascii_strings(b):
yield String(s.s), s.offset
for s in capa.features.extractors.strings.extract_unicode_strings(b):
yield String(s.s), s.offset
def extract_file_format(buf, **kwargs):
yield from capa.features.extractors.common.extract_format(buf)
def extract_features(vw, buf: bytes):
def extract_features(vw, file_path):
"""
extract file features from given workspace
args:
vw (vivisect.VivWorkspace): the vivisect workspace
buf: the raw input file bytes
file_path: path to the input file
yields:
Tuple[Feature, VA]: a feature and its location.
"""
for file_handler in FILE_HANDLERS:
for feature, va in file_handler(vw=vw, buf=buf): # type: ignore
for feature, va in file_handler(vw, file_path):
yield feature, va
@@ -108,6 +103,4 @@ FILE_HANDLERS = (
extract_file_import_names,
extract_file_section_names,
extract_file_strings,
extract_file_function_names,
extract_file_format,
)


@@ -6,10 +6,9 @@
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import envi
import vivisect.const
from capa.features.common import Characteristic
from capa.features import Characteristic
from capa.features.extractors import loops
@@ -42,9 +41,9 @@ def extract_function_loop(f):
for bva, bflags in bb.instructions[-1].getBranches():
# vivisect does not set branch flags for non-conditional jmp so add explicit check
if (
bflags & envi.BR_COND
or bflags & envi.BR_FALL
or bflags & envi.BR_TABLE
bflags & vivisect.envi.BR_COND
or bflags & vivisect.envi.BR_FALL
or bflags & vivisect.envi.BR_TABLE
or bb.instructions[-1].mnem == "jmp"
):
edges.append((bb.va, bva))
@@ -61,7 +60,7 @@ def extract_features(f):
f (viv_utils.Function): the function from which to extract features
yields:
Tuple[Feature, int]: the features and their location found in this function.
Feature, set[VA]: the features and their location found in this function.
"""
for func_handler in FUNCTION_HANDLERS:
for feature, va in func_handler(f):


@@ -1,24 +0,0 @@
import logging
import envi.archs.i386
import envi.archs.amd64
from capa.features.common import ARCH_I386, ARCH_AMD64, Arch
logger = logging.getLogger(__name__)
def extract_arch(vw):
if isinstance(vw.arch, envi.archs.amd64.Amd64Module):
yield Arch(ARCH_AMD64), 0x0
elif isinstance(vw.arch, envi.archs.i386.i386Module):
yield Arch(ARCH_I386), 0x0
else:
# we likely end up here:
# 1. handling a new architecture (e.g. aarch64)
#
# for (1), this logic will need to be updated as the format is implemented.
logger.debug("unsupported architecture: %s", vw.arch.__class__.__name__)
return


@@ -5,13 +5,10 @@
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
from typing import Optional
from vivisect import VivWorkspace
from vivisect.const import XR_TO, REF_CODE
def get_coderef_from(vw: VivWorkspace, va: int) -> Optional[int]:
def get_coderef_from(vw, va):
"""
return first code `tova` whose origin is the specified va
return None if no code reference is found


@@ -7,16 +7,11 @@
# See the License for the specific language governing permissions and limitations under the License.
import collections
from typing import TYPE_CHECKING, Set, List, Deque, Tuple, Union, Optional
import envi
import vivisect.const
import envi.archs.i386.disasm
import envi.archs.amd64.disasm
from vivisect import VivWorkspace
if TYPE_CHECKING:
from capa.features.extractors.viv.extractor import InstructionHandle
# pull out consts for lookup performance
i386RegOper = envi.archs.i386.disasm.i386RegOper
@@ -31,7 +26,7 @@ FAR_BRANCH_MASK = envi.BR_PROC | envi.BR_DEREF | envi.BR_ARCH
DESTRUCTIVE_MNEMONICS = ("mov", "lea", "pop", "xor")
def get_previous_instructions(vw: VivWorkspace, va: int) -> List[int]:
def get_previous_instructions(vw, va):
"""
collect the instructions that flow to the given address, local to the current function.
@@ -48,14 +43,12 @@ def get_previous_instructions(vw: VivWorkspace, va: int) -> List[int]:
# ensure that it fallsthrough to this one.
loc = vw.getPrevLocation(va, adjacent=True)
if loc is not None:
ploc = vw.getPrevLocation(va, adjacent=True)
if ploc is not None:
# from vivisect.const:
# location: (L_VA, L_SIZE, L_LTYPE, L_TINFO)
(pva, _, ptype, pinfo) = ploc
# from vivisect.const:
# location: (L_VA, L_SIZE, L_LTYPE, L_TINFO)
(pva, _, ptype, pinfo) = vw.getPrevLocation(va, adjacent=True)
if ptype == LOC_OP and not (pinfo & IF_NOFALL):
ret.append(pva)
if ptype == LOC_OP and not (pinfo & IF_NOFALL):
ret.append(pva)
# find any code refs, e.g. jmp, to this location.
# ignore any calls.
@@ -74,7 +67,7 @@ class NotFoundError(Exception):
pass
def find_definition(vw: VivWorkspace, va: int, reg: int) -> Tuple[int, Union[int, None]]:
def find_definition(vw, va, reg):
"""
scan backwards from the given address looking for assignments to the given register.
if a constant, return that value.
@@ -90,8 +83,8 @@ def find_definition(vw: VivWorkspace, va: int, reg: int) -> Tuple[int, Union[int
raises:
NotFoundError: when the definition cannot be found.
"""
q = collections.deque() # type: Deque[int]
seen = set([]) # type: Set[int]
q = collections.deque()
seen = set([])
q.extend(get_previous_instructions(vw, va))
while q:
@@ -135,16 +128,14 @@ def find_definition(vw: VivWorkspace, va: int, reg: int) -> Tuple[int, Union[int
raise NotFoundError()
def is_indirect_call(vw: VivWorkspace, va: int, insn: Optional["InstructionHandle"] = None) -> bool:
def is_indirect_call(vw, va, insn=None):
if insn is None:
insn = vw.parseOpcode(va)
return insn.mnem in ("call", "jmp") and isinstance(insn.opers[0], envi.archs.i386.disasm.i386RegOper)
def resolve_indirect_call(
vw: VivWorkspace, va: int, insn: Optional["InstructionHandle"] = None
) -> Tuple[int, Optional[int]]:
def resolve_indirect_call(vw, va, insn=None):
"""
inspect the given indirect call instruction and attempt to resolve the target address.


@@ -5,28 +5,22 @@
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import envi
import envi.exc
import viv_utils
import envi.memory
import viv_utils.flirt
import envi.archs.i386.regs
import envi.archs.amd64.regs
import envi.archs.i386.disasm
import envi.archs.amd64.disasm
import capa.features.extractors.helpers
import capa.features.extractors.viv.helpers
from capa.features.insn import API, Number, Offset, Mnemonic
from capa.features.common import (
BITNESS_X32,
BITNESS_X64,
from capa.features import (
ARCH_X32,
ARCH_X64,
MAX_BYTES_FEATURE_SIZE,
THUNK_CHAIN_DEPTH_DELTA,
Bytes,
String,
Characteristic,
)
from capa.features.insn import API, Number, Offset, Mnemonic
from capa.features.extractors.viv.indirect_calls import NotFoundError, resolve_indirect_call
# security cookie checks may perform non-zeroing XORs, these are expected within a certain
@@ -34,12 +28,12 @@ from capa.features.extractors.viv.indirect_calls import NotFoundError, resolve_i
SECURITY_COOKIE_BYTES_DELTA = 0x40
def get_bitness(vw):
bitness = vw.getMeta("Architecture")
if bitness == "i386":
return BITNESS_X32
elif bitness == "amd64":
return BITNESS_X64
def get_arch(vw):
arch = vw.getMeta("Architecture")
if arch == "i386":
return ARCH_X32
elif arch == "amd64":
return ARCH_X64
def interface_extract_instruction_XXX(f, bb, insn):
@@ -80,6 +74,7 @@ def extract_insn_api_features(f, bb, insn):
# example:
#
# call dword [0x00473038]
if insn.mnem not in ("call", "jmp"):
return
@@ -101,7 +96,7 @@ def extract_insn_api_features(f, bb, insn):
# call via thunk on x86,
# see 9324d1a8ae37a36ae560c37448c9705a at 0x407985
#
# this is also how calls to internal functions may be decoded on x32 and x64.
# this is also how calls to internal functions may be decoded on x64.
# see Lab21-01.exe_:0x140001178
#
# follow chained thunks, e.g. in 82bf6347acf15e5d883715dc289d8a2b at 0x14005E0FF in
@@ -116,21 +111,12 @@ def extract_insn_api_features(f, bb, insn):
if not target:
return
if viv_utils.flirt.is_library_function(f.vw, target):
name = viv_utils.get_function_name(f.vw, target)
yield API(name), insn.va
return
for _ in range(THUNK_CHAIN_DEPTH_DELTA):
if target in imports:
dll, symbol = imports[target]
for name in capa.features.extractors.helpers.generate_symbols(dll, symbol):
yield API(name), insn.va
# if jump leads to an ENDBRANCH instruction, skip it
if f.vw.getByteDef(target)[1].startswith(b"\xf3\x0f\x1e"):
target += 4
target = capa.features.extractors.viv.helpers.get_coderef_from(f.vw, target)
if not target:
return
@@ -185,7 +171,7 @@ def extract_insn_number_features(f, bb, insn):
# assume its not also a constant.
continue
if insn.mnem == "add" and insn.opers[0].isReg() and insn.opers[0].reg == envi.archs.i386.regs.REG_ESP:
if insn.mnem == "add" and insn.opers[0].isReg() and insn.opers[0].reg == envi.archs.i386.disasm.REG_ESP:
# skip things like:
#
# .text:00401140 call sub_407E2B
@@ -193,7 +179,7 @@ def extract_insn_number_features(f, bb, insn):
return
yield Number(v), insn.va
yield Number(v, bitness=get_bitness(f.vw)), insn.va
yield Number(v, arch=get_arch(f.vw)), insn.va
def derefs(vw, p):
@@ -228,7 +214,7 @@ def derefs(vw, p):
p = next
def read_memory(vw, va: int, size: int) -> bytes:
def read_memory(vw, va, size):
# as documented in #176, vivisect will not readMemory() when the section is not marked readable.
#
# but here, we don't care about permissions.
@@ -241,10 +227,10 @@ def read_memory(vw, va: int, size: int) -> bytes:
mva, msize, mperms, mfname = mmap
offset = va - mva
return mbytes[offset : offset + size]
raise envi.exc.SegmentationViolation(va)
raise envi.SegmentationViolation(va)
def read_bytes(vw, va: int) -> bytes:
def read_bytes(vw, va):
"""
read up to MAX_BYTES_FEATURE_SIZE from the given address.
@@ -253,7 +239,7 @@ def read_bytes(vw, va: int) -> bytes:
"""
segm = vw.getSegment(va)
if not segm:
raise envi.exc.SegmentationViolation(va)
raise envi.SegmentationViolation()
segm_end = segm[0] + segm[1]
try:
@@ -262,7 +248,7 @@ def read_bytes(vw, va: int) -> bytes:
return read_memory(vw, va, segm_end - va)
else:
return read_memory(vw, va, MAX_BYTES_FEATURE_SIZE)
except envi.exc.SegmentationViolation:
except envi.SegmentationViolation:
raise
@@ -294,7 +280,7 @@ def extract_insn_bytes_features(f, bb, insn):
for v in derefs(f.vw, v):
try:
buf = read_bytes(f.vw, v)
except envi.exc.SegmentationViolation:
except envi.SegmentationViolation:
continue
if capa.features.extractors.helpers.all_zeros(buf):
@@ -303,10 +289,10 @@ def extract_insn_bytes_features(f, bb, insn):
yield Bytes(buf), insn.va
def read_string(vw, offset: int) -> str:
def read_string(vw, offset):
try:
alen = vw.detectString(offset)
except envi.exc.SegmentationViolation:
except envi.SegmentationViolation:
pass
else:
if alen > 0:
@@ -314,7 +300,7 @@ def read_string(vw, offset: int) -> str:
try:
ulen = vw.detectUnicode(offset)
except envi.exc.SegmentationViolation:
except envi.SegmentationViolation:
pass
except IndexError:
# potential vivisect bug detecting Unicode at segment end
@@ -375,21 +361,21 @@ def extract_insn_offset_features(f, bb, insn):
# reg ^
# disp
if isinstance(oper, envi.archs.i386.disasm.i386RegMemOper):
if oper.reg == envi.archs.i386.regs.REG_ESP:
if oper.reg == envi.archs.i386.disasm.REG_ESP:
continue
if oper.reg == envi.archs.i386.regs.REG_EBP:
if oper.reg == envi.archs.i386.disasm.REG_EBP:
continue
# TODO: do x64 support for real.
if oper.reg == envi.archs.amd64.regs.REG_RBP:
if oper.reg == envi.archs.amd64.disasm.REG_RBP:
continue
# viv already decodes offsets as signed
v = oper.disp
yield Offset(v), insn.va
yield Offset(v, bitness=get_bitness(f.vw)), insn.va
yield Offset(v, arch=get_arch(f.vw)), insn.va
# like: [esi + ecx + 16384]
# reg ^ ^
@@ -400,21 +386,21 @@ def extract_insn_offset_features(f, bb, insn):
v = oper.disp
yield Offset(v), insn.va
yield Offset(v, bitness=get_bitness(f.vw)), insn.va
yield Offset(v, arch=get_arch(f.vw)), insn.va
def is_security_cookie(f, bb, insn) -> bool:
def is_security_cookie(f, bb, insn):
"""
check if an instruction is related to security cookie checks
"""
# security cookie check should use SP or BP
oper = insn.opers[1]
if oper.isReg() and oper.reg not in [
envi.archs.i386.regs.REG_ESP,
envi.archs.i386.regs.REG_EBP,
envi.archs.i386.disasm.REG_ESP,
envi.archs.i386.disasm.REG_EBP,
# TODO: do x64 support for real.
envi.archs.amd64.regs.REG_RBP,
envi.archs.amd64.regs.REG_RSP,
envi.archs.amd64.disasm.REG_RBP,
envi.archs.amd64.disasm.REG_RSP,
]:
return False
@@ -490,7 +476,7 @@ def extract_insn_peb_access_characteristic_features(f, bb, insn):
def extract_insn_segment_access_features(f, bb, insn):
"""parse the instruction for access to fs or gs"""
""" parse the instruction for access to fs or gs """
prefix = insn.getPrefixName()
if prefix == "fs":
@@ -500,7 +486,7 @@ def extract_insn_segment_access_features(f, bb, insn):
yield Characteristic("gs access"), insn.va
def get_section(vw, va: int):
def get_section(vw, va):
for start, length, _, __ in vw.getMemoryMaps():
if start <= va < start + length:
return start
@@ -513,10 +499,6 @@ def extract_insn_cross_section_cflow(f, bb, insn):
inspect the instruction for a CALL or JMP that crosses section boundaries.
"""
for va, flags in insn.getBranches():
if va is None:
# va may be none for dynamic branches that haven't been resolved, such as `jmp eax`.
continue
if flags & envi.BR_FALL:
continue
@@ -611,7 +593,7 @@ def extract_features(f, bb, insn):
insn (vivisect...Instruction): the instruction to process.
yields:
Tuple[Feature, int]: the features and their location found in this insn.
Feature, set[VA]: the features and their location found in this insn.
"""
for insn_handler in INSTRUCTION_HANDLERS:
for feature, va in insn_handler(f, bb, insn):


@@ -6,33 +6,22 @@
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
from capa.features.common import Feature
from capa.features import Feature
class Export(Feature):
def __init__(self, value: str, description=None):
def __init__(self, value, description=None):
# value is export name
super(Export, self).__init__(value, description=description)
class Import(Feature):
def __init__(self, value: str, description=None):
def __init__(self, value, description=None):
# value is import name
super(Import, self).__init__(value, description=description)
class Section(Feature):
def __init__(self, value: str, description=None):
def __init__(self, value, description=None):
# value is section name
super(Section, self).__init__(value, description=description)
class FunctionName(Feature):
"""recognized name for statically linked function"""
def __init__(self, name: str, description=None):
# value is function name
super(FunctionName, self).__init__(name, description=description)
# override the name property set by `capa.features.Feature`
# that would be `functionname` (note missing dash)
self.name = "function-name"


@@ -19,10 +19,6 @@ json format:
...
},
'scopes': {
'global': [
(str(name), [any(arg), ...], int(va), ()),
...
},
'file': [
(str(name), [any(arg), ...], int(va), ()),
...
@@ -57,11 +53,11 @@ import json
import zlib
import logging
import capa.features
import capa.features.file
import capa.features.insn
import capa.features.common
import capa.features.basicblock
import capa.features.extractors.base_extractor
import capa.features.extractors
from capa.helpers import hex
logger = logging.getLogger(__name__)
@@ -71,7 +67,7 @@ def serialize_feature(feature):
return feature.freeze_serialize()
KNOWN_FEATURES = {F.__name__: F for F in capa.features.common.Feature.__subclasses__()}
KNOWN_FEATURES = {F.__name__: F for F in capa.features.Feature.__subclasses__()}
def deserialize_feature(doc):
@@ -84,7 +80,7 @@ def dumps(extractor):
serialize the given extractor to a string
args:
extractor: capa.features.extractors.base_extractor.FeatureExtractor:
extractor: capa.features.extractor.FeatureExtractor:
returns:
str: the serialized features.
@@ -94,15 +90,12 @@ def dumps(extractor):
"base address": extractor.get_base_address(),
"functions": {},
"scopes": {
"global": [],
"file": [],
"function": [],
"basic block": [],
"instruction": [],
},
}
for feature, va in extractor.extract_global_features():
ret["scopes"]["global"].append(serialize_feature(feature) + (hex(va), ()))
for feature, va in extractor.extract_file_features():
ret["scopes"]["file"].append(serialize_feature(feature) + (hex(va), ()))
@@ -129,7 +122,7 @@ def dumps(extractor):
)
for insnva, insn in sorted(
[(int(insn), insn) for insn in extractor.get_instructions(f, bb)], key=lambda p: p[0]
[(insn.__int__(), insn) for insn in extractor.get_instructions(f, bb)], key=lambda p: p[0]
):
ret["functions"][hex(f)][hex(bb)].append(hex(insnva))
@@ -157,7 +150,6 @@ def loads(s):
features = {
"base address": doc.get("base address"),
"global features": [],
"file features": [],
"functions": {},
}
@@ -187,12 +179,6 @@ def loads(s):
# ('MatchedRule', ('foo', ), '0x401000', ('0x401000', ))
# ^^^^^^^^^^^^^ ^^^^^^^^^ ^^^^^^^^^^ ^^^^^^^^^^^^^^
# feature name args addr func/bb/insn
for feature in doc.get("scopes", {}).get("global", []):
va, loc = feature[2:]
va = int(va, 0x10)
feature = deserialize_feature(feature[:2])
features["global features"].append((va, feature))
for feature in doc.get("scopes", {}).get("file", []):
va, loc = feature[2:]
va = int(va, 0x10)
@@ -231,7 +217,7 @@ def loads(s):
feature = deserialize_feature(feature[:2])
features["functions"][loc[0]]["basic blocks"][loc[1]]["instructions"][loc[2]]["features"].append((va, feature))
return capa.features.extractors.base_extractor.NullFeatureExtractor(features)
return capa.features.extractors.NullFeatureExtractor(features)
MAGIC = "capa0000".encode("ascii")
@@ -242,7 +228,7 @@ def dump(extractor):
return MAGIC + zlib.compress(dumps(extractor).encode("utf-8"))
def is_freeze(buf: bytes) -> bool:
def is_freeze(buf):
return buf[: len(MAGIC)] == MAGIC
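A sketch of a round trip through this format, assuming the module also provides a `load()` counterpart that strips MAGIC, inflates, and calls `loads()`:
buf = dump(extractor)
assert is_freeze(buf)
thawed = load(buf)  # assumed counterpart to dump()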
@@ -262,16 +248,35 @@ def main(argv=None):
if argv is None:
argv = sys.argv[1:]
formats = [
("auto", "(default) detect file type automatically"),
("pe", "Windows PE file"),
("sc32", "32-bit shellcode"),
("sc64", "64-bit shellcode"),
]
format_help = ", ".join(["%s: %s" % (f[0], f[1]) for f in formats])
parser = argparse.ArgumentParser(description="save capa features to a file")
capa.main.install_common_args(parser, {"sample", "format", "backend", "signatures"})
parser.add_argument("sample", type=str, help="Path to sample to analyze")
parser.add_argument("output", type=str, help="Path to output file")
parser.add_argument("-v", "--verbose", action="store_true", help="Enable verbose output")
parser.add_argument("-q", "--quiet", action="store_true", help="Disable all output but errors")
parser.add_argument(
"-f", "--format", choices=[f[0] for f in formats], default="auto", help="Select sample format, %s" % format_help
)
args = parser.parse_args(args=argv)
capa.main.handle_common_args(args)
sigpaths = capa.main.get_signatures(args.signatures)
extractor = capa.main.get_extractor(args.sample, args.format, args.backend, sigpaths, False)
if args.quiet:
logging.basicConfig(level=logging.ERROR)
logging.getLogger().setLevel(logging.ERROR)
elif args.verbose:
logging.basicConfig(level=logging.DEBUG)
logging.getLogger().setLevel(logging.DEBUG)
else:
logging.basicConfig(level=logging.INFO)
logging.getLogger().setLevel(logging.INFO)
extractor = capa.main.get_extractor(args.sample, args.format)
with open(args.output, "wb") as f:
f.write(dump(extractor))


@@ -6,12 +6,11 @@
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import capa.render.utils
from capa.features.common import Feature
from capa.features import Feature
class API(Feature):
def __init__(self, name: str, description=None):
def __init__(self, name, description=None):
# Downcase library name if given
if "." in name:
modname, _, impname = name.rpartition(".")
@@ -21,21 +20,21 @@ class API(Feature):
class Number(Feature):
def __init__(self, value: int, bitness=None, description=None):
super(Number, self).__init__(value, bitness=bitness, description=description)
def __init__(self, value, arch=None, description=None):
super(Number, self).__init__(value, arch=arch, description=description)
def get_value_str(self):
return capa.render.utils.hex(self.value)
return "0x%X" % self.value
class Offset(Feature):
def __init__(self, value: int, bitness=None, description=None):
super(Offset, self).__init__(value, bitness=bitness, description=description)
def __init__(self, value, arch=None, description=None):
super(Offset, self).__init__(value, arch=arch, description=description)
def get_value_str(self):
return capa.render.utils.hex(self.value)
return "0x%X" % self.value
class Mnemonic(Feature):
def __init__(self, value: str, description=None):
super(Mnemonic, self).__init__(value, description=description)
def __init__(self, value, description=None):
super(Mnemonic, self).__init__(value.lower(), description=description)
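The lower-casing is what lets mnemonic features from backends with different capitalization conventions compare equal; a sketch, assuming Feature equality is value-based:
assert Mnemonic("XOR") == Mnemonic("xor")  # both stored as "xor"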


@@ -12,21 +12,25 @@ _hex = hex
def hex(i):
return _hex(int(i))
# under py2.7, long integers get formatted with a trailing `L`
# and this is not pretty. so strip it out.
return _hex(oint(i)).rstrip("L")
def get_file_taste(sample_path: str) -> bytes:
def oint(i):
# there seems to be some trouble with using `int(viv_utils.Function)`
# with the black magic we do with binding the `__int__()` routine.
# i haven't had a chance to debug this yet (and i have no hotel wifi).
# so in the meantime, detect this, and call the method directly.
try:
return int(i)
except TypeError:
return i.__int__()
def get_file_taste(sample_path):
if not os.path.exists(sample_path):
raise IOError("sample path %s does not exist or cannot be accessed" % sample_path)
with open(sample_path, "rb") as f:
taste = f.read(8)
return taste
def is_runtime_ida():
try:
import idc
except ImportError:
return False
else:
return True


@@ -10,34 +10,28 @@ import logging
import datetime
import idc
import six
import idaapi
import idautils
import ida_bytes
import ida_loader
import capa
import capa.version
import capa.features.common
logger = logging.getLogger("capa")
# IDA version as returned by idaapi.get_kernel_version()
SUPPORTED_IDA_VERSIONS = (
SUPPORTED_IDA_VERSIONS = [
"7.1",
"7.2",
"7.3",
"7.4",
"7.5",
"7.6",
)
]
# file type as returned by idainfo.file_type
SUPPORTED_FILE_TYPES = (
idaapi.f_PE,
idaapi.f_ELF,
idaapi.f_BIN,
# idaapi.f_MACHO,
)
# arch type as returned by idainfo.procname
SUPPORTED_ARCH_TYPES = ("metapc",)
# file type names as returned by idaapi.get_file_type_name()
SUPPORTED_FILE_TYPES = [
"Portable executable for 80386 (PE)",
"Portable executable for AMD64 (PE)",
"Binary file", # x86/AMD64 shellcode support
]
def inform_user_ida_ui(message):
@@ -57,13 +51,13 @@ def is_supported_ida_version():
def is_supported_file_type():
file_info = idaapi.get_inf_structure()
if file_info.filetype not in SUPPORTED_FILE_TYPES:
file_type = idaapi.get_file_type_name()
if file_type not in SUPPORTED_FILE_TYPES:
logger.error("-" * 80)
logger.error(" Input file does not appear to be a supported file type.")
logger.error(" Input file does not appear to be a PE file.")
logger.error(" ")
logger.error(
" capa currently only supports analyzing PE, ELF, or binary files containing x86 (32- and 64-bit) shellcode."
" capa currently only supports analyzing PE files (or binary files containing x86/AMD64 shellcode) with IDA."
)
logger.error(" If you don't know the input file type, you can try using the `file` utility to guess it.")
logger.error("-" * 80)
@@ -71,25 +65,13 @@ def is_supported_file_type():
return True
def is_supported_arch_type():
file_info = idaapi.get_inf_structure()
if file_info.procname not in SUPPORTED_ARCH_TYPES or not any((file_info.is_32bit(), file_info.is_64bit())):
logger.error("-" * 80)
logger.error(" Input file does not appear to target a supported architecture.")
logger.error(" ")
logger.error(" capa currently only supports analyzing x86 (32- and 64-bit).")
logger.error("-" * 80)
return False
return True
def get_disasm_line(va):
""" """
return idc.generate_disasm_line(va, idc.GENDSM_FORCE_CODE)
def is_func_start(ea):
"""check if function stat exists at virtual address"""
""" check if function stat exists at virtual address """
f = idaapi.get_func(ea)
return f and f.start_ea == ea
@@ -100,26 +82,14 @@ def get_func_start_ea(ea):
return f if f is None else f.start_ea
def get_file_md5():
""" """
md5 = idautils.GetInputFileMD5()
if not isinstance(md5, str):
md5 = capa.features.common.bytes_to_str(md5)
return md5
def get_file_sha256():
""" """
sha256 = idaapi.retrieve_input_file_sha256()
if not isinstance(sha256, str):
sha256 = capa.features.common.bytes_to_str(sha256)
return sha256
def collect_metadata():
""" """
md5 = get_file_md5()
sha256 = get_file_sha256()
md5 = idautils.GetInputFileMD5()
if not isinstance(md5, six.string_types):
md5 = capa.features.bytes_to_str(md5)
sha256 = idaapi.retrieve_input_file_sha256()
if not isinstance(sha256, six.string_types):
sha256 = capa.features.bytes_to_str(sha256)
return {
"timestamp": datetime.datetime.now().isoformat(),
@@ -137,30 +107,3 @@ def collect_metadata():
},
"version": capa.version.__version__,
}
class IDAIO:
"""
An object that acts as a file-like object,
using bytes from the current IDB workspace.
"""
def __init__(self):
super(IDAIO, self).__init__()
self.offset = 0
def seek(self, offset, whence=0):
assert whence == 0
self.offset = offset
def read(self, size):
ea = ida_loader.get_fileregion_ea(self.offset)
if ea == idc.BADADDR:
# best guess, such as if file is mapped at address 0x0.
ea = self.offset
logger.debug("reading 0x%x bytes at 0x%x (ea: 0x%x)", size, self.offset, ea)
return ida_bytes.get_bytes(ea, size)
def close(self):
return
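A minimal usage sketch (hypothetical; assumes a PE is loaded in the current IDB):
# read the first file bytes as they were laid out on disk, via the IDB
ido = IDAIO()
ido.seek(0)
assert ido.read(2) == b"MZ"  # DOS header magic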

View File

@@ -1,82 +1,65 @@
![capa explorer](../../../.github/capa-explorer-logo.png)
capa explorer is an IDAPython plugin that integrates the FLARE team's open-source framework, capa, with IDA Pro. capa is a framework that uses a well-defined collection of rules to
capa explorer is an IDA Pro plugin written in Python that integrates the FLARE team's open-source framework, capa, with IDA. capa is a framework that uses a well-defined collection of rules to
identify capabilities in a program. You can run capa against a PE file or shellcode and it tells you what it thinks the program can do. For example, it might suggest that
the program is a backdoor, can install services, or relies on HTTP to communicate. capa explorer runs capa directly against your IDA Pro database (IDB) without requiring access
to the original binary file. Once a database has been analyzed, capa explorer helps you identify interesting areas of a program and build new capa rules using features extracted from your IDB.
the program is a backdoor, can install services, or relies on HTTP to communicate. You can use capa explorer to run capa directly on an IDA database without requiring access
to the source binary. Once a database has been analyzed, capa explorer can be used to quickly identify and navigate to interesting areas of a program
and dissect capa rule matches at the assembly level.
We love using capa explorer during malware analysis because it teaches us what parts of a program suggest a behavior. As we click on rows, capa explorer jumps directly
to important addresses in the IDB and highlights key features in the Disassembly view so they stand out visually. To illustrate, we use capa explorer to
to important addresses in the IDA Pro database and highlights key features in the Disassembly view so they stand out visually. To illustrate, we use capa explorer to
analyze Lab 14-02 from [Practical Malware Analysis](https://nostarch.com/malware) (PMA) available [here](https://practicalmalwareanalysis.com/labs/). Our goal is to understand
the program's functionality.
After loading Lab 14-02 into IDA and analyzing the database with capa explorer, we see that capa detected a rule match for `self delete via COMSPEC environment variable`:
![](../../../doc/img/explorer_condensed.png)
![](../../../doc/img/ida_plugin_example_1.png)
We can use capa explorer to navigate our Disassembly view directly to the suspect function and get an assembly-level breakdown of why capa matched `self delete via COMSPEC environment variable`.
We can use capa explorer to navigate the IDA Disassembly view directly to the suspect function and get an assembly-level breakdown of why capa matched `self delete via COMSPEC environment variable`
for this particular function.
![](../../../doc/img/explorer_expanded.png)
![](../../../doc/img/ida_plugin_example_2.png)
Using the `Rule Information` and `Details` columns capa explorer shows us that the suspect function matched `self delete via COMSPEC environment variable` because it contains capa rule matches for `create process`, `get COMSPEC environment variable`,
and `query environment variable`, references to the strings `COMSPEC`, ` > nul`, and `/c del `, and calls to the Windows API functions `GetEnvironmentVariableA` and `ShellExecuteEx`.
capa explorer also helps you build new capa rules. To start, select the `Rule Generator` tab, navigate to a function in your Disassembly view,
and click `Analyze`. capa explorer will extract features from the function and display them in the `Features` pane. You can add features listed in this pane to the `Editor` pane
by either double-clicking a feature or using multi-select + right-click to add multiple features at once. The `Preview` and `Editor` panes help edit your rule. Use the `Preview` pane
to modify the rule text directly and the `Editor` pane to construct and rearrange your hierarchy of statements and features. When you finish a rule you can save it directly to a file by clicking `Save`.
![](../../../doc/img/rulegen_expanded.png)
and `query environment variable`, references to the strings `COMSPEC`, ` > nul`, and `/c del`, and calls to the Windows API functions `GetEnvironmentVariableA` and `ShellExecuteEx`.
For more information on the FLARE team's open-source framework, capa, check out the overview in our first [blog](https://www.fireeye.com/blog/threat-research/2020/07/capa-automatically-identify-malware-capabilities.html).
## Features
![](../../../doc/img/ida_plugin_intro.gif)
* Display capa results in an interactive tree view of rule matches and their locations in the current database
* Search for keywords or phrases found in the `Rule Information`, `Address`, or `Details` columns
* Display rule source content when a user hovers their cursor over a rule match
* Double-click `Address` column to view associated feature in the IDA Disassembly view
* Limit tree view results to the function currently displayed in the IDA Disassembly view; update results as a user navigates to different functions
* Export results as formatted JSON by navigating to `File > Export results...`
* Remember a user's capa rules directory for future runs; change capa rules directory by navigating to `Rules > Change rules directory...`
* Automatically re-analyze database when user performs a program rebase
* Automatically update results when IDA is used to rename a function
* Select one or more checkboxes to highlight the associated addresses in the IDA Disassembly view
* Right-click a function match to rename it; the new function name is propagated to the current IDA database
* Right-click to copy a result by column or by row
* Sort results by column
* Reset tree view and IDA Disassembly view highlighting by clicking `Reset`
## Getting Started
### Requirements
capa explorer supports Python versions >= 3.6.x and the following IDA Pro versions:
capa explorer supports the following IDA setups:
* IDA 7.4
* IDA 7.5
* IDA 7.6 (caveat below)
However, capa explorer is limited to the Python versions supported by your IDA installation (which may not include all Python versions >= 3.6.x). Based on our testing, the following matrix shows the Python versions supported
by each supported IDA version:
| | IDA 7.4 | IDA 7.5 | IDA 7.6 |
| --- | --- | --- | --- |
| Python 3.6.x | Yes | Yes | Yes |
| Python 3.7.x | Yes | Yes | Yes |
| Python 3.8.x | Partial (see below) | Yes | Yes |
| Python 3.9.x | No | Partial (see below) | Yes |
To use capa explorer with IDA 7.4 and Python 3.8.x you must follow the instructions provided by hex-rays [here](https://hex-rays.com/blog/ida-7-4-and-python-3-8/).
To use capa explorer with IDA 7.5 and Python 3.9.x you must follow the instructions provided by hex-rays [here](https://hex-rays.com/blog/python-3-9-support-for-ida-7-5/).
* IDA Pro 7.4+ with Python 2.7 or Python 3.
If you encounter issues with your specific setup, please open a new [Issue](https://github.com/fireeye/capa/issues).
#### IDA 7.6 caveat: IDA 7.6sp1 or patch required
As described [here](https://www.hex-rays.com/blog/ida-7-6-empty-qtreeview-qtreewidget/):
> A rather nasty issue evaded our testing and found its way into IDA 7.6: using the PyQt5 modules that are shipped with IDA, QTreeView (or QTreeWidget) instances will always fail to display contents.
Therefore, in order to use capa under IDA 7.6 you need the [Service Pack 1 for IDA 7.6](https://www.hex-rays.com/products/ida/news/7_6sp1). Alternatively, you can download and install the fix corresponding to your IDA installation, replacing the original QtWidgets DLL with the one contained in the .zip file (links to Hex-Rays):
- Windows: [pyqt5_qtwidgets_win](https://www.hex-rays.com/wp-content/uploads/2021/04/pyqt5_qtwidgets_win.zip)
- Linux: [pyqt5_qtwidgets_linux](https://www.hex-rays.com/wp-content/uploads/2021/04/pyqt5_qtwidgets_linux.zip)
- macOS (Intel): [pyqt5_qtwidgets_mac_x64](https://www.hex-rays.com/wp-content/uploads/2021/04/pyqt5_qtwidgets_mac_x64.zip)
- macOS (Apple Silicon): [pyqt5_qtwidgets_mac_arm](https://www.hex-rays.com/wp-content/uploads/2021/04/pyqt5_qtwidgets_mac_arm.zip)
### Supported File Types
capa explorer is limited to the file types supported by capa, which include:
capa explorer is limited to the file types supported by capa, which includes:
* Windows x86 (32- and 64-bit) PE and ELF files
* Windows x86 (32- and 64-bit) shellcode
* Windows 32-bit and 64-bit PE files
* Windows 32-bit and 64-bit shellcode
### Installation
@@ -91,49 +74,38 @@ You can install capa explorer using the following steps:
### Usage
1. Open IDA and analyze a supported file type (select the `Manual Load` and `Load Resources` options in IDA for best results)
1. Run IDA and analyze a supported file type (select the `Manual Load` and `Load Resources` options in IDA for best results)
2. Open capa explorer in IDA by navigating to `Edit > Plugins > FLARE capa explorer` or using the keyboard shortcut `Alt+F5`
You can also use `ida_loader.load_and_run_plugin("capa_explorer", arg)`. `arg` is a bitflag for which setting the LSB enables automatic analysis. See `capa.ida.plugin.form.Options` for more details.
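For example (the `0x01` value is illustrative; setting the LSB enables automatic analysis):
import ida_loader
ida_loader.load_and_run_plugin("capa_explorer", 0x01)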
3. Select the `Program Analysis` tab
4. Click the `Analyze` button
3. Click the `Analyze` button
When running capa explorer for the first time you are prompted to select a file directory containing capa rules. The plugin conveniently
remembers your selection for future runs; you can change this selection and other default settings by clicking `Settings`. We recommend
remembers your selection for future runs; you can change this selection by navigating to `Rules > Change rules directory...`. We recommend
downloading and using the [standard collection of capa rules](https://github.com/fireeye/capa-rules) when getting started with the plugin.
#### Tips for Program Analysis
#### Tips
* Start analysis by clicking the `Analyze` button
* Reset the plugin user interface and remove highlighting from your Disassembly view by clicking the `Reset` button
* Change your capa rules directory and other default settings by clicking `Settings`
* Reset the plugin user interface and remove highlighting from IDA disassembly view by clicking the `Reset` button
* Change your capa rules directory by navigating to `Rules > Change rules directory...` from the plugin menu
* Hover your cursor over a rule match to view the source content of the rule
* Double-click the `Address` column to navigate your Disassembly view to the address of the associated feature
* Double-click the `Address` column to navigate the IDA Disassembly view to the associated feature
* Double-click a result in the `Rule Information` column to expand its children
* Select a checkbox in the `Rule Information` column to highlight the address of the associated feature in your Disassembly view
#### Tips for Rule Generator
* Navigate to a function in your Disassembly view and click `Analyze` to get started
* Double-click or use multi-select + right-click to add features from the `Features` pane to the `Editor` pane
* Right-click features in the `Editor` pane to make context-specific modifications
* Drag-and-drop (single click + multi-select support) features in the `Editor` pane to construct your hierarchy of statements and features
* Right-click anywhere in the `Editor` pane not on a feature to remove all features
* Add descriptions or comments to a feature by editing the corresponding column in the `Editor` pane
* Directly edit rule text and metadata fields using the `Preview` pane
* Change the default rule author and default rule scope displayed in the `Preview` pane by clicking `Settings`
* Select a checkbox in the `Rule Information` column to highlight the address of the associated feature in the IDA Disassembly view
## Development
capa explorer is packaged with capa so you will need to install capa locally for development. You can install capa locally by following the steps outlined in `Method 3: Inspecting the capa source code` of the [capa
Because capa explorer is packaged with capa you will need to install capa locally for development.
You can install capa locally by following the steps outlined in `Method 3: Inspecting the capa source code` of the [capa
installation guide](https://github.com/fireeye/capa/blob/master/doc/installation.md#method-3-inspecting-the-capa-source-code). Once installed, copy [capa_explorer.py](https://raw.githubusercontent.com/fireeye/capa/master/capa/ida/plugin/capa_explorer.py)
to your plugins directory to install capa explorer in IDA.
to your IDA plugins directory to run the plugin in IDA.
### Components
capa explorer consists of two main components:
* A [feature extractor](https://github.com/fireeye/capa/tree/master/capa/features/extractors/ida) built on top of IDA's binary analysis engine
* This component uses IDAPython to extract [capa features](https://github.com/fireeye/capa-rules/blob/master/doc/format.md#extracted-features) from your IDBs such as strings,
* An IDA [feature extractor](https://github.com/fireeye/capa/tree/master/capa/features/extractors/ida) built on top of IDA's binary analysis engine
* This component uses IDAPython to extract [capa features](https://github.com/fireeye/capa-rules/blob/master/doc/format.md#extracted-features) from the IDA database such as strings,
disassembly, and control flow; these extracted features are used by capa to find feature combinations that result in a rule match
* An [interactive user interface](https://github.com/fireeye/capa/tree/master/capa/ida/plugin) for displaying and exploring capa rule matches
* This component integrates the feature extractor and capa, providing an interactive user interface to dissect rule matches found by capa using features extracted directly from your IDBs
* This component integrates the IDA feature extractor and capa, providing an interactive user interface to dissect rule matches found by capa using features extracted by the IDA feature extractor

View File

@@ -11,6 +11,7 @@ import logging
import idaapi
import ida_kernwin
from capa.ida.helpers import is_supported_file_type, is_supported_ida_version
from capa.ida.plugin.form import CapaExplorerForm
from capa.ida.plugin.icon import ICON
@@ -40,14 +41,10 @@ class CapaExplorerPlugin(idaapi.plugin_t):
"""called when IDA is loading the plugin"""
logging.basicConfig(level=logging.INFO)
import capa.ida.helpers
# do not load plugin if IDA version/file type not supported
if not capa.ida.helpers.is_supported_ida_version():
if not is_supported_ida_version():
return idaapi.PLUGIN_SKIP
if not capa.ida.helpers.is_supported_file_type():
return idaapi.PLUGIN_SKIP
if not capa.ida.helpers.is_supported_arch_type():
if not is_supported_file_type():
return idaapi.PLUGIN_SKIP
return idaapi.PLUGIN_OK
@@ -56,14 +53,8 @@ class CapaExplorerPlugin(idaapi.plugin_t):
pass
def run(self, arg):
"""
called when IDA is running the plugin as a script
args:
arg (int): bitflag. Setting LSB enables automatic analysis upon
loading. The other bits are currently undefined. See `form.Options`.
"""
self.form = CapaExplorerForm(self.PLUGIN_NAME, arg)
"""called when IDA is running the plugin as a script"""
self.form = CapaExplorerForm(self.PLUGIN_NAME)
return True

File diff suppressed because it is too large

View File

@@ -6,6 +6,7 @@
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import sys
import codecs
import idc
@@ -31,22 +32,23 @@ def location_to_hex(location):
return "%08X" % location
class CapaExplorerDataItem:
class CapaExplorerDataItem(object):
"""store data for CapaExplorerDataModel"""
def __init__(self, parent, data, can_check=True):
def __init__(self, parent, data):
"""initialize item"""
self.pred = parent
self._data = data
self.children = []
self._checked = False
self._can_check = can_check
# default state for item
self.flags = QtCore.Qt.ItemIsEnabled | QtCore.Qt.ItemIsSelectable
if self._can_check:
self.flags = self.flags | QtCore.Qt.ItemIsUserCheckable | QtCore.Qt.ItemIsTristate
self.flags = (
QtCore.Qt.ItemIsEnabled
| QtCore.Qt.ItemIsSelectable
| QtCore.Qt.ItemIsTristate
| QtCore.Qt.ItemIsUserCheckable
)
if self.pred:
self.pred.appendChild(self)
@@ -68,10 +70,6 @@ class CapaExplorerDataItem:
"""
self._checked = checked
def canCheck(self):
""" """
return self._can_check
def isChecked(self):
"""get item is checked"""
return self._checked
@@ -167,7 +165,7 @@ class CapaExplorerRuleItem(CapaExplorerDataItem):
fmt = "%s (%d matches)"
def __init__(self, parent, name, namespace, count, source, can_check=True):
def __init__(self, parent, name, namespace, count, source):
"""initialize item
@param parent: parent node
@@ -177,7 +175,7 @@ class CapaExplorerRuleItem(CapaExplorerDataItem):
@param source: rule source (tooltip)
"""
display = self.fmt % (name, count) if count > 1 else name
super(CapaExplorerRuleItem, self).__init__(parent, [display, "", namespace], can_check)
super(CapaExplorerRuleItem, self).__init__(parent, [display, "", namespace])
self._source = source
@property
@@ -201,7 +199,7 @@ class CapaExplorerRuleMatchItem(CapaExplorerDataItem):
@property
def source(self):
"""return rule contents for display"""
""" return rule contents for display """
return self._source
@@ -210,14 +208,14 @@ class CapaExplorerFunctionItem(CapaExplorerDataItem):
fmt = "function(%s)"
def __init__(self, parent, location, can_check=True):
def __init__(self, parent, location):
"""initialize item
@param parent: parent node
@param location: virtual address of function as seen by IDA
"""
super(CapaExplorerFunctionItem, self).__init__(
parent, [self.fmt % idaapi.get_name(location), location_to_hex(location), ""], can_check
parent, [self.fmt % idaapi.get_name(location), location_to_hex(location), ""]
)
@property
@@ -327,10 +325,14 @@ class CapaExplorerByteViewItem(CapaExplorerFeatureItem):
"""
byte_snap = idaapi.get_bytes(location, 32)
details = ""
if byte_snap:
byte_snap = codecs.encode(byte_snap, "hex").upper()
details = " ".join([byte_snap[i : i + 2].decode() for i in range(0, len(byte_snap), 2)])
if sys.version_info >= (3, 0):
details = " ".join([byte_snap[i : i + 2].decode() for i in range(0, len(byte_snap), 2)])
else:
details = " ".join([byte_snap[i : i + 2] for i in range(0, len(byte_snap), 2)])
else:
details = ""
super(CapaExplorerByteViewItem, self).__init__(parent, display, location=location, details=details)
self.ida_highlight = idc.get_color(location, idc.CIC_ITEM)
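Illustrative transformation performed above (input bytes hypothetical):
# idaapi.get_bytes(location, 32)     -> b"MZ\x90\x00..."
# codecs.encode(..., "hex").upper()  -> b"4D5A9000..."
# pairwise join                      -> "4D 5A 90 00 ..."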

View File

@@ -6,16 +6,14 @@
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
from collections import deque, defaultdict
from collections import deque
import idc
import idaapi
from PyQt5 import QtGui, QtCore
import capa.rules
import capa.ida.helpers
import capa.render.utils as rutils
import capa.features.common
from capa.ida.plugin.item import (
CapaExplorerDataItem,
CapaExplorerRuleItem,
@@ -112,8 +110,6 @@ class CapaExplorerDataModel(QtCore.QAbstractItemModel):
if role == QtCore.Qt.CheckStateRole and column == CapaExplorerDataModel.COLUMN_INDEX_RULE_INFORMATION:
# inform view how to display content of checkbox - un/checked
if not item.canCheck():
return None
return QtCore.Qt.Checked if item.isChecked() else QtCore.Qt.Unchecked
if role == QtCore.Qt.FontRole and column in (
@@ -428,34 +424,14 @@ class CapaExplorerDataModel(QtCore.QAbstractItemModel):
for child in match.get("children", []):
self.render_capa_doc_match(parent2, child, doc)
def render_capa_doc_by_function(self, doc):
""" """
matches_by_function = {}
for rule in rutils.capability_rules(doc):
for ea in rule["matches"].keys():
ea = capa.ida.helpers.get_func_start_ea(ea)
if ea is None:
# file scope, skip rendering in this mode
continue
if not matches_by_function.get(ea, ()):
# new function root
matches_by_function[ea] = (CapaExplorerFunctionItem(self.root_node, ea, can_check=False), [])
function_root, match_cache = matches_by_function[ea]
if rule["meta"]["name"] in match_cache:
# rule match already rendered for this function root, skip it
continue
match_cache.append(rule["meta"]["name"])
CapaExplorerRuleItem(
function_root,
rule["meta"]["name"],
rule["meta"].get("namespace"),
len(rule["matches"]),
rule["source"],
can_check=False,
)
def render_capa_doc(self, doc):
"""render capa features specified in doc
@param doc: capa result doc
"""
# inform model that changes are about to occur
self.beginResetModel()
def render_capa_doc_by_program(self, doc):
""" """
for rule in rutils.capability_rules(doc):
rule_name = rule["meta"]["name"]
rule_namespace = rule["meta"].get("namespace")
@@ -475,19 +451,6 @@ class CapaExplorerDataModel(QtCore.QAbstractItemModel):
self.render_capa_doc_match(parent2, match, doc)
def render_capa_doc(self, doc, by_function):
"""render capa features specified in doc
@param doc: capa result doc
"""
# inform model that changes are about to occur
self.beginResetModel()
if by_function:
self.render_capa_doc_by_function(doc)
else:
self.render_capa_doc_by_program(doc)
# inform model changes have ended
self.endResetModel()
@@ -496,17 +459,13 @@ class CapaExplorerDataModel(QtCore.QAbstractItemModel):
@param feature: capa feature read from doc
"""
key = feature["type"]
value = feature[feature["type"]]
if value:
if key == "string":
value = '"%s"' % capa.features.common.escape_string(value)
if feature[feature["type"]]:
if feature.get("description", ""):
return "%s(%s = %s)" % (key, value, feature["description"])
return "%s(%s = %s)" % (feature["type"], feature[feature["type"]], feature["description"])
else:
return "%s(%s)" % (key, value)
return "%s(%s)" % (feature["type"], feature[feature["type"]])
else:
return "%s" % key
return "%s" % feature["type"]
def render_capa_doc_feature_node(self, parent, feature, locations, doc):
"""process capa doc feature node
@@ -562,15 +521,8 @@ class CapaExplorerDataModel(QtCore.QAbstractItemModel):
parent, display, source=doc["rules"].get(feature[feature["type"]], {}).get("source", "")
)
if feature["type"] in ("regex", "substring"):
for s, locations in feature["matches"].items():
if location in locations:
return CapaExplorerStringViewItem(
parent, display, location, '"' + capa.features.common.escape_string(s) + '"'
)
# programming error: the given location should always be found in the regex matches
raise ValueError("regex match at location not found")
if feature["type"] == "regex":
return CapaExplorerStringViewItem(parent, display, location, feature["match"])
if feature["type"] == "basicblock":
return CapaExplorerBlockItem(parent, location)
@@ -595,15 +547,10 @@ class CapaExplorerDataModel(QtCore.QAbstractItemModel):
if feature["type"] in ("string",):
# display string preview
return CapaExplorerStringViewItem(
parent, display, location, '"%s"' % capa.features.common.escape_string(feature[feature["type"]])
)
return CapaExplorerStringViewItem(parent, display, location, feature[feature["type"]])
if feature["type"] in ("import", "export", "function-name"):
if feature["type"] in ("import", "export"):
# display no preview
return CapaExplorerFeatureItem(parent, location=location, display=display)
if feature["type"] in ("arch", "os", "format"):
return CapaExplorerFeatureItem(parent, display=display)
raise RuntimeError("unexpected feature type: " + str(feature["type"]))

View File

@@ -5,6 +5,7 @@
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import six
from PyQt5 import QtCore
from PyQt5.QtCore import Qt
@@ -207,7 +208,7 @@ class CapaExplorerSearchProxyModel(QtCore.QSortFilterProxyModel):
if not data:
continue
if not isinstance(data, str):
if not isinstance(data, six.string_types):
# sanity check: should already be a string, but double check
continue

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -0,0 +1,266 @@
# Copyright (C) 2020 FireEye, Inc. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at: [package root]/LICENSE.txt
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import json
import six
import capa.rules
import capa.engine
def convert_statement_to_result_document(statement):
"""
"statement": {
"type": "or"
},
"statement": {
"max": 9223372036854775808,
"min": 2,
"type": "range"
},
"""
statement_type = statement.name.lower()
result = {"type": statement_type}
if statement.description:
result["description"] = statement.description
if statement_type == "some" and statement.count == 0:
result["type"] = "optional"
elif statement_type == "some":
result["count"] = statement.count
elif statement_type == "range":
result["min"] = statement.min
result["max"] = statement.max
result["child"] = convert_feature_to_result_document(statement.child)
elif statement_type == "subscope":
result["subscope"] = statement.scope
return result
def convert_feature_to_result_document(feature):
"""
"feature": {
"number": 6,
"type": "number"
},
"feature": {
"api": "ws2_32.WSASocket",
"type": "api"
},
"feature": {
"match": "create TCP socket",
"type": "match"
},
"feature": {
"characteristic": [
"loop",
true
],
"type": "characteristic"
},
"""
result = {"type": feature.name, feature.name: feature.get_value_str()}
if feature.description:
result["description"] = feature.description
if feature.name == "regex":
result["match"] = feature.match
return result
def convert_node_to_result_document(node):
"""
"node": {
"type": "statement",
"statement": { ... }
},
"node": {
"type": "feature",
"feature": { ... }
},
"""
if isinstance(node, capa.engine.Statement):
return {
"type": "statement",
"statement": convert_statement_to_result_document(node),
}
elif isinstance(node, capa.features.Feature):
return {
"type": "feature",
"feature": convert_feature_to_result_document(node),
}
else:
raise RuntimeError("unexpected match node type")
def convert_match_to_result_document(rules, capabilities, result):
"""
convert the given Result instance into a common, Python-native data structure.
this will become part of the "result document" format that can be emitted to JSON.
"""
doc = {
"success": bool(result.success),
"node": convert_node_to_result_document(result.statement),
"children": [convert_match_to_result_document(rules, capabilities, child) for child in result.children],
}
# logic expressions, like `and`, don't have locations - their children do.
# so only add `locations` to feature nodes.
if isinstance(result.statement, capa.features.Feature):
if bool(result.success):
doc["locations"] = result.locations
elif isinstance(result.statement, capa.rules.Range):
if bool(result.success):
doc["locations"] = result.locations
# if we have a `match` statement, then we're referencing another rule.
# this could be an external rule (written by a human), or a
# rule generated to support a subscope (basic block, etc.)
# we still want to include the matching logic in this tree.
#
# so, we need to lookup the other rule results
# and then filter those down to the address used here.
# finally, splice that logic into this tree.
if (
doc["node"]["type"] == "feature"
and doc["node"]["feature"]["type"] == "match"
# only add subtree on success,
# because there won't be results for the other rule on failure.
and doc["success"]
):
rule_name = doc["node"]["feature"]["match"]
rule = rules[rule_name]
rule_matches = {address: result for (address, result) in capabilities[rule_name]}
if rule.meta.get("capa/subscope-rule"):
# for a subscope rule, fixup the node to be a scope node, rather than a match feature node.
#
# e.g. `contain loop/30c4c78e29bf4d54894fc74f664c62e8` -> `basic block`
scope = rule.meta["scope"]
doc["node"] = {
"type": "statement",
"statement": {
"type": "subscope",
"subscope": scope,
},
}
for location in doc["locations"]:
doc["children"].append(convert_match_to_result_document(rules, capabilities, rule_matches[location]))
return doc
def convert_capabilities_to_result_document(meta, rules, capabilities):
"""
convert the given rule set and capabilities result to a common, Python-native data structure.
this format can be directly emitted to JSON, or passed to the other `render_*` routines
to render as text.
see examples of substructures in above routines.
schema:
```json
{
"meta": {...},
"rules: {
$rule-name: {
"meta": {...copied from rule.meta...},
"matches: {
$address: {...match details...},
...
}
},
...
}
}
```
Args:
meta (Dict[str, Any]):
rules (RuleSet):
capabilities (Dict[str, List[Tuple[int, Result]]]):
"""
doc = {
"meta": meta,
"rules": {},
}
for rule_name, matches in capabilities.items():
rule = rules[rule_name]
if rule.meta.get("capa/subscope-rule"):
continue
doc["rules"][rule_name] = {
"meta": dict(rule.meta),
"source": rule.definition,
"matches": {
addr: convert_match_to_result_document(rules, capabilities, match) for (addr, match) in matches
},
}
return doc
def render_vverbose(meta, rules, capabilities):
# there's an import loop here
# if capa.render imports capa.render.vverbose
# and capa.render.vverbose imports capa.render (implicitly, as a submodule)
# so, defer the import until routine is called, breaking the import loop.
import capa.render.vverbose
doc = convert_capabilities_to_result_document(meta, rules, capabilities)
return capa.render.vverbose.render_vverbose(doc)
def render_verbose(meta, rules, capabilities):
# break import loop
import capa.render.verbose
doc = convert_capabilities_to_result_document(meta, rules, capabilities)
return capa.render.verbose.render_verbose(doc)
def render_default(meta, rules, capabilities):
# break import loop
import capa.render.default
import capa.render.verbose
doc = convert_capabilities_to_result_document(meta, rules, capabilities)
return capa.render.default.render_default(doc)
class CapaJsonObjectEncoder(json.JSONEncoder):
"""JSON encoder that emits Python sets as sorted lists"""
def default(self, obj):
if isinstance(obj, (list, dict, int, float, bool, type(None))) or isinstance(obj, six.string_types):
return json.JSONEncoder.default(self, obj)
elif isinstance(obj, set):
return list(sorted(obj))
else:
# probably will TypeError
return json.JSONEncoder.default(self, obj)
def render_json(meta, rules, capabilities):
return json.dumps(
convert_capabilities_to_result_document(meta, rules, capabilities),
cls=CapaJsonObjectEncoder,
sort_keys=True,
)
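Illustrative behavior of the encoder (addresses hypothetical):
import json
# sets (such as location sets) are emitted as sorted lists
json.dumps({"locations": {0x401000, 0x400000}}, cls=CapaJsonObjectEncoder)
# -> '{"locations": [4194304, 4198400]}'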

View File

@@ -8,18 +8,15 @@
import collections
import six
import tabulate
import capa.render.utils as rutils
import capa.render.result_document
from capa.rules import RuleSet
from capa.engine import MatchResults
from capa.render.utils import StringIO
tabulate.PRESERVE_WHITESPACE = True
def width(s: str, character_count: int) -> str:
def width(s, character_count):
"""pad the given string to at least `character_count`"""
if len(s) < character_count:
return s + " " * (character_count - len(s))
@@ -27,14 +24,11 @@ def width(s: str, character_count: int) -> str:
return s
def render_meta(doc, ostream: StringIO):
def render_meta(doc, ostream):
rows = [
(width("md5", 22), width(doc["meta"]["sample"]["md5"], 82)),
("sha1", doc["meta"]["sample"]["sha1"]),
("sha256", doc["meta"]["sample"]["sha256"]),
("os", doc["meta"]["analysis"]["os"]),
("format", doc["meta"]["analysis"]["format"]),
("arch", doc["meta"]["analysis"]["arch"]),
("path", doc["meta"]["sample"]["path"]),
]
@@ -70,7 +64,7 @@ def find_subrule_matches(doc):
return matches
def render_capabilities(doc, ostream: StringIO):
def render_capabilities(doc, ostream):
"""
example::
@@ -108,7 +102,7 @@ def render_capabilities(doc, ostream: StringIO):
ostream.writeln(rutils.bold("no capabilities found"))
def render_attack(doc, ostream: StringIO):
def render_attack(doc, ostream):
"""
example::
@@ -130,16 +124,27 @@ def render_attack(doc, ostream: StringIO):
continue
for attack in rule["meta"]["att&ck"]:
tactics[attack["tactic"]].add((attack["technique"], attack.get("subtechnique"), attack["id"]))
tactic, _, rest = attack.partition("::")
if "::" in rest:
technique, _, rest = rest.partition("::")
subtechnique, _, id = rest.rpartition(" ")
tactics[tactic].add((technique, subtechnique, id))
else:
technique, _, id = rest.rpartition(" ")
tactics[tactic].add((technique, id))
rows = []
for tactic, techniques in sorted(tactics.items()):
inner_rows = []
for (technique, subtechnique, id) in sorted(techniques):
if subtechnique is None:
for spec in sorted(techniques):
if len(spec) == 2:
technique, id = spec
inner_rows.append("%s %s" % (rutils.bold(technique), id))
else:
elif len(spec) == 3:
technique, subtechnique, id = spec
inner_rows.append("%s::%s %s" % (rutils.bold(technique), subtechnique, id))
else:
raise RuntimeError("unexpected ATT&CK spec format")
rows.append(
(
rutils.bold(tactic.upper()),
@@ -156,7 +161,7 @@ def render_attack(doc, ostream: StringIO):
ostream.write("\n")
def render_mbc(doc, ostream: StringIO):
def render_mbc(doc, ostream):
"""
example::
@@ -175,17 +180,32 @@ def render_mbc(doc, ostream: StringIO):
if not rule["meta"].get("mbc"):
continue
for mbc in rule["meta"]["mbc"]:
objectives[mbc["objective"]].add((mbc["behavior"], mbc.get("method"), mbc["id"]))
mbcs = rule["meta"]["mbc"]
if not isinstance(mbcs, list):
raise ValueError("invalid rule: MBC mapping is not a list")
for mbc in mbcs:
objective, _, rest = mbc.partition("::")
if "::" in rest:
behavior, _, rest = rest.partition("::")
method, _, id = rest.rpartition(" ")
objectives[objective].add((behavior, method, id))
else:
behavior, _, id = rest.rpartition(" ")
objectives[objective].add((behavior, id))
rows = []
for objective, behaviors in sorted(objectives.items()):
inner_rows = []
for (behavior, method, id) in sorted(behaviors):
if method is None:
inner_rows.append("%s [%s]" % (rutils.bold(behavior), id))
for spec in sorted(behaviors):
if len(spec) == 2:
behavior, id = spec
inner_rows.append("%s %s" % (rutils.bold(behavior), id))
elif len(spec) == 3:
behavior, method, id = spec
inner_rows.append("%s::%s %s" % (rutils.bold(behavior), method, id))
else:
inner_rows.append("%s::%s [%s]" % (rutils.bold(behavior), method, id))
raise RuntimeError("unexpected MBC spec format")
rows.append(
(
rutils.bold(objective.upper()),
@@ -212,8 +232,3 @@ def render_default(doc):
render_capabilities(doc, ostream)
return ostream.getvalue()
def render(meta, rules: RuleSet, capabilities: MatchResults) -> str:
doc = capa.render.result_document.convert_capabilities_to_result_document(meta, rules, capabilities)
return render_default(doc)

View File

@@ -1,33 +0,0 @@
# Copyright (C) 2020 FireEye, Inc. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at: [package root]/LICENSE.txt
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import json
import capa.render.result_document
from capa.rules import RuleSet
from capa.engine import MatchResults
class CapaJsonObjectEncoder(json.JSONEncoder):
"""JSON encoder that emits Python sets as sorted lists"""
def default(self, obj):
if isinstance(obj, (list, dict, int, float, bool, type(None))) or isinstance(obj, str):
return json.JSONEncoder.default(self, obj)
elif isinstance(obj, set):
return list(sorted(obj))
else:
# probably will TypeError
return json.JSONEncoder.default(self, obj)
def render(meta, rules: RuleSet, capabilities: MatchResults) -> str:
return json.dumps(
capa.render.result_document.convert_capabilities_to_result_document(meta, rules, capabilities),
cls=CapaJsonObjectEncoder,
sort_keys=True,
)

View File

@@ -1,329 +0,0 @@
# Copyright (C) 2020 FireEye, Inc. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at: [package root]/LICENSE.txt
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import copy
import capa.rules
import capa.engine
import capa.render.utils
import capa.features.common
from capa.rules import RuleSet
from capa.engine import MatchResults
def convert_statement_to_result_document(statement):
"""
"statement": {
"type": "or"
},
"statement": {
"max": 9223372036854775808,
"min": 2,
"type": "range"
},
"""
statement_type = statement.name.lower()
result = {"type": statement_type}
if statement.description:
result["description"] = statement.description
if statement_type == "some" and statement.count == 0:
result["type"] = "optional"
elif statement_type == "some":
result["count"] = statement.count
elif statement_type == "range":
result["min"] = statement.min
result["max"] = statement.max
result["child"] = convert_feature_to_result_document(statement.child)
elif statement_type == "subscope":
result["subscope"] = statement.scope
return result
def convert_feature_to_result_document(feature):
"""
"feature": {
"number": 6,
"type": "number"
},
"feature": {
"api": "ws2_32.WSASocket",
"type": "api"
},
"feature": {
"match": "create TCP socket",
"type": "match"
},
"feature": {
"characteristic": [
"loop",
true
],
"type": "characteristic"
},
"""
result = {"type": feature.name, feature.name: feature.get_value_str()}
if feature.description:
result["description"] = feature.description
if feature.name in ("regex", "substring"):
result["matches"] = feature.matches
return result
def convert_node_to_result_document(node):
"""
"node": {
"type": "statement",
"statement": { ... }
},
"node": {
"type": "feature",
"feature": { ... }
},
"""
if isinstance(node, capa.engine.Statement):
return {
"type": "statement",
"statement": convert_statement_to_result_document(node),
}
elif isinstance(node, capa.features.common.Feature):
return {
"type": "feature",
"feature": convert_feature_to_result_document(node),
}
else:
raise RuntimeError("unexpected match node type")
def convert_match_to_result_document(rules, capabilities, result):
"""
convert the given Result instance into a common, Python-native data structure.
this will become part of the "result document" format that can be emitted to JSON.
"""
doc = {
"success": bool(result.success),
"node": convert_node_to_result_document(result.statement),
"children": [convert_match_to_result_document(rules, capabilities, child) for child in result.children],
}
# logic expressions, like `and`, don't have locations - their children do.
# so only add `locations` to feature nodes.
if isinstance(result.statement, capa.features.common.Feature):
if bool(result.success):
doc["locations"] = result.locations
elif isinstance(result.statement, capa.engine.Range):
if bool(result.success):
doc["locations"] = result.locations
# if we have a `match` statement, then we're referencing another rule or namespace.
# this could be an external rule (written by a human), or a
# rule generated to support a subscope (basic block, etc.)
# we still want to include the matching logic in this tree.
#
# so, we need to lookup the other rule results
# and then filter those down to the address used here.
# finally, splice that logic into this tree.
if (
doc["node"]["type"] == "feature"
and doc["node"]["feature"]["type"] == "match"
# only add subtree on success,
# because there won't be results for the other rule on failure.
and doc["success"]
):
name = doc["node"]["feature"]["match"]
if name in rules:
# this is a rule that we're matching
#
# pull matches from the referenced rule into our tree here.
rule_name = doc["node"]["feature"]["match"]
rule = rules[rule_name]
rule_matches = {address: result for (address, result) in capabilities[rule_name]}
if rule.meta.get("capa/subscope-rule"):
# for a subscope rule, fixup the node to be a scope node, rather than a match feature node.
#
# e.g. `contain loop/30c4c78e29bf4d54894fc74f664c62e8` -> `basic block`
scope = rule.meta["scope"]
doc["node"] = {
"type": "statement",
"statement": {
"type": "subscope",
"subscope": scope,
},
}
for location in doc["locations"]:
doc["children"].append(convert_match_to_result_document(rules, capabilities, rule_matches[location]))
else:
# this is a namespace that we're matching
#
# check for all rules in the namespace,
# seeing if they matched.
# if so, pull their matches into our match tree here.
ns_name = doc["node"]["feature"]["match"]
ns_rules = rules.rules_by_namespace[ns_name]
for rule in ns_rules:
if rule.name in capabilities:
# the rule matched, so splice results into our tree here.
#
# note, there's a shortcoming in our result document schema here:
# we lose the name of the rule that matched in a namespace.
# for example, if we have a statement: `match: runtime/dotnet`
# and we get matches, we can say the following:
#
# match: runtime/dotnet @ 0x0
# or:
# import: mscoree._CorExeMain @ 0x402000
#
# however, we lose the fact that it was rule
# "compiled to the .NET platform"
# that contained this logic and did the match.
#
# we could introduce an intermediate node here.
# this would be a breaking change and require updates to the renderers.
# in the meantime, the above might be sufficient.
rule_matches = {address: result for (address, result) in capabilities[rule.name]}
for location in doc["locations"]:
# doc[locations] contains all matches for the given namespace.
# for example, the feature might be `match: anti-analysis/packer`
# which matches against "generic unpacker" and "UPX".
# in this case, doc[locations] contains locations for *both* of these.
#
# rule_matches contains the matches for the specific rule.
# this is a subset of doc[locations].
#
# so, grab only the locations for current rule.
if location in rule_matches:
doc["children"].append(
convert_match_to_result_document(rules, capabilities, rule_matches[location])
)
return doc
def convert_meta_to_result_document(meta):
# make a copy so that we don't modify the given parameter
meta = copy.deepcopy(meta)
attacks = meta.get("att&ck", [])
meta["att&ck"] = [parse_canonical_attack(attack) for attack in attacks]
mbcs = meta.get("mbc", [])
meta["mbc"] = [parse_canonical_mbc(mbc) for mbc in mbcs]
return meta
def parse_canonical_attack(attack: str):
"""
parse capa's canonical ATT&CK representation: `Tactic::Technique::Subtechnique [Identifier]`
"""
tactic = ""
technique = ""
subtechnique = ""
parts, id = capa.render.utils.parse_parts_id(attack)
if len(parts) > 0:
tactic = parts[0]
if len(parts) > 1:
technique = parts[1]
if len(parts) > 2:
subtechnique = parts[2]
return {
"parts": parts,
"id": id,
"tactic": tactic,
"technique": technique,
"subtechnique": subtechnique,
}
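For illustration, with a hypothetical canonical string:
# parse_canonical_attack("Defense Evasion::Obfuscated Files or Information [T1027]")
# -> {"parts": ["Defense Evasion", "Obfuscated Files or Information"],
#     "id": "T1027",
#     "tactic": "Defense Evasion",
#     "technique": "Obfuscated Files or Information",
#     "subtechnique": ""}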
def parse_canonical_mbc(mbc: str):
"""
parse capa's canonical MBC representation: `Objective::Behavior::Method [Identifier]`
"""
objective = ""
behavior = ""
method = ""
parts, id = capa.render.utils.parse_parts_id(mbc)
if len(parts) > 0:
objective = parts[0]
if len(parts) > 1:
behavior = parts[1]
if len(parts) > 2:
method = parts[2]
return {
"parts": parts,
"id": id,
"objective": objective,
"behavior": behavior,
"method": method,
}
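And the MBC counterpart (input hypothetical):
# parse_canonical_mbc("Anti-Behavioral Analysis::Debugger Detection::Software Breakpoints [B0001.025]")
# -> {"parts": ["Anti-Behavioral Analysis", "Debugger Detection", "Software Breakpoints"],
#     "id": "B0001.025",
#     "objective": "Anti-Behavioral Analysis",
#     "behavior": "Debugger Detection",
#     "method": "Software Breakpoints"}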
def convert_capabilities_to_result_document(meta, rules: RuleSet, capabilities: MatchResults):
"""
convert the given rule set and capabilities result to a common, Python-native data structure.
this format can be directly emitted to JSON, or passed to the other `capa.render.*.render()` routines
to render as text.
see examples of substructures in above routines.
schema:
```json
{
"meta": {...},
"rules: {
$rule-name: {
"meta": {...copied from rule.meta...},
"matches: {
$address: {...match details...},
...
}
},
...
}
}
```
Args:
meta (Dict[str, Any]):
rules (RuleSet):
capabilities (Dict[str, List[Tuple[int, Result]]]):
"""
doc = {
"meta": meta,
"rules": {},
}
for rule_name, matches in capabilities.items():
rule = rules[rule_name]
if rule.meta.get("capa/subscope-rule"):
continue
rule_meta = convert_meta_to_result_document(rule.meta)
doc["rules"][rule_name] = {
"meta": rule_meta,
"source": rule.definition,
"matches": {
addr: convert_match_to_result_document(rules, capabilities, match) for (addr, match) in matches
},
}
return doc

View File

@@ -6,22 +6,21 @@
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import io
import six
import termcolor
def bold(s: str) -> str:
def bold(s):
"""draw attention to the given string"""
return termcolor.colored(s, "blue")
def bold2(s: str) -> str:
def bold2(s):
"""draw attention to the given string, within a `bold` section"""
return termcolor.colored(s, "green")
def hex(n: int) -> str:
def hex(n):
"""render the given number using upper case hex, like: 0x123ABC"""
if n < 0:
return "-0x%X" % (-n)
@@ -29,24 +28,6 @@ def hex(n: int) -> str:
return "0x%X" % n
def parse_parts_id(s: str):
id = ""
parts = s.split("::")
if len(parts) > 0:
last = parts.pop()
last, _, id = last.rpartition(" ")
id = id.lstrip("[").rstrip("]")
parts.append(last)
return parts, id
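Illustrative split (input hypothetical):
# parse_parts_id("Execution::Command and Scripting Interpreter::PowerShell [T1059.001]")
# -> (["Execution", "Command and Scripting Interpreter", "PowerShell"], "T1059.001")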
def format_parts_id(data):
"""
format canonical representation of ATT&CK/MBC parts and ID
"""
return "%s [%s]" % ("::".join(data["parts"]), data["id"])
def capability_rules(doc):
"""enumerate the rules in (namespace, name) order that are 'capability' rules (not lib/subscope/disposition/etc)."""
for (_, _, rule) in sorted(
@@ -68,7 +49,7 @@ def capability_rules(doc):
yield rule
class StringIO(io.StringIO):
class StringIO(six.StringIO):
def writeln(self, s):
self.write(s)
self.write("\n")

View File

@@ -26,9 +26,6 @@ import tabulate
import capa.rules
import capa.render.utils as rutils
import capa.render.result_document
from capa.rules import RuleSet
from capa.engine import MatchResults
def render_meta(ostream, doc):
@@ -41,9 +38,7 @@ def render_meta(ostream, doc):
path /tmp/suspicious.dll_
timestamp 2020-07-03T10:17:05.796933
capa version 0.0.0
os windows
format pe
arch amd64
format auto
extractor VivisectFeatureExtractor
base address 0x10000000
rules (embedded rules)
@@ -57,14 +52,11 @@ def render_meta(ostream, doc):
("path", doc["meta"]["sample"]["path"]),
("timestamp", doc["meta"]["timestamp"]),
("capa version", doc["meta"]["version"]),
("os", doc["meta"]["analysis"]["os"]),
("format", doc["meta"]["analysis"]["format"]),
("arch", doc["meta"]["analysis"]["arch"]),
("extractor", doc["meta"]["analysis"]["extractor"]),
("base address", hex(doc["meta"]["analysis"]["base_address"])),
("rules", doc["meta"]["analysis"]["rules"]),
("function count", len(doc["meta"]["analysis"]["feature_counts"]["functions"])),
("library function count", len(doc["meta"]["analysis"]["library_functions"])),
(
"total feature count",
doc["meta"]["analysis"]["feature_counts"]["file"]
@@ -127,8 +119,3 @@ def render_verbose(doc):
ostream.write("\n")
return ostream.getvalue()
def render(meta, rules: RuleSet, capabilities: MatchResults) -> str:
doc = capa.render.result_document.convert_capabilities_to_result_document(meta, rules, capabilities)
return render_verbose(doc)

View File

@@ -6,15 +6,13 @@
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import collections
import tabulate
import capa.rules
import capa.render.utils as rutils
import capa.render.verbose
import capa.features.common
import capa.render.result_document
from capa.rules import RuleSet
from capa.engine import MatchResults
def render_locations(ostream, match):
@@ -58,11 +56,7 @@ def render_statement(ostream, match, statement, indent=0):
child = statement["child"]
if child[child["type"]]:
if child["type"] == "string":
value = '"%s"' % capa.features.common.escape_string(child[child["type"]])
else:
value = child[child["type"]]
value = rutils.bold2(value)
value = rutils.bold2(child[child["type"]])
if child.get("description"):
ostream.write("count(%s(%s = %s)): " % (child["type"], value, child["description"]))
else:
@@ -87,51 +81,27 @@ def render_statement(ostream, match, statement, indent=0):
raise RuntimeError("unexpected match statement type: " + str(statement))
def render_string_value(s):
return '"%s"' % capa.features.common.escape_string(s)
def render_feature(ostream, match, feature, indent=0):
ostream.write(" " * indent)
key = feature["type"]
value = feature[feature["type"]]
if key == "regex":
key = "string" # render string for regex to mirror the rule source
value = feature["match"] # the match provides more information than the value for regex
if key not in ("regex", "substring"):
# like:
# number: 10 = SOME_CONSTANT @ 0x401000
if key == "string":
value = render_string_value(value)
ostream.write(key)
ostream.write(": ")
ostream.write(key)
ostream.write(": ")
if value:
ostream.write(rutils.bold2(value))
if value:
ostream.write(rutils.bold2(value))
if "description" in feature:
ostream.write(capa.rules.DESCRIPTION_SEPARATOR)
ostream.write(feature["description"])
if "description" in feature:
ostream.write(capa.rules.DESCRIPTION_SEPARATOR)
ostream.write(feature["description"])
if key not in ("os", "arch"):
render_locations(ostream, match)
ostream.write("\n")
else:
# like:
# regex: /blah/ = SOME_CONSTANT
# - "foo blah baz" @ 0x401000
# - "aaa blah bbb" @ 0x402000, 0x403400
ostream.write(key)
ostream.write(": ")
ostream.write(value)
ostream.write("\n")
for match, locations in sorted(feature["matches"].items(), key=lambda p: p[0]):
ostream.write(" " * (indent + 1))
ostream.write("- ")
ostream.write(rutils.bold2(render_string_value(match)))
render_locations(ostream, {"locations": locations})
ostream.write("\n")
render_locations(ostream, match)
ostream.write("\n")
def render_node(ostream, match, node, indent=0):
@@ -220,12 +190,6 @@ def render_rules(ostream, doc):
continue
v = rule["meta"][key]
if not v:
continue
if key in ("att&ck", "mbc"):
v = [rutils.format_parts_id(vv) for vv in v]
if isinstance(v, list) and len(v) == 1:
v = v[0]
elif isinstance(v, list) and len(v) > 1:
@@ -265,8 +229,3 @@ def render_vverbose(doc):
ostream.write("\n")
return ostream.getvalue()
def render(meta, rules: RuleSet, capabilities: MatchResults) -> str:
doc = capa.render.result_document.convert_capabilities_to_result_document(meta, rules, capabilities)
return render_vverbose(doc)

View File

@@ -6,35 +6,29 @@
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import io
import re
import uuid
import codecs
import logging
import binascii
import functools
import collections
try:
from functools import lru_cache
except ImportError:
# need to type ignore this due to mypy bug here (duplicate name):
# https://github.com/python/mypy/issues/1153
from backports.functools_lru_cache import lru_cache # type: ignore
from typing import Any, Dict, List, Union, Iterator
from backports.functools_lru_cache import lru_cache
import six
import yaml
import ruamel.yaml
import capa.engine as ceng
import capa.engine
import capa.features
import capa.features.file
import capa.features.insn
import capa.features.common
import capa.features.basicblock
from capa.engine import Statement, FeatureSet
from capa.features.common import MAX_BYTES_FEATURE_SIZE, Feature
from capa.engine import *
from capa.features import MAX_BYTES_FEATURE_SIZE
logger = logging.getLogger(__name__)
@@ -71,45 +65,37 @@ BASIC_BLOCK_SCOPE = "basic block"
SUPPORTED_FEATURES = {
FILE_SCOPE: {
capa.features.common.MatchedRule,
capa.features.MatchedRule,
capa.features.file.Export,
capa.features.file.Import,
capa.features.file.Section,
capa.features.file.FunctionName,
capa.features.common.Characteristic("embedded pe"),
capa.features.common.String,
capa.features.common.Format,
capa.features.common.OS,
capa.features.common.Arch,
capa.features.Characteristic("embedded pe"),
capa.features.String,
},
FUNCTION_SCOPE: {
# plus basic block scope features, see below
capa.features.basicblock.BasicBlock,
capa.features.common.Characteristic("calls from"),
capa.features.common.Characteristic("calls to"),
capa.features.common.Characteristic("loop"),
capa.features.common.Characteristic("recursive call"),
capa.features.common.OS,
capa.features.common.Arch,
capa.features.Characteristic("calls from"),
capa.features.Characteristic("calls to"),
capa.features.Characteristic("loop"),
capa.features.Characteristic("recursive call"),
},
BASIC_BLOCK_SCOPE: {
capa.features.common.MatchedRule,
capa.features.MatchedRule,
capa.features.insn.API,
capa.features.insn.Number,
capa.features.common.String,
capa.features.common.Bytes,
capa.features.String,
capa.features.Bytes,
capa.features.insn.Offset,
capa.features.insn.Mnemonic,
capa.features.common.Characteristic("nzxor"),
capa.features.common.Characteristic("peb access"),
capa.features.common.Characteristic("fs access"),
capa.features.common.Characteristic("gs access"),
capa.features.common.Characteristic("cross section flow"),
capa.features.common.Characteristic("tight loop"),
capa.features.common.Characteristic("stack string"),
capa.features.common.Characteristic("indirect call"),
capa.features.common.OS,
capa.features.common.Arch,
capa.features.Characteristic("nzxor"),
capa.features.Characteristic("peb access"),
capa.features.Characteristic("fs access"),
capa.features.Characteristic("gs access"),
capa.features.Characteristic("cross section flow"),
capa.features.Characteristic("tight loop"),
capa.features.Characteristic("stack string"),
capa.features.Characteristic("indirect call"),
},
}
@@ -152,32 +138,22 @@ class InvalidRuleSet(ValueError):
return str(self)
def ensure_feature_valid_for_scope(scope: str, feature: Union[Feature, Statement]):
# if the given feature is a characteristic,
# check that is a valid characteristic for the given scope.
if (
isinstance(feature, capa.features.common.Characteristic)
and isinstance(feature.value, str)
and capa.features.common.Characteristic(feature.value) not in SUPPORTED_FEATURES[scope]
):
raise InvalidRule("feature %s not supported for scope %s" % (feature, scope))
if not isinstance(feature, capa.features.common.Characteristic):
# features of this scope that are not Characteristics will be Type instances.
# check that the given feature is one of these types.
types_for_scope = filter(lambda t: isinstance(t, type), SUPPORTED_FEATURES[scope])
if not isinstance(feature, tuple(types_for_scope)): # type: ignore
raise InvalidRule("feature %s not supported for scope %s" % (feature, scope))
def ensure_feature_valid_for_scope(scope, feature):
if isinstance(feature, capa.features.Characteristic):
if capa.features.Characteristic(feature.value) not in SUPPORTED_FEATURES[scope]:
raise InvalidRule("feature %s not support for scope %s" % (feature, scope))
elif not isinstance(feature, tuple(filter(lambda t: isinstance(t, type), SUPPORTED_FEATURES[scope]))):
raise InvalidRule("feature %s not support for scope %s" % (feature, scope))
def parse_int(s: str) -> int:
def parse_int(s):
if s.startswith("0x"):
return int(s, 0x10)
else:
return int(s, 10)
def parse_range(s: str):
def parse_range(s):
"""
parse a string "(0, 1)" into a range (min, max).
min and/or max may be None to indicate an unbound range.
@@ -190,21 +166,23 @@ def parse_range(s: str):
raise InvalidRule("invalid range: %s" % (s))
s = s[len("(") : -len(")")]
min_spec, _, max_spec = s.partition(",")
min_spec = min_spec.strip()
max_spec = max_spec.strip()
min, _, max = s.partition(",")
min = min.strip()
max = max.strip()
min = None
if min_spec:
min = parse_int(min_spec)
if min:
min = parse_int(min.strip())
if min < 0:
raise InvalidRule("range min less than zero")
else:
min = None
max = None
if max_spec:
max = parse_int(max_spec)
if max:
max = parse_int(max.strip())
if max < 0:
raise InvalidRule("range max less than zero")
else:
max = None
if min is not None and max is not None:
if max < min:
@@ -213,38 +191,36 @@ def parse_range(s: str):
return min, max
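For reference, the grammar handled above pairs `parse_int` with `parse_range`; a minimal sketch of the expected behavior on hypothetical inputs:

```python
# hypothetical inputs illustrating the range grammar parsed above
assert parse_int("0x10") == 16                   # hex via the "0x" prefix
assert parse_int("16") == 16                     # decimal otherwise
assert parse_range("(2, 10)") == (2, 10)
assert parse_range("(0x10, )") == (16, None)     # empty max spec -> unbounded
assert parse_range("(, 5)") == (None, 5)         # empty min spec -> unbounded
```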
def parse_feature(key: str):
def parse_feature(key):
# keep this in sync with supported features
if key == "api":
return capa.features.insn.API
elif key == "string":
return capa.features.common.StringFactory
elif key == "substring":
return capa.features.common.Substring
return capa.features.StringFactory
elif key == "bytes":
return capa.features.common.Bytes
return capa.features.Bytes
elif key == "number":
return capa.features.insn.Number
elif key.startswith("number/"):
bitness = key.partition("/")[2]
arch = key.partition("/")[2]
# the other handlers here return constructors for features,
# and we want to as well,
# however, we need to preconfigure one of the arguments (`bitness`).
# however, we need to preconfigure one of the arguments (`arch`).
# so, instead we return a partially-applied function that
# provides `bitness` to the feature constructor.
# provides `arch` to the feature constructor.
# it forwards any other arguments provided to the closure along to the constructor.
return functools.partial(capa.features.insn.Number, bitness=bitness)
return functools.partial(capa.features.insn.Number, arch=arch)
elif key == "offset":
return capa.features.insn.Offset
elif key.startswith("offset/"):
bitness = key.partition("/")[2]
return functools.partial(capa.features.insn.Offset, bitness=bitness)
arch = key.partition("/")[2]
return functools.partial(capa.features.insn.Offset, arch=arch)
elif key == "mnemonic":
return capa.features.insn.Mnemonic
elif key == "basic blocks":
return capa.features.basicblock.BasicBlock
elif key == "characteristic":
return capa.features.common.Characteristic
return capa.features.Characteristic
elif key == "export":
return capa.features.file.Export
elif key == "import":
@@ -252,16 +228,7 @@ def parse_feature(key: str):
elif key == "section":
return capa.features.file.Section
elif key == "match":
return capa.features.common.MatchedRule
elif key == "function-name":
return capa.features.file.FunctionName
elif key == "os":
return capa.features.common.OS
elif key == "format":
return capa.features.common.Format
elif key == "arch":
return capa.features.common.Arch
return capa.features.MatchedRule
else:
raise InvalidRule("unexpected statement: %s" % key)
@@ -273,70 +240,39 @@ def parse_feature(key: str):
DESCRIPTION_SEPARATOR = " = "
def parse_bytes(s: str) -> bytes:
try:
b = codecs.decode(s.replace(" ", "").encode("ascii"), "hex")
except binascii.Error:
raise InvalidRule('unexpected bytes value: must be a valid hex sequence: "%s"' % s)
if len(b) > MAX_BYTES_FEATURE_SIZE:
raise InvalidRule(
"unexpected bytes value: byte sequences must be no larger than %s bytes" % MAX_BYTES_FEATURE_SIZE
)
return b
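A quick sanity check of `parse_bytes` on a hypothetical input: spaces are stripped before hex decoding, and non-hex or oversized sequences raise `InvalidRule`:

```python
# hypothetical usage of parse_bytes
assert parse_bytes("4d 5a 90 00") == b"\x4d\x5a\x90\x00"  # "MZ" header bytes
# parse_bytes("zz") raises InvalidRule: not a valid hex sequence
```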
def parse_description(s: Union[str, int, bytes], value_type: str, description=None):
if value_type == "string":
# string features cannot have inline descriptions,
# so we assume the entire value is the string,
# like: `string: foo = bar` -> "foo = bar"
value = s
def parse_description(s, value_type, description=None):
"""
s can be an int or a string
"""
if value_type != "string" and isinstance(s, six.string_types) and DESCRIPTION_SEPARATOR in s:
if description:
raise InvalidRule(
'unexpected value: "%s", only one description allowed (inline description with `%s`)'
% (s, DESCRIPTION_SEPARATOR)
)
value, _, description = s.partition(DESCRIPTION_SEPARATOR)
if description == "":
raise InvalidRule('unexpected value: "%s", description cannot be empty' % s)
else:
# other features can have inline descriptions, like `number: 10 = CONST_FOO`.
# in this case, the RHS will be like `10 = CONST_FOO` or some other string
if isinstance(s, str):
if DESCRIPTION_SEPARATOR in s:
if description:
# there is already a description passed in as a sub node, like:
#
# - number: 10 = CONST_FOO
# description: CONST_FOO
raise InvalidRule(
'unexpected value: "%s", only one description allowed (inline description with `%s`)'
% (s, DESCRIPTION_SEPARATOR)
)
value = s
value, _, description = s.partition(DESCRIPTION_SEPARATOR)
if description == "":
# sanity check:
# there is an empty description, like `number: 10 =`
raise InvalidRule('unexpected value: "%s", description cannot be empty' % s)
else:
# this is a string, but there is no description,
# like: `api: CreateFileA`
value = s
if isinstance(value, six.string_types):
if value_type == "bytes":
try:
value = codecs.decode(value.replace(" ", ""), "hex")
# TODO: Remove TypeError when Python2 is not used anymore
except (TypeError, binascii.Error):
raise InvalidRule('unexpected bytes value: "%s", must be a valid hex sequence' % value)
# cast from the received string value to the appropriate type.
#
# without a description, this type would already be correct,
# but since we parsed the description from a string,
# we need to convert the value to the expected type.
#
# for example, from `number: 10 = CONST_FOO` we have
# the string "10" that needs to become the number 10.
if value_type == "bytes":
value = parse_bytes(value)
elif value_type in ("number", "offset") or value_type.startswith(("number/", "offset/")):
try:
value = parse_int(value)
except ValueError:
raise InvalidRule('unexpected value: "%s", must begin with numerical value' % value)
else:
# the value might be a number, like: `number: 10`
value = s
if len(value) > MAX_BYTES_FEATURE_SIZE:
raise InvalidRule(
"unexpected bytes value: byte sequences must be no larger than %s bytes" % MAX_BYTES_FEATURE_SIZE
)
elif value_type in ("number", "offset") or value_type.startswith(("number/", "offset/")):
try:
value = parse_int(value)
except ValueError:
raise InvalidRule('unexpected value: "%s", must begin with numerical value' % value)
return value, description
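A minimal sketch of `parse_description` on hypothetical inputs, following the rules above: non-string values pass through, `string` features keep the whole right-hand side, and other features split on the `" = "` separator and cast the value:

```python
# hypothetical inputs illustrating the splitting and casting above
assert parse_description("10 = CONST_FOO", "number") == (10, "CONST_FOO")
assert parse_description("foo = bar", "string") == ("foo = bar", None)  # no inline desc for strings
assert parse_description(10, "number") == (10, None)                    # already the right type
```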
@@ -376,28 +312,28 @@ def pop_statement_description_entry(d):
return description["description"]
def build_statements(d, scope: str):
def build_statements(d, scope):
if len(d.keys()) > 2:
raise InvalidRule("too many statements")
key = list(d.keys())[0]
description = pop_statement_description_entry(d[key])
if key == "and":
return ceng.And([build_statements(dd, scope) for dd in d[key]], description=description)
return And([build_statements(dd, scope) for dd in d[key]], description=description)
elif key == "or":
return ceng.Or([build_statements(dd, scope) for dd in d[key]], description=description)
return Or([build_statements(dd, scope) for dd in d[key]], description=description)
elif key == "not":
if len(d[key]) != 1:
raise InvalidRule("not statement must have exactly one child statement")
return ceng.Not(build_statements(d[key][0], scope), description=description)
return Not(build_statements(d[key][0], scope), description=description)
elif key.endswith(" or more"):
count = int(key[: -len("or more")])
return ceng.Some(count, [build_statements(dd, scope) for dd in d[key]], description=description)
return Some(count, [build_statements(dd, scope) for dd in d[key]], description=description)
elif key == "optional":
# `optional` is an alias for `0 or more`
# which is useful for documenting behaviors,
# like with `write file`, we might say that `WriteFile` is optionally found alongside `CreateFileA`.
return ceng.Some(0, [build_statements(dd, scope) for dd in d[key]], description=description)
return Some(0, [build_statements(dd, scope) for dd in d[key]], description=description)
elif key == "function":
if scope != FILE_SCOPE:
@@ -406,7 +342,7 @@ def build_statements(d, scope: str):
if len(d[key]) != 1:
raise InvalidRule("subscope must have exactly one child statement")
return ceng.Subscope(FUNCTION_SCOPE, build_statements(d[key][0], FUNCTION_SCOPE))
return Subscope(FUNCTION_SCOPE, build_statements(d[key][0], FUNCTION_SCOPE))
elif key == "basic block":
if scope != FUNCTION_SCOPE:
@@ -415,7 +351,7 @@ def build_statements(d, scope: str):
if len(d[key]) != 1:
raise InvalidRule("subscope must have exactly one child statement")
return ceng.Subscope(BASIC_BLOCK_SCOPE, build_statements(d[key][0], BASIC_BLOCK_SCOPE))
return Subscope(BASIC_BLOCK_SCOPE, build_statements(d[key][0], BASIC_BLOCK_SCOPE))
elif key.startswith("count(") and key.endswith(")"):
# e.g.:
@@ -456,28 +392,22 @@ def build_statements(d, scope: str):
count = d[key]
if isinstance(count, int):
return ceng.Range(feature, min=count, max=count, description=description)
return Range(feature, min=count, max=count, description=description)
elif count.endswith(" or more"):
min = parse_int(count[: -len(" or more")])
max = None
return ceng.Range(feature, min=min, max=max, description=description)
return Range(feature, min=min, max=max, description=description)
elif count.endswith(" or fewer"):
min = None
max = parse_int(count[: -len(" or fewer")])
return ceng.Range(feature, min=min, max=max, description=description)
return Range(feature, min=min, max=max, description=description)
elif count.startswith("("):
min, max = parse_range(count)
return ceng.Range(feature, min=min, max=max, description=description)
return Range(feature, min=min, max=max, description=description)
else:
raise InvalidRule("unexpected range: %s" % (count))
elif key == "string" and not isinstance(d[key], str):
elif key == "string" and not isinstance(d[key], six.string_types):
raise InvalidRule("ambiguous string value %s, must be defined as explicit string" % d[key])
elif (
(key == "os" and d[key] not in capa.features.common.VALID_OS)
or (key == "format" and d[key] not in capa.features.common.VALID_FORMAT)
or (key == "arch" and d[key] not in capa.features.common.VALID_ARCH)
):
raise InvalidRule("unexpected %s value %s" % (key, d[key]))
else:
Feature = parse_feature(key)
value, description = parse_description(d[key], key, d.get("description"))
@@ -489,16 +419,16 @@ def build_statements(d, scope: str):
return feature
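For orientation, `build_statements` consumes the dict produced by parsing the rule's YAML; a hypothetical input sketch showing the shapes each branch above matches on:

```python
# hypothetical parsed-YAML logic tree, as build_statements(d, scope) receives it
d = {
    "or": [
        {"and": [
            {"api": "CreateFileA"},
            {"number": "0x10 = SOME_FLAG"},   # inline description after " = "
        ]},
        {"2 or more": [
            {"mnemonic": "xor"},
            {"mnemonic": "shl"},
        ]},
    ]
}
# boolean keys recurse into their children, "N or more" builds Some(N, ...),
# and leaf keys are routed through parse_feature()/parse_description().
```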
def first(s: List[Any]) -> Any:
def first(s):
return s[0]
def second(s: List[Any]) -> Any:
def second(s):
return s[1]
class Rule:
def __init__(self, name: str, scope: str, statement: Statement, meta, definition=""):
class Rule(object):
def __init__(self, name, scope, statement, meta, definition=""):
super(Rule, self).__init__()
self.name = name
self.scope = scope
@@ -528,7 +458,7 @@ class Rule:
deps = set([])
def rec(statement):
if isinstance(statement, capa.features.common.MatchedRule):
if isinstance(statement, capa.features.MatchedRule):
# we're not sure at this point if the `statement.value` is
# really a rule name or a namespace name (we use `MatchedRule` for both cases).
# we'll give precedence to namespaces, and then assume if that does work,
@@ -544,7 +474,7 @@ class Rule:
# not a namespace, assume its a rule name.
deps.add(statement.value)
elif isinstance(statement, ceng.Statement):
elif isinstance(statement, Statement):
for child in statement.get_children():
rec(child)
@@ -555,9 +485,11 @@ class Rule:
return deps
def _extract_subscope_rules_rec(self, statement):
if isinstance(statement, ceng.Statement):
if isinstance(statement, Statement):
# for each child that is a subscope,
for subscope in filter(lambda statement: isinstance(statement, ceng.Subscope), statement.get_children()):
for subscope in filter(
lambda statement: isinstance(statement, capa.engine.Subscope), statement.get_children()
):
# create a new rule from it.
# the name is a randomly generated, hopefully unique value.
@@ -582,7 +514,7 @@ class Rule:
)
# update the existing statement to `match` the new rule
new_node = capa.features.common.MatchedRule(name)
new_node = capa.features.MatchedRule(name)
statement.replace_child(subscope, new_node)
# and yield the new rule to our caller
@@ -619,16 +551,15 @@ class Rule:
for new_rule in self._extract_subscope_rules_rec(self.statement):
yield new_rule
def evaluate(self, features: FeatureSet):
def evaluate(self, features):
return self.statement.evaluate(features)
@classmethod
def from_dict(cls, d, definition):
meta = d["rule"]["meta"]
name = meta["name"]
name = d["rule"]["meta"]["name"]
# if scope is not specified, default to function scope.
# this is probably the mode that rule authors will start with.
scope = meta.get("scope", FUNCTION_SCOPE)
scope = d["rule"]["meta"].get("scope", FUNCTION_SCOPE)
statements = d["rule"]["features"]
# the rule must start with a single logic node.
@@ -636,19 +567,13 @@ class Rule:
if len(statements) != 1:
raise InvalidRule("rule must begin with a single top level statement")
if isinstance(statements[0], ceng.Subscope):
if isinstance(statements[0], capa.engine.Subscope):
raise InvalidRule("top level statement may not be a subscope")
if scope not in SUPPORTED_FEATURES.keys():
raise InvalidRule("{:s} is not a supported scope".format(scope))
meta = d["rule"]["meta"]
if not isinstance(meta.get("att&ck", []), list):
raise InvalidRule("ATT&CK mapping must be a list")
if not isinstance(meta.get("mbc", []), list):
raise InvalidRule("MBC mapping must be a list")
return cls(name, scope, build_statements(statements[0], scope), meta, definition)
return cls(name, scope, build_statements(statements[0], scope), d["rule"]["meta"], definition)
@staticmethod
@lru_cache()
@@ -774,7 +699,7 @@ class Rule:
for key in hidden_meta.keys():
del meta[key]
ostream = io.BytesIO()
ostream = six.BytesIO()
self._get_ruamel_yaml_parser().dump(definition, ostream)
for key, value in hidden_meta.items():
@@ -811,42 +736,53 @@ class Rule:
# the below regex makes these adjustments and, while ugly, means we don't have to explore the ruamel.yaml internals
doc = re.sub(r"!!int '0x-([0-9a-fA-F]+)'", r"-0x\1", doc)
# normalize CRLF to LF
doc = doc.replace("\r\n", "\n")
return doc
def get_rules_with_scope(rules, scope) -> List[Rule]:
def get_rules_with_scope(rules, scope):
"""
from the given collection of rules, select those with the given scope.
`scope` is one of the capa.rules.*_SCOPE constants.
args:
rules (List[capa.rules.Rule]):
scope (str): one of the capa.rules.*_SCOPE constants.
returns:
List[capa.rules.Rule]:
"""
return list(rule for rule in rules if rule.scope == scope)
def get_rules_and_dependencies(rules: List[Rule], rule_name: str) -> Iterator[Rule]:
def get_rules_and_dependencies(rules, rule_name):
"""
from the given collection of rules, select a rule and its dependencies (transitively).
args:
rules (List[Rule]):
rule_name (str):
yields:
Rule:
"""
# we evaluate `rules` multiple times, so if it's a generator, realize it into a list.
rules = list(rules)
namespaces = index_rules_by_namespace(rules)
rules_by_name = {rule.name: rule for rule in rules}
rules = {rule.name: rule for rule in rules}
wanted = set([rule_name])
def rec(rule):
wanted.add(rule.name)
for dep in rule.get_dependencies(namespaces):
rec(rules_by_name[dep])
rec(rules[dep])
rec(rules_by_name[rule_name])
rec(rules[rule_name])
for rule in rules_by_name.values():
for rule in rules.values():
if rule.name in wanted:
yield rule
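The traversal above is a transitive closure over `match:` references; the same idea reduced to plain dicts (toy rule names, not the `Rule` class):

```python
# toy stand-in: rule name -> names of the rules it matches on
deps = {
    "persist via scheduled task": ["create process"],
    "create process": [],
    "unrelated rule": [],
}

def wanted_rules(rule_name, deps):
    wanted = set()
    def rec(name):
        wanted.add(name)
        for dep in deps[name]:
            rec(dep)
    rec(rule_name)
    return wanted

assert wanted_rules("persist via scheduled task", deps) == {
    "persist via scheduled task",
    "create process",
}
```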
def ensure_rules_are_unique(rules: List[Rule]) -> None:
def ensure_rules_are_unique(rules):
seen = set([])
for rule in rules:
if rule.name in seen:
@@ -854,7 +790,7 @@ def ensure_rules_are_unique(rules: List[Rule]) -> None:
seen.add(rule.name)
def ensure_rule_dependencies_are_met(rules: List[Rule]) -> None:
def ensure_rule_dependencies_are_met(rules):
"""
raise an exception if a rule dependency does not exist.
@@ -864,14 +800,14 @@ def ensure_rule_dependencies_are_met(rules: List[Rule]) -> None:
# we evaluate `rules` multiple times, so if it's a generator, realize it into a list.
rules = list(rules)
namespaces = index_rules_by_namespace(rules)
rules_by_name = {rule.name: rule for rule in rules}
for rule in rules_by_name.values():
rules = {rule.name: rule for rule in rules}
for rule in rules.values():
for dep in rule.get_dependencies(namespaces):
if dep not in rules_by_name:
if dep not in rules:
raise InvalidRule('rule "%s" depends on missing rule "%s"' % (rule.name, dep))
def index_rules_by_namespace(rules: List[Rule]) -> Dict[str, List[Rule]]:
def index_rules_by_namespace(rules):
"""
compute the rules that fit into each namespace found within the given rules.
@@ -885,6 +821,11 @@ def index_rules_by_namespace(rules: List[Rule]) -> Dict[str, List[Rule]]:
c2/shell: [create reverse shell]
c2/file-transfer: [download and write a file]
c2: [create reverse shell, download and write a file]
Args:
rules (List[Rule]):
Returns: Dict[str, List[Rule]]
"""
namespaces = collections.defaultdict(list)
@@ -900,38 +841,7 @@ def index_rules_by_namespace(rules: List[Rule]) -> Dict[str, List[Rule]]:
return dict(namespaces)
def topologically_order_rules(rules: List[Rule]) -> List[Rule]:
"""
order the given rules such that dependencies show up before dependents.
this means that as we match rules, we can add features for the matches, and these
will be matched by subsequent rules if they follow this order.
assumes that the rule dependency graph is a DAG.
"""
# we evaluate `rules` multiple times, so if it's a generator, realize it into a list.
rules = list(rules)
namespaces = index_rules_by_namespace(rules)
rules_by_name = {rule.name: rule for rule in rules}
seen = set([])
ret = []
def rec(rule):
if rule.name in seen:
return
for dep in rule.get_dependencies(namespaces):
rec(rules_by_name[dep])
ret.append(rule)
seen.add(rule.name)
for rule in rules_by_name.values():
rec(rule)
return ret
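The same post-order DFS, reduced to plain dicts so the ordering property is easy to check (toy graph; assumes a DAG, just like the code above):

```python
# toy graph: name -> dependencies; dependencies must come out first
graph = {"c": ["b"], "b": ["a"], "a": []}

def topo(graph):
    seen, out = set(), []
    def rec(name):
        if name in seen:
            return
        for dep in graph[name]:
            rec(dep)
        out.append(name)
        seen.add(name)
    for name in graph:
        rec(name)
    return out

order = topo(graph)
assert order.index("a") < order.index("b") < order.index("c")
```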
class RuleSet:
class RuleSet(object):
"""
a ruleset is initialized with a collection of rules, which it verifies and sorts into scopes.
each set of scoped rules is sorted topologically, which enables rules to match on past rule matches.
@@ -946,7 +856,7 @@ class RuleSet:
capa.engine.match(ruleset.file_rules, ...)
"""
def __init__(self, rules: List[Rule]):
def __init__(self, rules):
super(RuleSet, self).__init__()
ensure_rules_are_unique(rules)
@@ -962,7 +872,6 @@ class RuleSet:
self.function_rules = self._get_rules_for_scope(rules, FUNCTION_SCOPE)
self.basic_block_rules = self._get_rules_for_scope(rules, BASIC_BLOCK_SCOPE)
self.rules = {rule.name: rule for rule in rules}
self.rules_by_namespace = index_rules_by_namespace(rules)
def __len__(self):
return len(self.rules)
@@ -970,9 +879,6 @@ class RuleSet:
def __getitem__(self, rulename):
return self.rules[rulename]
def __contains__(self, rulename):
return rulename in self.rules
@staticmethod
def _get_rules_for_scope(rules, scope):
"""
@@ -993,7 +899,7 @@ class RuleSet:
continue
scope_rules.update(get_rules_and_dependencies(rules, rule.name))
return get_rules_with_scope(topologically_order_rules(list(scope_rules)), scope)
return get_rules_with_scope(capa.engine.topologically_order_rules(scope_rules), scope)
@staticmethod
def _extract_subscope_rules(rules):
@@ -1017,7 +923,7 @@ class RuleSet:
return done
def filter_rules_by_meta(self, tag: str) -> "RuleSet":
def filter_rules_by_meta(self, tag):
"""
return new rule set with rules filtered based on all meta field values, adds all dependency rules
apply tag-based rule filter assuming that all required rules are loaded
@@ -1026,11 +932,11 @@ class RuleSet:
TODO handle circular dependencies?
TODO support -t=metafield <k>
"""
rules = list(self.rules.values())
rules = self.rules.values()
rules_filtered = set([])
for rule in rules:
for k, v in rule.meta.items():
if isinstance(v, str) and tag in v:
if isinstance(v, six.string_types) and tag in v:
logger.debug('using rule "%s" and dependencies, found tag in meta.%s: %s', rule.name, k, v)
rules_filtered.update(set(capa.rules.get_rules_and_dependencies(rules, rule.name)))
break


@@ -1 +1 @@
__version__ = "3.0.2"
__version__ = "1.4.0"


@@ -27,10 +27,6 @@ To install capa as a Python library use `pip` to fetch the `flare-capa` module.
#### *Note*:
This method is appropriate for integrating capa in an existing project.
This technique doesn't pull the default rule set, so you should check it out separately from [capa-rules](https://github.com/fireeye/capa-rules/) and pass the directory to the entrypoint using `-r` or set the rules path in the IDA Pro plugin.
This technique also doesn't set up the default library identification [signatures](https://github.com/fireeye/capa/tree/master/sigs). You can pass the signature directory using the `-s` argument.
For example, to run capa with both a rule path and a signature path:
capa -r /path/to/capa-rules -s /path/to/capa-sigs suspicious.exe
Alternatively, see Method 3 below.
### 1. Install capa module
@@ -46,7 +42,6 @@ If you'd like to review and modify the capa source code, you'll need to check it
Next, clone the capa git repository.
We use submodules to separate [code](https://github.com/fireeye/capa), [rules](https://github.com/fireeye/capa-rules), and [test data](https://github.com/fireeye/capa-testfiles).
To clone everything use the `--recurse-submodules` option:
- CAUTION: The capa testfiles repository contains many malware samples. If you pull down everything using this method, you may want to install to a directory that won't trigger your anti-virus software.
- `$ git clone --recurse-submodules https://github.com/fireeye/capa.git /local/path/to/src` (HTTPS)
- `$ git clone --recurse-submodules git@github.com:fireeye/capa.git /local/path/to/src` (SSH)
@@ -64,25 +59,6 @@ Use `pip` to install the source code in "editable" mode. This means that Python
You'll find that the `capa.exe` (Windows) or `capa` (Linux/MacOS) executables in your path now invoke the capa binary from this directory.
#### Development
##### venv [optional]
For development, we recommend using [venv](https://docs.python.org/3/tutorial/venv.html). It allows you to create a virtual environment: a self-contained directory tree that contains a Python installation for a particular version of Python, plus a number of additional packages. This approach avoids conflicts between the requirements of different applications on your computer. It also ensures that you don't forget to add a new requirement to `setup.py` when using a library already installed on your system.
To create an environment (in the parent directory, to avoid committing it by accident or messing with the linters), run:
`$ python3 -m venv ../capa-env`
To activate `capa-env` in Linux or MacOS, run:
`$ source ../capa-env/bin/activate`
To activate `capa-env` in Windows, run:
`$ ..\capa-env\Scripts\activate.bat`
For more details about creating and using virtual environments, check out the [venv documentation](https://docs.python.org/3/tutorial/venv.html).
##### Install development dependencies
We use the following tools to ensure consistent code style and formatting:
- [black](https://github.com/psf/black) code formatter, with `-l 120`
- [isort 5](https://pypi.org/project/isort/) code formatter, with `--profile black --length-sort --line-width 120`
@@ -93,28 +69,30 @@ To install these development dependencies, run:
`$ pip install -e /local/path/to/src[dev]`
Note that some development dependencies (including the black code formatter) require Python 3.
To check the code style and formatting and to run the tests, you can run the script `scripts/ci.sh`.
You can run it with the argument `no_tests` to skip the tests and only run the code style and formatting checks: `scripts/ci.sh no_tests`
##### Setup hooks [optional]
If you plan to contribute to capa, you may want to set up the hooks.
Run `scripts/setup-hooks.sh` to set up the following hooks:
- The `pre-commit` hook runs checks before every `git commit`.
It runs `scripts/ci.sh no_tests` aborting the commit if there are code style or rule linter offenses you need to fix.
- The `pre-push` hook runs checks before every `git push`.
It runs `scripts/ci.sh` aborting the push if there are code style or rule linter offenses or if the tests fail.
This way you can ensure everything is alright before sending a pull request.
You can skip the checks by using the `--no-verify` git option.
### 3. Compile binary using PyInstaller
We compile capa standalone binaries using PyInstaller. To reproduce the build process, check out the source code as described above and follow these steps.
#### Install PyInstaller:
`$ pip install pyinstaller` (Python 3)
For Python 2.7: `$ pip install 'pyinstaller==3.*'` (PyInstaller 4 doesn't support Python 2.7)
For Python 3: `$ pip install pyinstaller`
#### Run PyInstaller
`$ pyinstaller .github/pyinstaller/pyinstaller.spec`
You can find the compiled binary in the created directory `dist/`.
### 4. Setup hooks [optional]
If you plan to contribute to capa, you may want to set up the hooks.
Run `scripts/setup-hooks.sh` to set up the following hooks:
- The `pre-commit` hook runs checks before every `git commit`.
It runs `scripts/ci.sh no_tests` aborting the commit if there are code style or rule linter offenses you need to fix.
You can skip this check by using the `--no-verify` git option.
- The `pre-push` hook runs checks before every `git push`.
It runs `scripts/ci.sh` aborting the push if there are code style or rule linter offenses or if the tests fail.
This way you can ensure everything is alright before sending a pull request.


@@ -1,43 +0,0 @@
# Release checklist
- [ ] Ensure all [milestoned issues/PRs](https://github.com/fireeye/capa/milestones) are addressed, or reassign to a new milestone.
- [ ] Add the `dont merge` label to all PRs that are close to being ready to merge (or merge them if they are ready) in [capa](https://github.com/fireeye/capa/pulls) and [capa-rules](https://github.com/fireeye/capa-rules/pulls).
- [ ] Ensure the [CI workflow succeeds in master](https://github.com/fireeye/capa/actions/workflows/tests.yml?query=branch%3Amaster).
- [ ] Ensure that `python scripts/lint.py rules/ --thorough` succeeds (only `missing examples` offenses are allowed in the nursery).
- [ ] Review changes
- capa https://github.com/fireeye/capa/compare/\<last-release\>...master
- capa-rules https://github.com/fireeye/capa-rules/compare/\<last-release\>...master
- [ ] Update [CHANGELOG.md](https://github.com/fireeye/capa/blob/master/CHANGELOG.md)
- Do not forget to add a nice introduction thanking contributors
- Remember that we need a major release if we introduce breaking changes
- Sections: see template below
- Update `Raw diffs` links
- Create placeholder for `master (unreleased)` section
```
## master (unreleased)
### New Features
### Breaking Changes
### New Rules (0)
-
### Bug Fixes
### capa explorer IDA Pro plugin
### Development
### Raw diffs
- [capa <release>...master](https://github.com/fireeye/capa/compare/<release>...master)
- [capa-rules <release>...master](https://github.com/fireeye/capa-rules/compare/<release>...master)
```
- [ ] Update [capa/version.py](https://github.com/fireeye/capa/blob/master/capa/version.py)
- [ ] Create a PR with the updated [CHANGELOG.md](https://github.com/fireeye/capa/blob/master/CHANGELOG.md) and [capa/version.py](https://github.com/fireeye/capa/blob/master/capa/version.py). Copy this checklist in the PR description.
- [ ] After PR review, merge the PR and [create the release in GH](https://github.com/fireeye/capa/releases/new) using text from the [CHANGELOG.md](https://github.com/fireeye/capa/blob/master/CHANGELOG.md).
- [ ] Verify GH actions [upload artifacts](https://github.com/fireeye/capa/releases), [publish to PyPI](https://pypi.org/project/flare-capa) and [create a tag in capa rules](https://github.com/fireeye/capa-rules/tags) upon completion.
- [ ] [Spread the word](https://twitter.com)
- [ ] Update internal service


@@ -11,9 +11,3 @@ For example, `capa -t william.ballenthin@mandiant.com` runs rules that reference
### IDA Pro plugin: capa explorer
Please check out the [capa explorer documentation](/capa/ida/plugin/README.md).
### save time by reusing .viv files
Set the environment variable `CAPA_SAVE_WORKSPACE` to instruct the underlying analysis engine to
cache its intermediate results to the file system. For example, vivisect will create `.viv` files.
Subsequently, capa may run faster when reprocessing the same input file.
This is particularly useful during rule development as you repeatedly test a rule against a known sample.

rules

Submodule rules updated: f04491001d...faa670ac38


@@ -1,223 +1,247 @@
#!/usr/bin/env python
"""
bulk-process
Invoke capa recursively against a directory of samples
and emit a JSON document mapping the file paths to their results.
By default, this will use subprocesses for parallelism.
Use `-n/--parallelism` to change the subprocess count from
the default of current CPU count.
Use `--no-mp` to use threads instead of processes,
which is probably not useful unless you set `--parallelism=1`.
example:
$ python scripts/bulk-process /tmp/suspicious
{
"/tmp/suspicious/suspicious.dll_": {
"rules": {
"encode data using XOR": {
"matches": {
"268440358": {
[...]
"/tmp/suspicious/1.dll_": { ... }
"/tmp/suspicious/2.dll_": { ... }
}
usage:
usage: bulk-process.py [-h] [-r RULES] [-d] [-q] [-n PARALLELISM] [--no-mp]
input
detect capabilities in programs.
positional arguments:
input Path to directory of files to recursively analyze
optional arguments:
-h, --help show this help message and exit
-r RULES, --rules RULES
Path to rule file or directory, use embedded rules by
default
-d, --debug Enable debugging output on STDERR
-q, --quiet Disable all output but errors
-n PARALLELISM, --parallelism PARALLELISM
parallelism factor
--no-mp disable subprocesses
Copyright (C) 2020 FireEye, Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at: [package root]/LICENSE.txt
Unless required by applicable law or agreed to in writing, software distributed under the License
is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and limitations under the License.
"""
import os
import sys
import json
import logging
import os.path
import argparse
import multiprocessing
import multiprocessing.pool
import capa
import capa.main
import capa.rules
import capa.render.json
logger = logging.getLogger("capa")
def get_capa_results(args):
"""
run capa against the file at the given path, using the given rules.
args is a tuple, containing:
rules (capa.rules.RuleSet): the rules to match
signatures (List[str]): list of file system paths to signature files
format (str): the name of the sample file format
path (str): the file system path to the sample to process
args is a tuple because i'm not quite sure how to unpack multiple arguments using `map`.
returns a dict with two required keys:
path (str): the file system path of the sample to process
status (str): either "error" or "ok"
when status == "error", then a human readable message is found in property "error".
when status == "ok", then the capa results are found in the property "ok".
the capa results are a dictionary with the following keys:
meta (dict): the meta analysis results
capabilities (dict): the matched capabilities and their result objects
"""
rules, sigpaths, format, path = args
should_save_workspace = os.environ.get("CAPA_SAVE_WORKSPACE") not in ("0", "no", "NO", "n", None)
logger.info("computing capa results for: %s", path)
try:
extractor = capa.main.get_extractor(
path, format, capa.main.BACKEND_VIV, sigpaths, should_save_workspace, disable_progress=True
)
except capa.main.UnsupportedFormatError:
# i'm not 100% sure if multiprocessing will reliably raise exceptions across process boundaries.
# so instead, return an object with explicit success/failure status.
#
# if success, then status=ok, and results found in property "ok"
# if error, then status=error, and human readable message in property "error"
return {
"path": path,
"status": "error",
"error": "input file does not appear to be a PE file: %s" % path,
}
except capa.main.UnsupportedRuntimeError:
return {
"path": path,
"status": "error",
"error": "unsupported runtime or Python interpreter",
}
except Exception as e:
return {
"path": path,
"status": "error",
"error": "unexpected error: %s" % (e),
}
meta = capa.main.collect_metadata("", path, "", extractor)
capabilities, counts = capa.main.find_capabilities(rules, extractor, disable_progress=True)
meta["analysis"].update(counts)
return {
"path": path,
"status": "ok",
"ok": {
"meta": meta,
"capabilities": capabilities,
},
}
def main(argv=None):
if argv is None:
argv = sys.argv[1:]
parser = argparse.ArgumentParser(description="detect capabilities in programs.")
capa.main.install_common_args(parser, wanted={"rules", "signatures"})
parser.add_argument("input", type=str, help="Path to directory of files to recursively analyze")
parser.add_argument(
"-n", "--parallelism", type=int, default=multiprocessing.cpu_count(), help="parallelism factor"
)
parser.add_argument("--no-mp", action="store_true", help="disable subprocesses")
args = parser.parse_args(args=argv)
capa.main.handle_common_args(args)
try:
rules = capa.main.get_rules(args.rules)
rules = capa.rules.RuleSet(rules)
logger.info("successfully loaded %s rules", len(rules))
except (IOError, capa.rules.InvalidRule, capa.rules.InvalidRuleSet) as e:
logger.error("%s", str(e))
return -1
try:
sig_paths = capa.main.get_signatures(args.signatures)
except (IOError) as e:
logger.error("%s", str(e))
return -1
samples = []
for (base, directories, files) in os.walk(args.input):
for file in files:
samples.append(os.path.join(base, file))
def pmap(f, args, parallelism=multiprocessing.cpu_count()):
"""apply the given function f to the given args using subprocesses"""
return multiprocessing.Pool(parallelism).imap(f, args)
def tmap(f, args, parallelism=multiprocessing.cpu_count()):
"""apply the given function f to the given args using threads"""
return multiprocessing.pool.ThreadPool(parallelism).imap(f, args)
def map(f, args, parallelism=None):
"""apply the given function f to the given args in the current thread"""
for arg in args:
yield f(arg)
if args.no_mp:
if args.parallelism == 1:
logger.debug("using current thread mapper")
mapper = map
else:
logger.debug("using threading mapper")
mapper = tmap
else:
logger.debug("using process mapper")
mapper = pmap
results = {}
for result in mapper(
get_capa_results, [(rules, sig_paths, "pe", sample) for sample in samples], parallelism=args.parallelism
):
if result["status"] == "error":
logger.warning(result["error"])
elif result["status"] == "ok":
meta = result["ok"]["meta"]
capabilities = result["ok"]["capabilities"]
# our renderer expects to emit a json document for a single sample
# so we deserialize the json document, store it in a larger dict, and we'll subsequently re-encode.
results[result["path"]] = json.loads(capa.render.json.render(meta, rules, capabilities))
else:
raise ValueError("unexpected status: %s" % (result["status"]))
print(json.dumps(results))
logger.info("done.")
return 0
if __name__ == "__main__":
sys.exit(main())
#!/usr/bin/env python
"""
bulk-process
Invoke capa recursively against a directory of samples
and emit a JSON document mapping the file paths to their results.
By default, this will use subprocesses for parallelism.
Use `-n/--parallelism` to change the subprocess count from
the default of current CPU count.
Use `--no-mp` to use threads instead of processes,
which is probably not useful unless you set `--parallelism=1`.
example:
$ python scripts/bulk-process /tmp/suspicious
{
"/tmp/suspicious/suspicious.dll_": {
"rules": {
"encode data using XOR": {
"matches": {
"268440358": {
[...]
"/tmp/suspicious/1.dll_": { ... }
"/tmp/suspicious/2.dll_": { ... }
}
usage:
usage: bulk-process.py [-h] [-r RULES] [-d] [-q] [-n PARALLELISM] [--no-mp]
input
detect capabilities in programs.
positional arguments:
input Path to directory of files to recursively analyze
optional arguments:
-h, --help show this help message and exit
-r RULES, --rules RULES
Path to rule file or directory, use embedded rules by
default
-d, --debug Enable debugging output on STDERR
-q, --quiet Disable all output but errors
-n PARALLELISM, --parallelism PARALLELISM
parallelism factor
--no-mp disable subprocesses
Copyright (C) 2020 FireEye, Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at: [package root]/LICENSE.txt
Unless required by applicable law or agreed to in writing, software distributed under the License
is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and limitations under the License.
"""
import sys
import json
import logging
import os.path
import argparse
import multiprocessing
import multiprocessing.pool
import capa
import capa.main
import capa.render
logger = logging.getLogger("capa")
def get_capa_results(args):
"""
run capa against the file at the given path, using the given rules.
args is a tuple, containing:
rules (capa.rules.RuleSet): the rules to match
format (str): the name of the sample file format
path (str): the file system path to the sample to process
args is a tuple because i'm not quite sure how to unpack multiple arguments using `map`.
returns a dict with two required keys:
path (str): the file system path of the sample to process
status (str): either "error" or "ok"
when status == "error", then a human readable message is found in property "error".
when status == "ok", then the capa results are found in the property "ok".
the capa results are a dictionary with the following keys:
meta (dict): the meta analysis results
capabilities (dict): the matched capabilities and their result objects
"""
rules, format, path = args
logger.info("computing capa results for: %s", path)
try:
extractor = capa.main.get_extractor(path, format, disable_progress=True)
except capa.main.UnsupportedFormatError:
# i'm not 100% sure if multiprocessing will reliably raise exceptions across process boundaries.
# so instead, return an object with explicit success/failure status.
#
# if success, then status=ok, and results found in property "ok"
# if error, then status=error, and human readable message in property "error"
return {
"path": path,
"status": "error",
"error": "input file does not appear to be a PE file: %s" % path,
}
except capa.main.UnsupportedRuntimeError:
return {
"path": path,
"status": "error",
"error": "unsupported runtime or Python interpreter",
}
except Exception as e:
return {
"path": path,
"status": "error",
"error": "unexpected error: %s" % (e),
}
meta = capa.main.collect_metadata("", path, "", format, extractor)
capabilities, counts = capa.main.find_capabilities(rules, extractor, disable_progress=True)
meta["analysis"].update(counts)
return {
"path": path,
"status": "ok",
"ok": {
"meta": meta,
"capabilities": capabilities,
},
}
def main(argv=None):
if argv is None:
argv = sys.argv[1:]
parser = argparse.ArgumentParser(description="detect capabilities in programs.")
parser.add_argument("input", type=str, help="Path to directory of files to recursively analyze")
parser.add_argument(
"-r",
"--rules",
type=str,
default="(embedded rules)",
help="Path to rule file or directory, use embedded rules by default",
)
parser.add_argument("-d", "--debug", action="store_true", help="Enable debugging output on STDERR")
parser.add_argument("-q", "--quiet", action="store_true", help="Disable all output but errors")
parser.add_argument(
"-n", "--parallelism", type=int, default=multiprocessing.cpu_count(), help="parallelism factor"
)
parser.add_argument("--no-mp", action="store_true", help="disable subprocesses")
args = parser.parse_args(args=argv)
if args.quiet:
logging.basicConfig(level=logging.ERROR)
logging.getLogger().setLevel(logging.ERROR)
elif args.debug:
logging.basicConfig(level=logging.DEBUG)
logging.getLogger().setLevel(logging.DEBUG)
else:
logging.basicConfig(level=logging.INFO)
logging.getLogger().setLevel(logging.INFO)
# disable vivisect-related logging, it's verbose and not relevant for capa users
capa.main.set_vivisect_log_level(logging.CRITICAL)
# py2 doesn't know about cp65001, which is a variant of utf-8 on windows
# tqdm bails when trying to render the progress bar in this setup.
# because cp65001 is utf-8, we just map that codepage to the utf-8 codec.
# see #380 and: https://stackoverflow.com/a/3259271/87207
import codecs
codecs.register(lambda name: codecs.lookup("utf-8") if name == "cp65001" else None)
if args.rules == "(embedded rules)":
logger.info("using default embedded rules")
logger.debug("detected running from source")
args.rules = os.path.join(os.path.dirname(__file__), "..", "rules")
logger.debug("default rule path (source method): %s", args.rules)
else:
logger.info("using rules path: %s", args.rules)
try:
rules = capa.main.get_rules(args.rules)
rules = capa.rules.RuleSet(rules)
logger.info("successfully loaded %s rules", len(rules))
except (IOError, capa.rules.InvalidRule, capa.rules.InvalidRuleSet) as e:
logger.error("%s", str(e))
return -1
samples = []
for (base, directories, files) in os.walk(args.input):
for file in files:
samples.append(os.path.join(base, file))
def pmap(f, args, parallelism=multiprocessing.cpu_count()):
"""apply the given function f to the given args using subprocesses"""
return multiprocessing.Pool(parallelism).imap(f, args)
def tmap(f, args, parallelism=multiprocessing.cpu_count()):
"""apply the given function f to the given args using threads"""
return multiprocessing.pool.ThreadPool(parallelism).imap(f, args)
def map(f, args, parallelism=None):
"""apply the given function f to the given args in the current thread"""
for arg in args:
yield f(arg)
if args.no_mp:
if args.parallelism == 1:
logger.debug("using current thread mapper")
mapper = map
else:
logger.debug("using threading mapper")
mapper = tmap
else:
logger.debug("using process mapper")
mapper = pmap
results = {}
for result in mapper(
get_capa_results, [(rules, "pe", sample) for sample in samples], parallelism=args.parallelism
):
if result["status"] == "error":
logger.warning(result["error"])
elif result["status"] == "ok":
meta = result["ok"]["meta"]
capabilities = result["ok"]["capabilities"]
# our renderer expects to emit a json document for a single sample
# so we deserialize the json document, store it in a larger dict, and we'll subsequently re-encode.
results[result["path"]] = json.loads(capa.render.render_json(meta, rules, capabilities))
else:
raise ValueError("unexpected status: %s" % (result["status"]))
print(json.dumps(results))
logger.info("done.")
return 0
if __name__ == "__main__":
sys.exit(main())
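Both versions of the script above share the same parallelism trick: pick one of three interchangeable mapper functions and drive the work through it. A reduced sketch of that pattern with a toy workload:

```python
# reduced sketch of the pmap/tmap/map selection pattern used above
import multiprocessing
import multiprocessing.pool

def work(n):  # toy stand-in for get_capa_results
    return n * n

def pmap(f, xs, parallelism):        # subprocesses
    return multiprocessing.Pool(parallelism).imap(f, xs)

def tmap(f, xs, parallelism):        # threads
    return multiprocessing.pool.ThreadPool(parallelism).imap(f, xs)

def smap(f, xs, parallelism=None):   # current thread
    return (f(x) for x in xs)

if __name__ == "__main__":
    mapper = smap  # or tmap / pmap, mirroring --no-mp and --parallelism
    print(list(mapper(work, [1, 2, 3], parallelism=2)))
```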


@@ -1,760 +0,0 @@
"""
Convert capa rules to YARA rules (where this is possible)
usage: capa2yara.py [-h] [--private] [--version] [-v] [-vv] [-d] [-q] [--color {auto,always,never}] [-t TAG] rules
Capa to YARA rule converter
positional arguments:
rules Path to rules
optional arguments:
-h, --help show this help message and exit
--private, -p Create private rules
--version show program's version number and exit
-v, --verbose enable verbose result document (no effect with --json)
-vv, --vverbose enable very verbose result document (no effect with --json)
-d, --debug enable debugging output on STDERR
-q, --quiet disable all output but errors
--color {auto,always,never}
enable ANSI color codes in results, default: only during interactive session
-t TAG, --tag TAG filter on rule meta field values
Copyright (C) 2020, 2021 Arnim Rupp (@ruppde) and FireEye, Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at: [package root]/LICENSE.txt
Unless required by applicable law or agreed to in writing, software distributed under the License
is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and limitations under the License.
"""
import re
import sys
import string
import logging
import argparse
import datetime
import itertools
import capa.main
import capa.rules
import capa.engine
import capa.features
import capa.features.insn
from capa.features.common import BITNESS_X32, BITNESS_X64, String
logger = logging.getLogger("capa2yara")
today = str(datetime.date.today())
# create unique variable names for each rule in case somebody wants to move/copy stuff around later
var_names = ["".join(letters) for letters in itertools.product(string.ascii_lowercase, repeat=3)]
# these have to be the internal names used by capa.py, which are sometimes different from the ones written out in the rules, e.g. "2 or more" is "Some" and count is "Range"
unsupported = ["characteristic", "mnemonic", "offset", "subscope", "Range"]
# TODO shorten this list, possible stuff:
# - 2 or more strings: e.g.
# -- https://github.com/fireeye/capa-rules/blob/master/collection/file-managers/gather-direct-ftp-information.yml
# -- https://github.com/fireeye/capa-rules/blob/master/collection/browser/gather-firefox-profile-information.yml
# - count(string (1 rule: /executable/subfile/pe/contain-an-embedded-pe-file.yml)
# - count(match( could be done by creating the referenced rule a 2nd time with the condition, that it hits x times (only 1 rule: ./anti-analysis/anti-disasm/contain-anti-disasm-techniques.yml)
# - it would be technically possible to get the "basic blocks" working, but the rules contain mostly other non supported statements in there => not worth the effort.
# collect all converted rules to be able to check if we have needed sub rules for match:
converted_rules = []
count_incomplete = 0
default_tags = "CAPA "
# minimum number of rounds needed to be able to convert rules which depend on referenced rules several levels deep
min_rounds = 5
unsupported_capa_rules = open("unsupported_capa_rules.yml", "wb")
unsupported_capa_rules_names = open("unsupported_capa_rules.txt", "wb")
unsupported_capa_rules_list = []
condition_header = """
capa_pe_file and
"""
condition_rule = """
private rule capa_pe_file : CAPA {
meta:
description = "match in PE files. used by all further CAPA rules"
author = "Arnim Rupp"
condition:
uint16be(0) == 0x4d5a
or uint16be(0) == 0x558b
or uint16be(0) == 0x5649
}
"""
def check_feature(statement, rulename):
if statement in unsupported:
logger.info("unsupported: " + statement + " in rule: " + rulename)
return True
else:
return False
def get_rule_url(path):
path = re.sub(r"\.\.\/", "", path)
path = re.sub(r"capa-rules\/", "", path)
return "https://github.com/fireeye/capa-rules/blob/master/" + path
def convert_capa_number_to_yara_bytes(number):
if not number.startswith("0x"):
print("TODO: fix decimal")
sys.exit()
number = re.sub(r"^0[xX]", "", number)
logger.info("number ok: " + repr(number))
# include spaces every 2 hex
bytesv = re.sub(r"(..)", r"\1 ", number)
# reverse order
bytesl = bytesv.split(" ")
bytesl.reverse()
bytesv = " ".join(bytesl)
# fix spaces
bytesv = bytesv[1:] + " "
return bytesv
def convert_rule_name(rule_name):
# yara rule names: "Identifiers must follow the same lexical conventions of the C programming language, they can contain any alphanumeric character and the underscore character, but the first character cannot be a digit. Rule identifiers are case sensitive and cannot exceed 128 characters." so we replace any non-alphanumeric character with _
rule_name = re.sub(r"\W", "_", rule_name)
rule_name = "capa_" + rule_name
return rule_name
def convert_description(statement):
try:
desc = statement.description
if desc:
yara_desc = " // " + desc
logger.info("using desc: " + repr(yara_desc))
return yara_desc
except:
# no description
pass
return ""
def convert_rule(rule, rulename, cround, depth):
depth += 1
logger.info("recursion depth: " + str(depth))
global var_names
def do_statement(s_type, kid):
yara_strings = ""
yara_condition = ""
if check_feature(s_type, rulename):
return "BREAK", s_type
elif s_type == "string":
string = kid.value
logger.info("doing string: " + repr(string))
string = string.replace("\\", "\\\\")
string = string.replace("\n", "\\n")
string = string.replace("\t", "\\t")
var_name = "str_" + var_names.pop(0)
yara_strings += "\t$" + var_name + ' = "' + string + '" ascii wide' + convert_description(kid) + "\n"
yara_condition += "\t$" + var_name + " "
elif s_type == "api" or s_type == "import":
# TODO: is it possible in YARA to make a difference between api & import?
# https://github.com/fireeye/capa-rules/blob/master/doc/format.md#api
api = kid.value
logger.info("doing api: " + repr(api))
# e.g. kernel32.CreateNamedPipe => look for kernel32.dll and CreateNamedPipe
if "." in api:
dll, api = api.split(".")
# a regex with /i is needed because a plain string search for "CreateMutex" in imports() doesn't also match e.g. CreateMutexA
yara_condition += "\tpe.imports(/" + dll + "/i, /" + api + "/) "
else:
# e.g. - api: 'CallNextHookEx'
# (from user32.dll)
# even looking for an empty string in the dll regex doesn't work with pe.imports for some files (list below), so just do a string search
# yara_condition += '\tpe.imports(/.{0,30}/i, /' + api + '/) '
# 5fbbfeed28b258c42e0cfeb16718b31c, 2D3EDC218A90F03089CC01715A9F047F, 7EFF498DE13CC734262F87E6B3EF38AB, C91887D861D9BD4A5872249B641BC9F9, a70052c45e907820187c7e6bcdc7ecca, 0596C4EA5AA8DEF47F22C85D75AACA95
var_name = "api_" + var_names.pop(0)
# limit regex with word boundary \b but also search for appended A and W
# TODO: better use something like /(\\x00|\\x01|\\x02|\\x03|\\x04)' + api + '(A|W)?\\x00/ ???
yara_strings += "\t$" + var_name + " = /\\b" + api + "(A|W)?\\b/ ascii wide\n"
yara_condition += "\t$" + var_name + " "
elif s_type == "export":
export = kid.value
logger.info("doing export: " + repr(export))
yara_condition += '\tpe.exports("' + export + '") '
elif s_type == "section":
# https://github.com/fireeye/capa-rules/blob/master/doc/format.md#section
section = kid.value
logger.info("doing section: " + repr(section))
# e.g. - section: .rsrc
var_name_sec = var_names.pop(0)
# yeah, it would be better to make one loop out of multiple sections but we're in POC-land (and I guess it's not much of a performance hit, loop over short array?)
yara_condition += (
"\tfor any " + var_name_sec + " in pe.sections : ( " + var_name_sec + '.name == "' + section + '" ) '
)
elif s_type == "match":
# https://github.com/fireeye/capa-rules/blob/master/doc/format.md#matching-prior-rule-matches-and-namespaces
match = kid.value
logger.info("doing match: " + repr(match))
# e.g. - match: create process
# - match: host-interaction/file-system/write
match_rule_name = convert_rule_name(match)
if match.startswith(rulename + "/"):
logger.info("Depending on myself = basic block: " + match)
return "BREAK", "Depending on myself = basic block"
if match_rule_name in converted_rules:
yara_condition += "\t" + match_rule_name + "\n"
else:
# don't complain in the early rounds as there should be 3+ rounds (if all rules are converted)
if cround > min_rounds - 2:
logger.info("needed sub-rule not converted (yet, maybe in next round): " + repr(match))
return "BREAK", "needed sub-rule not converted"
else:
return "BREAK", "NOLOG"
elif s_type == "bytes":
bytesv = kid.get_value_str()
logger.info("doing bytes: " + repr(bytesv))
var_name = var_names.pop(0)
yara_strings += "\t$" + var_name + " = { " + bytesv + " }" + convert_description(kid) + "\n"
yara_condition += "\t$" + var_name + " "
elif s_type == "number":
number = kid.get_value_str()
logger.info("doing number: " + repr(number))
if len(number) < 10:
logger.info("too short for byte search (until I figure out how to do it properly)" + repr(number))
return "BREAK", "Number too short"
# there's just one rule which contains 0xFFFFFFFF but yara gives a warning if it's used
if number == "0xFFFFFFFF":
return "BREAK", "slow byte pattern for YARA search"
logger.info("number ok: " + repr(number))
number = convert_capa_number_to_yara_bytes(number)
logger.info("number ok: " + repr(number))
var_name = "num_" + var_names.pop(0)
yara_strings += "\t$" + var_name + " = { " + number + "}" + convert_description(kid) + "\n"
yara_condition += "$" + var_name + " "
elif s_type == "regex":
regex = kid.get_value_str()
logger.info("doing regex: " + repr(regex))
# change capa's /xxx/i to yara's /xxx/ nocase; count will be used later to decide whether to append 'nocase'
regex, count = re.subn(r"/i$", "/", regex)
# remove the / at the beginning and end
regex = regex[1:-1]
# all .* in capa's regexes look like they should be at most 100 chars, so use 1000 to speed up rules and prevent yara warnings about poor performance
regex = regex.replace(".*", ".{,1000}")
# strange: capa accepts regexes with unescaped / like - string: /com/exe4j/runtime/exe4jcontroller/i in capa-rules/compiler/exe4j/compiled-with-exe4j.yml, needs a fix for yara:
# would assume that get_value_str() gives the raw string
regex = re.sub(r"(?<!\\)/", r"\/", regex)
# capa uses python regex which accepts /reg(|.exe)/ but yara's regex engine doesn't => fix it
# /reg(|.exe)/ => /reg(.exe)?/
regex = re.sub(r"\(\|([^\)]+)\)", r"(\1)?", regex)
# change beginning of line to null byte, e.g. /^open => /\x00open (not word boundary because we're not looking for the beginning of a word in a text but usually a function name if there's ^ in a capa rule)
regex = re.sub(r"^\^", r"\\x00", regex)
# regex = re.sub(r"^\^", r"\\b", regex)
regex = "/" + regex + "/"
if count:
regex += " nocase"
# strange: if statement.name == "string", the string is as it is, if statement.name == "regex", the string has // around it, e.g. /regex/
var_name = "re_" + var_names.pop(0)
yara_strings += "\t" + "$" + var_name + " = " + regex + " ascii wide " + convert_description(kid) + "\n"
yara_condition += "\t" + "$" + var_name + " "
elif s_type == "Not" or s_type == "And" or s_type == "Or":
pass
else:
logger.info("something unhandled: " + repr(s_type))
sys.exit()
return yara_strings, yara_condition
############################## end def do_statement
yara_strings_list = []
yara_condition_list = []
rule_comment = ""
incomplete = 0
statement = rule.name
logger.info("doing statement: " + statement)
if check_feature(statement, rulename):
return "BREAK", statement, rule_comment, incomplete
if statement == "And" or statement == "Or":
desc = convert_description(rule)
if desc:
logger.info("description of bool statement: " + repr(desc))
yara_strings_list.append("\t" * depth + desc + "\n")
elif statement == "Not":
logger.info("one of those seldom nots: " + rule.name)
# check for nested statements
try:
kids = rule.children
num_kids = len(kids)
logger.info("kids: " + kids)
except:
logger.info("no kids in rule: " + rule.name)
try:
# maybe it's "Not" = only one child:
kid = rule.child
kids = [kid]
num_kids = 1
logger.info("kid: %s", kids)
except:
logger.info("no kid in rule: %s", rule.name)
# just a single statement without 'and' or 'or' before it in this rule
if "kids" not in locals().keys():
logger.info("no kids: " + rule.name)
yara_strings_sub, yara_condition_sub = do_statement(statement, rule)
if yara_strings_sub == "BREAK":
logger.info("Unknown feature at1: " + rule.name)
return "BREAK", yara_condition_sub, rule_comment, incomplete
yara_strings_list.append(yara_strings_sub)
yara_condition_list.append(yara_condition_sub)
else:
x = 0
logger.info("doing kids: %r - len: %s", kids, num_kids)
for kid in kids:
s_type = kid.name
logger.info("doing type: " + s_type + " kidnum: " + str(x))
if s_type == "Some":
cmin = kid.count
logger.info("Some type with mininum: " + str(cmin))
if not cmin:
logger.info("this is optional: which means, we can just ignore it")
x += 1
continue
elif statement == "Or":
logger.info("we're inside an OR, we can just ignore it")
x += 1
continue
else:
# this is "x or more". could be coded for strings TODO
return "BREAK", "Some aka x or more (TODO)", rule_comment, incomplete
if s_type == "And" or s_type == "Or" or s_type == "Not" and not kid.name == "Some":
logger.info("doing bool with recursion: " + repr(kid))
logger.info("kid coming: " + repr(kid.name))
# logger.info("grandchildren: " + repr(kid.children))
##### here we go into RECURSION ##################################################################################
yara_strings_sub, yara_condition_sub, rule_comment_sub, incomplete_sub = convert_rule(
kid, rulename, cround, depth
)
logger.info("coming out of this recursion, depth: " + repr(depth) + " s_type: " + s_type)
if yara_strings_sub == "BREAK":
logger.info(
"Unknown feature at2: " + rule.name + " - s_type: " + s_type + " - depth: " + str(depth)
)
# luckily this is only fatal if we're inside an 'And'; inside an 'Or' we're just missing some coverage
# only accept incomplete rules in rounds > 3 because the reason might be a reference to another rule not yet converted due to missing dependencies
logger.info("rule.name, depth, cround: " + rule.name + ", " + str(depth) + ", " + str(cround))
if rule.name == "Or" and depth == 1 and cround > min_rounds - 1:
logger.info(
"Unknown feature, just ignore this branch and keep the rest bec we're in Or (1): "
+ s_type
+ " - depth: "
+ str(depth)
)
# remove last 'or'
# yara_condition = re.sub(r'\sor $', ' ', yara_condition)
rule_comment += "This rule is incomplete because a branch inside an Or-statement had an unsupported feature and was skipped => coverage is reduced compared to the original capa rule. "
x += 1
incomplete = 1
continue
else:
return "BREAK", yara_condition_sub, rule_comment, incomplete
rule_comment += rule_comment_sub
yara_strings_list.append(yara_strings_sub)
yara_condition_list.append(yara_condition_sub)
incomplete = incomplete or incomplete_sub
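# note: And/Or/Not kids were already handled by the recursion above; do_statement() just passes
# for those types, so the empty-condition check further down skips a duplicate append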
yara_strings_sub, yara_condition_sub = do_statement(s_type, kid)
if yara_strings_sub == "BREAK":
logger.info("Unknown feature at3: " + rule.name)
logger.info("rule.name, depth, cround: " + rule.name + ", " + str(depth) + ", " + str(cround))
if rule.name == "Or" and depth == 1 and cround > min_rounds - 1:
logger.info(
"Unknown feature, just ignore this branch and keep the rest bec we're in Or (2): "
+ s_type
+ " - depth: "
+ str(depth)
)
rule_comment += "This rule is incomplete because a branch inside an Or-statement had an unsupported feature and was skipped => coverage is reduced compared to the original capa rule. "
x += 1
incomplete = 1
continue
else:
return "BREAK", yara_condition_sub, rule_comment, incomplete
# don't append And or Or if we got no condition back from this kid, e.g. from a 'match' on itself or an unsupported feature inside 'Or'
if not yara_condition_sub:
continue
yara_strings_list.append(yara_strings_sub)
yara_condition_list.append(yara_condition_sub)
x += 1
# this might happen if all conditions are inside "or" and none of them is supported
if not yara_condition_list:
return (
"BREAK",
'Multiple statements inside "- or:" where all unsupported, the last one was "' + s_type + '"',
rule_comment,
incomplete,
)
if statement == "And" or statement == "Or":
if yara_strings_list:
yara_strings = "".join(yara_strings_list)
else:
yara_strings = ""
yara_condition = " (\n\t\t" + ("\n\t\t" + statement.lower() + " ").join(yara_condition_list) + " \n\t) "
elif statement == "Some":
cmin = rule.count
logger.info("Some type with mininum at2: " + str(cmin))
if not cmin:
logger.info("this is optional: which means, we can just ignore it")
else:
# this is "x or more". could be coded for strings TODO
return "BREAK", "Some aka x or more (TODO)", rule_comment, incomplete
elif statement == "Not":
logger.info("Not")
yara_strings = "".join(yara_strings_list)
yara_condition = "not " + "".join(yara_condition_list) + " "
else:
if len(yara_condition_list) != 1:
logger.info("something wrong around here" + repr(yara_condition_list) + " - " + statement)
sys.exit()
# strings might be empty with only conditions
if yara_strings_list:
yara_strings = "\n\t" + yara_strings_list[0]
yara_condition = "\n\t" + yara_condition_list[0]
logger.info(
f"################# end of convert_rule() #strings: {len(yara_strings_list)} #conditions: {len(yara_condition_list)}"
)
logger.info(f"strings: {yara_strings} conditions: {yara_condition}")
return yara_strings, yara_condition, rule_comment, incomplete
def output_yar(yara):
print(yara + "\n")
def output_unsupported_capa_rules(yaml, capa_rulename, url, reason):
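# note: unsupported_capa_rules, unsupported_capa_rules_names and unsupported_capa_rules_list are globals
# defined elsewhere in this script; the file handles are opened in binary mode, hence the .encode() calls below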
if reason != "NOLOG":
if capa_rulename not in unsupported_capa_rules_list:
logger.info("unsupported: " + capa_rulename + " - reason: " + reason + " - url: " + url)
unsupported_capa_rules_list.append(capa_rulename)
unsupported_capa_rules.write(yaml.encode("utf-8") + b"\n")
unsupported_capa_rules.write(
(
"Reason: "
+ reason
+ " (there might be multiple unsupported things in this rule, this is the 1st one encountered)"
).encode("utf-8")
+ b"\n"
)
unsupported_capa_rules.write(url.encode("utf-8") + b"\n----------------------------------------------\n")
unsupported_capa_rules_names.write(capa_rulename.encode("utf-8") + b":")
unsupported_capa_rules_names.write(reason.encode("utf-8") + b":")
unsupported_capa_rules_names.write(url.encode("utf-8") + b"\n")
def convert_rules(rules, namespaces, cround):
for rule in rules.rules.values():
rule_name = convert_rule_name(rule.name)
if rule.meta.get("capa/subscope-rule", False):
logger.info("skipping sub scope rule capa: " + rule.name)
continue
if rule_name in converted_rules:
logger.info("skipping already converted rule capa: " + rule.name + " - yara rule: " + rule_name)
continue
logger.info("-------------------------- DOING RULE CAPA: " + rule.name + " - yara rule: " + rule_name)
if "capa/path" in rule.meta:
url = get_rule_url(rule.meta["capa/path"])
else:
url = "no url"
logger.info("URL: " + url)
logger.info("statements: " + repr(rule.statement))
# get_dependencies() returns the rule/namespace names this rule references via 'match:'
dependencies = rule.get_dependencies(namespaces)
if dependencies:
logger.info("Dependencies at4: " + rule.name + " - dep: " + str(dependencies))
for dep in dependencies:
logger.info("Dependencies at44: " + dep)
if not dep.startswith(rule.name + "/"):
logger.info("Depending on another rule: " + dep)
continue
yara_strings, yara_condition, rule_comment, incomplete = convert_rule(rule.statement, rule.name, cround, 0)
if yara_strings == "BREAK":
# only give up in the final extra round (cround == 9000)
if cround == 9000:
output_unsupported_capa_rules(rule.to_yaml(), rule.name, url, yara_condition)
logger.info("Unknown feature at5: " + rule.name)
else:
yara_meta = ""
metas = rule.meta
rule_tags = ""
for meta in metas:
meta_name = meta
# e.g. 'examples:' can be a list
seen_hashes = []
if isinstance(metas[meta], list):
if meta_name == "examples":
meta_name = "hash"
if meta_name == "att&ck":
meta_name = "attack"
for attack in list(metas[meta]):
logger.info("attack:" + attack)
# cut out tag in square brackets, e.g. Defense Evasion::Obfuscated Files or Information [T1027] => T1027
r = re.search(r"\[(T[^\]]*)", attack)
if r:
tag = r.group(1)
logger.info("attack tag:" + tag)
tag = re.sub(r"\W", "_", tag)
rule_tags += tag + " "
# also add a line "attack = ..." to yaras 'meta:' to keep the long description:
yara_meta += '\tattack = "' + attack + '"\n'
elif meta_name == "mbc":
for mbc in list(metas[meta]):
logger.info("mbc:" + mbc)
# cut out tag in square brackets, e.g. Cryptography::Encrypt Data::RC6 [C0027.010] => C0027.010
r = re.search(r"\[(.[^\]]*)", mbc)
if r:
tag = r.group(1)
logger.info("mbc tag:" + tag)
tag = re.sub(r"\W", "_", tag)
rule_tags += tag + " "
# also add a line "mbc = ..." to yaras 'meta:' to keep the long description:
yara_meta += '\tmbc = "' + mbc + '"\n'
for value in metas[meta]:
if meta_name == "hash":
value = re.sub(r"^([0-9a-f]{20,64}):0x[0-9a-f]{1,10}$", r"\1", value, flags=re.IGNORECASE)
# examples in capa can contain the same hash several times with different offsets, so check if it's already there:
# (keeping the offset might be interesting for some, but it breaks yara-ci checks of the final rules)
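# illustrative: "d41d8cd98f00b204e9800998ecf8427e:0x401000" is reduced to the bare hash "d41d8cd98f00b204e9800998ecf8427e"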
if value not in seen_hashes:
yara_meta += "\t" + meta_name + ' = "' + value + '"\n'
seen_hashes.append(value)
else:
# no list:
if meta == "capa/path":
url = get_rule_url(metas[meta])
meta_name = "reference"
meta_value = "This YARA rule converted from capa rule: " + url
else:
meta_value = metas[meta]
if meta_name == "name":
meta_name = "description"
meta_value += " (converted from capa rule)"
elif meta_name == "lib":
meta_value = str(meta_value)
elif meta_name == "capa/nursery":
meta_name = "capa_nursery"
meta_value = str(meta_value)
# for the rest of the maec/malware-category names:
meta_name = re.sub(r"\W", "_", meta_name)
if meta_name and meta_value:
yara_meta += "\t" + meta_name + ' = "' + meta_value + '"\n'
rule_name_bonus = ""
if rule_comment:
yara_meta += '\tcomment = "' + rule_comment + '"\n'
yara_meta += '\tdate = "' + today + '"\n'
yara_meta += '\tminimum_yara = "3.8"\n'
yara_meta += '\tlicense = "Apache-2.0 License"\n'
# check if there's some beef in condition:
tmp_yc = re.sub(r"(and|or|not)", "", yara_condition)
if re.search(r"\w", tmp_yc):
yara = ""
if make_priv:
yara = "private "
# put yara rule tags here:
rule_tags = default_tags + rule_tags
yara += "rule " + rule_name + " : " + rule_tags + " { \n meta: \n " + yara_meta + "\n"
if "$" in yara_strings:
yara += " strings: \n " + yara_strings + " \n"
yara += " condition:" + condition_header + yara_condition + "\n}"
# TODO: now the rule is finished and could be automatically checked with the capa-testfile(s) named in meta (doing it for all of them using yara-ci upload at the moment)
output_yar(yara)
converted_rules.append(rule_name)
global count_incomplete
count_incomplete += incomplete
else:
output_unsupported_capa_rules(rule.to_yaml(), rule.name, url, yara_condition)
def main(argv=None):
if argv is None:
argv = sys.argv[1:]
parser = argparse.ArgumentParser(description="Capa to YARA rule converter")
parser.add_argument("rules", type=str, help="Path to rules")
parser.add_argument("--private", "-p", action="store_true", help="Create private rules", default=False)
capa.main.install_common_args(parser, wanted={"tag"})
args = parser.parse_args(args=argv)
global make_priv
make_priv = args.private
if args.verbose:
level = logging.DEBUG
elif args.quiet:
level = logging.ERROR
else:
level = logging.INFO
logging.basicConfig(level=level)
logging.getLogger("capa2yara").setLevel(level)
try:
rules = capa.main.get_rules(args.rules, disable_progress=True)
namespaces = capa.rules.index_rules_by_namespace(list(rules))
rules = capa.rules.RuleSet(rules)
logger.info("successfully loaded %s rules (including subscope rules which will be ignored)", len(rules))
if args.tag:
rules = rules.filter_rules_by_meta(args.tag)
logger.debug("selected %s rules", len(rules))
for i, r in enumerate(rules.rules, 1):
logger.debug(" %d. %s", i, r)
except (IOError, capa.rules.InvalidRule, capa.rules.InvalidRuleSet) as e:
logger.error("%s", str(e))
return -1
output_yar(
"// Rules from FireEye's https://github.com/fireeye/capa-rules converted to YARA using https://github.com/fireeye/capa/blob/master/scripts/capa2yara.py by Arnim Rupp"
)
output_yar(
"// Beware: These are less rules than capa (because not all fit into YARA, stats at EOF) and is less precise because e.g. capas function scopes are applied to the whole file"
)
output_yar(
'// Beware: Some rules are incomplete because an optional branch was not supported by YARA. These rules are marked in a comment in meta: (search for "incomplete")'
)
output_yar("// Rule authors and license stay the same")
output_yar(
'// att&ck and MBC tags are put into YARA rule tags. All rules are tagged with "CAPA" for easy filtering'
)
output_yar("// The date = in meta: is the date of converting (there is no date in capa rules)")
output_yar("// Minimum YARA version is 3.8.0 plus PE module")
output_yar('\nimport "pe"')
output_yar(condition_rule)
# do several rounds of converting rules because rules referenced via 'match:' might not be converted in the 1st run
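# the loop stops once a full round converts no new rules and at least min_rounds rounds have run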
num_rules = 9999999
cround = 0
while num_rules != len(converted_rules) or cround < min_rounds:
cround += 1
logger.info("doing convert_rules(), round: " + str(cround))
num_rules = len(converted_rules)
convert_rules(rules, namespaces, cround)
# one last round to collect all unconverted rules
convert_rules(rules, namespaces, 9000)
stats = "\n// converted rules : " + str(len(converted_rules))
stats += "\n// among those are incomplete : " + str(count_incomplete)
stats += "\n// unconverted rules : " + str(len(unsupported_capa_rules_list)) + "\n"
logger.info(stats)
output_yar(stats)
return 0
if __name__ == "__main__":
sys.exit(main())

View File

@@ -7,17 +7,16 @@ import capa.main
import capa.rules
import capa.engine
import capa.features
import capa.render.json
import capa.render.utils as rutils
import capa.render.default
import capa.render.result_document
from capa.engine import *
from capa.render import convert_capabilities_to_result_document
# edit this to set the path of the file to analyze and the rule directory
RULES_PATH = "/tmp/capa/rules/"
# load rules from disk
rules = capa.rules.RuleSet(capa.main.get_rules(RULES_PATH, disable_progress=True))
rules = capa.main.get_rules(RULES_PATH, disable_progress=True)
rules = capa.rules.RuleSet(rules)
# == Render dictionary helpers
def render_meta(doc, ostream):
@@ -104,16 +103,28 @@ def render_attack(doc, ostream):
for rule in rutils.capability_rules(doc):
if not rule["meta"].get("att&ck"):
continue
for attack in rule["meta"]["att&ck"]:
tactics[attack["tactic"]].add((attack["technique"], attack.get("subtechnique"), attack["id"]))
tactic, _, rest = attack.partition("::")
if "::" in rest:
technique, _, rest = rest.partition("::")
subtechnique, _, id = rest.rpartition(" ")
tactics[tactic].add((technique, subtechnique, id))
else:
technique, _, id = rest.rpartition(" ")
tactics[tactic].add((technique, id))
for tactic, techniques in sorted(tactics.items()):
inner_rows = []
for (technique, subtechnique, id) in sorted(techniques):
if subtechnique is None:
for spec in sorted(techniques):
if len(spec) == 2:
technique, id = spec
inner_rows.append("%s %s" % (technique, id))
else:
elif len(spec) == 3:
technique, subtechnique, id = spec
inner_rows.append("%s::%s %s" % (technique, subtechnique, id))
else:
raise RuntimeError("unexpected ATT&CK spec format")
ostream["ATTCK"].setdefault(tactic.upper(), inner_rows)
@@ -138,16 +149,31 @@ def render_mbc(doc, ostream):
if not rule["meta"].get("mbc"):
continue
for mbc in rule["meta"]["mbc"]:
objectives[mbc["objective"]].add((mbc["behavior"], mbc.get("method"), mbc["id"]))
mbcs = rule["meta"]["mbc"]
if not isinstance(mbcs, list):
raise ValueError("invalid rule: MBC mapping is not a list")
for mbc in mbcs:
objective, _, rest = mbc.partition("::")
if "::" in rest:
behavior, _, rest = rest.partition("::")
method, _, id = rest.rpartition(" ")
objectives[objective].add((behavior, method, id))
else:
behavior, _, id = rest.rpartition(" ")
objectives[objective].add((behavior, id))
for objective, behaviors in sorted(objectives.items()):
inner_rows = []
for (behavior, method, id) in sorted(behaviors):
if method is None:
inner_rows.append("%s [%s]" % (behavior, id))
for spec in sorted(behaviors):
if len(spec) == 2:
behavior, id = spec
inner_rows.append("%s %s" % (behavior, id))
elif len(spec) == 3:
behavior, method, id = spec
inner_rows.append("%s::%s %s" % (behavior, method, id))
else:
inner_rows.append("%s::%s [%s]" % (behavior, method, id))
raise RuntimeError("unexpected MBC spec format")
ostream["MBC"].setdefault(objective.upper(), inner_rows)
@@ -165,24 +191,24 @@ def render_dictionary(doc):
def capa_details(file_path, output_format="dictionary"):
# extract features and find capabilities
extractor = capa.main.get_extractor(file_path, "auto", capa.main.BACKEND_VIV, [], False, disable_progress=True)
extractor = capa.main.get_extractor(file_path, "auto", disable_progress=True)
capabilities, counts = capa.main.find_capabilities(rules, extractor, disable_progress=True)
# collect metadata (used only to make rendering more complete)
meta = capa.main.collect_metadata("", file_path, RULES_PATH, extractor)
meta = capa.main.collect_metadata("", file_path, RULES_PATH, "auto", extractor)
meta["analysis"].update(counts)
capa_output = False
if output_format == "dictionary":
# ...as a python dictionary, simplified like the text table but as a dictionary
doc = capa.render.result_document.convert_capabilities_to_result_document(meta, rules, capabilities)
doc = convert_capabilities_to_result_document(meta, rules, capabilities)
capa_output = render_dictionary(doc)
elif output_format == "json":
# render results
# ...as json
capa_output = json.loads(capa.render.json.render(meta, rules, capabilities))
capa_output = json.loads(capa.render.render_json(meta, rules, capabilities))
elif output_format == "texttable":
# ...as human readable text table
capa_output = capa.render.default.render(meta, rules, capabilities)
capa_output = capa.render.render_default(meta, rules, capabilities)
return capa_output

View File

@@ -65,8 +65,6 @@ def main(argv=None):
return 0
else:
logger.info("rule requires reformatting (%s)", rule.name)
if "\r\n" in rule.definition:
logger.info("please make sure that the file uses LF (\\n) line endings only")
return 1
if args.in_place:

View File

@@ -9,7 +9,6 @@
# See the License for the specific language governing permissions and limitations under the License.
# Use a console with emojis support for a better experience
# Use venv to ensure that `python` calls the correct python version
# Stash uncommitted changes
MSG="pre-push-$(date +%s)";
@@ -26,8 +25,17 @@ restore_stashed() {
fi
}
python_3() {
case "$(uname -s)" in
CYGWIN*|MINGW32*|MSYS*|MINGW*)
py -3 -m $1 > $2 2>&1;;
*)
python3 -m $1 > $2 2>&1;;
esac
}
# Run isort and print state
python -m isort --profile black --length-sort --line-width 120 -c . > isort-output.log 2>&1;
python_3 'isort --profile black --length-sort --line-width 120 -c .' 'isort-output.log';
if [ $? == 0 ]; then
echo 'isort succeeded!! 💖';
else
@@ -38,7 +46,7 @@ else
fi
# Run black and print state
python -m black -l 120 --check . > black-output.log 2>&1;
python_3 'black -l 120 --check .' 'black-output.log';
if [ $? == 0 ]; then
echo 'black succeeded!! 💝';
else
@@ -62,7 +70,7 @@ fi
# Run tests except if first argument is no_tests
if [ "$1" != 'no_tests' ]; then
echo 'Running tests, please wait ⌛';
python -m pytest tests/ --maxfail=1;
pytest tests/ --maxfail=1;
if [ $? == 0 ]; then
echo 'Tests succeeded!! 🎉';
else

View File

@@ -1,74 +0,0 @@
#!/usr/bin/env python2
"""
Copyright (C) 2021 FireEye, Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at: [package root]/LICENSE.txt
Unless required by applicable law or agreed to in writing, software distributed under the License
is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and limitations under the License.
detect-elf-os
Attempt to detect the underlying OS that the given ELF file targets.
"""
import sys
import logging
import argparse
import contextlib
from typing import BinaryIO
import capa.helpers
import capa.features.extractors.elf
logger = logging.getLogger("capa.detect-elf-os")
def main(argv=None):
if capa.helpers.is_runtime_ida():
from capa.ida.helpers import IDAIO
f: BinaryIO = IDAIO()
else:
if argv is None:
argv = sys.argv[1:]
parser = argparse.ArgumentParser(description="Detect the underlying OS for the given ELF file")
parser.add_argument("sample", type=str, help="path to ELF file")
logging_group = parser.add_argument_group("logging arguments")
logging_group.add_argument("-d", "--debug", action="store_true", help="enable debugging output on STDERR")
logging_group.add_argument(
"-q", "--quiet", action="store_true", help="disable all status output except fatal errors"
)
args = parser.parse_args(args=argv)
if args.quiet:
logging.basicConfig(level=logging.WARNING)
logging.getLogger().setLevel(logging.WARNING)
elif args.debug:
logging.basicConfig(level=logging.DEBUG)
logging.getLogger().setLevel(logging.DEBUG)
else:
logging.basicConfig(level=logging.INFO)
logging.getLogger().setLevel(logging.INFO)
f = open(args.sample, "rb")
with contextlib.closing(f):
try:
print(capa.features.extractors.elf.detect_elf_os(f))
return 0
except capa.features.extractors.elf.CorruptElfFile as e:
logger.error("corrupt ELF file: %s", str(e.args[0]))
return -1
if __name__ == "__main__":
if capa.helpers.is_runtime_ida():
main()
else:
sys.exit(main())

View File

@@ -25,8 +25,7 @@ Derived from: https://github.com/fireeye/capa/blob/master/scripts/import-to-ida.
import os
import json
import binaryninja
import binaryninja.interaction
from binaryninja import *
def append_func_cmt(bv, va, cmt):
@@ -47,31 +46,31 @@ def append_func_cmt(bv, va, cmt):
def load_analysis(bv):
shortname = os.path.splitext(os.path.basename(bv.file.filename))[0]
dirname = os.path.dirname(bv.file.filename)
binaryninja.log_info(f"dirname: {dirname}\nshortname: {shortname}\n")
log_info(f"dirname: {dirname}\nshortname: {shortname}\n")
if os.access(os.path.join(dirname, shortname + ".js"), os.R_OK):
path = os.path.join(dirname, shortname + ".js")
elif os.access(os.path.join(dirname, shortname + ".json"), os.R_OK):
path = os.path.join(dirname, shortname + ".json")
else:
path = binaryninja.interaction.get_open_filename_input("capa report:", "JSON (*.js *.json);;All Files (*)")
path = interaction.get_open_filename_input("capa report:", "JSON (*.js *.json);;All Files (*)")
if not path or not os.access(path, os.R_OK):
binaryninja.log_error("Invalid filename.")
log_error("Invalid filename.")
return 0
binaryninja.log_info("Using capa file %s" % path)
log_info("Using capa file %s" % path)
with open(path, "rb") as f:
doc = json.loads(f.read().decode("utf-8"))
if "meta" not in doc or "rules" not in doc:
binaryninja.log_error("doesn't appear to be a capa report")
log_error("doesn't appear to be a capa report")
return -1
a = doc["meta"]["sample"]["md5"].lower()
md5 = binaryninja.Transform["MD5"]
rawhex = binaryninja.Transform["RawHex"]
md5 = Transform["MD5"]
rawhex = Transform["RawHex"]
b = rawhex.encode(md5.encode(bv.parent_view.read(bv.parent_view.start, bv.parent_view.end))).decode("utf-8")
if not a == b:
binaryninja.log_error("sample mismatch")
log_error("sample mismatch")
return -2
rows = []
@@ -97,7 +96,7 @@ def load_analysis(bv):
else:
cmt = "%s" % (name,)
binaryninja.log_info("0x%x: %s" % (va, cmt))
log_info("0x%x: %s" % (va, cmt))
try:
# message will look something like:
#
@@ -106,7 +105,7 @@ def load_analysis(bv):
except ValueError:
continue
binaryninja.log_info("ok")
log_info("ok")
binaryninja.PluginCommand.register("Load capa file", "Loads an analysis file from capa", load_analysis)
PluginCommand.register("Load capa file", "Loads an analysis file from capa", load_analysis)

View File

@@ -31,8 +31,10 @@ See the License for the specific language governing permissions and limitations
import json
import logging
import idc
import idautils
import ida_funcs
import ida_idaapi
import ida_kernwin
logger = logging.getLogger("capa")

Some files were not shown because too many files have changed in this diff.