Compare commits

...

922 Commits

Author SHA1 Message Date
Willi Ballenthin
ead8a836be Merge pull request #799 from mandiant/williballenthin-patch-1
v3.0.2
2021-09-28 10:25:10 -06:00
Willi Ballenthin
d471e66073 v3.0.2 2021-09-28 09:44:46 -06:00
Willi Ballenthin
4ddef1f60b changelog: v3.0.2 2021-09-28 09:41:12 -06:00
Moritz
7b9da896e8 Merge pull request #797 from mandiant/fix/pyinstaller-elf
PyInstaller fix: add hidden import and test
2021-09-28 17:37:36 +02:00
Moritz Raabe
41786f4ab8 add hidden import and test 2021-09-28 15:39:23 +02:00
Capa Bot
4661da729f Sync capa-testfiles submodule 2021-09-28 10:15:01 +00:00
Capa Bot
97dc40a585 Sync capa-testfiles submodule 2021-09-28 10:04:34 +00:00
Moritz
f2082f3f52 release v3.0.1 (#791)
* release v3.0.1

* Update CHANGELOG.md

Co-authored-by: Willi Ballenthin <willi.ballenthin@gmail.com>

Co-authored-by: Willi Ballenthin <willi.ballenthin@gmail.com>
2021-09-27 20:59:18 +02:00
Moritz
f87c8ced3f Merge pull request #792 from mandiant/dependabot/pip/types-psutil-5.8.8
build(deps-dev): bump types-psutil from 5.8.5 to 5.8.8
2021-09-27 16:54:49 +02:00
dependabot[bot]
f914eea8ae build(deps-dev): bump types-psutil from 5.8.5 to 5.8.8
Bumps [types-psutil](https://github.com/python/typeshed) from 5.8.5 to 5.8.8.
- [Release notes](https://github.com/python/typeshed/releases)
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-psutil
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-09-27 14:18:14 +00:00
Willi Ballenthin
b41d239301 Merge pull request #790 from mandiant/refactor/viv-utils-flirt
use viv-utils functions
2021-09-23 14:29:30 -06:00
Moritz Raabe
8bb1a1cb5a use viv-utils functions 2021-09-23 19:35:14 +02:00
Willi Ballenthin
2f61bc0b05 Merge pull request #789 from mandiant/dependabot/pip/tqdm-4.62.3
build(deps): bump tqdm from 4.62.2 to 4.62.3
2021-09-23 08:26:59 -06:00
dependabot[bot]
d22557947a build(deps): bump tqdm from 4.62.2 to 4.62.3
Bumps [tqdm](https://github.com/tqdm/tqdm) from 4.62.2 to 4.62.3.
- [Release notes](https://github.com/tqdm/tqdm/releases)
- [Commits](https://github.com/tqdm/tqdm/compare/v4.62.2...v4.62.3)

---
updated-dependencies:
- dependency-name: tqdm
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-09-23 14:24:28 +00:00
Moritz
3e44d07541 Merge pull request #786 from fireeye/williballenthin-patch-1
setup.py: bump viv dep to v1.0.5
2021-09-23 10:21:20 +02:00
Willi Ballenthin
f56b27e1c7 changelog 2021-09-22 21:39:36 -06:00
Willi Ballenthin
12075df3ba setup.py: bump viv dep to v1.0.5 2021-09-22 21:34:17 -06:00
Moritz
a8bb9620e2 Merge pull request #785 from fireeye/dependabot/pip/black-21.9b0
build(deps-dev): bump black from 21.8b0 to 21.9b0
2021-09-20 19:03:35 +02:00
dependabot[bot]
9ed4e21429 build(deps-dev): bump black from 21.8b0 to 21.9b0
Bumps [black](https://github.com/psf/black) from 21.8b0 to 21.9b0.
- [Release notes](https://github.com/psf/black/releases)
- [Changelog](https://github.com/psf/black/blob/main/CHANGES.md)
- [Commits](https://github.com/psf/black/commits)

---
updated-dependencies:
- dependency-name: black
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-09-20 14:02:13 +00:00
Capa Bot
5b293d675f Sync capa-testfiles submodule 2021-09-15 21:40:34 +00:00
Willi Ballenthin
5972d6576d Merge pull request #776 from fireeye/fix-775
v3.0.0
2021-09-14 21:37:37 -06:00
William Ballenthin
19ce514b5c main: handle malformed ELF files
closes #777
2021-09-14 21:35:47 -06:00
William Ballenthin
144ed80c56 readme: add reference to third blog post 2021-09-14 21:14:44 -06:00
William Ballenthin
4d34e56589 changelog: wording 2021-09-14 21:12:46 -06:00
William Ballenthin
9045770192 version: v3.0 2021-09-14 21:09:58 -06:00
William Ballenthin
4ea21d2a9c changelog: v3.0 2021-09-14 21:08:58 -06:00
Moritz
774a188d19 Merge pull request #774 from fireeye/no-flirt-elf
disable flirt matching on elf files
2021-09-14 18:59:20 +02:00
Capa Bot
bd5c125561 Sync capa rules submodule 2021-09-14 15:29:28 +00:00
Moritz
420feea0aa Update capa/main.py
Co-authored-by: Willi Ballenthin <willi.ballenthin@gmail.com>
2021-09-14 17:27:40 +02:00
Capa Bot
b298f547f9 Sync capa rules submodule 2021-09-14 15:26:51 +00:00
Capa Bot
a7fe76c336 Sync capa rules submodule 2021-09-14 15:25:46 +00:00
Willi Ballenthin
9f777ba152 readme: reference ELF support 2021-09-14 09:22:33 -06:00
Moritz Raabe
cc3b56ddcb disable flirt matching on elf files 2021-09-14 13:59:38 +02:00
Moritz Raabe
0c42942a88 black code style 2021-09-14 09:57:33 +02:00
William Ballenthin
0803c6f3fa elffile: extract global features 2021-09-13 13:51:19 -06:00
William Ballenthin
02d9d37c1e *: raise NotImplementedError not NotImplemented
> NotImplementedError and NotImplemented are not interchangeable, even though they have similar names and purposes. See NotImplemented for details on when to use it.

https://docs.python.org/3/library/exceptions.html#NotImplementedError
2021-09-13 13:47:30 -06:00
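As a minimal illustration of that distinction (not code from this repository): raise the `NotImplementedError` exception from unimplemented methods, and return the `NotImplemented` sentinel from binary special methods.

```
# illustrative only; not capa code
class FeatureExtractor:
    def extract_features(self):
        # an abstract/unimplemented method should raise the exception type
        raise NotImplementedError("subclasses must implement extract_features()")


class Version:
    def __init__(self, value: int):
        self.value = value

    def __eq__(self, other):
        if not isinstance(other, Version):
            # binary special methods return the sentinel so Python can try the reflected operation
            return NotImplemented
        return self.value == other.value
```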
William Ballenthin
c121e9219c elffile: fix mypy 2021-09-13 13:32:09 -06:00
Willi Ballenthin
297d9aaa32 Merge pull request #770 from fireeye/elffile-extractor
add light weight ElfFeatureExtractor
2021-09-13 13:27:00 -06:00
Willi Ballenthin
11644cbc31 Update capa/features/extractors/elffile.py 2021-09-13 13:20:52 -06:00
Moritz Raabe
4c6be15edc minor fixes 2021-09-13 21:15:31 +02:00
Willi Ballenthin
e1028e4dd8 Merge pull request #773 from fireeye/dependabot/pip/types-psutil-5.8.5
build(deps-dev): bump types-psutil from 5.8.2 to 5.8.5
2021-09-13 09:29:20 -06:00
dependabot[bot]
861ff1c91f build(deps-dev): bump types-psutil from 5.8.2 to 5.8.5
Bumps [types-psutil](https://github.com/python/typeshed) from 5.8.2 to 5.8.5.
- [Release notes](https://github.com/python/typeshed/releases)
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-psutil
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-09-13 14:03:31 +00:00
Moritz Raabe
80bb0b4aff init variable :/ 2021-09-10 21:29:59 +02:00
Moritz Raabe
06d238a9f9 add ElfFeatureExtractor 2021-09-10 20:38:27 +02:00
mike-hunhoff
71ce28d9e6 Merge pull request #768 from fireeye/explorer/fix/745
explorer: improve parsing algorithm for rule generator feature editor
2021-09-10 10:37:52 -06:00
Moritz
c48429e5c3 Merge pull request #766 from fireeye/ci/update-ubuntu-16
update to ubuntu-18.04
2021-09-10 10:28:31 +02:00
Willi Ballenthin
34e3f7bbaf Merge pull request #759 from fireeye/fix-755
extractors: extract global features as their own pseudo scope
2021-09-09 20:16:48 -06:00
Michael Hunhoff
db624460bc explorer: improve parsing algorithm for rule generator feature editor 2021-09-09 15:45:04 -06:00
Moritz Raabe
16c12f816b update to ubuntu-18.04 2021-09-09 16:45:11 +02:00
Capa Bot
ea6fed56a2 Sync capa rules submodule 2021-09-08 14:41:58 +00:00
Moritz
22f11f1a97 Merge pull request #763 from fireeye/dependabot/pip/types-psutil-5.8.2
build(deps-dev): bump types-psutil from 5.8.0 to 5.8.2
2021-09-06 23:03:20 +02:00
Moritz
7c21ccb8f9 Merge pull request #762 from fireeye/dependabot/pip/types-pyyaml-5.4.10
build(deps-dev): bump types-pyyaml from 5.4.8 to 5.4.10
2021-09-06 23:03:11 +02:00
Moritz
8f86b0eac2 Merge pull request #761 from fireeye/dependabot/pip/pytest-6.2.5
build(deps-dev): bump pytest from 6.2.4 to 6.2.5
2021-09-06 23:03:02 +02:00
Moritz
9c8fa32e5c Merge pull request #760 from fireeye/dependabot/pip/pefile-2021.9.3
build(deps): bump pefile from 2021.5.24 to 2021.9.3
2021-09-06 23:02:54 +02:00
dependabot[bot]
9d348c6da2 build(deps-dev): bump types-psutil from 5.8.0 to 5.8.2
Bumps [types-psutil](https://github.com/python/typeshed) from 5.8.0 to 5.8.2.
- [Release notes](https://github.com/python/typeshed/releases)
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-psutil
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-09-06 14:02:56 +00:00
dependabot[bot]
4dc87240f9 build(deps-dev): bump types-pyyaml from 5.4.8 to 5.4.10
Bumps [types-pyyaml](https://github.com/python/typeshed) from 5.4.8 to 5.4.10.
- [Release notes](https://github.com/python/typeshed/releases)
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-pyyaml
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-09-06 14:02:54 +00:00
dependabot[bot]
a60d11a763 build(deps-dev): bump pytest from 6.2.4 to 6.2.5
Bumps [pytest](https://github.com/pytest-dev/pytest) from 6.2.4 to 6.2.5.
- [Release notes](https://github.com/pytest-dev/pytest/releases)
- [Changelog](https://github.com/pytest-dev/pytest/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pytest-dev/pytest/compare/6.2.4...6.2.5)

---
updated-dependencies:
- dependency-name: pytest
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-09-06 14:02:52 +00:00
dependabot[bot]
391cc77996 build(deps): bump pefile from 2021.5.24 to 2021.9.3
Bumps [pefile](https://github.com/erocarrera/pefile) from 2021.5.24 to 2021.9.3.
- [Release notes](https://github.com/erocarrera/pefile/releases)
- [Commits](https://github.com/erocarrera/pefile/compare/v2021.5.24...v2021.9.3)

---
updated-dependencies:
- dependency-name: pefile
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-09-06 14:02:48 +00:00
William Ballenthin
7a3287fa25 extractors: smda: fix missing yield from 2021-09-04 16:55:37 -06:00
William Ballenthin
32244b2641 fixtures: fix extraction of global features 2021-09-04 16:12:51 -06:00
William Ballenthin
122fdc69e3 fixtures: name error 2021-09-04 16:00:49 -06:00
William Ballenthin
39e4e47763 pep8 2021-09-04 15:59:38 -06:00
William Ballenthin
2ea4dc9d7e tests: fixtures: extract global features at each scope 2021-09-04 15:58:32 -06:00
William Ballenthin
b2590e7c9a changelog 2021-09-04 15:55:28 -06:00
William Ballenthin
af6fe6baa0 extractors: extract global features as their own pseudo scope
this means they can be extracted separately in the freeze format.

closes #755
2021-09-04 15:53:05 -06:00
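A hedged sketch of what a "global" pseudo scope could look like in a serialized (freeze-style) document; the keys and feature strings below are purely illustrative and are not capa's actual freeze format.

```
# purely illustrative; not capa's real freeze format
frozen = {
    "global": ["os(linux)", "arch(amd64)", "format(elf)"],  # extracted once, independent of any address
    "file": ["export(main)", "string(/etc/passwd)"],
    "functions": {
        "0x401000": {
            "features": ["mnemonic(xor)"],
        },
    },
}
```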
Moritz
ce799dadbe Merge pull request #758 from fireeye/explorer/new-feature-support
adding support for arch, os, and format features
2021-09-02 20:39:08 +02:00
Michael Hunhoff
217e6f88d9 adding support for arch, os, and format features 2021-09-02 08:29:55 -06:00
Moritz
a363baffce Merge pull request #757 from davidt99/master
fix: use networkx import since nx is deprecated
2021-08-31 11:02:40 +02:00
Capa Bot
bbe47d81e9 Sync capa rules submodule 2021-08-30 16:30:52 +00:00
davidt99
a105b41647 fix: use networkx import since nx is deprecated 2021-08-30 19:11:30 +03:00
Capa Bot
fc8919adce Sync capa-testfiles submodule 2021-08-30 15:51:01 +00:00
Willi Ballenthin
f21877ae27 Merge pull request #750 from fireeye/dependabot/pip/types-pyyaml-5.4.8
build(deps-dev): bump types-pyyaml from 5.4.6 to 5.4.8
2021-08-30 08:46:01 -06:00
Willi Ballenthin
99e7967e22 Merge pull request #752 from fireeye/dependabot/pip/ruamel-yaml-0.17.16
build(deps): bump ruamel-yaml from 0.17.13 to 0.17.16
2021-08-30 08:45:47 -06:00
Willi Ballenthin
766fe9d845 Merge pull request #754 from fireeye/dependabot/pip/black-21.8b0
build(deps-dev): bump black from 21.7b0 to 21.8b0
2021-08-30 08:44:40 -06:00
dependabot[bot]
2c60faee26 build(deps-dev): bump black from 21.7b0 to 21.8b0
Bumps [black](https://github.com/psf/black) from 21.7b0 to 21.8b0.
- [Release notes](https://github.com/psf/black/releases)
- [Changelog](https://github.com/psf/black/blob/main/CHANGES.md)
- [Commits](https://github.com/psf/black/commits)

---
updated-dependencies:
- dependency-name: black
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-08-30 14:02:09 +00:00
dependabot[bot]
097f1d4695 build(deps): bump ruamel-yaml from 0.17.13 to 0.17.16
Bumps [ruamel-yaml](https://sourceforge.net/p/ruamel-yaml/code/ci/default/tree) from 0.17.13 to 0.17.16.

---
updated-dependencies:
- dependency-name: ruamel-yaml
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-08-30 14:02:03 +00:00
dependabot[bot]
a6efc3952f build(deps-dev): bump types-pyyaml from 5.4.6 to 5.4.8
Bumps [types-pyyaml](https://github.com/python/typeshed) from 5.4.6 to 5.4.8.
- [Release notes](https://github.com/python/typeshed/releases)
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-pyyaml
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-08-30 14:01:55 +00:00
Moritz
dadd76bd62 Merge pull request #747 from fireeye/feature-linter-pbar
linter enhancements
2021-08-30 12:18:19 +02:00
William Ballenthin
282c0c2655 lint: guide mypy typing to address CI issues 2021-08-27 13:00:40 -06:00
William Ballenthin
14f2391f49 mypy: add devtools ignore 2021-08-27 12:33:04 -06:00
William Ballenthin
b5860190e3 linter: invoke gc 2021-08-27 09:47:34 -06:00
William Ballenthin
d8ecb88867 changelog 2021-08-27 09:37:53 -06:00
William Ballenthin
f5b2efdc87 lint: reduce logging verbosity 2021-08-27 09:36:32 -06:00
William Ballenthin
fab26180cb lint: cache analysis results per path 2021-08-27 09:24:36 -06:00
William Ballenthin
3968d40bf4 linter: use pathlib.Path 2021-08-27 09:11:28 -06:00
William Ballenthin
cb2d1cde36 linter: add typing 2021-08-27 09:04:37 -06:00
William Ballenthin
da7a9b7232 linter: don't show noisy "need example" warnings in nursery 2021-08-27 08:42:46 -06:00
William Ballenthin
4f15225665 lint: handle calls to print within pbar 2021-08-27 08:34:02 -06:00
William Ballenthin
90708c123b linter: show progress bar 2021-08-27 08:21:09 -06:00
Capa Bot
384f467d4a Sync capa rules submodule 2021-08-26 23:53:30 +00:00
Capa Bot
37064f20d1 Sync capa rules submodule 2021-08-26 23:49:07 +00:00
Willi Ballenthin
9e579f9de3 tests: viv: reenable elf tests
revert 56f9e16a8b

viv is reverted to v1.0.3 so tests should pass again; ref #735
2021-08-26 16:50:57 -06:00
Willi Ballenthin
b2c688ef14 Merge pull request #746 from fireeye/revert-731-dependabot/pip/vivisect-1.0.4
Revert "build(deps): bump vivisect from 1.0.3 to 1.0.4"
2021-08-26 13:00:13 -06:00
Willi Ballenthin
9717acd988 Revert "build(deps): bump vivisect from 1.0.3 to 1.0.4" 2021-08-26 12:59:49 -06:00
mike-hunhoff
d06c5b12c2 Merge pull request #742 from fireeye/fix/740
explorer: small performance boost to rule generator search functionality
2021-08-26 10:35:20 -06:00
Capa Bot
e97a120602 Sync capa rules submodule 2021-08-26 15:12:41 +00:00
Capa Bot
5b806b08dd Sync capa rules submodule 2021-08-26 15:12:14 +00:00
Willi Ballenthin
fd5dfcc6d8 Merge pull request #743 from fireeye/feature-lint-ntoskrnl-ntdll-exceptions
fix linter ntoskrnl/ntdll exceptions
2021-08-26 08:56:45 -06:00
Michael Hunhoff
3979317b10 merging upstream 2021-08-26 08:26:41 -06:00
mike-hunhoff
8d2595a6db Update README.md 2021-08-26 08:20:38 -06:00
mike-hunhoff
3c2c452501 Merge pull request #741 from fireeye/doc/explorer-support
explorer: updating support documentation and runtime checks
2021-08-26 08:19:01 -06:00
Michael Hunhoff
af48f86e55 Merge branch 'doc/explorer-support' of github.com:fireeye/capa into doc/explorer-support 2021-08-26 08:16:25 -06:00
Michael Hunhoff
73957ea14e merging upstream 2021-08-26 08:15:25 -06:00
William Ballenthin
bb824e9167 Merge branch 'master' into feature-lint-ntoskrnl-ntdll-exceptions 2021-08-25 16:44:29 -06:00
William Ballenthin
b996e77606 setup: add psutil deps to [dev] 2021-08-25 16:43:46 -06:00
William Ballenthin
9a20bbd4e1 changelog 2021-08-25 16:39:57 -06:00
William Ballenthin
8195b7565f lint: hardcoded some exports of ntdll/ntoskrnl to reduce warning spam 2021-08-25 16:36:36 -06:00
William Ballenthin
0569f9b242 lint: show mod/imp names per rule
fix bug where the same mod/imp name pair was shown for all rules
2021-08-25 16:36:08 -06:00
Michael Hunhoff
8ffa8ea2c8 explorer: small performance boost to rule generator search functionality 2021-08-25 15:45:47 -06:00
Capa Bot
fd7cff6109 Sync capa rules submodule 2021-08-25 20:34:00 +00:00
mike-hunhoff
a3b292066a Update capa/ida/helpers.py
Co-authored-by: Moritz <mr-tz@users.noreply.github.com>
2021-08-25 13:03:45 -06:00
Michael Hunhoff
8f6d38468e explorer: updating support documentation and runtime checks 2021-08-25 12:46:34 -06:00
William Ballenthin
4af5cc66ba changelog 2021-08-24 17:53:56 -06:00
William Ballenthin
33c3c7e106 scripts: profile-memory: show vms, too 2021-08-24 17:26:45 -06:00
William Ballenthin
5c75f12b78 scripts: profile-memory: show incremental duration and RSS 2021-08-24 17:22:18 -06:00
William Ballenthin
1ae6638861 Merge branch 'master' of github.com:fireeye/capa 2021-08-24 17:05:59 -06:00
William Ballenthin
d8999471c5 scripts: add profile-memory
ref #736
2021-08-24 17:05:34 -06:00
Capa Bot
90c0de1a7f Sync capa rules submodule 2021-08-24 22:48:07 +00:00
Capa Bot
d13ea1cbbe Sync capa rules submodule 2021-08-24 22:34:04 +00:00
Willi Ballenthin
03cf28fccd Merge pull request #739 from fireeye/feature-737
rules: add substring feature
2021-08-24 16:33:17 -06:00
William Ballenthin
8e757d2099 show-features: print function addresses, too 2021-08-24 16:32:44 -06:00
William Ballenthin
2989732637 tests: fix fva of substring test function 2021-08-24 16:32:27 -06:00
William Ballenthin
db45068357 tests: fix tests for substring 2021-08-24 16:13:41 -06:00
Capa Bot
735aea86e0 Sync capa rules submodule 2021-08-24 18:41:34 +00:00
William Ballenthin
d8c8c6d2f3 lint: apply string lints to substrings, too 2021-08-24 11:52:28 -06:00
William Ballenthin
3b4cb47597 pep8 2021-08-24 11:45:48 -06:00
William Ballenthin
f55e758d47 tests: rules: demonstrate substring with description 2021-08-24 11:45:24 -06:00
William Ballenthin
c5a5e5600a changelog: substring 2021-08-24 11:37:07 -06:00
William Ballenthin
6989e8b8cf rules: add substring feature
closes #737
2021-08-24 11:35:01 -06:00
Capa Bot
7d2e550b84 Sync capa rules submodule 2021-08-24 16:35:30 +00:00
Capa Bot
7f17c45b69 Sync capa rules submodule 2021-08-24 16:06:15 +00:00
Willi Ballenthin
b0c86ab8db Merge pull request #738 from fireeye/revert-697-dependabot/pip/networkx-2.6.2
Revert "build(deps): bump networkx from 2.5.1 to 2.6.2"
2021-08-24 09:50:49 -06:00
Willi Ballenthin
4c0c2c75c6 Revert "build(deps): bump networkx from 2.5.1 to 2.6.2" 2021-08-24 09:50:39 -06:00
Capa Bot
1549b9b506 Sync capa rules submodule 2021-08-24 15:47:44 +00:00
Capa Bot
057eeb3629 Sync capa-testfiles submodule 2021-08-24 15:45:39 +00:00
Capa Bot
0dea4e8b7d Sync capa-testfiles submodule 2021-08-24 15:45:04 +00:00
Willi Ballenthin
d3573a565c Merge pull request #723 from fireeye/feature-701
os, arch, and format features
2021-08-24 08:56:29 -06:00
Willi Ballenthin
1275b49ebb Merge pull request #697 from fireeye/dependabot/pip/networkx-2.6.2
build(deps): bump networkx from 2.5.1 to 2.6.2
2021-08-24 08:56:17 -06:00
William Ballenthin
56f9e16a8b tests: viv: disable ELF tests due to #735 2021-08-23 17:51:28 -06:00
William Ballenthin
a4b0954532 viv: ignore mypy FP 2021-08-23 16:57:35 -06:00
William Ballenthin
fc73787849 extractors: file extractor arg consistency via kwargs 2021-08-23 16:42:16 -06:00
William Ballenthin
30a5493414 tests: smda: remove unused import 2021-08-23 16:13:01 -06:00
William Ballenthin
a729bdfbe6 elf: more clearly set first detected OS 2021-08-23 16:12:07 -06:00
William Ballenthin
dab88e482d elf: add more explanation about ei_osabi 2021-08-23 16:08:01 -06:00
William Ballenthin
6482f67a0c elf: document unused OS constants 2021-08-23 16:06:14 -06:00
William Ballenthin
a1bf95ec2c features: formatting of OS constants 2021-08-23 16:00:57 -06:00
William Ballenthin
6961fde327 Merge branch 'feature-701' of github.com:fireeye/capa into feature-701 2021-08-23 15:59:09 -06:00
William Ballenthin
c0fe0420fc changelog: tweak PR ref 2021-08-23 15:58:32 -06:00
Willi Ballenthin
2ba000a987 Merge branch 'master' into feature-701 2021-08-23 10:02:41 -06:00
Willi Ballenthin
a90e93e150 Update capa/main.py
Co-authored-by: Moritz <mr-tz@users.noreply.github.com>
2021-08-23 08:54:43 -06:00
Willi Ballenthin
b6ab12d3c1 Update capa/features/common.py
Co-authored-by: Moritz <mr-tz@users.noreply.github.com>
2021-08-23 08:54:22 -06:00
dependabot[bot]
71ccd87435 build(deps): bump networkx from 2.5.1 to 2.6.2
Bumps [networkx](https://github.com/networkx/networkx) from 2.5.1 to 2.6.2.
- [Release notes](https://github.com/networkx/networkx/releases)
- [Commits](https://github.com/networkx/networkx/compare/networkx-2.5.1...networkx-2.6.2)

---
updated-dependencies:
- dependency-name: networkx
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-08-23 14:24:19 +00:00
Willi Ballenthin
d07045f134 Merge pull request #731 from fireeye/dependabot/pip/vivisect-1.0.4
build(deps): bump vivisect from 1.0.3 to 1.0.4
2021-08-23 08:23:36 -06:00
dependabot[bot]
bede4a0aa1 build(deps): bump vivisect from 1.0.3 to 1.0.4
Bumps [vivisect](https://github.com/vivisect/vivisect) from 1.0.3 to 1.0.4.
- [Release notes](https://github.com/vivisect/vivisect/releases)
- [Changelog](https://github.com/vivisect/vivisect/blob/master/CHANGELOG.rst)
- [Commits](https://github.com/vivisect/vivisect/compare/v1.0.3...v1.0.4)

---
updated-dependencies:
- dependency-name: vivisect
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-08-23 14:22:20 +00:00
Willi Ballenthin
de1cff356a Merge pull request #733 from fireeye/dependabot/pip/tqdm-4.62.2
build(deps): bump tqdm from 4.62.1 to 4.62.2
2021-08-23 08:21:56 -06:00
Willi Ballenthin
1bee098fb6 Merge pull request #734 from fireeye/dependabot/pip/smda-1.6.2
build(deps): bump smda from 1.5.19 to 1.6.2
2021-08-23 08:21:00 -06:00
dependabot[bot]
e36e175e08 build(deps): bump smda from 1.5.19 to 1.6.2
Bumps [smda](https://github.com/danielplohmann/smda) from 1.5.19 to 1.6.2.
- [Release notes](https://github.com/danielplohmann/smda/releases)
- [Commits](https://github.com/danielplohmann/smda/commits)

---
updated-dependencies:
- dependency-name: smda
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-08-23 14:20:55 +00:00
Willi Ballenthin
9db45d2fcb Merge pull request #732 from fireeye/dependabot/pip/ruamel-yaml-0.17.13
build(deps): bump ruamel-yaml from 0.17.10 to 0.17.13
2021-08-23 08:20:07 -06:00
dependabot[bot]
558f5d0c8a build(deps): bump tqdm from 4.62.1 to 4.62.2
Bumps [tqdm](https://github.com/tqdm/tqdm) from 4.62.1 to 4.62.2.
- [Release notes](https://github.com/tqdm/tqdm/releases)
- [Commits](https://github.com/tqdm/tqdm/compare/v4.62.1...v4.62.2)

---
updated-dependencies:
- dependency-name: tqdm
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-08-23 14:02:36 +00:00
dependabot[bot]
e32a887091 build(deps): bump ruamel-yaml from 0.17.10 to 0.17.13
Bumps [ruamel-yaml](https://sourceforge.net/p/ruamel-yaml/code/ci/default/tree) from 0.17.10 to 0.17.13.

---
updated-dependencies:
- dependency-name: ruamel-yaml
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-08-23 14:02:32 +00:00
William Ballenthin
1b9a6c3c59 main: collect os/format/arch into metadata and render it 2021-08-20 16:50:40 -06:00
William Ballenthin
aef03b5592 elf: fix type error caught by mypy! 2021-08-20 15:00:06 -06:00
William Ballenthin
3eaeb533e9 Merge branch 'feature-701' of github.com:fireeye/capa into feature-701 2021-08-20 14:56:53 -06:00
William Ballenthin
04cc94a450 main: detect invalid arch and os 2021-08-20 14:56:26 -06:00
Willi Ballenthin
dae7be076d elf: fix alignment calculation
identified over [here](14f9c972b3 (r692441396))
2021-08-19 14:45:08 -06:00
Michael Hunhoff
3cb7573edb enable os/arch/format for capa explorer 2021-08-19 13:06:43 -06:00
William Ballenthin
a96a5de12d tests: re-enable SMDA ELF API tests 2021-08-19 08:02:17 -06:00
William Ballenthin
45b6c8dad3 setup: bump SMDA dep ver
closes #725
2021-08-19 08:01:17 -06:00
William Ballenthin
cf17ebac33 Merge branch 'feature-701' of github.com:fireeye/capa into feature-701 2021-08-18 16:33:21 -06:00
William Ballenthin
f0a34fdb5e merge 2021-08-18 16:32:58 -06:00
Willi Ballenthin
e124115e8d Merge branch 'master' into feature-701 2021-08-18 16:29:05 -06:00
William Ballenthin
249b8498d9 pefile: extract Arch 2021-08-18 16:27:41 -06:00
Capa Bot
15c69e3b7d Sync capa rules submodule 2021-08-18 21:15:01 +00:00
Capa Bot
98208b8eec Sync capa rules submodule 2021-08-18 20:50:11 +00:00
Capa Bot
0690e73320 Sync capa rules submodule 2021-08-18 20:38:06 +00:00
William Ballenthin
766ac7e500 Merge branch 'master' of github.com:fireeye/capa into feature-701 2021-08-18 14:33:17 -06:00
Capa Bot
51ac57c657 Sync capa-testfiles submodule 2021-08-18 20:33:02 +00:00
William Ballenthin
89603586da elf: add some doc 2021-08-18 14:23:48 -06:00
William Ballenthin
a35f5a1650 elf: detect FreeBSD via note 2021-08-18 14:21:50 -06:00
William Ballenthin
f1df29d27e tests: xfail smda ELF API
waiting for #725
2021-08-18 14:08:36 -06:00
Willi Ballenthin
08c24e2705 Merge pull request #729 from doomedraven/patch-1
update capa_as_library for capa v2
2021-08-18 08:32:41 -06:00
doomedraven
b1171864e3 black 2021-08-18 14:25:58 +02:00
doomedraven
5af59cecda update capa_as_library for capa v2 2021-08-18 14:23:36 +02:00
William Ballenthin
0c3a38b24b Merge branch 'feature-701' of github.com:fireeye/capa into feature-701 2021-08-17 09:07:25 -06:00
William Ballenthin
ac5d163aa0 pep8 2021-08-17 09:07:08 -06:00
Willi Ballenthin
dfe2dbea6d Merge pull request #722 from fireeye/fix-703
fix reporting of namespace matches
2021-08-17 09:05:19 -06:00
Willi Ballenthin
909ffc187b Merge branch 'master' into feature-701 2021-08-17 09:00:48 -06:00
William Ballenthin
92dfa99059 extractors: log unsupported os/arch/format but don't except 2021-08-17 08:57:42 -06:00
William Ballenthin
0065876702 extractors: ida: move os extraction to global module 2021-08-17 08:57:27 -06:00
Capa Bot
23bf28702f Sync capa rules submodule 2021-08-17 14:23:23 +00:00
Capa Bot
066873bd06 Sync capa rules submodule 2021-08-17 14:20:34 +00:00
William Ballenthin
98c00bd8b1 extractors: add missing global_.py files 2021-08-16 17:12:45 -06:00
William Ballenthin
fd47b03fac render: vverbose: don't render locations of global scope features 2021-08-16 17:12:28 -06:00
William Ballenthin
8e689c39f4 features: add Arch feature at global scope 2021-08-16 17:06:56 -06:00
William Ballenthin
738fa9150e fixtures: update tests to account for Format scope 2021-08-16 16:39:40 -06:00
William Ballenthin
5405e182c3 features: move Format features to file scope 2021-08-16 16:37:04 -06:00
William Ballenthin
ab1326f858 features: move OS and Format to their own features, not characteristics 2021-08-16 16:28:26 -06:00
William Ballenthin
f013815b2a features: rename legacy term arch to bitness
makes space for upcoming feature `arch: ` for things like i386/amd64/aarch64
2021-08-16 12:21:25 -06:00
Willi Ballenthin
5b24fc2543 Merge pull request #727 from fireeye/dependabot/pip/tqdm-4.62.1
build(deps): bump tqdm from 4.62.0 to 4.62.1
2021-08-16 08:22:44 -06:00
dependabot[bot]
b103e40ba8 build(deps): bump tqdm from 4.62.0 to 4.62.1
Bumps [tqdm](https://github.com/tqdm/tqdm) from 4.62.0 to 4.62.1.
- [Release notes](https://github.com/tqdm/tqdm/releases)
- [Commits](https://github.com/tqdm/tqdm/compare/v4.62.0...v4.62.1)

---
updated-dependencies:
- dependency-name: tqdm
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-08-16 14:02:16 +00:00
William Ballenthin
d5c9a5cf3c mypy: ignore ida_loader 2021-08-11 15:15:33 -06:00
William Ballenthin
30d7425b98 changelog 2021-08-11 15:10:07 -06:00
William Ballenthin
34819b289d pep8 2021-08-11 15:08:31 -06:00
William Ballenthin
71d9ebd859 extractors: ida: extract OS and file format characteristics at all scopes 2021-08-11 15:05:57 -06:00
William Ballenthin
c1910d47f0 move is_global_feature into capa.features.common 2021-08-11 15:02:10 -06:00
William Ballenthin
769d354792 detect-elf-os: remove extra print statement 2021-08-11 14:56:01 -06:00
William Ballenthin
a7678e779e extractors: smda: extract format and OS characteristics at all scopes 2021-08-11 14:52:36 -06:00
William Ballenthin
294f74b209 extractors: viv: extract format and OS at all scopes 2021-08-11 14:44:41 -06:00
William Ballenthin
fa8b4a4203 extractors: add common routine to extract OS from ELF 2021-08-11 14:43:13 -06:00
William Ballenthin
7205862dbf helpers: move ELF and IDA helpers out of script and into common module 2021-08-11 14:42:29 -06:00
William Ballenthin
37bc47c772 extractors: viv: extract from bytes not file path 2021-08-11 14:41:11 -06:00
William Ballenthin
baaa8ba2c1 scripts: add script to detect ELF OS
closes #724
2021-08-11 13:52:50 -06:00
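For background on that script's job, here is a hedged, standalone sketch that reads the `EI_OSABI` byte from an ELF header; it is not the script added in this commit (real detection also needs other hints, such as ELF notes, since many Linux binaries report the generic System V OSABI).

```
# standalone sketch; the OSABI table is an abbreviated subset of the ELF specification
import sys

OSABI = {0: "System V", 3: "Linux", 6: "Solaris", 9: "FreeBSD", 12: "OpenBSD"}

def detect_elf_osabi(path: str) -> str:
    with open(path, "rb") as f:
        e_ident = f.read(16)                 # ELF identification block
    if e_ident[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    return OSABI.get(e_ident[7], "unknown")  # e_ident[EI_OSABI] lives at offset 7

if __name__ == "__main__":
    print(detect_elf_osabi(sys.argv[1]))
```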
William Ballenthin
05f8e2445a fixtures: add tests demonstrating extraction of features from ELF files 2021-08-11 09:29:05 -06:00
William Ballenthin
753b003107 pep8 2021-08-11 09:23:41 -06:00
William Ballenthin
97092c91db tests: assert absence of the wrong os/format 2021-08-11 09:13:56 -06:00
William Ballenthin
20859d2796 extractors: pefile: extract OS and format 2021-08-11 09:11:29 -06:00
William Ballenthin
06f8943bc4 features: add format/pe and format/elf characteristics 2021-08-11 09:10:04 -06:00
William Ballenthin
e797a67e97 features: define CHARACTERISTIC_OS constants for ease of use 2021-08-11 09:08:37 -06:00
William Ballenthin
a1eca58d7a features: support characteristic(os/*) features 2021-08-11 08:40:40 -06:00
William Ballenthin
aefe97e09e rules: fix typos 2021-08-11 08:39:56 -06:00
Willi Ballenthin
59ae901f57 changelog 2021-08-11 08:21:38 -06:00
Capa Bot
811f484d3b Sync capa-testfiles submodule 2021-08-11 14:18:28 +00:00
Willi Ballenthin
ff08b99190 Merge pull request #700 from Adir-Shemesh/elf
Add initial elf files support
2021-08-11 08:18:02 -06:00
William Ballenthin
44dc4efe57 changelog 2021-08-10 13:14:00 -06:00
William Ballenthin
f7e2ac83f2 Merge branch 'master' of github.com:fireeye/capa into fix-703 2021-08-10 13:12:25 -06:00
William Ballenthin
7e60162d65 result_document: extract only the relevant namespace locations
closes #703
2021-08-10 13:06:04 -06:00
William Ballenthin
cd06ee4544 main: correctly extract namespaces matches across scopes
closes #721
2021-08-10 13:05:31 -06:00
Willi Ballenthin
6d0a777de6 pefile: handle case where no name is exported
closes #684
2021-08-09 20:28:25 -06:00
Capa Bot
dd7a48a00c Sync capa rules submodule 2021-08-09 19:52:39 +00:00
Willi Ballenthin
582dcef097 Merge pull request #718 from fireeye/dependabot/pip/types-tabulate-0.8.2
build(deps-dev): bump types-tabulate from 0.8.0 to 0.8.2
2021-08-09 09:55:27 -06:00
dependabot[bot]
b9501d7b77 build(deps-dev): bump types-tabulate from 0.8.0 to 0.8.2
Bumps [types-tabulate](https://github.com/python/typeshed) from 0.8.0 to 0.8.2.
- [Release notes](https://github.com/python/typeshed/releases)
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-tabulate
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-08-09 15:49:55 +00:00
Willi Ballenthin
a523fcf804 Merge pull request #717 from fireeye/dependabot/pip/types-termcolor-1.1.1
build(deps-dev): bump types-termcolor from 0.1.1 to 1.1.1
2021-08-09 09:49:16 -06:00
dependabot[bot]
cd07745af1 build(deps-dev): bump types-termcolor from 0.1.1 to 1.1.1
Bumps [types-termcolor](https://github.com/python/typeshed) from 0.1.1 to 1.1.1.
- [Release notes](https://github.com/python/typeshed/releases)
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-termcolor
  dependency-type: direct:development
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-08-09 15:41:25 +00:00
Willi Ballenthin
6c15881bfe Merge pull request #716 from fireeye/dependabot/pip/types-pyyaml-5.4.6
build(deps-dev): bump types-pyyaml from 5.4.3 to 5.4.6
2021-08-09 09:40:40 -06:00
dependabot[bot]
7ff358ee00 build(deps-dev): bump types-pyyaml from 5.4.3 to 5.4.6
Bumps [types-pyyaml](https://github.com/python/typeshed) from 5.4.3 to 5.4.6.
- [Release notes](https://github.com/python/typeshed/releases)
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-pyyaml
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-08-09 15:39:34 +00:00
Willi Ballenthin
79e5fad326 Merge pull request #715 from fireeye/dependabot/pip/types-colorama-0.4.3
build(deps-dev): bump types-colorama from 0.4.2 to 0.4.3
2021-08-09 09:38:48 -06:00
dependabot[bot]
93f5e966b2 build(deps-dev): bump types-colorama from 0.4.2 to 0.4.3
Bumps [types-colorama](https://github.com/python/typeshed) from 0.4.2 to 0.4.3.
- [Release notes](https://github.com/python/typeshed/releases)
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-colorama
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-08-09 14:02:06 +00:00
adirshemesh
d0e9c004a0 Add initial elf files support 2021-08-05 15:24:22 +03:00
Capa Bot
4814a47560 Sync capa rules submodule 2021-08-03 14:10:25 +00:00
Willi Ballenthin
3c81d91072 Merge pull request #696 from fireeye/dependabot/pip/tqdm-4.62.0
build(deps): bump tqdm from 4.61.2 to 4.62.0
2021-08-02 08:43:26 -06:00
Willi Ballenthin
de21f9a1f9 Merge pull request #695 from fireeye/dependabot/pip/types-tabulate-0.8.0
build(deps-dev): bump types-tabulate from 0.1.1 to 0.8.0
2021-08-02 08:43:12 -06:00
Willi Ballenthin
9f4dab89a5 Merge pull request #694 from fireeye/dependabot/pip/isort-5.9.3
build(deps-dev): bump isort from 5.9.2 to 5.9.3
2021-08-02 08:43:01 -06:00
dependabot[bot]
9def3df16f build(deps): bump tqdm from 4.61.2 to 4.62.0
Bumps [tqdm](https://github.com/tqdm/tqdm) from 4.61.2 to 4.62.0.
- [Release notes](https://github.com/tqdm/tqdm/releases)
- [Commits](https://github.com/tqdm/tqdm/compare/v4.61.2...v4.62.0)

---
updated-dependencies:
- dependency-name: tqdm
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-08-02 14:02:28 +00:00
dependabot[bot]
44dd56e344 build(deps-dev): bump types-tabulate from 0.1.1 to 0.8.0
Bumps [types-tabulate](https://github.com/python/typeshed) from 0.1.1 to 0.8.0.
- [Release notes](https://github.com/python/typeshed/releases)
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-tabulate
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-08-02 14:02:24 +00:00
dependabot[bot]
e630bd06db build(deps-dev): bump isort from 5.9.2 to 5.9.3
Bumps [isort](https://github.com/pycqa/isort) from 5.9.2 to 5.9.3.
- [Release notes](https://github.com/pycqa/isort/releases)
- [Changelog](https://github.com/PyCQA/isort/blob/main/CHANGELOG.md)
- [Commits](https://github.com/pycqa/isort/compare/5.9.2...5.9.3)

---
updated-dependencies:
- dependency-name: isort
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-08-02 14:02:22 +00:00
Capa Bot
1fbd4937bc Sync capa rules submodule 2021-07-29 19:33:40 +00:00
Capa Bot
cc54bdddc6 Sync capa rules submodule 2021-07-29 18:44:43 +00:00
Capa Bot
f750455519 Sync capa rules submodule 2021-07-29 18:42:08 +00:00
mike-hunhoff
3d383bcc57 Merge pull request #692 from fireeye/explorer/enhance-limit-features-to-selection
add option to limit features to currently selected disassembly address
2021-07-29 09:20:36 -06:00
Michael Hunhoff
cdab6eaa5d updating CHANGELOG 2021-07-28 13:50:50 -06:00
Michael Hunhoff
7937cb6ea3 updating API calls 2021-07-28 13:44:06 -06:00
Michael Hunhoff
57f5236c9b adding option to filter features by current disassembly address 2021-07-28 13:38:36 -06:00
mike-hunhoff
f7bdd0e7f6 Merge pull request #691 from fireeye/fix/690
enforce max column width Features and Editor panes
2021-07-28 12:10:02 -06:00
Michael Hunhoff
a108e385fe updating changelog 2021-07-28 09:07:22 -06:00
Michael Hunhoff
6549c9878b merge upstream 2021-07-28 09:06:30 -06:00
Michael Hunhoff
a3a760e1e6 limit column sizes for Features and Editor panes 2021-07-28 08:53:12 -06:00
mike-hunhoff
576b9be78c Merge pull request #689 from fireeye/fix/544
add option to select specified byte count for bytes feature
2021-07-27 16:12:26 -06:00
Michael Hunhoff
528548eb8c add option to select specified byte count for bytes feature 2021-07-27 15:18:13 -06:00
mike-hunhoff
9a2415e34e Merge pull request #688 from fireeye/fix/514
update IDA extractor to use non-canon mnemonics
2021-07-27 14:56:14 -06:00
Michael Hunhoff
c9b7162a5f update IDA extractor to use non-canon mnemonics 2021-07-27 13:34:52 -06:00
mike-hunhoff
7fd9ab5e88 Merge pull request #687 from fireeye/fix/655
remove duplicate check when saving file
2021-07-27 10:49:23 -06:00
Michael Hunhoff
b44edbd90e remove duplicate check when saving file 2021-07-27 09:50:25 -06:00
mike-hunhoff
a1b3703a0d Merge pull request #686 from fireeye/fix/531
add additional filter logic when displaying capa matches by function
2021-07-27 08:48:35 -06:00
Michael Hunhoff
874dffc13f add additional filter logic when displaying capa matches by function 2021-07-26 17:37:35 -06:00
Capa Bot
8b572dc63f Sync capa rules submodule 2021-07-26 21:48:37 +00:00
Willi Ballenthin
659b29a62d Merge pull request #685 from fireeye/dependabot/pip/smda-1.5.19
build(deps): bump smda from 1.5.18 to 1.5.19
2021-07-26 09:22:22 -06:00
dependabot[bot]
7a558898e1 build(deps): bump smda from 1.5.18 to 1.5.19
Bumps [smda](https://github.com/danielplohmann/smda) from 1.5.18 to 1.5.19.
- [Release notes](https://github.com/danielplohmann/smda/releases)
- [Commits](https://github.com/danielplohmann/smda/commits)

---
updated-dependencies:
- dependency-name: smda
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-07-26 14:02:46 +00:00
Capa Bot
7dee553558 Sync capa rules submodule 2021-07-23 16:39:19 +00:00
Capa Bot
9f6f18466a Sync capa rules submodule 2021-07-22 06:56:23 +00:00
Capa Bot
ef003366da Sync capa-testfiles submodule 2021-07-21 07:12:59 +00:00
Moritz
aaaadc2a47 Update installation.md (#679)
* Update installation.md

* Update doc/installation.md

Co-authored-by: Willi Ballenthin <willi.ballenthin@gmail.com>

Co-authored-by: Willi Ballenthin <willi.ballenthin@gmail.com>
2021-07-20 20:01:10 +02:00
Willi Ballenthin
f94287c9ae Merge pull request #678 from fireeye/mr-tz-patch-1
Update README.md
2021-07-19 14:31:37 -06:00
Moritz
c56bfdca67 Update README.md 2021-07-19 21:10:20 +02:00
Willi Ballenthin
77a86e33bd Merge pull request #671 from Ana06/release2
Release capa v2.0 🎉
2021-07-19 10:32:34 -06:00
Willi Ballenthin
4f44b5a60a Merge pull request #677 from fireeye/dependabot/pip/black-21.7b0
build(deps-dev): bump black from 21.6b0 to 21.7b0
2021-07-19 10:01:45 -06:00
dependabot[bot]
9361b3deb1 build(deps-dev): bump black from 21.6b0 to 21.7b0
Bumps [black](https://github.com/psf/black) from 21.6b0 to 21.7b0.
- [Release notes](https://github.com/psf/black/releases)
- [Changelog](https://github.com/psf/black/blob/main/CHANGES.md)
- [Commits](https://github.com/psf/black/commits)

---
updated-dependencies:
- dependency-name: black
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-07-19 14:02:42 +00:00
Ana Maria Martinez Gomez
9a0ec51f00 changelog: update date and number of rules 2021-07-16 17:37:03 +02:00
Ana Maria Martinez Gomez
5979892d29 version: capa v2.0
Prepare capa/version for capa 2.0 release.
2021-07-16 17:34:14 +02:00
Ana Maria Martinez Gomez
96f2536c34 changelog: capa v2.0
Prepare changelog for capa v2.0 release.
2021-07-16 17:34:13 +02:00
Capa Bot
52a3d35987 Sync capa rules submodule 2021-07-13 18:39:44 +00:00
Capa Bot
de4827e8fa Sync capa rules submodule 2021-07-13 18:34:24 +00:00
Capa Bot
b6d5409691 Sync capa rules submodule 2021-07-13 18:33:06 +00:00
Capa Bot
818f532ca9 Sync capa rules submodule 2021-07-13 18:31:57 +00:00
Capa Bot
895b548f34 Sync capa rules submodule 2021-07-13 03:09:36 +00:00
Willi Ballenthin
d9f1d0918f Merge pull request #675 from fireeye/dependabot/pip/isort-5.9.2
build(deps-dev): bump isort from 5.9.1 to 5.9.2
2021-07-12 10:33:16 -06:00
Willi Ballenthin
35abdb8ecf Merge pull request #674 from fireeye/dependabot/pip/tqdm-4.61.2
build(deps): bump tqdm from 4.61.1 to 4.61.2
2021-07-12 10:32:38 -06:00
dependabot[bot]
e77bbd68cf build(deps-dev): bump isort from 5.9.1 to 5.9.2
Bumps [isort](https://github.com/pycqa/isort) from 5.9.1 to 5.9.2.
- [Release notes](https://github.com/pycqa/isort/releases)
- [Changelog](https://github.com/PyCQA/isort/blob/main/CHANGELOG.md)
- [Commits](https://github.com/pycqa/isort/compare/5.9.1...5.9.2)

---
updated-dependencies:
- dependency-name: isort
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-07-12 14:02:19 +00:00
dependabot[bot]
4c73e5df3c build(deps): bump tqdm from 4.61.1 to 4.61.2
Bumps [tqdm](https://github.com/tqdm/tqdm) from 4.61.1 to 4.61.2.
- [Release notes](https://github.com/tqdm/tqdm/releases)
- [Commits](https://github.com/tqdm/tqdm/compare/v4.61.1...v4.61.2)

---
updated-dependencies:
- dependency-name: tqdm
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-07-12 14:02:13 +00:00
Moritz
933789d02b Merge pull request #670 from fireeye/better-sig-loading
Better sig loading
2021-06-30 18:56:39 +02:00
Moritz Raabe
e88bb4814e update readme 2021-06-30 10:10:44 +02:00
Moritz
17b7694170 Merge pull request #666 from fireeye/fix-656
main: load signatures in order of their basename
2021-06-30 10:04:35 +02:00
Moritz Raabe
f191c4f145 wrap sig loading 2021-06-30 10:04:11 +02:00
Moritz Raabe
6fc2037f45 update sig file names 2021-06-30 08:54:37 +02:00
Moritz
b5f23e7baf Merge pull request #660 from fireeye/ci/test-scripts
test scripts and fix show-features
2021-06-29 21:46:43 +02:00
Capa Bot
f7e4273523 Sync capa rules submodule 2021-06-29 19:22:47 +00:00
Moritz Raabe
6860b9a040 address Willi's feedback 2021-06-29 21:16:31 +02:00
Moritz Raabe
5c8a4aafd7 test scripts and fix show-features 2021-06-29 21:16:31 +02:00
Moritz Raabe
02658d6962 do not process non-pe even with --format pe 2021-06-29 21:16:31 +02:00
William Ballenthin
b2b94e6a8e main: load signatures in order of their basename
closes #656
2021-06-29 10:52:07 -06:00
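A minimal sketch of the ordering described above, assuming only that signature paths are sorted by file name rather than by full path (the helper name here is hypothetical):

```
import os

def ordered_signature_paths(paths):
    # hypothetical helper: order by basename so results don't depend on directory layout
    return sorted(paths, key=os.path.basename)

# e.g. sorted by basename:  ["b/1_flirt.sig", "a/2_library.sig"]
#      sorted by full path: ["a/2_library.sig", "b/1_flirt.sig"]
```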
Moritz
65b3c046a3 Merge pull request #661 from fireeye/ida/extract-api-flirt
ida extract library funcs identified via flirt
2021-06-29 09:23:21 +02:00
Moritz Raabe
04b5949a05 address Mike's feedback 2021-06-29 08:57:43 +02:00
Moritz Raabe
18c87e4e55 ida extract library funcs identified via flirt 2021-06-29 08:49:48 +02:00
Willi Ballenthin
b84cc3128d Merge pull request #664 from fireeye/verify-pe-format
do not process non-pe even with --format pe
2021-06-28 12:09:54 -06:00
Willi Ballenthin
f83ef470cb Merge pull request #662 from fireeye/dependabot/pip/mypy-0.910
build(deps-dev): bump mypy from 0.902 to 0.910
2021-06-28 11:54:28 -06:00
Willi Ballenthin
2928dd279c Merge pull request #663 from fireeye/dependabot/pip/ruamel-yaml-0.17.10
build(deps): bump ruamel-yaml from 0.17.9 to 0.17.10
2021-06-28 11:54:15 -06:00
Moritz Raabe
f96d3fd8ba do not process non-pe even with --format pe 2021-06-28 18:21:01 +02:00
dependabot[bot]
d094272e4a build(deps): bump ruamel-yaml from 0.17.9 to 0.17.10
Bumps [ruamel-yaml](https://sourceforge.net/p/ruamel-yaml/code/ci/default/tree) from 0.17.9 to 0.17.10.

---
updated-dependencies:
- dependency-name: ruamel-yaml
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-06-28 14:02:54 +00:00
dependabot[bot]
7eeab35ae8 build(deps-dev): bump mypy from 0.902 to 0.910
Bumps [mypy](https://github.com/python/mypy) from 0.902 to 0.910.
- [Release notes](https://github.com/python/mypy/releases)
- [Commits](https://github.com/python/mypy/compare/v0.902...v0.910)

---
updated-dependencies:
- dependency-name: mypy
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-06-28 14:02:45 +00:00
Moritz
4e7b490bc3 Merge pull request #639 from fireeye/fix-630
more intuitive signature loading
2021-06-28 12:53:58 +02:00
Moritz Raabe
4ca9e168fe Merge branch 'master' into fix-630 2021-06-28 11:32:27 +02:00
Ana María Martínez Gómez
e579edecb4 Merge pull request #548 from Ana06/explorer-analyze
explorer: add analyze option
2021-06-24 12:22:24 +02:00
Capa Bot
58aa3e33bf Sync capa rules submodule 2021-06-24 00:33:45 +00:00
Ana Maria Martinez Gomez
0685d36220 explorer: use bitmask + enum for analyze option 2021-06-23 11:23:27 +02:00
Ana Maria Martinez Gomez
2158be0a2e explorer: add analyze option
I would like to load capa explorer with a script and have it run the
analysis without needing extra clicks. Introduce an analyze option for this.

Loading capa explorer from the UI or with Alt+F5 behaves as before, as does
the following command:
```
ida_loader.load_and_run_plugin("capa_explorer", 0)
```
But the following command automatically runs the analysis without extra
clicks:
```
ida_loader.load_and_run_plugin("capa_explorer", 1)
```

Example of where I am using this:
https://github.com/Ana06/idapython/blob/master/idapythonrc.py#L22
2021-06-23 11:23:27 +02:00
Moritz
7922d08fd4 Merge pull request #617 from fireeye/changelog-reorg
changelog: add breaking change section and reorg
2021-06-23 07:47:53 +02:00
Moritz Raabe
44b47eb39c update release checklist 2021-06-23 07:44:08 +02:00
Moritz Raabe
45c4b4019a move breaking changes to top 2021-06-23 07:44:05 +02:00
Moritz Raabe
831dc577f4 add breaking change section and reorg 2021-06-23 07:40:33 +02:00
Willi Ballenthin
229d5ca549 Merge pull request #654 from fireeye/fix/653
resolve circular import failure
2021-06-22 17:47:06 -06:00
Michael Hunhoff
2872db8b23 resolve circular import failure 2021-06-22 16:12:07 -06:00
Moritz
7152525dbc Merge pull request #648 from fireeye/mr-tz-patch-1
update dependabot actor name
2021-06-22 09:07:12 +02:00
Willi Ballenthin
d7d7aa76c8 Merge pull request #651 from fireeye/dependabot/pip/mypy-0.902
build(deps-dev): bump mypy from 0.901 to 0.902
2021-06-21 10:49:53 -06:00
dependabot[bot]
565bb96c9e build(deps-dev): bump mypy from 0.901 to 0.902
Bumps [mypy](https://github.com/python/mypy) from 0.901 to 0.902.
- [Release notes](https://github.com/python/mypy/releases)
- [Commits](https://github.com/python/mypy/compare/v0.901...v0.902)

---
updated-dependencies:
- dependency-name: mypy
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-06-21 16:47:28 +00:00
Willi Ballenthin
9fd6098e1e Merge pull request #650 from fireeye/dependabot/pip/types-backports-0.1.3
build(deps-dev): bump types-backports from 0.1.2 to 0.1.3
2021-06-21 10:47:10 -06:00
Willi Ballenthin
0c0929fd94 Merge pull request #649 from fireeye/dependabot/pip/isort-5.9.1
build(deps-dev): bump isort from 5.8.0 to 5.9.1
2021-06-21 10:46:59 -06:00
Willi Ballenthin
1343baa250 Merge pull request #646 from fireeye/dependabot/pip/types-pyyaml-5.4.3
build(deps-dev): bump types-pyyaml from 0.1.6 to 5.4.3
2021-06-21 10:46:43 -06:00
dependabot[bot]
6977477a39 build(deps-dev): bump types-backports from 0.1.2 to 0.1.3
Bumps [types-backports](https://github.com/python/typeshed) from 0.1.2 to 0.1.3.
- [Release notes](https://github.com/python/typeshed/releases)
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-backports
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-06-21 16:44:34 +00:00
dependabot[bot]
86b3438a2d build(deps-dev): bump isort from 5.8.0 to 5.9.1
Bumps [isort](https://github.com/pycqa/isort) from 5.8.0 to 5.9.1.
- [Release notes](https://github.com/pycqa/isort/releases)
- [Changelog](https://github.com/PyCQA/isort/blob/main/CHANGELOG.md)
- [Commits](https://github.com/pycqa/isort/compare/5.8.0...5.9.1)

---
updated-dependencies:
- dependency-name: isort
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-06-21 16:44:34 +00:00
dependabot[bot]
a00c3b6d32 build(deps-dev): bump types-pyyaml from 0.1.6 to 5.4.3
Bumps [types-pyyaml](https://github.com/python/typeshed) from 0.1.6 to 5.4.3.
- [Release notes](https://github.com/python/typeshed/releases)
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-pyyaml
  dependency-type: direct:development
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-06-21 16:44:34 +00:00
Willi Ballenthin
544ffdea8f Merge pull request #647 from fireeye/dependabot/pip/types-tabulate-0.1.1
build(deps-dev): bump types-tabulate from 0.1.0 to 0.1.1
2021-06-21 10:43:55 -06:00
dependabot[bot]
e4b89f1d7b build(deps-dev): bump types-tabulate from 0.1.0 to 0.1.1
Bumps [types-tabulate](https://github.com/python/typeshed) from 0.1.0 to 0.1.1.
- [Release notes](https://github.com/python/typeshed/releases)
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-tabulate
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-06-21 16:43:23 +00:00
Willi Ballenthin
73dd49ed21 Merge pull request #645 from fireeye/dependabot/pip/viv-utils-flirt--0.6.5
build(deps): bump viv-utils[flirt] from 0.6.4 to 0.6.5
2021-06-21 10:43:11 -06:00
Willi Ballenthin
0511eec67c Merge pull request #644 from fireeye/dependabot/pip/types-termcolor-0.1.1
build(deps-dev): bump types-termcolor from 0.1.0 to 0.1.1
2021-06-21 10:42:56 -06:00
Willi Ballenthin
c7e2ca0b1a Merge pull request #643 from fireeye/dependabot/pip/types-colorama-0.4.2
build(deps-dev): bump types-colorama from 0.4.0 to 0.4.2
2021-06-21 10:42:46 -06:00
Capa Bot
03b15ce289 Sync capa rules submodule 2021-06-21 14:30:00 +00:00
Moritz
2d7ac73caa update dependabot actor name 2021-06-21 16:24:43 +02:00
dependabot[bot]
7fe53073fe build(deps): bump viv-utils[flirt] from 0.6.4 to 0.6.5
Bumps [viv-utils[flirt]](https://github.com/williballenthin/viv-utils) from 0.6.4 to 0.6.5.
- [Release notes](https://github.com/williballenthin/viv-utils/releases)
- [Commits](https://github.com/williballenthin/viv-utils/compare/v0.6.4...v0.6.5)

---
updated-dependencies:
- dependency-name: viv-utils[flirt]
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-06-21 14:02:11 +00:00
dependabot[bot]
d1407f0a1e build(deps-dev): bump types-termcolor from 0.1.0 to 0.1.1
Bumps [types-termcolor](https://github.com/python/typeshed) from 0.1.0 to 0.1.1.
- [Release notes](https://github.com/python/typeshed/releases)
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-termcolor
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-06-21 14:02:08 +00:00
dependabot[bot]
f5a0e1cd08 build(deps-dev): bump types-colorama from 0.4.0 to 0.4.2
Bumps [types-colorama](https://github.com/python/typeshed) from 0.4.0 to 0.4.2.
- [Release notes](https://github.com/python/typeshed/releases)
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-colorama
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-06-21 14:02:06 +00:00
Willi Ballenthin
94485285f3 Merge pull request #640 from fireeye/fix-507
disable viv creation by default
2021-06-15 15:06:40 -06:00
Willi Ballenthin
466bc4995b Update CHANGELOG.md
Co-authored-by: Moritz <mr-tz@users.noreply.github.com>
2021-06-15 15:06:34 -06:00
William Ballenthin
7bce202122 doc: explain CAPA_SAVE_WORKSPACE 2021-06-15 12:31:56 -06:00
William Ballenthin
40c7401f0a pep8 2021-06-15 12:28:45 -06:00
William Ballenthin
a7ebd5a309 Merge branch 'master' of github.com:fireeye/capa into fix-507 2021-06-15 12:28:17 -06:00
William Ballenthin
d510840bb7 changelog 2021-06-15 12:26:37 -06:00
William Ballenthin
09ad0ec184 tests: save .viv by default, hopefully improve test performance 2021-06-15 12:24:29 -06:00
William Ballenthin
7f03db9fe4 main: don't save .viv by default, unless CAPA_SAVE_WORKSPACE is set
closes #507
2021-06-15 12:24:01 -06:00
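A hedged sketch of that environment-variable gate (illustrative only; the real check lives in capa's main module and may differ):

```
import os

def should_save_workspace() -> bool:
    # only persist the .viv workspace when the user explicitly opts in
    return bool(os.environ.get("CAPA_SAVE_WORKSPACE"))
```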
William Ballenthin
96b9bce93c Merge branch 'master' of github.com:fireeye/capa into fix-630 2021-06-15 11:59:25 -06:00
William Ballenthin
48858e114d main: refactor handling of rules, signatures cli arguments 2021-06-15 11:54:57 -06:00
William Ballenthin
1b4a087c4b render: don't stomp on meta dictionary
fixes a bug in bulk-process in which rules are evaluated multiple times
so meta cannot be updated in place.
2021-06-15 11:44:02 -06:00
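A generic illustration of the fix described above, assuming nothing about capa's internals: when the same `meta` dictionary is reused across repeated evaluations, mutate a copy instead of the shared object.

```
def render(meta: dict, results: list) -> dict:
    meta = dict(meta)                    # copy; don't stomp on the caller's dictionary
    meta["result_count"] = len(results)  # "result_count" is a hypothetical field
    return meta
```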
William Ballenthin
6f1f928434 main: when --signatures provided, override default set
closes #630
2021-06-15 11:43:38 -06:00
Willi Ballenthin
efd02915ab Merge pull request #621 from fireeye/feature-447
add type annotations to public routines
2021-06-15 11:01:52 -06:00
William Ballenthin
9484fadd0f submodule sync data 2021-06-15 09:08:14 -06:00
Willi Ballenthin
b47b398b07 Merge pull request #636 from fireeye/fix-629
move test sigs into testfiles
2021-06-14 13:56:21 -06:00
Capa Bot
5867e880c6 Sync capa rules submodule 2021-06-14 19:41:57 +00:00
William Ballenthin
c1acf702b6 fixtures: move test sigs to testfiles 2021-06-14 11:37:39 -06:00
William Ballenthin
9a7c83b26f tests: move test sigs to testfiles 2021-06-14 11:36:53 -06:00
William Ballenthin
dd2671aac2 rules: fix types 2021-06-14 11:10:42 -06:00
William Ballenthin
c2981d5091 engine: cleanup some lints 2021-06-14 11:05:58 -06:00
William Ballenthin
ae2baebf6c import-to-bn: don't import * 2021-06-14 11:02:20 -06:00
William Ballenthin
7372aa91c6 engine: better type doc 2021-06-14 10:56:44 -06:00
William Ballenthin
48756a7621 ci: invoke mypy during testing 2021-06-14 10:41:53 -06:00
William Ballenthin
aca6ad2f52 scripts: fix types 2021-06-14 10:41:44 -06:00
William Ballenthin
24d61d8634 mypy: ignore more external deps 2021-06-14 10:41:32 -06:00
William Ballenthin
6411732bea rules: fix bug validating rules 2021-06-14 10:35:57 -06:00
William Ballenthin
152060a28a setup: move mypy deps in to capa[dev] target 2021-06-14 10:33:24 -06:00
William Ballenthin
919aef90c0 mypy: fix capa.features.common types 2021-06-14 10:33:08 -06:00
William Ballenthin
853d7285bd mypy: ignore ruamel 2021-06-14 10:32:51 -06:00
William Ballenthin
6842b92ca2 pep8 2021-06-14 10:25:37 -06:00
William Ballenthin
dba250ca86 rules: fix types and document description parsing 2021-06-14 10:25:15 -06:00
William Ballenthin
b8c524d2f5 type: capa.rules parse range 2021-06-14 10:09:35 -06:00
William Ballenthin
0ff5db9397 type: capa.rules feature validation 2021-06-14 10:06:48 -06:00
William Ballenthin
15334cf5d4 render: further refactor att&ck handling 2021-06-14 09:53:36 -06:00
William Ballenthin
f5cb5d462d render: further cleanup rendering of att&ck 2021-06-14 09:52:32 -06:00
William Ballenthin
79459d4a14 mypy fixes
type checker doesn't like a list that contains tuples of both
length 2 and length 3, so keep the length constant with None values.
2021-06-14 09:50:12 -06:00
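A minimal example of the typing pattern described above: a list mixing 2-tuples and 3-tuples has no single precise element type for mypy, so the shorter tuples are padded with `None`.

```
from typing import List, Optional, Tuple

Row = Tuple[str, str, Optional[str]]

rows: List[Row] = [
    ("att&ck", "Defense Evasion", "T1027"),
    ("namespace", "anti-analysis", None),  # padded with None instead of being a 2-tuple
]
```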
William Ballenthin
addd4683ca mypy fixes 2021-06-14 09:47:51 -06:00
William Ballenthin
6d8399684b type: capa.render 2021-06-14 09:28:33 -06:00
William Ballenthin
4583692539 type: capa.main 2021-06-14 09:19:08 -06:00
William Ballenthin
9b7e67443b extractors: fix type hints 2021-06-14 08:59:23 -06:00
William Ballenthin
83909b2be4 *: remove explicit object super class
closes #635
2021-06-14 08:47:09 -06:00
William Ballenthin
247d330f79 type: capa.features.extractors.base_extractor 2021-06-14 08:44:48 -06:00
Willi Ballenthin
1a31c84eef Merge pull request #632 from fireeye/dependabot/pip/black-21.6b0
build(deps-dev): bump black from 21.5b2 to 21.6b0
2021-06-14 08:20:53 -06:00
Willi Ballenthin
9ce92cfb5b Merge pull request #633 from fireeye/dependabot/pip/ruamel-yaml-0.17.9
build(deps): bump ruamel-yaml from 0.17.7 to 0.17.9
2021-06-14 08:20:31 -06:00
Willi Ballenthin
1f44a2dec8 Merge pull request #634 from fireeye/dependabot/pip/tqdm-4.61.1
build(deps): bump tqdm from 4.61.0 to 4.61.1
2021-06-14 08:20:19 -06:00
dependabot[bot]
b7cd467363 build(deps): bump tqdm from 4.61.0 to 4.61.1
Bumps [tqdm](https://github.com/tqdm/tqdm) from 4.61.0 to 4.61.1.
- [Release notes](https://github.com/tqdm/tqdm/releases)
- [Commits](https://github.com/tqdm/tqdm/compare/v4.61.0...v4.61.1)

---
updated-dependencies:
- dependency-name: tqdm
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-06-14 05:59:01 +00:00
dependabot[bot]
ff3cc421eb build(deps): bump ruamel-yaml from 0.17.7 to 0.17.9
Bumps [ruamel-yaml](https://sourceforge.net/p/ruamel-yaml/code/ci/default/tree) from 0.17.7 to 0.17.9.

---
updated-dependencies:
- dependency-name: ruamel-yaml
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-06-14 05:58:55 +00:00
dependabot[bot]
205798865d build(deps-dev): bump black from 21.5b2 to 21.6b0
Bumps [black](https://github.com/psf/black) from 21.5b2 to 21.6b0.
- [Release notes](https://github.com/psf/black/releases)
- [Changelog](https://github.com/psf/black/blob/main/CHANGES.md)
- [Commits](https://github.com/psf/black/commits)

---
updated-dependencies:
- dependency-name: black
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-06-14 05:58:48 +00:00
Capa Bot
10f499d230 Sync capa rules submodule 2021-06-14 03:51:14 +00:00
William Ballenthin
a21b53d737 Merge branch 'master' of github.com:fireeye/capa into feature-447 2021-06-10 09:02:39 -06:00
Capa Bot
0f15895b36 Sync capa rules submodule 2021-06-10 14:42:56 +00:00
Moritz
2ba2aec0d3 Merge pull request #624 from fireeye/fix-622
remove logic from __init__.py and break import cycles
2021-06-10 13:53:10 +02:00
William Ballenthin
11d50aa5b1 pep8 2021-06-10 00:29:23 -06:00
William Ballenthin
b066af9506 mypy: extend lib ignore config 2021-06-10 00:28:28 -06:00
William Ballenthin
059909c027 features: fix types 2021-06-10 00:23:01 -06:00
William Ballenthin
d61ff0c69f changelog 2021-06-10 00:08:00 -06:00
William Ballenthin
f6c2394bdf common: fix type annotations 2021-06-10 00:07:05 -06:00
William Ballenthin
df5ed6bbf2 indirect_calls: fix types 2021-06-10 00:02:23 -06:00
William Ballenthin
0b653aa47a ida: file: fix imports 2021-06-10 00:02:11 -06:00
William Ballenthin
b5a18de4a3 pep8 2021-06-09 23:52:15 -06:00
William Ballenthin
5408481606 type: capa.engine 2021-06-09 23:51:55 -06:00
William Ballenthin
1c66ebe638 type: capa.features.common 2021-06-09 23:47:06 -06:00
William Ballenthin
3e79dfd0e7 type: capa.rules 2021-06-09 23:39:07 -06:00
William Ballenthin
459df37b13 indirect_calls: fix typing circular dependencies 2021-06-09 23:28:05 -06:00
William Ballenthin
3d8edc513c type: capa.features.extractors.viv.insn 2021-06-09 23:24:51 -06:00
William Ballenthin
ab7bf53f67 type: capa.features.insn 2021-06-09 23:20:46 -06:00
William Ballenthin
c30a56bc11 type: capa.features.extractors.helpers 2021-06-09 23:19:36 -06:00
William Ballenthin
6918a039e9 type: capa.render.result_document 2021-06-09 23:15:45 -06:00
William Ballenthin
469e2ff870 type: capa.features.extractors.viv.basicblock 2021-06-09 23:12:07 -06:00
William Ballenthin
3416f7bc61 type: capa.features.file 2021-06-09 23:09:24 -06:00
William Ballenthin
a75d7576f8 type: capa.features.extractors.viv.indirect_calls 2021-06-09 23:08:29 -06:00
William Ballenthin
23addda29a type: capa.render.utils 2021-06-09 23:06:33 -06:00
William Ballenthin
14e2efa309 type: capa.features.extractors.viv.file 2021-06-09 23:01:14 -06:00
William Ballenthin
faa363cd8f type: capa.render.default 2021-06-09 22:59:54 -06:00
William Ballenthin
e29922af57 type: capa.features.extractors.pefile 2021-06-09 22:56:02 -06:00
William Ballenthin
8a0ae7ae55 type: capa.features.extractors.viv.helpers 2021-06-09 22:54:29 -06:00
William Ballenthin
6f67619621 type capa.features.freeze 2021-06-09 22:51:09 -06:00
William Ballenthin
3f55f678ca Merge branch 'fix-622' into feature-447 2021-06-09 22:41:10 -06:00
William Ballenthin
ee41d47e4d test_function_id: fix test imports 2021-06-09 22:35:26 -06:00
William Ballenthin
527e993bb4 engine: remove dependency on rules, fixing circular import 2021-06-09 22:30:43 -06:00
William Ballenthin
6b4d7266e6 changelog 2021-06-09 22:23:06 -06:00
William Ballenthin
954ed3a408 pep8 2021-06-09 22:22:03 -06:00
William Ballenthin
ac59e50b5f move capa/features/__init__.py logic to common.py
also cleanup imports across the board,
thanks to pylance.
2021-06-09 22:20:53 -06:00
William Ballenthin
7029ad32c4 move capa/features/extractors/__init__.py logic to base_extractor.py 2021-06-09 21:09:29 -06:00
William Ballenthin
766dcacdbe move logic out of capa/render/__init__.py 2021-06-09 18:06:51 -06:00
William Ballenthin
fc9ad6c737 move extractors/ida/__init__.py logic to extractor.py 2021-06-09 17:55:44 -06:00
William Ballenthin
7d2e664320 move extractors/smda/__init__.py logic to extractor.py 2021-06-09 17:52:06 -06:00
William Ballenthin
6187317a4e move extractors/viv/__init__.py logic to extractor.py 2021-06-09 17:49:50 -06:00
William Ballenthin
d81b0bcbfa move helpers/__init__.py to helpers.py 2021-06-09 17:43:58 -06:00
William Ballenthin
9c8e18acb4 pefile/__init__ to pefile.py 2021-06-09 17:42:46 -06:00
William Ballenthin
8aed58c1d4 *: remove __all__
closes #623
2021-06-09 17:38:57 -06:00
William Ballenthin
325c726f0e typing: capa.helpers 2021-06-09 15:09:37 -06:00
William Ballenthin
9a4e9b6586 setup: add initial mypy setup
invoke like: mypy --config-file .github/mypy/mypy.ini capa/main.py
2021-06-09 14:50:37 -06:00
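Besides the command line shown in the commit message, mypy can also be invoked programmatically via its documented api module; a small sketch reusing the config path from the commit (everything else is illustrative):

    from mypy import api

    # equivalent to: mypy --config-file .github/mypy/mypy.ini capa/main.py
    report, errors, exit_status = api.run(
        ["--config-file", ".github/mypy/mypy.ini", "capa/main.py"]
    )
    print(report)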
Capa Bot
23354ec452 Sync capa rules submodule 2021-06-09 09:19:50 +00:00
Capa Bot
f698f4e79b Sync capa rules submodule 2021-06-09 08:08:12 +00:00
Moritz
c05a8bf910 Merge pull request #620 from fireeye/fix-619
correctly render negative numbers and offsets
2021-06-09 10:03:04 +02:00
Moritz
9ffbb82f4c Merge pull request #618 from fireeye/fix/616
fix 616
2021-06-09 10:00:04 +02:00
William Ballenthin
0508d31a35 changelog 2021-06-08 11:10:40 -06:00
William Ballenthin
901a398b31 insn: render negative number, offset correctly
closes #619
2021-06-08 11:09:32 -06:00
mike-hunhoff
fd0f87ca6e Update capa/features/file.py w/ PR changes
Co-authored-by: Willi Ballenthin <willi.ballenthin@gmail.com>
2021-06-08 10:59:42 -06:00
Michael Hunhoff
84d2f9f324 fix 616 2021-06-08 10:16:54 -06:00
Capa Bot
f9bad7e5e4 Sync capa rules submodule 2021-06-08 14:17:39 +00:00
Capa Bot
40b6575db6 Sync capa-testfiles submodule 2021-06-08 12:48:33 +00:00
Willi Ballenthin
64d849aafc Merge pull request #613 from fireeye/doc/update-readme
update readme
2021-06-07 10:46:28 -06:00
Willi Ballenthin
3b6e6dcc00 Merge pull request #612 from fireeye/ci/no-changelog-dependabot
ignore dependabot for changelog check
2021-06-07 10:45:56 -06:00
Willi Ballenthin
d17ac2928f Merge pull request #615 from fireeye/bump-smda
bump smda and remove xfail
2021-06-07 10:33:21 -06:00
Moritz Raabe
8b58723f40 bump smda and remove xfail 2021-06-07 13:56:55 +02:00
Moritz Raabe
bed2e3777e job level exclusion 2021-06-07 12:38:03 +02:00
Capa Bot
c039e98d3f Sync capa rules submodule 2021-06-07 09:51:13 +00:00
Moritz Raabe
c3ba6a9025 update readme 2021-06-07 10:26:41 +02:00
Moritz
2691fb400e Merge pull request #611 from fireeye/dependabot/pip/pytest-cov-2.12.1
build(deps-dev): bump pytest-cov from 2.12.0 to 2.12.1
2021-06-07 09:55:12 +02:00
Moritz
e0075573d9 Merge pull request #610 from fireeye/dependabot/pip/ruamel-yaml-0.17.7
build(deps): bump ruamel-yaml from 0.17.5 to 0.17.7
2021-06-07 09:55:00 +02:00
Moritz
1bb8c78b60 Merge pull request #609 from fireeye/dependabot/pip/black-21.5b2
build(deps-dev): bump black from 21.5b1 to 21.5b2
2021-06-07 09:54:40 +02:00
Moritz Raabe
ff66346d2a ignore dependabot for changelog check 2021-06-07 09:52:46 +02:00
Capa Bot
6f51324cca Sync capa-testfiles submodule 2021-06-07 07:45:31 +00:00
Capa Bot
700259eab6 Sync capa rules submodule 2021-06-07 07:45:04 +00:00
Capa Bot
438677b129 Sync capa-testfiles submodule 2021-06-07 06:48:11 +00:00
Capa Bot
3f51e787e4 Sync capa rules submodule 2021-06-07 06:19:37 +00:00
Capa Bot
2bbf00d603 Sync capa rules submodule 2021-06-07 06:17:47 +00:00
Moritz
b21b041dab Merge pull request #608 from fireeye/fix-605
fix 605
2021-06-07 08:16:16 +02:00
Moritz
734b1702e6 Merge pull request #607 from Ana06/ahead-changed-files
Use Ana06/get-changed-files@v1.2
2021-06-07 08:11:27 +02:00
dependabot[bot]
a39e2e7e0f build(deps-dev): bump pytest-cov from 2.12.0 to 2.12.1
Bumps [pytest-cov](https://github.com/pytest-dev/pytest-cov) from 2.12.0 to 2.12.1.
- [Release notes](https://github.com/pytest-dev/pytest-cov/releases)
- [Changelog](https://github.com/pytest-dev/pytest-cov/blob/master/CHANGELOG.rst)
- [Commits](https://github.com/pytest-dev/pytest-cov/compare/v2.12.0...v2.12.1)

---
updated-dependencies:
- dependency-name: pytest-cov
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-06-07 06:02:46 +00:00
dependabot[bot]
d9e1732766 build(deps): bump ruamel-yaml from 0.17.5 to 0.17.7
Bumps [ruamel-yaml](https://sourceforge.net/p/ruamel-yaml/code/ci/default/tree) from 0.17.5 to 0.17.7.

---
updated-dependencies:
- dependency-name: ruamel-yaml
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-06-07 06:02:38 +00:00
dependabot[bot]
6dd5bbeffd build(deps-dev): bump black from 21.5b1 to 21.5b2
Bumps [black](https://github.com/psf/black) from 21.5b1 to 21.5b2.
- [Release notes](https://github.com/psf/black/releases)
- [Changelog](https://github.com/psf/black/blob/main/CHANGES.md)
- [Commits](https://github.com/psf/black/commits)

---
updated-dependencies:
- dependency-name: black
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-06-07 06:02:31 +00:00
William Ballenthin
3c4388e280 changelog 2021-06-04 11:48:03 -06:00
Ana Maria Martinez Gomez
6ffa5ef53e changelog: fix number of new rules
This was caused by a bug in the GH action which updates this number
automatically:
https://github.com/fireeye/capa-rules/pull/405
2021-06-04 19:47:57 +02:00
William Ballenthin
90ec848bf6 main: fix matching BB features at file scope
closes #605
2021-06-04 11:45:37 -06:00
William Ballenthin
e0be7f1b8e main: debug rules more correctly 2021-06-04 11:31:19 -06:00
Ana Maria Martinez Gomez
4ef3830b6b Use Ana06/get-changed-files@v1.2
Use Ana06/get-changed-files@v1.2, which removes the _head commit is ahead
of the base commit_ check. That check made the action fail in branches that
are not up to date (in which rebasing is needed).

It supersedes https://github.com/fireeye/capa/pull/599
2021-06-04 14:03:41 +02:00
Ana María Martínez Gómez
e737595339 Merge pull request #604 from Ana06/lint_changelog
ci: lint CHANGELOG
2021-06-04 13:33:11 +02:00
Capa Bot
94cb090afe Sync capa rules submodule 2021-06-04 09:10:09 +00:00
Moritz
32e0a5dce2 Merge pull request #598 from fireeye/render/json-attck-fields
parse att&ck for output doc
2021-06-02 16:54:31 +02:00
Ana Maria Martinez Gomez
f304bdbd20 ci: lint CHANGELOG
The sync GH action in capa-rules relies on a single '- *$' in the
CHANGELOG file. Add a test that checks this is the case, so the marker
is not removed by accident.

This happened in the following PR:
https://github.com/fireeye/capa/pull/591
This caused the new rules in the following PR not to be added to the
CHANGELOG:
https://github.com/fireeye/capa-rules/pull/400
2021-06-02 12:42:48 +02:00
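A sketch of such a lint, assuming a pytest-style test and a bare "- " bullet as the marker (test name and path are illustrative):

    import re
    from pathlib import Path

    def test_changelog_has_single_new_rules_marker():
        # the capa-rules sync action relies on exactly one empty "- " bullet
        # in CHANGELOG.md; fail loudly if it is removed or duplicated.
        lines = Path("CHANGELOG.md").read_text(encoding="utf-8").splitlines()
        markers = [line for line in lines if re.fullmatch(r"- *", line)]
        assert len(markers) == 1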
Ana Maria Martinez Gomez
1a3286beda ci: fix CHANGELOG
The `-` marker used by the GitHub Action that updates the rules in the
CHANGELOG was removed in:
https://github.com/fireeye/capa/pull/591
Consequently the new rules added in the last pull request were not added
to the CHANGELOG:
https://github.com/fireeye/capa-rules/pull/400
2021-06-02 12:12:48 +02:00
Moritz Raabe
63cd70029f dedup code 2021-06-02 11:06:49 +02:00
Moritz Raabe
94089ff43f parse att&ck for output doc 2021-06-02 10:37:19 +02:00
Capa Bot
8f1ce68e96 Sync capa rules submodule 2021-06-01 17:51:43 +00:00
Willi Ballenthin
37208aabd3 Merge pull request #591 from fireeye/feature-590
main: use rule scope internal/limitation/file for file limitations, not code
2021-06-01 11:50:56 -06:00
Willi Ballenthin
8c3605c886 Merge branch 'master' into feature-590 2021-06-01 11:50:40 -06:00
William Ballenthin
2706a7171e linter: fix match namespace handling
closes #601
2021-06-01 11:38:05 -06:00
William Ballenthin
8f3d443247 rules: use existing code, dedup 2021-06-01 11:25:38 -06:00
Willi Ballenthin
9968d16f21 Merge pull request #593 from fireeye/feature-159
json: capture all strings matching regex
2021-06-01 11:18:08 -06:00
Willi Ballenthin
2756c05889 Merge branch 'master' into feature-159 2021-06-01 11:17:41 -06:00
William Ballenthin
8a65c565a5 pep8 2021-06-01 11:06:12 -06:00
William Ballenthin
17eeecc526 render: handle namespace matches in result document 2021-05-31 10:28:11 -06:00
William Ballenthin
3b245ea201 rules: index rules by namespace 2021-05-31 10:28:00 -06:00
William Ballenthin
3cd348e8f7 rules: implement __contains__ for RuleSet 2021-05-31 10:27:44 -06:00
William Ballenthin
6d08695b38 Merge branch 'master' of github.com:fireeye/capa into feature-590 2021-05-31 09:54:33 -06:00
William Ballenthin
66b2c07af4 main: show matching file limitation rule when showing warning 2021-05-31 09:53:19 -06:00
Capa Bot
b8a67553d0 Sync capa rules submodule 2021-05-31 08:53:38 +00:00
Moritz
82eae4324e Merge pull request #595 from fireeye/dependabot/pip/ruamel-yaml-0.17.5
build(deps): bump ruamel-yaml from 0.17.4 to 0.17.5
2021-05-31 10:39:33 +02:00
Moritz
ac9c132c91 Merge pull request #594 from fireeye/dependabot/pip/tqdm-4.61.0
build(deps): bump tqdm from 4.60.0 to 4.61.0
2021-05-31 10:39:14 +02:00
Moritz
c2953b9733 Merge pull request #576 from fireeye/render/json-mbc-attck-fields
render `rule.meta.mbc` on output
2021-05-31 10:38:27 +02:00
Moritz
30de93b81f Merge pull request #596 from fireeye/tests/fix-smda-fails
fix smda test xfail
2021-05-31 10:37:43 +02:00
Moritz Raabe
e6f45b63d6 fix test xfail 2021-05-31 10:02:31 +02:00
dependabot[bot]
c1b689a375 build(deps): bump ruamel-yaml from 0.17.4 to 0.17.5
Bumps [ruamel-yaml](https://sourceforge.net/p/ruamel-yaml/code/ci/default/tree) from 0.17.4 to 0.17.5.

Signed-off-by: dependabot[bot] <support@github.com>
2021-05-31 05:57:42 +00:00
dependabot[bot]
c1546cf6a8 build(deps): bump tqdm from 4.60.0 to 4.61.0
Bumps [tqdm](https://github.com/tqdm/tqdm) from 4.60.0 to 4.61.0.
- [Release notes](https://github.com/tqdm/tqdm/releases)
- [Commits](https://github.com/tqdm/tqdm/compare/v4.60.0...v4.61.0)

Signed-off-by: dependabot[bot] <support@github.com>
2021-05-31 05:57:33 +00:00
Moritz Raabe
de96bb763b address code review 2021-05-28 16:52:17 +02:00
Moritz Raabe
9e62bd1b24 update renderers 2021-05-28 16:40:15 +02:00
Moritz Raabe
54d21a043e parse mbc for result doc 2021-05-28 16:40:15 +02:00
Moritz Raabe
f593592ff0 parse mbc fields 2021-05-28 16:40:15 +02:00
Willi Ballenthin
ed02088c82 detect (and short circuit) file limitations at file scope (#586)
* smda: move pe carve into helpers

* smda: simplify test parametrization/xfail

* extractors: add pefile extractor for file scope features

* pep8

* main: bail early on file limitation detected at file scope

closes #583

* changelog
2021-05-28 08:14:44 -06:00
Ana María Martínez Gómez
b3fff51002 Merge pull request #584 from Ana06/changelog-GA
ci: Reject PRs without CHANGELOG update
2021-05-28 12:09:06 +02:00
Ana Maria Martinez Gomez
51884fea2d doc: Fix link and add more details
Fix broken link to `pull_request_template.md` and add some more details.

Related #457
2021-05-28 12:07:21 +02:00
Ana Maria Martinez Gomez
84b0bc6439 changelog: Add #584 to CHANGELOG 2021-05-28 11:08:05 +02:00
Ana Maria Martinez Gomez
38d41e2f59 ci: fix get-changed-files
Ana06/get-changed-files@v1.1 is a fork of
https://github.com/jitterbit/get-changed-files, which supports
`pull_request_target` and allows filtering files using regular
expressions.

As we need to use `pull_request_target`, Ana06/get-changed-files@v1.1
works, but jitterbit/get-changed-files@v1 doesn't.
2021-05-28 11:08:04 +02:00
Ana Maria Martinez Gomez
23ff9e719f ci: only reject once and fix dismiss
`Ana06/automatic-pull-request-review@v0.1.0` is a fork of
https://github.com/AndrewMusgrave/automatic-pull-request-review which
fixes `DISMISS` and provides an `allow_duplicate` option that allows
approving only once.
2021-05-28 11:08:04 +02:00
Ana Maria Martinez Gomez
7a0a6f9cf1 ci: check changelog
Request changes in a PR without CHANGELOG update.
2021-05-28 11:08:04 +02:00
Ana Maria Martinez Gomez
f6960e4deb github: Improve pull request template
After using the PR template for a while, I think simplifying it will be
helpful:

- GitHub includes the commit message description automatically with the
aim of saving you time as it is sometimes also a good PR description.
With the current template, I need to cut this text and paste it into the
description section (which is really annoying!).

- Make a single, simpler checklist. Add information as a comment and have a
straightforward list that helps us remember the changelog, tests and
documentation without needing to invest much time. The changelog
bullet point will also be used in GitHub Actions.
2021-05-28 11:08:00 +02:00
Willi Ballenthin
bd63ded1dd file scope API features (#568)
* smda: minor unrelated fixes

* file features: extract API features at file scope for library functions

closes #567

* changelog

* ida: add file-scope API feature

Co-authored-by: mike-hunhoff <mike.hunhoff@gmail.com>

* fix lints from pylance

* features: use "function-name" for recognized linked functions

* pep8

* pep8

* rules: remove incorrect feature scope

* tests: xfail SMDA tests relying on function id

* tests: fixtures: order tests by sample, ideally improving memory usage

* pep8

* pep8

* smda: xfail two more tests

Co-authored-by: mike-hunhoff <mike.hunhoff@gmail.com>
2021-05-27 12:59:00 -06:00
William Ballenthin
3c90e909a1 pep8 2021-05-27 10:45:01 -06:00
William Ballenthin
70396ffa36 ida: try to fix regex match rendering 2021-05-27 10:38:40 -06:00
William Ballenthin
56efb2adfe changelog 2021-05-27 10:28:41 -06:00
William Ballenthin
868b5ed6a3 features: extract all strings matching regex
closes #159
2021-05-27 10:27:39 -06:00
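Conceptually, the change above reports every string that matches a rule's regex instead of stopping at the first hit; an illustrative snippet (not capa's implementation):

    import re

    strings = ["kernel32.dll", "user32.dll", "hello world"]
    pattern = re.compile(r".*\.dll", re.IGNORECASE)

    # collect every matching string, not just the first one found
    matches = [s for s in strings if pattern.match(s)]
    assert matches == ["kernel32.dll", "user32.dll"]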
William Ballenthin
0a226e8b01 main: use rule scope internal/limitation/file for file limitations, not
code

closes #390
2021-05-27 09:18:55 -06:00
Capa Bot
7df29b491c Sync capa-testfiles submodule 2021-05-27 07:08:00 +00:00
Capa Bot
f0fb5fb346 Sync capa rules submodule 2021-05-26 21:03:50 +00:00
Capa Bot
342497b72f Sync capa rules submodule 2021-05-26 07:31:49 +00:00
Capa Bot
2b19257c5c Sync capa-testfiles submodule 2021-05-26 07:22:40 +00:00
Moritz
4ebbdcd00c Merge pull request #582 from fireeye/ci/lint-color-optional
or/optional lint and colors
2021-05-25 17:26:23 +02:00
Moritz Raabe
204d8b36df add or/optional lint and colors
closes #348
2021-05-25 16:32:47 +02:00
Moritz Raabe
8e4e9fc616 Revert "Sync capa-testfiles submodule"
This reverts commit 826d472c07.
2021-05-25 14:58:01 +02:00
Capa Bot
826d472c07 Sync capa-testfiles submodule 2021-05-25 12:45:59 +00:00
Capa Bot
57f416d62d Sync capa-testfiles submodule 2021-05-25 12:44:13 +00:00
Capa Bot
a79a547682 Sync capa rules submodule 2021-05-24 15:25:44 +00:00
Capa Bot
bd9812cee4 Sync capa rules submodule 2021-05-24 15:22:21 +00:00
Willi Ballenthin
2a36894d85 Merge pull request #578 from fireeye/dependabot/pip/viv-utils-flirt--0.6.4
build(deps): bump viv-utils[flirt] from 0.6.2 to 0.6.4
2021-05-24 09:14:31 -06:00
Willi Ballenthin
c33c4c45dc Merge pull request #577 from fireeye/dependabot/pip/smda-1.5.17
build(deps): bump smda from 1.5.14 to 1.5.17
2021-05-24 09:14:22 -06:00
dependabot[bot]
9cd07a0cee build(deps): bump viv-utils[flirt] from 0.6.2 to 0.6.4
Bumps [viv-utils[flirt]](https://github.com/williballenthin/viv-utils) from 0.6.2 to 0.6.4.
- [Release notes](https://github.com/williballenthin/viv-utils/releases)
- [Commits](https://github.com/williballenthin/viv-utils/compare/v0.6.2...v0.6.4)

Signed-off-by: dependabot[bot] <support@github.com>
2021-05-24 06:03:22 +00:00
dependabot[bot]
4f85d85ea6 build(deps): bump smda from 1.5.14 to 1.5.17
Bumps [smda](https://github.com/danielplohmann/smda) from 1.5.14 to 1.5.17.
- [Release notes](https://github.com/danielplohmann/smda/releases)
- [Commits](https://github.com/danielplohmann/smda/commits)

Signed-off-by: dependabot[bot] <support@github.com>
2021-05-24 06:03:15 +00:00
Willi Ballenthin
8699003597 Merge pull request #572 from fireeye/feature-571
linter: summarize status at end
2021-05-21 11:14:29 -06:00
Willi Ballenthin
4cada67b21 Merge branch 'master' into feature-571 2021-05-21 11:14:22 -06:00
Willi Ballenthin
0a203b54cd changelog 2021-05-21 11:13:48 -06:00
Willi Ballenthin
cf1e9dc425 Merge pull request #573 from fireeye/lazy-import-flirt
lazy import flirt
2021-05-21 09:50:14 -06:00
Willi Ballenthin
6b8bb0520d Merge pull request #575 from ruppde/master
Update capa2yara.py
2021-05-21 09:45:24 -06:00
Arnim Rupp
7759d2dd79 Update capa2yara.py 2021-05-21 17:04:16 +02:00
Arnim Rupp
73f121cf03 Update capa2yara.py
bugfix: https://github.com/fireeye/capa-rules/blob/master/collection/get-geographical-location.yml hit far too many files with /\bcity, as opposed to the intention of the capa rule to hit only in function names. Changed to /\x00city.
2021-05-21 16:51:14 +02:00
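To illustrate the fix described above: in raw file contents, /\bcity/ matches wherever a word boundary precedes "city", including ordinary prose, while /\x00city/ only matches when the text follows a NUL byte, as null-terminated function-name strings in binaries typically do (sample data is made up):

    import re

    data = b"visit our city page\x00GetCityName\x00city_to_geo"
    # \bcity: any word boundary before "city" -- also hits ordinary prose
    print(re.findall(rb"\bcity", data))    # [b'city', b'city']
    # \x00city: only "city" immediately after a NUL separator
    print(re.findall(rb"\x00city", data))  # [b'\x00city']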
Moritz
91f914f5c0 Merge pull request #562 from fireeye/lib-meta-info
improve progress bar output
2021-05-21 16:47:52 +02:00
Moritz Raabe
af5613250f lazy import flirt
closes #540
2021-05-21 11:31:37 +02:00
Capa Bot
72da8f3aed Sync capa rules submodule 2021-05-21 07:12:57 +00:00
Moritz Raabe
a8e353fe31 revert rule loading pbar 2021-05-20 14:00:01 +02:00
Moritz Raabe
8a386b6909 improve progress bar output 2021-05-20 13:56:29 +02:00
Ana Maria Martinez Gomez
83606bbc0f changelog: convert capa rules to YARA rules
Add https://github.com/fireeye/capa/pull/561 to CHANGELOG.
2021-05-20 11:25:24 +02:00
Moritz
caaeded278 Merge pull request #563 from fireeye/ci/lint-statement-children
lint statements for single child statements
2021-05-20 10:41:41 +02:00
Willi Ballenthin
dcf4a056ee show-features: skip library functions (#570)
* show-features: skip library functions

closes #569

* changelog
2021-05-20 10:34:48 +02:00
Capa Bot
f9cec64c2d Sync capa-testfiles submodule 2021-05-20 08:11:28 +00:00
William Ballenthin
9b1400c23a pep8 2021-05-19 16:14:37 -06:00
William Ballenthin
60d77759f2 Merge branch 'feature-571' of github.com:fireeye/capa into feature-571 2021-05-19 16:14:09 -06:00
Willi Ballenthin
5fc705856d Merge branch 'master' into feature-571 2021-05-20 16:40:37 -06:00
William Ballenthin
0a1adb99e0 lint: cleanup handling of nursery rules further 2021-05-19 16:13:45 -06:00
William Ballenthin
3eef034a94 lint: better handling of nursery rule summary 2021-05-19 16:06:07 -06:00
Capa Bot
66d96201cb Sync capa rules submodule 2021-05-19 20:31:48 +00:00
Moritz Raabe
586726fb13 lint statements for single child statements 2021-05-19 18:25:14 +02:00
Capa Bot
656cdfc41c Sync capa rules submodule 2021-05-19 16:21:47 +00:00
Arnim Rupp
7b62b589f7 Create capa2yara.py (#561)
* Create capa2yara.py

* Update capa2yara.py

    isort --profile black --length-sort --line-width 120

    black -l 120

* Update scripts/capa2yara.py

Co-authored-by: Willi Ballenthin <willi.ballenthin@gmail.com>

Co-authored-by: Arnim Rupp <46819580+2d4d@users.noreply.github.com>
Co-authored-by: Moritz <mr-tz@users.noreply.github.com>
Co-authored-by: Willi Ballenthin <willi.ballenthin@gmail.com>
2021-05-19 18:01:04 +02:00
Capa Bot
e7884c9a53 Sync capa rules submodule 2021-05-19 07:50:11 +00:00
William Ballenthin
2f2849dee0 changelog 2021-05-18 15:20:54 -06:00
William Ballenthin
ff88393248 linter: summarize status at end
closes #571
2021-05-18 15:19:34 -06:00
William Ballenthin
9ed6e12e7c Merge branch 'master' of github.com:fireeye/capa 2021-05-18 13:35:59 -06:00
William Ballenthin
ec5cec619d rules: add tests demonstrating mnemonic descriptions 2021-05-18 13:35:24 -06:00
Capa Bot
760867b81e Sync capa rules submodule 2021-05-17 15:00:45 +00:00
Capa Bot
abeaac0675 Sync capa rules submodule 2021-05-17 10:14:49 +00:00
Moritz
010866a3bd Merge pull request #560 from fireeye/dependabot/pip/pytest-cov-2.12.0
build(deps-dev): bump pytest-cov from 2.11.1 to 2.12.0
2021-05-17 12:14:16 +02:00
Capa Bot
8f9f792930 Sync capa rules submodule 2021-05-17 08:36:26 +00:00
Capa Bot
9ccdce9896 Sync capa rules submodule 2021-05-17 08:35:45 +00:00
dependabot[bot]
0dc212f53e build(deps-dev): bump pytest-cov from 2.11.1 to 2.12.0
Bumps [pytest-cov](https://github.com/pytest-dev/pytest-cov) from 2.11.1 to 2.12.0.
- [Release notes](https://github.com/pytest-dev/pytest-cov/releases)
- [Changelog](https://github.com/pytest-dev/pytest-cov/blob/master/CHANGELOG.rst)
- [Commits](https://github.com/pytest-dev/pytest-cov/compare/v2.11.1...v2.12.0)

Signed-off-by: dependabot[bot] <support@github.com>
2021-05-17 05:57:20 +00:00
Capa Bot
3cf4a47773 Sync capa rules submodule 2021-05-12 14:23:14 +00:00
Capa Bot
bbf59d65ad Sync capa rules submodule 2021-05-12 12:14:30 +00:00
Moritz
6b738f754e Merge pull request #557 from fireeye/dependabot/pip/black-21.5b1
build(deps-dev): bump black from 21.4b2 to 21.5b1
2021-05-12 07:35:43 +02:00
dependabot[bot]
83a4e054d1 build(deps-dev): bump black from 21.4b2 to 21.5b1
Bumps [black](https://github.com/psf/black) from 21.4b2 to 21.5b1.
- [Release notes](https://github.com/psf/black/releases)
- [Changelog](https://github.com/psf/black/blob/main/CHANGES.md)
- [Commits](https://github.com/psf/black/commits)

Signed-off-by: dependabot[bot] <support@github.com>
2021-05-11 17:42:03 +00:00
Moritz
9843776460 Merge pull request #552 from fireeye/dependabot/pip/pytest-6.2.4
build(deps-dev): bump pytest from 6.2.3 to 6.2.4
2021-05-11 19:40:43 +02:00
dependabot[bot]
2626572ddc build(deps-dev): bump pytest from 6.2.3 to 6.2.4
Bumps [pytest](https://github.com/pytest-dev/pytest) from 6.2.3 to 6.2.4.
- [Release notes](https://github.com/pytest-dev/pytest/releases)
- [Changelog](https://github.com/pytest-dev/pytest/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pytest-dev/pytest/compare/6.2.3...6.2.4)

Signed-off-by: dependabot[bot] <support@github.com>
2021-05-11 16:51:23 +00:00
Moritz
e3af23f209 Merge pull request #551 from fireeye/dependabot/pip/vivisect-1.0.3
build(deps): bump vivisect from 1.0.1 to 1.0.3
2021-05-11 18:48:16 +02:00
dependabot[bot]
0f16787ef9 build(deps): bump vivisect from 1.0.1 to 1.0.3
Bumps [vivisect](https://github.com/vivisect/vivisect) from 1.0.1 to 1.0.3.
- [Release notes](https://github.com/vivisect/vivisect/releases)
- [Changelog](https://github.com/vivisect/vivisect/blob/master/CHANGELOG.rst)
- [Commits](https://github.com/vivisect/vivisect/compare/v1.0.1...v1.0.3)

Signed-off-by: dependabot[bot] <support@github.com>
2021-05-11 15:01:03 +00:00
Moritz
495a270c99 Update CHANGELOG.md 2021-05-11 16:32:54 +02:00
Moritz
424a25cb91 Fix tests on Windows - reduced memory impact and general fixes (#545)
* Update tests.yml

* Update .github/workflows/tests.yml

* Update tests.yml

* update

* min tests

* enable all, no sigpaths

* update cache

* save workspace, log caching

* updated tests

* update tests

* update rec call test

* lower cache size

* address Ana's feedback
2021-05-11 16:29:01 +02:00
Capa Bot
fa0809685e Sync capa rules submodule 2021-05-11 11:10:45 +00:00
Ana Maria Martinez Gomez
188966a94b changelog: support multiple authors
The GH action didn't support multiple authors, producing a broken entry in the
last update. Correct the entry and mention the fix in the CHANGELOG.

https://github.com/fireeye/capa/issues/555
2021-05-11 12:48:30 +02:00
Capa Bot
d7b7e0111e Sync capa rules submodule 2021-05-10 08:24:40 +00:00
Capa Bot
be11223e4b Sync capa rules submodule 2021-05-07 15:06:52 +00:00
Ana Maria Martinez Gomez
2cbf5147c0 changelog: add #517 and capa/rules/374
Add to the changelog that we now update `New Rules` section in CHANGELOG
automatically.
2021-05-07 17:01:55 +02:00
Capa Bot
5b026df5f4 Sync capa rules submodule 2021-05-07 14:47:03 +00:00
Ana María Martínez Gómez
ac842c95d3 Merge pull request #549 from Ana06/changelog
Update CHANGELOG and release
2021-05-07 16:34:08 +02:00
Capa Bot
aaaeec4de7 Sync capa rules submodule 2021-05-07 13:54:11 +00:00
Capa Bot
99a7380faf Sync capa-testfiles submodule 2021-05-07 12:49:58 +00:00
Ana Maria Martinez Gomez
f43ffabded doc: add item to release checklist
We should update capa everywhere after releasing!
2021-05-07 12:55:02 +02:00
Ana Maria Martinez Gomez
52c0cfd5d0 changelog: prepare to automatize new rules entries
Use an empty item in the `New Rules` section as a marker for the GitHub
Action. If this causes problems, we could look into other solutions, such
as writing 2 lines before `### Bug Fixes`. But I think this is the
easiest I can come up with, so let's give it a try.
2021-05-07 12:55:02 +02:00
Ana Maria Martinez Gomez
1caf4a7fbf changelog: add missing changes
Add missing changes to CHANGELOG. It should be up-to-date now, with the
exception of the dependency updates, which I think need discussion.
2021-05-07 12:54:59 +02:00
Ana Maria Martinez Gomez
98a976fa72 changelog: add v1.6.3
Add v1.6.3 release which backports IDA 7.6 support to Python 2. Also
remove the capa-rules raw diff as there are no changes (and the tag
doesn't exist).
2021-05-06 23:25:53 +02:00
Capa Bot
3a883807e5 Sync capa rules submodule 2021-05-06 18:07:01 +00:00
Capa Bot
b1b34db0b6 Sync capa rules submodule 2021-05-04 13:43:40 +00:00
Capa Bot
4901cd1da1 Sync capa-testfiles submodule 2021-05-04 07:26:14 +00:00
Capa Bot
272471e158 Sync capa rules submodule 2021-05-03 22:42:41 +00:00
William Ballenthin
8f0ce11ff6 tests: register common FLIRT sigs
closes #538
2021-05-01 08:06:56 -06:00
Willi Ballenthin
e8c807b993 Merge pull request #541 from fireeye/dependabot/pip/black-21.4b2
build(deps-dev): bump black from 21.4b0 to 21.4b2
2021-05-03 08:35:32 -06:00
dependabot[bot]
0b1c80d4d5 build(deps-dev): bump black from 21.4b0 to 21.4b2
Bumps [black](https://github.com/psf/black) from 21.4b0 to 21.4b2.
- [Release notes](https://github.com/psf/black/releases)
- [Changelog](https://github.com/psf/black/blob/master/CHANGES.md)
- [Commits](https://github.com/psf/black/commits)

Signed-off-by: dependabot[bot] <support@github.com>
2021-05-03 06:36:46 +00:00
Capa Bot
82ce223c9b Sync capa-testfiles submodule 2021-04-30 21:06:56 +00:00
Capa Bot
f190b630b7 Sync capa-testfiles submodule 2021-04-30 21:06:48 +00:00
Capa Bot
614a6caee6 Sync capa rules submodule 2021-04-30 21:05:57 +00:00
Capa Bot
ddda87373d Sync capa rules submodule 2021-04-30 20:35:46 +00:00
Capa Bot
9ceebb9bb2 Sync capa-testfiles submodule 2021-04-30 17:13:44 +00:00
Willi Ballenthin
7d2bb6f61b changelog: document FLIRT #446 2021-04-30 08:54:32 -06:00
Willi Ballenthin
c7fe132389 Merge pull request #446 from fireeye/function-id-flirt
feature: match functions with FLIRT
2021-04-30 08:49:30 -06:00
William Ballenthin
404c7a7e88 tests: fix function id tests 2021-04-30 08:48:49 -06:00
William Ballenthin
9a2827935f sigs: add README with license 2021-04-30 08:45:41 -06:00
William Ballenthin
55b83fc2b5 tests: re-enable function id test 2021-04-30 08:37:38 -06:00
William Ballenthin
b89a29b997 freeze: use common args 2021-04-30 08:35:46 -06:00
Moritz
5aa7c57798 Merge pull request #536 from Ana06/ida7_6sp1
doc: document IDA 7.6sp1
2021-04-29 11:05:42 +02:00
Ana Maria Martinez Gomez
e46d1bbbfb doc: document IDA 7.6sp1
Service Pack 1 for IDA 7.6 includes a fix for the bug that broke capa
explorer. Document this as an alternative to installing the patch.
2021-04-29 11:00:12 +02:00
William Ballenthin
14abb7d4f6 pep8 2021-04-27 13:41:59 -06:00
William Ballenthin
b0c27f5890 setup: bump viv-utils dep v0.6.2 2021-04-27 13:29:45 -06:00
William Ballenthin
bd92933030 show-features: accept signatures or use default 2021-04-27 13:27:59 -06:00
William Ballenthin
249332a9dd lint: load default sigs 2021-04-27 13:22:45 -06:00
William Ballenthin
1a99ff8ccb main: remove old code 2021-04-27 13:12:39 -06:00
William Ballenthin
7373437317 pep8 2021-04-27 13:12:20 -06:00
William Ballenthin
4e7364f25b main: import flirt at top level 2021-04-27 13:11:05 -06:00
William Ballenthin
ce9fd73fa9 main: further document not analyzing workspace 2021-04-27 13:09:52 -06:00
William Ballenthin
9ca1a7ebb6 extractors: do cast-to-int correctly 2021-04-27 13:07:27 -06:00
William Ballenthin
e8457c7abf Merge branch 'function-id-flirt' of github.com:fireeye/capa into function-id-flirt 2021-04-27 12:34:26 -06:00
William Ballenthin
f4ba5a5eb9 setup: bump viv-utils 0.6.1 for more platform support 2021-04-27 12:33:44 -06:00
Moritz Raabe
fc126451a7 add signature files 2021-04-27 19:27:02 +02:00
William Ballenthin
89ad582af5 main: flirt: pat: ensure posix-style line endings 2021-04-27 11:05:21 -06:00
Capa Bot
e66d74764a Sync capa rules submodule 2021-04-27 15:02:51 +00:00
William Ballenthin
4962fcfcde ci: fix accidental merge conflict 2021-04-26 12:19:25 -06:00
William Ballenthin
582e45f72f Merge branch 'function-id-flirt' of github.com:fireeye/capa into function-id-flirt 2021-04-26 12:14:44 -06:00
William Ballenthin
6ec89baf26 pep8 2021-04-26 12:12:51 -06:00
William Ballenthin
76cd530a0f flirt: py3 2021-04-26 12:11:59 -06:00
William Ballenthin
f6a105bcc1 pep8 2021-04-26 12:09:39 -06:00
William Ballenthin
75eed82d33 main: clarify that get_workspace caller is responsible for saving 2021-04-26 12:08:20 -06:00
Capa Bot
fbe307d26a Sync capa rules submodule 2021-04-26 16:20:38 +00:00
Capa Bot
c4a0c3d54a Sync capa rules submodule 2021-04-26 16:18:28 +00:00
William Ballenthin
c79f461e39 Merge branch 'master' into function-id-flirt 2021-04-26 09:47:42 -06:00
Capa Bot
24cd301fa8 Sync capa-testfiles submodule 2021-04-26 14:53:44 +00:00
Willi Ballenthin
a32d609ead Merge pull request #534 from fireeye/dependabot/pip/black-21.4b0
build(deps-dev): bump black from 20.8b1 to 21.4b0
2021-04-26 08:45:10 -06:00
William Ballenthin
a0e045dc52 ci: use black/isort dep from setup.py
closes #535
2021-04-26 08:39:01 -06:00
William Ballenthin
3111593ab8 pep8 2021-04-26 08:34:36 -06:00
Capa Bot
75d9ff5fff Sync capa rules submodule 2021-04-26 12:26:25 +00:00
dependabot[bot]
42877b0b6e build(deps-dev): bump black from 20.8b1 to 21.4b0
Bumps [black](https://github.com/psf/black) from 20.8b1 to 21.4b0.
- [Release notes](https://github.com/psf/black/releases)
- [Changelog](https://github.com/psf/black/blob/master/CHANGES.md)
- [Commits](https://github.com/psf/black/commits)

Signed-off-by: dependabot[bot] <support@github.com>
2021-04-26 06:30:28 +00:00
Capa Bot
f54b697187 Sync capa rules submodule 2021-04-23 22:50:16 +00:00
Capa Bot
e4a001170c Sync capa-testfiles submodule 2021-04-23 22:49:23 +00:00
Willi Ballenthin
bb15023b0b Merge pull request #533 from fireeye/mr-tz-patch-1
Update installation.md
2021-04-22 14:40:55 -06:00
Moritz
54531ebf35 Update installation.md 2021-04-22 20:41:07 +02:00
Capa Bot
9257e326f3 Sync capa-testfiles submodule 2021-04-22 18:04:58 +00:00
Capa Bot
b59b83a86a Sync capa-testfiles submodule 2021-04-22 17:39:19 +00:00
Capa Bot
caec649a5d Sync capa rules submodule 2021-04-16 14:23:56 +00:00
Capa Bot
09d0286b1b Sync capa rules submodule 2021-04-14 18:35:29 +00:00
Ana María Martínez Gómez
1ebe9766c0 Merge pull request #530 from Ana06/masterv1-6-2
changelog: add v1.6.2
2021-04-14 10:44:57 +02:00
Capa Bot
3e3b1579c3 Sync capa rules submodule 2021-04-14 06:23:30 +00:00
Ana Maria Martinez Gomez
ec6b380acd changelog: add v1.6.2
The code of v1.6.2 is not included in the `master` branch, as it was
backported to `master-py2`. But users may expect to find all releases in
the CHANGELOG of the master branch.
2021-04-13 17:27:48 +02:00
Willi Ballenthin
5ceb515325 Merge pull request #528 from fireeye/williballenthin-patch-2
explorer: readme: document IDA 7.6 patch
2021-04-13 08:54:59 -06:00
Willi Ballenthin
8938744e3e Merge pull request #497 from fireeye/williballenthin-patch-1
ida: support 7.6
2021-04-13 08:54:51 -06:00
Willi Ballenthin
d0f6b47f58 changelog: #528 2021-04-13 08:35:10 -06:00
Willi Ballenthin
a07bcbff2e explorer: readme: document IDA 7.6 patch
closes #496
2021-04-13 08:33:37 -06:00
Moritz
3023634536 build using Py3.8 and test across more OSs (#506)
* build using Py3.8 and test across more OSs

* enable for release

* test builds on push to master
2021-04-13 15:42:58 +02:00
Moritz
a11d04e92b Merge pull request #525 from fireeye/dependabot/pip/smda-1.5.14
build(deps): bump smda from 1.5.13 to 1.5.14
2021-04-12 14:13:36 +02:00
dependabot[bot]
2140a3d762 build(deps): bump smda from 1.5.13 to 1.5.14
Bumps [smda](https://github.com/danielplohmann/smda) from 1.5.13 to 1.5.14.
- [Release notes](https://github.com/danielplohmann/smda/releases)
- [Commits](https://github.com/danielplohmann/smda/commits)

Signed-off-by: dependabot[bot] <support@github.com>
2021-04-12 06:32:25 +00:00
Willi Ballenthin
1f6debc6e0 Merge pull request #524 from fireeye/mr-tz-patch-1
Update pull_request_template.md
2021-04-09 15:03:13 -06:00
Moritz
eb5c705083 Update pull_request_template.md 2021-04-09 15:03:43 +02:00
Capa Bot
f01044e453 Sync capa rules submodule 2021-04-09 11:19:42 +00:00
Moritz
8ef3eb85a2 Merge pull request #523 from fireeye/auto-detect-sc-extension-2
move auto format check
2021-04-09 13:16:12 +02:00
Moritz Raabe
d1cd4ef259 move auto format check 2021-04-09 11:59:30 +02:00
Capa Bot
a8bef0d9c0 Sync capa rules submodule 2021-04-09 09:21:00 +00:00
Moritz
309a9abb8a Merge pull request #521 from fireeye/auto-detect-sc-extension
auto detect shellcode file extensions
2021-04-09 11:13:25 +02:00
Moritz
cc13a7681a Merge pull request #522 from fireeye/explorer/update-docs
updating capa explorer doc
2021-04-09 10:31:03 +02:00
Michael Hunhoff
503a723611 updating capa explorer doc 2021-04-08 14:06:23 -06:00
Moritz Raabe
998f4a6bad auto detect shellcode file extensions 2021-04-08 18:49:22 +02:00
Willi Ballenthin
1be3613063 changelog: describe #519 2021-04-08 09:10:14 -06:00
Willi Ballenthin
9ffbe5cd76 Merge pull request #519 from fireeye/dependabot/pip/ruamel-yaml-0.17.4
build(deps): bump ruamel-yaml from 0.17.0 to 0.17.4
2021-04-08 09:06:14 -06:00
Ana María Martínez Gómez
255d6ea176 Merge pull request #517 from Ana06/better-tag
ci: add capa release link to capa-rules tag
2021-04-08 10:49:07 +02:00
dependabot[bot]
628e2ef3f4 build(deps): bump ruamel-yaml from 0.17.0 to 0.17.4
Bumps [ruamel-yaml](https://sourceforge.net/p/ruamel-yaml/code/ci/default/tree) from 0.17.0 to 0.17.4.

Signed-off-by: dependabot[bot] <support@github.com>
2021-04-08 08:49:03 +00:00
Ana María Martínez Gómez
64465a7a31 Merge pull request #480 from Ana06/py3-only 2021-04-08 10:48:15 +02:00
Ana Maria Martinez Gomez
9d79baa96a ci: add capa release link to capa-rules tag
GitHub displays the tag's commit message if no description is
given, which is ugly. Use annotated tags, which include a message. Use
the release link as the message, as this is useful information.
2021-04-07 18:46:51 +02:00
Ana Maria Martinez Gomez
3013269a1c changelog: Update changelog
Add `drop Python 2 support` entry.
2021-04-07 18:24:52 +02:00
Ana Maria Martinez Gomez
bbff3016fe doc: Update Python 2 related documentation
Update documentation and code comments which mention Python 2.
2021-04-07 18:20:08 +02:00
Ana Maria Martinez Gomez
e9d190799e py3: use Python 3.6 to publish capa 2021-04-07 18:20:08 +02:00
Ana Maria Martinez Gomez
0465333aa4 py3: Python 3 knows about cp65001
Python 2 doesn't know about `cp65001`, but Python 3 does. Since Python
3.8, `cp65001` is an alias for `utf_8`; before Python 3.8 it
used to cause some problems:
https://bugs.python.org/issue36778
Keep this code to ensure the same behavior across all Python versions.
2021-04-07 18:20:08 +02:00
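The code kept by this commit presumably maps `cp65001` onto UTF-8 so decoding behaves the same on every supported interpreter; a common way to do that looks like this (a sketch, not necessarily capa's exact code):

    import codecs

    # Python 3.8+ already aliases cp65001 to utf-8; on older interpreters,
    # register a codec lookup so the name keeps working.
    codecs.register(lambda name: codecs.lookup("utf-8") if name.lower() == "cp65001" else None)

    assert b"capa".decode("cp65001") == "capa"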
Ana Maria Martinez Gomez
28406dafa1 py3: codecs.decode doesn't raise TypeError
`codecs.decode` doesn't raise `TypeError` in Python 3. Just obey the
comment!
2021-04-07 18:20:08 +02:00
Ana Maria Martinez Gomez
73a49c6a1f py3: remove rstrip("L") needed in Python 2
In Python 3, long integers are not formatted with a trailing `L`, so
this code is no longer needed.
2021-04-07 18:20:08 +02:00
Ana Maria Martinez Gomez
4028171f59 py3: use python3 in shebang 2021-04-07 18:20:08 +02:00
Ana Maria Martinez Gomez
5d341ba078 py3: remove six
As we are not supporting Python 2 any longer, we can stop using six and
use the equivalent Python 3 method instead.
2021-04-07 18:20:07 +02:00
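Typical replacements when dropping six look like the following; the commit does not list which helpers capa actually used, so these pairs are illustrative:

    d = {"function": "CreateFileA"}

    # six.iteritems(d)                 ->  d.items()
    # isinstance(x, six.string_types)  ->  isinstance(x, str)
    # six.integer_types                ->  (int,)
    for key, value in d.items():
        assert isinstance(key, str) and isinstance(value, str)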
Ana Maria Martinez Gomez
dfb7cf4888 py3: set and document env
Document how to use env now that we are Python3 only. Adapt
`scripts/ci.sh`.
2021-04-07 18:20:07 +02:00
Ana Maria Martinez Gomez
d640c57e29 py3: Update setup.py
Require Python 3.6+ in setup.py
2021-04-07 18:20:07 +02:00
Ana Maria Martinez Gomez
c0d6468347 py3: Remove Python 2 tests
Tests don't need to support Python 2 any longer. Do not run tests with
Python 2.
2021-04-07 18:20:07 +02:00
Ana Maria Martinez Gomez
058b61b10c py3: prevent that capa is run with Python2
Raise an exception from main if capa is run with Python < 3.6 to avoid
any silly issues reported to GitHub.
2021-04-07 18:20:07 +02:00
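A minimal sketch of that guard; the exception type and message are chosen for illustration and may differ from capa's actual code:

    import sys

    def ensure_python3():
        # fail fast with a readable error instead of a confusing traceback
        # when capa is started with an unsupported interpreter
        if sys.version_info < (3, 6):
            raise RuntimeError("capa requires Python 3.6 or newer")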
Ana Maria Martinez Gomez
aa4d6305af py3: remove py2/3 branches
Remove `if-else`s with a condition like `sys.version_info >= (3, 0)`.
2021-04-07 18:20:06 +02:00
Ana María Martínez Gómez
407ecab162 Merge pull request #515 from Ana06/v1-6-1 2021-04-07 18:03:56 +02:00
Ana Maria Martinez Gomez
cbc1f57b21 changelog: add master (unreleased) to CHANGELOG
Add placeholder for master (unreleased changes) in CHANGELOG. Document
this in the release checklist.
2021-04-07 17:50:19 +02:00
Ana Maria Martinez Gomez
374a9e4337 changelog: v1.6.1
This release includes several bug fixes, such as a vivisect fix for a bug that prevented capa from working on Windows with Python 3. It also adds 17 new rules and a number of improvements to the rules and the IDA rule generator. We appreciate everyone who opened issues, provided feedback, and contributed code and rules.

This is the very last capa release that supports Python 2.
2021-04-07 17:50:16 +02:00
Capa Bot
83e2f80d10 Sync capa-testfiles submodule 2021-04-07 13:53:32 +00:00
Ana Maria Martinez Gomez
576211c4ef version: bump to v1.6.1 2021-04-07 11:11:43 +02:00
Ana María Martínez Gómez
31fc5a31d6 Merge pull request #513 from Ana06/ping-dependencies
setup: pin dependencies
2021-04-07 10:19:04 +02:00
Ana Maria Martinez Gomez
eb08943d4f setup: pin dependencies
Pin all dependencies in setup.py to the currently used versions so that
a new release cannot break capa without being noticed.

Closes https://github.com/fireeye/capa/issues/498
2021-04-07 09:40:13 +02:00
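In setup.py terms, pinning means exact `==` specifiers instead of open-ended ranges; a sketch with package names taken from elsewhere in this log and versions that are purely illustrative:

    from setuptools import setup

    setup(
        name="capa",
        install_requires=[
            # exact pins so a new upstream release cannot break capa unnoticed
            "vivisect==1.0.1",
            "tqdm==4.60.0",
            "ruamel.yaml==0.17.4",
        ],
    )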
Ana María Martínez Gómez
c36ed71353 Merge pull request #470 from fireeye/ci/test-windows 2021-04-07 09:38:34 +02:00
Ana Maria Martinez Gomez
fa52dbcf84 ci: skip smda tests in win32
Due to a bug, two `test_smda_features` tests are failing:
https://github.com/danielplohmann/smda/issues/20

Disable them until the bug is fixed.
2021-04-06 21:53:22 +02:00
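Disabling a test only on Windows is typically expressed with a pytest marker like the one below; this is a sketch, and the actual tests may use xfail or parametrization instead:

    import sys

    import pytest

    @pytest.mark.skipif(sys.platform == "win32", reason="https://github.com/danielplohmann/smda/issues/20")
    def test_smda_features():
        ...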
Ana Maria Martinez Gomez
d412e66cea ci: do not test Python 2.7 with Windows
The Python 2.7 tests fail in Windows with vivisect because the Windows
filesystem encoding is not UTF-8. This shouldn't be a problem when using
capa as the given filename most likely uses the same encoding, but we
force UTF-8 in our tests. As we are planning to remove Python 2 support,
it is not worthwhile to invest time in making these tests work. Instead,
test Python 2.7 only in Ubuntu.
2021-04-06 21:39:01 +02:00
Moritz Raabe
efe50d3313 ci: test on Windows and macOS
Run the tests on Windows and macOS to avoid OS-related failures.

closes #460
2021-04-06 21:38:07 +02:00
Ana María Martínez Gómez
1062ba995e doc: add milestones link to release checklist
This makes it a bit easier to check whether all milestoned issues/PRs are addressed, or to reassign them to a new milestone.

I am committing directly to master as this is a minor change which doesn't need review.
2021-04-06 10:21:43 +02:00
Ana María Martínez Gómez
7f93bd5b59 Merge pull request #512 from fireeye/williballenthin-patch-2
setup: bump viv to v1.0.1
2021-04-06 10:17:44 +02:00
Willi Ballenthin
275d170680 setup: bump viv to v1.0.1 2021-04-05 21:22:17 -06:00
Moritz
6d7e10b804 Merge pull request #511 from fireeye/ci/fix-typos
fix submodule typos
2021-04-05 13:13:41 +02:00
Moritz Raabe
25944864f7 fix submodule typos 2021-04-05 12:52:08 +02:00
Capa Bot
5e84a16eba Sync capa rules submodule 2021-04-01 16:44:59 +00:00
Capa Bot
244ec163a3 Sync capa-testfiles submodule 2021-04-01 16:44:11 +00:00
Capa Bot
dabd2174d4 Sync capa rules submodule 2021-03-29 16:25:18 +00:00
Moritz
f8d2b41a86 Merge pull request #495 from fireeye/gh/add-pr-template
add PR template
2021-03-29 17:31:05 +02:00
Capa Bot
902972a1ee Sync capa-testfiles submodule 2021-03-29 12:49:24 +00:00
Capa Bot
bddb5fbd2f Sync capa rules submodule 2021-03-26 11:17:46 +00:00
Capa Bot
adfd769963 Sync capa-testfiles submodule 2021-03-26 11:00:35 +00:00
Capa Bot
c75e70ec74 Sync capa-testfiles submodule 2021-03-26 11:00:15 +00:00
Moritz
6118183105 Merge pull request #504 from fireeye/mr-tz-patch-1
Update setup.py
2021-03-26 11:58:52 +01:00
Moritz
da755d8411 Update setup.py 2021-03-26 11:44:04 +01:00
mike-hunhoff
742e03d90f Merge pull request #503 from fireeye/explorer/update-readme
updating capa explorer README
2021-03-25 14:51:21 -06:00
Capa Bot
744228a03e Sync capa rules submodule 2021-03-25 20:48:41 +00:00
Michael Hunhoff
5d1c6f54cd updating capa explorer README 2021-03-25 14:30:28 -06:00
mike-hunhoff
0a3dd4600b Merge pull request #468 from fireeye/features/support-string-values-special-chars
add support for string features with special characters e.g. '\n'
2021-03-25 12:58:00 -06:00
Michael Hunhoff
0289891d07 merging upstream 2021-03-25 12:43:59 -06:00
Michael Hunhoff
87cdf837e6 merging upstream 2021-03-25 12:42:36 -06:00
Capa Bot
ea4c7d6403 Sync capa rules submodule 2021-03-25 18:37:22 +00:00
Capa Bot
2807549564 Sync capa rules submodule 2021-03-25 07:21:21 +00:00
Capa Bot
c0fe96cec6 Sync capa-testfiles submodule 2021-03-25 07:17:41 +00:00
mike-hunhoff
8c967ac237 Merge pull request #500 from fireeye/explorer/improve-rulegen-search
explorer: add checks to validate matched data when searching
2021-03-24 15:55:34 -06:00
Michael Hunhoff
c48b46e932 explorer: adding checks to validate matched data when searching 2021-03-24 15:33:20 -06:00
mike-hunhoff
49d1af7798 improve unit tests for strings containing special characters
Co-authored-by: Moritz <mr-tz@users.noreply.github.com>
2021-03-24 13:22:18 -06:00
mike-hunhoff
d44fd008ae improve unit tests for strings containing special characters
Co-authored-by: Moritz <mr-tz@users.noreply.github.com>
2021-03-24 13:22:04 -06:00
Moritz Raabe
c0c9ea3403 incorporate Ana's feedback 2021-03-24 09:22:40 +01:00
Michael Hunhoff
21359da766 updating test for strings with special characters 2021-03-23 16:02:47 -06:00
Michael Hunhoff
e51c79c241 adding lint for incorrect rule string format, refined rendering for strings 2021-03-23 15:55:48 -06:00
Willi Ballenthin
e22113c20d ida: support 7.6
closes #496
2021-03-23 08:43:33 -06:00
Capa Bot
195bae903f Sync capa rules submodule 2021-03-23 12:25:20 +00:00
Moritz Raabe
5aff21a9a1 add PR template 2021-03-23 10:52:01 +01:00
Ana María Martínez Gómez
6f289d1b8e Merge pull request #476 from Ana06/tag-workflow 2021-03-23 09:54:59 +01:00
Moritz
71b21aec59 Merge pull request #492 from fireeye/ignore-gitfiles
rule loading: ignore files starting with .git
2021-03-23 08:16:29 +01:00
Capa Bot
42a87d4eaa Sync capa-testfiles submodule 2021-03-23 07:14:58 +00:00
Capa Bot
51d125642f Sync capa rules submodule 2021-03-23 07:14:21 +00:00
mike-hunhoff
ddebf2e1cb Merge pull request #493 from fireeye/enhance/472
rule generator: support subscope rules
2021-03-22 17:28:43 -06:00
Michael Hunhoff
7f3e8f1fb1 adding support to match subscope rules and auto insert child statements when creating a new basic block subscope 2021-03-22 17:12:13 -06:00
Ana María Martínez Gómez
ab7dbcd2e4 Merge pull request #491 from fireeye/williballenthin-patch-3 2021-03-22 19:16:49 +01:00
Ana Maria Martinez Gomez
7e5cbddf5d doc: document release process
Add a release checklist.

Closes https://github.com/fireeye/capa/issues/184
2021-03-22 19:14:02 +01:00
Moritz Raabe
44f517c20d rule loading: ignore files starting with .git 2021-03-22 18:11:29 +01:00
Michael Hunhoff
7bf8c6e3a1 merging upstream 2021-03-22 10:33:36 -06:00
Michael Hunhoff
31ea683335 merge upstream 2021-03-22 09:53:07 -06:00
Willi Ballenthin
29d8f1fd27 ci: tests: pin OS version 2021-03-22 09:51:20 -06:00
Willi Ballenthin
a6c472bb2a ci: publish: pin OS version 2021-03-22 09:50:47 -06:00
Willi Ballenthin
b880d419a3 ci: build: pin OS versions 2021-03-22 09:50:04 -06:00
Capa Bot
a2ff87af8a Sync capa rules submodule 2021-03-22 15:45:10 +00:00
Willi Ballenthin
5b9c577380 Merge pull request #489 from fireeye/dependabot/pip/viv-utils-0.6.0
Bump viv-utils from 0.5.0 to 0.6.0
2021-03-22 09:39:52 -06:00
Capa Bot
4775e124db Sync capa rules submodule 2021-03-22 09:02:35 +00:00
Moritz
c243158d7c Merge pull request #486 from fireeye/fix/eol-improvements
EOL improvements
2021-03-22 09:58:29 +01:00
Capa Bot
8afc3f46f6 Sync capa rules submodule 2021-03-22 08:41:21 +00:00
dependabot[bot]
8b5dc54397 Bump viv-utils from 0.5.0 to 0.6.0
Bumps [viv-utils](https://github.com/williballenthin/viv-utils) from 0.5.0 to 0.6.0.
- [Release notes](https://github.com/williballenthin/viv-utils/releases)
- [Commits](https://github.com/williballenthin/viv-utils/compare/v0.5.0...v0.6.0)

Signed-off-by: dependabot[bot] <support@github.com>
2021-03-22 06:20:47 +00:00
Capa Bot
1dbb34df9f Sync capa-testfiles submodule 2021-03-21 19:28:58 +00:00
mike-hunhoff
9383f0bc77 Merge pull request #474 from fireeye/explorer/fix-471
explorer: adding support for multi-line tab and SHIFT + Tab
2021-03-19 19:11:14 -06:00
Willi Ballenthin
900a03c172 setup: bump viv-utils version for better FLIRT matching 2021-03-19 11:15:15 -06:00
Moritz Raabe
13306b71e0 add file 2021-03-19 09:40:44 +01:00
Moritz Raabe
8719a23de4 dos2unix 2021-03-19 09:40:44 +01:00
Moritz Raabe
7e0b5236af better deal with CRLF/LF issues 2021-03-19 09:40:43 +01:00
Moritz Raabe
c7798b3254 ensure LF end of line 2021-03-19 09:40:43 +01:00
Willi Ballenthin
7d668550f5 Merge pull request #485 from fireeye/ci/ensure-lf-eol
ensure LF end of line
2021-03-18 14:41:13 -06:00
Capa Bot
c945eaf804 Sync capa rules submodule 2021-03-18 20:41:05 +00:00
Moritz Raabe
1bfe0e0874 ensure LF end of line 2021-03-18 20:15:23 +01:00
Capa Bot
153c6a7b01 Sync capa-testfiles submodule 2021-03-18 18:04:33 +00:00
Ana Maria Martinez Gomez
30a83fa382 doc: Fix broken link in README
Introduced in https://github.com/fireeye/capa/pull/478
2021-03-16 16:37:33 +01:00
Willi Ballenthin
c0bcefe0bf Merge pull request #479 from Ana06/viv-utils5
setup: bump viv-utils to 0.5.0
2021-03-16 07:02:43 -06:00
Ana Maria Martinez Gomez
5d16a77891 ci: tag capa-rules on release
Add GitHub Action to tag capa-rules when releasing capa. The used tag
name is the same as the one in capa.
2021-03-16 12:45:02 +01:00
Ana Maria Martinez Gomez
cd01a01894 setup: bump viv-utils to 0.5.0
In viv-utils `getWorkspace` raises `IncompatibleVivVersion` on Python 3
when `vw.loadWorkspace(viv_file)` raises `UnicodeDecodeError`.

Fixes https://github.com/fireeye/capa/issues/469

As we use the same version in py2 and py3, define the viv-utils
requirement once.
2021-03-16 10:51:50 +01:00
Willi Ballenthin
df36bb9f35 Merge pull request #478 from Ana06/badges
doc: Improve README badges
2021-03-15 14:42:57 -06:00
William Ballenthin
8a3f5e423b setup: bump viv-utils version 2021-03-15 13:39:44 -06:00
William Ballenthin
177605aaf8 flirt: enable only on py3, ignore otherwise 2021-03-15 13:38:29 -06:00
Ana María Martínez Gómez
030893e125 Merge pull request #475 from Ana06/incompatible-viv
changelog: document incompatibility of viv files
2021-03-15 17:30:17 +01:00
Ana Maria Martinez Gomez
b2ab8ab54c doc: Improve README badges
- Add a link to the `PyPI - Python Version` badge. Otherwise it opens
the image when clicking on it, which is inconsistent with the other
labels. I arrived too late to point this out in:
https://github.com/fireeye/capa/pull/477
- Add a release badge with the latest release version. This may help users
realize that a new version has been released.
- Add downloads badge.
- Order labels by color.

Closes https://github.com/fireeye/capa/issues/196
2021-03-15 16:47:15 +01:00
Willi Ballenthin
12eb1b96de Merge pull request #477 from fireeye/mr-tz-patch-1
Update README.md with Python version badge
2021-03-15 08:35:27 -06:00
Moritz
cff7d4bad4 Update README.md 2021-03-15 11:54:11 +01:00
Ana Maria Martinez Gomez
a31c616a21 changelog: document incompatibility of viv files
`.viv` files (generated by vivisect) are not compatible between Python 2
and Python 3. This causes capa to raise an `UnicodeDecodeError`
exception and should be documented better. I'll add this change to the
release notes after the review.

Related to https://github.com/fireeye/capa/issues/469
2021-03-15 10:26:32 +01:00
Michael Hunhoff
3d2b4dcc26 adding support for multi-line tab and SHIFT + Tab 2021-03-11 17:13:43 -07:00
Michael Hunhoff
c7d24ee290 adding support for string features with special characters e.g. '\n' 2021-03-10 13:56:54 -07:00
mike-hunhoff
06c958f081 Merge pull request #465 from fireeye/explorer/fix-463
explorer: improve settings modification
2021-03-10 11:30:23 -07:00
Michael Hunhoff
b8efe585d5 fix 463, improve settings UI 2021-03-09 14:56:44 -07:00
Willi Ballenthin
e7eb2152cc Merge pull request #464 from fireeye/explorer/fix-462
fix 462
2021-03-09 12:13:54 -07:00
Michael Hunhoff
e1a8641399 fixes 462, default to empty string when accessing rule path stored in ida_settings 2021-03-09 12:09:35 -07:00
Capa Bot
cffac62e68 Sync capa rules submodule 2021-03-09 10:00:48 +00:00
Ana María Martínez Gómez
7a8c0572e9 Merge pull request #455 from Ana06/v1-6-0 2021-03-09 10:48:01 +01:00
Ana Maria Martinez Gomez
5596d5f8b2 version: bump to v1.6.0 2021-03-09 10:36:26 +01:00
Ana Maria Martinez Gomez
06fd02cd61 changelog: v1.6.0
This release adds the capa explorer rule generator plugin for IDA Pro,
vivisect support for Python 3 and 12 new rules. We appreciate everyone
who opened issues, provided feedback, and contributed code and rules.
Thank you also to the vivisect development team (rakuy0, atlas0fd00m)
for the Python 3 support (v1.0.0) and the fixes for Python 2
(v0.2.1). This is the last capa release that supports Python 2. The next
release will be Python 3 only.
2021-03-09 10:36:26 +01:00
Capa Bot
6b9d1047cf Sync capa rules submodule 2021-03-08 19:39:47 +00:00
Ana Maria Martinez Gomez
a7b3fd72ca changelog: v1.5.1 2021-03-08 20:09:31 +01:00
Ana María Martínez Gómez
dd3deb2358 Merge pull request #454 from fireeye/mr-tz-patch-1
setup: bump viv to 0.2.1
2021-03-08 11:36:18 +01:00
Moritz
c99fce3183 setup: bump viv to 0.2.1 2021-03-08 09:07:04 +01:00
William Ballenthin
4db6227d84 ci: build: test exe: run in debug mode to see messages 2021-03-05 15:49:31 -07:00
William Ballenthin
30e1d409dd pyinstaller: package default signatures into standalone exe 2021-03-05 15:46:23 -07:00
William Ballenthin
ff8a6f1d57 main: use default signature set found in source directory 2021-03-05 15:45:56 -07:00
William Ballenthin
9b5d6f8df0 ci: enable test building of standalone exe in CI 2021-03-05 15:35:42 -07:00
William Ballenthin
1e8919c6e6 pep8 2021-03-05 15:27:44 -07:00
William Ballenthin
1ee7b7b856 merge master 2021-03-05 15:23:47 -07:00
Willi Ballenthin
3e55581bf7 Merge pull request #450 from fireeye/feature-refactor-args
refactor common cli argument handling
2021-03-05 15:07:50 -07:00
Willi Ballenthin
dfbe1418d4 Merge pull request #452 from fireeye/feature-py3-pyinstaller
pyinstaller: update for py3/pyinstaller 4.2
2021-03-05 15:06:47 -07:00
William Ballenthin
7671fca373 pep8 2021-03-05 13:27:16 -07:00
William Ballenthin
c01dde3fb2 ci: disable test building of pyinstaller upon push 2021-03-05 13:26:15 -07:00
William Ballenthin
bb17adeda2 pyinstaller: smda: collect capstone shared library 2021-03-05 13:23:15 -07:00
Willi Ballenthin
9f743f1c59 main: fix reference error 2021-03-05 13:19:54 -07:00
William Ballenthin
ee85c929da pyinstaller: install capstone for smda 2021-03-05 12:59:21 -07:00
William Ballenthin
6f9c660082 ci: test pyinstaller CI 2021-03-05 12:55:19 -07:00
William Ballenthin
e02bb7f5a1 pep8 2021-03-05 12:53:50 -07:00
William Ballenthin
9aaaa044da ci: use py3.9 and pyinstaller 4.2 to build standalone binaries 2021-03-05 12:52:38 -07:00
William Ballenthin
54da8444df pyinstaller: update for py3/pyinstaller 4.2
closes #451
2021-03-05 12:40:21 -07:00
William Ballenthin
063e1229bc pep8 2021-03-05 11:10:12 -07:00
William Ballenthin
eacd70329a merge from master, sorry 2021-03-05 11:06:40 -07:00
William Ballenthin
3a1d5d068c scripts: use common argument handler
closes #449
2021-03-05 10:58:40 -07:00
William Ballenthin
f2749d884f main: factor out common cli argument handling
ref #449
2021-03-05 10:57:39 -07:00
William Ballenthin
bdea61f93b scripts: remove old migration script 2021-03-05 10:57:14 -07:00
William Ballenthin
6006e87c5e pep8 2021-03-05 09:40:43 -07:00
William Ballenthin
1e8161b24e setup: bump viv-utils for FLIRT 2021-03-05 09:39:47 -07:00
William Ballenthin
a3e6d1b611 scripts: add helper to show function id matches 2021-03-05 08:38:02 -07:00
William Ballenthin
1a93999cc0 capa: main: factor loading of flirt signatures into its own routine 2021-03-05 08:34:33 -07:00
William Ballenthin
53684adbdd sigs: add license to test files 2021-03-04 18:07:34 -07:00
William Ballenthin
d3caecc551 pep8 2021-03-04 18:06:06 -07:00
William Ballenthin
004ddb3e66 main: load gzip compressed .pat files 2021-03-04 18:04:46 -07:00
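A minimal sketch of how such loading could work, assuming gzip is detected by its magic bytes; the helper name is illustrative and not necessarily capa's implementation:

```python
import gzip

GZIP_MAGIC = b"\x1f\x8b"

def read_pat_file(path):
    # hypothetical helper: transparently handle both plain and gzip-compressed
    # FLIRT .pat files by sniffing the gzip magic bytes
    with open(path, "rb") as f:
        data = f.read()
    if data.startswith(GZIP_MAGIC):
        data = gzip.decompress(data)
    return data.decode("utf-8")
```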
William Ballenthin
20894124e6 tests: test FLIRT matching 2021-03-04 15:50:05 -07:00
William Ballenthin
22c4e3b8c2 viv: cleanup flirt changes 2021-03-04 15:46:14 -07:00
William Ballenthin
c2a4629c62 scripts: add cli arguments to specify signatures 2021-03-04 15:04:33 -07:00
William Ballenthin
c0f4fe6867 merge master 2021-03-04 14:59:17 -07:00
William Ballenthin
f2c95568bd main: add FLIRT signature matching configuration 2021-03-04 14:52:22 -07:00
William Ballenthin
358aab85e7 viv: move FLIRT matching into viv-utils 2021-03-04 14:51:40 -07:00
Ana María Martínez Gómez
829274cd5e Merge pull request #421 from Ana06/viv-py3 2021-03-03 21:40:08 +01:00
Ana Maria Martinez Gomez
c522f5094a Use -j option in test_backend_option
Use the `-j` option in `test_backend_option` to check the extractor and that
rules have been extracted. This way we don't need to check whether a specific
rule matches, only that at least one rule matches.
2021-03-03 18:33:20 +01:00
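A hedged sketch of such a check (not the actual test): invoke the capa CLI with `-j` so the output is machine-readable JSON, then assert only that some rule matched. The `--backend` flag spelling and the JSON `rules` key are assumptions here.

```python
import json
import subprocess

def check_backend_extracts_rules(sample, backend):
    # run capa with JSON output (-j) and the selected backend (flag name assumed)
    proc = subprocess.run(
        ["capa", "-j", "--backend", backend, sample],
        capture_output=True, text=True, check=True,
    )
    doc = json.loads(proc.stdout)
    # assert that at least one rule matched, rather than any specific rule
    assert len(doc.get("rules", {})) > 0
```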
Ana Maria Martinez Gomez
29b6772721 Test backend option
As `get_extractor` now returns only the vivisect extractor, `test_main` is not
run for smda. Test that capa works with all backends. This doesn't verify that
the selected backend is actually called.
2021-03-03 17:36:51 +01:00
Ana Maria Martinez Gomez
695b5b50ab Remove va not None check
Instead of checking whether `va` is `None` in `get_section()`, we should avoid
calling this function with `None`. This has been fixed in the following
PR, so the check is no longer needed:
https://github.com/fireeye/capa/pull/442
2021-03-03 17:36:51 +01:00
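A minimal before/after sketch of the refactor, with illustrative names (`get_section` and `handle_address` here are stand-ins, not capa's actual functions):

```python
def get_section(vw, va):
    # after the refactor get_section() assumes a valid virtual address;
    # it no longer carries an `if va is None: return None` guard
    return vw.getSegment(va)

def handle_address(vw, va):
    # the None check moves to the caller, so get_section() is never
    # invoked with va=None in the first place
    if va is None:
        return None
    return get_section(vw, va)
```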
Ana Maria Martinez Gomez
42af7b2d8b Use default backend instead of None
Set the `backend` variable to the default backend instead of `None`. The
`backend` variable is needed in Python 2 as `args.backend` is only set in
Python 3. The value of the `backend` variable is ignored in Python 2,
though, so the default value is not actually used there.

Co-authored-by: William Ballenthin <william.ballenthin@fireeye.com>
2021-03-03 17:36:51 +01:00
Ana Maria Martinez Gomez
079a9b5204 Remove backend option from Python 2
Only provide the backend option in Python 3, as there is only one
backend in Python 2. This keeps the help text simpler.
2021-03-03 17:36:51 +01:00
Ana Maria Martinez Gomez
e5048fd3ac Add missing va parameter to SegmentationViolation
The `envi.SegmentationViolation()` was missing the `va` required
parameter. This has started failing now, because calling
`vw.getSegment(0x4BA190)` for the `tests/data/mimikatz.exe_` produces
different results in Python 2 and Python 3. It returns `None` in Python
3 while the output in Python 2 is:
`(4939776, 16840, '.data', 'mimikatz')`

I have reported the issue to vivisect:
https://github.com/vivisect/vivisect/issues/370
2021-03-03 17:36:51 +01:00
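A hedged sketch of the fix, assuming a vivisect workspace `vw`; in some vivisect versions the exception is reachable as `envi.SegmentationViolation` rather than `envi.exc.SegmentationViolation`:

```python
from envi.exc import SegmentationViolation

def require_segment(vw, va):
    # vw.getSegment() may return None (observed for 0x4BA190 in mimikatz.exe_
    # under Python 3); raise with the required `va` argument instead of
    # constructing SegmentationViolation() without it
    seg = vw.getSegment(va)
    if seg is None:
        raise SegmentationViolation(va)
    start, size, name, filename = seg
    return start, size, name, filename
```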
Ana Maria Martinez Gomez
18eaea95fa Fix TypeError exception in Python3
`va` can be None and this causes Python 3 to raise a TypeError
exception. This is caused by the following breaking change in Python3:
> The ordering comparison operators (<, <=, >=, >) raise a TypeError
> exception when the operands don’t have a meaningful natural ordering.

This didn't fail in the previously tried vivisect version (master from
one week ago, not the release). This may have been caused by a bug in
vivisect that has been fixed.
2021-03-03 17:36:51 +01:00
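A small illustration of the behavior change and the resulting guard; the variable names are illustrative:

```python
va = None
limit = 0x1000

# Python 2: `None < 0x1000` silently evaluates to True.
# Python 3: it raises TypeError ("'<' not supported between instances
# of 'NoneType' and 'int'"), so the comparison must be guarded.
in_range = va is not None and va < limit
print(in_range)  # False, without raising
```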
Ana Maria Martinez Gomez
a4a0a56448 Vivisect 1.0.0 released
Vivisect 1.0.0 (Python 3) has been released, so we do not need to link
to my GitHub branch anymore.

https://pypi.org/project/vivisect
2021-03-03 17:36:50 +01:00
Ana Maria Martinez Gomez
40ed2f39a4 Make backend a required parameter in get_extractor
Make the `backend` argument required in the `get_extractor` internal
routine. Specify a backend in the scripts which call this function. Add
a CLI backend option in capa/features/freeze.py as well.
2021-03-03 17:36:50 +01:00
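A simplified, hypothetical sketch of the signature change (not capa's exact `get_extractor`): `backend` no longer has a default value, so every caller has to state which backend it wants.

```python
def get_extractor(path: str, backend: str):
    # hypothetical stand-in: the important part is that `backend` is now a
    # required parameter, so callers and scripts must pass it explicitly
    if backend not in ("vivisect", "smda"):
        raise ValueError("unknown backend: %s" % backend)
    return "extractor-for-%s" % backend  # placeholder for the real extractor object
```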
Ana Maria Martinez Gomez
2859b037aa Use constants for backend option
Use constants instead of string literals for the backend option.
2021-03-03 17:36:50 +01:00
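A minimal sketch of what such constants might look like; the exact names and values are assumed:

```python
# assumed constant names/values for the backend option
BACKEND_VIV = "vivisect"
BACKEND_SMDA = "smda"
```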
Ana Maria Martinez Gomez
bbb7878e0a Enable tests for vivisect in Python3
Now that we support vivisect as a backend in Python 3, we should test it.
2021-03-03 17:36:50 +01:00
Ana Maria Martinez Gomez
fc438866ec Add option to select the backend in Py3
Now we have two working backends in Python3! Add an option to select
which one to use. With this code, vivisect is the default backend, but
this is really easy to change. We could do some analysis to see whether smda
performs better than vivisect once the vivisect implementation matures.
2021-03-03 17:36:50 +01:00
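A hedged sketch of the CLI option, using argparse with vivisect as the default; the flag spelling, help text, and constant names are assumptions:

```python
import argparse

BACKEND_VIV = "vivisect"
BACKEND_SMDA = "smda"

parser = argparse.ArgumentParser(description="capa backend selection (sketch)")
parser.add_argument(
    "-b", "--backend",
    choices=(BACKEND_VIV, BACKEND_SMDA),
    default=BACKEND_VIV,  # vivisect is the default backend
    help="select the analysis backend to use",
)

args = parser.parse_args(["--backend", BACKEND_SMDA])
print(args.backend)  # "smda"
```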
Ana Maria Martinez Gomez
2da2f498a2 Add script to compare vivisect Python 2 vs 3
Compare the performance of vivisect Python 2 vs 3 by counting the number
of features of each type extracted for every binary in `tests/data`.
Render the ones that perform badly (below a threshold of 98) and the total
performance. Also render the running time per binary for both Python 2 and 3.

From this result, it seems that vivisect behaves properly with Python3.
2021-03-03 17:36:50 +01:00
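A hedged sketch of the core comparison, assuming per-feature-type counts have already been collected for each Python version and interpreting the "98" threshold from the commit message as 98%; the input format and function name are assumptions:

```python
from collections import Counter

THRESHOLD = 0.98  # flag feature types where Python 3 extracts < 98% of Python 2's count

def compare_feature_counts(py2_counts, py3_counts):
    underperforming = {}
    for feature_type, n2 in py2_counts.items():
        n3 = py3_counts.get(feature_type, 0)
        if n2 and (n3 / n2) < THRESHOLD:
            underperforming[feature_type] = (n2, n3)
    return underperforming

print(compare_feature_counts(
    Counter({"api": 100, "string": 50}),
    Counter({"api": 95, "string": 50}),
))  # {'api': (100, 95)}
```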
Ana Maria Martinez Gomez
29dffffe1b Python3 support for vivisect
Vivisect has moved to Python 3. Allow running vivisect with Python 3 in
capa.

I am using the following version of vivisect (which includes fixes for
some bugs I have found and some open PRs in vivisect):
https://github.com/Ana06/vivisect/tree/py-3
2021-03-03 17:36:49 +01:00
Capa Bot
1ecaad5413 Sync capa rules submodule 2021-03-02 15:06:24 +00:00
Willi Ballenthin
cd56d672c0 Merge pull request #442 from fireeye/williballenthin-patch-2
viv: ignore empty branch targets
2021-03-01 08:43:26 -07:00
Willi Ballenthin
68aed3c190 insn: better document when branch va may be none 2021-02-28 23:03:08 -07:00
William Ballenthin
f16ecd837e viv: flirt: add more documentation 2021-02-26 05:02:10 -07:00
Willi Ballenthin
68fcc03d5c viv: ignore empty branch targets
but what does this really mean? why would `getBranches` return `None`?

closes #441
2021-02-25 13:34:59 -07:00
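A hedged sketch of the guard, assuming `insn` is an envi opcode whose `getBranches()` yields `(target_va, flags)` pairs in which the target may be `None` (e.g. for indirect branches whose destination isn't statically known):

```python
def get_static_branch_targets(insn):
    targets = []
    for bva, bflags in insn.getBranches():
        if bva is None:
            # ignore empty branch targets rather than comparing or dereferencing None
            continue
        targets.append(bva)
    return targets
```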
William Ballenthin
bfcae0e754 extractors: viv: match flirt signatures [wip] 2021-02-25 12:21:27 -07:00
William Ballenthin
1b2c8880ee capa: extractors: sketch API extension to support function id 2021-02-25 12:20:29 -07:00
Ana Maria Martinez Gomez
fa7d58d01a Add missing va parameter to SegmentationViolation
The `envi.SegmentationViolation()` was missing the `va` required
parameter. This has started failing now, because calling
`vw.getSegment(0x4BA190)` for the `tests/data/mimikatz.exe_` produces
different results in Python 2 and Python 3. It returns `None` in Python
3 while the output in Python 2 is:
`(4939776, 16840, '.data', 'mimikatz')`

I have reported the issue to vivisect:
https://github.com/vivisect/vivisect/issues/370
2021-02-25 11:20:45 +01:00
Ana Maria Martinez Gomez
ec558f377a Fix TypeError exception in Python3
`va` can be None and this causes Python 3 to raise a TypeError
exception. This is caused by the following breaking change in Python3:
> The ordering comparison operators (<, <=, >=, >) raise a TypeError
> exception when the operands don’t have a meaningful natural ordering.

This didn't fail in the previously tried vivisect version (master from
one week ago, not the release). This may have been caused by a bug in
vivisect that has been fixed.
2021-02-25 10:15:49 +01:00
Ana Maria Martinez Gomez
186eba7197 Vivisect 1.0.0 released
Vivisect 1.0.0 (Python 3) has been released, so we do not need to link
to my GitHub branch anymore.

https://pypi.org/project/vivisect
2021-02-25 10:05:04 +01:00
Ana Maria Martinez Gomez
d28ba3c628 Make backend a required parameter in get_extractor
Make the `backend` argument required in the `get_extractor` internal
routine. Specify a backend in the scripts which call this function. Add
a CLI backend option in capa/features/freeze.py as well.
2021-02-25 10:04:19 +01:00
Ana Maria Martinez Gomez
a026cb84d1 Use constants for backend option
Use constants instead of string literals for the backend option.
2021-02-25 09:35:40 +01:00
Ana Maria Martinez Gomez
3acc3eeabd Enable tests for vivisect in Python3
Now that we support vivisect as a backend in Python 3, we should test it.
2021-02-25 09:35:40 +01:00
Ana Maria Martinez Gomez
a92d2af7f8 Add option to select the backend in Py3
Now we have two working backends in Python3! Add an option to select
which one to use. With this code, vivisect is the default backend, but
this is really easy to change. We could do some analysis to see whether smda
performs better than vivisect once the vivisect implementation matures.
2021-02-25 09:35:40 +01:00
Ana Maria Martinez Gomez
adcb683458 Add script to compare vivisect Python 2 vs 3
Compare the performance of vivisect Python 2 vs 3 by counting the number
of features of each type extracted for every binary in `tests/data`.
Render the ones that perform badly (below a threshold of 98) and the total
performance. Also render the running time per binary for both Python 2 and 3.

From this result, it seems that vivisect behaves properly with Python3.
2021-02-25 09:35:40 +01:00
Capa Bot
939b29bf60 Sync capa rules submodule 2021-02-24 23:00:34 +00:00
Ana Maria Martinez Gomez
e4925613b3 Python3 support for vivisect
Vivisect has moved to Python 3. Allow running vivisect with Python 3 in
capa.

I am using the following version of vivisect (which includes fixes for
some bugs I have found and some open PRs in vivisect):
https://github.com/Ana06/vivisect/tree/py-3
2021-02-24 17:55:39 +01:00
Capa Bot
2f6a6e4628 Sync capa rules submodule 2021-02-24 08:07:52 +00:00
Capa Bot
7938ea34d0 Sync capa rules submodule 2021-02-24 08:06:30 +00:00
Capa Bot
ed94e36f7a Sync capa rules submodule 2021-02-24 00:12:19 +00:00
mike-hunhoff
1c3a8df136 Merge pull request #439 from fireeye/explorer/rulegen-support-file-scope
adding file scope support to rule generator IDA plugin
2021-02-23 11:50:54 -07:00
Michael Hunhoff
9f254b22ee adding file scope support to rule generator IDA plugin 2021-02-23 11:10:34 -07:00
Capa Bot
753f8ce84e Sync capa rules submodule 2021-02-23 17:33:38 +00:00
Capa Bot
acf3b549de Sync capa rules submodule 2021-02-23 15:29:20 +00:00
Capa Bot
669f6dcf98 Sync capa rules submodule 2021-02-23 15:23:19 +00:00
Capa Bot
e4f7c4aab1 Sync capa rules submodule 2021-02-23 15:22:43 +00:00
Moritz
5836d55e21 Merge pull request #438 from fireeye/explorer/show-results-by-function
explorer: adding option to show results by function
2021-02-22 18:23:44 +01:00
Michael Hunhoff
e17bf1a1f4 explorer: adding option to show results by function 2021-02-22 08:16:18 -07:00
Willi Ballenthin
acb253ae9c Merge pull request #437 from fireeye/scripts/show-capabilities
update to support running in IDA w/ Python 3
2021-02-19 17:02:53 -07:00
Michael Hunhoff
cc0aaa301f update to support running in IDA w/ Python 3 2021-02-19 14:28:20 -07:00
mike-hunhoff
4256316045 Merge pull request #436 from fireeye/fix/ida/unmapped-data-ref
check for unmapped addresses when resolving data references
2021-02-19 12:58:16 -07:00
Capa Bot
78ab0c9400 Sync capa-testfiles submodule 2021-02-19 19:39:18 +00:00
Capa Bot
944a670af0 Sync capa rules submodule 2021-02-19 17:17:33 +00:00
Michael Hunhoff
e4e517b334 check for unmapped addresses when resolving data references 2021-02-19 10:07:23 -07:00
Capa Bot
ccd7f1ee4b Sync capa-testfiles submodule 2021-02-19 09:54:02 +00:00
Capa Bot
9db7ed88aa Sync capa rules submodule 2021-02-18 21:36:08 +00:00
Capa Bot
a5e7497f56 Sync capa-testfiles submodule 2021-02-18 21:35:02 +00:00
Capa Bot
754f302493 Sync capa rules submodule 2021-02-18 17:56:06 +00:00
Moritz
7783543153 Merge pull request #429 from fireeye/scripts/multiple-backends-show-features
mirror show-capabilities-by-function to enable multiple backends
2021-02-18 09:33:36 +01:00
Moritz
b02f92b3ea Merge pull request #428 from fireeye/linter/ntoskrnl-ntdll-overlap
linter: adding ntoskrnl, ntdll overlap lint
2021-02-18 09:23:02 +01:00
Michael Hunhoff
47b3ef29be removing viv dep from show-capabilities-by-function.py 2021-02-17 14:49:52 -07:00
Michael Hunhoff
1eb615f97c mirror show-capabilities-by-function to enable multiple backends 2021-02-17 14:40:33 -07:00
mike-hunhoff
cfa904a0a0 Merge pull request #426 from fireeye/explorer/rule-generator
initial commit of capa explorer rule generator plugin for IDA Pro
2021-02-17 13:44:54 -07:00
Michael Hunhoff
2d34458d10 linter: adding ntoskrnl, ntdll overlap lint 2021-02-17 13:29:36 -07:00
Capa Bot
e39713c4fd Sync capa rules submodule 2021-02-17 17:10:12 +00:00
Capa Bot
320b734da8 Sync capa rules submodule 2021-02-17 17:00:43 +00:00
Capa Bot
887848625c Sync capa-testfiles submodule 2021-02-17 16:52:43 +00:00
Capa Bot
685f06582d Sync capa rules submodule 2021-02-17 15:18:16 +00:00
Capa Bot
a3c21dba32 Sync capa rules submodule 2021-02-17 14:59:46 +00:00
Capa Bot
9744cde8aa Sync capa rules submodule 2021-02-17 07:27:24 +00:00
Capa Bot
0ba8c9ec00 Sync capa-testfiles submodule 2021-02-16 23:44:50 +00:00
Capa Bot
0764c603b4 Sync capa-testfiles submodule 2021-02-16 23:32:23 +00:00
mike-hunhoff
2d4f7a6946 Update README.md 2021-02-12 14:38:11 -07:00
mike-hunhoff
5346eec84d Update README.md 2021-02-12 14:35:34 -07:00
Michael Hunhoff
b704dd967b updating README related to capa explorer 2021-02-12 14:32:08 -07:00
Michael Hunhoff
84ace24b35 merging upstream 2021-02-12 14:19:23 -07:00
Michael Hunhoff
ea42f76cff updating README related to capa explorer 2021-02-12 14:18:30 -07:00
Michael Hunhoff
dd147dd040 format fixes, strip strings before display 2021-02-12 12:03:48 -07:00
Capa Bot
9a79136d15 Sync capa-testfiles submodule 2021-02-11 15:19:46 +00:00
Capa Bot
b722dd016a Sync capa rules submodule 2021-02-11 07:39:06 +00:00
Capa Bot
054853dc06 Sync capa-testfiles submodule 2021-02-11 07:36:27 +00:00
Capa Bot
e5ceef52c6 Sync capa rules submodule 2021-02-10 16:11:34 +00:00
Capa Bot
92747e8efc Sync capa-testfiles submodule 2021-02-10 14:11:34 +00:00
Capa Bot
6171de54f9 Sync capa-testfiles submodule 2021-02-10 14:05:17 +00:00
Capa Bot
287ef31081 Sync capa rules submodule 2021-02-10 13:44:47 +00:00
Michael Hunhoff
1a804ed97b merge upstream 2021-02-09 07:55:53 -07:00
Michael Hunhoff
c8a99c247c rulegen python2.x support 2021-01-29 12:45:04 -07:00
Michael Hunhoff
9f50a37e40 rulegen filtering basic blocks, adding support for double-click to add feature 2021-01-29 11:47:58 -07:00
Michael Hunhoff
54c9e39654 rulegen reorder context menu actions 2021-01-29 11:11:41 -07:00
Michael Hunhoff
3386a1e9f9 rulegen adding vertical and horizontal splitters, moving save button to right 2021-01-29 10:51:26 -07:00
Michael Hunhoff
b413f2eafe rulegen adding support for sync between editor and preview windows 2021-01-28 17:15:18 -07:00
Michael Hunhoff
9caafedb8d merging upstream 2021-01-28 08:14:16 -07:00
Michael Hunhoff
b1c99d82fd rulegen adding special handling for count description 2021-01-22 09:41:17 -07:00
Michael Hunhoff
10db79f636 rulegen changes for backwards compat w/ Python 2.x 2021-01-22 08:22:37 -07:00
Michael Hunhoff
cd27a64f4e rulegen clear ruleset cache when user configures new directory 2021-01-21 14:15:52 -07:00
Michael Hunhoff
d1b7a5c2e4 rulegen fixing bug in handling of subscope-rules 2021-01-21 14:05:24 -07:00
Michael Hunhoff
4b81b086db rulegen removing unneeded file 2021-01-21 10:19:37 -07:00
Michael Hunhoff
0db42c28a7 rulegen adding support to use cached ruleset, user click reset to reload rules from disk 2021-01-21 10:09:43 -07:00
Michael Hunhoff
0eca6ce2e3 rulegen adding save button, reducing menu complexity 2021-01-21 09:29:10 -07:00
Michael Hunhoff
34685bf80e rulegen adding header comment to generated rules 2021-01-20 15:22:56 -07:00
Michael Hunhoff
271dc2a6a9 rulegen add ability to configure default values for rule author and scope 2021-01-20 15:12:44 -07:00
Michael Hunhoff
bf0376f73f rulegen adding auto check if new rule matches current function 2021-01-20 14:31:48 -07:00
Michael Hunhoff
cf8656eb2d adding search bar for feature tree in rule generator 2021-01-19 12:03:15 -07:00
Michael Hunhoff
15625b5f8c capa explorer rulegen -> adding styling; adding support for descriptions 2021-01-15 12:52:52 -07:00
Michael Hunhoff
e5f9da1f2b adding submenus to rulegen editor; empty expressions auto pruned from rulegen editor 2021-01-14 16:22:56 -07:00
Michael Hunhoff
ab33c46c87 init commit capa explorer rulegen 2021-01-14 15:46:24 -07:00
121 changed files with 10603 additions and 4640 deletions

.gitattributes (new file, 9 lines)

@@ -0,0 +1,9 @@
# Set the default behavior, in case people don't have core.autocrlf set.
* text=auto
# Explicitly declare text files you want to always be normalized and converted
# to native line endings on checkout.
*.py text
*.yml text
*.md text
*.txt text

.github/CODE_OF_CONDUCT.md

@@ -1,46 +1,46 @@
# Contributor Covenant Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at [https://contributor-covenant.org/version/1/4][version]
[homepage]: https://contributor-covenant.org
[version]: https://contributor-covenant.org/version/1/4/

.github/CONTRIBUTING.md

@@ -1,197 +1,197 @@
# Contributing to Capa
First off, thanks for taking the time to contribute!
The following is a set of guidelines for contributing to capa and its packages, which are hosted in the [FireEye Organization](https://github.com/fireeye) on GitHub. These are mostly guidelines, not rules. Use your best judgment, and feel free to propose changes to this document in a pull request.
#### Table Of Contents
[Code of Conduct](#code-of-conduct)
[What should I know before I get started?](#what-should-i-know-before-i-get-started)
* [Capa and its Repositories](#capa-and-its-repositories)
* [Capa Design Decisions](#design-decisions)
[How Can I Contribute?](#how-can-i-contribute)
* [Reporting Bugs](#reporting-bugs)
* [Suggesting Enhancements](#suggesting-enhancements)
* [Your First Code Contribution](#your-first-code-contribution)
* [Pull Requests](#pull-requests)
[Styleguides](#styleguides)
* [Git Commit Messages](#git-commit-messages)
* [Python Styleguide](#python-styleguide)
* [Rules Styleguide](#rules-styleguide)
## Code of Conduct
This project and everyone participating in it is governed by the [Capa Code of Conduct](CODE_OF_CONDUCT.md). By participating, you are expected to uphold this code. Please report unacceptable behavior to the maintainers.
## What should I know before I get started?
### Capa and its repositories
We host the capa project as three Github repositories:
- [capa](https://github.com/fireeye/capa)
- [capa-rules](https://github.com/fireeye/capa-rules)
- [capa-testfiles](https://github.com/fireeye/capa-testfiles)
The command line tools, logic engine, and other Python source code are found in the `capa` repository.
This is the repository to fork when you want to enhance the features, performance, or user interface of capa.
Do *not* push rules directly to this repository, instead...
The standard rules contributed by the community are found in the `capa-rules` repository.
When you have an idea for a new rule, you should open a PR against `capa-rules`.
We keep `capa` and `capa-rules` separate to distinguish where ideas, bugs, and discussions should happen.
If you're writing yaml it probably goes in `capa-rules` and if you're writing Python it probably goes in `capa`.
Also, we encourage users to develop their own rule repositories, so we treat our default set of rules in the same way.
Test fixtures, such as malware samples and analysis workspaces, are found in the `capa-testfiles` repository.
These are files you'll need in order to run the linter (in `--thorough` mode) and full test suites;
however, they take up a lot of space (1GB+), so by keeping `capa-testfiles` separate,
a shallow checkout of `capa` and `capa-rules` doesn't take much bandwidth.
### Design Decisions
When we make a significant decision in how we maintain the project and what we can or cannot support,
we will document it in the [capa issues tracker](https://github.com/fireeye/capa/issues).
This is the best place to review our discussions about what/how/why we do things in the project.
If you have a question, check to see if it is documented there.
If it is *not* documented there, or you can't find an answer, please open an issue.
We'll link to existing issues when appropriate to keep discussions in one place.
## How Can I Contribute?
### Reporting Bugs
This section guides you through submitting a bug report for capa.
Following these guidelines helps maintainers and the community understand your report, reproduce the behavior, and find related reports.
Before creating bug reports, please check [this list](#before-submitting-a-bug-report)
as you might find out that you don't need to create one.
When you are creating a bug report, please [include as many details as possible](#how-do-i-submit-a-good-bug-report).
Fill out [the required template](./ISSUE_TEMPLATE/bug_report.md),
the information it asks for helps us resolve issues faster.
> **Note:** If you find a **Closed** issue that seems like it is the same thing that you're experiencing, open a new issue and include a link to the original issue in the body of your new one.
#### Before Submitting A Bug Report
* **Determine [which repository the problem should be reported in](#capa-and-its-repositories)**.
* **Perform a [cursory search](https://github.com/fireeye/capa/issues?q=is%3Aissue)** to see if the problem has already been reported. If it has **and the issue is still open**, add a comment to the existing issue instead of opening a new one.
#### How Do I Submit A (Good) Bug Report?
Bugs are tracked as [GitHub issues](https://guides.github.com/features/issues/).
After you've determined [which repository](#capa-and-its-repositories) your bug is related to,
create an issue on that repository and provide the following information by filling in
[the template](./ISSUE_TEMPLATE/bug_report.md).
Explain the problem and include additional details to help maintainers reproduce the problem:
* **Use a clear and descriptive title** for the issue to identify the problem.
* **Describe the exact steps which reproduce the problem** in as many details as possible. For example, start by explaining how you started capa, e.g. which command exactly you used in the terminal, or how you started capa otherwise.
* **Provide specific examples to demonstrate the steps**. Include links to files or GitHub projects, or copy/pasteable snippets, which you use in those examples. If you're providing snippets in the issue, use [Markdown code blocks](https://help.github.com/articles/markdown-basics/#multiple-lines).
* **Describe the behavior you observed after following the steps** and point out what exactly is the problem with that behavior.
* **Explain which behavior you expected to see instead and why.**
* **Include screenshots and animated GIFs** which show you following the described steps and clearly demonstrate the problem. You can use [this tool](https://www.cockos.com/licecap/) to record GIFs on macOS and Windows, and [this tool](https://github.com/colinkeenan/silentcast) or [this tool](https://github.com/GNOME/byzanz) on Linux.
* **If you're reporting that capa crashed**, include the stack trace from the terminal. Include the stack trace in the issue in a [code block](https://help.github.com/articles/markdown-basics/#multiple-lines), a [file attachment](https://help.github.com/articles/file-attachments-on-issues-and-pull-requests/), or put it in a [gist](https://gist.github.com/) and provide link to that gist.
* **If the problem wasn't triggered by a specific action**, describe what you were doing before the problem happened and share more information using the guidelines below.
Provide more context by answering these questions:
* **Did the problem start happening recently** (e.g. after updating to a new version of capa) or was this always a problem?
* If the problem started happening recently, **can you reproduce the problem in an older version of capa?** What's the most recent version in which the problem doesn't happen? You can download older versions of capa from [the releases page](https://github.com/fireeye/capa/releases).
* **Can you reliably reproduce the issue?** If not, provide details about how often the problem happens and under which conditions it normally happens.
* If the problem is related to working with files (e.g. opening and editing files), **does the problem happen for all files and projects or only some?** Does the problem happen only when working with local or remote files (e.g. on network drives), with files of a specific type (e.g. only JavaScript or Python files), with large files or files with very long lines, or with files in a specific encoding? Is there anything else special about the files you are using?
Include details about your configuration and environment:
* **Which version of capa are you using?** You can get the exact version by running `capa --version` in your terminal.
* **What's the name and version of the OS you're using**?
### Suggesting Enhancements
This section guides you through submitting an enhancement suggestion for capa, including completely new features and minor improvements to existing functionality. Following these guidelines helps maintainers and the community understand your suggestion and find related suggestions.
Before creating enhancement suggestions, please check [this list](#before-submitting-an-enhancement-suggestion) as you might find out that you don't need to create one. When you are creating an enhancement suggestion, please [include as many details as possible](#how-do-i-submit-a-good-enhancement-suggestion). Fill in [the template](./ISSUE_TEMPLATE/feature_request.md), including the steps that you imagine you would take if the feature you're requesting existed.
#### Before Submitting An Enhancement Suggestion
* **Determine [which repository the enhancement should be suggested in](#capa-and-its-repositories).**
* **Perform a [cursory search](https://github.com/fireeye/capa/issues?q=is%3Aissue)** to see if the enhancement has already been suggested. If it has, add a comment to the existing issue instead of opening a new one.
#### How Do I Submit A (Good) Enhancement Suggestion?
Enhancement suggestions are tracked as [GitHub issues](https://guides.github.com/features/issues/). After you've determined [which repository](#capa-and-its-repositories) your enhancement suggestion is related to, create an issue on that repository and provide the following information:
* **Use a clear and descriptive title** for the issue to identify the suggestion.
* **Provide a step-by-step description of the suggested enhancement** in as many details as possible.
* **Provide specific examples to demonstrate the steps**. Include copy/pasteable snippets which you use in those examples, as [Markdown code blocks](https://help.github.com/articles/markdown-basics/#multiple-lines).
* **Describe the current behavior** and **explain which behavior you expected to see instead** and why.
* **Include screenshots and animated GIFs** which help you demonstrate the steps or point out the part of capa which the suggestion is related to. You can use [this tool](https://www.cockos.com/licecap/) to record GIFs on macOS and Windows, and [this tool](https://github.com/colinkeenan/silentcast) or [this tool](https://github.com/GNOME/byzanz) on Linux.
* **Explain why this enhancement would be useful** to most capa users and isn't something that can or should be implemented as an external tool that uses capa as a library.
* **Specify which version of capa you're using.** You can get the exact version by running `capa --version` in your terminal.
* **Specify the name and version of the OS you're using.**
### Your First Code Contribution
Unsure where to begin contributing to capa? You can start by looking through these `good-first-issue` and `rule-idea` issues:
* [good-first-issue](https://github.com/fireeye/capa/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22) - issues which should only require a few lines of code, and a test or two.
* [rule-idea](https://github.com/fireeye/capa-rules/issues?q=is%3Aissue+is%3Aopen+label%3A%22rule+idea%22) - issues that describe potential new rule ideas.
Both issue lists are sorted by total number of comments. While not perfect, number of comments is a reasonable proxy for impact a given change will have.
#### Local development
capa and all its resources can be developed locally.
For instructions on how to do this, see the "Method 3" section of the [installation guide](https://github.com/fireeye/capa/blob/master/doc/installation.md).
### Pull Requests
The process described here has several goals:
- Maintain capa's quality
- Fix problems that are important to users
- Engage the community in working toward the best possible capa
- Enable a sustainable system for capa's maintainers to review contributions
Please follow these steps to have your contribution considered by the maintainers:
1. Follow all instructions in [the template](PULL_REQUEST_TEMPLATE.md)
2. Follow the [styleguides](#styleguides)
3. After you submit your pull request, verify that all [status checks](https://help.github.com/articles/about-status-checks/) are passing <details><summary>What if the status checks are failing? </summary>If a status check is failing, and you believe that the failure is unrelated to your change, please leave a comment on the pull request explaining why you believe the failure is unrelated. A maintainer will re-run the status check for you. If we conclude that the failure was a false positive, then we will open an issue to track that problem with our status check suite.</details>
While the prerequisites above must be satisfied prior to having your pull request reviewed, the reviewer(s) may ask you to complete additional design work, tests, or other changes before your pull request can be ultimately accepted.
## Styleguides
### Git Commit Messages
* Use the present tense ("Add feature" not "Added feature")
* Use the imperative mood ("Move cursor to..." not "Moves cursor to...")
* Prefix the first line with the component in question ("rules: ..." or "render: ...")
* Reference issues and pull requests liberally after the first line
### Python Styleguide
All Python code must adhere to the style guide used by capa:
1. [PEP8](https://www.python.org/dev/peps/pep-0008/), with clarifications from
2. [Willi's style guide](https://docs.google.com/document/d/1iRpeg-w4DtibwytUyC_dDT7IGhNGBP25-nQfuBa-Fyk/edit?usp=sharing), formatted with
3. [isort](https://pypi.org/project/isort/) (with line width 120 and ordered by line length), and formatted with
4. [black](https://github.com/psf/black) (with line width 120), and formatted with
5. [dos2unix](https://linux.die.net/man/1/dos2unix)
Our CI pipeline will reformat and enforce the Python styleguide.
### Rules Styleguide
All (non-nursery) capa rules must:
1. pass the [linter](https://github.com/fireeye/capa/blob/master/scripts/lint.py), and
2. be formatted with [capafmt](https://github.com/fireeye/capa/blob/master/scripts/capafmt.py)
This ensures that all rules meet the same minimum level of quality and are structured in a consistent way.
Our CI pipeline will reformat and enforce the capa rules styleguide.
# Contributing to Capa
First off, thanks for taking the time to contribute!
The following is a set of guidelines for contributing to capa and its packages, which are hosted in the [FireEye Organization](https://github.com/fireeye) on GitHub. These are mostly guidelines, not rules. Use your best judgment, and feel free to propose changes to this document in a pull request.
#### Table Of Contents
[Code of Conduct](#code-of-conduct)
[What should I know before I get started?](#what-should-i-know-before-i-get-started)
* [Capa and its Repositories](#capa-and-its-repositories)
* [Capa Design Decisions](#design-decisions)
[How Can I Contribute?](#how-can-i-contribute)
* [Reporting Bugs](#reporting-bugs)
* [Suggesting Enhancements](#suggesting-enhancements)
* [Your First Code Contribution](#your-first-code-contribution)
* [Pull Requests](#pull-requests)
[Styleguides](#styleguides)
* [Git Commit Messages](#git-commit-messages)
* [Python Styleguide](#python-styleguide)
* [Rules Styleguide](#rules-styleguide)
## Code of Conduct
This project and everyone participating in it is governed by the [Capa Code of Conduct](CODE_OF_CONDUCT.md). By participating, you are expected to uphold this code. Please report unacceptable behavior to the maintainers.
## What should I know before I get started?
### Capa and its repositories
We host the capa project as three Github repositories:
- [capa](https://github.com/fireeye/capa)
- [capa-rules](https://github.com/fireeye/capa-rules)
- [capa-testfiles](https://github.com/fireeye/capa-testfiles)
The command line tools, logic engine, and other Python source code are found in the `capa` repository.
This is the repository to fork when you want to enhance the features, performance, or user interface of capa.
Do *not* push rules directly to this repository, instead...
The standard rules contributed by the community are found in the `capa-rules` repository.
When you have an idea for a new rule, you should open a PR against `capa-rules`.
We keep `capa` and `capa-rules` separate to distinguish where ideas, bugs, and discussions should happen.
If you're writing yaml it probably goes in `capa-rules` and if you're writing Python it probably goes in `capa`.
Also, we encourage users to develop their own rule repositories, so we treat our default set of rules in the same way.
Test fixtures, such as malware samples and analysis workspaces, are found in the `capa-testfiles` repository.
These are files you'll need in order to run the linter (in `--thorough` mode) and full test suites;
however, they take up a lot of space (1GB+), so by keeping `capa-testfiles` separate,
a shallow checkout of `capa` and `capa-rules` doesn't take much bandwidth.
### Design Decisions
When we make a significant decision in how we maintain the project and what we can or cannot support,
we will document it in the [capa issues tracker](https://github.com/fireeye/capa/issues).
This is the best place to review our discussions about what/how/why we do things in the project.
If you have a question, check to see if it is documented there.
If it is *not* documented there, or you can't find an answer, please open an issue.
We'll link to existing issues when appropriate to keep discussions in one place.
## How Can I Contribute?
### Reporting Bugs
This section guides you through submitting a bug report for capa.
Following these guidelines helps maintainers and the community understand your report, reproduce the behavior, and find related reports.
Before creating bug reports, please check [this list](#before-submitting-a-bug-report)
as you might find out that you don't need to create one.
When you are creating a bug report, please [include as many details as possible](#how-do-i-submit-a-good-bug-report).
Fill out [the required template](./ISSUE_TEMPLATE/bug_report.md),
the information it asks for helps us resolve issues faster.
> **Note:** If you find a **Closed** issue that seems like it is the same thing that you're experiencing, open a new issue and include a link to the original issue in the body of your new one.
#### Before Submitting A Bug Report
* **Determine [which repository the problem should be reported in](#capa-and-its-repositories)**.
* **Perform a [cursory search](https://github.com/fireeye/capa/issues?q=is%3Aissue)** to see if the problem has already been reported. If it has **and the issue is still open**, add a comment to the existing issue instead of opening a new one.
#### How Do I Submit A (Good) Bug Report?
Bugs are tracked as [GitHub issues](https://guides.github.com/features/issues/).
After you've determined [which repository](#capa-and-its-repositories) your bug is related to,
create an issue on that repository and provide the following information by filling in
[the template](./ISSUE_TEMPLATE/bug_report.md).
Explain the problem and include additional details to help maintainers reproduce the problem:
* **Use a clear and descriptive title** for the issue to identify the problem.
* **Describe the exact steps which reproduce the problem** in as many details as possible. For example, start by explaining how you started capa, e.g. which command exactly you used in the terminal, or how you started capa otherwise.
* **Provide specific examples to demonstrate the steps**. Include links to files or GitHub projects, or copy/pasteable snippets, which you use in those examples. If you're providing snippets in the issue, use [Markdown code blocks](https://help.github.com/articles/markdown-basics/#multiple-lines).
* **Describe the behavior you observed after following the steps** and point out what exactly is the problem with that behavior.
* **Explain which behavior you expected to see instead and why.**
* **Include screenshots and animated GIFs** which show you following the described steps and clearly demonstrate the problem. You can use [this tool](https://www.cockos.com/licecap/) to record GIFs on macOS and Windows, and [this tool](https://github.com/colinkeenan/silentcast) or [this tool](https://github.com/GNOME/byzanz) on Linux.
* **If you're reporting that capa crashed**, include the stack trace from the terminal. Include the stack trace in the issue in a [code block](https://help.github.com/articles/markdown-basics/#multiple-lines), a [file attachment](https://help.github.com/articles/file-attachments-on-issues-and-pull-requests/), or put it in a [gist](https://gist.github.com/) and provide link to that gist.
* **If the problem wasn't triggered by a specific action**, describe what you were doing before the problem happened and share more information using the guidelines below.
Provide more context by answering these questions:
* **Did the problem start happening recently** (e.g. after updating to a new version of capa) or was this always a problem?
* If the problem started happening recently, **can you reproduce the problem in an older version of capa?** What's the most recent version in which the problem doesn't happen? You can download older versions of capa from [the releases page](https://github.com/fireeye/capa/releases).
* **Can you reliably reproduce the issue?** If not, provide details about how often the problem happens and under which conditions it normally happens.
* If the problem is related to working with files (e.g. opening and editing files), **does the problem happen for all files and projects or only some?** Does the problem happen only when working with local or remote files (e.g. on network drives), with files of a specific type (e.g. only JavaScript or Python files), with large files or files with very long lines, or with files in a specific encoding? Is there anything else special about the files you are using?
Include details about your configuration and environment:
* **Which version of capa are you using?** You can get the exact version by running `capa --version` in your terminal.
* **What's the name and version of the OS you're using**?
### Suggesting Enhancements
This section guides you through submitting an enhancement suggestion for capa, including completely new features and minor improvements to existing functionality. Following these guidelines helps maintainers and the community understand your suggestion and find related suggestions.
Before creating enhancement suggestions, please check [this list](#before-submitting-an-enhancement-suggestion) as you might find out that you don't need to create one. When you are creating an enhancement suggestion, please [include as many details as possible](#how-do-i-submit-a-good-enhancement-suggestion). Fill in [the template](./ISSUE_TEMPLATE/feature_request.md), including the steps that you imagine you would take if the feature you're requesting existed.
#### Before Submitting An Enhancement Suggestion
* **Determine [which repository the enhancement should be suggested in](#capa-and-its-repositories).**
* **Perform a [cursory search](https://github.com/fireeye/capa/issues?q=is%3Aissue)** to see if the enhancement has already been suggested. If it has, add a comment to the existing issue instead of opening a new one.
#### How Do I Submit A (Good) Enhancement Suggestion?
Enhancement suggestions are tracked as [GitHub issues](https://guides.github.com/features/issues/). After you've determined [which repository](#capa-and-its-repositories) your enhancement suggestion is related to, create an issue on that repository and provide the following information:
* **Use a clear and descriptive title** for the issue to identify the suggestion.
* **Provide a step-by-step description of the suggested enhancement** in as many details as possible.
* **Provide specific examples to demonstrate the steps**. Include copy/pasteable snippets which you use in those examples, as [Markdown code blocks](https://help.github.com/articles/markdown-basics/#multiple-lines).
* **Describe the current behavior** and **explain which behavior you expected to see instead** and why.
* **Include screenshots and animated GIFs** which help you demonstrate the steps or point out the part of capa which the suggestion is related to. You can use [this tool](https://www.cockos.com/licecap/) to record GIFs on macOS and Windows, and [this tool](https://github.com/colinkeenan/silentcast) or [this tool](https://github.com/GNOME/byzanz) on Linux.
* **Explain why this enhancement would be useful** to most capa users and isn't something that can or should be implemented as an external tool that uses capa as a library.
* **Specify which version of capa you're using.** You can get the exact version by running `capa --version` in your terminal.
* **Specify the name and version of the OS you're using.**
### Your First Code Contribution
Unsure where to begin contributing to capa? You can start by looking through these `good-first-issue` and `rule-idea` issues:
* [good-first-issue](https://github.com/fireeye/capa/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22) - issues which should only require a few lines of code, and a test or two.
* [rule-idea](https://github.com/fireeye/capa-rules/issues?q=is%3Aissue+is%3Aopen+label%3A%22rule+idea%22) - issues that describe potential new rule ideas.
Both issue lists are sorted by total number of comments. While not perfect, number of comments is a reasonable proxy for impact a given change will have.
#### Local development
capa and all its resources can be developed locally.
For instructions on how to do this, see the "Method 3" section of the [installation guide](https://github.com/fireeye/capa/blob/master/doc/installation.md).
### Pull Requests
The process described here has several goals:
- Maintain capa's quality
- Fix problems that are important to users
- Engage the community in working toward the best possible capa
- Enable a sustainable system for capa's maintainers to review contributions
Please follow these steps to have your contribution considered by the maintainers:
1. Follow the [styleguides](#styleguides)
2. Update the CHANGELOG and add tests and documentation. In case they are not needed, indicate it in [the PR template](pull_request_template.md).
3. After you submit your pull request, verify that all [status checks](https://help.github.com/articles/about-status-checks/) are passing <details><summary>What if the status checks are failing? </summary>If a status check is failing, and you believe that the failure is unrelated to your change, please leave a comment on the pull request explaining why you believe the failure is unrelated. A maintainer will re-run the status check for you. If we conclude that the failure was a false positive, then we will open an issue to track that problem with our status check suite.</details>
While the prerequisites above must be satisfied prior to having your pull request reviewed, the reviewer(s) may ask you to complete additional design work, tests, or other changes before your pull request can be ultimately accepted.
## Styleguides
### Git Commit Messages
* Use the present tense ("Add feature" not "Added feature")
* Use the imperative mood ("Move cursor to..." not "Moves cursor to...")
* Prefix the first line with the component in question ("rules: ..." or "render: ...")
* Reference issues and pull requests liberally after the first line
### Python Styleguide
All Python code must adhere to the style guide used by capa:
1. [PEP8](https://www.python.org/dev/peps/pep-0008/), with clarifications from
2. [Willi's style guide](https://docs.google.com/document/d/1iRpeg-w4DtibwytUyC_dDT7IGhNGBP25-nQfuBa-Fyk/edit?usp=sharing), formatted with
3. [isort](https://pypi.org/project/isort/) (with line width 120 and ordered by line length), and formatted with
4. [black](https://github.com/psf/black) (with line width 120), and formatted with
5. [dos2unix](https://linux.die.net/man/1/dos2unix)
Our CI pipeline will reformat and enforce the Python styleguide.
### Rules Styleguide
All (non-nursery) capa rules must:
1. pass the [linter](https://github.com/fireeye/capa/blob/master/scripts/lint.py), and
2. be formatted with [capafmt](https://github.com/fireeye/capa/blob/master/scripts/capafmt.py)
This ensures that all rules meet the same minimum level of quality and are structured in a consistent way.
Our CI pipeline will reformat and enforce the capa rules styleguide.

.github/ISSUE_TEMPLATE/bug_report.md

@@ -1,47 +1,47 @@
---
name: Bug report
about: Create a report to help us improve
---
<!--
# Is your bug report related to capa rules (for example a false positive)?
We use sybmodules to separate code, rules and test data. If your issue is related to capa rules, please report it at https://github.com/fireeye/capa-rules/issues.
# Have you checked that your issue isn't already filed?
Please search if there is a similar issue at https://github.com/fireeye/capa/issues. If there is already a similar issue, please add more details there instead of opening a new one.
# Have you read capa's Code of Conduct?
By filing an Issue, you are expected to comply with it, including treating everyone with respect: https://github.com/fireeye/capa/blob/master/.github/CODE_OF_CONDUCT.md
# Have you read capa's CONTRIBUTING guide?
It contains helpful information about how to contribute to capa. Check https://github.com/fireeye/capa/blob/master/.github/CONTRIBUTING.md#reporting-bugs
-->
### Description
<!-- Description of the issue -->
### Steps to Reproduce
<!-- 1. First Step -->
<!-- 2. Second Step -->
<!-- 3. and so on… -->
**Expected behavior:**
<!-- What you expect to happen -->
**Actual behavior:**
<!-- What actually happens -->
### Versions
<!-- You can get this information from copy and pasting the output of `capa --version` from the command line.
Please specify the component you're using (e.g. standalone tool or IDA Pro integration) and your Python version.
Also, please include the OS and what version of the OS you're running. -->
### Additional Information
<!-- Any additional information, configuration or data that might be necessary to reproduce the issue. -->
---
name: Bug report
about: Create a report to help us improve
---
<!--
# Is your bug report related to capa rules (for example a false positive)?
We use submodules to separate code, rules and test data. If your issue is related to capa rules, please report it at https://github.com/fireeye/capa-rules/issues.
# Have you checked that your issue isn't already filed?
Please search if there is a similar issue at https://github.com/fireeye/capa/issues. If there is already a similar issue, please add more details there instead of opening a new one.
# Have you read capa's Code of Conduct?
By filing an Issue, you are expected to comply with it, including treating everyone with respect: https://github.com/fireeye/capa/blob/master/.github/CODE_OF_CONDUCT.md
# Have you read capa's CONTRIBUTING guide?
It contains helpful information about how to contribute to capa. Check https://github.com/fireeye/capa/blob/master/.github/CONTRIBUTING.md#reporting-bugs
-->
### Description
<!-- Description of the issue -->
### Steps to Reproduce
<!-- 1. First Step -->
<!-- 2. Second Step -->
<!-- 3. and so on… -->
**Expected behavior:**
<!-- What you expect to happen -->
**Actual behavior:**
<!-- What actually happens -->
### Versions
<!-- You can get this information from copy and pasting the output of `capa --version` from the command line.
Please specify the component you're using (e.g. standalone tool or IDA Pro integration) and your Python version.
Also, please include the OS and what version of the OS you're running. -->
### Additional Information
<!-- Any additional information, configuration or data that might be necessary to reproduce the issue. -->

.github/ISSUE_TEMPLATE/feature_request.md

@@ -1,35 +1,35 @@
---
name: Feature request
about: Suggest an idea for capa
---
<!--
# Is your issue related to capa rules (for example an idea for a new rule)?
We use sybmodules to separate code, rules and test data. If your issue is related to capa rules, please report it at https://github.com/fireeye/capa-rules/issues.
# Have you checked that your issue isn't already filed?
Please search if there is a similar issue at https://github.com/fireeye/capa/issues. If there is already a similar issue, please add more details there instead of opening a new one.
# Have you read capa's Code of Conduct?
By filing an Issue, you are expected to comply with it, including treating everyone with respect: https://github.com/fireeye/capa/blob/master/.github/CODE_OF_CONDUCT.md
# Have you read capa's CONTRIBUTING guide?
It contains helpful information about how to contribute to capa. Check https://github.com/fireeye/capa/blob/master/.github/CONTRIBUTING.md#suggesting-enhancements
-->
### Summary
<!-- One paragraph explanation of the feature. -->
### Motivation
<!-- Why are we doing this? What use cases does it support? What is the expected outcome? -->
### Describe alternatives you've considered
<!-- A clear and concise description of the alternative solutions you've considered. -->
## Additional context
<!-- Add any other context or screenshots about the feature request here. -->
---
name: Feature request
about: Suggest an idea for capa
---
<!--
# Is your issue related to capa rules (for example an idea for a new rule)?
We use submodules to separate code, rules and test data. If your issue is related to capa rules, please report it at https://github.com/fireeye/capa-rules/issues.
# Have you checked that your issue isn't already filed?
Please search if there is a similar issue at https://github.com/fireeye/capa/issues. If there is already a similar issue, please add more details there instead of opening a new one.
# Have you read capa's Code of Conduct?
By filing an Issue, you are expected to comply with it, including treating everyone with respect: https://github.com/fireeye/capa/blob/master/.github/CODE_OF_CONDUCT.md
# Have you read capa's CONTRIBUTING guide?
It contains helpful information about how to contribute to capa. Check https://github.com/fireeye/capa/blob/master/.github/CONTRIBUTING.md#suggesting-enhancements
-->
### Summary
<!-- One paragraph explanation of the feature. -->
### Motivation
<!-- Why are we doing this? What use cases does it support? What is the expected outcome? -->
### Describe alternatives you've considered
<!-- A clear and concise description of the alternative solutions you've considered. -->
## Additional context
<!-- Add any other context or screenshots about the feature request here. -->

.github/mypy/mypy.ini vendored Normal file

@@ -0,0 +1,76 @@
[mypy]
[mypy-halo.*]
ignore_missing_imports = True
[mypy-tqdm.*]
ignore_missing_imports = True
[mypy-ruamel.*]
ignore_missing_imports = True
[mypy-networkx.*]
ignore_missing_imports = True
[mypy-pefile.*]
ignore_missing_imports = True
[mypy-viv_utils.*]
ignore_missing_imports = True
[mypy-flirt.*]
ignore_missing_imports = True
[mypy-smda.*]
ignore_missing_imports = True
[mypy-lief.*]
ignore_missing_imports = True
[mypy-idc.*]
ignore_missing_imports = True
[mypy-vivisect.*]
ignore_missing_imports = True
[mypy-envi.*]
ignore_missing_imports = True
[mypy-PE.*]
ignore_missing_imports = True
[mypy-idaapi.*]
ignore_missing_imports = True
[mypy-idautils.*]
ignore_missing_imports = True
[mypy-ida_bytes.*]
ignore_missing_imports = True
[mypy-ida_kernwin.*]
ignore_missing_imports = True
[mypy-ida_settings.*]
ignore_missing_imports = True
[mypy-ida_funcs.*]
ignore_missing_imports = True
[mypy-ida_loader.*]
ignore_missing_imports = True
[mypy-PyQt5.*]
ignore_missing_imports = True
[mypy-binaryninja.*]
ignore_missing_imports = True
[mypy-pytest.*]
ignore_missing_imports = True
[mypy-devtools.*]
ignore_missing_imports = True
[mypy-elftools.*]
ignore_missing_imports = True

.github/pull_request_template.md vendored Normal file

@@ -0,0 +1,22 @@
<!--
Thank you for contributing to capa! <3
Please read capa's CONTRIBUTING guide if you haven't done so already.
It contains helpful information about how to contribute to capa. Check https://github.com/fireeye/capa/blob/master/.github/CONTRIBUTING.md
Please describe the changes in this pull request (PR). Include your motivation and context to help us review.
Please mention the issue your PR addresses (if any):
closes #issue_number
-->
### Checklist
<!-- CHANGELOG.md has a `master (unreleased)` section. Please add bug fixes, new features, breaking changes and anything else you think is worthwhile mentioning in the release notes to this file. -->
- [ ] No CHANGELOG update needed
<!-- Tests prove that your fix or feature works as expected and help ensure it doesn't break in the future. -->
- [ ] No new tests needed
<!-- Please help us keep capa documentation up-to-date -->
- [ ] No documentation update needed


@@ -0,0 +1,5 @@
# Copyright (C) 2020 FireEye, Inc. All Rights Reserved.
import PyInstaller.utils.hooks
# ref: https://groups.google.com/g/pyinstaller/c/amWi0-66uZI/m/miPoKfWjBAAJ
binaries = PyInstaller.utils.hooks.collect_dynamic_libs("capstone")
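For reference, `collect_dynamic_libs` scans the installed package and returns a list of `(source, destination)` tuples for its bundled native libraries, which PyInstaller copies into the frozen distribution. A minimal sketch for inspecting what the hook above will pick up (run it in an environment with capstone installed; output paths are machine-specific):
```python
# illustrative helper, not part of the hook itself: list the shared libraries
# that collect_dynamic_libs("capstone") resolves on this machine.
from PyInstaller.utils.hooks import collect_dynamic_libs

for source, dest in collect_dynamic_libs("capstone"):
    print(source, "->", dest)
```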


@@ -13,3 +13,144 @@ from PyInstaller.utils.hooks import copy_metadata
#
# ref: https://github.com/pyinstaller/pyinstaller/issues/1713#issuecomment-162682084
datas = copy_metadata("vivisect")
excludedimports = [
# viv gui requires these heavy libraries,
# but viv as a library doesn't.
# they shouldn't be installed in our configuration,
# but we'll ensure they don't slip in here (such as on developers' systems).
"PyQt5",
"qt5",
"pyqtwebengine",
# the above are imported by these viv modules.
# so really, we'd want to exclude these submodules of viv.
# but i dont think this works.
"vqt",
"vdb.qt",
"envi.qt",
# unused by capa
"pyasn1",
]
hiddenimports = [
# vivisect does manual/runtime importing of its modules,
# so declare the things that could be imported here.
"vivisect",
"vivisect.analysis",
"vivisect.analysis.amd64",
"vivisect.analysis.amd64",
"vivisect.analysis.amd64.emulation",
"vivisect.analysis.amd64.golang",
"vivisect.analysis.crypto",
"vivisect.analysis.crypto",
"vivisect.analysis.crypto.constants",
"vivisect.analysis.elf",
"vivisect.analysis.elf.elfplt",
"vivisect.analysis.elf.elfplt_late",
"vivisect.analysis.elf.libc_start_main",
"vivisect.analysis.generic",
"vivisect.analysis.generic",
"vivisect.analysis.generic.codeblocks",
"vivisect.analysis.generic.emucode",
"vivisect.analysis.generic.entrypoints",
"vivisect.analysis.generic.funcentries",
"vivisect.analysis.generic.impapi",
"vivisect.analysis.generic.mkpointers",
"vivisect.analysis.generic.pointers",
"vivisect.analysis.generic.pointertables",
"vivisect.analysis.generic.relocations",
"vivisect.analysis.generic.strconst",
"vivisect.analysis.generic.switchcase",
"vivisect.analysis.generic.thunks",
"vivisect.analysis.generic.noret",
"vivisect.analysis.i386",
"vivisect.analysis.i386",
"vivisect.analysis.i386.calling",
"vivisect.analysis.i386.golang",
"vivisect.analysis.i386.importcalls",
"vivisect.analysis.i386.instrhook",
"vivisect.analysis.i386.thunk_bx",
"vivisect.analysis.ms",
"vivisect.analysis.ms",
"vivisect.analysis.ms.hotpatch",
"vivisect.analysis.ms.localhints",
"vivisect.analysis.ms.msvc",
"vivisect.analysis.ms.msvcfunc",
"vivisect.analysis.ms.vftables",
"vivisect.analysis.pe",
"vivisect.impapi.posix.amd64",
"vivisect.impapi.posix.i386",
"vivisect.impapi.windows",
"vivisect.impapi.windows.amd64",
"vivisect.impapi.windows.i386",
"vivisect.impapi.winkern.i386",
"vivisect.impapi.winkern.amd64",
"vivisect.parsers.blob",
"vivisect.parsers.elf",
"vivisect.parsers.ihex",
"vivisect.parsers.macho",
"vivisect.parsers.pe",
"vivisect.storage",
"vivisect.storage.basicfile",
"vstruct.constants",
"vstruct.constants.ntstatus",
"vstruct.defs",
"vstruct.defs.arm7",
"vstruct.defs.bmp",
"vstruct.defs.dns",
"vstruct.defs.elf",
"vstruct.defs.gif",
"vstruct.defs.ihex",
"vstruct.defs.inet",
"vstruct.defs.java",
"vstruct.defs.kdcom",
"vstruct.defs.macho",
"vstruct.defs.macho.const",
"vstruct.defs.macho.fat",
"vstruct.defs.macho.loader",
"vstruct.defs.macho.stabs",
"vstruct.defs.minidump",
"vstruct.defs.pcap",
"vstruct.defs.pe",
"vstruct.defs.pptp",
"vstruct.defs.rar",
"vstruct.defs.swf",
"vstruct.defs.win32",
"vstruct.defs.windows",
"vstruct.defs.windows.win_5_1_i386",
"vstruct.defs.windows.win_5_1_i386.ntdll",
"vstruct.defs.windows.win_5_1_i386.ntoskrnl",
"vstruct.defs.windows.win_5_1_i386.win32k",
"vstruct.defs.windows.win_5_2_i386",
"vstruct.defs.windows.win_5_2_i386.ntdll",
"vstruct.defs.windows.win_5_2_i386.ntoskrnl",
"vstruct.defs.windows.win_5_2_i386.win32k",
"vstruct.defs.windows.win_6_1_amd64",
"vstruct.defs.windows.win_6_1_amd64.ntdll",
"vstruct.defs.windows.win_6_1_amd64.ntoskrnl",
"vstruct.defs.windows.win_6_1_amd64.win32k",
"vstruct.defs.windows.win_6_1_i386",
"vstruct.defs.windows.win_6_1_i386.ntdll",
"vstruct.defs.windows.win_6_1_i386.ntoskrnl",
"vstruct.defs.windows.win_6_1_i386.win32k",
"vstruct.defs.windows.win_6_1_wow64",
"vstruct.defs.windows.win_6_1_wow64.ntdll",
"vstruct.defs.windows.win_6_2_amd64",
"vstruct.defs.windows.win_6_2_amd64.ntdll",
"vstruct.defs.windows.win_6_2_amd64.ntoskrnl",
"vstruct.defs.windows.win_6_2_amd64.win32k",
"vstruct.defs.windows.win_6_2_i386",
"vstruct.defs.windows.win_6_2_i386.ntdll",
"vstruct.defs.windows.win_6_2_i386.ntoskrnl",
"vstruct.defs.windows.win_6_2_i386.win32k",
"vstruct.defs.windows.win_6_2_wow64",
"vstruct.defs.windows.win_6_2_wow64.ntdll",
"vstruct.defs.windows.win_6_3_amd64",
"vstruct.defs.windows.win_6_3_amd64.ntdll",
"vstruct.defs.windows.win_6_3_amd64.ntoskrnl",
"vstruct.defs.windows.win_6_3_i386",
"vstruct.defs.windows.win_6_3_i386.ntdll",
"vstruct.defs.windows.win_6_3_i386.ntoskrnl",
"vstruct.defs.windows.win_6_3_wow64",
"vstruct.defs.windows.win_6_3_wow64.ntdll",
]
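Maintaining such a list by hand is tedious and easy to let drift; PyInstaller also ships a `collect_submodules` helper that enumerates a package's importable submodules. A hedged alternative sketch, not what this hook actually does (it would pull in every vivisect/vstruct submodule, including ones capa never loads):
```python
# sketch of generating the hidden-import list programmatically instead of
# spelling out each vivisect/vstruct submodule by hand.
from PyInstaller.utils.hooks import collect_submodules

hiddenimports = collect_submodules("vivisect") + collect_submodules("vstruct")
```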


@@ -16,9 +16,10 @@ with open('./capa/version.py', 'wb') as f:
# - commits since
# g------- git hash fragment
version = (subprocess.check_output(["git", "describe", "--always", "--tags", "--long"])
.decode("utf-8")
.strip()
.replace("tags/", ""))
f.write("__version__ = '%s'" % version)
f.write(("__version__ = '%s'" % version).encode("utf-8"))
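The added `.encode("utf-8")` matters because the spec opens `capa/version.py` in binary mode (`'wb'`), and under Python 3 a binary file only accepts `bytes`. A standalone illustration of the fixed line (the version string is a made-up example of `git describe` output):
```python
# illustrative only: why the encode() call is needed when writing to a file
# opened with 'wb' under Python 3.
version = "v3.0.2-0-gabcdef1"  # example `git describe --always --tags --long` output
with open("./capa/version.py", "wb") as f:
    f.write(("__version__ = '%s'" % version).encode("utf-8"))
```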
a = Analysis(
# when invoking pyinstaller from the project root,
@@ -32,6 +33,7 @@ a = Analysis(
# this gets invoked from the directory of the spec file,
# i.e. ./.github/pyinstaller
('../../rules', 'rules'),
('../../sigs', 'sigs'),
# capa.render.default uses tabulate that depends on wcwidth.
# it seems wcwidth uses a json file `version.json`
@@ -41,128 +43,6 @@ a = Analysis(
# ref: https://stackoverflow.com/a/62278462/87207
(os.path.dirname(wcwidth.__file__), 'wcwidth')
],
hiddenimports=[
# vivisect does manual/runtime importing of its modules,
# so declare the things that could be imported here.
"vivisect",
"vivisect.analysis",
"vivisect.analysis.amd64",
"vivisect.analysis.amd64",
"vivisect.analysis.amd64.emulation",
"vivisect.analysis.amd64.golang",
"vivisect.analysis.crypto",
"vivisect.analysis.crypto",
"vivisect.analysis.crypto.constants",
"vivisect.analysis.elf",
"vivisect.analysis.elf",
"vivisect.analysis.elf.elfplt",
"vivisect.analysis.elf.libc_start_main",
"vivisect.analysis.generic",
"vivisect.analysis.generic",
"vivisect.analysis.generic.codeblocks",
"vivisect.analysis.generic.emucode",
"vivisect.analysis.generic.entrypoints",
"vivisect.analysis.generic.funcentries",
"vivisect.analysis.generic.impapi",
"vivisect.analysis.generic.mkpointers",
"vivisect.analysis.generic.pointers",
"vivisect.analysis.generic.pointertables",
"vivisect.analysis.generic.relocations",
"vivisect.analysis.generic.strconst",
"vivisect.analysis.generic.switchcase",
"vivisect.analysis.generic.thunks",
"vivisect.analysis.i386",
"vivisect.analysis.i386",
"vivisect.analysis.i386.calling",
"vivisect.analysis.i386.golang",
"vivisect.analysis.i386.importcalls",
"vivisect.analysis.i386.instrhook",
"vivisect.analysis.i386.thunk_bx",
"vivisect.analysis.ms",
"vivisect.analysis.ms",
"vivisect.analysis.ms.hotpatch",
"vivisect.analysis.ms.localhints",
"vivisect.analysis.ms.msvc",
"vivisect.analysis.ms.msvcfunc",
"vivisect.analysis.ms.vftables",
"vivisect.analysis.pe",
"vivisect.impapi.posix.amd64",
"vivisect.impapi.posix.i386",
"vivisect.impapi.windows",
"vivisect.impapi.windows.amd64",
"vivisect.impapi.windows.i386",
"vivisect.impapi.winkern.i386",
"vivisect.impapi.winkern.amd64",
"vivisect.parsers.blob",
"vivisect.parsers.elf",
"vivisect.parsers.ihex",
"vivisect.parsers.macho",
"vivisect.parsers.pe",
"vivisect.parsers.utils",
"vivisect.storage",
"vivisect.storage.basicfile",
"vstruct.constants",
"vstruct.constants.ntstatus",
"vstruct.defs",
"vstruct.defs.arm7",
"vstruct.defs.bmp",
"vstruct.defs.dns",
"vstruct.defs.elf",
"vstruct.defs.gif",
"vstruct.defs.ihex",
"vstruct.defs.inet",
"vstruct.defs.java",
"vstruct.defs.kdcom",
"vstruct.defs.macho",
"vstruct.defs.macho.const",
"vstruct.defs.macho.fat",
"vstruct.defs.macho.loader",
"vstruct.defs.macho.stabs",
"vstruct.defs.minidump",
"vstruct.defs.pcap",
"vstruct.defs.pe",
"vstruct.defs.pptp",
"vstruct.defs.rar",
"vstruct.defs.swf",
"vstruct.defs.win32",
"vstruct.defs.windows",
"vstruct.defs.windows.win_5_1_i386",
"vstruct.defs.windows.win_5_1_i386.ntdll",
"vstruct.defs.windows.win_5_1_i386.ntoskrnl",
"vstruct.defs.windows.win_5_1_i386.win32k",
"vstruct.defs.windows.win_5_2_i386",
"vstruct.defs.windows.win_5_2_i386.ntdll",
"vstruct.defs.windows.win_5_2_i386.ntoskrnl",
"vstruct.defs.windows.win_5_2_i386.win32k",
"vstruct.defs.windows.win_6_1_amd64",
"vstruct.defs.windows.win_6_1_amd64.ntdll",
"vstruct.defs.windows.win_6_1_amd64.ntoskrnl",
"vstruct.defs.windows.win_6_1_amd64.win32k",
"vstruct.defs.windows.win_6_1_i386",
"vstruct.defs.windows.win_6_1_i386.ntdll",
"vstruct.defs.windows.win_6_1_i386.ntoskrnl",
"vstruct.defs.windows.win_6_1_i386.win32k",
"vstruct.defs.windows.win_6_1_wow64",
"vstruct.defs.windows.win_6_1_wow64.ntdll",
"vstruct.defs.windows.win_6_2_amd64",
"vstruct.defs.windows.win_6_2_amd64.ntdll",
"vstruct.defs.windows.win_6_2_amd64.ntoskrnl",
"vstruct.defs.windows.win_6_2_amd64.win32k",
"vstruct.defs.windows.win_6_2_i386",
"vstruct.defs.windows.win_6_2_i386.ntdll",
"vstruct.defs.windows.win_6_2_i386.ntoskrnl",
"vstruct.defs.windows.win_6_2_i386.win32k",
"vstruct.defs.windows.win_6_2_wow64",
"vstruct.defs.windows.win_6_2_wow64.ntdll",
"vstruct.defs.windows.win_6_3_amd64",
"vstruct.defs.windows.win_6_3_amd64.ntdll",
"vstruct.defs.windows.win_6_3_amd64.ntoskrnl",
"vstruct.defs.windows.win_6_3_i386",
"vstruct.defs.windows.win_6_3_i386.ntdll",
"vstruct.defs.windows.win_6_3_i386.ntoskrnl",
"vstruct.defs.windows.win_6_3_wow64",
"vstruct.defs.windows.win_6_3_wow64.ntdll",
],
# when invoking pyinstaller from the project root,
# this gets run from the project root.
hookspath=['.github/pyinstaller/hooks'],
@@ -180,6 +60,25 @@ a = Analysis(
# since we don't spawn a notebook, we can safely remove these.
"IPython",
"ipywidgets",
# these are pulled in by networkx
# but we don't need to compute the strongly connected components.
"numpy",
"scipy",
"matplotlib",
"pandas",
"pytest",
# deps from viv that we don't use.
# this duplicates the entries in `hook-vivisect`,
# but works better this way.
"vqt",
"vdb.qt",
"envi.qt",
"PyQt5",
"qt5",
"pyqtwebengine",
"pyasn1"
])
a.binaries = a.binaries - TOC([
@@ -210,5 +109,4 @@ exe = EXE(pyz,
# a.datas,
# strip=None,
# upx=True,
# name='capa-dat')
# name='capa-dat')


@@ -1,82 +1,116 @@
name: build
on:
release:
types: [edited, published]
jobs:
build:
name: PyInstaller for ${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
matrix:
include:
- os: ubuntu-16.04
# use old linux so that the shared library versioning is more portable
artifact_name: capa
asset_name: linux
- os: windows-latest
artifact_name: capa.exe
asset_name: windows
- os: macos-latest
artifact_name: capa
asset_name: macos
steps:
- name: Checkout capa
uses: actions/checkout@v2
with:
submodules: true
- name: Set up Python 2.7
uses: actions/setup-python@v2
with:
python-version: 2.7
- if: matrix.os == 'ubuntu-latest'
run: sudo apt-get install -y libyaml-dev
- if: matrix.os == 'windows-latest'
run: |
choco install vcredist2008
choco install --ignore-dependencies vcpython27
- name: Install PyInstaller
# pyinstaller 4 doesn't support Python 2.7
run: pip install 'pyinstaller==3.*'
- name: Install capa
run: pip install -e .
- name: Build standalone executable
run: pyinstaller .github/pyinstaller/pyinstaller.spec
- name: Does it run?
run: dist/capa "tests/data/Practical Malware Analysis Lab 01-01.dll_"
- uses: actions/upload-artifact@v2
with:
name: ${{ matrix.asset_name }}
path: dist/${{ matrix.artifact_name }}
zip:
name: zip ${{ matrix.asset_name }}
runs-on: ubuntu-latest
needs: build
strategy:
matrix:
include:
- asset_name: linux
artifact_name: capa
- asset_name: windows
artifact_name: capa.exe
- asset_name: macos
artifact_name: capa
steps:
- name: Download ${{ matrix.asset_name }}
uses: actions/download-artifact@v2
with:
name: ${{ matrix.asset_name }}
- name: Set executable flag
run: chmod +x ${{ matrix.artifact_name }}
- name: Set zip name
run: echo "zip_name=capa-${GITHUB_REF#refs/tags/}-${{ matrix.asset_name }}.zip" >> $GITHUB_ENV
- name: Zip ${{ matrix.artifact_name }} into ${{ env.zip_name }}
run: zip ${{ env.zip_name }} ${{ matrix.artifact_name }}
- name: Upload ${{ env.zip_name }} to GH Release
uses: svenstaro/upload-release-action@v2
with:
repo_token: ${{ secrets.GITHUB_TOKEN}}
file: ${{ env.zip_name }}
tag: ${{ github.ref }}
name: build
on:
push:
branches: [master]
release:
types: [edited, published]
jobs:
build:
name: PyInstaller for ${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
matrix:
include:
- os: ubuntu-18.04
# use old linux so that the shared library versioning is more portable
artifact_name: capa
asset_name: linux
- os: windows-2019
artifact_name: capa.exe
asset_name: windows
- os: macos-10.15
artifact_name: capa
asset_name: macos
steps:
- name: Checkout capa
uses: actions/checkout@v2
with:
submodules: true
# using Python 3.8 to support running across multiple operating systems including Windows 7
- name: Set up Python 3.8
uses: actions/setup-python@v2
with:
python-version: 3.8
- if: matrix.os == 'ubuntu-18.04'
run: sudo apt-get install -y libyaml-dev
- name: Install PyInstaller
run: pip install 'pyinstaller==4.2'
- name: Install capa
run: pip install -e .
- name: Build standalone executable
run: pyinstaller .github/pyinstaller/pyinstaller.spec
- name: Does it run (PE)?
run: dist/capa "tests/data/Practical Malware Analysis Lab 01-01.dll_"
- name: Does it run (Shellcode)?
run: dist/capa "tests/data/499c2a85f6e8142c3f48d4251c9c7cd6.raw32"
- name: Does it run (ELF)?
run: dist/capa "tests/data/7351f8a40c5450557b24622417fc478d.elf_"
- uses: actions/upload-artifact@v2
with:
name: ${{ matrix.asset_name }}
path: dist/${{ matrix.artifact_name }}
test_run:
# test that binaries run on push to master
if: github.event_name == 'push'
name: Test run on ${{ matrix.os }}
runs-on: ${{ matrix.os }}
needs: [build]
strategy:
matrix:
include:
# OSs not already tested above
- os: ubuntu-18.04
artifact_name: capa
asset_name: linux
- os: ubuntu-20.04
artifact_name: capa
asset_name: linux
- os: windows-2016
artifact_name: capa.exe
asset_name: windows
steps:
- name: Download ${{ matrix.asset_name }}
uses: actions/download-artifact@v2
with:
name: ${{ matrix.asset_name }}
- name: Set executable flag
if: matrix.os != 'windows-2016'
run: chmod +x ${{ matrix.artifact_name }}
- name: Run capa
run: ./${{ matrix.artifact_name }} -h
zip_and_upload:
# upload zipped binaries to Release page
if: github.event_name == 'release'
name: zip and upload ${{ matrix.asset_name }}
runs-on: ubuntu-20.04
needs: [build]
strategy:
matrix:
include:
- asset_name: linux
artifact_name: capa
- asset_name: windows
artifact_name: capa.exe
- asset_name: macos
artifact_name: capa
steps:
- name: Download ${{ matrix.asset_name }}
uses: actions/download-artifact@v2
with:
name: ${{ matrix.asset_name }}
- name: Set executable flag
run: chmod +x ${{ matrix.artifact_name }}
- name: Set zip name
run: echo "zip_name=capa-${GITHUB_REF#refs/tags/}-${{ matrix.asset_name }}.zip" >> $GITHUB_ENV
- name: Zip ${{ matrix.artifact_name }} into ${{ env.zip_name }}
run: zip ${{ env.zip_name }} ${{ matrix.artifact_name }}
- name: Upload ${{ env.zip_name }} to GH Release
uses: svenstaro/upload-release-action@v2
with:
repo_token: ${{ secrets.GITHUB_TOKEN}}
file: ${{ env.zip_name }}
tag: ${{ github.ref }}

.github/workflows/changelog.yml vendored Normal file

@@ -0,0 +1,41 @@
name: changelog
on:
# We need pull_request_target instead of pull_request because a write
# repository token is needed to add a review to a PR. DO NOT BUILD
# OR RUN UNTRUSTED CODE FROM PRs IN THIS ACTION
pull_request_target:
types: [opened, edited, synchronize]
jobs:
check_changelog:
# no need to check for dependency updates via dependabot
if: github.actor != 'dependabot[bot]' && github.actor != 'dependabot-preview[bot]'
runs-on: ubuntu-20.04
env:
NO_CHANGELOG: '[x] No CHANGELOG update needed'
steps:
- name: Get changed files
id: files
uses: Ana06/get-changed-files@v1.2
- name: check changelog updated
id: changelog_updated
env:
PR_BODY: ${{ github.event.pull_request.body }}
FILES: ${{ steps.files.outputs.modified }}
run: |
echo $FILES | grep -qF 'CHANGELOG.md' || echo $PR_BODY | grep -qiF "$NO_CHANGELOG"
- name: Reject pull request if no CHANGELOG update
if: ${{ always() && steps.changelog_updated.outcome == 'failure' }}
uses: Ana06/automatic-pull-request-review@v0.1.0
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
event: REQUEST_CHANGES
body: "Please add bug fixes, new features, breaking changes and anything else you think is worthwhile mentioning to the `master (unreleased)` section of CHANGELOG.md. If no CHANGELOG update is needed add the following to the PR description: `${{ env.NO_CHANGELOG }}`"
allow_duplicate: false
- name: Dismiss previous review if CHANGELOG update
uses: Ana06/automatic-pull-request-review@v0.1.0
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
event: DISMISS
body: "CHANGELOG updated or no update needed, thanks! :smile:"


@@ -1,29 +1,30 @@
# This workflows will upload a Python Package using Twine when a release is created
# For more information see: https://help.github.com/en/actions/language-and-framework-guides/using-python-with-github-actions#publishing-to-package-registries
name: publish to pypi
on:
release:
types: [published]
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: '2.7'
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install setuptools wheel twine
- name: Build and publish
env:
TWINE_USERNAME: ${{ secrets.PYPI_USERNAME }}
TWINE_PASSWORD: ${{ secrets.PYPI_PASSWORD }}
run: |
python setup.py sdist bdist_wheel
twine upload --skip-existing dist/*
# This workflows will upload a Python Package using Twine when a release is created
# For more information see: https://help.github.com/en/actions/language-and-framework-guides/using-python-with-github-actions#publishing-to-package-registries
name: publish to pypi
on:
release:
types: [published]
jobs:
deploy:
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@v2
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: '3.6'
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install setuptools wheel twine
- name: Build and publish
env:
TWINE_USERNAME: ${{ secrets.PYPI_USERNAME }}
TWINE_PASSWORD: ${{ secrets.PYPI_PASSWORD }}
run: |
python setup.py sdist bdist_wheel
twine upload --skip-existing dist/*

.github/workflows/tag.yml vendored Normal file

@@ -0,0 +1,29 @@
name: tag
on:
release:
types: [published]
jobs:
tag:
name: Tag capa rules
runs-on: ubuntu-20.04
steps:
- name: Checkout capa-rules
uses: actions/checkout@v2
with:
repository: fireeye/capa-rules
token: ${{ secrets.CAPA_TOKEN }}
- name: Tag capa-rules
run: |
# user information is needed to create annotated tags (with a message)
git config user.email 'capa-dev@fireeye.com'
git config user.name 'Capa Bot'
name=${{ github.event.release.tag_name }}
git tag $name -m "https://github.com/fireeye/capa/releases/$name"
- name: Push tag to capa-rules
uses: ad-m/github-push-action@master
with:
repository: fireeye/capa-rules
github_token: ${{ secrets.CAPA_TOKEN }}
tags: true


@@ -6,9 +6,24 @@ on:
pull_request:
branches: [ master ]
# save workspaces to speed up testing
env:
CAPA_SAVE_WORKSPACE: "True"
jobs:
changelog_format:
runs-on: ubuntu-20.04
steps:
- name: Checkout capa
uses: actions/checkout@v2
# The sync GH action in capa-rules relies on a single '- *$' in the CHANGELOG file
- name: Ensure CHANGELOG has '- *$'
run: |
number=$(grep '\- *$' CHANGELOG.md | wc -l)
if [ $number != 1 ]; then exit 1; fi
code_style:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
steps:
- name: Checkout capa
uses: actions/checkout@v2
@@ -17,16 +32,18 @@ jobs:
with:
python-version: 3.8
- name: Install dependencies
run: pip install 'isort==5.*' black
run: pip install -e .[dev]
- name: Lint with isort
run: isort --profile black --length-sort --line-width 120 -c .
- name: Lint with black
run: black -l 120 --check .
- name: Check types with mypy
run: mypy --config-file .github/mypy/mypy.ini capa/ scripts/ tests/
rule_linter:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
steps:
- name: Checkout capa with rules submodule
- name: Checkout capa with submodules
uses: actions/checkout@v2
with:
submodules: true
@@ -34,37 +51,40 @@ jobs:
uses: actions/setup-python@v2
with:
python-version: 3.8
# We don't need vivisect, so we can install capa using Python3
- name: Install capa
run: pip install -e .
- name: Run rule linter
run: python scripts/lint.py rules/
tests:
name: Tests in ${{ matrix.python }}
runs-on: ubuntu-latest
name: Tests in ${{ matrix.python-version }} on ${{ matrix.os }}
runs-on: ${{ matrix.os }}
needs: [code_style, rule_linter]
strategy:
fail-fast: false
matrix:
os: [ubuntu-20.04, windows-2019, macos-10.15]
# across all operating systems
python-version: [3.6, 3.9]
include:
- python: 2.7
- python: 3.7
- python: 3.8
- python: 3.9.1
# on Ubuntu run these as well
- os: ubuntu-20.04
python-version: 3.7
- os: ubuntu-20.04
python-version: 3.8
steps:
- name: Checkout capa with submodules
uses: actions/checkout@v2
with:
submodules: true
- name: Set up Python ${{ matrix.python }}
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python }}
python-version: ${{ matrix.python-version }}
- name: Install pyyaml
if: matrix.os == 'ubuntu-20.04'
run: sudo apt-get install -y libyaml-dev
- name: Install capa
run: pip install -e .[dev]
- name: Run tests
run: pytest tests/
run: pytest -v tests/

File diff suppressed because it is too large


@@ -1,14 +1,20 @@
![capa](.github/logo.png)
![capa](https://github.com/fireeye/capa/blob/master/.github/logo.png)
[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/flare-capa)](https://pypi.org/project/flare-capa)
[![Last release](https://img.shields.io/github/v/release/fireeye/capa)](https://github.com/fireeye/capa/releases)
[![Number of rules](https://img.shields.io/badge/rules-633-blue.svg)](https://github.com/fireeye/capa-rules)
[![CI status](https://github.com/fireeye/capa/workflows/CI/badge.svg)](https://github.com/fireeye/capa/actions?query=workflow%3ACI+event%3Apush+branch%3Amaster)
[![Number of rules](https://img.shields.io/badge/rules-458-blue.svg)](https://github.com/fireeye/capa-rules)
[![Downloads](https://img.shields.io/github/downloads/fireeye/capa/total)](https://github.com/fireeye/capa/releases)
[![License](https://img.shields.io/badge/license-Apache--2.0-green.svg)](LICENSE.txt)
capa detects capabilities in executable files.
You run it against a PE file or shellcode and it tells you what it thinks the program can do.
You run it against a PE, ELF, or shellcode file and it tells you what it thinks the program can do.
For example, it might suggest that the file is a backdoor, is capable of installing services, or relies on HTTP to communicate.
Check out the overview in our first [capa blog post](https://www.fireeye.com/blog/threat-research/2020/07/capa-automatically-identify-malware-capabilities.html).
Check out:
- the overview in our first [capa blog post](https://www.fireeye.com/blog/threat-research/2020/07/capa-automatically-identify-malware-capabilities.html)
- the major version 2.0 updates described in our [second blog post](https://www.fireeye.com/blog/threat-research/2021/07/capa-2-better-stronger-faster.html)
- the major version 3.0 (ELF support) described in the [third blog post](https://www.fireeye.com/blog/threat-research/2021/09/elfant-in-the-room-capa-v3.html)
```
$ capa.exe suspicious.exe
@@ -62,16 +68,9 @@ $ capa.exe suspicious.exe
Download stable releases of the standalone capa binaries [here](https://github.com/fireeye/capa/releases). You can run the standalone binaries without installation. capa is a command line tool that should be run from the terminal.
<!--
Alternatively, you can fetch a nightly build of a standalone binary from one of the following links. These are built using the latest development branch.
- Windows 64bit: TODO
- Linux: TODO
- OSX: TODO
-->
To use capa as a library or integrate with another tool, see [doc/installation.md](https://github.com/fireeye/capa/blob/master/doc/installation.md) for further setup instructions.
To use capa as a library or integrate with another tool, see [doc/installation.md](doc/installation.md) for further setup instructions.
For more information about how to use capa, see [doc/usage.md](doc/usage.md).
For more information about how to use capa, see [doc/usage.md](https://github.com/fireeye/capa/blob/master/doc/usage.md).
# example
@@ -88,7 +87,7 @@ This is useful for at least two reasons:
- it shows where within the binary an experienced analyst might study with IDA Pro
```
λ capa.exe suspicious.exe -vv
$ capa.exe suspicious.exe -vv
...
execute shell command and capture output
namespace c2/shell
@@ -146,19 +145,21 @@ rule:
The [github.com/fireeye/capa-rules](https://github.com/fireeye/capa-rules) repository contains hundreds of standard library rules that are distributed with capa.
Please learn to write rules and contribute new entries as you find interesting techniques in malware.
If you use IDA Pro, then you use can use the [capa explorer IDA plugin](capa/ida/plugin/).
capa explorer lets you quickly identify and navigate to interesting areas of a program and dissect capa rule matches at
the assembly level.
If you use IDA Pro, then you can use the [capa explorer](https://github.com/fireeye/capa/tree/master/capa/ida/plugin) plugin.
capa explorer helps you identify interesting areas of a program and build new capa rules using features extracted directly from your IDA Pro database.
![capa + IDA Pro integration](doc/img/ida_plugin_intro.gif)
![capa + IDA Pro integration](https://github.com/fireeye/capa/blob/master/doc/img/explorer_expanded.png)
# further information
## capa
- [doc/installation](doc/installation.md)
- [doc/usage](doc/usage.md)
- [doc/limitations](doc/limitations.md)
- [Contributing Guide](.github/CONTRIBUTING.md)
- [Installation](https://github.com/fireeye/capa/blob/master/doc/installation.md)
- [Usage](https://github.com/fireeye/capa/blob/master/doc/usage.md)
- [Limitations](https://github.com/fireeye/capa/blob/master/doc/limitations.md)
- [Contributing Guide](https://github.com/fireeye/capa/blob/master/.github/CONTRIBUTING.md)
## capa rules
- [capa-rules repository](https://github.com/fireeye/capa-rules)
- [capa-rules rule format](https://github.com/fireeye/capa-rules/blob/master/doc/format.md)
## capa testfiles
The [capa-testfiles repository](https://github.com/fireeye/capa-testfiles) contains the data we use to test capa's code and rules


@@ -8,11 +8,23 @@
import copy
import collections
from typing import Set, Dict, List, Tuple, Union, Mapping, Iterable
import capa.features
import capa.rules
import capa.features.common
from capa.features.common import Feature
# a collection of features and the locations at which they are found.
#
# used throughout matching as the context in which features are searched:
# to check if a feature exists, do: `Number(0x10) in features`.
# to collect the locations of a feature, do: `features[Number(0x10)]`
#
# aliased here so that the type can be documented and xref'd.
FeatureSet = Dict[Feature, Set[int]]
class Statement(object):
class Statement:
"""
superclass for structural nodes, such as and/or/not.
this exists to provide a default impl for `__str__` and `__repr__`,
@@ -33,7 +45,7 @@ class Statement(object):
def __repr__(self):
return str(self)
def evaluate(self, ctx):
def evaluate(self, features: FeatureSet) -> "Result":
"""
classes that inherit `Statement` must implement `evaluate`
@@ -50,7 +62,7 @@ class Statement(object):
yield self.child
if hasattr(self, "children"):
for child in self.children:
for child in getattr(self, "children"):
yield child
def replace_child(self, existing, new):
@@ -59,12 +71,13 @@ class Statement(object):
self.child = new
if hasattr(self, "children"):
for i, child in enumerate(self.children):
children = getattr(self, "children")
for i, child in enumerate(children):
if child is existing:
self.children[i] = new
children[i] = new
class Result(object):
class Result:
"""
represents the results of an evaluation of statements against features.
@@ -78,7 +91,7 @@ class Result(object):
we need this so that we can render the tree of expressions and their results.
"""
def __init__(self, success, statement, children, locations=None):
def __init__(self, success: bool, statement: Union[Statement, Feature], children: List["Result"], locations=None):
"""
args:
success (bool)
@@ -199,38 +212,40 @@ class Subscope(Statement):
raise ValueError("cannot evaluate a subscope directly!")
def topologically_order_rules(rules):
# mapping from rule name to list of: (location of match, result object)
#
# used throughout matching and rendering to collect the results
# of statement evaluation and their locations.
#
# to check if a rule matched, do: `"TCP client" in matches`.
# to find where a rule matched, do: `map(first, matches["TCP client"])`
# to see how a rule matched, do:
#
# for address, match_details in matches["TCP client"]:
# inspect(match_details)
#
# aliased here so that the type can be documented and xref'd.
MatchResults = Mapping[str, List[Tuple[int, Result]]]
def index_rule_matches(features: FeatureSet, rule: "capa.rules.Rule", locations: Iterable[int]):
"""
order the given rules such that dependencies show up before dependents.
this means that as we match rules, we can add features for the matches, and these
will be matched by subsequent rules if they follow this order.
record into the given featureset that the given rule matched at the given locations.
assumes that the rule dependency graph is a DAG.
naively, this is just adding a MatchedRule feature;
however, we also want to record matches for the rule's namespaces.
updates `features` in-place. doesn't modify the remaining arguments.
"""
# we evaluate `rules` multiple times, so if its a generator, realize it into a list.
rules = list(rules)
namespaces = capa.rules.index_rules_by_namespace(rules)
rules = {rule.name: rule for rule in rules}
seen = set([])
ret = []
def rec(rule):
if rule.name in seen:
return
for dep in rule.get_dependencies(namespaces):
rec(rules[dep])
ret.append(rule)
seen.add(rule.name)
for rule in rules.values():
rec(rule)
return ret
features[capa.features.common.MatchedRule(rule.name)].update(locations)
namespace = rule.meta.get("namespace")
if namespace:
while namespace:
features[capa.features.common.MatchedRule(namespace)].update(locations)
namespace, _, _ = namespace.rpartition("/")
def match(rules, features, va):
def match(rules: List["capa.rules.Rule"], features: FeatureSet, va: int) -> Tuple[FeatureSet, MatchResults]:
"""
Args:
rules (List[capa.rules.Rule]): these must already be ordered topologically by dependency.
@@ -238,11 +253,11 @@ def match(rules, features, va):
va (int): location of the features
Returns:
Tuple[List[capa.features.Feature], Dict[str, Tuple[int, capa.engine.Result]]]: two-tuple with entries:
- list of features used for matching (which may be greater than argument, due to rule match features), and
- mapping from rule name to (location of match, result object)
Tuple[FeatureSet, MatchResults]: two-tuple with entries:
- set of features used for matching (which may be a superset of the given `features` argument, due to rule match features), and
- mapping from rule name to [(location of match, result object)]
"""
results = collections.defaultdict(list)
results = collections.defaultdict(list) # type: MatchResults
# copy features so that we can modify it
# without affecting the caller (keep this function pure)
@@ -254,12 +269,9 @@ def match(rules, features, va):
res = rule.evaluate(features)
if res:
results[rule.name].append((va, res))
features[capa.features.MatchedRule(rule.name)].add(va)
namespace = rule.meta.get("namespace")
if namespace:
while namespace:
features[capa.features.MatchedRule(namespace)].add(va)
namespace, _, _ = namespace.rpartition("/")
# we need to update the current `features`
# because subsequent iterations of this loop may use newly added features,
# such as rule or namespace matches.
index_rule_matches(features, rule, [va])
return (features, results)
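Taken together, `FeatureSet`, `MatchResults`, and `match()` form the matching API that the rest of capa builds on. A minimal usage sketch, assuming capa is installed; the feature, address, and rule list are illustrative and not taken from capa's own callers:
```python
# illustrative only: drive capa.engine.match() with a tiny feature set.
import capa.engine
from capa.features.insn import Number

va = 0x401000
features = {Number(0x10): {va}}   # FeatureSet: Feature -> set of locations

# in real use this is a List[capa.rules.Rule], ordered so dependencies come first.
ordered_rules = []

features, matches = capa.engine.match(ordered_rules, features, va)

if "TCP client" in matches:       # MatchResults: rule name -> [(location, Result)]
    print([addr for addr, _ in matches["TCP client"]])
```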


@@ -1,222 +0,0 @@
# Copyright (C) 2020 FireEye, Inc. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at: [package root]/LICENSE.txt
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import re
import sys
import codecs
import logging
import capa.engine
logger = logging.getLogger(__name__)
MAX_BYTES_FEATURE_SIZE = 0x100
# thunks may be chained so we specify a delta to control the depth to which these chains are explored
THUNK_CHAIN_DEPTH_DELTA = 5
# identifiers for supported architectures names that tweak a feature
# for example, offset/x32
ARCH_X32 = "x32"
ARCH_X64 = "x64"
VALID_ARCH = (ARCH_X32, ARCH_X64)
def bytes_to_str(b):
if sys.version_info[0] >= 3:
return str(codecs.encode(b, "hex").decode("utf-8"))
else:
return codecs.encode(b, "hex")
def hex_string(h):
""" render hex string e.g. "0a40b1" as "0A 40 B1" """
return " ".join(h[i : i + 2] for i in range(0, len(h), 2)).upper()
class Feature(object):
def __init__(self, value, arch=None, description=None):
"""
Args:
value (any): the value of the feature, such as the number or string.
arch (str): one of the VALID_ARCH values, or None.
When None, then the feature applies to any architecture.
Modifies the feature name from `feature` to `feature/arch`, like `offset/x32`.
description (str): a human-readable description that explains the feature value.
"""
super(Feature, self).__init__()
if arch is not None:
if arch not in VALID_ARCH:
raise ValueError("arch '%s' must be one of %s" % (arch, VALID_ARCH))
self.name = self.__class__.__name__.lower() + "/" + arch
else:
self.name = self.__class__.__name__.lower()
self.value = value
self.arch = arch
self.description = description
def __hash__(self):
return hash((self.name, self.value, self.arch))
def __eq__(self, other):
return self.name == other.name and self.value == other.value and self.arch == other.arch
def get_value_str(self):
"""
render the value of this feature, for use by `__str__` and friends.
subclasses should override to customize the rendering.
Returns: any
"""
return self.value
def __str__(self):
if self.value is not None:
if self.description:
return "%s(%s = %s)" % (self.name, self.get_value_str(), self.description)
else:
return "%s(%s)" % (self.name, self.get_value_str())
else:
return "%s" % self.name
def __repr__(self):
return str(self)
def evaluate(self, ctx):
return capa.engine.Result(self in ctx, self, [], locations=ctx.get(self, []))
def freeze_serialize(self):
if self.arch is not None:
return (self.__class__.__name__, [self.value, {"arch": self.arch}])
else:
return (self.__class__.__name__, [self.value])
@classmethod
def freeze_deserialize(cls, args):
# as you can see below in code,
# if the last argument is a dictionary,
# consider it to be kwargs passed to the feature constructor.
if len(args) == 1:
return cls(*args)
elif isinstance(args[-1], dict):
kwargs = args[-1]
args = args[:-1]
return cls(*args, **kwargs)
class MatchedRule(Feature):
def __init__(self, value, description=None):
super(MatchedRule, self).__init__(value, description=description)
self.name = "match"
class Characteristic(Feature):
def __init__(self, value, description=None):
super(Characteristic, self).__init__(value, description=description)
class String(Feature):
def __init__(self, value, description=None):
super(String, self).__init__(value, description=description)
class Regex(String):
def __init__(self, value, description=None):
super(Regex, self).__init__(value, description=description)
pat = self.value[len("/") : -len("/")]
flags = re.DOTALL
if value.endswith("/i"):
pat = self.value[len("/") : -len("/i")]
flags |= re.IGNORECASE
try:
self.re = re.compile(pat, flags)
except re.error:
if value.endswith("/i"):
value = value[: -len("i")]
raise ValueError(
"invalid regular expression: %s it should use Python syntax, try it at https://pythex.org" % value
)
def evaluate(self, ctx):
for feature, locations in ctx.items():
if not isinstance(feature, (capa.features.String,)):
continue
# `re.search` finds a match anywhere in the given string
# which implies leading and/or trailing whitespace.
# using this mode is more convenient for rule authors,
# so that they don't have to prefix/suffix their terms like: /.*foo.*/.
if self.re.search(feature.value):
# unlike other features, we cannot put a reference to `self` directly in a `Result`.
# this is because `self` may match on many strings, so we can't stuff the matched value into it.
# instead, return a new instance that has a reference to both the regex and the matched value.
# see #262.
return capa.engine.Result(True, _MatchedRegex(self, feature.value), [], locations=locations)
return capa.engine.Result(False, _MatchedRegex(self, None), [])
def __str__(self):
return "regex(string =~ %s)" % self.value
class _MatchedRegex(Regex):
"""
this represents a specific instance of a regular expression feature match.
treat it the same as a `Regex` except it has the `match` field that contains the complete string that matched.
note: this type should only ever be constructed by `Regex.evaluate()`. it is not part of the public API.
"""
def __init__(self, regex, match):
"""
args:
regex (Regex): the regex feature that matches
match (string|None): the matching string or None if it doesn't match
"""
super(_MatchedRegex, self).__init__(regex.value, description=regex.description)
# we want this to collide with the name of `Regex` above,
# so that it works nicely with the renderers.
self.name = "regex"
# this may be None if the regex doesn't match
self.match = match
def __str__(self):
return 'regex(string =~ %s, matched = "%s")' % (self.value, self.match)
class StringFactory(object):
def __new__(self, value, description=None):
if value.startswith("/") and (value.endswith("/") or value.endswith("/i")):
return Regex(value, description=description)
return String(value, description=description)
class Bytes(Feature):
def __init__(self, value, description=None):
super(Bytes, self).__init__(value, description=description)
def evaluate(self, ctx):
for feature, locations in ctx.items():
if not isinstance(feature, (capa.features.Bytes,)):
continue
if feature.value.startswith(self.value):
return capa.engine.Result(True, self, [], locations=locations)
return capa.engine.Result(False, self, [])
def get_value_str(self):
return hex_string(bytes_to_str(self.value))
def freeze_serialize(self):
return (self.__class__.__name__, [bytes_to_str(self.value).upper()])
@classmethod
def freeze_deserialize(cls, args):
return cls(*[codecs.decode(x, "hex") for x in args])


@@ -6,7 +6,7 @@
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
from capa.features import Feature
from capa.features.common import Feature
class BasicBlock(Feature):

capa/features/common.py Normal file

@@ -0,0 +1,380 @@
# Copyright (C) 2020 FireEye, Inc. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at: [package root]/LICENSE.txt
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import re
import codecs
import logging
import collections
from typing import Set, Dict, Union
import capa.engine
import capa.features
import capa.features.extractors.elf
logger = logging.getLogger(__name__)
MAX_BYTES_FEATURE_SIZE = 0x100
# thunks may be chained so we specify a delta to control the depth to which these chains are explored
THUNK_CHAIN_DEPTH_DELTA = 5
def bytes_to_str(b: bytes) -> str:
return str(codecs.encode(b, "hex").decode("utf-8"))
def hex_string(h: str) -> str:
"""render hex string e.g. "0a40b1" as "0A 40 B1" """
return " ".join(h[i : i + 2] for i in range(0, len(h), 2)).upper()
def escape_string(s: str) -> str:
"""escape special characters"""
s = repr(s)
if not s.startswith(('"', "'")):
# u'hello\r\nworld' -> hello\\r\\nworld
s = s[2:-1]
else:
# 'hello\r\nworld' -> hello\\r\\nworld
s = s[1:-1]
s = s.replace("\\'", "'") # repr() may escape "'" in some edge cases, remove
s = s.replace('"', '\\"') # repr() does not escape '"', add
return s
class Feature:
def __init__(self, value: Union[str, int, bytes], bitness=None, description=None):
"""
Args:
value (any): the value of the feature, such as the number or string.
bitness (str): one of the VALID_BITNESS values, or None.
When None, then the feature applies to any bitness.
Modifies the feature name from `feature` to `feature/bitness`, like `offset/x32`.
description (str): a human-readable description that explains the feature value.
"""
super(Feature, self).__init__()
if bitness is not None:
if bitness not in VALID_BITNESS:
raise ValueError("bitness '%s' must be one of %s" % (bitness, VALID_BITNESS))
self.name = self.__class__.__name__.lower() + "/" + bitness
else:
self.name = self.__class__.__name__.lower()
self.value = value
self.bitness = bitness
self.description = description
def __hash__(self):
return hash((self.name, self.value, self.bitness))
def __eq__(self, other):
return self.name == other.name and self.value == other.value and self.bitness == other.bitness
def get_value_str(self) -> str:
"""
render the value of this feature, for use by `__str__` and friends.
subclasses should override to customize the rendering.
Returns: any
"""
return str(self.value)
def __str__(self):
if self.value is not None:
if self.description:
return "%s(%s = %s)" % (self.name, self.get_value_str(), self.description)
else:
return "%s(%s)" % (self.name, self.get_value_str())
else:
return "%s" % self.name
def __repr__(self):
return str(self)
def evaluate(self, ctx: Dict["Feature", Set[int]]) -> "capa.engine.Result":
return capa.engine.Result(self in ctx, self, [], locations=ctx.get(self, []))
def freeze_serialize(self):
if self.bitness is not None:
return (self.__class__.__name__, [self.value, {"bitness": self.bitness}])
else:
return (self.__class__.__name__, [self.value])
@classmethod
def freeze_deserialize(cls, args):
# as you can see below in code,
# if the last argument is a dictionary,
# consider it to be kwargs passed to the feature constructor.
if len(args) == 1:
return cls(*args)
elif isinstance(args[-1], dict):
kwargs = args[-1]
args = args[:-1]
return cls(*args, **kwargs)
class MatchedRule(Feature):
def __init__(self, value: str, description=None):
super(MatchedRule, self).__init__(value, description=description)
self.name = "match"
class Characteristic(Feature):
def __init__(self, value: str, description=None):
super(Characteristic, self).__init__(value, description=description)
class String(Feature):
def __init__(self, value: str, description=None):
super(String, self).__init__(value, description=description)
class Substring(String):
def __init__(self, value: str, description=None):
super(Substring, self).__init__(value, description=description)
self.value = value
def evaluate(self, ctx):
# mapping from string value to list of locations.
# will unique the locations later on.
matches = collections.defaultdict(list)
for feature, locations in ctx.items():
if not isinstance(feature, (String,)):
continue
if not isinstance(feature.value, str):
# this is a programming error: String should only contain str
raise ValueError("unexpected feature value type")
if self.value in feature.value:
matches[feature.value].extend(locations)
if matches:
# finalize: defaultdict -> dict
# which makes json serialization easier
matches = dict(matches)
# collect all locations
locations = set()
for s in matches.keys():
matches[s] = list(set(matches[s]))
locations.update(matches[s])
# unlike other features, we cannot put a reference to `self` directly in a `Result`.
# this is because `self` may match on many strings, so we can't stuff the matched value into it.
# instead, return a new instance that has a reference to both the substring and the matched values.
return capa.engine.Result(True, _MatchedSubstring(self, matches), [], locations=locations)
else:
return capa.engine.Result(False, _MatchedSubstring(self, None), [])
def __str__(self):
return "substring(%s)" % self.value
class _MatchedSubstring(Substring):
"""
this represents specific match instances of a substring feature.
treat it the same as a `Substring` except it has the `matches` field that contains the complete strings that matched.
note: this type should only ever be constructed by `Substring.evaluate()`. it is not part of the public API.
"""
def __init__(self, substring: Substring, matches):
"""
args:
substring (Substring): the substring feature that matches.
matches (Dict[string, List[int]]|None): mapping from matching string to its locations.
"""
super(_MatchedSubstring, self).__init__(str(substring.value), description=substring.description)
# we want this to collide with the name of `Substring` above,
# so that it works nicely with the renderers.
self.name = "substring"
# this may be None if the substring doesn't match
self.matches = matches
def __str__(self):
return 'substring("%s", matches = %s)' % (
self.value,
", ".join(map(lambda s: '"' + s + '"', (self.matches or {}).keys())),
)
class Regex(String):
def __init__(self, value: str, description=None):
super(Regex, self).__init__(value, description=description)
self.value = value
pat = self.value[len("/") : -len("/")]
flags = re.DOTALL
if value.endswith("/i"):
pat = self.value[len("/") : -len("/i")]
flags |= re.IGNORECASE
try:
self.re = re.compile(pat, flags)
except re.error:
if value.endswith("/i"):
value = value[: -len("i")]
raise ValueError(
"invalid regular expression: %s it should use Python syntax, try it at https://pythex.org" % value
)
def evaluate(self, ctx):
# mapping from string value to list of locations.
# will unique the locations later on.
matches = collections.defaultdict(list)
for feature, locations in ctx.items():
if not isinstance(feature, (String,)):
continue
if not isinstance(feature.value, str):
# this is a programming error: String should only contain str
raise ValueError("unexpected feature value type")
# `re.search` finds a match anywhere in the given string
# which implies leading and/or trailing whitespace.
# using this mode is more convenient for rule authors,
# so that they don't have to prefix/suffix their terms like: /.*foo.*/.
if self.re.search(feature.value):
matches[feature.value].extend(locations)
if matches:
# finalize: defaultdict -> dict
# which makes json serialization easier
matches = dict(matches)
# collect all locations
locations = set()
for s in matches.keys():
matches[s] = list(set(matches[s]))
locations.update(matches[s])
# unlike other features, we cannot put a reference to `self` directly in a `Result`.
# this is because `self` may match on many strings, so we can't stuff the matched value into it.
# instead, return a new instance that has a reference to both the regex and the matched values.
# see #262.
return capa.engine.Result(True, _MatchedRegex(self, matches), [], locations=locations)
else:
return capa.engine.Result(False, _MatchedRegex(self, None), [])
def __str__(self):
return "regex(string =~ %s)" % self.value
class _MatchedRegex(Regex):
"""
this represents specific match instances of a regular expression feature.
treat it the same as a `Regex` except it has the `matches` field that contains the complete strings that matched.
note: this type should only ever be constructed by `Regex.evaluate()`. it is not part of the public API.
"""
def __init__(self, regex: Regex, matches):
"""
args:
regex (Regex): the regex feature that matches.
matches (Dict[string, List[int]]|None): mapping from matching string to its locations.
"""
super(_MatchedRegex, self).__init__(str(regex.value), description=regex.description)
# we want this to collide with the name of `Regex` above,
# so that it works nicely with the renderers.
self.name = "regex"
# this may be None if the regex doesn't match
self.matches = matches
def __str__(self):
return "regex(string =~ %s, matches = %s)" % (
self.value,
", ".join(map(lambda s: '"' + s + '"', (self.matches or {}).keys())),
)
class StringFactory:
def __new__(cls, value: str, description=None):
if value.startswith("/") and (value.endswith("/") or value.endswith("/i")):
return Regex(value, description=description)
return String(value, description=description)
class Bytes(Feature):
def __init__(self, value: bytes, description=None):
super(Bytes, self).__init__(value, description=description)
self.value = value
def evaluate(self, ctx):
for feature, locations in ctx.items():
if not isinstance(feature, (Bytes,)):
continue
if feature.value.startswith(self.value):
return capa.engine.Result(True, self, [], locations=locations)
return capa.engine.Result(False, self, [])
def get_value_str(self):
return hex_string(bytes_to_str(self.value))
def freeze_serialize(self):
return (self.__class__.__name__, [bytes_to_str(self.value).upper()])
@classmethod
def freeze_deserialize(cls, args):
return cls(*[codecs.decode(x, "hex") for x in args])
# identifiers for supported bitness names that tweak a feature
# for example, offset/x32
BITNESS_X32 = "x32"
BITNESS_X64 = "x64"
VALID_BITNESS = (BITNESS_X32, BITNESS_X64)
# other candidates here: https://docs.microsoft.com/en-us/windows/win32/debug/pe-format#machine-types
ARCH_I386 = "i386"
ARCH_AMD64 = "amd64"
VALID_ARCH = (ARCH_I386, ARCH_AMD64)
class Arch(Feature):
def __init__(self, value: str, description=None):
super(Arch, self).__init__(value, description=description)
self.name = "arch"
OS_WINDOWS = "windows"
OS_LINUX = "linux"
OS_MACOS = "macos"
VALID_OS = {os.value for os in capa.features.extractors.elf.OS}
VALID_OS.update({OS_WINDOWS, OS_LINUX, OS_MACOS})
class OS(Feature):
def __init__(self, value: str, description=None):
super(OS, self).__init__(value, description=description)
self.name = "os"
FORMAT_PE = "pe"
FORMAT_ELF = "elf"
VALID_FORMAT = (FORMAT_PE, FORMAT_ELF)
class Format(Feature):
def __init__(self, value: str, description=None):
super(Format, self).__init__(value, description=description)
self.name = "format"
def is_global_feature(feature):
"""
is this a feature that is extracted at every scope?
today, these are OS and arch features.
"""
return isinstance(feature, (OS, Arch))
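To make the string feature classes above concrete, here is a small usage sketch; the feature values and address are made up, and this snippet is not code from the repository:
```python
# illustrative only: how String/Substring features evaluate against a FeatureSet,
# and how StringFactory routes "/.../" values to Regex.
from capa.features.common import String, Substring, StringFactory

features = {String("connect to 10.0.0.1"): {0x401000}}

print(StringFactory("/connect/").name)   # "regex": slash-delimited values become Regex
print(StringFactory("connect").name)     # "string": everything else stays a String

result = Substring("connect").evaluate(features)
print(result.success, sorted(result.locations))  # True and the location 0x401000
```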


@@ -1,286 +0,0 @@
# Copyright (C) 2020 FireEye, Inc. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at: [package root]/LICENSE.txt
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import abc
class FeatureExtractor(object):
"""
FeatureExtractor defines the interface for fetching features from a sample.
There may be multiple backends that support fetching features for capa.
For example, we use vivisect by default, but also want to support saving
and restoring features from a JSON file.
When we restore the features, we'd like to use exactly the same matching logic
to find matching rules.
Therefore, we can define a FeatureExtractor that provides features from the
serialized JSON file and do matching without a binary analysis pass.
Also, this provides a way to hook in an IDA backend.
This class is not instantiated directly; it is the base class for other implementations.
"""
__metaclass__ = abc.ABCMeta
def __init__(self):
#
# note: a subclass should define ctor parameters for its own use.
# for example, the Vivisect feature extractor might require the vw and/or path.
# this base class doesn't know what to do with that info, though.
#
super(FeatureExtractor, self).__init__()
@abc.abstractmethod
def get_base_address(self):
"""
fetch the preferred load address at which the sample was analyzed.
returns: int
"""
raise NotImplemented
@abc.abstractmethod
def extract_file_features(self):
"""
extract file-scope features.
example::
extractor = VivisectFeatureExtractor(vw, path)
for feature, va in extractor.get_file_features():
print('0x%x: %s', va, feature)
yields:
Tuple[capa.features.Feature, int]: feature and its location
"""
raise NotImplemented
@abc.abstractmethod
def get_functions(self):
"""
enumerate the functions and provide opaque values that will
subsequently be provided to `.extract_function_features()`, etc.
by "opaque value", we mean that this can be any object, as long as it
provides enough context to `.extract_function_features()`.
the opaque value should support casting to int (`__int__`) for the function start address.
yields:
any: the opaque function value.
"""
raise NotImplemented
@abc.abstractmethod
def extract_function_features(self, f):
"""
extract function-scope features.
the arguments are opaque values previously provided by `.get_functions()`, etc.
example::
extractor = VivisectFeatureExtractor(vw, path)
for function in extractor.get_functions():
for feature, va in extractor.extract_function_features(function):
print('0x%x: %s', va, feature)
args:
f [any]: an opaque value previously fetched from `.get_functions()`.
yields:
Tuple[capa.features.Feature, int]: feature and its location
"""
raise NotImplemented
@abc.abstractmethod
def get_basic_blocks(self, f):
"""
enumerate the basic blocks in the given function and provide opaque values that will
subsequently be provided to `.extract_basic_block_features()`, etc.
by "opaque value", we mean that this can be any object, as long as it
provides enough context to `.extract_basic_block_features()`.
the opaque value should support casting to int (`__int__`) for the basic block start address.
yields:
any: the opaque basic block value.
"""
raise NotImplemented
@abc.abstractmethod
def extract_basic_block_features(self, f, bb):
"""
extract basic block-scope features.
the arguments are opaque values previously provided by `.get_functions()`, etc.
example::
extractor = VivisectFeatureExtractor(vw, path)
for function in extractor.get_functions():
for bb in extractor.get_basic_blocks(function):
for feature, va in extractor.extract_basic_block_features(function, bb):
print('0x%x: %s', va, feature)
args:
f [any]: an opaque value previously fetched from `.get_functions()`.
bb [any]: an opaque value previously fetched from `.get_basic_blocks()`.
yields:
Tuple[capa.features.Feature, int]: feature and its location
"""
raise NotImplemented
@abc.abstractmethod
def get_instructions(self, f, bb):
"""
enumerate the instructions in the given basic block and provide opaque values that will
subsequently be provided to `.extract_insn_features()`, etc.
by "opaque value", we mean that this can be any object, as long as it
provides enough context to `.extract_insn_features()`.
the opaque value should support casting to int (`__int__`) for the instruction address.
yields:
any: the opaque instruction value.
"""
raise NotImplemented
@abc.abstractmethod
def extract_insn_features(self, f, bb, insn):
"""
extract instruction-scope features.
the arguments are opaque values previously provided by `.get_functions()`, etc.
example::
extractor = VivisectFeatureExtractor(vw, path)
for function in extractor.get_functions():
for bb in extractor.get_basic_blocks(function):
for insn in extractor.get_instructions(function, bb):
for feature, va in extractor.extract_insn_features(function, bb, insn):
print('0x%x: %s', va, feature)
args:
f [any]: an opaque value previously fetched from `.get_functions()`.
bb [any]: an opaque value previously fetched from `.get_basic_blocks()`.
insn [any]: an opaque value previously fetched from `.get_instructions()`.
yields:
Tuple[capa.features.Feature, int]: feature and its location
"""
raise NotImplemented
class NullFeatureExtractor(FeatureExtractor):
"""
An extractor that extracts some user-provided features.
The structure of the single parameter is demonstrated in the example below.
This is useful for testing, as we can provide expected values and see if matching works.
Also, this is how we represent features deserialized from a freeze file.
example::
extractor = NullFeatureExtractor({
'base address': 0x401000,
'file features': [
(0x402345, capa.features.Characteristic('embedded pe')),
],
'functions': {
0x401000: {
'features': [
(0x401000, capa.features.Characteristic('nzxor')),
],
'basic blocks': {
0x401000: {
'features': [
(0x401000, capa.features.Characteristic('tight-loop')),
],
'instructions': {
0x401000: {
'features': [
(0x401000, capa.features.Characteristic('nzxor')),
],
},
0x401002: ...
}
},
0x401005: ...
}
},
0x40200: ...
}
)
"""
def __init__(self, features):
super(NullFeatureExtractor, self).__init__()
self.features = features
def get_base_address(self):
return self.features["base address"]
def extract_file_features(self):
for p in self.features.get("file features", []):
va, feature = p
yield feature, va
def get_functions(self):
for va in sorted(self.features["functions"].keys()):
yield va
def extract_function_features(self, f):
for p in self.features.get("functions", {}).get(f, {}).get("features", []): # noqa: E127 line over-indented
va, feature = p
yield feature, va
def get_basic_blocks(self, f):
for va in sorted(
self.features.get("functions", {}) # noqa: E127 line over-indented
.get(f, {})
.get("basic blocks", {})
.keys()
):
yield va
def extract_basic_block_features(self, f, bb):
for p in (
self.features.get("functions", {}) # noqa: E127 line over-indented
.get(f, {})
.get("basic blocks", {})
.get(bb, {})
.get("features", [])
):
va, feature = p
yield feature, va
def get_instructions(self, f, bb):
for va in sorted(
self.features.get("functions", {}) # noqa: E127 line over-indented
.get(f, {})
.get("basic blocks", {})
.get(bb, {})
.get("instructions", {})
.keys()
):
yield va
def extract_insn_features(self, f, bb, insn):
for p in (
self.features.get("functions", {}) # noqa: E127 line over-indented
.get(f, {})
.get("basic blocks", {})
.get(bb, {})
.get("instructions", {})
.get(insn, {})
.get("features", [])
):
va, feature = p
yield feature, va


@@ -0,0 +1,337 @@
# Copyright (C) 2020 FireEye, Inc. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at: [package root]/LICENSE.txt
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import abc
from typing import Tuple, Iterator, SupportsInt
from capa.features.common import Feature
# feature extractors may reference functions, BBs, insns by opaque handle values.
# the only requirement of these handles are that they support `__int__`,
# so that they can be rendered as addresses.
#
# these handles are only consumed by routines on
# the feature extractor from which they were created.
#
# int(FunctionHandle) -> function start address
# int(BBHandle) -> BasicBlock start address
# int(InsnHandle) -> instruction address
FunctionHandle = SupportsInt
BBHandle = SupportsInt
InsnHandle = SupportsInt
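# editor's note: an illustrative sketch, not part of this diff; _DemoFunctionHandle
# is a hypothetical handle that satisfies the contract described above: it may carry
# arbitrary backend state, as long as int(handle) yields the address.
class _DemoFunctionHandle:
    def __init__(self, va: int, backend_state=None):
        self.va = va
        self.backend_state = backend_state  # opaque, backend-specific context

    def __int__(self) -> int:
        return self.va  # renders as the function start address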
class FeatureExtractor:
"""
FeatureExtractor defines the interface for fetching features from a sample.
There may be multiple backends that support fetching features for capa.
For example, we use vivisect by default, but also want to support saving
and restoring features from a JSON file.
When we restore the features, we'd like to use exactly the same matching logic
to find matching rules.
Therefore, we can define a FeatureExtractor that provides features from the
serialized JSON file and do matching without a binary analysis pass.
Also, this provides a way to hook in an IDA backend.
This class is not instantiated directly; it is the base class for other implementations.
"""
__metaclass__ = abc.ABCMeta
def __init__(self):
#
# note: a subclass should define ctor parameters for its own use.
# for example, the Vivisect feature extractor might require the vw and/or path.
# this base class doesn't know what to do with that info, though.
#
super(FeatureExtractor, self).__init__()
@abc.abstractmethod
def get_base_address(self) -> int:
"""
fetch the preferred load address at which the sample was analyzed.
"""
raise NotImplementedError()
@abc.abstractmethod
def extract_global_features(self) -> Iterator[Tuple[Feature, int]]:
"""
extract features found at every scope ("global").
example::
extractor = VivisectFeatureExtractor(vw, path)
for feature, va in extractor.get_global_features():
print('0x%x: %s', va, feature)
yields:
Tuple[Feature, int]: feature and its location
"""
raise NotImplementedError()
@abc.abstractmethod
def extract_file_features(self) -> Iterator[Tuple[Feature, int]]:
"""
extract file-scope features.
example::
extractor = VivisectFeatureExtractor(vw, path)
for feature, va in extractor.get_file_features():
print('0x%x: %s', va, feature)
yields:
Tuple[Feature, int]: feature and its location
"""
raise NotImplementedError()
@abc.abstractmethod
def get_functions(self) -> Iterator[FunctionHandle]:
"""
enumerate the functions and provide opaque values that will
subsequently be provided to `.extract_function_features()`, etc.
"""
raise NotImplementedError()
def is_library_function(self, va: int) -> bool:
"""
is the given address a library function?
the backend may implement its own function matching algorithm, or none at all.
we accept a VA here, rather than function object, to handle addresses identified in instructions.
this information is used to:
- filter out matches in library functions (by default), and
- recognize when to fetch symbol names for called (non-API) functions
args:
va (int): the virtual address of a function.
returns:
bool: True if the given address is the start of a library function.
"""
return False
def get_function_name(self, va: int) -> str:
"""
fetch any recognized name for the given address.
this is only guaranteed to return a value when the given function is a recognized library function.
we accept a VA here, rather than function object, to handle addresses identified in instructions.
args:
va (int): the virtual address of a function.
returns:
str: the function name
raises:
KeyError: when the given function does not have a name.
"""
raise KeyError(va)
@abc.abstractmethod
def extract_function_features(self, f: FunctionHandle) -> Iterator[Tuple[Feature, int]]:
"""
extract function-scope features.
the arguments are opaque values previously provided by `.get_functions()`, etc.
example::
extractor = VivisectFeatureExtractor(vw, path)
for function in extractor.get_functions():
for feature, va in extractor.extract_function_features(function):
print('0x%x: %s', va, feature)
args:
f [FunctionHandle]: an opaque value previously fetched from `.get_functions()`.
yields:
Tuple[Feature, int]: feature and its location
"""
raise NotImplementedError()
@abc.abstractmethod
def get_basic_blocks(self, f: FunctionHandle) -> Iterator[BBHandle]:
"""
enumerate the basic blocks in the given function and provide opaque values that will
subsequently be provided to `.extract_basic_block_features()`, etc.
"""
raise NotImplementedError()
@abc.abstractmethod
def extract_basic_block_features(self, f: FunctionHandle, bb: BBHandle) -> Iterator[Tuple[Feature, int]]:
"""
extract basic block-scope features.
the arguments are opaque values previously provided by `.get_functions()`, etc.
example::
extractor = VivisectFeatureExtractor(vw, path)
for function in extractor.get_functions():
for bb in extractor.get_basic_blocks(function):
for feature, va in extractor.extract_basic_block_features(function, bb):
print('0x%x: %s', va, feature)
args:
f [FunctionHandle]: an opaque value previously fetched from `.get_functions()`.
bb [BBHandle]: an opaque value previously fetched from `.get_basic_blocks()`.
yields:
Tuple[Feature, int]: feature and its location
"""
raise NotImplementedError()
@abc.abstractmethod
def get_instructions(self, f: FunctionHandle, bb: BBHandle) -> Iterator[InsnHandle]:
"""
enumerate the instructions in the given basic block and provide opaque values that will
subsequently be provided to `.extract_insn_features()`, etc.
"""
raise NotImplementedError()
@abc.abstractmethod
def extract_insn_features(self, f: FunctionHandle, bb: BBHandle, insn: InsnHandle) -> Iterator[Tuple[Feature, int]]:
"""
extract instruction-scope features.
the arguments are opaque values previously provided by `.get_functions()`, etc.
example::
extractor = VivisectFeatureExtractor(vw, path)
for function in extractor.get_functions():
for bb in extractor.get_basic_blocks(function):
for insn in extractor.get_instructions(function, bb):
for feature, va in extractor.extract_insn_features(function, bb, insn):
print('0x%x: %s', va, feature)
args:
f [FunctionHandle]: an opaque value previously fetched from `.get_functions()`.
bb [BBHandle]: an opaque value previously fetched from `.get_basic_blocks()`.
insn [InsnHandle]: an opaque value previously fetched from `.get_instructions()`.
yields:
Tuple[Feature, int]: feature and its location
"""
raise NotImplementedError()
class NullFeatureExtractor(FeatureExtractor):
"""
An extractor that extracts some user-provided features.
The structure of the single parameter is demonstrated in the example below.
This is useful for testing, as we can provide expected values and see if matching works.
Also, this is how we represent features deserialized from a freeze file.
example::
extractor = NullFeatureExtractor({
'base address': 0x401000,
'global features': [
(0x0, capa.features.Arch('i386')),
(0x0, capa.features.OS('linux')),
],
'file features': [
(0x402345, capa.features.Characteristic('embedded pe')),
],
'functions': {
0x401000: {
'features': [
(0x401000, capa.features.Characteristic('nzxor')),
],
'basic blocks': {
0x401000: {
'features': [
(0x401000, capa.features.Characteristic('tight-loop')),
],
'instructions': {
0x401000: {
'features': [
(0x401000, capa.features.Characteristic('nzxor')),
],
},
0x401002: ...
}
},
0x401005: ...
}
},
0x40200: ...
}
)
"""
def __init__(self, features):
super(NullFeatureExtractor, self).__init__()
self.features = features
def get_base_address(self):
return self.features["base address"]
def extract_global_features(self):
for p in self.features.get("global features", []):
va, feature = p
yield feature, va
def extract_file_features(self):
for p in self.features.get("file features", []):
va, feature = p
yield feature, va
def get_functions(self):
for va in sorted(self.features["functions"].keys()):
yield va
def extract_function_features(self, f):
for p in self.features.get("functions", {}).get(f, {}).get("features", []): # noqa: E127 line over-indented
va, feature = p
yield feature, va
def get_basic_blocks(self, f):
for va in sorted(
self.features.get("functions", {}) # noqa: E127 line over-indented
.get(f, {})
.get("basic blocks", {})
.keys()
):
yield va
def extract_basic_block_features(self, f, bb):
for p in (
self.features.get("functions", {}) # noqa: E127 line over-indented
.get(f, {})
.get("basic blocks", {})
.get(bb, {})
.get("features", [])
):
va, feature = p
yield feature, va
def get_instructions(self, f, bb):
for va in sorted(
self.features.get("functions", {}) # noqa: E127 line over-indented
.get(f, {})
.get("basic blocks", {})
.get(bb, {})
.get("instructions", {})
.keys()
):
yield va
def extract_insn_features(self, f, bb, insn):
for p in (
self.features.get("functions", {}) # noqa: E127 line over-indented
.get(f, {})
.get("basic blocks", {})
.get(bb, {})
.get("instructions", {})
.get(insn, {})
.get("features", [])
):
va, feature = p
yield feature, va
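def _demo_null_extractor():
    # editor's note: hypothetical helper, not part of this diff; a minimal,
    # well-formed feature set for the NullFeatureExtractor above (the addresses
    # are made up for demonstration).
    extractor = NullFeatureExtractor(
        {
            "base address": 0x401000,
            "global features": [],
            "file features": [],
            "functions": {0x401000: {"features": [], "basic blocks": {}}},
        }
    )
    assert extractor.get_base_address() == 0x401000
    assert list(extractor.get_functions()) == [0x401000]
    return extractor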


@@ -0,0 +1,95 @@
import io
import logging
import binascii
import contextlib
import pefile
import capa.features
import capa.features.extractors.elf
import capa.features.extractors.pefile
import capa.features.extractors.strings
from capa.features.common import OS, FORMAT_PE, FORMAT_ELF, OS_WINDOWS, Arch, Format, String
logger = logging.getLogger(__name__)
def extract_file_strings(buf, **kwargs):
"""
extract ASCII and UTF-16 LE strings from file
"""
for s in capa.features.extractors.strings.extract_ascii_strings(buf):
yield String(s.s), s.offset
for s in capa.features.extractors.strings.extract_unicode_strings(buf):
yield String(s.s), s.offset
def extract_format(buf):
if buf.startswith(b"MZ"):
yield Format(FORMAT_PE), 0x0
elif buf.startswith(b"\x7fELF"):
yield Format(FORMAT_ELF), 0x0
else:
# we likely end up here:
# 1. handling a file format (e.g. macho)
#
# for (1), this logic will need to be updated as the format is implemented.
logger.debug("unsupported file format: %s", binascii.hexlify(buf[:4]).decode("ascii"))
return
def extract_arch(buf):
if buf.startswith(b"MZ"):
yield from capa.features.extractors.pefile.extract_file_arch(pe=pefile.PE(data=buf))
elif buf.startswith(b"\x7fELF"):
with contextlib.closing(io.BytesIO(buf)) as f:
arch = capa.features.extractors.elf.detect_elf_arch(f)
if arch not in capa.features.common.VALID_ARCH:
logger.debug("unsupported arch: %s", arch)
return
yield Arch(arch), 0x0
else:
# we likely end up here:
# 1. handling shellcode, or
# 2. handling a new file format (e.g. macho)
#
# for (1) we can't do much - it's shellcode and all bets are off.
# we could maybe accept a further CLI argument to specify the arch,
# but i think this would be rarely used.
# rules that rely on arch conditions will fail to match on shellcode.
#
# for (2), this logic will need to be updated as the format is implemented.
logger.debug("unsupported file format: %s, will not guess Arch", binascii.hexlify(buf[:4]).decode("ascii"))
return
def extract_os(buf):
if buf.startswith(b"MZ"):
yield OS(OS_WINDOWS), 0x0
elif buf.startswith(b"\x7fELF"):
with contextlib.closing(io.BytesIO(buf)) as f:
os = capa.features.extractors.elf.detect_elf_os(f)
if os not in capa.features.common.VALID_OS:
logger.debug("unsupported os: %s", os)
return
yield OS(os), 0x0
else:
# we likely end up here:
# 1. handling shellcode, or
# 2. handling a new file format (e.g. macho)
#
# for (1) we can't do much - it's shellcode and all bets are off.
# we could maybe accept a further CLI argument to specify the OS,
# but i think this would be rarely used.
# rules that rely on OS conditions will fail to match on shellcode.
#
# for (2), this logic will need to be updated as the format is implemented.
logger.debug("unsupported file format: %s, will not guess OS", binascii.hexlify(buf[:4]).decode("ascii"))
return
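def _demo_sniff(buf: bytes):
    # editor's note: hypothetical helper, not part of this diff; it shows the
    # intended call pattern for the three header-sniffing generators above over
    # a raw file buffer.
    for handler in (extract_format, extract_arch, extract_os):
        for feature, va in handler(buf):
            print("0x%x: %s" % (va, feature))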


@@ -0,0 +1,276 @@
# Copyright (C) 2020 FireEye, Inc. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at: [package root]/LICENSE.txt
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import struct
import logging
from enum import Enum
from typing import BinaryIO
logger = logging.getLogger(__name__)
def align(v, alignment):
remainder = v % alignment
if remainder == 0:
return v
else:
return v + (alignment - remainder)
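# editor's note: illustrative values, not part of this diff. align() rounds up to
# the next multiple of `alignment`; detect_elf_os() below relies on it to step
# from a note's name field to its 4-byte-aligned descriptor.
assert align(5, 4) == 8 and align(8, 4) == 8 and align(0x13, 0x4) == 0x14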
class CorruptElfFile(ValueError):
pass
class OS(str, Enum):
HPUX = "hpux"
NETBSD = "netbsd"
LINUX = "linux"
HURD = "hurd"
_86OPEN = "86open"
SOLARIS = "solaris"
AIX = "aix"
IRIX = "irix"
FREEBSD = "freebsd"
TRU64 = "tru64"
MODESTO = "modesto"
OPENBSD = "openbsd"
OPENVMS = "openvms"
NSK = "nsk"
AROS = "aros"
FENIXOS = "fenixos"
CLOUD = "cloud"
SYLLABLE = "syllable"
NACL = "nacl"
def detect_elf_os(f: BinaryIO) -> str:
f.seek(0x0)
file_header = f.read(0x40)
# we'll set this to the detected OS
# prefer the first heuristics,
# but rather than short circuiting,
# we'll still parse out the remainder, for debugging.
ret = None
if not file_header.startswith(b"\x7fELF"):
raise CorruptElfFile("missing magic header")
ei_class, ei_data = struct.unpack_from("BB", file_header, 4)
logger.debug("ei_class: 0x%02x ei_data: 0x%02x", ei_class, ei_data)
if ei_class == 1:
bitness = 32
elif ei_class == 2:
bitness = 64
else:
raise CorruptElfFile("invalid ei_class: 0x%02x" % ei_class)
if ei_data == 1:
endian = "<"
elif ei_data == 2:
endian = ">"
else:
raise CorruptElfFile("not an ELF file: invalid ei_data: 0x%02x" % ei_data)
if bitness == 32:
(e_phoff,) = struct.unpack_from(endian + "I", file_header, 0x1C)
e_phentsize, e_phnum = struct.unpack_from(endian + "HH", file_header, 0x2A)
elif bitness == 64:
(e_phoff,) = struct.unpack_from(endian + "Q", file_header, 0x20)
e_phentsize, e_phnum = struct.unpack_from(endian + "HH", file_header, 0x36)
else:
raise NotImplementedError()
logger.debug("e_phoff: 0x%02x e_phentsize: 0x%02x e_phnum: %d", e_phoff, e_phentsize, e_phnum)
(ei_osabi,) = struct.unpack_from(endian + "B", file_header, 7)
OSABI = {
# via pyelftools: https://github.com/eliben/pyelftools/blob/0664de05ed2db3d39041e2d51d19622a8ef4fb0f/elftools/elf/enums.py#L35-L58
# some candidates are commented out because they are not useful values,
# at least when guessing OSes
# 0: "SYSV", # too often used when OS is not SYSV
1: OS.HPUX,
2: OS.NETBSD,
3: OS.LINUX,
4: OS.HURD,
5: OS._86OPEN,
6: OS.SOLARIS,
7: OS.AIX,
8: OS.IRIX,
9: OS.FREEBSD,
10: OS.TRU64,
11: OS.MODESTO,
12: OS.OPENBSD,
13: OS.OPENVMS,
14: OS.NSK,
15: OS.AROS,
16: OS.FENIXOS,
17: OS.CLOUD,
# 53: "SORTFIX", # i can't find any reference to this OS, i don't think it exists
# 64: "ARM_AEABI", # not an OS
# 97: "ARM", # not an OS
# 255: "STANDALONE", # not an OS
}
logger.debug("ei_osabi: 0x%02x (%s)", ei_osabi, OSABI.get(ei_osabi, "unknown"))
# ei_osabi == 0 is commonly set even when the OS is not SYSV.
# other values are unused or unknown.
if ei_osabi in OSABI and ei_osabi != 0x0:
# subsequent strategies may overwrite this value
ret = OSABI[ei_osabi]
f.seek(e_phoff)
program_header_size = e_phnum * e_phentsize
program_headers = f.read(program_header_size)
if len(program_headers) != program_header_size:
logger.warning("failed to read program headers")
e_phnum = 0
# search for PT_NOTE sections that specify an OS
# for example, on Linux there is a GNU section with minimum kernel version
for i in range(e_phnum):
offset = i * e_phentsize
phent = program_headers[offset : offset + e_phentsize]
PT_NOTE = 0x4
(p_type,) = struct.unpack_from(endian + "I", phent, 0x0)
logger.debug("p_type: 0x%04x", p_type)
if p_type != PT_NOTE:
continue
if bitness == 32:
p_offset, _, _, p_filesz = struct.unpack_from(endian + "IIII", phent, 0x4)
elif bitness == 64:
p_offset, _, _, p_filesz = struct.unpack_from(endian + "QQQQ", phent, 0x8)
else:
raise NotImplementedError()
logger.debug("p_offset: 0x%02x p_filesz: 0x%04x", p_offset, p_filesz)
f.seek(p_offset)
note = f.read(p_filesz)
if len(note) != p_filesz:
logger.warning("failed to read note content")
continue
namesz, descsz, type_ = struct.unpack_from(endian + "III", note, 0x0)
name_offset = 0xC
desc_offset = name_offset + align(namesz, 0x4)
logger.debug("namesz: 0x%02x descsz: 0x%02x type: 0x%04x", namesz, descsz, type_)
name = note[name_offset : name_offset + namesz].partition(b"\x00")[0].decode("ascii")
logger.debug("name: %s", name)
if type_ != 1:
continue
if name == "GNU":
if descsz < 16:
continue
desc = note[desc_offset : desc_offset + descsz]
abi_tag, kmajor, kminor, kpatch = struct.unpack_from(endian + "IIII", desc, 0x0)
# via readelf: https://github.com/bminor/binutils-gdb/blob/c0e94211e1ac05049a4ce7c192c9d14d1764eb3e/binutils/readelf.c#L19635-L19658
# and here: https://github.com/bminor/binutils-gdb/blob/34c54daa337da9fadf87d2706d6a590ae1f88f4d/include/elf/common.h#L933-L939
GNU_ABI_TAG = {
0: OS.LINUX,
1: OS.HURD,
2: OS.SOLARIS,
3: OS.FREEBSD,
4: OS.NETBSD,
5: OS.SYLLABLE,
6: OS.NACL,
}
logger.debug("GNU_ABI_TAG: 0x%02x", abi_tag)
if abi_tag in GNU_ABI_TAG:
# update only if not set
# so we can get the debugging output of subsequent strategies
ret = GNU_ABI_TAG[abi_tag] if not ret else ret
logger.debug("abi tag: %s earliest compatible kernel: %d.%d.%d", ret, kmajor, kminor, kpatch)
elif name == "OpenBSD":
logger.debug("note owner: %s", "OPENBSD")
ret = OS.OPENBSD if not ret else ret
elif name == "NetBSD":
logger.debug("note owner: %s", "NETBSD")
ret = OS.NETBSD if not ret else ret
elif name == "FreeBSD":
logger.debug("note owner: %s", "FREEBSD")
ret = OS.FREEBSD if not ret else ret
# search for recognizable dynamic linkers (interpreters)
# for example, on linux, we see file paths like: /lib64/ld-linux-x86-64.so.2
for i in range(e_phnum):
offset = i * e_phentsize
phent = program_headers[offset : offset + e_phentsize]
PT_INTERP = 0x3
(p_type,) = struct.unpack_from(endian + "I", phent, 0x0)
if p_type != PT_INTERP:
continue
if bitness == 32:
p_offset, _, _, p_filesz = struct.unpack_from(endian + "IIII", phent, 0x4)
elif bitness == 64:
p_offset, _, _, p_filesz = struct.unpack_from(endian + "QQQQ", phent, 0x8)
else:
raise NotImplementedError()
f.seek(p_offset)
interp = f.read(p_filesz)
if len(interp) != p_filesz:
logger.warning("failed to read interp content")
continue
linker = interp.partition(b"\x00")[0].decode("ascii")
logger.debug("linker: %s", linker)
if "ld-linux" in linker:
# update only if not set
# so we can get the debugging output of subsequent strategies
ret = OS.LINUX if ret is None else ret
return ret.value if ret is not None else "unknown"
class Arch(str, Enum):
I386 = "i386"
AMD64 = "amd64"
def detect_elf_arch(f: BinaryIO) -> str:
f.seek(0x0)
file_header = f.read(0x40)
if not file_header.startswith(b"\x7fELF"):
raise CorruptElfFile("missing magic header")
(ei_data,) = struct.unpack_from("B", file_header, 5)
logger.debug("ei_data: 0x%02x", ei_data)
if ei_data == 1:
endian = "<"
elif ei_data == 2:
endian = ">"
else:
raise CorruptElfFile("not an ELF file: invalid ei_data: 0x%02x" % ei_data)
(ei_machine,) = struct.unpack_from(endian + "H", file_header, 0x12)
logger.debug("ei_machine: 0x%02x", ei_machine)
EM_386 = 0x3
EM_X86_64 = 0x3E
if ei_machine == EM_386:
return Arch.I386
elif ei_machine == EM_X86_64:
return Arch.AMD64
else:
# not really unknown, but unsupported at the moment:
# https://github.com/eliben/pyelftools/blob/ab444d982d1849191e910299a985989857466620/elftools/elf/enums.py#L73
return "unknown"
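def _demo_detect(path: str):
    # editor's note: hypothetical helper, not part of this diff, showing the
    # intended call pattern for the two detectors above; both seek to offset 0
    # themselves, so a single file object can be reused.
    with open(path, "rb") as f:
        return detect_elf_os(f), detect_elf_arch(f)  # e.g. ("linux", Arch.AMD64)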


@@ -0,0 +1,159 @@
# Copyright (C) 2020 FireEye, Inc. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at: [package root]/LICENSE.txt
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import io
import logging
import contextlib
from typing import Tuple, Iterator
from elftools.elf.elffile import ELFFile, SymbolTableSection
import capa.features.extractors.common
from capa.features.file import Import, Section
from capa.features.common import OS, FORMAT_ELF, Arch, Format, Feature
from capa.features.extractors.elf import Arch as ElfArch
from capa.features.extractors.base_extractor import FeatureExtractor
logger = logging.getLogger(__name__)
def extract_file_import_names(elf, **kwargs):
# see https://github.com/eliben/pyelftools/blob/0664de05ed2db3d39041e2d51d19622a8ef4fb0f/scripts/readelf.py#L372
symbol_tables = [(idx, s) for idx, s in enumerate(elf.iter_sections()) if isinstance(s, SymbolTableSection)]
for section_index, section in symbol_tables:
if not isinstance(section, SymbolTableSection):
continue
if section["sh_entsize"] == 0:
logger.debug("Symbol table '%s' has a sh_entsize of zero!" % (section.name))
continue
logger.debug("Symbol table '%s' contains %s entries:" % (section.name, section.num_symbols()))
for nsym, symbol in enumerate(section.iter_symbols()):
if symbol.name and symbol.entry.st_info.type == "STT_FUNC":
# TODO symbol address
# TODO symbol version info?
yield Import(symbol.name), 0x0
def extract_file_section_names(elf, **kwargs):
for section in elf.iter_sections():
if section.name:
yield Section(section.name), section.header.sh_addr
elif section.is_null():
yield Section("NULL"), section.header.sh_addr
def extract_file_strings(buf, **kwargs):
yield from capa.features.extractors.common.extract_file_strings(buf)
def extract_file_os(elf, buf, **kwargs):
# our current approach does not always get an OS value, e.g. for packed samples
# for file limitation purposes, we're more lax here
try:
os = next(capa.features.extractors.common.extract_os(buf))
yield os
except StopIteration:
yield OS("unknown"), 0x0
def extract_file_format(**kwargs):
yield Format(FORMAT_ELF), 0x0
def extract_file_arch(elf, **kwargs):
# TODO merge with capa.features.extractors.elf.detect_elf_arch()
arch = elf.get_machine_arch()
if arch == "x86":
yield Arch(ElfArch.I386), 0x0
elif arch == "x64":
yield Arch(ElfArch.AMD64), 0x0
else:
logger.warning("unsupported architecture: %s", arch)
def extract_file_features(elf: ELFFile, buf: bytes) -> Iterator[Tuple[Feature, int]]:
for file_handler in FILE_HANDLERS:
for feature, va in file_handler(elf=elf, buf=buf): # type: ignore
yield feature, va
FILE_HANDLERS = (
# TODO extract_file_export_names,
extract_file_import_names,
extract_file_section_names,
extract_file_strings,
# no library matching
extract_file_format,
)
def extract_global_features(elf: ELFFile, buf: bytes) -> Iterator[Tuple[Feature, int]]:
for global_handler in GLOBAL_HANDLERS:
for feature, va in global_handler(elf=elf, buf=buf): # type: ignore
yield feature, va
GLOBAL_HANDLERS = (
extract_file_os,
extract_file_arch,
)
class ElfFeatureExtractor(FeatureExtractor):
def __init__(self, path: str):
super(ElfFeatureExtractor, self).__init__()
self.path = path
with open(self.path, "rb") as f:
self.elf = ELFFile(io.BytesIO(f.read()))
def get_base_address(self):
# virtual address of the first segment with type LOAD
for segment in self.elf.iter_segments():
if segment.header.p_type == "PT_LOAD":
return segment.header.p_vaddr
def extract_global_features(self):
with open(self.path, "rb") as f:
buf = f.read()
for feature, va in extract_global_features(self.elf, buf):
yield feature, va
def extract_file_features(self):
with open(self.path, "rb") as f:
buf = f.read()
for feature, va in extract_file_features(self.elf, buf):
yield feature, va
def get_functions(self):
raise NotImplementedError("ElfFeatureExtractor can only be used to extract file features")
def extract_function_features(self, f):
raise NotImplementedError("ElfFeatureExtractor can only be used to extract file features")
def get_basic_blocks(self, f):
raise NotImplementedError("ElfFeatureExtractor can only be used to extract file features")
def extract_basic_block_features(self, f, bb):
raise NotImplementedError("ElfFeatureExtractor can only be used to extract file features")
def get_instructions(self, f, bb):
raise NotImplementedError("ElfFeatureExtractor can only be used to extract file features")
def extract_insn_features(self, f, bb, insn):
raise NotImplementedError("ElfFeatureExtractor can only be used to extract file features")
def is_library_function(self, va):
raise NotImplementedError("ElfFeatureExtractor can only be used to extract file features")
def get_function_name(self, va):
raise NotImplementedError("ElfFeatureExtractor can only be used to extract file features")


@@ -6,23 +6,18 @@
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import sys
import struct
import builtins
from capa.features.file import Import
from capa.features.insn import API
from typing import Tuple, Iterator
MIN_STACKSTRING_LEN = 8
def xor_static(data, i):
if sys.version_info >= (3, 0):
return bytes(c ^ i for c in data)
else:
return "".join(chr(ord(c) ^ i) for c in data)
def xor_static(data: bytes, i: int) -> bytes:
return bytes(c ^ i for c in data)
def is_aw_function(symbol):
def is_aw_function(symbol: str) -> bool:
"""
is the given function name an A/W function?
these are variants of functions that, on Windows, accept either a narrow or wide string.
@@ -38,7 +33,7 @@ def is_aw_function(symbol):
return "a" <= symbol[-2] <= "z" or "0" <= symbol[-2] <= "9"
def is_ordinal(symbol):
def is_ordinal(symbol: str) -> bool:
"""
is the given symbol an ordinal that is prefixed by "#"?
"""
@@ -47,7 +42,7 @@ def is_ordinal(symbol):
return False
def generate_symbols(dll, symbol):
def generate_symbols(dll: str, symbol: str) -> Iterator[str]:
"""
for a given dll and symbol name, generate variants.
we over-generate features to make matching easier.
@@ -73,11 +68,11 @@ def generate_symbols(dll, symbol):
yield symbol[:-1]
def all_zeros(bytez):
def all_zeros(bytez: bytes) -> bool:
return all(b == 0 for b in builtins.bytes(bytez))
def twos_complement(val, bits):
def twos_complement(val: int, bits: int) -> int:
"""
compute the 2's complement of int value val
@@ -90,3 +85,49 @@ def twos_complement(val, bits):
else:
# return positive value as is
return val
def carve_pe(pbytes: bytes, offset: int = 0) -> Iterator[Tuple[int, int]]:
"""
Generate (offset, key) tuples of embedded PEs
Based on the version from vivisect:
https://github.com/vivisect/vivisect/blob/7be4037b1cecc4551b397f840405a1fc606f9b53/PE/carve.py#L19
And its IDA adaptation:
capa/features/extractors/ida/file.py
"""
mz_xor = [
(
xor_static(b"MZ", key),
xor_static(b"PE", key),
key,
)
for key in range(256)
]
pblen = len(pbytes)
todo = [(pbytes.find(mzx, offset), mzx, pex, key) for mzx, pex, key in mz_xor]
todo = [(off, mzx, pex, key) for (off, mzx, pex, key) in todo if off != -1]
while len(todo):
off, mzx, pex, key = todo.pop()
# The MZ header has one field we will check
# e_lfanew is at 0x3c
e_lfanew = off + 0x3C
if pblen < (e_lfanew + 4):
continue
newoff = struct.unpack("<I", xor_static(pbytes[e_lfanew : e_lfanew + 4], key))[0]
nextres = pbytes.find(mzx, off + 1)
if nextres != -1:
todo.append((nextres, mzx, pex, key))
peoff = off + newoff
if pblen < (peoff + 2):
continue
if pbytes[peoff : peoff + 2] == pex:
yield (off, key)
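def _demo_carve(path: str):
    # editor's note: hypothetical helper, not part of this diff, showing how
    # carve_pe() above is typically consumed: each hit is the offset of an
    # embedded (possibly single-byte-XOR encoded) PE header and its XOR key.
    with open(path, "rb") as f:
        buf = f.read()
    for offset, key in carve_pe(buf, 1):
        print("embedded PE at 0x%x (xor key 0x%02x)" % (offset, key))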


@@ -1,93 +0,0 @@
# Copyright (C) 2020 FireEye, Inc. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at: [package root]/LICENSE.txt
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import sys
import types
import idaapi
import capa.features.extractors.ida.file
import capa.features.extractors.ida.insn
import capa.features.extractors.ida.function
import capa.features.extractors.ida.basicblock
from capa.features.extractors import FeatureExtractor
def get_ea(self):
""" """
if isinstance(self, (idaapi.BasicBlock, idaapi.func_t)):
return self.start_ea
if isinstance(self, idaapi.insn_t):
return self.ea
raise TypeError
def add_ea_int_cast(o):
"""
dynamically add a cast-to-int (`__int__`) method to the given object
that returns the value of the `.ea` property.
this bit of skullduggery lets us cast these objects as ints.
the correct way of doing this is to update viv-utils (or subclass the objects here).
"""
if sys.version_info[0] >= 3:
setattr(o, "__int__", types.MethodType(get_ea, o))
else:
setattr(o, "__int__", types.MethodType(get_ea, o, type(o)))
return o
class IdaFeatureExtractor(FeatureExtractor):
def __init__(self):
super(IdaFeatureExtractor, self).__init__()
def get_base_address(self):
return idaapi.get_imagebase()
def extract_file_features(self):
for (feature, ea) in capa.features.extractors.ida.file.extract_features():
yield feature, ea
def get_functions(self):
import capa.features.extractors.ida.helpers as ida_helpers
# data structure shared across functions yielded here.
# useful for caching analysis relevant across a single workspace.
ctx = {}
# ignore library functions and thunk functions as identified by IDA
for f in ida_helpers.get_functions(skip_thunks=True, skip_libs=True):
setattr(f, "ctx", ctx)
yield add_ea_int_cast(f)
@staticmethod
def get_function(ea):
f = idaapi.get_func(ea)
setattr(f, "ctx", {})
return add_ea_int_cast(f)
def extract_function_features(self, f):
for (feature, ea) in capa.features.extractors.ida.function.extract_features(f):
yield feature, ea
def get_basic_blocks(self, f):
for bb in capa.features.extractors.ida.helpers.get_function_blocks(f):
yield add_ea_int_cast(bb)
def extract_basic_block_features(self, f, bb):
for (feature, ea) in capa.features.extractors.ida.basicblock.extract_features(f, bb):
yield feature, ea
def get_instructions(self, f, bb):
import capa.features.extractors.ida.helpers as ida_helpers
for insn in ida_helpers.get_instructions_in_range(bb.start_ea, bb.end_ea):
yield add_ea_int_cast(insn)
def extract_insn_features(self, f, bb, insn):
for (feature, ea) in capa.features.extractors.ida.insn.extract_features(f, bb, insn):
yield feature, ea


@@ -6,14 +6,13 @@
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import sys
import string
import struct
import idaapi
import capa.features.extractors.ida.helpers
from capa.features import Characteristic
from capa.features.common import Characteristic
from capa.features.basicblock import BasicBlock
from capa.features.extractors.ida import helpers
from capa.features.extractors.helpers import MIN_STACKSTRING_LEN
@@ -39,18 +38,11 @@ def get_printable_len(op):
raise ValueError("Unhandled operand data type 0x%x." % op.dtype)
def is_printable_ascii(chars):
if sys.version_info[0] >= 3:
return all(c < 127 and chr(c) in string.printable for c in chars)
else:
return all(ord(c) < 127 and c in string.printable for c in chars)
return all(c < 127 and chr(c) in string.printable for c in chars)
def is_printable_utf16le(chars):
if sys.version_info[0] >= 3:
if all(c == 0x00 for c in chars[1::2]):
return is_printable_ascii(chars[::2])
else:
if all(c == "\x00" for c in chars[1::2]):
return is_printable_ascii(chars[::2])
if all(c == 0x00 for c in chars[1::2]):
return is_printable_ascii(chars[::2])
if is_printable_ascii(chars):
return idaapi.get_dtype_size(op.dtype)


@@ -0,0 +1,112 @@
# Copyright (C) 2020 FireEye, Inc. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at: [package root]/LICENSE.txt
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import idaapi
import capa.ida.helpers
import capa.features.extractors.elf
import capa.features.extractors.ida.file
import capa.features.extractors.ida.insn
import capa.features.extractors.ida.global_
import capa.features.extractors.ida.function
import capa.features.extractors.ida.basicblock
from capa.features.extractors.base_extractor import FeatureExtractor
class FunctionHandle:
"""this acts like an idaapi.func_t but with __int__()"""
def __init__(self, inner):
self._inner = inner
def __int__(self):
return self.start_ea
def __getattr__(self, name):
return getattr(self._inner, name)
class BasicBlockHandle:
"""this acts like an idaapi.BasicBlock but with __int__()"""
def __init__(self, inner):
self._inner = inner
def __int__(self):
return self.start_ea
def __getattr__(self, name):
return getattr(self._inner, name)
class InstructionHandle:
"""this acts like an idaapi.insn_t but with __int__()"""
def __init__(self, inner):
self._inner = inner
def __int__(self):
return self.ea
def __getattr__(self, name):
return getattr(self._inner, name)
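# editor's note: an illustrative sketch, not part of this diff. the wrappers above
# keep the full idaapi object reachable through __getattr__ while satisfying the
# base extractor contract that handles cast to an address, e.g. (inside IDA):
#   fh = FunctionHandle(idaapi.get_func(ea))
#   int(fh) == fh.start_ea      # address via __int__
#   fh.flags                    # attribute access falls through to the func_t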
class IdaFeatureExtractor(FeatureExtractor):
def __init__(self):
super(IdaFeatureExtractor, self).__init__()
self.global_features = []
self.global_features.extend(capa.features.extractors.ida.global_.extract_os())
self.global_features.extend(capa.features.extractors.ida.global_.extract_arch())
def get_base_address(self):
return idaapi.get_imagebase()
def extract_global_features(self):
yield from self.global_features
def extract_file_features(self):
yield from capa.features.extractors.ida.file.extract_features()
def get_functions(self):
import capa.features.extractors.ida.helpers as ida_helpers
# data structure shared across functions yielded here.
# useful for caching analysis relevant across a single workspace.
ctx = {}
# ignore library functions and thunk functions as identified by IDA
for f in ida_helpers.get_functions(skip_thunks=True, skip_libs=True):
setattr(f, "ctx", ctx)
yield FunctionHandle(f)
@staticmethod
def get_function(ea):
f = idaapi.get_func(ea)
setattr(f, "ctx", {})
return FunctionHandle(f)
def extract_function_features(self, f):
yield from capa.features.extractors.ida.function.extract_features(f)
def get_basic_blocks(self, f):
import capa.features.extractors.ida.helpers as ida_helpers
for bb in ida_helpers.get_function_blocks(f):
yield BasicBlockHandle(bb)
def extract_basic_block_features(self, f, bb):
yield from capa.features.extractors.ida.basicblock.extract_features(f, bb)
def get_instructions(self, f, bb):
import capa.features.extractors.ida.helpers as ida_helpers
for insn in ida_helpers.get_instructions_in_range(bb.start_ea, bb.end_ea):
yield InstructionHandle(insn)
def extract_insn_features(self, f, bb, insn):
yield from capa.features.extractors.ida.insn.extract_features(f, bb, insn)


@@ -11,12 +11,13 @@ import struct
import idc
import idaapi
import idautils
import ida_loader
import capa.features.extractors.helpers
import capa.features.extractors.strings
import capa.features.extractors.ida.helpers
from capa.features import String, Characteristic
from capa.features.file import Export, Import, Section
from capa.features.file import Export, Import, Section, FunctionName
from capa.features.common import OS, FORMAT_PE, FORMAT_ELF, OS_WINDOWS, Format, String, Characteristic
def check_segment_for_pe(seg):
@@ -78,7 +79,7 @@ def extract_file_embedded_pe():
def extract_file_export_names():
""" extract function exports """
"""extract function exports"""
for (_, _, ea, name) in idautils.Entries():
yield Export(name), ea
@@ -143,8 +144,31 @@ def extract_file_strings():
yield String(s.s), (seg.start_ea + s.offset)
def extract_file_function_names():
"""
extract the names of statically-linked library functions.
"""
for ea in idautils.Functions():
if idaapi.get_func(ea).flags & idaapi.FUNC_LIB:
name = idaapi.get_name(ea)
yield FunctionName(name), ea
def extract_file_format():
format_name = ida_loader.get_file_type_name()
if "PE" in format_name:
yield Format(FORMAT_PE), 0x0
elif "ELF64" in format_name:
yield Format(FORMAT_ELF), 0x0
elif "ELF32" in format_name:
yield Format(FORMAT_ELF), 0x0
else:
raise NotImplementedError("file format: %s" % format_name)
def extract_features():
""" extract file features """
"""extract file features"""
for file_handler in FILE_HANDLERS:
for feature, va in file_handler():
yield feature, va
@@ -156,6 +180,8 @@ FILE_HANDLERS = (
extract_file_strings,
extract_file_section_names,
extract_file_embedded_pe,
extract_file_function_names,
extract_file_format,
)


@@ -10,7 +10,7 @@ import idaapi
import idautils
import capa.features.extractors.ida.helpers
from capa.features import Characteristic
from capa.features.common import Characteristic
from capa.features.extractors import loops


@@ -0,0 +1,56 @@
import logging
import contextlib
import idaapi
import ida_loader
import capa.ida.helpers
import capa.features.extractors.elf
from capa.features.common import OS, ARCH_I386, ARCH_AMD64, OS_WINDOWS, Arch
logger = logging.getLogger(__name__)
def extract_os():
format_name = ida_loader.get_file_type_name()
if "PE" in format_name:
yield OS(OS_WINDOWS), 0x0
elif "ELF" in format_name:
with contextlib.closing(capa.ida.helpers.IDAIO()) as f:
os = capa.features.extractors.elf.detect_elf_os(f)
yield OS(os), 0x0
else:
# we likely end up here:
# 1. handling shellcode, or
# 2. handling a new file format (e.g. macho)
#
# for (1) we can't do much - it's shellcode and all bets are off.
# we could maybe accept a further CLI argument to specify the OS,
# but i think this would be rarely used.
# rules that rely on OS conditions will fail to match on shellcode.
#
# for (2), this logic will need to be updated as the format is implemented.
logger.debug("unsupported file format: %s, will not guess OS", format_name)
return
def extract_arch():
info = idaapi.get_inf_structure()
if info.procName == "metapc" and info.is_64bit():
yield Arch(ARCH_AMD64), 0x0
elif info.procName == "metapc" and info.is_32bit():
yield Arch(ARCH_I386), 0x0
elif info.procName == "metapc":
logger.debug("unsupported architecture: neither 32-bit nor 64-bit intel")
return
else:
# we likely end up here:
# 1. handling a new architecture (e.g. aarch64)
#
# for (1), this logic will need to be updated as the format is implemented.
logger.debug("unsupported architecture: %s", info.procName)
return


@@ -6,9 +6,6 @@
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import sys
import string
import idc
import idaapi
import idautils
@@ -23,11 +20,7 @@ def find_byte_sequence(start, end, seq):
end: max virtual address
seq: bytes to search e.g. b"\x01\x03"
"""
if sys.version_info[0] >= 3:
seq = " ".join(["%02x" % b for b in seq])
else:
seq = " ".join(["%02x" % ord(b) for b in seq])
seq = " ".join(["%02x" % b for b in seq])
while True:
ea = idaapi.find_binary(start, end, seq, 0, idaapi.SEARCH_DOWN)
if ea == idaapi.BADADDR:
@@ -83,7 +76,7 @@ def get_segment_buffer(seg):
def get_file_imports():
""" get file imports """
"""get file imports"""
imports = {}
for idx in range(idaapi.get_import_module_qty()):
@@ -120,7 +113,7 @@ def get_instructions_in_range(start, end):
def is_operand_equal(op1, op2):
""" compare two IDA op_t """
"""compare two IDA op_t"""
if op1.flags != op2.flags:
return False
@@ -146,7 +139,7 @@ def is_operand_equal(op1, op2):
def is_basic_block_equal(bb1, bb2):
""" compare two IDA BasicBlock """
"""compare two IDA BasicBlock"""
if bb1.start_ea != bb2.start_ea:
return False
@@ -160,7 +153,7 @@ def is_basic_block_equal(bb1, bb2):
def basic_block_size(bb):
""" calculate size of basic block """
"""calculate size of basic block"""
return bb.end_ea - bb.start_ea
@@ -178,7 +171,7 @@ def read_bytes_at(ea, count):
def find_string_at(ea, min=4):
""" check if ASCII string exists at a given virtual address """
"""check if ASCII string exists at a given virtual address"""
found = idaapi.get_strlit_contents(ea, -1, idaapi.STRTYPE_C)
if found and len(found) > min:
try:
@@ -232,23 +225,23 @@ def get_op_phrase_info(op):
def is_op_write(insn, op):
""" Check if an operand is written to (destination operand) """
"""Check if an operand is written to (destination operand)"""
return idaapi.has_cf_chg(insn.get_canon_feature(), op.n)
def is_op_read(insn, op):
""" Check if an operand is read from (source operand) """
"""Check if an operand is read from (source operand)"""
return idaapi.has_cf_use(insn.get_canon_feature(), op.n)
def is_op_offset(insn, op):
""" Check is an operand has been marked as an offset (by auto-analysis or manually) """
"""Check is an operand has been marked as an offset (by auto-analysis or manually)"""
flags = idaapi.get_flags(insn.ea)
return ida_bytes.is_off(flags, op.n)
def is_sp_modified(insn):
""" determine if instruction modifies SP, ESP, RSP """
"""determine if instruction modifies SP, ESP, RSP"""
for op in get_insn_ops(insn, target_ops=(idaapi.o_reg,)):
if op.reg == idautils.procregs.sp.reg and is_op_write(insn, op):
# register is stack and written
@@ -257,7 +250,7 @@ def is_sp_modified(insn):
def is_bp_modified(insn):
""" check if instruction modifies BP, EBP, RBP """
"""check if instruction modifies BP, EBP, RBP"""
for op in get_insn_ops(insn, target_ops=(idaapi.o_reg,)):
if op.reg == idautils.procregs.bp.reg and is_op_write(insn, op):
# register is base and written
@@ -266,12 +259,12 @@ def is_bp_modified(insn):
def is_frame_register(reg):
""" check if register is sp or bp """
"""check if register is sp or bp"""
return reg in (idautils.procregs.sp.reg, idautils.procregs.bp.reg)
def get_insn_ops(insn, target_ops=()):
""" yield op_t for instruction, filter on type if specified """
"""yield op_t for instruction, filter on type if specified"""
for op in insn.ops:
if op.type == idaapi.o_void:
# avoid looping all 6 ops if only subset exists
@@ -282,7 +275,7 @@ def get_insn_ops(insn, target_ops=()):
def is_op_stack_var(ea, index):
""" check if operand is a stack variable """
"""check if operand is a stack variable"""
return idaapi.is_stkvar(idaapi.get_flags(ea), index)
@@ -336,7 +329,7 @@ def is_basic_block_tight_loop(bb):
def find_data_reference_from_insn(insn, max_depth=10):
""" search for data reference from instruction, return address of instruction if no reference exists """
"""search for data reference from instruction, return address of instruction if no reference exists"""
depth = 0
ea = insn.ea
@@ -351,6 +344,10 @@ def find_data_reference_from_insn(insn, max_depth=10):
# break if circular reference
break
if not idaapi.is_mapped(data_refs[0]):
# break if address is not mapped
break
depth += 1
if depth > max_depth:
# break if max depth
@@ -375,5 +372,5 @@ def get_function_blocks(f):
def is_basic_block_return(bb):
""" check if basic block is return block """
"""check if basic block is return block"""
return bb.type == idaapi.fcb_ret


@@ -12,38 +12,38 @@ import idautils
import capa.features.extractors.helpers
import capa.features.extractors.ida.helpers
from capa.features import (
ARCH_X32,
ARCH_X64,
from capa.features.insn import API, Number, Offset, Mnemonic
from capa.features.common import (
BITNESS_X32,
BITNESS_X64,
MAX_BYTES_FEATURE_SIZE,
THUNK_CHAIN_DEPTH_DELTA,
Bytes,
String,
Characteristic,
)
from capa.features.insn import API, Number, Offset, Mnemonic
# security cookie checks may perform non-zeroing XORs, these are expected within a certain
# byte range within the first and returning basic blocks, this helps to reduce FP features
SECURITY_COOKIE_BYTES_DELTA = 0x40
def get_arch(ctx):
def get_bitness(ctx):
"""
fetch the ARCH_* constant for the currently open workspace.
fetch the BITNESS_* constant for the currently open workspace.
via Tamir Bahar/@tmr232
https://reverseengineering.stackexchange.com/a/11398/17194
"""
if "arch" not in ctx:
if "bitness" not in ctx:
info = idaapi.get_inf_structure()
if info.is_64bit():
ctx["arch"] = ARCH_X64
ctx["bitness"] = BITNESS_X64
elif info.is_32bit():
ctx["arch"] = ARCH_X32
ctx["bitness"] = BITNESS_X32
else:
raise ValueError("unexpected architecture")
return ctx["arch"]
raise ValueError("unexpected bitness")
return ctx["bitness"]
def get_imports(ctx):
@@ -53,10 +53,7 @@ def get_imports(ctx):
def check_for_api_call(ctx, insn):
""" check instruction for API call """
if not insn.get_canon_mnem() in ("call", "jmp"):
return
"""check instruction for API call"""
info = ()
ref = insn.ea
@@ -95,11 +92,29 @@ def extract_insn_api_features(f, bb, insn):
example:
call dword [0x00473038]
"""
if not insn.get_canon_mnem() in ("call", "jmp"):
return
for api in check_for_api_call(f.ctx, insn):
dll, _, symbol = api.rpartition(".")
for name in capa.features.extractors.helpers.generate_symbols(dll, symbol):
yield API(name), insn.ea
# extract IDA/FLIRT recognized API functions
targets = tuple(idautils.CodeRefsFrom(insn.ea, False))
if not targets:
return
target = targets[0]
target_func = idaapi.get_func(target)
if not target_func or target_func.start_ea != target:
# not a function (start)
return
if target_func.flags & idaapi.FUNC_LIB:
name = idaapi.get_name(target_func.start_ea)
yield API(name), insn.ea
def extract_insn_number_features(f, bb, insn):
"""parse instruction number features
@@ -134,7 +149,7 @@ def extract_insn_number_features(f, bb, insn):
const = op.addr
yield Number(const), insn.ea
yield Number(const, arch=get_arch(f.ctx)), insn.ea
yield Number(const, bitness=get_bitness(f.ctx)), insn.ea
def extract_insn_bytes_features(f, bb, insn):
@@ -203,7 +218,7 @@ def extract_insn_offset_features(f, bb, insn):
op_off = capa.features.extractors.helpers.twos_complement(op_off, 32)
yield Offset(op_off), insn.ea
yield Offset(op_off, arch=get_arch(f.ctx)), insn.ea
yield Offset(op_off, bitness=get_bitness(f.ctx)), insn.ea
def contains_stack_cookie_keywords(s):
@@ -256,7 +271,7 @@ def bb_stack_cookie_registers(bb):
def is_nzxor_stack_cookie_delta(f, bb, insn):
""" check if nzxor exists within stack cookie delta """
"""check if nzxor exists within stack cookie delta"""
# security cookie check should use SP or BP
if not capa.features.extractors.ida.helpers.is_frame_register(insn.Op2.reg):
return False
@@ -279,7 +294,7 @@ def is_nzxor_stack_cookie_delta(f, bb, insn):
def is_nzxor_stack_cookie(f, bb, insn):
""" check if nzxor is related to stack cookie """
"""check if nzxor is related to stack cookie"""
if contains_stack_cookie_keywords(idaapi.get_cmt(insn.ea, False)):
# Example:
# xor ecx, ebp ; StackCookie
@@ -322,7 +337,7 @@ def extract_insn_mnemonic_features(f, bb, insn):
bb (IDA BasicBlock)
insn (IDA insn_t)
"""
yield Mnemonic(insn.get_canon_mnem()), insn.ea
yield Mnemonic(idc.print_insn_mnem(insn.ea)), insn.ea
def extract_insn_peb_access_characteristic_features(f, bb, insn):


@@ -6,7 +6,7 @@
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
from networkx import nx
import networkx
from networkx.algorithms.components import strongly_connected_components
@@ -20,6 +20,6 @@ def has_loop(edges, threshold=2):
returns:
bool
"""
g = nx.DiGraph()
g = networkx.DiGraph()
g.add_edges_from(edges)
return any(len(comp) >= threshold for comp in strongly_connected_components(g))


@@ -0,0 +1,215 @@
# Copyright (C) 2020 FireEye, Inc. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at: [package root]/LICENSE.txt
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import logging
import pefile
import capa.features.common
import capa.features.extractors
import capa.features.extractors.common
import capa.features.extractors.helpers
import capa.features.extractors.strings
from capa.features.file import Export, Import, Section
from capa.features.common import OS, ARCH_I386, FORMAT_PE, ARCH_AMD64, OS_WINDOWS, Arch, Format, Characteristic
from capa.features.extractors.base_extractor import FeatureExtractor
logger = logging.getLogger(__name__)
def extract_file_embedded_pe(buf, **kwargs):
for offset, _ in capa.features.extractors.helpers.carve_pe(buf, 1):
yield Characteristic("embedded pe"), offset
def extract_file_export_names(pe, **kwargs):
base_address = pe.OPTIONAL_HEADER.ImageBase
if hasattr(pe, "DIRECTORY_ENTRY_EXPORT"):
for export in pe.DIRECTORY_ENTRY_EXPORT.symbols:
if not export.name:
continue
try:
name = export.name.partition(b"\x00")[0].decode("ascii")
except UnicodeDecodeError:
continue
va = base_address + export.address
yield Export(name), va
def extract_file_import_names(pe, **kwargs):
"""
extract imported function names
1. imports by ordinal:
- modulename.#ordinal
2. imports by name, results in two features to support importname-only matching:
- modulename.importname
- importname
"""
if hasattr(pe, "DIRECTORY_ENTRY_IMPORT"):
for dll in pe.DIRECTORY_ENTRY_IMPORT:
try:
modname = dll.dll.partition(b"\x00")[0].decode("ascii")
except UnicodeDecodeError:
continue
# strip extension
modname = modname.rpartition(".")[0].lower()
for imp in dll.imports:
if imp.import_by_ordinal:
impname = "#%s" % imp.ordinal
else:
try:
impname = imp.name.partition(b"\x00")[0].decode("ascii")
except UnicodeDecodeError:
continue
for name in capa.features.extractors.helpers.generate_symbols(modname, impname):
yield Import(name), imp.address
def extract_file_section_names(pe, **kwargs):
base_address = pe.OPTIONAL_HEADER.ImageBase
for section in pe.sections:
try:
name = section.Name.partition(b"\x00")[0].decode("ascii")
except UnicodeDecodeError:
continue
yield Section(name), base_address + section.VirtualAddress
def extract_file_strings(buf, **kwargs):
yield from capa.features.extractors.common.extract_file_strings(buf)
def extract_file_function_names(**kwargs):
"""
extract the names of statically-linked library functions.
"""
if False:
# using a `yield` here to force this to be a generator, not a function.
yield NotImplementedError("pefile doesn't have library matching")
return
def extract_file_os(**kwargs):
# assuming PE -> Windows
# though i suppose they're also used by UEFI
yield OS(OS_WINDOWS), 0x0
def extract_file_format(**kwargs):
yield Format(FORMAT_PE), 0x0
def extract_file_arch(pe, **kwargs):
if pe.FILE_HEADER.Machine == pefile.MACHINE_TYPE["IMAGE_FILE_MACHINE_I386"]:
yield Arch(ARCH_I386), 0x0
elif pe.FILE_HEADER.Machine == pefile.MACHINE_TYPE["IMAGE_FILE_MACHINE_AMD64"]:
yield Arch(ARCH_AMD64), 0x0
else:
logger.warning("unsupported architecture: %s", pefile.MACHINE_TYPE[pe.FILE_HEADER.Machine])
def extract_file_features(pe, buf):
"""
extract file features from given workspace
args:
pe (pefile.PE): the parsed PE
buf: the raw sample bytes
yields:
Tuple[Feature, VA]: a feature and its location.
"""
for file_handler in FILE_HANDLERS:
for feature, va in file_handler(pe=pe, buf=buf):
yield feature, va
FILE_HANDLERS = (
extract_file_embedded_pe,
extract_file_export_names,
extract_file_import_names,
extract_file_section_names,
extract_file_strings,
extract_file_function_names,
extract_file_format,
)
def extract_global_features(pe, buf):
"""
extract global features from given workspace
args:
pe (pefile.PE): the parsed PE
buf: the raw sample bytes
yields:
Tuple[Feature, VA]: a feature and its location.
"""
for handler in GLOBAL_HANDLERS:
for feature, va in handler(pe=pe, buf=buf):
yield feature, va
GLOBAL_HANDLERS = (
extract_file_os,
extract_file_arch,
)
class PefileFeatureExtractor(FeatureExtractor):
def __init__(self, path: str):
super(PefileFeatureExtractor, self).__init__()
self.path = path
self.pe = pefile.PE(path)
def get_base_address(self):
return self.pe.OPTIONAL_HEADER.ImageBase
def extract_global_features(self):
with open(self.path, "rb") as f:
buf = f.read()
yield from extract_global_features(self.pe, buf)
def extract_file_features(self):
with open(self.path, "rb") as f:
buf = f.read()
yield from extract_file_features(self.pe, buf)
def get_functions(self):
raise NotImplementedError("PefileFeatureExtract can only be used to extract file features")
def extract_function_features(self, f):
raise NotImplementedError("PefileFeatureExtract can only be used to extract file features")
def get_basic_blocks(self, f):
raise NotImplementedError("PefileFeatureExtract can only be used to extract file features")
def extract_basic_block_features(self, f, bb):
raise NotImplementedError("PefileFeatureExtract can only be used to extract file features")
def get_instructions(self, f, bb):
raise NotImplementedError("PefileFeatureExtract can only be used to extract file features")
def extract_insn_features(self, f, bb, insn):
raise NotImplementedError("PefileFeatureExtract can only be used to extract file features")
def is_library_function(self, va):
raise NotImplementedError("PefileFeatureExtract can only be used to extract file features")
def get_function_name(self, va):
raise NotImplementedError("PefileFeatureExtract can only be used to extract file features")
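A rough usage sketch of the new pefile-backed extractor, assuming the module lives at capa.features.extractors.pefile and using a hypothetical sample path; it yields file and global features without any disassembly:

from capa.features.extractors.pefile import PefileFeatureExtractor

extractor = PefileFeatureExtractor("/tmp/sample.exe_")  # hypothetical path
print(hex(extractor.get_base_address()))

for feature, va in extractor.extract_global_features():
    print(hex(va), feature)

for feature, va in extractor.extract_file_features():
    print(hex(va), feature)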


@@ -1,52 +0,0 @@
import sys
import types
from smda.common.SmdaReport import SmdaReport
from smda.common.SmdaInstruction import SmdaInstruction
import capa.features.extractors.smda.file
import capa.features.extractors.smda.insn
import capa.features.extractors.smda.function
import capa.features.extractors.smda.basicblock
from capa.main import UnsupportedRuntimeError
from capa.features.extractors import FeatureExtractor
class SmdaFeatureExtractor(FeatureExtractor):
def __init__(self, smda_report: SmdaReport, path):
super(SmdaFeatureExtractor, self).__init__()
if sys.version_info < (3, 0):
raise UnsupportedRuntimeError("SMDA should only be used with Python 3.")
self.smda_report = smda_report
self.path = path
def get_base_address(self):
return self.smda_report.base_addr
def extract_file_features(self):
for feature, va in capa.features.extractors.smda.file.extract_features(self.smda_report, self.path):
yield feature, va
def get_functions(self):
for function in self.smda_report.getFunctions():
yield function
def extract_function_features(self, f):
for feature, va in capa.features.extractors.smda.function.extract_features(f):
yield feature, va
def get_basic_blocks(self, f):
for bb in f.getBlocks():
yield bb
def extract_basic_block_features(self, f, bb):
for feature, va in capa.features.extractors.smda.basicblock.extract_features(f, bb):
yield feature, va
def get_instructions(self, f, bb):
for smda_ins in bb.getInstructions():
yield smda_ins
def extract_insn_features(self, f, bb, insn):
for feature, va in capa.features.extractors.smda.insn.extract_features(f, bb, insn):
yield feature, va


@@ -1,8 +1,7 @@
import sys
import string
import struct
from capa.features import Characteristic
from capa.features.common import Characteristic
from capa.features.basicblock import BasicBlock
from capa.features.extractors.helpers import MIN_STACKSTRING_LEN
@@ -15,7 +14,7 @@ def _bb_has_tight_loop(f, bb):
def extract_bb_tight_loop(f, bb):
""" check basic block for tight loop indicators """
"""check basic block for tight loop indicators"""
if _bb_has_tight_loop(f, bb):
yield Characteristic("tight loop"), bb.offset
@@ -39,7 +38,7 @@ def get_operands(smda_ins):
def extract_stackstring(f, bb):
""" check basic block for stackstring indicators """
"""check basic block for stackstring indicators"""
if _bb_has_stackstring(f, bb):
yield Characteristic("stack string"), bb.offset
@@ -117,7 +116,7 @@ def extract_features(f, bb):
bb (smda.common.SmdaBasicBlock): the basic block to process.
yields:
Feature, set[VA]: the features and their location found in this basic block.
Tuple[Feature, int]: the features and their location found in this basic block.
"""
yield BasicBlock(), bb.offset
for bb_handler in BASIC_BLOCK_HANDLERS:


@@ -0,0 +1,53 @@
from smda.common.SmdaReport import SmdaReport
import capa.features.extractors.common
import capa.features.extractors.smda.file
import capa.features.extractors.smda.insn
import capa.features.extractors.smda.global_
import capa.features.extractors.smda.function
import capa.features.extractors.smda.basicblock
from capa.features.extractors.base_extractor import FeatureExtractor
class SmdaFeatureExtractor(FeatureExtractor):
def __init__(self, smda_report: SmdaReport, path):
super(SmdaFeatureExtractor, self).__init__()
self.smda_report = smda_report
self.path = path
with open(self.path, "rb") as f:
self.buf = f.read()
# pre-compute these because we'll yield them at *every* scope.
self.global_features = []
self.global_features.extend(capa.features.extractors.common.extract_os(self.buf))
self.global_features.extend(capa.features.extractors.smda.global_.extract_arch(self.smda_report))
def get_base_address(self):
return self.smda_report.base_addr
def extract_global_features(self):
yield from self.global_features
def extract_file_features(self):
yield from capa.features.extractors.smda.file.extract_features(self.smda_report, self.buf)
def get_functions(self):
for function in self.smda_report.getFunctions():
yield function
def extract_function_features(self, f):
yield from capa.features.extractors.smda.function.extract_features(f)
def get_basic_blocks(self, f):
for bb in f.getBlocks():
yield bb
def extract_basic_block_features(self, f, bb):
yield from capa.features.extractors.smda.basicblock.extract_features(f, bb)
def get_instructions(self, f, bb):
for smda_ins in bb.getInstructions():
yield smda_ins
def extract_insn_features(self, f, bb, insn):
yield from capa.features.extractors.smda.insn.extract_features(f, bb, insn)
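A hedged sketch of driving the new SMDA extractor; the Disassembler entry point and the module path capa.features.extractors.smda.extractor are assumptions about SMDA and capa's layout rather than something shown in this diff:

from smda.Disassembler import Disassembler  # assumed SMDA API
from capa.features.extractors.smda.extractor import SmdaFeatureExtractor

path = "/tmp/sample.exe_"  # hypothetical sample
report = Disassembler().disassembleFile(path)
extractor = SmdaFeatureExtractor(report, path)

for feature, va in extractor.extract_global_features():
    print(hex(va), feature)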


@@ -1,86 +1,37 @@
import struct
# if we have SMDA we definitely have lief
import lief
import capa.features.extractors.common
import capa.features.extractors.helpers
import capa.features.extractors.strings
from capa.features import String, Characteristic
from capa.features.file import Export, Import, Section
from capa.features.common import String, Characteristic
def carve(pbytes, offset=0):
"""
Return a list of (offset, size, xor) tuples of embedded PEs
Based on the version from vivisect:
https://github.com/vivisect/vivisect/blob/7be4037b1cecc4551b397f840405a1fc606f9b53/PE/carve.py#L19
And its IDA adaptation:
capa/features/extractors/ida/file.py
"""
mz_xor = [
(
capa.features.extractors.helpers.xor_static(b"MZ", i),
capa.features.extractors.helpers.xor_static(b"PE", i),
i,
)
for i in range(256)
]
pblen = len(pbytes)
todo = [(pbytes.find(mzx, offset), mzx, pex, i) for mzx, pex, i in mz_xor]
todo = [(off, mzx, pex, i) for (off, mzx, pex, i) in todo if off != -1]
while len(todo):
off, mzx, pex, i = todo.pop()
# The MZ header has one field we will check
# e_lfanew is at 0x3c
e_lfanew = off + 0x3C
if pblen < (e_lfanew + 4):
continue
newoff = struct.unpack("<I", capa.features.extractors.helpers.xor_static(pbytes[e_lfanew : e_lfanew + 4], i))[0]
nextres = pbytes.find(mzx, off + 1)
if nextres != -1:
todo.append((nextres, mzx, pex, i))
peoff = off + newoff
if pblen < (peoff + 2):
continue
if pbytes[peoff : peoff + 2] == pex:
yield (off, i)
def extract_file_embedded_pe(smda_report, file_path):
with open(file_path, "rb") as f:
fbytes = f.read()
for offset, i in carve(fbytes, 1):
def extract_file_embedded_pe(buf, **kwargs):
for offset, _ in capa.features.extractors.helpers.carve_pe(buf, 1):
yield Characteristic("embedded pe"), offset
def extract_file_export_names(smda_report, file_path):
lief_binary = lief.parse(file_path)
def extract_file_export_names(buf, **kwargs):
lief_binary = lief.parse(buf)
if lief_binary is not None:
for function in lief_binary.exported_functions:
yield Export(function.name), function.address
def extract_file_import_names(smda_report, file_path):
def extract_file_import_names(smda_report, buf):
# extract import table info via LIEF
lief_binary = lief.parse(file_path)
lief_binary = lief.parse(buf)
if not isinstance(lief_binary, lief.PE.Binary):
return
for imported_library in lief_binary.imports:
library_name = imported_library.name.lower()
library_name = library_name[:-4] if library_name.endswith(".dll") else library_name
for func in imported_library.entries:
va = func.iat_address + smda_report.base_addr
if func.name:
va = func.iat_address + smda_report.base_addr
for name in capa.features.extractors.helpers.generate_symbols(library_name, func.name):
yield Import(name), va
elif func.is_ordinal:
@@ -88,8 +39,8 @@ def extract_file_import_names(smda_report, file_path):
yield Import(name), va
def extract_file_section_names(smda_report, file_path):
lief_binary = lief.parse(file_path)
def extract_file_section_names(buf, **kwargs):
lief_binary = lief.parse(buf)
if not isinstance(lief_binary, lief.PE.Binary):
return
if lief_binary and lief_binary.sections:
@@ -98,35 +49,45 @@ def extract_file_section_names(smda_report, file_path):
yield Section(section.name), base_address + section.virtual_address
def extract_file_strings(smda_report, file_path):
def extract_file_strings(buf, **kwargs):
"""
extract ASCII and UTF-16 LE strings from file
"""
with open(file_path, "rb") as f:
b = f.read()
for s in capa.features.extractors.strings.extract_ascii_strings(b):
for s in capa.features.extractors.strings.extract_ascii_strings(buf):
yield String(s.s), s.offset
for s in capa.features.extractors.strings.extract_unicode_strings(b):
for s in capa.features.extractors.strings.extract_unicode_strings(buf):
yield String(s.s), s.offset
def extract_features(smda_report, file_path):
def extract_file_function_names(smda_report, **kwargs):
"""
extract the names of statically-linked library functions.
"""
if False:
# using a `yield` here to force this to be a generator, not a function.
yield NotImplementedError("SMDA doesn't have library matching")
return
def extract_file_format(buf, **kwargs):
yield from capa.features.extractors.common.extract_format(buf)
def extract_features(smda_report, buf):
"""
extract file features from given workspace
args:
smda_report (smda.common.SmdaReport): a SmdaReport
file_path: path to the input file
buf: the raw bytes of the sample
yields:
Tuple[Feature, VA]: a feature and its location.
"""
for file_handler in FILE_HANDLERS:
result = file_handler(smda_report, file_path)
for feature, va in file_handler(smda_report, file_path):
for feature, va in file_handler(smda_report=smda_report, buf=buf):
yield feature, va
@@ -136,4 +97,6 @@ FILE_HANDLERS = (
extract_file_import_names,
extract_file_section_names,
extract_file_strings,
extract_file_function_names,
extract_file_format,
)
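The hand-rolled carve() above is dropped in favor of the shared helper capa.features.extractors.helpers.carve_pe; a hedged sketch of how that helper appears to be used (the sample path is hypothetical, and the second tuple element is assumed to be the single-byte XOR key, matching the old implementation):

import capa.features.extractors.helpers

with open("/tmp/dropper.exe_", "rb") as f:  # hypothetical sample containing an embedded PE
    buf = f.read()

# start scanning at offset 1 to skip the file's own MZ header
for offset, key in capa.features.extractors.helpers.carve_pe(buf, 1):
    print("embedded pe at 0x%x (xor key 0x%02x)" % (offset, key))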


@@ -1,4 +1,4 @@
from capa.features import Characteristic
from capa.features.common import Characteristic
from capa.features.extractors import loops
@@ -28,7 +28,7 @@ def extract_features(f):
f (smda.common.SmdaFunction): the function from which to extract features
yields:
Feature, set[VA]: the features and their location found in this function.
Tuple[Feature, int]: the features and their location found in this function.
"""
for func_handler in FUNCTION_HANDLERS:
for feature, va in func_handler(f):


@@ -0,0 +1,20 @@
import logging
from capa.features.common import ARCH_I386, ARCH_AMD64, Arch
logger = logging.getLogger(__name__)
def extract_arch(smda_report):
if smda_report.architecture == "intel":
if smda_report.bitness == 32:
yield Arch(ARCH_I386), 0x0
elif smda_report.bitness == 64:
yield Arch(ARCH_AMD64), 0x0
else:
# we likely end up here:
# 1. handling a new architecture (e.g. aarch64)
#
# for (1), this logic will need to be updated as the format is implemented.
logger.debug("unsupported architecture: %s", smda_report.architecture)
return


@@ -5,16 +5,16 @@ import struct
from smda.common.SmdaReport import SmdaReport
import capa.features.extractors.helpers
from capa.features import (
ARCH_X32,
ARCH_X64,
from capa.features.insn import API, Number, Offset, Mnemonic
from capa.features.common import (
BITNESS_X32,
BITNESS_X64,
MAX_BYTES_FEATURE_SIZE,
THUNK_CHAIN_DEPTH_DELTA,
Bytes,
String,
Characteristic,
)
from capa.features.insn import API, Number, Offset, Mnemonic
# security cookie checks may perform non-zeroing XORs, these are expected within a certain
# byte range within the first and returning basic blocks, this helps to reduce FP features
@@ -23,12 +23,12 @@ PATTERN_HEXNUM = re.compile(r"[+\-] (?P<num>0x[a-fA-F0-9]+)")
PATTERN_SINGLENUM = re.compile(r"[+\-] (?P<num>[0-9])")
def get_arch(smda_report):
def get_bitness(smda_report):
if smda_report.architecture == "intel":
if smda_report.bitness == 32:
return ARCH_X32
return BITNESS_X32
elif smda_report.bitness == 64:
return ARCH_X64
return BITNESS_X64
else:
raise NotImplementedError
@@ -85,7 +85,7 @@ def extract_insn_number_features(f, bb, insn):
for operand in operands:
try:
yield Number(int(operand, 16)), insn.offset
yield Number(int(operand, 16), arch=get_arch(f.smda_report)), insn.offset
yield Number(int(operand, 16), bitness=get_bitness(f.smda_report)), insn.offset
except:
continue
@@ -97,7 +97,7 @@ def read_bytes(smda_report, va, num_bytes=None):
rva = va - smda_report.base_addr
if smda_report.buffer is None:
return
raise ValueError("buffer is empty")
buffer_end = len(smda_report.buffer)
max_bytes = num_bytes if num_bytes is not None else MAX_BYTES_FEATURE_SIZE
if rva + max_bytes > buffer_end:
@@ -228,7 +228,7 @@ def extract_insn_offset_features(f, bb, insn):
number = int(number_int.group("num"))
number = -1 * number if number_int.group().startswith("-") else number
yield Offset(number), insn.offset
yield Offset(number, arch=get_arch(f.smda_report)), insn.offset
yield Offset(number, bitness=get_bitness(f.smda_report)), insn.offset
def is_security_cookie(f, bb, insn):
@@ -293,7 +293,7 @@ def extract_insn_peb_access_characteristic_features(f, bb, insn):
def extract_insn_segment_access_features(f, bb, insn):
""" parse the instruction for access to fs or gs """
"""parse the instruction for access to fs or gs"""
operands = [o.strip() for o in insn.operands.split(",")]
for operand in operands:
if "fs:" in operand:
@@ -370,7 +370,7 @@ def extract_features(f, bb, insn):
insn (smda.common.SmdaInstruction): the instruction to process.
yields:
Feature, set[VA]: the features and their location found in this insn.
Tuple[Feature, int]: the features and their location found in this insn.
"""
for insn_handler in INSTRUCTION_HANDLERS:
for feature, va in insn_handler(f, bb, insn):


@@ -1,85 +0,0 @@
# Copyright (C) 2020 FireEye, Inc. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at: [package root]/LICENSE.txt
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import types
import file
import insn
import function
import viv_utils
import basicblock
import capa.features.extractors
import capa.features.extractors.viv.file
import capa.features.extractors.viv.insn
import capa.features.extractors.viv.function
import capa.features.extractors.viv.basicblock
from capa.features.extractors import FeatureExtractor
__all__ = ["file", "function", "basicblock", "insn"]
def get_va(self):
try:
# vivisect type
return self.va
except AttributeError:
pass
raise TypeError()
def add_va_int_cast(o):
"""
dynamically add a cast-to-int (`__int__`) method to the given object
that returns the value of the `.va` property.
this bit of skullduggery lets us cast viv-utils objects as ints.
the correct way of doing this is to update viv-utils (or subclass the objects here).
"""
setattr(o, "__int__", types.MethodType(get_va, o, type(o)))
return o
class VivisectFeatureExtractor(FeatureExtractor):
def __init__(self, vw, path):
super(VivisectFeatureExtractor, self).__init__()
self.vw = vw
self.path = path
def get_base_address(self):
# assume there is only one file loaded into the vw
return list(self.vw.filemeta.values())[0]["imagebase"]
def extract_file_features(self):
for feature, va in capa.features.extractors.viv.file.extract_features(self.vw, self.path):
yield feature, va
def get_functions(self):
for va in sorted(self.vw.getFunctions()):
yield add_va_int_cast(viv_utils.Function(self.vw, va))
def extract_function_features(self, f):
for feature, va in capa.features.extractors.viv.function.extract_features(f):
yield feature, va
def get_basic_blocks(self, f):
for bb in f.basic_blocks:
yield add_va_int_cast(bb)
def extract_basic_block_features(self, f, bb):
for feature, va in capa.features.extractors.viv.basicblock.extract_features(f, bb):
yield feature, va
def get_instructions(self, f, bb):
for insn in bb.instructions:
yield add_va_int_cast(insn)
def extract_insn_features(self, f, bb, insn):
for feature, va in capa.features.extractors.viv.insn.extract_features(f, bb, insn):
yield feature, va


@@ -10,9 +10,9 @@ import string
import struct
import envi
import vivisect.const
import envi.archs.i386.disasm
from capa.features import Characteristic
from capa.features.common import Characteristic
from capa.features.basicblock import BasicBlock
from capa.features.extractors.helpers import MIN_STACKSTRING_LEN
@@ -37,7 +37,7 @@ def _bb_has_tight_loop(f, bb):
"""
if len(bb.instructions) > 0:
for bva, bflags in bb.instructions[-1].getBranches():
if bflags & vivisect.envi.BR_COND:
if bflags & envi.BR_COND:
if bva == bb.va:
return True
@@ -45,7 +45,7 @@ def _bb_has_tight_loop(f, bb):
def extract_bb_tight_loop(f, bb):
""" check basic block for tight loop indicators """
"""check basic block for tight loop indicators"""
if _bb_has_tight_loop(f, bb):
yield Characteristic("tight loop"), bb.va
@@ -68,12 +68,12 @@ def _bb_has_stackstring(f, bb):
def extract_stackstring(f, bb):
""" check basic block for stackstring indicators """
"""check basic block for stackstring indicators"""
if _bb_has_stackstring(f, bb):
yield Characteristic("stack string"), bb.va
def is_mov_imm_to_stack(instr):
def is_mov_imm_to_stack(instr: envi.archs.i386.disasm.i386Opcode) -> bool:
"""
Return if instruction moves immediate onto stack
"""
@@ -105,7 +105,7 @@ def is_mov_imm_to_stack(instr):
return True
def get_printable_len(oper):
def get_printable_len(oper: envi.archs.i386.disasm.i386ImmOper) -> int:
"""
Return string length if all operand bytes are ascii or utf16-le printable
"""
@@ -117,20 +117,30 @@ def get_printable_len(oper):
chars = struct.pack("<I", oper.imm)
elif oper.tsize == 8:
chars = struct.pack("<Q", oper.imm)
else:
raise ValueError("unexpected oper.tsize: %d" % (oper.tsize))
if is_printable_ascii(chars):
return oper.tsize
if is_printable_utf16le(chars):
elif is_printable_utf16le(chars):
return oper.tsize / 2
return 0
else:
return 0
def is_printable_ascii(chars):
return all(ord(c) < 127 and c in string.printable for c in chars)
def is_printable_ascii(chars: bytes) -> bool:
try:
chars_str = chars.decode("ascii")
except UnicodeDecodeError:
return False
else:
return all(c in string.printable for c in chars_str)
def is_printable_utf16le(chars):
if all(c == "\x00" for c in chars[1::2]):
def is_printable_utf16le(chars: bytes) -> bool:
if all(c == b"\x00" for c in chars[1::2]):
return is_printable_ascii(chars[::2])
return False
def extract_features(f, bb):
@@ -142,7 +152,7 @@ def extract_features(f, bb):
bb (viv_utils.BasicBlock): the basic block to process.
yields:
Feature, set[VA]: the features and their location found in this basic block.
Tuple[Feature, int]: the features and their location found in this basic block.
"""
yield BasicBlock(), bb.va
for bb_handler in BASIC_BLOCK_HANDLERS:
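Assuming the printable-string helpers above behave as their docstrings intend, a small illustrative check (the byte values are made up):

assert is_printable_ascii(b"ABCD")
assert not is_printable_ascii(b"\x90\x90\x90\x90")

# "AB" encoded as UTF-16LE: every odd byte is NUL
assert is_printable_utf16le(b"A\x00B\x00")
assert not is_printable_utf16le(b"ABCD")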


@@ -0,0 +1,84 @@
# Copyright (C) 2020 FireEye, Inc. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at: [package root]/LICENSE.txt
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import logging
import viv_utils
import viv_utils.flirt
import capa.features.extractors.common
import capa.features.extractors.viv.file
import capa.features.extractors.viv.insn
import capa.features.extractors.viv.global_
import capa.features.extractors.viv.function
import capa.features.extractors.viv.basicblock
from capa.features.extractors.base_extractor import FeatureExtractor
logger = logging.getLogger(__name__)
class InstructionHandle:
"""this acts like a vivisect.Opcode but with an __int__() method"""
def __init__(self, inner):
self._inner = inner
def __int__(self):
return self.va
def __getattr__(self, name):
return getattr(self._inner, name)
class VivisectFeatureExtractor(FeatureExtractor):
def __init__(self, vw, path):
super(VivisectFeatureExtractor, self).__init__()
self.vw = vw
self.path = path
with open(self.path, "rb") as f:
self.buf = f.read()
# pre-compute these because we'll yield them at *every* scope.
self.global_features = []
self.global_features.extend(capa.features.extractors.common.extract_os(self.buf))
self.global_features.extend(capa.features.extractors.viv.global_.extract_arch(self.vw))
def get_base_address(self):
# assume there is only one file loaded into the vw
return list(self.vw.filemeta.values())[0]["imagebase"]
def extract_global_features(self):
yield from self.global_features
def extract_file_features(self):
yield from capa.features.extractors.viv.file.extract_features(self.vw, self.buf)
def get_functions(self):
for va in sorted(self.vw.getFunctions()):
yield viv_utils.Function(self.vw, va)
def extract_function_features(self, f):
yield from capa.features.extractors.viv.function.extract_features(f)
def get_basic_blocks(self, f):
return f.basic_blocks
def extract_basic_block_features(self, f, bb):
yield from capa.features.extractors.viv.basicblock.extract_features(f, bb)
def get_instructions(self, f, bb):
for insn in bb.instructions:
yield InstructionHandle(insn)
def extract_insn_features(self, f, bb, insn):
yield from capa.features.extractors.viv.insn.extract_features(f, bb, insn)
def is_library_function(self, va):
return viv_utils.flirt.is_library_function(self.vw, va)
def get_function_name(self, va):
return viv_utils.get_function_name(self.vw, va)
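A hedged sketch of driving the vivisect-backed extractor; building the workspace is left to viv-utils here, and getWorkspace is an assumption about that library rather than part of this diff:

import viv_utils

from capa.features.extractors.viv.extractor import VivisectFeatureExtractor

path = "/tmp/sample.exe_"          # hypothetical sample
vw = viv_utils.getWorkspace(path)  # assumed viv-utils helper
extractor = VivisectFeatureExtractor(vw, path)

for f in extractor.get_functions():
    for feature, va in extractor.extract_function_features(f):
        print(hex(va), feature)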


@@ -7,27 +7,28 @@
# See the License for the specific language governing permissions and limitations under the License.
import PE.carve as pe_carve # vivisect PE
import viv_utils
import viv_utils.flirt
import capa.features.insn
import capa.features.extractors.common
import capa.features.extractors.helpers
import capa.features.extractors.strings
from capa.features import String, Characteristic
from capa.features.file import Export, Import, Section
from capa.features.file import Export, Import, Section, FunctionName
from capa.features.common import String, Characteristic
def extract_file_embedded_pe(vw, file_path):
with open(file_path, "rb") as f:
fbytes = f.read()
for offset, i in pe_carve.carve(fbytes, 1):
def extract_file_embedded_pe(buf, **kwargs):
for offset, _ in pe_carve.carve(buf, 1):
yield Characteristic("embedded pe"), offset
def extract_file_export_names(vw, file_path):
for va, etype, name, _ in vw.getExports():
def extract_file_export_names(vw, **kwargs):
for va, _, name, _ in vw.getExports():
yield Export(name), va
def extract_file_import_names(vw, file_path):
def extract_file_import_names(vw, **kwargs):
"""
extract imported function names
1. imports by ordinal:
@@ -38,7 +39,7 @@ def extract_file_import_names(vw, file_path):
"""
for va, _, _, tinfo in vw.getImports():
# vivisect source: tinfo = "%s.%s" % (libname, impname)
modname, impname = tinfo.split(".")
modname, impname = tinfo.split(".", 1)
if is_viv_ord_impname(impname):
# replace ord prefix with #
impname = "#%s" % impname[len("ord") :]
@@ -47,7 +48,7 @@ def extract_file_import_names(vw, file_path):
yield Import(name), va
def is_viv_ord_impname(impname):
def is_viv_ord_impname(impname: str) -> bool:
"""
return if import name matches vivisect's ordinal naming scheme `'ord%d' % ord`
"""
@@ -61,39 +62,43 @@ def is_viv_ord_impname(impname):
return True
def extract_file_section_names(vw, file_path):
def extract_file_section_names(vw, **kwargs):
for va, _, segname, _ in vw.getSegments():
yield Section(segname), va
def extract_file_strings(vw, file_path):
def extract_file_strings(buf, **kwargs):
yield from capa.features.extractors.common.extract_file_strings(buf)
def extract_file_function_names(vw, **kwargs):
"""
extract ASCII and UTF-16 LE strings from file
extract the names of statically-linked library functions.
"""
with open(file_path, "rb") as f:
b = f.read()
for s in capa.features.extractors.strings.extract_ascii_strings(b):
yield String(s.s), s.offset
for s in capa.features.extractors.strings.extract_unicode_strings(b):
yield String(s.s), s.offset
for va in sorted(vw.getFunctions()):
if viv_utils.flirt.is_library_function(vw, va):
name = viv_utils.get_function_name(vw, va)
yield FunctionName(name), va
def extract_features(vw, file_path):
def extract_file_format(buf, **kwargs):
yield from capa.features.extractors.common.extract_format(buf)
def extract_features(vw, buf: bytes):
"""
extract file features from given workspace
args:
vw (vivisect.VivWorkspace): the vivisect workspace
file_path: path to the input file
buf: the raw input file bytes
yields:
Tuple[Feature, VA]: a feature and its location.
"""
for file_handler in FILE_HANDLERS:
for feature, va in file_handler(vw, file_path):
for feature, va in file_handler(vw=vw, buf=buf): # type: ignore
yield feature, va
@@ -103,4 +108,6 @@ FILE_HANDLERS = (
extract_file_import_names,
extract_file_section_names,
extract_file_strings,
extract_file_function_names,
extract_file_format,
)
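The handlers above now all take keyword arguments and ignore what they do not need, so the dispatcher can pass the workspace and the raw bytes to every handler uniformly; a minimal, generic sketch of that pattern (the names are illustrative, not capa's):

def handler_that_needs_vw(vw, **kwargs):
    yield "feature derived from the workspace", 0x0

def handler_that_needs_buf(buf, **kwargs):
    yield "feature derived from the raw bytes", 0x0

HANDLERS = (handler_that_needs_vw, handler_that_needs_buf)

def extract(vw, buf):
    for handler in HANDLERS:
        # every handler receives the same keyword arguments and keeps only what it uses
        yield from handler(vw=vw, buf=buf)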


@@ -6,9 +6,10 @@
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import envi
import vivisect.const
from capa.features import Characteristic
from capa.features.common import Characteristic
from capa.features.extractors import loops
@@ -41,9 +42,9 @@ def extract_function_loop(f):
for bva, bflags in bb.instructions[-1].getBranches():
# vivisect does not set branch flags for non-conditional jmp so add explicit check
if (
bflags & vivisect.envi.BR_COND
or bflags & vivisect.envi.BR_FALL
or bflags & vivisect.envi.BR_TABLE
bflags & envi.BR_COND
or bflags & envi.BR_FALL
or bflags & envi.BR_TABLE
or bb.instructions[-1].mnem == "jmp"
):
edges.append((bb.va, bva))
@@ -60,7 +61,7 @@ def extract_features(f):
f (viv_utils.Function): the function from which to extract features
yields:
Feature, set[VA]: the features and their location found in this function.
Tuple[Feature, int]: the features and their location found in this function.
"""
for func_handler in FUNCTION_HANDLERS:
for feature, va in func_handler(f):


@@ -0,0 +1,24 @@
import logging
import envi.archs.i386
import envi.archs.amd64
from capa.features.common import ARCH_I386, ARCH_AMD64, Arch
logger = logging.getLogger(__name__)
def extract_arch(vw):
if isinstance(vw.arch, envi.archs.amd64.Amd64Module):
yield Arch(ARCH_AMD64), 0x0
elif isinstance(vw.arch, envi.archs.i386.i386Module):
yield Arch(ARCH_I386), 0x0
else:
# we likely end up here:
# 1. handling a new architecture (e.g. aarch64)
#
# for (1), this logic will need to be updated as the format is implemented.
logger.debug("unsupported architecture: %s", vw.arch.__class__.__name__)
return


@@ -5,10 +5,13 @@
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
from typing import Optional
from vivisect import VivWorkspace
from vivisect.const import XR_TO, REF_CODE
def get_coderef_from(vw, va):
def get_coderef_from(vw: VivWorkspace, va: int) -> Optional[int]:
"""
return first code `tova` whose origin is the specified va
return None if no code reference is found


@@ -7,11 +7,16 @@
# See the License for the specific language governing permissions and limitations under the License.
import collections
from typing import TYPE_CHECKING, Set, List, Deque, Tuple, Union, Optional
import envi
import vivisect.const
import envi.archs.i386.disasm
import envi.archs.amd64.disasm
from vivisect import VivWorkspace
if TYPE_CHECKING:
from capa.features.extractors.viv.extractor import InstructionHandle
# pull out consts for lookup performance
i386RegOper = envi.archs.i386.disasm.i386RegOper
@@ -26,7 +31,7 @@ FAR_BRANCH_MASK = envi.BR_PROC | envi.BR_DEREF | envi.BR_ARCH
DESTRUCTIVE_MNEMONICS = ("mov", "lea", "pop", "xor")
def get_previous_instructions(vw, va):
def get_previous_instructions(vw: VivWorkspace, va: int) -> List[int]:
"""
collect the instructions that flow to the given address, local to the current function.
@@ -43,12 +48,14 @@ def get_previous_instructions(vw, va):
# ensure that it fallsthrough to this one.
loc = vw.getPrevLocation(va, adjacent=True)
if loc is not None:
# from vivisect.const:
# location: (L_VA, L_SIZE, L_LTYPE, L_TINFO)
(pva, _, ptype, pinfo) = vw.getPrevLocation(va, adjacent=True)
ploc = vw.getPrevLocation(va, adjacent=True)
if ploc is not None:
# from vivisect.const:
# location: (L_VA, L_SIZE, L_LTYPE, L_TINFO)
(pva, _, ptype, pinfo) = ploc
if ptype == LOC_OP and not (pinfo & IF_NOFALL):
ret.append(pva)
if ptype == LOC_OP and not (pinfo & IF_NOFALL):
ret.append(pva)
# find any code refs, e.g. jmp, to this location.
# ignore any calls.
@@ -67,7 +74,7 @@ class NotFoundError(Exception):
pass
def find_definition(vw, va, reg):
def find_definition(vw: VivWorkspace, va: int, reg: int) -> Tuple[int, Union[int, None]]:
"""
scan backwards from the given address looking for assignments to the given register.
if a constant, return that value.
@@ -83,8 +90,8 @@ def find_definition(vw, va, reg):
raises:
NotFoundError: when the definition cannot be found.
"""
q = collections.deque()
seen = set([])
q = collections.deque() # type: Deque[int]
seen = set([]) # type: Set[int]
q.extend(get_previous_instructions(vw, va))
while q:
@@ -128,14 +135,16 @@ def find_definition(vw, va, reg):
raise NotFoundError()
def is_indirect_call(vw, va, insn=None):
def is_indirect_call(vw: VivWorkspace, va: int, insn: Optional["InstructionHandle"] = None) -> bool:
if insn is None:
insn = vw.parseOpcode(va)
return insn.mnem in ("call", "jmp") and isinstance(insn.opers[0], envi.archs.i386.disasm.i386RegOper)
def resolve_indirect_call(vw, va, insn=None):
def resolve_indirect_call(
vw: VivWorkspace, va: int, insn: Optional["InstructionHandle"] = None
) -> Tuple[int, Optional[int]]:
"""
inspect the given indirect call instruction and attempt to resolve the target address.


@@ -5,22 +5,28 @@
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import envi
import envi.exc
import viv_utils
import envi.memory
import viv_utils.flirt
import envi.archs.i386.regs
import envi.archs.amd64.regs
import envi.archs.i386.disasm
import envi.archs.amd64.disasm
import capa.features.extractors.helpers
import capa.features.extractors.viv.helpers
from capa.features import (
ARCH_X32,
ARCH_X64,
from capa.features.insn import API, Number, Offset, Mnemonic
from capa.features.common import (
BITNESS_X32,
BITNESS_X64,
MAX_BYTES_FEATURE_SIZE,
THUNK_CHAIN_DEPTH_DELTA,
Bytes,
String,
Characteristic,
)
from capa.features.insn import API, Number, Offset, Mnemonic
from capa.features.extractors.viv.indirect_calls import NotFoundError, resolve_indirect_call
# security cookie checks may perform non-zeroing XORs, these are expected within a certain
@@ -28,12 +34,12 @@ from capa.features.extractors.viv.indirect_calls import NotFoundError, resolve_i
SECURITY_COOKIE_BYTES_DELTA = 0x40
def get_arch(vw):
arch = vw.getMeta("Architecture")
if arch == "i386":
return ARCH_X32
elif arch == "amd64":
return ARCH_X64
def get_bitness(vw):
bitness = vw.getMeta("Architecture")
if bitness == "i386":
return BITNESS_X32
elif bitness == "amd64":
return BITNESS_X64
def interface_extract_instruction_XXX(f, bb, insn):
@@ -74,7 +80,6 @@ def extract_insn_api_features(f, bb, insn):
# example:
#
# call dword [0x00473038]
if insn.mnem not in ("call", "jmp"):
return
@@ -96,7 +101,7 @@ def extract_insn_api_features(f, bb, insn):
# call via thunk on x86,
# see 9324d1a8ae37a36ae560c37448c9705a at 0x407985
#
# this is also how calls to internal functions may be decoded on x64.
# this is also how calls to internal functions may be decoded on x32 and x64.
# see Lab21-01.exe_:0x140001178
#
# follow chained thunks, e.g. in 82bf6347acf15e5d883715dc289d8a2b at 0x14005E0FF in
@@ -111,12 +116,21 @@ def extract_insn_api_features(f, bb, insn):
if not target:
return
if viv_utils.flirt.is_library_function(f.vw, target):
name = viv_utils.get_function_name(f.vw, target)
yield API(name), insn.va
return
for _ in range(THUNK_CHAIN_DEPTH_DELTA):
if target in imports:
dll, symbol = imports[target]
for name in capa.features.extractors.helpers.generate_symbols(dll, symbol):
yield API(name), insn.va
# if jump leads to an ENDBRANCH instruction, skip it
if f.vw.getByteDef(target)[1].startswith(b"\xf3\x0f\x1e"):
target += 4
target = capa.features.extractors.viv.helpers.get_coderef_from(f.vw, target)
if not target:
return
@@ -171,7 +185,7 @@ def extract_insn_number_features(f, bb, insn):
# assume its not also a constant.
continue
if insn.mnem == "add" and insn.opers[0].isReg() and insn.opers[0].reg == envi.archs.i386.disasm.REG_ESP:
if insn.mnem == "add" and insn.opers[0].isReg() and insn.opers[0].reg == envi.archs.i386.regs.REG_ESP:
# skip things like:
#
# .text:00401140 call sub_407E2B
@@ -179,7 +193,7 @@ def extract_insn_number_features(f, bb, insn):
return
yield Number(v), insn.va
yield Number(v, arch=get_arch(f.vw)), insn.va
yield Number(v, bitness=get_bitness(f.vw)), insn.va
def derefs(vw, p):
@@ -214,7 +228,7 @@ def derefs(vw, p):
p = next
def read_memory(vw, va, size):
def read_memory(vw, va: int, size: int) -> bytes:
# as documented in #176, vivisect will not readMemory() when the section is not marked readable.
#
# but here, we don't care about permissions.
@@ -227,10 +241,10 @@ def read_memory(vw, va, size):
mva, msize, mperms, mfname = mmap
offset = va - mva
return mbytes[offset : offset + size]
raise envi.SegmentationViolation(va)
raise envi.exc.SegmentationViolation(va)
def read_bytes(vw, va):
def read_bytes(vw, va: int) -> bytes:
"""
read up to MAX_BYTES_FEATURE_SIZE from the given address.
@@ -239,7 +253,7 @@ def read_bytes(vw, va):
"""
segm = vw.getSegment(va)
if not segm:
raise envi.SegmentationViolation()
raise envi.exc.SegmentationViolation(va)
segm_end = segm[0] + segm[1]
try:
@@ -248,7 +262,7 @@ def read_bytes(vw, va):
return read_memory(vw, va, segm_end - va)
else:
return read_memory(vw, va, MAX_BYTES_FEATURE_SIZE)
except envi.SegmentationViolation:
except envi.exc.SegmentationViolation:
raise
@@ -280,7 +294,7 @@ def extract_insn_bytes_features(f, bb, insn):
for v in derefs(f.vw, v):
try:
buf = read_bytes(f.vw, v)
except envi.SegmentationViolation:
except envi.exc.SegmentationViolation:
continue
if capa.features.extractors.helpers.all_zeros(buf):
@@ -289,10 +303,10 @@ def extract_insn_bytes_features(f, bb, insn):
yield Bytes(buf), insn.va
def read_string(vw, offset):
def read_string(vw, offset: int) -> str:
try:
alen = vw.detectString(offset)
except envi.SegmentationViolation:
except envi.exc.SegmentationViolation:
pass
else:
if alen > 0:
@@ -300,7 +314,7 @@ def read_string(vw, offset):
try:
ulen = vw.detectUnicode(offset)
except envi.SegmentationViolation:
except envi.exc.SegmentationViolation:
pass
except IndexError:
# potential vivisect bug detecting Unicode at segment end
@@ -361,21 +375,21 @@ def extract_insn_offset_features(f, bb, insn):
# reg ^
# disp
if isinstance(oper, envi.archs.i386.disasm.i386RegMemOper):
if oper.reg == envi.archs.i386.disasm.REG_ESP:
if oper.reg == envi.archs.i386.regs.REG_ESP:
continue
if oper.reg == envi.archs.i386.disasm.REG_EBP:
if oper.reg == envi.archs.i386.regs.REG_EBP:
continue
# TODO: do x64 support for real.
if oper.reg == envi.archs.amd64.disasm.REG_RBP:
if oper.reg == envi.archs.amd64.regs.REG_RBP:
continue
# viv already decodes offsets as signed
v = oper.disp
yield Offset(v), insn.va
yield Offset(v, arch=get_arch(f.vw)), insn.va
yield Offset(v, bitness=get_bitness(f.vw)), insn.va
# like: [esi + ecx + 16384]
# reg ^ ^
@@ -386,21 +400,21 @@ def extract_insn_offset_features(f, bb, insn):
v = oper.disp
yield Offset(v), insn.va
yield Offset(v, arch=get_arch(f.vw)), insn.va
yield Offset(v, bitness=get_bitness(f.vw)), insn.va
def is_security_cookie(f, bb, insn):
def is_security_cookie(f, bb, insn) -> bool:
"""
check if an instruction is related to security cookie checks
"""
# security cookie check should use SP or BP
oper = insn.opers[1]
if oper.isReg() and oper.reg not in [
envi.archs.i386.disasm.REG_ESP,
envi.archs.i386.disasm.REG_EBP,
envi.archs.i386.regs.REG_ESP,
envi.archs.i386.regs.REG_EBP,
# TODO: do x64 support for real.
envi.archs.amd64.disasm.REG_RBP,
envi.archs.amd64.disasm.REG_RSP,
envi.archs.amd64.regs.REG_RBP,
envi.archs.amd64.regs.REG_RSP,
]:
return False
@@ -476,7 +490,7 @@ def extract_insn_peb_access_characteristic_features(f, bb, insn):
def extract_insn_segment_access_features(f, bb, insn):
""" parse the instruction for access to fs or gs """
"""parse the instruction for access to fs or gs"""
prefix = insn.getPrefixName()
if prefix == "fs":
@@ -486,7 +500,7 @@ def extract_insn_segment_access_features(f, bb, insn):
yield Characteristic("gs access"), insn.va
def get_section(vw, va):
def get_section(vw, va: int):
for start, length, _, __ in vw.getMemoryMaps():
if start <= va < start + length:
return start
@@ -499,6 +513,10 @@ def extract_insn_cross_section_cflow(f, bb, insn):
inspect the instruction for a CALL or JMP that crosses section boundaries.
"""
for va, flags in insn.getBranches():
if va is None:
# va may be None for dynamic branches that haven't been resolved, such as `jmp eax`.
continue
if flags & envi.BR_FALL:
continue
@@ -593,7 +611,7 @@ def extract_features(f, bb, insn):
insn (vivisect...Instruction): the instruction to process.
yields:
Feature, set[VA]: the features and their location found in this insn.
Tuple[Feature, int]: the features and their location found in this insn.
"""
for insn_handler in INSTRUCTION_HANDLERS:
for feature, va in insn_handler(f, bb, insn):
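Since the arch keyword is renamed to bitness across the extractors, number and offset features are now emitted twice, once plain and once tagged with the workspace bitness; a small, hedged illustration using the constants imported above (the values are made up):

from capa.features.insn import Number, Offset
from capa.features.common import BITNESS_X32

features = [
    Number(0x3A), Number(0x3A, bitness=BITNESS_X32),
    Offset(0x68), Offset(0x68, bitness=BITNESS_X32),
]
for feature in features:
    print(feature)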


@@ -6,22 +6,33 @@
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
from capa.features import Feature
from capa.features.common import Feature
class Export(Feature):
def __init__(self, value, description=None):
def __init__(self, value: str, description=None):
# value is export name
super(Export, self).__init__(value, description=description)
class Import(Feature):
def __init__(self, value, description=None):
def __init__(self, value: str, description=None):
# value is import name
super(Import, self).__init__(value, description=description)
class Section(Feature):
def __init__(self, value, description=None):
def __init__(self, value: str, description=None):
# value is section name
super(Section, self).__init__(value, description=description)
class FunctionName(Feature):
"""recognized name for statically linked function"""
def __init__(self, name: str, description=None):
# value is function name
super(FunctionName, self).__init__(name, description=description)
# override the name property set by `capa.features.Feature`
# that would be `functionname` (note missing dash)
self.name = "function-name"
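A small check of the new feature class and its overridden name (the function name is hypothetical):

from capa.features.file import FunctionName

feature = FunctionName("memcpy")        # a recognized statically-linked function
assert feature.name == "function-name"  # rendered with a dash, unlike the default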


@@ -19,6 +19,10 @@ json format:
...
},
'scopes': {
'global': [
(str(name), [any(arg), ...], int(va), ()),
...
},
'file': [
(str(name), [any(arg), ...], int(va), ()),
...
@@ -53,11 +57,11 @@ import json
import zlib
import logging
import capa.features
import capa.features.file
import capa.features.insn
import capa.features.common
import capa.features.basicblock
import capa.features.extractors
import capa.features.extractors.base_extractor
from capa.helpers import hex
logger = logging.getLogger(__name__)
@@ -67,7 +71,7 @@ def serialize_feature(feature):
return feature.freeze_serialize()
KNOWN_FEATURES = {F.__name__: F for F in capa.features.Feature.__subclasses__()}
KNOWN_FEATURES = {F.__name__: F for F in capa.features.common.Feature.__subclasses__()}
def deserialize_feature(doc):
@@ -80,7 +84,7 @@ def dumps(extractor):
serialize the given extractor to a string
args:
extractor: capa.features.extractor.FeatureExtractor:
extractor: capa.features.extractors.base_extractor.FeatureExtractor:
returns:
str: the serialized features.
@@ -90,12 +94,15 @@ def dumps(extractor):
"base address": extractor.get_base_address(),
"functions": {},
"scopes": {
"global": [],
"file": [],
"function": [],
"basic block": [],
"instruction": [],
},
}
for feature, va in extractor.extract_global_features():
ret["scopes"]["global"].append(serialize_feature(feature) + (hex(va), ()))
for feature, va in extractor.extract_file_features():
ret["scopes"]["file"].append(serialize_feature(feature) + (hex(va), ()))
@@ -122,7 +129,7 @@ def dumps(extractor):
)
for insnva, insn in sorted(
[(insn.__int__(), insn) for insn in extractor.get_instructions(f, bb)], key=lambda p: p[0]
[(int(insn), insn) for insn in extractor.get_instructions(f, bb)], key=lambda p: p[0]
):
ret["functions"][hex(f)][hex(bb)].append(hex(insnva))
@@ -150,6 +157,7 @@ def loads(s):
features = {
"base address": doc.get("base address"),
"global features": [],
"file features": [],
"functions": {},
}
@@ -179,6 +187,12 @@ def loads(s):
# ('MatchedRule', ('foo', ), '0x401000', ('0x401000', ))
# ^^^^^^^^^^^^^ ^^^^^^^^^ ^^^^^^^^^^ ^^^^^^^^^^^^^^
# feature name args addr func/bb/insn
for feature in doc.get("scopes", {}).get("global", []):
va, loc = feature[2:]
va = int(va, 0x10)
feature = deserialize_feature(feature[:2])
features["global features"].append((va, feature))
for feature in doc.get("scopes", {}).get("file", []):
va, loc = feature[2:]
va = int(va, 0x10)
@@ -217,7 +231,7 @@ def loads(s):
feature = deserialize_feature(feature[:2])
features["functions"][loc[0]]["basic blocks"][loc[1]]["instructions"][loc[2]]["features"].append((va, feature))
return capa.features.extractors.NullFeatureExtractor(features)
return capa.features.extractors.base_extractor.NullFeatureExtractor(features)
MAGIC = "capa0000".encode("ascii")
@@ -228,7 +242,7 @@ def dump(extractor):
return MAGIC + zlib.compress(dumps(extractor).encode("utf-8"))
def is_freeze(buf):
def is_freeze(buf: bytes) -> bool:
return buf[: len(MAGIC)] == MAGIC
@@ -248,35 +262,16 @@ def main(argv=None):
if argv is None:
argv = sys.argv[1:]
formats = [
("auto", "(default) detect file type automatically"),
("pe", "Windows PE file"),
("sc32", "32-bit shellcode"),
("sc64", "64-bit shellcode"),
]
format_help = ", ".join(["%s: %s" % (f[0], f[1]) for f in formats])
parser = argparse.ArgumentParser(description="save capa features to a file")
parser.add_argument("sample", type=str, help="Path to sample to analyze")
capa.main.install_common_args(parser, {"sample", "format", "backend", "signatures"})
parser.add_argument("output", type=str, help="Path to output file")
parser.add_argument("-v", "--verbose", action="store_true", help="Enable verbose output")
parser.add_argument("-q", "--quiet", action="store_true", help="Disable all output but errors")
parser.add_argument(
"-f", "--format", choices=[f[0] for f in formats], default="auto", help="Select sample format, %s" % format_help
)
args = parser.parse_args(args=argv)
capa.main.handle_common_args(args)
if args.quiet:
logging.basicConfig(level=logging.ERROR)
logging.getLogger().setLevel(logging.ERROR)
elif args.verbose:
logging.basicConfig(level=logging.DEBUG)
logging.getLogger().setLevel(logging.DEBUG)
else:
logging.basicConfig(level=logging.INFO)
logging.getLogger().setLevel(logging.INFO)
sigpaths = capa.main.get_signatures(args.signatures)
extractor = capa.main.get_extractor(args.sample, args.format, args.backend, sigpaths, False)
extractor = capa.main.get_extractor(args.sample, args.format)
with open(args.output, "wb") as f:
f.write(dump(extractor))
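A hedged sketch of producing and recognizing a freeze blob with the updated module; constructing the extractor follows the vivisect example earlier and the paths are hypothetical:

import viv_utils

import capa.features.freeze
from capa.features.extractors.viv.extractor import VivisectFeatureExtractor

path = "/tmp/sample.exe_"
vw = viv_utils.getWorkspace(path)  # assumed viv-utils helper
extractor = VivisectFeatureExtractor(vw, path)

buf = capa.features.freeze.dump(extractor)  # MAGIC + zlib-compressed JSON document
assert capa.features.freeze.is_freeze(buf)

with open("/tmp/sample.frz", "wb") as f:
    f.write(buf)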


@@ -6,11 +6,12 @@
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
from capa.features import Feature
import capa.render.utils
from capa.features.common import Feature
class API(Feature):
def __init__(self, name, description=None):
def __init__(self, name: str, description=None):
# Downcase library name if given
if "." in name:
modname, _, impname = name.rpartition(".")
@@ -20,21 +21,21 @@ class API(Feature):
class Number(Feature):
def __init__(self, value, arch=None, description=None):
super(Number, self).__init__(value, arch=arch, description=description)
def __init__(self, value: int, bitness=None, description=None):
super(Number, self).__init__(value, bitness=bitness, description=description)
def get_value_str(self):
return "0x%X" % self.value
return capa.render.utils.hex(self.value)
class Offset(Feature):
def __init__(self, value, arch=None, description=None):
super(Offset, self).__init__(value, arch=arch, description=description)
def __init__(self, value: int, bitness=None, description=None):
super(Offset, self).__init__(value, bitness=bitness, description=description)
def get_value_str(self):
return "0x%X" % self.value
return capa.render.utils.hex(self.value)
class Mnemonic(Feature):
def __init__(self, value, description=None):
def __init__(self, value: str, description=None):
super(Mnemonic, self).__init__(value, description=description)


@@ -12,25 +12,21 @@ _hex = hex
def hex(i):
# under py2.7, long integers get formatted with a trailing `L`
# and this is not pretty. so strip it out.
return _hex(oint(i)).rstrip("L")
return _hex(int(i))
def oint(i):
# there seems to be some trouble with using `int(viv_utils.Function)`
# with the black magic we do with binding the `__int__()` routine.
# i haven't had a chance to debug this yet (and i have no hotel wifi).
# so in the meantime, detect this, and call the method directly.
try:
return int(i)
except TypeError:
return i.__int__()
def get_file_taste(sample_path):
def get_file_taste(sample_path: str) -> bytes:
if not os.path.exists(sample_path):
raise IOError("sample path %s does not exist or cannot be accessed" % sample_path)
with open(sample_path, "rb") as f:
taste = f.read(8)
return taste
def is_runtime_ida():
try:
import idc
except ImportError:
return False
else:
return True


@@ -10,28 +10,34 @@ import logging
import datetime
import idc
import six
import idaapi
import idautils
import ida_bytes
import ida_loader
import capa
import capa.version
import capa.features.common
logger = logging.getLogger("capa")
SUPPORTED_IDA_VERSIONS = [
"7.1",
"7.2",
"7.3",
# IDA version as returned by idaapi.get_kernel_version()
SUPPORTED_IDA_VERSIONS = (
"7.4",
"7.5",
]
"7.6",
)
# file type names as returned by idaapi.get_file_type_name()
SUPPORTED_FILE_TYPES = [
"Portable executable for 80386 (PE)",
"Portable executable for AMD64 (PE)",
"Binary file", # x86/AMD64 shellcode support
]
# file type as returned by idainfo.file_type
SUPPORTED_FILE_TYPES = (
idaapi.f_PE,
idaapi.f_ELF,
idaapi.f_BIN,
# idaapi.f_MACHO,
)
# arch type as returned by idainfo.procname
SUPPORTED_ARCH_TYPES = ("metapc",)
def inform_user_ida_ui(message):
@@ -51,13 +57,13 @@ def is_supported_ida_version():
def is_supported_file_type():
file_type = idaapi.get_file_type_name()
if file_type not in SUPPORTED_FILE_TYPES:
file_info = idaapi.get_inf_structure()
if file_info.filetype not in SUPPORTED_FILE_TYPES:
logger.error("-" * 80)
logger.error(" Input file does not appear to be a PE file.")
logger.error(" Input file does not appear to be a supported file type.")
logger.error(" ")
logger.error(
" capa currently only supports analyzing PE files (or binary files containing x86/AMD64 shellcode) with IDA."
" capa currently only supports analyzing PE, ELF, or binary files containing x86 (32- and 64-bit) shellcode."
)
logger.error(" If you don't know the input file type, you can try using the `file` utility to guess it.")
logger.error("-" * 80)
@@ -65,13 +71,25 @@ def is_supported_file_type():
return True
def is_supported_arch_type():
file_info = idaapi.get_inf_structure()
if file_info.procname not in SUPPORTED_ARCH_TYPES or not any((file_info.is_32bit(), file_info.is_64bit())):
logger.error("-" * 80)
logger.error(" Input file does not appear to target a supported architecture.")
logger.error(" ")
logger.error(" capa currently only supports analyzing x86 (32- and 64-bit).")
logger.error("-" * 80)
return False
return True
def get_disasm_line(va):
""" """
return idc.generate_disasm_line(va, idc.GENDSM_FORCE_CODE)
def is_func_start(ea):
""" check if function stat exists at virtual address """
"""check if function stat exists at virtual address"""
f = idaapi.get_func(ea)
return f and f.start_ea == ea
@@ -82,14 +100,26 @@ def get_func_start_ea(ea):
return f if f is None else f.start_ea
def collect_metadata():
def get_file_md5():
""" """
md5 = idautils.GetInputFileMD5()
if not isinstance(md5, six.string_types):
md5 = capa.features.bytes_to_str(md5)
if not isinstance(md5, str):
md5 = capa.features.common.bytes_to_str(md5)
return md5
def get_file_sha256():
""" """
sha256 = idaapi.retrieve_input_file_sha256()
if not isinstance(sha256, six.string_types):
sha256 = capa.features.bytes_to_str(sha256)
if not isinstance(sha256, str):
sha256 = capa.features.common.bytes_to_str(sha256)
return sha256
def collect_metadata():
""" """
md5 = get_file_md5()
sha256 = get_file_sha256()
return {
"timestamp": datetime.datetime.now().isoformat(),
@@ -107,3 +137,30 @@ def collect_metadata():
},
"version": capa.version.__version__,
}
class IDAIO:
"""
An object that acts as a file-like object,
using bytes from the current IDB workspace.
"""
def __init__(self):
super(IDAIO, self).__init__()
self.offset = 0
def seek(self, offset, whence=0):
assert whence == 0
self.offset = offset
def read(self, size):
ea = ida_loader.get_fileregion_ea(self.offset)
if ea == idc.BADADDR:
# best guess, such as if file is mapped at address 0x0.
ea = self.offset
logger.debug("reading 0x%x bytes at 0x%x (ea: 0x%x)", size, self.offset, ea)
return ida_bytes.get_bytes(ea, size)
def close(self):
return
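A hedged sketch of how such a file-like object might be consumed from within IDA, for example to peek at the original file header without touching disk (this usage is illustrative, not shown in the diff):

io = IDAIO()
io.seek(0)
print(io.read(2))  # expect b"MZ" for a PE whose header is mapped into the IDB
io.close()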


@@ -1,65 +1,82 @@
![capa explorer](../../../.github/capa-explorer-logo.png)
capa explorer is an IDA Pro plugin written in Python that integrates the FLARE team's open-source framework, capa, with IDA. capa is a framework that uses a well-defined collection of rules to
capa explorer is an IDAPython plugin that integrates the FLARE team's open-source framework, capa, with IDA Pro. capa is a framework that uses a well-defined collection of rules to
identify capabilities in a program. You can run capa against a PE file or shellcode and it tells you what it thinks the program can do. For example, it might suggest that
the program is a backdoor, can install services, or relies on HTTP to communicate. You can use capa explorer to run capa directly on an IDA database without requiring access
to the source binary. Once a database has been analyzed, capa explorer can be used to quickly identify and navigate to interesting areas of a program
and dissect capa rule matches at the assembly level.
the program is a backdoor, can install services, or relies on HTTP to communicate. capa explorer runs capa directly against your IDA Pro database (IDB) without requiring access
to the original binary file. Once a database has been analyzed, capa explorer helps you identify interesting areas of a program and build new capa rules using features extracted from your IDB.
We love using capa explorer during malware analysis because it teaches us what parts of a program suggest a behavior. As we click on rows, capa explorer jumps directly
to important addresses in the IDA Pro database and highlights key features in the Disassembly view so they stand out visually. To illustrate, we use capa explorer to
to important addresses in the IDB and highlights key features in the Disassembly view so they stand out visually. To illustrate, we use capa explorer to
analyze Lab 14-02 from [Practical Malware Analysis](https://nostarch.com/malware) (PMA) available [here](https://practicalmalwareanalysis.com/labs/). Our goal is to understand
the program's functionality.
After loading Lab 14-02 into IDA and analyzing the database with capa explorer, we see that capa detected a rule match for `self delete via COMSPEC environment variable`:
![](../../../doc/img/ida_plugin_example_1.png)
![](../../../doc/img/explorer_condensed.png)
We can use capa explorer to navigate the IDA Disassembly view directly to the suspect function and get an assembly-level breakdown of why capa matched `self delete via COMSPEC environment variable`
for this particular function.
We can use capa explorer to navigate our Disassembly view directly to the suspect function and get an assembly-level breakdown of why capa matched `self delete via COMSPEC environment variable`.
![](../../../doc/img/ida_plugin_example_2.png)
![](../../../doc/img/explorer_expanded.png)
Using the `Rule Information` and `Details` columns capa explorer shows us that the suspect function matched `self delete via COMSPEC environment variable` because it contains capa rule matches for `create process`, `get COMSPEC environment variable`,
and `query environment variable`, references to the strings `COMSPEC`, ` > nul`, and `/c del`, and calls to the Windows API functions `GetEnvironmentVariableA` and `ShellExecuteEx`.
and `query environment variable`, references to the strings `COMSPEC`, ` > nul`, and `/c del `, and calls to the Windows API functions `GetEnvironmentVariableA` and `ShellExecuteEx`.
capa explorer also helps you build new capa rules. To start, select the `Rule Generator` tab, navigate to a function in your Disassembly view,
and click `Analyze`. capa explorer will extract features from the function and display them in the `Features` pane. You can add features listed in this pane to the `Editor` pane
by either double-clicking a feature or using multi-select + right-click to add multiple features at once. The `Preview` and `Editor` panes help you edit your rule. Use the `Preview` pane
to modify the rule text directly and the `Editor` pane to construct and rearrange your hierarchy of statements and features. When you finish a rule, you can save it directly to a file by clicking `Save`.
![](../../../doc/img/rulegen_expanded.png)
For more information on the FLARE team's open-source framework, capa, check out the overview in our first [blog](https://www.fireeye.com/blog/threat-research/2020/07/capa-automatically-identify-malware-capabilities.html).
## Features
![](../../../doc/img/ida_plugin_intro.gif)
* Display capa results in an interactive tree view of rule matches and their locations in the current database
* Search for keywords or phrases found in the `Rule Information`, `Address`, or `Details` columns
* Display rule source content when a user hovers their cursor over a rule match
* Double-click `Address` column to view associated feature in the IDA Disassembly view
* Limit tree view results to the function currently displayed in the IDA Disassembly view; update results as a user navigates to different functions
* Export results as formatted JSON by navigating to `File > Export results...`
* Remember a user's capa rules directory for future runs; change capa rules directory by navigating to `Rules > Change rules directory...`
* Automatically re-analyze database when user performs a program rebase
* Automatically update results when IDA is used to rename a function
* Select one or more checkboxes to highlight the associated addresses in the IDA Disassembly view
* Right-click a function match to rename it; the new function name is propagated to the current IDA database
* Right-click to copy a result by column or by row
* Sort results by column
* Reset tree view and IDA Disassembly view highlighting by clicking `Reset`
## Getting Started
### Requirements
capa explorer supports the following IDA setups:
capa explorer supports Python versions >= 3.6.x and the following IDA Pro versions:
* IDA Pro 7.4+ with Python 2.7 or Python 3.
* IDA 7.4
* IDA 7.5
* IDA 7.6 (caveat below)
However, capa explorer is limited to the Python versions supported by your IDA installation (which may not include all Python versions >= 3.6.x). Based on our testing, the following matrix shows the Python versions supported
by each IDA version:
| | IDA 7.4 | IDA 7.5 | IDA 7.6 |
| --- | --- | --- | --- |
| Python 3.6.x | Yes | Yes | Yes |
| Python 3.7.x | Yes | Yes | Yes |
| Python 3.8.x | Partial (see below) | Yes | Yes |
| Python 3.9.x | No | Partial (see below) | Yes |
To use capa explorer with IDA 7.4 and Python 3.8.x you must follow the instructions provided by hex-rays [here](https://hex-rays.com/blog/ida-7-4-and-python-3-8/).
To use capa explorer with IDA 7.5 and Python 3.9.x you must follow the instructions provided by hex-rays [here](https://hex-rays.com/blog/python-3-9-support-for-ida-7-5/).
If you encounter issues with your specific setup, please open a new [Issue](https://github.com/fireeye/capa/issues).
#### IDA 7.6 caveat: IDA 7.6sp1 or patch required
As described [here](https://www.hex-rays.com/blog/ida-7-6-empty-qtreeview-qtreewidget/):
> A rather nasty issue evaded our testing and found its way into IDA 7.6: using the PyQt5 modules that are shipped with IDA, QTreeView (or QTreeWidget) instances will always fail to display contents.
Therefore, in order to use capa under IDA 7.6 you need the [Service Pack 1 for IDA 7.6](https://www.hex-rays.com/products/ida/news/7_6sp1). Alternatively, you can download and install the fix corresponding to your IDA installation, replacing the original QtWidgets DLL with the one contained in the .zip file (links to Hex-Rays):
- Windows: [pyqt5_qtwidgets_win](https://www.hex-rays.com/wp-content/uploads/2021/04/pyqt5_qtwidgets_win.zip)
- Linux: [pyqt5_qtwidgets_linux](https://www.hex-rays.com/wp-content/uploads/2021/04/pyqt5_qtwidgets_linux.zip)
- macOS (Intel): [pyqt5_qtwidgets_mac_x64](https://www.hex-rays.com/wp-content/uploads/2021/04/pyqt5_qtwidgets_mac_x64.zip)
- macOS (Apple Silicon): [pyqt5_qtwidgets_mac_arm](https://www.hex-rays.com/wp-content/uploads/2021/04/pyqt5_qtwidgets_mac_arm.zip)
### Supported File Types
capa explorer is limited to the file types supported by capa, which includes:
capa explorer is limited to the file types supported by capa, which include:
* Windows 32-bit and 64-bit PE files
* Windows 32-bit and 64-bit shellcode
* Windows x86 (32- and 64-bit) PE and ELF files
* Windows x86 (32- and 64-bit) shellcode
### Installation
@@ -74,38 +91,49 @@ You can install capa explorer using the following steps:
### Usage
1. Run IDA and analyze a supported file type (select the `Manual Load` and `Load Resources` options in IDA for best results)
1. Open IDA and analyze a supported file type (select the `Manual Load` and `Load Resources` options in IDA for best results)
2. Open capa explorer in IDA by navigating to `Edit > Plugins > FLARE capa explorer` or using the keyboard shortcut `Alt+F5`
3. Click the `Analyze` button
You can also launch the plugin programmatically with `ida_loader.load_and_run_plugin("capa_explorer", arg)`, where `arg` is a bitflag whose least significant bit enables automatic analysis (see the sketch after these steps). See `capa.ida.plugin.form.Options` for more details.
3. Select the `Program Analysis` tab
4. Click the `Analyze` button
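As a minimal sketch of scripted use (passing `0x1` simply sets the LSB described above; the other bits are currently undefined):

```python
import ida_loader

# bit 0 (the LSB) of `arg` requests automatic analysis when the plugin loads
ida_loader.load_and_run_plugin("capa_explorer", 0x1)
```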
When running capa explorer for the first time you are prompted to select a file directory containing capa rules. The plugin conveniently
remembers your selection for future runs; you can change this selection by navigating to `Rules > Change rules directory...`. We recommend
remembers your selection for future runs; you can change this selection and other default settings by clicking `Settings`. We recommend
downloading and using the [standard collection of capa rules](https://github.com/fireeye/capa-rules) when getting started with the plugin.
#### Tips
#### Tips for Program Analysis
* Start analysis by clicking the `Analyze` button
* Reset the plugin user interface and remove highlighting from IDA disassembly view by clicking the `Reset` button
* Change your capa rules directory by navigating to `Rules > Change rules directory...` from the plugin menu
* Reset the plugin user interface and remove highlighting from your Disassembly view by clicking the `Reset` button
* Change your capa rules directory and other default settings by clicking `Settings`
* Hover your cursor over a rule match to view the source content of the rule
* Double-click the `Address` column to navigate the IDA Disassembly view to the associated feature
* Double-click the `Address` column to navigate your Disassembly view to the address of the associated feature
* Double-click a result in the `Rule Information` column to expand its children
* Select a checkbox in the `Rule Information` column to highlight the address of the associated feature in the IDA Disassembly view
* Select a checkbox in the `Rule Information` column to highlight the address of the associated feature in your Disassembly view
#### Tips for Rule Generator
* Navigate to a function in your Disassembly view and click `Analyze` to get started
* Double-click or use multi-select + right-click to add features from the `Features` pane to the `Editor` pane
* Right-click features in the `Editor` pane to make context-specific modifications
* Drag-and-drop (single click + multi-select support) features in the `Editor` pane to construct your hierarchy of statements and features
* Right-click anywhere in the `Editor` pane not on a feature to remove all features
* Add descriptions or comments to a feature by editing the corresponding column in the `Editor` pane
* Directly edit rule text and metadata fields using the `Preview` pane
* Change the default rule author and default rule scope displayed in the `Preview` pane by clicking `Settings`
## Development
Because capa explorer is packaged with capa you will need to install capa locally for development.
You can install capa locally by following the steps outlined in `Method 3: Inspecting the capa source code` of the [capa
capa explorer is packaged with capa so you will need to install capa locally for development. You can install capa locally by following the steps outlined in `Method 3: Inspecting the capa source code` of the [capa
installation guide](https://github.com/fireeye/capa/blob/master/doc/installation.md#method-3-inspecting-the-capa-source-code). Once installed, copy [capa_explorer.py](https://raw.githubusercontent.com/fireeye/capa/master/capa/ida/plugin/capa_explorer.py)
to your IDA plugins directory to run the plugin in IDA.
to your plugins directory to install capa explorer in IDA.
### Components
capa explorer consists of two main components:
* An IDA [feature extractor](https://github.com/fireeye/capa/tree/master/capa/features/extractors/ida) built on top of IDA's binary analysis engine
* This component uses IDAPython to extract [capa features](https://github.com/fireeye/capa-rules/blob/master/doc/format.md#extracted-features) from the IDA database such as strings,
* A [feature extractor](https://github.com/fireeye/capa/tree/master/capa/features/extractors/ida) built on top of IDA's binary analysis engine
* This component uses IDAPython to extract [capa features](https://github.com/fireeye/capa-rules/blob/master/doc/format.md#extracted-features) from your IDBs such as strings,
disassembly, and control flow; these extracted features are used by capa to find feature combinations that result in a rule match
* An [interactive user interface](https://github.com/fireeye/capa/tree/master/capa/ida/plugin) for displaying and exploring capa rule matches
* This component integrates the IDA feature extractor and capa, providing an interactive user interface to dissect rule matches found by capa using features extracted by the IDA feature extractor
* This component integrates the feature extractor and capa, providing an interactive user interface to dissect rule matches found by capa using features extracted directly from your IDBs


@@ -11,7 +11,6 @@ import logging
import idaapi
import ida_kernwin
from capa.ida.helpers import is_supported_file_type, is_supported_ida_version
from capa.ida.plugin.form import CapaExplorerForm
from capa.ida.plugin.icon import ICON
@@ -41,10 +40,14 @@ class CapaExplorerPlugin(idaapi.plugin_t):
"""called when IDA is loading the plugin"""
logging.basicConfig(level=logging.INFO)
import capa.ida.helpers
# do not load plugin if IDA version/file type not supported
if not is_supported_ida_version():
if not capa.ida.helpers.is_supported_ida_version():
return idaapi.PLUGIN_SKIP
if not is_supported_file_type():
if not capa.ida.helpers.is_supported_file_type():
return idaapi.PLUGIN_SKIP
if not capa.ida.helpers.is_supported_arch_type():
return idaapi.PLUGIN_SKIP
return idaapi.PLUGIN_OK
@@ -53,8 +56,14 @@ class CapaExplorerPlugin(idaapi.plugin_t):
pass
def run(self, arg):
"""called when IDA is running the plugin as a script"""
self.form = CapaExplorerForm(self.PLUGIN_NAME)
"""
called when IDA is running the plugin as a script
args:
arg (int): bitflag. Setting LSB enables automatic analysis upon
loading. The other bits are currently undefined. See `form.Options`.
"""
self.form = CapaExplorerForm(self.PLUGIN_NAME, arg)
return True

File diff suppressed because it is too large


@@ -6,7 +6,6 @@
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import sys
import codecs
import idc
@@ -32,23 +31,22 @@ def location_to_hex(location):
return "%08X" % location
class CapaExplorerDataItem(object):
class CapaExplorerDataItem:
"""store data for CapaExplorerDataModel"""
def __init__(self, parent, data):
def __init__(self, parent, data, can_check=True):
"""initialize item"""
self.pred = parent
self._data = data
self.children = []
self._checked = False
self._can_check = can_check
# default state for item
self.flags = (
QtCore.Qt.ItemIsEnabled
| QtCore.Qt.ItemIsSelectable
| QtCore.Qt.ItemIsTristate
| QtCore.Qt.ItemIsUserCheckable
)
self.flags = QtCore.Qt.ItemIsEnabled | QtCore.Qt.ItemIsSelectable
if self._can_check:
self.flags = self.flags | QtCore.Qt.ItemIsUserCheckable | QtCore.Qt.ItemIsTristate
if self.pred:
self.pred.appendChild(self)
@@ -70,6 +68,10 @@ class CapaExplorerDataItem(object):
"""
self._checked = checked
def canCheck(self):
""" """
return self._can_check
def isChecked(self):
"""get item is checked"""
return self._checked
@@ -165,7 +167,7 @@ class CapaExplorerRuleItem(CapaExplorerDataItem):
fmt = "%s (%d matches)"
def __init__(self, parent, name, namespace, count, source):
def __init__(self, parent, name, namespace, count, source, can_check=True):
"""initialize item
@param parent: parent node
@@ -175,7 +177,7 @@ class CapaExplorerRuleItem(CapaExplorerDataItem):
@param source: rule source (tooltip)
"""
display = self.fmt % (name, count) if count > 1 else name
super(CapaExplorerRuleItem, self).__init__(parent, [display, "", namespace])
super(CapaExplorerRuleItem, self).__init__(parent, [display, "", namespace], can_check)
self._source = source
@property
@@ -199,7 +201,7 @@ class CapaExplorerRuleMatchItem(CapaExplorerDataItem):
@property
def source(self):
""" return rule contents for display """
"""return rule contents for display"""
return self._source
@@ -208,14 +210,14 @@ class CapaExplorerFunctionItem(CapaExplorerDataItem):
fmt = "function(%s)"
def __init__(self, parent, location):
def __init__(self, parent, location, can_check=True):
"""initialize item
@param parent: parent node
@param location: virtual address of function as seen by IDA
"""
super(CapaExplorerFunctionItem, self).__init__(
parent, [self.fmt % idaapi.get_name(location), location_to_hex(location), ""]
parent, [self.fmt % idaapi.get_name(location), location_to_hex(location), ""], can_check
)
@property
@@ -325,14 +327,10 @@ class CapaExplorerByteViewItem(CapaExplorerFeatureItem):
"""
byte_snap = idaapi.get_bytes(location, 32)
details = ""
if byte_snap:
byte_snap = codecs.encode(byte_snap, "hex").upper()
if sys.version_info >= (3, 0):
details = " ".join([byte_snap[i : i + 2].decode() for i in range(0, len(byte_snap), 2)])
else:
details = " ".join([byte_snap[i : i + 2] for i in range(0, len(byte_snap), 2)])
else:
details = ""
details = " ".join([byte_snap[i : i + 2].decode() for i in range(0, len(byte_snap), 2)])
super(CapaExplorerByteViewItem, self).__init__(parent, display, location=location, details=details)
self.ida_highlight = idc.get_color(location, idc.CIC_ITEM)


@@ -6,14 +6,16 @@
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
from collections import deque
from collections import deque, defaultdict
import idc
import idaapi
from PyQt5 import QtGui, QtCore
import capa.rules
import capa.ida.helpers
import capa.render.utils as rutils
import capa.features.common
from capa.ida.plugin.item import (
CapaExplorerDataItem,
CapaExplorerRuleItem,
@@ -110,6 +112,8 @@ class CapaExplorerDataModel(QtCore.QAbstractItemModel):
if role == QtCore.Qt.CheckStateRole and column == CapaExplorerDataModel.COLUMN_INDEX_RULE_INFORMATION:
# inform view how to display content of checkbox - un/checked
if not item.canCheck():
return None
return QtCore.Qt.Checked if item.isChecked() else QtCore.Qt.Unchecked
if role == QtCore.Qt.FontRole and column in (
@@ -424,14 +428,34 @@ class CapaExplorerDataModel(QtCore.QAbstractItemModel):
for child in match.get("children", []):
self.render_capa_doc_match(parent2, child, doc)
def render_capa_doc(self, doc):
"""render capa features specified in doc
@param doc: capa result doc
"""
# inform model that changes are about to occur
self.beginResetModel()
def render_capa_doc_by_function(self, doc):
""" """
matches_by_function = {}
for rule in rutils.capability_rules(doc):
for ea in rule["matches"].keys():
ea = capa.ida.helpers.get_func_start_ea(ea)
if ea is None:
# file scope, skip rendering in this mode
continue
if not matches_by_function.get(ea, ()):
# new function root
matches_by_function[ea] = (CapaExplorerFunctionItem(self.root_node, ea, can_check=False), [])
function_root, match_cache = matches_by_function[ea]
if rule["meta"]["name"] in match_cache:
# rule match already rendered for this function root, skip it
continue
match_cache.append(rule["meta"]["name"])
CapaExplorerRuleItem(
function_root,
rule["meta"]["name"],
rule["meta"].get("namespace"),
len(rule["matches"]),
rule["source"],
can_check=False,
)
def render_capa_doc_by_program(self, doc):
""" """
for rule in rutils.capability_rules(doc):
rule_name = rule["meta"]["name"]
rule_namespace = rule["meta"].get("namespace")
@@ -451,6 +475,19 @@ class CapaExplorerDataModel(QtCore.QAbstractItemModel):
self.render_capa_doc_match(parent2, match, doc)
def render_capa_doc(self, doc, by_function):
"""render capa features specified in doc
@param doc: capa result doc
"""
# inform model that changes are about to occur
self.beginResetModel()
if by_function:
self.render_capa_doc_by_function(doc)
else:
self.render_capa_doc_by_program(doc)
# inform model changes have ended
self.endResetModel()
@@ -459,13 +496,17 @@ class CapaExplorerDataModel(QtCore.QAbstractItemModel):
@param feature: capa feature read from doc
"""
if feature[feature["type"]]:
key = feature["type"]
value = feature[feature["type"]]
if value:
if key == "string":
value = '"%s"' % capa.features.common.escape_string(value)
if feature.get("description", ""):
return "%s(%s = %s)" % (feature["type"], feature[feature["type"]], feature["description"])
return "%s(%s = %s)" % (key, value, feature["description"])
else:
return "%s(%s)" % (feature["type"], feature[feature["type"]])
return "%s(%s)" % (key, value)
else:
return "%s" % feature["type"]
return "%s" % key
def render_capa_doc_feature_node(self, parent, feature, locations, doc):
"""process capa doc feature node
@@ -521,8 +562,15 @@ class CapaExplorerDataModel(QtCore.QAbstractItemModel):
parent, display, source=doc["rules"].get(feature[feature["type"]], {}).get("source", "")
)
if feature["type"] == "regex":
return CapaExplorerStringViewItem(parent, display, location, feature["match"])
if feature["type"] in ("regex", "substring"):
for s, locations in feature["matches"].items():
if location in locations:
return CapaExplorerStringViewItem(
parent, display, location, '"' + capa.features.common.escape_string(s) + '"'
)
# programming error: the given location should always be found in the regex matches
raise ValueError("regex match at location not found")
if feature["type"] == "basicblock":
return CapaExplorerBlockItem(parent, location)
@@ -547,10 +595,15 @@ class CapaExplorerDataModel(QtCore.QAbstractItemModel):
if feature["type"] in ("string",):
# display string preview
return CapaExplorerStringViewItem(parent, display, location, feature[feature["type"]])
return CapaExplorerStringViewItem(
parent, display, location, '"%s"' % capa.features.common.escape_string(feature[feature["type"]])
)
if feature["type"] in ("import", "export"):
if feature["type"] in ("import", "export", "function-name"):
# display no preview
return CapaExplorerFeatureItem(parent, location=location, display=display)
if feature["type"] in ("arch", "os", "format"):
return CapaExplorerFeatureItem(parent, display=display)
raise RuntimeError("unexpected feature type: " + str(feature["type"]))


@@ -5,7 +5,6 @@
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import six
from PyQt5 import QtCore
from PyQt5.QtCore import Qt
@@ -208,7 +207,7 @@ class CapaExplorerSearchProxyModel(QtCore.QSortFilterProxyModel):
if not data:
continue
if not isinstance(data, six.string_types):
if not isinstance(data, str):
# sanity check: should already be a string, but double check
continue

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -1,266 +0,0 @@
# Copyright (C) 2020 FireEye, Inc. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at: [package root]/LICENSE.txt
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import json
import six
import capa.rules
import capa.engine
def convert_statement_to_result_document(statement):
"""
"statement": {
"type": "or"
},
"statement": {
"max": 9223372036854775808,
"min": 2,
"type": "range"
},
"""
statement_type = statement.name.lower()
result = {"type": statement_type}
if statement.description:
result["description"] = statement.description
if statement_type == "some" and statement.count == 0:
result["type"] = "optional"
elif statement_type == "some":
result["count"] = statement.count
elif statement_type == "range":
result["min"] = statement.min
result["max"] = statement.max
result["child"] = convert_feature_to_result_document(statement.child)
elif statement_type == "subscope":
result["subscope"] = statement.scope
return result
def convert_feature_to_result_document(feature):
"""
"feature": {
"number": 6,
"type": "number"
},
"feature": {
"api": "ws2_32.WSASocket",
"type": "api"
},
"feature": {
"match": "create TCP socket",
"type": "match"
},
"feature": {
"characteristic": [
"loop",
true
],
"type": "characteristic"
},
"""
result = {"type": feature.name, feature.name: feature.get_value_str()}
if feature.description:
result["description"] = feature.description
if feature.name == "regex":
result["match"] = feature.match
return result
def convert_node_to_result_document(node):
"""
"node": {
"type": "statement",
"statement": { ... }
},
"node": {
"type": "feature",
"feature": { ... }
},
"""
if isinstance(node, capa.engine.Statement):
return {
"type": "statement",
"statement": convert_statement_to_result_document(node),
}
elif isinstance(node, capa.features.Feature):
return {
"type": "feature",
"feature": convert_feature_to_result_document(node),
}
else:
raise RuntimeError("unexpected match node type")
def convert_match_to_result_document(rules, capabilities, result):
"""
convert the given Result instance into a common, Python-native data structure.
this will become part of the "result document" format that can be emitted to JSON.
"""
doc = {
"success": bool(result.success),
"node": convert_node_to_result_document(result.statement),
"children": [convert_match_to_result_document(rules, capabilities, child) for child in result.children],
}
# logic expressions, like `and`, don't have locations - their children do.
# so only add `locations` to feature nodes.
if isinstance(result.statement, capa.features.Feature):
if bool(result.success):
doc["locations"] = result.locations
elif isinstance(result.statement, capa.rules.Range):
if bool(result.success):
doc["locations"] = result.locations
# if we have a `match` statement, then we're referencing another rule.
# this could be an external rule (written by a human), or a
# rule generated to support a subscope (basic block, etc.)
# we still want to include the matching logic in this tree.
#
# so, we need to lookup the other rule results
# and then filter those down to the address used here.
# finally, splice that logic into this tree.
if (
doc["node"]["type"] == "feature"
and doc["node"]["feature"]["type"] == "match"
# only add subtree on success,
# because there won't be results for the other rule on failure.
and doc["success"]
):
rule_name = doc["node"]["feature"]["match"]
rule = rules[rule_name]
rule_matches = {address: result for (address, result) in capabilities[rule_name]}
if rule.meta.get("capa/subscope-rule"):
# for a subscope rule, fixup the node to be a scope node, rather than a match feature node.
#
# e.g. `contain loop/30c4c78e29bf4d54894fc74f664c62e8` -> `basic block`
scope = rule.meta["scope"]
doc["node"] = {
"type": "statement",
"statement": {
"type": "subscope",
"subscope": scope,
},
}
for location in doc["locations"]:
doc["children"].append(convert_match_to_result_document(rules, capabilities, rule_matches[location]))
return doc
def convert_capabilities_to_result_document(meta, rules, capabilities):
"""
convert the given rule set and capabilities result to a common, Python-native data structure.
this format can be directly emitted to JSON, or passed to the other `render_*` routines
to render as text.
see examples of substructures in above routines.
schema:
```json
{
"meta": {...},
"rules: {
$rule-name: {
"meta": {...copied from rule.meta...},
"matches: {
$address: {...match details...},
...
}
},
...
}
}
```
Args:
meta (Dict[str, Any]):
rules (RuleSet):
capabilities (Dict[str, List[Tuple[int, Result]]]):
"""
doc = {
"meta": meta,
"rules": {},
}
for rule_name, matches in capabilities.items():
rule = rules[rule_name]
if rule.meta.get("capa/subscope-rule"):
continue
doc["rules"][rule_name] = {
"meta": dict(rule.meta),
"source": rule.definition,
"matches": {
addr: convert_match_to_result_document(rules, capabilities, match) for (addr, match) in matches
},
}
return doc
def render_vverbose(meta, rules, capabilities):
# there's an import loop here
# if capa.render imports capa.render.vverbose
# and capa.render.vverbose imports capa.render (implicitly, as a submodule)
# so, defer the import until routine is called, breaking the import loop.
import capa.render.vverbose
doc = convert_capabilities_to_result_document(meta, rules, capabilities)
return capa.render.vverbose.render_vverbose(doc)
def render_verbose(meta, rules, capabilities):
# break import loop
import capa.render.verbose
doc = convert_capabilities_to_result_document(meta, rules, capabilities)
return capa.render.verbose.render_verbose(doc)
def render_default(meta, rules, capabilities):
# break import loop
import capa.render.default
import capa.render.verbose
doc = convert_capabilities_to_result_document(meta, rules, capabilities)
return capa.render.default.render_default(doc)
class CapaJsonObjectEncoder(json.JSONEncoder):
"""JSON encoder that emits Python sets as sorted lists"""
def default(self, obj):
if isinstance(obj, (list, dict, int, float, bool, type(None))) or isinstance(obj, six.string_types):
return json.JSONEncoder.default(self, obj)
elif isinstance(obj, set):
return list(sorted(obj))
else:
# probably will TypeError
return json.JSONEncoder.default(self, obj)
def render_json(meta, rules, capabilities):
return json.dumps(
convert_capabilities_to_result_document(meta, rules, capabilities),
cls=CapaJsonObjectEncoder,
sort_keys=True,
)


@@ -8,15 +8,18 @@
import collections
import six
import tabulate
import capa.render.utils as rutils
import capa.render.result_document
from capa.rules import RuleSet
from capa.engine import MatchResults
from capa.render.utils import StringIO
tabulate.PRESERVE_WHITESPACE = True
def width(s, character_count):
def width(s: str, character_count: int) -> str:
"""pad the given string to at least `character_count`"""
if len(s) < character_count:
return s + " " * (character_count - len(s))
@@ -24,11 +27,14 @@ def width(s, character_count):
return s
def render_meta(doc, ostream):
def render_meta(doc, ostream: StringIO):
rows = [
(width("md5", 22), width(doc["meta"]["sample"]["md5"], 82)),
("sha1", doc["meta"]["sample"]["sha1"]),
("sha256", doc["meta"]["sample"]["sha256"]),
("os", doc["meta"]["analysis"]["os"]),
("format", doc["meta"]["analysis"]["format"]),
("arch", doc["meta"]["analysis"]["arch"]),
("path", doc["meta"]["sample"]["path"]),
]
@@ -64,7 +70,7 @@ def find_subrule_matches(doc):
return matches
def render_capabilities(doc, ostream):
def render_capabilities(doc, ostream: StringIO):
"""
example::
@@ -102,7 +108,7 @@ def render_capabilities(doc, ostream):
ostream.writeln(rutils.bold("no capabilities found"))
def render_attack(doc, ostream):
def render_attack(doc, ostream: StringIO):
"""
example::
@@ -124,27 +130,16 @@ def render_attack(doc, ostream):
continue
for attack in rule["meta"]["att&ck"]:
tactic, _, rest = attack.partition("::")
if "::" in rest:
technique, _, rest = rest.partition("::")
subtechnique, _, id = rest.rpartition(" ")
tactics[tactic].add((technique, subtechnique, id))
else:
technique, _, id = rest.rpartition(" ")
tactics[tactic].add((technique, id))
tactics[attack["tactic"]].add((attack["technique"], attack.get("subtechnique"), attack["id"]))
rows = []
for tactic, techniques in sorted(tactics.items()):
inner_rows = []
for spec in sorted(techniques):
if len(spec) == 2:
technique, id = spec
for (technique, subtechnique, id) in sorted(techniques):
if subtechnique is None:
inner_rows.append("%s %s" % (rutils.bold(technique), id))
elif len(spec) == 3:
technique, subtechnique, id = spec
inner_rows.append("%s::%s %s" % (rutils.bold(technique), subtechnique, id))
else:
raise RuntimeError("unexpected ATT&CK spec format")
inner_rows.append("%s::%s %s" % (rutils.bold(technique), subtechnique, id))
rows.append(
(
rutils.bold(tactic.upper()),
@@ -161,7 +156,7 @@ def render_attack(doc, ostream):
ostream.write("\n")
def render_mbc(doc, ostream):
def render_mbc(doc, ostream: StringIO):
"""
example::
@@ -180,32 +175,17 @@ def render_mbc(doc, ostream):
if not rule["meta"].get("mbc"):
continue
mbcs = rule["meta"]["mbc"]
if not isinstance(mbcs, list):
raise ValueError("invalid rule: MBC mapping is not a list")
for mbc in mbcs:
objective, _, rest = mbc.partition("::")
if "::" in rest:
behavior, _, rest = rest.partition("::")
method, _, id = rest.rpartition(" ")
objectives[objective].add((behavior, method, id))
else:
behavior, _, id = rest.rpartition(" ")
objectives[objective].add((behavior, id))
for mbc in rule["meta"]["mbc"]:
objectives[mbc["objective"]].add((mbc["behavior"], mbc.get("method"), mbc["id"]))
rows = []
for objective, behaviors in sorted(objectives.items()):
inner_rows = []
for spec in sorted(behaviors):
if len(spec) == 2:
behavior, id = spec
inner_rows.append("%s %s" % (rutils.bold(behavior), id))
elif len(spec) == 3:
behavior, method, id = spec
inner_rows.append("%s::%s %s" % (rutils.bold(behavior), method, id))
for (behavior, method, id) in sorted(behaviors):
if method is None:
inner_rows.append("%s [%s]" % (rutils.bold(behavior), id))
else:
raise RuntimeError("unexpected MBC spec format")
inner_rows.append("%s::%s [%s]" % (rutils.bold(behavior), method, id))
rows.append(
(
rutils.bold(objective.upper()),
@@ -232,3 +212,8 @@ def render_default(doc):
render_capabilities(doc, ostream)
return ostream.getvalue()
def render(meta, rules: RuleSet, capabilities: MatchResults) -> str:
doc = capa.render.result_document.convert_capabilities_to_result_document(meta, rules, capabilities)
return render_default(doc)

capa/render/json.py (new file, 33 lines)

@@ -0,0 +1,33 @@
# Copyright (C) 2020 FireEye, Inc. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at: [package root]/LICENSE.txt
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import json
import capa.render.result_document
from capa.rules import RuleSet
from capa.engine import MatchResults
class CapaJsonObjectEncoder(json.JSONEncoder):
"""JSON encoder that emits Python sets as sorted lists"""
def default(self, obj):
if isinstance(obj, (list, dict, int, float, bool, type(None))) or isinstance(obj, str):
return json.JSONEncoder.default(self, obj)
elif isinstance(obj, set):
return list(sorted(obj))
else:
# probably will TypeError
return json.JSONEncoder.default(self, obj)
def render(meta, rules: RuleSet, capabilities: MatchResults) -> str:
return json.dumps(
capa.render.result_document.convert_capabilities_to_result_document(meta, rules, capabilities),
cls=CapaJsonObjectEncoder,
sort_keys=True,
)
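# a small usage sketch: the CapaJsonObjectEncoder above serializes Python sets as sorted lists,
# so repeated runs over the same data emit deterministic JSON (the sample data here is made up):
example = {"matched_apis": {"CreateFileA", "CloseHandle"}}
print(json.dumps(example, cls=CapaJsonObjectEncoder, sort_keys=True))
# {"matched_apis": ["CloseHandle", "CreateFileA"]}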


@@ -0,0 +1,329 @@
# Copyright (C) 2020 FireEye, Inc. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at: [package root]/LICENSE.txt
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import copy
import capa.rules
import capa.engine
import capa.render.utils
import capa.features.common
from capa.rules import RuleSet
from capa.engine import MatchResults
def convert_statement_to_result_document(statement):
"""
"statement": {
"type": "or"
},
"statement": {
"max": 9223372036854775808,
"min": 2,
"type": "range"
},
"""
statement_type = statement.name.lower()
result = {"type": statement_type}
if statement.description:
result["description"] = statement.description
if statement_type == "some" and statement.count == 0:
result["type"] = "optional"
elif statement_type == "some":
result["count"] = statement.count
elif statement_type == "range":
result["min"] = statement.min
result["max"] = statement.max
result["child"] = convert_feature_to_result_document(statement.child)
elif statement_type == "subscope":
result["subscope"] = statement.scope
return result
def convert_feature_to_result_document(feature):
"""
"feature": {
"number": 6,
"type": "number"
},
"feature": {
"api": "ws2_32.WSASocket",
"type": "api"
},
"feature": {
"match": "create TCP socket",
"type": "match"
},
"feature": {
"characteristic": [
"loop",
true
],
"type": "characteristic"
},
"""
result = {"type": feature.name, feature.name: feature.get_value_str()}
if feature.description:
result["description"] = feature.description
if feature.name in ("regex", "substring"):
result["matches"] = feature.matches
return result
def convert_node_to_result_document(node):
"""
"node": {
"type": "statement",
"statement": { ... }
},
"node": {
"type": "feature",
"feature": { ... }
},
"""
if isinstance(node, capa.engine.Statement):
return {
"type": "statement",
"statement": convert_statement_to_result_document(node),
}
elif isinstance(node, capa.features.common.Feature):
return {
"type": "feature",
"feature": convert_feature_to_result_document(node),
}
else:
raise RuntimeError("unexpected match node type")
def convert_match_to_result_document(rules, capabilities, result):
"""
convert the given Result instance into a common, Python-native data structure.
this will become part of the "result document" format that can be emitted to JSON.
"""
doc = {
"success": bool(result.success),
"node": convert_node_to_result_document(result.statement),
"children": [convert_match_to_result_document(rules, capabilities, child) for child in result.children],
}
# logic expressions, like `and`, don't have locations - their children do.
# so only add `locations` to feature nodes.
if isinstance(result.statement, capa.features.common.Feature):
if bool(result.success):
doc["locations"] = result.locations
elif isinstance(result.statement, capa.engine.Range):
if bool(result.success):
doc["locations"] = result.locations
# if we have a `match` statement, then we're referencing another rule or namespace.
# this could be an external rule (written by a human), or a
# rule generated to support a subscope (basic block, etc.)
# we still want to include the matching logic in this tree.
#
# so, we need to lookup the other rule results
# and then filter those down to the address used here.
# finally, splice that logic into this tree.
if (
doc["node"]["type"] == "feature"
and doc["node"]["feature"]["type"] == "match"
# only add subtree on success,
# because there won't be results for the other rule on failure.
and doc["success"]
):
name = doc["node"]["feature"]["match"]
if name in rules:
# this is a rule that we're matching
#
# pull matches from the referenced rule into our tree here.
rule_name = doc["node"]["feature"]["match"]
rule = rules[rule_name]
rule_matches = {address: result for (address, result) in capabilities[rule_name]}
if rule.meta.get("capa/subscope-rule"):
# for a subscope rule, fixup the node to be a scope node, rather than a match feature node.
#
# e.g. `contain loop/30c4c78e29bf4d54894fc74f664c62e8` -> `basic block`
scope = rule.meta["scope"]
doc["node"] = {
"type": "statement",
"statement": {
"type": "subscope",
"subscope": scope,
},
}
for location in doc["locations"]:
doc["children"].append(convert_match_to_result_document(rules, capabilities, rule_matches[location]))
else:
# this is a namespace that we're matching
#
# check for all rules in the namespace,
# seeing if they matched.
# if so, pull their matches into our match tree here.
ns_name = doc["node"]["feature"]["match"]
ns_rules = rules.rules_by_namespace[ns_name]
for rule in ns_rules:
if rule.name in capabilities:
# the rule matched, so splice results into our tree here.
#
# note, there's a shortcoming in our result document schema here:
# we lose the name of the rule that matched in a namespace.
# for example, if we have a statement: `match: runtime/dotnet`
# and we get matches, we can say the following:
#
# match: runtime/dotnet @ 0x0
# or:
# import: mscoree._CorExeMain @ 0x402000
#
# however, we lose the fact that it was rule
# "compiled to the .NET platform"
# that contained this logic and did the match.
#
# we could introduce an intermediate node here.
# this would be a breaking change and require updates to the renderers.
# in the meantime, the above might be sufficient.
rule_matches = {address: result for (address, result) in capabilities[rule.name]}
for location in doc["locations"]:
# doc[locations] contains all matches for the given namespace.
# for example, the feature might be `match: anti-analysis/packer`
# which matches against "generic unpacker" and "UPX".
# in this case, doc[locations] contains locations for *both* of these.
#
# rule_matches contains the matches for the specific rule.
# this is a subset of doc[locations].
#
# so, grab only the locations for current rule.
if location in rule_matches:
doc["children"].append(
convert_match_to_result_document(rules, capabilities, rule_matches[location])
)
return doc
def convert_meta_to_result_document(meta):
# make a copy so that we don't modify the given parameter
meta = copy.deepcopy(meta)
attacks = meta.get("att&ck", [])
meta["att&ck"] = [parse_canonical_attack(attack) for attack in attacks]
mbcs = meta.get("mbc", [])
meta["mbc"] = [parse_canonical_mbc(mbc) for mbc in mbcs]
return meta
def parse_canonical_attack(attack: str):
"""
parse capa's canonical ATT&CK representation: `Tactic::Technique::Subtechnique [Identifier]`
"""
tactic = ""
technique = ""
subtechnique = ""
parts, id = capa.render.utils.parse_parts_id(attack)
if len(parts) > 0:
tactic = parts[0]
if len(parts) > 1:
technique = parts[1]
if len(parts) > 2:
subtechnique = parts[2]
return {
"parts": parts,
"id": id,
"tactic": tactic,
"technique": technique,
"subtechnique": subtechnique,
}
def parse_canonical_mbc(mbc: str):
"""
parse capa's canonical MBC representation: `Objective::Behavior::Method [Identifier]`
"""
objective = ""
behavior = ""
method = ""
parts, id = capa.render.utils.parse_parts_id(mbc)
if len(parts) > 0:
objective = parts[0]
if len(parts) > 1:
behavior = parts[1]
if len(parts) > 2:
method = parts[2]
return {
"parts": parts,
"id": id,
"objective": objective,
"behavior": behavior,
"method": method,
}
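# illustrative sketch of the shape of the result; the MBC string below is chosen for illustration only:
doc = parse_canonical_mbc("Anti-Behavioral Analysis::Debugger Detection::Software Breakpoints [B0001.025]")
assert doc["objective"] == "Anti-Behavioral Analysis"
assert doc["behavior"] == "Debugger Detection"
assert doc["method"] == "Software Breakpoints"
assert doc["id"] == "B0001.025"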
def convert_capabilities_to_result_document(meta, rules: RuleSet, capabilities: MatchResults):
"""
convert the given rule set and capabilities result to a common, Python-native data structure.
this format can be directly emitted to JSON, or passed to the other `capa.render.*.render()` routines
to render as text.
see examples of substructures in above routines.
schema:
```json
{
"meta": {...},
"rules: {
$rule-name: {
"meta": {...copied from rule.meta...},
"matches: {
$address: {...match details...},
...
}
},
...
}
}
```
Args:
meta (Dict[str, Any]):
rules (RuleSet):
capabilities (Dict[str, List[Tuple[int, Result]]]):
"""
doc = {
"meta": meta,
"rules": {},
}
for rule_name, matches in capabilities.items():
rule = rules[rule_name]
if rule.meta.get("capa/subscope-rule"):
continue
rule_meta = convert_meta_to_result_document(rule.meta)
doc["rules"][rule_name] = {
"meta": rule_meta,
"source": rule.definition,
"matches": {
addr: convert_match_to_result_document(rules, capabilities, match) for (addr, match) in matches
},
}
return doc


@@ -6,21 +6,22 @@
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import six
import io
import termcolor
def bold(s):
def bold(s: str) -> str:
"""draw attention to the given string"""
return termcolor.colored(s, "blue")
def bold2(s):
def bold2(s: str) -> str:
"""draw attention to the given string, within a `bold` section"""
return termcolor.colored(s, "green")
def hex(n):
def hex(n: int) -> str:
"""render the given number using upper case hex, like: 0x123ABC"""
if n < 0:
return "-0x%X" % (-n)
@@ -28,6 +29,24 @@ def hex(n):
return "0x%X" % n
def parse_parts_id(s: str):
id = ""
parts = s.split("::")
if len(parts) > 0:
last = parts.pop()
last, _, id = last.rpartition(" ")
id = id.lstrip("[").rstrip("]")
parts.append(last)
return parts, id
def format_parts_id(data):
"""
format canonical representation of ATT&CK/MBC parts and ID
"""
return "%s [%s]" % ("::".join(data["parts"]), data["id"])
def capability_rules(doc):
"""enumerate the rules in (namespace, name) order that are 'capability' rules (not lib/subscope/disposition/etc)."""
for (_, _, rule) in sorted(
@@ -49,7 +68,7 @@ def capability_rules(doc):
yield rule
class StringIO(six.StringIO):
class StringIO(io.StringIO):
def writeln(self, s):
self.write(s)
self.write("\n")


@@ -26,6 +26,9 @@ import tabulate
import capa.rules
import capa.render.utils as rutils
import capa.render.result_document
from capa.rules import RuleSet
from capa.engine import MatchResults
def render_meta(ostream, doc):
@@ -38,7 +41,9 @@ def render_meta(ostream, doc):
path /tmp/suspicious.dll_
timestamp 2020-07-03T10:17:05.796933
capa version 0.0.0
format auto
os windows
format pe
arch amd64
extractor VivisectFeatureExtractor
base address 0x10000000
rules (embedded rules)
@@ -52,11 +57,14 @@ def render_meta(ostream, doc):
("path", doc["meta"]["sample"]["path"]),
("timestamp", doc["meta"]["timestamp"]),
("capa version", doc["meta"]["version"]),
("os", doc["meta"]["analysis"]["os"]),
("format", doc["meta"]["analysis"]["format"]),
("arch", doc["meta"]["analysis"]["arch"]),
("extractor", doc["meta"]["analysis"]["extractor"]),
("base address", hex(doc["meta"]["analysis"]["base_address"])),
("rules", doc["meta"]["analysis"]["rules"]),
("function count", len(doc["meta"]["analysis"]["feature_counts"]["functions"])),
("library function count", len(doc["meta"]["analysis"]["library_functions"])),
(
"total feature count",
doc["meta"]["analysis"]["feature_counts"]["file"]
@@ -119,3 +127,8 @@ def render_verbose(doc):
ostream.write("\n")
return ostream.getvalue()
def render(meta, rules: RuleSet, capabilities: MatchResults) -> str:
doc = capa.render.result_document.convert_capabilities_to_result_document(meta, rules, capabilities)
return render_verbose(doc)


@@ -6,13 +6,15 @@
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import collections
import tabulate
import capa.rules
import capa.render.utils as rutils
import capa.render.verbose
import capa.features.common
import capa.render.result_document
from capa.rules import RuleSet
from capa.engine import MatchResults
def render_locations(ostream, match):
@@ -56,7 +58,11 @@ def render_statement(ostream, match, statement, indent=0):
child = statement["child"]
if child[child["type"]]:
value = rutils.bold2(child[child["type"]])
if child["type"] == "string":
value = '"%s"' % capa.features.common.escape_string(child[child["type"]])
else:
value = child[child["type"]]
value = rutils.bold2(value)
if child.get("description"):
ostream.write("count(%s(%s = %s)): " % (child["type"], value, child["description"]))
else:
@@ -81,27 +87,51 @@ def render_statement(ostream, match, statement, indent=0):
raise RuntimeError("unexpected match statement type: " + str(statement))
def render_string_value(s):
return '"%s"' % capa.features.common.escape_string(s)
def render_feature(ostream, match, feature, indent=0):
ostream.write(" " * indent)
key = feature["type"]
value = feature[feature["type"]]
if key == "regex":
key = "string" # render string for regex to mirror the rule source
value = feature["match"] # the match provides more information than the value for regex
ostream.write(key)
ostream.write(": ")
if key not in ("regex", "substring"):
# like:
# number: 10 = SOME_CONSTANT @ 0x401000
if key == "string":
value = render_string_value(value)
if value:
ostream.write(rutils.bold2(value))
ostream.write(key)
ostream.write(": ")
if "description" in feature:
ostream.write(capa.rules.DESCRIPTION_SEPARATOR)
ostream.write(feature["description"])
if value:
ostream.write(rutils.bold2(value))
render_locations(ostream, match)
ostream.write("\n")
if "description" in feature:
ostream.write(capa.rules.DESCRIPTION_SEPARATOR)
ostream.write(feature["description"])
if key not in ("os", "arch"):
render_locations(ostream, match)
ostream.write("\n")
else:
# like:
# regex: /blah/ = SOME_CONSTANT
# - "foo blah baz" @ 0x401000
# - "aaa blah bbb" @ 0x402000, 0x403400
ostream.write(key)
ostream.write(": ")
ostream.write(value)
ostream.write("\n")
for match, locations in sorted(feature["matches"].items(), key=lambda p: p[0]):
ostream.write(" " * (indent + 1))
ostream.write("- ")
ostream.write(rutils.bold2(render_string_value(match)))
render_locations(ostream, {"locations": locations})
ostream.write("\n")
def render_node(ostream, match, node, indent=0):
@@ -190,6 +220,12 @@ def render_rules(ostream, doc):
continue
v = rule["meta"][key]
if not v:
continue
if key in ("att&ck", "mbc"):
v = [rutils.format_parts_id(vv) for vv in v]
if isinstance(v, list) and len(v) == 1:
v = v[0]
elif isinstance(v, list) and len(v) > 1:
@@ -229,3 +265,8 @@ def render_vverbose(doc):
ostream.write("\n")
return ostream.getvalue()
def render(meta, rules: RuleSet, capabilities: MatchResults) -> str:
doc = capa.render.result_document.convert_capabilities_to_result_document(meta, rules, capabilities)
return render_vverbose(doc)


@@ -6,29 +6,35 @@
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import io
import re
import uuid
import codecs
import logging
import binascii
import functools
import collections
try:
from functools import lru_cache
except ImportError:
from backports.functools_lru_cache import lru_cache
# need to type ignore this due to mypy bug here (duplicate name):
# https://github.com/python/mypy/issues/1153
from backports.functools_lru_cache import lru_cache # type: ignore
from typing import Any, Dict, List, Union, Iterator
import six
import yaml
import ruamel.yaml
import capa.engine
import capa.engine as ceng
import capa.features
import capa.features.file
import capa.features.insn
import capa.features.common
import capa.features.basicblock
from capa.engine import *
from capa.features import MAX_BYTES_FEATURE_SIZE
from capa.engine import Statement, FeatureSet
from capa.features.common import MAX_BYTES_FEATURE_SIZE, Feature
logger = logging.getLogger(__name__)
@@ -65,37 +71,45 @@ BASIC_BLOCK_SCOPE = "basic block"
SUPPORTED_FEATURES = {
FILE_SCOPE: {
capa.features.MatchedRule,
capa.features.common.MatchedRule,
capa.features.file.Export,
capa.features.file.Import,
capa.features.file.Section,
capa.features.Characteristic("embedded pe"),
capa.features.String,
capa.features.file.FunctionName,
capa.features.common.Characteristic("embedded pe"),
capa.features.common.String,
capa.features.common.Format,
capa.features.common.OS,
capa.features.common.Arch,
},
FUNCTION_SCOPE: {
# plus basic block scope features, see below
capa.features.basicblock.BasicBlock,
capa.features.Characteristic("calls from"),
capa.features.Characteristic("calls to"),
capa.features.Characteristic("loop"),
capa.features.Characteristic("recursive call"),
capa.features.common.Characteristic("calls from"),
capa.features.common.Characteristic("calls to"),
capa.features.common.Characteristic("loop"),
capa.features.common.Characteristic("recursive call"),
capa.features.common.OS,
capa.features.common.Arch,
},
BASIC_BLOCK_SCOPE: {
capa.features.MatchedRule,
capa.features.common.MatchedRule,
capa.features.insn.API,
capa.features.insn.Number,
capa.features.String,
capa.features.Bytes,
capa.features.common.String,
capa.features.common.Bytes,
capa.features.insn.Offset,
capa.features.insn.Mnemonic,
capa.features.Characteristic("nzxor"),
capa.features.Characteristic("peb access"),
capa.features.Characteristic("fs access"),
capa.features.Characteristic("gs access"),
capa.features.Characteristic("cross section flow"),
capa.features.Characteristic("tight loop"),
capa.features.Characteristic("stack string"),
capa.features.Characteristic("indirect call"),
capa.features.common.Characteristic("nzxor"),
capa.features.common.Characteristic("peb access"),
capa.features.common.Characteristic("fs access"),
capa.features.common.Characteristic("gs access"),
capa.features.common.Characteristic("cross section flow"),
capa.features.common.Characteristic("tight loop"),
capa.features.common.Characteristic("stack string"),
capa.features.common.Characteristic("indirect call"),
capa.features.common.OS,
capa.features.common.Arch,
},
}
@@ -138,22 +152,32 @@ class InvalidRuleSet(ValueError):
return str(self)
def ensure_feature_valid_for_scope(scope, feature):
if isinstance(feature, capa.features.Characteristic):
if capa.features.Characteristic(feature.value) not in SUPPORTED_FEATURES[scope]:
raise InvalidRule("feature %s not support for scope %s" % (feature, scope))
elif not isinstance(feature, tuple(filter(lambda t: isinstance(t, type), SUPPORTED_FEATURES[scope]))):
raise InvalidRule("feature %s not support for scope %s" % (feature, scope))
def ensure_feature_valid_for_scope(scope: str, feature: Union[Feature, Statement]):
# if the given feature is a characteristic,
# check that is a valid characteristic for the given scope.
if (
isinstance(feature, capa.features.common.Characteristic)
and isinstance(feature.value, str)
and capa.features.common.Characteristic(feature.value) not in SUPPORTED_FEATURES[scope]
):
raise InvalidRule("feature %s not supported for scope %s" % (feature, scope))
if not isinstance(feature, capa.features.common.Characteristic):
# features of this scope that are not Characteristics will be Type instances.
# check that the given feature is one of these types.
types_for_scope = filter(lambda t: isinstance(t, type), SUPPORTED_FEATURES[scope])
if not isinstance(feature, tuple(types_for_scope)): # type: ignore
raise InvalidRule("feature %s not supported for scope %s" % (feature, scope))
def parse_int(s):
def parse_int(s: str) -> int:
if s.startswith("0x"):
return int(s, 0x10)
else:
return int(s, 10)
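# sketch of the two accepted forms:
assert parse_int("0x10") == 16
assert parse_int("16") == 16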
def parse_range(s):
def parse_range(s: str):
"""
parse a string "(0, 1)" into a range (min, max).
min and/or max may be None to indicate an unbound range.
@@ -166,23 +190,21 @@ def parse_range(s):
raise InvalidRule("invalid range: %s" % (s))
s = s[len("(") : -len(")")]
min, _, max = s.partition(",")
min = min.strip()
max = max.strip()
min_spec, _, max_spec = s.partition(",")
min_spec = min_spec.strip()
max_spec = max_spec.strip()
if min:
min = parse_int(min.strip())
min = None
if min_spec:
min = parse_int(min_spec)
if min < 0:
raise InvalidRule("range min less than zero")
else:
min = None
if max:
max = parse_int(max.strip())
max = None
if max_spec:
max = parse_int(max_spec)
if max < 0:
raise InvalidRule("range max less than zero")
else:
max = None
if min is not None and max is not None:
if max < min:
@@ -191,36 +213,38 @@ def parse_range(s):
return min, max
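# sketch of expected behavior, following the docstring and bounds checks above:
assert parse_range("(2, 10)") == (2, 10)
assert parse_range("(, 10)") == (None, 10)    # unbounded minimum
assert parse_range("(0x2, )") == (2, None)    # hex accepted via parse_int; unbounded maximum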
def parse_feature(key):
def parse_feature(key: str):
# keep this in sync with supported features
if key == "api":
return capa.features.insn.API
elif key == "string":
return capa.features.StringFactory
return capa.features.common.StringFactory
elif key == "substring":
return capa.features.common.Substring
elif key == "bytes":
return capa.features.Bytes
return capa.features.common.Bytes
elif key == "number":
return capa.features.insn.Number
elif key.startswith("number/"):
arch = key.partition("/")[2]
bitness = key.partition("/")[2]
# the other handlers here return constructors for features,
# and we want to as well,
# however, we need to preconfigure one of the arguments (`arch`).
# however, we need to preconfigure one of the arguments (`bitness`).
# so, instead we return a partially-applied function that
# provides `arch` to the feature constructor.
# provides `bitness` to the feature constructor.
# it forwards any other arguments provided to the closure along to the constructor.
return functools.partial(capa.features.insn.Number, arch=arch)
return functools.partial(capa.features.insn.Number, bitness=bitness)
elif key == "offset":
return capa.features.insn.Offset
elif key.startswith("offset/"):
arch = key.partition("/")[2]
return functools.partial(capa.features.insn.Offset, arch=arch)
bitness = key.partition("/")[2]
return functools.partial(capa.features.insn.Offset, bitness=bitness)
elif key == "mnemonic":
return capa.features.insn.Mnemonic
elif key == "basic blocks":
return capa.features.basicblock.BasicBlock
elif key == "characteristic":
return capa.features.Characteristic
return capa.features.common.Characteristic
elif key == "export":
return capa.features.file.Export
elif key == "import":
@@ -228,7 +252,16 @@ def parse_feature(key):
elif key == "section":
return capa.features.file.Section
elif key == "match":
return capa.features.MatchedRule
return capa.features.common.MatchedRule
elif key == "function-name":
return capa.features.file.FunctionName
elif key == "os":
return capa.features.common.OS
elif key == "format":
return capa.features.common.Format
elif key == "arch":
return capa.features.common.Arch
else:
raise InvalidRule("unexpected statement: %s" % key)
@@ -240,39 +273,70 @@ def parse_feature(key):
DESCRIPTION_SEPARATOR = " = "
def parse_description(s, value_type, description=None):
"""
s can be an int or a string
"""
if value_type != "string" and isinstance(s, six.string_types) and DESCRIPTION_SEPARATOR in s:
if description:
raise InvalidRule(
'unexpected value: "%s", only one description allowed (inline description with `%s`)'
% (s, DESCRIPTION_SEPARATOR)
)
value, _, description = s.partition(DESCRIPTION_SEPARATOR)
if description == "":
raise InvalidRule('unexpected value: "%s", description cannot be empty' % s)
else:
def parse_bytes(s: str) -> bytes:
try:
b = codecs.decode(s.replace(" ", "").encode("ascii"), "hex")
except binascii.Error:
raise InvalidRule('unexpected bytes value: must be a valid hex sequence: "%s"' % s)
if len(b) > MAX_BYTES_FEATURE_SIZE:
raise InvalidRule(
"unexpected bytes value: byte sequences must be no larger than %s bytes" % MAX_BYTES_FEATURE_SIZE
)
return b
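A quick illustration of the hex decoding that `parse_bytes` performs above (spaces stripped, then decoded with the `hex` codec):

```
# illustration of the decoding used by parse_bytes() above
import codecs

raw = "8D 4E F8".replace(" ", "").encode("ascii")
assert codecs.decode(raw, "hex") == b"\x8d\x4e\xf8"
```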
def parse_description(s: Union[str, int, bytes], value_type: str, description=None):
if value_type == "string":
# string features cannot have inline descriptions,
# so we assume the entire value is the string,
# like: `string: foo = bar` -> "foo = bar"
value = s
else:
# other features can have inline descriptions, like `number: 10 = CONST_FOO`.
# in this case, the RHS will be like `10 = CONST_FOO` or some other string
if isinstance(s, str):
if DESCRIPTION_SEPARATOR in s:
if description:
# there is already a description passed in as a sub node, like:
#
# - number: 10 = CONST_FOO
# description: CONST_FOO
raise InvalidRule(
'unexpected value: "%s", only one description allowed (inline description with `%s`)'
% (s, DESCRIPTION_SEPARATOR)
)
if isinstance(value, six.string_types):
if value_type == "bytes":
try:
value = codecs.decode(value.replace(" ", ""), "hex")
# TODO: Remove TypeError when Python2 is not used anymore
except (TypeError, binascii.Error):
raise InvalidRule('unexpected bytes value: "%s", must be a valid hex sequence' % value)
value, _, description = s.partition(DESCRIPTION_SEPARATOR)
if description == "":
# sanity check:
# there is an empty description, like `number: 10 =`
raise InvalidRule('unexpected value: "%s", description cannot be empty' % s)
else:
# this is a string, but there is no description,
# like: `api: CreateFileA`
value = s
if len(value) > MAX_BYTES_FEATURE_SIZE:
raise InvalidRule(
"unexpected bytes value: byte sequences must be no larger than %s bytes" % MAX_BYTES_FEATURE_SIZE
)
elif value_type in ("number", "offset") or value_type.startswith(("number/", "offset/")):
try:
value = parse_int(value)
except ValueError:
raise InvalidRule('unexpected value: "%s", must begin with numerical value' % value)
# cast from the received string value to the appropriate type.
#
# without a description, this type would already be correct,
# but since we parsed the description from a string,
# we need to convert the value to the expected type.
#
# for example, from `number: 10 = CONST_FOO` we have
# the string "10" that needs to become the number 10.
if value_type == "bytes":
value = parse_bytes(value)
elif value_type in ("number", "offset") or value_type.startswith(("number/", "offset/")):
try:
value = parse_int(value)
except ValueError:
raise InvalidRule('unexpected value: "%s", must begin with numerical value' % value)
else:
# the value might be a number, like: `number: 10`
value = s
return value, description
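Concretely, the inline-description handling above splits a right-hand side such as `10 = CONST_FOO` on the ` = ` separator and then casts the left part back to its declared type. A minimal sketch of that split and cast:

```
# minimal sketch of the inline-description split described above
DESCRIPTION_SEPARATOR = " = "

s = "10 = CONST_FOO"                     # e.g. from `number: 10 = CONST_FOO`
value, _, description = s.partition(DESCRIPTION_SEPARATOR)
assert (value, description) == ("10", "CONST_FOO")
assert int(value, 10) == 10              # cast the string "10" back to the number 10
```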
@@ -312,28 +376,28 @@ def pop_statement_description_entry(d):
return description["description"]
def build_statements(d, scope):
def build_statements(d, scope: str):
if len(d.keys()) > 2:
raise InvalidRule("too many statements")
key = list(d.keys())[0]
description = pop_statement_description_entry(d[key])
if key == "and":
return And([build_statements(dd, scope) for dd in d[key]], description=description)
return ceng.And([build_statements(dd, scope) for dd in d[key]], description=description)
elif key == "or":
return Or([build_statements(dd, scope) for dd in d[key]], description=description)
return ceng.Or([build_statements(dd, scope) for dd in d[key]], description=description)
elif key == "not":
if len(d[key]) != 1:
raise InvalidRule("not statement must have exactly one child statement")
return Not(build_statements(d[key][0], scope), description=description)
return ceng.Not(build_statements(d[key][0], scope), description=description)
elif key.endswith(" or more"):
count = int(key[: -len("or more")])
return Some(count, [build_statements(dd, scope) for dd in d[key]], description=description)
return ceng.Some(count, [build_statements(dd, scope) for dd in d[key]], description=description)
elif key == "optional":
# `optional` is an alias for `0 or more`
# which is useful for documenting behaviors,
# like with `write file`, we might say that `WriteFile` is optionally found alongside `CreateFileA`.
return Some(0, [build_statements(dd, scope) for dd in d[key]], description=description)
return ceng.Some(0, [build_statements(dd, scope) for dd in d[key]], description=description)
elif key == "function":
if scope != FILE_SCOPE:
@@ -342,7 +406,7 @@ def build_statements(d, scope):
if len(d[key]) != 1:
raise InvalidRule("subscope must have exactly one child statement")
return Subscope(FUNCTION_SCOPE, build_statements(d[key][0], FUNCTION_SCOPE))
return ceng.Subscope(FUNCTION_SCOPE, build_statements(d[key][0], FUNCTION_SCOPE))
elif key == "basic block":
if scope != FUNCTION_SCOPE:
@@ -351,7 +415,7 @@ def build_statements(d, scope):
if len(d[key]) != 1:
raise InvalidRule("subscope must have exactly one child statement")
return Subscope(BASIC_BLOCK_SCOPE, build_statements(d[key][0], BASIC_BLOCK_SCOPE))
return ceng.Subscope(BASIC_BLOCK_SCOPE, build_statements(d[key][0], BASIC_BLOCK_SCOPE))
elif key.startswith("count(") and key.endswith(")"):
# e.g.:
@@ -392,22 +456,28 @@ def build_statements(d, scope):
count = d[key]
if isinstance(count, int):
return Range(feature, min=count, max=count, description=description)
return ceng.Range(feature, min=count, max=count, description=description)
elif count.endswith(" or more"):
min = parse_int(count[: -len(" or more")])
max = None
return Range(feature, min=min, max=max, description=description)
return ceng.Range(feature, min=min, max=max, description=description)
elif count.endswith(" or fewer"):
min = None
max = parse_int(count[: -len(" or fewer")])
return Range(feature, min=min, max=max, description=description)
return ceng.Range(feature, min=min, max=max, description=description)
elif count.startswith("("):
min, max = parse_range(count)
return Range(feature, min=min, max=max, description=description)
return ceng.Range(feature, min=min, max=max, description=description)
else:
raise InvalidRule("unexpected range: %s" % (count))
elif key == "string" and not isinstance(d[key], six.string_types):
elif key == "string" and not isinstance(d[key], str):
raise InvalidRule("ambiguous string value %s, must be defined as explicit string" % d[key])
elif (
(key == "os" and d[key] not in capa.features.common.VALID_OS)
or (key == "format" and d[key] not in capa.features.common.VALID_FORMAT)
or (key == "arch" and d[key] not in capa.features.common.VALID_ARCH)
):
raise InvalidRule("unexpected %s value %s" % (key, d[key]))
else:
Feature = parse_feature(key)
value, description = parse_description(d[key], key, d.get("description"))
@@ -419,16 +489,16 @@ def build_statements(d, scope):
return feature
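As an aside, the `count(...)` spellings handled above (an exact count, `N or more`, `N or fewer`, or an explicit `(min, max)` range) all reduce to a pair of bounds. A sketch of that mapping only, independent of capa's `Range` class:

```
# sketch of how the count() spellings above map to (min, max) bounds
def count_bounds(count):
    if isinstance(count, int):
        return count, count                           # count(x): 2
    if count.endswith(" or more"):
        return int(count[: -len(" or more")]), None   # count(x): 3 or more
    if count.endswith(" or fewer"):
        return None, int(count[: -len(" or fewer")])  # count(x): 5 or fewer
    raise ValueError("explicit ranges like (2, 10) go through parse_range()")

assert count_bounds(2) == (2, 2)
assert count_bounds("3 or more") == (3, None)
assert count_bounds("5 or fewer") == (None, 5)
```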
def first(s):
def first(s: List[Any]) -> Any:
return s[0]
def second(s):
def second(s: List[Any]) -> Any:
return s[1]
class Rule(object):
def __init__(self, name, scope, statement, meta, definition=""):
class Rule:
def __init__(self, name: str, scope: str, statement: Statement, meta, definition=""):
super(Rule, self).__init__()
self.name = name
self.scope = scope
@@ -458,7 +528,7 @@ class Rule(object):
deps = set([])
def rec(statement):
if isinstance(statement, capa.features.MatchedRule):
if isinstance(statement, capa.features.common.MatchedRule):
# we're not sure at this point if the `statement.value` is
# really a rule name or a namespace name (we use `MatchedRule` for both cases).
# we'll give precedence to namespaces, and then assume, if that does not match,
@@ -474,7 +544,7 @@ class Rule(object):
# not a namespace, assume it's a rule name.
deps.add(statement.value)
elif isinstance(statement, Statement):
elif isinstance(statement, ceng.Statement):
for child in statement.get_children():
rec(child)
@@ -485,11 +555,9 @@ class Rule(object):
return deps
def _extract_subscope_rules_rec(self, statement):
if isinstance(statement, Statement):
if isinstance(statement, ceng.Statement):
# for each child that is a subscope,
for subscope in filter(
lambda statement: isinstance(statement, capa.engine.Subscope), statement.get_children()
):
for subscope in filter(lambda statement: isinstance(statement, ceng.Subscope), statement.get_children()):
# create a new rule from it.
# the name is a randomly generated, hopefully unique value.
@@ -514,7 +582,7 @@ class Rule(object):
)
# update the existing statement to `match` the new rule
new_node = capa.features.MatchedRule(name)
new_node = capa.features.common.MatchedRule(name)
statement.replace_child(subscope, new_node)
# and yield the new rule to our caller
@@ -551,15 +619,16 @@ class Rule(object):
for new_rule in self._extract_subscope_rules_rec(self.statement):
yield new_rule
def evaluate(self, features):
def evaluate(self, features: FeatureSet):
return self.statement.evaluate(features)
@classmethod
def from_dict(cls, d, definition):
name = d["rule"]["meta"]["name"]
meta = d["rule"]["meta"]
name = meta["name"]
# if scope is not specified, default to function scope.
# this is probably the mode that rule authors will start with.
scope = d["rule"]["meta"].get("scope", FUNCTION_SCOPE)
scope = meta.get("scope", FUNCTION_SCOPE)
statements = d["rule"]["features"]
# the rule must start with a single logic node.
@@ -567,13 +636,19 @@ class Rule(object):
if len(statements) != 1:
raise InvalidRule("rule must begin with a single top level statement")
if isinstance(statements[0], capa.engine.Subscope):
if isinstance(statements[0], ceng.Subscope):
raise InvalidRule("top level statement may not be a subscope")
if scope not in SUPPORTED_FEATURES.keys():
raise InvalidRule("{:s} is not a supported scope".format(scope))
return cls(name, scope, build_statements(statements[0], scope), d["rule"]["meta"], definition)
meta = d["rule"]["meta"]
if not isinstance(meta.get("att&ck", []), list):
raise InvalidRule("ATT&CK mapping must be a list")
if not isinstance(meta.get("mbc", []), list):
raise InvalidRule("MBC mapping must be a list")
return cls(name, scope, build_statements(statements[0], scope), meta, definition)
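For orientation, a minimal sketch of the dictionary shape that `from_dict` consumes, based on the fields referenced above (the feature content here is purely illustrative):

```
# illustrative input for Rule.from_dict(); att&ck and mbc must be lists,
# and scope falls back to the function scope when omitted.
d = {
    "rule": {
        "meta": {
            "name": "example rule",
            "scope": "function",
            "att&ck": [],   # real rules list technique strings here
            "mbc": [],
        },
        "features": [
            {"or": [{"api": "CreateFileA"}, {"api": "WriteFile"}]},
        ],
    }
}
assert isinstance(d["rule"]["meta"].get("att&ck", []), list)
```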
@staticmethod
@lru_cache()
@@ -699,7 +774,7 @@ class Rule(object):
for key in hidden_meta.keys():
del meta[key]
ostream = six.BytesIO()
ostream = io.BytesIO()
self._get_ruamel_yaml_parser().dump(definition, ostream)
for key, value in hidden_meta.items():
@@ -736,53 +811,42 @@ class Rule(object):
# the below regex makes these adjustments; while ugly, it means we don't have to explore the ruamel.yaml internals
doc = re.sub(r"!!int '0x-([0-9a-fA-F]+)'", r"-0x\1", doc)
# normalize CRLF to LF
doc = doc.replace("\r\n", "\n")
return doc
def get_rules_with_scope(rules, scope):
def get_rules_with_scope(rules, scope) -> List[Rule]:
"""
from the given collection of rules, select those with the given scope.
args:
rules (List[capa.rules.Rule]):
scope (str): one of the capa.rules.*_SCOPE constants.
returns:
List[capa.rules.Rule]:
`scope` is one of the capa.rules.*_SCOPE constants.
"""
return list(rule for rule in rules if rule.scope == scope)
def get_rules_and_dependencies(rules, rule_name):
def get_rules_and_dependencies(rules: List[Rule], rule_name: str) -> Iterator[Rule]:
"""
from the given collection of rules, select a rule and its dependencies (transitively).
args:
rules (List[Rule]):
rule_name (str):
yields:
Rule:
"""
# we evaluate `rules` multiple times, so if it's a generator, realize it into a list.
rules = list(rules)
namespaces = index_rules_by_namespace(rules)
rules = {rule.name: rule for rule in rules}
rules_by_name = {rule.name: rule for rule in rules}
wanted = set([rule_name])
def rec(rule):
wanted.add(rule.name)
for dep in rule.get_dependencies(namespaces):
rec(rules[dep])
rec(rules_by_name[dep])
rec(rules[rule_name])
rec(rules_by_name[rule_name])
for rule in rules.values():
for rule in rules_by_name.values():
if rule.name in wanted:
yield rule
def ensure_rules_are_unique(rules):
def ensure_rules_are_unique(rules: List[Rule]) -> None:
seen = set([])
for rule in rules:
if rule.name in seen:
@@ -790,7 +854,7 @@ def ensure_rules_are_unique(rules):
seen.add(rule.name)
def ensure_rule_dependencies_are_met(rules):
def ensure_rule_dependencies_are_met(rules: List[Rule]) -> None:
"""
raise an exception if a rule dependency does not exist.
@@ -800,14 +864,14 @@ def ensure_rule_dependencies_are_met(rules):
# we evaluate `rules` multiple times, so if it's a generator, realize it into a list.
rules = list(rules)
namespaces = index_rules_by_namespace(rules)
rules = {rule.name: rule for rule in rules}
for rule in rules.values():
rules_by_name = {rule.name: rule for rule in rules}
for rule in rules_by_name.values():
for dep in rule.get_dependencies(namespaces):
if dep not in rules:
if dep not in rules_by_name:
raise InvalidRule('rule "%s" depends on missing rule "%s"' % (rule.name, dep))
def index_rules_by_namespace(rules):
def index_rules_by_namespace(rules: List[Rule]) -> Dict[str, List[Rule]]:
"""
compute the rules that fit into each namespace found within the given rules.
@@ -821,11 +885,6 @@ def index_rules_by_namespace(rules):
c2/shell: [create reverse shell]
c2/file-transfer: [download and write a file]
c2: [create reverse shell, download and write a file]
Args:
rules (List[Rule]):
Returns: Dict[str, List[Rule]]
"""
namespaces = collections.defaultdict(list)
@@ -841,7 +900,38 @@ def index_rules_by_namespace(rules):
return dict(namespaces)
class RuleSet(object):
def topologically_order_rules(rules: List[Rule]) -> List[Rule]:
"""
order the given rules such that dependencies show up before dependents.
this means that as we match rules, we can add features for the matches, and these
will be matched by subsequent rules if they follow this order.
assumes that the rule dependency graph is a DAG.
"""
# we evaluate `rules` multiple times, so if it's a generator, realize it into a list.
rules = list(rules)
namespaces = index_rules_by_namespace(rules)
rules_by_name = {rule.name: rule for rule in rules}
seen = set([])
ret = []
def rec(rule):
if rule.name in seen:
return
for dep in rule.get_dependencies(namespaces):
rec(rules_by_name[dep])
ret.append(rule)
seen.add(rule.name)
for rule in rules_by_name.values():
rec(rule)
return ret
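To make the ordering guarantee concrete: if one rule `match`es another, the dependency must appear first so its match can be recorded as a feature before the dependent rule is evaluated. A toy version of the same depth-first walk over a plain name-to-dependencies mapping (not capa's `Rule` objects):

```
# toy illustration of the depth-first ordering used above;
# deps maps a rule name to the names of the rules it depends on.
def topo_order(deps):
    seen, order = set(), []

    def rec(name):
        if name in seen:
            return
        for dep in deps[name]:
            rec(dep)
        order.append(name)
        seen.add(name)

    for name in deps:
        rec(name)
    return order

deps = {
    "download and write a file": ["write file", "download data"],
    "write file": [],
    "download data": [],
}
order = topo_order(deps)
assert order.index("write file") < order.index("download and write a file")
```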
class RuleSet:
"""
a ruleset is initialized with a collection of rules, which it verifies and sorts into scopes.
each set of scoped rules is sorted topologically, which enables rules to match on past rule matches.
@@ -856,7 +946,7 @@ class RuleSet(object):
capa.engine.match(ruleset.file_rules, ...)
"""
def __init__(self, rules):
def __init__(self, rules: List[Rule]):
super(RuleSet, self).__init__()
ensure_rules_are_unique(rules)
@@ -872,6 +962,7 @@ class RuleSet(object):
self.function_rules = self._get_rules_for_scope(rules, FUNCTION_SCOPE)
self.basic_block_rules = self._get_rules_for_scope(rules, BASIC_BLOCK_SCOPE)
self.rules = {rule.name: rule for rule in rules}
self.rules_by_namespace = index_rules_by_namespace(rules)
def __len__(self):
return len(self.rules)
@@ -879,6 +970,9 @@ class RuleSet(object):
def __getitem__(self, rulename):
return self.rules[rulename]
def __contains__(self, rulename):
return rulename in self.rules
@staticmethod
def _get_rules_for_scope(rules, scope):
"""
@@ -899,7 +993,7 @@ class RuleSet(object):
continue
scope_rules.update(get_rules_and_dependencies(rules, rule.name))
return get_rules_with_scope(capa.engine.topologically_order_rules(scope_rules), scope)
return get_rules_with_scope(topologically_order_rules(list(scope_rules)), scope)
@staticmethod
def _extract_subscope_rules(rules):
@@ -923,7 +1017,7 @@ class RuleSet(object):
return done
def filter_rules_by_meta(self, tag):
def filter_rules_by_meta(self, tag: str) -> "RuleSet":
"""
return new rule set with rules filtered based on all meta field values, adds all dependency rules
apply tag-based rule filter assuming that all required rules are loaded
@@ -932,11 +1026,11 @@ class RuleSet(object):
TODO handle circular dependencies?
TODO support -t=metafield <k>
"""
rules = self.rules.values()
rules = list(self.rules.values())
rules_filtered = set([])
for rule in rules:
for k, v in rule.meta.items():
if isinstance(v, six.string_types) and tag in v:
if isinstance(v, str) and tag in v:
logger.debug('using rule "%s" and dependencies, found tag in meta.%s: %s', rule.name, k, v)
rules_filtered.update(set(capa.rules.get_rules_and_dependencies(rules, rule.name)))
break
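A toy sketch of the tag matching performed by `filter_rules_by_meta` above: every string-valued meta field is searched for the tag, and matching rules (plus their dependencies, which this sketch omits) are kept:

```
# toy sketch of the tag filter above; dependency expansion is omitted.
rules_meta = {
    "create reverse shell": {"author": "william.ballenthin@mandiant.com", "scope": "function"},
    "write file": {"author": "someone@example.com", "scope": "function"},  # made-up author
}
tag = "ballenthin"
filtered = [
    name
    for name, meta in rules_meta.items()
    if any(isinstance(v, str) and tag in v for v in meta.values())
]
assert filtered == ["create reverse shell"]
```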


@@ -1 +1 @@
__version__ = "1.5.1"
__version__ = "3.0.2"

Binary image files not shown (added and removed), including the new file doc/img/changelog/tab.gif.


@@ -27,6 +27,10 @@ To install capa as a Python library use `pip` to fetch the `flare-capa` module.
#### *Note*:
This method is appropriate for integrating capa in an existing project.
This technique doesn't pull the default rule set, so you should check it out separately from [capa-rules](https://github.com/fireeye/capa-rules/) and pass the directory to the entrypoint using `-r` or set the rules path in the IDA Pro plugin.
This technique also doesn't set up the default library identification [signatures](https://github.com/fireeye/capa/tree/master/sigs). You can pass the signature directory using the `-s` argument.
For example, to run capa with both a rule path and a signature path:
capa -r /path/to/capa-rules -s /path/to/capa-sigs suspicious.exe
Alternatively, see Method 3 below.
### 1. Install capa module
@@ -42,6 +46,7 @@ If you'd like to review and modify the capa source code, you'll need to check it
Next, clone the capa git repository.
We use submodules to separate [code](https://github.com/fireeye/capa), [rules](https://github.com/fireeye/capa-rules), and [test data](https://github.com/fireeye/capa-testfiles).
To clone everything use the `--recurse-submodules` option:
- CAUTION: The capa testfiles repository contains many malware samples. If you pull down everything using this method, you may want to clone into a directory that won't trigger your anti-virus software.
- `$ git clone --recurse-submodules https://github.com/fireeye/capa.git /local/path/to/src` (HTTPS)
- `$ git clone --recurse-submodules git@github.com:fireeye/capa.git /local/path/to/src` (SSH)
@@ -59,6 +64,25 @@ Use `pip` to install the source code in "editable" mode. This means that Python
You'll find that the `capa.exe` (Windows) or `capa` (Linux/MacOS) executables in your path now invoke the capa binary from this directory.
#### Development
##### venv [optional]
For development, we recommend using [venv](https://docs.python.org/3/tutorial/venv.html). It allows you to create a virtual environment: a self-contained directory tree that contains a Python installation for a particular version of Python, plus a number of additional packages. This approach avoids conflicts between the requirements of different applications on your computer. It also ensures that you don't forget to add a new requirement to `setup.py` when relying on a library already installed on your system.
To create an environment (in the parent directory, to avoid committing it by accident or messing with the linters), run:
`$ python3 -m venv ../capa-env`
To activate `capa-env` in Linux or MacOS, run:
`$ source ../capa-env/bin/activate`
To activate `capa-env` in Windows, run:
`$ ..\capa-env\Scripts\activate.bat`
For more details about creating and using virtual environments, check out the [venv documentation](https://docs.python.org/3/tutorial/venv.html).
##### Install development dependencies
We use the following tools to ensure consistent code style and formatting:
- [black](https://github.com/psf/black) code formatter, with `-l 120`
- [isort 5](https://pypi.org/project/isort/) code formatter, with `--profile black --length-sort --line-width 120`
@@ -69,30 +93,28 @@ To install these development dependencies, run:
`$ pip install -e /local/path/to/src[dev]`
Note that some development dependencies (including the black code formatter) require Python 3.
To check the code style, formatting and run the tests you can run the script `scripts/ci.sh`.
You can run it with the argument `no_tests` to skip the tests and only run the code style and formatting: `scripts/ci.sh no_tests`
##### Setup hooks [optional]
If you plan to contribute to capa, you may want to set up the hooks.
Run `scripts/setup-hooks.sh` to set up the following hooks:
- The `pre-commit` hook runs checks before every `git commit`.
It runs `scripts/ci.sh no_tests` aborting the commit if there are code style or rule linter offenses you need to fix.
- The `pre-push` hook runs checks before every `git push`.
It runs `scripts/ci.sh` aborting the push if there are code style or rule linter offenses or if the tests fail.
This way you can ensure everything is alright before sending a pull request.
You can skip the checks by using the `--no-verify` git option.
### 3. Compile binary using PyInstaller
We compile capa standalone binaries using PyInstaller. To reproduce the build process check out the source code as described above and follow these steps.
#### Install PyInstaller:
For Python 2.7: `$ pip install 'pyinstaller==3.*'` (PyInstaller 4 doesn't support Python 2.7)
For Python 3: `$ pip install 'pyinstaller`
`$ pip install pyinstaller` (Python 3)
#### Run Pyinstaller
`$ pyinstaller .github/pyinstaller/pyinstaller.spec`
You can find the compiled binary in the created directory `dist/`.
### 4. Setup hooks [optional]
If you plan to contribute to capa, you may want to setup the hooks.
Run `scripts/setup-hooks.sh` to set the following hooks up:
- The `pre-commit` hook runs checks before every `git commit`.
It runs `scripts/ci.sh no_tests` aborting the commit if there are code style or rule linter offenses you need to fix.
You can skip this check by using the `--no-verify` git option.
- The `pre-push` hook runs checks before every `git push`.
It runs `scripts/ci.sh` aborting the push if there are code style or rule linter offenses or if the tests fail.
This way you can ensure everything is alright before sending a pull request.

doc/release.md Normal file

@@ -0,0 +1,43 @@
# Release checklist
- [ ] Ensure all [milestoned issues/PRs](https://github.com/fireeye/capa/milestones) are addressed, or reassign to a new milestone.
- [ ] Add the `dont merge` label to all PRs that are close to being ready to merge (or merge them if they are ready) in [capa](https://github.com/fireeye/capa/pulls) and [capa-rules](https://github.com/fireeye/capa-rules/pulls).
- [ ] Ensure the [CI workflow succeeds in master](https://github.com/fireeye/capa/actions/workflows/tests.yml?query=branch%3Amaster).
- [ ] Ensure that `python scripts/lint.py rules/ --thorough` succeeds (only `missing examples` offenses are allowed in the nursery).
- [ ] Review changes
- capa https://github.com/fireeye/capa/compare/\<last-release\>...master
- capa-rules https://github.com/fireeye/capa-rules/compare/\<last-release\>...master
- [ ] Update [CHANGELOG.md](https://github.com/fireeye/capa/blob/master/CHANGELOG.md)
- Do not forget to add a nice introduction thanking contributors
- Remember that we need a major release if we introduce breaking changes
- Sections: see template below
- Update `Raw diffs` links
- Create placeholder for `master (unreleased)` section
```
## master (unreleased)
### New Features
### Breaking Changes
### New Rules (0)
-
### Bug Fixes
### capa explorer IDA Pro plugin
### Development
### Raw diffs
- [capa <release>...master](https://github.com/fireeye/capa/compare/<release>...master)
- [capa-rules <release>...master](https://github.com/fireeye/capa-rules/compare/<release>...master)
```
- [ ] Update [capa/version.py](https://github.com/fireeye/capa/blob/master/capa/version.py)
- [ ] Create a PR with the updated [CHANGELOG.md](https://github.com/fireeye/capa/blob/master/CHANGELOG.md) and [capa/version.py](https://github.com/fireeye/capa/blob/master/capa/version.py). Copy this checklist in the PR description.
- [ ] After PR review, merge the PR and [create the release in GH](https://github.com/fireeye/capa/releases/new) using text from the [CHANGELOG.md](https://github.com/fireeye/capa/blob/master/CHANGELOG.md).
- [ ] Verify GH actions [upload artifacts](https://github.com/fireeye/capa/releases), [publish to PyPI](https://pypi.org/project/flare-capa) and [create a tag in capa rules](https://github.com/fireeye/capa-rules/tags) upon completion.
- [ ] [Spread the word](https://twitter.com)
- [ ] Update internal service


@@ -11,3 +11,9 @@ For example, `capa -t william.ballenthin@mandiant.com` runs rules that reference
### IDA Pro plugin: capa explorer
Please check out the [capa explorer documentation](/capa/ida/plugin/README.md).
### save time by reusing .viv files
Set the environment variable `CAPA_SAVE_WORKSPACE` to instruct the underlying analysis engine to
cache its intermediate results to the file system. For example, vivisect will create `.viv` files.
Subsequently, capa may run faster when reprocessing the same input file.
This is particularly useful during rule development as you repeatedly test a rule against a known sample.
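If you drive capa from Python rather than from the shell, you can set the variable on the environment you pass to the process; a minimal sketch (paths follow the examples above):

```
# minimal sketch: enable workspace caching, then invoke capa as a subprocess
import os
import subprocess

env = dict(os.environ, CAPA_SAVE_WORKSPACE="1")
subprocess.run(["capa", "-r", "/path/to/capa-rules", "suspicious.exe"], env=env)
```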

rules

Submodule rules updated: a3274122c5...f04491001d


@@ -1,247 +1,223 @@
#!/usr/bin/env python
"""
bulk-process
Invoke capa recursively against a directory of samples
and emit a JSON document mapping the file paths to their results.
By default, this will use subprocesses for parallelism.
Use `-n/--parallelism` to change the subprocess count from
the default of current CPU count.
Use `--no-mp` to use threads instead of processes,
which is probably not useful unless you set `--parallelism=1`.
example:
$ python scripts/bulk-process /tmp/suspicious
{
"/tmp/suspicious/suspicious.dll_": {
"rules": {
"encode data using XOR": {
"matches": {
"268440358": {
[...]
"/tmp/suspicious/1.dll_": { ... }
"/tmp/suspicious/2.dll_": { ... }
}
usage:
usage: bulk-process.py [-h] [-r RULES] [-d] [-q] [-n PARALLELISM] [--no-mp]
input
detect capabilities in programs.
positional arguments:
input Path to directory of files to recursively analyze
optional arguments:
-h, --help show this help message and exit
-r RULES, --rules RULES
Path to rule file or directory, use embedded rules by
default
-d, --debug Enable debugging output on STDERR
-q, --quiet Disable all output but errors
-n PARALLELISM, --parallelism PARALLELISM
parallelism factor
--no-mp disable subprocesses
Copyright (C) 2020 FireEye, Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at: [package root]/LICENSE.txt
Unless required by applicable law or agreed to in writing, software distributed under the License
is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and limitations under the License.
"""
import sys
import json
import logging
import os.path
import argparse
import multiprocessing
import multiprocessing.pool
import capa
import capa.main
import capa.render
logger = logging.getLogger("capa")
def get_capa_results(args):
"""
run capa against the file at the given path, using the given rules.
args is a tuple, containing:
rules (capa.rules.RuleSet): the rules to match
format (str): the name of the sample file format
path (str): the file system path to the sample to process
args is a tuple because i'm not quite sure how to unpack multiple arguments using `map`.
returns a dict with two required keys:
path (str): the file system path of the sample to process
status (str): either "error" or "ok"
when status == "error", then a human readable message is found in property "error".
when status == "ok", then the capa results are found in the property "ok".
the capa results are a dictionary with the following keys:
meta (dict): the meta analysis results
capabilities (dict): the matched capabilities and their result objects
"""
rules, format, path = args
logger.info("computing capa results for: %s", path)
try:
extractor = capa.main.get_extractor(path, format, disable_progress=True)
except capa.main.UnsupportedFormatError:
# i'm not 100% sure if multiprocessing will reliably raise exceptions across process boundaries.
# so instead, return an object with explicit success/failure status.
#
# if success, then status=ok, and results found in property "ok"
# if error, then status=error, and human readable message in property "error"
return {
"path": path,
"status": "error",
"error": "input file does not appear to be a PE file: %s" % path,
}
except capa.main.UnsupportedRuntimeError:
return {
"path": path,
"status": "error",
"error": "unsupported runtime or Python interpreter",
}
except Exception as e:
return {
"path": path,
"status": "error",
"error": "unexpected error: %s" % (e),
}
meta = capa.main.collect_metadata("", path, "", format, extractor)
capabilities, counts = capa.main.find_capabilities(rules, extractor, disable_progress=True)
meta["analysis"].update(counts)
return {
"path": path,
"status": "ok",
"ok": {
"meta": meta,
"capabilities": capabilities,
},
}
def main(argv=None):
if argv is None:
argv = sys.argv[1:]
parser = argparse.ArgumentParser(description="detect capabilities in programs.")
parser.add_argument("input", type=str, help="Path to directory of files to recursively analyze")
parser.add_argument(
"-r",
"--rules",
type=str,
default="(embedded rules)",
help="Path to rule file or directory, use embedded rules by default",
)
parser.add_argument("-d", "--debug", action="store_true", help="Enable debugging output on STDERR")
parser.add_argument("-q", "--quiet", action="store_true", help="Disable all output but errors")
parser.add_argument(
"-n", "--parallelism", type=int, default=multiprocessing.cpu_count(), help="parallelism factor"
)
parser.add_argument("--no-mp", action="store_true", help="disable subprocesses")
args = parser.parse_args(args=argv)
if args.quiet:
logging.basicConfig(level=logging.ERROR)
logging.getLogger().setLevel(logging.ERROR)
elif args.debug:
logging.basicConfig(level=logging.DEBUG)
logging.getLogger().setLevel(logging.DEBUG)
else:
logging.basicConfig(level=logging.INFO)
logging.getLogger().setLevel(logging.INFO)
# disable vivisect-related logging, it's verbose and not relevant for capa users
capa.main.set_vivisect_log_level(logging.CRITICAL)
# py2 doesn't know about cp65001, which is a variant of utf-8 on windows
# tqdm bails when trying to render the progress bar in this setup.
# because cp65001 is utf-8, we just map that codepage to the utf-8 codec.
# see #380 and: https://stackoverflow.com/a/3259271/87207
import codecs
codecs.register(lambda name: codecs.lookup("utf-8") if name == "cp65001" else None)
if args.rules == "(embedded rules)":
logger.info("using default embedded rules")
logger.debug("detected running from source")
args.rules = os.path.join(os.path.dirname(__file__), "..", "rules")
logger.debug("default rule path (source method): %s", args.rules)
else:
logger.info("using rules path: %s", args.rules)
try:
rules = capa.main.get_rules(args.rules)
rules = capa.rules.RuleSet(rules)
logger.info("successfully loaded %s rules", len(rules))
except (IOError, capa.rules.InvalidRule, capa.rules.InvalidRuleSet) as e:
logger.error("%s", str(e))
return -1
samples = []
for (base, directories, files) in os.walk(args.input):
for file in files:
samples.append(os.path.join(base, file))
def pmap(f, args, parallelism=multiprocessing.cpu_count()):
"""apply the given function f to the given args using subprocesses"""
return multiprocessing.Pool(parallelism).imap(f, args)
def tmap(f, args, parallelism=multiprocessing.cpu_count()):
"""apply the given function f to the given args using threads"""
return multiprocessing.pool.ThreadPool(parallelism).imap(f, args)
def map(f, args, parallelism=None):
"""apply the given function f to the given args in the current thread"""
for arg in args:
yield f(arg)
if args.no_mp:
if args.parallelism == 1:
logger.debug("using current thread mapper")
mapper = map
else:
logger.debug("using threading mapper")
mapper = tmap
else:
logger.debug("using process mapper")
mapper = pmap
results = {}
for result in mapper(
get_capa_results, [(rules, "pe", sample) for sample in samples], parallelism=args.parallelism
):
if result["status"] == "error":
logger.warning(result["error"])
elif result["status"] == "ok":
meta = result["ok"]["meta"]
capabilities = result["ok"]["capabilities"]
# our renderer expects to emit a json document for a single sample
# so we deserialize the json document, store it in a larger dict, and we'll subsequently re-encode.
results[result["path"]] = json.loads(capa.render.render_json(meta, rules, capabilities))
else:
raise ValueError("unexpected status: %s" % (result["status"]))
print(json.dumps(results))
logger.info("done.")
return 0
if __name__ == "__main__":
sys.exit(main())
#!/usr/bin/env python
"""
bulk-process
Invoke capa recursively against a directory of samples
and emit a JSON document mapping the file paths to their results.
By default, this will use subprocesses for parallelism.
Use `-n/--parallelism` to change the subprocess count from
the default of current CPU count.
Use `--no-mp` to use threads instead of processes,
which is probably not useful unless you set `--parallelism=1`.
example:
$ python scripts/bulk-process /tmp/suspicious
{
"/tmp/suspicious/suspicious.dll_": {
"rules": {
"encode data using XOR": {
"matches": {
"268440358": {
[...]
"/tmp/suspicious/1.dll_": { ... }
"/tmp/suspicious/2.dll_": { ... }
}
usage:
usage: bulk-process.py [-h] [-r RULES] [-d] [-q] [-n PARALLELISM] [--no-mp]
input
detect capabilities in programs.
positional arguments:
input Path to directory of files to recursively analyze
optional arguments:
-h, --help show this help message and exit
-r RULES, --rules RULES
Path to rule file or directory, use embedded rules by
default
-d, --debug Enable debugging output on STDERR
-q, --quiet Disable all output but errors
-n PARALLELISM, --parallelism PARALLELISM
parallelism factor
--no-mp disable subprocesses
Copyright (C) 2020 FireEye, Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at: [package root]/LICENSE.txt
Unless required by applicable law or agreed to in writing, software distributed under the License
is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and limitations under the License.
"""
import os
import sys
import json
import logging
import os.path
import argparse
import multiprocessing
import multiprocessing.pool
import capa
import capa.main
import capa.rules
import capa.render.json
logger = logging.getLogger("capa")
def get_capa_results(args):
"""
run capa against the file at the given path, using the given rules.
args is a tuple, containing:
rules (capa.rules.RuleSet): the rules to match
signatures (List[str]): list of file system paths to signature files
format (str): the name of the sample file format
path (str): the file system path to the sample to process
args is a tuple because i'm not quite sure how to unpack multiple arguments using `map`.
returns a dict with two required keys:
path (str): the file system path of the sample to process
status (str): either "error" or "ok"
when status == "error", then a human readable message is found in property "error".
when status == "ok", then the capa results are found in the property "ok".
the capa results are a dictionary with the following keys:
meta (dict): the meta analysis results
capabilities (dict): the matched capabilities and their result objects
"""
rules, sigpaths, format, path = args
should_save_workspace = os.environ.get("CAPA_SAVE_WORKSPACE") not in ("0", "no", "NO", "n", None)
logger.info("computing capa results for: %s", path)
try:
extractor = capa.main.get_extractor(
path, format, capa.main.BACKEND_VIV, sigpaths, should_save_workspace, disable_progress=True
)
except capa.main.UnsupportedFormatError:
# i'm not 100% sure if multiprocessing will reliably raise exceptions across process boundaries.
# so instead, return an object with explicit success/failure status.
#
# if success, then status=ok, and results found in property "ok"
# if error, then status=error, and human readable message in property "error"
return {
"path": path,
"status": "error",
"error": "input file does not appear to be a PE file: %s" % path,
}
except capa.main.UnsupportedRuntimeError:
return {
"path": path,
"status": "error",
"error": "unsupported runtime or Python interpreter",
}
except Exception as e:
return {
"path": path,
"status": "error",
"error": "unexpected error: %s" % (e),
}
meta = capa.main.collect_metadata("", path, "", extractor)
capabilities, counts = capa.main.find_capabilities(rules, extractor, disable_progress=True)
meta["analysis"].update(counts)
return {
"path": path,
"status": "ok",
"ok": {
"meta": meta,
"capabilities": capabilities,
},
}
def main(argv=None):
if argv is None:
argv = sys.argv[1:]
parser = argparse.ArgumentParser(description="detect capabilities in programs.")
capa.main.install_common_args(parser, wanted={"rules", "signatures"})
parser.add_argument("input", type=str, help="Path to directory of files to recursively analyze")
parser.add_argument(
"-n", "--parallelism", type=int, default=multiprocessing.cpu_count(), help="parallelism factor"
)
parser.add_argument("--no-mp", action="store_true", help="disable subprocesses")
args = parser.parse_args(args=argv)
capa.main.handle_common_args(args)
try:
rules = capa.main.get_rules(args.rules)
rules = capa.rules.RuleSet(rules)
logger.info("successfully loaded %s rules", len(rules))
except (IOError, capa.rules.InvalidRule, capa.rules.InvalidRuleSet) as e:
logger.error("%s", str(e))
return -1
try:
sig_paths = capa.main.get_signatures(args.signatures)
except (IOError) as e:
logger.error("%s", str(e))
return -1
samples = []
for (base, directories, files) in os.walk(args.input):
for file in files:
samples.append(os.path.join(base, file))
def pmap(f, args, parallelism=multiprocessing.cpu_count()):
"""apply the given function f to the given args using subprocesses"""
return multiprocessing.Pool(parallelism).imap(f, args)
def tmap(f, args, parallelism=multiprocessing.cpu_count()):
"""apply the given function f to the given args using threads"""
return multiprocessing.pool.ThreadPool(parallelism).imap(f, args)
def map(f, args, parallelism=None):
"""apply the given function f to the given args in the current thread"""
for arg in args:
yield f(arg)
if args.no_mp:
if args.parallelism == 1:
logger.debug("using current thread mapper")
mapper = map
else:
logger.debug("using threading mapper")
mapper = tmap
else:
logger.debug("using process mapper")
mapper = pmap
results = {}
for result in mapper(
get_capa_results, [(rules, sig_paths, "pe", sample) for sample in samples], parallelism=args.parallelism
):
if result["status"] == "error":
logger.warning(result["error"])
elif result["status"] == "ok":
meta = result["ok"]["meta"]
capabilities = result["ok"]["capabilities"]
# our renderer expects to emit a json document for a single sample
# so we deserialize the json document, store it in a larger dict, and we'll subsequently re-encode.
results[result["path"]] = json.loads(capa.render.json.render(meta, rules, capabilities))
else:
raise ValueError("unexpected status: %s" % (result["status"]))
print(json.dumps(results))
logger.info("done.")
return 0
if __name__ == "__main__":
sys.exit(main())
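The `pmap`, `tmap`, and `map` helpers defined inside `main()` above are interchangeable because they share the same `(f, args, parallelism)` signature, so the mapper can be selected once and the result loop stays unchanged. A self-contained toy run of the same pattern (process-based mapping is omitted here to keep the snippet runnable without a `__main__` guard):

```
# toy demonstration of the interchangeable mapper pattern used in main() above
import multiprocessing.pool

def square(x):
    return x * x

def tmap(f, args, parallelism=2):
    """apply f to args using a thread pool, like the script's tmap"""
    with multiprocessing.pool.ThreadPool(parallelism) as pool:
        return list(pool.imap(f, args))

def serial_map(f, args, parallelism=None):
    """apply f to args in the current thread, like the script's map"""
    return [f(arg) for arg in args]

for mapper in (tmap, serial_map):
    assert mapper(square, [1, 2, 3], parallelism=2) == [1, 4, 9]
```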

scripts/capa2yara.py Normal file

@@ -0,0 +1,760 @@
"""
Convert capa rules to YARA rules (where this is possible)
usage: capa2yara.py [-h] [--private] [--version] [-v] [-vv] [-d] [-q] [--color {auto,always,never}] [-t TAG] rules
Capa to YARA rule converter
positional arguments:
rules Path to rules
optional arguments:
-h, --help show this help message and exit
--private, -p Create private rules
--version show program's version number and exit
-v, --verbose enable verbose result document (no effect with --json)
-vv, --vverbose enable very verbose result document (no effect with --json)
-d, --debug enable debugging output on STDERR
-q, --quiet disable all output but errors
--color {auto,always,never}
enable ANSI color codes in results, default: only during interactive session
-t TAG, --tag TAG filter on rule meta field values
Copyright (C) 2020, 2021 Arnim Rupp (@ruppde) and FireEye, Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at: [package root]/LICENSE.txt
Unless required by applicable law or agreed to in writing, software distributed under the License
is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and limitations under the License.
"""
import re
import sys
import string
import logging
import argparse
import datetime
import itertools
import capa.main
import capa.rules
import capa.engine
import capa.features
import capa.features.insn
from capa.features.common import BITNESS_X32, BITNESS_X64, String
logger = logging.getLogger("capa2yara")
today = str(datetime.date.today())
# create unique variable names for each rule in case somebody wants to move/copy stuff around later
var_names = ["".join(letters) for letters in itertools.product(string.ascii_lowercase, repeat=3)]
# these have to be the internal names used by capa.py, which are sometimes different from the ones written out in the rules, e.g. "2 or more" is "Some" and count is "Range"
unsupported = ["characteristic", "mnemonic", "offset", "subscope", "Range"]
# TODO shorten this list, possible stuff:
# - 2 or more strings: e.g.
# -- https://github.com/fireeye/capa-rules/blob/master/collection/file-managers/gather-direct-ftp-information.yml
# -- https://github.com/fireeye/capa-rules/blob/master/collection/browser/gather-firefox-profile-information.yml
# - count(string (1 rule: /executable/subfile/pe/contain-an-embedded-pe-file.yml)
# - count(match( could be done by creating the referenced rule a 2nd time with the condition, that it hits x times (only 1 rule: ./anti-analysis/anti-disasm/contain-anti-disasm-techniques.yml)
# - it would be technically possible to get the "basic blocks" working, but those rules mostly contain other unsupported statements => not worth the effort.
# collect all converted rules to be able to check if we have needed sub rules for match:
converted_rules = []
count_incomplete = 0
default_tags = "CAPA "
# minimum number of rounds needed to be able to convert rules which depend on referenced rules several levels deep
min_rounds = 5
unsupported_capa_rules = open("unsupported_capa_rules.yml", "wb")
unsupported_capa_rules_names = open("unsupported_capa_rules.txt", "wb")
unsupported_capa_rules_list = []
condition_header = """
capa_pe_file and
"""
condition_rule = """
private rule capa_pe_file : CAPA {
meta:
description = "match in PE files. used by all further CAPA rules"
author = "Arnim Rupp"
condition:
uint16be(0) == 0x4d5a
or uint16be(0) == 0x558b
or uint16be(0) == 0x5649
}
"""
def check_feature(statement, rulename):
if statement in unsupported:
logger.info("unsupported: " + statement + " in rule: " + rulename)
return True
else:
return False
def get_rule_url(path):
path = re.sub(r"\.\.\/", "", path)
path = re.sub(r"capa-rules\/", "", path)
return "https://github.com/fireeye/capa-rules/blob/master/" + path
def convert_capa_number_to_yara_bytes(number):
if not number.startswith("0x"):
print("TODO: fix decimal")
sys.exit()
number = re.sub(r"^0[xX]", "", number)
logger.info("number ok: " + repr(number))
# include spaces every 2 hex
bytesv = re.sub(r"(..)", r"\1 ", number)
# reverse order
bytesl = bytesv.split(" ")
bytesl.reverse()
bytesv = " ".join(bytesl)
# fix spaces
bytesv = bytesv[1:] + " "
return bytesv
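A worked trace of the helper above, re-implemented standalone for illustration: the hex digits are split into byte pairs and reversed, so the resulting YARA byte pattern matches the constant's little-endian encoding in the binary.

```
# standalone re-implementation of the byte reversal above, for illustration only
import re

def to_yara_bytes(number: str) -> str:
    number = re.sub(r"^0[xX]", "", number)               # "0x5A4D" -> "5A4D"
    pairs = re.sub(r"(..)", r"\1 ", number).split(" ")   # -> ["5A", "4D", ""]
    pairs.reverse()
    return " ".join(pairs)[1:] + " "                     # -> "4D 5A "

assert to_yara_bytes("0x5A4D") == "4D 5A "               # little-endian byte order
```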
def convert_rule_name(rule_name):
# yara rule names: "Identifiers must follow the same lexical conventions of the C programming language, they can contain any alphanumeric character and the underscore character, but the first character cannot be a digit. Rule identifiers are case sensitive and cannot exceed 128 characters." so we replace any non-alphanumeric character with _
rule_name = re.sub(r"\W", "_", rule_name)
rule_name = "capa_" + rule_name
return rule_name
def convert_description(statement):
try:
desc = statement.description
if desc:
yara_desc = " // " + desc
logger.info("using desc: " + repr(yara_desc))
return yara_desc
except:
# no description
pass
return ""
def convert_rule(rule, rulename, cround, depth):
depth += 1
logger.info("recursion depth: " + str(depth))
global var_names
def do_statement(s_type, kid):
yara_strings = ""
yara_condition = ""
if check_feature(s_type, rulename):
return "BREAK", s_type
elif s_type == "string":
string = kid.value
logger.info("doing string: " + repr(string))
string = string.replace("\\", "\\\\")
string = string.replace("\n", "\\n")
string = string.replace("\t", "\\t")
var_name = "str_" + var_names.pop(0)
yara_strings += "\t$" + var_name + ' = "' + string + '" ascii wide' + convert_description(kid) + "\n"
yara_condition += "\t$" + var_name + " "
elif s_type == "api" or s_type == "import":
# TODO: is it possible in YARA to distinguish between api & import?
# https://github.com/fireeye/capa-rules/blob/master/doc/format.md#api
api = kid.value
logger.info("doing api: " + repr(api))
# e.g. kernel32.CreateNamedPipe => look for kernel32.dll and CreateNamedPipe
if "." in api:
dll, api = api.split(".")
# a regex (with /i) is needed because a plain string search for "CreateMutex" in imports() wouldn't also match e.g. CreateMutexA
yara_condition += "\tpe.imports(/" + dll + "/i, /" + api + "/) "
else:
# e.g. - api: 'CallNextHookEx'
# (from user32.dll)
# even looking for empty string in dll_regex doesn't work for some files (list below) with pe.imports so do just a string search
# yara_condition += '\tpe.imports(/.{0,30}/i, /' + api + '/) '
# 5fbbfeed28b258c42e0cfeb16718b31c, 2D3EDC218A90F03089CC01715A9F047F, 7EFF498DE13CC734262F87E6B3EF38AB, C91887D861D9BD4A5872249B641BC9F9, a70052c45e907820187c7e6bcdc7ecca, 0596C4EA5AA8DEF47F22C85D75AACA95
var_name = "api_" + var_names.pop(0)
# limit regex with word boundary \b but also search for appended A and W
# TODO: better use something like /(\\x00|\\x01|\\x02|\\x03|\\x04)' + api + '(A|W)?\\x00/ ???
yara_strings += "\t$" + var_name + " = /\\b" + api + "(A|W)?\\b/ ascii wide\n"
yara_condition += "\t$" + var_name + " "
elif s_type == "export":
export = kid.value
logger.info("doing export: " + repr(export))
yara_condition += '\tpe.exports("' + export + '") '
elif s_type == "section":
# https://github.com/fireeye/capa-rules/blob/master/doc/format.md#section
section = kid.value
logger.info("doing section: " + repr(section))
# e.g. - section: .rsrc
var_name_sec = var_names.pop(0)
# yeah, it would be better to make one loop out of multiple sections but we're in POC-land (and I guess it's not much of a performance hit, loop over short array?)
yara_condition += (
"\tfor any " + var_name_sec + " in pe.sections : ( " + var_name_sec + '.name == "' + section + '" ) '
)
elif s_type == "match":
# https://github.com/fireeye/capa-rules/blob/master/doc/format.md#matching-prior-rule-matches-and-namespaces
match = kid.value
logger.info("doing match: " + repr(match))
# e.g. - match: create process
# - match: host-interaction/file-system/write
match_rule_name = convert_rule_name(match)
if match.startswith(rulename + "/"):
logger.info("Depending on myself = basic block: " + match)
return "BREAK", "Depending on myself = basic block"
if match_rule_name in converted_rules:
yara_condition += "\t" + match_rule_name + "\n"
else:
# don't complain in the early rounds as there should be 3+ rounds (if all rules are converted)
if cround > min_rounds - 2:
logger.info("needed sub-rule not converted (yet, maybe in next round): " + repr(match))
return "BREAK", "needed sub-rule not converted"
else:
return "BREAK", "NOLOG"
elif s_type == "bytes":
bytesv = kid.get_value_str()
logger.info("doing bytes: " + repr(bytesv))
var_name = var_names.pop(0)
yara_strings += "\t$" + var_name + " = { " + bytesv + " }" + convert_description(kid) + "\n"
yara_condition += "\t$" + var_name + " "
elif s_type == "number":
number = kid.get_value_str()
logger.info("doing number: " + repr(number))
if len(number) < 10:
logger.info("too short for byte search (until I figure out how to do it properly)" + repr(number))
return "BREAK", "Number too short"
# there's just one rule which contains 0xFFFFFFFF but yara gives a warning if it is used
if number == "0xFFFFFFFF":
return "BREAK", "slow byte pattern for YARA search"
logger.info("number ok: " + repr(number))
number = convert_capa_number_to_yara_bytes(number)
logger.info("number ok: " + repr(number))
var_name = "num_" + var_names.pop(0)
yara_strings += "\t$" + var_name + " = { " + number + "}" + convert_description(kid) + "\n"
yara_condition += "$" + var_name + " "
elif s_type == "regex":
regex = kid.get_value_str()
logger.info("doing regex: " + repr(regex))
# change capa's /xxx/i to yara's /xxx/ nocase; count will be used later to decide whether to append 'nocase'
regex, count = re.subn(r"/i$", "/", regex)
# remove the / at the beginning and end
regex = regex[1:-1]
# all .* in the regexes of capa look like they should be maximum 100 chars so take 1000 to speed up rules and prevent yara warnings on poor performance
regex = regex.replace(".*", ".{,1000}")
# strange: capa accepts regexes with unescaped / like - string: /com/exe4j/runtime/exe4jcontroller/i in capa-rules/compiler/exe4j/compiled-with-exe4j.yml, needs a fix for yara:
# would assume that get_value_str() gives the raw string
regex = re.sub(r"(?<!\\)/", r"\/", regex)
# capa uses python regex which accepts /reg(|.exe)/ but yara's regex engine doesn't => fix it
# /reg(|.exe)/ => /reg(.exe)?/
regex = re.sub(r"\(\|([^\)]+)\)", r"(\1)?", regex)
# change beginning of line to null byte, e.g. /^open => /\x00open (not word boundary because we're not looking for the beginning of a word in a text but usually a function name if there's ^ in a capa rule)
regex = re.sub(r"^\^", r"\\x00", regex)
# regex = re.sub(r"^\^", r"\\b", regex)
regex = "/" + regex + "/"
if count:
regex += " nocase"
# strange: if statement.name == "string", the string is as it is, if statement.name == "regex", the string has // around it, e.g. /regex/
var_name = "re_" + var_names.pop(0)
yara_strings += "\t" + "$" + var_name + " = " + regex + " ascii wide " + convert_description(kid) + "\n"
yara_condition += "\t" + "$" + var_name + " "
elif s_type == "Not" or s_type == "And" or s_type == "Or":
pass
else:
logger.info("something unhandled: " + repr(s_type))
sys.exit()
return yara_strings, yara_condition
############################## end def do_statement
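To illustrate the `regex` branch above with a concrete input: a capa pattern like `/reg(|.exe)/i` loses its `/i` suffix (the substitution count later drives appending ` nocase`), the empty alternative is rewritten, and the slashes are re-added for YARA. A standalone re-implementation of just those rewrites (the `^`-to-null-byte rewrite is left out):

```
# standalone sketch of the regex rewrites from the "regex" branch above
import re

def to_yara_regex(regex: str) -> str:
    regex, nocase = re.subn(r"/i$", "/", regex)          # /.../i -> /.../
    regex = regex[1:-1]                                  # strip surrounding slashes
    regex = regex.replace(".*", ".{,1000}")              # bound unbounded repetition
    regex = re.sub(r"(?<!\\)/", r"\/", regex)            # escape bare slashes for yara
    regex = re.sub(r"\(\|([^\)]+)\)", r"(\1)?", regex)   # (|.exe) -> (.exe)?
    regex = "/" + regex + "/"
    return regex + " nocase" if nocase else regex

assert to_yara_regex("/reg(|.exe)/i") == "/reg(.exe)?/ nocase"
```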
yara_strings_list = []
yara_condition_list = []
rule_comment = ""
incomplete = 0
statement = rule.name
logger.info("doing statement: " + statement)
if check_feature(statement, rulename):
return "BREAK", statement, rule_comment, incomplete
if statement == "And" or statement == "Or":
desc = convert_description(rule)
if desc:
logger.info("description of bool statement: " + repr(desc))
yara_strings_list.append("\t" * depth + desc + "\n")
elif statement == "Not":
logger.info("one of those seldom nots: " + rule.name)
# check for nested statements
try:
kids = rule.children
num_kids = len(kids)
logger.info("kids: " + kids)
except:
logger.info("no kids in rule: " + rule.name)
try:
# maybe it's "Not" = only one child:
kid = rule.child
kids = [kid]
num_kids = 1
logger.info("kid: %s", kids)
except:
logger.info("no kid in rule: %s", rule.name)
# just a single statement without 'and' or 'or' before it in this rule
if "kids" not in locals().keys():
logger.info("no kids: " + rule.name)
yara_strings_sub, yara_condition_sub = do_statement(statement, rule)
if yara_strings_sub == "BREAK":
logger.info("Unknown feature at1: " + rule.name)
return "BREAK", yara_condition_sub, rule_comment, incomplete
yara_strings_list.append(yara_strings_sub)
yara_condition_list.append(yara_condition_sub)
else:
x = 0
logger.info("doing kids: %r - len: %s", kids, num_kids)
for kid in kids:
s_type = kid.name
logger.info("doing type: " + s_type + " kidnum: " + str(x))
if s_type == "Some":
cmin = kid.count
logger.info("Some type with mininum: " + str(cmin))
if not cmin:
logger.info("this is optional: which means, we can just ignore it")
x += 1
continue
elif statement == "Or":
logger.info("we're inside an OR, we can just ignore it")
x += 1
continue
else:
# this is "x or more". could be coded for strings TODO
return "BREAK", "Some aka x or more (TODO)", rule_comment, incomplete
if s_type == "And" or s_type == "Or" or s_type == "Not" and not kid.name == "Some":
logger.info("doing bool with recursion: " + repr(kid))
logger.info("kid coming: " + repr(kid.name))
# logger.info("grandchildren: " + repr(kid.children))
##### here we go into RECURSION ##################################################################################
yara_strings_sub, yara_condition_sub, rule_comment_sub, incomplete_sub = convert_rule(
kid, rulename, cround, depth
)
logger.info("coming out of this recursion, depth: " + repr(depth) + " s_type: " + s_type)
if yara_strings_sub == "BREAK":
logger.info(
"Unknown feature at2: " + rule.name + " - s_type: " + s_type + " - depth: " + str(depth)
)
# luckily this is only a killer if we're inside an 'And'; inside an 'Or' we're just missing some coverage
# only accept incomplete rules in rounds > 3 because the reason might be a reference to another rule not converted yet because of missing dependencies
logger.info("rule.name, depth, cround: " + rule.name + ", " + str(depth) + ", " + str(cround))
if rule.name == "Or" and depth == 1 and cround > min_rounds - 1:
logger.info(
"Unknown feature, just ignore this branch and keep the rest bec we're in Or (1): "
+ s_type
+ " - depth: "
+ str(depth)
)
# remove last 'or'
# yara_condition = re.sub(r'\sor $', ' ', yara_condition)
rule_comment += "This rule is incomplete because a branch inside an Or-statement had an unsupported feature and was skipped => coverage is reduced compared to the original capa rule. "
x += 1
incomplete = 1
continue
else:
return "BREAK", yara_condition_sub, rule_comment, incomplete
rule_comment += rule_comment_sub
yara_strings_list.append(yara_strings_sub)
yara_condition_list.append(yara_condition_sub)
incomplete = incomplete or incomplete_sub
yara_strings_sub, yara_condition_sub = do_statement(s_type, kid)
if yara_strings_sub == "BREAK":
logger.info("Unknown feature at3: " + rule.name)
logger.info("rule.name, depth, cround: " + rule.name + ", " + str(depth) + ", " + str(cround))
if rule.name == "Or" and depth == 1 and cround > min_rounds - 1:
logger.info(
"Unknown feature, just ignore this branch and keep the rest bec we're in Or (2): "
+ s_type
+ " - depth: "
+ str(depth)
)
rule_comment += "This rule is incomplete because a branch inside an Or-statement had an unsupported feature and was skipped => coverage is reduced compared to the original capa rule. "
x += 1
incomplete = 1
continue
else:
return "BREAK", yara_condition_sub, rule_comment, incomplete
# don't append And or Or if we got no condition back from this kid from e.g. match in myself or unsupported feature inside 'Or'
if not yara_condition_sub:
continue
yara_strings_list.append(yara_strings_sub)
yara_condition_list.append(yara_condition_sub)
x += 1
# this might happen, if all conditions are inside "or" and none of them was supported
if not yara_condition_list:
return (
"BREAK",
'Multiple statements inside "- or:" were all unsupported, the last one was "' + s_type + '"',
rule_comment,
incomplete,
)
if statement == "And" or statement == "Or":
if yara_strings_list:
yara_strings = "".join(yara_strings_list)
else:
yara_strings = ""
yara_condition = " (\n\t\t" + ("\n\t\t" + statement.lower() + " ").join(yara_condition_list) + " \n\t) "
elif statement == "Some":
cmin = rule.count
logger.info("Some type with mininum at2: " + str(cmin))
if not cmin:
logger.info("this is optional: which means, we can just ignore it")
else:
# this is "x or more". could be coded for strings TODO
return "BREAK", "Some aka x or more (TODO)", rule_comment, incomplete
elif statement == "Not":
logger.info("Not")
yara_strings = "".join(yara_strings_list)
yara_condition = "not " + "".join(yara_condition_list) + " "
else:
if len(yara_condition_list) != 1:
logger.info("something wrong around here" + repr(yara_condition_list) + " - " + statement)
sys.exit()
# strings might be empty with only conditions
if yara_strings_list:
yara_strings = "\n\t" + yara_strings_list[0]
yara_condition = "\n\t" + yara_condition_list[0]
logger.info(
f"################# end of convert_rule() #strings: {len(yara_strings_list)} #conditions: {len(yara_condition_list)}"
)
logger.info(f"strings: {yara_strings} conditions: {yara_condition}")
return yara_strings, yara_condition, rule_comment, incomplete
def output_yar(yara):
print(yara + "\n")
def output_unsupported_capa_rules(yaml, capa_rulename, url, reason):
if reason != "NOLOG":
if capa_rulename not in unsupported_capa_rules_list:
logger.info("unsupported: " + capa_rulename + " - reason: " + reason + " - url: " + url)
unsupported_capa_rules_list.append(capa_rulename)
unsupported_capa_rules.write(yaml.encode("utf-8") + b"\n")
unsupported_capa_rules.write(
(
"Reason: "
+ reason
+ " (there might be multiple unsupported things in this rule, this is the 1st one encountered)"
).encode("utf-8")
+ b"\n"
)
unsupported_capa_rules.write(url.encode("utf-8") + b"\n----------------------------------------------\n")
unsupported_capa_rules_names.write(capa_rulename.encode("utf-8") + b":")
unsupported_capa_rules_names.write(reason.encode("utf-8") + b":")
unsupported_capa_rules_names.write(url.encode("utf-8") + b"\n")
def convert_rules(rules, namespaces, cround):
for rule in rules.rules.values():
rule_name = convert_rule_name(rule.name)
if rule.meta.get("capa/subscope-rule", False):
logger.info("skipping sub scope rule capa: " + rule.name)
continue
if rule_name in converted_rules:
logger.info("skipping already converted rule capa: " + rule.name + " - yara rule: " + rule_name)
continue
logger.info("-------------------------- DOING RULE CAPA: " + rule.name + " - yara rule: " + rule_name)
if "capa/path" in rule.meta:
url = get_rule_url(rule.meta["capa/path"])
else:
url = "no url"
logger.info("URL: " + url)
logger.info("statements: " + repr(rule.statement))
# don't really know what that passed empty string is good for :)
dependencies = rule.get_dependencies(namespaces)
if len(dependencies):
logger.info("Dependencies at4: " + rule.name + " - dep: " + str(dependencies))
for dep in dependencies:
logger.info("Dependencies at44: " + dep)
if not dep.startswith(rule.name + "/"):
logger.info("Depending on another rule: " + dep)
continue
yara_strings, yara_condition, rule_comment, incomplete = convert_rule(rule.statement, rule.name, cround, 0)
if yara_strings == "BREAK":
# only give up if in final extra round #9000
if cround == 9000:
output_unsupported_capa_rules(rule.to_yaml(), rule.name, url, yara_condition)
logger.info("Unknown feature at5: " + rule.name)
else:
yara_meta = ""
metas = rule.meta
rule_tags = ""
for meta in metas:
meta_name = meta
# e.g. 'examples:' can be a list
seen_hashes = []
if isinstance(metas[meta], list):
if meta_name == "examples":
meta_name = "hash"
if meta_name == "att&ck":
meta_name = "attack"
for attack in list(metas[meta]):
logger.info("attack:" + attack)
# cut out tag in square brackets, e.g. Defense Evasion::Obfuscated Files or Information [T1027] => T1027
r = re.search(r"\[(T[^\]]*)", attack)
if r:
tag = r.group(1)
logger.info("attack tag:" + tag)
tag = re.sub(r"\W", "_", tag)
rule_tags += tag + " "
# also add a line "attack = ..." to yaras 'meta:' to keep the long description:
yara_meta += '\tattack = "' + attack + '"\n'
elif meta_name == "mbc":
for mbc in list(metas[meta]):
logger.info("mbc:" + mbc)
# cut out tag in square brackets, e.g. Cryptography::Encrypt Data::RC6 [C0027.010] => C0027.010
r = re.search(r"\[(.[^\]]*)", mbc)
if r:
tag = r.group(1)
logger.info("mbc tag:" + tag)
tag = re.sub(r"\W", "_", tag)
rule_tags += tag + " "
# also add a line "mbc = ..." to yaras 'meta:' to keep the long description:
yara_meta += '\tmbc = "' + mbc + '"\n'
for value in metas[meta]:
if meta_name == "hash":
value = re.sub(r"^([0-9a-f]{20,64}):0x[0-9a-f]{1,10}$", r"\1", value, flags=re.IGNORECASE)
# examples in capa can contain the same hash several times with different offset, so check if it's already there:
# (keeping the offset might be interesting for some, but it breaks yara-ci checks of the final rules)
if value not in seen_hashes:
yara_meta += "\t" + meta_name + ' = "' + value + '"\n'
seen_hashes.append(value)
else:
# no list:
if meta == "capa/path":
url = get_rule_url(metas[meta])
meta_name = "reference"
meta_value = "This YARA rule converted from capa rule: " + url
else:
meta_value = metas[meta]
if meta_name == "name":
meta_name = "description"
meta_value += " (converted from capa rule)"
elif meta_name == "lib":
meta_value = str(meta_value)
elif meta_name == "capa/nursery":
meta_name = "capa_nursery"
meta_value = str(meta_value)
# for the rest of the maec/malware-category names:
meta_name = re.sub(r"\W", "_", meta_name)
if meta_name and meta_value:
yara_meta += "\t" + meta_name + ' = "' + meta_value + '"\n'
rule_name_bonus = ""
if rule_comment:
yara_meta += '\tcomment = "' + rule_comment + '"\n'
yara_meta += '\tdate = "' + today + '"\n'
yara_meta += '\tminimum_yara = "3.8"\n'
yara_meta += '\tlicense = "Apache-2.0 License"\n'
# check if there's some beef in condition:
tmp_yc = re.sub(r"(and|or|not)", "", yara_condition)
if re.search(r"\w", tmp_yc):
yara = ""
if make_priv:
yara = "private "
# put yara rule tags here:
rule_tags = default_tags + rule_tags
yara += "rule " + rule_name + " : " + rule_tags + " { \n meta: \n " + yara_meta + "\n"
if "$" in yara_strings:
yara += " strings: \n " + yara_strings + " \n"
yara += " condition:" + condition_header + yara_condition + "\n}"
# TODO: now the rule is finished and could be automatically checked with the capa-testfile(s) named in meta (doing it for all of them using yara-ci upload at the moment)
output_yar(yara)
converted_rules.append(rule_name)
global count_incomplete
count_incomplete += incomplete
else:
output_unsupported_capa_rules(rule.to_yaml(), rule.name, url, yara_condition)
pass
def main(argv=None):
if argv is None:
argv = sys.argv[1:]
parser = argparse.ArgumentParser(description="Capa to YARA rule converter")
parser.add_argument("rules", type=str, help="Path to rules")
parser.add_argument("--private", "-p", action="store_true", help="Create private rules", default=False)
capa.main.install_common_args(parser, wanted={"tag"})
args = parser.parse_args(args=argv)
global make_priv
make_priv = args.private
if args.verbose:
level = logging.DEBUG
elif args.quiet:
level = logging.ERROR
else:
level = logging.INFO
logging.basicConfig(level=level)
logging.getLogger("capa2yara").setLevel(level)
try:
rules = capa.main.get_rules(args.rules, disable_progress=True)
namespaces = capa.rules.index_rules_by_namespace(list(rules))
rules = capa.rules.RuleSet(rules)
logger.info("successfully loaded %s rules (including subscope rules which will be ignored)", len(rules))
if args.tag:
rules = rules.filter_rules_by_meta(args.tag)
logger.debug("selected %s rules", len(rules))
for i, r in enumerate(rules.rules, 1):
logger.debug(" %d. %s", i, r)
except (IOError, capa.rules.InvalidRule, capa.rules.InvalidRuleSet) as e:
logger.error("%s", str(e))
return -1
output_yar(
"// Rules from FireEye's https://github.com/fireeye/capa-rules converted to YARA using https://github.com/fireeye/capa/blob/master/scripts/capa2yara.py by Arnim Rupp"
)
output_yar(
"// Beware: These are less rules than capa (because not all fit into YARA, stats at EOF) and is less precise because e.g. capas function scopes are applied to the whole file"
)
output_yar(
'// Beware: Some rules are incomplete because an optional branch was not supported by YARA. These rules are marked in a comment in meta: (search for "incomplete")'
)
output_yar("// Rule authors and license stay the same")
output_yar(
'// att&ck and MBC tags are put into YARA rule tags. All rules are tagged with "CAPA" for easy filtering'
)
output_yar("// The date = in meta: is the date of converting (there is no date in capa rules)")
output_yar("// Minimum YARA version is 3.8.0 plus PE module")
output_yar('\nimport "pe"')
output_yar(condition_rule)
# do several rounds of converting rules because rules referenced via match: might not have been converted in the 1st run
num_rules = 9999999
cround = 0
while num_rules != len(converted_rules) or cround < min_rounds:
cround += 1
logger.info("doing convert_rules(), round: " + str(cround))
num_rules = len(converted_rules)
convert_rules(rules, namespaces, cround)
# one last round to collect all unconverted rules
convert_rules(rules, namespaces, 9000)
stats = "\n// converted rules : " + str(len(converted_rules))
stats += "\n// among those are incomplete : " + str(count_incomplete)
stats += "\n// unconverted rules : " + str(len(unsupported_capa_rules_list)) + "\n"
logger.info(stats)
output_yar(stats)
return 0
if __name__ == "__main__":
sys.exit(main())
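For context, the while-loop in main() above re-runs convert_rules() until no new rules get converted (and at least min_rounds passes have happened), because a rule that references another rule via "- match:" can only be converted after its dependency was converted in an earlier pass. A minimal, stand-alone sketch of that fixed-point pattern, with illustrative names that are not taken from the script:

def convert_until_stable(convert_round, min_rounds=3):
    # convert_round(converted) is a hypothetical callback that adds the names of
    # newly convertible rules to the `converted` set; stop once a full pass adds
    # nothing new and the minimum number of passes has been reached.
    converted = set()
    passes = 0
    while True:
        passes += 1
        before = len(converted)
        convert_round(converted)
        if len(converted) == before and passes >= min_rounds:
            return converted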


@@ -7,16 +7,17 @@ import capa.main
import capa.rules
import capa.engine
import capa.features
import capa.render.json
import capa.render.utils as rutils
import capa.render.default
import capa.render.result_document
from capa.engine import *
from capa.render import convert_capabilities_to_result_document
# edit this to set the path for file to analyze and rule directory
RULES_PATH = "/tmp/capa/rules/"
# load rules from disk
rules = capa.main.get_rules(RULES_PATH, disable_progress=True)
rules = capa.rules.RuleSet(rules)
rules = capa.rules.RuleSet(capa.main.get_rules(RULES_PATH, disable_progress=True))
# == Render dictionary helpers
def render_meta(doc, ostream):
@@ -103,28 +104,16 @@ def render_attack(doc, ostream):
for rule in rutils.capability_rules(doc):
if not rule["meta"].get("att&ck"):
continue
for attack in rule["meta"]["att&ck"]:
tactic, _, rest = attack.partition("::")
if "::" in rest:
technique, _, rest = rest.partition("::")
subtechnique, _, id = rest.rpartition(" ")
tactics[tactic].add((technique, subtechnique, id))
else:
technique, _, id = rest.rpartition(" ")
tactics[tactic].add((technique, id))
tactics[attack["tactic"]].add((attack["technique"], attack.get("subtechnique"), attack["id"]))
for tactic, techniques in sorted(tactics.items()):
inner_rows = []
for spec in sorted(techniques):
if len(spec) == 2:
technique, id = spec
for (technique, subtechnique, id) in sorted(techniques):
if subtechnique is None:
inner_rows.append("%s %s" % (technique, id))
elif len(spec) == 3:
technique, subtechnique, id = spec
inner_rows.append("%s::%s %s" % (technique, subtechnique, id))
else:
raise RuntimeError("unexpected ATT&CK spec format")
inner_rows.append("%s::%s %s" % (technique, subtechnique, id))
ostream["ATTCK"].setdefault(tactic.upper(), inner_rows)
@@ -149,31 +138,16 @@ def render_mbc(doc, ostream):
if not rule["meta"].get("mbc"):
continue
mbcs = rule["meta"]["mbc"]
if not isinstance(mbcs, list):
raise ValueError("invalid rule: MBC mapping is not a list")
for mbc in mbcs:
objective, _, rest = mbc.partition("::")
if "::" in rest:
behavior, _, rest = rest.partition("::")
method, _, id = rest.rpartition(" ")
objectives[objective].add((behavior, method, id))
else:
behavior, _, id = rest.rpartition(" ")
objectives[objective].add((behavior, id))
for mbc in rule["meta"]["mbc"]:
objectives[mbc["objective"]].add((mbc["behavior"], mbc.get("method"), mbc["id"]))
for objective, behaviors in sorted(objectives.items()):
inner_rows = []
for spec in sorted(behaviors):
if len(spec) == 2:
behavior, id = spec
inner_rows.append("%s %s" % (behavior, id))
elif len(spec) == 3:
behavior, method, id = spec
inner_rows.append("%s::%s %s" % (behavior, method, id))
for (behavior, method, id) in sorted(behaviors):
if method is None:
inner_rows.append("%s [%s]" % (behavior, id))
else:
raise RuntimeError("unexpected MBC spec format")
inner_rows.append("%s::%s [%s]" % (behavior, method, id))
ostream["MBC"].setdefault(objective.upper(), inner_rows)
@@ -191,24 +165,24 @@ def render_dictionary(doc):
def capa_details(file_path, output_format="dictionary"):
# extract features and find capabilities
extractor = capa.main.get_extractor(file_path, "auto", disable_progress=True)
extractor = capa.main.get_extractor(file_path, "auto", capa.main.BACKEND_VIV, [], False, disable_progress=True)
capabilities, counts = capa.main.find_capabilities(rules, extractor, disable_progress=True)
# collect metadata (used only to make rendering more complete)
meta = capa.main.collect_metadata("", file_path, RULES_PATH, "auto", extractor)
meta = capa.main.collect_metadata("", file_path, RULES_PATH, extractor)
meta["analysis"].update(counts)
capa_output = False
if output_format == "dictionary":
# ...as a python dictionary, simplified like the texttable output but returned as a dictionary
doc = convert_capabilities_to_result_document(meta, rules, capabilities)
doc = capa.render.result_document.convert_capabilities_to_result_document(meta, rules, capabilities)
capa_output = render_dictionary(doc)
elif output_format == "json":
# render results
# ...as json
capa_output = json.loads(capa.render.render_json(meta, rules, capabilities))
capa_output = json.loads(capa.render.json.render(meta, rules, capabilities))
elif output_format == "texttable":
# ...as human readable text table
capa_output = capa.render.render_default(meta, rules, capabilities)
capa_output = capa.render.default.render(meta, rules, capabilities)
return capa_output
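A hypothetical caller of the helper above could look like the sketch below; the sample path is a placeholder, and it assumes RULES_PATH at the top of the file points at a valid rules checkout:

if __name__ == "__main__":
    # with output_format="json", capa_details() returns the parsed JSON report as a dict
    details = capa_details("/tmp/suspicious.exe", output_format="json")
    print(sorted(details.keys()))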


@@ -65,6 +65,8 @@ def main(argv=None):
return 0
else:
logger.info("rule requires reformatting (%s)", rule.name)
if "\r\n" in rule.definition:
logger.info("please make sure that the file uses LF (\\n) line endings only")
return 1
if args.in_place:


@@ -9,6 +9,7 @@
# See the License for the specific language governing permissions and limitations under the License.
# Use a console with emoji support for a better experience
# Use venv to ensure that `python` calls the correct python version
# Stash uncommitted changes
MSG="pre-push-$(date +%s)";
@@ -25,17 +26,8 @@ restore_stashed() {
fi
}
python_3() {
case "$(uname -s)" in
CYGWIN*|MINGW32*|MSYS*|MINGW*)
py -3 -m $1 > $2 2>&1;;
*)
python3 -m $1 > $2 2>&1;;
esac
}
# Run isort and print state
python_3 'isort --profile black --length-sort --line-width 120 -c .' 'isort-output.log';
python -m isort --profile black --length-sort --line-width 120 -c . > isort-output.log 2>&1;
if [ $? == 0 ]; then
echo 'isort succeeded!! 💖';
else
@@ -46,7 +38,7 @@ else
fi
# Run black and print state
python_3 'black -l 120 --check .' 'black-output.log';
python -m black -l 120 --check . > black-output.log 2>&1;
if [ $? == 0 ]; then
echo 'black succeeded!! 💝';
else
@@ -70,7 +62,7 @@ fi
# Run tests except if first argument is no_tests
if [ "$1" != 'no_tests' ]; then
echo 'Running tests, please wait ⌛';
pytest tests/ --maxfail=1;
python -m pytest tests/ --maxfail=1;
if [ $? == 0 ]; then
echo 'Tests succeed!! 🎉';
else

scripts/detect-elf-os.py Normal file

@@ -0,0 +1,74 @@
#!/usr/bin/env python3
"""
Copyright (C) 2021 FireEye, Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at: [package root]/LICENSE.txt
Unless required by applicable law or agreed to in writing, software distributed under the License
is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and limitations under the License.
detect-elf-os
Attempt to detect the underlying OS that the given ELF file targets.
"""
import sys
import logging
import argparse
import contextlib
from typing import BinaryIO
import capa.helpers
import capa.features.extractors.elf
logger = logging.getLogger("capa.detect-elf-os")
def main(argv=None):
if capa.helpers.is_runtime_ida():
from capa.ida.helpers import IDAIO
f: BinaryIO = IDAIO()
else:
if argv is None:
argv = sys.argv[1:]
parser = argparse.ArgumentParser(description="Detect the underlying OS for the given ELF file")
parser.add_argument("sample", type=str, help="path to ELF file")
logging_group = parser.add_argument_group("logging arguments")
logging_group.add_argument("-d", "--debug", action="store_true", help="enable debugging output on STDERR")
logging_group.add_argument(
"-q", "--quiet", action="store_true", help="disable all status output except fatal errors"
)
args = parser.parse_args(args=argv)
if args.quiet:
logging.basicConfig(level=logging.WARNING)
logging.getLogger().setLevel(logging.WARNING)
elif args.debug:
logging.basicConfig(level=logging.DEBUG)
logging.getLogger().setLevel(logging.DEBUG)
else:
logging.basicConfig(level=logging.INFO)
logging.getLogger().setLevel(logging.INFO)
f = open(args.sample, "rb")
with contextlib.closing(f):
try:
print(capa.features.extractors.elf.detect_elf_os(f))
return 0
except capa.features.extractors.elf.CorruptElfFile as e:
logger.error("corrupt ELF file: %s", str(e.args[0]))
return -1
if __name__ == "__main__":
if capa.helpers.is_runtime_ida():
main()
else:
sys.exit(main())
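A minimal sketch of calling the same detection routine from other Python code; the sample path is a placeholder, and detect_elf_os() is the function the script above wraps:

import capa.features.extractors.elf

# hypothetical sample path; detect_elf_os() inspects the headers of the open ELF file
with open("/tmp/sample.elf", "rb") as f:
    print(capa.features.extractors.elf.detect_elf_os(f))  # e.g. "linux" or "unknown"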


@@ -25,7 +25,8 @@ Derived from: https://github.com/fireeye/capa/blob/master/scripts/import-to-ida.
import os
import json
from binaryninja import *
import binaryninja
import binaryninja.interaction
def append_func_cmt(bv, va, cmt):
@@ -46,31 +47,31 @@ def append_func_cmt(bv, va, cmt):
def load_analysis(bv):
shortname = os.path.splitext(os.path.basename(bv.file.filename))[0]
dirname = os.path.dirname(bv.file.filename)
log_info(f"dirname: {dirname}\nshortname: {shortname}\n")
binaryninja.log_info(f"dirname: {dirname}\nshortname: {shortname}\n")
if os.access(os.path.join(dirname, shortname + ".js"), os.R_OK):
path = os.path.join(dirname, shortname + ".js")
elif os.access(os.path.join(dirname, shortname + ".json"), os.R_OK):
path = os.path.join(dirname, shortname + ".json")
else:
path = interaction.get_open_filename_input("capa report:", "JSON (*.js *.json);;All Files (*)")
path = binaryninja.interaction.get_open_filename_input("capa report:", "JSON (*.js *.json);;All Files (*)")
if not path or not os.access(path, os.R_OK):
log_error("Invalid filename.")
binaryninja.log_error("Invalid filename.")
return 0
log_info("Using capa file %s" % path)
binaryninja.log_info("Using capa file %s" % path)
with open(path, "rb") as f:
doc = json.loads(f.read().decode("utf-8"))
if "meta" not in doc or "rules" not in doc:
log_error("doesn't appear to be a capa report")
binaryninja.log_error("doesn't appear to be a capa report")
return -1
a = doc["meta"]["sample"]["md5"].lower()
md5 = Transform["MD5"]
rawhex = Transform["RawHex"]
md5 = binaryninja.Transform["MD5"]
rawhex = binaryninja.Transform["RawHex"]
b = rawhex.encode(md5.encode(bv.parent_view.read(bv.parent_view.start, bv.parent_view.end))).decode("utf-8")
if not a == b:
log_error("sample mismatch")
binaryninja.log_error("sample mismatch")
return -2
rows = []
@@ -96,7 +97,7 @@ def load_analysis(bv):
else:
cmt = "%s" % (name,)
log_info("0x%x: %s" % (va, cmt))
binaryninja.log_info("0x%x: %s" % (va, cmt))
try:
# message will look something like:
#
@@ -105,7 +106,7 @@ def load_analysis(bv):
except ValueError:
continue
log_info("ok")
binaryninja.log_info("ok")
PluginCommand.register("Load capa file", "Loads an analysis file from capa", load_analysis)
binaryninja.PluginCommand.register("Load capa file", "Loads an analysis file from capa", load_analysis)


@@ -31,10 +31,8 @@ See the License for the specific language governing permissions and limitations
import json
import logging
import idc
import idautils
import ida_funcs
import ida_idaapi
import ida_kernwin
logger = logging.getLogger("capa")


@@ -13,32 +13,78 @@ Unless required by applicable law or agreed to in writing, software distributed
is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and limitations under the License.
"""
import gc
import os
import sys
import time
import string
import difflib
import hashlib
import inspect
import logging
import os.path
import pathlib
import argparse
import itertools
import posixpath
import contextlib
from typing import Set, Dict, List
from pathlib import Path
from dataclasses import field, dataclass
import tqdm
import termcolor
import ruamel.yaml
import tqdm.contrib.logging
import capa.main
import capa.rules
import capa.engine
import capa.features
import capa.features.insn
import capa.features.common
from capa.rules import Rule, RuleSet
from capa.features.common import Feature
logger = logging.getLogger("capa.lint")
logger = logging.getLogger("lint")
class Lint(object):
def red(s):
return termcolor.colored(s, "red")
def orange(s):
return termcolor.colored(s, "yellow")
def green(s):
return termcolor.colored(s, "green")
@dataclass
class Context:
"""
attributes:
samples: mapping from content hash (MD5, SHA, etc.) to file path.
rules: rules to inspect
is_thorough: should inspect long-running lints
capabilities_by_sample: cache of results, indexed by file path.
"""
samples: Dict[str, Path]
rules: RuleSet
is_thorough: bool
capabilities_by_sample: Dict[Path, Set[str]] = field(default_factory=dict)
class Lint:
WARN = orange("WARN")
FAIL = red("FAIL")
name = "lint"
level = FAIL
recommendation = ""
def check_rule(self, ctx, rule):
def check_rule(self, ctx: Context, rule: Rule):
return False
@@ -46,7 +92,7 @@ class NameCasing(Lint):
name = "rule name casing"
recommendation = "Rename rule using to start with lower case letters"
def check_rule(self, ctx, rule):
def check_rule(self, ctx: Context, rule: Rule):
return rule.name[0] in string.ascii_uppercase and rule.name[1] not in string.ascii_uppercase
@@ -55,7 +101,7 @@ class FilenameDoesntMatchRuleName(Lint):
recommendation = "Rename rule file to match the rule name"
recommendation_template = 'Rename rule file to match the rule name, expected: "{:s}", found: "{:s}"'
def check_rule(self, ctx, rule):
def check_rule(self, ctx: Context, rule: Rule):
expected = rule.name
expected = expected.lower()
expected = expected.replace(" ", "-")
@@ -77,7 +123,7 @@ class MissingNamespace(Lint):
name = "missing rule namespace"
recommendation = "Add meta.namespace so that the rule is emitted correctly"
def check_rule(self, ctx, rule):
def check_rule(self, ctx: Context, rule: Rule):
return (
"namespace" not in rule.meta
and not is_nursery_rule(rule)
@@ -90,7 +136,7 @@ class NamespaceDoesntMatchRulePath(Lint):
name = "file path doesn't match rule namespace"
recommendation = "Move rule to appropriate directory or update the namespace"
def check_rule(self, ctx, rule):
def check_rule(self, ctx: Context, rule: Rule):
# let the other lints catch namespace issues
if "namespace" not in rule.meta:
return False
@@ -108,7 +154,7 @@ class MissingScope(Lint):
name = "missing scope"
recommendation = "Add meta.scope so that the scope is explicit (defaults to `function`)"
def check_rule(self, ctx, rule):
def check_rule(self, ctx: Context, rule: Rule):
return "scope" not in rule.meta
@@ -116,7 +162,7 @@ class InvalidScope(Lint):
name = "invalid scope"
recommendation = "Use only file, function, or basic block rule scopes"
def check_rule(self, ctx, rule):
def check_rule(self, ctx: Context, rule: Rule):
return rule.meta.get("scope") not in ("file", "function", "basic block")
@@ -124,7 +170,7 @@ class MissingAuthor(Lint):
name = "missing author"
recommendation = "Add meta.author so that users know who to contact with questions"
def check_rule(self, ctx, rule):
def check_rule(self, ctx: Context, rule: Rule):
return "author" not in rule.meta
@@ -132,7 +178,7 @@ class MissingExamples(Lint):
name = "missing examples"
recommendation = "Add meta.examples so that the rule can be tested and verified"
def check_rule(self, ctx, rule):
def check_rule(self, ctx: Context, rule: Rule):
return (
"examples" not in rule.meta
or not isinstance(rule.meta["examples"], list)
@@ -145,7 +191,7 @@ class MissingExampleOffset(Lint):
name = "missing example offset"
recommendation = "Add offset of example function"
def check_rule(self, ctx, rule):
def check_rule(self, ctx: Context, rule: Rule):
if rule.meta.get("scope") in ("function", "basic block"):
examples = rule.meta.get("examples")
if isinstance(examples, list):
@@ -159,7 +205,7 @@ class ExampleFileDNE(Lint):
name = "referenced example doesn't exist"
recommendation = "Add the referenced example to samples directory ($capa-root/tests/data or supplied via --samples)"
def check_rule(self, ctx, rule):
def check_rule(self, ctx: Context, rule: Rule):
if not rule.meta.get("examples"):
# let the MissingExamples lint catch this case, don't double report.
return False
@@ -168,19 +214,52 @@ class ExampleFileDNE(Lint):
for example in rule.meta.get("examples", []):
if example:
example_id = example.partition(":")[0]
if example_id in ctx["samples"]:
if example_id in ctx.samples:
found = True
break
return not found
DEFAULT_SIGNATURES = capa.main.get_default_signatures()
def get_sample_capabilities(ctx: Context, path: Path) -> Set[str]:
nice_path = os.path.abspath(str(path))
if path in ctx.capabilities_by_sample:
logger.debug("found cached results: %s: %d capabilities", nice_path, len(ctx.capabilities_by_sample[path]))
return ctx.capabilities_by_sample[path]
logger.debug("analyzing sample: %s", nice_path)
extractor = capa.main.get_extractor(
nice_path, "auto", capa.main.BACKEND_VIV, DEFAULT_SIGNATURES, False, disable_progress=True
)
capabilities, _ = capa.main.find_capabilities(ctx.rules, extractor, disable_progress=True)
# mypy doesn't seem to be happy with the MatchResults type alias & set(...keys())?
# so we ignore a few types here.
capabilities = set(capabilities.keys()) # type: ignore
assert isinstance(capabilities, set)
logger.debug("computed results: %s: %d capabilities", nice_path, len(capabilities))
ctx.capabilities_by_sample[path] = capabilities
# when i (wb) run the linter in thorough mode locally,
# the OS occasionally kills the process due to memory usage.
# so, be extra aggressive in keeping memory usage down.
#
# tbh, im not sure this actually does anything, but maybe it helps?
gc.collect()
return capabilities
class DoesntMatchExample(Lint):
name = "doesn't match on referenced example"
recommendation = "Fix the rule logic or provide a different example"
def check_rule(self, ctx, rule):
if not ctx["is_thorough"]:
def check_rule(self, ctx: Context, rule: Rule):
if not ctx.is_thorough:
return False
examples = rule.meta.get("examples", [])
@@ -190,29 +269,75 @@ class DoesntMatchExample(Lint):
for example in examples:
example_id = example.partition(":")[0]
try:
path = ctx["samples"][example_id]
path = ctx.samples[example_id]
except KeyError:
# lint ExampleFileDNE will catch this.
# don't double report.
continue
try:
extractor = capa.main.get_extractor(path, "auto", disable_progress=True)
capabilities, meta = capa.main.find_capabilities(ctx["rules"], extractor, disable_progress=True)
capabilities = get_sample_capabilities(ctx, path)
except Exception as e:
logger.error("failed to extract capabilities: %s %s %s", rule.name, path, e)
logger.error("failed to extract capabilities: %s %s %s", rule.name, str(path), e, exc_info=True)
return True
if rule.name not in capabilities:
return True
class StatementWithSingleChildStatement(Lint):
name = "rule contains one or more statements with a single child statement"
recommendation = "remove the superfluous parent statement"
recommendation_template = "remove the superfluous parent statement: {:s}"
violation = False
def check_rule(self, ctx: Context, rule: Rule):
self.violation = False
def rec(statement, is_root=False):
if isinstance(statement, (capa.engine.And, capa.engine.Or)):
children = list(statement.get_children())
if not is_root and len(children) == 1 and isinstance(children[0], capa.engine.Statement):
self.recommendation = self.recommendation_template.format(str(statement))
self.violation = True
for child in children:
rec(child)
rec(rule.statement, is_root=True)
return self.violation
class OrStatementWithAlwaysTrueChild(Lint):
name = "rule contains an `or` statement that's always True because of an `optional` or other child statement that's always True"
recommendation = "clarify the rule logic, e.g. by moving the always True child statement"
recommendation_template = "clarify the rule logic, e.g. by moving the always True child statement: {:s}"
violation = False
def check_rule(self, ctx: Context, rule: Rule):
self.violation = False
def rec(statement):
if isinstance(statement, capa.engine.Or):
children = list(statement.get_children())
for child in children:
# `Some` implements `optional` which is an alias for `0 or more`
if isinstance(child, capa.engine.Some) and child.count == 0:
self.recommendation = self.recommendation_template.format(str(child))
self.violation = True
rec(child)
rec(rule.statement)
return self.violation
class UnusualMetaField(Lint):
name = "unusual meta field"
recommendation = "Remove the meta field"
recommendation_template = 'Remove the meta field: "{:s}"'
def check_rule(self, ctx, rule):
def check_rule(self, ctx: Context, rule: Rule):
for key in rule.meta.keys():
if key in capa.rules.META_KEYS:
continue
@@ -228,7 +353,7 @@ class LibRuleNotInLibDirectory(Lint):
name = "lib rule not found in lib directory"
recommendation = "Move the rule to the `lib` subdirectory of the rules path"
def check_rule(self, ctx, rule):
def check_rule(self, ctx: Context, rule: Rule):
if is_nursery_rule(rule):
return False
@@ -242,7 +367,7 @@ class LibRuleHasNamespace(Lint):
name = "lib rule has a namespace"
recommendation = "Remove the namespace from the rule"
def check_rule(self, ctx, rule):
def check_rule(self, ctx: Context, rule: Rule):
if "lib" not in rule.meta:
return False
@@ -253,9 +378,10 @@ class FeatureStringTooShort(Lint):
name = "feature string too short"
recommendation = 'capa only extracts strings with length >= 4; will not match on "{:s}"'
def check_features(self, ctx, features):
def check_features(self, ctx: Context, features: List[Feature]):
for feature in features:
if isinstance(feature, capa.features.String):
if isinstance(feature, (capa.features.common.String, capa.features.common.Substring)):
assert isinstance(feature.value, str)
if len(feature.value) < 4:
self.recommendation = self.recommendation.format(feature.value)
return True
@@ -270,20 +396,93 @@ class FeatureNegativeNumber(Lint):
'representation; will not match on "{:d}"'
)
def check_features(self, ctx, features):
def check_features(self, ctx: Context, features: List[Feature]):
for feature in features:
if isinstance(feature, (capa.features.insn.Number,)):
assert isinstance(feature.value, int)
if feature.value < 0:
self.recommendation = self.recommendation_template.format(feature.value)
return True
return False
class FeatureNtdllNtoskrnlApi(Lint):
name = "feature api may overlap with ntdll and ntoskrnl"
level = Lint.WARN
recommendation_template = (
"check if {:s} is exported by both ntdll and ntoskrnl; if true, consider removing {:s} "
"module requirement to improve detection"
)
def check_features(self, ctx: Context, features: List[Feature]):
for feature in features:
if isinstance(feature, capa.features.insn.API):
assert isinstance(feature.value, str)
modname, _, impname = feature.value.rpartition(".")
if modname == "ntdll":
if impname in (
"LdrGetProcedureAddress",
"LdrLoadDll",
"NtCreateThread",
"NtCreatUserProcess",
"NtLoadDriver",
"NtQueryDirectoryObject",
"NtResumeThread",
"NtSuspendThread",
"NtTerminateProcess",
"NtWriteVirtualMemory",
"RtlGetNativeSystemInformation",
"NtCreateThreadEx",
"NtCreateUserProcess",
"NtOpenDirectoryObject",
"NtQueueApcThread",
"ZwResumeThread",
"ZwSuspendThread",
"ZwWriteVirtualMemory",
"NtCreateProcess",
"ZwCreateThread",
"NtCreateProcessEx",
"ZwCreateThreadEx",
"ZwCreateProcess",
"ZwCreateUserProcess",
"RtlCreateUserProcess",
):
# ntoskrnl.exe does not export these routines
continue
if modname == "ntoskrnl":
if impname in (
"PsGetVersion",
"PsLookupProcessByProcessId",
"KeStackAttachProcess",
"ObfDereferenceObject",
"KeUnstackDetachProcess",
):
# ntdll.dll does not export these routines
continue
if modname in ("ntdll", "ntoskrnl"):
self.recommendation = self.recommendation_template.format(impname, modname)
return True
return False
class FormatLineFeedEOL(Lint):
name = "line(s) end with CRLF (\\r\\n)"
recommendation = "convert line endings to LF (\\n) for example using dos2unix"
def check_rule(self, ctx: Context, rule: Rule):
if len(rule.definition.split("\r\n")) == 1:
return False
return True
class FormatSingleEmptyLineEOF(Lint):
name = "EOF format"
recommendation = "end file with a single empty line"
def check_rule(self, ctx, rule):
def check_rule(self, ctx: Context, rule: Rule):
if rule.definition.endswith("\n") and not rule.definition.endswith("\n\n"):
return False
return True
@@ -293,25 +492,73 @@ class FormatIncorrect(Lint):
name = "rule format incorrect"
recommendation_template = "use scripts/capafmt.py or adjust as follows\n{:s}"
def check_rule(self, ctx, rule):
def check_rule(self, ctx: Context, rule: Rule):
actual = rule.definition
expected = capa.rules.Rule.from_yaml(rule.definition, use_ruamel=True).to_yaml()
if actual != expected:
diff = difflib.ndiff(actual.splitlines(1), expected.splitlines(1))
self.recommendation = self.recommendation_template.format("".join(diff))
diff = difflib.ndiff(actual.splitlines(1), expected.splitlines(True))
recommendation_template = self.recommendation_template
if "\r\n" in actual:
recommendation_template = (
self.recommendation_template + "\nplease make sure that the file uses LF (\\n) line endings only"
)
self.recommendation = recommendation_template.format("".join(diff))
return True
return False
def run_lints(lints, ctx, rule):
class FormatStringQuotesIncorrect(Lint):
name = "rule string quotes incorrect"
def check_rule(self, ctx: Context, rule: Rule):
events = capa.rules.Rule._get_ruamel_yaml_parser().parse(rule.definition)
for key in events:
if isinstance(key, ruamel.yaml.ScalarEvent) and key.value == "string":
value = next(events) # assume value is next event
if not isinstance(value, ruamel.yaml.ScalarEvent):
# ignore non-scalar
continue
if value.value.startswith("/") and value.value.endswith(("/", "/i")):
# ignore regex for now
continue
if value.style is None:
# no quotes
self.recommendation = 'add double quotes to "%s"' % value.value
return True
if value.style == "'":
# single quote
self.recommendation = 'change single quotes to double quotes for "%s"' % value.value
return True
elif isinstance(key, ruamel.yaml.ScalarEvent) and key.value == "substring":
value = next(events) # assume value is next event
if not isinstance(value, ruamel.yaml.ScalarEvent):
# ignore non-scalar
continue
if value.style is None:
# no quotes
self.recommendation = 'add double quotes to "%s"' % value.value
return True
if value.style == "'":
# single quote
self.recommendation = 'change single quotes to double quotes for "%s"' % value.value
return True
else:
continue
return False
def run_lints(lints, ctx: Context, rule: Rule):
for lint in lints:
if lint.check_rule(ctx, rule):
yield lint
def run_feature_lints(lints, ctx, features):
def run_feature_lints(lints, ctx: Context, features: List[Feature]):
for lint in lints:
if lint.check_features(ctx, features):
yield lint
@@ -323,7 +570,7 @@ NAME_LINTS = (
)
def lint_name(ctx, rule):
def lint_name(ctx: Context, rule: Rule):
return run_lints(NAME_LINTS, ctx, rule)
@@ -333,7 +580,7 @@ SCOPE_LINTS = (
)
def lint_scope(ctx, rule):
def lint_scope(ctx: Context, rule: Rule):
return run_lints(SCOPE_LINTS, ctx, rule)
@@ -350,28 +597,27 @@ META_LINTS = (
)
def lint_meta(ctx, rule):
def lint_meta(ctx: Context, rule: Rule):
return run_lints(META_LINTS, ctx, rule)
FEATURE_LINTS = (
FeatureStringTooShort(),
FeatureNegativeNumber(),
)
FEATURE_LINTS = (FeatureStringTooShort(), FeatureNegativeNumber(), FeatureNtdllNtoskrnlApi())
def lint_features(ctx, rule):
def lint_features(ctx: Context, rule: Rule):
features = get_features(ctx, rule)
return run_feature_lints(FEATURE_LINTS, ctx, features)
FORMAT_LINTS = (
FormatLineFeedEOL(),
FormatSingleEmptyLineEOF(),
FormatStringQuotesIncorrect(),
FormatIncorrect(),
)
def lint_format(ctx, rule):
def lint_format(ctx: Context, rule: Rule):
return run_lints(FORMAT_LINTS, ctx, rule)
@@ -379,11 +625,11 @@ def get_normpath(path):
return posixpath.normpath(path).replace(os.sep, "/")
def get_features(ctx, rule):
def get_features(ctx: Context, rule: Rule):
# get features from rule and all dependencies including subscopes and matched rules
features = []
namespaces = capa.rules.index_rules_by_namespace([rule])
deps = [ctx["rules"].rules[dep] for dep in rule.get_dependencies(namespaces)]
namespaces = ctx.rules.rules_by_namespace
deps = [ctx.rules.rules[dep] for dep in rule.get_dependencies(namespaces)]
for r in [rule] + deps:
features.extend(get_rule_features(r))
return features
@@ -403,10 +649,14 @@ def get_rule_features(rule):
return features
LOGIC_LINTS = (DoesntMatchExample(),)
LOGIC_LINTS = (
DoesntMatchExample(),
StatementWithSingleChildStatement(),
OrStatementWithAlwaysTrueChild(),
)
def lint_logic(ctx, rule):
def lint_logic(ctx: Context, rule: Rule):
return run_lints(LOGIC_LINTS, ctx, rule)
@@ -419,7 +669,7 @@ def is_nursery_rule(rule):
return rule.meta.get("capa/nursery")
def lint_rule(ctx, rule):
def lint_rule(ctx: Context, rule: Rule):
logger.debug(rule.name)
violations = list(
@@ -434,58 +684,124 @@ def lint_rule(ctx, rule):
)
if len(violations) > 0:
category = rule.meta.get("rule-category")
# don't show nursery rules with a single violation: needs examples.
# this is by far the most common reason to be in the nursery,
# and ends up just producing a lot of noise.
if not (is_nursery_rule(rule) and len(violations) == 1 and violations[0].name == "missing examples"):
category = rule.meta.get("rule-category")
print("")
print(
"%s%s %s"
% (
" (nursery) " if is_nursery_rule(rule) else "",
rule.name,
("(%s)" % category) if category else "",
)
)
level = "WARN" if is_nursery_rule(rule) else "FAIL"
for violation in violations:
print("")
print(
"%s %s: %s: %s"
"%s%s %s"
% (
" " if is_nursery_rule(rule) else "",
level,
violation.name,
violation.recommendation,
" (nursery) " if is_nursery_rule(rule) else "",
rule.name,
("(%s)" % category) if category else "",
)
)
elif len(violations) == 0 and is_nursery_rule(rule):
print("")
print("%s%s" % (" (nursery) ", rule.name))
print("%s %s: %s: %s" % (" ", "WARN", "no violations", "Graduate the rule"))
for violation in violations:
print(
"%s %s: %s: %s"
% (
" " if is_nursery_rule(rule) else "",
Lint.WARN if is_nursery_rule(rule) else violation.level,
violation.name,
violation.recommendation,
)
)
return len(violations) > 0 and not is_nursery_rule(rule)
print("")
if is_nursery_rule(rule):
has_examples = not any(map(lambda v: v.level == Lint.FAIL and v.name == "missing examples", violations))
lints_failed = len(
tuple(
filter(
lambda v: v.level == Lint.FAIL
and not (v.name == "missing examples" or v.name == "referenced example doesn't exist"),
violations,
)
)
)
lints_warned = len(
tuple(
filter(
lambda v: v.level == Lint.WARN
or (v.level == Lint.FAIL and v.name == "referenced example doesn't exist"),
violations,
)
)
)
if (not lints_failed) and (not lints_warned) and has_examples:
print("")
print("%s%s" % (" (nursery) ", rule.name))
print("%s %s: %s: %s" % (" ", Lint.WARN, green("no lint failures"), "Graduate the rule"))
print("")
else:
lints_failed = len(tuple(filter(lambda v: v.level == Lint.FAIL, violations)))
lints_warned = len(tuple(filter(lambda v: v.level == Lint.WARN, violations)))
return (lints_failed, lints_warned)
def lint(ctx, rules):
def width(s, count):
if len(s) > count:
return s[: count - 3] + "..."
else:
return s.ljust(count)
@contextlib.contextmanager
def redirecting_print_to_tqdm():
"""
Args:
samples (Dict[string, string]): map from sample id to path.
for each sample, record sample id of sha256, md5, and filename.
see `collect_samples(path)`.
rules (List[Rule]): the rules to lint.
tqdm (progress bar) expects to have fairly tight control over console output.
so calls to `print()` will break the progress bar and make things look bad.
so, this context manager temporarily replaces the `print` implementation
with one that is compatible with tqdm.
via: https://stackoverflow.com/a/42424890/87207
"""
did_suggest_fix = False
for rule in rules.rules.values():
if rule.meta.get("capa/subscope-rule", False):
continue
old_print = print
did_suggest_fix = lint_rule(ctx, rule) or did_suggest_fix
def new_print(*args, **kwargs):
return did_suggest_fix
# If tqdm.tqdm.write raises error, use builtin print
try:
tqdm.tqdm.write(*args, **kwargs)
except:
old_print(*args, **kwargs)
try:
# Globally replace print with new_print
inspect.builtins.print = new_print
yield
finally:
inspect.builtins.print = old_print
def collect_samples(path):
def lint(ctx: Context):
"""
Returns: Dict[string, Tuple(int, int)]
- # lints failed
- # lints warned
"""
ret = {}
with tqdm.contrib.logging.tqdm_logging_redirect(ctx.rules.rules.items(), unit="rule") as pbar:
with redirecting_print_to_tqdm():
for name, rule in pbar:
if rule.meta.get("capa/subscope-rule", False):
continue
pbar.set_description(width("linting rule: %s" % (name), 48))
ret[name] = lint_rule(ctx, rule)
return ret
def collect_samples(path) -> Dict[str, Path]:
"""
recurse through the given path, collecting all file paths, indexed by their content sha256, md5, and filename.
"""
@@ -503,10 +819,10 @@ def collect_samples(path):
if name.endswith(".fnames"):
continue
path = os.path.join(root, name)
path = pathlib.Path(os.path.join(root, name))
try:
with open(path, "rb") as f:
with path.open("rb") as f:
buf = f.read()
except IOError:
continue
@@ -532,7 +848,8 @@ def main(argv=None):
samples_path = os.path.join(os.path.dirname(__file__), "..", "tests", "data")
parser = argparse.ArgumentParser(description="A program.")
parser = argparse.ArgumentParser(description="Lint capa rules.")
capa.main.install_common_args(parser, wanted={"tag"})
parser.add_argument("rules", type=str, help="Path to rules")
parser.add_argument("--samples", type=str, default=samples_path, help="Path to samples")
parser.add_argument(
@@ -540,24 +857,15 @@ def main(argv=None):
action="store_true",
help="Enable thorough linting - takes more time, but does a better job",
)
parser.add_argument("-t", "--tag", type=str, help="filter on rule meta field values")
parser.add_argument("-v", "--verbose", action="store_true", help="Enable debug logging")
parser.add_argument("-q", "--quiet", action="store_true", help="Disable all output but errors")
args = parser.parse_args(args=argv)
capa.main.handle_common_args(args)
if args.verbose:
level = logging.DEBUG
elif args.quiet:
level = logging.ERROR
if args.debug:
logging.getLogger("capa").setLevel(logging.DEBUG)
logging.getLogger("viv_utils").setLevel(logging.DEBUG)
else:
level = logging.INFO
logging.basicConfig(level=level)
logging.getLogger("capa.lint").setLevel(level)
capa.main.set_vivisect_log_level(logging.CRITICAL)
logging.getLogger("capa").setLevel(logging.CRITICAL)
logging.getLogger("viv_utils").setLevel(logging.CRITICAL)
logging.getLogger("capa").setLevel(logging.ERROR)
logging.getLogger("viv_utils").setLevel(logging.ERROR)
time0 = time.time()
@@ -581,22 +889,35 @@ def main(argv=None):
samples = collect_samples(args.samples)
ctx = {
"samples": samples,
"rules": rules,
"is_thorough": args.thorough,
}
ctx = Context(samples=samples, rules=rules, is_thorough=args.thorough)
did_violate = lint(ctx, rules)
results_by_name = lint(ctx)
failed_rules = []
warned_rules = []
for name, (fail_count, warn_count) in results_by_name.items():
if fail_count > 0:
failed_rules.append(name)
if warn_count > 0:
warned_rules.append(name)
min, sec = divmod(time.time() - time0, 60)
logger.debug("lints ran for ~ %02d:%02dm", min, sec)
if not did_violate:
logger.info("no suggestions, nice!")
return 0
else:
if warned_rules:
print(orange("rules with WARN:"))
for warned_rule in sorted(warned_rules):
print(" - " + warned_rule)
print()
if failed_rules:
print(red("rules with FAIL:"))
for failed_rule in sorted(failed_rules):
print(" - " + failed_rule)
return 1
else:
logger.info(green("no lints failed, nice!"))
return 0
if __name__ == "__main__":

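To illustrate the extension point used throughout the lint script above, a hypothetical additional lint would subclass Lint, implement check_rule(), and be registered in one of the *_LINTS tuples. The class below is illustrative and not part of the script:

class MissingDescription(Lint):
    # hypothetical lint: flag rules whose meta block has no description field
    name = "missing description"
    recommendation = "Add meta.description so that users understand what the rule detects"

    def check_rule(self, ctx: Context, rule: Rule):
        return "description" not in rule.meta

Appending an instance to META_LINTS would make lint_meta() pick it up alongside the existing meta lints.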

@@ -0,0 +1,134 @@
#!/usr/bin/env python3
"""
Copyright (C) 2021 FireEye, Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at: [package root]/LICENSE.txt
Unless required by applicable law or agreed to in writing, software distributed under the License
is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and limitations under the License.
match-function-id
Show the names of functions as recognized by the function identification subsystem.
This can help identify library functions statically linked into a program,
such as when triaging false positive matches in capa rules.
Example::
$ python scripts/match-function-id.py --signature sigs/vc6.pat.gz /tmp/suspicious.dll_
0x44cf30: ?GetPdbDll@@YAPAUHINSTANCE__@@XZ
0x44bb20: ?_strlen_priv@@YAIPBD@Z
0x44b6b0: ?invoke_main@@YAHXZ
0x44a5d0: ?find_pe_section@@YAPAU_IMAGE_SECTION_HEADER@@QAEI@Z
0x44a690: ?is_potentially_valid_image_base@@YA_NQAX@Z
0x44cbe0: ___get_entropy
0x44a4a0: __except_handler4
0x44b3d0: ?pre_cpp_initialization@@YAXXZ
0x44b2e0: ?pre_c_initialization@@YAHXZ
0x44b3c0: ?post_pgo_initialization@@YAHXZ
0x420156: ?
0x420270: ?
0x430dcd: ?
0x44d930: __except_handler4_noexcept
0x41e960: ?
0x44a1e0: @_RTC_AllocaHelper@12
0x44ba90: ?_getMemBlockDataString@@YAXPAD0PBDI@Z
0x44a220: @_RTC_CheckStackVars2@12
0x44a790: ___scrt_dllmain_after_initialize_c
0x44a7d0: ___scrt_dllmain_before_initialize_c
0x44a800: ___scrt_dllmain_crt_thread_attach
0x44a860: ___scrt_dllmain_exception_filter
0x44a900: ___scrt_dllmain_uninitialize_critical
0x44ad10: _at_quick_exit
0x44b940: ?_RTC_Failure@@YAXPAXH@Z
0x44be60: __RTC_UninitUse
0x44bfd0: __RTC_GetErrDesc
0x44c060: __RTC_SetErrorType
0x44cb60: ?
0x44cba0: __guard_icall_checks_enforced
"""
import sys
import logging
import argparse
import flirt
import viv_utils
import viv_utils.flirt
import capa.main
import capa.rules
import capa.engine
import capa.helpers
import capa.features
import capa.features.freeze
logger = logging.getLogger("capa.match-function-id")
def main(argv=None):
if argv is None:
argv = sys.argv[1:]
parser = argparse.ArgumentParser(description="FLIRT match each function")
parser.add_argument("sample", type=str, help="Path to sample to analyze")
parser.add_argument(
"-F",
"--function",
type=lambda x: int(x, 0x10),
help="match a specific function by VA, rather than add functions",
)
parser.add_argument(
"--signature",
action="append",
dest="signatures",
type=str,
default=[],
help="use the given signatures to identify library functions, file system paths to .sig/.pat files.",
)
parser.add_argument("-d", "--debug", action="store_true", help="Enable debugging output on STDERR")
parser.add_argument("-q", "--quiet", action="store_true", help="Disable all output but errors")
args = parser.parse_args(args=argv)
if args.quiet:
logging.basicConfig(level=logging.ERROR)
logging.getLogger().setLevel(logging.ERROR)
elif args.debug:
logging.basicConfig(level=logging.DEBUG)
logging.getLogger().setLevel(logging.DEBUG)
else:
logging.basicConfig(level=logging.INFO)
logging.getLogger().setLevel(logging.INFO)
# disable vivisect-related logging, it's verbose and not relevant for capa users
capa.main.set_vivisect_log_level(logging.CRITICAL)
analyzers = []
for sigpath in args.signatures:
sigs = viv_utils.flirt.load_flirt_signature(sigpath)
with capa.main.timing("flirt: compiling sigs"):
matcher = flirt.compile(sigs)
analyzer = viv_utils.flirt.FlirtFunctionAnalyzer(matcher, sigpath)
logger.debug("registering viv function analyzer: %s", repr(analyzer))
analyzers.append(analyzer)
vw = viv_utils.getWorkspace(args.sample, analyze=True, should_save=False)
functions = vw.getFunctions()
if args.function:
functions = [args.function]
for function in functions:
logger.debug("matching function: 0x%04x", function)
for analyzer in analyzers:
name = viv_utils.flirt.match_function_flirt_signatures(analyzer.matcher, vw, function)
if name:
print("0x%04x: %s" % (function, name))
return 0
if __name__ == "__main__":
sys.exit(main())


@@ -1,167 +0,0 @@
#!/usr/bin/env python
"""
migrate rules and their namespaces.
example:
$ python scripts/migrate-rules.py migration.csv ./rules ./new-rules
Copyright (C) 2020 FireEye, Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at: [package root]/LICENSE.txt
Unless required by applicable law or agreed to in writing, software distributed under the License
is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and limitations under the License.
"""
import os
import csv
import sys
import logging
import os.path
import argparse
import collections
import capa.rules
logger = logging.getLogger("migrate-rules")
def read_plan(plan_path):
with open(plan_path, "rb") as f:
return list(
csv.DictReader(
f,
restkey="other",
fieldnames=(
"existing path",
"existing name",
"existing rule-category",
"proposed name",
"proposed namespace",
"ATT&CK",
"MBC",
"comment1",
),
)
)
def read_rules(rule_directory):
rules = {}
for root, dirs, files in os.walk(rule_directory):
for file in files:
path = os.path.join(root, file)
if not path.endswith(".yml"):
logger.info("skipping file: %s", path)
continue
rule = capa.rules.Rule.from_yaml_file(path)
rules[rule.name] = rule
if "nursery" in path:
rule.meta["capa/nursery"] = True
return rules
def main(argv=None):
if argv is None:
argv = sys.argv[1:]
parser = argparse.ArgumentParser(description="migrate rules.")
parser.add_argument("plan", type=str, help="Path to CSV describing migration")
parser.add_argument("source", type=str, help="Source directory of rules")
parser.add_argument("destination", type=str, help="Destination directory of rules")
args = parser.parse_args(args=argv)
logging.basicConfig(level=logging.INFO)
logging.getLogger().setLevel(logging.INFO)
plan = read_plan(args.plan)
logger.info("read %d plan entries", len(plan))
rules = read_rules(args.source)
logger.info("read %d rules", len(rules))
planned_rules = set([row["existing name"] for row in plan])
unplanned_rules = [rule for (name, rule) in rules.items() if name not in planned_rules]
if unplanned_rules:
logger.error("plan does not account for %d rules:" % (len(unplanned_rules)))
for rule in unplanned_rules:
logger.error(" " + rule.name)
return -1
# pairs of strings (needle, replacement)
match_translations = []
for row in plan:
if not row["existing name"]:
continue
rule = rules[row["existing name"]]
if rule.meta["name"] != row["proposed name"]:
logger.info("renaming rule '%s' -> '%s'", rule.meta["name"], row["proposed name"])
# assume the yaml is formatted like `- match: $rule-name`.
# but since its been linted, this should be ok.
match_translations.append(("- match: " + rule.meta["name"], "- match: " + row["proposed name"]))
rule.meta["name"] = row["proposed name"]
rule.name = row["proposed name"]
if "rule-category" in rule.meta:
logger.info("deleting rule category '%s'", rule.meta["rule-category"])
del rule.meta["rule-category"]
rule.meta["namespace"] = row["proposed namespace"]
if row["ATT&CK"] != "n/a" and row["ATT&CK"] != "":
tag = row["ATT&CK"]
name, _, id = tag.rpartition(" ")
tag = "%s [%s]" % (name, id)
rule.meta["att&ck"] = [tag]
if row["MBC"] != "n/a" and row["MBC"] != "":
tag = row["MBC"]
rule.meta["mbc"] = [tag]
for rule in rules.values():
filename = rule.name
filename = filename.lower()
filename = filename.replace(" ", "-")
filename = filename.replace("(", "")
filename = filename.replace(")", "")
filename = filename.replace("+", "")
filename = filename.replace("/", "")
filename = filename + ".yml"
try:
if rule.meta.get("capa/nursery"):
directory = os.path.join(args.destination, "nursery")
elif rule.meta.get("lib"):
directory = os.path.join(args.destination, "lib")
else:
directory = os.path.join(args.destination, rule.meta.get("namespace"))
os.makedirs(directory)
except OSError:
pass
else:
logger.info("created namespace: %s", directory)
path = os.path.join(directory, filename)
logger.info("writing rule %s", path)
doc = rule.to_yaml().decode("utf-8")
for (needle, replacement) in match_translations:
doc = doc.replace(needle, replacement)
with open(path, "wb") as f:
f.write(doc.encode("utf-8"))
return 0
if __name__ == "__main__":
sys.exit(main())

scripts/profile-memory.py Normal file

@@ -0,0 +1,73 @@
import gc
import linecache
import tracemalloc
tracemalloc.start()
def display_top(snapshot, key_type="lineno", limit=10):
# via: https://docs.python.org/3/library/tracemalloc.html#pretty-top
snapshot = snapshot.filter_traces(
(
tracemalloc.Filter(False, "<frozen importlib._bootstrap_external>"),
tracemalloc.Filter(False, "<frozen importlib._bootstrap>"),
tracemalloc.Filter(False, "<unknown>"),
)
)
top_stats = snapshot.statistics(key_type)
print("Top %s lines" % limit)
for index, stat in enumerate(top_stats[:limit], 1):
frame = stat.traceback[0]
print("#%s: %s:%s: %.1f KiB" % (index, frame.filename, frame.lineno, stat.size / 1024))
line = linecache.getline(frame.filename, frame.lineno).strip()
if line:
print(" %s" % line)
other = top_stats[limit:]
if other:
size = sum(stat.size for stat in other)
print("%s other: %.1f KiB" % (len(other), size / 1024))
total = sum(stat.size for stat in top_stats)
print("Total allocated size: %.1f KiB" % (total / 1024))
def main():
# import within main to keep isort happy
# while also invoking tracemalloc.start() immediately upon start.
import io
import os
import time
import contextlib
import psutil
import capa.main
count = int(os.environ.get("CAPA_PROFILE_COUNT", 1))
print("total iterations planned: %d (set via env var CAPA_PROFILE_COUNT)." % (count))
print()
for i in range(count):
print("iteration %d/%d..." % (i + 1, count))
with contextlib.redirect_stdout(io.StringIO()):
with contextlib.redirect_stderr(io.StringIO()):
t0 = time.time()
capa.main.main()
t1 = time.time()
gc.collect()
process = psutil.Process(os.getpid())
print(" duration: %0.02fs" % (t1 - t0))
print(" rss: %.1f MiB" % (process.memory_info().rss / 1024 / 1024))
print(" vms: %.1f MiB" % (process.memory_info().vms / 1024 / 1024))
print("done.")
gc.collect()
snapshot0 = tracemalloc.take_snapshot()
display_top(snapshot0)
main()
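As a hypothetical way to drive the profiler above from another script (paths and arguments are placeholders; the wrapped capa.main.main() call reads whatever arguments are passed on the command line):

import os
import subprocess
import sys

# run capa three times over the same sample so memory growth between iterations is visible
env = dict(os.environ, CAPA_PROFILE_COUNT="3")
subprocess.run([sys.executable, "scripts/profile-memory.py", "/tmp/suspicious.exe"], env=env, check=False)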


@@ -59,11 +59,10 @@ import colorama
import capa.main
import capa.rules
import capa.engine
import capa.render
import capa.features
import capa.render.utils as rutils
import capa.features.freeze
import capa.features.extractors.viv
import capa.render.result_document
from capa.helpers import get_file_taste
logger = logging.getLogger("capa.show-capabilities-by-function")
@@ -111,143 +110,89 @@ def main(argv=None):
    if argv is None:
        argv = sys.argv[1:]
    formats = [
        ("auto", "(default) detect file type automatically"),
        ("pe", "Windows PE file"),
        ("sc32", "32-bit shellcode"),
        ("sc64", "64-bit shellcode"),
        ("freeze", "features previously frozen by capa"),
    ]
    format_help = ", ".join(["%s: %s" % (f[0], f[1]) for f in formats])
    parser = argparse.ArgumentParser(description="detect capabilities in programs.")
    capa.main.install_common_args(parser, wanted={"format", "backend", "sample", "signatures", "rules", "tag"})
    args = parser.parse_args(args=argv)
    capa.main.handle_common_args(args)
    parser = argparse.ArgumentParser(description="detect capabilities in programs.")
    parser.add_argument("sample", type=str, help="Path to sample to analyze")
    parser.add_argument(
        "-r",
        "--rules",
        type=str,
        default="(embedded rules)",
        help="Path to rule file or directory, use embedded rules by default",
    )
    parser.add_argument("-t", "--tag", type=str, help="Filter on rule meta field values")
    parser.add_argument("-d", "--debug", action="store_true", help="Enable debugging output on STDERR")
    parser.add_argument("-q", "--quiet", action="store_true", help="Disable all output but errors")
    parser.add_argument(
        "-f",
        "--format",
        choices=[f[0] for f in formats],
        default="auto",
        help="Select sample format, %s" % format_help,
    )
    args = parser.parse_args(args=argv)
    try:
        taste = get_file_taste(args.sample)
    except IOError as e:
        logger.error("%s", str(e))
        return -1
    if args.quiet:
        logging.basicConfig(level=logging.ERROR)
        logging.getLogger().setLevel(logging.ERROR)
    elif args.debug:
        logging.basicConfig(level=logging.DEBUG)
        logging.getLogger().setLevel(logging.DEBUG)
    else:
        logging.basicConfig(level=logging.INFO)
        logging.getLogger().setLevel(logging.INFO)
    try:
        rules = capa.main.get_rules(args.rules)
        rules = capa.rules.RuleSet(rules)
        logger.info("successfully loaded %s rules", len(rules))
        if args.tag:
            rules = rules.filter_rules_by_meta(args.tag)
            logger.info("selected %s rules", len(rules))
    except (IOError, capa.rules.InvalidRule, capa.rules.InvalidRuleSet) as e:
        logger.error("%s", str(e))
        return -1
    # disable vivisect-related logging, it's verbose and not relevant for capa users
    capa.main.set_vivisect_log_level(logging.CRITICAL)
    try:
        sig_paths = capa.main.get_signatures(args.signatures)
    except (IOError) as e:
        logger.error("%s", str(e))
        return -1
    if (args.format == "freeze") or (args.format == "auto" and capa.features.freeze.is_freeze(taste)):
        format = "freeze"
        with open(args.sample, "rb") as f:
            extractor = capa.features.freeze.load(f.read())
    else:
        format = args.format
        should_save_workspace = os.environ.get("CAPA_SAVE_WORKSPACE") not in ("0", "no", "NO", "n", None)
    try:
        taste = get_file_taste(args.sample)
    except IOError as e:
        logger.error("%s", str(e))
            extractor = capa.main.get_extractor(
                args.sample, args.format, args.backend, sig_paths, should_save_workspace
            )
        except capa.main.UnsupportedFormatError:
            logger.error("-" * 80)
            logger.error(" Input file does not appear to be a PE file.")
            logger.error(" ")
            logger.error(
                " capa currently only supports analyzing PE files (or shellcode, when using --format sc32|sc64)."
            )
            logger.error(" If you don't know the input file type, you can try using the `file` utility to guess it.")
            logger.error("-" * 80)
            return -1
        except capa.main.UnsupportedRuntimeError:
            logger.error("-" * 80)
            logger.error(" Unsupported runtime or Python interpreter.")
            logger.error(" ")
            logger.error(" capa supports running under Python 2.7 using Vivisect for binary analysis.")
            logger.error(" It can also run within IDA Pro, using either Python 2.7 or 3.5+.")
            logger.error(" ")
            logger.error(" If you're seeing this message on the command line, please ensure you're running Python 2.7.")
            logger.error("-" * 80)
            return -1
    # py2 doesn't know about cp65001, which is a variant of utf-8 on windows
    # tqdm bails when trying to render the progress bar in this setup.
    # because cp65001 is utf-8, we just map that codepage to the utf-8 codec.
    # see #380 and: https://stackoverflow.com/a/3259271/87207
    import codecs
    meta = capa.main.collect_metadata(argv, args.sample, args.rules, extractor)
    capabilities, counts = capa.main.find_capabilities(rules, extractor)
    meta["analysis"].update(counts)
    codecs.register(lambda name: codecs.lookup("utf-8") if name == "cp65001" else None)
    if args.rules == "(embedded rules)":
        logger.info("-" * 80)
        logger.info(" Using default embedded rules.")
        logger.info(" To provide your own rules, use the form `capa.exe -r ./path/to/rules/ /path/to/mal.exe`.")
        logger.info(" You can see the current default rule set here:")
        logger.info(" https://github.com/fireeye/capa-rules")
        logger.info("-" * 80)
        logger.debug("detected running from source")
        args.rules = os.path.join(os.path.dirname(__file__), "..", "rules")
        logger.debug("default rule path (source method): %s", args.rules)
    else:
        logger.info("using rules path: %s", args.rules)
    try:
        rules = capa.main.get_rules(args.rules)
        rules = capa.rules.RuleSet(rules)
        logger.info("successfully loaded %s rules", len(rules))
        if args.tag:
            rules = rules.filter_rules_by_meta(args.tag)
            logger.info("selected %s rules", len(rules))
    except (IOError, capa.rules.InvalidRule, capa.rules.InvalidRuleSet) as e:
        logger.error("%s", str(e))
    if capa.main.has_file_limitation(rules, capabilities):
        # bail if capa encountered file limitation e.g. a packed binary
        # do show the output in verbose mode, though.
        if not (args.verbose or args.vverbose or args.json):
            return -1
    if (args.format == "freeze") or (args.format == "auto" and capa.features.freeze.is_freeze(taste)):
        format = "freeze"
        with open(args.sample, "rb") as f:
            extractor = capa.features.freeze.load(f.read())
    else:
        format = args.format
        try:
            extractor = capa.main.get_extractor(args.sample, args.format)
        except capa.main.UnsupportedFormatError:
            logger.error("-" * 80)
            logger.error(" Input file does not appear to be a PE file.")
            logger.error(" ")
            logger.error(
                " capa currently only supports analyzing PE files (or shellcode, when using --format sc32|sc64)."
            )
            logger.error(
                " If you don't know the input file type, you can try using the `file` utility to guess it."
            )
            logger.error("-" * 80)
            return -1
        except capa.main.UnsupportedRuntimeError:
            logger.error("-" * 80)
            logger.error(" Unsupported runtime or Python interpreter.")
            logger.error(" ")
            logger.error(" capa supports running under Python 2.7 using Vivisect for binary analysis.")
            logger.error(" It can also run within IDA Pro, using either Python 2.7 or 3.5+.")
            logger.error(" ")
            logger.error(
                " If you're seeing this message on the command line, please ensure you're running Python 2.7."
            )
            logger.error("-" * 80)
            return -1
    # colorama will detect:
    # - when on Windows console, and fixup coloring, and
    # - when not an interactive session, and disable coloring
    # renderers should use coloring and assume it will be stripped out if necessary.
    colorama.init()
    doc = capa.render.result_document.convert_capabilities_to_result_document(meta, rules, capabilities)
    print(render_matches_by_function(doc))
    colorama.deinit()
    meta = capa.main.collect_metadata(argv, args.sample, args.rules, format, extractor)
    capabilities, counts = capa.main.find_capabilities(rules, extractor)
    meta["analysis"].update(counts)
    logger.info("done.")
    if capa.main.has_file_limitation(rules, capabilities):
        # bail if capa encountered file limitation e.g. a packed binary
        # do show the output in verbose mode, though.
        if not (args.verbose or args.vverbose or args.json):
            return -1
    # colorama will detect:
    # - when on Windows console, and fixup coloring, and
    # - when not an interactive session, and disable coloring
    # renderers should use coloring and assume it will be stripped out if necessary.
    colorama.init()
    doc = capa.render.convert_capabilities_to_result_document(meta, rules, capabilities)
    print(render_matches_by_function(doc))
    colorama.deinit()
    logger.info("done.")
    return 0
    return 0
if __name__ == "__main__":

Some files were not shown because too many files have changed in this diff.
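The refactor shown in the diff above replaces the script's hand-rolled option parsing with capa's shared CLI helpers, capa.main.install_common_args and capa.main.handle_common_args, and loads rules via capa.main.get_rules. A trimmed skeleton of that pattern, assuming capa v3.x is installed; error handling, the signature/extractor steps, and the rendering from the full script are omitted, and the final print is placeholder output:

import sys
import argparse

import capa.main
import capa.rules


def main(argv=None):
    if argv is None:
        argv = sys.argv[1:]

    parser = argparse.ArgumentParser(description="detect capabilities in programs.")
    # register capa's shared options (sample, --format, --backend, --signatures, --rules, --tag)
    capa.main.install_common_args(parser, wanted={"format", "backend", "sample", "signatures", "rules", "tag"})
    args = parser.parse_args(args=argv)
    # apply the shared behavior tied to those options (logging verbosity, etc.)
    capa.main.handle_common_args(args)

    # load and wrap rules the same way the refactored script does
    rules = capa.rules.RuleSet(capa.main.get_rules(args.rules))
    print("loaded %d rules" % len(rules))
    return 0


if __name__ == "__main__":
    sys.exit(main())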