---
title: Security
description: Security practices and processes for Powertools for AWS Lambda (Python)
---

## Overview

Open Source Security Foundation Best Practices

This page describes our security processes and supply chain practices.

!!! info "We continuously check and evolve our practices, therefore some diagrams may be eventually consistent."

--8<-- "SECURITY.md"

## Supply chain

### Verifying signed builds

!!! note "Starting from v2.20.0 releases, builds are reproducible{target="_blank"} and signed publicly."

![SLSA Supply Chain Threats](https://slsa.dev/images/v1.0/supply-chain-threats.svg)

Supply Chain Threats visualized by SLSA

#### Terminology

We use SLSA{target="_blank"} to ensure our builds are reproducible and to adhere to supply chain security practices.

On our releases page, you will notice a new metadata file: `multiple.intoto.jsonl`. It describes where, when, and how our build artifacts were produced - or simply, an attestation in SLSA terminology.

For this to be useful, we need a verification tool - SLSA Verifier. SLSA Verifier decodes the attestation to confirm its authenticity, our identity, and the steps we took in our release pipeline (e.g., inputs, git commit/branch, GitHub org/repo, build SHA256, etc.).
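
If you are curious about the attestation itself, each line of `multiple.intoto.jsonl` is a signed envelope whose base64-encoded payload is the provenance statement. A minimal sketch to peek inside it - assuming `jq` and `base64` are available; this is purely informational and not required for verification:

```bash
# decode the first envelope's payload and pretty-print the provenance statement
# (older macOS may need `base64 -D` instead of `--decode`)
jq -r '.payload' multiple.intoto.jsonl | head -n 1 | base64 --decode | jq .
```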

#### HOWTO

You can verify a release manually or automatically via a shell script. We maintain the latter to ease adoption in CI systems (feel free to modify it to your needs).

=== "Manually"

    * Download the [SLSA Verifier binary](https://github.com/slsa-framework/slsa-verifier#download-the-binary)
    * Download the [latest release artifact from PyPI](https://pypi.org/project/aws-lambda-powertools/#files) (either wheel or tar.gz)
    * Download the `multiple.intoto.jsonl` attestation from the [latest release](https://github.com/aws-powertools/powertools-lambda-python/releases/latest) under _Assets_ (a command-line sketch of these downloads follows below)
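
    If you prefer the command line for these downloads, here is a rough sketch assuming release v2.20.0; the release tag format and asset names are assumptions, so double-check them against the release page:

    ```bash
    # fetch the v2.20.0 wheel from PyPI into the current directory (wheel preferred by default)
    pip download aws-lambda-powertools==2.20.0 --no-deps --dest .

    # fetch the attestation attached to the corresponding GitHub release (tag assumed to be v2.20.0)
    curl -LO https://github.com/aws-powertools/powertools-lambda-python/releases/download/v2.20.0/multiple.intoto.jsonl
    ```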

!!! note "Next steps assume macOS as the operating system, and release v2.20.0"

You should have the following files in the current directory:

* **SLSA Verifier tool**: `slsa-verifier-darwin-arm64`
* **Powertools Release artifact**: `aws_lambda_powertools-2.20.0-py3-none-any.whl`
* **Powertools attestation**: `multiple.intoto.jsonl`

You can now run SLSA Verifier with the following options:

```bash
./slsa-verifier-darwin-arm64 verify-artifact \
    --provenance-path "multiple.intoto.jsonl" \
    --source-uri github.com/aws-powertools/powertools-lambda-python \
    aws_lambda_powertools-2.20.0-py3-none-any.whl
```

=== "Automated"

    ```shell title="Verifying a release with verify_provenance.sh script"
    bash verify_provenance.sh 2.20.0
    ```

!!! question "Wait, what does this script do?"

    I'm glad you asked! It takes the following actions:

    1. **Downloads SLSA Verifier** using a pinned version (_e.g., 2.3.0_)
    2. **Verifies the integrity** of the newly downloaded SLSA Verifier tool
    3. **Downloads the attestation** file for the given release version
    4. **Downloads the `aws-lambda-powertools`** release artifact from PyPI for the given release version
    5. **Runs SLSA Verifier** against the attestation, GitHub source, and release artifact
    6. **Cleans up** by removing downloaded files to keep your current directory tidy

??? info "Expand or [click here](https://github.com/heitorlessa/aws-lambda-powertools-python/blob/refactor/ci-seal/.github/actions/verify-provenance/verify_provenance.sh#L95){target="_blank"} to see the script source code"

      ```bash title=".github/actions/verify-provenance/verify_provenance.sh"
      ---8<-- ".github/actions/verify-provenance/verify_provenance.sh"
      ```

## Continuous integration practices

!!! note "We adhere to industry recommendations from the OSSF Scorecard project{target="_blank"}, among others{target="_blank"}."

All code changes require a pull request (PR) with one or more reviewers. On top of that, we automate quality and security checks before, during, and after a PR is merged to trunk (develop).

We combine automated tooling with peer review to compound their effect in detecting issues early.

This is a snapshot of our automated checks at a glance.

Continuous Integration practices

### Pre-commit checks

Pre-commit configuration{target="_blank"}.

Pre-commit checks are crucial for a fast feedback loop while ensuring security practices at the individual change level.

To prevent scenarios where these checks are intentionally omitted on the client side, we also run them at the CI level.

!!! note "These run locally only for changed files"

### Pre-Pull Request checks

For an improved contributing experience, most of our checks can run locally (a sketch follows the list below). For maintainers, this also means more focus on reviewing actual value instead of standards and security malpractices that could have been caught earlier.

!!! note "These are in addition to pre-commit checks."

* Static typing analysis. mypy checks for static typing annotations to prevent common bugs in Python that may or may not lead to abuse.
* Tests{target="_blank"}. We run unit, functional, and performance tests (see our definition{target="_blank"}). Beyond catching breaking changes, we are investing in mutation testing to find additional sources of bugs and potential abuse.
* Security baseline{target="_blank"}. bandit detects common security issues defined by the Python Code Quality Authority (PyCQA).
* Complexity baseline{target="_blank"}. We run a series of maintainability and cyclomatic complexity checks to reduce code and logic complexity. This reduces reviewers' cognitive overhead and helps long-term maintainers revisiting legacy code at a later date.
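
A rough sketch of running equivalent checks locally - the tools mirror the list above, but the exact paths and invocations are assumptions; the project's own tooling (e.g., its Makefile targets) remains the source of truth:

```bash
# illustrative only: invoke each class of check directly from a poetry environment
poetry run mypy aws_lambda_powertools          # static typing analysis
poetry run pytest tests/unit tests/functional  # unit and functional tests
poetry run bandit -r aws_lambda_powertools     # security baseline (PyCQA)
poetry run radon cc aws_lambda_powertools      # cyclomatic complexity baseline
```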

### Pull Request checks

While we trust contributors and maintainers to go through pre-commit and pre-pull request due diligence, we verify these checks at the CI level as well.

!!! note "Checks described earlier are omitted to improve reading experience."

* Semantic PR title{target="_blank"}. We enforce that PR titles follow semantic naming, for example `chore(category): change`. This lowers the entry bar for contributors, since semantic commits are not required. It also benefits everyone looking for a useful changelog message{target="_blank"} on what changed and where.
* Related issue check{target="_blank"}. Every change requires an issue{target="_blank"} describing its need. This check blocks merge operations if a PR has no related issue.
* Acknowledgment check{target="_blank"}. Ensures the PR template{target="_blank"} is used and every contributor is aware of code redistribution.
* Code coverage diff{target="_blank"}. Educates contributors and maintainers about code coverage differences for a given change.
* Contribution size check{target="_blank"}. Suggests that contributors and maintainers break up large changes (100-499 LOC) into smaller PRs. This helps reduce the chance of overlooking security and other practices due to increased cognitive overhead.
* Dependency vulnerability check{target="_blank"}. Verifies any dependency changes for common vulnerabilities and exposures (CVEs), in addition to our daily check on all dependencies used (e.g., Python, Docker, Go, etc.).
* GitHub Actions security check{target="_blank"}. Enforces the use of immutable third-party GitHub Actions (_e.g., actions/checkout@<git-SHA>_) to prevent abuse; a resolution sketch follows this list. Upgrades are handled by a separate automated process{target="_blank"} that includes a maintainer review to also prevent unexpected behavior changes.
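
For illustration only, here is one way to resolve a mutable tag to the immutable commit SHA used for pinning; the automated upgrade process linked above is the actual mechanism, and `actions/checkout` with `v4` are example values:

```bash
# list the commit SHA(s) the v4 tag points to; annotated tags also show a
# peeled "refs/tags/v4^{}" entry, which is the commit to pin as
# uses: actions/checkout@<that-SHA> in the workflow file
git ls-remote --tags https://github.com/actions/checkout v4
```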

### After merge checks

!!! note "Checks described earlier are omitted to improve reading experience."

We strike a balance between security and contribution experience. These automated checks take several minutes to complete. Failures are reviewed by the on-call maintainer and before a release.

* End-to-end tests{target="_blank"}. We run E2E tests with a high degree of parallelization. While they are designed to also run locally, they may incur AWS charges for contributors. For additional security, all infrastructure is ephemeral per change and per Python version.
* SAST check{target="_blank"}. GitHub CodeQL runs a ~30 minute static analysis across the entire codebase.
* Security posture check{target="_blank"}. OSSF Scorecard runs numerous automated checks upon changes, and raises security alerts if OSSF security practices{target="_blank"} are no longer followed; see the sketch after this list.
* Rebuild Changelog{target="_blank"}. We rebuild our entire changelog upon changes and create a PR for maintainers. This has the added benefit of keeping a protected branch{target="_blank"} while removing error-prone tasks from maintainers.
* Stage documentation{target="_blank"}. We rebuild and deploy documentation changes to a staged version{target="_blank"}. This gives us confidence that our docs can always be rebuilt and are ready to release to production when needed.
* Update draft release{target="_blank"}. We use Release Drafter{target="_blank"} to generate a portion of our release notes and to always keep a fresh draft upon changes. You can read our thoughts on good quality release notes here{target="_blank"} (human readable changes + automation).
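
As an illustration, anyone can reproduce a similar security posture assessment locally with the OSSF Scorecard CLI - a minimal sketch, assuming the `scorecard` binary is installed and a GitHub token is available:

```bash
# Scorecard needs a GitHub token to query the repository's metadata
export GITHUB_AUTH_TOKEN="<your-github-token>"

# run the same category of checks (branch protection, pinned dependencies, etc.)
scorecard --repo=github.com/aws-powertools/powertools-lambda-python
```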

## Continuous deployment practices

!!! note "We adhere to industry recommendations from the OSSF Scorecard project{target="_blank"}, among others{target="_blank"}."

Releases are triggered by maintainers along with a reviewer - detailed info here{target="_blank"}. In addition to checks that run for every code change, our pipeline requires a manual approval before releasing.

We use a combination of provenance and signed attestation for our builds, source code sealing, SAST scanners, Python-specific static code analysis, ephemeral credentials that last only for a given job step, and more.

This is a snapshot of our automated checks at a glance.

Continuous Deployment practices