Well-Architected Framework
Implement fully-automated deployments
Fully-automated deployments represent the advanced stage of your process automation journey, where you eliminate manual intervention through Git-driven workflows, CI/CD pipelines, and comprehensive monitoring. A fully automated deployment system combines Git-driven infrastructure changes with CI/CD pipelines that run automated tests and monitoring.
Fully automated systems build on the best practices of semi-automated systems, such as version control, automation tools, and audit logging.
This document guides you through implementing Git-driven CI/CD, self-service infrastructure platforms, comprehensive monitoring and testing, and advanced deployment strategies.
Why advance to fully-automated deployments
Advancing from semi-automated to fully-automated deployments addresses the following operational challenges:
Eliminate manual execution bottlenecks: Semi-automated deployments often require someone to execute scripts, coordinate timing, and manually trigger infrastructure changes. Fully-automated CI/CD triggers deployments automatically on code commits, enabling continuous delivery without human intervention.
Enable developer self-service: Semi-automated deployments often require operations teams to execute scripts and provision infrastructure, creating dependencies and wait times. Self-service platforms let developers provision infrastructure independently through standardized workflows while maintaining governance controls.
Catch issues before production: Fully-automated pipelines integrate automated testing, security scanning, and policy validation that prevent issues from reaching production.
Deploy frequently with lower risk: Semi-automated deployments limit teams to infrequent releases because each deployment requires coordination and manual validation. Fully-automated deployments with monitoring, automated rollback, and progressive deployment strategies enable continuous deployment with lower risk.
Run Git-driven deployments through CI/CD
Git-driven deployments use version control commits to automatically trigger infrastructure changes through CI/CD pipelines, eliminating manual execution steps. When you manage your IaC through Git-driven development, every infrastructure change flows through your Git VCS and CI/CD system. To modify your infrastructure, such as standing up or tearing down a server, you commit a change to your VCS, which triggers a CI/CD job that applies the change. This process ensures that every infrastructure modification goes through testing, security scanning, and validation before reaching production.
Using Git-driven development, you gain the following benefits:
Documentation as code: Git commits tie infrastructure changes directly to code, creating living documentation that stays synchronized with your actual infrastructure. When you need to understand why infrastructure exists or how it changed over time, the Git history provides context through commit messages, pull request discussions, and linked issues.
Complete audit trail: The commit history provides a complete audit trail showing who made what changes, when they made them, and why through commit messages and pull requests. This audit trail supports compliance requirements, incident investigation, and knowledge transfer when team members change.
Efficient scaling: Git-driven workflows enable multiple teams to modify infrastructure simultaneously through branching and merging strategies. Teams can work on infrastructure changes in parallel without conflicts, and CI/CD systems handle concurrent deployments safely through locking mechanisms and queue management.
Automated validation: CI/CD pipelines automatically test infrastructure changes before deployment, running validation checks, security scans, and policy enforcement on every commit. This automation catches errors immediately and prevents invalid infrastructure from reaching production, reducing deployment failures and rollbacks.
With HCP Terraform, you can use the built-in VCS workflow to automatically trigger runs based on changes to your VCS repositories. The CLI-driven workflow allows you to quickly iterate on your configuration and work locally, while the VCS-driven workflow enables collaboration within teams by establishing your shared repositories as the source of truth for infrastructure configuration.
You can manage your image creation with Git and CI/CD, similar to how you manage your other infrastructure. Once you commit a change to your Packer file, your CI/CD triggers a Packer build. Upon completion, your CI/CD system tags and uploads your image to an image repository. You can use HCP Packer to store metadata about the images you build, including when you create the artifact, the associated platform, and which Git commit is associated with your build. HCP Packer allows your downstream processes, like Terraform, to consume these images efficiently.
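For example, a Terraform configuration can look up image metadata in the HCP Packer registry instead of hardcoding an AMI ID. The following sketch uses the `hcp` provider's `hcp_packer_artifact` data source; the bucket name, channel, and region are hypothetical placeholders for your own registry settings.

```hcl
# Look up the latest image for a channel in HCP Packer instead of
# hardcoding an AMI ID. Bucket, channel, and region are examples.
data "hcp_packer_artifact" "golden" {
  bucket_name  = "golden-image"
  channel_name = "production"
  platform     = "aws"
  region       = "us-west-2"
}

resource "aws_instance" "web" {
  # New builds roll out by updating the HCP Packer channel,
  # not by editing Terraform code.
  ami           = data.hcp_packer_artifact.golden.external_identifier
  instance_type = "t3.micro"
}
```

Because the AMI ID resolves at plan time, promoting a newly built image is a registry channel update rather than a code change.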
The following example shows a GitHub Actions workflow that automates Packer image builds and Terraform deployments:
```yaml
# .github/workflows/infrastructure.yml - Automated infrastructure pipeline
name: Infrastructure Pipeline

on:
  push:
    branches: [main]
    paths:
      - 'packer/**'
      - 'terraform/**'

jobs:
  build-image:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Setup Packer
        uses: hashicorp/setup-packer@main
      - name: Initialize Packer
        run: packer init packer/
      - name: Validate Packer template
        run: packer validate packer/
      - name: Build image with HCP Packer
        env:
          HCP_CLIENT_ID: ${{ secrets.HCP_CLIENT_ID }}
          HCP_CLIENT_SECRET: ${{ secrets.HCP_CLIENT_SECRET }}
        run: packer build packer/golden-image.pkr.hcl

  deploy-infrastructure:
    needs: build-image
    runs-on: ubuntu-latest
    env:
      TF_TOKEN_app_terraform_io: ${{ secrets.TF_API_TOKEN }}
    steps:
      - uses: actions/checkout@v3
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
      - name: Terraform Init
        working-directory: terraform/
        run: terraform init
      - name: Terraform Plan
        working-directory: terraform/
        run: terraform plan
      - name: Terraform Apply
        working-directory: terraform/
        run: terraform apply -auto-approve
```
The GitHub Actions workflow automates your complete infrastructure pipeline. When you push changes to Packer templates or Terraform configurations, GitHub Actions validates the Packer template, builds a new image, registers it with HCP Packer, then triggers Terraform to deploy the updated infrastructure. The pipeline ensures every infrastructure change goes through validation before deployment, eliminating manual execution steps. CI/CD credentials for HCP Packer and HCP Terraform are stored as GitHub secrets, keeping sensitive values out of your code.
You can learn more about using Git and GitOps in Learn how to implement a GitOps workflow.
Manage secrets in CI/CD with Vault
In semi-automated deployments, you used Vault to store secrets that Terraform and Packer retrieve when you manually run scripts. In fully-automated deployments, Vault's role expands: CI/CD pipelines authenticate to Vault and retrieve dynamic, short-lived credentials automatically. This eliminates the need to store long-lived credentials in your CI/CD system and enables automatic credential rotation without pipeline changes.
Fully-automated CI/CD pipelines require access to sensitive credentials—cloud provider API keys, database passwords, API tokens, and encryption keys. Storing these secrets in CI/CD environment variables or configuration files creates security risks. When secrets are hardcoded in CI/CD systems, rotating credentials requires updating every pipeline configuration, and leaked secrets compromise your entire infrastructure.
Vault provides dynamic secrets and centralized secret management for CI/CD pipelines. Instead of storing long-lived credentials in GitHub secrets or CI/CD variables, your pipelines authenticate to Vault and retrieve short-lived, automatically rotating credentials. When a pipeline needs AWS access, Vault generates temporary AWS credentials that expire after the deployment completes, eliminating persistent credentials that can be compromised.
The following example shows how a CI/CD pipeline integrates with Vault for dynamic secrets:
```yaml
# .github/workflows/vault-integration.yml - Vault-integrated pipeline
name: Infrastructure with Vault

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # required for JWT (OIDC) authentication to Vault
      contents: read
    steps:
      - uses: actions/checkout@v3
      - name: Import Secrets from Vault
        uses: hashicorp/vault-action@v2
        with:
          url: https://vault.example.com:8200
          method: jwt
          role: github-actions
          secrets: |
            aws/creds/deploy access_key | AWS_ACCESS_KEY_ID ;
            aws/creds/deploy secret_key | AWS_SECRET_ACCESS_KEY ;
            secret/data/terraform token | TF_API_TOKEN
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
      - name: Deploy Infrastructure
        env:
          TF_TOKEN_app_terraform_io: ${{ env.TF_API_TOKEN }}
        run: |
          terraform init
          terraform apply -auto-approve
```
The GitHub Actions pipeline authenticates to Vault using JWT authentication, retrieves AWS credentials and Terraform tokens dynamically, and uses them for infrastructure deployment. The secrets never persist in GitHub's environment—Vault generates them at runtime and they expire after use. When you rotate credentials in Vault, all pipelines immediately use the new credentials without pipeline modifications. Vault's audit logs track which pipelines access which secrets, providing complete visibility into credential usage across your automation systems.
Enforce policies with Sentinel
Fully-automated deployments require automated validation to prevent dangerous changes from reaching production. Manual code reviews and approval gates create bottlenecks: when operations teams must review every infrastructure change by hand, deployments slow to the pace of the review queue. However, removing manual reviews without automated checks allows dangerous changes, like publicly exposing databases, deploying oversized instances, or removing security groups, to reach production.
Sentinel provides policy as code for HCP Terraform, automatically validating infrastructure changes before deployment. You define policies that enforce security requirements, cost controls, and compliance standards. HCP Terraform evaluates these policies during the plan phase and blocks changes that violate policies, allowing safe changes to proceed automatically.
The following example shows Sentinel policies that enforce security and cost controls:
policy.sentinel

```sentinel
# Policy enforcement for automated deployments
import "tfplan/v2" as tfplan

# Require all S3 buckets to use a private ACL and enable versioning
enforce_private_s3_buckets = rule {
  all tfplan.resource_changes as _, rc {
    rc.type is "aws_s3_bucket" implies
      rc.change.after.acl is "private" and
      rc.change.after.versioning[0].enabled is true
  }
}

# Limit EC2 instance sizes to control costs
restrict_instance_types = rule {
  all tfplan.resource_changes as _, rc {
    rc.type is "aws_instance" implies
      rc.change.after.instance_type in ["t3.micro", "t3.small", "t3.medium"]
  }
}

# Require tags for resource management
require_resource_tags = rule {
  all tfplan.resource_changes as _, rc {
    rc.type is "aws_instance" implies (
      "Environment" in keys(rc.change.after.tags) and
      "Owner" in keys(rc.change.after.tags)
    )
  }
}

main = rule {
  enforce_private_s3_buckets and
  restrict_instance_types and
  require_resource_tags
}
```
The Sentinel policies automatically validate every Terraform plan in your CI/CD pipeline. When a developer tries to create a public S3 bucket, deploy an oversized instance, or skip required tags, Sentinel blocks the change and explains the violation. Approved changes that meet all policies proceed automatically to deployment without manual review. Policy automation enables rapid deployments while maintaining security and compliance standards. HCP Terraform enforces these policies before applying changes, preventing policy violations from reaching your infrastructure.
Deploy self-service infrastructure
Self-service infrastructure platforms enable developers to provision infrastructure through standardized workflows without writing infrastructure code or waiting for operations teams. Application developers get the benefits of IaC and on-demand infrastructure without writing IaC themselves. You increase application development velocity by creating a process for developers to quickly and reliably build the infrastructure they need to run their application code.
Internal developer platforms (IDPs) let platform teams define golden patterns and workflows that give developers a self-service experience. Developers understand the requirements of their application, such as dependencies like MySQL and Redis, but they should not have to create and maintain the infrastructure their application runs on. Platform teams define golden workflows for actions such as building an application, deploying to production, and performing a rollback. Developers execute these workflows through a simple interface without managing the underlying details.
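In practice, a platform team often exposes these golden patterns as versioned Terraform modules in a private registry, so a developer declares intent in a few lines. The module source, organization, and input variables below are hypothetical examples of such an interface:

```hcl
# A developer provisions a standard application environment by
# consuming a platform-team module; networking, security, and
# monitoring details are encapsulated inside the module.
# The registry path and variable names here are illustrative.
module "app_environment" {
  source  = "app.terraform.io/example-org/app-environment/aws"
  version = "~> 2.0"

  app_name    = "checkout-service"
  environment = "dev"
  mysql       = true   # provision a managed MySQL dependency
  redis       = true   # provision a managed Redis dependency
}
```

The module's input surface is the contract: developers choose dependencies and an environment, while the platform team controls how those choices map to infrastructure.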
Use infrastructure monitoring and testing
Automated monitoring and testing continuously validate infrastructure health and catch issues before they impact users, reducing mean time to detection and resolution. Monitoring and testing help prevent outages, security breaches, and performance issues before they affect your business. By continuously monitoring your infrastructure, you gain real-time visibility into system health, resource utilization, and performance metrics, allowing you to detect and address issues proactively rather than reactively.
Testing complements monitoring by validating that your infrastructure works as intended before promoting it to production. Through comprehensive testing, including load testing, security scanning, and disaster recovery drills, you can verify that your infrastructure is not just running, but running correctly and securely.
You can use Packer to install monitoring agents in your application images and Terraform to deploy those images. By automating agent installation and image deployment, you ensure visibility into the infrastructure your application runs on.
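As a sketch, a Packer build can run a shell provisioner that installs and enables the agent before the image is snapshotted. The source name, install script URL, and agent service name below are placeholders for your monitoring vendor's equivalents:

```hcl
# packer/golden-image.pkr.hcl (excerpt) - bake a monitoring agent
# into the image so every instance launched from it reports metrics.
# The install script URL and service name are hypothetical.
build {
  sources = ["source.amazon-ebs.base"]

  provisioner "shell" {
    inline = [
      "curl -fsSL https://example.com/install-monitoring-agent.sh | sudo bash",
      "sudo systemctl enable monitoring-agent",
    ]
  }
}
```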
You can also use Terraform to configure cloud-native monitoring tools for your infrastructure stacks, creating dashboards and alarms and automating responses to alerts.
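For example, on AWS you can manage a CloudWatch alarm in the same configuration as the instance it watches. This sketch assumes an `aws_instance.web` resource and an `aws_sns_topic.alerts` notification topic defined elsewhere in the configuration:

```hcl
# Alert when average CPU on the web instance stays above 80%
# for two consecutive 5-minute periods.
resource "aws_cloudwatch_metric_alarm" "high_cpu" {
  alarm_name          = "web-high-cpu"
  namespace           = "AWS/EC2"
  metric_name         = "CPUUtilization"
  statistic           = "Average"
  comparison_operator = "GreaterThanThreshold"
  threshold           = 80
  period              = 300
  evaluation_periods  = 2

  dimensions = {
    InstanceId = aws_instance.web.id
  }

  # Notify the on-call topic; assumes aws_sns_topic.alerts exists.
  alarm_actions = [aws_sns_topic.alerts.arn]
}
```

Because the alarm is code, it is created, updated, and destroyed in lockstep with the infrastructure it monitors.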
Blue-green and canary deployments
Progressive deployment strategies like blue/green, canary, and rolling deployments enable you to release changes gradually, minimizing user impact and enabling rapid rollback if issues occur. While these approaches share similar goals, each offers unique advantages that make it more suitable for certain types of applications or organizational needs. By choosing the most appropriate deployment method, you can ensure smoother updates and reduce the likelihood of service disruptions.
- Blue/green deployments maintain two identical production environments concurrently. This method allows you to shift traffic from the current version (blue) to the upgraded version (green).
- Canary deployments introduce new versions incrementally to a subset of users. This approach lets you test upgrades with limited exposure, working alongside other deployment systems.
- Rolling deployments update applications gradually across multiple servers. This technique ensures only a portion of your infrastructure changes at once, reducing the risk of widespread issues.
You can learn more about these deployment methods in the Zero-downtime deployments document set.
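As one example of a canary setup in Terraform, an AWS Application Load Balancer listener can split traffic by weight between two target groups. This sketch assumes `aws_lb.app`, `aws_lb_target_group.blue`, `aws_lb_target_group.green`, and `var.certificate_arn` are defined elsewhere; you shift weight toward green as confidence in the new version grows:

```hcl
# Canary traffic split: 90% of requests go to the current (blue)
# version, 10% to the new (green) version. Adjust the weights to
# progress or roll back the release.
resource "aws_lb_listener" "app" {
  load_balancer_arn = aws_lb.app.arn
  port              = 443
  protocol          = "HTTPS"
  certificate_arn   = var.certificate_arn

  default_action {
    type = "forward"

    forward {
      target_group {
        arn    = aws_lb_target_group.blue.arn
        weight = 90
      }
      target_group {
        arn    = aws_lb_target_group.green.arn
        weight = 10
      }
    }
  }
}
```

Because the weights are plain configuration values, each traffic shift is a reviewable commit that flows through the same Git-driven pipeline as any other infrastructure change.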
HashiCorp resources:
- Implement a GitOps workflow for Git-driven deployments
- Package applications with Packer in CI/CD
- Deploy applications and automate pipelines
- Automate testing
- Implement zero-downtime deployments with blue-green and canary strategies
Get started with automation tools:
- Get started with Terraform tutorials and read the Terraform introduction for infrastructure as code
- Get started with Packer tutorials and read the Packer introduction for image building
- Get started with Vault tutorials and read the Vault introduction for secrets management
- Get started with Consul tutorials and read the Consul introduction for service networking
- Get started with Nomad tutorials and read the Nomad introduction for workload orchestration
- Get started with Sentinel tutorials and read the Sentinel introduction for policy as code
Terraform CI/CD automation:
- Learn how to use VCS-driven workflow with HCP Terraform
- Automate Terraform with GitHub Actions for CI/CD integration
- Read about HCP Terraform features and capabilities
- Configure run triggers for automated workflows
- Use notifications for deployment tracking
Packer image automation:
- Automate Packer with GitHub Actions
- Build golden image pipelines
- Read about HCP Packer for image metadata management
- Use HCP Packer channels for environment promotion
Vault secrets in CI/CD:
- Integrate Vault with GitHub Actions for CI/CD pipelines
- Learn about Vault dynamic secrets for automation
- Use Vault with Terraform for infrastructure secrets
Sentinel policy automation:
- Write Sentinel policies for Terraform
- Browse sample Sentinel policies for common use cases
Monitoring and observability:
- Identify common metrics
- Set up monitoring agents
- Set up dashboards and alerts
Next steps
In this section of Process automation, you learned how to implement fully-automated deployments with Git-driven CI/CD, self-service infrastructure, comprehensive monitoring and testing, and advanced deployment strategies. Implement fully-automated deployments is part of the Define and automate processes pillar.
Review related automation documents:
- Assess your current automation level with the automation maturity model
- Implement semi-automated deployments