Well-Architected Framework
Automate infrastructure and application deployments
Deployment automation removes manual errors and accelerates release cycles. Manual deployment processes create inconsistent environments, increase deployment failures, and slow release cycles. Automate your deployments to reduce these risks and create a predictable, repeatable process for both infrastructure and applications.
Use infrastructure as code to provision infrastructure and deploy applications to container orchestrators or virtual machines with immutable infrastructure patterns.
Why automate deployments
Automating deployments addresses the following operational and security challenges:
Reduce deployment inconsistencies: When you automate deployments, you reduce human error and ensure that you configure every environment identically. Manual deployments can lead to configuration drift, where environments diverge over time. Deployment automation implements consistency across development, staging, and production.
Reduce deployment time and risk: Automated deployments are faster and less error-prone than manual processes. Automation enables frequent, small deployments. Manual deployments often involve multiple steps that increase the chance of mistakes and downtime.
Enable rapid rollback: Automated deployments track all changes in version control, allowing you to roll back to previous states. If you deploy manually, you may not have a clear record of what changed, making it difficult to revert changes.
Improve audit and compliance: Manual processes lack detailed audit trails. Automation creates logs of who deployed what and when, meeting compliance requirements and simplifying troubleshooting.
When you automate deployments, you gain the following benefits:
Use infrastructure as code: Define all infrastructure in version-controlled configuration files, enabling peer review, change tracking, and rollback capabilities. Infrastructure as code reduces configuration errors and accelerates infrastructure provisioning.
Adopt immutable infrastructure: Build images with your application pre-installed rather than configuring after deployment, increasing consistency between environments and simplifying troubleshooting. Immutable infrastructure eliminates configuration drift and reduces deployment failures.
Promote through environments: Validate automation in non-production before promoting to production, catching issues early and reducing production incidents. Start with a development environment, progress to staging, then deploy to production with confidence.
Implement progressive deployments: Deploy changes gradually with zero-downtime strategies like blue-green or canary deployments, minimizing risk and enabling quick rollbacks when issues occur.
Automate infrastructure with Terraform
This section explains how to use Terraform and HCP Terraform to provision and manage infrastructure with version-controlled configuration.
Use Terraform to provision and manage your application infrastructure in a predictable, repeatable way. Terraform has thousands of providers, letting you deploy to any cloud provider or platform that has an API.
Use HCP Terraform when you have multiple team members making infrastructure changes, need policy enforcement, or want managed runners. HCP Terraform manages Terraform runs in a consistent and reliable environment, securely stores your state files, and enables team collaboration.
HCP Terraform includes the following key features:
- Collaborate on your infrastructure management with remote state storage and Terraform operations
- Connect to your VCS provider for infrastructure development workflows
- Store modules in a private registry for code reuse
- Run tasks to integrate third-party services during deployment
- Granular access controls for team permissions
- Policy enforcement with Sentinel for configuration guardrails
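To connect a Terraform working directory to HCP Terraform, add a cloud block inside the terraform block. The following is a minimal sketch, assuming hypothetical organization and workspace names:

```hcl
terraform {
  cloud {
    # Hypothetical organization and workspace names
    organization = "example-org"

    workspaces {
      name = "app-infrastructure"
    }
  }
}
```

After you run terraform login and terraform init, subsequent plans and applies run remotely in HCP Terraform, which also stores your state file.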
The following is an example of a Terraform configuration that provisions infrastructure:
```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-west-2"
}

resource "aws_instance" "app" {
  ami           = "ami-12345678" # AMI built with Packer
  instance_type = "t3.micro"

  tags = {
    Name        = "application-server"
    Environment = "production"
  }
}
```
This configuration sets up the AWS provider and provisions an EC2 instance using an AMI built with Packer. HCP Terraform allows your team to share the same state file, decreasing the chances of conflicts when multiple people make infrastructure changes.
If you are building AMIs with Packer, you can reference the image in Terraform using the AMI ID that Packer outputs. You can use a Terraform data source to dynamically query for the latest AMI by tag, ensuring you always deploy the most recent version:
```hcl
# Query for the latest Packer-built AMI
data "aws_ami" "app" {
  most_recent = true
  owners      = ["self"]

  filter {
    name   = "tag:Name"
    values = ["myapp"]
  }

  filter {
    name   = "tag:Version"
    values = ["1.0.0"]
  }
}

resource "aws_instance" "app" {
  ami           = data.aws_ami.app.id # References the Packer-built AMI
  instance_type = "t3.micro"

  tags = {
    Name        = "application-server"
    Environment = "production"
  }
}
```
The data source queries your AWS account for the most recent AMI tagged with Name=myapp and Version=1.0.0. When you run terraform apply, Terraform uses the AMI ID from the data source to launch instances with your pre-packaged application. Using data sources eliminates hardcoded AMI IDs and ensures you deploy the correct version.
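For the data source query to match, the Packer build must apply those tags to the resulting AMI. The following is a hedged sketch of the relevant part of a Packer template, with hypothetical name, region, and version values:

```hcl
# Packer source whose tags match the Terraform data source filters
source "amazon-ebs" "myapp" {
  ami_name      = "myapp-1.0.0"
  instance_type = "t3.micro"
  region        = "us-west-2"

  tags = {
    Name    = "myapp"
    Version = "1.0.0"
  }
}

build {
  sources = ["source.amazon-ebs.myapp"]
}
```

When you bump the application version, update both the Packer tags and the Terraform filter values so the data source resolves to the new image.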
To learn more about Terraform configuration syntax and how to apply this configuration, visit the Terraform getting started tutorials.
Automate application deployment
Your application deployment strategy depends on your workload and operational requirements. After packaging your application into container or machine images, deploy these immutable artifacts using Terraform or a container orchestrator.
The deployment workflow follows these steps:
- Build images: Use Packer to create container images or machine images with your application pre-installed. Packer outputs the image ID or tag (for example, AMI ID `ami-0abcd1234efgh5678` or container tag `myregistry/myapp:1.0.0`).
- Store artifacts: Push container images to a registry (Docker Hub, ECR, ACR, GCR) using `docker push` or a post-processor. Machine images are automatically stored in your cloud provider during the Packer build. You can use HCP Packer to store metadata about your machine images to make it easier to automatically find the image you need.
- Deploy: Reference the stored images in your Terraform or container orchestrator configuration using the image ID, tag, or a data source query.
- Manage lifecycle: Use Terraform or your container orchestrator to update deployments when new image versions are available by updating the image reference and applying the changes.
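The workflow above can be sketched as a command sequence. This is a hedged outline, assuming hypothetical template, registry, and variable names:

```shell
# 1. Build the image with Packer
packer build app.pkr.hcl

# 2. Store the artifact: push a container image to a registry
docker push myregistry/myapp:1.0.0

# 3. Deploy: apply the Terraform configuration that references the image
terraform apply

# 4. Manage lifecycle: update the image reference, then re-apply
terraform apply -var="app_version=1.0.1"
```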
Manage Kubernetes with Terraform
Kubernetes automates deployment and scaling of containerized workloads with extensive ecosystem support. Use Kubernetes when you need advanced networking features, extensive third-party integrations, or multi-cloud container deployments. The Terraform Kubernetes provider lets you deploy and manage workloads through the Kubernetes API. Deploy Helm packages with the Helm Terraform provider.
Before deploying applications to Kubernetes with Terraform, you need a running Kubernetes cluster and kubectl configured to access it. The Terraform Kubernetes provider requires API access to your cluster to manage resources.
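The provider reads cluster credentials from a kubeconfig file or from explicit connection settings. The following is a minimal sketch, assuming the default kubeconfig path:

```hcl
provider "kubernetes" {
  # Assumes kubectl is already configured for the target cluster
  config_path = "~/.kube/config"
}
```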
The following example assumes your cluster is configured and you have pushed a container image to a registry. This Terraform configuration deploys a containerized application to Kubernetes:
```hcl
resource "kubernetes_deployment" "app" {
  metadata {
    name = "myapp"
    labels = {
      app = "myapp"
    }
  }

  spec {
    replicas = 3

    selector {
      match_labels = {
        app = "myapp"
      }
    }

    template {
      metadata {
        labels = {
          app = "myapp"
        }
      }

      spec {
        container {
          name  = "myapp"
          image = "myregistry.azurecr.io/myapp:1.0.0" # Container image built with Packer

          port {
            container_port = 8080
          }

          resources {
            requests = {
              cpu    = "100m"
              memory = "128Mi"
            }
            limits = {
              cpu    = "500m"
              memory = "512Mi"
            }
          }
        }
      }
    }
  }
}

resource "kubernetes_service" "app" {
  metadata {
    name = "myapp-service"
  }

  spec {
    selector = {
      app = kubernetes_deployment.app.metadata[0].labels.app
    }

    port {
      port        = 80
      target_port = 8080
    }

    type = "LoadBalancer"
  }
}
```
The Kubernetes Deployment creates three replicas of a containerized application, defines resource requests and limits for predictable performance, and creates a LoadBalancer service to expose the application externally.
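After the apply completes, you can confirm the rollout with kubectl. A short command sketch using the resource names from the example above:

```shell
# Check that all three replicas are available
kubectl get deployment myapp

# Find the external IP assigned to the LoadBalancer service
kubectl get service myapp-service
```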
Manage Nomad with Terraform
Nomad orchestrates containers, standalone binaries, and batch jobs with simplified operations and better resource efficiency. Use Nomad when you need to manage diverse workload types, prefer streamlined operations, or require higher density and performance. Use the Terraform Nomad provider to manage workloads as code, or use Nomad Pack for templating and package management.
Before deploying applications to Nomad with Terraform, you need a running Nomad cluster. The following example assumes your cluster is configured and you have pushed a container image to a registry. This Terraform configuration deploys a containerized application to Nomad:
```hcl
resource "nomad_job" "app" {
  jobspec = <<EOT
job "myapp" {
  datacenters = ["dc1"]
  type        = "service"

  group "app" {
    count = 3

    task "myapp" {
      driver = "docker"

      config {
        image = "myregistry.azurecr.io/myapp:1.0.0" # Container image built with Packer
        ports = ["http"]
      }

      resources {
        cpu    = 500
        memory = 512
      }

      service {
        name = "myapp"
        port = "http"

        check {
          type     = "http"
          path     = "/health"
          interval = "10s"
          timeout  = "2s"
        }
      }
    }

    network {
      port "http" {
        static = 8080
      }
    }
  }
}
EOT
}
```
The Nomad job specification deploys three instances of a containerized application, defines resource allocations, and configures health checks to ensure the application is running correctly before routing traffic to it.
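The nomad_job resource submits the jobspec through the cluster's HTTP API, so the Nomad provider needs the cluster address (and a token if ACLs are enabled). A minimal sketch with a hypothetical address:

```hcl
provider "nomad" {
  # Hypothetical cluster address; defaults to the NOMAD_ADDR environment variable
  address = "http://nomad.example.com:4646"
}
```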
HashiCorp resources:
- Learn how to use Terraform to deploy supporting tools such as container orchestration platforms, database infrastructure, and version control systems.
- Learn how to package applications with Packer.
- Use infrastructure as code to define infrastructure.
- Create immutable infrastructure with Packer and Terraform
- Implement semi-automated deployments with infrastructure as code
- Implement fully-automated deployments with CI/CD pipelines
- Create reusable Terraform modules to standardize infrastructure deployments
- Use version control to store infrastructure configurations
- Implement a GitOps workflow to create a fully auditable and version-controlled deployment process.
- Learn Terraform with the Terraform tutorials and the Terraform documentation
- Get started with AWS, Azure, or GCP
To learn how to deploy applications to Kubernetes with Terraform:
- Learn how to deploy Federated Multi-Cloud Kubernetes Clusters
- Read the Terraform Kubernetes provider documentation for resource syntax and configuration options
- Learn how to Schedule deployments with Terraform
- Deploy Helm packages with the Helm Terraform provider
- Explore Kubernetes tutorials for deployment patterns and workflows
To learn how to deploy applications to Nomad with Terraform:
- Learn how to Deploy a Nomad cluster on AWS
- Read the Terraform Nomad provider documentation for job management and configuration
- Explore Nomad tutorials for application deployment examples
Next steps
In this section of Automate your workflows, you learned how to automate deployments for both infrastructure and applications. Automated deployments are part of the Define and automate processes pillar.
Visit the following documents to learn more about the automation workflow:
- Implement small, frequent infrastructure deployments with atomic deployments.
- Implement zero-downtime deployments.
- Create CI/CD pipelines to automate infrastructure and application deployments.