Implement semi-automated deployments
Semi-automated deployments are the first step in your process automation journey, establishing the foundation for advanced automation practices. Once you identify manual deployments in your organization, you can start automating them incrementally. Jumping straight from manual to fully automated deployments can be overwhelming, so organizations commonly adopt semi-automated deployments before evolving to fully automated ones.
This document guides you through implementing version control, scripting deployments, creating immutable infrastructure, and auditing changes.
Why start with semi-automated deployments
Implementing semi-automated deployments first addresses the following operational challenges while building toward full automation:
Establish foundation without overwhelming teams: Jumping directly to fully-automated CI/CD overwhelms teams unfamiliar with automation concepts, infrastructure as code, or version control workflows. Semi-automated deployments let teams learn these fundamentals incrementally, building confidence and skills while delivering immediate value. Teams can adopt version control, then scripting, then immutable infrastructure at a manageable pace.
Deliver quick wins that build momentum: Manual deployments create inconsistent configurations, forgotten steps, and deployment failures. Semi-automated practices deliver fast, measurable improvements to software delivery. Version-controlled scripts reduce configuration drift, automated image building removes manual errors, and audit logs catch unauthorized changes.
Create reusable assets for future automation: Scripts, Packer templates, and Terraform configurations you create during semi-automation become the building blocks for CI/CD pipelines. Version-controlled infrastructure code, golden images, and deployment scripts integrate directly into automated workflows.
Use version control systems (VCS)
Version control systems like GitHub or GitLab provide the foundation for automation by tracking every change to your infrastructure code. Store your automation scripts and infrastructure as code in a VCS to increase repeatability and reliability when you deploy and configure infrastructure. Each time you run a script, pull the latest version from your VCS so you never run an outdated copy. You can also use VCS tags to mark the versions you deploy to each environment, such as development, staging, and production. Storing your Terraform configurations and Packer templates in version control ensures every team member uses the same tested automation scripts, reducing deployment failures from outdated or modified scripts.
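For example, a Terraform configuration can pin a module to a specific tag in version control so every environment deploys the same reviewed code. The following is a minimal sketch; the repository URL, module path, and tag are hypothetical:

# Pin the module to the v1.4.0 tag so every run uses the same reviewed code.
# Promote a new version through environments by updating the ref after review.
module "network" {
  source = "git::https://github.com/example-org/terraform-modules.git//network?ref=v1.4.0"
}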
Use scripts instead of manually running commands
Infrastructure automation through scripting can increase the reliability and security of your infrastructure and application. When you use scripts, you reduce human error, enable scaling, increase consistency, and create a foundation for further automation.
You can use infrastructure as code tools, such as Terraform and Ansible, to deploy and configure your infrastructure. Infrastructure as code lets you define your infrastructure in code and configuration files instead of manually configuring servers, networks, and other resources. You can then version, test, and deploy infrastructure changes just like application code, which makes it easier to maintain consistency, automate deployments, and recover quickly from failures. By treating infrastructure as code, you ensure your environments are reproducible, scalable, and maintainable.
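As a minimal sketch, the following Terraform configuration declares a desired state instead of a sequence of manual commands. The bucket name and environment variable are hypothetical:

# Declare the desired state; Terraform creates or updates the bucket to match.
variable "environment" {
  description = "Deployment environment, such as dev, staging, or production"
  type        = string
  default     = "dev"
}

resource "aws_s3_bucket" "artifacts" {
  bucket = "example-deploy-artifacts-${var.environment}"

  tags = {
    ManagedBy   = "terraform"
    Environment = var.environment
  }
}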
Manage secrets with Vault
Automation scripts require secrets like API keys, database credentials, cloud provider tokens, and encryption keys. Storing these secrets directly in your scripts, environment variables, or configuration files creates security risks and makes credential rotation difficult. When secrets leak into version control or logs, you must rotate all credentials and audit who accessed them.
Vault centralizes secrets management for your automation workflows. Instead of hardcoding credentials in Terraform configurations or Packer templates, your automation tools retrieve secrets from Vault at runtime. When you rotate a database password in Vault, all automation scripts immediately use the updated credential without code changes.
The following example shows how Terraform integrates with Vault to retrieve database credentials:
# Configure Vault provider
provider "vault" {
  address = "https://vault.example.com:8200"
}

# Retrieve database credentials from Vault
data "vault_generic_secret" "database" {
  path = "secret/data/production/database"
}

# Use Vault-managed credentials in your infrastructure
resource "aws_db_instance" "app_database" {
  identifier        = "app-db"
  engine            = "postgres"
  instance_class    = "db.t3.micro"
  username          = data.vault_generic_secret.database.data["username"]
  password          = data.vault_generic_secret.database.data["password"]
  allocated_storage = 20
}
The Terraform configuration retrieves database credentials from Vault instead of hardcoding them. When you run terraform apply, Terraform authenticates to Vault and fetches the current credentials. You rotate passwords in Vault without modifying Terraform code, and Vault's audit logs track which automation scripts access which secrets.
Packer also integrates with Vault to securely access secrets during image builds. You can retrieve API keys, certificates, and configuration secrets without embedding them in Packer templates or build scripts.
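For example, a Packer template can read a build-time secret with Packer's built-in vault function. The following is a sketch; the secret path, key, source name, and provisioning script are assumptions:

# Read a build-time API key with Packer's built-in vault() function.
# Packer authenticates using the VAULT_ADDR and VAULT_TOKEN environment variables.
locals {
  api_key = vault("secret/data/production/build", "api_key")
}

build {
  sources = ["source.amazon-ebs.golden"]

  provisioner "shell" {
    environment_vars = ["API_KEY=${local.api_key}"]
    inline           = ["./install-app.sh"]
  }
}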
Create application images with code
Automated image creation ensures consistent application deployments by packaging dependencies, security patches, and application code into immutable images. Manual image configuration can lead to incorrect dependencies, missing security patches, and inconsistent application installation, so you should automate building the images your application runs on.
You can learn more about creating automated images with Packer in the Package applications with containers and machine images guide.
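As a minimal sketch, a Packer template like the following builds an AMI whose name matches the golden-web-app-* pattern that the Terraform example later in this document queries. The region, base image, and provisioning steps are assumptions:

packer {
  required_plugins {
    amazon = {
      source  = "github.com/hashicorp/amazon"
      version = ">= 1.2.0"
    }
  }
}

# Build a timestamped golden image so each release produces a new, immutable AMI.
source "amazon-ebs" "golden" {
  ami_name      = "golden-web-app-${formatdate("YYYYMMDDhhmmss", timestamp())}"
  instance_type = "t3.micro"
  region        = "us-east-1"
  ssh_username  = "ubuntu"

  source_ami_filter {
    filters = {
      name                = "ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"
      virtualization-type = "hvm"
    }
    most_recent = true
    owners      = ["099720109477"] # Canonical
  }
}

build {
  sources = ["source.amazon-ebs.golden"]

  # Bake dependencies and the application into the image at build time.
  provisioner "shell" {
    inline = [
      "sudo apt-get update",
      "sudo apt-get install -y nginx",
    ]
  }
}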
Create immutable infrastructure with Terraform
Immutable infrastructure is infrastructure that, once deployed, you never modify, only replace. For example, you update a mutable server by connecting to it and running commands, or by running a script against it. With immutable infrastructure, you instead replace the entire server with a new one.
Terraform lets you define and provision infrastructure such as servers, virtual networks, and IAM roles and policies. When you change your infrastructure code, Terraform updates your infrastructure by modifying resources in place or by destroying and recreating them. To preview the changes Terraform will make, run terraform plan before you run terraform apply.
The following example shows how Terraform deploys the Packer-built AMI using a data source to query the most recent image:
data "aws_ami" "golden_image" {
most_recent = true
owners = ["self"]
filter {
name = "name"
values = ["golden-web-app-*"]
}
}
resource "aws_instance" "web_server" {
ami = data.aws_ami.golden_image.id
instance_type = "t3.micro"
tags = {
Name = "web-server"
Environment = "production"
Built = "packer"
}
}
output "instance_id" {
value = aws_instance.web_server.id
}
The Terraform configuration queries the most recent Packer-built AMI using a data source and deploys it as an EC2 instance. When you update your application, run packer build to create a new AMI, then run terraform apply. Terraform destroys the old instance and creates a new one with the updated image, implementing immutable infrastructure. Using immutable infrastructure ensures your infrastructure matches your code exactly, eliminating configuration drift.
Create immutable infrastructure with Nomad
Nomad orchestrates application workloads across clusters, treating application containers and instances as immutable. When you need to update your application, Nomad replaces running instances with new versions rather than modifying existing ones.
The following example shows a Nomad job that deploys a containerized application using immutable deployment patterns:
web-app.nomad.hcl
job "web-app" {
datacenters = ["dc1"]
type = "service"
group "web" {
count = 3
task "app" {
driver = "docker"
config {
image = "myregistry/web-app:1.2.0"
ports = ["http"]
}
resources {
cpu = 500
memory = 256
}
service {
name = "web-app"
port = "http"
check {
type = "http"
path = "/health"
interval = "10s"
timeout = "2s"
}
}
}
network {
port "http" {
to = 8080
}
}
}
update {
max_parallel = 1
min_healthy_time = "10s"
healthy_deadline = "3m"
auto_revert = true
}
}
The Nomad job specification deploys three instances of your containerized web application. When you update the container image version and run nomad job run web-app.nomad.hcl, Nomad performs a rolling deployment: it stops one old instance, starts a new instance with the updated image, verifies that health checks pass, then proceeds to the next instance. The auto_revert setting automatically rolls back to the previous version if health checks fail, implementing safe immutable deployments. Nomad integrates with Consul for service discovery and health checking, ensuring only healthy instances receive traffic.
Audit your cloud logs
Cloud audit logs detect manual infrastructure changes that bypass your automation workflows, helping you enforce automation across your team. Once you start automating your infrastructure, you must ensure that infrastructure changes only occur through automation. Tools like AWS CloudTrail, Azure Activity Log, or Google Cloud Audit Logs reveal who makes manual changes through the console, when these changes occur, and what resources change outside of automated processes. These audit logs help you identify team members who need additional training on automation workflows and catch unauthorized infrastructure modifications before they cause production issues.
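You can also manage the audit trail itself as code. The following sketch enables a multi-region AWS CloudTrail trail with Terraform; the bucket name is hypothetical, and the bucket policy that allows CloudTrail to write to it is assumed to exist:

# Capture every management API call, including manual console changes,
# in an S3 bucket for review. The bucket policy must grant
# cloudtrail.amazonaws.com permission to write objects (not shown here).
resource "aws_s3_bucket" "audit_logs" {
  bucket = "example-org-infrastructure-audit-logs"
}

resource "aws_cloudtrail" "audit" {
  name                          = "infrastructure-audit"
  s3_bucket_name                = aws_s3_bucket.audit_logs.id
  include_global_service_events = true
  is_multi_region_trail         = true
}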
Common pitfalls to avoid
As you implement semi-automated deployments, watch for the following common mistakes:
Skipping version control: Running scripts locally without committing changes to Git defeats automation benefits. Always commit scripts before execution.
Hardcoding secrets: Embedding credentials in scripts creates security risks. Use Vault from the start, even if it feels like extra work initially.
Inconsistent execution environments: Running scripts from different machines with different tool versions causes failures. Document required tool versions and consider using Docker containers for script execution.
Treating scripts as throwaway code: Writing quick, undocumented scripts creates technical debt. Apply the same code quality standards to infrastructure scripts as you do to application code.
Ignoring audit log alerts: Manual changes that bypass automation indicate team members need training or automation improvements. Address root causes rather than just fixing individual violations.
Ready for fully-automated deployments?
Before advancing to fully-automated deployments, ensure your organization has the following foundations in place.
Required prerequisites:
- You store scripts and infrastructure code in version control
- You manage secrets centrally with Vault
- You build immutable images with Packer regularly
- You monitor audit logs to catch manual infrastructure changes
Signs you're ready to advance:
- Developers ask for faster deployment processes
- Manual script execution creates coordination bottlenecks
- Multiple teams need to deploy infrastructure simultaneously
- You want to enforce policies automatically before deployment
- Your operations team spends significant time executing scripts for developers
Skills your team needs:
- Basic understanding of CI/CD concepts
- Comfort with Git branching and pull requests
- Willingness to adopt automated testing practices
If you have these prerequisites and your team shows these signs, advance to fully-automated deployments.
HashiCorp resources:
- Implement a GitOps workflow for version-controlled deployments
- Package applications with Packer for consistent deployments
- Deploy applications with Terraform and orchestrators
- Automate testing for infrastructure and applications
- Use infrastructure as code with Terraform
- Create immutable infrastructure with Packer and Terraform
- Identify common metrics
Learn automation tools:
- Learn Terraform with the Terraform tutorials and read the introduction to Terraform for infrastructure as code
- Learn Packer with the Packer tutorials and read the introduction to Packer for image building
- Learn Consul with the Consul tutorials and read the introduction to Consul for service networking
- Learn Vault with the Vault tutorials and read the introduction to Vault for secrets management
- Learn Nomad with the Nomad tutorials and read the introduction to Nomad for workload orchestration
Packer for image automation:
- Build immutable infrastructure with Packer in CI/CD
- Learn to build a golden image pipeline with HCP Packer
- Learn about Packer builders for different platforms
- Use Packer provisioners to configure images
Vault for secrets management:
- Inject secrets into Terraform using the Vault provider
- Store and retrieve dynamic secrets with Vault for automated credential management
Next steps
In this section of Process automation, you learned how to implement semi-automated deployments with version control, scripting, immutable infrastructure, and audit logging. Implement semi-automated deployments is part of the Define and automate processes pillar.
Continue your automation journey with the following documents:
- Implement fully-automated deployments
- Assess your current automation level