Deploying AWS Elastic Beanstalk applications with Terraform
To facilitate our migration from EC2/Chef environments to Docker/Elastic Beanstalk, we wanted to automate the provisioning of an AWS Elastic Beanstalk (EB) environment and reuse it to build other environments. This article discusses how we achieved this infrastructure-as-code approach using HashiCorp Terraform, covering the following topics:
- Integrating Terraform into a Continuous Integration/Deployment pipeline.
- Terraform shortcomings.
- Possible improvements to our initial approach.
- Comparison to the AWS alternative: CloudFormation.
What is Terraform?
Terraform is a tool to enable the abstraction of infrastructure configurations into archivable, version-controlled code just like software code. This code then drives the provisioning process such as instantiating servers, databases, network topologies and other resources.
Configuration Management (CM) tools, like Chef and Puppet, typically assume the pre-existence of certain bare-metal nodes and components onto which the server/application provisioning process is executed. Terraform handles the previous stage of setting up the nodes and underlying infrastructure required for these CM tools to run on. While doing so, Terraform records the infrastructure it created in state files so it can manage and update it later.
We can then keep track of past changes to the infrastructure and preview upcoming ones, opening it up to the deployment lifecycle along with staging and testing. The Terraform declarative code essentially becomes our infrastructure documentation, simplifying reuse across teams and onboarding of new team members.
Infrastructure configuration files
In each of our EB-based microservice’s Git repositories, we include a sub-directory with the following Terraform configuration files:
scripts/
├── application
│   ├── create_application.tf
│   ├── terraform.tfstate (auto-generated)
│   └── terraform.tfstate.backup (auto-generated)
└── environment
    ├── prd
    │   ├── create_environment.tf
    │   ├── terraform.tfstate (auto-generated)
    │   └── terraform.tfstate.backup (auto-generated)
    └── uat
        ├── create_environment.tf
        ├── terraform.tfstate (auto-generated)
        └── terraform.tfstate.backup (auto-generated)
The scripts/application/ directory contains the Terraform code necessary to create the EB application for the micro-service, called ovo-microservice-example. An EB application is a shell inside which multiple application versions and environments are created and launched.
The scripts/application/create_application.tf file is written in a JSON-compatible domain-specific language (DSL) called HashiCorp Configuration Language (HCL). In our case, it is sparse, as all our infrastructure resource declarations (such as EC2 instances, ELB load balancers and Auto Scaling groups) live in the environment configuration files.
Contents of file scripts/application/create_application.tf:
provider "aws" {
  region = "eu-west-1"
}

resource "aws_elastic_beanstalk_application" "ovo-microservice-example" {
  name = "ovo-microservice-example"
}
When terraform apply is launched from the scripts/application/ directory, Terraform will execute all .tf files in the current directory (but not its sub-directories), so the naming of the .tf files is arbitrary. On each run, Terraform will create or update the terraform.tfstate and (optionally) terraform.tfstate.backup state files to keep track of the created resources. This is the core of Terraform’s infrastructure state management. These auto-generated files help Terraform manage, version and incrementally update the resources on subsequent runs, minimising full tear-downs and rebuilds of the infrastructure.
For safety, it is best to preview potential changes to the resources first by running terraform plan, which outputs the list of resources Terraform intends to create, modify or delete (or leave alone).
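In a pipeline, this plan-then-apply cycle can be scripted. The sketch below is our own illustration and assumes a Terraform version whose plan command supports the -detailed-exitcode flag (exit code 0 for no changes, 2 for pending changes); the plan_status helper name is hypothetical.

```shell
#!/bin/sh
# Guarded apply: only run `terraform apply` when `terraform plan`
# reports pending changes. Assumes plan's -detailed-exitcode flag:
# 0 = no changes, 2 = changes present, anything else = error.
plan_status() {
  case "$1" in
    0) echo "up-to-date" ;;
    2) echo "changes-pending" ;;
    *) echo "plan-error" ;;
  esac
}

# Skip the live calls on machines without Terraform installed.
if command -v terraform >/dev/null 2>&1; then
  terraform plan -detailed-exitcode >/dev/null
  status="$(plan_status $?)"
  if [ "$status" = "changes-pending" ]; then
    terraform apply
  fi
fi
```

This keeps the apply step from running blindly on every build while still surfacing plan errors.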
Typically, we have two to three environments per micro-service application:
- PROD: environment for Live / Production
- UAT: environment for user acceptance testing
- DEV: environment for development and testing purposes
The following example environment file contains the resources required for our UAT EB instance.
Contents of file scripts/environment/uat/create_environment.tf:
provider "aws" {
  region = "eu-west-1"
}

resource "aws_elastic_beanstalk_environment" "ovo-microservice-example-uat" {
  name                = "ovo-microservice-example-uat"
  application         = "ovo-microservice-example"
  solution_stack_name = "64bit Amazon Linux 2016.xxxx running Docker xxxx"
  cname_prefix        = "ovo-microservice-example-uat-domain"

  setting {
    namespace = "aws:autoscaling:launchconfiguration"
    name      = "IamInstanceProfile"
    value     = "aws-elasticbeanstalk-ec2-role"
  }
  setting {
    namespace = "aws:ec2:vpc"
    name      = "VPCId"
    value     = "vpc-xxxxx"
  }
  setting {
    namespace = "aws:ec2:vpc"
    name      = "AssociatePublicIpAddress"
    value     = "true"
  }
  setting {
    namespace = "aws:ec2:vpc"
    name      = "Subnets"
    value     = "subnet-xxxx,subnet-xxxx,subnet-xxxxx"
  }
  setting {
    namespace = "aws:ec2:vpc"
    name      = "ELBSubnets"
    value     = "subnet-xxxx,subnet-xxxx,subnet-xxxx"
  }
  setting {
    namespace = "aws:ec2:vpc"
    name      = "ELBScheme"
    value     = "internal"
  }
  setting {
    namespace = "aws:autoscaling:launchconfiguration"
    name      = "InstanceType"
    value     = "t2.micro"
  }
  setting {
    namespace = "aws:autoscaling:asg"
    name      = "Availability Zones"
    value     = "Any 2"
  }
  setting {
    namespace = "aws:autoscaling:asg"
    name      = "MinSize"
    value     = "2"
  }
  setting {
    namespace = "aws:autoscaling:asg"
    name      = "MaxSize"
    value     = "3"
  }
  setting {
    namespace = "aws:elasticbeanstalk:environment"
    name      = "ServiceRole"
    value     = "xxxx"
  }
  setting {
    namespace = "aws:elasticbeanstalk:application:environment"
    name      = "environment"
    value     = "uat"
  }
  setting {
    namespace = "aws:elasticbeanstalk:application:environment"
    name      = "LOGGING_APPENDER"
    value     = "GRAYLOG"
  }
  setting {
    namespace = "aws:elasticbeanstalk:healthreporting:system"
    name      = "SystemType"
    value     = "enhanced"
  }
  setting {
    namespace = "aws:autoscaling:updatepolicy:rollingupdate"
    name      = "RollingUpdateEnabled"
    value     = "true"
  }
  setting {
    namespace = "aws:autoscaling:updatepolicy:rollingupdate"
    name      = "RollingUpdateType"
    value     = "Health"
  }
  setting {
    namespace = "aws:autoscaling:updatepolicy:rollingupdate"
    name      = "MinInstancesInService"
    value     = "2"
  }
  setting {
    namespace = "aws:autoscaling:updatepolicy:rollingupdate"
    name      = "MaxBatchSize"
    value     = "1"
  }
  setting {
    namespace = "aws:elb:loadbalancer"
    name      = "CrossZone"
    value     = "true"
  }
  setting {
    namespace = "aws:elasticbeanstalk:command"
    name      = "BatchSizeType"
    value     = "Fixed"
  }
  setting {
    namespace = "aws:elasticbeanstalk:command"
    name      = "BatchSize"
    value     = "1"
  }
  setting {
    namespace = "aws:elasticbeanstalk:command"
    name      = "DeploymentPolicy"
    value     = "Rolling"
  }
  setting {
    namespace = "aws:elb:policies"
    name      = "ConnectionDrainingEnabled"
    value     = "true"
  }
  setting {
    namespace = "aws:autoscaling:launchconfiguration"
    name      = "SecurityGroups"
    value     = "sg-xxxx"
  }

  tags {
    Team        = "OVO"
    Environment = "UAT"
  }

  [...]
}
Its associated auto-generated state file snippet is shown below.
Contents of file scripts/environment/uat/terraform.tfstate:
{
  "version": 1,
  "serial": 8,
  "modules": [
    {
      "path": [
        "root"
      ],
      "outputs": {},
      "resources": {
        "aws_elastic_beanstalk_environment.ovo-microservice-example-uat": {
          "type": "aws_elastic_beanstalk_environment",
          "primary": {
            "id": "e-x9p3z893ez",
            "attributes": {
              "all_settings.#": "101",
              "all_settings.1036703857.name": "EvaluationPeriods",
              "all_settings.1036703857.namespace": "aws:autoscaling:trigger",
              "all_settings.1036703857.value": "1",
              "all_settings.1055035439.name": "RootVolumeType",
              "all_settings.1055035439.namespace": "aws:autoscaling:launchconfiguration",
              "all_settings.1055035439.value": "",
              "all_settings.1093784765.name": "ListenerProtocol",
              "all_settings.1093784765.namespace": "aws:elb:listener:80",
              "all_settings.1093784765.value": "HTTP",
              "all_settings.1105127146.name": "LoadBalancerPortProtocol",
              "all_settings.1105127146.namespace": "aws:elb:loadbalancer",
              "all_settings.1105127146.value": "HTTP",
              "all_settings.1194166391.name": "BlockDeviceMappings",
              "all_settings.1194166391.namespace": "aws:autoscaling:launchconfiguration",
              "all_settings.1194166391.value": "/dev/xvdcz=:12:true:gp2",
              [...]
Updating the infrastructure from the build pipeline
Installing Terraform on the build server
First, we SSH to our GoCD Continuous Integration (CI) build server and, in a suitable folder, run:
wget https://releases.hashicorp.com/terraform/0.6.16/terraform_0.6.16_linux_amd64.zip
unzip terraform_0.6.16_linux_amd64.zip
(replacing the version number with the latest version available)
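Since every agent must run the same binary, a small guard at the start of a pipeline task can catch version drift between agents. This is a sketch of our own devising; the pinned version number is only an example.

```shell
#!/bin/sh
# Fail fast if the agent's Terraform binary does not match the pinned
# version (the version string here is only an example).
WANTED="0.6.16"

# `terraform version` prints e.g. "Terraform v0.6.16" on its first line.
parse_version() {
  head -n 1 | sed 's/^Terraform v//'
}

if command -v terraform >/dev/null 2>&1; then
  FOUND="$(terraform version | parse_version)"
  if [ "$FOUND" != "$WANTED" ]; then
    echo "Terraform $WANTED required, found $FOUND" >&2
    exit 1
  fi
fi
```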
After ensuring the Terraform installation folder is on our GoCD agents’ $PATH environment variable and that the AWS EB CLI is correctly installed, we add Terraform commands to the ovo-microservice-example micro-service pipeline configuration as follows.
Updating the build pipeline
The basic steps in our pipeline are as follows:
1. Checkout the ovo-microservice-example Git repository, including the Terraform config files in scripts/ detailed above.
We ensure that in GoCD’s Materials section for this pipeline, we have added scripts/**/* to the Blacklist. Otherwise, the changes pushed during the pipeline would cause it to be invoked again.
2. Compile, test and build the ovo-microservice-example Docker image.
3. Use Terraform to create/update the EB application.
Carried out by executing the build step terraform apply from the scripts/application/ directory.
4. Push any changes to the state files to our Git repository.
We use a custom bash script push_tf_scripts.sh, stored on our build server, to check if any changes to the scripts/ directory exist and, if so, push them to the Git repo:

#!/bin/bash
check_changes=`git status | grep terraform.tfstate`
if [ "$check_changes" ]
then
  echo "Pushing in tfstate changes"
  git add scripts
  git commit -m "Push terraform statefile changes"
  git pull
  echo "Pushing to master"
  git push -u origin master
fi

5. Release the Docker image to the cloud-based image repository on DockerHub and create a new EB Application Version using an in-house sbt plugin.
The sbt plugin uses the AWS Java API, but similar results can be achieved using the AWS CLI create-application-version command, such as:
aws elasticbeanstalk create-application-version --application-name ovo-microservice-example [...]
6. Create/update the UAT EB application environment.
By executing the build step terraform apply from the directory scripts/environment/uat, followed by push_tf_scripts.sh to push any changes to the associated .tfstate files to the repo.
7. Deploy the new application version to the new/updated UAT EB environment with the AWS CLI command:
aws elasticbeanstalk update-environment --environment-name ovo-microservice-example-uat --version-label 3.4.11
8. Deploy to Production.
By manually triggering step 6 (but from within directory scripts/environment/prd) and step 7 (with the --environment-name ovo-microservice-example-prod option instead).
Terraform Caveats
No support for EB application versions
As of version 0.7.2, Terraform does not support EB application versions, although support is actively being worked on. This means that one cannot specify a healthcheck URL in the EB environment definition, such as in scripts/environment/uat/create_environment.tf.
This is because, without a version, the EB environment is spun up with the EB sample application (which does not support the /ping healthcheck endpoint we use), so the environment can never become healthy if the healthcheck URL is defined in the .tf declaration file.
Currently, we must configure the health check URL manually after the creation of the environment - so only once.
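Alternatively, the one-off change can be scripted with the AWS CLI’s update-environment --option-settings flag. In the sketch below, the command is assembled as a string for review rather than executed, and the /ping value reflects our own healthcheck convention.

```shell
#!/bin/sh
# Build (but do not run) the one-off command that sets the healthcheck
# URL on the already-created environment.
CMD="aws elasticbeanstalk update-environment \
--environment-name ovo-microservice-example-uat \
--option-settings 'Namespace=aws:elasticbeanstalk:application,OptionName=Application Healthcheck URL,Value=/ping'"
echo "$CMD"
```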
Another temporary solution would be to execute this script to upload and deploy the application using the local-exec provisioner in the .tf file.
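A local-exec provisioner hangs off a resource declaration like any other block. The sketch below shows the shape; the script name is hypothetical.

```hcl
# Sketch: run a local deployment script once the environment has been
# created. The script name is hypothetical; ${self.name} interpolates
# the environment's own name.
resource "aws_elastic_beanstalk_environment" "ovo-microservice-example-uat" {
  # ... settings as in the UAT environment file above ...

  provisioner "local-exec" {
    command = "./upload_and_deploy_version.sh ${self.name}"
  }
}
```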
Storing state files in Git
The additional pushing to Git by push_tf_scripts.sh, executed after each Terraform command in the pipeline, can sometimes cause git merge conflicts, especially if multiple developers are simultaneously working on the same repo. To get unstuck, one option is to make the changes to the EB environment manually via the AWS EB web UI or command line, then run terraform refresh (to update the local state file against the actual running EB instance), terraform apply (to update the local state files), followed by git push on the state files.
To avoid such issues, we are switching to storing the .tfstate files remotely in an encrypted and versioned AWS S3 bucket, using the suggested terraform remote command:
terraform remote config -backend=s3 -backend-config="bucket=ovo.ebstate" -backend-config="key=tfstate/ovo-microservice-example-uat" -backend-config="region=eu-west-1"
S3 versioning will facilitate rollbacks in case of deployment errors and encryption will help keep any environment variables and secrets stored in the state files private and outside of the repo (at least if we use variables - see later).
Since Terraform does not provide locking, if terraform apply is run concurrently, an S3-stored state file might be overwritten by another run and the changes lost. Terragrunt was developed to address this, although we have not found it to be an issue.
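Until proper remote locking is in place, a crude safeguard on a single build server is a filesystem lock around the apply step. This is a sketch of our own; note it does nothing for applies started from other machines, which is the gap tools like Terragrunt address.

```shell
#!/bin/sh
# Poor-man's mutual exclusion on one machine: `mkdir` is atomic, so
# only one pipeline run can create the lock directory at a time.
LOCK_DIR="/tmp/terraform-apply.lock"

if mkdir "$LOCK_DIR" 2>/dev/null; then
  # Release the lock when the script exits, however it exits.
  trap 'rmdir "$LOCK_DIR"' EXIT
  echo "lock acquired"
  # ... terraform apply would run here ...
else
  echo "another apply appears to be in progress" >&2
  exit 1
fi
```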
We are in the process of switching to Apache ZooKeeper and/or HashiCorp Vault to store configuration settings and secrets securely, avoiding storing them in the .tf or .tfstate files altogether, or passing them as variables in the build process.
Importing existing infrastructure
It can be laborious to declare all the settings necessary to configure an EB environment in the HCL syntax. One option is to first create the EB environment manually via the AWS web UI. Then using the EB CLI config command, execute:
eb config get myconfigs/ovo-microservice-example-uat.cfg.yml --cfg ovo-microservice-example-uat
where:
- ovo-microservice-example-uat is the EB environment
- myconfigs/ovo-microservice-example-uat.cfg.yml is the file name where the configuration is stored in the repo.
You’ll first need to run eb init in the myconfigs/ directory to configure the EB CLI to point to the correct EB application.
The config file will look something like this:
EnvironmentConfigurationMetadata:
  Description: Configuration created from the EB CLI using "eb config save".
  DateCreated: 'xxxx'
  DateModified: 'xxxx'
SolutionStack: 64bit Amazon Linux 2016.xxxx running Docker xxxx
OptionSettings:
  aws:elasticbeanstalk:command:
    BatchSize: '1'
    BatchSizeType: Fixed
    DeploymentPolicy: Rolling
  aws:elasticbeanstalk:sns:topics:
    Notification Endpoint: hello@test.com
  aws:elb:policies:
    ConnectionDrainingEnabled: true
  aws:elasticbeanstalk:application:environment:
    environment: uat
    [...]
    LOGGING_APPENDER: GRAYLOG
  aws:elb:loadbalancer:
    CrossZone: true
  aws:elasticbeanstalk:environment:
    ServiceRole: xxxx
  aws:elasticbeanstalk:application:
    Application Healthcheck URL: /ping
  aws:elasticbeanstalk:healthreporting:system:
    AWSEBHealthdGroupId: xxxx
    SystemType: enhanced
  aws:ec2:vpc:
    Subnets: subnet-xxxx,subnet-xxxx,subnet-xxxx
    VPCId: vpc-xxxx
    ELBSubnets: subnet-xxxx,subnet-xxxx,subnet-xxxx
    ELBScheme: external
    AssociatePublicIpAddress: true
  aws:autoscaling:launchconfiguration:
    SecurityGroups:
    - sg-xxxx
    IamInstanceProfile: xxxx
    InstanceType: t2.small
    EC2KeyName: xxxx
  aws:autoscaling:asg:
    MinSize: '2'
    Availability Zones: Any 2
    MaxSize: '3'
  aws:autoscaling:updatepolicy:rollingupdate:
    MinInstancesInService: '2'
    RollingUpdateType: Health
    MaxBatchSize: '1'
    RollingUpdateEnabled: true
EnvironmentTier:
  Type: Standard
  Name: WebServer
AWSConfigurationTemplateVersion: 1.1.0.0
[...]
Tags:
  Environment: UAT
  Team: OVO
This YAML file can then be used to manually create the Terraform .tf file in HCL. Alternatively, as of the recent version 0.7.0, Terraform supports the import command to import the .tfstate file for an existing EB instance:
terraform import aws_elastic_beanstalk_environment.ovo-microservice-example-uat-domain e-xxxx
where:
- ovo-microservice-example-uat-domain is the CNAME of the EB instance
- e-xxxx is the instance ID (which is displayed in the AWS EB UI or via the command eb status).
Unfortunately, the import command does not yet support the generation of .tf declaration files, so you’ll need to reconstruct them from the .tfstate, which can be tricky and error-prone.
Updating the EB instance outside of Terraform
Once your CI pipeline integrates Terraform, changes made manually via the AWS UI or AWS CLI to the EB environment will be overwritten by the state stored in the repo. It is essential to consider this aspect to avoid nasty surprises during releases.
One approach is to only make changes to EB instances via Terraform .tf file updates and to disallow changes via the UI or CLI (at least if one expects them to remain).
An intermediate approach is to modify the environment via the UI, then store the changes into the repo by running terraform refresh, which reconciles the current live EB environment with the stored state files to yield an up-to-date picture of the current infrastructure.
If multiple changes have been made and you would like to extract just the HCL parameters to add to your .tf file, then, via the UI, first modify your DEV EB environment or a clone of UAT. Then run terraform refresh and diff the old and new state files to highlight the changes. Once these changes are ported to your .tf file(s), push them to your repo and execute your CI pipeline to update your environment(s).
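The refresh-and-diff step can be sketched as below. The snapshot_and_diff helper is our own naming, demonstrated here against a stand-in state file rather than a real terraform.tfstate.

```shell
#!/bin/sh
# Snapshot the state file, refresh (where terraform is available),
# then diff the before/after copies to surface drifted attributes.
snapshot_and_diff() {
  cp "$1" "$1.pre-refresh"
  if command -v terraform >/dev/null 2>&1; then
    terraform refresh
  fi
  diff "$1.pre-refresh" "$1" && echo "no drift detected"
}

# Demonstration against a stand-in file; real runs would pass
# terraform.tfstate from the relevant environment directory.
echo '{"version": 1}' > demo.tfstate
snapshot_and_diff demo.tfstate
```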
On a side note, it appears AWS does not support changing the tags on a running EB environment. This also might be the case for certain security group and VPC changes. The environment needs to be re-created from scratch to surface these changes. To maintain uptime, one option is to clone the current EB environment, swap the CNAMEs of the live environment with its clone, destroy and recreate the originally live environment then swap back the CNAMEs.
Elastic Beanstalk config backups
In addition to having backups of the EB environment in your repo as Terraform config files, you can supplement these with saved configurations directly available within the AWS UI and stored in S3. This allows you to quickly spin up a previous or current version of your environments, for example if Terraform is causing issues, or in an emergency, by support staff without Terraform console access or knowledge thereof.
The following bash script, called eb-config-save.sh (stored on the build server), will store the given EB environment configuration both to local YAML files and remotely into up to 4 recycled slots in the AWS EB UI:
#!/bin/bash
function error_exit {
  echo "$1" >&2
  echo "Summary: Stores the current Elastic Beanstalk application environment configuration both remotely and into a local yml file."
  echo "Requires:"
  echo "1/ Install AWS EB CLI with the appropriate credentials to access your environment."
  echo "2/ Run \"eb config\" first to setup the correct Elastic Beanstalk application to point to."
  echo "3/ A file called environment-name.config.number (which has a number from 1 to 4 stored inside) to know which remote config slot to store current config into. Cycles through and eventually re-uses same 4 config slots."
  echo "Usage: $0 environment-name"
  echo "where environment-name is the name of the Elastic Beanstalk such as ovo-microservice-example"
  exit "${2:-1}"
}

if [ -z "$1" ]; then
  error_exit "Error: Missing environment name parameter" 2
fi
EB_ENV=$1
echo "Elastic Beanstalk environment to save: $EB_ENV"

INCREMENT_FILE="${EB_ENV}.config.number"
if [ ! -f ${INCREMENT_FILE} ]; then
  error_exit "./${INCREMENT_FILE} increment file not found!" 1
fi

eb use ${EB_ENV} || error_exit "Could not use environment ${EB_ENV}" 3

typeset -i LAST_CONF_NUMB=$(cat ${INCREMENT_FILE})
echo "Last saved config increment: $LAST_CONF_NUMB"
NEXT_CONF_NUMB=$((LAST_CONF_NUMB+1))
if (( NEXT_CONF_NUMB > 4 )); then
  NEXT_CONF_NUMB=1
fi
NEXT_CONF="${EB_ENV}_slot${NEXT_CONF_NUMB}"
echo "Elastic Beanstalk config name to delete: ${NEXT_CONF}"

# clear up configuration slot
eb config delete ${NEXT_CONF} || error_exit "Could not delete environment ${NEXT_CONF}" 4
# save current environment config
eb config save ${EB_ENV} --cfg ${NEXT_CONF} || error_exit "Could not save environment ${EB_ENV} to slot ${NEXT_CONF}" 5
# update config increment number file
echo "${NEXT_CONF_NUMB}" > ${INCREMENT_FILE}
We would add the eb-config-save.sh ovo-microservice-example-uat pipeline task after a successful UAT environment deploy. This is followed by the execution of a custom bash script, similar to the earlier push_tf_scripts.sh, to push any changes in, say, myconfigs/ (and more specifically the myconfigs/.elasticbeanstalk/saved_configs/ directory) to the Git repo (this requires modifying the pipeline to ignore triggers initiated by Git commits for these files):
#!/bin/bash
echo "Pushing in EB config changes"
git add myconfigs
git commit -m "Push EB config changes"
git pull
echo "Pushing to master"
git push -u origin master
Future Improvements
Instantiate other resource types
In addition to using Terraform to provision our EB micro-service instances, we are looking at using it to provision supporting infrastructure such as databases, security groups, VPCs, S3 buckets, Lambdas, roles and more.
This would help reduce the large amount of uncontrolled, unfamiliar and manually deployed state in our infrastructure. Since these systems are seldom configured or modified, people tend to be reluctant to tinker with them, and even less inclined to test them appropriately.
More modular design
Terraform supports powerful constructs called modules that allow for better re-use of infrastructure code, improve security and avoid duplication errors.
Breaking up the existing design
Instead of duplicating the .tf files for each environment within an application (e.g. Prod, UAT and Dev) and maintaining them separately, we will switch to modules so that a single parameterised configuration module defines all environments. Only a few parameters would be necessary to define an environment, such as the environment name, CNAME, instance type and instance numbers.
If we extracted a common template for an OVO EB micro-service into a separate module in directory scripts/environment/ovo-microservice-base/, the new file structure would look like this:
scripts/
├── application
│   ├── create_application.tf
│   ├── terraform.tfstate (auto-generated)
│   └── terraform.tfstate.backup (auto-generated)
└── environment
    ├── ovo-microservice-base
    │   ├── main.tf
    │   └── variables.tf
    ├── prd
    │   ├── main.tf
    │   ├── terraform.tfstate (auto-generated)
    │   └── terraform.tfstate.backup (auto-generated)
    └── uat
        ├── main.tf
        ├── terraform.tfstate (auto-generated)
        └── terraform.tfstate.backup (auto-generated)
The Prod and UAT instances of the module are defined in directories scripts/environment/prd/ and scripts/environment/uat/ respectively.
Here are snippets of the HCL declarations in each .tf file:
scripts/environment/prd/main.tf:

module "microservice-base" {
  source        = "../ovo-microservice-base"
  name          = "ovo-microservice-example-prd"
  cname         = "ovo-microservice-example-prd-domain"
  vpc           = "vpc-xxx-prd"
  instance_type = "t2.small"
  environment   = "prd"
  [...]
}
scripts/environment/uat/main.tf:

module "microservice-base" {
  source        = "../ovo-microservice-base"
  name          = "ovo-microservice-example-uat"
  cname         = "ovo-microservice-example-uat-domain"
  vpc           = "vpc-xxx-uat"
  instance_type = "t2.micro"
  environment   = "uat"
  [...]
}
scripts/environment/ovo-microservice-base/variables.tf:

variable "name" {
  description = "name of the environment"
}
variable "instance_type" {
  default = "t2.micro"
}
variable "vpc" {
  description = "The VPC subnet the instance(s) will go in"
}
variable "cname" {}
variable "environment" {
  description = "the environment identifier used by the service during bootup"
}
[...]
scripts/environment/ovo-microservice-base/main.tf:

resource "aws_elastic_beanstalk_environment" "ovo-microservice" {
  name                = "${var.name}"
  application         = "ovo-microservice-example"
  solution_stack_name = "64bit Amazon Linux 2016.xxxx running Docker xxxx"
  cname_prefix        = "${var.cname}"

  setting {
    namespace = "aws:ec2:vpc"
    name      = "VPCId"
    value     = "${var.vpc}"
  }
  setting {
    namespace = "aws:autoscaling:launchconfiguration"
    name      = "InstanceType"
    value     = "${var.instance_type}"
  }
  setting {
    namespace = "aws:autoscaling:asg"
    name      = "Availability Zones"
    value     = "Any 2"
  }
  setting {
    namespace = "aws:elasticbeanstalk:application:environment"
    name      = "environment"
    value     = "${var.environment}"
  }
  [...]
Note that variables can also be injected from the server environment, via a .tfvars file, or via the command line with the -var option.
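The three mechanisms look like this in practice. The instance_type variable follows the module example above; the uat.tfvars file name is our own, and the apply command itself is shown only as a comment.

```shell
#!/bin/sh
# 1. Environment variable: Terraform reads any TF_VAR_-prefixed value.
export TF_VAR_instance_type="t2.micro"

# 2. A .tfvars file, generated here for illustration.
cat > uat.tfvars <<'EOF'
instance_type = "t2.micro"
EOF

# 3. The -var command-line option (with -var-file for the file above):
#      terraform apply -var 'instance_type=t2.micro' -var-file=uat.tfvars
```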
Using remote modules
Another advantage is that modules can be stored remotely (such as in a Git repo), so that updates made to a shared module can automatically affect all environments based on that module.
For example, security changes to a common micro-service template module from DevOps would propagate to all micro-services on the next release of each service. Each service would be brought into compliance without any explicit code changes from the developer teams maintaining their own services.
To call a remote module hosted on GitHub, we simply need to modify the source field in scripts/environment/prd/main.tf to source = "github.com/ovotech/ovo-microservice".
This extends to pulling outputs from another Terraform run via remote state. For example, DevOps could provision the main VPC (using a community VPC template) once, and other teams could refer to the generated VPC IDs from their own Terraform code.
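For example, assuming Terraform 0.7+’s terraform_remote_state data source and an S3 state bucket like the one we use for remote state, a consuming team’s configuration could look like this (the key and output names are illustrative):

```hcl
# Read the outputs of the DevOps team's VPC run from its remote state
# in S3 (the key and the vpc_id output name are assumptions).
data "terraform_remote_state" "vpc" {
  backend = "s3"
  config {
    bucket = "ovo.ebstate"
    key    = "tfstate/main-vpc"
    region = "eu-west-1"
  }
}

# The generated VPC ID can then be referenced as, e.g.:
#   value = "${data.terraform_remote_state.vpc.vpc_id}"
```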
Terraform alternative: CloudFormation
AWS CloudFormation is Amazon’s tool to automatically provision almost every service and resource offered on AWS. Elastic Beanstalk even uses CloudFormation under-the-hood to launch its resources.
Like Terraform, its infrastructure-as-code configuration files are defined in a somewhat more verbose JSON syntax. The Ruby DSL cfn-dsl, the Python library Troposphere and the AWS CloudFormation Designer drag-and-drop web interface are available to simplify the declaration process.
The JSON infrastructure definitions can also be pushed to Git to track changes, reuse, and easily revert to known good configurations. These definition files, referred to as templates, are then uploaded to CloudFormation which then takes care of the creation, updating, and deleting of AWS resources described in a stack (a set of JSON templates).
Like Terraform, CloudFormation can make incremental changes to infrastructure and, using ChangeSets (similar to the terraform plan command), preview those changes before proceeding to determine whether they are in line with expectations. Unlike Terraform, it already supports EB application versions via SourceBundles, as well as creating usable templates from existing AWS resources with CloudFormer.
However, CloudFormation does not explicitly expose its state, and it only supports the AWS ecosystem, whereas Terraform supports many platforms such as Microsoft Azure, Google Cloud, Heroku, etc.
Conclusion
Our experience systematising our EB provisioning process with Terraform has shown great promise, although not without some growing pains. We intend to extend this approach, and the visibility of infrastructure changes, reproducibility, testability and reusability it brings, to all of our constantly-evolving and disparate infrastructure.