Creating an AMI with Image Builder
The code that accompanies this blog post can be found here
I’ve been working with AWS Image Builder a lot more over the last couple of months, replacing a Packer setup that used to run on a Windows laptop.
From the AWS Image Builder landing page:
EC2 Image Builder simplifies the building, testing, and deployment of Virtual Machine and container images for use on AWS or on-premises.
Keeping Virtual Machine and container images up-to-date can be time consuming, resource intensive, and error-prone. Currently, customers either manually update and snapshot VMs or have teams that build automation scripts to maintain images.
Image Builder significantly reduces the effort of keeping images up-to-date and secure by providing a simple graphical interface, built-in automation, and AWS-provided security settings. With Image Builder, there are no manual steps for updating an image nor do you have to build your own automation pipeline.
Image Builder is offered at no cost, other than the cost of the underlying AWS resources used to create, store, and share the images.
There are some caveats when using Image Builder:
- EBS encryption by default should be off. An encrypted volume cannot be exported to an alternative image format.
- The S3 bucket the exported image will be stored in should use the Amazon S3 managed keys (SSE-S3) for encryption.
- AWS encourages you to use IMDSv2 when running EC2 instances. This requires adjustments to any scripts querying the instance metadata, as well as an adjustment to the maximum number of hops for an HTTP PUT request.
More information on these caveats can be found later in this post.
Creating an Image Builder pipeline #
To create an Image Builder pipeline, the following resources are needed:
- IAM roles with permissions for building the Amazon Machine Image (AMI), for lifecycle management of the created AMIs, and for exporting the AMI to an additional image format
- An S3 bucket to export the additional image format to
- Any custom components for building your custom AMI
- An image recipe
- An infrastructure configuration
- A distribution configuration
- The Image Builder pipeline
- (Optional) an SNS topic
In the GitHub repository I’ve linked, the code for the IAM roles can be found in iam.tf, the code for the S3 bucket in s3.tf, the code for SNS in sns.tf, and the code for the remaining resources in main.tf.
Make sure you’re using at least version 5.74.0 of the Terraform AWS provider, to be able to enjoy these enhancements:
- In version 5.74.0, support was added to the aws_imagebuilder_distribution_configuration resource for exporting the AMI to S3.
- In version 5.59.0, support was added to the aws_imagebuilder_image_pipeline resource for setting the workflow of the pipeline.
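If you manage the provider constraint yourself rather than using the linked repository as-is, a minimal version pin could look like this:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.74.0"
    }
  }
}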
Custom components #
A component can have multiple steps, in either of the two phases build or test. It must have at least one step, and can contain steps for both build and test.
The first phase that is run is the build phase. This is where the initial image is built. After the image has been created, a new EC2 instance (or container) is started from that image, to run the test steps of the used components. A sketch of what a test step could look like follows the example component below.
In the example, I’m using a simple component, which sets the timezone of the AMI to Europe/Amsterdam during the build phase.
resource "aws_imagebuilder_component" "set_timezone" {
name = join("-", [var.name, "set-timezone-linux"])
description = "Sets the timezone to Europe/Amsterdam"
platform = "Linux"
version = "1.0.0"
skip_destroy = false # Setting this to true retains any previous versions
data = yamlencode({
schemaVersion = 1.0
phases = [{
name = "build"
steps = [
{
name = "SetTimezone"
action = "ExecuteBash"
onFailure = "Abort"
inputs = {
commands = [
"timedatectl set-timezone Europe/Amsterdam"
]
}
}
]
}]
})
}
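To illustrate the test phase, the same component document could get a second phase entry that verifies the result of the build step. This is a sketch, not part of the linked repository:
  # Sketch: appended to the phases list of the component document above.
  {
    name = "test"
    steps = [
      {
        name      = "VerifyTimezone"
        action    = "ExecuteBash"
        onFailure = "Abort"
        inputs = {
          commands = [
            # Fails the step (and with it the pipeline) when the timezone was not applied.
            "timedatectl show --property=Timezone | grep -q 'Europe/Amsterdam'"
          ]
        }
      }
    ]
  }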
Image recipe #
The image recipe brings together all the ‘ingredients’ that make the image.
The recipe is where you define the source image (parent_image) you’re building on, override settings of the source image, and add your own or AWS managed components, which are executed in the order they’re listed in the recipe (per phase).
You can also have the SSM agent removed after building the image. This is useful if the image will be used outside of AWS, where the SSM agent has no use.
resource "aws_imagebuilder_image_recipe" "this" {
# Currently the service only supports x86-based images for import or export.
name = join("-", [var.name, "image-recipe"])
parent_image = "arn:aws:imagebuilder:eu-west-1:aws:image/amazon-linux-2023-ecs-optimized-x86/x.x.x"
version = "1.0.0"
block_device_mapping {
# The device name is the same device name as the root volume of the selected AMI,
# which means we're overriding (some of) the root disk configuration in the AMI.
# In this case we're increasing the size of the disk from 20 GB to 40 GB.
device_name = "/dev/xvda"
no_device = false
ebs {
delete_on_termination = true
volume_size = 40
volume_type = "gp3"
encrypted = false
iops = 3000
throughput = 125
}
}
# Add the components to the recipe.
# Recipes require a minimum of one build component, and can have a maximum of 20 build and test components in any combination.
# Components are executed in the order they are listed here.
component {
# Here we're adding an AWS managed component to install the AWS CLI
component_arn = "arn:aws:imagebuilder:${data.aws_region.current.name}:aws:component/aws-cli-version-2-linux/x.x.x"
}
component {
# Here we're adding our custom component
component_arn = aws_imagebuilder_component.set_timezone.arn
}
systems_manager_agent {
# Set this to false to keep the SSM agent installed after building the image.
uninstall_after_build = true
}
lifecycle {
# Adding resources to the replace_triggered_by, ensures that replacing a resource doesn't fail because of dependencies.
# Instead, this resource will be replaced as well.
replace_triggered_by = [
aws_imagebuilder_component.set_timezone
]
}
}
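The recipe (and the resources that follow) reference the current region and account through data sources. If you’re not using the linked repository, you’ll need definitions like these; the names match the references used throughout this post:
data "aws_region" "current" {}

data "aws_caller_identity" "account" {}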
Infrastructure configuration #
The infrastructure configuration defines what instance type(s) can be used to build and test the image, which subnet the build/test instances should use, as well as which security group(s) should be attached to the instance. The instance profile to use is also defined here, as well as the SNS topic to send messages to upon either success or failure of the pipeline run.
If no subnet ID and security group IDs are provided, a subnet from the default VPC will be used, with the default security group. When providing a subnet ID, one or more security group IDs must also be provided.
If you run into issues during the build phase, you can set terminate_instance_on_failure to false. This means the build instance will not be terminated, and can be used to investigate the issue.
In this example, IMDSv2 is used (http_tokens = required). Also see here for more information about the http_put_response_hop_limit.
resource "aws_imagebuilder_infrastructure_configuration" "this" {
name = join("-", [var.name, "infrastructure-config"])
description = "Infrastructure Configuration for ${var.name}."
instance_profile_name = aws_iam_instance_profile.imagebuilder_build.name
instance_types = var.instance_types
sns_topic_arn = aws_sns_topic.this.arn
# If you want to keep the instance when an error occurs, so you can debug the issue, set this to false
terminate_instance_on_failure = true
# When not providing a subnet id and security group id(s),
# Image Builder uses a subnet in the default VPC with the default security group.
security_group_ids = var.security_group_ids
subnet_id = var.subnet_id
instance_metadata_options {
http_tokens = "required"
http_put_response_hop_limit = 1 # Increase this to 3 when building a container image
}
tags = {
ImageType = "CustomisedAmazonLinux2023Image"
}
}
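The infrastructure configuration references a handful of input variables. A minimal set of definitions could look like the following; the defaults are assumptions for illustration, not values taken from the linked repository:
variable "name" {
  description = "Name prefix for the Image Builder resources"
  type        = string
}

variable "instance_types" {
  description = "Instance types Image Builder may use to build and test the image"
  type        = list(string)
  default     = ["t3.medium"]
}

variable "subnet_id" {
  description = "Subnet to launch the build/test instance in (null = a subnet in the default VPC)"
  type        = string
  default     = null
}

variable "security_group_ids" {
  description = "Security groups to attach to the build/test instance"
  type        = list(string)
  default     = null
}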
Distribution configuration #
The distribution configuration tells Image Builder how to name the output AMI, how to distribute the output AMI to different accounts, regions, and organisations, and how to export the AMI to an alternative image format (VHD, VMDK or RAW).
resource "aws_imagebuilder_distribution_configuration" "this" {
name = join("-", [var.name, "distribution-config"])
description = "Distribution Configuration for ${var.name}."
distribution {
region = data.aws_region.current.name
ami_distribution_configuration {
name = join("-", [var.name, "{{ imagebuilder:buildDate }}-{{ imagebuilder:buildVersion }}"])
kms_key_id = null
ami_tags = {
ImageType = "CustomisedAmazonLinux2023Image"
}
}
s3_export_configuration {
role_name = aws_iam_role.vmexport.name
disk_image_format = upper(var.image_export_format)
s3_bucket = aws_s3_bucket.this.id
}
}
}
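The export format is driven by a variable. Since the service only accepts VHD, VMDK or RAW, a validation rule can catch typos early; this variable definition is an assumption that mirrors the upper(var.image_export_format) reference above:
variable "image_export_format" {
  description = "Disk image format to export the AMI to (vhd, vmdk or raw)"
  type        = string
  default     = "vmdk"

  validation {
    condition     = contains(["VHD", "VMDK", "RAW"], upper(var.image_export_format))
    error_message = "image_export_format must be one of vhd, vmdk or raw."
  }
}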
Image Builder pipeline #
The Image Builder pipeline is what ties all the previous resources together. It orchestrates building the image, and can additionally trigger scanning of the output AMI for security issues. Amazon Inspector must be enabled in the account to be able to scan the image.
The pipeline also defines the workflow to use. By default, a workflow that runs both the build and the test phases is used. In the example, no test components are used, so we’re shaving some time off the pipeline runtime by selecting an AWS managed workflow that only runs the build phase. When changing the default workflow, an execution role must also be provided.
The pipeline can be scheduled to run at certain intervals using cron expressions; an example schedule expression is shown after the pipeline resource below. The pipeline can also be triggered by EventBridge rules, which require additional resources that are not included in this sample.
When no schedule is provided, the pipeline can only be run manually, or when targeted by an EventBridge rule.
resource "aws_imagebuilder_image_pipeline" "this" {
name = join("-", [var.name, "image-pipeline"])
description = "Pipeline to create the custom image for ${var.name}"
image_recipe_arn = aws_imagebuilder_image_recipe.this.arn
infrastructure_configuration_arn = aws_imagebuilder_infrastructure_configuration.this.arn
distribution_configuration_arn = aws_imagebuilder_distribution_configuration.this.arn
image_scanning_configuration {
# Amazon Inspector needs to be enabled for the account when setting this to true
image_scanning_enabled = false
}
image_tests_configuration {
image_tests_enabled = true
timeout_minutes = 720
}
# When changing the workflow from default, an execution role must also be provided
execution_role = "arn:aws:iam::${data.aws_caller_identity.account.account_id}:role/aws-service-role/imagebuilder.amazonaws.com/AWSServiceRoleForImageBuilder"
workflow {
# We're setting an AWS managed workflow, that only executes Build-steps of the component. No testing or validation is done.
workflow_arn = "arn:aws:imagebuilder:${data.aws_region.current.name}:aws:workflow/build/build-image/x.x.x"
}
# Here you can set one or more schedules, to automate image building.
dynamic "schedule" {
for_each = var.schedule_expression != null ? [1] : []
content {
schedule_expression = var.schedule_expression
}
}
lifecycle {
# Adding resources to the replace_triggered_by, ensures that replacing a resource doesn't fail because of dependencies.
# Instead, this resource will be replaced as well.
replace_triggered_by = [
aws_imagebuilder_image_recipe.this
]
}
}
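The schedule expression uses the EventBridge cron syntax. Below is a sketch of the variable backing the dynamic schedule block, with an example value that would run the pipeline daily at 00:00 UTC; the default and the example value are assumptions for illustration:
variable "schedule_expression" {
  description = "Optional cron expression to run the pipeline on a schedule"
  type        = string
  default     = null
}

# Example value, e.g. in terraform.tfvars:
# schedule_expression = "cron(0 0 * * ? *)"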
More info on the caveats of using Image Builder #
EBS Encryption by default #
To check if EBS encryption by default is enabled, we can use the following AWS CLI command:
$ aws ec2 get-ebs-encryption-by-default
{
"EbsEncryptionByDefault": false
}
If it’s true, check for Service Control Policies (SCPs) and/or other tooling used for managing the organisation/accounts. If this feature has been enabled through an SCP or other automated way, disable it for the account you’ll be running Image Builder in.
To manually disable EBS encryption by default, the following AWS CLI command can be used:
$ aws ec2 disable-ebs-encryption-by-default
{
"EbsEncryptionByDefault": false
}
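If you manage account-level settings with Terraform, the same setting can be controlled with the aws_ebs_encryption_by_default resource. A sketch, not part of the linked repository:
# Regional, account-level setting: disables EBS encryption by default.
resource "aws_ebs_encryption_by_default" "this" {
  enabled = false
}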
S3 bucket encryption #
To be able to export the image to an S3 bucket, the bucket needs to use the default Amazon S3 managed keys (SSE-S3) for encryption. Otherwise the export will fail with an InsufficientPermissions exception.
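In Terraform, that translates to a server-side encryption configuration on the export bucket using the AES256 (SSE-S3) algorithm. A sketch, assuming the aws_s3_bucket.this bucket from the linked repository:
resource "aws_s3_bucket_server_side_encryption_configuration" "this" {
  bucket = aws_s3_bucket.this.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}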
Instance Metadata Service (IMDS) #
AWS encourages the use of IMDSv2 over IMDSv1. Version 2 is more secure, but requires some adjustments to any existing script using IMDSv1 when querying the instance metadata.
IMDSv1:
curl http://169.254.169.254/
IMDSv2:
TOKEN=$(curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -H "X-aws-ec2-metadata-token: $TOKEN" -v http://169.254.169.254/
Basic check to see if IMDSv1 or IMDSv2 is enabled:
if [ -z "$(curl -s http://169.254.169.254/)" ]; then echo "Instance has been configured to use IMDSv2."; fi
More information can be found here
Another setting that will need adjustment when using Image Builder for building a container image is the metadata hop limit (HttpPutResponseHopLimit), which should be set to 2 or 3.
More information on the IMDS options can be found here.
Conclusion #
My goal with this post was to show you how you can start using Image Builder to automate creating your custom AMIs or container images, and to help you get over that initial hurdle of looking into Image Builder.
It also shows the issues I ran into while implementing Image Builder for a project I’m working on, and how to overcome those.
If you have any feedback on this post, please reach out to me.