[toc]
This repository consists of:
- An Ansible playbook
- A number of Ansible templates that generate AWS CloudFormation templates, driven by an external configuration file
Combined with the configuration file, the Ansible playbook creates a set of AWS CloudFormation templates, and deploys these templates to your AWS account.
Example execution command:
ansible-playbook CreateOrUpdateEnv.yml --extra-vars configfile=/path/to/your/environment/config/file
Requirements:
- A Docker engine when using the dockerwrapper to build and deploy the templates
- A local Ansible and AWS CLI client installation when not using the dockerwrapper
When the switch -e create_changeset=yes is added to the command, nothing is changed on the account. Instead, a change set is created for all processed templates (--tags also works), and a change set report is printed at the end of the playbook.
An example:
$ ansible-playbook CreateOrUpdateEnv.yml \
--extra-vars configfile=~/projects/IxorDocs/aws-ixor.ixordocs-dev-ixordocs-config/config.yml \
--extra-vars create_changeset=yes \
--tags=bastion,cw
...
TASK [Dump changeset report] *************************************************************
ok: [localhost] => {
"changeset.stdout_lines": [
"IxordocsDevBastion",
"Modify BastionHost True",
"Modify RecordSetForBastionHost Conditional",
"IxordocsDevCW",
"Modify AwsLambdaCWLogsSubscription False",
"Modify AwsLambdaCWLogsSubscriptionPermission Conditional",
"Modify AwsLambdaEC2InstallCWAgent False",
"Modify AwsLambdaEC2InstallCWAgentPermission Conditional",
"Modify CWEventRuleCWCreateLogGroup False",
"Modify CWEventRuleSSMInstallCWAgent False",
"Remove TestPolicy None",
"IxordocsDevBastion",
"Modify BastionHost True",
"Modify RecordSetForBastionHost Conditional",
"IxordocsDevCW",
"Modify AwsLambdaCWLogsSubscription False",
"Modify AwsLambdaCWLogsSubscriptionPermission Conditional",
"Modify AwsLambdaEC2InstallCWAgent False",
"Modify AwsLambdaEC2InstallCWAgentPermission Conditional",
"Modify CWEventRuleCWCreateLogGroup False",
"Modify CWEventRuleSSMInstallCWAgent False",
"Remove TestPolicy None",
"IxordocsDevBastion",
"Modify BastionHost True",
"Modify RecordSetForBastionHost Conditional",
"IxordocsDevCW",
"Modify AwsLambdaCWLogsSubscription False",
"Modify AwsLambdaCWLogsSubscriptionPermission Conditional",
"Modify AwsLambdaEC2InstallCWAgent False",
"Modify AwsLambdaEC2InstallCWAgentPermission Conditional",
"Modify CWEventRuleCWCreateLogGroup False",
"Modify CWEventRuleSSMInstallCWAgent False",
"Remove TestPolicy None",
"IxordocsDevBastion",
"Modify BastionHost True",
"Modify RecordSetForBastionHost Conditional",
"IxordocsDevCW",
"Modify AwsLambdaCWLogsSubscription False",
"Modify AwsLambdaCWLogsSubscriptionPermission Conditional",
"Modify AwsLambdaEC2InstallCWAgent False",
"Modify AwsLambdaEC2InstallCWAgentPermission Conditional",
"Modify CWEventRuleCWCreateLogGroup False",
"Modify CWEventRuleSSMInstallCWAgent False",
"Remove TestPolicy None",
"IxordocsDevBastion",
"Modify BastionHost True",
"Modify RecordSetForBastionHost Conditional",
"IxordocsDevCW",
"Modify AwsLambdaCWLogsSubscription False",
"Modify AwsLambdaCWLogsSubscriptionPermission Conditional",
"Modify AwsLambdaEC2InstallCWAgent False",
"Modify AwsLambdaEC2InstallCWAgentPermission Conditional",
"Modify CWEventRuleCWCreateLogGroup False",
"Modify CWEventRuleSSMInstallCWAgent False",
"Remove TestPolicy None",
"IxordocsDevBastion",
"Modify BastionHost True",
"Modify RecordSetForBastionHost Conditional",
"IxordocsDevCW",
"Modify AwsLambdaCWLogsSubscription False",
"Modify AwsLambdaCWLogsSubscriptionPermission Conditional",
"Modify AwsLambdaEC2InstallCWAgent False",
"Modify AwsLambdaEC2InstallCWAgentPermission Conditional",
"Modify CWEventRuleCWCreateLogGroup False",
"Modify CWEventRuleSSMInstallCWAgent False",
"Remove TestPolicy None"
]
}
PLAY RECAP ******************************************************************************************
localhost : ok=39 changed=14 unreachable=0 failed=0
The Ansible templates used to create the AWS CloudFormation templates evolve, much like any other kind of code. Sometimes, that evolution has a price, and that price is backward compatibility.
To make sure that an aws-cfn-gen configuration file will still build after a backward compatibility breaking change, the build environment should not change over time.
This is solved by using a Docker image to create the AWS CloudFormation templates and to deploy these templates to the desired account.
The Docker image contains a specific combination of Ansible and AWS CLI versions, and running the Docker image with the right set of environment variables allows the user to choose the tag of this repository to check out for the build and deploy.
The Docker image is called ixor/ansible-aws-cfn-gen and can be found here. The documentation for this DockerWrapper is in the file README_DOCKERWRAPPER.md in this repository.
All generated CloudFormation templates are checked for correctness by a linter. Templates that fail the check will cause the play to fail.
Sometimes, however, it is not possible (or at least not easy) to fix certain issues on short notice. Every case should be evaluated thoroughly, but when required and possible, the lint checks can be skipped for a specific resource using the Ansible --skip-tags option (an example command follows the tag list below).
Available tags:
- linter or cfn-lint to skip all lint checks
- Resource-specific tags in the form linter-<resource>:
  - linter-vpc
  - linter-vpcendpoints
  - linter-sgrules
  - linter-kms
  - linter-secretemanager
  - linter-rdsparametergroups
  - linter-rds
  - linter-chatnotifications
  - linter-bastion
  - linter-ecr
  - linter-ecsmgmt
  - linter-route53delegation
  - linter-iam
  - linter-lambda
  - linter-lambdacloudfront
  - linter-cloudwatch
  - linter-efs
  - linter-dynamodb
  - linter-loadbalancers
  - linter-sns
  - linter-s3
  - linter-cloudfront
  - linter-route53
  - linter-ecs
  - linter-ecs2
  - linter-wafassociations
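For example, to run the play but skip the lint checks for the S3 and CloudFront templates (the configuration file path is illustrative):

ansible-playbook CreateOrUpdateEnv.yml \
  --extra-vars configfile=/path/to/your/environment/config/file \
  --skip-tags=linter-s3,linter-cloudfront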
If, for example, you have a configuration file config.yml that is tested and approved with:
- Ansible v2.6.1
- AWS CLI version 1.6
- v0.1.0 of the template files in this repository
your dockerwrapper script will look like this:
#! /bin/bash
GITTAG=v0.1.0
docker run --rm \
-e AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID} \
-e AWS_SECRET_ACCESS_KEY="${AWS_SECRET_ACCESS_KEY}" \
-e AWS_REGION=${AWS_REGION:-eu-central-1} \
-e AWS_SESSION_TOKEN=${AWS_SESSION_TOKEN} \
-e GITTAG=${GITTAG:-v0.1.0} \
-e ANSIBLE_TAGS=${ANSIBLE_TAGS:-} \
-e CONFIG=config.yml \
-v ${PWD}:/config \
-it \
ixor/ansible-aws-cfn-gen:2.6.1-aws-1.6
To create the AWS CloudFormation template files and deploy them to the AWS account of your choice, follow these steps:
- Start your (Docker) engines
- Set AWS credentials for that account. The roles and policies linked to the user that owns these credentials should be able to create all the configured resources in that account. The first step when the container is started is to compare the account ID you are logged in to with the account ID in the configuration file. If they do not match, the build is cancelled.
- (optional) Set the environment variables ANSIBLE_TAGS and ANSIBLE_SKIPTAGS to limit the execution of the playbook to just the services you include, minus the services you exclude (see the sketch after this list).
- Run the dockerwrapper script created above by running bash dockerwrapper
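A minimal sketch of those last two steps; the tag values are illustrative:

export ANSIBLE_TAGS=s3,iam
export ANSIBLE_SKIPTAGS=deploy
bash dockerwrapper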
- AWS Application Load Balancers (ALB)
- AWS ECR
- AWS ECS Cluster
- AWS ECS Tasks and Services
- S3 buckets
- Route 53 private Hosted Zones
- IAM Users, Roles and Policies
- CloudFront distribution
- DynamoDB tables
The following modules have their own documentation file:
- Elastic Container Registry
- Elastic Container Service
- The management ECS Fargate cluster
- KMS
- RDS
- RDS Parameter Group
- SecretsManager
The templates for certain resources may depend on resources created by other templates. Those templates obviously have to be deployed for the dependencies to exist.
Therefore, the order of resource creation in the main playbook (CreateOrUpdateEnv.yml) should not be changed, unless there is a dependency issue and changing the order does not introduce new dependencies.
The project configuration is used throughout the templates, mostly to prefix resource names and resource logical names to guarantee uniqueness.
All cfn_name properties are used to create the CloudFormation logical resource names. Those names can only contain letters and numbers.
organization:
name: Acme
cfn_name: Acme
project:
name: my_big_project-prd
shortname: mbp-prd
These tags can be used in Cost Explorer to create reports per environment and per application, for example.
application: mybigapplication
env: prd
These are stacks (usually only the VPC stack) created by these templates, and used throughout aws-cfn-gen. As said in the Dependencies and Prerequisites section, this is meant to disappear over time.
referenced_stacks:
VPCStackName: "VPCForAcmePrd"
This is mostly used for sanity checks (are you running on the account you think you are running on?) and for the region.
target_account:
name: "acme.mybigapplication-prd"
account_id: "123456789012"
region: "eu-central-1"
Example configuration:
vpc:
stackname: "MyVPCCFNStackName"
name: "MyVPC"
safe_ssh_01: "1.2.3.4/32"
safe_ssh_02: "1.2.3.5/32"
create_rds_subnets: true
nfs_for_sg_app: true
environment: "dev"
cidr: 10.121
nr_of_azs: 3
application: "myapp"
Running the environment setup with this config will create these resources, following the AWS VPC Reference Architecture:
- A public subnet xxx.yyy.0.0/24 (i.e. for the bastion host)
- An IGW
- A NAT GW (only one; while not redundant, this saves on the bill)
- 3 (or 2) private subnets for applications:
  - xxx.yyy.10.0/24
  - xxx.yyy.11.0/24
  - xxx.yyy.12.0/24
- 3 (or 2) public subnets for ELB:
  - xxx.yyy.20.0/24
  - xxx.yyy.21.0/24
  - xxx.yyy.22.0/24
- 3 (or 2) public subnets for RDS (optional):
  - xxx.yyy.30.0/24
  - xxx.yyy.31.0/24
  - xxx.yyy.32.0/24
- 2 routing tables (private and public)
- The necessary security groups to allow:
  - ssh traffic to the public subnet from safe_ssh_01 and safe_ssh_02
  - HTTP and HTTPS traffic from everywhere to the load balancer subnets
  - HTTP and HTTPS traffic from the load balancer subnets to the application subnets
  - Database traffic (MySQL, PostgreSQL and SQL Server) from the application subnets to the RDS subnets
  - (optional) NFS traffic from the application subnets
The name used to create the CloudFormation stack. This name is also used when referencing the VPC stack in the referenced_stacks list.
The name of the VPC.
The first 2 bytes of the network CIDR for the VPC. Will be extended with .0.0/16 to form a complete CIDR.
Used to create a Security Group for the public subnets that allows ssh traffic from a limited (range of) IP addresses.
Should subnets and a subnet group be created for RDS?
Should the application subnets be allowed to use NFS (i.e. for EFS)?
The application environment (dev, acc, prd, ....). Used to tag resources.
The name of the application. Used to tag resources.
The number of AZs to create subnets in. Default is 2; if set to 3, 3 subnets will be created for the LB, private and RDS subnets.
An example:
bastion:
instance_type: t2.micro
route53_sns_topic: arn:aws:sns:eu-central-1:123456789012:RequestRoute53CNAMEZ123456789012
hostname: "bastion-myaccount"
eip: true
domain: "acme.com"
keypair_name: "id_rsa_myaccount"
pubkeys:
- owner: "user01"
key: "ssh-rsa ........"
- owner: "user02"
key: "ssh-rsa ........"
hostkeys:
- type: "ecdsa-sha2-nistp256"
location: "/etc/ssh/ssh_host_ecdsa_key"
key: "-----BEGIN EC PRIVATE KEY-----\\nMHcCAQEEIA\\nANOTHERLINE\\n...."
When this configuration is present in the configuration file, and the aws-cfn-gen
stack is run, these resources will be created:
- An EC2 instance
- A Route53 RecordSet (optional)
When set, a Custom resource will be created that triggers the creation of a Route53 RecordSet on the AWS account where the domain is managed. The current account needs to have permission to post events to the SNS topic.
If this property is not defined, no Route53 RecordSet will be created.
Only required if bastion.route53_sns_topic
is set.
Create an Elastic IP address for the bastion instance and assign it to the instance if this property exists and is true.
Default value is false.
Only required if bastion.route53_sns_topic
is set.
The name of an existing SSH key pair.
A list of dictionaries with these keys:
- user: The name of the owner of the SSH public key
- key: The SSH public key string
To avoid having to accept the host's host key after every re-creation of the bastion host, you can save the host keys and have them re-created when the instance is re-instantiated.
The value for bastion.hostkeys is a list of dictionaries with these keys:
- type: The type of the host key (i.e. ecdsa-sha2-nistp256)
- location: The full path of the file for the private key
- key: The SSH private key string, on one line; add newlines with \\n
Create a scheduled CloudWatch event or a CloudWatch rule and attach a target
by importing a value (ARN
) from the exports of another CloudFormation stack.
cw:
auto_config_log_group_lambda_s3_key: "cw-logs-new-stream-to-lambda-9...ed50.zip"
log_group_settings:
retention_in_days: 14
filter_pattern: "-DEBUG"
logshipper_lambda_function_arn_import: "MyLogshipperLambdaImport"
event_rules:
- name: "Demo"
source: "aws.logs"
detail_type: "AWS API Call via CloudTrail"
event_source: "logs.amazonaws.com"
event_name:
- "CreateLogGroup"
description: "Emit event whenever a CreateLogGroup API call is made"
targets:
- type: "import"
value: "MyLambdaImport"
scheduled_rules:
- name: "ScheduledEventDaily6AM"
description: "Triggers daily at 6 AM"
schedule_expression: "cron(0 6 * * ? *)"
targets:
- type: "import"
value: "MyLambdaImport"
The value for filter_pattern
in cw.log_group_settings
is described in the AWS documentation. Use an empty string to disable the filter.
Let's start with an example:
lambda_functions:
- name: aws-lambda-s3-logs-to-cloudwatch
handler: handler
runtime: nodejs8.10
role: S3ToCloudwatchLogsRole
vpc: true | false
code:
s3_bucket: "{{ lambda_function_bucket_name }}"
s3_key: aws-lambda-s3-logs-to-cloudwatch-06b0c5cda86555d95f5939bedeca17830c81ff98.zip
environment:
- name: LOGGROUP_NAME
value: lb_access_logs
- name: LOGSTREAM_NAME
value: albint
invoke_permissions:
- type: predefined
description: "Allows bucket events to trigger this lambda function"
name: s3
bucket_arn: "arn:aws:s3:::ixordocs-dev-accesslogs-albint"
How to use the same function more than once in an environment?
Sometimes, the same function needs to be used more than once, for example when there are different triggers or a different set of environment variables that influence the execution and the result of the function.
To achieve this, create identical blocks (with different environment variables or whatever else changes), and give the name a suffix that starts with an underscore.
The name determines:
- The CloudFormation resource name
- The name of the function (i.e. the name of the file in the zip defined by 's3://' + code.s3_bucket + '/' + code.s3_key), unless function_name is defined
The name can contain:
- letters
- numbers
- hyphens
- 0 or 1 underscores, used to differentiate the CFN resource name in case of multiple instances of the same function
If the name contains an underscore, the part before the underscore is used to determine the function name, and the complete string is used, after some CFN-related transformation, as the CloudFormation resource name.
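For example, a sketch of deploying the same function twice with different environment variables; the suffixes and variable values are illustrative:

lambda_functions:
  - name: aws-lambda-s3-logs-to-cloudwatch_albint
    handler: handler
    runtime: nodejs8.10
    code:
      s3_bucket: "{{ lambda_function_bucket_name }}"
      s3_key: aws-lambda-s3-logs-to-cloudwatch-06b0c5cda86555d95f5939bedeca17830c81ff98.zip
    environment:
      - name: LOGGROUP_NAME
        value: lb_access_logs_int
  - name: aws-lambda-s3-logs-to-cloudwatch_albext
    handler: handler
    runtime: nodejs8.10
    code:
      s3_bucket: "{{ lambda_function_bucket_name }}"
      s3_key: aws-lambda-s3-logs-to-cloudwatch-06b0c5cda86555d95f5939bedeca17830c81ff98.zip
    environment:
      - name: LOGGROUP_NAME
        value: lb_access_logs_ext

Both entries deploy the same zip; the part before the underscore determines the function name, while the full name keeps the CloudFormation resource names unique.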
Assign a fixed name to the Lambda function if this property is present. Changing this name will cause the resource to be re-created (and the old resource to be removed). This is at the user's own risk.
The function will be in the (private) application subnets defined by vpc_privatesubnet_az*
and
the associated Security Group will be vpc_sg_app
.
Determine the principals that are allowed lambda:InvokeFunction
for the
Lambda function.
Used to create a role that grants the required permissions to the Lambda.
lambda_functions:
- name: aws-lambda-myFunction
...
execution_role_permissions:
- type: sns
Usage is identical to lambda_functions
, but use lambda_functions_cloudfront
instead.
To make this possible, some other changes have to be made to the account configuration; this is taken care of by https://github.com/rik2803/aws-account-config:
- Create a Lambda bucket in all regions you use
- Deploy the lambda functions to those different buckets
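As stated above, the structure is identical to lambda_functions; a minimal sketch, where the function name and code location are purely illustrative:

lambda_functions_cloudfront:
  - name: aws-lambda-cloudfront-example
    handler: handler
    runtime: nodejs8.10
    code:
      s3_bucket: "{{ lambda_function_bucket_name }}"
      s3_key: aws-lambda-cloudfront-example-0123456789abcdef.zip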
Some IAM resources are implicitly created by other components, i.e. in CloudFront to allow a user to invalidate the CloudFront distributions.
But it is sometimes useful to be able to create your own roles and policies. This can be accomplished by using these configuration sections:
- managed_policies
- awsroles (this used to be called roles, but Ansible complained about the use of its reserved word)
- iam_users: Create users, assign policies and (optionally) create credentials
A list of policies, syntax is identical to the CloudFormation syntax for AWS::IAM::ManagedPolicy
.
managed_policies:
- name: S3DeployArtifactsAccess
policy_document:
Version: '2012-10-17'
Statement:
- Sid: AllowReadAccessToS3CentralDeployArtifacts
Effect: Allow
Action:
- s3:GetObject
- s3:GetObjectAcl
- s3:ListBucket
Resource:
- arn:aws:s3:::acme-s3-deploy-artifacts
- arn:aws:s3:::acme-s3-deploy-artifacts/*
A list of roles, syntax is identical to the CloudFormation syntax for AWS::IAM::Role
.
awsroles:
- name: MyECSGeneralTaskRole
policy_arns:
# The last part in arn:aws:iam::account_id:policy/policyname
- S3DeployArtifactsAccess
assumerole_policy_document:
Version: '2012-10-17'
Statement:
- Sid: ''
Effect: Allow
Principal:
Service: ecs-tasks.amazonaws.com
Action: sts:AssumeRole
iam_users:
- name: s3-deploy
cfn_name: S3DeployUser
managed_policies:
- S3DeployArtifactsAccess
create_accesskeys: true
Before 0.1.5, managed policies for iam_users were interpreted as a policy name and extended to arn:aws:iam::123456789012:policy/<name>. From version 0.1.5 on, the full ARN can also be specified.
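For example, a sketch where the second policy entry (the full ARN form accepted from 0.1.5 on) is illustrative:

iam_users:
  - name: s3-deploy
    cfn_name: S3DeployUser
    managed_policies:
      - S3DeployArtifactsAccess
      - arn:aws:iam::123456789012:policy/SomeOtherManagedPolicy
    create_accesskeys: true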
Moved to here.
loadbalancers is a list of, you guessed it, loadbalancers.
It creates a typical loadbalancer, with these components:
- An ALB or Application Load Balancer (AWS::ElasticLoadBalancingV2::LoadBalancer). This can be an internet-facing loadbalancer (scheme: internet-facing) or an internal loadbalancer (scheme: internal).
- The Security Groups and subnets used for the loadbalancer are extracted from the VPC stack mentioned before. That stack uses AWS's reference architecture and matches most setups.
- An HTTP listener on both internet-facing and internal loadbalancers.
- An HTTPS listener on the internet-facing loadbalancer. This requires a certificate for TLS termination.
- A default target group for HTTP and HTTPS
- Additional rules and target groups, created for the services defined in applicationconfig
- (optional) Redirects, see below under redirects
loadbalancers:
- name: ALBExt
scheme: "internet-facing"
certificate_arn: "arn:aws:acm:eu-central-1:123456789012:certificate/55555555-4444-4444-7777-555555555555"
listener_certificate_list:
- cfn_name: "Cert2"
arn: "arn:aws:acm:eu-central-1:123456789012:certificate/88888888-2222-6666-9999-111111111111"
def_tg_http_healthcheckpath: /health
def_tg_https_healthcheckpath: /health
- name: ALBInt
scheme: "internal"
idle_timeout_seconds: 120
accesslogs:
state: enabled
log_expiry_days: 14
s3_objectcreated_lambda_import: StackName-LambdaTriggeredOnS3ObjectCreation
cw_logs:
log_group_name: lb_loggroup_name
cw_logs_subscription_filter:
type: lambda
lambda_cfn_export_name: ExportName
filter_pattern: "-DEBUG"
Each element in the list has 2 properties:
- cfn_name: An alphanumerical string used to uniquely name the CloudFormation resource
- arn: The ARN of an existing and validated certificate in the same account and region as the load balancer it is used for
For more information on the subject, also see here
The SSL/TLS policy to use for the HTTPS listener. It defaults to
ELBSecurityPolicy-FS-1-2-Res-2019-08
, and can have any value from the list you can find
here.
When access_logs is defined and state is enabled, the following resources are created:
- An S3 bucket named {{ application }}-{{ env }}-accesslogs-{{ lbname }}
- A lifecycle rule that expires the access logs after log_expiry_days days
- A bucket policy that allows the AWS ALB account in the current region to write to that bucket
- An s3.ObjectCreated trigger to a lambda function if accesslogs.s3_objectcreated_lambda_import is defined. That Lambda function can, for example, be used to ship the S3 logs to CloudWatch.
And the loadbalancer will get the attributes required to enable access logs, as specified here.
- access_logs.cw_logs.log_group_name: The log group to be created, and where s3_objectcreated_lambda_import will send the logs to
The CW logs subscription filter to assign to the log group. It can be a Lambda function that sends the logs to a service such as DataDogHQ.
- access_logs.cw_logs_subscription_filter.type: Currently only lambda
- access_logs.cw_logs_subscription_filter.lambda_cfn_export_name: Used if type == lambda; an export from another CloudFormation stack that returns the ARN of the Lambda function to be used
- access_logs.cw_logs_subscription_filter.filter_pattern: The (optional) filter to apply to the subscription filter. Can be a positive or a negative filter.
Default is 60; sets the LB LoadBalancerAttribute named idle_timeout.timeout_seconds to this value.
loadbalancers:
- name: ALBExtRedirectTest
...
redirects
is a list of dicts that define URLs to redirect.
A hostname that will trigger the redirect.
A path_pattern string that will trigger the redirect.
Determines the order of the redirect rules.
One of these strings:
- HTTP_301 for a permanent redirect
- HTTP_302 for a temporary redirect
Skip the creation of a Route 53 record if true.
- redirects[n].skiproute53: Skip in both the public and the private hosted zone
- redirects[n].skiproute53public: Skip in the public hosted zone
- redirects[n].skiproute53private: Skip in the private hosted zone
TODO
Create S3 buckets.
Other S3 buckets might be created implicitly by other components (i.e. CloudFront), but s3 can be used to explicitly create buckets.
s3:
- name: mybucket
cfn_name: MyBucket
access_control: Private
object_ownership: ObjectWriter
static_website_hosting: no
versioning: {Enabled|Suspended}
skip_output: {true|false}
lifecycle_configuration: |
Rules:
- ExpirationInDays: 14
cors:
allowed_headers:
- '*'
allowed_methods:
- 'GET'
- 'PUT'
allowed_origins:
- '*'
tags:
- key: "ass:s3:backup-and-empty-bucket-on-stop"
value: "yes"
- key: "ass:s3:backup-and-enpty-bucket-on-stop-acl"
value: "private"
- key: "..."
value: "..."
The name for the bucket. The resulting name will be the value of this variable,
prefixed with {{ application }}-{{ env }}-
.
application: mybigapplication
env: prd
...
s3:
- name: mybucket
cfn_name: MyBucket
access_control: Private
static_website_hosting: no
For the above configuration, the resulting bucket will be named mybigapplication-prd-mybucket
.
The name to be used for the CloudFormation logical resource.
The final CloudFormation logical name will be {{ cfn_project }}{{ bucket.cfn_name }}.
This setting grants predefined permissions to the bucket. All objects created after this setting is set or updated will get that ACL.
See here for valid values.
Valid values:
- yes or Yes
- true or True
- on or On
All other values will not enable website hosting on the bucket.
Important: This potentially exposes objects to the evil internet.
Enable or disable (suspend) bucket versioning.
Allowed values:
Enabled
Suspended
The default behaviour is to create an output for an S3 bucket; use this property to skip the creation of the output.
This property was added to avoid the number of outputs reaching 60, which is an AWS limit on the number of outputs per stack.
Use the exact same yaml as described in Amazon S3 Bucket Rule.
If lifecycle_configuration
is not specified, the default lifecycle rule is:
Rules:
- NoncurrentVersionExpirationInDays: 60
Status: Enabled
Add CORS permissions to the bucket. This is optional; when omitted, no CORS settings will be applied, which is the default AWS behaviour.
You can specify all properties like this:
s3:
- name: mybucket
cors:
allowed_headers:
- '*'
allowed_methods:
- 'GET'
- 'PUT'
allowed_origins:
- '*'
exposed_headers:
- 'Header1'
- 'Header2'
Or:
s3:
- name: mybucket
cors: yes
The value does not matter, the presence of the cors
property will apply these CORS settings to the
bucket:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<AllowedMethod>PUT</AllowedMethod>
<AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>
Omitting a property from the configuration will use the following defaults:
- allowed_origins: ['*']
- allowed_methods: ['GET', 'PUT']
- allowed_headers: ['*']
- No defaults for exposed_headers
Give custom tags to S3 buckets (e.g. ass:s3:backup-and-empty-bucket-on-stop):
- ass:s3:backup-and-empty-bucket-on-stop: lets ass-start-stop know whether this bucket is part of the backup process
- ass:s3:backup-and-enpty-bucket-on-stop-acl: ACL setting for ass-start-stop (default: private)
tags:
- key: "ass:s3:backup-and-empty-bucket-on-stop"
value: "yes"
- key: "ass:s3:backup-and-enpty-bucket-on-stop-acl"
value: "private"
- key: "..."
value: "..."
applicationconfig
is a list of applications to run in the ECS cluster. Each
element in the applicationconfig
list contains the application description.
For each service, a lot of resources are created:
- ECS Task Definition
- ECS Service
- R53 Record Sets
- The application properties are also used by the ALB stack to create:
  - ALB Listener rules
  - ALB Target Group
  - Target Health checks
  - ...
- CloudWatch log group
- Metric filter to create a custom CW metric to monitor service restarts. The Metric Filter looks for the string determined by the property applicationconfig[n].monitoring.start_filter_string (default value is Started Application in)
- Alarm on that metric filter; the target is the SNS queue created when the AWS account was set up (also see aws-account-config)
- The flow for the ServiceStartAlert is:
  - The ECS task is started
  - The logs are sent to the CW log group for that service
  - The Metric Filter scans for the start_filter_string and ...
  - ... creates a custom CloudWatch metric
  - The CloudWatch alarm is triggered
  - An event is sent to the SNS queue (created by the AWS account configuration, aws-account-config)
  - All subscribers to the SNS topic will receive the event (e.g. Slack, Chat)
- name: "servicename"
cfn_name: ServiceName
target: "ecs"
environment:
- name: JAVA_TOOL_OPTIONS
value: "-Xmx2048m"
monitoring:
start_filter_string: "string to match in service logs"
alarm_actions_enabled: "true"
ecs:
image: "123456789012.dkr.ecr.eu-central-1.amazonaws.com/example/service:latest"
containerport: 8080
memory: 2048
cpu: 512
desiredcount: 2
healthcheckgraceperiodseconds: 3600
task_role_arn: "arn:aws:iam::123456789012:role/ECSTaskRole"
ulimits:
- name: nofile
hard_limit: 102400
soft_limit: 102400
lb:
name: ALBExt
### Can be public or private, determines if DNS entries are created in the public
### or private hosted zones.
type: public
healthcheckpath: /actuator/health
healthcheckokcode: "200"
domains:
- name: example.com
cfn_name: ExampleCom
listener_rule_host_header: service.example.com
priority: 210
- name: voorbeeld.be
cfn_name: VoorbeeldBe
listener_rule_host_header: service.voorbeeld.be
priority: 211
skiproute53: false
The role ECSExecutionRoleAwsCfnGen
that is implicitly created by the IAM
module will be used as ExecutionRoleArn
in the task definitions if no
execution_role_arn
is defined in the service configuration.
Important: The name should only contain letters, numbers, hyphens and colons. Underscores are not allowed.
Important: The name should not be changed once the service has been created. If it is changed, the service and the related resources might be recreated, which will cause downtime.
The name defines the name to be used for the service. It is also used to create related resources:
- Listener Rule name in the
ALB.yml
template - Target Group CloudFormation export for use in other templates in
ALB.yml
- Name of the CloudWatch Log Group in
ECS.yml
- Name of the Task Definition in
ECS.yml
CloudFormation logical names are restricted to letters and numbers only. All cfn_
properties are used
for naming CloudFormation resource logical names.
Where and how the service will be running. Currently supports ecs and ecs_scheduled_task.
- ecs: The container will run as a service, is always available and is monitored
- ecs_scheduled_task: The task will be started much like a cron job is
Only used for ecs_scheduled_task.
Default value: cron(0 3 * * ? *)
See here for more information on the syntax.
Only used for ecs_scheduled_task.
Default value: ENABLED
Allowed values: ENABLED or DISABLED
See here for more information on the syntax.
A list of key-value pairs to add to the environment variables of the running container
environment:
- name: JAVA_TOOL_OPTIONS
value: "-Xmx2048m"
applicationconfig:
- name: MyApplication
monitoring:
start_filter_string: "string to match in service logs"
alarm_actions_enabled: "true"
- Default: Started Application in
- Allowed values: any valid string
Determines whether or not an alarm will trigger its actions.
- Default: "false"
- Allowed values: "false" or "true"
The image to run in the container. This can be a ECR repository, or a (public) Docker Hub repository.
Private Docker Hub repositories are not supported at the moment.
The port the service inside the container is listening on. When the task is started, a port mapping will be created by the ECS Agent (which also runs in a Docker container), and that port will be registered with the Target Group to which the service is linked, in order for loadbalancing to do its job.
The number of MB to reserve for the container. If the container requires more memory than is available (i.e. not reserved) on any of the ECS cluster nodes, the task will not be started. This will be logged in the service's events in the AWS Console.
The value is ignored if application[n].ecs.memory_reservation
is also set.
Same as application[n].ecs.memory, but with the difference that more memory can be used by the container when memory is available on the ECS instance node. Conversely, when the ECS Agent looks for memory, it will require the extra memory allocated above the application[n].ecs.memory_reservation value to be freed.
This property takes precedence over application[n].ecs.memory.
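A sketch of an ecs block combining both properties; the values are illustrative:

ecs:
  image: "123456789012.dkr.ecr.eu-central-1.amazonaws.com/example/service:latest"
  containerport: 8080
  memory: 2048              # hard limit, ignored here because memory_reservation is set
  memory_reservation: 1024  # soft reservation; the container can use more when memory is available
  cpu: 512
  desiredcount: 2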
One of x86_64 (default) or ARM64.
Will mostly be LINUX. See the AWS documentation for other values.
The number of CPU shares to allocate to the running container. Each vCPU on AWS
accounts for 1024 CPU shares. The available number of CPU shares in the cluster is
1024 * sum_of_vCPUs_of_all_clusternodes
.
For a list of vCPUs per instance type, look here.
The number of instances to start and maintain for that service.
- name: "servicename"
cfn_name: ServiceName
target: "ecs"
...
ecs:
image: "123456789012.dkr.ecr.eu-central-1.amazonaws.com/example/service:latest"
...
ulimits:
- name: nofile
hard_limit: 102400
soft_limit: 102400
ulimits is a list of dicts with this structure:
- name: The name of the ulimit property to change. Must be one of: core, cpu, data, fsize, locks, memlock, msgqueue, nice, nofile, nproc, rss, rtprio, rttime, sigpending, stack
- hard_limit
- soft_limit
See also here.
Describes how the services will behave when a service is redeployed.
The maximum number of tasks, specified as a percentage of the Amazon ECS service's
DesiredCount value, that can run in a service during a deployment. To calculate
the maximum number of tasks, Amazon ECS uses this formula: the value of
DesiredCount * (the value of the MaximumPercent/100)
, rounded down to the nearest
integer value.
(From the AWS Documentation)
The minimum number of tasks, specified as a percentage of the Amazon ECS service's
DesiredCount value, that must continue to run and remain healthy during a deployment.
To calculate the minimum number of tasks, Amazon ECS uses this formula: the value of
DesiredCount * (the value of the MinimumHealthyPercent/100)
, rounded up to the
nearest integer value.
(From the AWS Documentation)
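For example, with a DesiredCount of 2, a MaximumPercent of 150 and a MinimumHealthyPercent of 50, a deployment may run at most floor(2 * 150/100) = 3 tasks and must keep at least ceil(2 * 50/100) = 1 healthy task running.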
Important: Changing the LB requires the AWS::ECS::Service resource to be recreated. Since the framework assigns a name to the AWS::ECS::Service resource, this means that the template will fail unless either:
- the application[n] with the changed loadbalancer is first deleted (remove it from the configuration file and deploy) and then recreated
- or the application[n].name is changed
The name of the LoadBalancer behind which the service should be put. External services should get their traffic from the external load balancer, while internal services should be put behind an internal load balancer. Internal traffic can be HTTP, while external traffic should be HTTPS.
Only used if DNS records have to be made for external services. This only works if the
domain is hosted in Route 53, and if the value of the property is public
.
The path to check the health of the service.
The HTTP code that reflects a healthy service. For example:
- 200
- 200-299
- 200-499
Values in the 500 range cannot be used.
applicationconfig:
- name: myapp
...
lb:
name: mylb
...
targetgroup_attributes:
- key: deregistration_delay.timeout_seconds
value: 0
- key: ...
value: ...
Allowed values are described in the CloudFormation documentation for
AWS::ElasticLoadBalancingV2::TargetGroup
. The list currently includes
these attributes:
deregistration_delay.timeout_seconds
slow_start.duration_seconds
stickiness.enabled
stickiness.type
stickiness.lb_cookie.duration_seconds
A list of domains, used to:
- Create LoadBalancer target group rules (ALB.yml)
- Create private and public Route53 record sets for the service endpoints
An example:
application_config:
- name: myapp
...
domains:
- name: acme.com
cfn_name: AcmeCom
cfn_name_suffix: ep1
listener_rule_host_header: ep1.acme.com
priority: 1
- name: acme.com
cfn_name: AcmeCom
cfn_name_suffix: ep2
listener_rule_host_header: ep2.acme.com
priority: 2
The name of the parent domain in which the service lives.
This name should comply with the AWS CloudFormation resource naming convention. The cfn_name of the Route53 hosted zone that corresponds with the domain the service lives in should match this cfn_name.
The optional cfn_name_suffix
in applicationconfig[n].domains[n]
can be used
if 2 service endpoints within the same parent domain should be directed to this
service's target group.
The value of the property will be appended to the CloudFormation resource name for the Route53 recordset.
The property is optional to guarantee backward compatibility with existing environments.
When an incoming request's host header matches the value of this property (and
the optional listener_rule_path_pattern
), it will be directed to the
Target Group for the service.
Optional path pattern.
The order of the rule in the Target Group for the service. The lower the value, the earlier the rule will be checked for incoming traffic.
Assign higher priority values to more general rules, so that more specific rules do not end up never being reached.
See docs/ECR.md
See docs/Cloudfront.md
An example:
dynamodb:
- table_name: journal
backup: true
attributes:
- attribute_name: par
attribute_type: S
- attribute_name: num
attribute_type: N
key_schema:
- attribute_name: par
key_type: HASH
- attribute_name: num
key_type: RANGE
billing_mode: PROVISIONED | PAY_PER_REQUEST
provisioned_throughput:
read_capacity_units: 5
write_capacity_units: 5
This setting enables or disables PITR (point-in-time recovery) for the table.
Allowed values:
- true
- false (default)
billing_mode can have 2 values:
- PROVISIONED (default)
- PAY_PER_REQUEST
provisioned_throughput is ignored if billing_mode is PAY_PER_REQUEST.
See the AWS documentation for more details.
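For example, a sketch of an on-demand table (the table and attribute names are illustrative); provisioned_throughput is simply omitted:

dynamodb:
  - table_name: events
    attributes:
      - attribute_name: id
        attribute_type: S
    key_schema:
      - attribute_name: id
        key_type: HASH
    billing_mode: PAY_PER_REQUEST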
IMPORTANT: All subscription names should be unique across all SNS topics. Using the same name twice will result in the creation of the last occurrence only!
sns:
- display_name: mytopic
topic_name: mytopic
subscriptions:
- name: subscr01
endpoint_export: mysubscriptionexport
subscription_protocol: lambda
This creates:
- an SNS topic
- subscriptions for the topic (optional)
- permissions to invoke a lambda function if the subscription protocol is lambda
The name of a CloudFormation export that contains the ARN to the resource that subscribes to the topic.
The ARN to the resource that subscribes to the topic.
The documentation for this module has been moved to docs/ECSMgmt.md
.
Important: This setup should be done only on the account where the hosted zones are defined.
+------------------------------------------------------+ +----------------------------+
| Route 53 Tooling Account| | Application Account |
| +--------------------+ | | |
| | | | |+--------------------------+|
| | | | || CloudFormation Template ||
| | | Lambda f() SNS Queue | || ||
| | | +----------+ +---------+ | || +----------------------+ ||
| | +----------------+ | | | | | | | || | | ||
| | | R53 Record Set <-+---+ <----+ | <-+----+--+Custom::CNAME Resource| ||
| | +----------------+ | | | | | | | || | | ||
| | | +----------+ +---------+ | || +----------------------+ ||
| +--------------------+ | |+--------------------------+|
+------------------------------------------------------+ +----------------------------+
The Hosted Zone itself should already be created in the target account (usually the tooling account for the organization).
This allows the AWS accounts which have been granted access to this functionality to remotely add records to a hosted zone by using a custom CloudFormation resource.
This config file creates the above resources.
An example:
route53_delegation:
hostedzone:
- domain: "acme.com"
- id: "XXXXXXXXXXX"
- account_id: "123456789012"
allowed_accounts:
- name: account description
account_id: 234567890123
- name: account description
account_id: 345678901234
- name: account description
account_id: 456789012345
- name: account description
account_id: 567890123456
- hostedzone.domain: The domain name of the hosted zone
- hostedzone.id: The Route53 ID of the hosted zone
- hostedzone.account_id: The AWS account-id that owns the hosted zone
- allowed_accounts: The list of AWS account IDs that are allowed to remotely manage Route 53 record sets for the Hosted Zone using the CLI or CloudFormation
This will create the following resources on the account that hosts the Hosted Zone:
- An S3 bucket that holds the lambda function
- A Lambda function from a file on the S3 bucket
- The SNS topic that will trigger the Lambda function. This SNS topic is also required when using the custom CloudFormation resource to manage the Route53 Record Sets.
- An SNS Topic Policy
- A service policy for Lambda to allow the Route53 actions
- A role for CLI access
The full configuration of the DD log shipper takes these steps:
- Configure the DD Log Shipper Lambda
- Configure the Lambda function that automatically onboards new CloudWatch log groups and adds a subscription filter to those log groups
lambda_functions:
- name: aws-lambda-datadog-logshipper
handler: lambda_handler
runtime: python2.7
code:
s3_bucket: "{{ lambda_function_bucket_name }}"
s3_key: aws-lambda-datadog-logshipper-4c4579dfe5ab32ca8c5b9ecd8eb06b1281e5a5b7.zip
environment:
- name: APPLICATION
value: "{{ application }}"
- name: ENVIRONMENT
value: "{{ env }}"
- name: DD_API_KEY
value: xxxxxxxxxxxxxxxxxxxxxxxxxx
invoke_permissions:
- type: predefined
description: "Allows CloudWatch log events to trigger this lambda function"
name: logs
- s3_key: The name of the Lambda ZIP file on the S3 bucket; it only requires a change if the function itself changes
- The APPLICATION and ENVIRONMENT environment variables are used to add metadata to the logged entries to allow for better filtering
- The DD_API_KEY determines to which DD account the logs are sent
cw:
auto_config_log_group_lambda_s3_key: "cw-logs-new-stream-to-lambda-5de112e77e72fe069784d795412880499551fe5b.zip"
log_group_settings:
retention_in_days: 14
filter_pattern: "-DEBUG"
logshipper_lambda_function_arn_import: "AppEnvLambda-AwsLambdaDatadogLogshipperArn"
- retention_in_days (optional): How long log streams are kept in CW logs
  - Default settings: production: 180 days, testing|staging: 14 days
  - Manual setup is also possible
- filter_pattern: Determines the filter to be applied to incoming messages. See here for the syntax.
- logshipper_lambda_function_arn_import: The Lambda to send the logs to. If the destination is DataDogHQ, see the section above, but you can also provide your own function, export its ARN and use that instead.
Because of the potential dependency between the ALB's TargetGroup and the service, removing a service is not as straightforward as it should be.
- TargetGroup for a service is created during the ALB setup
- The TargetGroup is referred to by the service's Service definition during the ECS setup
- ECS is always run after ALB
- Removing a service from the config file causes the ALB template to try to delete the TargetGroup, but that action fails because it is still used in the Service definition.
Ideally, the TargetGroup should be created in the ECS template, but that is a breaking change, requiring a fresh roll-out of the environments.
- Edit the configuration file and remove the service from it
- Generate the CloudFormation templates without applying them. This can be done in 2 ways:
  - When using dockerwrapper, export these environment variables before starting the dockerwrapper script: ANSIBLE_SKIPTAGS=deploy and ANSIBLE_TAGS=ecs,alb,route53
  - When using the ansible-playbook command, add --tags=alb,ecs,route53 --skip-tags=deploy to the command line (see the sketch after this list)
- Now, go to the AWS console and update the ECS stack with the template you created in the previous step. This will remove the service and its dependency on the TargetGroup
- Next, update the loadbalancer template(s); this step will remove (among others) the TargetGroup