
# Caravan Infra AWS

Caravan 2021 AWS

## Prerequisites

- An AWS credentials file at `~/.aws/credentials`, e.g.:

```ini
[default]
aws_access_key_id=AKIAJPZXVYEXAMPLE
aws_secret_access_key=4k6ZilhMPdshU6/kuwEExAmPlE
```

## Prepare environment

You need an AWS S3 bucket (and a DynamoDB table) for the Terraform state, so run the `project-setup.sh` script, passing as arguments:

1. the prefix to give to resource names (see the Terraform inputs)
2. the AWS region
3. the AWS profile

```sh
./project-setup.sh <NAME> <REGION> <PROFILE>
```
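The bucket and table provide Terraform's S3 remote state storage and state locking. As a minimal sketch, the backend configuration they enable looks like the following; the bucket, key, and table names here are illustrative, not necessarily the ones the script generates:

```hcl
terraform {
  backend "s3" {
    bucket         = "<NAME>-terraform-state"      # state bucket created by project-setup.sh (illustrative name)
    key            = "terraform.tfstate"           # object key of the state file (illustrative)
    region         = "<REGION>"                    # region passed to project-setup.sh
    profile        = "<PROFILE>"                   # profile from ~/.aws/credentials
    dynamodb_table = "<NAME>-terraform-state-lock" # lock table created by project-setup.sh (illustrative name)
  }
}
```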

## Running

Edit the generated `aws.tfvars` and then run:

```sh
terraform init -reconfigure -upgrade
terraform apply --var-file aws.tfvars
```
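For reference, a minimal `aws.tfvars` covering the five required inputs could look like the following (all values are examples only; see the Inputs table below):

```hcl
# Example aws.tfvars: every value below is illustrative
prefix                  = "caravan"            # prefix for resource names
region                  = "eu-west-1"          # AWS region to use
awsprofile              = "default"            # AWS profile name
shared_credentials_file = "~/.aws/credentials" # path to the AWS credentials file
personal_ip_list        = ["203.0.113.10/32"]  # IPs allowed to SSH into the VMs
```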

## Requirements

| Name | Version |
|------|---------|
| terraform | ~> 0.15.4 |
| aws | ~> 3.0 |

## Providers

| Name | Version |
|------|---------|
| aws | 3.51.0 |
| dns | 3.2.1 |
| local | 2.1.0 |
| null | 3.1.0 |
| random | 3.1.0 |
| tls | 3.1.0 |

## Modules

| Name | Source | Version |
|------|--------|---------|
| caravan-bootstrap | git::https://github.com/bitrockteam/caravan-bootstrap | refs/tags/v0.2.12 |
| cloud_init_control_plane | git::https://github.com/bitrockteam/caravan-cloudinit | refs/tags/v0.1.13 |
| cloud_init_worker_plane | git::https://github.com/bitrockteam/caravan-cloudinit | refs/tags/v0.1.9 |
| terraform_acme_le | git::https://github.com/bitrockteam/caravan-acme-le | refs/tags/v0.0.1 |
| vpc | terraform-aws-modules/vpc/aws | n/a |

## Resources

| Name | Type |
|------|------|
| aws_acm_certificate.cert | resource |
| aws_autoscaling_group.bastion-service | resource |
| aws_autoscaling_group.hashicorp_workers | resource |
| aws_ebs_volume.consul_cluster_data | resource |
| aws_ebs_volume.csi | resource |
| aws_ebs_volume.nomad_cluster_data | resource |
| aws_ebs_volume.vault_cluster_data | resource |
| aws_iam_instance_profile.control_plane | resource |
| aws_iam_instance_profile.worker_plane | resource |
| aws_iam_role.control_plane | resource |
| aws_iam_role.worker_plane | resource |
| aws_iam_role_policy.csi | resource |
| aws_iam_role_policy.docker_pull | resource |
| aws_iam_role_policy.vault_aws_auth | resource |
| aws_iam_role_policy.vault_client | resource |
| aws_iam_role_policy.vault_kms_unseal | resource |
| aws_instance.hashicorp_cluster | resource |
| aws_instance.monitoring | resource |
| aws_key_pair.hashicorp_keypair | resource |
| aws_kms_key.vault | resource |
| aws_launch_configuration.bastion-service-host | resource |
| aws_launch_template.hashicorp_workers | resource |
| aws_lb.hashicorp_alb | resource |
| aws_lb.hashicorp_nlb | resource |
| aws_lb_listener.bastion-service | resource |
| aws_lb_listener.http_80 | resource |
| aws_lb_listener.http_8500 | resource |
| aws_lb_listener.https_443 | resource |
| aws_lb_listener.this | resource |
| aws_lb_listener_rule.consul | resource |
| aws_lb_listener_rule.nomad | resource |
| aws_lb_listener_rule.vault | resource |
| aws_lb_target_group.alb | resource |
| aws_lb_target_group.bastion-service | resource |
| aws_lb_target_group.consul | resource |
| aws_lb_target_group.ingress | resource |
| aws_lb_target_group.nomad | resource |
| aws_lb_target_group.vault | resource |
| aws_lb_target_group_attachment.consul | resource |
| aws_lb_target_group_attachment.nomad | resource |
| aws_lb_target_group_attachment.this | resource |
| aws_lb_target_group_attachment.vault | resource |
| aws_route53_record.all_cname | resource |
| aws_route53_record.bastion | resource |
| aws_route53_record.consul_cname | resource |
| aws_route53_record.hashicorp_zone_ns | resource |
| aws_route53_record.nomad_cname | resource |
| aws_route53_record.vault_cname | resource |
| aws_route53_zone.hashicorp_zone | resource |
| aws_security_group.alb | resource |
| aws_security_group.allow_cluster_basics | resource |
| aws_security_group.internal_consul | resource |
| aws_security_group.internal_nomad | resource |
| aws_security_group.internal_vault | resource |
| aws_security_group.internal_workers | resource |
| aws_volume_attachment.consul_cluster_ec2 | resource |
| aws_volume_attachment.nomad_cluster_ec2 | resource |
| aws_volume_attachment.vault_cluster_ec2 | resource |
| local_file.backend_tf_appsupport | resource |
| local_file.backend_tf_platform | resource |
| local_file.ssh_key | resource |
| local_file.tfvars_appsupport | resource |
| local_file.tfvars_platform | resource |
| null_resource.ca_certs | resource |
| null_resource.ca_certs_bundle | resource |
| random_pet.env | resource |
| tls_private_key.cert_private_key | resource |
| tls_private_key.ssh_key | resource |
| aws_ami.centos7 | data source |
| aws_ami.debian | data source |
| aws_iam_policy_document.assume_role | data source |
| aws_iam_policy_document.vault_client | data source |
| aws_iam_policy_document.vault_kms_unseal | data source |
| aws_route53_zone.main | data source |
| dns_a_record_set.alb | data source |

## Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| awsprofile | AWS user profile | `string` | n/a | yes |
| personal_ip_list | IP address list for SSH connection to the VMs | `list(string)` | n/a | yes |
| prefix | The prefix of the objects' names | `string` | n/a | yes |
| region | AWS region to use | `string` | n/a | yes |
| shared_credentials_file | AWS credentials file path | `string` | n/a | yes |
| ami_filter_name | Regexp to find the AMI to use, built with caravan-baking | `string` | `"*caravan-centos-image-os-*"` | no |
| ca_certs | Fake certificates from staging Let's Encrypt | `map(object({ filename = string, pemurl = string }))` | `{ "stg-int-r3": { "filename": "letsencrypt-stg-int-r3.pem", "pemurl": "https://letsencrypt.org/certs/staging/letsencrypt-stg-int-r3.pem" }, "stg-root-x1": { "filename": "letsencrypt-stg-root-x1.pem", "pemurl": "https://letsencrypt.org/certs/staging/letsencrypt-stg-root-x1.pem" } }` | no |
| consul_license_file | Path to Consul Enterprise license | `string` | `null` | no |
| control_plane_instance_count | Number of control plane instances | `number` | `3` | no |
| control_plane_machine_type | Control plane instance machine type | `string` | `"t3.micro"` | no |
| csi_volumes | Example: `{ "jenkins": { "availability_zone": "eu-west-1a", "size": "30", "type": "gp3", "tags": { "application": "jenkins_master" } } }` | `map(map(string))` | `{}` | no |
| dc_name | Hashicorp cluster name | `string` | `"aws-dc"` | no |
| enable_monitoring | Enable monitoring | `bool` | `true` | no |
| external_domain | Domain used for endpoints and certs | `string` | `""` | no |
| monitoring_machine_type | Monitoring instance machine type | `string` | `"t3.xlarge"` | no |
| nomad_license_file | Path to Nomad Enterprise license | `string` | `null` | no |
| ports | n/a | `map(number)` | `{ "http": 80, "https": 443 }` | no |
| tfstate_bucket_name | S3 bucket where the Terraform state is stored | `string` | `""` | no |
| tfstate_region | AWS region where the Terraform state resources are | `string` | `""` | no |
| tfstate_table_name | DynamoDB table where the Terraform state lock is acquired | `string` | `""` | no |
| use_le_staging | Use the staging Let's Encrypt endpoint | `bool` | `true` | no |
| vault_license_file | Path to Vault Enterprise license | `string` | `null` | no |
| volume_data_size | Volume size of the control plane data disk | `number` | `20` | no |
| volume_root_size | Volume size of the control plane root disk | `number` | `20` | no |
| volume_size | Volume size of the workers' disks | `number` | `100` | no |
| volume_type | Volume type of the disks | `string` | `"gp3"` | no |
| vpc_cidr | VPC CIDR | `string` | `"10.0.0.0/16"` | no |
| vpc_private_subnets | VPC private subnets | `list(string)` | `["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]` | no |
| vpc_public_subnets | VPC public subnets | `list(string)` | `["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]` | no |
| worker_plane_machine_type | Worker plane instance machine type | `string` | `"t3.large"` | no |
| workers_group_size | Number of worker plane instances | `number` | `3` | no |

## Outputs

| Name | Description |
|------|-------------|
| PROJECT_APPSUPP_TFVAR | Caravan Application Support tfvars |
| PROJECT_PLATFORM_TFVAR | Caravan Platform tfvars |
| PROJECT_WORKLOAD_TFVAR | Caravan Workload tfvars |
| ca_certs | Let's Encrypt staging CA certificates |
| cluster_public_ips | Control plane public IP addresses |
| control_plane_iam_role_arns | Control plane IAM role list |
| control_plane_role_name | Control plane role name |
| csi_volumes | n/a |
| hashicorp_endpoints | Hashicorp cluster endpoints |
| load_balancer_ip_address | Load balancer IP address |
| region | AWS region |
| vpc_id | VPC ID |
| worker_node_service_account | Worker plane ARN |
| worker_plane_iam_role_arns | Worker plane IAM role list |
| worker_plane_role_name | Worker plane role name |

## Cleaning up

After `terraform destroy -var-file=aws.tfvars`, remove the state bucket and DynamoDB table by running the `project-cleanup.sh` script:

```sh
./project-cleanup.sh <NAME> <REGION> <PROFILE>
```