Releases: nasa/cumulus-orca
v10.0.1
Release v10.0.1
Important information
This release is only compatible with Cumulus v18.5.0 and up.
- Full Change Comparison: v10.0.0...v10.0.1
Added
- ORCA-920 - Fixed ORCA deployment failure when sharing an RDS cluster with Cumulus, caused by multiple IAM role association attempts. Added a new boolean variable deploy_rds_cluster_role_association; when multiple ORCA/Cumulus stacks share the same RDS cluster in the same account, the second (and any subsequent) stack should override it to false.
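The override can be applied when deploying the second stack; a minimal sketch, assuming deploy_rds_cluster_role_association is exposed as a root-level Terraform variable in your deployment (it may instead be set directly in the ORCA module block of orca.tf):

```bash
# Hypothetical override for a second ORCA/Cumulus stack that shares an
# existing RDS cluster; the first stack keeps the default value.
terraform apply -var="deploy_rds_cluster_role_association=false"
```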
v10.0.0
Release v10.0.0
Important information
This release is only compatible with Cumulus v18.5.0 and up.
- Full Change Comparison: v9.0.5...v10.0.0
Migration Notes
Remove the s3_access_key and s3_secret_key variables from your orca.tf file.
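Before redeploying, it may be worth confirming that no references remain; a minimal check, assuming orca.tf sits in your current deployment directory:

```bash
# Flag any leftover references to the removed variables (path is illustrative).
grep -n -E "s3_(access|secret)_key" orca.tf
```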
Post V2 Upgrade Comparison
Once the Aurora V1 database has been migrated/upgraded to Aurora V2, you can verify the data integrity of the ORCA database by deploying the EC2 comparison instance found at modules/db_compare_instance/main.tf
- Deployment Steps
1. Fill in the variables in modules/db_compare_instance/scripts/db_config.sh (a filled-in example is sketched after this list):
   - archive_bucket - ORCA archive bucket name. IMPORTANT: use underscores in place of dashes, e.g. zrtest_orca_archive
   - v1_endpoint - Endpoint of the V1 cluster, e.g. orcaV1.cluster-c1xufm1sp0ux.us-west-2.rds.amazonaws.com
   - v1_database - Database of the V1 cluster, e.g. orca_db
   - v1_user - Username of the V1 cluster, e.g. orcaV1_user
   - v1_password - Password for the V1 user, e.g. OrcaDBPass_4
   - v2_endpoint - Endpoint of the V2 cluster, e.g. orcaV2.cluster-c1xufm1sp0ux.us-west-2.rds.amazonaws.com
   - v2_database - Database of the V2 cluster, e.g. orca_db2
   - v2_user - Username of the V2 cluster, e.g. orcaV2_user
   - v2_password - Password for the V2 user, e.g. OrcaDB2Pass_9
2. cd to modules/db_compare_instance
3. Run terraform init
4. Run terraform apply
5. Once the instance is deployed, add an inbound rule with the private IP of the EC2 instance to BOTH the V1 and V2 database security groups.
   - The private IP of the instance can be found via the console or AWS CLI by running the command:
     aws ec2 describe-instances --filters "Name=instance-state-name,Values=running" "Name=instance-id,Values=<INSTANCE_ID>" --query 'Reservations[*].Instances[*].[PrivateIpAddress]' --output text
   - The inbound rule can be added via the AWS console or AWS CLI by running the command:
     aws ec2 authorize-security-group-ingress --group-id <DB_SECURITY_GROUP_ID> --protocol tcp --port 5432 --cidr <INSTANCE_PRIVATE_IP>/32
6. Connect to the EC2 instance via the AWS console or AWS CLI with the command:
   aws ssm start-session --target <INSTANCE_ID>
7. Once connected, run the command cd /home
8. From the /home directory, run the command sh db_compare.sh
9. When the script completes it will output two tables:
   - v1_cluster - the count of data in each table of the ORCA database in the V1 cluster.
   - v2_cluster - the count of data in each table of the ORCA database in the V2 cluster.
10. Verify that the output for the V2 database matches that of the V1 database to ensure no data was lost during the migration.
11. Once verified, the EC2 instance can be destroyed by running terraform destroy (verify you are in the modules/db_compare_instance directory).
12. Remove the inbound rules that were added in step 5 from BOTH the V1 and V2 security groups, either in the AWS console or via the AWS CLI by running the command:
    aws ec2 revoke-security-group-ingress --group-id <DB_SECURITY_GROUP_ID> --protocol tcp --port 5432 --cidr <INSTANCE_PRIVATE_IP>/32
13. Delete the V1 database.
14. Remove the snapshot identifier from the Terraform (if applicable).
15. In the AWS console, navigate to RDS -> Snapshots and delete the snapshot the V2 database was restored from.
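For reference, a filled-in db_config.sh might look like the sketch below; all values are the illustrative placeholders from step 1, not real endpoints or credentials, and the sketch assumes the script uses plain shell variable assignments:

```bash
# modules/db_compare_instance/scripts/db_config.sh -- illustrative values only
archive_bucket="zrtest_orca_archive"   # underscores in place of dashes

v1_endpoint="orcaV1.cluster-c1xufm1sp0ux.us-west-2.rds.amazonaws.com"
v1_database="orca_db"
v1_user="orcaV1_user"
v1_password="OrcaDBPass_4"

v2_endpoint="orcaV2.cluster-c1xufm1sp0ux.us-west-2.rds.amazonaws.com"
v2_database="orca_db2"
v2_user="orcaV2_user"
v2_password="OrcaDB2Pass_9"
```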
Added
- ORCA-845 - Created IAM role for RDS S3 import needed for Aurora v2 upgrade.
- ORCA-792 - Added DB comparison script at modules/db_compare_instance/scripts/db_compare.sh for the temporary EC2 to compare databases post migration.
- ORCA-868 - Added EC2 instance for DB comparison after migration under modules/db_compare_instance/main.tf
Changed
- ORCA-832 - Modified psycopg2 installation to allow for SSL connections to the database.
- ORCA-795 - Modified Graphql task policy to allow for S3 imports.
- ORCA-797 - Removed S3 credential variables from the deployment-with-cumulus.md and s3-credentials.md documentation since they are no longer used with the Aurora v2 DB.
- ORCA-873 - Modified build task script to copy schemas into a schema folder to resolve errors.
- ORCA-872 - Updated graphql version, modified the policy in modules/iam/main.tf to resolve errors, and added DB role attachment to modules/graphql_0/main.tf
- 530 - Added explicit s3:GetObjectTagging and s3:PutObjectTagging actions to the IAM restore_object_role_policy
Deprecated
Removed
- ORCA-793 - Removed s3_access_key and s3_secret_key variables from Terraform.
- ORCA-795 - Removed s3_access_key and s3_secret_key variables from GraphQL code and from the get_current_archive_list task.
- ORCA-798 - Removed s3_access_key and s3_secret_key variables from integration tests.
- ORCA-783 - Removed tasks/copy_to_archive_adapter and tasks/orca_recovery_adapter as they are handled by Cumulus.
Fixed
- ORCA-835 - Fixed ORCA documentation bamboo CI/CD pipeline showing node package import errors.
- ORCA-864 - Updated ORCA archive bucket policy and IAM role to fix access denied error during backup/recovery process.
Security
- ORCA-851 - Updated bandit libraries to fix Snyk vulnerabilities.
v10.0.0-beta
Release v10.0.0-beta
Migration Notes
Remove the s3_access_key and s3_secret_key variables from your orca.tf file.
Post V2 Upgrade Comparison
Once the Aurora V1 database has been migrated/upgraded to Aurora V2, you can verify the data integrity of the ORCA database by deploying the EC2 comparison instance found at modules/db_compare_instance/main.tf
- Deployment Steps
1. Fill in the variables in modules/db_compare_instance/scripts/db_config.sh:
   - archive_bucket - ORCA archive bucket name. IMPORTANT: use underscores in place of dashes, e.g. zrtest_orca_archive
   - v1_endpoint - Endpoint of the V1 cluster, e.g. orcaV1.cluster-c1xufm1sp0ux.us-west-2.rds.amazonaws.com
   - v1_database - Database of the V1 cluster, e.g. orca_db
   - v1_user - Username of the V1 cluster, e.g. orcaV1_user
   - v1_password - Password for the V1 user, e.g. OrcaDBPass_4
   - v2_endpoint - Endpoint of the V2 cluster, e.g. orcaV2.cluster-c1xufm1sp0ux.us-west-2.rds.amazonaws.com
   - v2_database - Database of the V2 cluster, e.g. orca_db2
   - v2_user - Username of the V2 cluster, e.g. orcaV2_user
   - v2_password - Password for the V2 user, e.g. OrcaDB2Pass_9
2. cd to modules/db_compare_instance
3. Run terraform init
4. Run terraform apply
5. Once the instance is deployed, add an inbound rule with the private IP of the EC2 instance to BOTH the V1 and V2 database security groups.
   - The private IP of the instance can be found via the console or AWS CLI by running the command:
     aws ec2 describe-instances --filters "Name=instance-state-name,Values=running" "Name=instance-id,Values=<INSTANCE_ID>" --query 'Reservations[*].Instances[*].[PrivateIpAddress]' --output text
   - The inbound rule can be added via the AWS console or AWS CLI by running the command:
     aws ec2 authorize-security-group-ingress --group-id <DB_SECURITY_GROUP_ID> --protocol tcp --port 5432 --cidr <INSTANCE_PRIVATE_IP>/32
6. Connect to the EC2 instance via the AWS console or AWS CLI with the command:
   aws ssm start-session --target <INSTANCE_ID>
7. Once connected, run the command cd /home
8. From the /home directory, run the command sh db_compare.sh
9. When the script completes it will output two tables:
   - v1_cluster - the count of data in each table of the ORCA database in the V1 cluster.
   - v2_cluster - the count of data in each table of the ORCA database in the V2 cluster.
10. Verify that the output for the V2 database matches that of the V1 database to ensure no data was lost during the migration.
11. Once verified, the EC2 instance can be destroyed by running terraform destroy (verify you are in the modules/db_compare_instance directory).
12. Remove the inbound rules that were added in step 5 from BOTH the V1 and V2 security groups, either in the AWS console or via the AWS CLI by running the command:
    aws ec2 revoke-security-group-ingress --group-id <DB_SECURITY_GROUP_ID> --protocol tcp --port 5432 --cidr <INSTANCE_PRIVATE_IP>/32
13. Delete the V1 database.
14. Remove the snapshot identifier from the Terraform (if applicable).
15. In the AWS console, navigate to RDS -> Snapshots and delete the snapshot the V2 database was restored from.
Added
- ORCA-845 - Created IAM role for RDS S3 import needed for Aurora v2 upgrade.
- ORCA-792 - Added DB comparison script at modules/db_compare_instance/scripts/db_compare.sh for the temporary EC2 to compare databases post migration.
- ORCA-868 - Added EC2 instance for DB comparison after migration under modules/db_compare_instance/main.tf
Changed
- ORCA-832 - Modified psycopg2 installation to allow for SSL connections to the database.
- ORCA-795 - Modified Graphql task policy to allow for S3 imports.
- ORCA-797 - Removed S3 credential variables from the deployment-with-cumulus.md and s3-credentials.md documentation since they are no longer used with the Aurora v2 DB.
- ORCA-873 - Modified build task script to copy schemas into a schema folder to resolve errors.
- ORCA-872 - Updated graphql version, modified the policy in modules/iam/main.tf to resolve errors, and added DB role attachment to modules/graphql_0/main.tf
Deprecated
Removed
- ORCA-793 - Removed s3_access_key and s3_secret_key variables from Terraform.
- ORCA-795 - Removed s3_access_key and s3_secret_key variables from GraphQL code and from the get_current_archive_list task.
- ORCA-798 - Removed s3_access_key and s3_secret_key variables from integration tests.
- ORCA-783 - Removed tasks/copy_to_archive_adapter and tasks/orca_recovery_adapter as they are handled by Cumulus.
Fixed
- ORCA-835 - Fixed ORCA documentation bamboo CI/CD pipeline showing node package import errors.
- ORCA-864 - Updated ORCA archive bucket policy and IAM role to fix access denied error during backup/recovery process.
Security
- ORCA-851 - Updated bandit libraries to fix Snyk vulnerabilities.
v9.0.5
Release v9.0.5
Important information
This release is only compatible with Cumulus v18.x.x and up.
- Full Change Comparison: v9.0.4...v9.0.5
Migration Notes
If you are deploying ORCA for the first time or migrating from v6, no changes are needed.
If you are currently on v8 or v9, you already have the load balancer deployed and need to delete the load balancer target group before deploying this version. Terraform cannot delete an existing load balancer target group that has a listener attached, and adding an HTTPS listener to the target group requires replacing it. Once the target group is deleted, you should be able to deploy ORCA.
- From the AWS EC2 console, go to your load balancer named <prefix>-gql-a and select the Listeners and rules tab. Delete the rule.
- Delete your target group <random_name>-gql-a (a CLI alternative for these two steps is sketched below). The target group name has been randomized to avoid a Terraform resource error.
- Deploy ORCA.
If deployed correctly, the target group health checks should show as healthy.
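For operators who prefer the CLI, the two deletion steps above can be approximated as follows; this is a hedged sketch, and the load balancer name is an assumption based on the <prefix>-gql-a naming pattern described above:

```bash
# Look up the listener rules attached to the <prefix>-gql-a load balancer.
LB_ARN=$(aws elbv2 describe-load-balancers --names "<PREFIX>-gql-a" \
  --query 'LoadBalancers[0].LoadBalancerArn' --output text)
LISTENER_ARN=$(aws elbv2 describe-listeners --load-balancer-arn "$LB_ARN" \
  --query 'Listeners[0].ListenerArn' --output text)
aws elbv2 describe-rules --listener-arn "$LISTENER_ARN"

# Delete the non-default rule, then the old target group.
aws elbv2 delete-rule --rule-arn <RULE_ARN>
aws elbv2 delete-target-group --target-group-arn <TARGET_GROUP_ARN>
```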
For the DR buckets, modify the bucket policy and remove the line that contains "s3:x-amz-acl": "bucket-owner-full-control", as well as the comma before or after it.
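The policy can be reviewed and re-applied with the AWS CLI; a minimal sketch, with the bucket name as a placeholder and assuming the downloaded JSON is edited by hand:

```bash
# Download the DR bucket policy, remove the "s3:x-amz-acl" condition line
# (and its leading/trailing comma), then upload the edited policy.
aws s3api get-bucket-policy --bucket <DR_BUCKET_NAME> \
  --query Policy --output text > dr-bucket-policy.json
# ...edit dr-bucket-policy.json...
aws s3api put-bucket-policy --bucket <DR_BUCKET_NAME> \
  --policy file://dr-bucket-policy.json
```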
Added
- ORCA-450 - Removed Access Control List (ACL) requirement and added BucketOwnerEnforced to ORCA bucket objects.
- ORCA-452 - Added a Deny non-SSL policy to S3 buckets in modules/dr_buckets/dr_buckets.tf and modules/dr_buckets_cloudformation/dr-buckets.yaml
Changed
- ORCA-441 - Updated policies for ORCA buckets and copy_to_archive to give them only the permissions needed to restrict unwanted/unintended actions.
- ORCA-746 - Enabled HTTPS listener in application load balancer for GraphQL server using AWS Certificate Manager.
- ORCA-828 - Added prefix to ORCA SNS topic names to avoid object already exists errors.
Security
- ORCA-821 - Fixed high-severity Snyk vulnerabilities from the Snyk report and upgraded Docusaurus to v3.1.0.
v9.0.4
Release v9.0.4
Important information
This release is only compatible with Cumulus v18.x.x and up.
- Full Change Comparison: v9.0.3...v9.0.4
Migration Notes
- For users upgrading from ORCA v8.x.x to v9.x.x, follow the steps below before deploying (a hedged CLI sketch for the security group cleanup follows this list):
  - Run the Lambda deletion script with python3 bin/delete_lambda.py, which will delete all of the ORCA Lambdas with a provided prefix. You can also delete them manually in the AWS console.
  - Navigate to the AWS console and search for the Cumulus RDS security group.
  - Remove the inbound rule with the source of PREFIX-vpc-ingress-all-egress from the Cumulus RDS security group.
  - Search for PREFIX-vpc-ingress-all-egress and delete the security group. NOTE: Because the Lambdas use ENIs, deleting the security groups may report that they are still associated with a Lambda that was already deleted by the script. AWS may need a few minutes to fully disassociate the ENIs; if this error appears, wait a few minutes and then try again.
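The security group cleanup can also be done from the CLI; a minimal sketch, with PREFIX as a placeholder and assuming the group can be located by name:

```bash
# Find the leftover ORCA security group by name and delete it.
SG_ID=$(aws ec2 describe-security-groups \
  --filters "Name=group-name,Values=<PREFIX>-vpc-ingress-all-egress" \
  --query 'SecurityGroups[0].GroupId' --output text)
aws ec2 delete-security-group --group-id "$SG_ID"
```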
Changed
- ORCA-826 - Changed bin/delete_lambda.py to delete ORCA Lambdas based on their tags.
- ORCA-827 - Changed the ORCA API Gateway stage name from orca to orca_api to avoid confusion in the URL path. The new ORCA execute API URL will be https://<API_ID>.execute-api.<AWS_REGION>.amazonaws.com/orca_api.
Fixed
- ORCA-827 Fixed API gateway URL not found issue seen in ORCA v9.0.3.
v9.0.4-beta
Release v9.0.4-beta
v9.0.3
Release v9.0.3
Important information
🔥 This release is only compatible with Cumulus v18.x.x and up 🔥
- Full Change Comparison: v9.0.2...v9.0.3
Migration notes
If you are migrating from ORCA v8.x.x to this version, see the migration notes under v9.0.0.
Fixed
- ORCA-823 Fixed ORCA security group related deployment error seen in ORCA v9.0.2.
v9.0.2
Release v9.0.2
Important information
🔥 This release is only compatible with Cumulus v18.x.x and up 🔥
- Full Change Comparison: v9.0.1...v9.0.2
Added
- ORCA-366 Added unit test for shared libraries.
- ORCA-769 Added API Gateway Stage resource to modules/api-gateway/main.tf
- ORCA-369 Added DR S3 bucket template to modules/dr_buckets/dr_buckets.tf and updated S3 deployment documentation with steps.
Changed
- ORCA-784 Changed documentation to replace restore with copy based on the task's naming, and renamed website/docs/operator/restore-to-orca.mdx to website/docs/operator/reingest-to-orca.mdx.
- ORCA-724 Updated ORCA recovery documentation to include the recovery workflow process and relevant inputs and outputs in website/docs/operator/data-recovery.md.
- ORCA-789 Updated extract_filepaths_for_granule to more flexibly match file-regex values to keys.
- ORCA-787 Modified the modules/api-gateway/main.tf API Gateway stage name to remove the extra orca from the data management URL path.
- ORCA-805 Changed the modules/security_groups/main.tf security group resource name from vpc_postgres_ingress_all_egress to vpc-postgres-ingress-all-egress to resolve errors when upgrading from ORCA v8 to v9. Also removed the graphql_1 dependency on module.orca_lambdas in modules/orca/main.tf since this module does not depend on the lambda module.
Removed
- ORCA-361 Removed hardcoded test values from extract_file_paths_for_granule unit tests.
- ORCA-710 Removed duplicate logging messages in integration_test/workflow_tests/custom_logger.py
- ORCA-815 Removed steps for creating buckets using the NGAP form in ORCA archive bucket documentation.
Fixed
- ORCA-811 Fixed the cumulus_orca docker image by updating the nodejs installation process.
- ORCA-802 Fixed extract_file_for_granule documentation and schemas to include collectionId in input.
- ORCA-785 Fixed checksum integrity issue in ORCA documentation bamboo pipeline.
- ORCA-820 Updated bandit and moto libraries to fix some snyk vulnerabilities.
v9.0.1
Release v9.0.1
Important information
🔥 This release is only compatible with Cumulus v18.x.x and up 🔥
- Full Change Comparison: v9.0.0...v9.0.1
Added
- ORCA-766 Created an AWS CloudFormation template that can be used to deploy ORCA DR buckets (a hedged deployment sketch follows this list).
- ORCA-765 Updated the ORCA "Creating the Glacier Bucket" documentation with instructions to deploy ORCA DR buckets using CloudFormation.
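For illustration, deploying the template with the AWS CLI might look like the sketch below; the stack name and the absence of template parameters are assumptions, so treat the "Creating the Glacier Bucket" documentation as authoritative:

```bash
# Deploy the DR bucket CloudFormation template from the ORCA repository.
# Stack name is a placeholder; add --parameter-overrides if the template requires them.
aws cloudformation deploy \
  --template-file modules/dr_buckets_cloudformation/dr-buckets.yaml \
  --stack-name <PREFIX>-orca-dr-buckets
```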
Changed
- ORCA-780 Updated ORCA "Deployment with Cumulus" documentation with instructions and examples to run ORCA recovery and archive workflows.
- ORCA-704 Updated dr-buckets.tf.template and buckets.tf.template with provider block to deploy in the us-west-2 region due to deployments failing in the other regions.
- ORCA-708 Updated integration_test/shared/setup-orca.sh script to use the root folder instead of cloning in a duplicate repository.
Fixed
- ORCA-731 Updated boto3 library used for unit tests to version 1.28.76 from version 1.18.40 to fix unit test warnings.
Security
- ORCA-778 Upgraded Docusaurus to version 2.4.3 to fix snyk vulnerabilities and security issues.
- ORCA-737 Updated moto library used for unit tests to version 4.2.2 from version 2.0.
v9.0.0
Important information
🔥 This release is only compatible with Cumulus v18.x.x and up 🔥
- Full Change Comparison: v8.1.0...v9.0.0
Migration notes
Update Terraform to version 1.5.x.
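To check the version in use and, if needed, switch to a 1.5.x release; the tfenv commands are an assumption about your tooling, and 1.5.7 is only an example 1.5.x version:

```bash
# Confirm the Terraform version used for the deployment.
terraform version

# If you manage versions with tfenv, install and select a 1.5.x release.
tfenv install 1.5.7
tfenv use 1.5.7
```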
Security
- ORCA-729 Updated terraform provider to use the latest version 1.5
- ORCA-713 Updated terraform, Dockerfile, and other IAC elements for best practices and security where able.