Merge pull request #487 from nasa/release-9.0.2
Release 9.0.2
rizbihassan authored Jan 26, 2024
2 parents 440c878 + 6fb56c8 commit 7715b77
Showing 51 changed files with 704 additions and 330 deletions.
43 changes: 43 additions & 0 deletions CHANGELOG.md
@@ -29,6 +29,43 @@ and includes an additional section for migration notes.

### Security

## [9.0.2] 2024-01-26

### Migration Notes

If you are migrating from ORCA v8.x.x to this version, see the migration notes under v9.0.0.

### Added

- *ORCA-366* Added unit test for shared libraries.
- *ORCA-769* Added API Gateway Stage resource to `modules/api-gateway/main.tf`
- *ORCA-369* Added DR S3 bucket template to `modules/dr_buckets/dr_buckets.tf` and updated S3 deployment documentation with steps.

### Changed

- *ORCA-784* Updated documentation to say "copy" instead of "restore", matching the task's name, and renamed `website/docs/operator/restore-to-orca.mdx` to `website/docs/operator/reingest-to-orca.mdx`.
- *ORCA-724* Updated ORCA recovery documentation to include recovery workflow process and relevant inputs and outputs in `website/docs/operator/data-recovery.md`.
- *ORCA-789* Updated `extract_filepaths_for_granule` to more flexibly match file-regex values to keys.
- *ORCA-787* Modified the API Gateway stage name in `modules/api-gateway/main.tf` to remove the duplicated `orca` from the data management URL path.
- *ORCA-805* Changed the `modules/security_groups/main.tf` security group resource name from `vpc_postgres_ingress_all_egress` to `vpc-postgres-ingress-all-egress` to resolve errors when upgrading from ORCA v8 to v9. Also removed the `graphql_1` dependency on `module.orca_lambdas` in `modules/orca/main.tf`, since that module does not depend on the Lambda module.

### Deprecated

### Removed

- *ORCA-361* Removed hardcoded test values from `extract_filepaths_for_granule` unit tests.
- *ORCA-710* Removed duplicate logging messages in `integration_test/workflow_tests/custom_logger.py`.
- *ORCA-815* Removed steps for creating buckets using NGAP form in ORCA archive bucket documentation.

### Fixed

- *ORCA-811* Fixed `cumulus_orca` docker image by updating nodejs installation process.
- *ORCA-802* Fixed `extract_filepaths_for_granule` documentation and schemas to include `collectionId` in the input.
- *ORCA-785* Fixed checksum integrity issue in ORCA documentation bamboo pipeline.
- *ORCA-820* Updated bandit and moto libraries to fix some snyk vulnerabilities.

### Security

## [9.0.1] 2023-11-16

### Added
@@ -44,6 +81,7 @@ and includes an additional section for migration notes.
### Fixed

- *ORCA-731* Updated boto3 library used for unit tests to version 1.28.76 from version 1.18.40 to fix unit test warnings.
- *ORCA-722* Fixed multiple granules happy path integration tests by randomizing large file name to avoid duplicate data being ingested.

### Security

@@ -55,6 +93,11 @@ and includes an additional section for migration notes.
### Migration Notes

- Update terraform to the latest 1.5 version
- For users upgrading from ORCA v8.x.x to v9.x.x, follow the below steps before deploying:
  1. Run the Lambda deletion script with `python3 bin/delete_lambda.py`; this deletes all of the ORCA Lambdas with the provided prefix. Alternatively, delete them manually in the AWS console.
  2. Navigate to the AWS console and search for the Cumulus RDS security group.
  3. Remove the inbound rule whose source is `PREFIX-vpc-ingress-all-egress` from the Cumulus RDS security group.
  4. Search for `PREFIX-vpc-ingress-all-egress` and delete the security group. **NOTE:** Because the Lambdas use ENIs, deleting the security group may report that it is still associated with a Lambda the script already deleted. AWS may need a few minutes to fully disassociate the ENIs; if this error appears, wait a few minutes and try again.
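For operators scripting the upgrade, steps 2 through 4 above can be sketched with boto3. This is a hedged illustration, not part of the release: the helper names (`find_group_id`, `delete_group_with_retry`) are hypothetical, and the retry interval is a guess based on the note about ENI disassociation.

```python
import time


def find_group_id(groups, prefix):
    """Return the GroupId of the PREFIX-vpc-ingress-all-egress group, or None.

    `groups` is the "SecurityGroups" list returned by
    ec2.describe_security_groups().
    """
    target = f"{prefix}-vpc-ingress-all-egress"
    for group in groups:
        if group.get("GroupName") == target:
            return group["GroupId"]
    return None


def delete_group_with_retry(ec2, group_id, attempts=5, wait_seconds=60):
    """Delete the group, retrying while ENIs are still detaching (see the NOTE in step 4)."""
    for _ in range(attempts):
        try:
            ec2.delete_security_group(GroupId=group_id)
            return True
        except Exception:
            # AWS may need a few minutes to fully disassociate the ENIs.
            time.sleep(wait_seconds)
    return False


# Example use (requires AWS credentials; intentionally not executed here):
#   import boto3
#   ec2 = boto3.client("ec2")
#   groups = ec2.describe_security_groups()["SecurityGroups"]
#   group_id = find_group_id(groups, "PREFIX")
#   if group_id is not None:
#       delete_group_with_retry(ec2, group_id)
```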

### Security

9 changes: 5 additions & 4 deletions base_images/Dockerfile.bamboo
@@ -7,16 +7,17 @@ ENV NODE_VERSION="20.x"
ENV TERRAFORM_VERSION "1.5.5"
ENV PYTHON_VERSION "3.9.17"

# Add NodeJS and Yarn repos & update package index
# Add Yarn repo & update package index
RUN \
curl -sL https://rpm.nodesource.com/setup_${NODE_VERSION} | bash - && \
curl -sL https://dl.yarnpkg.com/rpm/yarn.repo | tee /etc/yum.repos.d/yarn.repo && \
curl "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/linux_64bit/session-manager-plugin.rpm" -o "session-manager-plugin.rpm" && \
dnf update -y

# install pip, Python 3.9, CLI utilities, session manager and other tools
# Install node, pip, Python 3.9, CLI utilities, session manager and other tools
RUN \
dnf install -y jq gcc git make openssl openssl-devel wget zip unzip bzip2-devel libffi-devel ncurses-devel sqlite-devel readline-devel uuid-devel libuuid-devel gdbm-devel xz-devel tar nodejs yarn awscli session-manager-plugin.rpm procps parallel && \
dnf install https://rpm.nodesource.com/pub_${NODE_VERSION}/nodistro/repo/nodesource-release-nodistro-1.noarch.rpm -y && \
dnf install nodejs -y --setopt=nodesource-nodejs.module_hotfixes=1 && \
dnf install -y jq gcc git make openssl openssl-devel wget zip unzip bzip2-devel libffi-devel ncurses-devel sqlite-devel readline-devel uuid-devel libuuid-devel gdbm-devel xz-devel tar yarn awscli session-manager-plugin.rpm procps parallel && \
dnf groupinstall -y "Development Tools" && \
# Install Python 3.9
wget https://www.python.org/ftp/python/${PYTHON_VERSION}/Python-${PYTHON_VERSION}.tgz && \
7 changes: 4 additions & 3 deletions base_images/Dockerfile.local
@@ -9,14 +9,15 @@ ENV PYTHON_VERSION "3.9.17"

# Add NodeJS and Yarn repos & update package index
RUN \
curl -sL https://rpm.nodesource.com/setup_${NODE_VERSION} | bash - && \
curl -sL https://dl.yarnpkg.com/rpm/yarn.repo | tee /etc/yum.repos.d/yarn.repo && \
curl "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/linux_64bit/session-manager-plugin.rpm" -o "session-manager-plugin.rpm" && \
dnf update -y

# install pip, Python 3.9, CLI utilities, session manager and other tools
# Install node, pip, Python 3.9, CLI utilities, session manager and other tools
RUN \
dnf install -y jq gcc git make openssl openssl-devel wget zip unzip bzip2-devel libffi-devel ncurses-devel sqlite-devel readline-devel uuid-devel libuuid-devel gdbm-devel xz-devel tar nodejs yarn awscli session-manager-plugin.rpm procps parallel && \
dnf install https://rpm.nodesource.com/pub_${NODE_VERSION}/nodistro/repo/nodesource-release-nodistro-1.noarch.rpm -y && \
dnf install nodejs -y --setopt=nodesource-nodejs.module_hotfixes=1 && \
dnf install -y jq gcc git make openssl openssl-devel wget zip unzip bzip2-devel libffi-devel ncurses-devel sqlite-devel readline-devel uuid-devel libuuid-devel gdbm-devel xz-devel tar yarn awscli session-manager-plugin.rpm procps parallel && \
dnf groupinstall -y "Development Tools" && \
# Install Python 3.9
wget https://www.python.org/ftp/python/${PYTHON_VERSION}/Python-${PYTHON_VERSION}.tgz && \
18 changes: 18 additions & 0 deletions bin/delete_lambda.py
@@ -0,0 +1,18 @@
import subprocess
import json

# Gets user input for the prefix of lambdas to delete
prefix = input("Enter Prefix: ")

# Gets ORCA lambda functions with given prefix
get_functions = f"aws lambda list-functions --query 'Functions[] | [?contains(FunctionName, `{prefix}`) == `true`]'"
completed_process = subprocess.run(get_functions, shell=True, capture_output=True)
output = completed_process.stdout
convert = json.loads(output.decode("utf-8"))

# Deletes ORCA lambda functions with the given prefix
for sub in convert:
print("Deleting " + sub['FunctionName'])
lambda_output = sub['FunctionName']
delete_function = f"aws lambda delete-function --function-name {lambda_output}"
subprocess.run(delete_function, shell=True, capture_output=True)
@@ -8,7 +8,7 @@ requests~=2.28.1

## Additional validation libraries
## ---------------------------------------------------------------------------
bandit==1.7.5
bandit==1.7.6
flake8==6.1.0
black==22.3.0
isort==5.12.0
2 changes: 1 addition & 1 deletion graphql/requirements-dev.txt
@@ -9,7 +9,7 @@ boto3~=1.18.65

## Additional validation libraries
## ---------------------------------------------------------------------------
bandit==1.7.5
bandit==1.7.6
flake8==6.1.0
black==22.3.0
isort==5.12.0
33 changes: 33 additions & 0 deletions integration_test/README.md
@@ -0,0 +1,33 @@

### Running integration tests locally

The steps to run ORCA integration tests locally are shown below:

1. [Deploy ORCA to AWS](https://nasa.github.io/cumulus-orca/docs/developer/deployment-guide/deployment-with-cumulus).
2. Connect to the NASA VPN.
3. Set the following environment variables:
1. `orca_API_DEPLOYMENT_INVOKE_URL` Output from the ORCA TF module. ex: `https://0000000000.execute-api.us-west-2.amazonaws.com`
2. `orca_RECOVERY_STEP_FUNCTION_ARN` ARN of the recovery step function. ex: `arn:aws:states:us-west-2:000000000000:stateMachine:PREFIX-OrcaRecoveryWorkflow`
3. `orca_COPY_TO_ARCHIVE_STEP_FUNCTION_ARN` ARN of the copy_to_archive step function. ex: `arn:aws:states:us-west-2:000000000000:stateMachine:PREFIX-OrcaCopyToArchiveWorkflow`
4. `orca_RECOVERY_BUCKET_NAME` S3 bucket name where the recovered files will be archived. ex: `test-orca-primary`
    5. `orca_BUCKETS` The list of ORCA buckets used. ex:
```json
'{"protected": {"name": "'$PREFIX'-protected", "type": "protected"}, "internal": {"name": "'$PREFIX'-internal", "type": "internal"}, "private": {"name": "'$PREFIX'-private", "type": "private"}, "public": {"name": "'$PREFIX'-public", "type": "public"}, "orca_default": {"name": "'$PREFIX'-orca-primary", "type": "orca"}, "provider": {"name": "orca-sandbox-s3-provider", "type": "provider"}}'
```

4. Get your Cumulus EC2 instance ID using the following AWS CLI command with your `<PREFIX>`.
```shell
aws ec2 describe-instances --filters Name=instance-state-name,Values=running Name=tag:Name,Values={PREFIX}-CumulusECSCluster --query "Reservations[*].Instances[*].InstanceId" --output text
```
Then run the following bash command,
replacing `i-00000000000000000` with your `PREFIX-CumulusECSCluster` ec2 instance ID,
and `0000000000.execute-api.us-west-2.amazonaws.com` with your API Gateway identifier:

```shell
aws ssm start-session --target i-00000000000000000 --document-name AWS-StartPortForwardingSessionToRemoteHost --parameters '{"host":["0000000000.execute-api.us-west-2.amazonaws.com"],"portNumber":["443"], "localPortNumber":["8000"]}'
```
5. From the `workflow_tests` folder, run the following command:
```shell
bin/run_tests.sh
```
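Before running the tests, the environment variables from step 3 can be sanity-checked with a small helper. This is a hypothetical convenience snippet, not part of the test suite; the variable names are taken from step 3 above.

```python
import os

# Environment variables required by the integration tests (step 3 above).
REQUIRED_VARIABLES = [
    "orca_API_DEPLOYMENT_INVOKE_URL",
    "orca_RECOVERY_STEP_FUNCTION_ARN",
    "orca_COPY_TO_ARCHIVE_STEP_FUNCTION_ARN",
    "orca_RECOVERY_BUCKET_NAME",
    "orca_BUCKETS",
]


def missing_variables(environ):
    """Return the required variable names that are unset or empty in `environ`."""
    return [name for name in REQUIRED_VARIABLES if not environ.get(name)]


# Example use before running bin/run_tests.sh:
#   missing = missing_variables(os.environ)
#   if missing:
#       raise SystemExit(f"Missing environment variables: {missing}")
```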
7 changes: 4 additions & 3 deletions integration_test/workflow_tests/custom_logger.py
@@ -11,7 +11,8 @@ def process(self, msg, kwargs):
@staticmethod
def set_logger(group_name):
logger = logging.getLogger(__name__)
syslog = logging.StreamHandler()
logger.addHandler(syslog)
logger_adapter = CustomLoggerAdapter(logger, {'my_context': group_name})
logger_adapter = CustomLoggerAdapter(logger, {"my_context": group_name})
if not logger.handlers:
syslog = logging.StreamHandler()
logger.addHandler(syslog)
return logger_adapter
2 changes: 1 addition & 1 deletion integration_test/workflow_tests/requirements.txt
@@ -6,7 +6,7 @@ dataclasses-json

## Additional validation libraries
## ---------------------------------------------------------------------------
bandit==1.7.5
bandit==1.7.6
flake8==6.1.0
black==22.3.0
isort==5.12.0
