1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -19,6 +19,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

### Changed
- Increased the maximum `stride_for_norm_param_estimation` to 32 from 16 for the `OPERA_DIST_S1` job specification to handle models with a larger input size.
- Changed the README to reflect the current state of AWS, explain the motivation for deploying a HyP3 stack, and clarify naming and formatting.
- The `ITS_LIVE_META` job spec has been renamed `ITS_LIVE_CROP` as it re-crops an existing ITS_LIVE product to ensure it is chunk-aligned and has a time dimension, and then it generates STAC JSON and other metadata files.
- The hyp3-ci stack permissions for JPL deployments were expanded to support deploying HyP3-based monitoring stacks:
- Listing CloudFormation stacks is now allowed.
86 changes: 58 additions & 28 deletions README.md
@@ -43,6 +43,10 @@ A processing environment for HyP3 Plugins in AWS.

## Deployment

### Why would you set up a HyP3 deployment?

A HyP3 deployment stack provides a reproducible cloud processing environment that bundles AWS infrastructure, execution logic, and cost controls, enabling scalable, on-demand computation with clear operational costs.

> [!IMPORTANT]
> It's not currently possible to deploy HyP3 fully independent of ASF due to our integration with
> [ASF Vertex](https://search.asf.alaska.edu). If you'd like your own deployment of HyP3, please open an issue here or
@@ -126,7 +130,7 @@ In order to integrate an ASF deployment we'll need:

These can be done by deploying the [ASF CI stack](cicd-stacks/ASF-deployment-ci-cf.yml).

*Warning: This stack only needs to be deployed once per AWS account. This stack also
assumes you are only deploying into a single AWS Region. If you are deploying into
multiple regions in the same AWS account, you'll need to adjust the IAM permissions
that are limited to a single region.*
@@ -146,7 +150,10 @@ Once the `github-actions` IAM user has been created, you can create an AWS access key
which we will use to deploy HyP3 via CI/CD tooling:

1. Go to AWS console -> IAM -> Users -> github-actions -> security credentials tab -> "create access key".
2. Select "Other" for the key usage.
3. (Optional) Add a tag value describing the key, such as "For GitHub Actions CI/CD pipelines".
4. Store the access key ID and secret access key using your team's password manager. You will use them below in "Create the GitHub environment"
as `V2_AWS_ACCESS_KEY_ID` and `V2_AWS_SECRET_ACCESS_KEY`.
</details>

<details>
@@ -220,29 +227,33 @@ you will need to create an Earthdata Login user for your deployment if you do not
Go to AWS console -> Secrets Manager, then:

1. Click the orange "Store a new secret" button
2. For "Secret Type" select "Other type of secret"
3. Enter all required secret key-value pairs. Notably, the keys should be the secret names as listed (case-sensitive) in the [job specs](./job_spec/) that will be deployed
4. Click the orange "Next" button
5. Give the secret the same name that you plan to give to the HyP3 CloudFormation stack when you deploy it (below)
6. Click the orange "Next" button
7. Click the orange "Next" button (we won't configure rotation)
8. Click the orange "Store" button to save the Secret
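The console steps above can also be sketched as a single AWS CLI call. This is illustrative only: the stack name `hyp3-foobar` and the secret key names are placeholders; use the names required by the job specs you are deploying.

```shell
# Store all required key-value pairs in one Secret named after the future
# CloudFormation stack. Key names must match the job specs (case-sensitive).
aws secretsmanager create-secret \
    --name hyp3-foobar \
    --secret-string '{"EARTHDATA_USERNAME": "myuser", "EARTHDATA_PASSWORD": "mypassword"}'
```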

#### Request SSL cert

To allow HTTPS connections, HyP3 needs an SSL certificate that is valid for its deployment domain name (URL), which we can request from AWS.

> [!NOTE]
> For EDC accounts, you should create the cert in the `us-east-1` region
> for use with the CloudFront distribution that you will create later,
> even if you're deploying HyP3 to `us-west-2`.

Go to the AWS console -> AWS Certificate Manager -> Request certificate and then:
1. Select "Request a public certificate"
2. Click the orange "Next" button
3. Enter a fully qualified domain name, e.g. `hyp3-foobar.asf.alaska.edu` (or `hyp3-foobar-test.asf.alaska.edu` for a test deployment)
4. Choose "DNS validation"
5. Copy the "CNAME name" and "CNAME value"
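Equivalently, the certificate can be requested and its validation record retrieved from the CLI (a sketch assuming the hypothetical domain `hyp3-foobar.asf.alaska.edu`):

```shell
# Request the certificate (add --region us-east-1 for EDC accounts).
aws acm request-certificate \
    --domain-name hyp3-foobar.asf.alaska.edu \
    --validation-method DNS

# Look up the CNAME name/value pair to publish in DNS, using the
# CertificateArn returned by the previous command.
aws acm describe-certificate \
    --certificate-arn <certificate-arn> \
    --query 'Certificate.DomainValidationOptions[0].ResourceRecord'
```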

Then create a validation record in
https://gitlab.asf.alaska.edu/operations/puppet/-/edit/production/modules/legacy_dns/files/asf.alaska.edu.db
of the form `<CNAME_name> in CNAME <CNAME_value>`, stripping `.asf.alaska.edu` from the `CNAME_name` (see previous records for examples).
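Once the zone file change is deployed, you can confirm the validation record resolves before waiting on ACM (the record name below is a made-up example):

```shell
# The answer should be the CNAME value that ACM gave you; an empty
# result means the record hasn't propagated yet.
dig +short CNAME _abc123example.hyp3-foobar.asf.alaska.edu
```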

### Create the GitHub environment

@@ -251,27 +262,28 @@ https://gitlab.asf.alaska.edu/operations/puppet/-/edit/production/modules/legacy
> repository, which is generally only possible for ASF employees on HyP3 development teams.

1. Go to https://github.com/ASFHyP3/hyp3/settings/environments -> New Environment
2. Name the environment after your chosen domain name, e.g. `hyp3-foobar` or `hyp3-foobar-test`
3. Check "required reviewers" and add the appropriate team(s) or user(s)
4. Change "Deployment branches and tags" to "Selected branches and tags" and
- add a deployment branch or tag rule
- use "Ref Type: Branch" and enter the name of the branch to deploy from.
(This is typically `main` for prod deployments, `develop` for test deployments, or a feature branch name for sandbox deployments.)
5. Add the following environment secrets:
- `AWS_REGION` - e.g. `us-west-2`
- `CERTIFICATE_ARN` (ASF and JPL only) - ARN of the AWS Certificate Manager certificate that you created manually, e.g. `arn:aws:acm:us-west-2:XXXXXXXXXXXX:certificate/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX`
- `CLOUDFORMATION_ROLE_ARN` (ASF only) - part of the `hyp3-ci` stack that you deployed, e.g. `arn:aws:iam::xxxxxxxxxxxx:role/hyp3-ci-CloudformationDeploymentRole-XXXXXXXXXXXXX`
- `SECRET_ARN` - ARN for the AWS Secrets Manager Secret that you created manually, e.g. `arn:aws:secretsmanager:us-west-X:XXXXXXXXXXXX:secret:hyp3-foobar-XXXXXX`
- `V2_AWS_ACCESS_KEY_ID` - AWS access key ID:
- ASF: for the `github-actions` user (created in the "Enable CI/CD" step above)
- JPL: for the service user
- EDC: created by an ASF developer via Kion
- `V2_AWS_SECRET_ACCESS_KEY` - The corresponding secret access key
- `VPC_ID` - ID of the default VPC for this AWS account and region (aws console -> VPC -> Your VPCs, e.g. `vpc-xxxxxxxxxxxxxxxxx`)
- `SUBNET_IDS` - Comma delimited list (no spaces) of the default subnets for the VPC specified in `VPC_ID` (aws console -> VPC -> Subnets, e.g. `subnet-xxxxxxxxxxxxxxxxx,subnet-xxxxxxxxxxxxxxxxx,subnet-xxxxxxxxxxxxxxxxx,subnet-xxxxxxxxxxxxxxxxx`)
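If you'd rather look up `VPC_ID` and `SUBNET_IDS` from the CLI than click through the console, something like the following works (a sketch; substitute the VPC ID from the first command into the second):

```shell
# Default VPC for the current region.
aws ec2 describe-vpcs \
    --filters Name=isDefault,Values=true \
    --query 'Vpcs[0].VpcId' --output text

# Its subnets, comma-delimited as SUBNET_IDS expects.
aws ec2 describe-subnets \
    --filters Name=vpc-id,Values=<vpc-id> \
    --query 'Subnets[].SubnetId' --output text | tr '\t' ','
```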

### Create the HyP3 deployment

You will need to add the deployment to the matrix in an existing GitHub Actions `deploy-*.yml` workflow located in the `.github/workflows/` directory, or create
a new one for the deployment. If you need to create a new one, we recommend copying one of the
existing workflows, and then updating all of the fields
as appropriate for your deployment. Also make sure to update the top-level `name` of the workflow and the name
@@ -281,6 +293,9 @@ of the branch to deploy from. (This is typically `main` for prod deployments, `d
> If you're deploying from a feature branch, make sure to [protect](https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/managing-protected-branches/about-protected-branches)
> it from accidental deletion.

> [!TIP]
> If your CI/CD workflow fails, delete the rolled-back stack (AWS console -> CloudFormation -> Stacks) before re-running the failed job.
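Deleting a rolled-back stack can also be done from the CLI (hypothetical stack name `hyp3-foobar`):

```shell
# Remove the failed stack so the next workflow run can create it fresh.
aws cloudformation delete-stack --stack-name hyp3-foobar
aws cloudformation wait stack-delete-complete --stack-name hyp3-foobar
```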

The deployment workflow will run as soon as you merge your changes into the branch specified in the workflow file.

### Finishing touches
@@ -293,7 +308,8 @@ Once HyP3 is deployed, there are a few follow on tasks you may need to do for a
> This step must be done by an ASF employee.

Open a PR adding a line to https://gitlab.asf.alaska.edu/operations/puppet/-/blob/production/modules/legacy_dns/files/asf.alaska.edu.db
for the new custom domain name (AWS console -> API Gateway -> Custom domain names -> "API Gateway domain name"), of the form
`hyp3-foobar in CNAME <API Gateway domain name>.` (see similar records in the file for examples).

Ask someone from ASF support to review/merge the PR.

@@ -312,6 +328,20 @@ Update the [AWS Accounts and HyP3 Deployments](https://docs.google.com/spreadshe
> ```
> Remember to remove this after the DNS PR is merged!

#### Testing and adding user credits to your HyP3 deployment

After successfully deploying HyP3 and once your new DNS record has taken effect (or you've edited your local DNS name resolution), you can test your
deployment by opening the Swagger UI and using the GET `/user` endpoint to check whether your user is approved and has credits for running jobs on the
deployment. You will need to be authenticated, either by providing an Earthdata Login Bearer Token via the "Authorize" button or by having a
valid `asf-urs` browser cookie, typically obtained by logging in to [Vertex](https://search.asf.alaska.edu). Interacting with HyP3 should
automatically add your user to the DynamoDB table with the default number of credits (typically 0).
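As an alternative to the Swagger UI, you can hit the API directly with `curl` (a sketch; the domain is hypothetical and `$EDL_TOKEN` is an Earthdata Login bearer token you've generated):

```shell
# Returns your user record if the deployment and your token are working.
curl -H "Authorization: Bearer $EDL_TOKEN" \
    https://hyp3-foobar.asf.alaska.edu/user
```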

To add credits to your (or any) user, log in to the AWS console and navigate to DynamoDB -> Explore items, then:
1. Find the table with a format like `hyp3-foobar-UsersTable-XXXXXXXXXXXXX`
2. Edit your user record if present (after using the Swagger UI in some way) or duplicate an existing record, updating the `user_id`.
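Credits can also be granted from the CLI. This is a sketch: the table name, user id, and the `remaining_credits` attribute name are assumptions, so check an existing item in your table for the exact schema before running it.

```shell
# Set a user's credit balance directly in the users table.
aws dynamodb update-item \
    --table-name hyp3-foobar-UsersTable-XXXXXXXXXXXXX \
    --key '{"user_id": {"S": "myuser"}}' \
    --update-expression 'SET remaining_credits = :c' \
    --expression-attribute-values '{":c": {"N": "100"}}'
```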

You can then return to the Swagger UI and use POST `/jobs` to run a test job and confirm it completes.
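From the command line, a test job submission might look like this (a sketch; the job type, granule placeholder, and domain are illustrative, so use a job spec actually deployed to your stack):

```shell
# Submit a single test job; the response echoes the job with its job_id.
curl -X POST https://hyp3-foobar.asf.alaska.edu/jobs \
    -H "Authorization: Bearer $EDL_TOKEN" \
    -H "Content-Type: application/json" \
    -d '{"jobs": [{"job_type": "RTC_GAMMA", "name": "test-job", "job_parameters": {"granules": ["<granule-name>"]}}]}'
```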

#### Optional

<details>