# Automated OpenShift v4 installation on AWS

This project automates the Red Hat OpenShift Container Platform 4.6 (for previous releases, check out the `pre46` branch) installation on the Amazon AWS platform. It focuses on the OpenShift User-Provided Infrastructure (UPI) installation, where implementers provide pre-existing infrastructure including VMs, networking, load balancers, DNS configuration, etc.

### Prerequisites

1. To use Terraform automation, download the Terraform binaries [here](https://www.terraform.io/). The code here supports Terraform 0.15 or later.
   On macOS, you can acquire it using [Homebrew](https://brew.sh) with this command:
   ```
   zypper install wget
   ```

5. Install jq: see [https://stedolan.github.io/jq/download/](https://stedolan.github.io/jq/download/)
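As a quick sanity check, jq can confirm that the downloaded pull secret is valid JSON and list the registries it authenticates to. This is a sketch: the JSON written below is a hypothetical stand-in, and `openshift_pull_secret.json` matches the default file name this project expects.

```shell
# Stand-in pull secret for illustration only; in practice use the real
# file downloaded from cloud.redhat.com.
cat > openshift_pull_secret.json <<'EOF'
{"auths":{"quay.io":{"auth":"c2FtcGxl","email":"user@example.com"}}}
EOF

# List the registries the pull secret covers; invalid JSON would make jq fail.
jq -r '.auths | keys[]' openshift_pull_secret.json
```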

OpenShift requires a valid public Route53 hosted zone, even if you plan to use an airgapped environment.

8. Prepare AWS Account Access

   Please reference the [Required AWS Infrastructure components](https://docs.openshift.com/container-platform/4.6/installing/installing_aws/installing-aws-account.html) to set up your AWS account before installing OpenShift 4.

   We suggest creating an AWS IAM user dedicated to the OpenShift installation, with the permissions documented above.

   On the bastion host, configure your AWS user credentials as environment variables:
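For example (the key pair below uses AWS's documented placeholder values; substitute the credentials of the dedicated IAM user):

```shell
# Placeholder credentials for illustration; never commit real keys.
export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
export AWS_DEFAULT_REGION="us-east-2"
```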

For details on OpenShift UPI, please reference the following:

The terraform code in this repository supports 3 installation modes:

1. The deployment assumes that you run the terraform deployment from a Linux based environment. This can be performed on an AWS Linux EC2 instance. The deployment machine has the following requirements:

   - git cli
   - terraform 0.15 or later
   - wget command
   - jq command

2. Deploy the OpenShift 4 cluster using the following modules in the folders:

   - route53: generate a private hosted zone using route 53
   - install: build the installation files, ignition configs and modify YAML files
   - vpc: create the VPC, subnets, security groups and load balancers for the OpenShift cluster
   - iam: define AWS authorities for the masters and workers
   - bootstrap: main module to provision the bootstrap node and generate OpenShift installation files and resources
   - master: create master nodes manually (UPI)

Create a `terraform.tfvars` file with the following content:

| Variable | Required | Description |
|---|---|---|
|`cluster_id`| yes | This id will be prefixed to all the AWS infrastructure resources provisioned with the script - typically using the cluster name as its prefix. |
|`cluster_name`| yes | The name of the OpenShift cluster you will install |
|`base_domain`| yes | The domain that has been created in Route53 public hosted zone |
|`openshift_pull_secret`| no | The value refers to a file name that contains the downloaded pull secret from https://cloud.redhat.com/openshift/pull-secret; the default name is `openshift_pull_secret.json`|
|`openshift_installer_url`| no | The URL to the download site for Red Hat OpenShift installation and client codes. |
|`aws_region`| yes | AWS region that the VPC will be created in. By default, uses `us-east-2`. Note that for an HA installation, the selected AWS region should have at least 3 availability zones. |
|`aws_extra_tags`| no | AWS tags to identify a resource, for example owner:myname |
|`aws_ami`| yes | Red Hat CoreOS AMI for your region (see [here](https://docs.openshift.com/container-platform/4.6/installing/installing_aws/installing-aws-user-infra.html#installation-aws-user-infra-rhcos-ami_installing-aws-user-infra)). Image information for other platforms can be found [here](https://github.com/openshift/installer/blob/master/data/data/rhcos.json) |
|`aws_secret_access_key`| yes | adding aws_secret_access_key to the cluster |
|`aws_access_key_id`| yes | adding aws_access_key_id to the cluster |
|`aws_azs`| yes | list of availability zones to deploy VMs |
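Putting the table together, a minimal `terraform.tfvars` might look like the following sketch. Every value is a placeholder: the AMI id, domain, and key pair must be replaced with your own, and optional variables are omitted.

```
cluster_id   = "ocp4-example"
cluster_name = "ocp4"
base_domain  = "example.com"

aws_region            = "us-east-2"
aws_azs               = ["us-east-2a", "us-east-2b", "us-east-2c"]
aws_ami               = "ami-0123456789abcdef0"
aws_access_key_id     = "AKIAIOSFODNN7EXAMPLE"
aws_secret_access_key = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
```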

Setting up the mirror repository using AWS ECR:

3. Mirror quay.io and other OpenShift sources into your repository

**Note**: Setting `airgapped.enabled` to `true` must be combined with an `aws_publish_strategy` of `Internal`, otherwise the deployment will fail. Also, ECR does not allow unauthenticated image pulls; additional IAM policies must be defined and attached to the nodes so they can pull from ECR.

Create your cluster and then associate the private hosted zone record in Route53 with the load balancer for the `*.apps.<cluster>.<domain>`.

## Removal procedure

**Update 11/2020**: A `delocp.sh` script has been added to remove resources if you have the aws CLI; however, the script does not account for timing just yet.

To delete the cluster, run `terraform destroy`.

The following items are not deleted (and may stop the destroy from being successful):

- EBS volumes from the gp2 storage classes
- Public zone DNS updates
- Custom compute nodes that are not the initial worker nodes