51 changes: 24 additions & 27 deletions README.md

@@ -138,45 +138,42 @@ svc/kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 1m

 Awesome. Now your `kubectl` is configured!
 
-Next we need to enable the worker nodes to join your cluster.
+## (Optional) Setup autoscaler
 
-## Enable worker nodes to join your cluster
-Download, edit, and apply the AWS authenticator configuration map:
+1. To download the example deployment file provided by the Cluster Autoscaler project on GitHub, run the following command:
 
-1.) Download the configuration map.
 ```
-curl -O https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-06-05/aws-auth-cm.yaml
+wget https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml
```
 
-2.) Open the file with your favorite text editor. Replace the <ARN of instance role (not instance profile)> snippet with the `NodeInstanceRole` value that you recorded in the previous procedure, and save the file.
+2. Open the downloaded YAML file, set the EKS cluster name (`<awsExampleClusterName>`) and the `AWS_REGION` environment variable (`us-east-1` in the example below), and save your changes.
 
-This will be the `NodeInstanceRole` output from the nodes stack.
-
-Important
-> Do not modify any other lines in this file.
 ```
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: aws-auth
-  namespace: kube-system
-data:
-  mapRoles: |
-    - rolearn: <ARN of instance role (not instance profile)>
-      username: system:node:{{EC2PrivateDNSName}}
-      groups:
-        - system:bootstrappers
-        - system:nodes
+...
+  command:
+    - ./cluster-autoscaler
+    - --v=4
+    - --stderrthreshold=info
+    - --cloud-provider=aws
+    - --skip-nodes-with-local-storage=false
+    - --expander=least-waste
+    - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/<awsExampleClusterName>
+  env:
+    - name: AWS_REGION
+      value: us-east-1
+...
```
 
-3.) Apply the configuration. This command may take a few minutes to finish.
+3. To create a Cluster Autoscaler deployment, run the following command:
 
 ```
-kubectl apply -f aws-auth-cm.yaml
+kubectl apply -f cluster-autoscaler-autodiscover.yaml
```
 
-4.) Watch the status of your nodes and wait for them to reach the Ready status.
+4. To check the Cluster Autoscaler deployment logs for errors, run the following command:
 
 ```
-kubectl get nodes --watch
+kubectl logs -f deployment/cluster-autoscaler -n kube-system
```
 
 Congratulations - Your new AWS EKS Kubernetes cluster is ready.

@@ -193,4 +190,4 @@ cim stack-delete
 
 cd vpc
 cim stack-delete
-```
+```
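
Not part of the README itself, but a quick way to smoke-test the autoscaler steps above: confirm the deployment is running, then scale a throwaway workload past current node capacity and watch new nodes join. The `autoscaler-test` name and `nginx` image are arbitrary placeholders; the `cluster-autoscaler` deployment in `kube-system` is what the autodiscover manifest creates.

```
# Confirm the Cluster Autoscaler deployment is up
kubectl get deployment cluster-autoscaler -n kube-system

# Scale a throwaway workload beyond current capacity; pending pods
# should trigger a scale-up, bounded by the node group's MaxSize
kubectl create deployment autoscaler-test --image=nginx
kubectl scale deployment autoscaler-test --replicas=50
kubectl get nodes --watch

# Clean up so the autoscaler can scale back down
kubectl delete deployment autoscaler-test
```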
2 changes: 1 addition & 1 deletion cluster/cluster.stack.yml

@@ -40,7 +40,7 @@ Resources:
   Cluster:
     Type: "AWS::EKS::Cluster"
     Properties:
-      Version: "1.10"
+      Version: "1.14"
       RoleArn: !GetAtt ClusterRole.Arn
       ResourcesVpcConfig:
         SecurityGroupIds:
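
Not in the diff, but after applying this change the control-plane upgrade can be verified with standard tooling (the cluster name is a placeholder):

```
aws eks describe-cluster --name <cluster-name> --query "cluster.version" --output text
kubectl version --short
```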
49 changes: 47 additions & 2 deletions nodes/nodes.stack.yml

@@ -19,7 +19,7 @@ Parameters:
   NodeImageId:
     Type: AWS::EC2::Image::Id
     Description: AMI id for the node instances.
-    Default: ami-dea4d5a1
+    Default: ami-0812df1ae4450c89a
 
   NodeInstanceType:
     Description: EC2 instance type for the node instances
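
The new default AMI, like the old one, is a fixed ID that is only valid in one region. An alternative worth knowing (not used by this PR) is resolving the current EKS-optimized AMI from the SSM parameter AWS publishes per Kubernetes version; a sketch for 1.14 on Amazon Linux 2:

```
aws ssm get-parameter \
  --name /aws/service/eks/optimized-ami/1.14/amazon-linux-2/recommended/image_id \
  --query "Parameter.Value" --output text
```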
@@ -303,7 +303,7 @@ Resources:
       ToPort: 443
       FromPort: 443
 
-  NodeGroup:
+  AutoScalingGroup:
     Type: AWS::AutoScaling::AutoScalingGroup
     Properties:
       DesiredCapacity: !Ref NodeAutoScalingGroupMaxSize
@@ -327,6 +327,51 @@
       MinInstancesInService: '1'
       MaxBatchSize: '1'
 
+  NodeGroup:
+    Type: AWS::EKS::Nodegroup
+    Properties:
+      AmiType: "AL2_x86_64" # Append _GPU if you want GPU
+      ClusterName: !Ref ClusterName
+      DiskSize: 100 # 100 GiB
+      ForceUpdateEnabled: false
+      InstanceTypes:
+        - !Ref NodeInstanceType
+      Labels: {}
+      NodegroupName: !Ref NodeGroupName
+      NodeRole: !GetAtt NodeInstanceRole.Arn
+      RemoteAccess:
+        Ec2SshKey: !Ref KeyName
+        SourceSecurityGroups:
+          - !Ref NodeSecurityGroup
+      ScalingConfig:
+        DesiredSize: !Ref NodeAutoScalingGroupMinSize
+        MaxSize: !Ref NodeAutoScalingGroupMaxSize
+        MinSize: !Ref NodeAutoScalingGroupMinSize
+      Subnets:
+        - Fn::ImportValue:
+            !Sub "${VPCStack}-PublicSubnet1ID"
+        - Fn::ImportValue:
+            !Sub "${VPCStack}-PublicSubnet2ID"
+
+  ClusterAutoScalerPolicy:
+    Type: 'AWS::IAM::Policy'
+    Properties:
+      PolicyName: ClusterAutoScalerPolicy
+      PolicyDocument:
+        Version: 2012-10-17
+        Statement:
+          - Effect: Allow
+            Action:
+              - "autoscaling:DescribeAutoScalingGroups"
+              - "autoscaling:DescribeAutoScalingInstances"
+              - "autoscaling:DescribeLaunchConfigurations"
+              - "autoscaling:DescribeTags"
+              - "autoscaling:SetDesiredCapacity"
+              - "autoscaling:TerminateInstanceInAutoScalingGroup"
+            Resource: '*'
+      Roles:
+        - !Ref NodeInstanceRole
+
   NodeLaunchConfig:
     Type: AWS::AutoScaling::LaunchConfiguration
     Properties:
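
One tie-in between the two halves of this PR: the `--node-group-auto-discovery` flag in the README's autoscaler manifest only discovers Auto Scaling groups that carry the two `k8s.io/cluster-autoscaler/...` tags. EKS managed node groups such as the new `NodeGroup` resource generally get these tags for you, but the self-managed `AutoScalingGroup` above does not. A hypothetical addition, not part of this PR, using standard `AWS::AutoScaling::AutoScalingGroup` tag syntax and the template's existing `ClusterName` parameter:

```
AutoScalingGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    # ...existing properties unchanged...
    Tags:
      # Tag keys matched by --node-group-auto-discovery
      - Key: k8s.io/cluster-autoscaler/enabled
        Value: "true"
        PropagateAtLaunch: true
      - Key: !Sub "k8s.io/cluster-autoscaler/${ClusterName}"
        Value: "owned"
        PropagateAtLaunch: true
```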