Enabling IAM principal access to your cluster
Access to your cluster using IAM principals is enabled by the AWS IAM Authenticator for Kubernetes, which gets its configuration from the `aws-auth`
ConfigMap. For all `aws-auth`
ConfigMap settings, see Full Configuration Format.
Add IAM principals to your Amazon EKS cluster
When you create an Amazon EKS cluster, the IAM principal that creates the cluster is automatically granted `system:masters` permissions
in the cluster's role-based access control (RBAC) configuration in the Amazon EKS control plane. This principal doesn't appear in any visible configuration, so make sure to keep track of which principal originally created the cluster. To grant additional IAM principals the ability
to interact with your cluster, edit the `aws-auth` ConfigMap within Kubernetes and create a Kubernetes `rolebinding` or `clusterrolebinding` with the name of a group that you specify in the `aws-auth`
ConfigMap.
Note

For more information about Kubernetes role-based access control (RBAC) configuration,
see Using RBAC Authorization in the Kubernetes documentation.
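To illustrate how the two pieces fit together, the following sketch shows a `mapRoles` entry in the `aws-auth` ConfigMap alongside the `clusterrolebinding` that gives the mapped group its permissions. All names and the ARN are placeholders, and the `eks-console-dashboard-full-access-clusterrole` referenced here is assumed to already exist in your cluster:

```yaml
# Entry under data.mapRoles in the aws-auth ConfigMap
# (placeholder ARN, username, and group)
- rolearn: arn:aws:iam::111122223333:role/my-team-role
  username: my-team-user
  groups:
    - eks-console-dashboard-full-access-group
---
# ClusterRoleBinding that grants the mapped group the permissions
# of an existing ClusterRole (assumed to exist)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: eks-console-dashboard-full-access-binding
subjects:
- kind: Group
  name: eks-console-dashboard-full-access-group
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: eks-console-dashboard-full-access-clusterrole
  apiGroup: rbac.authorization.k8s.io
```

With both pieces in place, the IAM role authenticates as `my-team-user` and receives whatever permissions the `ClusterRole` defines.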
To add an IAM principal to an Amazon EKS cluster
-
Determine which credentials `kubectl` is using to access your cluster. On your computer, you can see which credentials `kubectl` uses with the following command. Replace `~/.kube/config` with the path to your `kubeconfig` file if you don't use the default path.

```
cat ~/.kube/config
```

An example output is as follows.

```
[...]
contexts:
- context:
    cluster: my-cluster.region-code.eksctl.io
    user: admin@my-cluster.region-code.eksctl.io
  name: admin@my-cluster.region-code.eksctl.io
current-context: admin@my-cluster.region-code.eksctl.io
[...]
```

In the previous example output, the credentials for a user named admin are configured for a cluster named my-cluster. If this is the user that created the cluster, then it already has access to your cluster. If it's not the user that created the cluster, then you need to complete the remaining steps to enable cluster access for other IAM principals. IAM best practices recommend that you grant permissions to roles instead of users. You can see which other principals currently have access to your cluster with the following command:

```
kubectl describe -n kube-system configmap/aws-auth
```

An example output is as follows.

```
Name:         aws-auth
Namespace:    kube-system
Labels:       <none>
Annotations:  <none>

Data
====
mapRoles:
----
- groups:
  - system:bootstrappers
  - system:nodes
  rolearn: arn:aws:iam::111122223333:role/my-node-role
  username: system:node:{{EC2PrivateDNSName}}

BinaryData
====

Events:  <none>
```

The previous example is a default `aws-auth` ConfigMap. Only the node instance role has access to the cluster.
-
Make sure that you have existing Kubernetes `roles` and `rolebindings`, or `clusterroles` and `clusterrolebindings`, that you can map IAM principals to. For more information about these resources, see Using RBAC Authorization in the Kubernetes documentation.
-
View your existing Kubernetes `roles` or `clusterroles`. `Roles` are scoped to a `namespace`, but `clusterroles` are scoped to the cluster.

```
kubectl get roles -A
kubectl get clusterroles
```
-
View the details of any `role` or `clusterrole` returned in the previous output and confirm that it has the permissions (`rules`) that you want your IAM principals to have in your cluster.

Replace `role-name` with a `role` name returned in the output from the previous command. Replace `kube-system` with the namespace of the `role`.

```
kubectl describe role role-name -n kube-system
```

Replace `cluster-role-name` with a `clusterrole` name returned in the output from the previous command.

```
kubectl describe clusterrole cluster-role-name
```
-
View your existing Kubernetes `rolebindings` or `clusterrolebindings`. `Rolebindings` are scoped to a `namespace`, but `clusterrolebindings` are scoped to the cluster.

```
kubectl get rolebindings -A
kubectl get clusterrolebindings
```
-
View the details of any `rolebinding` or `clusterrolebinding` and confirm that it has a `role` or `clusterrole` from the previous step listed as a `roleRef` and a group name listed for `subjects`.

Replace `role-binding-name` with a `rolebinding` name returned in the output from the previous command. Replace `kube-system` with the `namespace` of the `rolebinding`.

```
kubectl describe rolebinding role-binding-name -n kube-system
```

An example output is as follows.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: eks-console-dashboard-restricted-access-role-binding
  namespace: default
subjects:
- kind: Group
  name: eks-console-dashboard-restricted-access-group
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: eks-console-dashboard-restricted-access-role
  apiGroup: rbac.authorization.k8s.io
```

Replace `cluster-role-binding-name` with a `clusterrolebinding` name returned in the output from the previous command.

```
kubectl describe clusterrolebinding cluster-role-binding-name
```

An example output is as follows.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: eks-console-dashboard-full-access-binding
subjects:
- kind: Group
  name: eks-console-dashboard-full-access-group
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: eks-console-dashboard-full-access-clusterrole
  apiGroup: rbac.authorization.k8s.io
```
-
-
Edit the `aws-auth` ConfigMap. You can use a tool such as `eksctl` to update the ConfigMap, or you can update it manually by editing it.

Important

We recommend using `eksctl`, or another tool, to edit the ConfigMap. For information about other tools you can use, see Use tools to make changes to the aws-auth ConfigMap in the Amazon EKS best practices guides. An improperly formatted `aws-auth` ConfigMap can cause you to lose access to your cluster.
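As a sketch of the tool-based approach, `eksctl` can add an entry to the `aws-auth` ConfigMap without hand-editing it. The cluster name, region, ARN, group, and username below are placeholders for your own values:

```shell
# Add an identity mapping for an IAM role to the aws-auth ConfigMap
# (all names and the region are placeholders)
eksctl create iamidentitymapping \
  --cluster my-cluster \
  --region region-code \
  --arn arn:aws:iam::111122223333:role/my-role \
  --group eks-console-dashboard-full-access-group \
  --username admin

# Verify the mapping was added
eksctl get iamidentitymapping --cluster my-cluster --region region-code
```

Because `eksctl` validates and merges the entry for you, it avoids the formatting mistakes that can lock you out of the cluster.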
Apply the aws-auth ConfigMap to your cluster
The aws-auth
ConfigMap is automatically created and applied to your cluster when you
create a managed node group or when you create a node group using eksctl. It is
initially created to allow nodes to join your cluster, but you also use this
ConfigMap to add role-based access control (RBAC) access to
IAM principals. If you've launched self-managed nodes and haven't applied the
aws-auth
ConfigMap to your cluster, you can do so with the following
procedure.
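For orientation before you run the following procedure, the stock `aws-auth-cm.yaml` template it downloads looks approximately like this. Treat this sketch as illustrative; the downloaded file is authoritative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
```

The only value you need to change is the `rolearn` placeholder.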
To apply the aws-auth ConfigMap to your cluster
-
Check to see if you've already applied the `aws-auth` ConfigMap.

```
kubectl describe configmap -n kube-system aws-auth
```

If you receive an error stating "Error from server (NotFound): configmaps "aws-auth" not found", then proceed with the following steps to apply the stock ConfigMap.
-
Download, edit, and apply the AWS authenticator configuration map.
-
Download the configuration map.

```
curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/aws-auth-cm.yaml
```
-
In the `aws-auth-cm.yaml` file, set the `rolearn` to the Amazon Resource Name (ARN) of the IAM role associated with your nodes. You can do this with a text editor, or by replacing `my-node-instance-role` and running the following command:

```
sed -i.bak -e 's|<ARN of instance role (not instance profile)>|my-node-instance-role|' aws-auth-cm.yaml
```

Don't modify any other lines in this file.
Important

The role ARN can't include a path such as `role/my-team/developers/my-role`. The format of the ARN must be `arn:aws:iam::111122223333:role/my-role`. In this example, `my-team/developers/` needs to be removed.

You can inspect the AWS CloudFormation stack outputs for your node groups and look for the following values:
-
InstanceRoleARN – For node groups that were created with eksctl
-
NodeInstanceRole – For node groups that were created with Amazon EKS vended AWS CloudFormation templates in the AWS Management Console
-
-
Apply the configuration. This command may take a few minutes to finish.

```
kubectl apply -f aws-auth-cm.yaml
```

Note

If you receive any authorization or resource type errors, see Unauthorized or access denied (kubectl) in the troubleshooting topic.
-
-
Watch the status of your nodes and wait for them to reach the `Ready` status.

```
kubectl get nodes --watch
```

Enter `Ctrl`+`C` to return to a shell prompt.
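The `sed` edit from the procedure above can be tried locally, without a cluster. This sketch creates a stand-in file containing just the template's placeholder `rolearn` line (an assumption; the real downloaded file has more content) and applies the same substitution with an example role ARN:

```shell
# Create a stand-in for the downloaded template's rolearn line
printf 'rolearn: <ARN of instance role (not instance profile)>\n' > aws-auth-cm.yaml

# Same substitution as in the procedure, using an example role ARN
sed -i.bak -e 's|<ARN of instance role (not instance profile)>|arn:aws:iam::111122223333:role/my-node-role|' aws-auth-cm.yaml

# Confirm the replacement; note the ARN has no path after "role/"
grep rolearn aws-auth-cm.yaml
```

The `-i.bak` flag keeps the original file as `aws-auth-cm.yaml.bak`, a convenient safety net before running `kubectl apply`.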