EKS is a managed service from AWS for creating Kubernetes clusters. AWS manages the control plane components, while cluster access is managed by the end user. Granting access has traditionally been a three-step process:
- Create the appropriate IAM role with the right permissions
- Create the ClusterRole / Role and its role bindings
- Update the aws-auth ConfigMap to associate the IAM role with the Kubernetes group bound in step 2
This is often a manual process even when using IaC, as we have to create the cluster first and then run something like kustomize to update the aws-auth ConfigMap. It also introduces tight coupling between the IAM role and the aws-auth ConfigMap: any change to the IAM role requires rebuilding the ConfigMap. We also need to grant additional permissions to the IAM role to allow access to, say, the EKS console in the dashboard.
With recent EKS versions (1.29+), we can use access policies, which are modelled on the Kubernetes user-facing roles, to create an access entry that links the IAM role to the specified access policy.
To view the list of access policies:
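Assuming the AWS CLI is configured with suitable credentials, the AWS-managed access policies can be listed with:

```shell
# List the AWS-managed EKS access policies and their ARNs
aws eks list-access-policies
```

The output includes policy ARNs of the form `arn:aws:eks::aws:cluster-access-policy/...`, which are used when associating a policy with an access entry.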
For example, given an IAM role of `KubernetesAdmin`, we can run the following CLI commands to associate it with the `AmazonEKSClusterAdminPolicy`, which grants cluster-wide admin access:
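A sketch of the two calls involved; the cluster name `demo` and account ID `123456789012` are placeholders:

```shell
# Create an access entry for the IAM role on the cluster
aws eks create-access-entry \
  --cluster-name demo \
  --principal-arn arn:aws:iam::123456789012:role/KubernetesAdmin

# Associate the cluster-wide admin access policy with the entry
aws eks associate-access-policy \
  --cluster-name demo \
  --principal-arn arn:aws:iam::123456789012:role/KubernetesAdmin \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
  --access-scope type=cluster
```

The `--access-scope` flag also accepts `type=namespace` with a list of namespaces, for scoping a policy to part of the cluster instead.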
In Terraform, using the `eks` module:
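A minimal sketch using the community `terraform-aws-modules/eks/aws` module, which added access-entry support in v20; the cluster name, account ID, and role name are placeholders:

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"

  cluster_name    = "demo"
  cluster_version = "1.29"

  # Map the KubernetesAdmin IAM role to cluster-wide admin access
  access_entries = {
    kubernetes_admin = {
      principal_arn = "arn:aws:iam::123456789012:role/KubernetesAdmin"

      policy_associations = {
        admin = {
          policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"
          access_scope = {
            type = "cluster"
          }
        }
      }
    }
  }
}
```

Because the access entry lives alongside the cluster definition, a single `terraform apply` handles both, with no post-creation ConfigMap patching.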
To view the access entries, we can click on the cluster in the EKS console and navigate to the Access tab, or use the CLI:
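Assuming the same placeholder cluster name as above:

```shell
# List all access entries registered on the cluster
aws eks list-access-entries --cluster-name demo
```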
From the deployment above, the following entries were created:
From above, we can see that the `KubernetesAdmin` role has been added successfully to the cluster access entries. To test the assignment, we can generate a kubeconfig for the role and try to access the cluster:
We can try to perform some cluster admin tasks by viewing and creating resources:
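For instance, a few commands that exercise both read and write access; the test namespace name is arbitrary:

```shell
# Read resources across all namespaces
kubectl get pods --all-namespaces

# Create a resource to confirm write access
kubectl create namespace access-entry-test

# Ask the API server whether the current identity has full admin rights
kubectl auth can-i '*' '*'
```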
Personally, I find access entries and policies easier to reason about for access management, as we no longer have to manually update the aws-auth ConfigMap.
More details can be found on Managing EKS access entries.