
How to Fix Kubectl Access Not Working in EKS

So you’ve set up your EKS cluster, run your GitHub Actions pipeline (or maybe a kubectl command from your terminal), and hit an error along the lines of:


error: You must be logged in to the server (Unauthorized)

This usually happens because kubectl cannot authenticate with your EKS cluster.


And that’s usually caused by:


Your AWS credentials have expired.


The kubeconfig isn’t generated or is outdated.


You’re inside CI/CD and forgot to assume the right IAM role.


You’re using EKS Access Entries but the identity isn’t mapped.


The IAM identity running the command doesn’t have access to the cluster.



You can break this down into 4 parts. Here’s what I do when I hit this error:


1. Check the STS identity

Run this command to see which IAM identity is being used:


aws sts get-caller-identity

If this fails, your AWS credentials are broken or expired. In CI, make sure the workflow is assuming the right role.
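If the command succeeds, it returns the account and ARN of the identity your credentials resolve to. The output looks roughly like this (placeholder values, assuming an assumed role):


{
    "UserId": "AROAEXAMPLEID:my-session",
    "Account": "<account-id>",
    "Arn": "arn:aws:sts::<account-id>:assumed-role/<role-name>/my-session"
}

Make sure the ARN is the identity you expect to have cluster access, not a personal user or a default instance role.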


2. Update your kubeconfig

Once AWS credentials are sorted, you need to tell kubectl how to talk to the EKS cluster:


aws eks update-kubeconfig --region <region> --name <cluster-name>

This command populates your kubeconfig with the right token provider and endpoint. You can verify it worked with:


kubectl get svc
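
Under the hood, update-kubeconfig adds a user entry to your kubeconfig that shells out to the AWS CLI for a token on every request. It looks roughly like this (a sketch; the exact arguments vary by CLI version):


users:
- name: arn:aws:eks:<region>:<account-id>:cluster/<cluster-name>
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args:
        - --region
        - <region>
        - eks
        - get-token
        - --cluster-name
        - <cluster-name>

This is why broken AWS credentials show up as kubectl errors: the token comes from whatever identity the AWS CLI is using.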

3. Check if the IAM identity has EKS access

If you're using EKS Access Entries, run:


eksctl get accessentry --cluster <cluster-name>

Look for your IAM user or role in the list.
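If your identity isn't listed, you can add it and attach an access policy with the AWS CLI. A sketch along these lines (AmazonEKSClusterAdminPolicy grants full cluster admin, so scope it down if you can):


aws eks create-access-entry \
  --cluster-name <cluster-name> \
  --principal-arn arn:aws:iam::<account-id>:role/<role-name>

aws eks associate-access-policy \
  --cluster-name <cluster-name> \
  --principal-arn arn:aws:iam::<account-id>:role/<role-name> \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
  --access-scope type=cluster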


If you're using the older aws-auth configMap method:


kubectl -n kube-system get configmap aws-auth -o yaml

Ensure your IAM entity is mapped under mapRoles or mapUsers.
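A typical role mapping looks something like this (system:masters gives full cluster-admin, so use a narrower group where possible):


apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::<account-id>:role/<role-name>
      username: <role-name>
      groups:
        - system:masters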


4. Fix it in CI/CD

If this error is in GitHub Actions or another CI tool, make sure you're running these steps before you use kubectl:


- name: Configure AWS credentials
  uses: aws-actions/configure-aws-credentials@v2
  with:
    role-to-assume: arn:aws:iam::<account-id>:role/<role-name>
    aws-region: <aws-region>

- name: Update kubeconfig
  run: aws eks update-kubeconfig --name <cluster-name> --region <aws-region>

- name: Verify access
  run: kubectl get nodes

This ensures kubectl can connect to the cluster using the right identity.
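
One more CI gotcha: if the credentials step assumes the role through GitHub's OIDC provider, the job also needs permission to request an ID token, for example:


permissions:
  id-token: write
  contents: read

Without it, the Configure AWS credentials step fails before kubectl ever runs.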
