Certificates in Kubernetes: Creating Users and Managing Access

Kubernetes uses certificates for secure communication between its components and to authenticate users. At the heart of this system lies a Certificate Authority (CA) that signs and verifies certificates used within the cluster.

We are going to learn:

  • How the Kubernetes CA works.
  • How to manually create a user and kubeconfig using certificates (with kind).
  • How to manage access in AWS (EKS) and Google Cloud (GKE), which handle authentication differently.

Understanding the Kubernetes Certificate Authority (CA)

Kubernetes clusters typically include a built-in Certificate Authority managed by the control plane. The CA is used to:

  • Sign the certificates used by the API server, kubelet, controller manager, etc.
  • Sign client certificates for users and service accounts (if manually configured).
  • Verify incoming client requests using mutual TLS (mTLS).

The Kubernetes API server can authenticate clients via:

  • Client certificates.
  • Bearer tokens (ServiceAccounts, OIDC, etc.).
  • Webhooks or external auth integrations (e.g., AWS IAM, GCP IAM).

Creating a New User Using Certificates (with kind)

Let's go through a step-by-step example using kind (Kubernetes in Docker).

1: Create a Kind Cluster

kind create cluster --name cert-demo

2: Extract the CA from the Cluster

kind writes the cluster's kubeconfig to ~/.kube/config, and the CA certificate is embedded in it as base64-encoded data.

kubectl config view --raw > kubeconfig.yaml

Extract the CA certificate:

cat kubeconfig.yaml | grep certificate-authority-data | awk '{print $2}' | base64 -d > ca.crt

3: Generate a Private Key and CSR for a New User

openssl genrsa -out newuser.key 2048

openssl req -new -key newuser.key -out newuser.csr -subj "/CN=newuser/O=developers"
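Before sending the CSR off for signing, it's worth double-checking its subject: Kubernetes reads the username from CN and the group membership from O. A quick self-contained check (recreating the key and CSR in a temporary directory):

```shell
cd "$(mktemp -d)"

# Same commands as above, in a throwaway directory.
openssl genrsa -out newuser.key 2048
openssl req -new -key newuser.key -out newuser.csr -subj "/CN=newuser/O=developers"

# Inspect the CSR subject: CN becomes the username, O becomes the group.
openssl req -in newuser.csr -noout -subject
```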

4: Sign the Certificate with the Cluster CA

You'll need the cluster CA's private key, which lives on the control plane node. For kind, the control plane runs as a Docker container named <cluster-name>-control-plane, so for our cert-demo cluster the container is cert-demo-control-plane. Copy the CSR in, sign it, then copy the certificate back out:

docker cp newuser.csr cert-demo-control-plane:/tmp/newuser.csr

docker exec -it cert-demo-control-plane bash
cd /etc/kubernetes/pki
openssl x509 -req -in /tmp/newuser.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
-out /tmp/newuser.crt -days 365
exit

Copy newuser.crt back to your machine:

docker cp cert-demo-control-plane:/tmp/newuser.crt .
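If you want to rehearse the signing flow without poking inside the container, the same steps can be run entirely locally against a throwaway stand-in CA (the demo-ca files below are just examples, not files from the cluster):

```shell
cd "$(mktemp -d)"

# Throwaway self-signed CA standing in for the cluster CA.
openssl genrsa -out demo-ca.key 2048
openssl req -x509 -new -key demo-ca.key -days 1 -out demo-ca.crt \
  -subj "/CN=demo-kubernetes-ca"

# User key and CSR, as in step 3.
openssl genrsa -out newuser.key 2048
openssl req -new -key newuser.key -out newuser.csr -subj "/CN=newuser/O=developers"

# Sign the CSR with the stand-in CA, as in step 4.
openssl x509 -req -in newuser.csr -CA demo-ca.crt -CAkey demo-ca.key \
  -CAcreateserial -out newuser.crt -days 1

# The signed certificate should chain back to the CA.
openssl verify -CAfile demo-ca.crt newuser.crt
```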

5: Create a kubeconfig for the New User

kubectl config set-credentials newuser \
--client-certificate=newuser.crt \
--client-key=newuser.key \
--embed-certs=true \
--kubeconfig=newuser.kubeconfig

kubectl config set-cluster kind-cert-demo \
--server=https://127.0.0.1:6443 \
--certificate-authority=ca.crt \
--embed-certs=true \
--kubeconfig=newuser.kubeconfig

kubectl config set-context newuser-context \
--cluster=kind-cert-demo \
--user=newuser \
--kubeconfig=newuser.kubeconfig

kubectl config use-context newuser-context --kubeconfig=newuser.kubeconfig
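After these commands, newuser.kubeconfig should look roughly like this (the embedded base64 blobs are elided):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: kind-cert-demo
  cluster:
    server: https://127.0.0.1:6443
    certificate-authority-data: <base64-encoded ca.crt>
users:
- name: newuser
  user:
    client-certificate-data: <base64-encoded newuser.crt>
    client-key-data: <base64-encoded newuser.key>
contexts:
- name: newuser-context
  context:
    cluster: kind-cert-demo
    user: newuser
current-context: newuser-context
```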

6: RBAC – Grant the User Access

kubectl create rolebinding newuser-rb \
--clusterrole=view \
--user=newuser \
--namespace=default
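The imperative command above is equivalent to applying this manifest:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: newuser-rb
  namespace: default
subjects:
- kind: User
  name: newuser
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
```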

Now test access:

KUBECONFIG=newuser.kubeconfig kubectl get pods

Certificates and Authentication in AWS (EKS)

Now that we've taken a look at how to do this locally with kind, let's see how to apply it in a production cluster.

EKS does not use client certificates for authentication like self-hosted clusters. Instead, AWS IAM identities (users or roles) authenticate to the Kubernetes API server. The bridge between AWS IAM and Kubernetes RBAC is the aws-auth ConfigMap.

So we are going to cover:

  • How IAM roles/users gain access to EKS
  • How to configure the aws-auth ConfigMap using mapRoles
  • How Kubernetes RBAC controls what those IAM identities can do

1: Create an IAM Role or User

You can allow access using either:

  • An IAM user (for direct developers, though not recommended for production)
  • An IAM role (preferred for federated access, CI/CD, automation, or IAM Identity Center)

Make sure the IAM entity has permission to call:

{
  "Action": "eks:DescribeCluster",
  "Effect": "Allow",
  "Resource": "*"
}

This is the minimum permission to authenticate to an EKS cluster.

2: Update the aws-auth ConfigMap

To map the IAM role or user to a Kubernetes identity, edit the aws-auth ConfigMap in the kube-system namespace:

kubectl edit configmap aws-auth -n kube-system

How mapRoles Works

The mapRoles section maps IAM roles to Kubernetes usernames and groups. Here's a full example:

mapRoles: |
  - rolearn: arn:aws:iam::123456789012:role/DevOpsRole
    username: devops
    groups:
      - system:masters

Let's take a look at what each field does:

| Field | Description |
|---|---|
| rolearn | The ARN of the IAM role. Anyone who assumes this role can access the cluster. |
| username | The Kubernetes username this role maps to. Shown in audit logs. |
| groups | A list of Kubernetes groups. These are used in RBAC to define permissions. |

Something else we can do is use dynamic username mapping.

You can use placeholders in username, like {{SessionName}}, to dynamically include the session name from AWS:

mapRoles: |
  - rolearn: arn:aws:iam::123456789012:role/SRETeamRole
    username: sre:{{SessionName}}
    groups:
      - devops-readonly

This makes audit logs more useful by identifying who assumed the role.

Example: Mapping an IAM User

You can also map IAM users with mapUsers (less common and not recommended for production):

mapUsers: |
  - userarn: arn:aws:iam::123456789012:user/dev-user
    username: dev-user
    groups:
      - view-only

Pay attention to this: IAM roles are preferred over users because they're more flexible, more secure, and work better with IAM Identity Center.

3: Create RBAC Bindings in Kubernetes

Once an IAM identity is mapped to a Kubernetes user and group, RBAC takes over.

Here’s how to bind the devops-readonly group to the view ClusterRole:

kubectl create clusterrolebinding readonly-access \
  --clusterrole=view \
  --group=devops-readonly
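As YAML, that binding looks like this — note that the subject is a Group, matching the groups list from the aws-auth mapping:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: readonly-access
subjects:
- kind: Group
  name: devops-readonly
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
```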

Here is a full example aws-auth ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::123456789012:role/DevOpsRole
      username: devops
      groups:
        - devops-admin
    - rolearn: arn:aws:iam::123456789012:role/ReadOnlyRole
      username: readonly:{{SessionName}}
      groups:
        - devops-readonly
  mapUsers: |
    - userarn: arn:aws:iam::123456789012:user/dev-user
      username: dev-user
      groups:
        - view-only

Then, use ClusterRoleBinding to bind these groups to RBAC permissions.

Certificates and Authentication in Google Cloud (GKE)

GKE also does not use client certificates for user authentication. Instead, it relies on Google Cloud IAM, with credentials set up through the gcloud CLI.

Authentication in GKE

When you run:

gcloud container clusters get-credentials my-cluster --region us-central1

This writes a kubeconfig entry that authenticates using OAuth 2.0 tokens from your Google identity.
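For illustration, the user entry that get-credentials writes uses an exec plugin rather than embedded certificates. The exact fields vary by gcloud version; this is a sketch (the project, region, and cluster names are placeholders):

```yaml
users:
- name: gke_my-project_us-central1_my-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: gke-gcloud-auth-plugin
      provideClusterInfo: true
```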

GKE uses:

  • OAuth 2.0 tokens from Google identity.
  • RBAC in Kubernetes for access control.

You can bind a Google user or service account to Kubernetes roles:

kubectl create clusterrolebinding gcp-user-binding \
  --clusterrole=cluster-admin \
  --user=myuser@gmail.com

No manual certificate handling is needed.

Summary

| Platform | User Auth Method | Manual Certs? | Notes |
|---|---|---|---|
| kind (manual cluster) | Client certificate (mTLS) | Yes | Good for learning and testing |
| AWS (EKS) | IAM identity + aws-auth ConfigMap | No | Uses IAM roles for auth |
| GCP (GKE) | OIDC via gcloud / Google IAM | No | No manual certificate handling needed |