AWS EKS OIDC with Google Workspace

To achieve user traceability of admin actions in your Kubernetes cluster, it is a good idea to set up personalized accounts. In the IAM User Traceability in AWS EKS post I covered the specifics of aws-auth ConfigMap user and role mapping. This post dives into OIDC integration for AWS EKS user management.

As an alternative to aws-auth, it is possible to use an OpenID Connect identity provider with AWS EKS. The diagram below illustrates this approach.

AWS EKS OpenID Connect

Using an identity provider offers major benefits, provided that users are configured with personalised identities.

For a list of certified providers, see OpenID Certification on the OpenID site.

An important detail, however, is that not all OIDC providers offer a ‘groups’ claim, so fine-grained user authorization is not always possible. For example, Google Workspace does not provide a groups claim, so if you want to use Google Workspace as the user directory for your Kubernetes cluster users, you will not be able to manage different levels of user access with Google Groups.
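
For illustration only (this is not a real token), here is roughly what a decoded OIDC ID token payload looks like when the provider does include a groups claim; that array is exactly what Kubernetes RBAC group bindings key on, and it is missing from tokens issued by Google Workspace directly:

{
  "iss": "https://<your-idp-domain>",
  "sub": "Ch1234567890",
  "aud": "my-cluster-client",
  "email": "jane.doe@<your-domain>",
  "email_verified": true,
  "groups": ["k8s-admins@<your-domain>"]
}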

To overcome this problem, we can use a federated OpenID Connect provider. There are many options available, both free and paid. This post covers a free solution.

Federated OpenID Connect Providers

Federated OpenID Connect providers allow you to combine multiple upstream identity providers and offer extra functionality, such as providing a groups claim where it is not natively supported by the upstream provider.

Dex is an OSS example of a federated provider. Here is a list of Dex-implemented connectors. As we can see, the Google connector supports the groups claim and refresh tokens (which are required for kubectl to work).

AWS EKS OIDC Federated

In this example I will set up AWS EKS user management with Google Workspace and group support via the Dex Google connector. If you have Google Workspace Business Plus or above, you might want to consider using the Dex LDAP integration instead - that connector is stable, while the Dex Google connector is still in alpha.

From the diagram above, it is clear that we need to set up/configure the following resources:

  1. Google Workspace OIDC configuration
  2. Dex OIDC Provider
  3. Configure AWS EKS Cluster to work with IdP
  4. Configure client

1. Google Workspace OIDC configuration

Create a new Google Cloud Platform project

This step is optional. You can use an existing project if you have a suitable one. Here is the official documentation for creating a project. Below is a brief recap.

GCP Select Project

GCP New Project Screen

GCP New Project Config

GCP Select Existing Project

OAuth needs a consent screen before it can acquire user credentials.

GCP OAuth Screen User External

GCP OAuth Screen Config

GCP OAuth Screen Scopes

GCP OAuth Test User

Create Google API credentials for OIDC

Create a new OAuth ClientID Key

GCP Create OAuth Key

Add the Dex callback URL (the Dex issuer URL with the /callback suffix, e.g. https://dex.<your-domain>/callback) to the “Authorised redirect URIs” field.

Create a service account to fetch groups

To allow Dex to fetch group information from Google, you will need to configure a service account for Dex to use. This account needs Domain-Wide Delegation and permission to access the API scope.

  1. Open the Service Accounts page
  2. Click “Create Service Account”
  3. Provide a name, id and description for the account and click “Create and Continue” GCP Create SA
  4. On the “Grant this service account access to the project” screen click “Continue” (do not add anything)
  5. On the “Grant users access to this service account” screen click “Done” (do not add anything)
  6. Click on the newly created service account and open “Keys” tab
  7. Click Add key > Create new key > JSON and click Create; the key will be downloaded to your computer
  8. From your Google Workspace domain’s Admin console, go to Menu > Security > Access and data control > API controls.
  9. In the “Domain-wide delegation” area click “Manage domain-wide delegation”
  10. Click “Add new”
  11. In the “Client ID” field enter the client ID from the JSON key generated in step 7 above
  12. In “OAuth scopes” enter “https://www.googleapis.com/auth/admin.directory.group.readonly” (the read-only Directory API groups scope)
  13. Click “Authorise”
  14. Enable the Admin SDK
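
For step 14, the Admin SDK can be enabled from the API Library in the Cloud console; alternatively, assuming you have the gcloud CLI configured for the same project, a one-liner like this should do it:

# Enable the Admin SDK API in the project that hosts the Dex credentials
gcloud services enable admin.googleapis.com --project <gcp-project-id>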

2. Dex OIDC Provider

It is generally a good idea to run the Dex OIDC provider outside of the Kubernetes cluster it is going to serve. A failed Dex deployment might lock you out of the cluster; in that case you will have to fall back to IAM-based access to restore Dex functionality. Running Dex outside the cluster, however, requires some extra effort if you do not have a separate cluster or workflow for such workloads.

Dex needs to be accessible from both the client web browser and the Kubernetes API.

For the sake of simplicity and to keep the focus on the OIDC configuration, I will take a couple of shortcuts; these need to be adjusted according to your real cluster setup:

  • Dex will run on the same cluster.
  • I will use a classic Service load balancer to skip installing the AWS Load Balancer Controller. This is a legacy solution; normally you would use that controller instead.
  • Certificate will be created manually. This can be automated by CertManager.


Make sure you have the following dependencies prepared:

Ingress Controller

We need some kind of ingress controller to provide access to Dex. The default configuration below uses a Classic Load Balancer; one should consider using the AWS Load Balancer Controller instead. More info in the AWS docs.

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install \
  ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace

Verify the ingress installation. You should see a DNS name assigned to ingress-nginx-controller in the EXTERNAL-IP column:

kubectl get svc -n ingress-nginx

Set up a DNS record for your ingress controller (this can be automated via the ExternalDNS controller). Get the canonical HostedZoneId of the load balancer with the following command:

aws elb describe-load-balancers --load-balancer-names XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
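
If you only need the two values used below, the same call can be narrowed down with a standard JMESPath query (illustrative; the load balancer name is the one reported by kubectl get svc):

aws elb describe-load-balancers \
  --load-balancer-names XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX \
  --query 'LoadBalancerDescriptions[0].[CanonicalHostedZoneNameID,DNSName]' \
  --output text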

Then generate the change batch. HostedZoneId here is the canonical hosted zone ID and DNSName is the load balancer DNS name returned by the previous command:

cat << 'EOF' > batch.json
{
    "Comment": "Create ingress record",
    "Changes": [
        {
            "Action": "CREATE",
            "ResourceRecordSet": {
                "Name": "dex.<your-domain>",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "ZXXXXXXXXXXXXX",
                    "DNSName": "<load-balancer-dns-name>",
                    "EvaluateTargetHealth": false
                }
            }
        }
    ]
}
EOF
Then create the Route53 record. Note that the hosted zone ID passed to this command is the ID of the Route53 hosted zone, not the canonical hosted zone ID from above.
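
If you do not have the Route53 hosted zone ID at hand, it can be looked up by domain name (illustrative command; note that the returned Id value is prefixed with /hostedzone/):

aws route53 list-hosted-zones-by-name --dns-name <your-domain> \
  --query 'HostedZones[0].Id' --output text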

aws route53 change-resource-record-sets --hosted-zone-id ZXXXXXXXXXXXXX --change-batch file://batch.json


CertManager

Dex requires a TLS setup, so we need CertManager or some other way of managing certificates.

helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --set installCRDs=true
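
A quick sanity check before moving on - all cert-manager pods should be in the Running state:

kubectl get pods -n cert-manager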

Create an ACME Certificate issuer. More info in CertManager documentation.

cat << 'EOF' | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: acme
spec:
  acme:
    privateKeySecretRef:
      name: acme-account-key
    # ACME server, e.g. Let's Encrypt production
    server: 'https://acme-v02.api.letsencrypt.org/directory'
    solvers:
      - http01:
          ingress:
            class: nginx
EOF
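
The issuer should report Ready before you rely on it:

kubectl get clusterissuer acme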


Create a new namespace and make it default

kubectl create ns dex
kubectl config set-context --current --namespace dex

Create a secret with the Google Workspace OAuth client credentials we generated in one of the previous steps:

kubectl create secret generic \
  dex-env \
  --from-literal=GOOGLE_CLIENT_ID=<google_oauth_client_id> \
  --from-literal=GOOGLE_CLIENT_SECRET=<google_oauth_client_secret> \
  --namespace dex

Create a secret with the Google Workspace service account JSON downloaded earlier:

kubectl create secret generic dex-google-sa \
  --from-file=<downloaded_sa_file>.json \
  --namespace dex

Create a minimal Dex configuration. Replace placeholder values:

# dex.yaml
ingress:
  enabled: true
  className: nginx
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    cert-manager.io/cluster-issuer: acme
  hosts:
    - host: dex.<your-domain>
      paths:
        - path: /
          pathType: ImplementationSpecific
  tls:
    - secretName: dex-tls
      hosts:
        - dex.<your-domain>

# Use credentials from a Secret
envFrom:
  - secretRef:
      name: dex-env

# Mount a volume with the google service account json
volumes:
  - name: dex-google-sa
    secret:
      secretName: dex-google-sa
      defaultMode: 420
volumeMounts:
  - mountPath: /var/run/dex-google-sa
    name: dex-google-sa

config:
  issuer: https://dex.<your-domain>

  storage:
    type: kubernetes
    config:
      inCluster: true

  # See https://dexidp.io/docs/connectors/google/ for more options
  connectors:
    - type: google
      id: google
      name: Google
      config:
        # Connector config values starting with a "$" will read from the environment.
        clientID: $GOOGLE_CLIENT_ID
        clientSecret: $GOOGLE_CLIENT_SECRET

        # Dex's issuer URL + "/callback"
        redirectURI: https://dex.<your-domain>/callback

        # Whitelist allowed domains
        hostedDomains:
          - <your-domain>

        # Google does not support the OpenID Connect groups claim and only supports
        # fetching a user's group membership with a service account.
        # This service account requires an authentication JSON file and the email
        # of a G Suite admin to impersonate:
        serviceAccountFilePath: /var/run/dex-google-sa/<downloaded_sa_file>.json
        # adminEmail should be the email of a G Suite super user. The service account you
        # created earlier will impersonate this user when making calls to the admin API.
        # A valid user should be able to retrieve a list of groups when testing the API.
        adminEmail: <workspace-admin-email>

  # Client for kubelogin
  staticClients:
    - id: kubelogin
      secret: some-random-secret
      name: "Kubelogin"
      redirectURIs:
        - http://localhost:8000
        - http://localhost:18000

Add the Dex Helm repo and install the Helm chart:

helm repo add dex https://charts.dexidp.io
helm repo update
helm install \
  dex dex/dex \
  --namespace dex \
  --values dex.yaml
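
Once the Dex pod is running and the certificate has been issued, you can check that Dex is reachable by fetching its OIDC discovery document (replace the host with your Dex domain):

curl https://dex.<your-domain>/.well-known/openid-configuration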

3. Configure AWS EKS Cluster to work with IdP

It is possible to do this either via the console or eksctl. Below is the AWS Console version; see the official AWS documentation for the eksctl version.
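
If you prefer not to click through the console, the same association can also be done with the AWS CLI; this is a sketch that assumes the issuer and the static client defined in the Dex configuration above:

aws eks associate-identity-provider-config \
  --cluster-name <eks-cluster-name> \
  --oidc identityProviderConfigName=dex,issuerUrl=https://dex.<your-domain>,clientId=kubelogin,usernameClaim=email,groupsClaim=groups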

Set up Kubernetes RBAC

In this example we map a group to the standard pre-existing cluster-admin ClusterRole. You can create other roles and mappings for different user types.

cat << 'EOF' | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dex-cluster-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: Group
    name: "<google-group-email>"
    apiGroup: rbac.authorization.k8s.io
EOF

4. Configure client

Add user to a group

Previously we mapped the cluster-admin role to a group. A Google Workspace user needs to be a member of this group.
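
Group membership is normally managed in the Google Admin console; if you use the gcloud CLI with the Cloud Identity API enabled, something along these lines should also work (both addresses are placeholders):

gcloud identity groups memberships add \
  --group-email="<google-group-email>" \
  --member-email="<user-email>"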

Configure kubectl with kubelogin

You will need the kubelogin plugin to make kubectl work with OIDC.

Use one of the following methods:

  # Homebrew (macOS and Linux)
  brew install int128/kubelogin/kubelogin

  # Krew (macOS, Linux, Windows and ARM)
  kubectl krew install oidc-login

  # Chocolatey (Windows)
  choco install kubelogin

Verify authentication

Run this command, using the following parameters from the Dex config we defined earlier:

  • ISSUER_URL - the Dex issuer URL (https://dex.<your-domain>)
  • YOUR_CLIENT_ID - kubelogin
  • YOUR_CLIENT_SECRET - some-random-secret

kubectl oidc-login setup \
  --oidc-issuer-url=ISSUER_URL \
  --oidc-client-id=YOUR_CLIENT_ID \
  --oidc-client-secret=YOUR_CLIENT_SECRET \
  --oidc-extra-scope groups \
  --oidc-extra-scope email \
  --oidc-extra-scope profile

This will open a browser and ask you to approve the OIDC login. If everything is configured correctly, you will see the claims provided by Dex.

Dex Login Page

Configure kubectl

Add a new context to kubeconfig

aws eks update-kubeconfig --name <eks-cluster-name> --alias <eks-cluster-name>-oidc

Add a new user to kubeconfig

kubectl config set-credentials <eks-cluster-name>-oidc \
  --exec-api-version=client.authentication.k8s.io/v1beta1 \
  --exec-command=kubectl \
  --exec-arg=oidc-login \
  --exec-arg=get-token \
  --exec-arg=--oidc-issuer-url=https://dex.<your-domain> \
  --exec-arg=--oidc-client-id=kubelogin \
  --exec-arg=--oidc-client-secret=some-random-secret \
  --exec-arg=--oidc-extra-scope=groups \
  --exec-arg=--oidc-extra-scope=email \
  --exec-arg=--oidc-extra-scope=profile

Assign the new OIDC user to the new context, and switch to the new context:

kubectl config set-context <eks-cluster-name>-oidc --user=<eks-cluster-name>-oidc
kubectl config use-context <eks-cluster-name>-oidc

Your kubeconfig context should now point to the AWS EKS cluster being configured; try running a couple of Kubernetes commands:

kubectl cluster-info
kubectl get ns

Check EKS Cluster Logs

Now we can properly audit user actions on the AWS EKS cluster. If we open the CloudWatch logs, it is possible to clearly identify WHO did WHAT and WHY it was allowed. Yay! 🚀
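
Assuming audit logging is enabled for the cluster, the audit events land in the /aws/eks/<eks-cluster-name>/cluster log group. A sketch of a CLI query for the actions of a specific OIDC user (the filter pattern uses the standard CloudWatch JSON syntax; the username format depends on your usernameClaim setting):

aws logs filter-log-events \
  --log-group-name /aws/eks/<eks-cluster-name>/cluster \
  --log-stream-name-prefix kube-apiserver-audit \
  --filter-pattern '{ $.user.username = "<user-email>" }' \
  --limit 20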

AWS EKS Associate oidc Identity Provider

Read my other post on IAM User Traceability in AWS EKS.