Motivation
Many services, especially those in the Hadoop world, run under Kerberos authentication. Kerberos auth is quite a cumbersome topic, so we want to design a modular and scalable solution we can attach to existing applications.
Solution
We add a sidecar container that handles Kerberos authentication on behalf of the application container. This works for multiple types of K8s Pod controllers, such as Deployments and Jobs. To this end, we will:
- import the keytab file as a K8s secret
- import the /etc/krb5.conf file as a config map
- prepare an application deployment defining an application and a sidecar container
Implementation
Mind that in the following steps you may use the oc and kubectl commands interchangeably.
1. Adding the user keytab
There are two main ways of authenticating to Kerberos: i) a password and ii) a keytab file. Keytab stands for key table: each entry in the file pairs a Kerberos principal (i.e. username@realm) with its key for a specific encryption type. I generally prefer using a keytab over moving around a plain-text password. In K8s, such credentials can be stored as a secret, or otherwise an external vault may be referred to.
You can directly create a secret from a keytab file. Also, I tend to make heavy use of the dry-run feature to let the client write a YAML skeleton I can then fill in. I find this super handy.
You can have a look at the possibilities with a
kubectl create secret --help
Creating a secret is something like:
kubectl create secret <type> <name> --dry-run=client -o yaml
with type being one of: docker-registry, generic (from a file, dir or literal) and tls.
Assuming a generic secret, we then have:
kubectl create secret generic testsecret --dry-run=client -o yaml
which returns:
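For a secret with no data this is just a short skeleton; depending on the kubectl version it may also include a creationTimestamp: null line:

```yaml
apiVersion: v1
kind: Secret
metadata:
  creationTimestamp: null
  name: testsecret
```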
The secret can be imported from multiple sources: i) a file path or a folder containing multiple secret files, ii) key-value literal pairs, iii) a combination of files and literals, iv) an env file.
So:
kubectl create secret generic user-keytab --from-file=<path-to-keytab> --dry-run=client -o yaml
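Assuming the keytab file is named user.keytab, the output looks something like this (the data value is the base64-encoded content of the file):

```yaml
apiVersion: v1
data:
  user.keytab: <base64-encoded content of the keytab>
kind: Secret
metadata:
  creationTimestamp: null
  name: user-keytab
```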
To create the secret, you can either save the output and apply it (kubectl apply -f <file>) or remove the --dry-run=client -o yaml part and let the kubectl client create it for you. I tend to prefer saving the YAML files, since these can be easily versioned in git.
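As a side note, the values under data: are just the base64-encoded bytes of the file, so you can inspect or reproduce them with plain coreutils (using a throwaway stand-in file here, not a real keytab):

```shell
# create a stand-in file in place of a real keytab
printf 'fake-keytab-bytes' > /tmp/user.keytab
# this is the string kubectl would put under .data["user.keytab"]
B64=$(base64 < /tmp/user.keytab | tr -d '\n')
echo "$B64"                 # -> ZmFrZS1rZXl0YWItYnl0ZXM=
# and it round-trips back to the original bytes
printf '%s' "$B64" | base64 -d
```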
2. Adding the krb5.conf file as config map
Similarly to the imported secret, we can create a config map from an existing file with:
kubectl create configmap <name> --from-file=<file-to-import> --dry-run=client -o yaml
assuming we named it krb5-conf, this returns:
apiVersion: v1
data:
  krb5.conf: |
    ...content of krb5.conf...
kind: ConfigMap
metadata:
  name: krb5-conf
Again, this file can be saved locally and created with the usual kubectl create -f <file>.
3. Creating a deployment
Let's now come to the actual meat: how to ramp up an application pod with a sidecar container initiating the Kerberos auth.
Similarly to the two previous examples, we can generate a first skeleton with:
kubectl create deployment <deployment-name> --dry-run=client -o yaml
and use the --help to get some hints.
As visible from the help, --image=<image> is the only required flag.
So:
kubectl create deployment testdeployment --image=appimage:latest --dry-run=client -o yaml
which returns something like:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: testdeployment
  name: testdeployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: testdeployment
  template:
    metadata:
      labels:
        app: testdeployment
    spec:
      containers:
      - name: appimage
        image: appimage:latest
        resources: {}
Let's fill this in according to the following steps:
- add a sidecar container that has a kinit binary available. For this purpose I wrote the geronzio project on github: an alpine image installing krb5 and kstart (for krenew, useful to keep the ticket alive).
- mount the krb5.conf config map on both the application container and the sidecar (as a read-only volume). As you may try and see, it is not a good idea to mount anything at /etc, since Kubernetes normally injects host and DNS info at this location; doing so may result in the Pod being rejected by the admission controller or in an error. A similar behavior may occur at /tmp. So just change the default paths to something creative. Mind that we can use the KRB5_CONFIG and KRB5CCNAME environment variables to overwrite the default locations of the krb5.conf and credential cache files, respectively. Specifically, the cache file can be written to an ephemeral volume used as the communication means between the main and sidecar containers.
- mount the keytab secret on the sidecar container; I mount it as a read-only volume at the /keytabs location.
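As a side note on keeping the ticket alive: instead of a one-shot kinit, the sidecar command can run k5start from the kstart package shipped in the geronzio image. A sketch (the -K check interval of 10 minutes is an arbitrary choice here):

```yaml
command: ["k5start", "-f", "/keytabs/user.keytab", "-K", "10", "$(KRBUSER)@$(REALM)"]
```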
Putting it all together, the pod template spec looks something like the following (the container names and the geronzio image reference are illustrative; adjust them to your registry). Note that the KRB5_CONFIG value points at the krb5.conf key of the mounted config map:

spec:
  containers:
  - name: krb5-sidecar
    image: geronzio:latest
    imagePullPolicy: IfNotPresent
    env:
    - name: KRB5_CONFIG
      value: /etc-krb5/krb5.conf
    - name: KRB5CCNAME
      value: /tmp-krb5/krb5cc
    - name: KRBUSER
      value: anyusername
    - name: REALM
      value: anyrealm.com
    command: ["kinit", "-kt", "/keytabs/user.keytab", "$(KRBUSER)@$(REALM)"]
    volumeMounts:
    - mountPath: /etc-krb5
      name: krb5-conf-volume
      readOnly: true
    - mountPath: /keytabs
      name: keytab-volume
      readOnly: true
    - mountPath: /tmp-krb5
      name: shared-cache
  - name: appimage
    image: appimage:latest
    imagePullPolicy: IfNotPresent
    env:
    - name: KRB5_CONFIG
      value: /etc-krb5/krb5.conf
    - name: KRB5CCNAME
      value: /tmp-krb5/krb5cc
    volumeMounts:
    - mountPath: /etc-krb5
      name: krb5-conf-volume
      readOnly: true
    - mountPath: /tmp-krb5
      name: shared-cache
  securityContext: {}
  volumes:
  - name: krb5-conf-volume
    configMap:
      defaultMode: 420
      name: krb5-conf
  - name: shared-cache
    emptyDir: {}
  - name: keytab-volume
    secret:
      secretName: user-keytab

The $(KRBUSER) and $(REALM) references in the command are expanded by Kubernetes from the container's env. For a Deployment the pod restartPolicy stays at its default, Always.
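As mentioned at the beginning, the same pattern applies to other controllers such as Jobs; only the top-level kind and the restartPolicy differ. A sketch (testjob is a placeholder name; the containers and volumes are the same as in the deployment above):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: testjob
spec:
  template:
    spec:
      restartPolicy: OnFailure   # Jobs accept OnFailure or Never
      containers: []             # same containers and volumes as above
```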