Thursday, October 1, 2020

Authentication and Authorization: a Kafka use case




Introduction

The problem with plain TCP/UDP connections is that they are unencrypted: the entire traffic is visible to anyone on the subnet using simple sniffing (and, optionally, ARP-poisoning) techniques. That becomes a big concern, considering that we may be sending sensitive information, such as passwords.

 

1. SSL/TLS Security

Secure Sockets Layer (SSL) and its successor, Transport Layer Security (TLS), are cryptographic protocols designed to guarantee encrypted communication on top of the transport layer. The protocol consists of an initial handshake, in which details about the communication modality are negotiated, and a communication stage, in which the actual data is exchanged. To authenticate the parties and establish a secure TLS/SSL channel, a certificate signed by a shared trusted certificate authority (CA) is used. TLS/SSL relies on a public key infrastructure (PKI) to distribute keys, i.e., the recipient's public key is used to encrypt messages, while the recipient uses its private key to decrypt them. As we will see, we distinguish between: i) 1-way encryption, in which the client verifies the (CA-signed) certificate provided by the server; and ii) 2-way mutual certificate verification, called authentication, in which both client and server provide a signed certificate and validate the one received from the counterpart. In the 1-way case the client remains totally anonymous to the broker, since the signed certificate is only used to verify the broker's identity. The 2-way case is called authentication because both sides verify certificates: the server can distinguish its clients and enable advanced functionality, such as access control (via ACLs).

To summarize:

  1. Setting up a certificate on the Server
    • A shared certificate authority (CA) is set up, be it public or private, such as an enterprise CA
    • A keystore is set up on the server, from which certificate signing requests are sent to the CA in order to sign the broker's certificate
    • Once the CA signs the certificate, it is sent back and stored in the keystore as a CA-signed certificate for the server
  2. Setting up a certificate on the Client
    • 1-way: a trust store is set up on the client side in order to trust the CA, i.e. to accept all certificates signed by that CA; this is done by saving the CA root public certificate;
    • 2-way: a key store is also set up on the client side, in order to contain the client's own CA-signed certificate;
  3. SSL Handshake
    • The client requests a signed SSL certificate from the server
    • The client verifies the server's SSL certificate using its trust store; if the check succeeds, an encrypted communication channel can be established
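The roles of the trust store and key store in the handshake can be sketched with Python's standard ssl module; the file names mirror the ones generated in the following sections and are purely illustrative here:

```python
import ssl

# A minimal sketch of the two modes described above (file paths hypothetical).
def client_context(ca_cert, client_cert=None, client_key=None):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.load_verify_locations(cafile=ca_cert)      # 1-way: the trust store
    if client_cert is not None:                    # 2-way: present our own signed certificate
        ctx.load_cert_chain(certfile=client_cert, keyfile=client_key)
    return ctx

# PROTOCOL_TLS_CLIENT enables certificate verification and hostname checking
# by default, which is exactly the 1-way check described above.
assert ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT).verify_mode == ssl.CERT_REQUIRED
```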


1.1 Setting up a CA

Setting up a CA can be done with the openssl command:

openssl req -new -newkey rsa:4096 -days 365 -x509 -subj "/CN=Whatever-CA" -keyout ca-key -out ca-cert -nodes

This produces a private key (ca-key) and a public key (ca-cert) file, which can be distributed across the PKI.
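The generated CA certificate can then be inspected (e.g. to double-check the subject and validity dates) with:

```shell
openssl x509 -in ca-cert -noout -subject -issuer -dates
```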

1.2 Setting up a certificate on the Server

Setting up the certificate on the server involves the following steps:

  • set a password for the keystore
    export SRVSTOREPWD=serverpassword
  • create a keystore and a server certificate
    keytool -genkey -keyalg RSA -keystore server.keystore.jks -validity 365 -storepass $SRVSTOREPWD -keypass $SRVSTOREPWD -dname "CN=host.com" -storetype pkcs12
    The CN is the common name, which should be the exposed public hostname of the service (note the explicit -keyalg RSA, since older keytool versions default to DSA); after running the command, a server.keystore.jks file is created, already containing the certificate.
  • sign server certificate
    • create a signing request
      keytool -keystore server.keystore.jks -certreq -file cert-file -storepass $SRVSTOREPWD -keypass $SRVSTOREPWD
      which returns a cert-file file
    • send the signing request to the CA for signing
      openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days 365 -CAcreateserial
      which returns the cert-signed file, containing the signed certificate; no -passin is needed here, since the CA key was generated with -nodes (unencrypted);
  • create a trust store on the server
    keytool -keystore server.truststore.jks -alias CARoot -import -file ca-cert -storepass $SRVSTOREPWD -keypass $SRVSTOREPWD -noprompt
  • import the certificates (CA certificate and signed certificate) into the server key store
    keytool -keystore server.keystore.jks -alias CARoot -import -file ca-cert -storepass $SRVSTOREPWD -keypass $SRVSTOREPWD -noprompt
      which imports the public CA certificate ca-cert;
    keytool -keystore server.keystore.jks -import -file cert-signed -storepass $SRVSTOREPWD -keypass $SRVSTOREPWD -noprompt
      which imports cert-signed into the keystore;
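The resulting keystore can then be inspected to verify that both the CA certificate and the signed server certificate are present:

```shell
keytool -keystore server.keystore.jks -list -v -storepass $SRVSTOREPWD
```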

1.3 Setting up a certificate on the Client

  1. download the public CA cert file (ca-cert) on the client
  2. set a password for the truststore on the client
    export CLSTOREPWD=clientpassword
  3. create a truststore for the client and import the public CA certificate
    keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert -storepass $CLSTOREPWD -keypass $CLSTOREPWD -noprompt
  4. 2-way authentication (certificate signed on client as well)
    1. generate client certificate and include in the keystore
      keytool -genkey -keyalg RSA -keystore client.keystore.jks -validity 365 -storepass $CLSTOREPWD -keypass $CLSTOREPWD -dname "CN=clientname" -alias entry-name-for-client -storetype pkcs12
    2. create a certificate signing request
      keytool -keystore client.keystore.jks -certreq -file client-cert-req -alias entry-name-for-client -storepass $CLSTOREPWD -keypass $CLSTOREPWD
    3. send the client-cert-req to the CA manager (or copy to where the CA is hosted) to request signing the client certificate
      openssl x509 -req -CA ca-cert -CAkey ca-key -in client-cert-req -out client-cert-signed -days 365 -CAcreateserial
    4. Copy back the client-cert-signed file
    5. import the CA public key in the key store
      keytool -keystore client.keystore.jks -alias CARoot -import -file ca-cert -storepass $CLSTOREPWD -keypass $CLSTOREPWD -noprompt
    6. import the client-cert-signed file in the key store
      keytool -keystore client.keystore.jks -import -file client-cert-signed -alias entry-name-for-client -storepass $CLSTOREPWD -keypass $CLSTOREPWD -noprompt

1.4 Notes

Using SSL implies a gain in security but a loss in performance, since servers use additional CPU and RAM to encrypt and decrypt packets.


2. SASL/Kerberos Authentication

The Simple Authentication and Security Layer (SASL) is a framework for authentication and data security in Internet protocols. It is decoupled from the application protocol it is used to authenticate, and it is generally complementary to SSL/TLS.

SASL in Kafka supports the following authentication mechanisms:

  • PLAIN - username and password, sent in clear (so it should only be used over TLS)
  • Salted Challenge Response Authentication Mechanism (SCRAM) - username, password and challenge; it uses PBKDF2 to derive a salted key from the plain-text password, so that the password itself is never transmitted or stored.
  • Generic Security Service Application Program Interface (GSS-API) - a standard API that does not itself provide any security, but delegates to an underlying mechanism. The most prominent implementation is Kerberos.
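The salted-password idea behind SCRAM (RFC 5802) can be illustrated with Python's standard library; the password, salt and iteration count below are illustrative:

```python
import hashlib
import hmac

# Sketch of the SCRAM salted-password derivation: the plain-text password
# is never stored; only values derived from it via PBKDF2 are.
password = b"s3cr3t"
salt = b"randomly-generated-salt"   # illustrative; real salts are random bytes
iterations = 4096

salted_password = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
client_key = hmac.new(salted_password, b"Client Key", hashlib.sha256).digest()
stored_key = hashlib.sha256(client_key).digest()   # what the server stores
```

During the challenge-response exchange the client proves knowledge of the derived key without ever transmitting the password itself.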

Kerberos is an authentication protocol, initially developed by MIT, whose main concept is the presence of service principals (users and actual services), which can be granted access to resources by means of tickets. Both principals and tickets are managed by an external key distribution center (KDC).

In summary:

  • a client is given either a username/password pair (generally for manual authentication) or a keytab file to perform the authentication;
  • the client uses either of the provided credentials to request a ticket from the KDC; the ticket is associated with a lease, meaning that it expires and can be renewed by the client using the keytab file;

 

3. Kafka

3.1 Setting up the SSL certificate on the Broker 

The settings for the broker are available in the server.properties file. Specifically, we add an additional listener "SSL://0.0.0.0:9093" for the TLS/SSL endpoint to listeners, and the corresponding public endpoint to advertised.listeners.
 
The following settings are then set for the SSL protocol:
  • ssl.keystore.location the location of server.keystore.jks
  • ssl.keystore.password the value of $SRVSTOREPWD
  • ssl.key.password the value of $SRVSTOREPWD
  • ssl.truststore.location the location of server.truststore.jks
  • ssl.truststore.password the value of $SRVSTOREPWD

 For 2-way authentication, we also set "ssl.client.auth=required".
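Putting these settings together, the SSL part of server.properties might look like the following sketch (paths, hostname and password are illustrative):

```
listeners=PLAINTEXT://0.0.0.0:9092,SSL://0.0.0.0:9093
advertised.listeners=PLAINTEXT://host.com:9092,SSL://host.com:9093
ssl.keystore.location=/path/to/server.keystore.jks
ssl.keystore.password=serverpassword
ssl.key.password=serverpassword
ssl.truststore.location=/path/to/server.truststore.jks
ssl.truststore.password=serverpassword
# only for 2-way authentication:
ssl.client.auth=required
```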


3.2 Setting up the SSL certificate for the Client 

The settings for the client are generally set in the client.properties file (at least for the console consumer and the Kafka Connect clients). In detail, the following properties are to be set:

  • security.protocol=SSL
  • ssl.truststore.location the location of client.truststore.jks
  • ssl.truststore.password the value of $CLSTOREPWD

For 2-way authentication, we also add the following:

  • ssl.keystore.location the location of client.keystore.jks
  • ssl.keystore.password the value of $CLSTOREPWD
  • ssl.key.password the value of $CLSTOREPWD
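As a sketch, the corresponding client.properties would then read (paths and password illustrative):

```
security.protocol=SSL
ssl.truststore.location=/path/to/client.truststore.jks
ssl.truststore.password=clientpassword
# only for 2-way authentication:
ssl.keystore.location=/path/to/client.keystore.jks
ssl.keystore.password=clientpassword
ssl.key.password=clientpassword
```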

 

3.3 Setting up Kerberos

Assuming you already have a running Kerberos KDC service, the first step is to install a Kerberos client on each Kafka broker and set the /etc/krb5.conf file to point to the KDC location, then simply create a new principal for your user (defined as username@REALM):

kadmin -q "add_principal -randkey user@REALM"

Specifically, a principal is created for each Kafka broker. To restrict the usage of the principal, the username can be specified as "user/host":

kadmin -q "add_principal -randkey kafka/hostname@REALM"

A keytab file can now be created for the principal:

kadmin -q "xst -k kafka.keytab kafka/hostname@REALM"

The keytab can then be tested by attempting ticket creation:

kinit -kt kafka.keytab kafka/hostname@REALM

and its presence checked with the command klist.

Now the generated Kerberos credentials can be set for each broker, by modifying the config/server.properties file accordingly:

  • an additional listener of type "SASL_SSL://0.0.0.0:9094" is added to "listeners"; this means we are exposing the broker on 9094 with Kerberos authentication and SSL encryption
  • similarly, another endpoint is specified in advertised.listeners as "SASL_SSL://hostname:9094"
  • sasl.enabled.mechanisms=GSSAPI
  • sasl.kerberos.service.name=kafka
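As a sketch, the relevant server.properties lines would then read (hostname illustrative):

```
listeners=SSL://0.0.0.0:9093,SASL_SSL://0.0.0.0:9094
advertised.listeners=SSL://hostname:9093,SASL_SSL://hostname:9094
sasl.enabled.mechanisms=GSSAPI
sasl.kerberos.service.name=kafka
```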
A JAAS file (e.g. named kafka_jaas.conf) is created in the config folder to hold the Kerberos details for the brokers, such as

KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/path/to/kafka.keytab"
    principal="kafka/hostname@REALM";
};


The file can be referenced using the KAFKA_OPTS variable, which is passed directly to the JVM:

KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/kafka_jaas.conf" 

 
Similarly, at client side:
  • a new principal can be created for the client
  • for Java Kafka clients the same or a similar JAAS file can be used
    KafkaClient {
        com.sun.security.auth.module.Krb5LoginModule required
        useKeyTab=true
        storeKey=true
        keyTab="/path/to/kafka-client.keytab"
        principal="kafka-client/hostname@REALM";
    };
  • the JAAS file is then provided, depending on the selected client, either via KAFKA_OPTS (as previously) or directly as a JVM parameter
    • export KAFKA_OPTS="-Djava.security.auth.login.config=client_jaas.conf"
    • -Djava.security.auth.login.config=client_jaas.conf
  • A kafka_kerberos.properties file is then defined for the client to point to the SASL Kerberos configuration at client side
    • security.protocol=SASL_SSL
    • sasl.kerberos.service.name=kafka
    • ssl.truststore.location the location of client.truststore.jks
    • ssl.truststore.password the value of $CLSTOREPWD
  • the kafka_kerberos.properties can also directly contain the JAAS definition
    • sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true storeKey=true keyTab="/path/to/kafka-client.keytab" principal="kafka-client/hostname@REALM";
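A complete kafka_kerberos.properties might therefore look like the following sketch (paths and password illustrative):

```
security.protocol=SASL_SSL
sasl.kerberos.service.name=kafka
ssl.truststore.location=/path/to/client.truststore.jks
ssl.truststore.password=clientpassword
sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
  useKeyTab=true storeKey=true keyTab="/path/to/kafka-client.keytab" \
  principal="kafka-client/hostname@REALM";
```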
 

3.4 Setting up Kafka Access Control

With SASL/Kerberos configured we have enforced authentication, but we still miss authorization, i.e. access control. Specifically, Kafka supports access control lists (ACLs) at both topic and consumer-group level. ACLs are managed by super users and are stored in Zookeeper under /kafka-acl. Zookeeper also holds other sensitive metadata:
  • /brokers/topics for the list of topics
  • /brokers/ids for the list of brokers
  • /configs/topics for the configuration of topics

Therefore, a first step in securing the cluster is to restrict access to Zookeeper, so that only actual admins can create/edit/delete topics, consumer groups and ACLs. A simple possibility, although less secure, is to set network policies so that only broker nodes can access Zookeeper. Alternatively, Zookeeper authentication supports: i) world (no authentication), ii) digest (username and password) and iii) SASL. For the latter we need to create a specific principal of type "zookeeper/host@REALM" and a keytab (e.g. zookeeper.keytab). Similar to the broker case, we can create a JAAS file as:

Server {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="zookeeper.keytab"
    principal="zookeeper/host@REALM";
};

and set the KAFKA_OPTS variable to pass the configuration to the JVM on which Zookeeper is running, as:

KAFKA_OPTS=-Djava.security.auth.login.config=zookeeper_jaas.conf
 
A corresponding Client section is also appended to the broker's kafka_jaas.conf, so that the broker can authenticate to Zookeeper:
Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="zookeeper.keytab"
    principal="zookeeper/host@REALM";
};
 
The zookeeper.properties file is then edited on the broker, by adding:
  • authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
  • jaasLoginRenew=3600000
  • kerberos.removeHostFromPrincipal=true
  • kerberos.removeRealmFromPrincipal=true

The last two allow for re-using the same Kerberos principal name on all brokers, instead of using a different principal for each broker, which would be time-intensive. See the Zookeeper documentation for further information.

Restart brokers and Zookeeper and you are done.


3.4.1 Enabling ACL support

ACL support can be enabled in the server.properties file, by adding:

  • authorizer.class.name=kafka.security.authorizer.AclAuthorizer
  • super.users=User:kafka;User:<username>;
  • allow.everyone.if.no.acl.found=false (i.e., whitelist the users via ACLs)
  • security.inter.broker.protocol=SASL_SSL (i.e., enforce Kerberos/SSL also between brokers)
  • zookeeper.set.acl=true (i.e. enforces all new topics to have an ACL set for the Zookeeper principal used for the authentication between the brokers and Zookeeper)

Then restart the broker service, as always.

 

3.4.2 Add/Edit/Remove ACLs

ACLs are part of the Kafka API and directly manageable using the kafka-acls CLI.

kafka-acls \
--bootstrap-server localhost:9092 \
--command-config adminclient-configs.conf \
--add \
--allow-principal 'User:*' \
--allow-host 10.0.0.1 --allow-host 10.0.0.11 \
--operation All \
--topic testTopic \
--group '*'

This specifies, respectively:

  • the list of kafka brokers
  • the file where the connection details are defined, such as
    security.protocol=SASL_PLAINTEXT
    sasl.mechanism=PLAIN
    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required
    username="alice"
    password="s3cr3t";
  • the action to perform on the ACL, --add to add and --remove to remove it
  • the Kerberos principal(s) to allow for the ACL
  • the list of hosts from which access is allowed (given that the keytab may be moved around and Kerberos authentication bypassed)
  • the operations allowed, i.e. all or listed such as "--operation read --operation write"
  • the topic(s) on which the user has access
  • the consumer groups on which the user has access

Please visit the official Kafka documentation for an exhaustive explanation.

