This quick article is a reference wrap-up on how to connect to ScyllaDB using Spark 2 when authentication and SSL are enforced for clients on the Scylla cluster.
We ran into multiple problems, all the more since we distribute our workload on a YARN cluster, so every worker node must have everything it needs to connect properly to Scylla.
We found very little help online, so I hope this will serve anyone facing similar issues (that's also why I copy/pasted the error messages here).
The authentication part is straightforward by itself and was not the source of our problems; SSL on the client side was.
- (py)spark: 2.1.0.cloudera2
- spark-cassandra-connector: datastax:spark-cassandra-connector:2.0.1-s_2.11
- python: 3.5.5
- java: 1.8.0_144
- scylladb: 2.1.5
SSL cipher setup
The Datastax spark-cassandra-connector driver uses the TLS_RSA_WITH_AES_256_CBC_SHA cipher by default, which the JVM does not support out of the box. This raises the following error when connecting to Scylla:
18/07/18 13:13:41 WARN channel.ChannelInitializer: Failed to initialize a channel. Closing: [id: 0x8d6f78a7] java.lang.IllegalArgumentException: Cannot support TLS_RSA_WITH_AES_256_CBC_SHA with currently installed providers
According to the connector's SSL documentation, we have two ciphers available: TLS_RSA_WITH_AES_128_CBC_SHA and TLS_RSA_WITH_AES_256_CBC_SHA.
We can get rid of the error by lowering the cipher to TLS_RSA_WITH_AES_128_CBC_SHA in the connector configuration.
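For example, a minimal sketch of that setting (the `spark.cassandra.connection.ssl.enabledAlgorithms` option comes from the spark-cassandra-connector reference; it can equally be set via a `.config()` call in code):

```shell
# Restrict the connector to the 128-bit cipher that the stock JVM supports.
pyspark2 --packages datastax:spark-cassandra-connector:2.0.1-s_2.11 \
    --conf spark.cassandra.connection.ssl.enabled=true \
    --conf spark.cassandra.connection.ssl.enabledAlgorithms=TLS_RSA_WITH_AES_128_CBC_SHA
```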
However, this is not really a good solution; instead we'd rather use the TLS_RSA_WITH_AES_256_CBC_SHA version. For this we need to follow Datastax's procedure and install the JCE Unlimited Strength policy files in the JVM.
Then we need to deploy the JCE security jars on all our client nodes. If you use YARN like us, this means deploying these jars to all your NodeManager nodes.
For example by hand:
# unzip jce_policy-8.zip
cp UnlimitedJCEPolicyJDK8/*.jar /opt/oracle-jdk-bin-1.8.0_144/jre/lib/security/
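Doing this by hand on every node does not scale. A rough sketch of automating it (the NodeManager hostnames and the JRE path below are assumptions for illustration; adjust to your cluster):

```shell
# Hypothetical list of YARN NodeManager hosts.
NODES="nm1.example.com nm2.example.com nm3.example.com"
# JRE security directory (matches a java 1.8.0_144 install).
JRE_SECURITY=/opt/oracle-jdk-bin-1.8.0_144/jre/lib/security

for node in $NODES; do
    # Copy the unlimited-strength policy jars over the default ones.
    scp UnlimitedJCEPolicyJDK8/*.jar "root@${node}:${JRE_SECURITY}/"
done
```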
Java trust store
When connecting, the clients need to be able to validate the Scylla cluster's self-signed CA. This is done by setting up a trustStore JKS file and providing it to the Spark connector configuration (note that you should protect this file with a password).
keyStore vs trustStore
In an SSL handshake, the purpose of the trustStore is to verify credentials, while the purpose of the keyStore is to provide them. The keyStore in Java stores the private key and the certificates corresponding to its public keys; it is required if you are an SSL server or if the server requires client authentication. The trustStore stores certificates from third parties or your own self-signed certificates; your application identifies and validates them using this trustStore.
The spark-cassandra-connector documentation provides options to handle both the keyStore and the trustStore.
When we did not use the trustStore option, we would get an obscure error when connecting to Scylla:
com.datastax.driver.core.exceptions.TransportException: [node/18.104.22.168:9042] Channel has been closed
After enabling DEBUG logging, we got a clearer error indicating a failure to validate the SSL certificate provided by the Scylla server node:
Caused by: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
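Turning on driver DEBUG logging can be done from Spark's log4j configuration. A sketch (the config path is typical for a Cloudera Spark 2 install and the logger category may vary with driver versions; both are assumptions):

```shell
# Surface the underlying SSL handshake failures from the Datastax driver
# by raising its log level in Spark's log4j.properties on the driver node.
echo "log4j.logger.com.datastax.driver.core=DEBUG" >> /etc/spark2/conf/log4j.properties
```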
setting up the trustStore JKS
You need to have the self-signed CA public certificate file, then issue the following command:
# keytool -importcert -file /usr/local/share/ca-certificates/MY_SELF_SIGNED_CA.crt -keystore COMPANY_TRUSTSTORE.jks -noprompt
Enter keystore password:
Re-enter new password:
Certificate was added to keystore
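You can sanity-check the import afterwards; keytool's -list flag prints the certificates stored in the file (the alias shown depends on what keytool assigned at import time):

```shell
# List the certificates present in the trustStore;
# you will be prompted for the keystore password.
keytool -list -keystore COMPANY_TRUSTSTORE.jks
```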
using the trustStore
Now you need to configure Spark to use the trustStore like this:
.config("spark.cassandra.connection.ssl.trustStore.password", "PASSWORD")\
.config("spark.cassandra.connection.ssl.trustStore.path", "COMPANY_TRUSTSTORE.jks")\
Spark SSL configuration example
This wraps up the SSL connection configuration used for spark.
This example uses pyspark2 and reads a table in Scylla from a YARN cluster:
$ pyspark2 --packages datastax:spark-cassandra-connector:2.0.1-s_2.11 --files COMPANY_TRUSTSTORE.jks
spark = SparkSession.builder.appName("scylla_app")\
    .config("spark.cassandra.auth.password", "test")\
    .config("spark.cassandra.auth.username", "test")\
    .config("spark.cassandra.connection.host", "node1,node2,node3")\
    .config("spark.cassandra.connection.ssl.clientAuth.enabled", True)\
    .config("spark.cassandra.connection.ssl.enabled", True)\
    .config("spark.cassandra.connection.ssl.trustStore.password", "PASSWORD")\
    .config("spark.cassandra.connection.ssl.trustStore.path", "COMPANY_TRUSTSTORE.jks")\
    .config("spark.cassandra.input.split.size_in_mb", 1)\
    .config("spark.yarn.queue", "scylla_queue")\
    .getOrCreate()
df = spark.read.format("org.apache.spark.sql.cassandra").options(table="my_table", keyspace="test").load()
df.show()