Description
When reading a table whose columns are defined as NOT NULL in SQL Server, that constraint is not retained by spark.read: all columns show as nullable = true in the resulting schema.
Is there a way to change this behaviour so that the nullability setting from the source is retained?
See here for more info: https://github.com/microsoft/sql-spark-connector/issues/121
Example code from Databricks:

tableName = "dbo.MyTable"

# Parentheses let the chained reader calls span multiple lines.
df = (spark.read
    .format("com.microsoft.sqlserver.jdbc.spark")
    .option("url", myJdbcUrl)
    .option("accessToken", accessToken)
    .option("dbTable", tableName)
    .load())

# Every field prints as nullable = true, even for NOT NULL source columns.
df.printSchema()
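
A possible workaround, until the connector honours source nullability, is to re-apply a corrected schema after the read. This is a minimal sketch, not part of the connector's API: it assumes you already know which columns are NOT NULL in the source (the not_null_columns set below is hypothetical; in practice you could query INFORMATION_SCHEMA.COLUMNS for it).

from pyspark.sql.types import StructType, StructField

# Hypothetical: columns known to be NOT NULL in the source table.
not_null_columns = {"Id", "CreatedDate"}

# Rebuild the schema, flipping nullable to False for those columns.
enforced_schema = StructType([
    StructField(f.name, f.dataType, f.name not in not_null_columns)
    for f in df.schema.fields
])

# Re-apply the schema over the same rows. Note this only asserts
# nullability in the schema; it does not validate the data itself.
df_not_null = spark.createDataFrame(df.rdd, enforced_schema)
df_not_null.printSchema()

Going through df.rdd discards the optimized plan for the read, so this is best done once, immediately after loading.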