When calling SparkSession.createDataFrame with a pandas DataFrame as input (with Arrow disabled), a pandas SettingWithCopyWarning is raised when the input DataFrame is a slice:
In [1]: import numpy as np
   ...: import pandas as pd
   ...: pdf = pd.DataFrame(np.random.rand(100, 2))

In [2]: df = spark.createDataFrame(pdf[:10])
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
pdf[column] = s
This doesn't appear to cause incorrect results in this case, but it might in others. The warning could be avoided by assigning the series back to the DataFrame only when the timestamp field was actually modified.
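A minimal sketch of that idea, outside of Spark (the timestamp conversion is stubbed out, and `convert_columns` / `convert_columns_fixed` are hypothetical names, not the actual PySpark internals):

```python
import warnings

import numpy as np
import pandas as pd

def convert_columns(pdf):
    # Mimics the current pattern: every column is unconditionally
    # assigned back, which can warn when pdf is a slice of another frame.
    for column in pdf.columns:
        s = pdf[column]  # a real implementation would convert timestamps here
        pdf[column] = s

def convert_columns_fixed(pdf):
    # Proposed behavior: assign back only if the series was modified.
    for column in pdf.columns:
        s = pdf[column]
        converted = s  # no timestamp conversion needed for float columns
        if converted is not s:
            pdf[column] = converted

pdf = pd.DataFrame(np.random.rand(100, 2))

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    convert_columns_fixed(pdf[:10])

# The fixed version never assigns an unmodified column back,
# so no "copy of a slice" warning is recorded here.
print(any("copy of a slice" in str(w.message) for w in caught))
```

On the user side, the warning can also be silenced by passing an explicit copy, e.g. spark.createDataFrame(pdf[:10].copy()).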