In Spark DataFrames (and in Pandas as well), the correct way to construct a conjunctive expression is to use the bitwise and operator "&", i.e.: "(x > 5) & (y > 6)".
However, a lot of users assume they should use the Python "and" keyword, i.e. "x > 5 and y > 6". Python's "and" cannot be overloaded: it evaluates the truthiness of the left operand, and since the Column object "x > 5" is truthy by default, the whole expression silently reduces to just "y > 6". This is confusing and error-prone.
We should override __bool__ (and __nonzero__ for Python 2) on Column to throw an exception whenever a Column expression is coerced to a boolean, which is what happens when users apply "and", "or", or "not" to Column expressions.
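A minimal sketch of the idea, using a simplified stand-in Column class (hypothetical, not the actual PySpark implementation) to show both the silent-truthiness bug and how raising from __bool__ surfaces it:

```python
class Column(object):
    """Simplified stand-in for pyspark.sql.Column (hypothetical)."""

    def __init__(self, expr):
        self.expr = expr

    def __gt__(self, other):
        # Build a symbolic comparison expression instead of a bool.
        return Column("(%s > %s)" % (self.expr, other))

    def __and__(self, other):
        # "&" is overloadable, so it can build a conjunctive expression.
        return Column("(%s AND %s)" % (self.expr, other.expr))

    def __bool__(self):
        # "and"/"or"/"not" coerce the operand to bool, which lands here.
        raise ValueError(
            "Cannot convert column into bool: use '&' for 'and', "
            "'|' for 'or', '~' for 'not' when building Column expressions.")

    __nonzero__ = __bool__  # Python 2 spelling of __bool__


x, y = Column("x"), Column("y")

# Correct: "&" builds the intended conjunction.
print(((x > 5) & (y > 6)).expr)  # ((x > 5) AND (y > 6))

# Incorrect: without the __bool__ override, "and" would silently
# return just (y > 6); with it, the mistake raises immediately.
try:
    x > 5 and y > 6
except ValueError as e:
    print("raised: %s" % e)
```

The override turns a silent wrong answer into an immediate, explanatory error, which is the behavior this issue proposes.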
Background: see this blog post http://www.nodalpoint.com/unexpected-behavior-of-spark-dataframe-filter-method/
- is duplicated by
SPARK-8573 For PySpark's DataFrame API, we need to throw exceptions when users try to use and/or/not
- links to