Discovered this while using pyarrow with RecordBatch streams and Parquet. The issue can be reproduced as follows:
When writing record batch streams, if an array contains no nulls, Arrow stores a placeholder nullptr instead of the full bitmap of 1s. When that stream is deserialized, the null bitmap is therefore never populated and remains a nullptr. When you then attempt to write such a table via pyarrow.parquet, you end up here in the Parquet writer code, which attempts to Cast the dictionary to a non-dictionary representation. Since the null count isn't checked before creating a BitmapReader, the BitmapReader is constructed with a nullptr for bitmap_data but a non-zero length, which then segfaults in the constructor here because the bitmap is null.
A simple check of the null count before constructing the BitmapReader avoids the segfault.
Already filed PR 1896.