That is good, but I was expecting the distance from average (128KB here) to be less than the chunk size (16KB), which is clearly not the case. Is there anything in the dataset that could explain why chunk sizes vary so much? Or maybe we should just decrease the block size, or the average is computed incorrectly...
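To make the comparison concrete, here is a minimal sketch of how I would measure that distance (the startOffsets array is hypothetical; this is not the actual index code):

```java
// Minimal sketch, assuming startOffsets holds one start offset per chunk.
// Measures how far each chunk's start strays from the line predicted by
// the average chunk size.
static long maxDistanceFromAverage(long[] startOffsets) {
  int n = startOffsets.length;
  if (n < 2) return 0;
  long avg = (startOffsets[n - 1] - startOffsets[0]) / (n - 1);
  long maxDelta = 0;
  for (int i = 0; i < n; i++) {
    long expected = startOffsets[0] + i * avg;
    maxDelta = Math.max(maxDelta, Math.abs(startOffsets[i] - expected));
  }
  return maxDelta; // 128KB here, versus a 16KB chunk size
}
```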
Probably; I bet rows from the same country, and even from the same province within a country, are typically grouped together?
Though before this JIRA issue, I ran experiments randomizing the dataset with sort -r, and it didn't make much difference...
In any case, you can get it from http://download.geonames.org/export/dump/allCountries.zip
It's UTF-8, and you can parse it with split("\t").
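For reference, a minimal sketch of reading it (the file name inside the zip and the column meanings follow the geonames readme; treat them as assumptions):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class GeonamesReader {
  public static void main(String[] args) throws IOException {
    // allCountries.txt is the file extracted from allCountries.zip
    try (BufferedReader reader = Files.newBufferedReader(
        Paths.get("allCountries.txt"), StandardCharsets.UTF_8)) {
      String line;
      while ((line = reader.readLine()) != null) {
        // -1 keeps trailing empty fields
        String[] fields = line.split("\t", -1);
        // fields[0] = geonameid, fields[1] = name, ... (per the geonames readme)
      }
    }
  }
}
```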
Good question. Encoding deltas currently requires 14 or 15 bits per value (because a delta can grow a little larger than the chunk size, which is 2^14), so it is still a little more compact, and I think it is less prone to worst cases. There is some overhead at read time to build the packed ints array instead of just deserializing it, but I think this is negligible. If we manage to make bpvs smaller than 14 on "standard" datasets, then I think it makes sense.
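For illustration, a minimal sketch (plain Java, not the actual packed-ints code) of how the required bits per value falls out of the deltas from the average line; deltas can be negative, hence the zigzag step:

```java
// Minimal sketch: bits per value needed to store the deltas between actual
// chunk start offsets and the average line (hypothetical helper).
static int bitsPerValue(long[] startOffsets) {
  int n = startOffsets.length;
  if (n < 2) return 0;
  long avg = (startOffsets[n - 1] - startOffsets[0]) / (n - 1);
  long maxZigzag = 0;
  for (int i = 0; i < n; i++) {
    long delta = startOffsets[i] - (startOffsets[0] + i * avg);
    long zigzag = (delta << 1) ^ (delta >> 63); // negative deltas need a sign bit
    maxZigzag = Math.max(maxZigzag, zigzag);
  }
  // Deltas can slightly exceed the 2^14 chunk size, hence 14 or 15 bits.
  return 64 - Long.numberOfLeadingZeros(maxZigzag);
}
```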
Well, I wasn't really thinking about shaving a few bits on disk... if we want that, we can LZ4 this "metadata stuff" too (just kidding!).
I was just thinking of simpler code in the reader.