It costs about 2 milliseconds on my desktop to decode a strip group with 6 blocks, each block being 64 KB. This decoding time depends mainly on how fast the CPU is. My desktop has an "Intel(R) Core(TM) i5-4460 CPU @ 3.20GHz" with 4 cores, which is not a leading-edge model. Given that CPUs keep getting faster, I don't think it is safe to use millisecond granularity to record a single decoding time. We can choose between nanoseconds and microseconds. I would prefer nanoseconds for one reason: the value can be obtained directly from System.nanoTime(), whereas microseconds would require an extra division by 1000 on every sample, which is not good from a performance point of view. And a long can hold nanosecond values spanning hundreds of years (2^63 - 1 ns is roughly 292 years), so overflow is not a practical concern.
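
As a minimal sketch of the timing approach, assuming a placeholder decode routine (decodeStripGroup below is hypothetical, not the real decoder; only the System.nanoTime() usage reflects the proposal):

    import java.util.concurrent.TimeUnit;

    public class DecodeTiming {

        // Records one decoding pass in nanoseconds; no per-sample
        // division is needed, unlike a microsecond-granularity counter.
        static long timeDecodeNanos(Runnable decodeStripGroup) {
            long start = System.nanoTime();
            decodeStripGroup.run();
            return System.nanoTime() - start;   // elapsed nanoseconds
        }

        public static void main(String[] args) {
            long elapsedNs = timeDecodeNanos(() -> {
                // hypothetical: decode a strip group of 6 blocks, 64 KB each
            });
            // Convert only at reporting time, off the hot path.
            System.out.println("decode took " + elapsedNs + " ns ("
                    + TimeUnit.NANOSECONDS.toMicros(elapsedNs) + " us)");
        }
    }

The point of the design is that the hot path stores the raw nanoTime() difference as-is; any conversion to microseconds or milliseconds happens once, when the metric is reported, rather than on every decode.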