As a follow-up to
HADOOP-15820, I was doing more testing of ZSTD compression and still encountered segfaults in the JVM when running HBase, even after that fix.
I took a deeper look and realized there is still another bug: the native code ends up calling setInt() for the "remaining" field on the ZStandardDecompressor class object itself (rather than on an instance of that class), because the Java stub for the native init() function is declared static. That write corrupts memory and leads to a crash during GC later.
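To illustrate the mechanism (a minimal sketch with assumed names, not the actual Hadoop source): when a Java native method is declared static, JNI passes the jclass, not an instance jobject, as the second argument to the C stub, so writing an instance field through that argument scribbles over the class object.

    #include <jni.h>

    /*
     * Java side (sketch): the stub is declared static, e.g.
     *   private static native void init(long stream);
     */
    JNIEXPORT void JNICALL
    Java_org_apache_hadoop_io_compress_zstd_ZStandardDecompressor_init(
            JNIEnv *env, jclass clazz, jlong stream)
    {
        /* ... set up the zstd decompression stream ... */

        /* BUG (illustrative): clazz refers to the class object, not an
         * instance, so this write lands inside the Class object's memory
         * and corrupts it; the damage typically surfaces later, e.g. as a
         * segfault during GC. */
        jfieldID remaining = (*env)->GetFieldID(env, clazz, "remaining", "I");
        (*env)->SetIntField(env, clazz, remaining, 0);
    }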
Initially I thought the fix would be to change the Java init() method to be non-static, but the "remaining" setInt() call turns out to be unnecessary anyway: in ZStandardDecompressor.java, reset() sets "remaining" to 0 right after calling the native init(). So the Java init() method does not have to become an instance method; it can stay static, and we can simply remove the "remaining" setInt() call from the native init() altogether.
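Put differently, the native side can simply stop touching "remaining" altogether; a rough sketch under the same assumed names (not the actual patch):

    #include <jni.h>

    /* Fixed sketch: init() stays a static native, but no longer touches any
     * instance field, so the jclass second argument is no longer a hazard. */
    JNIEXPORT void JNICALL
    Java_org_apache_hadoop_io_compress_zstd_ZStandardDecompressor_init(
            JNIEnv *env, jclass clazz, jlong stream)
    {
        /* ... set up the zstd decompression stream only ... */
        /* no SetIntField on "remaining" here */
    }

    /*
     * Java side (sketch of why the native write was redundant):
     *
     *   public void reset() {
     *     init(stream);    // static native call above
     *     remaining = 0;   // field is zeroed on the instance, in Java
     *     ...
     *   }
     */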
Furthermore, we should probably clean up the class/instance distinction in the C file, since that ambiguity is what led to this confusion in the first place. There are other methods where the distinction is incorrect or ambiguous; we should fix those as well to prevent this from happening again.
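As a convention, naming the second JNI parameter by what it really is would make the distinction hard to miss; Example here is a hypothetical class, just to show the two signatures:

    #include <jni.h>

    /* Static native method: the second parameter is the class. It must not
     * be handed to Get<Type>Field / Set<Type>Field as if it were an instance. */
    JNIEXPORT void JNICALL
    Java_Example_someStaticMethod(JNIEnv *env, jclass clazz)
    {
    }

    /* Instance native method: the second parameter is the receiver object,
     * and instance fields are read and written through it. */
    JNIEXPORT void JNICALL
    Java_Example_someInstanceMethod(JNIEnv *env, jobject this_obj)
    {
        /* e.g. (*env)->SetIntField(env, this_obj, someFieldId, 0); */
    }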
I talked to Jason Darrell Lowe, who further pointed out that ZStandardCompressor has similar problems and needs to be fixed as well.