Affects Version/s: None
Fix Version/s: None
Operating System: All
Assume that from a Struts/Tomcat web-app I get a large file that I want to store in
a DB. To save permanent DB storage, I want to compress it. To stay portable, I would
rather do that in Java than on the DB side with proprietary commands.
In order not to have to load at least the entire compressed output into RAM,
e.g. by writing through java.util.zip.GZIPOutputStream or
java.util.zip.DeflaterOutputStream into a ByteArrayOutputStream and then
converting the result back into an InputStream for
PreparedStatement.setBinaryStream(int parameterIndex, InputStream stream, int
length), I envision an InputStream wrapper that compresses the data as it is read.
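The envisioned wrapper could look roughly like the following sketch: an InputStream that pulls raw bytes from an underlying stream and hands out deflated bytes. The class name, buffer size, and overall shape are illustrative assumptions, not a finished design.

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.Deflater;

// Hypothetical sketch of a compressing InputStream wrapper.
// Raw bytes are read from the wrapped stream and fed into a Deflater;
// read() callers receive the compressed (zlib) output.
class DeflatingInputStream extends InputStream {
    private final InputStream in;
    private final Deflater deflater = new Deflater();
    private final byte[] inBuf = new byte[4096]; // staging buffer for raw input
    private final byte[] single = new byte[1];

    DeflatingInputStream(InputStream in) {
        this.in = in;
    }

    @Override
    public int read() throws IOException {
        int n = read(single, 0, 1);
        return n == -1 ? -1 : single[0] & 0xFF;
    }

    @Override
    public int read(byte[] b, int off, int len) throws IOException {
        if (len == 0) return 0;
        if (deflater.finished()) return -1;
        int n;
        while ((n = deflater.deflate(b, off, len)) == 0) {
            if (deflater.finished()) return -1;
            if (deflater.needsInput()) {
                int r = in.read(inBuf);
                if (r == -1) {
                    deflater.finish(); // no more raw input; flush remaining output
                } else {
                    deflater.setInput(inBuf, 0, r);
                }
            }
        }
        return n;
    }

    @Override
    public void close() throws IOException {
        deflater.end();
        in.close();
    }
}
```

For the record, Java 6 later added java.util.zip.DeflaterInputStream to the JDK, which covers this case; the sketch above shows what such a class has to do internally.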
I started implementing it myself and got it working at least for single-byte
reads, but it is far from production-ready. Perhaps I should extend
java.io.PipedInputStream rather than attempt my own efficient buffer management
with all the synchronized blocking, etc.
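The PipedInputStream alternative mentioned above can be sketched as follows: a pump thread copies the source through a GZIPOutputStream into a PipedOutputStream, and the caller reads the compressed bytes from the connected PipedInputStream. The helper name and buffer size are assumptions for illustration.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;
import java.util.zip.GZIPOutputStream;

// Hypothetical sketch of the piped approach: compression happens in a
// background thread, so only the pipe's buffer is held in RAM at a time.
class CompressingPipe {
    static InputStream gzip(final InputStream source) throws IOException {
        PipedInputStream sink = new PipedInputStream();
        final PipedOutputStream pipe = new PipedOutputStream(sink);
        Thread pump = new Thread(new Runnable() {
            public void run() {
                try {
                    OutputStream gz = new GZIPOutputStream(pipe);
                    byte[] buf = new byte[4096];
                    int n;
                    while ((n = source.read(buf)) != -1) {
                        gz.write(buf, 0, n);
                    }
                    gz.close(); // writes the GZIP trailer and closes the pipe
                } catch (IOException ignored) {
                    // a real implementation would propagate this to the reader
                }
            }
        });
        pump.start();
        return sink;
    }
}
```

The drawback, as noted, is exactly the thread and synchronization handling: errors in the pump thread have to be surfaced to the reading side somehow.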
==> shouldn't something like this go into org.apache.commons.compress.zip?
There are certainly other applications for such a class.
P.S.: Alternatively, there is "OutputStream java.sql.Blob.setBinaryStream(long
pos)", but in MySQL, for example, that "updatable BLOB that can update in-place"
is not yet implemented. Using a proprietary SQL COMPRESS() as per
http://dev.mysql.com/doc/mysql/en/string-functions.html is probably an even less
portable option.
|Field||Original Value||New Value|
|Component/s||Compress [ 12311183 ]|
|Resolution||Fixed [ 1 ]|
|Status||Open [ 1 ]||Closed [ 6 ]|
|Assignee||Torsten Curdt [ tcurdt ]|

|Transition||Time In Source Status||Execution Times||Last Executer||Last Execution Date|
|1324d 11h 10m||1||Torsten Curdt||07/Jan/09 13:39|