Details
- Type: Improvement
- Status: Patch Available
- Priority: Major
- Resolution: Unresolved
- Affects Version/s: 6.2.1
- Fix Version/s: None
- Component/s: None
Description
My backup to cloud storage (Google Cloud Storage in this case, but I think this is a general problem) takes 8 minutes ... the restore of the same core takes hours. The restore loop in RestoreCore is serial, so there is no way to parallelize the expensive part of this operation (the I/O from the remote cloud storage service). We need the option to parallelize the download (like distcp).
Also, I tried downloading the same directory using gsutil and it was very fast, like 2 minutes. So I know it's not the pipe that's limiting perf here.
Here's a very rough patch that does the parallelization. We may also want to consider a two-step approach: 1) download in parallel to a temp dir, 2) perform all of the checksum validation against the local temp dir. That would save round trips to the remote cloud storage.
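To illustrate the idea (a minimal sketch only, not the attached patch): the serial per-file copy loop can be replaced by tasks submitted to a fixed-size thread pool. The copyFileFromRemote method below is a hypothetical stand-in for the existing loop body, not a real Solr API, and the thread count would presumably need to be configurable.

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch only: parallelize the per-file copy loop in RestoreCore.
public class ParallelRestoreSketch {

  public void restoreFiles(List<String> fileNames, int nThreads) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(nThreads);
    try {
      List<Future<?>> pending = new ArrayList<>();
      for (String fileName : fileNames) {
        // Submit each copy instead of running it inline, so several
        // downloads from the remote repository are in flight at once.
        pending.add(pool.submit(() -> copyFileFromRemote(fileName)));
      }
      // Wait for every copy; Future.get() rethrows any copy failure.
      for (Future<?> f : pending) {
        f.get();
      }
    } finally {
      pool.shutdown();
    }
  }

  // Hypothetical stand-in for the existing serial loop body: open the
  // file in the BackupRepository, stream it to the local index
  // directory, and verify its checksum. Not a real Solr API.
  private void copyFileFromRemote(String fileName) {
    // ... existing copy + checksum logic ...
  }
}
{code}

The same pool would also serve the two-step variant: step 1 submits the downloads into a temp dir, and step 2 runs all the checksum validation against the local temp dir once everything has landed.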
Attachments
Issue Links
- is related to:
  - SOLR-7284 HdfsUpdateLog is using hdfs FileSystem.get without turning off the cache. (Closed)
  - SOLR-7286 Using hdfs FileSystem.newInstance does not guarantee a new instance. (Closed)
  - SOLR-9952 S3BackupRepository (Closed)
  - SOLR-11473 Make HDFSDirectoryFactory support other prefixes (besides hdfs:/) (Closed)
  - SOLR-13330 Improve HDFS tests (Closed)
- relates to:
  - SOLR-13587 Close BackupRepository after every usage (Patch Available)