Details
- Type: Task
- Status: Resolved
- Priority: Critical
- Resolution: Fixed
Description
Following the move to the ci-hadoop build servers we are now the biggest user of space. That is not a problem in and of itself, but the master node has run out of disk space twice now. As of this snapshot we are using 125GB of storage, while the next largest project is only using 20GB.
https://paste.apache.org/kyrds
We should review our builds for any issues and come up with expectations for what our steady-state disk usage should look like (a rough Jenkinsfile sketch of these guidelines follows below):
- we are supposed to compress any test logs (this usually gets us 90-99% space savings)
- we are supposed to clean up workspaces when jobs are done
- we are supposed to keep a fixed window of prior builds (either by days or by number of runs)
If all of our jobs are already following these guidelines, another possibility is to push the artifacts we need over to nightlies.a.o. Barring that, we should formally request that ASF Infra set up a plugin for storing artifacts on S3.
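For reference, a minimal declarative-pipeline sketch of what those three guidelines can look like. This is not taken from our actual jobs: the stage contents, paths, and retention numbers are placeholders, and cleanWs() assumes the Workspace Cleanup plugin is available on the builders.
{code}
pipeline {
  agent any
  options {
    // Keep a fixed window of prior builds, both by count and by age.
    // The numbers here are illustrative, not our agreed policy.
    buildDiscarder(logRotator(numToKeepStr: '30', daysToKeepStr: '14', artifactNumToKeepStr: '10'))
  }
  stages {
    stage('Test') {
      steps {
        // Placeholder for the real build/test steps.
        sh 'mvn -B test'
      }
    }
  }
  post {
    always {
      // Compress test logs before archiving them; this is where the
      // 90-99% space savings comes from.
      sh 'find . -type f -name "*.log" -exec gzip -9 {} +'
      archiveArtifacts artifacts: '**/*.log.gz', allowEmptyArchive: true
      // Clean up the workspace once the job is done.
      cleanWs()
    }
  }
}
{code}
Freestyle jobs should have equivalent settings available ("Discard old builds" plus a post-build workspace cleanup step), so the same expectations can apply to jobs that are not yet pipelines.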
Attachments
Issue Links
- is related to: HBASE-24930 Reduce the number builds to keep for our jenkins jobs (Resolved)