Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Not A Problem
- Affects Version/s: 0.7.0
- Fix Version/s: None
- Component/s: None
Description
This morning, I started a Spark Standalone cluster on EC2 using 50 m1.medium instances. When I tried to rebuild Spark ~5.5 hours later, the build failed because the master ran out of disk space. It looks like /var/lib/ganglia/rrds/spark grew to 4.2 gigabytes, using over half of the AMI's EBS disk space.
Is there a default setting we can change to place a hard limit on the total disk space Ganglia uses, to prevent this from happening?
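For reference, Ganglia does not expose a single "max disk" knob; RRD files are fixed-size once created, so total usage is roughly (per-file size) x (metrics x hosts). One common way to shrink the per-file size is to define coarser round-robin archives via the `RRAs` directive in `gmetad.conf`. A hedged sketch (the exact row counts here are illustrative, not Spark's shipped defaults):

```
# /etc/ganglia/gmetad.conf
# Each RRA is "RRA:AVERAGE:xff:steps:rows"; fewer/shorter archives
# mean smaller RRD files. These values are an example, not a recommendation.
RRAs "RRA:AVERAGE:0.5:1:256" "RRA:AVERAGE:0.5:24:300" "RRA:AVERAGE:0.5:168:300"
```

Note that changing `RRAs` only affects newly created RRD files; existing files under `/var/lib/ganglia/rrds` keep their original size until deleted or resized with `rrdtool`.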