Type: Improvement
Status: Resolved
Priority: Major
Resolution: Fixed
Affects Version/s: None
Fix Version/s: None
Component/s: None
Labels: None
Right now we choose arbitrary instance sizes that we know to work well. With this approach we have to repeat the effort for every cloud provider, and the services' actual resource requirements aren't visible anywhere. I suggest we switch to a dynamically calculated system.

e.g. instead of hard-coding m1.small, compute minRam( (-Xmx heap size + 25% overhead) * number of JVMs + OS overhead )
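A minimal sketch of the proposed calculation. The class and method names here are illustrative assumptions, not part of the Whirr API; the 25% per-JVM overhead and the OS overhead figure come straight from the formula above.

```java
// Hypothetical sketch of the dynamic sizing formula (not the Whirr API).
public class MinRamCalculator {

    /** Per-JVM requirement: the -Xmx heap size plus 25% non-heap overhead. */
    static int perJvmMb(int heapMb) {
        return heapMb + heapMb / 4;
    }

    /**
     * Minimum RAM for a node: all JVMs at their per-JVM requirement,
     * plus a fixed allowance for the operating system.
     */
    static int minRamMb(int heapMb, int jvmCount, int osOverheadMb) {
        return perJvmMb(heapMb) * jvmCount + osOverheadMb;
    }
}
```

For example, two JVMs with -Xmx1024m each and a 512 MB OS allowance would need (1024 + 256) * 2 + 512 = 3072 MB, and the provisioner would then pick the smallest instance type offering at least that much RAM.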
Is related to: WHIRR-282 Set number of Hadoop slots based on hardware (Resolved)