Type: New Feature
Affects Version/s: None
Fix Version/s: None
Hadoop as a Service (HaaS) provides a standards-based web services interface that layers on top of Hadoop on Demand and allows Hadoop jobs to be submitted via popular schedulers, such as Sun Grid Engine (SGE), Platform LSF, and Microsoft HPC Server 2008, to local or remote Hadoop clusters. This allows multiple Hadoop clusters within an organization to be shared efficiently, and provides flexibility by allowing remote Hadoop clusters, offered as Cloud services, to be used for experimentation and burst capacity. HaaS hides complexity, allowing users to submit many types of compute- or data-intensive work via a single scheduler without needing to know where the work will actually run. Additionally, a standards-based front-end to Hadoop means users can choose among HaaS providers without being locked in through proprietary interfaces such as Amazon's map/reduce service.
Our HaaS implementation uses the OGF High Performance Computing Basic Profile (HPC-BP) standard to define interoperable job submission descriptions and management interfaces to Hadoop, and uses Hadoop on Demand to provision capacity. It also supports file stage-in/out with protocols such as FTP, SCP, and GridFTP.
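As a purely illustrative sketch of what such a submission description could look like (the job name, jar, arguments, and stage-in URI below are assumptions, not taken from the actual implementation), an HPC-BP job is expressed in JSDL with the HPC Profile Application extension; a Hadoop wordcount run with an FTP stage-in might resemble:

```xml
<jsdl:JobDefinition
    xmlns:jsdl="http://schemas.ggf.org/jsdl/2005/11/jsdl"
    xmlns:jsdl-hpcpa="http://schemas.ggf.org/jsdl/2006/07/jsdl-hpcpa">
  <jsdl:JobDescription>
    <jsdl:JobIdentification>
      <jsdl:JobName>wordcount</jsdl:JobName>
    </jsdl:JobIdentification>
    <jsdl:Application>
      <jsdl-hpcpa:HPCProfileApplication>
        <!-- Hypothetical executable and arguments for a Hadoop example job -->
        <jsdl-hpcpa:Executable>bin/hadoop</jsdl-hpcpa:Executable>
        <jsdl-hpcpa:Argument>jar</jsdl-hpcpa:Argument>
        <jsdl-hpcpa:Argument>hadoop-examples.jar</jsdl-hpcpa:Argument>
        <jsdl-hpcpa:Argument>wordcount</jsdl-hpcpa:Argument>
        <jsdl-hpcpa:Argument>input</jsdl-hpcpa:Argument>
        <jsdl-hpcpa:Argument>output</jsdl-hpcpa:Argument>
      </jsdl-hpcpa:HPCProfileApplication>
    </jsdl:Application>
    <!-- Stage-in of input data over FTP; SCP or GridFTP URIs work the same way -->
    <jsdl:DataStaging>
      <jsdl:FileName>input/data.txt</jsdl:FileName>
      <jsdl:CreationFlag>overwrite</jsdl:CreationFlag>
      <jsdl:Source>
        <jsdl:URI>ftp://example.org/data/data.txt</jsdl:URI>
      </jsdl:Source>
    </jsdl:DataStaging>
  </jsdl:JobDescription>
</jsdl:JobDefinition>
```

Because the description is standard JSDL, the same document could in principle be handed to any HPC-BP-compliant service, which is what avoids provider lock-in.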
Our HaaS implementation also provides a suite of RESTful interfaces compliant with HPC-BP.
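The REST endpoints themselves are not specified in this description; as a hypothetical sketch only (the host, path, and response headers below are assumptions, not the actual interface), submitting a JSDL document over such a RESTful front-end might look like:

```
POST /haas/jobs HTTP/1.1
Host: haas.example.org
Content-Type: application/xml

<jsdl:JobDefinition ...> ... </jsdl:JobDefinition>

HTTP/1.1 201 Created
Location: /haas/jobs/42
```

Subsequent GET or DELETE requests on the returned job resource would then map onto the HPC-BP job status and termination operations.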
||Transition||Time In Source Status||Execution Times||Last Executer||Last Execution Date||
| |1663d 19h 8m|1|Allen Wittenauer|29/Jul/14 23:01|
|Status|Open [ 1 ]|Resolved [ 5 ]|
|Resolution|Incomplete [ 4 ]|