Actually, the Hive CLI does not have any notion of credentials today.
I would say we move the credentials work out to a separate JIRA and not include the notions of credentials and a SessionManager as part of this one; we can spec that out in the separate JIRA. Once that piece is in place we will have to rework the CLI, the JDBC driver, and HWI anyway. Should we get the initial web UI in first and then fix it and the other clients to use the SessionManager as part of another JIRA?
If you are not reading the entire result set in one shot and are instead making repeated fetch calls, you have the problem of how to age out sessions. An easy way is to have a session timeout after which the session is aged out. I am not sure FIFO is going to be helpful unless the user is scrolling up and down the result data a lot and the client-side buffers are not big enough to deal with that. I would say that for now we keep it simple and just read the results from the temporary file directly (Hive already produces a temporary directory in HDFS for you, which is held on to until close is called on the driver handle; that close could be tied to an explicit close done by the user or to an aged-out session) and let the client application deal with any buffering.
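To make the timeout-based aging concrete, here is a minimal sketch (all names are hypothetical, not from any existing Hive code): each fetch call refreshes the session's last-access time, and a periodic reaper drops sessions that have been idle longer than the timeout, which is where the driver handle would get closed.

```python
import time

class SessionReaper:
    """Ages out sessions that have been idle past a timeout."""

    def __init__(self, timeout_secs, clock=time.time):
        self.timeout_secs = timeout_secs
        self.clock = clock            # injectable clock, handy for testing
        self.last_access = {}         # session id -> last access time

    def touch(self, session_id):
        """Record activity (e.g. a fetch call) for a session."""
        self.last_access[session_id] = self.clock()

    def reap(self):
        """Remove sessions idle longer than the timeout; return their ids."""
        now = self.clock()
        expired = [sid for sid, t in self.last_access.items()
                   if now - t > self.timeout_secs]
        for sid in expired:
            # A real implementation would also close the driver handle here,
            # releasing the temporary HDFS directory.
            del self.last_access[sid]
        return expired
```

The clock is injected so the aging policy can be tested without sleeping; the reaper itself would be driven by a timer thread in a real server.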
Internally we punted on this altogether by allowing the user to download the data into a local file or spreadsheet, so we did not have to maintain any cursors inside the hipal application. Basically, in hipal a query like

SELECT people.* FROM people

is rewritten to
CREATE TABLE tmp_hwi_<QUERYID> ();
ALTER TABLE tmp_hwi_<QUERYID> SET TBLPROPERTIES ('RETENTION'='7');
INSERT OVERWRITE TABLE tmp_hwi_<QUERYID>
SELECT people.* FROM people;
with retention set to 7 so that a cleanup tool can clean up any of these tables that are more than 7 days old.
Creating a temporary table has the added advantage that the results of a run can also be shared with the rest of the users without them having to run the same query over and over.
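The cleanup tool's core decision can be sketched as follows; this is a hypothetical illustration, not hipal's actual code. Given each tmp_hwi table's creation time and its RETENTION property (in days), it selects the tables that have outlived their retention; listing the tables from the metastore and issuing the DROP statements are left out.

```python
from datetime import datetime, timedelta

def tables_to_drop(tables, now):
    """Pick tables past their retention.

    tables: iterable of (name, created_at, retention_days), where
    retention_days comes from the table's RETENTION property.
    """
    return [name for name, created_at, retention_days in tables
            if now - created_at > timedelta(days=retention_days)]
```

A cron job could run this daily, dropping the selected tables so that shared query results stick around for their retention window and no longer.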