Unfortunately JIRA is not available, so I have to respond by mail. I will attach the text to the issue later for reference.
You modified HExecutionEngine and LocalExecutionEngine to not have updateConfiguration and getConfiguration, but you didn't change the abstract class ExecutionEngine in the same way. Did you intend to remove it from ExecutionEngine as well? Also, why did you want to remove these features? I think it's fine to not implement them for now (as they weren't in Local), but if we want to remove them we need a good reason.
Sorry, I messed this up at the last second during patch generation and didn't notice it. I have rolled it back in, since it sounds like you don't want to remove it (see the next patch version). However, I did plan to remove it.
Here are my thoughts:
I personally think that interfaces, especially at such an early stage of a project, should be driven by the implementations, not by what we might or might not need some day in the distant future. Having these methods in the interface while not implementing them in all of our implementations seems wrong: we carry around code we do not need but still have to maintain, and users get the wrong impression, e.g. that updating the configuration at runtime is possible. Hadoop does not allow changing the configuration at runtime (or only in a very limited way); all we can really do is reconnect to a different Hadoop cluster.
The getConfiguration method does not make sense in the ExecutionEngine either, since the PigContext holds all configuration properties; that is why it is a context. Having such a method in the ExecutionEngine again gives the wrong impression to a user reading the Javadoc: it suggests that an ExecutionEngine can have a configuration that differs from the context, which is not the case.
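To make this concrete, here is roughly how I picture the trimmed-down class (only an illustrative sketch, not the exact code from the patch; the init/close methods are placeholders):

    // Illustrative sketch only. The engine is constructed with the
    // PigContext, which is the single owner of all configuration
    // properties, so the abstract class declares no configuration
    // accessors of its own.
    public abstract class ExecutionEngine {
        protected final PigContext pigContext;

        protected ExecutionEngine(PigContext pigContext) {
            this.pigContext = pigContext;
        }

        // connect to the backend, e.g. a jobtracker, or set up local execution
        public abstract void init() throws ExecException;

        // release any backend resources
        public abstract void close() throws ExecException;

        // intentionally no updateConfiguration()/getConfiguration():
        // callers should ask the PigContext instead
    }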
Another couple of questions. My expectation in looking at the patch is that by setting exectype in the config file, Main should then respond by starting PigServer with that exectype. But that doesn't happen.
Sorry, that is a bug; it is fixed in my next patch version.
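For reference, the behaviour I am aiming for in the fix is roughly this (only a sketch; the property name exectype, the file name, the class name and the variables are assumptions for illustration, and I assume the PigServer constructor that takes the exectype string):

    import java.io.FileInputStream;
    import java.util.Properties;
    import org.apache.pig.PigServer;

    public class ExecTypeFromConfig {
        public static void main(String[] args) throws Exception {
            // hypothetical values for illustration
            String configFile = "conf/pig.properties";
            String cmdLineExecType = args.length > 0 ? args[0] : null;

            Properties props = new Properties();
            props.load(new FileInputStream(configFile));

            // the command line still wins; otherwise fall back to the config
            // file, and only then to the built-in default
            String execType = cmdLineExecType != null
                    ? cmdLineExecType
                    : props.getProperty("exectype", "local");

            PigServer pig = new PigServer(execType);   // "local" or "mapreduce"
        }
    }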
Similarly if I set the cluster value to the name and port of a job tracker, I was expecting it to attach to that cluster.
Also fixed in the next patch version - thanks for catching this.
You changed the default mode from mapreduce to local. We shouldn't make interface changes like that without discussion.
I'm sorry, you are right, we should discuss this first; I wasn't aware that the default mode is map reduce, which makes no sense from my point of view. I think the user experience should be download, unzip, start writing Pig Latin - out of the box, with no extra configuration and no additional installation. Therefore I think either local execution mode, or map reduce using the Hadoop local job client, should be the default.
Also, I was pondering how to include hadoop specific data here. Right now pig attaches to a cluster by reading a hadoop config file. Obviously we don't want this in our general config file. But maybe the file to be referenced should be referred to in this config file. Or maybe it's ok to just pick the hadoop-site.xml up off the classpath, as is done now. The modification would then be that we only do it if we're in mapreduce mode. Thoughts on this?
What exactly do we need in our configuration that is Hadoop specific? The namenode and the jobtracker. Everything else should be irrelevant for us and will be overwritten in the job configuration when we submit a job, or do I misunderstand something here?
These two properties can easily live in our properties file, and I would actually prefer to call the property mapred.job.tracker rather than cluster, since that is again easier for the user to understand. Different execution engines will need different sets of parameters anyway.
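Concretely, something along these lines is what I have in mind (again only a sketch; fs.default.name and mapred.job.tracker are the standard Hadoop keys, while the class, the method and the pigProperties parameter are illustrative assumptions):

    import java.util.Properties;
    import org.apache.hadoop.mapred.JobConf;

    public class HadoopPropsFromPigConfig {
        // copy the only two Hadoop-specific values from our own properties
        // file into the JobConf at submit time; everything else keeps
        // Hadoop's defaults or is set per job
        static JobConf createJobConf(Properties pigProperties) {
            JobConf conf = new JobConf();
            conf.set("mapred.job.tracker",
                     pigProperties.getProperty("mapred.job.tracker", "local"));
            conf.set("fs.default.name",
                     pigProperties.getProperty("fs.default.name", "file:///"));
            return conf;
        }
    }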
In case we need the Hadoop configuration files on the classpath, I would suggest making this optional and extending the shell script with a new parameter like -hadoopHome=/myHadoopFolder.
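On the Java side this could then stay very small (a sketch again; the hadoopHome system property and the class name are hypothetical, standing in for whatever the new shell parameter would pass through):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;

    public class OptionalHadoopSite {
        // only if the user pointed us at a Hadoop installation do we pull
        // its site configuration in; otherwise Pig stays self-contained
        static Configuration loadHadoopSite() {
            Configuration conf = new Configuration();
            String hadoopHome = System.getProperty("hadoopHome");  // set by the shell script
            if (hadoopHome != null) {
                conf.addResource(new Path(hadoopHome + "/conf/hadoop-site.xml"));
            }
            return conf;
        }
    }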
Thanks for taking the time to look into this patch!