- just switch to a loop entirely. See below.
IMO, best way to solve this is by making hostnames delimited by (,)
Nope, definitely not.
The start of the problems is definitely here:
This construction has two key issues:
Without quotes, the HADOOP_USER_PARAMS array always has its metacharacters expanded. This means that an array of 4 elements will become 4+n elements, depending upon what else is in there. So if a user passes:
elements 2 3 4 just got expanded into my, cool, and dir rather than just the single "my cool dir".
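The word-splitting behavior can be demonstrated in isolation. This is a sketch, not the actual hadoop-daemons.sh code; the sample values are made up, but the array name matches the one under discussion:

```shell
#!/usr/bin/env bash
# Illustrative contents only: element 2 contains spaces.
HADOOP_USER_PARAMS=(--config "my cool dir" start namenode)

# Unquoted expansion: word splitting breaks "my cool dir" into three words.
unquoted=(${HADOOP_USER_PARAMS[@]})
echo "unquoted count: ${#unquoted[@]}"   # 6 elements instead of 4

# Quoted expansion: each element survives intact.
quoted=("${HADOOP_USER_PARAMS[@]}")
echo "quoted count: ${#quoted[@]}"       # 4 elements
```
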
So if we change the construction to
That element expansion no longer happens. But now we've introduced a new problem: the substitution turns matching elements into empty strings. This is where the empty parameter problem comes in, because it means that if we had:
hadoop-daemons.sh --hostnames "1 2" start namenode
We'd end up with:
after the substitution.
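A minimal sketch of that substitution behavior, using an illustrative array name rather than the real script variables:

```shell
#!/usr/bin/env bash
# Hypothetical parameter array mirroring the command line above.
params=(--hostnames "1 2" start namenode)

# "${params[@]/start/}" deletes the substring "start" from every element,
# so the element that was exactly "start" becomes an empty string --
# but it is NOT removed from the array.
stripped=("${params[@]/start/}")
echo "count: ${#stripped[@]}"            # still 4 elements
printf '[%s] ' "${stripped[@]}"; echo    # [--hostnames] [1 2] [] [namenode]
```
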
Then when we get to
on the exec line, it turns into:
hdfs --slaves --daemon "start" --hostnames "1 2" "" namenode
Thus we also need to filter this empty element out of the array.
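Filtering the empty element takes a short loop; something along these lines (variable names are illustrative):

```shell
#!/usr/bin/env bash
# Array as it looks after the substitution left an empty element behind.
params=(--hostnames "1 2" "" namenode)

# Rebuild the array, keeping only non-empty elements.
filtered=()
for p in "${params[@]}"; do
  [ -n "$p" ] && filtered+=("$p")
done

printf '[%s] ' "${filtered[@]}"; echo    # [--hostnames] [1 2] [namenode]
```
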
So why don't we just switch to using commas here? Because, as the above shows, it doesn't actually fix all the problems with metachar expansion. If any other parameter has them, it's going to blow up in our face. The other problem we've got is backward compatibility. A lot of people use hadoop/yarn-daemons.sh in scripts, and changing this to use commas would be a pretty hefty tax, especially when we know we can fix it another way.
One of the goals I had in mind with this code was to avoid a loop. But there's still another problem here:
if a hostname contains start, stop, or status, that substring is going to get removed. Since we already need the loop to deal with the empty element, we might as well fix that bug too by matching whole elements in the loop rather than cheating with substitution. We still have a problem if some other param is literally start/stop/status (e.g., --config start), but there's not much we can do about that without building a pretty complex test for what mode we're in.