Details
- Type: Improvement
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Fix Version: 1.4.4
- Component: all
Description
Improve HostInfo.py (Host Check) during the Ambari installer process, particularly for step 3, where the nodes are registered and confirmed. In particular, this JIRA will address the erroneous iptables check, which currently always returns a value of 0, indicating in the script that iptables is running and active even when the iptables rules have actually been flushed. This results in an erroneous warning message.
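As background, the failure mode can be sketched in a few lines (the helper names below are illustrative, not Ambari's): the iptables command exits 0 whenever it runs successfully, so a return-code check reports iptables as active even when every rule has been flushed; deciding from the rule listing itself avoids that.

```python
# Illustrative sketch (not Ambari code): why a return-code check on
# iptables always reports "running", and how an output-based check differs.

# `iptables -S` on a flushed ruleset prints only the default policies:
FLUSHED_LISTING = "-P INPUT ACCEPT\n-P FORWARD ACCEPT\n-P OUTPUT ACCEPT\n"

def active_by_return_code(returncode):
    # Faulty logic: the command exits 0 whether or not rules exist,
    # so this always claims iptables is active.
    return returncode == 0

def active_by_listing(listing):
    # Improved logic: a listing containing only the three default
    # ACCEPT policies means the tables have been flushed.
    return listing != FLUSHED_LISTING
```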
Attachments
- AMBARI-5194.patch (3 kB, Scott Creeley)
- AMBARI-5194.patch (2 kB, Mahadev Konar)
Activity
Looks like the previous patch had some apply failures. Uploading one that applies cleanly; no change to the patch content, Scott.
Mahadev Konar, Erin A Boyd
Updated Patch to contain both HostInfo.py and the unit test for HostInfo.py
Mahadev Konar - thanks for the tips...
So, here is the successful unit test:
=============
specific line:
=============
test_checkIptables (TestHostInfo.TestHostInfo) ... ok
=============
All HostInfo tests
=============
test_analyze_yum_output (TestHostInfo.TestHostInfo) ... ok
test_analyze_yum_output_err (TestHostInfo.TestHostInfo) ... ok
test_analyze_zypper_out (TestHostInfo.TestHostInfo) ... ok
test_checkFolders (TestHostInfo.TestHostInfo) ... ok
test_checkIptables (TestHostInfo.TestHostInfo) ... ok
test_checkLiveServices (TestHostInfo.TestHostInfo) ... ok
test_checkUsers (TestHostInfo.TestHostInfo) ... ok
test_dirType (TestHostInfo.TestHostInfo) ... ok
test_etcAlternativesConf (TestHostInfo.TestHostInfo) ... ok
test_getReposToRemove (TestHostInfo.TestHostInfo) ... ok
test_hadoopVarLogCount (TestHostInfo.TestHostInfo) ... ok
test_hadoopVarRunCount (TestHostInfo.TestHostInfo) ... ok
test_hostinfo_register (TestHostInfo.TestHostInfo) ... ok
test_hostinfo_register_suse (TestHostInfo.TestHostInfo) ... ok
test_javaProcs (TestHostInfo.TestHostInfo) ... ok
test_osdiskAvailableSpace (TestHostInfo.TestHostInfo) ... ok
test_perform_package_analysis (TestHostInfo.TestHostInfo) ... ok
So it passed, but I had to modify TestHostInfo.py for that particular test, as it was still expecting the values from the old method. What is the process for that: do I now need a separate JIRA so I can update TestHostInfo.py as well? Below is what I changed (the old routine used the faulty return code; the new routine uses boolean values):
# iptables active: checkIptables() should report True
result = hostInfo.checkIptables()
self.assertTrue(result)

# iptables flushed: checkIptables() should report False
result = hostInfo.checkIptables()
self.assertFalse(result)
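For reference, here is a self-contained sketch of how such a test could stub out the iptables call with `unittest.mock` (Python 3; the class and test names here are assumptions for illustration, not the exact Ambari test code):

```python
import subprocess
import unittest
from unittest import mock

class HostInfo:
    # Minimal stand-in mirroring the boolean-returning checkIptables
    # discussed in this issue; the real class lives in ambari-agent.
    def checkIptables(self):
        iptablesIsRunning = True
        try:
            proc = subprocess.Popen(["iptables", "-S"], stdout=subprocess.PIPE)
            stdout = proc.communicate()
            if stdout == ('-P INPUT ACCEPT\n-P FORWARD ACCEPT\n-P OUTPUT ACCEPT\n', None):
                iptablesIsRunning = False
        except OSError:
            pass
        return iptablesIsRunning

class TestHostInfo(unittest.TestCase):
    @mock.patch("subprocess.Popen")
    def test_checkIptables_flushed(self, popen_mock):
        # Flushed tables: only the default ACCEPT policies in the listing.
        popen_mock.return_value.communicate.return_value = (
            '-P INPUT ACCEPT\n-P FORWARD ACCEPT\n-P OUTPUT ACCEPT\n', None)
        self.assertFalse(HostInfo().checkIptables())

    @mock.patch("subprocess.Popen")
    def test_checkIptables_active(self, popen_mock):
        # Extra rules present: iptables should be reported as running.
        popen_mock.return_value.communicate.return_value = (
            '-P INPUT ACCEPT\n-A INPUT -j DROP\n', None)
        self.assertTrue(HostInfo().checkIptables())
```

The mock patches `subprocess.Popen` so the tests never touch the real firewall, which also lets them run without root privileges.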
Scott Creeley, that's part of the agent unit tests. What I meant was: would you be able to add more unit tests to TestHostInfo.py? You can run the ambari-agent unit tests by doing:
cd ambari-agent
mvn clean test
That should run all the unit tests on the ambari-agent side.
Mahadev Konar
Specifically, there is a Python file called ambari-agent/src/test/python/ambari_agent/TestHostInfo.py; how do I get that to run within a unit test?
Hi Mahadev,
It looks like a unit test already exists for HostInfo, but I'm not sure I ran it correctly. Here is what I did; I ran the following:
mvn clean test
and piped the results out, and found this line:
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.801 sec
Running org.apache.ambari.server.agent.AgentHostInfoTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.249 sec
Running org.apache.ambari.server.agent.TestHeartbeatMonitor
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.257 sec
Running org.apache.ambari.server.agent.TestActionQueue
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.319 sec
Running org.apache.ambari.server.agent.TestHeartbeatHandler
If this is not what you wanted, or there is another preferred way to run unit tests for this, let me know. Thanks.
HostInfo.py patch with mvn clean test results:
Results :
Failed tests: testNoNagiosServerCompoonent(org.apache.ambari.server.controller.nagios.NagiosPropertyProviderTest): Expected no alerts
Tests in error:
testDeleteUsers(org.apache.ambari.server.controller.AmbariManagementControllerTest): Could not remove user user1. System should have at least one user with administrator role.
Tests run: 1404, Failures: 1, Errors: 1, Skipped: 7
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Ambari Main ....................................... SUCCESS [0.063s]
[INFO] Apache Ambari Project POM ......................... SUCCESS [0.024s]
[INFO] Ambari Web ........................................ SUCCESS [5.396s]
[INFO] Ambari Views ...................................... SUCCESS [1.291s]
[INFO] Ambari Server ..................................... FAILURE [8:43.111s]
[INFO] Ambari Agent ...................................... SKIPPED
[INFO] Ambari Client ..................................... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 8:50.247s
[INFO] Finished at: Tue Mar 25 14:48:26 GMT-05:00 2014
[INFO] Final Memory: 34M/812M
[INFO] ------------------------------------------------------------------------
This patch will address the iptablesIsRunning host check; testing is confirmed.
Removed the RHS environment, as this is not specific to the GlusterFS stack.
After working with my team lead, we decided to break out the user and directory fix into another JIRA after we do some more tests, as we are not convinced the proposed solution fully encompasses the desired behavior. This JIRA will now focus only on the iptablesIsRunning check in HostInfo.py.
Scott, please test on a non-GlusterFS installation to make sure the host check works properly, then run mvn clean test and attach the results to the JIRA. Code looks solid! Good job.
This proposed patch will fix a few user-experience warning issues when doing a GlusterFS deployment, and will also fix the erroneous iptables warnings.
Suggested improvements:
1. Existing users and directories created during an RHS GlusterFS install: test for the existence of Gluster and, if found, follow the existing suppressed control block.
- See if /var/lib/ambari-agent/data/HDP-*.Gluster exists, or if /mnt/glusterfs exists.
- If so, we can assume RHS/Gluster is being used or was once used.
REPO_ARTIFACT_PATH = "/var/lib/ambari-agent/data/"
REPO_ARTIFACT_FILE = "HDP-*.Gluster"
REPO_ARTIFACT_SEARCH = "%s%s" % (REPO_ARTIFACT_PATH, REPO_ARTIFACT_FILE) + "*"
gluster_artifact_result = glob.glob(REPO_ARTIFACT_SEARCH)

GLUSTER_FS_PATH = "/mnt/glusterfs"
gluster_fs_result = glob.glob(GLUSTER_FS_PATH)
isGluster = gluster_artifact_result != [] or gluster_fs_result != []

if componentsMapped or commandsInProgress or isSuse or isGluster:
    dict['existingRepos'] = [self.RESULT_UNAVAILABLE]
    dict['installedPackages'] = []
    dict['alternatives'] = []
    dict['stackFoldersAndFiles'] = []
    dict['existingUsers'] = []
else:
    etcs = []
2. iptables improvement: change the command to iptables -S. This gives cleaner output that can easily be verified to show that the iptables rules have been flushed.
def checkIptables(self):
    iptablesIsRunning = True
    try:
        iptables = subprocess.Popen(["iptables", "-S"], stdout=subprocess.PIPE)
        stdout = iptables.communicate()
        # A flushed ruleset lists only the three default ACCEPT policies.
        if stdout == ('-P INPUT ACCEPT\n-P FORWARD ACCEPT\n-P OUTPUT ACCEPT\n', None):
            iptablesIsRunning = False
    except:
        # If iptables is missing or the call fails, keep the default.
        pass
    return iptablesIsRunning

dict['iptablesIsRunning'] = self.checkIptables()
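The glob-based Gluster detection in item 1 above can be exercised in isolation; here is a sketch against a temporary directory standing in for /var/lib/ambari-agent/data/ (the paths and file name are therefore illustrative only):

```python
import glob
import os
import tempfile

# Stand-in for /var/lib/ambari-agent/data/, so the sketch can run anywhere.
repo_path = tempfile.mkdtemp()
open(os.path.join(repo_path, "HDP-2.0.Gluster"), "w").close()

# Same pattern construction as the proposed patch: "HDP-*.Gluster" + "*".
search = os.path.join(repo_path, "HDP-*.Gluster") + "*"
gluster_artifact_result = glob.glob(search)

# Detection fires as soon as a matching artifact file exists.
isGluster = gluster_artifact_result != []
```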
Sorry for the late commit, Scott; looks like I missed this.