Bug 46215 - Race condition in bybusyness algorithm
Summary: Race condition in bybusyness algorithm
Status: RESOLVED LATER
Alias: None
Product: Apache httpd-2
Classification: Unclassified
Component: mod_proxy_balancer
Version: 2.2.10
Hardware: PC Linux
Importance: P2 normal with 6 votes
Target Milestone: ---
Assignee: Apache HTTPD Bugs Mailing List
URL:
Keywords: MassUpdate
Duplicates: 53618
Depends on:
Blocks:
 
Reported: 2008-11-14 16:24 UTC by Dan Udey
Modified: 2018-11-07 21:09 UTC
CC List: 5 users



Attachments
Patch resolving the issue (1.05 KB, patch)
2008-11-14 16:24 UTC, Dan Udey
Patch using apr_atomic functions, also adding "busyness" column in balancer-manager (3.47 KB, patch)
2009-04-21 06:20 UTC, Thomas Binder

Description Dan Udey 2008-11-14 16:24:49 UTC
Created attachment 22876 [details]
Patch resolving the issue

In scenarios with large numbers of backend workers, a race condition can prevent the 'busy' counter from being decremented properly, causing some workers to be ignored completely. Typically, all workers should be at the same busyness when idle.

ps output showing the effect:

       deploy   27326 14.3  3.2 271728 130632 ?       Sl   23:23   1:03 mongrel_rails [8000/0/289]: idle
       deploy   27329 15.4  3.7 298428 150368 ?       Sl   23:23   1:08 mongrel_rails [8001/0/289]: idle
       deploy   27332 16.6  3.8 296292 154976 ?       Sl   23:23   1:13 mongrel_rails [8002/0/288]: idle
       deploy   27335 15.5  3.3 279404 136820 ?       Sl   23:23   1:08 mongrel_rails [8003/0/289]: idle
       deploy   27338 16.6  3.4 280396 139452 ?       Sl   23:23   1:13 mongrel_rails [8004/0/290]: idle
       deploy   27341 13.6  3.3 275600 134724 ?       Sl   23:23   1:00 mongrel_rails [8005/0/288]: idle
       deploy   27344  1.1  1.5 155708 62616 ?        Sl   23:23   0:04 mongrel_rails [8006/0/7]: idle
       deploy   27347 16.2  3.7 299976 153908 ?       Sl   23:23   1:11 mongrel_rails [8007/0/287]: idle
       deploy   27350  1.3  2.5 241708 104364 ?       Sl   23:23   0:05 mongrel_rails [8008/0/5]: idle
       deploy   27354  1.4  2.6 246368 109044 ?       Sl   23:23   0:06 mongrel_rails [8009/0/4]: idle
       deploy   27359  1.0  1.4 151124 58096 ?        Sl   23:23   0:04 mongrel_rails [8010/0/0]: idle
       deploy   27362  0.9  1.4 151140 58112 ?        Sl   23:23   0:04 mongrel_rails [8011/0/0]: idle

balancer-manager output, showing all balancers are in 'Ok' state:

             Worker URL       Route RouteRedir Factor Set Status Elected  To  From
       http://cimbar:8000                      1      0   Ok     415     315K 22M
       http://cimbar:8001                      1      0   Ok     416     324K 22M
       http://cimbar:8002                      1      0   Ok     484     392K 27M
       http://cimbar:8003                      1      0   Ok     483     381K 26M
       http://cimbar:8004                      1      0   Ok     484     379K 26M
       http://cimbar:8005                      1      0   Ok     484     374K 25M
       http://cimbar:8006                      1      0   Ok     52      44K  2.6M
       http://cimbar:8007                      1      0   Ok     608     474K 34M
       http://cimbar:8008                      1      0   Ok     53      41K  2.6M
       http://cimbar:8009                      1      0   Ok     53      43K  2.9M
       http://cimbar:8010                      1      0   Ok     5       1.1K 6.6K
       http://cimbar:8011                      1      0   Ok     7       1.2K 62K

When we look at the debug logs, however, we see that the 'busy' counters for ports 8006 and 8008-8011 are all at '3', and that these are exactly the workers which are not receiving requests (a simplified sketch of the selection logic follows the log excerpt below).

       dan@waterdeep:/var/log/apache2$ for port in {8000..8011}; do fgrep "bybusyness selected worker \"http://cimbar:${port}" /tmp/logfile | tail -n1; done
       [Thu Nov 13 23:32:39 2008] [debug] mod_proxy_balancer.c(1173): proxy: bybusyness selected worker "http://cimbar:8000" : busy 2 : lbstatus -1922
       [Thu Nov 13 23:32:45 2008] [debug] mod_proxy_balancer.c(1173): proxy: bybusyness selected worker "http://cimbar:8001" : busy 2 : lbstatus -1910
       [Thu Nov 13 23:34:24 2008] [debug] mod_proxy_balancer.c(1173): proxy: bybusyness selected worker "http://cimbar:8002" : busy 2 : lbstatus -2233
       [Thu Nov 13 23:34:25 2008] [debug] mod_proxy_balancer.c(1173): proxy: bybusyness selected worker "http://cimbar:8003" : busy 2 : lbstatus -2236
       [Thu Nov 13 23:34:23 2008] [debug] mod_proxy_balancer.c(1173): proxy: bybusyness selected worker "http://cimbar:8004" : busy 2 : lbstatus -2234
       [Thu Nov 13 23:34:24 2008] [debug] mod_proxy_balancer.c(1173): proxy: bybusyness selected worker "http://cimbar:8005" : busy 2 : lbstatus -2236
       [Thu Nov 13 23:32:45 2008] [debug] mod_proxy_balancer.c(1173): proxy: bybusyness selected worker "http://cimbar:8006" : busy 3 : lbstatus 2468
       [Thu Nov 13 23:34:25 2008] [debug] mod_proxy_balancer.c(1173): proxy: bybusyness selected worker "http://cimbar:8007" : busy 1 : lbstatus -3444
       [Thu Nov 13 23:33:54 2008] [debug] mod_proxy_balancer.c(1173): proxy: bybusyness selected worker "http://cimbar:8008" : busy 3 : lbstatus 2724
       [Thu Nov 13 23:32:43 2008] [debug] mod_proxy_balancer.c(1173): proxy: bybusyness selected worker "http://cimbar:8009" : busy 3 : lbstatus 2459
       [Thu Nov 13 23:32:39 2008] [debug] mod_proxy_balancer.c(1173): proxy: bybusyness selected worker "http://cimbar:8010" : busy 3 : lbstatus 2987
       [Thu Nov 13 23:32:45 2008] [debug] mod_proxy_balancer.c(1173): proxy: bybusyness selected worker "http://cimbar:8011" : busy 3 : lbstatus 2983
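
For context, the bybusyness method hands each request to the usable worker with the smallest 'busy' count, so a worker whose counter is stuck at 3 is simply never chosen while its peers report 0. Roughly (a simplified sketch written from memory, not the exact 2.2 source; field and macro names are as I recall them from mod_proxy and may differ):

        /* Simplified sketch of the bybusyness selection: the usable worker
         * with the lowest 'busy' wins; lbstatus only breaks ties between
         * equally busy workers. */
        proxy_worker *workers = (proxy_worker *)balancer->workers->elts;
        proxy_worker *best = NULL;
        int i;

        for (i = 0; i < balancer->workers->nelts; i++) {
            proxy_worker *w = &workers[i];
            if (!PROXY_WORKER_IS_USABLE(w))
                continue;
            w->s->lbstatus += w->s->lbfactor;
            if (!best
                || w->s->busy < best->s->busy
                || (w->s->busy == best->s->busy
                    && w->s->lbstatus > best->s->lbstatus)) {
                best = w;
            }
        }
        /* A worker whose 'busy' is skewed upward is therefore ignored until
         * every other worker reaches the same busyness. */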

I've traced this effect to modules/proxy/mod_proxy_balancer.c:578-579, where the counter is incremented. There is no mutex locking around this code, so a race condition ensues: simultaneous incrementing and decrementing by separate threads on separate CPUs can leave the counter skewed upward.

The skew could in principle occur downwards as well, but the decrement code only runs while busyness is greater than 0, so the counter never goes negative. A worker could only skew downwards from a busyness of 2 or more, and such a worker would then receive more traffic and quickly climb back up. The upward skew is effectively bounded by the maximum concurrent busyness the server handles, because once a worker's counter is skewed high it is ignored until all other workers reach the same level; it would take a very high, sustained load to push it far beyond a busyness of 3.
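
To illustrate the mechanism (a hypothetical interleaving, not a quote from the source; 'worker->s->busy' stands for the shared per-worker counter):

        /* The unsynchronized ++/-- each compile to a separate load, modify
         * and store, so two threads can interleave and lose an update. */
        worker->s->busy++;       /* on worker selection, no lock held      */
        /* ... request is proxied ... */
        if (worker->s->busy)
            worker->s->busy--;   /* in post_request, also without a lock   */

        /* Possible interleaving, starting from busy == 3:
         *   thread A (selecting)  loads 3
         *   thread B (finishing)  loads 3, stores 2
         *   thread A              stores 4  -> B's decrement is lost and
         *                                      'busy' stays one too high. */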

I've attached a patch which moves the decrement code in proxy_balancer_post_request() up into the previously-commented-out mutex lock/unlock code in the same function. This has so far resolved the issue on our production system.
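
The idea is roughly the following (a hedged sketch using plain apr_thread_mutex calls rather than the literal diff; the name of the balancer's mutex member is an assumption):

        /* In proxy_balancer_post_request(): take the balancer's thread
         * mutex around the decrement, i.e. re-enable the lock/unlock that
         * was previously commented out in that function. */
        #if APR_HAS_THREADS
            if (apr_thread_mutex_lock(balancer->mutex) == APR_SUCCESS) {
                if (worker && worker->s->busy)
                    worker->s->busy--;
                apr_thread_mutex_unlock(balancer->mutex);
            }
        #endif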
Comment 1 Thomas Binder 2009-04-21 06:20:17 UTC
Created attachment 23519 [details]
Patch using apr_atomic functions, also adding "busyness" column in balancer-manager

I've experienced the same issue, but the proposed patch did not completely resolve it for me. Running something like

ab -k -c 20000 -n 1000 http://example.com/

would still cause the "busy" counter to stay greater than zero from time to time.

I've therefore tried a different approach, using the functions from apr_atomic.h to increment and decrement the "busy" counter instead of locking the balancer. So far, the problem has not reappeared, even after running the "benchmark" above several times.
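
The core of the approach looks roughly like this (a sketch of the idea rather than the attached patch verbatim; it assumes the shared 'busy' field can be treated as an apr_uint32_t):

        #include <apr_atomic.h>

        /* Where the balancer selects a worker: */
        apr_atomic_inc32((volatile apr_uint32_t *)&worker->s->busy);

        /* In proxy_balancer_post_request(), once the request is finished: */
        if (apr_atomic_read32((volatile apr_uint32_t *)&worker->s->busy) > 0)
            apr_atomic_dec32((volatile apr_uint32_t *)&worker->s->busy);

Unlike the plain ++/--, the atomic increment and decrement themselves cannot be lost to a concurrent update from another thread.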

The patch also adds a "busyness" column to the output of balancer-manager, to make it easier to check the current behaviour of the balancer. If you don't want / need that, simply remove the hunks "@@ -830,6 +831,7 @@" and "@@ -861,6 +863,7 @@" from the patch.
Comment 2 Markus Stoll 2010-10-05 08:36:25 UTC
I had to fight with this one too - the second patch did not help, so I had to investigate further. IMHO we are also dealing with bug #48735 here. I extended the patch that uses atomic functions with proper error handling for unconnectable workers. I'll upload my fix on the bug #48735 page - with that fix, the problem went away for me, even under very heavy load.
Comment 3 Christophe JAILLET 2018-05-28 18:05:34 UTC
*** Bug 53618 has been marked as a duplicate of this bug. ***
Comment 5 William A. Rowe Jr. 2018-11-07 21:09:02 UTC
Please help us to refine our list of open and current defects; this is a mass update of old and inactive Bugzilla reports which reflect user error, already resolved defects, and still-existing defects in httpd.

As repeatedly announced, the Apache HTTP Server Project has discontinued all development and patch review of the 2.2.x series of releases. The final release 2.2.34 was published in July 2017, and no further evaluation of bug reports or security risks will be considered or published for 2.2.x releases. All reports older than 2.4.x have been updated to status RESOLVED/LATER; no further action is expected unless the report still applies to a current version of httpd.

If your report represented a question or confusion about how to use an httpd feature, an unexpected server behavior, problems building or installing httpd, or working with an external component (a third-party module, browser, etc.), we ask you to start by bringing your question to the User Support and Discussion mailing list; see [https://httpd.apache.org/lists.html#http-users] for details. Include a link to this Bugzilla report for completeness with your question.

If your report was clearly a defect in httpd or a feature request, we ask that you retest using a modern httpd release (2.4.33 or later) released in the past year. If it can be reproduced, please reopen this bug and change the Version field above to the httpd version you have reconfirmed with.

Your help in identifying defects or enhancements still applicable to the current httpd server software release is greatly appreciated.