Commons Math / MATH-156

Brent solver is non-optimal, because it doesn't use the user's guess.


Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed

    Description

      Hi guys, I noticed that your Brent solver isn't using the initial guess given by the user. Using that guess can often radically improve the performance of the solver; in my tests it improved performance by roughly 30% or more with a decent guess. Basically, the initialization should work like this:

      1. Try the guess first. If it is close enough, return it.
      2. Otherwise, evaluate one of the end points. If that is close enough, return it. If the guess and that endpoint bracket the root, go into the main loop.
      3. If they don't bracket, evaluate the other endpoint. If that is close enough, return it. If it still doesn't bracket, throw an exception. If it does bracket, go into the main loop with all three trial points available, allowing inverse quadratic interpolation to be used immediately.
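
      A minimal sketch of this initialization in Java, assuming a hypothetical solver API; the method names, the FUNCTION_ACCURACY tolerance and the use of DoubleUnaryOperator are illustrative, not the actual Commons Math signatures:

      import java.util.function.DoubleUnaryOperator;

      public class GuessFirstBrentInit {

          // Illustrative tolerance on |f(x)|; a real solver would take this from its configuration.
          static final double FUNCTION_ACCURACY = 1.0e-12;

          /** Finds a root of f in [min, max], trying the user's guess first. */
          static double solve(DoubleUnaryOperator f, double min, double initial, double max) {
              // Step 1: try the guess; if f(initial) is already close enough, return it.
              double yInitial = f.applyAsDouble(initial);
              if (Math.abs(yInitial) <= FUNCTION_ACCURACY) {
                  return initial;
              }

              // Step 2: try one endpoint; return it if it is a root, otherwise
              // check whether [min, initial] brackets the root.
              double yMin = f.applyAsDouble(min);
              if (Math.abs(yMin) <= FUNCTION_ACCURACY) {
                  return min;
              }
              if (yInitial * yMin < 0) {
                  // Only two distinct trial points are known at this stage.
                  return brentLoop(f, min, yMin, initial, yInitial, min, yMin);
              }

              // Step 3: try the other endpoint.
              double yMax = f.applyAsDouble(max);
              if (Math.abs(yMax) <= FUNCTION_ACCURACY) {
                  return max;
              }
              if (yInitial * yMax < 0) {
                  // All three trial points are available, so the main loop can use
                  // inverse quadratic interpolation from the first iteration on.
                  return brentLoop(f, initial, yInitial, max, yMax, min, yMin);
              }

              throw new IllegalArgumentException("function values do not bracket a root");
          }

          /** Stand-in for the usual Brent main loop working on three trial points. */
          static double brentLoop(DoubleUnaryOperator f,
                                  double x0, double y0,
                                  double x1, double y1,
                                  double x2, double y2) {
              // standard Brent iteration (bisection / secant / inverse quadratic), omitted here
              throw new UnsupportedOperationException("main loop omitted in this sketch");
          }
      }

      Note that step 2 enters the loop with only two distinct trial points, so it starts out behaving like secant/bisection; step 3 hands over all three points up front.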

      The worst case is that the initial guess doesn't bracket the root. In that case the new approach beats the default algorithm only if the user's initial guess is better than linear interpolation, which I imagine it almost always would be. If the user can't guess better than linear interpolation, presumably they wouldn't provide a guess at all, and then nothing changes.

      It's a small addition, less than 100 lines of code. I can't send it in due to legal restrictions at work, but from the ideas above you should be able to put it in easily. One caveat: you may need to slightly modify the basic solve(double, double) method to linearly interpolate a good starting point, and then break out a six-double method (solve(x0, y0, x1, y1, x2, y2)) that both solve(min, max) and solve(min, max, initial) can use. If you don't do this, then solve(min, max) will bisect on its first iteration rather than interpolating, which could cost performance. If you do break it out like this, the change will always perform better than the current implementation.
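
      A sketch of that overload layout, again with hypothetical method names rather than the actual Commons Math signatures (the two-argument entry assumes the endpoints bracket the root):

      import java.util.function.DoubleUnaryOperator;

      public class BrentSolverOverloads {

          /** Two-argument entry point: derive a third trial point by linear interpolation. */
          static double solve(DoubleUnaryOperator f, double min, double max) {
              double yMin = f.applyAsDouble(min);
              double yMax = f.applyAsDouble(max);
              // Secant step between the endpoints, so the first Brent iteration
              // interpolates instead of bisecting.
              double initial = max - yMax * (max - min) / (yMax - yMin);
              return solve(f, min, yMin, initial, f.applyAsDouble(initial), max, yMax);
          }

          /** Three-argument entry point: the guess-first initialization sketched earlier
           *  would run here and end in a call to the same six-double method. */
          static double solve(DoubleUnaryOperator f, double min, double initial, double max) {
              return solve(f, min, f.applyAsDouble(min),
                           initial, f.applyAsDouble(initial),
                           max, f.applyAsDouble(max));
          }

          /** Shared main loop, taking the three trial points and their function values. */
          static double solve(DoubleUnaryOperator f,
                              double x0, double y0,
                              double x1, double y1,
                              double x2, double y2) {
              // standard Brent iteration, omitted here
              throw new UnsupportedOperationException("main loop omitted in this sketch");
          }
      }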

      As a good case study, here's our situation from work.

      We are trying to compute a root; we know it falls in a VERY large interval (in this case -1.0 to 1.0), but more than 99% of the time it will be between (say) 1.04 and 1.07. We can guess with stunning accuracy: at least 90% of the time we can get within 10^-4 for very little cost (a very nice cubic approximation of the more complicated function), but we really need it to within 10^-8 or better. The remaining 1% of the time the root can fall in very strange places (0.9, or -0.3, or so), and our guess isn't very good when that happens. So we can't tighten down the interval, because that would cause spurious errors, and at the same time linear interpolation from the endpoints isn't going to be very good at all; it easily takes 3 or 4 steps to get as close as our first guess. Using the first guess in this manner would cut the steps to convergence from about 8-10 to about 3-5: a speedup of around 100%, with no loss in accuracy.


          People

            Assignee: luc (Luc Maisonobe)
            Reporter: tyward (Tyler Ward)
            Votes: 0
            Watchers: 0
