• Type: Sub-task
    • Status: Closed
    • Priority: Major
    • Resolution: Implemented
    • Affects Version: Trunk
    • Fix Version: 16.11.01
    • Component: None
    • Label: Bug Crush Event - 21/2/2015


      I recently (~4 weeks ago) started the "Performance over security, is that reasonable?" thread on the dev ML. I think I did not explain myself well then. I must say it's easy to drown in details with this subject when you want to illustrate the reasons.

      So instead of only answering on the dev ML, I decided it would be better to create a Jira task, possibly with related tasks; here it is.

      For now I consider it only an improvement, but since it's a security matter we can discuss backporting later.


      Performance over security?

      So why did this thread pit performance against security? First we need to understand that here performance stands for HTTP and security for HTTPS.

      Why is HTTP standing for performance?

      Actually there is now not much performance difference between the two protocols, but you can't cache HTTPS requests, and that sometimes matters (e.g. for inter-continental requests).

      And why the question about being reasonable or not?

      I think it's unreasonable to put performance over security. And nowadays you are not secure when you mix HTTP with HTTPS. Most of the time, when you mix both it's because you want to identify a user with a sessionId over HTTPS after the user started with HTTP. As Forrest concisely explained in the above-referenced thread:

      If you're switching between HTTPS and HTTP based on some criteria, an attacker can leverage that to trick the user into all kind of things.
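      One way out of this mixed-protocol window is simply to redirect every plain-HTTP request to its HTTPS equivalent. A minimal sketch with the JDK's built-in com.sun.net.httpserver (the class name and hostnames are illustrative, this is not OFBiz code):

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;

public class RedirectSketch {

    // Build the HTTPS Location target for a plain-HTTP request.
    static String redirectTarget(String hostHeader, String requestUri) {
        return "https://" + hostHeader + requestUri;
    }

    public static void main(String[] args) throws Exception {
        // Tiny plain-HTTP listener that answers every request with a
        // 301 redirect to the HTTPS equivalent of the same URL.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/", exchange -> {
            String host = exchange.getRequestHeaders().getFirst("Host");
            String target = redirectTarget(host, exchange.getRequestURI().toString());
            exchange.getResponseHeaders().set("Location", target);
            exchange.sendResponseHeaders(301, -1); // -1: no response body
            exchange.close();
        });
        server.start();
        System.out.println("redirecting on port " + server.getAddress().getPort());
        server.stop(0);
    }
}
```

      With such a redirect in place the browser never stays on HTTP, so there is no window for the attacker to exploit.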

      It's also explained well and simply (along with other things) in this article:

      The HTTP spec defines a “Secure” flag for cookies, which instructs the browser to only send that cookie value over SSL. If sites set that cookie like they’re supposed to, then yes, SSL is helping you out. Most sites don’t, however, and browsers will happily send the sensitive cookies over unencrypted HTTP. Our hypothetical skeezebag really just needs some way to trick you into opening a normal HTTP URL, maybe by e-mailing you a link to so he can sniff the plain-text cookie off your unencrypted HTTP request, or by surreptitiously embedding a JavaScript file via some site’s XSS vulnerability.
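      The Secure flag the quoted article mentions (together with HttpOnly, which hides the cookie from page scripts) can be sketched with the JDK's java.net.HttpCookie class; a real servlet would use javax.servlet.http.Cookie instead, but the flags are the same:

```java
import java.net.HttpCookie;

public class SecureCookieSketch {

    // Build a session cookie that the browser will only send over TLS
    // (Secure) and that page scripts cannot read (HttpOnly).
    static HttpCookie sessionCookie(String sessionId) {
        HttpCookie cookie = new HttpCookie("JSESSIONID", sessionId);
        cookie.setSecure(true);    // never sent over plain HTTP
        cookie.setHttpOnly(true);  // hidden from document.cookie
        cookie.setPath("/");
        return cookie;
    }

    public static void main(String[] args) {
        HttpCookie c = sessionCookie("abc123");
        System.out.println(c.getName() + " secure=" + c.getSecure()
                + " httpOnly=" + c.isHttpOnly());
    }
}
```

      A cookie built this way is exactly what the article says most sites fail to set: without Secure, the same sessionId leaks on the first plain-HTTP request.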

      Of course, if your site only shows things and nobody ever has to identify themselves, then you are not at risk and HTTP only is perfect. But with ecommerce-type sites that's rarely the case: most of the time users need to identify themselves!

      So why are people still mixing HTTP and HTTPS on their sites? In the first answer at [1], Thomas Pornin and others gave some interesting points and answers. At [2] Yves Lafon also gave a good summary, even if it's a bit old now. I also took some questions/answers from [3]. You can check those links yourself; here is a summary:

      1. "Some browsers may not support SSL" — only old Lynx versions; negligible.
      2. "Connection initiation requires some extra network round trips" — negligible, except for sites which serve mostly static content; see "static content takes a hit" below.
      3. "The SSL initial key exchange adds to the latency" — as thoroughly explained here: "most TLS servers use an RSA key and the client part of RSA is cheap (the server incurs most of the cost in RSA)". It's still better not to have too-short sessions, as explained here.
      4. "Static content takes a hit" — you should store static content separately anyway; OFBiz provides ofbizContentUrl for that. But you should still use HTTPS. The complete answer to point 3 (just above) also applies here. Also, this is quite interesting and shows HTTPS can be faster than HTTP.
      5. "HTTPS servers must use one IP per server name", or "it doesn't work with virtual hosts" — this issue has long been solved by Server Name Indication (SNI), which is supported by all major browsers nowadays.
      6. "Certificates are expensive" — only for demos, etc. (i.e. not for real production sites, where a certificate is mandatory anyway), and this is no longer an issue with Let's Encrypt.
      7. "Proxy servers cannot cache pages served with HTTPS" — this is the most important point. Nowadays it's only a performance problem for inter-continental requests. Note that you can use HTTP for static content inside OFBiz.
      I must say point 4 is disputable. If, for instance, you are serving a lot of static content intercontinentally, you should measure and adopt whatever solution fits your case at the time you measured it (meaning that in a few years it could change and become negligible).
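      On point 5, the SNI extension lets the client name the virtual host it wants during the TLS handshake, so one IP address can serve many HTTPS sites. A minimal client-side sketch with the JDK's javax.net.ssl API (class name and hostname are illustrative):

```java
import javax.net.ssl.SNIHostName;
import javax.net.ssl.SSLParameters;
import java.util.Collections;

public class SniSketch {

    // Client-side SSLParameters carrying the Server Name Indication
    // extension, so one IP address can host many HTTPS virtual hosts.
    static SSLParameters sniParams(String hostname) {
        SSLParameters params = new SSLParameters();
        params.setServerNames(Collections.singletonList(new SNIHostName(hostname)));
        return params;
    }

    public static void main(String[] args) {
        // In real code these parameters would be applied with
        // sslSocket.setSSLParameters(sniParams("shop.example.org"))
        // before the handshake starts.
        SSLParameters p = sniParams("shop.example.org");
        System.out.println("SNI names: " + p.getServerNames());
    }
}
```

      Browsers do the equivalent automatically, which is why the "one IP per server name" objection no longer holds.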


      As [2] concluded in 2011:

      In the Web of the future the main concern won't just be how fast a site loads, but how well it safeguards you and protects your data once it does load.

      And if you are really interested in all the details, you should read this other article from 2011. You might also notice that there are not many new articles on this subject. I still wonder why; I guess because most has already been said and it's now up to people (site developers) to take care of it.


        1. OFBIZ-6849.patch
          3 kB
          Deepak Dixit
        2. OFBIZ-6849.patch
          28 kB
          Jacques Le Roux
        3. OFBIZ-6849.patch
          26 kB
          Jacques Le Roux




              Assignee: Jacques Le Roux (jleroux)
              Reporter: Jacques Le Roux (jleroux)
              Votes: 0
              Watchers: 5