We recently announced the availability of the Nginx connector for ColdFusion 2016. This post covers the initial performance numbers we have seen with ColdFusion and Nginx. These numbers are only indicative; we continue to work on improving the performance of the Nginx connector.

The specifics of installing and configuring the connector are listed in the following document:
http://cfdownload.adobe.com/pub/adobe/coldfusion/nginx/prerelease/v7/Configuring_Nginx_with_ColdFusion.pdf

Nginx Optimizations:
Before collecting performance numbers, a handful of optimizations were made to the Nginx configuration, after consulting the Nginx Tuning Guide.

  • Updated the worker process count to use one worker per CPU.
    • worker_processes  auto;
  • Updated the worker connection count to an appropriate value.
    • worker_connections 1024;
  • Updated the keep-alive request count to 150.
    • keepalive_requests  150;
  • Updated the connection queue size in /etc/sysctl.conf:
    • net.core.somaxconn = 65536
    • net.ipv4.tcp_max_tw_buckets = 1440000
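Taken together, a minimal sketch of where these directives sit in nginx.conf (the context blocks are shown for orientation; values are the ones listed above):

```nginx
# Tuning directives in their nginx.conf contexts (sketch)
worker_processes  auto;          # main context: one worker per CPU

events {
    worker_connections  1024;    # per-worker connection limit
}

http {
    keepalive_requests  150;     # requests served per keep-alive connection
    # ... server blocks ...
}
```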


Baselines and Performance Numbers:

To comparatively measure the performance of the Nginx connector, we collected baselines for the following configurations:

  • ColdFusion – Vanilla ColdFusion 2016 with requests served by the bundled Tomcat server.
  • Nginx Proxy – The traditional method to configure ColdFusion with Nginx. The nginx.conf file used for this purpose can be downloaded here.  
  • Apache Connector – Among the supported webservers, the Apache connector is the most suitable comparison, as it supports the same set of platforms.
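For reference, a minimal sketch of the reverse-proxy approach used for the Nginx Proxy baseline. The port 8500 (ColdFusion's default internal web server port) is an assumption here; the actual nginx.conf used for the baseline is the one linked above.

```nginx
# Minimal reverse-proxy sketch for the "Nginx Proxy" baseline (assumed port)
server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:8500;   # ColdFusion built-in server (assumed)
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```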

Baselines were captured for two CFM pages: a simple Hello World, and a more complex page that uses getRealPath and CGI variables, references static content, and invokes a CFC via REST.

All requests were executed with 100 concurrent threads for a duration of 180 seconds, and the numbers were averaged over 3 executions. The results are listed below.

Simple CFM - Hello World

                                           Throughput (req/sec)    ART (ms)
  ColdFusion                               10088.80                8.67
  Nginx Proxy                              10301.27                9
  Apache Connector                         9933.70                 9.33
  Nginx Connector                          9427.77                 9.66
  Nginx Connector with CFM handler only    10097.13                9

Complex CFM - Includes static content, getRealPath, CGI, POST Request

                       Throughput (req/sec)    ART (ms)
  ColdFusion           36.90                   2658.67
  Nginx Proxy          37.13                   2650.67
  Apache Connector     37.53                   2620.33
  Nginx Connector      37.37                   2635.67


Going by the initial numbers, the Apache connector stands ahead of the Nginx connector. Nginx Proxy shows better performance than the Nginx connector when smaller payloads are being processed and ColdFusion does all the processing. This is because ColdFusion registers a large number of handlers, as seen in the connector configuration file, ‘ajp_location.conf’. With only the CFM handler registered, the numbers come in very close to those of Nginx Proxy.
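As a sketch, restricting the connector to the CFM handler alone means trimming uriworkermap.properties down to the single mapping below (the exact set of default mappings shipped with the connector is not reproduced here; the ‘/*.cfm = cfusion’ mapping is the one discussed in the comments):

```properties
# uriworkermap.properties trimmed to the CFM handler only (sketch)
/*.cfm = cfusion
```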

The Nginx connector provides benefits that Nginx Proxy does not: support for CGI scope variables and for Search-Engine-Safe (SES) URLs. Nginx Proxy also brings additional restrictions, such as requiring a unified webroot for ColdFusion and Nginx, which make it an impractical solution for many deployments.

While we continue to work on the connector's performance, please share your feedback on using the Nginx connector.

4 Comments to “ColdFusion Nginx Connector – Initial Performance Numbers”

  1. Mike Collins
    Times look pretty close for that amount of throughput. Someone running at that rate would likely be more concerned about stability than a few fractions of a millisecond.
  2. Bradley Wood
    Thanks for the writeup. I have some questions.

    Firstly, what operating systems were these tests run on? My understanding of Nginx is that its real power comes into play only on Unix-based operating systems, which have a different process model. I don't believe the full power of Nginx can be harnessed on Windows.

    > All requests were executed with 100 concurrent threads

    Nginx has been widely advertised as handling large numbers of concurrent connections (the "C10K problem"). Did you do any testing that specifically targeted large numbers of connections?

    > Nginx Connector with CFM handler only

    Can you explain how this differs from the line labeled "Nginx Connector"? I didn't see a description in the blog post.

    > The other benefits that the Nginx connector provides, that Nginx Proxy does not, are CGI scope variables support and Search-Engine-Safe URL support.

    Can you elaborate on this? Are you saying this is not possible? I use Nginx in front of Lucee Server with SES URLs and it works great using the proxy_pass directive.

    > Additional restrictions such as having a unified webroot for ColdFusion and Nginx also come into play with Nginx Proxy, making it an impractical solution for deployments.

    Can you elaborate on this as well? Again, I've been doing this for years with Lucee Server and the mod_cfml project, where Nginx is able to pass the web root to the CF engine via an HTTP header. Is this not also possible with Adobe ColdFusion?

    And finally, it would be great if you could release your testing environment as a docker or Vagrant file so other people could play around with the exact same setup and offer improvements.
  3. Immanuel Noel N
    Thanks for taking time to share your feedback.

    Here are our responses,

    -> Machine used: RHEL 7.2 64bit - 32 GB Memory - Physical Machine

    -> All requests were executed with 100 concurrent threads
       We did not specifically test for the C10K problem. We found ColdFusion to work well with Nginx / Apache at this load, and chose it as our benchmarking criterion.

    -> Nginx Connector with CFM handler only
       This is explained right after the stats are displayed. To elaborate: the Nginx connector looks for a number of URI mappings, as defined in uriworkermap.properties, while the proxy we configured only looked for *.cfm requests.
       "Nginx Connector with CFM handler only" lists the numbers captured when mappings other than "/*.cfm = cfusion" were removed from uriworkermap.properties. These numbers were found to be nearly identical to those captured with the proxy configuration. The connector in this case is still preferable to the proxy, because it retains all the benefits it provides over the proxy while compensating for the overhead by communicating over the AJP protocol.

    -> SES Support
       Could you please elaborate on how this is configured? Rules defined in the conf file would help.

    -> Separate Nginx webroot
       ColdFusion does not look for webroot paths in headers; it always queries the webserver for its own webroot and attempts to find the required files there. The proxy configuration would require the Nginx and ColdFusion webroots to be the same.
       
    We would not be able to share the testing environment, but would be happy to answer any other queries you may have.
  4. Joseph Gooch
    -> CGI Variable Support

    Could you elaborate on exactly which CGI variables you're referring to? I haven't noticed any CGI variables lacking when using Nginx as an HTTP proxy in front of Tomcat.


    BDW -> SES Support
    INN-> Could you please elaborate on how this is configured? Rules defined in the conf file would help.

    My suspicion is that this assertion is more a product of your modified Tomcat than it is your Nginx config, or your Nginx connector config. Does Adobe have any plans to release the changes done to packaged Tomcat provided with CF2016 for community review?

    SES URLs can be handled in various ways:
    1) Use something other than Tomcat (e.g. Undertow, like CommandBox uses).
    2) Use a J2EE Filter. It's a fairly trivial bit of Java code to implement.
    For instance:
    https://github.com/OpenBD/openbd-core/blob/master/src/com/newatlanta/filters/SearchEngineFriendlyURLFilter.java
    With a web.xml config:
    <!-- Implement SES URLS Start -->
    <filter>
      <filter-name>SearchEngineFriendlyURLFilter</filter-name>
      <display-name>SearchEngineFriendlyURLFilter</display-name>
      <description>SearchEngineFriendlyURLFilter</description>
      <filter-class>com.newatlanta.filters.SearchEngineFriendlyURLFilter</filter-class>
      <init-param>
        <param-name>extensions</param-name>
        <param-value>cfm,cfml</param-value>
      </init-param>
    </filter>
    <!-- Implement SES URLS End -->
    <!-- Implement SES URLS Start -->
    <filter-mapping>
      <filter-name>SearchEngineFriendlyURLFilter</filter-name>
      <url-pattern>/*</url-pattern>
    </filter-mapping>
    <!-- Implement SES URLS End -->
    3) Implement a rewrite rule that redirects to a proper .cfm page/URL (i.e. the Drupal index.cfm?q= convention). If you're using URLs without a .cfm at all (i.e. /ColdBoxApp/some/route), you likely have a rewrite in place *anyway* to map to /ColdBoxApp/index.cfm/some/route; it's not much further of a leap to chop off anything after a .cfm URL and throw it in the query string.
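    A hedged sketch of that third option in Nginx (the path and app name are illustrative; assumes Nginx's PCRE support for the negative lookahead):

    ```nginx
    # Rewrite extensionless routes onto index.cfm (illustrative sketch)
    location /ColdBoxApp/ {
        # /ColdBoxApp/some/route -> /ColdBoxApp/index.cfm/some/route
        rewrite ^/ColdBoxApp/(?!index\.cfm)(.*)$ /ColdBoxApp/index.cfm/$1 last;
    }
    ```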



    I use features in Nginx (i.e. proxy_cookie_path) to overcome Tomcat behavior... For instance, rewrite CF cookies so they're constrained to the application directory that generated them. (So my JSESSIONIDs don't overlap ever)

    I also use Nginx rewrites to overcome CF Architecture problems introduced with a load balancer, for instance, the necessity to serve files via /CFFileServlet/, regardless of application source. If /App1/ is calling cfimage (for instance), and the LB is doing session affinity based on a /App1/ scoped cookie, accessing the url /CFFileServlet/_something will never end up on the same server.

    As such, this allows me to use /App1/CFFileServlet/ and all is well with the world:
    location ~* /.+(/CFFileServlet/_) {
        rewrite ^/.+/(CFFileServlet/_.*)$ /$1 break;
        try_files @cfusion @cfusion;
    }



    The benchmarking is very interesting... I'd come to the conclusion on my own that the benefits of AJP (Which are mainly that it's a binary transport, vs an ASCII one) are more useful when your web front end and coldfusion backend are on different machines, with some network (other than localhost) in between. In that scenario I'd fully expect the Apache Connector and Nginx Connector tests to go faster, depending on how fast the network in between is.

    Given the numbers above I'm not sure the Nginx connector benefits in performance provide a significant enough improvement given our current proxy_http setup is rock solid. Unless there are CGI variable features I'm unaware of.
