{"id":48,"date":"2010-01-28T23:51:53","date_gmt":"2010-01-29T04:51:53","guid":{"rendered":"http:\/\/blogs.law.harvard.edu\/djcp\/?p=48"},"modified":"2010-01-29T00:20:57","modified_gmt":"2010-01-29T05:20:57","slug":"nginx-as-a-front-end-proxy-cache-for-wordpress","status":"publish","type":"post","link":"https:\/\/archive.blogs.harvard.edu\/djcp\/2010\/01\/nginx-as-a-front-end-proxy-cache-for-wordpress\/","title":{"rendered":"Nginx as a front-end proxy cache for WordPress"},"content":{"rendered":"<p><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/blogs.law.harvard.edu\/djcp\/files\/2010\/01\/nginx-wp-love.jpg\" alt=\"nginx-wp-love\" title=\"nginx-wp-love\" width=\"325\" height=\"39\" class=\"alignright size-full wp-image-80\" srcset=\"https:\/\/archive.blogs.harvard.edu\/djcp\/files\/2010\/01\/nginx-wp-love.jpg 325w, https:\/\/archive.blogs.harvard.edu\/djcp\/files\/2010\/01\/nginx-wp-love-300x36.jpg 300w\" sizes=\"auto, (max-width: 325px) 100vw, 325px\" \/><\/p>\n<p><strong>The short version:<\/strong><\/p>\n<p>We put an nginx caching proxy server in front of our wordpress mu install and sped it up dramatically &#8211; in some cases a thousandfold.  I&#8217;ve packaged up a plugin, along with installation instructions, here &#8211; <a href=\"http:\/\/wordpress.org\/extend\/plugins\/nginx-proxy-cache-integrator\/\">WordPress Nginx proxy cache integrator<\/a>.<\/p>\n<p><strong>The long version:<\/strong><\/p>\n<p>Here at <a href=\"http:\/\/blogs.law.harvard.edu\/\">blogs.law.harvard.edu<\/a>, our wordpress mu was having problems. We get a fair amount of traffic (650k+ visits\/month) &#8211; combine that with &#8216;bots (good and bad) &#8211; and we were having serious problems. 
RSS feeds (we serve many from some <a href=\"http:\/\/blogs.law.harvard.edu\/doc\">pretty<\/a> <a href=\"http:\/\/blogs.law.harvard.edu\/philg\">prominent<\/a> <a href=\"http:\/\/blogs.law.harvard.edu\/mesh\">blogs<\/a>) are expensive to create, files are gatewayed through PHP (on wpmu), and letting PHP dynamically create each page meant we were VERY close to maxing out our capacity &#8211; which we frequently did, bringing our blogs to a crawl.<\/p>\n<p>WordPress &#8211; as lovely as it is &#8211; needs some kind of caching system in place once you start to see even moderate levels of traffic. There are <a href=\"http:\/\/wordpress.org\/extend\/plugins\/wp-super-cache\/\">many<\/a>, <a href=\"http:\/\/wordpress.org\/extend\/plugins\/w3-total-cache\/\">many<\/a> high-quality and well-maintained options for caching &#8211; however, none of them really made me happy or fit my definition of the &#8220;holy grail&#8221; of how a web app cache should work.  <\/p>\n<p>In my mind, caching should:<\/p>\n<ul>\n<li>be high performance (digg and slashdot proof),<\/li>\n<li>be lightweight,<\/li>\n<li>be structured to avoid invoking the heavy application frameworks it sits in front of. If you hit your app server (in this case, wordpress) &#8211; you&#8217;ve failed.<\/li>\n<li>be as unobtrusive as possible: caching should be a completely separate layer that lives above your web apps,<\/li>\n<li>have centralized and easily tweaked rules, and<\/li>\n<li>be flexible enough to work for any type (or amount) of traffic.<\/li>\n<\/ul>\n<p>So I decided to put a proxy in front of wordpress to statically cache as much as possible. ALL non-authenticated traffic is served directly from the nginx file cache, taking some requests (such as RSS feed generation) from 6 pages\/second to 7000+ pages\/second. Oof.  Nginx also handles logging and gzipping, leaving the heavier backend apaches to do what they do best: serve dynamic wordpress pages only when needed. 
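<\/p>\n<p>To make that concrete, here&#8217;s a minimal sketch of what such a front end might look like. This is illustrative only, not our production config &#8211; the hostname, port, cache path, zone name, and timings below are all placeholders, and the cookie rule is a deliberately coarse approximation; see the plugin&#8217;s installation instructions for the real details:<\/p>

```nginx
# Hypothetical sketch of an nginx proxy cache in front of a heavier backend.
# Every name, path, and number here is a placeholder, not our actual setup.
http {
    # On-disk cache; "wpcache" is the shared-memory zone for cache-key metadata.
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=wpcache:50m
                     max_size=1g inactive=2h;

    upstream backend {
        server 127.0.0.1:8080;   # the heavy apache/PHP backend
    }

    server {
        listen 80;
        gzip on;                 # nginx, not apache, handles gzipping

        location / {
            proxy_pass http://backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;

            proxy_cache wpcache;
            proxy_cache_key "$scheme$host$request_uri";
            proxy_cache_valid 200 302 10m;

            # Coarse rule: any cookie (e.g. a wordpress login or comment
            # cookie) skips the cache and goes straight to the backend.
            proxy_cache_bypass $http_cookie;
            proxy_no_cache $http_cookie;
        }
    }
}
```

<p>The structure is the point: all the cache rules live in this one front-end layer, and the backend only sees a request on a cache miss or a cookie bypass.<\/p>\n<p>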
<\/p>\n<p>A frontend proxy also handles &#8220;lingering closes&#8221; &#8211; clients that fail to close a connection, or that take a long time to do so (say, for instance, because they&#8217;re on a slow connection).  Taken to an extreme, lingering closes act as a <a href=\"http:\/\/hackaday.com\/2009\/06\/17\/slowloris-http-denial-of-service\/\">&#8220;slow loris&#8221;<\/a> attack, and without a frontend proxy your heavy apaches are left tied up. With a lightweight frontend proxy, you can handle more connections with less memory. Throw a cache in the mix and you can bypass the backend entirely, giving you absolutely SILLY scalability.<\/p>\n<p>On nginx &#8211; it&#8217;s so efficient it&#8217;s scary. I&#8217;ve never seen it use more than 10 to 15 meg of RAM and a blip of CPU, even under our heaviest load.  Our <a href=\"http:\/\/ganglia.sourceforge.net\">ganglia<\/a> graphs don&#8217;t lie: we halved our memory requirements, doubled our outgoing network throughput and completely leveled out our load. We have had basically no problems since we set this up.<\/p>\n<p>To make a long story short (too late), I packaged this up as a plugin along with detailed installation and configuration info. Check it out! Feedback appreciated: <a href=\"http:\/\/wordpress.org\/extend\/plugins\/nginx-proxy-cache-integrator\/\">WordPress Nginx proxy cache integrator<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The short version: We put an nginx caching proxy server in front of our wordpress mu install and sped it up dramatically &#8211; in some cases a thousandfold. 
I&#8217;ve packaged up a plugin, along with installation instructions here &#8211; WordPress &hellip; <a href=\"https:\/\/archive.blogs.harvard.edu\/djcp\/2010\/01\/nginx-as-a-front-end-proxy-cache-for-wordpress\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1984,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[14626,14627,14628,3919],"class_list":["post-48","post","type-post","status-publish","format-standard","hentry","category-uncategorized","tag-nginx","tag-performance","tag-proxy-cache","tag-wordpress"],"jetpack_featured_media_url":"","_links":{"self":[{"href":"https:\/\/archive.blogs.harvard.edu\/djcp\/wp-json\/wp\/v2\/posts\/48","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/archive.blogs.harvard.edu\/djcp\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/archive.blogs.harvard.edu\/djcp\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/archive.blogs.harvard.edu\/djcp\/wp-json\/wp\/v2\/users\/1984"}],"replies":[{"embeddable":true,"href":"https:\/\/archive.blogs.harvard.edu\/djcp\/wp-json\/wp\/v2\/comments?post=48"}],"version-history":[{"count":14,"href":"https:\/\/archive.blogs.harvard.edu\/djcp\/wp-json\/wp\/v2\/posts\/48\/revisions"}],"predecessor-version":[{"id":81,"href":"https:\/\/archive.blogs.harvard.edu\/djcp\/wp-json\/wp\/v2\/posts\/48\/revisions\/81"}],"wp:attachment":[{"href":"https:\/\/archive.blogs.harvard.edu\/djcp\/wp-json\/wp\/v2\/media?parent=48"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/archive.blogs.harvard.edu\/djcp\/wp-json\/wp\/v2\/categories?post=48"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/archive.blogs.harvard.edu\/djcp\/wp-json\/wp\/v2\/tags?post=48"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}