How to Survive a Good Slashdotting or DiGGing

John P.

On Thursday, February 8, one of my recently posted articles was Dugg and almost immediately went viral.

The rush to my site was so overwhelming that it completely disabled the server for the first 3 hours. Here is what I did to bring it back online, and how I shored things up for the future.

First of all, One Man’s Blog is a dynamically generated site using WordPress and running on a dedicated server.

  • It has a 2.0 GHz Celeron processor
  • It has 512 MB of memory
  • It runs Linux
  • This is way more capacity than has ever been necessary in the past.
  • Most of the time the server is running around 2% utilization.

If you ever find yourself in a similar situation some of these details will be important, but we’ll get back to that…

When the article first appeared on the front page of Digg there were several thousand requests per hour coming to the server. In fact, I would estimate there were over 5 requests per second for several hours, slowing to just over 1 request every 4 seconds for the next 24 hours.

Since the site is dynamically generated, this means that each page has to be created on the fly from information in the database every time it is requested (in theory, at least). Now, since I had never had any capacity issues in the past, I had never bothered to really optimize the PHP code on my site, and each page required about 62 queries from the database. That’s a LOT, but I’ve got a lot of features on the site and I like it that way…

If you do a little simple math you can quickly see the problem. Take around 25,000 visitors x 62 database queries and you’ve got over 1.5 million database requests in the first 24 hours! Sorry, but a $99 / month dedicated server ain’t gonna handle all that. :-)
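That back-of-the-envelope math is easy to sanity-check in a shell (the visitor and query counts are the rough figures from above):

```shell
visitors=25000        # approximate first-day visitors from Digg
queries_per_page=62   # DB queries per page load before optimization

# Total database queries in the first 24 hours
echo $((visitors * queries_per_page))
```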

Looking back at the event, I felt extremely happy that my article was so appreciated, but also extremely frustrated because the server was failing. I had to do something, and do it fast, to ensure that everyone that wanted to view the page could.

At this time I did what I always do in times of Linux crisis. I turned to my partner, and resident uber-genius, Liam Quinn for advice.

Liam informed me that the problem had to be the database, and that I needed to relieve the traffic from that page if I hoped to recover.

Since Liam is not really familiar with WordPress he didn’t know exactly how to pull this off (and therefore neither did I ;-)) so for about half an hour I sat around dazed and confused. Finally it hit me…

  • I logged into my Web provider’s emergency recovery console and manually rebooted the server, kicking all of the current requests off.
  • As soon as the server rebooted I loaded the page that was getting hammered in my Web browser, then I did a VIEW SOURCE to get the page HTML.
  • Next I FTP’ed into the server and created the exact directory structure that WordPress was generating for the page and I stuck a temporary index.html file in there.
  • The temporary index file, to my great satisfaction, did indeed override the dynamically generated (fake) WordPress path and immediately removed the load from the database.

At this point the server was stabilized, but now I wanted to ensure that the page people were visiting was still formatted like the rest of the site, so I took the source code from the original and edited it to remove all of the “dynamic content” links thus converting it into a static page. I then uploaded it to replace the temporary index.html file.
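The static-override trick above can be sketched as a few shell commands. The web root and permalink path below are hypothetical, and in a real emergency the HTML would come from a View Source capture or a `curl` of the live page rather than the placeholder used here:

```shell
WEBROOT=./webroot                      # hypothetical Apache document root
PERMALINK=2007/02/08/my-dugg-article   # hypothetical WordPress permalink path

# In practice: curl -s "http://example.com/$PERMALINK/" > index.html
echo '<html><body>static copy of the Dugg page</body></html>' > index.html

# Recreate the permalink as a real directory; Apache serves an actual
# index.html before WordPress's rewrite rules ever touch the database
mkdir -p "$WEBROOT/$PERMALINK"
mv index.html "$WEBROOT/$PERMALINK/index.html"

ls "$WEBROOT/$PERMALINK"
```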

That solved the immediate problem, and allowed the site to continue to operate over the coming week, but there was the bigger problem of how to prevent this from happening again in the future.

Again I asked Liam for advice, because my immediate reaction was to throw more hardware at the problem. Liam the Wise suggested that more hardware might only delay the failure, and told me I needed to turn my attention to the software.

I knew there was plenty of room for improvement within the WordPress code, but I didn’t want to turn off features I like on the site. Here are a few of the changes I made which had a huge impact in reducing the load on the DB server and therefore increased page load speed. I believe the same principles can be applied to any Blog or CMS software:

  • I cranked up the expire time in WP-Cache to 30 minutes. It was previously set at 3 minutes. Allowing the system to deliver pages from cache for 10 times as long helped quite a bit. There is no reason you couldn’t set it for even 12-24 hours if your site is infrequently updated.
  • I opened and manually edited my WordPress theme’s header.php file to replace instances of <?php bloginfo('name'); ?> with “One Man’s Blog”. Since the title is always the same this saves one call to the Database for every page load.
  • My theme had LINK elements in the HEAD which included PHP calls such as href="<?php bloginfo('rss2_url'); ?>", which I quickly replaced with the static URLs themselves. In my case this saved another 4 calls to the database for things that really should be static anyway.
  • My theme also had a META element which looked up the WordPress version using <?php bloginfo('version'); ?>. I removed that altogether because not only does it save another call to the DB, but it’s a security risk to tell people what version of code you’re using. Hackers might know of an exploit for that version…
  • If you have the following archives link in your theme, I’d remove it. <?php wp_get_archives('type=monthly&format=link'); ?> It just doesn’t serve any useful purpose and it costs you another DB call.
  • Once I got down into the BODY of the page, I again replaced the <?php bloginfo('name'); ?> with “One Man’s Blog”.
  • I also replaced the <?php bloginfo('description'); ?> with “Specialization is for Insects.”
  • Moving on to the Footer.php file, if you have anything like the following, remove it: Page Load Information: <?php echo get_num_queries(); ?> queries. <?php timer_stop(1); ?> seconds. This is great for troubleshooting purposes, but it costs you two calls to the DB for every single page load and it isn’t even close to worth it.
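Edits like these can also be scripted. Below is a sketch assuming GNU sed and a stand-in theme directory (in practice it would be something like wp-content/themes/yourtheme); always back the file up first, since the substitution is destructive:

```shell
# Stand-in theme directory for illustration
mkdir -p theme
cat > theme/header.php <<'EOF'
<title><?php bloginfo('name'); ?></title>
EOF

cp theme/header.php theme/header.php.bak   # always keep a backup

# Replace the dynamic call with its known static value
sed -i "s/<?php bloginfo('name'); ?>/One Man's Blog/" theme/header.php
cat theme/header.php
```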

Hopefully it’s obvious where we are going with this. If you can remove any PHP and replace it with static HTML you are going to enhance performance. Go through each of your theme’s PHP files and do this. Some themes are going to be naturally leaner than others.
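A quick way to audit a theme for leftover database-backed template tags is to grep for them. The theme path and the tag list here are illustrative, not exhaustive:

```shell
# Stand-in theme directory for illustration
mkdir -p theme2
printf '<?php bloginfo("name"); ?>\n' > theme2/header.php
printf 'all static here\n' > theme2/footer.php

# List files that still call common DB-backed template tags
grep -rlE "bloginfo|wp_get_archives|get_num_queries" theme2
```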

As a final note, when it comes to optimizing your Blog for performance, there is no getting around the fact that you’ll have to do some manual editing of the PHP files. WordPress and other packages have been designed for ease and speed of installation – which is great – but if you’re expecting a lot of traffic it’s imperative to make these changes.

A few other folks have shared their experiences in this matter as well. I just wish I’d known all this a week ago.

Your alternative is to helplessly watch your server crash until people decide to never come back. :-(

If you need assistance with editing your theme files please open a topic on the HTMLHelp Forums. I’m just not equipped to answer those kinds of questions in the comments area of this page.


  1. says

    Thanks for all the useful tips! I couldn’t find the post that got you on the front page of Digg. Anyways, I was disappointed when I found out that my blog received a grade of “E” from the YSlow Firefox plugin, which measures site performance. It’s a month-old blog, so it wasn’t so surprising. Nevertheless, I couldn’t settle for an “E.” Hopefully, your tips will help me bump that grade up to an “A!”

  2. says

    Replacing any of the bloginfo() calls is pointless; these are all loaded with 1 query when WordPress loads. So all you’re cutting down is PHP parsing time, which won’t really affect much.

  3. says

    I’m not saying that your suggestions aren’t faster (although the gains for your suggestions are extremely small), and of course a direct static page is going to be faster than WP-Cache.

    What I’m saying is that the combination of the two doesn’t make sense. Consider that with WP-Cache, your page is generated once. That takes, what, 3-4 seconds? Maybe up to 10-15 or so if you have an exceptionally overloaded server. But after that first generation of the page, WP-Cache serves the static version. The database goes untouched after that. The PHP code you’re suggesting to replace never runs in the first place, because you’re serving a static cached version of the page, letting it rebuild once every hour (by default) or more.

    Now, yes, a pure manually made static page will be a lot faster than WP-Cache. For one thing, the PHP process will never get invoked… Although if you’re using mod_php instead of php as a cgi application, that invocation time is pretty minimal to begin with.

    There’s better places to make performance enhancements, is all I’m saying. Use one of the several PHP caching systems, that will improve performance right away as it won’t be reinterpreting the PHP code all the time. Use WP-Cache. Increase the query cache on the mySQL server. Any of those has a much more profound impact.

    And if doing the sort of thing that you describe is actually noticeably improving performance for you, then I’d suggest enabling the built-in WordPress object cache. Just set it to true in the wp-config.php file. This will cache the results of SQL queries locally and pull from there when it can. That eliminates the query time associated with the calls you’re describing, which are essentially pulling from the wp_options table.

    Worst case scenario, you redirect to a mirror. I’ve done that on my site before. Simple .htaccess rules, if a referrer comes from digg, it redirects to the auto-cached version of the page. Easy.

    Just saying that this seems like a lot of work for almost zero benefit. Making PHP calls is not processor intensive in and of itself, it’s what the PHP call is doing that makes the difference. Once you’re in a PHP execution mode, then “static” output is identical to PHP “echo” type commands, from a processing standpoint.

  4. says


    I should have stated up front that I’m working under the assumption that WP Cache is already installed and functioning. That alone however will NOT keep the server running under a front page digging. I have proven this on my own server no fewer than 4 times.

    Barry and Matt also gave a lecture about improving WP performance in which Barry compared WP-Cache to static pages, and the servers still could not handle anywhere near the load that a pure static HTML document can.

    Bottom line = a manually cached page is still about twice as fast as any other method. I could also give many more examples of where and why this is true, but it is the single most effective way of surviving a Digging.

    Also, I have witnessed first hand how removing minor PHP calls can make a significant performance improvement in terms of survivability. We recently made these changes to a friend’s site in between front page appearances, and the second and subsequent events were far more workable, with noticeable performance improvements.

    One has to consider the secondary effect of a Digging, which is that people move from the page being viewed to other pages on your site. Those “minor” PHP calls, multiplied by the thousands, eventually represent significant overhead – especially when the site is already under what is effectively a DoS attack on the Dugg page.


  5. says

    Your static page trick will work in a pinch, but the rest of this article is poorly founded. Removing minor PHP calls and other simple optimizations won’t save your site from a slashdotting. What you need to do is to make the whole site static, but still flexible and dynamic.

    Solution: WP-Cache. It works. Really.

  6. says

    Great tips for anyone and I think they’re especially valuable for those of us on shared hosted environments where resources are far more constrained. Plus there’s the benefit of minimizing periods of poor performance which might influence whether a potential new reader stays or goes. Time to do some tweaking…

  7. says

    Thanks, some good tips there. I don’t think any of my sites would hit those server loads, but I might test some of these tips anyway just to be prepared.

