(last updated on April 29, 2014)
I’m running the very cheapest of (dv) plans ($50/month), which comes with a “guaranteed” memory allocation of 256MB. It can actually use more, because the (dv) is a virtual server sharing a single machine with others: if you need more memory, and it’s available, your server can grab it. Freshly minted, my (dv) was configured to make as much use as possible of this pooled memory, which I suppose encourages people to upgrade to higher-capacity (and more expensive) plans. I can’t afford that, so I learned how to modify the MySQL, Apache, and SMTP configurations to run within a 256MB footprint. Then, still seeing esoteric memory allocation failures, I tracked down some significant inefficiencies in my WordPress installation and got rid of them. Just in time, too, to handle the unexpected traffic spike.
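I’ll write up my exact settings later, but a low-memory MySQL configuration looks something like this (the variable names are real MySQL options; the specific values are illustrative guesses for a 256MB box, not the ones I actually used):

```ini
# my.cnf -- shrink MySQL's buffers to fit a 256MB server (illustrative values)
[mysqld]
key_buffer_size  = 16M   # MyISAM index cache; the big memory lever
max_connections  = 30    # each connection carries per-thread buffers
table_cache      = 32
sort_buffer_size = 512K  # allocated per sort, so keep it small
```

The Apache side is the same idea: cap the number of server processes so that worst-case memory use stays under the ceiling.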
It may have been the time of day (2PM), but the peak Digg traffic lasted only a couple of hours. Those first couple of hours, though, the (dv) served 2500-2750 pageloads per hour without breaking a sweat, the server load hovering between 0.5 and 0.7 most of the time. The site remained highly responsive once I turned off the “KeepAlive” web server option. This option allows a single browser connection to fetch multiple chunks of data (like all the graphic files on a web page) in one long transaction; ordinarily it’s one chunk per connection. KeepAlive is sort of like being able to monopolize a shoe salesman at a big shoe warehouse, insisting that he bring you a steady stream of shoes exclusively for your convenience. This isn’t a problem until the number of pushy customers exceeds the number of salespeople. Then, anyone who’s late to the party will wait a looong time to get any service. With 2750 page requests per hour, each pulling 30 chunks of data, and only 30 processes maximum to deal with them, I had to turn off KeepAlive so everyone got served in a timely manner instead of timing out. And yes, I did have a short KeepAliveTimeout set (2 seconds).

There is probably some interesting formula to calculate the optimal way to serve the most connections with the fewest resources, but since I didn’t know it I just watched the server and made sure it didn’t boil over. When it failed to even get warm, I disabled WP-Cache (remembering to delete the existing cache) to see what kind of load increase I’d see. By this time traffic was starting to die off slightly, pulling only 20-40 pageloads per minute, but I saw the load climb to about 1.5 to 2.5. Still not too bad, but I turned the cache back on.
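For the record, the Apache directives involved look like this (the directive names are real; treat the layout as a sketch of the setup I describe, not a copy of my httpd.conf):

```apache
# httpd.conf -- the relevant knobs
KeepAlive Off          # one request per connection; no monopolizing the salesmen
KeepAliveTimeout 2     # only matters while KeepAlive is On

<IfModule prefork.c>
    MaxClients 30      # the "30 processes maximum" ceiling mentioned above
</IfModule>
```

With KeepAlive off, each of those 30 processes is freed the moment it finishes one chunk, so the pool churns through the queue instead of sitting idle waiting on slow browsers.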
As far as Digg effects go, my experience was relatively mild compared to others. 2750 pageloads/hour is still the record for my site; previously the max I saw was 1600 pageloads/hour, which almost killed the shared host I was on. Of course, the inefficiencies in my WordPress setup (the Mint pepper DLoads, primarily) helped drag the entire server down. I’m starting to keep notes in a new area of the site; if you want a sneak peek, you can read about my experiences with WordPress and shared hosting. I’ll be writing up my (dv) experience (and configuration) later.
On a side note, I’ve been fairly happy with (mt) customer service. They can take a couple of days to get back to you via the request system (weekends are especially long), but the quality of support has not been bad. Everyone I’ve talked with, via email and phone, has been polite and respectful. Of course, if you need something done right now, or you’re experiencing yet another (gs) outage, you probably have a different view of things.
That’s it for now!