(last edited on April 29, 2014 at 1:27 am)
I got dugg for the first time yesterday, for the Water post of all things, and this was an excellent test of my new Media Temple dedicated virtual (dv) server.
I’m running the very cheapest of (dv) plans ($50/month), which comes with a “guaranteed” memory allocation of 256MB. It can actually use more, because the (dv) is a virtual server sharing a single machine with others: if you need more memory and it’s available, your server can grab it. Freshly minted, my (dv) was configured to make as much use as possible of this pooled memory, which I suppose encourages people to upgrade to higher-capacity (and more expensive) plans. I can’t afford that, so I learned how to modify the MySQL, Apache, and SMTP configuration to run within a 256MB footprint. Then, still seeing esoteric memory allocation failures, I tracked down some significant inefficiencies in my WordPress installation and got rid of them. Just in time, too, to handle the unexpected spike in traffic.
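To give a flavor of what that tuning involved, the MySQL side boils down to shrinking a handful of buffers in /etc/my.cnf. These numbers are just a sketch, not my exact settings:

    [mysqld]
    skip-innodb             # a stock WordPress install is all MyISAM, so InnoDB's buffers are dead weight
    key_buffer = 8M         # MyISAM index cache; default configs are far more generous
    max_connections = 30    # no point accepting more connections than Apache has workers
    table_cache = 64
    sort_buffer_size = 512K # allocated per connection, so keep these small
    read_buffer_size = 512K
    query_cache_size = 8M   # even a small query cache helps WordPress's repetitive queries

The per-connection buffers matter as much as the global ones, since every open connection can claim its own copies.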
It may have been the time of day (2PM), but the peak Digg traffic lasted only a couple of hours. For those first couple of hours, though, the (dv) served 2500-2750 pageloads per hour without breaking a sweat, the server load hovering between 0.5 and 0.7 most of the time. The site remained highly responsive once I turned off the “KeepAlive” web server option. This option lets a single web browser connection fetch multiple chunks of data (like all the graphics files on a web page) in one long transaction; ordinarily it’s one chunk per transaction.

KeepAlive is sort of like being able to monopolize a shoe salesman at a big shoe warehouse, insisting that he bring you a steady stream of shoes exclusively for your convenience. This isn’t a problem until the number of pushy customers exceeds the number of salespeople; then anyone who’s late to the party will wait a looong time to get any service. With 2750 page requests an hour, each with 30 chunks of data, and only 30 processes maximum to deal with them, I had to turn off KeepAlive so everyone got served in a timely manner instead of timing out. And yes, I did have a short KeepAliveTimeout set (2 seconds). There is probably some interesting formula to calculate the optimal way to serve the most connections with the least resources, but since I didn’t know it I just watched the server and made sure it didn’t boil over.

When it failed to even get warm, I disabled WP-Cache (remembering to delete the existing cache) to see what kind of increase I’d see. By this time traffic was starting to die off slightly, pulling only 20-40 pageloads per minute, but I saw the load climb to about 1.5 to 2.5. Still not too bad, but I turned the cache back on.
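For reference, the Apache side of this lives in httpd.conf. KeepAlive Off, the 2-second timeout, and the 30-process cap are the real values mentioned above; the rest is a sketch of what a prefork setup sized for 256MB might look like:

    KeepAlive Off              # the fix: one request per connection, no monopolizing
    KeepAliveTimeout 2         # moot once KeepAlive is off, but this was the earlier tweak
    MaxClients 30              # ~30 Apache processes is about all a 256MB footprint holds
    StartServers 3
    MinSpareServers 2
    MaxSpareServers 5
    MaxRequestsPerChild 1000   # recycle children periodically so memory creep can't accumulate

With KeepAlive off, each of those 30 workers turns over quickly instead of idling on one visitor’s browser, which is exactly the shoe-salesman problem above.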
As far as Digg effects go, my experience was relatively mild compared to others. 2750 pageloads/hour is still the record for my site; previously the max I saw was 1600 pageloads/hour, which almost killed the shared host I was on. Of course, the inefficiencies in my WordPress setup (primarily the Mint Pepper DLoads) helped drag the entire server down. I’m starting to keep notes in a new area of the site; if you want a sneak peek, you can read about my experiences with WordPress and shared hosting. I’ll be writing up my (dv) experience (and configuration) later.
On a side note, I’ve been fairly happy with (mt) customer service. They can take a couple of days to get back to you via the request system (weekends are especially long), but the quality of support has not been bad. Everyone I’ve talked with, via email and phone, has been polite and respectful. Of course, if you need something done right now or you’re experiencing yet another (gs) outage, you probably have a different view of things.
That’s it for now!
4 Comments
Hi David,
Congratulations on your first digg! And yes, finally everything works just fine after your move!
Look forward to more great articles!
——-
Interesting to read.
I’m currently “testing” a (gs) server, and I have to say it’s downright the worst I’ve ever seen. Luckily I’m only running it alongside a dedicated server I’m using at a different host.
Using mon.itor.us, I’m able to see uptime percentage and, equally important, response time.
My current host is up 100% of the time, with about a 300ms response time. (mt) is up 80% of the time, with a 9500ms average response time. I told them I cannot make the switch, and will probably have to cancel the service, despite my current solution costing 8x as much as a (gs).
They emailed back (after a few days’ wait), saying:
“The issue you have reported has been identified by (mt) Media Temple as possibly being part of a wider problem affecting more than one customer. An internal incident (INC# 180) has been opened to track the issue and to provide you collective updates as progress is made toward a resolution.
Please visit this special incident URL to learn more about the status of the issue:
http://www.mediatemple.net/weblog/category/system-incidents/intermittent-latency-and-unavailability-on-gridcluster1/ “
Great, eh?
Although their (dv) plans looked OK, I need at least 1GB of RAM, and I’m not sure I’d be too comfortable with the slow response times, although they do answer the phone 24/7 and have been pretty good.
I’m curious to see what the future holds in store for your (dv), David.
Hmm, glad the server was able to stay up while being digged. Yay!
I saw the water post over at Lifehack too.
Could you share where you learned this? I have a 512MB allocation on my VPS but it’d be nice to reduce the usage. I know it’s in the docs for every product, but it’s always a pain to find there!
Oh, and what was measuring the page views – Mint?