I was over at WebHostingTalk and bumped into this topic here, in which a forum poster asks how much CPU load is too much CPU load for a dedicated server. The answer given most often throughout the thread is that too much is when it affects performance (though several posters also say that a load of more than 1.00 per core is too much, a rule of thumb that seems fairly reasonable to me). They are right in a certain sense, but there is something else to consider in all this talk about performance: preventing its loss!
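For concreteness, that rule of thumb is just arithmetic: divide the load average by the number of cores and see if you're over 1.00. Here's a minimal Python sketch of the idea on Linux, assuming the 1-minute figure from /proc/loadavg and os.cpu_count() as the core count (the 1.00 threshold itself is only the thread's heuristic, not a hard limit):

```python
import os

def load_per_core():
    """Return the 1-minute load average divided by the number of CPU cores."""
    with open("/proc/loadavg") as f:
        one_min = float(f.read().split()[0])  # first field is the 1-minute average
    return one_min / os.cpu_count()

if __name__ == "__main__":
    ratio = load_per_core()
    print(f"Load per core: {ratio:.2f}")
    if ratio > 1.00:  # the forum's rule of thumb: over 1.00 per core is too much
        print("Above the 1.00/core rule of thumb")
```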

I’m not at all opposed to measuring a server’s performance in practical terms. In fact, that should be all you need to base your assessment on; if a server is serving requests with no sign of sluggishness, then it is handling its workload just fine. There are purists who insist you should be watching top all the time and so on and so forth, but it boils down to whether or not the server is doing what it is supposed to do, and doing it well. That doesn’t mean, however, that you should ignore resource usage reports completely!

Resource usage is incredibly handy for figuring out when you should start thinking about upgrading, or doing something else to ease the strain on a server, before its performance starts to degrade. Look for the signs that you may want to upgrade your hardware: the above rule of thumb of 1.00 per core, for example. Is the server sitting at a 4.00 load with only one or two cores to handle it? Is it swapping to disk often? Are these isolated incidents where traffic peaked anomalously, or is there a consistent trend of rising traffic, and thus rising strain on your server? If it’s a trend and not an anomaly, start looking to ease the strain! It’s always better to anticipate problems; it leaves you better equipped to deal with the ones that will assuredly show up unexpectedly!
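To make that checklist concrete, here's a rough Linux-only sketch (my own illustration, not anything from the thread): it compares all three load averages from /proc/loadavg against the core count and peeks at swap usage via /proc/meminfo. A high 1-minute figure with a calm 15-minute figure suggests a momentary spike; all three elevated suggests sustained strain. Real trend-spotting means logging these numbers over days or weeks, but the idea is the same.

```python
import os

def read_load_averages():
    """Return the 1-, 5-, and 15-minute load averages from /proc/loadavg."""
    with open("/proc/loadavg") as f:
        one, five, fifteen = (float(x) for x in f.read().split()[:3])
    return one, five, fifteen

def swap_used_kb():
    """Return kilobytes of swap currently in use, from /proc/meminfo."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.strip().split()[0])  # values are reported in kB
    return info["SwapTotal"] - info["SwapFree"]

if __name__ == "__main__":
    cores = os.cpu_count()
    one, five, fifteen = read_load_averages()
    print(f"Load per core: 1m={one/cores:.2f} 5m={five/cores:.2f} 15m={fifteen/cores:.2f}")
    print(f"Swap in use: {swap_used_kb()} kB")

    # Sustained strain: even the 15-minute average breaks the rule of thumb.
    if fifteen / cores > 1.00:
        print("Sustained load above 1.00/core: time to think about easing the strain.")
    # Momentary spike: only the short-term average is elevated.
    elif one / cores > 1.00:
        print("Short-term spike above 1.00/core: keep an eye on it.")
```

Run something like this from cron and keep the output, and you'll have exactly the kind of history that separates an anomalous traffic peak from a genuine upward trend.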