by Steven J. Owens (unless otherwise attributed)
Software problems tend to boil down to one of four areas: architecture, features, stability, and performance.
Surprisingly, architecture can often be the key to solving the other three areas, and can massively influence the results of work on them. This is one of those things that seems to baffle managers, and it's not exactly well-understood by the rest of us either. Architecture is about identifying the class of problems you're trying to solve and the defining principles of solutions to that class of problems. Trying to solve a problem with the wrong architecture is pretty much the bane of the software development world. Likewise, trying to add features to an architecture that isn't suited to them is swimming upstream. All of this tends to lead to poor performance and poor stability.
Features can mean either features on the to-do list or features on the to-redo list. Redoing a feature can stem from a new understanding of the need, a new understanding of the feature's design, or simply bad information the first time around. Unfortunately, in the commercial software world, there's a lot of pressure to ignore misimplemented features -- unless they're so broken that paying customers are screaming at you and threatening to cancel their contracts -- and to instead just add more features. This probably accounts for a lot of why programmers contribute their time to open-source projects :-).
Stability is how we keep the server from falling over :-). Good old-fashioned bugs. Well-written C programs can be very fast and powerful, while remaining stable, but poorly written C programs can corrupt memory and crash easily. Guess which is more common?
This is not to run down C, of course. Any language that lets you directly manipulate memory can corrupt memory. And languages that insulate you from memory management can still produce programs that crash if you don't take care to manage other resources. Memory isn't the only resource to worry about; there are also file handles, network sockets, and so on.
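To make the resource point concrete, here's a minimal Python sketch (the function names are mine, purely for illustration): memory is garbage-collected either way, but the first version leaks file handles, and a long-running program written that way will eventually hit the operating system's file-descriptor limit and fall over.

```python
# Even in a garbage-collected language, unmanaged resources leak.
# Memory gets reclaimed for you; file handles only get closed if
# you arrange for it.

def leaky_read(paths):
    contents = []
    for path in paths:
        f = open(path)             # handle is never explicitly closed;
        contents.append(f.read())  # we just hope the GC gets to it soon
    return contents

def careful_read(paths):
    contents = []
    for path in paths:
        with open(path) as f:      # handle is closed when the block exits
            contents.append(f.read())
    return contents
```

Both return the same data; only the second one is safe to run in a loop for a week.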
Fixing performance problems usually boils down to one of three approaches: brute-force optimization, algorithmic optimization, or moving the problem around.
Brute-force optimization is the most well-known and traditional approach, and also usually the worst. A classic example is spending a couple of hours optimizing a sort loop that will only ever be used to sort, at most, a few hundred records. Proper brute-force optimization involves actually running your application - with a realistic scenario - using a profiling tool to find the "hot spot", and then figuring out how to either spot-optimize it or change it.
A couple of truisms: "premature optimization is the root of all evil", and "the right answer is almost always counter-intuitive". A programmer's expert opinion about where a program needs to be optimized is almost always incorrect. It doesn't matter what you think, only what you can measure.
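As an illustration of measuring instead of guessing, here's a small Python sketch using the standard library's cProfile; the quadratic string-building workload is just a stand-in for whatever your real hot spot turns out to be.

```python
# Profile a realistic run and let the numbers point at the hot spot,
# instead of trusting your intuition about where the time goes.
import cProfile
import io
import pstats

def slow_concat(n):
    # Quadratic string building -- a classic accidental hot spot.
    s = ""
    for i in range(n):
        s += str(i)
    return s

def fast_concat(n):
    # Same result, linear time.
    return "".join(str(i) for i in range(n))

def profile(func, n):
    # Run func(n) under the profiler and return the top entries as text.
    buf = io.StringIO()
    profiler = cProfile.Profile()
    profiler.enable()
    func(n)
    profiler.disable()
    pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
    return buf.getvalue()
```

Run `print(profile(slow_concat, 100000))` and the report names the offender; that measurement, not your hunch, is what tells you where to spend your optimization hours.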
Algorithmic optimization is usually the better "real" approach to improving performance, and boils down to looking at the big picture and making sure you're solving the problem the right way. See my comments about architecture; often it is only after much work and development that you acquire enough insight into the problem you're working on to determine the proper architecture and algorithms.
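A toy Python example of what algorithmic optimization looks like in miniature (the function names are made up): both functions compute the same answer, but switching the inner data structure from a list to a set drops the work from O(n*m) to roughly O(n + m) - no amount of spot-optimizing the first loop would buy you that.

```python
# Same result, different big-O. Membership tests against a list are
# O(m) each; against a set they are O(1) on average.

def common_items_quadratic(xs, ys):
    # O(len(xs) * len(ys)): every lookup scans the whole list.
    return [x for x in xs if x in ys]

def common_items_linear(xs, ys):
    # One O(len(ys)) pass to build the set, then O(1) lookups.
    ys_set = set(ys)
    return [x for x in xs if x in ys_set]
```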
Moving the problem around is a clever hack that is often surprisingly easy and useful, particularly in net-based applications. It can mean trading off between memory, CPU, storage, and bandwidth. In networked contexts it can mean doing the work at the appropriate point in the network - for example, rendering and input pre-processing at the client instead of at the server.
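The memory-for-CPU trade is the easiest of these to show in a few lines; here's an illustrative Python sketch using the standard library's functools.lru_cache, which spends memory on a table of saved results so repeated work is skipped.

```python
# Trade memory for CPU: cache the results of an expensive function
# so repeated calls become table lookups.
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Naive recursion is exponential without the cache; with it,
    # each value is computed once and then remembered.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```

The same idea scales up: the problem (CPU time) hasn't been solved so much as moved somewhere (memory) where you can better afford it.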