The fastest code never needs writing
You may have heard that the fastest code is the code you didn’t write. I used to really like that concept, and it served me well back in my Visual C++ days. Under .NET I find it doesn’t apply as well. The big difference is the huge body of common functionality built into the .NET runtime that wasn’t around in Visual C++. Back in the day, you ended up rolling a lot of your own code for database access, network connectivity, and anything beyond basic low-level data structures. Heck, you even had to decide which string library you were going to use, because one wasn’t part of the basic environment.
Modern environments combine languages with large runtimes (such as .NET and Java) that abstract away much of the plumbing developers traditionally had to write themselves. The trend isn’t slowing down, either - WCF adds another pile of abstractions over .NET remoting, which itself was an abstraction over RPC, which was an abstraction over TCP/IP sockets… This is generally a good thing, because it means you can develop higher-level functionality much faster. It lets us build applications far more sophisticated than was feasible a decade ago, which is great.
In this new world, it’s tempting to assume that the code you didn’t write (the code provided by the framework) is faster than whatever you could write yourself. That may be true, and it’s likely to save a lot of development time. But these libraries can create large performance blind spots.
When there are generic mechanisms for every common task, from database access to managing groups of objects, it’s easy to buy into the idea that you should lean on these shared tools for everything. To help sell each framework, there’s relentless pressure to make these common objects do the right thing in the fewest lines of code possible. That’s great for getting started, but there’s an inevitable trade-off: jack of all trades, master of none.
As a rule of thumb, any time you take something specific and generalize it, the complexity of the implementation goes up about an order of magnitude. For example, if you were writing your own object, you’d know whether it might ever be asked to hold a duplicate item, receive a null, or re-sort the set every time an item is added… When you generalize the problem, you have to trade performance for safety at many of those points.
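To make that concrete, here’s a minimal sketch (the class and the guarantees it relies on are invented for illustration, not taken from any real framework or project) of a container that skips all of those checks because the caller promises to uphold them:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Specialized holder: the caller guarantees that ids arrive in ascending
// order, are never "null", and are never duplicated. Because of that,
// Add() is a plain append and Contains() is a straight binary search --
// no validation, no re-sorting, no duplicate checks.
class SortedIdList {
public:
    void Add(std::uint64_t id) { ids_.push_back(id); }

    bool Contains(std::uint64_t id) const {
        return std::binary_search(ids_.begin(), ids_.end(), id);
    }

private:
    std::vector<std::uint64_t> ids_;
};
```

A general-purpose collection such as std::set can’t take your word for any of that: it re-checks ordering and uniqueness on every insert, paying a tree walk and a node allocation per item. That’s exactly the safety-for-performance trade described above.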
For example, you’d be amazed at how fast you can load database records using raw ODBC in C++. If you’ve only ever used a higher-level data access library like ADO.NET, the performance is astounding. Of course, you have to write a lot more code to make it work, and if you put a foot wrong, bad things happen.
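For a feel of what that lower-level code looks like, here’s a rough sketch of a bind-and-fetch loop against the raw ODBC C API (the connection string, table, and column names are made up for illustration, and the error checking you’d need on every call is omitted):

```cpp
#include <windows.h>
#include <sql.h>
#include <sqlext.h>

void LoadCustomers() {
    SQLHENV env = SQL_NULL_HENV;
    SQLHDBC dbc = SQL_NULL_HDBC;
    SQLHSTMT stmt = SQL_NULL_HSTMT;

    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
    SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
    SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);

    // Hypothetical DSN and query, just to show the shape of the code.
    SQLCHAR connStr[] = "DSN=Sales;Trusted_Connection=yes;";
    SQLDriverConnect(dbc, NULL, connStr, SQL_NTS,
                     NULL, 0, NULL, SQL_DRIVER_NOPROMPT);

    SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
    SQLCHAR query[] = "SELECT Id, Name FROM Customers";
    SQLExecDirect(stmt, query, SQL_NTS);

    // Bind the output columns once; every SQLFetch then drops the next row
    // straight into these local buffers.
    SQLINTEGER id = 0;
    SQLCHAR name[64] = {0};
    SQLLEN idInd = 0, nameInd = 0;
    SQLBindCol(stmt, 1, SQL_C_SLONG, &id, 0, &idInd);
    SQLBindCol(stmt, 2, SQL_C_CHAR, name, sizeof(name), &nameInd);

    while (SQL_SUCCEEDED(SQLFetch(stmt))) {
        // ... use id and name for this row ...
    }

    SQLFreeHandle(SQL_HANDLE_STMT, stmt);
    SQLDisconnect(dbc);
    SQLFreeHandle(SQL_HANDLE_DBC, dbc);
    SQLFreeHandle(SQL_HANDLE_ENV, env);
}
```

The speed comes from binding the buffers once and letting the driver fill them row after row, with no per-row object creation or type mapping. The cost is just what I said: more code, and no safety net if you get a buffer size or a type code wrong.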
Most of the time this won’t matter, because the net performance of the application isn’t significantly affected. But there are key parts of your system where it can make a very big difference indeed. In those cases, you can get a lot more performance by rolling your own specialized object. We’ve found several key places where it was worth the work to build our own thing that could do less, because it only had to serve our needs.
So, to update the classic saw: the fastest code is the code no one needed to write - not you, and not the folks who wrote the frameworks you use.