Back when I was first getting into enterprise systems design at John Deere, I remember taking a challenge we were wrestling with to TechEd to pose to the team behind MTS (the precursor to COM+, which you probably know as Enterprise Services). We were struggling with the performance of moving recordsets of data around. I found the program manager for MTS and laid out my problem: it took 60 seconds to shuffle 45,000 rows from the MTS component on the server back to the client via DCOM. His response sticks with me to this day:
> Son, any system that relies on moving forty-five thousand rows between multiple computers to answer a user's question has an architectural problem, not a performance problem.
At first I was taken aback: our approach was dictated by the architectural rules handed to us by the enterprise architects, and it seemed sound when drawn up in a nice Visio chart. Fortunately, the Microsoft program manager took the time to walk me through what we were doing at each stage, and just how bad the ratio of effective work (work that ultimately provided value to our users) was to the total work being done. He was right: we had an architectural problem.
When we're talking with prospective users of Gibraltar, we often run across folks who are doing virtually no logging at all in their applications. When they have a problem, they work to replicate it in Visual Studio and then fix it. Usually we can help them identify a few key places where they can add a little logging and get a lot of coverage, such as a common database access class or a security system. Occasionally, however, we can't, and in those cases it almost always points to a different problem: an ineffective architecture.
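To make the "gateway point" idea concrete, here is a minimal sketch (in Python rather than .NET, and with hypothetical names) of a common database access class. Because every query in the application funnels through one method, a single timing and logging statement there covers all of them, with no need to instrument each caller:

```python
import logging
import sqlite3
import time

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("data-access")


class DataGateway:
    """Single choke point for all database access.

    One log statement here gives timing and row counts for every
    query in the application, instead of scattering logging calls
    across every form and control that touches the database.
    """

    def __init__(self, connection):
        self._conn = connection

    def query(self, sql, params=()):
        start = time.perf_counter()
        try:
            rows = self._conn.execute(sql, params).fetchall()
            elapsed_ms = (time.perf_counter() - start) * 1000
            log.info("query ok (%d rows, %.1f ms): %s", len(rows), elapsed_ms, sql)
            return rows
        except Exception:
            # Failures are logged once, here, for the whole application.
            log.exception("query failed: %s", sql)
            raise


# Example usage with an in-memory database:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('ada')")
gw = DataGateway(conn)
rows = gw.query("SELECT name FROM users")
```

The same pattern applies to any other chokepoint a layered architecture gives you: a security check, a service boundary, a message dispatcher.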
I’m not advancing the theory that your architecture should be oriented around logging, but I do believe that a good architecture will have sufficient separation of concerns that you can find gateway points that interesting execution flows go through. If not, your architecture is likely:
- Too thin: You may be providing only a small veneer on top of someone else's goliath that's doing most of the work. In this case, hopefully that system either has hooks to extend it for monitoring or a built-in monitoring system you can activate.
- Too linear: Everything your app does has its own code path from the outside interface of your application all the way down to the core.
The linear architecture is easy to get trapped in if you keep adding features one by one, following the typical samples published on MSDN and other simple scenarios. By their nature, these examples try to make a point in one area and minimize the work everywhere else. Scaled up, you'd end up with data access scattered all over your application (in its controls, its forms, everywhere), and each function would have little overlap with the next.
It's seductive to keep adding each new feature the same way, because you don't have anything to build it on top of, and going back to create common, reusable layers is a lot of work. But if your application achieves any scale in terms of features, it eventually just won't be maintainable.