We’re going to ship a new update to Gibraltar, version 2.5.2, in a bit over a week. This release addresses three defects and a usability issue that are important to some customers. Most notably, this includes:
- Assembly Resolution Failure Logging Off by Default: This was a new feature in 2.5.1, and it turned out to be dramatically noisier than we had hoped. For 2.5.2 this option will be off by default, so in the normal case, where you’re not troubleshooting assembly binding problems, you won’t get the noise.
- Top N Errors Summary View Performance: The original implementation was designed for a “top 10” rows approach and doesn’t scale well as the number of unique errors increases. We’ve rewritten it and moved all the work to a background thread so it can’t stall the user interface.
- Long XML Details Line Performance: The formatting code we used to pretty-print XML for the XML Details view of a log message degraded exponentially as the length of a single line increased. This could become intolerable in cases like serialized view state from an ASP.NET page. We’ve switched to a commercial code editor for displaying both source code and XML details to address this problem.
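As an aside, the general idea behind a scalable “top N” summary can be sketched in a few lines. This is purely an illustration (in Python, with made-up error names), not Gibraltar’s actual implementation: a heap keeps the cost near O(unique errors × log N) rather than fully sorting every unique error each time the view refreshes.

```python
# Illustrative sketch only -- not Gibraltar's code. Summarize the N most
# frequent error signatures without sorting the entire set of unique errors.
import heapq
from collections import Counter

def top_n_errors(error_log, n=10):
    """Return the n most frequent error signatures, most frequent first."""
    counts = Counter(error_log)
    # heapq.nlargest keeps only n candidates, which scales better than a
    # full sort when the number of unique errors grows large.
    return heapq.nlargest(n, counts.items(), key=lambda kv: kv[1])

log = ["NullReference", "Timeout", "NullReference", "IO",
       "Timeout", "NullReference"]
print(top_n_errors(log, n=2))  # → [('NullReference', 3), ('Timeout', 2)]
```

In the shipping release this kind of aggregation also runs off the UI thread, so a large error set delays the summary rather than freezing the interface.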
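We haven’t published the old formatter’s internals, but one classic way a pretty-printer degrades sharply as a single line grows is rebuilding its output string on every append. A hypothetical Python sketch of that pitfall, and the linear alternative:

```python
# Illustrative only -- not Gibraltar's formatter. Repeatedly appending to an
# immutable string can copy the whole buffer each time, so the cost can grow
# roughly quadratically with the length of one line. Collecting the pieces
# and joining once keeps the work linear in the total output length.

def format_slow(tokens):
    out = ""
    for t in tokens:
        out += t + " "       # may reallocate and copy everything so far
    return out.rstrip()

def format_fast(tokens):
    return " ".join(tokens)  # single pass over the total length

# A long single "line", loosely resembling serialized ASP.NET view state.
tokens = ["<viewState>", "AAAA" * 1000, "</viewState>"]
assert format_slow(tokens) == format_fast(tokens)
```

The output is identical either way; only the cost profile differs, which is why the symptom only shows up on pathologically long lines.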
The delay in shipping isn’t due to any known issues but rather because I’m travelling to the UK for a long series of meetings and it seems inadvisable to ship an update and then leave town. If you are affected by any of the issues above and want to get the release candidate build of 2.5.2, just contact support and we’ll send you a personal download link for it.
Rapidly diagnose each error in any .NET application with our new Web Log Viewer and Exception root cause analysis, new in Loupe 4.5. New integration with Azure Service Bus and Azure Search enables full Loupe functionality without any Virtual Servers in Azure. Read more
The recently reported Cloudflare vulnerability, where fragments of secure, encrypted user data could be exposed to a third party, does not affect Gibraltar Software even though we use Cloudflare, because we only route static content through the Cloudflare proxy for acceleration. Read more
Back in January of 2016 we decided to completely transition out of our data centers and into the cloud. On Sunday we finally shut down the last cluster of our hardware. Read more for how we did it and whether we would do it all over again if we had...