We have a service that starts multiple instances of a console application using Process.Start(). During peak hours, 100 or more instances of the console application can be running at once. Obviously this isn't an ideal design, but it's a legacy system that I'm currently supporting (if my boss is reading this: we need to redesign this system).
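The launcher looks roughly like this (a minimal sketch, not the actual code; the executable name matches the error message below, but the options shown are my own):

```csharp
using System.Diagnostics;

// Hypothetical sketch of the launcher service's per-job code path.
// Every child process a service starts connects to the service's
// non-interactive window station and desktop, so each instance
// consumes a slice of the shared desktop heap.
var startInfo = new ProcessStartInfo
{
    FileName = "ConsoleApplication.exe",
    UseShellExecute = false,
    CreateNoWindow = true   // suppresses the console window; desktop heap is still consumed
};

Process process = Process.Start(startInfo);
```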
After a while, I noticed that many of the console applications weren't starting, and there was no indication of a problem until I looked at the event logs.
ConsoleApplication.exe - Application Error : The application was unable to start correctly (0xc0000142). Click OK to close the application.
Searching online for the error 0xc0000142 didn't yield much information. It seems to be a generic error that means either you're out of memory or the application is corrupt. This problem took me a few weeks to diagnose. At first, I shrugged it off as an isolated incident without looking for an exact cause. Unfortunately, it happened again recently, which prompted me to find the root cause.
The service that’s starting these console applications used to be load balanced over two servers. We recently moved this service onto just a single server since CPU and memory usage were fairly low. What I found out was that starting all these console applications resulted in desktop heap exhaustion.
This is a great article describing what the desktop heap is:
> Every desktop object has a single desktop heap associated with it. The desktop heap stores certain user interface objects, such as windows, menus, and hooks. When an application requires a user interface object, functions within user32.dll are called to allocate those objects.
Part 2 of the article mentions that the non-interactive desktop heap size for services is 512 KB. After some testing, I was able to confirm that the 0xc0000142 error started happening once we reached approximately 120 console applications, which works out to roughly 4 KB of desktop heap per process. The Desktop Heap Monitor confirmed that heap utilization was high.
At this point, I have to assume that we never exhausted the desktop heap while our services were load balanced across two servers. Since migrating to a single server, the same number of processes draws from one server's desktop heap, effectively cutting our capacity in half.
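If a redesign isn't immediately feasible, one documented workaround is to enlarge the non-interactive desktop heap via the SharedSection portion of this registry value (the third number is the non-interactive desktop heap size in KB; the surrounding numbers here are illustrative defaults that vary by Windows version, and a reboot is required for the change to take effect):

```
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\SubSystems\Windows

...SharedSection=1024,20480,512...
```

Bumping the third value buys headroom, but it only postpones exhaustion; capping the number of simultaneous processes is the real fix.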
After six months of complaining about the frustrations of dealing with a slow development environment, I was finally given a Samsung 830 Series SSD.
What a huge difference. My old machine took approximately 10 minutes from a cold boot to a usable state. With the new SSD, it takes less than 20 seconds. Every application I launch opens a second or a few seconds faster, which adds up to real gains throughout the day. The experience for nearly all of the disk IO-bound operations I do daily (log analysis, multitasking, etc.) has improved dramatically.
Compilation of a medium-sized solution dropped from 32 seconds to 10 seconds! From the reviews I've read so far, most people don't notice a dramatic difference in compilation speed when they switch to an SSD. I'm not sure why the difference was so dramatic in my case, but I'd assume it's because of the background services (e.g., antivirus) that slow down the build process.
Not only does an SSD eliminate a lot of the time I'd otherwise waste waiting for an application to finish, it's also much more pleasant to use. I'm no longer losing my focus or train of thought when an operation takes too long. The small interruptions throughout the day caused by a slow drive are gone.
Is an SSD a worthwhile investment? Absolutely.
Every time I hear the phrase “we’re using agile”, I cringe. Agile is not a noun. I was reviewing the slides for a presentation titled “Agile or Fragile” and a few points stood out:
You might be fragile if…
- Schedule takes precedence over quality in a “whatever it takes to make the date” sprint.
- You attempt to run a virtual agile team that is spread across a large enterprise or geography.
Add value to customer
- The focus is on delivering code to production in two-week sprints in order to meet project timelines.
- The focus is on delivery, period.
- Customer value is limited because customers must deal with buggy software until it's repaired.
Success requires participation
- Issues are pushed to the backlog as nothing can get in the way of delivery.
Collaboration not just co-location
- Not all team members are located together.
- Teams often work together in absolute silos with little accountability for the quality of their delivery.
- Progress is measured by the completion of a sprint on time.
- Schedule over quality becomes the unspoken imperative.
- Ship and repair is the deployment strategy.
Unfortunately, I've seen the fragile points mentioned above play out on more than one occasion. To some people, being agile is a license to write poor code and push bugs or technical debt onto the backlog. These poor development practices are not unique to agile methodologies, but because the term "agile" is so loosely defined, it lets developers proclaim almost anything as agile.
I don't doubt the success of agile methodologies when they're done correctly, and I'm completely in favor of the principles behind the Agile Manifesto. However, it's the developer's responsibility to recognize when they're incurring technical debt faster than they can address it. You are accountable for what you deploy to production. Don't use the "continuous delivery" principle as an excuse to deploy poor code.
I've had these feelings for a while now, but after today I can no longer keep them bottled up.
Unfortunately, this is what happens when you throw a developer who has no web experience onto a project and expect them to deliver production ready results within a few weeks. The high cost of low quality software is largely ignored when there’s an arbitrary deadline to meet.
Somehow, we've fallen into a rut where we staff projects by headcount rather than by what each developer is good at. These decisions will bring down any team or project, no matter how many developers are on staff. Please don't get me wrong: these are very capable developers, but pushing them out of their comfort zone and expecting them to deliver customer-facing web applications in a short period of time is a sure way to obliterate quality. Expertise comes with time, and the web is no exception.