The killer in any IT operation is unplanned work. It goes by many names: firefighting, war rooms, Sev 1 incidents. The bottom line is that Operations must stop whatever planned work it was doing to deal with the emergency, which means little or no normal work gets done.

It is a scenario most of you will be familiar with: your application servers are humming along happily until, without any obvious reason, memory usage starts to climb, soon followed by ever-longer garbage collection suspensions that eventually force you to restart the application. The operations team is typically unaware of the actual impact on end users (other than that a service is down), and it lacks both the data and the time to investigate the issue further. Because communication between the traditional silos of operations, testing, and development is often less than ideal, a scheduled restart in a “low impact” timeframe becomes the easiest way out and, over time, turns into something resembling a “Production Best Practice.” This adds to the operations team’s workload, because the unplanned work has merely been converted into unnecessary preventive work, and the workaround itself becomes a suspect every time there is a problem with the application.

Wouldn’t it be better to actually fix the issues instead of just working around them? Shouldn’t there be a shared understanding across all teams responsible for an application that problems are to be fixed as fast as possible and prevented from recurring?
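To make the restart-instead-of-fix scenario above concrete, here is a minimal, hypothetical Java sketch of one common root cause behind steadily growing memory and lengthening garbage collection suspensions: a cache that is only ever written to. The class and method names (SessionCacheLeak, handleRequest) are purely illustrative and not taken from any particular application.

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Hypothetical illustration of a slow memory leak: a "cache" that is only
 * ever written to. Each request adds an entry under a unique key, so nothing
 * is ever reused or evicted, and the heap grows until garbage collection
 * pauses stretch out and a restart looks like the only quick fix.
 */
public class SessionCacheLeak {

    // Entries accumulate here for the lifetime of the JVM.
    private static final Map<String, byte[]> CACHE = new ConcurrentHashMap<>();

    static void handleRequest() {
        // A fresh key per request means the cache never has a hit
        // and never shrinks -- a classic unbounded-growth pattern.
        String sessionId = UUID.randomUUID().toString();
        CACHE.put(sessionId, new byte[64 * 1024]); // roughly 64 KB retained per request
    }

    public static void main(String[] args) throws InterruptedException {
        while (true) {
            handleRequest();
            Thread.sleep(1); // slow enough that the leak takes hours or days to surface
        }
    }
}
```

In a case like this, the lasting fix is to bound the cache or tie entries to a real lifecycle, rather than institutionalizing the scheduled restart.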