The following entry is a record in the “Catalogue of Catastrophe” - a list of failed or troubled projects from around the world.
J.P. Morgan Chase & Co.
Project type: Financial risk analysis tool
Project name: New Synthetic Credit VaR (Value at Risk) Model
Date: Sep 2011 (project) – Apr-Jun 2012 (operational failure)
Cost: Approximately $6B
Sometimes the mightiest of the mighty are humbled by the meekest of the meek. Microsoft Excel may not be the most grandiose software tool on the market, but its amazing capabilities make it one of the most widely used there is. As regular users know, however, there is a dark side to the mathematical marvel that Excel has become. As you are absorbed in the wizardly magic of its number-crunching capabilities, it is all too easy to make a mistake, and, once your formulas are wrong, it can be very hard to see where you have gone wrong.
J.P. Morgan Chase, one of the world’s mightiest banking and financial services firms, is one organization that has learned the risks of Excel the hard way. In an incident that drew worldwide attention, J.P. Morgan lost billions of dollars in the so-called “London Whale” incident. The London Whale was a trader based in J.P. Morgan’s London Chief Investment Office (CIO). He had earned his nickname because of the magnitude of the trading bets he was making. It is said that his bets were so large his actions alone could move a market. Despite his undeniable power, things went seriously wrong between April and June 2012, and a poorly positioned trade resulted in losses that eventually totaled in the billions of dollars.
According to available reports, the part of the CIO involved was responsible for managing the bank’s financial risk using complex hedging strategies in the derivatives markets. To support these operations, J.P. Morgan had developed a “Synthetic Credit Value at Risk (VaR) Model” that helped them understand the level of risk they were exposed to and hence make decisions about which trades they should be making and when.
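For readers unfamiliar with the technique, a VaR model estimates the loss a portfolio should exceed on only a small fraction of days (5% of days, for a 95% VaR). One common approach, historical simulation, simply reads that threshold off the sorted history of daily profit and loss. The sketch below is a minimal illustration of the idea only; the function name and figures are invented, and J.P. Morgan's synthetic credit model was far more sophisticated than this.

```python
# Minimal historical-simulation VaR sketch (illustrative only; the figures
# are invented, and J.P. Morgan's actual synthetic credit model was far
# more sophisticated than this).
def historical_var(daily_pnl, confidence=0.95):
    """Loss threshold breached on roughly (1 - confidence) of past days."""
    losses = sorted((-p for p in daily_pnl), reverse=True)  # worst losses first
    k = int((1 - confidence) * len(losses))  # how many worse days we tolerate
    return losses[k]

# Twenty days of made-up profit-and-loss figures (positive = gain).
pnl = [12, -3, 8, -15, 4, -7, 22, -30, 5, -1,
       9, -11, 3, -4, 6, -2, 10, -6, 1, -9]
print(historical_var(pnl, 0.95))  # 15: only 1 of the 20 days lost more than 15
```

Real implementations interpolate between order statistics and weight recent history more heavily; the sketch only conveys the shape of the calculation.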
The tool had been developed in-house in 2011 and was built using a series of Excel spreadsheets. According to J.P. Morgan’s own report to their shareholders, published following the disaster, the spreadsheets “had to be completed manually, by a process of copying and pasting data from one spreadsheet to another”. To pick what appears to be an appropriate word in this particular case: YIKES! Any serious user of Excel knows immediately that relying on copy and paste is risky business. One minor slip and the data you have isn’t what you thought it was. One accidental move and you can wipe out the embedded formulas without realizing what you’ve done. Relying on copy and paste in a tool that supported billion-dollar transactions seems unfathomable to me.
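The danger is not hypothetical. One widely reported detail of the incident was a formula that divided by the sum of two rates where their average was intended, roughly halving a volatility figure. The sketch below, with invented numbers, shows how silently that class of slip passes: both versions compute without complaint, and one is exactly half the other.

```python
# Illustrative only: invented rates, mirroring the widely reported
# sum-versus-average formula slip. Neither version raises an error.
old_rate, new_rate = 0.040, 0.050
change = new_rate - old_rate

intended = change / ((old_rate + new_rate) / 2)  # divide by the average
slipped = change / (old_rate + new_rate)         # divide by the sum

print(round(intended, 4))  # 0.2222
print(round(slipped, 4))   # 0.1111, exactly half: risk is silently understated
```

In a spreadsheet the two formulas are a few characters apart and produce equally plausible-looking numbers, which is precisely why such slips survive review.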
J.P. Morgan’s internal report into the incident illustrates how failures in the project that developed the tool were the driving forces that led to the debacle. The report is interesting reading and shows that, despite the financial risks involved, a cavalier approach was taken to developing the tool. According to the report, six issues were contributing factors that left the London Whale with a tool that gave him the wrong advice. Quoting from the report, the investigating committee found:
- “Inadequate resources were dedicated to the development of the model. The individual who was responsible for the model’s development had not previously developed or implemented a VaR model, and was also not provided sufficient support – which he had requested – in developing the model.
- The model review policy and process for reviewing the new VaR model inappropriately presumed the existence of a robust operational and risk infrastructure similar to that generally found in the Firm’s client-facing businesses. It thus did not require the Model Review Group or any other Firm unit to test and monitor the approved model’s implementation. Back-testing was left to the discretion of the Model Review Group before approval and was not required by Firm policy. In this case, the Model Review Group required only limited back-testing of the new model, and it insufficiently analyzed the results that were submitted.
- The Model Review Group’s review of the new model was not as rigorous as it should have been and focused primarily on methodology and CIO-submitted test results. The Model Review Group did not compare the results under the existing Basel I model to the results being generated under the new model. Rather, it theorized that any comparison of the numbers being produced under the two models was unnecessary because the new model was more sophisticated and hence was expected to produce a more accurate VaR.
- The model was approved despite observed operational problems. The Model Review Group noted that the VaR computation was being done on spreadsheets using a manual process and it was therefore “error prone” and “not easily scalable.” Although the Model Review Group included an action plan requiring CIO to upgrade its infrastructure to enable the VaR calculation to be automated contemporaneously with the model’s approval, the Model Review Group had no basis for concluding that the contemplated automation would be possible on such a timetable. Moreover, neither the Model Review Group nor CIO Risk followed up to determine whether the automation had in fact taken place.
- The CIO Risk Management played too passive a role in the model’s development, approval, implementation and monitoring. CIO Risk Management personnel viewed themselves more as consumers of the model than as responsible in part for its development and operation.
- The CIO’s implementation of the model was flawed. CIO relied on the model creator, who reported to the front office, to operate the model. Data were uploaded manually without sufficient quality control. Spreadsheet-based calculations were conducted with insufficient controls and frequent formula and code changes were made. Inadequate information technology resources were devoted to the process. Contrary to the action plan contained in the model approval, the process was never automated.”
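The back-testing the report says was skimped is, at least conceptually, straightforward: compare the model’s daily VaR predictions against realized losses and check that the breach rate is close to what the confidence level predicts. The sketch below is a minimal illustration with invented figures, not the Firm’s actual procedure.

```python
# Minimal VaR back-test sketch (invented figures; not the Firm's actual
# procedure). A 95% VaR should be breached on about 5% of days; a much
# higher breach rate suggests the model understates risk.
def backtest_var(daily_pnl, daily_var, confidence=0.95):
    """Return (observed breach rate, expected breach rate)."""
    breaches = sum(1 for pnl, var in zip(daily_pnl, daily_var) if -pnl > var)
    return breaches / len(daily_pnl), 1 - confidence

pnl = [5, -12, 3, -1, 8, -20, 2, -3, 6, -15]  # made-up realized daily P&L
var = [10] * 10                               # model claimed a 10-unit 95% VaR
observed, expected = backtest_var(pnl, var)
print(observed, round(expected, 2))  # 0.3 0.05: far too many breaches
```

Even a check this crude would flag a model whose losses blow through its own limits week after week, which is why the report treats the limited back-testing as a contributing factor.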
As has happened before in other organizations, a lack of due diligence and cavalier management oversight allowed the humble Excel spreadsheet to claim another victim. What makes this story so important is that the London Whale incident is likely the largest loss that can be directly attributed to the use of Excel.
Contributing factors as reported in the press and the court proceedings:
- Lack of risk management in deciding how best to implement the new tool
- Lack of quality control
- Poor tool selection decision
- Failure to follow through with quality-related action items identified in a risk assessment review
- Project governance and oversight failures
- Failure to assign appropriately skilled resources to the project (despite the request from the team)
Notes: J.P. Morgan is not the only organization to be caught out by Excel.
- In last week’s post we outlined the BSkyB versus EDS court case. There again, an error in a spreadsheet was partly to blame for the pricing errors that in part led to the £700M legal wrangle between the two parties.
- Another well-publicized case is the “Growth in a Time of Debt” story, in which two world-leading academics (Reinhart and Rogoff) wrote an academic paper analyzing the effect of national debt on economic growth. Published in 2010, the paper’s findings were used by some politicians to argue that austerity policies needed to be adopted to manage national debt, despite the fact that cutting government spending could further compound the 2008 Great Recession. The resulting austerity steps taken by countries around the world caused massive pain, and some feel that the policies did indeed deepen the global slump. In 2013 a Ph.D. student at the University of Massachusetts (Thomas Herndon) tried to replicate Reinhart and Rogoff’s results. In doing so, Herndon got hold of the original spreadsheet used by Reinhart and Rogoff and discovered that there were errors in the formulas they had used. Some argue that those errors lessened the effects reported in the paper and hence undercut the austerity argument made by the politicians. News of Herndon’s findings spread around the globe, and Reinhart and Rogoff accepted the mistake by publishing an “errata” to correct the error. The argument continues, however, as to whether the politicians’ interpretation of the paper was correct and whether the errors affected their argument substantively. If the argument was wrong and nations underwent their austerity programs on a false foundation, the impact may have been orders of magnitude bigger than the losses of the London Whale.
Robert Goatham – Editor