Is risk management broken? If so, how can it be fixed?
Financial crises are not new, but the global financial crisis of 2008 exposed the over-leveraged interconnectedness of our modern, digital age. It reflected the failure of a laissez-faire economic and regulatory philosophy that had increasingly influenced policy circles over the previous three decades: that banks could and should manage their own risk with little outside interference.
As the dust settled, it became clear that fragile financial services networks had transmitted systemic risk to the real economy. People, politicians and economists began to appreciate the significance of financial risk at both micro and macro level, and to demand change. This prompted a rethink of how financial institutions handle risk, particularly among systemically important financial institutions (SIFIs), the large organisations that carry the most risk.
Guidance has, of course, long been filtered down via supervisors and regulators. When it comes to the biggest, most global banks, for example, compliance officers are familiar with regulations such as the Basel-derived Fundamental Review of the Trading Book (FRTB), which affects institutions operating in Europe by stressing "extreme" tail risks at the desk level and aggregating them upwards to build a firm-wide view.
However, regulations tend to repeat, diverge or conflict across geographies and functions, and are rarely harmonised in a way that makes them easy to apply. Conversely, supervised institutions are often best placed to understand the inconsistencies, the inefficiencies and, above all, the risk of risk regulation.
So what’s the best route forward?
First, set priorities accordingly. Account for the risk of regulatory change, by edict and by geography. Aim for a versatile risk system that can, at short notice and at the request of one or more regulators, troubleshoot, replicate earlier computations, produce useful and transparent documentation, and accept new modules, methods and plug-ins incrementally. Engage a cross-organisation team to steer the system across risk categories and desks. Pair risk managers and analysts with open-minded IT and project management, and back them with executive sponsorship.
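As a thought experiment, the qualities described above, incremental addition of methods and the ability to replicate earlier computations, can be sketched in a few lines of code. Everything here (the `RiskEngine` class, the `worst_loss` measure, the audit log) is an illustrative assumption, not any particular vendor's design:

```python
# A minimal sketch of a "versatile risk system": new calculation modules can be
# registered incrementally, and every run is logged so it can be replayed for a
# regulator on demand. All names here are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

@dataclass
class RiskEngine:
    # Registered risk measures, keyed by name; each maps a P&L series to a number.
    measures: Dict[str, Callable[[List[float]], float]] = field(default_factory=dict)
    # Audit log of (measure name, inputs, result) for later replication.
    audit: List[Tuple[str, List[float], float]] = field(default_factory=list)

    def register(self, name: str, fn: Callable[[List[float]], float]) -> None:
        """Plug in a new measure without touching existing code."""
        self.measures[name] = fn

    def run(self, name: str, pnl: List[float]) -> float:
        result = self.measures[name](pnl)
        self.audit.append((name, list(pnl), result))
        return result

    def replay(self, index: int) -> bool:
        """Re-run an audited computation and confirm it reproduces the result."""
        name, pnl, expected = self.audit[index]
        return self.measures[name](pnl) == expected

engine = RiskEngine()
engine.register("worst_loss", lambda pnl: -min(pnl))   # a toy measure
loss = engine.run("worst_loss", [1.2, -0.8, 0.3, -2.5])  # -> 2.5
assert engine.replay(0)  # the earlier computation is reproducible
```

The point is structural rather than quantitative: when measures are registered components and runs are logged, "replicate the computation you showed us last quarter" becomes a routine operation rather than an archaeology project.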
Second, in the context of that grand vision, confront lingering legacy systems. Too often, silos predominate: spreadsheets or local applications maintained by lone "textperts", vendor-supplied black boxes, and developer-obfuscated applications. "Technical debt" follows, where IT systems accrue "interest" in the form of time spent on maintenance, troubleshooting and poor performance. Spend all your time servicing that interest and it is hard to focus on new capabilities and added service. In short, IT systems burdened with technical debt increase risk and reduce service.
Next, assess collaborative needs. Under regimes such as the FRTB, calculations are performed at desk level and then aggregated accordingly. Stress-testing cycles likewise cut across asset classes and desks. Consider how the communication of data, scenarios, forecast methods, valuations and calculations can be automated and managed in software. Avoid passing spreadsheets around; where spreadsheets are unavoidable, closely manage their methods and required inputs.
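The desk-to-firm aggregation pattern can be illustrated with a deliberately simplified sketch. The expected shortfall calculation below is a textbook version, real FRTB aggregation also involves liquidity horizons and prescribed correlation treatments, and the desk names and P&L figures are entirely made up:

```python
# Illustrative only: desk-level expected shortfall (ES) at 97.5%, then a naive
# sum across desks as a stand-in for firm-wide roll-up. Data is hypothetical.
def expected_shortfall(pnl, alpha=0.975):
    """Average of the worst (1 - alpha) fraction of losses in a P&L series."""
    losses = sorted(-p for p in pnl)                 # losses, largest last
    k = max(1, int(round((1 - alpha) * len(pnl))))   # tail observation count
    tail = losses[-k:]                               # the k worst losses
    return sum(tail) / len(tail)

# Hypothetical daily P&L per desk (in $m)
desks = {
    "rates":  [0.4, -1.1, 0.2, -3.0, 0.9, -0.5, 1.3, -2.2],
    "credit": [0.1, -0.6, 0.8, -1.9, 0.3, -0.2, 0.5, -0.9],
}
desk_es = {name: expected_shortfall(pnl) for name, pnl in desks.items()}
firm_es = sum(desk_es.values())  # simplistic firm-wide aggregate
```

Once the measure and the roll-up live in versioned, tested code rather than in a spreadsheet emailed between desks, every desk is guaranteed to be stressed with the same method and the same inputs.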
Finally, proactively seek informed, constructive conversations with your regulators, perhaps with your peers alongside. Do not bully them with balance sheets, as perhaps happened in the past, but seek to understand each other's pains and interests. In this way, collaborate to make a better world for yourselves, ourselves and generations to come.
These challenges are daunting, for SIFIs and even for their fresh-faced greenfield challengers. Both must balance wealth creation against risk management as far as possible. Functional compliance rarely equates to business need, which complicates budget and cost-centre allocations as well as resourcing.
Software pipe dreams, perhaps? For a few at least, it is an emerging reality, and it has been done comprehensively elsewhere. Look to the automotive industry, which, after a troubled 1990s, adopted "model-based design": cultural and software practices that overcome the perceived barriers between research, software development, validation and verification. Concepts are tested iteratively on hardware rather than re-programmed and thrown away, as was the case before.
Software prototypes could be tested directly, improved, and implemented optimally on hardware, facilitating iterative design, with quality control provided by extensive automated testing. Improved safety and vehicle life from more thorough testing combined with better vehicle design from rapid iteration. Costs and risks both fell, feature development increased massively, and the vicious cycle was broken.
Some financial institutions have taken the brave step of following this paradigm, turning the scripts of research experts into tested, reusable components that model markets, customers and products consistently. From that single model source, IT can then proliferate and embed the components into front and middle offices.
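What "script to tested component" means in practice can be sketched briefly. The model below, an exponentially weighted volatility estimate, is a generic stand-in chosen for illustration, not any particular firm's method, and the function and test names are assumptions:

```python
# Hedged sketch: a research script promoted to a reusable component with a
# test that travels alongside it. The EWMA volatility model is a generic
# example, not a specific institution's methodology.
def ewma_volatility(returns, lam=0.94):
    """Exponentially weighted moving-average variance, returned as volatility."""
    var = returns[0] ** 2
    for r in returns[1:]:
        var = lam * var + (1 - lam) * r ** 2
    return var ** 0.5

def test_ewma_volatility():
    # Flat returns imply zero volatility.
    assert ewma_volatility([0.0, 0.0, 0.0]) == 0.0
    # A single observation: variance is r^2, so volatility is |r|.
    assert abs(ewma_volatility([0.02]) - 0.02) < 1e-9

test_ewma_volatility()
```

Because the test ships with the component, front and middle office consume the same validated logic rather than divergent copy-pasted scripts, which is precisely the consistency the model-based design paradigm promises.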
Banks continue to be hit by a double whammy of increased regulation and challenger competition, overlaid by a paralysis of process. With a glass-half-full approach, however, risk-aware development can both reduce risk and enable services to proliferate.
The journey is just beginning in financial services, but at least it has begun.
By Steve Wilcockson, industry manager – financial services, MathWorks