We have seen the pace of change increase rapidly over the last two years: a report compiled by PwC revealed that 77% of financial institutions are increasing their efforts to innovate. However, while business transformation brings significant opportunities, it also pushes to the fore a critical challenge: IT infrastructure that is not appropriately aligned or configured to support technological change threatens innovation, destroys customer satisfaction and causes business disruption.
Between January 2018 and October 2018, the Financial Conduct Authority reported a 138% rise in technology failures, and in Q2 2018 alone Britain’s five largest banks, HSBC, Santander, RBS, Barclays and Lloyds Banking Group, reported that they had suffered 64 payment outages. Examples include the Lloyds Banking Group IT outage in January this year and the TSB incident in 2018, in which 1.9 million customers were locked out of their accounts for days. This only emphasises that, despite increased investment in digital solutions, underperforming technology and a lack of visibility across the entire IT infrastructure remain significant industry problems. These incidents, no matter how small, can cause extensive damage to businesses, and more must be done to ensure that organisations implement strategies to futureproof themselves against this type of threat.
The drip-drip effect: why IT outages are unacceptable
IT outages can have a drip-drip effect on brand reputation, and the resulting loss of customer trust is difficult to reverse. Just a few minutes of downtime can destroy the customer experience, and if organisations fail to deliver exceptional service in today’s fast-moving world, competitors will waste no time stealing customers and swallowing market share. IT outages are also financially detrimental. Gartner sent shockwaves through the industry when it estimated that IT downtime costs an average of $300,000 per hour. While this may seem like a huge amount, it is far from a theoretical risk: British Airways lost £170 million off its market value after a 2017 IT outage left 75,000 passengers stranded. A similar outage at US airline Southwest, caused by a router failure, led to more than 2,000 cancelled flights and an estimated $54 million to $82 million in lost revenue.
The question is, why do these incidents keep occurring? In this digital age, businesses increasingly define their success by innovative digital implementations. In fact, IDC predicts that global spending on digital transformation will approach the $2 trillion mark by 2022, with 30% of the top 2,000 global businesses allocating 10% of their revenue to digital investment by 2020. To remain competitive, organisations are therefore rushing to adopt digital technologies. The result, however, is a heterogeneous mix of decentralised systems and processes that do not communicate with each other and frequently fail. This is dangerous for IT teams: with only a fragmented view of the IT infrastructure, they run the risk of not being alerted to faults across the network. The problem is compounded by the siloed nature of IT departments, which limits cross-departmental communication.
Visibility through a single pane of glass
With so much at stake, the only way for financial institutions to anticipate problems and deal with them quickly is to increase visibility into their IT systems. Yet gaining that insight is a persistent challenge. Sometimes it is because the tools in use were designed to monitor the static, on-premises infrastructure of the past rather than the modern, dynamic, cloud- and virtualisation-based systems of the present. More commonly, it is because organisations are using multiple tools, producing multiple versions of the truth for siloed IT teams. Research from analyst firm Enterprise Management Associates indicates that a vast number of organisations run more than ten different monitoring tools, and that it can take businesses between three and six hours to find the source of an IT performance issue: this is clearly unsustainable.
Only by unifying IT operations and monitoring under a single pane of glass can an organisation hope to get a holistic view of what is going on. A centralised view ensures that there is only one version of the truth, helps bring siloed teams together, avoids duplication of effort and, more importantly, ensures that monitoring finally fulfils its promise to improve service performance, availability and the user experience. Outages can occur suddenly and without warning. In such cases, it is vital to detect the failure quickly and identify the impacted systems. Once the fault is identified, organisations should have processes in place to mitigate it rapidly, reducing downtime and lost revenue.
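As a minimal illustration of that detect-and-triage step, the sketch below collects health-check results into one centralised list and surfaces everything that needs attention, hard failures before slowdowns, so responders see a single version of the truth rather than per-tool fragments. The service names and latency budget are hypothetical, and a real monitoring platform would gather these statuses from live probes and agents rather than a hard-coded list:

```python
from dataclasses import dataclass

@dataclass
class ServiceStatus:
    name: str          # hypothetical service identifier
    healthy: bool      # result of the latest health check
    latency_ms: float  # most recent response time

def triage(statuses, latency_budget_ms=500.0):
    """Return the services needing attention in one unified list:
    anything down, plus anything breaching its latency budget."""
    impacted = [s for s in statuses
                if not s.healthy or s.latency_ms > latency_budget_ms]
    # Hard failures sort first (False < True), then worst latency.
    return sorted(impacted, key=lambda s: (s.healthy, -s.latency_ms))

statuses = [
    ServiceStatus("payments-api", True, 120.0),   # healthy, within budget
    ServiceStatus("auth-service", False, 0.0),    # outage
    ServiceStatus("ledger-db", True, 910.0),      # slow, breaching budget
]
for s in triage(statuses):
    print(f"ALERT: {s.name} (healthy={s.healthy}, latency={s.latency_ms}ms)")
```

The point of the single sorted list is that an outage on one system and a degradation on another surface through the same channel, rather than through two separate tools with two separate alert queues.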
An outage may well be an IT responsibility, but in today’s environment it is perceived as an indictment of the brand as a whole. Organisations lose revenue from outages not just through the immediate loss of business, but also through degraded consumer trust. Consumers have too much choice and flexibility to be forgiving of prolonged outages online, which is all the more reason for companies to invest heavily in managing their processes if and when outages do occur.