Cyber Security Readiness in an Age of Data-Sharing

Published on January 20, 2019

Nick Hammond, lead adviser for financial services at World Wide Technology and former global head of networks at Barclays, outlines how risk and compliance can adapt to the complex cloud-computing environment.


2018 is presenting huge opportunities across the banking and financial services sector for those companies that continue to embrace cloud computing and agile development. Improvements are happening faster, with greater impact and better performance.

However, at the same time, a swathe of new regulations are being brought in to deal with some of the emerging security risks posed by these developments.

In response to a range of high-profile cyber hacks (not to mention those hacks that never made the news), the security mandate set out by regulators such as the UK Financial Conduct Authority, the Bank of England and the European Banking Authority is fundamentally shifting. What used to be a series of annual tick-box compliance activities has now become a requirement for continued assurance: financial services institutions need to show that their systems are able to withstand attack and remain operative at all times.

Traditional institutions operating in this increasingly complex regulatory compliance environment realise that more work needs to be done on cyber security, and cyber budgets are increasing to meet these assurance mandates. Existing and impending punitive regulations are placing pressure on banks and financial organisations to craft a carefully considered approach to risk and compliance, and firms are often seeking to automate as much of the work as possible. This approach can go a long way towards ensuring that the right people and applications are granted access to the right data within the firm’s systems – and that all other requests are denied.

In order for banks and financial services firms to mitigate the ever-evolving risks, they need to understand how changing technology trends affect the process of compliance within this regulatory environment. For instance, while transitioning to cloud architectures offers significant benefits in its ability to drive innovation, it also changes how security, storage and communications infrastructures are managed.

The impact of cloud computing on cyber readiness

Just under a decade ago, the notion of having a fixed perimeter around a data centre – one you could protect with firewall technology – first began to shift. First, some companies started moving information out of their own data centres and into public cloud providers such as Amazon and Microsoft. Companies also started opening up their data centres to third-party suppliers: a single application could be spread across multiple data centres, which might, for example, allow someone to access your network via a virtual private network (VPN). And while not all companies make their data available to third parties, data still ends up stored on employee and customer devices thanks to the rise of online banking and bring-your-own-device schemes.

The advent of online and mobile banking, cloud computing, third-party data storage and apps is, without a doubt, a double-edged sword: while enabling innovative advances, these developments have also created a perimeter that is difficult to define. Banks now face the challenge of a rapid rise in users interacting with their systems, making it impossible to simply draw a firewall around the computing infrastructure.

Traditional legacy systems were not particularly difficult to protect, even if their internal architecture was complex. On the whole, vital data sets were kept inside the main structure, meaning that critical systems and data could be secured by a firewall surrounding the system perimeter. But the shift to digital systems means that data and applications within a bank or financial organisation are no longer locked down with limited access in and out of the data centre.

There is also a growing trend for applications (such as SWIFT, the interbank payments software) and the IT infrastructure on which they run to be managed by different teams, with little communication taking place between them. While new software applications available through cloud-based providers hold huge promise, the risk of cyber breaches is heightened and the task of securing critical systems is made more complicated.

Of course, all banks and financial services firms want to protect their crown jewels. As the simple firewall-based cyber security system is no longer fit for purpose, compliance with the assurance mandates of new regulations means that banks and financial services firms have no choice but to update their security approach. They can achieve this by writing security policies around each individual application that needs to be protected. The technology that enables this is microsegmentation. Once applications have a mobile, virtual element to them, the perimeter disappears, but microsegmentation creates security policies specific to each individual application: it is allowed to share data with A, B or C, but it has a zero-trust policy towards X, Y and Z.
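
To make the zero-trust idea concrete, here is a minimal sketch of a default-deny, per-application policy in Python. The application names and the policy structure are illustrative assumptions, not any particular vendor's product.

```python
# Minimal sketch of a default-deny, per-application microsegmentation policy.
# Application names and flows are illustrative, not a real product's API.

# Each protected application lists the peers it is explicitly allowed to
# exchange data with; everything else is denied (zero trust by default).
ALLOWED_PEERS = {
    "payments-app": {"credit-card-db", "fraud-screening"},
    "ecommerce":    {"credit-card-db", "payments-app"},
}

def is_allowed(source: str, destination: str) -> bool:
    """Permit a flow only if the source application explicitly allows it."""
    return destination in ALLOWED_PEERS.get(source, set())

# The payments application may talk to the credit card database...
assert is_allowed("payments-app", "credit-card-db")
# ...but any unlisted flow is denied, with no implicit trust.
assert not is_allowed("payments-app", "hr-system")
```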

Many institutions have a microsegmentation policy but struggle to protect their applications, because they have no real understanding of what talks to what – the real-time system interdependencies that enable everything to work. Uncovering these interdependencies before the policies are implemented is absolutely key; otherwise, it may turn out that X and Y, perhaps a credit card database and an ecommerce system, completely depend on access to data from the payments application you have just closed off to them.
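
A hedged sketch of that discovery step follows, assuming flow records are available from logs or traffic capture; the observations below are made up for illustration.

```python
# Illustrative sketch: discover "what talks to what" from observed network
# flows before writing any policy, so no real dependency gets cut off.
from collections import defaultdict

# In practice these records would come from flow logs or traffic capture;
# the tuples here are made-up (source app, destination app) observations.
observed_flows = [
    ("ecommerce", "payments-app"),
    ("payments-app", "credit-card-db"),
    ("ecommerce", "credit-card-db"),
]

dependencies = defaultdict(set)
for source, destination in observed_flows:
    dependencies[source].add(destination)

# The resulting map is the starting point for per-application policies:
# closing off payments-app would visibly break ecommerce, which depends on it.
for app, deps in dependencies.items():
    print(f"{app} talks to: {', '.join(sorted(deps))}")
```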

The impact of regulatory change

And it’s not only the shift to cloud architecture – with an increasingly complex arrangement of applications layered onto older legacy systems – that will change how cyber security is managed. Following a string of cyberattacks over the past few years, the influx of new regulations set to come into effect in 2018 is instructing companies to shift from basic compliance to full assurance, ensuring that they are in complete control of their systems and can prevent dangerous events from occurring.

The Second Payment Services Directive (PSD2) came into effect on January 13th this year, but despite advance warning from the Competition and Markets Authority for banks to be prepared for open banking, six out of the nine largest UK current account providers failed to meet the deadline. One of these missed the cut-off entirely, and five requested an extension of the deadline, shining a harsh light on the ongoing technology challenge faced by banks.

Because banks are essentially service providers, operating a high level of technology infrastructure around the globe, they need to be able to assure an extremely high level of security. PSD2 requires banks to facilitate third-party access to their customers’ accounts via an open Application Programming Interface (API). The software intermediary provides a standardised platform and acts as a gateway to the data, making it essential that banks, financial institutions and fintechs have the right technology in place. But this kind of technology change can be very complex for banks, and with such high stakes they need the confidence of knowing that their systems are running, available and secure at all times. To make matters more difficult, many firms have to overhaul legacy systems for which documentation about how pieces of the architecture were built over the years no longer exists within the organisation.
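
To illustrate the shape of such an interface, here is a hedged sketch of a token-checked account endpoint. Flask is assumed purely for brevity, and the route, token scheme and account data are all hypothetical rather than any real open banking standard.

```python
# Hedged sketch of the PSD2 idea: a bank exposes account data to authorised
# third parties through an API. The endpoint, token check and data are
# hypothetical; Flask is assumed only to keep the example short.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Stand-ins for the bank's consent and account systems.
AUTHORISED_TOKENS = {"tpp-demo-token"}           # tokens issued to third parties
ACCOUNTS = {"12345678": {"balance": "1024.50"}}  # fake account record

@app.route("/open-banking/accounts/<account_id>")
def get_account(account_id):
    # Every third-party request must carry a valid access token; anything
    # else is rejected before any account data is touched.
    token = request.headers.get("Authorization", "")
    if token.removeprefix("Bearer ") not in AUTHORISED_TOKENS:
        abort(401)
    if account_id not in ACCOUNTS:
        abort(404)
    return jsonify(ACCOUNTS[account_id])

if __name__ == "__main__":
    app.run()
```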

All legacy applications need to be refactored to fit with the agile API infrastructure. Many banks already use private APIs to improve information flow internally between legacy systems, so they have experience of this kind of programming. But the technology and security implications of open APIs are far greater and require a high level of assurance.

The EU’s Second Markets in Financial Instruments Directive (MiFID II), which came into force on 3rd January 2018, has rigorous requirements. These stipulate that development environments for new programmes must be completely separate from the working production environment, to ensure that poorly written, untested code doesn’t pose a problem to the main structure. Firms must also be able to stress test the scalability of their systems. Given the monetary volumes involved, there is a very clear need to close any backdoors and stop any accidental code update finding its way into production.

Firms cannot afford to be unclear about what applications they have, where they are hosted in the infrastructure, what they talk to and who exactly can talk to them. The requirements under MiFID II to separate production and development are only the tip of the iceberg when it comes to testing system interdependencies within financial services. Most firms lack a real-time, dynamic record showing how applications interact within the system, and without this it is impossible to create the right security policies to tie around each critical application – or to ensure that communication between production and development environments is fully closed off.
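
As a rough illustration of that separation check (not a compliance tool), one could scan connection records for traffic crossing the development/production boundary. The host inventory and log entries below are invented.

```python
# Illustrative check: flag any traffic crossing the development/production
# boundary, which MiFID II's environment-separation requirement is meant
# to rule out.

# Hypothetical inventory: which environment each host belongs to.
ENVIRONMENT = {
    "trade-engine-01": "production",
    "dev-build-07": "development",
    "dev-test-02": "development",
}

# Made-up connection log entries: (source host, destination host).
connections = [
    ("dev-build-07", "dev-test-02"),
    ("dev-build-07", "trade-engine-01"),  # this one crosses the boundary
]

violations = [
    (src, dst) for src, dst in connections
    if ENVIRONMENT.get(src) != ENVIRONMENT.get(dst)
]

for src, dst in violations:
    print(f"Cross-environment flow: {src} ({ENVIRONMENT[src]}) "
          f"-> {dst} ({ENVIRONMENT[dst]})")
```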

Finally, the General Data Protection Regulation (GDPR), which reaches its compliance deadline in May 2018, requires that any company handling European customer data detect any kind of cyber breach and report it within 72 hours. In addition, firms must be able to respond to customers who exercise the right to be forgotten by deleting the entire data set relating to that customer from every part of their systems. As a result, financial services providers must have visibility over every place data is used and sent, and policies firmly in place to prevent and detect a leak.
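
A minimal sketch of what a right-to-be-forgotten sweep implies is below, assuming hypothetical data stores that share a common delete interface; the point is that erasure presupposes knowing every system that holds the customer’s data.

```python
# Hedged sketch of a "right to be forgotten" sweep: a customer's records must
# be deleted from every system that holds them, which first requires knowing
# where the data lives. Store names and the delete interface are hypothetical.

class DataStore:
    """Stand-in for any system holding customer data (CRM, billing, etc.)."""
    def __init__(self, name):
        self.name = name
        self.records = {}

    def delete_customer(self, customer_id) -> bool:
        # Returns True only if a record actually existed and was removed.
        return self.records.pop(customer_id, None) is not None

def forget_customer(customer_id, stores):
    # The erasure request fans out to every registered store and logs the
    # outcome, giving the firm an auditable record of the deletion.
    for store in stores:
        deleted = store.delete_customer(customer_id)
        print(f"{store.name}: {'deleted' if deleted else 'no record held'}")

stores = [DataStore("crm"), DataStore("billing"), DataStore("analytics")]
stores[0].records["cust-42"] = {"name": "Jane Doe"}
forget_customer("cust-42", stores)
```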

The future is application assurance

The good news is that, with a range of critical applications in different places, the majority of organisations have realised the value of protecting them by wrapping individual security policies around each one. The bad news is that this microsegmentation process can be difficult to achieve, because of the need to trace each stitch in the patchwork of systems.

Each vital application communicates with others in a multitude of often unmapped ways. Implementing a single policy on one application can trigger a domino effect, rippling down the chain and bringing other functions to a sudden halt. When it comes to vital applications such as SWIFT, financial services institutions cannot afford this kind of disruption, given the internal and external impact on customers.

It is clear, then, that simply investing in and installing new security products to navigate impending compliance requirements is not enough. In fact, implementing such products without an understanding of how each critical application functions in relation to the wider system is at best an ineffective investment and at worst will bring devastating internal disruption. It is becoming increasingly common for financial services firms to land themselves in a sticky situation by hastily investing in security products, only to realise that their chosen products cannot be effectively implemented within the existing infrastructure. In the worst-case scenario, this poses a serious risk to organisational security.

Before installing new applications or security policies on the production network, companies can trial them in a test environment that emulates the “real” network as closely as possible. Financial players can create a software testing environment that is cost-effective and scalable by using virtualisation software to install multiple instances of the same or different operating systems on the same physical machine.

As their network grows, additional physical machines can be added to grow the test environment, so that it continues to simulate the production network and allows costly mistakes to be avoided when deploying new operating systems and applications, or making big configuration changes to the software or network infrastructure.
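
Tying these threads together, a hedged sketch of one useful pre-deployment check follows: replaying flows recorded from production against a candidate policy in the test environment, to flag any legitimate dependency the policy would block. The policy and flows are illustrative.

```python
# Illustrative pre-deployment check: before promoting a new per-application
# policy, replay flows recorded from production against it in the test
# environment and flag anything legitimate that would now be blocked.

candidate_policy = {
    "ecommerce": {"payments-app"},  # proposed allow list (illustrative)
}

# Flows captured from the real network: these are known-good dependencies.
recorded_production_flows = [
    ("ecommerce", "payments-app"),
    ("ecommerce", "credit-card-db"),
]

would_break = [
    (src, dst) for src, dst in recorded_production_flows
    if dst not in candidate_policy.get(src, set())
]

if would_break:
    # The policy misses a real dependency; fix it before it reaches production.
    for src, dst in would_break:
        print(f"Policy would block a live dependency: {src} -> {dst}")
else:
    print("Candidate policy preserves all recorded dependencies.")
```
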
To really make the most of digital innovation – now and in the future – banks need to ensure that their cyber security approach works on the level of individual applications. This will need to be led by business application owners and risk and compliance officers, who can speak from the top down about how each application needs to function and what the regulators require. For example, the business owner knows that their ecommerce system needs to access the credit card database, and the right policy can be written with this in mind.

With reference to the business need and regulatory mandate, policies can be recommended application by application, tested in the environment, and finally implemented to bring about a level of cyber readiness fit for the modern financial services institution.