For over 20 years, much of the banking industry has had a penchant for Straight-Through-Processing (STP), and understandably so. With the volume and value of transactions processed going through the roof, the slippery hands of mere mortals cannot be trusted to get it right every time. Recent years have also seen the development of more formal Operational Risk Management, which brought greater scrutiny through the reporting of operational risk metrics and provisioning for estimated losses. This has provided a further driver to remove any remaining manual processing wherever automation is feasible. Combined with an overall desire to increase efficiency, this has left us in an extremely automated environment. As a result, many banking functions now proudly publish their STP rates and strive for that elusive 100%.
So all of this is good, right?
Well, of course it is, but an event in early 2016 had me thinking: “Have we now lost some of our control given that we STP so much?”
I am sure this has crossed the minds of many following the Central Bank of Bangladesh incident in February 2016. In this case, hackers were able to access internal processing systems and send payment requests from the Central Bank of Bangladesh to the Federal Reserve Bank of New York, instructing it to pay nearly $1 billion to a number of accounts in the Philippines on its behalf. Luckily, because of a reference to a word appearing on a sanctions list, only $81 million was paid, which promptly went missing. Twelve months later, only $15 million has been recovered and, as yet, no arrests have been made.
Problems like this are made possible by Straight-Through-Processing – or Straight-To-Problem, as I am starting to think of it.
What can we do to stop it happening again?
To answer this question, it is relevant to consider how controls are applied both within and outside an STP environment. While not impossible, it is unlikely this incident would have happened in a pre-STP world, where human actors applied common sense rather than the perfunctory controls performed by machines today. Imagine this exact incident occurring 25 years ago, before we used STP for all of our high-value payments… “Wow, aren’t the Central Bank of Bangladesh busy today?”, “We don’t normally see this much activity from the Central Bank of Bangladesh!” and “Isn’t it odd that all of their payments are destined for the Philippines?” might have been a few of the comments in the payments department. Within the STP environment, the checks are more likely to have been purely systematic: is the payment message format correct, are appropriate monies in the account, are the parties valid, is the payment within limits – oh, and of course, are there no sanctions hits?
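To see the gap concretely, here is a minimal sketch of the kind of systematic checks a typical STP engine runs today. Everything in it is illustrative – the `Payment` structure, the `SANCTIONED_TERMS` list, the limit and balances are invented for the example and do not describe any real system. Notice what is missing at the end.

```python
from dataclasses import dataclass

@dataclass
class Payment:
    originator: str
    beneficiary: str
    beneficiary_country: str
    amount: float
    currency: str

SANCTIONED_TERMS = {"jupiter"}                  # hypothetical screening list
PER_PAYMENT_LIMIT = 50_000_000                  # illustrative limit
ACCOUNT_BALANCES = {"CBB-001": 1_200_000_000}   # illustrative balances

def passes_stp_checks(payment: Payment, account: str) -> bool:
    """The classic checks: format, parties, limits, funds and sanctions."""
    if not (payment.originator and payment.beneficiary and payment.currency):
        return False                            # message format / valid parties
    if payment.amount <= 0 or payment.amount > PER_PAYMENT_LIMIT:
        return False                            # within limits
    if payment.amount > ACCOUNT_BALANCES.get(account, 0.0):
        return False                            # appropriate monies in the account
    screen_text = f"{payment.beneficiary} {payment.beneficiary_country}".lower()
    if any(term in screen_text for term in SANCTIONED_TERMS):
        return False                            # sanctions hit
    return True    # every check passed -- yet nothing asked "is this normal?"
```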
To stop these sorts of incidents a new approach is needed – we need to introduce a ‘normality’ check. What does this mean? It means that we need our STP systems to validate, somehow, whether what is being processed automatically looks genuine. We need our systems to assess whether the behaviour they are observing is ‘normal’.
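A ‘normality’ check can start very simply. The sketch below assumes only that we keep a history of payment amounts per originator; the minimum sample size and z-score threshold are illustrative choices, not a prescription.

```python
from statistics import mean, stdev

def looks_normal(history: list[float], amount: float, max_z: float = 3.0) -> bool:
    """Flag amounts far outside this originator's historical distribution."""
    if len(history) < 30:
        return False            # too little history to judge: route for review
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount == mu     # all past payments identical: anything else is odd
    return abs(amount - mu) / sigma <= max_z
```

Even a check this crude would hold an $81 million request from an account whose history consists mostly of far smaller payments.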
Is this a realistic expectation of a modern-day, high-volume system?
Well, yes, it is. The volume of data stored by banks has grown enormously. Big data science is developing rapidly, and over the past decade a new sector has emerged commercialising services for trend analysis, predictive analytics and anomaly reporting.
Banks in particular are starting to look at the swathes of data they hold and are realising the opportunity such information presents. It is our view that, by looking at this data, we can determine how to define ‘normal’ with regard to payments after all. We can create a template of ‘normal’ per currency, account owner, payment message, region or even time. We can embrace the world of predictive analytics and apply it in real time to the processing of transactions within our systems.
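To make this concrete, a ‘template of normal’ across those dimensions can be as simple as a frequency profile over behavioural fingerprints built from historical payments. The sketch below assumes payments arrive as dictionaries with these invented field names (with `timestamp` a datetime); the six-hour time bands and the minimum count are arbitrary choices.

```python
from collections import Counter

def fingerprint(p: dict) -> tuple:
    """Reduce a payment to the dimensions we profile 'normal' over."""
    return (p["originator"], p["currency"],
            p["beneficiary_country"], p["timestamp"].hour // 6)  # 6-hour bands

def build_profile(history: list[dict]) -> Counter:
    """Count how often each behavioural fingerprint occurred in past traffic."""
    return Counter(fingerprint(p) for p in history)

def seen_before(profile: Counter, p: dict, min_count: int = 5) -> bool:
    """'Normal' here means: this combination is well represented historically."""
    return profile[fingerprint(p)] >= min_count
```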
By assessing and comparing the content of our processing against this benchmark, we can soon start to evaluate whether transactions qualify as ‘normal’, and define a strategy for responding when ‘abnormal’ activity is detected. Whether our systems simply provide users with essential information, or whether we implement fail-safe mechanisms (a kind of process circuit breaker) that react to ‘abnormal’ activity, the choice is ours.
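As a sketch of what that ‘process circuit breaker’ might look like: a single anomaly is parked for human review (the essential-information option), while a burst of anomalies suspends STP entirely. The trip threshold and routing below are illustrative choices, not a prescribed design.

```python
class PaymentCircuitBreaker:
    """Park anomalous payments for review; trip on a burst of anomalies."""

    def __init__(self, trip_after: int = 10):
        self.trip_after = trip_after   # anomalies tolerated before tripping
        self.anomaly_count = 0
        self.tripped = False
        self.review_queue = []         # payments awaiting a human decision

    def route(self, payment, is_normal: bool) -> str:
        if self.tripped:
            self.review_queue.append(payment)
            return "held: STP suspended"
        if is_normal:
            return "released"          # normal traffic keeps its STP rate
        self.anomaly_count += 1
        self.review_queue.append(payment)
        if self.anomaly_count >= self.trip_after:
            self.tripped = True        # stop the line, alert operations
        return "held: flagged for review"
```

A burst of unusual instructions of the kind seen in the Bangladesh incident would trip such a breaker well before the full amount left the account.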
This type of processing happens all over the cyber-security world today, with packets of data monitored to determine whether anything looks out of place. Now it is time to bring this level of monitoring into our businesses, our payment systems and our operations.
We do not need to stop at financial instructions either! Does that manager normally book transfers at that time of day? Do you normally process that many static data changes from that institution in such a short time frame? The possibilities are almost boundless!
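The same idea extends to any event stream. The sketch below – with an invented window size and threshold – flags an unusual burst of events, such as static data changes, from a single institution using a sliding time window.

```python
from collections import defaultdict, deque

class BurstDetector:
    """Flag unusually many events from one source within a time window."""

    def __init__(self, window_seconds: float = 3600.0, max_events: int = 20):
        self.window = window_seconds
        self.max_events = max_events
        self.events = defaultdict(deque)   # source -> timestamps of recent events

    def is_burst(self, source: str, now: float) -> bool:
        recent = self.events[source]
        recent.append(now)
        while recent and now - recent[0] > self.window:
            recent.popleft()               # forget events outside the window
        return len(recent) > self.max_events
```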
Regulation is also helping to focus attention on this different approach. New Link Consulting works with many clients on Anti-Money Laundering, Countering the Financing of Terrorism and Anti-Financial Crime developments using sophisticated monitoring. In addition, we have partnered with Tier 1 investment banks to help them develop algorithmic trader surveillance systems, safeguards and controls. We are experts in this field.
New Link Consulting has developed, in conjunction with our technology partner, a set of rules and an analytical approach that could be applied to any payments processing function. It is relatively straightforward to overlay these rules on your payments data, and then configure additional ‘anomaly detection’ checks.
New Link Consulting can help your organisation to:
- Identify improvements to your STP operations
- Leverage your data to detect an array of anomalies
- Identify the technical solutions which can help you reduce your exposure
Source: https://www.reuters.com/investigates/special-report/cyber-heist-federal/