The future of banking and how to remap it

March 2016  |  EXPERT BRIEFING  |  BANKING & FINANCE

financierworldwide.com

 

Most banks would admit familiarity with some or all of the following: (i) a large, unwieldy mash-up of siloed systems that may or may not talk to each other; (ii) complex, interacting business activities with unreliable reporting hierarchies, compromising the ability to spot fraud, optimise risk-weighted asset (RWA) allocation and monitor risk positions in a timely manner; (iii) internal controls grappling with a constantly changing regulatory environment; (iv) highly informed clients, savvy about disruptive technologies, shadow banking and the competition; and (v) a need to maintain growth and profitability.

But help is at hand. Or is it?

The analytics industry is awash with solutions – neural nets, genetic algorithms, fuzzy logic, Bayesian networks and so on. Yet, despite these various techniques, we still see breaches of anti-money laundering (AML) regulations, trading fraud, credit card fraud and non-compliance with regulatory requirements. Clearly, something is amiss.

A study by the University of California in 1998 estimated that 1.5 billion gigabytes of new data was created each year, the rough equivalent of every inhabitant of Earth writing a novel every week. Back then, they thought that was a big data problem – but today that amount of data is being created each day. 

Let’s consider the limitations of computers (and humans). Computers are very good at some things – finding a direct match in large data spaces where the data is static, for example. This is because such solutions are underpinned by rules – a computer will find an exact match instantly. But hands up anyone who has received two identical mailshots because one character of the postcode differs.
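
By way of illustration (this sketch is not from the article; the addresses and the 0.9 similarity threshold are invented), an exact-match check sees two near-identical customer records as simply different, while even a crude similarity measure spots the probable duplicate:

```python
# Exact matching treats near-identical records as distinct.
# Illustrative sketch only: the addresses and threshold are made up.
from difflib import SequenceMatcher

records = [
    "Ms J Smith, 14 High Street, Anytown, AB1 2CD",
    "Ms J Smith, 14 High Street, Anytown, AB1 2CO",  # one character differs
]

# Rule-based / exact view: the two records are simply "different".
print(records[0] == records[1])  # False -> both receive a mailshot

# A similarity view notices they are almost certainly the same customer.
similarity = SequenceMatcher(None, records[0], records[1]).ratio()
print(f"similarity = {similarity:.2f}")  # ~0.98, well above a 0.9 duplicate threshold
print("probable duplicate" if similarity > 0.9 else "distinct records")
```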

Now hark back to the opening statement and ask: how many of those scenarios rely on closed-loop rules? The critical point is that today’s data does not stand still, yet rules are written around what has happened in the past. Yes, you could write new rules, but since when did what happened yesterday give a reliable guide to what happens tomorrow? What new experience exactly matches a past one?

Here is a simple illustration. To find a needle in a haystack, define the needle (three inches long, metal, hole in one end) and a computer will instantly locate anything conforming to that definition. But if your needle is four inches long, you need to write some more rules. Then needle manufacturers bring out carbon fibre ones, and pretty soon all you are doing is rewriting rules to try to keep up.
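
A hypothetical sketch of that rule-rewriting treadmill, with made-up needle definitions, might look like this:

```python
# Each "needle" definition is hard-coded; every new variant forces another rule.

def is_needle_v1(item):
    # original rule: three inches long, metal, hole in one end
    return item["length_in"] == 3 and item["material"] == "metal" and item["has_eye"]

def is_needle_v2(item):
    # rewritten when four-inch needles appear
    return item["length_in"] in (3, 4) and item["material"] == "metal" and item["has_eye"]

def is_needle_v3(item):
    # rewritten again when carbon fibre needles appear ... and so on
    return (item["length_in"] in (3, 4)
            and item["material"] in ("metal", "carbon fibre")
            and item["has_eye"])

haystack = [
    {"length_in": 3, "material": "metal", "has_eye": True},
    {"length_in": 4, "material": "metal", "has_eye": True},
    {"length_in": 3, "material": "carbon fibre", "has_eye": True},
    {"length_in": 30, "material": "straw", "has_eye": False},
]

for rule in (is_needle_v1, is_needle_v2, is_needle_v3):
    found = sum(rule(item) for item in haystack)
    print(f"{rule.__name__}: finds {found} of 3 needles")
```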

So how do computers running algorithms, fuzzy logic, etc., cope with ever-changing and ever-expanding data? The answer is they throw away anything deemed surplus to requirements. Ponder that for a second. Throw away data? Is data that was irrelevant five minutes ago always going to be irrelevant?
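
As a rough sketch (the buffer size and event IDs below are arbitrary), ‘throwing away surplus data’ often amounts to keeping only a bounded window of recent events – which means an old record can no longer be consulted when it suddenly becomes interesting:

```python
# A bounded buffer keeps only the most recent events, so an old record cannot
# be revisited if it later turns out to matter. Sizes and IDs are illustrative.
from collections import deque

full_history = []                 # retain everything
recent_only = deque(maxlen=1000)  # keep only the latest 1,000 events

for event_id in range(10_000):
    full_history.append(event_id)
    recent_only.append(event_id)

# Later, a pattern emerges that involves an early event - say, event 42.
print(42 in full_history)   # True  - still available for analysis
print(42 in recent_only)    # False - discarded long ago
```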

At least one major trading fraud was found to result from a specific complex interaction of 13 separate parameters. It was detected precisely because no data, however irrelevant it seemed initially, was disposed of.

There is more. Consider the ‘salesman conundrum’, where a salesman has 10 visits to make. He uses his human knowledge to design his route, factoring in road conditions, customer requirements and so on. A computer does not have the benefit of intuition, so it evaluates every option. This is a factorial-10 problem with roughly 3.6 million possible routes, which a computer can work through in a matter of seconds. But suppose our salesman has 20 visits. His experience means he is still able to decide on his route within a few minutes; our poor old computer, however, is now facing a factorial-20 calculation – one so vast it would take on the order of 100,000 years.
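
The arithmetic is easy to check. Assuming, purely for illustration, a machine that evaluates one million routes per second, brute force over 10 visits takes seconds while 20 visits lands in the tens of thousands of years – the same order of magnitude as the figure above:

```python
# Worked arithmetic behind the salesman example.
# The rate (1 million routes per second) is an illustrative assumption.
import math

routes_10 = math.factorial(10)   # 3,628,800  -> the "3.6 million options"
routes_20 = math.factorial(20)   # ~2.43e18

rate = 1_000_000                 # routes evaluated per second (assumption)
seconds_per_year = 60 * 60 * 24 * 365

print(f"10 visits: {routes_10:,} routes, about {routes_10 / rate:.0f} seconds")
print(f"20 visits: {routes_20:,} routes, about "
      f"{routes_20 / rate / seconds_per_year:,.0f} years")
```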

Or consider this – a father is teaching his four-year-old son to catch a ball. He demonstrates how to do it, encourages him and, after a few minutes, the child can catch the ball. Now, the child has received the latest robot as a Christmas present, so father and son try to teach the robot to catch the ball. They write some rules describing a ball, its likely trajectories and velocity, the actions required to catch it, and so on. Then a neighbour comes round with a slightly bigger ball, which the child quickly learns to catch. Dad has to re-programme the robot with the characteristics of the new ball.

We can summarise: computers are only fit for purpose if rules are rigid and if huge amounts of data deemed ‘irrelevant’ are disposed of. Humans, on the other hand, can only function properly if complexities and volumes are manageable for the human brain. This is why banks still have problems with data, and why the need for new thinking is obvious. Ian Stewart, Professor of Mathematics at Warwick University, said: “The human brain is wonderful at spotting patterns. It’s an ability that is one of the foundation stones of science. When we notice a pattern, we try to pin it down mathematically, and then use the maths to help us understand the world around us.”

Alan Turing’s code-breaking work at Bletchley Park in World War II anticipated Professor Stewart’s observation. Turing developed the Bombe machines – rotary pattern-verification technology built around the human ability to spot patterns. The Bombes identified and ‘remembered’ data patterns and applied that knowledge to subsequent intercepted messages.

So how about an analysis programme seeded initially with knowledge (akin to human experience), comparing that knowledge with real-time inputs while retaining every piece of data, the better to modify and then verify patterns as new information arrives? In effect, a programme which won’t apply a binary ‘0’ or ‘1’ until it is certain which it should be.
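
The article does not prescribe an algorithm, but one way to picture ‘uncertainty until certain’ is a simple belief-updating loop: every observation is retained, the programme’s confidence is revised as each one arrives, and no 0-or-1 verdict is issued until that confidence clears a threshold. The prior, likelihoods and threshold below are illustrative assumptions only:

```python
# A sketch of "uncertainty until certain": keep every observation, update a belief,
# and only commit to 0 or 1 once the belief clears a confidence threshold.
# Prior, likelihoods and threshold are invented for illustration.

def update(prior, likelihood_if_suspicious, likelihood_if_normal):
    """One Bayes update: return P(suspicious | observation)."""
    numerator = prior * likelihood_if_suspicious
    return numerator / (numerator + (1 - prior) * likelihood_if_normal)

THRESHOLD = 0.95
belief = 0.01          # seeded "knowledge": suspicious activity is rare
retained = []          # nothing is thrown away

# each observation: (description, P(obs | suspicious), P(obs | normal))
observations = [
    ("trade just under reporting limit",    0.30, 0.05),
    ("counterparty seen in earlier alert",  0.40, 0.02),
    ("booking amended after hours",         0.25, 0.01),
]

for desc, p_s, p_n in observations:
    retained.append(desc)
    belief = update(belief, p_s, p_n)
    decision = "flag" if belief > THRESHOLD else "undecided - keep watching"
    print(f"{desc}: belief={belief:.3f} -> {decision}")
```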

Contrast this ‘uncertainty until certain’ approach with one which has to assume a piece of data is certain (is it 0 or 1?) just to get a rules-driven programme started, which then applies pre-determined rules that must be rewritten as new data arrives, and which has to throw data away in order to function – data which might become relevant at some future date. We haven’t even touched on the rules themselves being known to a ne’er-do-well looking to evade them.

To return to our example of the haystack – instead of looking for the needle to the exclusion of all else, you could put the haystack itself to one side and look at what’s left. We could ask the computer to tell us what is interesting, what’s unusual. Or the salesman – the motorway is closed, show me side roads instead.
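
One hypothetical way to ‘set the haystack aside’ is to profile what ordinary data looks like and report whatever fails to fit, without ever defining the needle. The payment amounts and the two-standard-deviation test below are invented for illustration:

```python
# A sketch of the inverted search: characterise the ordinary ("the hay"),
# then report whatever does not fit, with no definition of the needle at all.
from statistics import mean, stdev

payments = [102, 98, 105, 99, 101, 97, 103, 100, 5400, 96, 104]

mu, sigma = mean(payments), stdev(payments)
unusual = [p for p in payments if abs(p - mu) > 2 * sigma]

print(f"mean {mu:.0f}, standard deviation {sigma:.0f}")
print("worth a closer look:", unusual)   # -> [5400], never defined as 'fraud'
```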

Imagine a capability to wrap existing structures in a non-invasive command and control system, driven by mirroring legacy system inputs. A command system that adapts dynamically in real time, allocating and optimising resources and identifying risks. One which transforms internal and external reporting processes for CEOs and regulators alike. And, just as importantly, a command system which can identify profit potential, new income streams and client acquisition strategies.
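
The article does not describe how such a wrapper would be built; one way to picture a non-invasive approach is a tap that mirrors each legacy input to independent observers while leaving the legacy processing untouched. All the class and function names below are hypothetical:

```python
# A sketch of a non-invasive tap: legacy processing is unchanged; every input is
# mirrored to observers that can monitor, alert or report in parallel.
# These names are illustrative, not a real product interface.

class LegacyBookingSystem:
    def process(self, trade):
        # stands in for the existing system, which we do not modify
        return f"booked {trade['id']}"

class MirrorTap:
    def __init__(self, legacy, observers):
        self.legacy = legacy
        self.observers = observers

    def process(self, trade):
        for observer in self.observers:      # each observer gets a copy of the input
            observer(dict(trade))
        return self.legacy.process(trade)    # legacy behaviour is untouched

alerts = []
def risk_monitor(trade):
    if trade["notional"] > 1_000_000:
        alerts.append(f"large trade {trade['id']}")

system = MirrorTap(LegacyBookingSystem(), observers=[risk_monitor])
print(system.process({"id": "T1", "notional": 2_500_000}))
print(alerts)
```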

The future of banking – re-mapped.

 

David Sutton is the chief executive officer, David Goldsworth is the chief operating officer and Phil Thomas is a director at Sinus Iridum. Mr Sutton can be contacted on +44 (0)7545 458 377 or by email: davids@sinusiridum.com. Mr Goldsworth can be contacted on +44 (0)207 078 4185 or by email: davidg@sinusiridum.com. Mr Thomas can be contacted on +44 (0)7786 908 168 or by email: philt@sinusiridum.com.

© Financier Worldwide


BY

David Sutton, David Goldsworth and Phil Thomas

Sinus Iridum

