Harnessing the Power of AI: Home Runs, Pitfalls, and the Secrets to Success from Finance Industry Leaders

Early this month at Soho House New York, we continued to explore a central theme for 2018: how augmented intelligence (AI) pushes the boundaries of modern business and the inherent challenges that come with it.

A cross-vertical panel of experts led our discussion, including Solstice’s Karl Hampson and Ryan Maguire; Joydeep Mukherjee, global head of digital lending management at Citi Commercial Bank; and Norman Niemer, head of investment engineering at UBS O’Connor. 

We explored how our perspective on AI — from creating intelligent experiences to envisioning cognitive insights and intelligent process automation — can exponentially improve modern enterprise operations, and discussed the most common obstacles facing the future of AI in finance and beyond. 

From UX design fumbles to process automation home runs, these were the key takeaways from our deep dive into the secrets of AI pilot success.

1) What’s the key to success for AI adoption? Keeping your focus narrow and your goals clear.

A major theme during our panel discussion was the myth of the AI “blank check” — the false notion that AI adoption is single-handedly capable of solving broad challenges faced by banking organizations.

For our panelists, the key to success for their AI pilots was to deftly avoid this common pitfall. There was agreement across the board that success is most often rooted in keeping the focus narrow and the goals specific and quantifiable. 

Of course, a significant challenge that our panelists faced was how to identify those goals in the first place. How do organizations determine which parts of their business stand to benefit the most from AI? 

The answer was twofold: first, focus on challenges with the greatest upside; and second, aim to accelerate key areas of growth where AI provides the best opportunity to close a gap between you and your competitors.

First, consider the areas where successful adoption of AI would move the needle most — even if those areas seem small alongside broader challenges. This is especially true for startups that stay hyper-focused on disrupting particular inefficiencies and processes, where aiming to close a competitive gap on a smaller scale is more likely to end in success.

One panelist offered a perfect example of how this way of thinking guided a recent AI pilot: “We were spending about forty-five minutes roughly in just cataloging and completing basic steps with customer paperwork. If you put AI on top of it, you multiply it by two million documents. By including AI, we’re drastically reducing the duties.”

While narrow and not necessarily the most glamorous use of AI, this straightforward, simple integration of intelligent process automation resulted in the kind of quantitative success leadership can get behind.

This hyper-focused approach can also lead to considerable qualitative success. Consider the experience of our panelist who illustrated how even conservative employment of AI can significantly improve customer experience: “If [clients] go to Amazon and they see an AI app suggesting products, they end up asking, ‘Why do I have to sit down with an agent to talk? Can’t you do that from a digital experience right away?’ Our [consumers prefer digital experiences they’re used to because] they are far superior to a traditional corporate experience, which is subpar.”

Simply put, by following what we call the “crawl, walk, run” approach — starting small and building on consecutive successes with each new pilot — your organization is not only better equipped to scale up AI adoption in the future but also more likely to win leadership buy-in with the quantitative successes each new application delivers.

2) The human element continues to bridge a critical gap between current business practices and AI.

Our panelists were quick to agree that AI should never be treated as a replacement for the human element in business but instead as an enhancement to it. It’s what Solstice calls “augmented intelligence” — technology that is designed explicitly to automate only intermediary tasks and processes between decisions that remain the forte of humans rather than machines.

In what Mukherjee termed a “humanistic approach to AI,” the goal should always be to create opportunities for a seamless handoff between human and machine — a process that enhances the productivity of employees while remaining largely invisible to both employees and clients. 

So how can a business determine where the human ends and the AI starts? For Maguire, it’s a matter of taking a good, hard look at what tasks are best left in the hands of employees rather than AI — and being honest about how technology’s current limitations will ultimately influence the success of an AI pilot.

UBS’s Niemer echoed this sentiment by outlining his approach: “We had a framework in which we put humans and machines side-by-side and asked, ‘What’s the machine good at?’ ‘What’s the human good at?’ This gave us a good point of reference for how a business can decide how to integrate AI and machine learning into its current operations.”

Hampson summarized how this relates to the financial industry: “Machine learning is not something that can be making decisions — it’s a pattern-based, sensing technology. . . . If you use the machine learning–based AI as a sensing impulse to a cognitive decision, you can then have a fully auditable solution, and in financial services, that is obviously essential.”

3) The bridge between human and machine should be as much a focus as the technology itself.

Since successful AI is rooted in enhancing the human element of business, it’s important to make the interaction between the technology and the employees as seamless as possible. Overlooking this important element is a pitfall that our panelists strongly cautioned against.

Niemer cited his own experience with this common mistake: “One of the biggest failures I have experienced is in designing the UX. Unless you change the organizational structure in such a way that actually incentivizes people to work with the machine, you’ll get nowhere.”

The answer is to create a careful strategy that focuses not just on the technology itself but also on meeting the critical needs of the employees who are tasked with using the technology. 

If the goal of AI is to make employees more productive and efficient, just as much attention should be paid to how employees will ultimately use and understand the technology. Overlooking this critical element is a surefire way to stall an otherwise technically sound AI pilot.

4) What are the most common pitfalls to avoid when launching your AI pilot?

Our panelists were quick to agree that much of an AI pilot’s success comes down to avoiding common pitfalls, and each outlined the ones they’d most strongly caution against.

For Niemer, it was failing to focus on the front-end user experience. Or, more simply, not controlling for the human element in AI adoption.

Hampson noted that his biggest mistake had been neglecting the setup — not communicating the possible outcomes clearly enough from the start. “For me it was about framing the problems correctly. So, if we thought that it was a low confidence of getting a decent business outcome, then we would frame it on the basis that this may end up actually bringing no value, but it could be a learning experience for all of us.”

In contrast, Mukherjee offered sage advice that centered mainly on practicing patience throughout the deployment of an AI pilot: “I wouldn’t necessarily say a failure, but a gradual progression. We started with financial spreading. When it comes to regulatory concerns and even making sound business decisions, you can’t make a credit decision if you have 30 percent, 40 percent confidence. It has to be 90 percent accurate, and you need to be prepared to have an independent auditor verify your logic. That was our initial failure — we chose to just focus the model in one market, knowing we would get predictive biases. We quickly realized we needed to stretch it across markets.”

5) Regulations should be a major consideration in how the implementation of AI is carried out — but too much is at stake to allow the issue to stall a pilot altogether.

Our panelists agreed that financial regulations are an unavoidable hurdle in the implementation of AI pilots because some level of transparency for how risk decisions are made is critical for compliance. This remains one of the most convincing arguments for treating AI as a means of enhancing human intelligence, rather than an outright replacement for the role of people.

As Maguire notes, “You should be able to demonstrate how you came up to a particular decision, and if an outcome didn’t match with what you were expecting, there has to be a way that field staff can highlight the discrepancies.”

But given how customer demand drives the financial industry to create ever-faster and more efficient processes powered by AI, this hurdle is just that — a hurdle and not an outright barrier.

In implementing an AI pilot, be judicious in understanding and documenting how machine learning is affecting risk decisions. But this should be a consideration that simply guides the process rather than halts it entirely — if for no other reason than to avoid falling so far behind that catching up to the competition becomes all but impossible in the future.

To learn more about our AI practices, visit solstice.com/ai. Interested in attending our next event? Email me at dptak@solstice.com.