I want you to picture a bot assisting a human with operational work. That is the north star I am chasing.

It is the whole idea of "human in the loop": an AI bot working in collaboration with a human to achieve a great outcome.

And if we get this absolutely right, maybe we can spend our evenings sipping a Chardonnay and thinking about more strategic work, rather than spending our time downloading data from ERP systems, stitching together manual spreadsheets, and then coming up with recommendations.

Because that is what happens today.

Believe it or not, a lot of us are stuck in what I call the manual chaos, spending hours and hours trying to get a good insight by stitching data together. That is what I want to avoid.

The question I am trying to answer is: how do you build AI products in finance that actually compound in value over time?


The problem: meet Jane

Before I talk about the problem, I want you to meet a hypothetical character: Jane. Jane is a director of FP&A, extremely strategic, one of the highest performers in her company and team. 

When the stakes are really high, when new decisions have to be made, people go to Jane. That is what we all want to be.

But Jane is stuck in the manual chaos. Her manager Amanda asks, "Why is there a difference between your forecast and the actuals?" Jane is going to spend all her energy doing the manual climb, getting good insight commentary, and sharing it back. The output is great. The process is broken.

Daniel Kahneman, the author of Thinking, Fast and Slow, introduced us to two different kinds of thinking. System 1 is fast, reactive thinking, a little more error-prone by nature. System 2 is slower, more logical, more thought-through.

Jane's situation can be perfectly described as a System 1 trap. Not because she wants to operate that way, but because her system forces her to.

She is spending all her time compiling data to get good insights, so that by the time she arrives at the insight, she does not have much time left and ends up relying on System 1 thinking to get a result.

That is not what we hired a director for. We primarily hired a director to provide good strategic insights, which are mainly System 2 thinking. That is the critical point, and I will keep coming back to the System 1 versus System 2 concept throughout.

There is also a black box risk here. Jane is an incredibly astute person, but that intelligence lives in her head. If she decides to leave the company, that knowledge goes away. She will have documentation, sure, but how many of us honestly read the documentation, right?

There is a lot of knowledge that disappears with a person. And if Jane, without having thought through the manual chaos carefully, now puts an AI model on top of it, she is not solving the problem, she is only scaling the manual chaos faster.

We have all heard the saying: garbage in, garbage out.

So to summarize: the problem, in my mind, is less about not having the perfect tool. It is more about having an operating model gap that prevents Jane from successfully using her System 2 thinking.

The gap between where she is forced to operate (System 1) and where she should be (System 2) is what I call the ROI gap. It is the talent tax we are paying when we do not leverage Jane's capabilities to their fullest.
