FP&A is among the fastest-evolving areas in finance, and many FP&A organizations are adopting automation far faster than their governance and security controls can keep pace.

FP&A teams now face far more data to process and far less time in which to process it.

Executives also expect immediate answers about how the company is performing against prior periods or other benchmarks.

The demand for timely information has outgrown spreadsheets and manual consolidations, which can no longer deliver the required responsiveness.

Automation promises to meet those demands; it also introduces a quieter, and therefore less noticeable, kind of fraud risk.

That risk does not announce itself through major breaches or ransomware. It surfaces as subtle misrepresentations of fact, or as confident decisions built on corrupted inputs fed into automated processes.

Research by IBM's Institute for Business Value on AI-based planning in FP&A found that most FP&A leaders surveyed expected AI-based planning to improve forecasting accuracy by more than 20% and to cut planning-cycle times by double digits.

The study confirmed those improvements, but it also highlighted a growing gap between the maturity of planning automation and the preparedness of FP&A organizations to manage the risks these tools introduce.

There is a paradox at the heart of modern FP&A.

The very systems designed to increase the speed and precision of planning can open new avenues for error and manipulation if they are not built with security as a foundational element (security-by-design).


Automation is changing the risk profile of FP&A

Traditional financial controls were built on the assumption of a stable operating environment.

Before today's cloud-based planning solutions, budgeting was annual, forecasting was quarterly and data flowed in a largely linear path. Today's FP&A processes operate very differently.

The primary difference is that today's FP&A relies on continuous ingestion of data from multiple sources (ERP, CRM, HRIS, billing systems) and on APIs that synchronize assumptions across tools in near real time.

Additionally, business intelligence (BI) layers publish dashboards consumed by many functions outside the finance department.

AI systems now provide variance explanations, scenario narratives and even first-pass forecasts.

FP&A has evolved from a process within finance into a distributed system, and it has inherited the risk profile of distributed systems: they fail in new ways.


When a forecast driver changes in an upstream system, the change ripples immediately through downstream reports, dashboards and executive presentations.

If access controls are too loose, or data origins are hard to trace, FP&A leaders may not notice a change at all, and business strategies may be executed before anyone recognizes it.

That is why security in FP&A cannot be owned by IT alone. Finance owns the models that drive capital allocation, hiring plans, pricing decisions and liquidity strategy.

When those models are compromised, intentionally or unintentionally, the company does not just lose data; it loses decision integrity.

Most FP&A failures stem from mundane causes rather than malicious ones.

Some common examples include:


Failure to clearly define data lineage for automated reporting

"Smart" spreadsheets and automated reports have become an increasingly large part of the FP&A environment.

However, the transformations data undergoes as it flows through these systems are typically implicit and often go undocumented.

As a result, when a source system changes its format or logic, the downstream impact may go unnoticed.

Numbers begin to drift, confidence erodes and no one can explain why.
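One lightweight way to make lineage explicit is a registry that records, for each reported metric, its source system, transformation, owner and a fingerprint of the expected source schema. The sketch below is illustrative only; the registry layout, metric and owner names are hypothetical, not a real tool's API.

```python
import hashlib

# Hypothetical lineage registry: each reported metric maps to its source
# system, the transformation applied, a named owner, and the source columns
# expected at the time the pipeline was last reviewed.
LINEAGE = {
    "gross_margin_pct": {
        "source": "ERP.revenue_extract",
        "transformation": "(revenue - cogs) / revenue",
        "owner": "fpa-revenue-team",
        "expected_schema": ["revenue", "cogs", "period"],
    },
}

def schema_fingerprint(columns):
    """Stable hash of a source table's column list (order-insensitive)."""
    return hashlib.sha256("|".join(sorted(columns)).encode()).hexdigest()

def check_lineage(metric, current_columns):
    """Flag a metric whose upstream schema has drifted since review."""
    entry = LINEAGE[metric]
    expected = schema_fingerprint(entry["expected_schema"])
    actual = schema_fingerprint(current_columns)
    if expected != actual:
        return f"ALERT: upstream schema for {metric} changed; notify {entry['owner']}"
    return "OK"
```

Run against each refresh, a check like this turns a silent upstream format change into a named alert with an accountable owner.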

Excessive permissioning is another area where FP&A departments struggle

Self-service analytics is highly effective at letting employees produce their own analysis quickly.

Without proper guardrails, however, the same model lets them export sensitive drivers, modify them and import them into new models with little transparency.

Industry analysts consistently identify excessive permissions and inadequate segregation of duties as two of the biggest enablers of internal fraud in companies with highly automated FP&A processes.
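A basic segregation-of-duties review can be automated against the planning tool's permission matrix. The sketch below is a hypothetical example: the user names, permission labels and conflict rules are assumptions for illustration, not any vendor's schema.

```python
# Hypothetical permission matrix: user -> set of rights in the planning tool.
PERMISSIONS = {
    "analyst_a": {"edit_drivers", "view_reports"},
    "controller": {"approve_forecast", "view_reports"},
    "admin_svc": {"edit_drivers", "approve_forecast", "export_model"},
}

# Toxic combinations: no single identity should hold all rights in a combo.
CONFLICTS = [
    ({"edit_drivers", "approve_forecast"}, "edits and approves own inputs"),
    ({"export_model", "edit_drivers"}, "can exfiltrate and alter drivers"),
]

def sod_violations(permissions):
    """Return (user, reason) pairs for each toxic permission combination held."""
    findings = []
    for user, rights in permissions.items():
        for combo, reason in CONFLICTS:
            if combo <= rights:  # user holds every right in the combo
                findings.append((user, reason))
    return findings
```

Running a check like this on a schedule surfaces over-permissioned accounts before an audit, or a fraud investigation, does.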


Generative AI represents another layer of risk in the FP&A process

Increasingly, FP&A departments are using generative AI to automate time-consuming parts of the process: updating formulas, drafting commentary, summarizing variances and helping users build models faster.

According to an IBM study, more than 40% of finance organizations are either piloting or already using generative AI in analytical workflows.

Generative AI can significantly boost FP&A productivity, but it creates risk when sensitive data is sent to public tools or when AI-generated outputs are used without established review standards.

Plausible does not mean correct.

The final area of risk for FP&A teams relates to integration sprawl and systemic fragility

Each API connection between systems represents a trust relationship.

For example, when revenue drivers are flowing from a customer management system, headcount data is being pulled from an HR system and cost data is flowing from an ERP system, each of these integrations must be authenticated, authorized, logged and monitored.

Unfortunately, too few FP&A teams have visibility into which systems, users and applications can modify the underlying planning inputs.

As a result, machine identities influence planning inputs far more often, and with far less oversight, than human identities.
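A first step toward visibility is simply tallying who writes to planning inputs. The sketch below assumes a hypothetical audit-log format; the identity names and record fields are illustrative.

```python
from collections import Counter

# Hypothetical audit-log entries: who wrote to a planning input, and whether
# the identity is a human user or a machine identity (service account / API key).
WRITE_LOG = [
    {"identity": "svc-crm-sync", "type": "machine", "field": "pipeline_revenue"},
    {"identity": "svc-hris-sync", "type": "machine", "field": "headcount"},
    {"identity": "svc-crm-sync", "type": "machine", "field": "pipeline_revenue"},
    {"identity": "jane.doe", "type": "human", "field": "opex_assumption"},
]

def writes_by_identity_type(log):
    """Count planning-input writes per identity type, exposing how much
    influence machine identities have relative to humans."""
    return Counter(entry["type"] for entry in log)

def unreviewed_machine_writers(log, reviewed):
    """Machine identities writing to inputs without a documented review/owner."""
    machines = {e["identity"] for e in log if e["type"] == "machine"}
    return sorted(machines - set(reviewed))
```

Even this crude tally makes the imbalance concrete: in the sample log, machine identities account for three of four writes, and one of them has no documented review.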

Designing security into the planning stack

Secure-by-design FP&A starts with a mental shift. Planning must be treated as a system with architecture, dependencies, and failure modes, not as a sequence of tasks.

Teams that take this approach begin by mapping how data flows from source systems into models, where transformations occur, and who owns each decision point. The exercise is deceptively simple but revealing.

Most organizations discover they cannot clearly articulate where their forecast truly originates.

From there, the focus moves from blanket controls to impact-based governance. Not all finance data carries the same risk. A published KPI already shared across the company does not warrant the same scrutiny as a liquidity assumption or pricing model.

By classifying planning elements based on business impact rather than mere sensitivity, finance leaders can concentrate controls where mistakes would be most damaging.
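In practice, impact-based governance can be expressed as a simple mapping from planning elements to control tiers. The tier names, element names and control attributes below are hypothetical illustrations of the idea, not a prescribed framework.

```python
# Hypothetical control tiers: controls scale with the business impact of a
# planning element, not merely its data sensitivity.
CONTROL_TIERS = {
    "critical": {"approvals": 2, "versioned": True, "change_window_only": True},
    "high":     {"approvals": 1, "versioned": True, "change_window_only": False},
    "standard": {"approvals": 0, "versioned": True, "change_window_only": False},
}

# Classification by business impact, per the examples in the text.
PLANNING_ELEMENTS = {
    "liquidity_assumption": "critical",
    "pricing_model": "critical",
    "regional_headcount_plan": "high",
    "published_kpi_dashboard": "standard",  # already shared company-wide
}

def required_controls(element):
    """Look up the control profile demanded by an element's impact tier."""
    return CONTROL_TIERS[PLANNING_ELEMENTS[element]]
```

The point of encoding the classification is that it stops being tribal knowledge: every element has an explicit tier, and every tier has explicit controls.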

Crucially, controls must be embedded into workflows rather than bolted on afterward.

Change management for critical assumptions, versioning across planning cycles, and explicit approval stages are not bureaucratic overhead; they are what allow automation to scale without eroding trust.

Organizations that implement these mechanisms consistently report smoother audits and fewer last-minute executive escalations, because disagreements shift from “whose numbers are right” to “which decision makes sense”.

APIs and service accounts deserve the same scrutiny as senior finance users. Least-privilege access, clear ownership, credential rotation, and centralized logging are no longer optional in environments where integrations can materially change forecasts.
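A periodic hygiene check over the service-account inventory can surface exactly the gaps listed above: missing ownership, stale credentials and write access to planning inputs. The account records, scope names and rotation threshold below are illustrative assumptions.

```python
from datetime import date

# Hypothetical inventory of service accounts used by planning integrations.
SERVICE_ACCOUNTS = [
    {"name": "svc-erp-feed", "owner": "finance-ops", "scopes": ["read:actuals"],
     "last_rotated": date(2025, 1, 10)},
    {"name": "svc-crm-sync", "owner": None, "scopes": ["read:pipeline", "write:drivers"],
     "last_rotated": date(2023, 6, 1)},
]

def hygiene_findings(accounts, today, max_age_days=90):
    """Flag unowned accounts, stale credentials, and write scope on planning inputs."""
    findings = []
    for acct in accounts:
        if acct["owner"] is None:
            findings.append((acct["name"], "no named owner"))
        if (today - acct["last_rotated"]).days > max_age_days:
            findings.append((acct["name"], "credential overdue for rotation"))
        if any(scope.startswith("write:") for scope in acct["scopes"]):
            findings.append((acct["name"], "write access to planning inputs"))
    return findings
```

In the sample inventory, the read-only, recently rotated account passes cleanly, while the unowned account with write access and a two-year-old credential produces three findings, each of which would need a named remediation owner.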

Regulatory regimes like DORA make this explicit for financial institutions, but the discipline applies just as forcefully outside regulated sectors.

Monitoring, too, must align with financial reality. Security alerts that flag technical anomalies are of limited use to FP&A leaders.

What matters are signals tied to business risk: unusual shifts in key drivers, edits outside normal planning windows, or new data sources appearing without review.

These indicators catch quiet failures early, before they harden into executive narratives.
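Such business-risk signals can be encoded as simple rules over planning-input changes. The sketch below is hypothetical: the change-record format, shift threshold and planning-window hours are assumptions for illustration.

```python
def driver_alerts(changes, planning_window, known_sources, max_shift_pct=15.0):
    """Return human-readable alerts for risky planning-input changes.

    changes: list of {"driver", "old", "new", "hour", "source"} dicts.
    planning_window: (start_hour, end_hour) of the normal editing window.
    """
    start, end = planning_window
    alerts = []
    for c in changes:
        # Rule 1: unusually large shift in a key driver.
        shift = abs(c["new"] - c["old"]) / abs(c["old"]) * 100
        if shift > max_shift_pct:
            alerts.append(f"{c['driver']}: {shift:.0f}% shift exceeds threshold")
        # Rule 2: edit made outside the normal planning window.
        if not (start <= c["hour"] < end):
            alerts.append(f"{c['driver']}: edited outside planning window")
        # Rule 3: data arriving from a source no one has reviewed.
        if c["source"] not in known_sources:
            alerts.append(f"{c['driver']}: unreviewed data source {c['source']}")
    return alerts
```

The value of rules like these is that the alerts arrive in finance language (driver, window, source) rather than as generic technical anomalies.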

As generative AI becomes more deeply embedded in finance work, simple rules make an outsized difference.

Clear rules against sharing confidential data with public tools, requiring human checks on AI-generated results, and being open about how AI is used in reporting all help keep experimentation safe and effective instead of careless.


The real payoff: Trust at speed

When we design FP&A automation with security at its core, the benefits multiply. Forecast cycles accelerate not because teams rush, but because data is cleaner and assumptions are trusted.

Leadership conversations shift away from reconciling numbers and toward debating trade-offs. Audit and compliance reviews become procedural rather than adversarial.

Perhaps most importantly, opportunities for fraud and manipulation shrink.

Studies of AI-enabled financial controls show that well-governed automation can improve anomaly detection by as much as 30%–50% while reducing false positives that drain analyst time.

Automation, in other words, does not have to increase risk. Done right, it becomes a risk-reduction mechanism.

The result is what resilience looks like in modern finance. Not perfect prevention, but systems that are transparent, testable, and recoverable, able to move fast without losing control.

Secure-by-design FP&A is not a constraint on ambition. It is what makes ambitious automation sustainable.

