
A practical way for government finance officers to think about AI


April 9, 2026 | By Jessica Dunyon, Director, Solution Marketing, Infor

At the 2026 NASC Annual Conference, I had the chance to share a practical perspective on how government finance teams can approach artificial intelligence (AI).

The focus was on balancing the push to modernize with the need to stay compliant, defensible, and in control. What resonated most was that this is not an abstract conversation—it is about making progress amid day-to-day operational demands.

Image below: Dr. Kathleen Baxter, State Comptroller for the State of Alabama (left), and Gerlda Hines, State Accounting Officer for the State of Georgia (right), with Jessica Dunyon from Infor (center) at the 2026 NASC Annual Conference in Eugene, OR.

State government doesn’t get to pause operations to modernize. It must evolve as work continues, while payroll runs, vendors are paid, agency questions come in, and the books still close on time and in compliance.

That tension has always existed. What’s changed is the added pressure and opportunity of AI. This is why many finance leaders are rethinking how to introduce new capabilities without adding new risk.

In conversations across government finance, the focus quickly moves beyond capability to questions of trust, oversight, and control. This is especially true when AI is embedded directly into day-to-day workflows.

For finance leadership, the mandate is familiar: Move faster where it’s safe, stay defensible where it matters, and support a workforce navigating a patchwork of legacy and modern systems. The focus is less on novelty and more on sequencing. The goal is to introduce AI in ways that reduce friction without increasing risk or disrupting what already works.

Start with the reality of constraints

In practice, modernization is constrained less by ideas and more by what could be called risk capacity: The amount of change an organization can absorb without increasing operational exposure. Clear policies, stable data, and repeatable controls help restore that capacity and make scaling safer over time.

What research suggests about scaling AI responsibly

AI is harder to scale when data and workflow context are fragmented across point solutions. A recent Nucleus Research report notes that integration can consume 30–40% of total AI project cost in disconnected environments. That’s why we believe AI is most effective when it’s built into the systems where finance work already happens, with governance and context carried through the workflow.

For mixed legacy-and-modern environments, the takeaway is practical: The more the assistance layer is embedded in governed systems and draws from consistent, approved sources, the easier it is to keep outputs defensible. That foundation is also what makes AI useful for staff, not just administrators. It can help users find guidance, follow steps, and assemble audit-ready support without relying on tribal knowledge or manual reconciliation.

This is where AI can help. It is not a replacement for people, but a support layer that helps employees navigate systems, find the right information, and complete work as intended.

In many organizations, teams are already operating with unfilled roles or redistributed responsibilities. In these environments, AI can help fill some of that gap by supporting decision-making: surfacing information, guiding next steps, and reducing the burden on already stretched teams.

Focus on where work slows down

The most practical starting point for AI is not large-scale transformation. It is identifying where work consistently stalls.

Image: NASC AI pilot steps

In state finance operations, that often shows up in familiar ways, such as:

  • Approvals that pause because the next step (or the right approver) isn’t clear
  • Questions like, “Which system is the source of truth?” and “Which code or policy applies here?”
  • Data that must be gathered, reconciled, and validated across legacy and modern tools
  • Audit and compliance requests that trigger time-consuming evidence collection and follow-ups

These aren’t new problems. But they are areas where AI can assist by surfacing relevant guidance, pulling information from approved sources, and keeping work moving, without requiring staff to become experts in every system.

Prove value in contained ways

One of the most common risks in adopting new technology is trying to do too much at once. A more effective approach is to start with focused, well-defined use cases that deliver value quickly.

Examples include:

  • Helping staff find the right policy, procedure, or coding guidance for a transaction
  • Guiding users through multi-step workflows across systems, with clear handoffs and status visibility
  • Assembling audit-ready support—what happened, who approved it, and which documents back it up

In regulated environments, another risk is assuming tools will remain static. AI models, policies, and approvals can change quickly. Early use cases should be designed to adapt, so organizations can adjust without starting over.

The goal isn’t to solve everything. It’s to reduce burden and confusion in a controlled environment, so teams can see the impact in fewer escalations, reduced rework, and better consistency.

Build governance alongside innovation

Speed matters, but in government, trust is the prerequisite. As AI becomes more integrated into workflows, governance can’t be an afterthought. It must be built in from the beginning, with enough flexibility to adapt as capabilities evolve without compromising accountability.

The most effective guardrails are the ones that cannot be bypassed. They are controls embedded directly into workflows, not dependent on training or informal convention.

Increasingly, organizations are recognizing the need to define their own guardrails, rather than relying solely on what a vendor or model provides. As AI capabilities advance rapidly, leaders need the ability to align controls with their policies, risk tolerance, and regulatory requirements. This level of control is becoming a critical requirement for organizations adopting AI at scale.

That includes:

  • Transparent outputs (what the AI used and why it responded the way it did)
  • Audit trails and records retention aligned to existing requirements
  • Role-based access and clear human approval points, including segregation of duties
  • Controls for accuracy and risk (approved sources, monitoring, and escalation when confidence is low)

Put simply, the organization should be able to reconstruct how an output was produced. It should be clear what inputs were used, what workflow actions occurred, and where human decisions were applied.
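
To make "reconstructable" concrete, here is a minimal sketch, in Python, of the kind of record that could answer those questions after the fact. The field names and example values are hypothetical, not an Infor schema or a prescribed standard; the point is simply which categories of evidence are worth retaining.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch of an audit record for one AI-assisted step.
# Field names and values are illustrative, not a product schema.
@dataclass
class AIAuditRecord:
    request: str                   # what the user asked the assistant
    sources: list[str]             # approved sources the output drew from
    output_summary: str            # what the AI returned
    workflow_actions: list[str]    # system actions taken as a result
    human_approver: str | None     # who reviewed or approved, if anyone
    confidence_flagged: bool       # was the output escalated for review?
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example: a coding-guidance lookup that a supervisor approved
record = AIAuditRecord(
    request="Which expenditure code applies to inter-agency IT services?",
    sources=["policy/chart-of-accounts-fy26", "memo/it-billing-guidance"],
    output_summary="Recommended code 5420 per the FY26 chart of accounts",
    workflow_actions=["journal entry drafted", "routed to supervisor"],
    human_approver="supervisor-of-record",
    confidence_flagged=False,
)
print(record)
```

In practice, records like this would live inside the governed system of record and be retained under the same schedules as other finance documentation.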

This isn’t about slowing progress. It’s what makes progress sustainable.

Scale with intention

Once value is proven and guardrails are in place, organizations can scale not just technology, but also confidence. Prioritization matters. Apply what works, maintain consistent oversight, and don’t force every process to change at once.

Image: NASC proof before scale

What leadership looks like in this moment

A practical operating model is centralized governance with federated execution. Statewide leadership defines standards, controls, and decision rights, while agencies and teams apply approved patterns within day-to-day workflows.

That only works if leadership clearly defines and enforces how AI is used:

  • Where does AI fit across the organization?
  • What is it allowed to do?
  • Who is authorized to use it, and under what conditions?

Those expectations need to be created, communicated, and enforced consistently.

AI doesn’t replace leadership judgment. It reveals where leadership attention is needed—where processes break down, decisions get delayed, and systems create unnecessary complexity.

For government finance leaders, the role isn’t to chase every new capability. It’s to guide how these tools are introduced, where they are applied, and how they are governed.

Moving forward

AI presents real opportunities to improve how government operates, but only if it’s implemented in a way that reflects the realities of public service. Start with real friction points, prove value early in contained workflows, and build governance that keeps outputs defensible.

In government, moving faster only matters when it preserves control, continuity, and public trust. That is progress without disrupting what already works. As we continue this conversation throughout the year at events like Native American Finance Officers Association (NAFOA), Government Finance Officers Association (GFOA), National Association of State Chief Information Officers (NASCIO), National Association of State Budget Officers (NASBO), National Association of State Auditors, Comptrollers and Treasurers (NASACT), and the Infor Public Sector Leadership Council, I’d love to compare insights on what’s working, what’s stalling, and which guardrails are proving essential.

If you’re testing AI in finance operations, or deciding where to start, I welcome the conversation. Bring your use case, your concern, or your lesson learned.
