The rise of AI agents: Architecting autonomous digital workers

November 10, 2025


Authors:

  • Ujwal Bukka | Senior Partner Solutions Architect, Amazon Web Services
  • Martin Ristov | Senior Partner AI Technologist, Amazon Web Services

Contributors:

  • Vignesh Subramanian | Vice President of Product Management, Infor
  • Natalia Ptaszek | Product Director, Infor
  • Lisa James | Director of Solution Marketing, Infor Industry Cloud Platform

The evolution from basic chatbots to sophisticated artificial intelligence (AI) agents marks a transformative shift in how humans interact with technology. While chatbots were confined to scripted responses, today's AI agents represent an extraordinary leap in autonomous capabilities. Powered by large language models (LLMs), these digital assistants now function as intelligent co-workers, mastering autonomous decision-making and complex problem-solving.

Unlike traditional AI's simple input-output model, modern AI agents can actively engage with various tools, reason through problems, and work collaboratively to achieve objectives. This advancement has elevated AI-powered applications from reactive responses to proactive problem-solving, making these AI agents invaluable assets across enterprise workflows, consumer applications, and automation systems.

In this blog, you'll explore what AI agents are, their core capabilities and patterns, and the safeguards required for reliable adoption.

What are AI agents?

AI agents mark a new frontier in artificial intelligence, one that goes beyond the constraints of chat-based automation. These systems don't just respond to commands; they proactively sense, decide, and act. They are autonomous systems that perceive inputs, reason about them, and act, either through natural language or by invoking real-world tools and application programming interfaces (APIs).

Unlike standalone models that generate one-off responses, agents can plan, make decisions, use tools, and retain memory across interactions while continuously adapting to feedback and carrying goals forward. This makes them ideal for real-world applications, from automating workflows and powering conversational assistants to orchestrating complex multi-step processes with minimal human intervention.

Core capabilities of AI agents

Autonomy

AI agents can operate independently to execute assigned tasks with minimal human oversight. These autonomous systems don't just follow rigid scripts—they actively assess situations, make decisions, and adapt their approaches based on the data they gather. For example, a travel-planning agent can autonomously research destinations, compare prices, create itineraries, and confirm bookings by collecting and analyzing data from multiple sources in real time.

Planning and reasoning

AI agents leverage planning and reasoning capabilities powered by LLMs to deconstruct a complex assigned task into actionable subtasks. This capability is critical for scenarios like supply chain optimization, customer support triage, or IT troubleshooting, where agents must dynamically adjust when conditions change, rather than simply generating responses.

Memory

Memory transforms AI agents from stateless systems into persistent, context-aware systems. Short-term memory maintains conversation history within ongoing interactions, while long-term memory stores and retrieves historical interactions, learned preferences, and domain-specific knowledge. This dual-memory framework enables continuity, personalized interactions, and iterative learning from past experiences.

Tools

AI agents become truly powerful when integrated with external tools like APIs, databases, enterprise systems such as customer relationship management (CRM) and enterprise resource planning (ERP), or custom functions. With tool access, AI agents can not only generate answers but also take actions, like querying data, orchestrating workflows, and more. The Model Context Protocol (MCP) extends these capabilities by offering a standardized framework that links AI agents to tools, data sources, and enterprise systems, enabling smooth integration throughout the technology stack and operational environment.
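At its simplest, tool use means mapping a model-chosen action to a registered function. The sketch below uses a hypothetical registry and a stubbed CRM lookup; real frameworks (and MCP servers) formalize the same idea with schemas and discovery:

```python
# Hypothetical tool registry: tools register by name, and the action
# the model selects is dispatched to the matching function.
TOOLS = {}

def tool(fn):
    TOOLS[fn.__name__] = fn
    return fn

@tool
def lookup_customer(customer_id: str) -> dict:
    # Stand-in for a real CRM/API call.
    return {"id": customer_id, "tier": "gold"}

def execute(action: dict):
    """Dispatch a model-chosen action like {"tool": ..., "args": {...}}."""
    fn = TOOLS[action["tool"]]
    return fn(**action["args"])

result = execute({"tool": "lookup_customer", "args": {"customer_id": "C-42"}})
```

Protocols like MCP standardize exactly this boundary, so tools can be discovered and invoked uniformly instead of being hard-wired into each agent.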

Amazon Bedrock AgentCore on Amazon Web Services (AWS) provides a comprehensive platform for AI agent development through five key components:

  1. AgentCore Identity secures agent authentication and access.
  2. AgentCore Runtime offers a serverless hosting environment for deploying and running agents.
  3. AgentCore Memory maintains both immediate and long-term context for personalized interactions.
  4. AgentCore Gateway enables developers to build, deploy, discover, and connect tools at scale.
  5. AgentCore Observability completes the suite by providing tracing, debugging, and monitoring capabilities for production environments.

Together with Amazon Bedrock's foundation models, AgentCore accelerates the complete agent lifecycle—from building and deploying to scaling and securing AI agents.

Building simple agents

At its core, a basic AI agent connects an LLM to a task loop that:

  • Receives user input and defines a goal: The agent receives a user question or objective to accomplish.
  • Generates the next step using reasoning: The LLM applies prompt templates, chains, or other frameworks to determine actions.
  • Executes actions via tools: If enabled, the agent invokes APIs, databases, or other external systems.
  • Feeds results back iteratively: Output from actions returns to the LLM’s context to inform subsequent steps.
  • Produces a contextualized response: The agent generates final output incorporating all gathered information.

To build AI agents, developers leverage frameworks such as LangChain, LangGraph, or Strands, which provide abstracted building blocks for faster development. This is particularly useful for implementations that aim to automate repetitive tasks like answering FAQs, retrieving structured data, and triggering simple workflows.
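The five-step loop can be sketched framework-free. Here `fake_llm` is a hypothetical stand-in for a real model call, and `search` a stand-in tool; the loop shape is what matters:

```python
def fake_llm(goal, history):
    # Stand-in planner: a real agent would call an LLM here,
    # with the goal and accumulated results in its context.
    if not history:
        return {"action": "search", "args": {"query": goal}}
    return {"action": "finish", "answer": f"Summary of {len(history)} result(s)."}

def search(query):
    # Stand-in tool: a real agent would hit an API or database.
    return f"results for '{query}'"

ACTIONS = {"search": search}

def run_agent(goal, max_steps=5):
    history = []                                        # gathered results
    for _ in range(max_steps):
        step = fake_llm(goal, history)                  # reason about next step
        if step["action"] == "finish":                  # produce final response
            return step["answer"]
        result = ACTIONS[step["action"]](**step["args"])  # execute via tool
        history.append(result)                          # feed result back
    return "step budget exhausted"

print(run_agent("cheap flights to Lisbon"))
```

The `max_steps` cap is worth noting: a bounded loop is the simplest protection against an agent that never converges on an answer.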

Building multi-agentic systems

Multi-agent systems emerge as a necessary evolution when task complexity demands specialized expertise across multiple domains. In these systems:

  • Each agent specializes in a domain (e.g., finance, customer support, IT operations, or research).
  • A supervisor agent serves as the orchestrator, intelligently routing tasks and managing workload distribution based on sub-agents' capabilities. Agents exchange information, maintain contextual continuity during handoffs, and negotiate optimal solutions through structured interactions.

This architectural pattern allows organizations to implement end-to-end automation spanning multiple enterprise systems, create support systems with intelligent escalation pathways, and develop digital twins for operational monitoring and optimization.
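A minimal sketch of the supervisor pattern, with keyword routing standing in for what would be an LLM-driven decision in practice (agent names and keywords here are illustrative):

```python
# Hypothetical specialist agents; real ones would each run their own
# reasoning loop with domain-specific tools.
def finance_agent(task, context):
    return f"[finance] handled: {task}"

def support_agent(task, context):
    return f"[support] handled: {task}"

SPECIALISTS = {
    "invoice": finance_agent,
    "refund": finance_agent,
    "password": support_agent,
}

def supervisor(task, context=None):
    """Route a task to a specialist, handing off shared context."""
    context = context or {}
    for keyword, agent in SPECIALISTS.items():
        if keyword in task.lower():
            context["routed_by"] = "supervisor"   # context travels with the handoff
            return agent(task, context)
    return support_agent(task, context)           # default escalation path

print(supervisor("Please reset my password"))
```

In a production system the routing decision itself is made by a model, and the context handoff carries full conversation state rather than a single flag.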

Amazon Bedrock AgentCore and the Strands framework enable you to build everything from simple agents that handle basic tasks to sophisticated multi-agent systems capable of executing complex workflows.

Guardrails: Safety and control

With autonomy comes the need for robust safety mechanisms, especially in enterprise environments. Essential guardrails include role-based access controls for tool usage, clearly defined task boundaries to prevent overreach, and human-in-the-loop checkpoints for critical actions such as financial transactions. Content moderation layers validate inputs for prompt injection attempts, filter outputs for harmful or non-compliant content, and monitor sensitive data exposure. Agent execution should be governed by predefined rules that limit capabilities, redact sensitive information, and enforce approval workflows before high-impact operations.
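One of those guardrails, the human-in-the-loop checkpoint, can be sketched as a pre-execution gate. Tool names and the approval interface below are illustrative:

```python
# High-impact tools require explicit approval before the agent may
# run them; everything else passes through. In production this gate
# would sit alongside role-based access checks and content filters.
HIGH_IMPACT = {"transfer_funds", "delete_records"}

def guarded_execute(tool_name, args, approver=None):
    """Run a tool call only if it clears the guardrail.

    `approver` is a callable standing in for a human review step;
    with no approver, high-impact calls are blocked by default.
    """
    if tool_name in HIGH_IMPACT:
        approved = approver(tool_name, args) if approver else False
        if not approved:
            return {"status": "blocked", "reason": "approval required"}
    return {"status": "executed", "tool": tool_name}

# Without a human approver, the high-impact call is blocked.
print(guarded_execute("transfer_funds", {"amount": 10_000}))
# A read-only call passes through.
print(guarded_execute("lookup_balance", {"account": "A-1"}))
```

The important property is fail-closed behavior: when no approval path exists, the high-impact action is denied rather than allowed by default.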

Mitigating hallucination

Hallucinations occur when an LLM generates seemingly plausible but factually incorrect or fabricated information with high confidence, potentially leading to misinformed decisions or actions. Effective mitigation strategies include:

  • Grounding responses in trusted knowledge bases through retrieval-augmented generation (RAG)
  • Applying confidence scoring to filter uncertain outputs
  • Validating responses before execution

These techniques ensure agents provide reliable, fact-based assistance rather than plausible but incorrect answers.
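The grounding idea can be sketched with a toy retriever over a two-entry knowledge base (the scoring here is simple word overlap; real RAG systems use embedding similarity and much larger stores):

```python
# Toy knowledge base and retriever; contents are illustrative.
KNOWLEDGE_BASE = [
    "Invoices are due within 30 days of issue.",
    "Refunds are processed in 5-7 business days.",
]

def retrieve(query):
    """Return the best-matching passage, or None if nothing overlaps."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(p.lower().split())), p) for p in KNOWLEDGE_BASE]
    score, passage = max(scored)
    return passage if score > 0 else None

def grounded_answer(query):
    passage = retrieve(query)
    if passage is None:
        # Refusing beats hallucinating: no source, no answer.
        return "I don't have a reliable source for that."
    return f"According to the knowledge base: {passage}"

print(grounded_answer("When are invoices due?"))
print(grounded_answer("What is the weather?"))
```

The refusal branch is the point: an agent that answers only from retrieved evidence trades some coverage for a much lower chance of confidently stating something false.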

On AWS, Amazon Bedrock Guardrails offers configurable safeguards for generative AI applications, tailored to your specific use cases and responsible AI policies. Additionally, Amazon Bedrock Knowledge Bases accelerates RAG development.

Observability: Transparent operation

For trust and adoption, organizations need comprehensive visibility into agent operations through observability tools that log every decision, tool call, and intermediate reasoning step. These systems track critical metrics (accuracy, latency, error rates, and cost) while providing dashboards for auditing how agents arrive at their outputs. This transparency enables teams to understand agent behavior, debug failures, monitor ongoing performance, and demonstrate compliance in regulated industries—ensuring that as agents gain autonomy, they remain explainable, reproducible, and continuously improvable.
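A minimal sketch of tool-call tracing, using a decorator that records each invocation's name, arguments, and latency to an in-memory trace (production systems export these as structured spans to a backend instead):

```python
import json
import time

# In-memory trace; a real setup would emit spans to an observability backend.
TRACE = []

def traced(fn):
    """Wrap a tool so every call is logged with arguments and latency."""
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        TRACE.append({
            "tool": fn.__name__,
            "args": kwargs,
            "latency_ms": round((time.perf_counter() - start) * 1000, 2),
        })
        return result
    return wrapper

@traced
def search(query=""):
    # Stand-in tool call.
    return f"results for '{query}'"

search(query="agent observability")
print(json.dumps(TRACE, indent=2))
```

Because every tool a traced agent touches leaves a record, the resulting log doubles as an audit trail for debugging and compliance reviews.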

Amazon AgentCore Observability and Amazon CloudWatch together enable you to build an observability platform that provides comprehensive visibility into agent operations.

Moving from automation to autonomy

AI agents represent a fundamental shift from static models to dynamic, autonomous digital collaborators that can act, reason, and evolve. By combining planning capabilities, memory, and tool integration with essential safety guardrails and observability, these systems unlock immense value across industries—transforming how you interact with software from passive responses to goal-driven actions. As organizations progress from simple, single agents to sophisticated multi-agent orchestration, success depends on maintaining the careful balance between scalable autonomy and transparent control, ensuring agents remain trustworthy partners in enterprise operations.

AWS offers a complete suite of services for every stage of the agent development journey, enabling organizations to rapidly build and deploy AI agents without compromising enterprise-grade security, scalability, or governance.

At Infor™, we believe AI should integrate seamlessly into your workflow, enhancing productivity rather than disrupting it. That’s why, with the power of AWS, we’re excited to introduce Infor Industry AI Agents: micro-vertical, role-based AI agents infused with deep industry context, complete data, and orchestrated control.

Infor Industry AI Agents are currently available in limited release, with broader availability planned for 2026. See the complete list of available Infor Industry AI Agents and how they can help optimize your processes today.
