

# Agentic workflow patterns
<a name="agentic-workflow-patterns"></a>

Agentic workflow patterns integrate modular software agents with structured large language model (LLM) workflows, enabling autonomous reasoning and action. While inspired by traditional serverless and event-driven architectures, these patterns shift core logic from static code to LLM-augmented agents, providing enhanced adaptability and contextual decision-making. This evolution transforms conventional cloud architectures from deterministic systems to ones capable of dynamic interpretation and intelligent augmentation, while maintaining fundamental principles of scalability and responsiveness.

**Topics**
+ [From event-driven to cognition-augmented systems](from-event-driven-to-cognition-augmented-systems.md)
+ [Prompt chaining saga patterns](prompt-chaining-saga-patterns.md)
+ [Routing dynamic dispatch patterns](routing-dynamic-dispatch-patterns.md)
+ [Parallelization and scatter-gather patterns](parallelization-and-scatter-gather-patterns.md)
+ [Saga orchestration patterns](saga-orchestration-patterns.md)
+ [Evaluator reflect-refine loop patterns](evaluator-reflect-refine-loop-patterns.md)
+ [Designing agentic workflows on AWS](designing-agentic-workflows-on-aws.md)
+ [Conclusion](conclusion.md)

# From event-driven to cognition-augmented systems
<a name="from-event-driven-to-cognition-augmented-systems"></a>

Modern cloud architectures, particularly those built on serverless and event-driven principles, have traditionally relied on patterns like routing, fan-out, and enrichment to create responsive, scalable systems. Agentic AI systems build upon these foundations while reframing them around LLM-augmented reasoning and cognitive flexibility. This approach allows for more sophisticated problem-solving and automation capabilities, potentially revolutionizing how complex tasks are handled in cloud environments.

## Event-driven architecture
<a name="event-driven-architecture"></a>

The following diagram shows a typical distributed system:

![Event-driven architecture with data enrichment.](http://docs.aws.amazon.com/prescriptive-guidance/latest/agentic-ai-patterns/images/event-driven-architecture-with-data-enrichment.png)


1. A user submits a request to Amazon API Gateway.

1. Amazon API Gateway routes the request to an AWS Lambda function.

1. AWS Lambda performs data enrichment by querying an Amazon Aurora database.

1. Amazon API Gateway returns the enriched payload to the caller.

This structure is both reliable and scalable, but it's fundamentally static. Business rules and logic paths must be explicitly coded, and adapting to changing contexts or incomplete information is limited.
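The enrichment step in this flow can be sketched as a Lambda-style handler. This is an illustrative sketch only: the Aurora query is replaced with an in-memory dictionary, and `lookup_customer` is a hypothetical helper standing in for the real database call.

```python
# Sketch of the enrichment Lambda function (step 3). The Aurora query is
# stubbed with an in-memory dict; a real function would use a database client.
FAKE_AURORA = {"cust-1": {"name": "Alice", "tier": "gold"}}

def lookup_customer(customer_id):
    # Stand-in for a SQL lookup such as: SELECT ... FROM customers WHERE id = ?
    return FAKE_AURORA.get(customer_id, {})

def handler(event, context=None):
    """API Gateway proxy-style handler that enriches the request payload."""
    enriched = dict(event)
    enriched["customer"] = lookup_customer(event.get("customerId"))
    return {"statusCode": 200, "body": enriched}
```

Note that the business rule (which table to query, which fields to attach) is hard-coded, which is exactly the static quality the next section contrasts with agentic enrichment.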

## Cognition-augmented workflows
<a name="cognition-augmented-workflows"></a>

Agentic architectures add cognitive augmentation to an event-driven system. The following diagram shows an agentic equivalent:

![Cognition-augmented workflow.](http://docs.aws.amazon.com/prescriptive-guidance/latest/agentic-ai-patterns/images/cognition-augmented-workflow.png)


1. A user submits a query through an SDK or API call.

1. An Amazon Bedrock agent receives the query.

1. The agent interprets the query by invoking an LLM.

1. The agent performs semantic enrichment by searching the Amazon Bedrock knowledge base or other external data sources.

1. The LLM synthesizes a context-rich, goal-aligned response.

1. The system returns a synthesized response to the user.

In this flow, the LLM reasons about the request, understands intent, retrieves and combines relevant context, and then decides how best to respond. This pattern mirrors the traditional enrichment pattern, where messages are augmented with external data before being routed further. In agentic systems, however, this enrichment is not a static lookup. Instead, the enrichment is dynamic, semantically guided, and driven by purpose.
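The agent loop in steps 2 through 6 can be sketched as follows. Both the LLM and the knowledge base are stubs here; a real implementation would call Amazon Bedrock (for example, through the `InvokeModel` or `InvokeAgent` APIs) and a managed knowledge base.

```python
# Minimal sketch of the cognition-augmented flow: interpret intent,
# retrieve semantic context, then synthesize a response. All three
# steps are stubbed in place of real Amazon Bedrock calls.
KNOWLEDGE_BASE = {
    "refund": "Refunds are processed within 5 business days.",
}

def interpret(query):
    # Stub LLM intent extraction: find the first known topic in the query.
    return next((t for t in KNOWLEDGE_BASE if t in query.lower()), None)

def semantic_enrich(topic):
    # Stub semantic retrieval against the knowledge base.
    return KNOWLEDGE_BASE.get(topic, "")

def respond(query):
    topic = interpret(query)
    context = semantic_enrich(topic)
    # Stub synthesis: a real agent would pass query + context to the LLM.
    return f"{query} -> {context}" if context else "No relevant context found."
```

The key contrast with the static handler earlier is that the lookup key is not a fixed field in the payload; it is derived from the interpreted meaning of the request.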

## Core insights
<a name="core-insights"></a>

Each LLM workflow can be mapped to an agentic workflow pattern, which mirrors and evolves traditional event-driven architecture styles. A basic building block of agentic workflows is the ability to augment an LLM's context with data, tools, and memory. This creates a reasoning loop that's informed, adaptive, and aligned with user intent. Where traditional systems enrich messages with lookup data, agentic systems enable software to act less like scripts and more like intelligent collaborators.

# Prompt chaining saga patterns
<a name="prompt-chaining-saga-patterns"></a>

By reimagining LLM prompt chaining as an event-driven saga, we unlock a new operational model: workflows become distributed, recoverable, and semantically coordinated across autonomous agents. Each prompt-response step is reframed as an atomic task, emitted as an event, consumed by a dedicated agent, and enriched with contextual metadata. 

The following diagram is an example of LLM prompt chaining:

![LLM prompt chaining.](http://docs.aws.amazon.com/prescriptive-guidance/latest/agentic-ai-patterns/images/workflow-patterns-llm-prompt-chaining.png)


## Saga choreography
<a name="saga-choreography"></a>

The saga choreography pattern is an implementation approach in distributed systems that has no central coordinator. Instead, each service or component publishes events that trigger the next workflow action. This pattern is widely used in distributed systems for managing transactions across multiple services. In a saga, the system runs a series of coordinated local transactions. If one fails, the system triggers compensating actions to maintain consistency.

The following diagram is an example of saga choreography:

![Saga choreography.](http://docs.aws.amazon.com/prescriptive-guidance/latest/agentic-ai-patterns/images/workflow-patterns-saga-choreography.png)


1. Reserve inventory

1. Authorize payment

1. Create shipping order

If step 3 fails, the system invokes compensating actions (for example, cancel a payment or release inventory). 

This pattern is especially valuable in event-driven architectures where services are loosely coupled and states must be consistently resolved over time, even in the presence of partial failure.
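The choreographed saga above can be sketched in a few lines. This is a simplified single-process sketch: each handler returns the name of the event it would publish, and a failure event triggers the compensating actions. In a real system, these handlers would be independent services reacting to an event bus such as Amazon EventBridge.

```python
# Sketch of the three-step saga with compensation on failure.
def reserve_inventory(order):
    order["inventory"] = "reserved"
    return "InventoryReserved"

def authorize_payment(order):
    order["payment"] = "authorized"
    return "PaymentAuthorized"

def create_shipping(order):
    if order.get("address") is None:
        return "ShippingFailed"  # this event would trigger compensation
    order["shipping"] = "created"
    return "ShippingCreated"

def compensate(order):
    # Compensating actions for a failure in step 3.
    order["payment"] = "refunded"
    order["inventory"] = "released"

def run_saga(order):
    for step in (reserve_inventory, authorize_payment, create_shipping):
        event = step(order)
        if event.endswith("Failed"):
            compensate(order)
            return "RolledBack"
    return "Completed"
```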

## Prompt chaining pattern
<a name="prompt-chaining-pattern"></a>

Prompt chaining resembles the saga pattern in both structure and purpose. It executes a series of reasoning steps that build sequentially while preserving context and allowing for rollbacks and revisions.

## Agent choreography
<a name="agent-choreography"></a>

1. LLM interprets a complex user query and generates a hypothesis

1. LLM elaborates a plan to solve the task

1. LLM executes a subtask (for example, by using a tool call or retrieving knowledge)

1. LLM refines the output or revisits an earlier step if it deems a result unsatisfactory

If an intermediate result is flawed, the system can do one of the following:
+ Retry the steps using a different approach
+ Revert to a previous prompt and replan
+ Use an evaluator loop (for example, from the evaluator-optimizer pattern) to detect and correct failures

Like the saga pattern, prompt chaining allows for partial progress and rollback mechanisms. This happens through iterative refinement and LLM-directed correction rather than through compensating database transactions.
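A prompt chain with retry-based correction can be sketched as follows. The LLM call and the evaluator are stubs (the stub deliberately fails the first attempt at one step), but the control flow mirrors the pattern: each step's output is evaluated, flawed results are retried, and an exhausted step would hand off to replanning.

```python
# Sketch of prompt chaining with per-step evaluation and retry.
def call_llm(prompt, attempt):
    # Stub LLM: pretend the first attempt at the "execute" step is flawed.
    if prompt == "execute" and attempt == 0:
        return "FLAWED"
    return f"{prompt}-ok"

def evaluate(output):
    # Stub evaluator; a real chain might use an evaluator prompt here.
    return output != "FLAWED"

def run_chain(steps, max_retries=2):
    results = []
    for step in steps:
        for attempt in range(max_retries + 1):
            output = call_llm(step, attempt)
            if evaluate(output):
                results.append(output)
                break
        else:
            return None  # retries exhausted; replanning would happen here
    return results
```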

The following diagram is an example of agent choreography:

![Agent choreography.](http://docs.aws.amazon.com/prescriptive-guidance/latest/agentic-ai-patterns/images/workflow-patterns-agent-choregraphy.png)


1. A user submits a query through an SDK.

1. An Amazon Bedrock agent orchestrates reasoning through the following:
   + Interpretation (LLM)
   + Planning (LLM)
   + Execution through a tool or knowledge base
   + Response construction

1. If a tool fails or returns insufficient data, the agent can dynamically replan or rephrase the task.

1. Memory (for example, a short-term vector store) can preserve state across steps.

## Takeaways
<a name="takeaways-prompt-chaining"></a>

Where the saga pattern manages distributed service calls with compensating logic, prompt chaining manages reasoning tasks with reflective sequencing and adaptive replanning. Both approaches allow for incremental progress, decentralized decision points, and failure recovery; prompt chaining achieves this through informed reasoning rather than rigid rollback.

Prompt chaining introduces transactional reasoning, which is the cognitive equivalent of sagas. That is, each "thought" is reevaluated, revised, or abandoned as part of a broader goal-directed dialogue.

# Routing dynamic dispatch patterns
<a name="routing-dynamic-dispatch-patterns"></a>

In modern agentic systems, where tasks range from document parsing to autonomous software generation, the ability to dynamically route requests to the most capable large language model (LLM) or agent becomes critical. Static routing logic, often embedded within orchestration scripts or API layers, lacks the adaptability required for real-time, multi-model, multi-capability environments. To address this, LLM routing workflows can be transformed into an event-driven architecture that leverages a dynamic dispatch pattern, turning LLM calls into intelligently routed, context-aware events.

The following diagram is an example of LLM routing:

![LLM routing.](http://docs.aws.amazon.com/prescriptive-guidance/latest/agentic-ai-patterns/images/workflow-patterns-llm-routing.png)


## Dynamic dispatch
<a name="dynamic-dispatch"></a>

In traditional distributed systems, the dynamic dispatch pattern selects and invokes specific services at runtime based on incoming event attributes, such as event type, source, and payload. This is commonly implemented using Amazon EventBridge, which can evaluate and route incoming events to appropriate targets (for example, AWS Lambda functions, AWS Step Functions, or Amazon Elastic Container Service tasks).

The following diagram is an example of dynamic dispatch:

![Dynamic dispatch.](http://docs.aws.amazon.com/prescriptive-guidance/latest/agentic-ai-patterns/images/workflow-patterns-dynamic-dispatch.png)


1. An application emits an event (for example, `{"type": "orderCreated", "priority": "high"}`).

1. Amazon EventBridge evaluates the event against its routing rules.

1. Based on an event's attributes, the system dynamically dispatches to the following:
   + `HighPriorityOrderProcessor` (service A)
   + `StandardOrderProcessor` (service B)
   + `UpdateOrderProcessor` (service C)

This pattern supports loose coupling, domain-based specialization, and runtime extensibility. This allows systems to respond intelligently to changing requirements and event semantics.
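Attribute-based dispatch can be sketched as a first-match rule table, loosely mimicking how EventBridge matches event patterns against event attributes. The rules and processor names below follow the example above; the handlers themselves are hypothetical.

```python
# Sketch of attribute-based dynamic dispatch: the first rule whose
# key/value pairs all appear in the event selects the target service.
RULES = [
    ({"type": "orderCreated", "priority": "high"}, "HighPriorityOrderProcessor"),
    ({"type": "orderCreated"}, "StandardOrderProcessor"),
    ({"type": "orderUpdated"}, "UpdateOrderProcessor"),
]

def dispatch(event):
    for pattern, target in RULES:
        if all(event.get(k) == v for k, v in pattern.items()):
            return target
    return None  # no matching rule; the event would be dropped or dead-lettered
```

Note that every routable shape of event must be anticipated in the rule table, which is the rigidity that LLM-based routing removes.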

## LLM-based routing
<a name="llm-based-routing"></a>

In agentic systems, routing also performs dynamic task delegation – but instead of Amazon EventBridge rules or metadata filters, the LLM classifies and interprets the user's intent through natural language. The result is a flexible, semantic, and adaptive form of dispatching.

## Agent router
<a name="agent-router"></a>

This architecture enables rich intent-based dispatching without predefined schemas or event types, which is ideal for unstructured input and complex queries.

1. A user submits the request "Can you help me review my contract terms?"

1. The LLM interprets this as a legal document task.

1. The agent routes the task to one or more of the following:
   + Contract review prompt template
   + Legal reasoning subagent
   + Document parsing tool

The following diagram is an example of an agent router:

![Agent router.](http://docs.aws.amazon.com/prescriptive-guidance/latest/agentic-ai-patterns/images/workflow-patterns-agent-router.png)


1. A user submits a natural language request through an SDK.

1. An Amazon Bedrock agent uses an LLM to classify the task (for example, legal, technical, or scheduling).

1. The agent dynamically routes the task through an action group to invoke the required agent:
   + Domain-specific agent
   + Specialized tool chain
   + Custom prompt configuration

1. The selected handler processes the task and returns a tailored response.
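The semantic routing flow can be sketched as follows. The classifier is a keyword stub standing in for the LLM intent classification in step 2, and the handler names are hypothetical; a real router would ask the model to label the task and then invoke the matching agent or action group.

```python
# Sketch of intent-based routing: classify the task semantically,
# then dispatch to the matching specialized handler.
HANDLERS = {
    "legal": lambda task: f"legal-review: {task}",
    "technical": lambda task: f"tech-triage: {task}",
    "scheduling": lambda task: f"calendar: {task}",
}

def classify(task):
    # Stub classifier; a real agent would prompt the LLM for an intent label.
    keywords = {"contract": "legal", "bug": "technical", "meeting": "scheduling"}
    for word, label in keywords.items():
        if word in task.lower():
            return label
    return "technical"  # fallback handler for unrecognized intents

def route(task):
    return HANDLERS[classify(task)](task)
```

Extending the system means adding a handler and teaching the classifier a new label, rather than enumerating new structured event shapes.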

## Takeaways
<a name="takeaways-agentic-routing"></a>

Where traditional dynamic dispatch uses Amazon EventBridge rules for routing based on structured event attributes, agentic routing uses LLMs to semantically classify and route tasks based on meaning and intent. This expands the system's flexibility by enabling the following:
+ Broader input understanding
+ Intelligent fallback and tool selection
+ Natural extensibility through new agent roles or prompt styles

Agentic routing replaces rigid rules with dynamic cognitive dispatching, which allows systems to evolve with language rather than code.

# Parallelization and scatter-gather patterns
<a name="parallelization-and-scatter-gather-patterns"></a>

Many advanced reasoning and generation tasks – such as summarizing large documents, evaluating multiple solution paths, or comparing diverse perspectives – benefit from the parallel execution of prompts. Traditional sequential workflows fall short when scalability, responsiveness, and fault tolerance are required. To overcome this, LLM-based parallelization can be reimagined using an event-driven scatter-gather pattern, where tasks are dynamically fanned out to autonomous agents and results intelligently synthesized.

The following diagram is an example of an LLM parallelization workflow:

![LLM parallelization.](http://docs.aws.amazon.com/prescriptive-guidance/latest/agentic-ai-patterns/images/workflow-for-parallelization.png)


## Scatter-gather
<a name="scatter-gather"></a>

In distributed systems, a scatter-gather pattern sends tasks to multiple services or processing units in parallel, waits for their responses, and then aggregates results into a consolidated output. Unlike fan-out, scatter-gather is coordinated because it expects responses and usually applies logic to combine, compare, and select results.

Common implementations for parallelization and scatter-gather include the following:
+ AWS Step Functions Map state for parallel task execution
+ AWS Lambda with concurrency, coordinating results from multiple invoked functions
+ Amazon EventBridge with correlation IDs and aggregation workflows
+ Custom controller pattern to manage fan-out and gather results by using Amazon Simple Storage Service (Amazon S3), Amazon DynamoDB, or queues

The following diagram is an example of scatter-gather:

![Scatter-gather.](http://docs.aws.amazon.com/prescriptive-guidance/latest/agentic-ai-patterns/images/workflow-patterns-scatter-gather.png)


1. A user sends a request to a central coordinator function that scatters the task by publishing parallel messages to an Amazon Simple Notification Service (Amazon SNS) topic.

1. Each message includes task metadata and is routed to a specialized worker AWS Lambda function.

1. Each worker Lambda function independently processes its assigned subtask (for example, querying an external API, processing a document, or analyzing data).

1. Results are written to a shared results channel, such as Amazon Simple Queue Service (Amazon SQS).

1. The aggregator function waits for all responses to be completed, and then it does the following:
   + Gathers and aggregates the results (for example, merges summaries, selects best matches)
   + Sends a final response or triggers a downstream workflow
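Steps 1 through 5 can be sketched in-process with threads and a shared queue. This is a local analogue only: the threads stand in for worker Lambda functions, and the queue stands in for the shared results channel the aggregator drains.

```python
import queue
import threading

# Sketch of scatter-gather: the coordinator scatters subtasks to workers,
# each worker writes its result to a shared queue, and the aggregator
# waits for all responses before merging them.
def worker(task_id, payload, results):
    results.put((task_id, payload.upper()))  # stand-in for real processing

def scatter_gather(payloads):
    results = queue.Queue()
    threads = [
        threading.Thread(target=worker, args=(i, p, results))
        for i, p in enumerate(payloads)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()  # the aggregator waits for every worker to finish
    gathered = [results.get() for _ in payloads]
    return [text for _, text in sorted(gathered)]  # merge in task order
```

The correlation ID role is played here by `task_id`; in a distributed version, that identifier is what lets the aggregator match responses to the original scatter.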

Common use cases for scatter-gather patterns include the following:
+ Federated search
+ Price comparison engines
+ Aggregated data analysis
+ Multimodel inference

## LLM-based parallelization (scatter-gather cognition)
<a name="llm-based-parallelization-scatter-gather-cognition"></a>

In agentic systems, parallelization closely mirrors scatter-gather by distributing subtasks across multiple LLM calls or agents, each independently reasoning through a portion of the problem. Returned results are gathered and synthesized by an aggregation process, which is often another LLM or controller agent.

## Agent parallelization
<a name="agent-parallelization"></a>

1. An agent receives the request "Summarize insights across these 10 reports."

1. It scatters the reports to 10 parallel LLM summarization tasks.

1. When all summaries are returned, the agent does the following:
   + Aggregates summaries into a unified briefing
   + Identifies themes or contradictions
   + Sends the synthesized output to the user

This agentic workflow enables scalable, modular, and adaptive parallel reasoning. This is ideal for use cases that require high cognitive throughput.
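The parallel summarization flow can be sketched with a thread pool fanning out one stubbed LLM call per report. In practice, each `summarize` call would invoke a model through Amazon Bedrock, and the aggregation step might itself be another LLM call that merges themes and flags contradictions.

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of agent parallelization: one stubbed LLM call per report,
# followed by an aggregation step that merges the summaries.
def summarize(report):
    # Stub "LLM summary": take the first sentence of the report.
    return report.split(".")[0]

def summarize_all(reports):
    with ThreadPoolExecutor(max_workers=4) as pool:
        summaries = list(pool.map(summarize, reports))
    # Aggregation step: merge summaries into a unified briefing.
    return " | ".join(summaries)
```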

The following diagram is an example of agent parallelization:

![Agent parallelization.](http://docs.aws.amazon.com/prescriptive-guidance/latest/agentic-ai-patterns/images/workflow-patterns-agent-parallelization.png)


1. A user submits a multipart query or document set.

1. A controller Lambda function or Step Functions workflow distributes the subtasks. Each task invokes an Amazon Bedrock LLM call or subagent with its own prompt.

1. When the calls and subtasks are complete, results are stored (for example, in Amazon S3 or memory store), and an aggregation step merges, compares, or filters the outputs.

1. The system returns the final response to the user or downstream agent.

This system has a distributed reasoning loop with traceability, fault tolerance, and optional result weighting or selection logic.

## Takeaways
<a name="takeaways-parallelization"></a>

Agentic parallelization uses scatter-gather patterns to distribute LLM tasks, enabling parallel processing and intelligent result synthesis.

# Saga orchestration patterns
<a name="saga-orchestration-patterns"></a>

As workflows driven by LLMs become increasingly complex, spanning prompt chains, data processing steps, tool invocations, and agent collaboration, the need for intelligent orchestration becomes essential. Rather than relying on tightly coupled scripts or static predetermined execution flows, these workflows can be implemented as event-driven orchestration patterns, enabling LLM-based systems to dynamically coordinate, monitor, and adapt multi-step tasks across autonomous agents.

The following diagram is an example of an orchestrator:

![Orchestrator.](http://docs.aws.amazon.com/prescriptive-guidance/latest/agentic-ai-patterns/images/workflow-patterns-orchestrator.png)


## Event orchestration
<a name="event-orchestration"></a>

In traditional distributed systems, event orchestration refers to a pattern in which a central coordinator manages a complex workflow by explicitly directing the flow of control across multiple services or tasks. Unlike event choreography (where each service reacts independently), orchestration provides centralized logic, visibility, and control over the entire process.

This is typically implemented using the following tools:
+ **AWS Step Functions** – Define and execute stateful workflows
+ **AWS Lambda** – Carry out discrete tasks within the orchestrated flow
+ **Amazon SQS** or **Amazon EventBridge** – Trigger asynchronous steps or responses

The following diagram is an example of saga orchestration:

![Saga orchestration.](http://docs.aws.amazon.com/prescriptive-guidance/latest/agentic-ai-patterns/images/workflow-patterns-saga-orchestration.png)


An AWS Step Functions workflow manages a customer order process:

1. Create order (AWS Lambda)

1. Update inventory (AWS Lambda)

1. Make payment (AWS Lambda)

The orchestrator coordinates each step by managing retries, parallel branches, timeouts, and failures.
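The orchestrated saga above can be sketched as a central coordinator that runs each step in order and, on failure, replays the recorded compensations in reverse, the way a Step Functions workflow would route to a rollback path. The `fail_at` parameter is an illustrative hook for injecting a failure; the step and compensation bodies are stubs.

```python
# Sketch of centrally orchestrated saga execution with compensation.
def run_order_saga(order, fail_at=None):
    steps = [
        ("create_order", lambda: order.update(status="created"),
         lambda: order.update(status="cancelled")),
        ("update_inventory", lambda: order.update(inventory="reserved"),
         lambda: order.update(inventory="released")),
        ("make_payment", lambda: order.update(payment="charged"),
         lambda: order.update(payment="refunded")),
    ]
    done = []
    for name, action, compensation in steps:
        if name == fail_at:  # injected failure point for illustration
            for _, _, comp in reversed(done):
                comp()  # undo completed steps in reverse order
            return "RolledBack"
        action()
        done.append((name, action, compensation))
    return "Completed"
```

The contrast with saga choreography is that the rollback decision lives in one place (the coordinator) rather than being distributed across event handlers.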

## Role-based agent system (orchestrator)
<a name="role-based-agent-system-orchestrator"></a>

In agentic systems, the orchestrator pattern mirrors event orchestration but distributes the logic across multiple reasoning agents, each with a defined role or specialization. A central orchestrator agent interprets the overall task, decomposes it into subtasks, and delegates those to worker agents, each optimized for a particular domain (for example, research, coding, summarization, review).

## Supervisor
<a name="supervisor"></a>

1. A user submits the query "Create a project brief and summarize the top 5 competitors."

1. The orchestrator agent does the following:
   + Assigns a research agent to find competitor data
   + Sends the raw findings to a summarization agent
   + Passes results to a brief-writer agent
   + Compiles the final output for the user

Each agent operates independently, but the orchestrator coordinates the tasks, much as a Step Functions workflow coordinates individual Lambda functions.

The following diagram is an example of a supervisor:

![Supervisor.](http://docs.aws.amazon.com/prescriptive-guidance/latest/agentic-ai-patterns/images/workflow-patterns-supervisor.png)


1. A user submits a task to an Amazon Bedrock supervisor agent.

1. The supervisor agent parses the request into subtasks for each agent collaborator.

1. Each subtask is assigned to a collaborator agent with role-specific prompts or toolchains.

1. Worker agents call external APIs or tools through an action group.

1. Each worker agent returns the output in a structured format.

1. When all workers return their results, the supervisor evaluates, synthesizes, and returns the final response.

This structure allows for modularity, adaptability, and introspection across complex multistep agent workflows.
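The supervisor flow in steps 1 through 6 can be sketched as decompose, delegate, and synthesize. The decomposition logic and worker agents are stubs; in Amazon Bedrock, these would be a supervisor agent and its collaborator agents, each with role-specific prompts.

```python
# Sketch of a supervisor agent: decompose the task into role-tagged
# subtasks, delegate each to a worker, then synthesize the outputs.
WORKERS = {
    "research": lambda t: {"role": "research", "output": f"data on {t}"},
    "summarize": lambda t: {"role": "summarize", "output": f"summary of {t}"},
}

def decompose(task):
    # Stub decomposition: every task gets research first, then a summary.
    return [("research", task), ("summarize", task)]

def supervise(task):
    results = [WORKERS[role](subtask) for role, subtask in decompose(task)]
    # Synthesis step: combine structured worker outputs into one response.
    return "; ".join(r["output"] for r in results)
```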

## Takeaways
<a name="takeaways-role-based"></a>

Where event orchestration uses centralized control (for example, AWS Step Functions) to direct service execution, role-based agent systems use an LLM-powered orchestrator agent to reason about the goal, delegate subtasks to worker agents, and synthesize the final output.

In both paradigms, the orchestrator does the following:
+ Maintains context and execution flow
+ Handles branching, sequencing, and error handling
+ Produces a unified result from distributed components

Agentic orchestration, however, adds reasoning, adaptability, and semantic delegation. This makes it well-suited to open-ended, ambiguous, and evolving tasks.

# Evaluator reflect-refine loop patterns
<a name="evaluator-reflect-refine-loop-patterns"></a>

Tasks such as code generation, summarization, or autonomous decision-making benefit greatly from runtime feedback, enabling the system to evolve through observation and refinement. To operationalize this, the reflect–refine cycle can be implemented as an event-driven feedback control loop – a pattern inspired by systems engineering, adapted for autonomous, intelligent workflows.

The following diagram is an example of an evaluator reflect-refine feedback loop:

![Evaluator reflect-refine feedback loop.](http://docs.aws.amazon.com/prescriptive-guidance/latest/agentic-ai-patterns/images/workflow-for-evaluator.png)


## Feedback control loop
<a name="feedback-control-loop"></a>

A feedback control loop is a pattern that monitors its own outputs and behaviors, evaluates them against defined criteria or a desired state, and then adjusts its actions accordingly. This architecture is inspired by control theory and is foundational in domains such as automation, continuous integration and continuous delivery (CI/CD) pipelines, and machine learning operations.

The following diagram is an example of a feedback control loop:

![Feedback control loop.](http://docs.aws.amazon.com/prescriptive-guidance/latest/agentic-ai-patterns/images/workflow-patterns-feedback-control-loop.png)


1. A deployment pipeline emits a buildComplete event.

1. The event triggers an automated test or evaluation job that validates the build.

1. If validation fails (for example, due to failing tests, security issues, or a policy violation), the system does the following:
   + Emits a failure event
   + Logs the issue or sends a notification
   + Triggers a remediation or corrective action, such as rollback, patching, or retry

The loop continues until an acceptable outcome is produced, the issue is escalated, or a timeout occurs. This pattern is commonly implemented using the following:
+ Amazon EventBridge rules to route events to evaluation or remediation tasks
+ AWS Step Functions for iterative retry logic and branching on evaluation outcomes
+ Amazon Simple Notification Service (Amazon SNS) or Amazon CloudWatch alarms for feedback triggers and alerts
+ AWS Lambda functions or containerized workers to apply corrective actions

## Feedback control loop (evaluator)
<a name="feedback-control-loop-evaluator"></a>

An evaluator workflow is a cognitive feedback loop that's powered by LLMs or reasoning agents. The process consists of the following:

1. A generator agent or LLM produces an output (for example, a plan, answer, or draft).

1. An evaluator agent reviews the result using a critique prompt or evaluation rubric.

1. Based on the feedback, the original agent or a new optimizer agent revises the output.

The loop repeats until the result meets a set of criteria, is approved, or reaches a retry limit.
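The generate, evaluate, refine loop can be sketched as follows. Both agents are stubs: the generator produces a new version per round of critique, and the evaluator's rubric is a toy scoring function that only accepts later revisions. A real loop would run two LLM prompts (generator and evaluator) against Amazon Bedrock.

```python
# Sketch of a reflect-refine loop: generate a draft, score it against
# a threshold, and revise until it passes or the retry limit is reached.
def generate(task, critiques):
    # Stub generator: each accumulated critique yields a new version.
    return f"{task} draft v{len(critiques)}"

def evaluate_draft(draft):
    # Stub rubric: score rises with each revision; accept at >= 0.9.
    version = int(draft.rsplit("v", 1)[1])
    score = min(1.0, 0.4 + 0.3 * version)
    critique = "needs more detail" if score < 0.9 else ""
    return score, critique

def reflect_refine(task, threshold=0.9, max_rounds=5):
    critiques = []
    for _ in range(max_rounds):
        draft = generate(task, critiques)
        score, critique = evaluate_draft(draft)
        if score >= threshold:
            return draft, score
        critiques.append(critique)  # feed the critique into the next round
    return draft, score  # retry limit reached; return best effort
```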

## Evaluator
<a name="evaluator"></a>

1. A user asks an agent to write a policy summary.

1. The generator agent drafts it.

1. An evaluator agent checks coverage, tone, and legal correctness.

1. If the response is inadequate, it's refined and resubmitted until the feedback loop converges.

This enables self-assessment, iterative refinement, and adaptive output control, all without human input.

The following diagram is an example of a feedback control loop (evaluator):

![Feedback control loop (evaluator).](http://docs.aws.amazon.com/prescriptive-guidance/latest/agentic-ai-patterns/images/workflow-patterns-evaluator.png)


1. A user issues a task (for example, draft a business strategy).

1. An Amazon Bedrock agent generates an initial draft using an LLM.

1. A second agent (or a follow-up prompt) performs a structured evaluation (for example, "rate this output by clarity, completeness, and tone").

1. If the rating falls below a threshold, the response is revised by:
   + Reinvoking the generator with an embedded critique
   + Sending the feedback to a specialized refiner agent
   + Iterating until an acceptable response is reached

Optional components like AWS Lambda controllers or AWS Step Functions can manage feedback thresholds, retries, and fallback strategies.

## Takeaways
<a name="takeaways-evaluator"></a>

Where traditional feedback control loops use events, metrics, and remediation logic to validate and adjust system behavior, agentic evaluator loops use reasoning agents to evaluate, reflect, and revise output dynamically.

In both paradigms:
+ Output is evaluated after it's generated
+ Corrective or refining actions are triggered based on feedback
+ System continuously adapts toward a target quality or goal

The agentic version transforms static validation into semantic reflection, enabling self-improving agents that evaluate their own effectiveness.

# Designing agentic workflows on AWS
<a name="designing-agentic-workflows-on-aws"></a>

Each pattern in this guide can be built using AWS services. Amazon Bedrock agents provide orchestration, data access, and interaction channels.


| **Component** | **AWS service** | **Purpose** | 
| --- |--- |--- |
| LLM reasoning | Amazon Bedrock | Agent logic, planning, tool use | 
| Tool execution | AWS Lambda, Amazon ECS, Amazon SageMaker | Host external tools for agents | 
| Memory and RAG | Amazon Bedrock knowledge base, Amazon S3, OpenSearch | Persistent and semantic memory | 
| Orchestration | AWS Step Functions | Multistep task and agent coordination | 
| Event routing | Amazon EventBridge, Amazon SQS | Decoupled interagent messaging | 
| User interface | Amazon API Gateway, AWS AppSync, SDK | Entry points for applications or systems | 
| Monitoring | Amazon CloudWatch, AWS X-Ray, AWS Distro for OpenTelemetry | Observability and agent introspection | 

# Conclusion
<a name="conclusion"></a>

Agentic workflow patterns are the next evolutionary stage of event-driven architectures, wherein business logic is not statically defined but dynamically reasoned through using large language model (LLM)–enhanced cognition. By combining traditional cloud-native primitives with LLM workflows and agent design patterns, organizations can build adaptive, intelligent, and modular systems that respond with purpose and learn from experience.

In these patterns, Amazon Bedrock is the gateway to agentic cognition, allowing LLM-based agents to access event workflows, interact with tools and memory, and deliver structured, traceable, and aligned results.

As you design and deploy agentic systems, these workflow patterns provide blueprints for building autonomous, composable AI architectures. These systems are grounded in serverless best practices and augmented with intelligent foundation models.