Using Amazon Bedrock agents to automate creation of access entry controls in Amazon EKS through text-based prompts

Created by Keshav Ganesh (AWS) and Sudhanshu Saurav (AWS)

Summary

Organizations face challenges in managing access controls and resource provisioning when multiple teams need to work with a shared Amazon Elastic Kubernetes Service (Amazon EKS) cluster. A managed Kubernetes service such as Amazon EKS has simplified cluster operations. However, the administrative overhead of managing team access and resource permissions remains complex and time-consuming.

This pattern shows how Amazon Bedrock agents can help you automate Amazon EKS cluster access management. This automation allows development teams to focus on their core application development rather than dealing with access control setup and management. You can customize an Amazon Bedrock agent to perform actions for a wide variety of tasks through simple natural language prompts.

By using AWS Lambda functions as action groups, an Amazon Bedrock agent can handle tasks such as creating user access entries and managing access policies. In addition, an Amazon Bedrock agent can configure pod identity associations that allow access to AWS Identity and Access Management (IAM) resources for the pods running in the cluster. Using this solution, organizations can streamline their Amazon EKS cluster administration with simple text-based prompts, reduce manual overhead, and improve overall development efficiency.
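To illustrate how an action group dispatches these tasks, the following is a minimal, hypothetical sketch of an action-group Lambda handler in Python. (The operation names and parameter names are illustrative, not the repository's actual schema; the repository's functions may be implemented differently.) The agent delivers the matched operation in the event's `apiPath` field, and the handler returns the response shape that Amazon Bedrock agents expect.

```python
# Hypothetical sketch of a Bedrock agent action-group Lambda handler.
# The agent passes the matched API operation in event["apiPath"]; the
# handler routes it to the logic that would call the Amazon EKS APIs.

def handler(event, context):
    api_path = event.get("apiPath", "")
    # Agent parameters arrive as a list of {"name": ..., "value": ...} pairs.
    params = {p["name"]: p["value"] for p in event.get("parameters", [])}

    if api_path == "/create-access-entry":
        # A real implementation would call eks:CreateAccessEntry here.
        body = f"Created access entry for {params.get('principalArn')}"
        status = 200
    elif api_path == "/delete-access-entry":
        # A real implementation would call eks:DeleteAccessEntry here.
        body = f"Deleted access entry for {params.get('principalArn')}"
        status = 200
    else:
        body = f"Unknown operation: {api_path}"
        status = 404

    # Response envelope expected by Amazon Bedrock agents.
    return {
        "messageVersion": "1.0",
        "response": {
            "actionGroup": event.get("actionGroup"),
            "apiPath": api_path,
            "httpMethod": event.get("httpMethod"),
            "httpStatusCode": status,
            "responseBody": {"application/json": {"body": body}},
        },
    }
```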

Prerequisites and limitations

Prerequisites

  • An active AWS account.

  • Established IAM roles and permissions for the deployment process. This includes permissions to access Amazon Bedrock foundation models (FM), create Lambda functions, and any other required resources across the target AWS accounts.

  • Access enabled in the active AWS account to these Amazon Bedrock FMs: Amazon Titan Text Embeddings V2 and Anthropic Claude 3 Haiku.

  • AWS Command Line Interface (AWS CLI) version 2.9.11 or later, installed and configured.

  • eksctl 0.194.0 or later, installed.

Limitations

  • Training and documentation might be required to help ensure smooth adoption and effective use of these techniques. Amazon Bedrock, Amazon EKS, Lambda, Amazon OpenSearch Service, and OpenAPI together involve a significant learning curve for developers and DevOps teams.

  • Some AWS services aren’t available in all AWS Regions. For Region availability, see AWS services by Region. For specific endpoints, see Service endpoints and quotas, and choose the link for the service.

Architecture

The following diagram shows the workflow and architecture components for this pattern.

Workflow and components to create access controls in Amazon EKS with Amazon Bedrock agents.

This solution performs the following steps:

  1. The user interacts with the Amazon Bedrock agent by submitting a prompt or query that serves as input for the agent to process and take action.

  2. Based on the prompt, the Amazon Bedrock agent checks the OpenAPI schema to identify the correct API to target. If the Amazon Bedrock agent finds the correct API call, the request goes to the action group that is associated with the Lambda function that implements these actions.

  3. If a relevant API isn’t found, the Amazon Bedrock agent queries the OpenSearch collection. The OpenSearch collection uses indexed knowledge base content that is sourced from the Amazon S3 bucket that contains the Amazon EKS User Guide.

  4. The OpenSearch collection returns relevant contextual information to the Amazon Bedrock agent.

  5. For actionable requests (those that match an API operation), the Amazon Bedrock agent invokes the Lambda function, which runs within a virtual private cloud (VPC).

  6. The Lambda function performs an action that’s based on the user’s input inside the Amazon EKS cluster.

  7. The Amazon S3 bucket for the Lambda code stores the artifact that has the code and logic written for the Lambda function.
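In step 2, the agent matches the prompt against an OpenAPI schema to select an operation. A hedged fragment of what such a schema could look like follows (the path, operation ID, and parameter names are illustrative, not the repository's actual schema):

```yaml
openapi: 3.0.0
info:
  title: EKS access entry actions
  version: 1.0.0
paths:
  /create-access-entry:
    post:
      description: Create an access entry for an IAM principal in an EKS cluster
      operationId: createAccessEntry
      parameters:
        - name: clusterName
          in: query
          required: true
          schema:
            type: string
        - name: principalArn
          in: query
          required: true
          schema:
            type: string
      responses:
        "200":
          description: Access entry created
```

The agent uses the `description` fields to decide which operation a natural language prompt maps to, so clear, specific descriptions improve routing accuracy.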

Tools

AWS services

  • Amazon Bedrock is a fully managed service that makes high-performing foundation models (FMs) from leading AI startups and Amazon available for your use through a unified API.

  • AWS CloudFormation helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and AWS Regions.

  • Amazon Elastic Kubernetes Service (Amazon EKS) helps you run Kubernetes on AWS without needing to install or maintain your own Kubernetes control plane or nodes.

  • AWS Identity and Access Management (IAM) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.

  • AWS Lambda is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.

  • Amazon OpenSearch Service is a managed service that helps you deploy, operate, and scale OpenSearch clusters in the AWS Cloud. Its collections feature helps you to organize your data and build comprehensive knowledge bases that AI assistants such as Amazon Bedrock agents can use.

  • Amazon Simple Storage Service (Amazon S3) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.

Other tools

  • eksctl is a command-line utility for creating and managing Kubernetes clusters on Amazon EKS.

Code repository

The code for this pattern is available in the GitHub eks-access-controls-bedrock-agent repository.

Best practices

  • Maintain the highest possible security when implementing this pattern. Make sure that the Amazon EKS cluster is private, has limited access permissions, and all the resources are inside a virtual private cloud (VPC). For additional information, see Best practices for security in the Amazon EKS documentation.

  • Use AWS KMS customer managed keys wherever possible, and grant limited access permissions to them.

  • Follow the principle of least privilege and grant the minimum permissions required to perform a task. For more information, see Grant least privilege and Security best practices in the IAM documentation.
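As an example of least privilege, the Lambda execution role for this pattern could be scoped to only the access-entry actions it needs on the target cluster. The following is a hedged sketch, not the pattern's actual policy; the account ID, Region, and cluster name are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "eks:CreateAccessEntry",
        "eks:DescribeAccessEntry",
        "eks:DeleteAccessEntry",
        "eks:AssociateAccessPolicy",
        "eks:ListAccessEntries"
      ],
      "Resource": "arn:aws:eks:us-east-1:111122223333:cluster/eks-testing-cluster"
    }
  ]
}
```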

Epics


Set up the environment

Task | Description | Skills required

Clone the repository.

To clone this pattern’s repository, run the following command in your local workstation:

git clone https://github.com/aws-samples/eks-access-controls-bedrock-agent.git
AWS DevOps

Get the AWS account ID.

To get the AWS account ID, use the following steps:

  1. Open a shell in the root folder of the cloned repo, eks-access-controls-bedrock-agent.

  2. To get your AWS account ID, navigate to the cloned directory and run the following command:

    AWS_ACCOUNT=$(aws sts get-caller-identity --query "Account" --output text)

This command stores your AWS account ID in the AWS_ACCOUNT variable.

AWS DevOps

Create the S3 bucket for Lambda code.

To implement this solution, you must create three Amazon S3 buckets that serve different purposes, as shown in the architecture diagram. The S3 buckets are for Lambda code, a knowledge base, and OpenAPI schema.

To create the Lambda code bucket, use the following steps:

  1. To create an S3 bucket for Lambda code, run the following command:

    aws s3 mb s3://bedrock-agent-lambda-artifacts-${AWS_ACCOUNT} --region us-east-1
  2. To install the Lambda code dependency, run the following command:

    cd eks-lambda
    npm install
    tsc
    cd .. && cd opensearch-lambda
    npm install
    tsc
    cd ..
  3. To package the code and upload it to the S3 bucket for Lambda, run the following command:

    aws cloudformation package \
      --template-file eks-access-controls.yaml \
      --s3-bucket bedrock-agent-lambda-artifacts-${AWS_ACCOUNT} \
      --output-template-file eks-access-controls-template.yaml \
      --region us-east-1

The package command creates a new CloudFormation template (eks-access-controls-template.yaml) that contains:

  • References to the Lambda function code stored in your S3 bucket.

  • Definitions for all required AWS infrastructure including VPC, subnets, Amazon Bedrock agent, and OpenSearch collection. You can use this template to deploy the complete solution by using CloudFormation.

AWS DevOps

Create the S3 bucket for the knowledge base.

To create the Amazon S3 bucket for the knowledge base, use the following steps:

  1. To create the Amazon S3 bucket for the knowledge base, run the following command:

    aws s3 mb s3://eks-knowledge-base-${AWS_ACCOUNT} --region us-east-1
  2. To download the Amazon EKS User Guide and store it in a directory, run the following commands:

    mkdir dataSource
    cd dataSource
    curl https://docs.aws.amazon.com/pdfs/eks/latest/userguide/eks-ug.pdf -o eks-user-guide.pdf
  3. To upload the user guide to the S3 bucket that you created in step 1, run the following command:

    aws s3 cp eks-user-guide.pdf s3://eks-knowledge-base-${AWS_ACCOUNT} \
      --region us-east-1
  4. To return to the root directory, run the following command:

    cd ..
AWS DevOps

Create the S3 bucket for the OpenAPI schema.

To create the Amazon S3 bucket for the OpenAPI schema, use the following steps:

  1. To create the S3 bucket, run the following command:

    aws s3 mb s3://eks-openapi-schema-${AWS_ACCOUNT} --region us-east-1
  2. To upload the OpenAPI schema to the S3 bucket, run the following command:

    aws s3 cp openapi-schema.yaml s3://eks-openapi-schema-${AWS_ACCOUNT} --region us-east-1
AWS DevOps

Deploy the CloudFormation stack

Task | Description | Skills required

Deploy the CloudFormation stack.

To deploy the CloudFormation stack, use the CloudFormation template file eks-access-controls-template.yaml that you created earlier. For more detailed instructions, see Create a stack from the CloudFormation console in the CloudFormation documentation.

Note

Provisioning the OpenSearch index with the CloudFormation template takes about 10 minutes.

After the stack is created, make a note of the VPC_ID and PRIVATE_SUBNET IDs.

AWS DevOps

Create the Amazon EKS cluster.

To create the Amazon EKS cluster inside the VPC, use the following steps:

  1. Create a copy of the eks-config.yaml configuration file, and name the copy as eks-deploy.yaml.

  2. Open eks-deploy.yaml in a text editor. Then, replace the following placeholder values with values from the deployed stack: VPC_ID, PRIVATE_SUBNET1, and PRIVATE_SUBNET2.

  3. To create the cluster by using the eksctl utility, run the following command:

    eksctl create cluster -f eks-deploy.yaml
    Note

    This cluster creation process can take 15 to 20 minutes to complete.

  4. To verify that the cluster was created successfully, run the following commands:

    aws eks describe-cluster --name <cluster-name> --query "cluster.status"
    aws eks update-kubeconfig --name <cluster-name> --region <region>
    kubectl get nodes

The expected results are as follows:

  • The cluster status is ACTIVE.

  • The command kubectl get nodes shows that all nodes are in Ready state.

AWS DevOps
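The placeholders replaced in step 2 typically correspond to fields like the following in an eksctl configuration file. This is a hedged sketch; the repository's eks-config.yaml may differ in names, Regions, and node group settings:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: eks-testing-cluster
  region: us-east-1
vpc:
  id: VPC_ID                 # replace with the stack's VPC ID
  subnets:
    private:
      us-east-1a:
        id: PRIVATE_SUBNET1  # replace with the first private subnet ID
      us-east-1b:
        id: PRIVATE_SUBNET2  # replace with the second private subnet ID
privateCluster:
  enabled: true
managedNodeGroups:
  - name: default
    desiredCapacity: 2
    privateNetworking: true
```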
TaskDescriptionSkills required

Create a connection between the Amazon EKS cluster and the Lambda function.

To set up network and IAM permissions to allow the Lambda function to communicate with the Amazon EKS cluster, use the following steps:

  1. To identify the IAM role that’s attached to the Lambda function, open the AWS Management Console and locate the Lambda function named bedrock-agent-eks-access-control. Make a note of the Amazon Resource Name (ARN) of the IAM role.

  2. To create an access entry in the Amazon EKS cluster for the Lambda function’s IAM role, run the following command:

    aws eks create-access-entry --cluster-name eks-testing-cluster --principal-arn <principal-Role-ARN>
  3. To assign AmazonEKSClusterAdminPolicy permissions to this role, run the following command:

    aws eks associate-access-policy --cluster-name eks-testing-cluster --principal-arn <principal-Role-ARN> --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy --access-scope type=cluster

    For more information, see Associate access policies with access entries and AmazonEKSClusterAdminPolicy in the Amazon EKS documentation.

  4. Locate the Amazon EKS cluster’s security group. Add an inbound rule to allow incoming network traffic from the Lambda function to the Amazon EKS cluster.

    Use the following values for the inbound rule:

    • Type – HTTPS

    • Port range – 443

    • Source – Lambda security group

      For more information, see Configure security group rules in the Amazon VPC documentation.

AWS DevOps

Connect the Lambda function and the Amazon EKS cluster

Task | Description | Skills required

Create a connection between the Amazon EKS cluster and the Lambda function.

To set up network and IAM permissions to allow the Lambda function to communicate with the Amazon EKS cluster, use the following steps:

  1. To identify the IAM role that’s attached to the Lambda function, open the AWS Management Console and locate the Lambda function named bedrock-agent-eks-access-control. Make a note of the Amazon Resource Name (ARN) of the IAM role.

  2. To create an access entry in the Amazon EKS cluster for the Lambda function’s IAM role, run the following command:

    aws eks create-access-entry --cluster-name eks-testing-cluster --principal-arn <principal-Role-ARN>
  3. To assign AmazonEKSClusterAdminPolicy permissions to this role, run the following command:

    aws eks associate-access-policy --cluster-name eks-testing-cluster \
      --principal-arn <principal-Role-ARN> \
      --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
      --access-scope type=cluster

    For more information, see Associate access policies with access entries and AmazonEKSClusterAdminPolicy in the Amazon EKS documentation.

  4. Locate the Amazon EKS cluster’s security group. Add an inbound rule to allow incoming network traffic from the Lambda function to the Amazon EKS cluster.

    Use the following values for the inbound rule:

    • Type – HTTPS

    • Port range – 443

    • Source – Lambda security group

      For more information, see Configure security group rules in the Amazon VPC documentation.

AWS DevOps
TaskDescriptionSkills required

Test the Amazon Bedrock agent.

Before testing the Amazon Bedrock agent, make sure that you do the following:

  • Test with non-production roles first.

  • Document any changes made to cluster access.

  • Have a plan to revert changes if needed.

To access the Amazon Bedrock agent, use the following steps:

  1. Sign in to the AWS Management Console using an IAM role with Amazon Bedrock permissions, and open the Amazon Bedrock console at https://console.aws.amazon.com/bedrock/.

  2. Select Agents from the left navigation pane. Then, choose your configured agent in the Agents section.

  3. To test the agent, try the following sample prompts in which you replace Principal-ARN-OF-ROLE with an actual IAM role ARN:

  • To create an access entry for any IAM role that you wanted to provide access to the EKS cluster, use the following prompt: Create an access entry in cluster eks-testing-new for a role whose principal arn is <Principal-ARN-OF-ROLE> with access policy as AmazonEKSAdminPolicy

    Expected result:

    • The agent should confirm the access entry creation.

    • To verify, check using the AWS Management Console or use the Amazon EKS API and run the following command: aws eks list-access-entries --cluster-name ekscluster

  • To describe the access entry that you created, use the following prompt: Describe an access entry in cluster eks-testing-new whose principal arn is <Principal-ARN-OF-ROLE>

    Expected result:

    • The agent should return details about the access entry.

    • The details should match what you configured earlier for the access entry.

  • To delete the access entry that you created, use the following prompt: Delete the access entry in cluster eks-testing-new whose principal arn is <Principal-ARN-OF-ROLE>

    Expected result:

    • The agent should confirm the deletion of the access entry.

    • To verify, check using the AWS Management Console or use the Amazon EKS API and run the following command: aws eks list-access-entries --cluster-name ekscluster

You can also ask the agent to perform actions for EKS Pod Identity associations. For more details, see Learn how EKS Pod Identity grants pods access to AWS services in the Amazon EKS documentation.

AWS DevOps

Test the solution

Task | Description | Skills required

Test the Amazon Bedrock agent.

Before testing the Amazon Bedrock agent, make sure that you do the following:

  • Test with non-production roles first.

  • Document any changes made to cluster access.

  • Have a plan to revert changes if needed.

To access the Amazon Bedrock agent, use the following steps:

  1. Sign in to the AWS Management Console using an IAM role with Amazon Bedrock permissions, and open the Amazon Bedrock console at https://console.aws.amazon.com/bedrock/.

  2. Select Agents from the left navigation pane. Then, choose your configured agent in the Agents section.

  3. To test the agent, try the following sample prompts in which you replace Principal-ARN-OF-ROLE with an actual IAM role ARN:

  • To create an access entry for an IAM role that you want to grant access to the EKS cluster, use the following prompt: Create an access entry in cluster eks-testing-new for a role whose principal arn is <Principal-ARN-OF-ROLE> with access policy as AmazonEKSAdminPolicy

    Expected result:

    • The agent should confirm the access entry creation.

    • To verify, check using the AWS Management Console, or use the Amazon EKS API and run the following command: aws eks list-access-entries --cluster-name eks-testing-new

  • To describe the access entry that you created, use the following prompt: Describe an access entry in cluster eks-testing-new whose principal arn is <Principal-ARN-OF-ROLE>

    Expected result:

    • The agent should return details about the access entry.

    • The details should match what you configured earlier for the access entry.

  • To delete the access entry that you created, use the following prompt: Delete the access entry in cluster eks-testing-new whose principal arn is <Principal-ARN-OF-ROLE>

    Expected result:

    • The agent should confirm the deletion of the access entry.

    • To verify, check using the AWS Management Console, or use the Amazon EKS API and run the following command: aws eks list-access-entries --cluster-name eks-testing-new

You can also ask the agent to perform actions for EKS Pod Identity associations. For more details, see Learn how EKS Pod Identity grants pods access to AWS services in the Amazon EKS documentation.

AWS DevOps
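If you script these checks instead of typing prompts into the console, the sample prompts above can be templated before they are sent to the agent. The following is a minimal Python sketch; the template text mirrors the sample prompts, and the function name is illustrative:

```python
# Minimal helper to fill the sample prompts with a real cluster name
# and principal ARN before sending them to the agent.
PROMPTS = {
    "create": ("Create an access entry in cluster {cluster} for a role whose "
               "principal arn is {arn} with access policy as AmazonEKSAdminPolicy"),
    "describe": ("Describe an access entry in cluster {cluster} whose "
                 "principal arn is {arn}"),
    "delete": ("Delete the access entry in cluster {cluster} whose "
               "principal arn is {arn}"),
}

def build_prompt(action, cluster, principal_arn):
    """Return the sample prompt for the given action with values filled in."""
    return PROMPTS[action].format(cluster=cluster, arn=principal_arn)
```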
TaskDescriptionSkills required

Clean up resources.

To clean up the resources that this pattern created, use the following procedure. Wait for each deletion step to complete before proceeding to the next step.

Warning

This procedure will permanently delete all resources created by these stacks. Make sure that you've backed up any important data before proceeding.

  1. To delete the Amazon EKS cluster, run the following command:

    eksctl delete cluster -f eks-deploy.yaml
    Note

    This operation can take 15-20 minutes to complete.

  2. To delete the Amazon S3 buckets, run the following commands:

    • To empty the Lambda bucket:

      aws s3 rm s3://bedrock-agent-lambda-artifacts-${AWS_ACCOUNT} --recursive
    • To empty the knowledge base bucket:

      aws s3 rm s3://eks-knowledge-base-${AWS_ACCOUNT} –recursive
    • To empty the OpenAPI schema bucket:

      aws s3 rm s3://bedrock-agent-openapi-schema-${AWS_ACCOUNT} –recursive
    • To delete the empty buckets:

      aws s3 rb s3://bedrock-agent-lambda-artifacts-${AWS_ACCOUNT} aws s3 rb s3://eks-knowledge-base-${AWS_ACCOUNT} aws s3 rb s3://bedrock-agent-openapi-schema-${AWS_ACCOUNT}
  3. To delete the CloudFormation stack, run the following command:

    aws cloudformation delete-stack \ --stack-name
  4. To verify deletion of the Amazon EKS cluster, run the following command:

    eksctl get clusters
  5. To verify deletion of the Amazon S3 buckets, run the following commands:

    • To verify deletion of the Lambda bucket:

      aws s3 ls | grep "bedrock-agent-lambda-artifacts"
    • To verify deletion of the knowledge base bucket:

      aws s3 ls | grep "eks-knowledge-base"
    • To verify deletion of the OpenAPI schema bucket:

      aws s3 ls | grep "bedrock-agent-openapi-schema"
  6. To verify stack deletion, run the following command:

    aws cloudformation list-stacks \--query 'StackSummaries[?StackName==``]'

    If the stack fails to delete, see Troubleshooting.

AWS DevOps

Clean up

Task | Description | Skills required

Clean up resources.

To clean up the resources that this pattern created, use the following procedure. Wait for each deletion step to complete before proceeding to the next step.

Warning

This procedure will permanently delete all resources created by these stacks. Make sure that you've backed up any important data before proceeding.

  1. To delete the Amazon EKS cluster, run the following command:

    eksctl delete cluster -f eks-deploy.yaml
    Note

    This operation can take 15-20 minutes to complete.

  2. To delete the Amazon S3 buckets, run the following commands:

    • To empty the Lambda bucket:

      aws s3 rm s3://bedrock-agent-lambda-artifacts-${AWS_ACCOUNT} --recursive
    • To empty the knowledge base bucket:

      aws s3 rm s3://eks-knowledge-base-${AWS_ACCOUNT} --recursive
    • To empty the OpenAPI schema bucket:

      aws s3 rm s3://eks-openapi-schema-${AWS_ACCOUNT} --recursive
    • To delete the empty buckets:

      aws s3 rb s3://bedrock-agent-lambda-artifacts-${AWS_ACCOUNT}
      aws s3 rb s3://eks-knowledge-base-${AWS_ACCOUNT}
      aws s3 rb s3://eks-openapi-schema-${AWS_ACCOUNT}
  3. To delete the CloudFormation stack, run the following command:

    aws cloudformation delete-stack --stack-name <stack-name>
  4. To verify deletion of the Amazon EKS cluster, run the following command:

    eksctl get clusters
  5. To verify deletion of the Amazon S3 buckets, run the following commands:

    • To verify deletion of the Lambda bucket:

      aws s3 ls | grep "bedrock-agent-lambda-artifacts"
    • To verify deletion of the knowledge base bucket:

      aws s3 ls | grep "eks-knowledge-base"
    • To verify deletion of the OpenAPI schema bucket:

      aws s3 ls | grep "eks-openapi-schema"
  6. To verify stack deletion, run the following command:

    aws cloudformation list-stacks \
      --query "StackSummaries[?StackName=='<stack-name>']"

    If the stack fails to delete, see Troubleshooting.

AWS DevOps

Troubleshooting

Issue | Solution

A non-zero error code is returned during environment setup.

Verify that you’re using the correct folder when running any command to deploy this solution. For more information, see the FIRST_DEPLOY.md file in this pattern’s repository.

The Lambda function isn’t able to do the task.

Make sure that connectivity is set up correctly from the Lambda function to the Amazon EKS cluster.

The agent prompts don’t recognize the APIs.

Redeploy the solution. For more information, see the RE_DEPLOY.md file in this pattern’s repository.

The stack fails to delete.

An initial attempt to delete the stack might fail because of dependency issues with the custom resource that indexes the knowledge base content in the OpenSearch collection. To delete the stack, retry the delete operation and choose to retain the custom resource.

Related resources

AWS Blog

Amazon Bedrock documentation

Amazon EKS documentation
