
Generative AI

Generative AI solutions cover multiple use cases that affect your security scope. To better understand the scope and the corresponding key security disciplines, see the AWS blog post Securing generative AI: An introduction to the Generative AI Security Scoping Matrix. Depending on your use case, you might use a managed service, where the service provider takes more responsibility for managing the service and model, or you might build your own service and model.

AWS offers a wide range of services to help you build, run, and integrate artificial intelligence and machine learning (AI/ML) solutions of any size, complexity, or use case. These services operate at all three layers of the generative AI stack:

- Infrastructure layer for foundation model (FM) training and inference
- Tooling layer for building with large language models (LLMs) and other FMs
- Application layer that uses LLMs and other FMs

This guidance focuses on the tooling layer, where Amazon Bedrock provides access to the models and tools you need to build and scale generative AI applications. A minimal sketch of a tooling-layer call follows.
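
To make the tooling layer concrete, the following sketch shows how an application might call a foundation model through Amazon Bedrock by using the Converse API in the AWS SDK for Python (Boto3). The model ID, AWS Region, prompt, and inference settings are example values chosen for illustration, not recommendations from this guidance; use a model that is enabled in your account.

```python
import boto3

# Example only: Region and model ID are assumptions; substitute values
# that are available and enabled in your own account.
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock_runtime.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    messages=[
        {
            "role": "user",
            "content": [
                {"text": "Summarize the Generative AI Security Scoping Matrix in one sentence."}
            ],
        }
    ],
    inferenceConfig={"maxTokens": 256, "temperature": 0.5},
)

# The Converse API returns the model reply under output.message.content.
print(response["output"]["message"]["content"][0]["text"])
```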

For an introduction to generative AI, see What is Generative AI? on the AWS website.

Note

This guidance currently covers only the generative AI capabilities of Amazon Bedrock. Future updates will iteratively expand the scope to include the full array of AWS services for generative AI.