
Capability 4. Providing secure access, usage, and implementation for generative AI model customization

The following diagram illustrates the AWS services recommended for the Generative AI account for this capability. The scope of this scenario is to secure model customization. This use case focuses on securing the resources and training environment for a model customization job as well as securing the invocation of a custom model.

AWS services recommended for the Generative AI account for model customization.

The Generative AI account includes the services required for customizing a model, along with a suite of security services that implement security guardrails and centralized security governance. To allow for private model customization, create Amazon S3 gateway endpoints for the training data and evaluation buckets, and configure a private VPC environment to access Amazon S3 through those endpoints.

Rationale

Model customization is the process of providing training data to a model to improve its performance for specific use cases. In Amazon Bedrock, you can customize foundation models (FMs) to improve their performance and create a better customer experience by using methods such as continued pre-training with unlabeled data to enhance domain knowledge, and fine-tuning with labeled data to optimize task-specific performance. If you customize a model, you must purchase Provisioned Throughput before you can use it.

This use case refers to Scope 4 of the Generative AI Security Scoping Matrix. In Scope 4, you customize an FM, such as those offered in Amazon Bedrock, with your data to improve the model's performance on a specific task or domain. In this scope you control the application, any customer data that's used by the application, the training data, and the customized model, whereas the FM provider controls the pre-trained model and its training data. 

Alternatively, you can create a custom model in Amazon Bedrock by using the Custom Model Import feature to import FMs that you have customized in other environments, such as Amazon SageMaker. For the import source, we strongly recommend using Safetensors as the serialization format for the imported model. Unlike Pickle, Safetensors stores only tensor data, not arbitrary Python objects, which eliminates the vulnerabilities that stem from unpickling untrusted data. Safetensors can't run code; it only stores and loads tensors safely.
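
As an illustration only, the following sketch shows this difference in practice by saving and loading model weights with the safetensors Python library and PyTorch. The file name and tensor names are placeholders.

```python
# A minimal sketch of serializing model weights with Safetensors instead of a
# pickle-based format. File and tensor names are illustrative placeholders.
import torch
from safetensors.torch import save_file, load_file

# Save only tensor data -- no arbitrary Python objects are serialized.
weights = {"embedding.weight": torch.zeros(128, 64)}
save_file(weights, "model.safetensors")

# Loading is safe even for untrusted files because no code is executed,
# unlike calling torch.load() on a pickle-based checkpoint.
restored = load_file("model.safetensors")
```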

When you give users access to generative AI model customization in Amazon Bedrock, you should address these key security considerations: 

  • Secure access to model invocation, training jobs, and training and validation files

  • Encryption of the training model job, the custom model, and the training and validation files

  • Alerts for potential security risks such as jailbreak prompts or sensitive information in training files 

The following sections discuss these security considerations and the related generative AI functionality.

Amazon Bedrock model customization 

You can privately and securely customize foundation models (FMs) with your own data in Amazon Bedrock to build applications that are specific to your domain, organization, and use case. With fine-tuning, you can increase model accuracy by providing your own task-specific, labeled training dataset and further specialize your FMs. With continued pre-training, you can train models by using your own unlabeled data in a secure and managed environment with customer managed keys. For more information, see Custom models in the Amazon Bedrock documentation.
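
A minimal sketch of submitting a fine-tuning job through the Amazon Bedrock control-plane API with boto3 might look like the following. All names, ARNs, S3 URIs, and hyperparameter values are placeholders that you would adapt to your environment.

```python
# A minimal sketch of a fine-tuning job request in Amazon Bedrock (boto3).
import boto3

bedrock = boto3.client("bedrock")

bedrock.create_model_customization_job(
    jobName="example-fine-tuning-job",
    customModelName="example-custom-model",
    # Service role that Amazon Bedrock assumes (see Identity and access management).
    roleArn="arn:aws:iam::111122223333:role/BedrockCustomizationRole",
    baseModelIdentifier="arn:aws:bedrock:us-east-1::foundation-model/amazon.titan-text-lite-v1",
    customizationType="FINE_TUNING",  # or "CONTINUED_PRE_TRAINING" for unlabeled data
    trainingDataConfig={"s3Uri": "s3://example-training-bucket/train.jsonl"},
    outputDataConfig={"s3Uri": "s3://example-output-bucket/metrics/"},
    hyperParameters={"epochCount": "2", "batchSize": "1", "learningRate": "0.00001"},
)
```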

Security considerations

Generative AI model customization workloads face unique risks, including data exfiltration of training data, data poisoning through the injection of malicious prompts or malware into training data, and prompt injection or data exfiltration by threat actors during model inference. In Amazon Bedrock, model customization offers robust security controls for data protection, access control, network security, logging and monitoring, and input/output validation that can help mitigate these risks.  

Remediations

Data protection

Encrypt the model customization job, the output files (training and validation metrics) from the model customization job, and the resulting custom model by using a customer managed key in AWS KMS that you create, own, and manage. When you use Amazon Bedrock to run a model customization job, you store the input (training and validation data) files in your S3 bucket. When the job completes, Amazon Bedrock stores the output metrics files in the S3 bucket that you specified when you created the job, and stores the resulting custom model artifacts in an S3 bucket that's controlled by AWS. By default, the input and output files are encrypted with SSE-S3 server-side encryption, which uses Amazon S3 managed keys. You can instead choose to encrypt these files with a customer managed key.
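
A minimal sketch of applying a customer managed key, assuming the placeholder key ARN and bucket name shown, is as follows. It sets default SSE-KMS encryption on the training bucket and notes the job parameter that encrypts the resulting custom model.

```python
# A minimal sketch: default SSE-KMS encryption with a customer managed key.
# The key ARN and bucket name are placeholders.
import boto3

kms_key_arn = "arn:aws:kms:us-east-1:111122223333:key/1234abcd-EXAMPLE"

s3 = boto3.client("s3")
s3.put_bucket_encryption(
    Bucket="example-training-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": kms_key_arn,
            },
            "BucketKeyEnabled": True,
        }]
    },
)

# To encrypt the resulting custom model with the same key, pass
# customModelKmsKeyId=kms_key_arn to create_model_customization_job
# (see the job request sketch earlier in this section).
```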

Identity and access management

Create a custom service role for model customization or model import by following the principle of least privilege. For the model customization service role, create a trust relationship that allows Amazon Bedrock to assume the role and carry out the model customization job, and attach a policy that allows the role to access your training and validation data and the bucket where you want to write your output data. For the model import service role, create a trust relationship that allows Amazon Bedrock to assume the role and carry out the model import job, and attach a policy that allows the role to access the custom model files in your S3 bucket. If your model customization job runs in a VPC, also attach VPC permissions to the model customization role.
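
A minimal sketch of such a service role for model customization follows. The role name, bucket names, and account ID are placeholders, and the policies are intentionally scoped to only the buckets the job needs.

```python
# A minimal sketch of a least-privilege model customization service role.
import json
import boto3

iam = boto3.client("iam")

# Trust relationship that lets Amazon Bedrock assume the role.
# You can also add aws:SourceAccount / aws:SourceArn conditions to help
# protect against the confused deputy problem.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "bedrock.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(
    RoleName="BedrockCustomizationRole",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Permissions limited to the training/validation bucket and the output bucket.
s3_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-training-bucket",
                "arn:aws:s3:::example-training-bucket/*",
            ],
        },
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject"],
            "Resource": ["arn:aws:s3:::example-output-bucket/*"],
        },
    ],
}
iam.put_role_policy(
    RoleName="BedrockCustomizationRole",
    PolicyName="BedrockCustomizationS3Access",
    PolicyDocument=json.dumps(s3_policy),
)
```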

Network security

To control access to your data, use a virtual private cloud (VPC) with Amazon VPC. When you create your VPC, we recommend that you use the default DNS settings for your endpoint route table, so that standard Amazon S3 URLs resolve. 

If you configure your VPC with no internet access, you need to create an Amazon S3 VPC endpoint to allow your model customization jobs to access the S3 buckets that store your training and validation data and that will store the model artifacts.

After you finish setting up your VPC and endpoint, attach the required permissions to your model customization IAM role. After you configure the VPC and the required roles and permissions, you can create a model customization job that uses this VPC. By creating a VPC that has no internet access and an associated S3 VPC endpoint for the training data, you can run your model customization job with private connectivity and without any internet exposure.
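
A minimal sketch of this setup, assuming placeholder VPC, route table, subnet, and security group IDs, creates the S3 gateway endpoint and prepares the vpcConfig value to pass to the customization job.

```python
# A minimal sketch of private connectivity for a model customization job.
import boto3

ec2 = boto3.client("ec2")

# Gateway endpoint so the job can reach the S3 buckets without internet access.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)

# Pass this as vpcConfig=... in create_model_customization_job so the job runs
# inside the VPC; the service role also needs the corresponding VPC permissions.
vpc_config = {
    "subnetIds": ["subnet-0123456789abcdef0"],
    "securityGroupIds": ["sg-0123456789abcdef0"],
}
```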

Recommended AWS services

Amazon S3

When you run a model customization job, the job accesses your S3 bucket to download the input data and to upload job metrics. You can choose fine-tuning or continued pre-training as the model type when you submit your model customization job on the Amazon Bedrock console or API. After a model customization job completes, you can analyze the results of the training process by viewing the files in the output S3 bucket that you specified when you submitted the job, or view details about the model. Encrypt both buckets with a customer managed key. For additional network security hardening, you can create a gateway endpoint for the S3 buckets that the VPC environment is configured to access. Access should be logged and monitored. Use versioning for backups. You can use resource-based policies to more tightly control access to your Amazon S3 files. 
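
A minimal sketch of two of these hardening steps, with a placeholder bucket name and VPC endpoint ID, enables versioning and uses a resource-based bucket policy to deny object reads that don't arrive through the gateway endpoint.

```python
# A minimal sketch of hardening the training bucket: versioning plus a
# bucket policy scoped to the VPC gateway endpoint. Names/IDs are placeholders.
import json
import boto3

s3 = boto3.client("s3")

s3.put_bucket_versioning(
    Bucket="example-training-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyGetObjectOutsideVpcEndpoint",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-training-bucket/*",
        "Condition": {"StringNotEquals": {"aws:SourceVpce": "vpce-0123456789abcdef0"}},
    }],
}
s3.put_bucket_policy(Bucket="example-training-bucket", Policy=json.dumps(bucket_policy))
```

Test a deny statement like this carefully before you apply it, because it also blocks access from outside the VPC endpoint for administrators and other tooling.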

Amazon Macie 

Macie can help identify sensitive data in your Amazon S3 training and validation datasets. For security best practices, see the previous Macie section in this guidance.
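
A minimal sketch of scanning the training bucket with Macie, assuming the placeholder account ID and bucket name, uses a one-time classification job.

```python
# A minimal sketch of a one-time Macie classification job over the
# training/validation bucket. Account ID and bucket name are placeholders.
import boto3

macie = boto3.client("macie2")

macie.create_classification_job(
    jobType="ONE_TIME",
    name="scan-model-customization-training-data",
    s3JobDefinition={
        "bucketDefinitions": [{
            "accountId": "111122223333",
            "buckets": ["example-training-bucket"],
        }]
    },
)
```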

Amazon EventBridge

You can use Amazon EventBridge to respond automatically to model customization job status changes in Amazon Bedrock. Events from Amazon Bedrock are delivered to Amazon EventBridge in near real time. You can write simple rules to automate actions when an event matches a rule.
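
A minimal sketch of such a rule follows. The detail-type string and the SNS topic ARN are assumptions to verify against the Amazon Bedrock and EventBridge documentation for your Region.

```python
# A minimal sketch of an EventBridge rule for Amazon Bedrock model
# customization job state changes. Detail-type and target ARN are assumptions.
import json
import boto3

events = boto3.client("events")

events.put_rule(
    Name="bedrock-model-customization-status",
    EventPattern=json.dumps({
        "source": ["aws.bedrock"],
        "detail-type": ["Model Customization Job State Change"],
    }),
)

# Route matching events to a notification topic (placeholder ARN).
events.put_targets(
    Rule="bedrock-model-customization-status",
    Targets=[{"Id": "notify", "Arn": "arn:aws:sns:us-east-1:111122223333:example-topic"}],
)
```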

AWS KMS 

We recommend that you use a customer managed key to encrypt the model customization job, the output files (training and validation metrics) from the model customization job, the resulting custom model, and the S3 buckets that host the training, validation, and output data. For more information, see Encryption of model customization jobs and artifacts in the Amazon Bedrock documentation.

A key policy is a resource policy for an AWS KMS key. Key policies are the primary way to control access to KMS keys. You can also use IAM policies and grants to control access to KMS keys, but every KMS key must have a key policy. Use a key policy to provide permissions to a role to access the custom model that was encrypted with the customer managed key. This allows specified roles to use a custom model for inference.
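
A minimal sketch of such a key policy, with placeholder account ID, role name, and key ID, keeps administrative access with the account and grants a single inference role the permissions it needs to use the custom model that the key protects.

```python
# A minimal sketch of a key policy that lets one role use the customer managed
# key protecting a custom model. Account, role, key ID, and the exact action
# list are placeholders/assumptions to adjust for your setup.
import json
import boto3

kms = boto3.client("kms")

key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowKeyAdministration",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "kms:*",
            "Resource": "*",
        },
        {
            "Sid": "AllowCustomModelUse",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/CustomModelInferenceRole"},
            "Action": ["kms:Decrypt", "kms:GenerateDataKey", "kms:DescribeKey", "kms:CreateGrant"],
            "Resource": "*",
        },
    ],
}
kms.put_key_policy(
    KeyId="1234abcd-EXAMPLE",
    PolicyName="default",
    Policy=json.dumps(key_policy),
)
```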

Use Amazon CloudWatch, AWS CloudTrail, Amazon OpenSearch Serverless, Amazon S3, and Amazon Comprehend as explained in the previous capability sections.