Stability.ai Diffusion 1.0 text to image

The Stability.ai Diffusion 1.0 model has the following inference parameters and model response for making text to image inference calls.

Request and Response

The request body is passed in the body field of a request to InvokeModel or InvokeModelWithResponseStream.

For more information, see https://platform.stability.ai/docs/api-reference#tag/v1generation.
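
As a minimal sketch, the following shows how a request body is passed in the body field of an InvokeModel call with the AWS SDK for Python (Boto3). The prompt is a placeholder; a complete, runnable example appears in the Code example section at the end of this topic.

import json

import boto3

# Create a Bedrock Runtime client; assumes AWS credentials and Region are configured.
bedrock = boto3.client(service_name="bedrock-runtime")

# The inference parameters are serialized to JSON and passed in the body field.
body = json.dumps({
    "text_prompts": [{"text": "A lighthouse on a cliff at dawn"}]
})

response = bedrock.invoke_model(
    body=body,
    modelId="stability.stable-diffusion-xl-v1",
    accept="application/json",
    contentType="application/json",
)
response_body = json.loads(response.get("body").read())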

Request

The Stability.ai Diffusion 1.0 model has the following inference parameters for a text to image inference call.

{ "text_prompts": [ { "text": string, "weight": float } ], "height": int, "width": int, "cfg_scale": float, "clip_guidance_preset": string, "sampler": string, "samples", "seed": int, "steps": int, "style_preset": string, "extras" :JSON object }
  • text_prompts (Required) – An array of text prompts to use for generation. Each element is a JSON object that contains a prompt and a weight for the prompt.

    • text – The prompt that you want to pass to the model.

      Minimum    Maximum
      0          2000

    • weight (Optional) – The weight that the model should apply to the prompt. A value that is less than zero declares a negative prompt. Use a negative prompt to tell the model to avoid certain concepts. The default value for weight is one. The example request body after this parameter list shows a negative prompt.

  • cfg_scale – (Optional) Determines how much the final image portrays the prompt. Use a lower number to increase randomness in the generation.

    Minimum    Maximum    Default
    0          35         7

  • clip_guidance_preset – (Optional) Enum: FAST_BLUE, FAST_GREEN, NONE, SIMPLE, SLOW, SLOWER, SLOWEST.

  • height – (Optional) Height of the image to generate, in pixels, in an increment divisible by 64.

    The height and width combination must be one of the following: 1024x1024, 1152x896, 1216x832, 1344x768, 1536x640, 640x1536, 768x1344, 832x1216, 896x1152.

  • width – (Optional) Width of the image to generate, in pixels, in an increment divisible by 64.

    The height and width combination must be one of the following: 1024x1024, 1152x896, 1216x832, 1344x768, 1536x640, 640x1536, 768x1344, 832x1216, 896x1152.

  • sampler – (Optional) The sampler to use for the diffusion process. If this value is omitted, the model automatically selects an appropriate sampler for you.

    Enum: DDIM, DDPM, K_DPMPP_2M, K_DPMPP_2S_ANCESTRAL, K_DPM_2, K_DPM_2_ANCESTRAL, K_EULER, K_EULER_ANCESTRAL, K_HEUN, K_LMS.

  • samples – (Optional) The number of images to generate. Currently Amazon Bedrock supports generating one image. If you supply a value for samples, the value must be one.

    Default    Minimum    Maximum
    1          1          1

  • seed – (Optional) The seed determines the initial noise setting. Use the same seed and the same settings as a previous run to allow inference to create a similar image. If you don't set this value, or the value is 0, it is set as a random number.

    Minimum    Maximum       Default
    0          4294967295    0

  • steps – (Optional) The number of generation steps determines how many times the image is sampled. More steps can result in a more accurate image.

    Minimum    Maximum    Default
    10         150        30

  • style_preset (Optional) – A style preset that guides the image model towards a particular style. This list of style presets is subject to change.

    Enum: 3d-model, analog-film, anime, cinematic, comic-book, digital-art, enhance, fantasy-art, isometric, line-art, low-poly, modeling-compound, neon-punk, origami, photographic, pixel-art, tile-texture.

  • extras (Optional) – Extra parameters passed to the engine. Use with caution. These parameters are used for in-development or experimental features and might change without warning.
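
Putting the parameters above together, the following is an illustrative request body that combines a weighted positive prompt with a negative prompt, one of the supported height and width pairs, and several of the optional parameters. All prompt text and values are examples only.

import json

# Illustrative request body; all values are examples.
body = json.dumps({
    "text_prompts": [
        # The primary prompt, with the default weight of one.
        {"text": "A photograph of a misty mountain lake at sunrise", "weight": 1.0},
        # A weight below zero makes this a negative prompt.
        {"text": "blurry, low quality", "weight": -1.0},
    ],
    "height": 1024,        # height and width must form one of the listed pairs
    "width": 1024,
    "cfg_scale": 10,       # higher values follow the prompt more closely
    "sampler": "K_DPMPP_2M",
    "seed": 452345,        # fixed seed for reproducible results
    "steps": 50,
    "style_preset": "photographic",
})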

Response

The Stability.ai Diffusion 1.0 model returns the following fields for a text to image inference call.

{ "result": string, "artifacts": [ { "seed": int, "base64": string, "finishReason": string } ] }
  • result – The result of the operation. If successful, the result is success.

  • artifacts – An array of images, one for each requested image.

    • seed – The value of the seed used to generate the image.

    • base64 – The base64 encoded image that the model generated.

    • finishReason – The result of the image generation process. Valid values are:

      • SUCCESS – The image generation process succeeded.

      • ERROR – An error occurred.

      • CONTENT_FILTERED – The content filter filtered the image and the image might be blurred.
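
As a brief sketch of handling these fields, the following code checks finishReason, decodes the base64-encoded image, and writes it to a file. Here, response_body is assumed to be the parsed JSON response from InvokeModel, and the file name is arbitrary.

import base64

# response_body is assumed to be the parsed JSON response from InvokeModel.
artifact = response_body["artifacts"][0]

# Stop if generation failed or the content filter intervened.
finish_reason = artifact.get("finishReason")
if finish_reason in ("ERROR", "CONTENT_FILTERED"):
    raise RuntimeError(f"Image generation failed: {finish_reason}")

# Decode the base64-encoded image and write it to disk.
with open("generated_image.png", "wb") as f:
    f.write(base64.b64decode(artifact["base64"]))

print(f"Result: {response_body['result']}, seed: {artifact['seed']}")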

Code example

The following example shows how to run inference with the Stability.ai Diffusion 1.0 model and on-demand throughput. The example submits a text prompt to the model, retrieves the response, and then displays the image.

# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to generate an image with SDXL 1.0 (on demand).
"""
import base64
import io
import json
import logging

import boto3
from botocore.exceptions import ClientError
from PIL import Image


class ImageError(Exception):
    "Custom exception for errors returned by SDXL"

    def __init__(self, message):
        self.message = message


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_image(model_id, body):
    """
    Generate an image using SDXL 1.0 on demand.
    Args:
        model_id (str): The model ID to use.
        body (str): The request body to use.
    Returns:
        image_bytes (bytes): The image generated by the model.
    """

    logger.info("Generating image with SDXL model %s", model_id)

    bedrock = boto3.client(service_name='bedrock-runtime')

    accept = "application/json"
    content_type = "application/json"

    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )
    response_body = json.loads(response.get("body").read())
    print(response_body['result'])

    # Decode the base64-encoded image returned in the first artifact.
    base64_image = response_body.get("artifacts")[0].get("base64")
    base64_bytes = base64_image.encode('ascii')
    image_bytes = base64.b64decode(base64_bytes)

    finish_reason = response_body.get("artifacts")[0].get("finishReason")

    if finish_reason == 'ERROR' or finish_reason == 'CONTENT_FILTERED':
        raise ImageError(f"Image generation error. Error code is {finish_reason}")

    logger.info("Successfully generated image with the SDXL 1.0 model %s", model_id)

    return image_bytes


def main():
    """
    Entrypoint for SDXL example.
    """

    logging.basicConfig(level=logging.INFO, format="%(levelname)s: %(message)s")

    model_id = 'stability.stable-diffusion-xl-v1'

    prompt = """Sri Lanka tea plantation."""

    # Create request body.
    body = json.dumps({
        "text_prompts": [
            {
                "text": prompt
            }
        ],
        "cfg_scale": 10,
        "seed": 0,
        "steps": 50,
        "samples": 1,
        "style_preset": "photographic"
    })

    try:
        image_bytes = generate_image(model_id=model_id, body=body)
        image = Image.open(io.BytesIO(image_bytes))
        image.show()

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occurred: " + format(message))
    except ImageError as err:
        logger.error(err.message)
        print(err.message)
    else:
        print(f"Finished generating image with SDXL model {model_id}.")


if __name__ == "__main__":
    main()