Class: Aws::SageMaker::Client

Inherits:
Seahorse::Client::Base show all
Includes:
ClientStubs
Defined in:
gems/aws-sdk-sagemaker/lib/aws-sdk-sagemaker/client.rb

Overview

An API client for SageMaker. To construct a client, you need to configure a :region and :credentials.

client = Aws::SageMaker::Client.new(
  region: region_name,
  credentials: credentials,
  # ...
)

For details on configuring region and credentials, see the developer guide.

See #initialize for a full list of supported configuration options.

Instance Attribute Summary

Attributes inherited from Seahorse::Client::Base

#config, #handlers

API Operations

Instance Method Summary

Methods included from ClientStubs

#api_requests, #stub_data, #stub_responses

Methods inherited from Seahorse::Client::Base

add_plugin, api, clear_plugins, define, new, #operation_names, plugins, remove_plugin, set_api, set_plugins

Methods included from Seahorse::Client::HandlerBuilder

#handle, #handle_request, #handle_response

Constructor Details

#initialize(options) ⇒ Client

Returns a new instance of Client.

Parameters:

  • options (Hash)

Options Hash (options):

  • :plugins (Array<Seahorse::Client::Plugin>) — default: []

    A list of plugins to apply to the client. Each plugin is either a class name or an instance of a plugin class.

  • :credentials (required, Aws::CredentialProvider)

    Your AWS credentials. This can be an instance of any one of the following classes:

    • Aws::Credentials - Used for configuring static, non-refreshing credentials.

    • Aws::SharedCredentials - Used for loading static credentials from a shared file, such as ~/.aws/config.

    • Aws::AssumeRoleCredentials - Used when you need to assume a role.

    • Aws::AssumeRoleWebIdentityCredentials - Used when you need to assume a role after providing credentials via the web.

    • Aws::SSOCredentials - Used for loading credentials from AWS SSO using an access token generated from aws login.

    • Aws::ProcessCredentials - Used for loading credentials from a process that outputs to stdout.

    • Aws::InstanceProfileCredentials - Used for loading credentials from an EC2 IMDS on an EC2 instance.

    • Aws::ECSCredentials - Used for loading credentials from instances running in ECS.

    • Aws::CognitoIdentityCredentials - Used for loading credentials from the Cognito Identity service.

    When :credentials are not configured directly, the following locations will be searched for credentials:

    • Aws.config[:credentials]
    • The :access_key_id, :secret_access_key, :session_token, and :account_id options.
    • ENV['AWS_ACCESS_KEY_ID'], ENV['AWS_SECRET_ACCESS_KEY'], ENV['AWS_SESSION_TOKEN'], and ENV['AWS_ACCOUNT_ID']
    • ~/.aws/credentials
    • ~/.aws/config
    • EC2/ECS IMDS instance profile - When used by default, the timeouts are very aggressive. Construct and pass an instance of Aws::InstanceProfileCredentials or Aws::ECSCredentials to enable retries and extended timeouts. Instance profile credential fetching can be disabled by setting ENV['AWS_EC2_METADATA_DISABLED'] to true.
  • :region (required, String)

    The AWS region to connect to. The configured :region is used to determine the service :endpoint. When not passed, a default :region is searched for in the following locations:

    • Aws.config[:region]
    • ENV['AWS_REGION']
    • ENV['AMAZON_REGION']
    • ENV['AWS_DEFAULT_REGION']
    • ~/.aws/credentials
    • ~/.aws/config
  • :access_key_id (String)
  • :account_id (String)
  • :active_endpoint_cache (Boolean) — default: false

    When set to true, a thread polling for endpoints will be running in the background every 60 secs (default). Defaults to false.

  • :adaptive_retry_wait_to_fill (Boolean) — default: true

    Used only in adaptive retry mode. When true, the request will sleep until there is sufficient client-side capacity to retry the request. When false, the request will raise a RetryCapacityNotAvailableError and will not retry instead of sleeping.

  • :client_side_monitoring (Boolean) — default: false

    When true, client-side metrics will be collected for all API requests from this client.

  • :client_side_monitoring_client_id (String) — default: ""

    Allows you to provide an identifier for this client which will be attached to all generated client side metrics. Defaults to an empty string.

  • :client_side_monitoring_host (String) — default: "127.0.0.1"

    Allows you to specify the DNS hostname or IPv4 or IPv6 address that the client side monitoring agent is running on, where client metrics will be published via UDP.

  • :client_side_monitoring_port (Integer) — default: 31000

    Required for publishing client metrics. The port that the client side monitoring agent is running on, where client metrics will be published via UDP.

  • :client_side_monitoring_publisher (Aws::ClientSideMonitoring::Publisher) — default: Aws::ClientSideMonitoring::Publisher

    Allows you to provide a custom client-side monitoring publisher class. By default, will use the Client Side Monitoring Agent Publisher.

  • :convert_params (Boolean) — default: true

    When true, an attempt is made to coerce request parameters into the required types.

  • :correct_clock_skew (Boolean) — default: true

    Used only in standard and adaptive retry modes. Specifies whether to apply a clock skew correction and retry requests with skewed client clocks.

  • :defaults_mode (String) — default: "legacy"

    See DefaultsModeConfiguration for a list of the accepted modes and the configuration defaults that are included.

  • :disable_host_prefix_injection (Boolean) — default: false

    Set to true to prevent the SDK from automatically adding a host prefix to the default service endpoint when available.

  • :disable_request_compression (Boolean) — default: false

    When set to 'true' the request body will not be compressed for supported operations.

  • :endpoint (String, URI::HTTPS, URI::HTTP)

    Normally you should not configure the :endpoint option directly. This is normally constructed from the :region option. Configuring :endpoint is normally reserved for connecting to test or custom endpoints. The endpoint should be a URI formatted like:

    'http://example.com'
    'https://example.com'
    'http://example.com:123'
    
  • :endpoint_cache_max_entries (Integer) — default: 1000

    Used for the maximum size limit of the LRU cache storing endpoints data for endpoint discovery enabled operations. Defaults to 1000.

  • :endpoint_cache_max_threads (Integer) — default: 10

    Used for the maximum threads in use for polling endpoints to be cached, defaults to 10.

  • :endpoint_cache_poll_interval (Integer) — default: 60

    When :endpoint_discovery and :active_endpoint_cache are enabled, use this option to configure the time interval in seconds for making requests to fetch endpoint information. Defaults to 60 seconds.

  • :endpoint_discovery (Boolean) — default: false

    When set to true, endpoint discovery will be enabled for operations when available.

  • :ignore_configured_endpoint_urls (Boolean)

    Setting to true disables use of endpoint URLs provided via environment variables and the shared configuration file.

  • :log_formatter (Aws::Log::Formatter) — default: Aws::Log::Formatter.default

    The log formatter.

  • :log_level (Symbol) — default: :info

    The log level to send messages to the :logger at.

  • :logger (Logger)

    The Logger instance to send log messages to. If this option is not set, logging will be disabled.

  • :max_attempts (Integer) — default: 3

    An integer representing the maximum number of attempts that will be made for a single request, including the initial attempt. For example, setting this value to 5 will result in a request being retried up to 4 times. Used in standard and adaptive retry modes.

  • :profile (String) — default: "default"

    Used when loading credentials from the shared credentials file at HOME/.aws/credentials. When not specified, 'default' is used.

  • :request_min_compression_size_bytes (Integer) — default: 10240

    The minimum size in bytes that triggers compression for request bodies. The value must be a non-negative integer between 0 and 10485780 bytes, inclusive.

  • :retry_backoff (Proc)

    A proc or lambda used for backoff. Defaults to 2**retries * retry_base_delay. This option is only used in the legacy retry mode.

  • :retry_base_delay (Float) — default: 0.3

    The base delay in seconds used by the default backoff function. This option is only used in the legacy retry mode.

  • :retry_jitter (Symbol) — default: :none

    A delay randomiser function used by the default backoff function. Some predefined functions can be referenced by name - :none, :equal, :full, otherwise a Proc that takes and returns a number. This option is only used in the legacy retry mode.

    See https://www.awsarchitectureblog.com/2015/03/backoff.html

  • :retry_limit (Integer) — default: 3

    The maximum number of times to retry failed requests. Only ~ 500 level server errors and certain ~ 400 level client errors are retried. Generally, these are throttling errors, data checksum errors, networking errors, timeout errors, auth errors, endpoint discovery, and errors from expired credentials. This option is only used in the legacy retry mode.

  • :retry_max_delay (Integer) — default: 0

    The maximum number of seconds to delay between retries (0 for no limit) used by the default backoff function. This option is only used in the legacy retry mode.

  • :retry_mode (String) — default: "legacy"

    Specifies which retry algorithm to use. Values are:

    • legacy - The pre-existing retry behavior. This is default value if no retry mode is provided.

    • standard - A standardized set of retry rules across the AWS SDKs. This includes support for retry quotas, which limit the number of unsuccessful retries a client can make.

    • adaptive - An experimental retry mode that includes all the functionality of standard mode along with automatic client side throttling. This is a provisional mode that may change behavior in the future.

  • :sdk_ua_app_id (String)

    A unique and opaque application ID that is appended to the User-Agent header as app/sdk_ua_app_id. It should have a maximum length of 50. This variable is sourced from environment variable AWS_SDK_UA_APP_ID or the shared config profile attribute sdk_ua_app_id.

  • :secret_access_key (String)
  • :session_token (String)
  • :sigv4a_signing_region_set (Array)

    A list of regions that should be signed with SigV4a signing. When not passed, a default :sigv4a_signing_region_set is searched for in the following locations:

    • Aws.config[:sigv4a_signing_region_set]
    • ENV['AWS_SIGV4A_SIGNING_REGION_SET']
    • ~/.aws/config
  • :simple_json (Boolean) — default: false

    Disables request parameter conversion, validation, and formatting. Also disables response data type conversions. The request parameters hash must be formatted exactly as the API expects. This option is useful when you want to ensure the highest level of performance by avoiding overhead of walking request parameters and response data structures.

  • :stub_responses (Boolean) — default: false

    Causes the client to return stubbed responses. By default fake responses are generated and returned. You can specify the response data to return or errors to raise by calling ClientStubs#stub_responses. See ClientStubs for more information.

    Please note When response stubbing is enabled, no HTTP requests are made, and retries are disabled.

  • :telemetry_provider (Aws::Telemetry::TelemetryProviderBase) — default: Aws::Telemetry::NoOpTelemetryProvider

    Allows you to provide a telemetry provider, which is used to emit telemetry data. By default, uses NoOpTelemetryProvider which will not record or emit any telemetry data. The SDK supports the following telemetry providers:

    • OpenTelemetry (OTel) - To use the OTel provider, install and require the opentelemetry-sdk gem and then, pass in an instance of a Aws::Telemetry::OTelProvider for telemetry provider.
  • :token_provider (Aws::TokenProvider)

    A Bearer Token Provider. This can be an instance of any one of the following classes:

    • Aws::StaticTokenProvider - Used for configuring static, non-refreshing tokens.

    • Aws::SSOTokenProvider - Used for loading tokens from AWS SSO using an access token generated from aws login.

    When :token_provider is not configured directly, the Aws::TokenProviderChain will be used to search for tokens configured for your profile in shared configuration files.

  • :use_dualstack_endpoint (Boolean)

    When set to true, dualstack enabled endpoints (with .aws TLD) will be used if available.

  • :use_fips_endpoint (Boolean)

    When set to true, fips compatible endpoints will be used if available. When a fips region is used, the region is normalized and this config is set to true.

  • :validate_params (Boolean) — default: true

    When true, request parameters are validated before sending the request.

  • :endpoint_provider (Aws::SageMaker::EndpointProvider)

    The endpoint provider used to resolve endpoints. Any object that responds to #resolve_endpoint(parameters) where parameters is a Struct similar to Aws::SageMaker::EndpointParameters.

  • :http_continue_timeout (Float) — default: 1

    The number of seconds to wait for a 100-continue response before sending the request body. This option has no effect unless the request has "Expect" header set to "100-continue". Defaults to nil which disables this behaviour. This value can safely be set per request on the session.

  • :http_idle_timeout (Float) — default: 5

    The number of seconds a connection is allowed to sit idle before it is considered stale. Stale connections are closed and removed from the pool before making a request.

  • :http_open_timeout (Float) — default: 15

    The default number of seconds to wait when opening an HTTP connection. This value can safely be set per-request on the session.

  • :http_proxy (URI::HTTP, String)

    A proxy to send requests through. Formatted like 'http://proxy.com:123'.

  • :http_read_timeout (Float) — default: 60

    The default number of seconds to wait for response data. This value can safely be set per-request on the session.

  • :http_wire_trace (Boolean) — default: false

    When true, HTTP debug output will be sent to the :logger.

  • :on_chunk_received (Proc)

    When a Proc object is provided, it will be used as callback when each chunk of the response body is received. It provides three arguments: the chunk, the number of bytes received, and the total number of bytes in the response (or nil if the server did not send a content-length).

  • :on_chunk_sent (Proc)

    When a Proc object is provided, it will be used as callback when each chunk of the request body is sent. It provides three arguments: the chunk, the number of bytes read from the body, and the total number of bytes in the body.

  • :raise_response_errors (Boolean) — default: true

    When true, response errors are raised.

  • :ssl_ca_bundle (String)

    Full path to the SSL certificate authority bundle file that should be used when verifying peer certificates. If you do not pass :ssl_ca_bundle or :ssl_ca_directory the system default will be used if available.

  • :ssl_ca_directory (String)

    Full path of the directory that contains the unbundled SSL certificate authority files for verifying peer certificates. If you do not pass :ssl_ca_bundle or :ssl_ca_directory the system default will be used if available.

  • :ssl_ca_store (String)

    Sets the X509::Store to verify peer certificate.

  • :ssl_cert (OpenSSL::X509::Certificate)

    Sets a client certificate when creating http connections.

  • :ssl_key (OpenSSL::PKey)

    Sets a client key when creating http connections.

  • :ssl_timeout (Float)

    Sets the SSL timeout in seconds.

  • :ssl_verify_peer (Boolean) — default: true

    When true, SSL peer certificates are verified when establishing a connection.



# File 'gems/aws-sdk-sagemaker/lib/aws-sdk-sagemaker/client.rb', line 451

def initialize(*args)
  super
end
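
The options documented above can be combined freely. The following is an illustrative sketch, not part of the generated reference; the region, profile name, retry settings, and stubbed data are example values chosen for the sketch:

# A hedged example combining several of the configuration options above.
# The profile name and retry settings are assumptions, not SDK defaults.
require 'aws-sdk-sagemaker'
require 'logger'

client = Aws::SageMaker::Client.new(
  region: "us-east-1",
  profile: "my-profile",      # loads credentials from the shared credentials file
  retry_mode: "standard",     # or "adaptive"
  max_attempts: 5,
  logger: Logger.new($stdout),
  log_level: :info
)

# For tests, :stub_responses avoids real HTTP requests entirely (see ClientStubs).
test_client = Aws::SageMaker::Client.new(region: "us-east-1", stub_responses: true)
test_client.stub_responses(:add_tags, tags: [{ key: "team", value: "ml" }])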

Instance Method Details

#add_association(params = {}) ⇒ Types::AddAssociationResponse

Creates an association between the source and the destination. A source can be associated with multiple destinations, and a destination can be associated with multiple sources. An association is a lineage tracking entity. For more information, see Amazon SageMaker ML Lineage Tracking.

Examples:

Request syntax with placeholder values


resp = client.add_association({
  source_arn: "AssociationEntityArn", # required
  destination_arn: "AssociationEntityArn", # required
  association_type: "ContributedTo", # accepts ContributedTo, AssociatedWith, DerivedFrom, Produced, SameAs
})

Response structure


resp.source_arn #=> String
resp.destination_arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :source_arn (required, String)

    The ARN of the source.

  • :destination_arn (required, String)

    The Amazon Resource Name (ARN) of the destination.

  • :association_type (String)

    The type of association. The following are suggested uses for each type. Amazon SageMaker places no restrictions on their use.

    • ContributedTo - The source contributed to the destination or had a part in enabling the destination. For example, the training data contributed to the training job.

    • AssociatedWith - The source is connected to the destination. For example, an approval workflow is associated with a model deployment.

    • DerivedFrom - The destination is a modification of the source. For example, a digest output of a channel input for a processing job is derived from the original inputs.

    • Produced - The source generated the destination. For example, a training job produced a model artifact.

Returns:

See Also:



# File 'gems/aws-sdk-sagemaker/lib/aws-sdk-sagemaker/client.rb', line 513

def add_association(params = {}, options = {})
  req = build_request(:add_association, params)
  req.send_request(options)
end
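
As a usage sketch (the client is assumed to be constructed as shown in the Overview, and the ARNs below are placeholders, not real resources), a call can be wrapped in the SDK's standard error handling:

begin
  resp = client.add_association({
    source_arn: "arn:aws:sagemaker:us-east-1:123456789012:artifact/example-source",    # placeholder
    destination_arn: "arn:aws:sagemaker:us-east-1:123456789012:artifact/example-dest", # placeholder
    association_type: "ContributedTo",
  })
  puts "Associated #{resp.source_arn} with #{resp.destination_arn}"
rescue Aws::SageMaker::Errors::ServiceError => e
  # All modeled SageMaker errors inherit from Aws::SageMaker::Errors::ServiceError.
  warn "add_association failed: #{e.code}: #{e.message}"
end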

#add_tags(params = {}) ⇒ Types::AddTagsOutput

Adds or overwrites one or more tags for the specified SageMaker resource. You can add tags to notebook instances, training jobs, hyperparameter tuning jobs, batch transform jobs, models, labeling jobs, work teams, endpoint configurations, and endpoints.

Each tag consists of a key and an optional value. Tag keys must be unique per resource. For more information about tags, see Amazon Web Services Tagging Strategies.

Tags that you add to a hyperparameter tuning job by calling this API are also added to any training jobs that the hyperparameter tuning job launches after you call this API, but not to training jobs that the hyperparameter tuning job launched before you called this API. To make sure that the tags associated with a hyperparameter tuning job are also added to all training jobs that the hyperparameter tuning job launches, add the tags when you first create the tuning job by specifying them in the Tags parameter of CreateHyperParameterTuningJob.

Tags that you add to a SageMaker Domain or User Profile by calling this API are also added to any Apps that the Domain or User Profile launches after you call this API, but not to Apps that the Domain or User Profile launched before you called this API. To make sure that the tags associated with a Domain or User Profile are also added to all Apps that the Domain or User Profile launches, add the tags when you first create the Domain or User Profile by specifying them in the Tags parameter of CreateDomain or CreateUserProfile.

Examples:

Request syntax with placeholder values


resp = client.add_tags({
  resource_arn: "ResourceArn", # required
  tags: [ # required
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
})

Response structure


resp.tags #=> Array
resp.tags[0].key #=> String
resp.tags[0].value #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :resource_arn (required, String)

    The Amazon Resource Name (ARN) of the resource that you want to tag.

  • :tags (required, Array<Types::Tag>)

    An array of key-value pairs. You can use tags to categorize your Amazon Web Services resources in different ways, for example, by purpose, owner, or environment. For more information, see Tagging Amazon Web Services Resources.

Returns:

See Also:



# File 'gems/aws-sdk-sagemaker/lib/aws-sdk-sagemaker/client.rb', line 596

def add_tags(params = {}, options = {})
  req = build_request(:add_tags, params)
  req.send_request(options)
end
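
As an illustrative sketch (the resource ARN and tag values are placeholders), the tags echoed back in the response can be inspected directly:

resp = client.add_tags({
  resource_arn: "arn:aws:sagemaker:us-east-1:123456789012:notebook-instance/example", # placeholder
  tags: [
    { key: "project", value: "demo" },
    { key: "owner", value: "data-science" },
  ],
})
resp.tags.each { |tag| puts "#{tag.key}=#{tag.value}" }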

#associate_trial_component(params = {}) ⇒ Types::AssociateTrialComponentResponse

Associates a trial component with a trial. A trial component can be associated with multiple trials. To disassociate a trial component from a trial, call the DisassociateTrialComponent API.

Examples:

Request syntax with placeholder values


resp = client.associate_trial_component({
  trial_component_name: "ExperimentEntityName", # required
  trial_name: "ExperimentEntityName", # required
})

Response structure


resp.trial_component_arn #=> String
resp.trial_arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :trial_component_name (required, String)

    The name of the component to associate with the trial.

  • :trial_name (required, String)

    The name of the trial to associate with.

Returns:

See Also:



# File 'gems/aws-sdk-sagemaker/lib/aws-sdk-sagemaker/client.rb', line 636

def associate_trial_component(params = {}, options = {})
  req = build_request(:associate_trial_component, params)
  req.send_request(options)
end
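
A brief sketch (the experiment entity names are placeholders) pairing this call with the DisassociateTrialComponent API mentioned above:

# Associate a trial component with a trial, then later remove the link.
client.associate_trial_component({
  trial_component_name: "example-trial-component",
  trial_name: "example-trial",
})

# ...later, to undo the association:
client.disassociate_trial_component({
  trial_component_name: "example-trial-component",
  trial_name: "example-trial",
})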

#batch_delete_cluster_nodes(params = {}) ⇒ Types::BatchDeleteClusterNodesResponse

Deletes specific nodes within a SageMaker HyperPod cluster. BatchDeleteClusterNodes accepts a cluster name and a list of node IDs.

Examples:

Request syntax with placeholder values


resp = client.batch_delete_cluster_nodes({
  cluster_name: "ClusterNameOrArn", # required
  node_ids: ["ClusterNodeId"], # required
})

Response structure


resp.failed #=> Array
resp.failed[0].code #=> String, one of "NodeIdNotFound", "InvalidNodeStatus", "NodeIdInUse"
resp.failed[0].message #=> String
resp.failed[0].node_id #=> String
resp.successful #=> Array
resp.successful[0] #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_name (required, String)

    The name of the SageMaker HyperPod cluster from which to delete the specified nodes.

  • :node_ids (required, Array<String>)

    A list of node IDs to be deleted from the specified cluster.

    For SageMaker HyperPod clusters using the Slurm workload manager, you cannot remove instances that are configured as Slurm controller nodes.

Returns:

See Also:



# File 'gems/aws-sdk-sagemaker/lib/aws-sdk-sagemaker/client.rb', line 699

def batch_delete_cluster_nodes(params = {}, options = {})
  req = build_request(:batch_delete_cluster_nodes, params)
  req.send_request(options)
end
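
Because node deletion can partially succeed, here is a sketch of inspecting the response (the cluster name and node IDs are placeholders):

resp = client.batch_delete_cluster_nodes({
  cluster_name: "example-hyperpod-cluster", # placeholder
  node_ids: ["i-node-1", "i-node-2"],       # placeholders
})

resp.successful.each { |node_id| puts "Deleted #{node_id}" }
resp.failed.each do |failure|
  warn "Could not delete #{failure.node_id}: #{failure.code} - #{failure.message}"
end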

#batch_describe_model_package(params = {}) ⇒ Types::BatchDescribeModelPackageOutput

This action batch describes a list of versioned model packages.

Examples:

Request syntax with placeholder values


resp = client.batch_describe_model_package({
  model_package_arn_list: ["ModelPackageArn"], # required
})

Response structure


resp.model_package_summaries #=> Hash
resp.model_package_summaries["ModelPackageArn"].model_package_group_name #=> String
resp.model_package_summaries["ModelPackageArn"].model_package_version #=> Integer
resp.model_package_summaries["ModelPackageArn"].model_package_arn #=> String
resp.model_package_summaries["ModelPackageArn"].model_package_description #=> String
resp.model_package_summaries["ModelPackageArn"].creation_time #=> Time
resp.model_package_summaries["ModelPackageArn"].inference_specification.containers #=> Array
resp.model_package_summaries["ModelPackageArn"].inference_specification.containers[0].container_hostname #=> String
resp.model_package_summaries["ModelPackageArn"].inference_specification.containers[0].image #=> String
resp.model_package_summaries["ModelPackageArn"].inference_specification.containers[0].image_digest #=> String
resp.model_package_summaries["ModelPackageArn"].inference_specification.containers[0].model_data_url #=> String
resp.model_package_summaries["ModelPackageArn"].inference_specification.containers[0].model_data_source.s3_data_source.s3_uri #=> String
resp.model_package_summaries["ModelPackageArn"].inference_specification.containers[0].model_data_source.s3_data_source.s3_data_type #=> String, one of "S3Prefix", "S3Object"
resp.model_package_summaries["ModelPackageArn"].inference_specification.containers[0].model_data_source.s3_data_source.compression_type #=> String, one of "None", "Gzip"
resp.model_package_summaries["ModelPackageArn"].inference_specification.containers[0].model_data_source.s3_data_source.model_access_config.accept_eula #=> Boolean
resp.model_package_summaries["ModelPackageArn"].inference_specification.containers[0].model_data_source.s3_data_source.hub_access_config.hub_content_arn #=> String
resp.model_package_summaries["ModelPackageArn"].inference_specification.containers[0].model_data_source.s3_data_source.manifest_s3_uri #=> String
resp.model_package_summaries["ModelPackageArn"].inference_specification.containers[0].product_id #=> String
resp.model_package_summaries["ModelPackageArn"].inference_specification.containers[0].environment #=> Hash
resp.model_package_summaries["ModelPackageArn"].inference_specification.containers[0].environment["EnvironmentKey"] #=> String
resp.model_package_summaries["ModelPackageArn"].inference_specification.containers[0].model_input.data_input_config #=> String
resp.model_package_summaries["ModelPackageArn"].inference_specification.containers[0].framework #=> String
resp.model_package_summaries["ModelPackageArn"].inference_specification.containers[0].framework_version #=> String
resp.model_package_summaries["ModelPackageArn"].inference_specification.containers[0].nearest_model_name #=> String
resp.model_package_summaries["ModelPackageArn"].inference_specification.containers[0].additional_s3_data_source.s3_data_type #=> String, one of "S3Object", "S3Prefix"
resp.model_package_summaries["ModelPackageArn"].inference_specification.containers[0].additional_s3_data_source.s3_uri #=> String
resp.model_package_summaries["ModelPackageArn"].inference_specification.containers[0].additional_s3_data_source.compression_type #=> String, one of "None", "Gzip"
resp.model_package_summaries["ModelPackageArn"].inference_specification.supported_transform_instance_types #=> Array
resp.model_package_summaries["ModelPackageArn"].inference_specification.supported_transform_instance_types[0] #=> String, one of "ml.m4.xlarge", "ml.m4.2xlarge", "ml.m4.4xlarge", "ml.m4.10xlarge", "ml.m4.16xlarge", "ml.c4.xlarge", "ml.c4.2xlarge", "ml.c4.4xlarge", "ml.c4.8xlarge", "ml.p2.xlarge", "ml.p2.8xlarge", "ml.p2.16xlarge", "ml.p3.2xlarge", "ml.p3.8xlarge", "ml.p3.16xlarge", "ml.c5.xlarge", "ml.c5.2xlarge", "ml.c5.4xlarge", "ml.c5.9xlarge", "ml.c5.18xlarge", "ml.m5.large", "ml.m5.xlarge", "ml.m5.2xlarge", "ml.m5.4xlarge", "ml.m5.12xlarge", "ml.m5.24xlarge", "ml.m6i.large", "ml.m6i.xlarge", "ml.m6i.2xlarge", "ml.m6i.4xlarge", "ml.m6i.8xlarge", "ml.m6i.12xlarge", "ml.m6i.16xlarge", "ml.m6i.24xlarge", "ml.m6i.32xlarge", "ml.c6i.large", "ml.c6i.xlarge", "ml.c6i.2xlarge", "ml.c6i.4xlarge", "ml.c6i.8xlarge", "ml.c6i.12xlarge", "ml.c6i.16xlarge", "ml.c6i.24xlarge", "ml.c6i.32xlarge", "ml.r6i.large", "ml.r6i.xlarge", "ml.r6i.2xlarge", "ml.r6i.4xlarge", "ml.r6i.8xlarge", "ml.r6i.12xlarge", "ml.r6i.16xlarge", "ml.r6i.24xlarge", "ml.r6i.32xlarge", "ml.m7i.large", "ml.m7i.xlarge", "ml.m7i.2xlarge", "ml.m7i.4xlarge", "ml.m7i.8xlarge", "ml.m7i.12xlarge", "ml.m7i.16xlarge", "ml.m7i.24xlarge", "ml.m7i.48xlarge", "ml.c7i.large", "ml.c7i.xlarge", "ml.c7i.2xlarge", "ml.c7i.4xlarge", "ml.c7i.8xlarge", "ml.c7i.12xlarge", "ml.c7i.16xlarge", "ml.c7i.24xlarge", "ml.c7i.48xlarge", "ml.r7i.large", "ml.r7i.xlarge", "ml.r7i.2xlarge", "ml.r7i.4xlarge", "ml.r7i.8xlarge", "ml.r7i.12xlarge", "ml.r7i.16xlarge", "ml.r7i.24xlarge", "ml.r7i.48xlarge", "ml.g4dn.xlarge", "ml.g4dn.2xlarge", "ml.g4dn.4xlarge", "ml.g4dn.8xlarge", "ml.g4dn.12xlarge", "ml.g4dn.16xlarge", "ml.g5.xlarge", "ml.g5.2xlarge", "ml.g5.4xlarge", "ml.g5.8xlarge", "ml.g5.12xlarge", "ml.g5.16xlarge", "ml.g5.24xlarge", "ml.g5.48xlarge", "ml.inf2.xlarge", "ml.inf2.8xlarge", "ml.inf2.24xlarge", "ml.inf2.48xlarge"
resp.model_package_summaries["ModelPackageArn"].inference_specification.supported_realtime_inference_instance_types #=> Array
resp.model_package_summaries["ModelPackageArn"].inference_specification.supported_realtime_inference_instance_types[0] #=> String, one of "ml.t2.medium", "ml.t2.large", "ml.t2.xlarge", "ml.t2.2xlarge", "ml.m4.xlarge", "ml.m4.2xlarge", "ml.m4.4xlarge", "ml.m4.10xlarge", "ml.m4.16xlarge", "ml.m5.large", "ml.m5.xlarge", "ml.m5.2xlarge", "ml.m5.4xlarge", "ml.m5.12xlarge", "ml.m5.24xlarge", "ml.m5d.large", "ml.m5d.xlarge", "ml.m5d.2xlarge", "ml.m5d.4xlarge", "ml.m5d.12xlarge", "ml.m5d.24xlarge", "ml.c4.large", "ml.c4.xlarge", "ml.c4.2xlarge", "ml.c4.4xlarge", "ml.c4.8xlarge", "ml.p2.xlarge", "ml.p2.8xlarge", "ml.p2.16xlarge", "ml.p3.2xlarge", "ml.p3.8xlarge", "ml.p3.16xlarge", "ml.c5.large", "ml.c5.xlarge", "ml.c5.2xlarge", "ml.c5.4xlarge", "ml.c5.9xlarge", "ml.c5.18xlarge", "ml.c5d.large", "ml.c5d.xlarge", "ml.c5d.2xlarge", "ml.c5d.4xlarge", "ml.c5d.9xlarge", "ml.c5d.18xlarge", "ml.g4dn.xlarge", "ml.g4dn.2xlarge", "ml.g4dn.4xlarge", "ml.g4dn.8xlarge", "ml.g4dn.12xlarge", "ml.g4dn.16xlarge", "ml.r5.large", "ml.r5.xlarge", "ml.r5.2xlarge", "ml.r5.4xlarge", "ml.r5.12xlarge", "ml.r5.24xlarge", "ml.r5d.large", "ml.r5d.xlarge", "ml.r5d.2xlarge", "ml.r5d.4xlarge", "ml.r5d.12xlarge", "ml.r5d.24xlarge", "ml.inf1.xlarge", "ml.inf1.2xlarge", "ml.inf1.6xlarge", "ml.inf1.24xlarge", "ml.dl1.24xlarge", "ml.c6i.large", "ml.c6i.xlarge", "ml.c6i.2xlarge", "ml.c6i.4xlarge", "ml.c6i.8xlarge", "ml.c6i.12xlarge", "ml.c6i.16xlarge", "ml.c6i.24xlarge", "ml.c6i.32xlarge", "ml.m6i.large", "ml.m6i.xlarge", "ml.m6i.2xlarge", "ml.m6i.4xlarge", "ml.m6i.8xlarge", "ml.m6i.12xlarge", "ml.m6i.16xlarge", "ml.m6i.24xlarge", "ml.m6i.32xlarge", "ml.r6i.large", "ml.r6i.xlarge", "ml.r6i.2xlarge", "ml.r6i.4xlarge", "ml.r6i.8xlarge", "ml.r6i.12xlarge", "ml.r6i.16xlarge", "ml.r6i.24xlarge", "ml.r6i.32xlarge", "ml.g5.xlarge", "ml.g5.2xlarge", "ml.g5.4xlarge", "ml.g5.8xlarge", "ml.g5.12xlarge", "ml.g5.16xlarge", "ml.g5.24xlarge", "ml.g5.48xlarge", "ml.g6.xlarge", "ml.g6.2xlarge", "ml.g6.4xlarge", "ml.g6.8xlarge", "ml.g6.12xlarge", "ml.g6.16xlarge", "ml.g6.24xlarge", "ml.g6.48xlarge", "ml.g6e.xlarge", "ml.g6e.2xlarge", "ml.g6e.4xlarge", "ml.g6e.8xlarge", "ml.g6e.12xlarge", "ml.g6e.16xlarge", "ml.g6e.24xlarge", "ml.g6e.48xlarge", "ml.p4d.24xlarge", "ml.c7g.large", "ml.c7g.xlarge", "ml.c7g.2xlarge", "ml.c7g.4xlarge", "ml.c7g.8xlarge", "ml.c7g.12xlarge", "ml.c7g.16xlarge", "ml.m6g.large", "ml.m6g.xlarge", "ml.m6g.2xlarge", "ml.m6g.4xlarge", "ml.m6g.8xlarge", "ml.m6g.12xlarge", "ml.m6g.16xlarge", "ml.m6gd.large", "ml.m6gd.xlarge", "ml.m6gd.2xlarge", "ml.m6gd.4xlarge", "ml.m6gd.8xlarge", "ml.m6gd.12xlarge", "ml.m6gd.16xlarge", "ml.c6g.large", "ml.c6g.xlarge", "ml.c6g.2xlarge", "ml.c6g.4xlarge", "ml.c6g.8xlarge", "ml.c6g.12xlarge", "ml.c6g.16xlarge", "ml.c6gd.large", "ml.c6gd.xlarge", "ml.c6gd.2xlarge", "ml.c6gd.4xlarge", "ml.c6gd.8xlarge", "ml.c6gd.12xlarge", "ml.c6gd.16xlarge", "ml.c6gn.large", "ml.c6gn.xlarge", "ml.c6gn.2xlarge", "ml.c6gn.4xlarge", "ml.c6gn.8xlarge", "ml.c6gn.12xlarge", "ml.c6gn.16xlarge", "ml.r6g.large", "ml.r6g.xlarge", "ml.r6g.2xlarge", "ml.r6g.4xlarge", "ml.r6g.8xlarge", "ml.r6g.12xlarge", "ml.r6g.16xlarge", "ml.r6gd.large", "ml.r6gd.xlarge", "ml.r6gd.2xlarge", "ml.r6gd.4xlarge", "ml.r6gd.8xlarge", "ml.r6gd.12xlarge", "ml.r6gd.16xlarge", "ml.p4de.24xlarge", "ml.trn1.2xlarge", "ml.trn1.32xlarge", "ml.trn1n.32xlarge", "ml.trn2.48xlarge", "ml.inf2.xlarge", "ml.inf2.8xlarge", "ml.inf2.24xlarge", "ml.inf2.48xlarge", "ml.p5.48xlarge", "ml.p5e.48xlarge", "ml.m7i.large", "ml.m7i.xlarge", "ml.m7i.2xlarge", "ml.m7i.4xlarge", 
"ml.m7i.8xlarge", "ml.m7i.12xlarge", "ml.m7i.16xlarge", "ml.m7i.24xlarge", "ml.m7i.48xlarge", "ml.c7i.large", "ml.c7i.xlarge", "ml.c7i.2xlarge", "ml.c7i.4xlarge", "ml.c7i.8xlarge", "ml.c7i.12xlarge", "ml.c7i.16xlarge", "ml.c7i.24xlarge", "ml.c7i.48xlarge", "ml.r7i.large", "ml.r7i.xlarge", "ml.r7i.2xlarge", "ml.r7i.4xlarge", "ml.r7i.8xlarge", "ml.r7i.12xlarge", "ml.r7i.16xlarge", "ml.r7i.24xlarge", "ml.r7i.48xlarge"
resp.model_package_summaries["ModelPackageArn"].inference_specification.supported_content_types #=> Array
resp.model_package_summaries["ModelPackageArn"].inference_specification.supported_content_types[0] #=> String
resp.model_package_summaries["ModelPackageArn"].inference_specification.supported_response_mime_types #=> Array
resp.model_package_summaries["ModelPackageArn"].inference_specification.supported_response_mime_types[0] #=> String
resp.model_package_summaries["ModelPackageArn"].model_package_status #=> String, one of "Pending", "InProgress", "Completed", "Failed", "Deleting"
resp.model_package_summaries["ModelPackageArn"].model_approval_status #=> String, one of "Approved", "Rejected", "PendingManualApproval"
resp.batch_describe_model_package_error_map #=> Hash
resp.batch_describe_model_package_error_map["ModelPackageArn"].error_code #=> String
resp.batch_describe_model_package_error_map["ModelPackageArn"].error_response #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :model_package_arn_list (required, Array<String>)

    The list of Amazon Resource Names (ARNs) of the model package groups.

Returns:

See Also:



# File 'gems/aws-sdk-sagemaker/lib/aws-sdk-sagemaker/client.rb', line 767

def batch_describe_model_package(params = {}, options = {})
  req = build_request(:batch_describe_model_package, params)
  req.send_request(options)
end
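
A sketch of walking the summaries and the per-ARN error map in the response (the model package ARN is a placeholder):

arns = ["arn:aws:sagemaker:us-east-1:123456789012:model-package/example-group/1"] # placeholder
resp = client.batch_describe_model_package({ model_package_arn_list: arns })

resp.model_package_summaries.each do |arn, summary|
  puts "#{arn}: version #{summary.model_package_version} (#{summary.model_package_status})"
end

resp.batch_describe_model_package_error_map.each do |arn, error|
  warn "#{arn}: #{error.error_code} - #{error.error_response}"
end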

#create_action(params = {}) ⇒ Types::CreateActionResponse

Creates an action. An action is a lineage tracking entity that represents an action or activity. For example, a model deployment or an HPO job. Generally, an action involves at least one input or output artifact. For more information, see Amazon SageMaker ML Lineage Tracking.

Examples:

Request syntax with placeholder values


resp = client.create_action({
  action_name: "ExperimentEntityName", # required
  source: { # required
    source_uri: "SourceUri", # required
    source_type: "String256",
    source_id: "String256",
  },
  action_type: "String256", # required
  description: "ExperimentDescription",
  status: "Unknown", # accepts Unknown, InProgress, Completed, Failed, Stopping, Stopped
  properties: {
    "StringParameterValue" => "StringParameterValue",
  },
  metadata_properties: {
    commit_id: "MetadataPropertyValue",
    repository: "MetadataPropertyValue",
    generated_by: "MetadataPropertyValue",
    project_id: "MetadataPropertyValue",
  },
  tags: [
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
})

Response structure


resp.action_arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :action_name (required, String)

    The name of the action. Must be unique to your account in an Amazon Web Services Region.

  • :source (required, Types::ActionSource)

    The source type, ID, and URI.

  • :action_type (required, String)

    The action type.

  • :description (String)

    The description of the action.

  • :status (String)

    The status of the action.

  • :properties (Hash<String,String>)

    A list of properties to add to the action.

  • :metadata_properties (Types::MetadataProperties)

    Metadata properties of the tracking entity, trial, or trial component.

  • :tags (Array<Types::Tag>)

    A list of tags to apply to the action.

Returns:

See Also:



# File 'gems/aws-sdk-sagemaker/lib/aws-sdk-sagemaker/client.rb', line 848

def create_action(params = {}, options = {})
  req = build_request(:create_action, params)
  req.send_request(options)
end

#create_algorithm(params = {}) ⇒ Types::CreateAlgorithmOutput

Create a machine learning algorithm that you can use in SageMaker and list in the Amazon Web Services Marketplace.

Examples:

Request syntax with placeholder values


resp = client.create_algorithm({
  algorithm_name: "EntityName", # required
  algorithm_description: "EntityDescription",
  training_specification: { # required
    training_image: "ContainerImage", # required
    training_image_digest: "ImageDigest",
    supported_hyper_parameters: [
      {
        name: "ParameterName", # required
        description: "EntityDescription",
        type: "Integer", # required, accepts Integer, Continuous, Categorical, FreeText
        range: {
          integer_parameter_range_specification: {
            min_value: "ParameterValue", # required
            max_value: "ParameterValue", # required
          },
          continuous_parameter_range_specification: {
            min_value: "ParameterValue", # required
            max_value: "ParameterValue", # required
          },
          categorical_parameter_range_specification: {
            values: ["ParameterValue"], # required
          },
        },
        is_tunable: false,
        is_required: false,
        default_value: "HyperParameterValue",
      },
    ],
    supported_training_instance_types: ["ml.m4.xlarge"], # required, accepts ml.m4.xlarge, ml.m4.2xlarge, ml.m4.4xlarge, ml.m4.10xlarge, ml.m4.16xlarge, ml.g4dn.xlarge, ml.g4dn.2xlarge, ml.g4dn.4xlarge, ml.g4dn.8xlarge, ml.g4dn.12xlarge, ml.g4dn.16xlarge, ml.m5.large, ml.m5.xlarge, ml.m5.2xlarge, ml.m5.4xlarge, ml.m5.12xlarge, ml.m5.24xlarge, ml.c4.xlarge, ml.c4.2xlarge, ml.c4.4xlarge, ml.c4.8xlarge, ml.p2.xlarge, ml.p2.8xlarge, ml.p2.16xlarge, ml.p3.2xlarge, ml.p3.8xlarge, ml.p3.16xlarge, ml.p3dn.24xlarge, ml.p4d.24xlarge, ml.p4de.24xlarge, ml.p5.48xlarge, ml.p5e.48xlarge, ml.p5en.48xlarge, ml.c5.xlarge, ml.c5.2xlarge, ml.c5.4xlarge, ml.c5.9xlarge, ml.c5.18xlarge, ml.c5n.xlarge, ml.c5n.2xlarge, ml.c5n.4xlarge, ml.c5n.9xlarge, ml.c5n.18xlarge, ml.g5.xlarge, ml.g5.2xlarge, ml.g5.4xlarge, ml.g5.8xlarge, ml.g5.16xlarge, ml.g5.12xlarge, ml.g5.24xlarge, ml.g5.48xlarge, ml.g6.xlarge, ml.g6.2xlarge, ml.g6.4xlarge, ml.g6.8xlarge, ml.g6.16xlarge, ml.g6.12xlarge, ml.g6.24xlarge, ml.g6.48xlarge, ml.g6e.xlarge, ml.g6e.2xlarge, ml.g6e.4xlarge, ml.g6e.8xlarge, ml.g6e.16xlarge, ml.g6e.12xlarge, ml.g6e.24xlarge, ml.g6e.48xlarge, ml.trn1.2xlarge, ml.trn1.32xlarge, ml.trn1n.32xlarge, ml.trn2.48xlarge, ml.m6i.large, ml.m6i.xlarge, ml.m6i.2xlarge, ml.m6i.4xlarge, ml.m6i.8xlarge, ml.m6i.12xlarge, ml.m6i.16xlarge, ml.m6i.24xlarge, ml.m6i.32xlarge, ml.c6i.xlarge, ml.c6i.2xlarge, ml.c6i.8xlarge, ml.c6i.4xlarge, ml.c6i.12xlarge, ml.c6i.16xlarge, ml.c6i.24xlarge, ml.c6i.32xlarge, ml.r5d.large, ml.r5d.xlarge, ml.r5d.2xlarge, ml.r5d.4xlarge, ml.r5d.8xlarge, ml.r5d.12xlarge, ml.r5d.16xlarge, ml.r5d.24xlarge, ml.t3.medium, ml.t3.large, ml.t3.xlarge, ml.t3.2xlarge, ml.r5.large, ml.r5.xlarge, ml.r5.2xlarge, ml.r5.4xlarge, ml.r5.8xlarge, ml.r5.12xlarge, ml.r5.16xlarge, ml.r5.24xlarge
    supports_distributed_training: false,
    metric_definitions: [
      {
        name: "MetricName", # required
        regex: "MetricRegex", # required
      },
    ],
    training_channels: [ # required
      {
        name: "ChannelName", # required
        description: "EntityDescription",
        is_required: false,
        supported_content_types: ["ContentType"], # required
        supported_compression_types: ["None"], # accepts None, Gzip
        supported_input_modes: ["Pipe"], # required, accepts Pipe, File, FastFile
      },
    ],
    supported_tuning_job_objective_metrics: [
      {
        type: "Maximize", # required, accepts Maximize, Minimize
        metric_name: "MetricName", # required
      },
    ],
    additional_s3_data_source: {
      s3_data_type: "S3Object", # required, accepts S3Object, S3Prefix
      s3_uri: "S3Uri", # required
      compression_type: "None", # accepts None, Gzip
    },
  },
  inference_specification: {
    containers: [ # required
      {
        container_hostname: "ContainerHostname",
        image: "ContainerImage", # required
        image_digest: "ImageDigest",
        model_data_url: "Url",
        model_data_source: {
          s3_data_source: {
            s3_uri: "S3ModelUri", # required
            s3_data_type: "S3Prefix", # required, accepts S3Prefix, S3Object
            compression_type: "None", # required, accepts None, Gzip
            model_access_config: {
              accept_eula: false, # required
            },
            hub_access_config: {
              hub_content_arn: "HubContentArn", # required
            },
            manifest_s3_uri: "S3ModelUri",
          },
        },
        product_id: "ProductId",
        environment: {
          "EnvironmentKey" => "EnvironmentValue",
        },
        model_input: {
          data_input_config: "DataInputConfig", # required
        },
        framework: "String",
        framework_version: "ModelPackageFrameworkVersion",
        nearest_model_name: "String",
        additional_s3_data_source: {
          s3_data_type: "S3Object", # required, accepts S3Object, S3Prefix
          s3_uri: "S3Uri", # required
          compression_type: "None", # accepts None, Gzip
        },
      },
    ],
    supported_transform_instance_types: ["ml.m4.xlarge"], # accepts ml.m4.xlarge, ml.m4.2xlarge, ml.m4.4xlarge, ml.m4.10xlarge, ml.m4.16xlarge, ml.c4.xlarge, ml.c4.2xlarge, ml.c4.4xlarge, ml.c4.8xlarge, ml.p2.xlarge, ml.p2.8xlarge, ml.p2.16xlarge, ml.p3.2xlarge, ml.p3.8xlarge, ml.p3.16xlarge, ml.c5.xlarge, ml.c5.2xlarge, ml.c5.4xlarge, ml.c5.9xlarge, ml.c5.18xlarge, ml.m5.large, ml.m5.xlarge, ml.m5.2xlarge, ml.m5.4xlarge, ml.m5.12xlarge, ml.m5.24xlarge, ml.m6i.large, ml.m6i.xlarge, ml.m6i.2xlarge, ml.m6i.4xlarge, ml.m6i.8xlarge, ml.m6i.12xlarge, ml.m6i.16xlarge, ml.m6i.24xlarge, ml.m6i.32xlarge, ml.c6i.large, ml.c6i.xlarge, ml.c6i.2xlarge, ml.c6i.4xlarge, ml.c6i.8xlarge, ml.c6i.12xlarge, ml.c6i.16xlarge, ml.c6i.24xlarge, ml.c6i.32xlarge, ml.r6i.large, ml.r6i.xlarge, ml.r6i.2xlarge, ml.r6i.4xlarge, ml.r6i.8xlarge, ml.r6i.12xlarge, ml.r6i.16xlarge, ml.r6i.24xlarge, ml.r6i.32xlarge, ml.m7i.large, ml.m7i.xlarge, ml.m7i.2xlarge, ml.m7i.4xlarge, ml.m7i.8xlarge, ml.m7i.12xlarge, ml.m7i.16xlarge, ml.m7i.24xlarge, ml.m7i.48xlarge, ml.c7i.large, ml.c7i.xlarge, ml.c7i.2xlarge, ml.c7i.4xlarge, ml.c7i.8xlarge, ml.c7i.12xlarge, ml.c7i.16xlarge, ml.c7i.24xlarge, ml.c7i.48xlarge, ml.r7i.large, ml.r7i.xlarge, ml.r7i.2xlarge, ml.r7i.4xlarge, ml.r7i.8xlarge, ml.r7i.12xlarge, ml.r7i.16xlarge, ml.r7i.24xlarge, ml.r7i.48xlarge, ml.g4dn.xlarge, ml.g4dn.2xlarge, ml.g4dn.4xlarge, ml.g4dn.8xlarge, ml.g4dn.12xlarge, ml.g4dn.16xlarge, ml.g5.xlarge, ml.g5.2xlarge, ml.g5.4xlarge, ml.g5.8xlarge, ml.g5.12xlarge, ml.g5.16xlarge, ml.g5.24xlarge, ml.g5.48xlarge, ml.inf2.xlarge, ml.inf2.8xlarge, ml.inf2.24xlarge, ml.inf2.48xlarge
    supported_realtime_inference_instance_types: ["ml.t2.medium"], # accepts ml.t2.medium, ml.t2.large, ml.t2.xlarge, ml.t2.2xlarge, ml.m4.xlarge, ml.m4.2xlarge, ml.m4.4xlarge, ml.m4.10xlarge, ml.m4.16xlarge, ml.m5.large, ml.m5.xlarge, ml.m5.2xlarge, ml.m5.4xlarge, ml.m5.12xlarge, ml.m5.24xlarge, ml.m5d.large, ml.m5d.xlarge, ml.m5d.2xlarge, ml.m5d.4xlarge, ml.m5d.12xlarge, ml.m5d.24xlarge, ml.c4.large, ml.c4.xlarge, ml.c4.2xlarge, ml.c4.4xlarge, ml.c4.8xlarge, ml.p2.xlarge, ml.p2.8xlarge, ml.p2.16xlarge, ml.p3.2xlarge, ml.p3.8xlarge, ml.p3.16xlarge, ml.c5.large, ml.c5.xlarge, ml.c5.2xlarge, ml.c5.4xlarge, ml.c5.9xlarge, ml.c5.18xlarge, ml.c5d.large, ml.c5d.xlarge, ml.c5d.2xlarge, ml.c5d.4xlarge, ml.c5d.9xlarge, ml.c5d.18xlarge, ml.g4dn.xlarge, ml.g4dn.2xlarge, ml.g4dn.4xlarge, ml.g4dn.8xlarge, ml.g4dn.12xlarge, ml.g4dn.16xlarge, ml.r5.large, ml.r5.xlarge, ml.r5.2xlarge, ml.r5.4xlarge, ml.r5.12xlarge, ml.r5.24xlarge, ml.r5d.large, ml.r5d.xlarge, ml.r5d.2xlarge, ml.r5d.4xlarge, ml.r5d.12xlarge, ml.r5d.24xlarge, ml.inf1.xlarge, ml.inf1.2xlarge, ml.inf1.6xlarge, ml.inf1.24xlarge, ml.dl1.24xlarge, ml.c6i.large, ml.c6i.xlarge, ml.c6i.2xlarge, ml.c6i.4xlarge, ml.c6i.8xlarge, ml.c6i.12xlarge, ml.c6i.16xlarge, ml.c6i.24xlarge, ml.c6i.32xlarge, ml.m6i.large, ml.m6i.xlarge, ml.m6i.2xlarge, ml.m6i.4xlarge, ml.m6i.8xlarge, ml.m6i.12xlarge, ml.m6i.16xlarge, ml.m6i.24xlarge, ml.m6i.32xlarge, ml.r6i.large, ml.r6i.xlarge, ml.r6i.2xlarge, ml.r6i.4xlarge, ml.r6i.8xlarge, ml.r6i.12xlarge, ml.r6i.16xlarge, ml.r6i.24xlarge, ml.r6i.32xlarge, ml.g5.xlarge, ml.g5.2xlarge, ml.g5.4xlarge, ml.g5.8xlarge, ml.g5.12xlarge, ml.g5.16xlarge, ml.g5.24xlarge, ml.g5.48xlarge, ml.g6.xlarge, ml.g6.2xlarge, ml.g6.4xlarge, ml.g6.8xlarge, ml.g6.12xlarge, ml.g6.16xlarge, ml.g6.24xlarge, ml.g6.48xlarge, ml.g6e.xlarge, ml.g6e.2xlarge, ml.g6e.4xlarge, ml.g6e.8xlarge, ml.g6e.12xlarge, ml.g6e.16xlarge, ml.g6e.24xlarge, ml.g6e.48xlarge, ml.p4d.24xlarge, ml.c7g.large, ml.c7g.xlarge, ml.c7g.2xlarge, ml.c7g.4xlarge, ml.c7g.8xlarge, ml.c7g.12xlarge, ml.c7g.16xlarge, ml.m6g.large, ml.m6g.xlarge, ml.m6g.2xlarge, ml.m6g.4xlarge, ml.m6g.8xlarge, ml.m6g.12xlarge, ml.m6g.16xlarge, ml.m6gd.large, ml.m6gd.xlarge, ml.m6gd.2xlarge, ml.m6gd.4xlarge, ml.m6gd.8xlarge, ml.m6gd.12xlarge, ml.m6gd.16xlarge, ml.c6g.large, ml.c6g.xlarge, ml.c6g.2xlarge, ml.c6g.4xlarge, ml.c6g.8xlarge, ml.c6g.12xlarge, ml.c6g.16xlarge, ml.c6gd.large, ml.c6gd.xlarge, ml.c6gd.2xlarge, ml.c6gd.4xlarge, ml.c6gd.8xlarge, ml.c6gd.12xlarge, ml.c6gd.16xlarge, ml.c6gn.large, ml.c6gn.xlarge, ml.c6gn.2xlarge, ml.c6gn.4xlarge, ml.c6gn.8xlarge, ml.c6gn.12xlarge, ml.c6gn.16xlarge, ml.r6g.large, ml.r6g.xlarge, ml.r6g.2xlarge, ml.r6g.4xlarge, ml.r6g.8xlarge, ml.r6g.12xlarge, ml.r6g.16xlarge, ml.r6gd.large, ml.r6gd.xlarge, ml.r6gd.2xlarge, ml.r6gd.4xlarge, ml.r6gd.8xlarge, ml.r6gd.12xlarge, ml.r6gd.16xlarge, ml.p4de.24xlarge, ml.trn1.2xlarge, ml.trn1.32xlarge, ml.trn1n.32xlarge, ml.trn2.48xlarge, ml.inf2.xlarge, ml.inf2.8xlarge, ml.inf2.24xlarge, ml.inf2.48xlarge, ml.p5.48xlarge, ml.p5e.48xlarge, ml.m7i.large, ml.m7i.xlarge, ml.m7i.2xlarge, ml.m7i.4xlarge, ml.m7i.8xlarge, ml.m7i.12xlarge, ml.m7i.16xlarge, ml.m7i.24xlarge, ml.m7i.48xlarge, ml.c7i.large, ml.c7i.xlarge, ml.c7i.2xlarge, ml.c7i.4xlarge, ml.c7i.8xlarge, ml.c7i.12xlarge, ml.c7i.16xlarge, ml.c7i.24xlarge, ml.c7i.48xlarge, ml.r7i.large, ml.r7i.xlarge, ml.r7i.2xlarge, ml.r7i.4xlarge, ml.r7i.8xlarge, ml.r7i.12xlarge, ml.r7i.16xlarge, ml.r7i.24xlarge, ml.r7i.48xlarge
    supported_content_types: ["ContentType"],
    supported_response_mime_types: ["ResponseMIMEType"],
  },
  validation_specification: {
    validation_role: "RoleArn", # required
    validation_profiles: [ # required
      {
        profile_name: "EntityName", # required
        training_job_definition: { # required
          training_input_mode: "Pipe", # required, accepts Pipe, File, FastFile
          hyper_parameters: {
            "HyperParameterKey" => "HyperParameterValue",
          },
          input_data_config: [ # required
            {
              channel_name: "ChannelName", # required
              data_source: { # required
                s3_data_source: {
                  s3_data_type: "ManifestFile", # required, accepts ManifestFile, S3Prefix, AugmentedManifestFile
                  s3_uri: "S3Uri", # required
                  s3_data_distribution_type: "FullyReplicated", # accepts FullyReplicated, ShardedByS3Key
                  attribute_names: ["AttributeName"],
                  instance_group_names: ["InstanceGroupName"],
                },
                file_system_data_source: {
                  file_system_id: "FileSystemId", # required
                  file_system_access_mode: "rw", # required, accepts rw, ro
                  file_system_type: "EFS", # required, accepts EFS, FSxLustre
                  directory_path: "DirectoryPath", # required
                },
              },
              content_type: "ContentType",
              compression_type: "None", # accepts None, Gzip
              record_wrapper_type: "None", # accepts None, RecordIO
              input_mode: "Pipe", # accepts Pipe, File, FastFile
              shuffle_config: {
                seed: 1, # required
              },
            },
          ],
          output_data_config: { # required
            kms_key_id: "KmsKeyId",
            s3_output_path: "S3Uri", # required
            compression_type: "GZIP", # accepts GZIP, NONE
          },
          resource_config: { # required
            instance_type: "ml.m4.xlarge", # accepts ml.m4.xlarge, ml.m4.2xlarge, ml.m4.4xlarge, ml.m4.10xlarge, ml.m4.16xlarge, ml.g4dn.xlarge, ml.g4dn.2xlarge, ml.g4dn.4xlarge, ml.g4dn.8xlarge, ml.g4dn.12xlarge, ml.g4dn.16xlarge, ml.m5.large, ml.m5.xlarge, ml.m5.2xlarge, ml.m5.4xlarge, ml.m5.12xlarge, ml.m5.24xlarge, ml.c4.xlarge, ml.c4.2xlarge, ml.c4.4xlarge, ml.c4.8xlarge, ml.p2.xlarge, ml.p2.8xlarge, ml.p2.16xlarge, ml.p3.2xlarge, ml.p3.8xlarge, ml.p3.16xlarge, ml.p3dn.24xlarge, ml.p4d.24xlarge, ml.p4de.24xlarge, ml.p5.48xlarge, ml.p5e.48xlarge, ml.p5en.48xlarge, ml.c5.xlarge, ml.c5.2xlarge, ml.c5.4xlarge, ml.c5.9xlarge, ml.c5.18xlarge, ml.c5n.xlarge, ml.c5n.2xlarge, ml.c5n.4xlarge, ml.c5n.9xlarge, ml.c5n.18xlarge, ml.g5.xlarge, ml.g5.2xlarge, ml.g5.4xlarge, ml.g5.8xlarge, ml.g5.16xlarge, ml.g5.12xlarge, ml.g5.24xlarge, ml.g5.48xlarge, ml.g6.xlarge, ml.g6.2xlarge, ml.g6.4xlarge, ml.g6.8xlarge, ml.g6.16xlarge, ml.g6.12xlarge, ml.g6.24xlarge, ml.g6.48xlarge, ml.g6e.xlarge, ml.g6e.2xlarge, ml.g6e.4xlarge, ml.g6e.8xlarge, ml.g6e.16xlarge, ml.g6e.12xlarge, ml.g6e.24xlarge, ml.g6e.48xlarge, ml.trn1.2xlarge, ml.trn1.32xlarge, ml.trn1n.32xlarge, ml.trn2.48xlarge, ml.m6i.large, ml.m6i.xlarge, ml.m6i.2xlarge, ml.m6i.4xlarge, ml.m6i.8xlarge, ml.m6i.12xlarge, ml.m6i.16xlarge, ml.m6i.24xlarge, ml.m6i.32xlarge, ml.c6i.xlarge, ml.c6i.2xlarge, ml.c6i.8xlarge, ml.c6i.4xlarge, ml.c6i.12xlarge, ml.c6i.16xlarge, ml.c6i.24xlarge, ml.c6i.32xlarge, ml.r5d.large, ml.r5d.xlarge, ml.r5d.2xlarge, ml.r5d.4xlarge, ml.r5d.8xlarge, ml.r5d.12xlarge, ml.r5d.16xlarge, ml.r5d.24xlarge, ml.t3.medium, ml.t3.large, ml.t3.xlarge, ml.t3.2xlarge, ml.r5.large, ml.r5.xlarge, ml.r5.2xlarge, ml.r5.4xlarge, ml.r5.8xlarge, ml.r5.12xlarge, ml.r5.16xlarge, ml.r5.24xlarge
            instance_count: 1,
            volume_size_in_gb: 1, # required
            volume_kms_key_id: "KmsKeyId",
            keep_alive_period_in_seconds: 1,
            instance_groups: [
              {
                instance_type: "ml.m4.xlarge", # required, accepts ml.m4.xlarge, ml.m4.2xlarge, ml.m4.4xlarge, ml.m4.10xlarge, ml.m4.16xlarge, ml.g4dn.xlarge, ml.g4dn.2xlarge, ml.g4dn.4xlarge, ml.g4dn.8xlarge, ml.g4dn.12xlarge, ml.g4dn.16xlarge, ml.m5.large, ml.m5.xlarge, ml.m5.2xlarge, ml.m5.4xlarge, ml.m5.12xlarge, ml.m5.24xlarge, ml.c4.xlarge, ml.c4.2xlarge, ml.c4.4xlarge, ml.c4.8xlarge, ml.p2.xlarge, ml.p2.8xlarge, ml.p2.16xlarge, ml.p3.2xlarge, ml.p3.8xlarge, ml.p3.16xlarge, ml.p3dn.24xlarge, ml.p4d.24xlarge, ml.p4de.24xlarge, ml.p5.48xlarge, ml.p5e.48xlarge, ml.p5en.48xlarge, ml.c5.xlarge, ml.c5.2xlarge, ml.c5.4xlarge, ml.c5.9xlarge, ml.c5.18xlarge, ml.c5n.xlarge, ml.c5n.2xlarge, ml.c5n.4xlarge, ml.c5n.9xlarge, ml.c5n.18xlarge, ml.g5.xlarge, ml.g5.2xlarge, ml.g5.4xlarge, ml.g5.8xlarge, ml.g5.16xlarge, ml.g5.12xlarge, ml.g5.24xlarge, ml.g5.48xlarge, ml.g6.xlarge, ml.g6.2xlarge, ml.g6.4xlarge, ml.g6.8xlarge, ml.g6.16xlarge, ml.g6.12xlarge, ml.g6.24xlarge, ml.g6.48xlarge, ml.g6e.xlarge, ml.g6e.2xlarge, ml.g6e.4xlarge, ml.g6e.8xlarge, ml.g6e.16xlarge, ml.g6e.12xlarge, ml.g6e.24xlarge, ml.g6e.48xlarge, ml.trn1.2xlarge, ml.trn1.32xlarge, ml.trn1n.32xlarge, ml.trn2.48xlarge, ml.m6i.large, ml.m6i.xlarge, ml.m6i.2xlarge, ml.m6i.4xlarge, ml.m6i.8xlarge, ml.m6i.12xlarge, ml.m6i.16xlarge, ml.m6i.24xlarge, ml.m6i.32xlarge, ml.c6i.xlarge, ml.c6i.2xlarge, ml.c6i.8xlarge, ml.c6i.4xlarge, ml.c6i.12xlarge, ml.c6i.16xlarge, ml.c6i.24xlarge, ml.c6i.32xlarge, ml.r5d.large, ml.r5d.xlarge, ml.r5d.2xlarge, ml.r5d.4xlarge, ml.r5d.8xlarge, ml.r5d.12xlarge, ml.r5d.16xlarge, ml.r5d.24xlarge, ml.t3.medium, ml.t3.large, ml.t3.xlarge, ml.t3.2xlarge, ml.r5.large, ml.r5.xlarge, ml.r5.2xlarge, ml.r5.4xlarge, ml.r5.8xlarge, ml.r5.12xlarge, ml.r5.16xlarge, ml.r5.24xlarge
                instance_count: 1, # required
                instance_group_name: "InstanceGroupName", # required
              },
            ],
            training_plan_arn: "TrainingPlanArn",
          },
          stopping_condition: { # required
            max_runtime_in_seconds: 1,
            max_wait_time_in_seconds: 1,
            max_pending_time_in_seconds: 1,
          },
        },
        transform_job_definition: {
          max_concurrent_transforms: 1,
          max_payload_in_mb: 1,
          batch_strategy: "MultiRecord", # accepts MultiRecord, SingleRecord
          environment: {
            "TransformEnvironmentKey" => "TransformEnvironmentValue",
          },
          transform_input: { # required
            data_source: { # required
              s3_data_source: { # required
                s3_data_type: "ManifestFile", # required, accepts ManifestFile, S3Prefix, AugmentedManifestFile
                s3_uri: "S3Uri", # required
              },
            },
            content_type: "ContentType",
            compression_type: "None", # accepts None, Gzip
            split_type: "None", # accepts None, Line, RecordIO, TFRecord
          },
          transform_output: { # required
            s3_output_path: "S3Uri", # required
            accept: "Accept",
            assemble_with: "None", # accepts None, Line
            kms_key_id: "KmsKeyId",
          },
          transform_resources: { # required
            instance_type: "ml.m4.xlarge", # required, accepts ml.m4.xlarge, ml.m4.2xlarge, ml.m4.4xlarge, ml.m4.10xlarge, ml.m4.16xlarge, ml.c4.xlarge, ml.c4.2xlarge, ml.c4.4xlarge, ml.c4.8xlarge, ml.p2.xlarge, ml.p2.8xlarge, ml.p2.16xlarge, ml.p3.2xlarge, ml.p3.8xlarge, ml.p3.16xlarge, ml.c5.xlarge, ml.c5.2xlarge, ml.c5.4xlarge, ml.c5.9xlarge, ml.c5.18xlarge, ml.m5.large, ml.m5.xlarge, ml.m5.2xlarge, ml.m5.4xlarge, ml.m5.12xlarge, ml.m5.24xlarge, ml.m6i.large, ml.m6i.xlarge, ml.m6i.2xlarge, ml.m6i.4xlarge, ml.m6i.8xlarge, ml.m6i.12xlarge, ml.m6i.16xlarge, ml.m6i.24xlarge, ml.m6i.32xlarge, ml.c6i.large, ml.c6i.xlarge, ml.c6i.2xlarge, ml.c6i.4xlarge, ml.c6i.8xlarge, ml.c6i.12xlarge, ml.c6i.16xlarge, ml.c6i.24xlarge, ml.c6i.32xlarge, ml.r6i.large, ml.r6i.xlarge, ml.r6i.2xlarge, ml.r6i.4xlarge, ml.r6i.8xlarge, ml.r6i.12xlarge, ml.r6i.16xlarge, ml.r6i.24xlarge, ml.r6i.32xlarge, ml.m7i.large, ml.m7i.xlarge, ml.m7i.2xlarge, ml.m7i.4xlarge, ml.m7i.8xlarge, ml.m7i.12xlarge, ml.m7i.16xlarge, ml.m7i.24xlarge, ml.m7i.48xlarge, ml.c7i.large, ml.c7i.xlarge, ml.c7i.2xlarge, ml.c7i.4xlarge, ml.c7i.8xlarge, ml.c7i.12xlarge, ml.c7i.16xlarge, ml.c7i.24xlarge, ml.c7i.48xlarge, ml.r7i.large, ml.r7i.xlarge, ml.r7i.2xlarge, ml.r7i.4xlarge, ml.r7i.8xlarge, ml.r7i.12xlarge, ml.r7i.16xlarge, ml.r7i.24xlarge, ml.r7i.48xlarge, ml.g4dn.xlarge, ml.g4dn.2xlarge, ml.g4dn.4xlarge, ml.g4dn.8xlarge, ml.g4dn.12xlarge, ml.g4dn.16xlarge, ml.g5.xlarge, ml.g5.2xlarge, ml.g5.4xlarge, ml.g5.8xlarge, ml.g5.12xlarge, ml.g5.16xlarge, ml.g5.24xlarge, ml.g5.48xlarge, ml.inf2.xlarge, ml.inf2.8xlarge, ml.inf2.24xlarge, ml.inf2.48xlarge
            instance_count: 1, # required
            volume_kms_key_id: "KmsKeyId",
          },
        },
      },
    ],
  },
  certify_for_marketplace: false,
  tags: [
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
})

Response structure


resp.algorithm_arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :algorithm_name (required, String)

    The name of the algorithm.

  • :algorithm_description (String)

    A description of the algorithm.

  • :training_specification (required, Types::TrainingSpecification)

    Specifies details about training jobs run by this algorithm, including the following:

    • The Amazon ECR path of the container and the version digest of the algorithm.

    • The hyperparameters that the algorithm supports.

    • The instance types that the algorithm supports for training.

    • Whether the algorithm supports distributed training.

    • The metrics that the algorithm emits to Amazon CloudWatch.

    • Which of the metrics that the algorithm emits can be used as the objective metric for hyperparameter tuning jobs.

    • The input channels that the algorithm supports for training data. For example, an algorithm might support train, validation, and test channels.

  • :inference_specification (Types::InferenceSpecification)

    Specifies details about inference jobs that the algorithm runs, including the following:

    • The Amazon ECR paths of containers that contain the inference code and model artifacts.

    • The instance types that the algorithm supports for transform jobs and real-time endpoints used for inference.

    • The input and output content formats that the algorithm supports for inference.

  • :validation_specification (Types::AlgorithmValidationSpecification)

    Specifies configurations for one or more training jobs that SageMaker runs to test the algorithm's training code and, optionally, one or more batch transform jobs that SageMaker runs to test the algorithm's inference code.

  • :certify_for_marketplace (Boolean)

    Whether to certify the algorithm so that it can be listed in Amazon Web Services Marketplace.

  • :tags (Array<Types::Tag>)

    An array of key-value pairs. You can use tags to categorize your Amazon Web Services resources in different ways, for example, by purpose, owner, or environment. For more information, see Tagging Amazon Web Services Resources.

Returns:

See Also:



# File 'gems/aws-sdk-sagemaker/lib/aws-sdk-sagemaker/client.rb', line 1138

def create_algorithm(params = {}, options = {})
  req = build_request(:create_algorithm, params)
  req.send_request(options)
end

#create_app(params = {}) ⇒ Types::CreateAppResponse

Creates a running app for the specified UserProfile. This operation is automatically invoked by Amazon SageMaker upon access to the associated Domain, and when new kernel configurations are selected by the user. A user may have multiple Apps active simultaneously.

Examples:

Request syntax with placeholder values


resp = client.create_app({
  domain_id: "DomainId", # required
  user_profile_name: "UserProfileName",
  space_name: "SpaceName",
  app_type: "JupyterServer", # required, accepts JupyterServer, KernelGateway, DetailedProfiler, TensorBoard, CodeEditor, JupyterLab, RStudioServerPro, RSessionGateway, Canvas
  app_name: "AppName", # required
  tags: [
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
  resource_spec: {
    sage_maker_image_arn: "ImageArn",
    sage_maker_image_version_arn: "ImageVersionArn",
    sage_maker_image_version_alias: "ImageVersionAlias",
    instance_type: "system", # accepts system, ml.t3.micro, ml.t3.small, ml.t3.medium, ml.t3.large, ml.t3.xlarge, ml.t3.2xlarge, ml.m5.large, ml.m5.xlarge, ml.m5.2xlarge, ml.m5.4xlarge, ml.m5.8xlarge, ml.m5.12xlarge, ml.m5.16xlarge, ml.m5.24xlarge, ml.m5d.large, ml.m5d.xlarge, ml.m5d.2xlarge, ml.m5d.4xlarge, ml.m5d.8xlarge, ml.m5d.12xlarge, ml.m5d.16xlarge, ml.m5d.24xlarge, ml.c5.large, ml.c5.xlarge, ml.c5.2xlarge, ml.c5.4xlarge, ml.c5.9xlarge, ml.c5.12xlarge, ml.c5.18xlarge, ml.c5.24xlarge, ml.p3.2xlarge, ml.p3.8xlarge, ml.p3.16xlarge, ml.p3dn.24xlarge, ml.g4dn.xlarge, ml.g4dn.2xlarge, ml.g4dn.4xlarge, ml.g4dn.8xlarge, ml.g4dn.12xlarge, ml.g4dn.16xlarge, ml.r5.large, ml.r5.xlarge, ml.r5.2xlarge, ml.r5.4xlarge, ml.r5.8xlarge, ml.r5.12xlarge, ml.r5.16xlarge, ml.r5.24xlarge, ml.g5.xlarge, ml.g5.2xlarge, ml.g5.4xlarge, ml.g5.8xlarge, ml.g5.16xlarge, ml.g5.12xlarge, ml.g5.24xlarge, ml.g5.48xlarge, ml.g6.xlarge, ml.g6.2xlarge, ml.g6.4xlarge, ml.g6.8xlarge, ml.g6.12xlarge, ml.g6.16xlarge, ml.g6.24xlarge, ml.g6.48xlarge, ml.g6e.xlarge, ml.g6e.2xlarge, ml.g6e.4xlarge, ml.g6e.8xlarge, ml.g6e.12xlarge, ml.g6e.16xlarge, ml.g6e.24xlarge, ml.g6e.48xlarge, ml.geospatial.interactive, ml.p4d.24xlarge, ml.p4de.24xlarge, ml.trn1.2xlarge, ml.trn1.32xlarge, ml.trn1n.32xlarge, ml.p5.48xlarge, ml.m6i.large, ml.m6i.xlarge, ml.m6i.2xlarge, ml.m6i.4xlarge, ml.m6i.8xlarge, ml.m6i.12xlarge, ml.m6i.16xlarge, ml.m6i.24xlarge, ml.m6i.32xlarge, ml.m7i.large, ml.m7i.xlarge, ml.m7i.2xlarge, ml.m7i.4xlarge, ml.m7i.8xlarge, ml.m7i.12xlarge, ml.m7i.16xlarge, ml.m7i.24xlarge, ml.m7i.48xlarge, ml.c6i.large, ml.c6i.xlarge, ml.c6i.2xlarge, ml.c6i.4xlarge, ml.c6i.8xlarge, ml.c6i.12xlarge, ml.c6i.16xlarge, ml.c6i.24xlarge, ml.c6i.32xlarge, ml.c7i.large, ml.c7i.xlarge, ml.c7i.2xlarge, ml.c7i.4xlarge, ml.c7i.8xlarge, ml.c7i.12xlarge, ml.c7i.16xlarge, ml.c7i.24xlarge, ml.c7i.48xlarge, ml.r6i.large, ml.r6i.xlarge, ml.r6i.2xlarge, ml.r6i.4xlarge, ml.r6i.8xlarge, ml.r6i.12xlarge, ml.r6i.16xlarge, ml.r6i.24xlarge, ml.r6i.32xlarge, ml.r7i.large, ml.r7i.xlarge, ml.r7i.2xlarge, ml.r7i.4xlarge, ml.r7i.8xlarge, ml.r7i.12xlarge, ml.r7i.16xlarge, ml.r7i.24xlarge, ml.r7i.48xlarge, ml.m6id.large, ml.m6id.xlarge, ml.m6id.2xlarge, ml.m6id.4xlarge, ml.m6id.8xlarge, ml.m6id.12xlarge, ml.m6id.16xlarge, ml.m6id.24xlarge, ml.m6id.32xlarge, ml.c6id.large, ml.c6id.xlarge, ml.c6id.2xlarge, ml.c6id.4xlarge, ml.c6id.8xlarge, ml.c6id.12xlarge, ml.c6id.16xlarge, ml.c6id.24xlarge, ml.c6id.32xlarge, ml.r6id.large, ml.r6id.xlarge, ml.r6id.2xlarge, ml.r6id.4xlarge, ml.r6id.8xlarge, ml.r6id.12xlarge, ml.r6id.16xlarge, ml.r6id.24xlarge, ml.r6id.32xlarge
    lifecycle_config_arn: "StudioLifecycleConfigArn",
  },
})

Response structure


resp.app_arn #=> String
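
For reference, a minimal call sketch using only the required parameters plus a user profile; the domain ID, profile, and app name below are hypothetical placeholders, not values from your account:


resp = client.create_app({
  domain_id: "d-exampledomain1",       # hypothetical domain ID
  user_profile_name: "default-user",   # either this or space_name must be set
  app_type: "JupyterServer",
  app_name: "default",
})
resp.app_arn #=> String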

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :domain_id (required, String)

    The domain ID.

  • :user_profile_name (String)

    The user profile name. If this value is not set, then SpaceName must be set.

  • :space_name (String)

    The name of the space. If this value is not set, then UserProfileName must be set.

  • :app_type (required, String)

    The type of app.

  • :app_name (required, String)

    The name of the app.

  • :tags (Array<Types::Tag>)

    Each tag consists of a key and an optional value. Tag keys must be unique per resource.

  • :resource_spec (Types::ResourceSpec)

    The instance type and the Amazon Resource Name (ARN) of the SageMaker image created on the instance.

    The value of InstanceType passed as part of the ResourceSpec in the CreateApp call overrides the value passed as part of the ResourceSpec configured for the user profile or the domain. If InstanceType is not specified in any of those three ResourceSpec values for a KernelGateway app, the CreateApp call fails with a request validation error.

Returns:

See Also:



# File 'gems/aws-sdk-sagemaker/lib/aws-sdk-sagemaker/client.rb', line 1217

def create_app(params = {}, options = {})
  req = build_request(:create_app, params)
  req.send_request(options)
end

#create_app_image_config(params = {}) ⇒ Types::CreateAppImageConfigResponse

Creates a configuration for running a SageMaker image as a KernelGateway app. The configuration specifies the Amazon Elastic File System storage volume on the image, and a list of the kernels in the image.

Examples:

Request syntax with placeholder values


resp = client.create_app_image_config({
  app_image_config_name: "AppImageConfigName", # required
  tags: [
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
  kernel_gateway_image_config: {
    kernel_specs: [ # required
      {
        name: "KernelName", # required
        display_name: "KernelDisplayName",
      },
    ],
    file_system_config: {
      mount_path: "MountPath",
      default_uid: 1,
      default_gid: 1,
    },
  },
  jupyter_lab_app_image_config: {
    file_system_config: {
      mount_path: "MountPath",
      default_uid: 1,
      default_gid: 1,
    },
    container_config: {
      container_arguments: ["NonEmptyString64"],
      container_entrypoint: ["NonEmptyString256"],
      container_environment_variables: {
        "NonEmptyString256" => "String256",
      },
    },
  },
  code_editor_app_image_config: {
    file_system_config: {
      mount_path: "MountPath",
      default_uid: 1,
      default_gid: 1,
    },
    container_config: {
      container_arguments: ["NonEmptyString64"],
      container_entrypoint: ["NonEmptyString256"],
      container_environment_variables: {
        "NonEmptyString256" => "String256",
      },
    },
  },
})

Response structure


resp.app_image_config_arn #=> String
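
As a minimal sketch, a KernelGateway image configuration can be created with just a name and a kernel spec; the config and kernel names below are hypothetical:


resp = client.create_app_image_config({
  app_image_config_name: "my-custom-image-config", # hypothetical name
  kernel_gateway_image_config: {
    kernel_specs: [
      { name: "python3", display_name: "Python 3" },
    ],
  },
})
resp.app_image_config_arn #=> String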

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :app_image_config_name (required, String)

    The name of the AppImageConfig. Must be unique to your account.

  • :tags (Array<Types::Tag>)

    A list of tags to apply to the AppImageConfig.

  • :kernel_gateway_image_config (Types::KernelGatewayImageConfig)

    The KernelGatewayImageConfig. You can only specify one image kernel in the AppImageConfig API. This kernel is shown to users before the image starts. After the image runs, all kernels are visible in JupyterLab.

  • :jupyter_lab_app_image_config (Types::JupyterLabAppImageConfig)

    The JupyterLabAppImageConfig. You can only specify one image kernel in the AppImageConfig API. This kernel is shown to users before the image starts. After the image runs, all kernels are visible in JupyterLab.

  • :code_editor_app_image_config (Types::CodeEditorAppImageConfig)

    The CodeEditorAppImageConfig. You can only specify one image kernel in the AppImageConfig API. This kernel is shown to users before the image starts. After the image runs, all kernels are visible in Code Editor.

Returns:

See Also:



# File 'gems/aws-sdk-sagemaker/lib/aws-sdk-sagemaker/client.rb', line 1316

def create_app_image_config(params = {}, options = {})
  req = build_request(:create_app_image_config, params)
  req.send_request(options)
end

#create_artifact(params = {}) ⇒ Types::CreateArtifactResponse

Creates an artifact. An artifact is a lineage tracking entity that represents a URI addressable object or data. Some examples are the S3 URI of a dataset and the ECR registry path of an image. For more information, see Amazon SageMaker ML Lineage Tracking.

Examples:

Request syntax with placeholder values


resp = client.create_artifact({
  artifact_name: "ExperimentEntityName",
  source: { # required
    source_uri: "SourceUri", # required
    source_types: [
      {
        source_id_type: "MD5Hash", # required, accepts MD5Hash, S3ETag, S3Version, Custom
        value: "String256", # required
      },
    ],
  },
  artifact_type: "String256", # required
  properties: {
    "StringParameterValue" => "ArtifactPropertyValue",
  },
  metadata_properties: {
    commit_id: "MetadataPropertyValue",
    repository: "MetadataPropertyValue",
    generated_by: "MetadataPropertyValue",
    project_id: "MetadataPropertyValue",
  },
  tags: [
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
})

Response structure


resp.artifact_arn #=> String
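
A minimal call only needs a source URI and an artifact type; the bucket, file name, and artifact type below are hypothetical placeholders:


resp = client.create_artifact({
  artifact_name: "example-dataset-artifact",                          # hypothetical name
  source: { source_uri: "s3://amzn-s3-demo-bucket/data/train.csv" },  # hypothetical S3 URI
  artifact_type: "DataSet",
})
resp.artifact_arn #=> String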

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :artifact_name (String)

    The name of the artifact. Must be unique to your account in an Amazon Web Services Region.

  • :source (required, Types::ArtifactSource)

    The ID, ID type, and URI of the source.

  • :artifact_type (required, String)

    The artifact type.

  • :properties (Hash<String,String>)

    A list of properties to add to the artifact.

  • :metadata_properties (Types::MetadataProperties)

    Metadata properties of the tracking entity, trial, or trial component.

  • :tags (Array<Types::Tag>)

    A list of tags to apply to the artifact.

Returns:

See Also:



# File 'gems/aws-sdk-sagemaker/lib/aws-sdk-sagemaker/client.rb', line 1392

def create_artifact(params = {}, options = {})
  req = build_request(:create_artifact, params)
  req.send_request(options)
end

#create_auto_ml_job(params = {}) ⇒ Types::CreateAutoMLJobResponse

Creates an Autopilot job, also referred to as an Autopilot experiment or AutoML job.

An AutoML job in SageMaker is a fully automated process that allows you to build machine learning models with minimal effort and machine learning expertise. When initiating an AutoML job, you provide your data and optionally specify parameters tailored to your use case. SageMaker then automates the entire model development lifecycle, including data preprocessing, model training, tuning, and evaluation. AutoML jobs are designed to simplify and accelerate the model building process by automating various tasks and exploring different combinations of machine learning algorithms, data preprocessing techniques, and hyperparameter values. The output of an AutoML job comprises one or more trained models ready for deployment and inference. Additionally, SageMaker AutoML jobs generate a candidate model leaderboard, allowing you to select the best-performing model for deployment.

For more information about AutoML jobs, see https://docs.aws.amazon.com/sagemaker/latest/dg/autopilot-automate-model-development.html in the SageMaker developer guide.

We recommend using the new versions CreateAutoMLJobV2 and DescribeAutoMLJobV2, which offer backward compatibility.

CreateAutoMLJobV2 can manage tabular problem types identical to those of its previous version CreateAutoMLJob, as well as time-series forecasting, non-tabular problem types such as image or text classification, and text generation (LLMs fine-tuning).

Find guidelines about how to migrate a CreateAutoMLJob to CreateAutoMLJobV2 in Migrate a CreateAutoMLJob to CreateAutoMLJobV2.

You can find the best-performing model after you run an AutoML job by calling DescribeAutoMLJobV2 (recommended) or DescribeAutoMLJob.

Examples:

Request syntax with placeholder values


resp = client.create_auto_ml_job({
  auto_ml_job_name: "AutoMLJobName", # required
  input_data_config: [ # required
    {
      data_source: {
        s3_data_source: { # required
          s3_data_type: "ManifestFile", # required, accepts ManifestFile, S3Prefix, AugmentedManifestFile
          s3_uri: "S3Uri", # required
        },
      },
      compression_type: "None", # accepts None, Gzip
      target_attribute_name: "TargetAttributeName", # required
      content_type: "ContentType",
      channel_type: "training", # accepts training, validation
      sample_weight_attribute_name: "SampleWeightAttributeName",
    },
  ],
  output_data_config: { # required
    kms_key_id: "KmsKeyId",
    s3_output_path: "S3Uri", # required
  },
  problem_type: "BinaryClassification", # accepts BinaryClassification, MulticlassClassification, Regression
  auto_ml_job_objective: {
    metric_name: "Accuracy", # required, accepts Accuracy, MSE, F1, F1macro, AUC, RMSE, BalancedAccuracy, R2, Recall, RecallMacro, Precision, PrecisionMacro, MAE, MAPE, MASE, WAPE, AverageWeightedQuantileLoss
  },
  auto_ml_job_config: {
    completion_criteria: {
      max_candidates: 1,
      max_runtime_per_training_job_in_seconds: 1,
      max_auto_ml_job_runtime_in_seconds: 1,
    },
    security_config: {
      volume_kms_key_id: "KmsKeyId",
      enable_inter_container_traffic_encryption: false,
      vpc_config: {
        security_group_ids: ["SecurityGroupId"], # required
        subnets: ["SubnetId"], # required
      },
    },
    candidate_generation_config: {
      feature_specification_s3_uri: "S3Uri",
      algorithms_config: [
        {
          auto_ml_algorithms: ["xgboost"], # required, accepts xgboost, linear-learner, mlp, lightgbm, catboost, randomforest, extra-trees, nn-torch, fastai, cnn-qr, deepar, prophet, npts, arima, ets
        },
      ],
    },
    data_split_config: {
      validation_fraction: 1.0,
    },
    mode: "AUTO", # accepts AUTO, ENSEMBLING, HYPERPARAMETER_TUNING
  },
  role_arn: "RoleArn", # required
  generate_candidate_definitions_only: false,
  tags: [
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
  model_deploy_config: {
    auto_generate_endpoint_name: false,
    endpoint_name: "EndpointName",
  },
})

Response structure


resp.auto_ml_job_arn #=> String
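
As a sketch, a minimal tabular AutoML job request could look like the following; the job name, bucket paths, target column, and role ARN are hypothetical placeholders:


resp = client.create_auto_ml_job({
  auto_ml_job_name: "my-automl-job",
  input_data_config: [
    {
      data_source: {
        s3_data_source: { s3_data_type: "S3Prefix", s3_uri: "s3://amzn-s3-demo-bucket/train/" },
      },
      target_attribute_name: "label", # hypothetical target column
    },
  ],
  output_data_config: { s3_output_path: "s3://amzn-s3-demo-bucket/output/" },
  role_arn: "arn:aws:iam::123456789012:role/AutoMLExecutionRole", # hypothetical role
})
resp.auto_ml_job_arn #=> String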

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :auto_ml_job_name (required, String)

    Identifies an Autopilot job. The name must be unique to your account and is case insensitive.

  • :input_data_config (required, Array<Types::AutoMLChannel>)

    An array of channel objects that describe the input data and its location. Each channel is a named input source. Similar to InputDataConfig supported by HyperParameterTrainingJobDefinition. Format(s) supported: CSV, Parquet. A minimum of 500 rows is required for the training dataset. There is no minimum number of rows required for the validation dataset.

  • :output_data_config (required, Types::AutoMLOutputDataConfig)

    Provides information about encryption and the Amazon S3 output path needed to store artifacts from an AutoML job. Format(s) supported: CSV.

  • :problem_type (String)

    Defines the type of supervised learning problem available for the candidates. For more information, see SageMaker Autopilot problem types.

  • :auto_ml_job_objective (Types::AutoMLJobObjective)

    Specifies a metric to minimize or maximize as the objective of a job. If not specified, the default objective metric depends on the problem type. See AutoMLJobObjective for the default values.

  • :auto_ml_job_config (Types::AutoMLJobConfig)

    A collection of settings used to configure an AutoML job.

  • :role_arn (required, String)

    The ARN of the role that is used to access the data.

  • :generate_candidate_definitions_only (Boolean)

    Generates possible candidates without training the models. A candidate is a combination of data preprocessors, algorithms, and algorithm parameter settings.

  • :tags (Array<Types::Tag>)

    An array of key-value pairs. You can use tags to categorize your Amazon Web Services resources in different ways, for example, by purpose, owner, or environment. For more information, see Tagging Amazon Web Services Resources. Tag keys must be unique per resource.

  • :model_deploy_config (Types::ModelDeployConfig)

    Specifies how to generate the endpoint name for an automatic one-click Autopilot model deployment.

Returns:

See Also:



# File 'gems/aws-sdk-sagemaker/lib/aws-sdk-sagemaker/client.rb', line 1591

def create_auto_ml_job(params = {}, options = {})
  req = build_request(:create_auto_ml_job, params)
  req.send_request(options)
end

#create_auto_ml_job_v2(params = {}) ⇒ Types::CreateAutoMLJobV2Response

Creates an Autopilot job, also referred to as an Autopilot experiment or AutoML job V2.

An AutoML job in SageMaker is a fully automated process that allows you to build machine learning models with minimal effort and machine learning expertise. When initiating an AutoML job, you provide your data and optionally specify parameters tailored to your use case. SageMaker then automates the entire model development lifecycle, including data preprocessing, model training, tuning, and evaluation. AutoML jobs are designed to simplify and accelerate the model building process by automating various tasks and exploring different combinations of machine learning algorithms, data preprocessing techniques, and hyperparameter values. The output of an AutoML job comprises one or more trained models ready for deployment and inference. Additionally, SageMaker AutoML jobs generate a candidate model leaderboard, allowing you to select the best-performing model for deployment.

For more information about AutoML jobs, see https://docs.aws.amazon.com/sagemaker/latest/dg/autopilot-automate-model-development.html in the SageMaker developer guide.

AutoML jobs V2 support various problem types such as regression, binary, and multiclass classification with tabular data, text and image classification, time-series forecasting, and fine-tuning of large language models (LLMs) for text generation.

CreateAutoMLJobV2 and DescribeAutoMLJob V2 are new versions of CreateAutoMLJob and DescribeAutoMLJob, which offer backward compatibility.

CreateAutoMLJobV2 can manage tabular problem types identical to those of its previous version CreateAutoMLJob, as well as time-series forecasting, non-tabular problem types such as image or text classification, and text generation (LLMs fine-tuning).

Find guidelines about how to migrate a CreateAutoMLJob to CreateAutoMLJobV2 in Migrate a CreateAutoMLJob to CreateAutoMLJobV2.

For the list of available problem types supported by CreateAutoMLJobV2, see AutoMLProblemTypeConfig.

You can find the best-performing model after you run an AutoML job V2 by calling DescribeAutoMLJobV2.

Examples:

Request syntax with placeholder values


resp = client.create_auto_ml_job_v2({
  auto_ml_job_name: "AutoMLJobName", # required
  auto_ml_job_input_data_config: [ # required
    {
      channel_type: "training", # accepts training, validation
      content_type: "ContentType",
      compression_type: "None", # accepts None, Gzip
      data_source: {
        s3_data_source: { # required
          s3_data_type: "ManifestFile", # required, accepts ManifestFile, S3Prefix, AugmentedManifestFile
          s3_uri: "S3Uri", # required
        },
      },
    },
  ],
  output_data_config: { # required
    kms_key_id: "KmsKeyId",
    s3_output_path: "S3Uri", # required
  },
  auto_ml_problem_type_config: { # required
    image_classification_job_config: {
      completion_criteria: {
        max_candidates: 1,
        max_runtime_per_training_job_in_seconds: 1,
        max_auto_ml_job_runtime_in_seconds: 1,
      },
    },
    text_classification_job_config: {
      completion_criteria: {
        max_candidates: 1,
        max_runtime_per_training_job_in_seconds: 1,
        max_auto_ml_job_runtime_in_seconds: 1,
      },
      content_column: "ContentColumn", # required
      target_label_column: "TargetLabelColumn", # required
    },
    time_series_forecasting_job_config: {
      feature_specification_s3_uri: "S3Uri",
      completion_criteria: {
        max_candidates: 1,
        max_runtime_per_training_job_in_seconds: 1,
        max_auto_ml_job_runtime_in_seconds: 1,
      },
      forecast_frequency: "ForecastFrequency", # required
      forecast_horizon: 1, # required
      forecast_quantiles: ["ForecastQuantile"],
      transformations: {
        filling: {
          "TransformationAttributeName" => {
            "frontfill" => "FillingTransformationValue",
          },
        },
        aggregation: {
          "TransformationAttributeName" => "sum", # accepts sum, avg, first, min, max
        },
      },
      time_series_config: { # required
        target_attribute_name: "TargetAttributeName", # required
        timestamp_attribute_name: "TimestampAttributeName", # required
        item_identifier_attribute_name: "ItemIdentifierAttributeName", # required
        grouping_attribute_names: ["GroupingAttributeName"],
      },
      holiday_config: [
        {
          country_code: "CountryCode",
        },
      ],
      candidate_generation_config: {
        algorithms_config: [
          {
            auto_ml_algorithms: ["xgboost"], # required, accepts xgboost, linear-learner, mlp, lightgbm, catboost, randomforest, extra-trees, nn-torch, fastai, cnn-qr, deepar, prophet, npts, arima, ets
          },
        ],
      },
    },
    tabular_job_config: {
      candidate_generation_config: {
        algorithms_config: [
          {
            auto_ml_algorithms: ["xgboost"], # required, accepts xgboost, linear-learner, mlp, lightgbm, catboost, randomforest, extra-trees, nn-torch, fastai, cnn-qr, deepar, prophet, npts, arima, ets
          },
        ],
      },
      completion_criteria: {
        max_candidates: 1,
        max_runtime_per_training_job_in_seconds: 1,
        max_auto_ml_job_runtime_in_seconds: 1,
      },
      feature_specification_s3_uri: "S3Uri",
      mode: "AUTO", # accepts AUTO, ENSEMBLING, HYPERPARAMETER_TUNING
      generate_candidate_definitions_only: false,
      problem_type: "BinaryClassification", # accepts BinaryClassification, MulticlassClassification, Regression
      target_attribute_name: "TargetAttributeName", # required
      sample_weight_attribute_name: "SampleWeightAttributeName",
    },
    text_generation_job_config: {
      completion_criteria: {
        max_candidates: 1,
        max_runtime_per_training_job_in_seconds: 1,
        max_auto_ml_job_runtime_in_seconds: 1,
      },
      base_model_name: "BaseModelName",
      text_generation_hyper_parameters: {
        "TextGenerationHyperParameterKey" => "TextGenerationHyperParameterValue",
      },
      model_access_config: {
        accept_eula: false, # required
      },
    },
  },
  role_arn: "RoleArn", # required
  tags: [
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
  security_config: {
    volume_kms_key_id: "KmsKeyId",
    enable_inter_container_traffic_encryption: false,
    vpc_config: {
      security_group_ids: ["SecurityGroupId"], # required
      subnets: ["SubnetId"], # required
    },
  },
  auto_ml_job_objective: {
    metric_name: "Accuracy", # required, accepts Accuracy, MSE, F1, F1macro, AUC, RMSE, BalancedAccuracy, R2, Recall, RecallMacro, Precision, PrecisionMacro, MAE, MAPE, MASE, WAPE, AverageWeightedQuantileLoss
  },
  model_deploy_config: {
    auto_generate_endpoint_name: false,
    endpoint_name: "EndpointName",
  },
  data_split_config: {
    validation_fraction: 1.0,
  },
  auto_ml_compute_config: {
    emr_serverless_compute_config: {
      execution_role_arn: "RoleArn", # required
    },
  },
})

Response structure


resp.auto_ml_job_arn #=> String
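
As a sketch, a minimal V2 request for a tabular problem could look like the following; the job name, bucket paths, target column, and role ARN are hypothetical placeholders:


resp = client.create_auto_ml_job_v2({
  auto_ml_job_name: "my-automl-v2-job",
  auto_ml_job_input_data_config: [
    {
      channel_type: "training",
      data_source: {
        s3_data_source: { s3_data_type: "S3Prefix", s3_uri: "s3://amzn-s3-demo-bucket/train/" },
      },
    },
  ],
  output_data_config: { s3_output_path: "s3://amzn-s3-demo-bucket/output/" },
  auto_ml_problem_type_config: {
    tabular_job_config: { target_attribute_name: "label" }, # hypothetical target column
  },
  role_arn: "arn:aws:iam::123456789012:role/AutoMLExecutionRole", # hypothetical role
})
resp.auto_ml_job_arn #=> String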

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :auto_ml_job_name (required, String)

    Identifies an Autopilot job. The name must be unique to your account and is case insensitive.

  • :auto_ml_job_input_data_config (required, Array<Types::AutoMLJobChannel>)

    An array of channel objects describing the input data and their location. Each channel is a named input source. Similar to the InputDataConfig attribute in the CreateAutoMLJob input parameters. The supported formats depend on the problem type:

    • For tabular problem types: S3Prefix, ManifestFile.

    • For image classification: S3Prefix, ManifestFile, AugmentedManifestFile.

    • For text classification: S3Prefix.

    • For time-series forecasting: S3Prefix.

    • For text generation (LLMs fine-tuning): S3Prefix.

  • :output_data_config (required, Types::AutoMLOutputDataConfig)

    Provides information about encryption and the Amazon S3 output path needed to store artifacts from an AutoML job.

  • :auto_ml_problem_type_config (required, Types::AutoMLProblemTypeConfig)

    Defines the configuration settings of one of the supported problem types.

  • :role_arn (required, String)

    The ARN of the role that is used to access the data.

  • :tags (Array<Types::Tag>)

    An array of key-value pairs. You can use tags to categorize your Amazon Web Services resources in different ways, such as by purpose, owner, or environment. For more information, see Tagging Amazon Web Services Resources. Tag keys must be unique per resource.

  • :security_config (Types::AutoMLSecurityConfig)

    The security configuration for traffic encryption or Amazon VPC settings.

  • :auto_ml_job_objective (Types::AutoMLJobObjective)

    Specifies a metric to minimize or maximize as the objective of a job. If not specified, the default objective metric depends on the problem type. For the list of default values per problem type, see AutoMLJobObjective.

    • For tabular problem types: You must either provide both the AutoMLJobObjective and indicate the type of supervised learning problem in AutoMLProblemTypeConfig (TabularJobConfig.ProblemType), or none at all.

    • For text generation problem types (LLMs fine-tuning): Fine-tuning language models in Autopilot does not require setting the AutoMLJobObjective field. Autopilot fine-tunes LLMs without requiring multiple candidates to be trained and evaluated. Instead, using your dataset, Autopilot directly fine-tunes your target model to enhance a default objective metric, the cross-entropy loss. After fine-tuning a language model, you can evaluate the quality of its generated text using different metrics. For a list of the available metrics, see Metrics for fine-tuning LLMs in Autopilot.

  • :model_deploy_config (Types::ModelDeployConfig)

    Specifies how to generate the endpoint name for an automatic one-click Autopilot model deployment.

  • :data_split_config (Types::AutoMLDataSplitConfig)

    This structure specifies how to split the data into train and validation datasets.

    The validation and training datasets must contain the same headers. For jobs created by calling CreateAutoMLJob, the validation dataset must be less than 2 GB in size.

    This attribute must not be set for the time-series forecasting problem type, as Autopilot automatically splits the input dataset into training and validation sets.

  • :auto_ml_compute_config (Types::AutoMLComputeConfig)

    Specifies the compute configuration for the AutoML job V2.

Returns:

See Also:



# File 'gems/aws-sdk-sagemaker/lib/aws-sdk-sagemaker/client.rb', line 1909

def create_auto_ml_job_v2(params = {}, options = {})
  req = build_request(:create_auto_ml_job_v2, params)
  req.send_request(options)
end

#create_cluster(params = {}) ⇒ Types::CreateClusterResponse

Creates a SageMaker HyperPod cluster. SageMaker HyperPod is a capability of SageMaker for creating and managing persistent clusters for developing large machine learning models, such as large language models (LLMs) and diffusion models. To learn more, see Amazon SageMaker HyperPod in the Amazon SageMaker Developer Guide.

Examples:

Request syntax with placeholder values


resp = client.create_cluster({
  cluster_name: "ClusterName", # required
  instance_groups: [ # required
    {
      instance_count: 1, # required
      instance_group_name: "ClusterInstanceGroupName", # required
      instance_type: "ml.p4d.24xlarge", # required, accepts ml.p4d.24xlarge, ml.p4de.24xlarge, ml.p5.48xlarge, ml.trn1.32xlarge, ml.trn1n.32xlarge, ml.g5.xlarge, ml.g5.2xlarge, ml.g5.4xlarge, ml.g5.8xlarge, ml.g5.12xlarge, ml.g5.16xlarge, ml.g5.24xlarge, ml.g5.48xlarge, ml.c5.large, ml.c5.xlarge, ml.c5.2xlarge, ml.c5.4xlarge, ml.c5.9xlarge, ml.c5.12xlarge, ml.c5.18xlarge, ml.c5.24xlarge, ml.c5n.large, ml.c5n.2xlarge, ml.c5n.4xlarge, ml.c5n.9xlarge, ml.c5n.18xlarge, ml.m5.large, ml.m5.xlarge, ml.m5.2xlarge, ml.m5.4xlarge, ml.m5.8xlarge, ml.m5.12xlarge, ml.m5.16xlarge, ml.m5.24xlarge, ml.t3.medium, ml.t3.large, ml.t3.xlarge, ml.t3.2xlarge, ml.g6.xlarge, ml.g6.2xlarge, ml.g6.4xlarge, ml.g6.8xlarge, ml.g6.16xlarge, ml.g6.12xlarge, ml.g6.24xlarge, ml.g6.48xlarge, ml.gr6.4xlarge, ml.gr6.8xlarge, ml.g6e.xlarge, ml.g6e.2xlarge, ml.g6e.4xlarge, ml.g6e.8xlarge, ml.g6e.16xlarge, ml.g6e.12xlarge, ml.g6e.24xlarge, ml.g6e.48xlarge, ml.p5e.48xlarge, ml.p5en.48xlarge, ml.trn2.48xlarge
      life_cycle_config: { # required
        source_s3_uri: "S3Uri", # required
        on_create: "ClusterLifeCycleConfigFileName", # required
      },
      execution_role: "RoleArn", # required
      threads_per_core: 1,
      instance_storage_configs: [
        {
          ebs_volume_config: {
            volume_size_in_gb: 1, # required
          },
        },
      ],
      on_start_deep_health_checks: ["InstanceStress"], # accepts InstanceStress, InstanceConnectivity
      training_plan_arn: "TrainingPlanArn",
      override_vpc_config: {
        security_group_ids: ["SecurityGroupId"], # required
        subnets: ["SubnetId"], # required
      },
    },
  ],
  vpc_config: {
    security_group_ids: ["SecurityGroupId"], # required
    subnets: ["SubnetId"], # required
  },
  tags: [
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
  orchestrator: {
    eks: { # required
      cluster_arn: "EksClusterArn", # required
    },
  },
  node_recovery: "Automatic", # accepts Automatic, None
})

Response structure


resp.cluster_arn #=> String
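
A minimal sketch with a single instance group; the cluster name, instance group settings, lifecycle script location, and execution role below are hypothetical placeholders:


resp = client.create_cluster({
  cluster_name: "my-hyperpod-cluster",
  instance_groups: [
    {
      instance_group_name: "worker-group-1",
      instance_count: 2,
      instance_type: "ml.g5.8xlarge",
      life_cycle_config: {
        source_s3_uri: "s3://amzn-s3-demo-bucket/lifecycle/", # hypothetical script location
        on_create: "on_create.sh",
      },
      execution_role: "arn:aws:iam::123456789012:role/HyperPodExecutionRole", # hypothetical role
    },
  ],
})
resp.cluster_arn #=> String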

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_name (required, String)

    The name for the new SageMaker HyperPod cluster.

  • :instance_groups (required, Array<Types::ClusterInstanceGroupSpecification>)

    The instance groups to be created in the SageMaker HyperPod cluster.

  • :vpc_config (Types::VpcConfig)

    Specifies an Amazon Virtual Private Cloud (VPC) that your SageMaker jobs, hosted models, and compute resources have access to. You can control access to and from your resources by configuring a VPC. For more information, see Give SageMaker Access to Resources in your Amazon VPC.

  • :tags (Array<Types::Tag>)

    Custom tags for managing the SageMaker HyperPod cluster as an Amazon Web Services resource. You can add tags to your cluster in the same way you add them in other Amazon Web Services services that support tagging. To learn more about tagging Amazon Web Services resources in general, see the Tagging Amazon Web Services Resources User Guide.

  • :orchestrator (Types::ClusterOrchestrator)

    The type of orchestrator to use for the SageMaker HyperPod cluster. Currently, the only supported value is "eks", which uses an Amazon Elastic Kubernetes Service (EKS) cluster as the orchestrator.

  • :node_recovery (String)

    The node recovery mode for the SageMaker HyperPod cluster. When set to Automatic, SageMaker HyperPod will automatically reboot or replace faulty nodes when issues are detected. When set to None, cluster administrators will need to manually manage any faulty cluster instances.

Returns:

See Also:



# File 'gems/aws-sdk-sagemaker/lib/aws-sdk-sagemaker/client.rb', line 2024

def create_cluster(params = {}, options = {})
  req = build_request(:create_cluster, params)
  req.send_request(options)
end

#create_cluster_scheduler_config(params = {}) ⇒ Types::CreateClusterSchedulerConfigResponse

Creates a cluster policy configuration. This policy is used for task prioritization and fair-share allocation of idle compute. It helps prioritize critical workloads and distribute idle compute across entities.

Examples:

Request syntax with placeholder values


resp = client.create_cluster_scheduler_config({
  name: "EntityName", # required
  cluster_arn: "ClusterArn", # required
  scheduler_config: { # required
    priority_classes: [
      {
        name: "ClusterSchedulerPriorityClassName", # required
        weight: 1, # required
      },
    ],
    fair_share: "Enabled", # accepts Enabled, Disabled
  },
  description: "EntityDescription",
  tags: [
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
})

Response structure


resp.cluster_scheduler_config_arn #=> String
resp.cluster_scheduler_config_id #=> String
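
A minimal sketch; the policy name, cluster ARN, priority class names, and weights below are hypothetical placeholders:


resp = client.create_cluster_scheduler_config({
  name: "my-scheduler-config",
  cluster_arn: "arn:aws:sagemaker:us-west-2:123456789012:cluster/abc123example", # hypothetical ARN
  scheduler_config: {
    priority_classes: [
      { name: "inference", weight: 70 },
      { name: "training", weight: 30 },
    ],
    fair_share: "Enabled",
  },
})
resp.cluster_scheduler_config_arn #=> String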

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    Name for the cluster policy.

  • :cluster_arn (required, String)

    ARN of the cluster.

  • :scheduler_config (required, Types::SchedulerConfig)

    The configuration of the scheduler policy, including priority classes and fair-share allocation.

  • :description (String)

    Description of the cluster policy.

  • :tags (Array<Types::Tag>)

    Tags of the cluster policy.

Returns:

See Also:



# File 'gems/aws-sdk-sagemaker/lib/aws-sdk-sagemaker/client.rb', line 2086

def create_cluster_scheduler_config(params = {}, options = {})
  req = build_request(:create_cluster_scheduler_config, params)
  req.send_request(options)
end

#create_code_repository(params = {}) ⇒ Types::CreateCodeRepositoryOutput

Creates a Git repository as a resource in your SageMaker account. You can associate the repository with notebook instances so that you can use Git source control for the notebooks you create. The Git repository is a resource in your SageMaker account, so it can be associated with more than one notebook instance, and it persists independently from the lifecycle of any notebook instances it is associated with.

The repository can be hosted either in Amazon Web Services CodeCommit or in any other Git repository.

Examples:

Request syntax with placeholder values


resp = client.create_code_repository({
  code_repository_name: "EntityName", # required
  git_config: { # required
    repository_url: "GitConfigUrl", # required
    branch: "Branch",
    secret_arn: "SecretArn",
  },
  tags: [
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
})

Response structure


resp.code_repository_arn #=> String
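
A minimal sketch for a private repository; the repository name, URL, branch, and Secrets Manager ARN below are hypothetical placeholders:


resp = client.create_code_repository({
  code_repository_name: "my-notebook-repo",
  git_config: {
    repository_url: "https://github.com/example-org/example-repo.git", # hypothetical URL
    branch: "main",
    secret_arn: "arn:aws:secretsmanager:us-west-2:123456789012:secret:git-credentials-AbCdEf", # hypothetical secret
  },
})
resp.code_repository_arn #=> String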

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :code_repository_name (required, String)

    The name of the Git repository. The name must have 1 to 63 characters. Valid characters are a-z, A-Z, 0-9, and - (hyphen).

  • :git_config (required, Types::GitConfig)

    Specifies details about the repository, including the URL where the repository is located, the default branch, and credentials to use to access the repository.

  • :tags (Array<Types::Tag>)

    An array of key-value pairs. You can use tags to categorize your Amazon Web Services resources in different ways, for example, by purpose, owner, or environment. For more information, see Tagging Amazon Web Services Resources.

Returns:

See Also:



# File 'gems/aws-sdk-sagemaker/lib/aws-sdk-sagemaker/client.rb', line 2154

def create_code_repository(params = {}, options = {})
  req = build_request(:create_code_repository, params)
  req.send_request(options)
end

#create_compilation_job(params = {}) ⇒ Types::CreateCompilationJobResponse

Starts a model compilation job. After the model has been compiled, Amazon SageMaker saves the resulting model artifacts to an Amazon Simple Storage Service (Amazon S3) bucket that you specify.

If you choose to host your model using Amazon SageMaker hosting services, you can use the resulting model artifacts as part of the model. You can also use the artifacts with Amazon Web Services IoT Greengrass. In that case, deploy them as an ML resource.

In the request body, you provide the following:

  • A name for the compilation job

  • Information about the input model artifacts

  • The output location for the compiled model and the device (target) that the model runs on

  • The Amazon Resource Name (ARN) of the IAM role that Amazon SageMaker assumes to perform the model compilation job.

You can also provide a Tag to track the model compilation job's resource use and costs. The response body contains the CompilationJobArn for the compiled job.

To stop a model compilation job, use StopCompilationJob. To get information about a particular model compilation job, use DescribeCompilationJob. To get information about multiple model compilation jobs, use ListCompilationJobs.

Examples:

Request syntax with placeholder values


resp = client.create_compilation_job({
  compilation_job_name: "EntityName", # required
  role_arn: "RoleArn", # required
  model_package_version_arn: "ModelPackageArn",
  input_config: {
    s3_uri: "S3Uri", # required
    data_input_config: "DataInputConfig",
    framework: "TENSORFLOW", # required, accepts TENSORFLOW, KERAS, MXNET, ONNX, PYTORCH, XGBOOST, TFLITE, DARKNET, SKLEARN
    framework_version: "FrameworkVersion",
  },
  output_config: { # required
    s3_output_location: "S3Uri", # required
    target_device: "lambda", # accepts lambda, ml_m4, ml_m5, ml_m6g, ml_c4, ml_c5, ml_c6g, ml_p2, ml_p3, ml_g4dn, ml_inf1, ml_inf2, ml_trn1, ml_eia2, jetson_tx1, jetson_tx2, jetson_nano, jetson_xavier, rasp3b, rasp4b, imx8qm, deeplens, rk3399, rk3288, aisage, sbe_c, qcs605, qcs603, sitara_am57x, amba_cv2, amba_cv22, amba_cv25, x86_win32, x86_win64, coreml, jacinto_tda4vm, imx8mplus
    target_platform: {
      os: "ANDROID", # required, accepts ANDROID, LINUX
      arch: "X86_64", # required, accepts X86_64, X86, ARM64, ARM_EABI, ARM_EABIHF
      accelerator: "INTEL_GRAPHICS", # accepts INTEL_GRAPHICS, MALI, NVIDIA, NNA
    },
    compiler_options: "CompilerOptions",
    kms_key_id: "KmsKeyId",
  },
  vpc_config: {
    security_group_ids: ["NeoVpcSecurityGroupId"], # required
    subnets: ["NeoVpcSubnetId"], # required
  },
  stopping_condition: { # required
    max_runtime_in_seconds: 1,
    max_wait_time_in_seconds: 1,
    max_pending_time_in_seconds: 1,
  },
  tags: [
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
})

Response structure


resp.compilation_job_arn #=> String
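
A minimal sketch; the job name, role ARN, bucket paths, and the tensor name and shape in data_input_config are hypothetical placeholders for your own model:


resp = client.create_compilation_job({
  compilation_job_name: "my-compilation-job",
  role_arn: "arn:aws:iam::123456789012:role/NeoCompilationRole",   # hypothetical role
  input_config: {
    s3_uri: "s3://amzn-s3-demo-bucket/model/model.tar.gz",          # hypothetical model artifacts
    data_input_config: '{"input": [1, 3, 224, 224]}',               # hypothetical input name and shape
    framework: "PYTORCH",
  },
  output_config: {
    s3_output_location: "s3://amzn-s3-demo-bucket/compiled/",
    target_device: "ml_c5",
  },
  stopping_condition: { max_runtime_in_seconds: 900 },
})
resp.compilation_job_arn #=> String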

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :compilation_job_name (required, String)

    A name for the model compilation job. The name must be unique within the Amazon Web Services Region and within your Amazon Web Services account.

  • :role_arn (required, String)

    The Amazon Resource Name (ARN) of an IAM role that enables Amazon SageMaker to perform tasks on your behalf.

    During model compilation, Amazon SageMaker needs your permission to:

    • Read input data from an S3 bucket

    • Write model artifacts to an S3 bucket

    • Write logs to Amazon CloudWatch Logs

    • Publish metrics to Amazon CloudWatch

    You grant permissions for all of these tasks to an IAM role. To pass this role to Amazon SageMaker, the caller of this API must have the iam:PassRole permission. For more information, see Amazon SageMaker Roles.

  • :model_package_version_arn (String)

    The Amazon Resource Name (ARN) of a versioned model package. Provide either a ModelPackageVersionArn or an InputConfig object in the request syntax. Specifying both objects in the CreateCompilationJob request returns an exception.

  • :input_config (Types::InputConfig)

    Provides information about the location of input model artifacts, the name and shape of the expected data inputs, and the framework in which the model was trained.

  • :output_config (required, Types::OutputConfig)

    Provides information about the output location for the compiled model and the target device the model runs on.

  • :vpc_config (Types::NeoVpcConfig)

    A VpcConfig object that specifies the VPC that you want your compilation job to connect to. Control access to your models by configuring the VPC. For more information, see Protect Compilation Jobs by Using an Amazon Virtual Private Cloud.

  • :stopping_condition (required, Types::StoppingCondition)

    Specifies a limit to how long a model compilation job can run. When the job reaches the time limit, Amazon SageMaker ends the compilation job. Use this API to cap model compilation costs.

  • :tags (Array<Types::Tag>)

    An array of key-value pairs. You can use tags to categorize your Amazon Web Services resources in different ways, for example, by purpose, owner, or environment. For more information, see Tagging Amazon Web Services Resources.

Returns:

See Also:



# File 'gems/aws-sdk-sagemaker/lib/aws-sdk-sagemaker/client.rb', line 2316

def create_compilation_job(params = {}, options = {})
  req = build_request(:create_compilation_job, params)
  req.send_request(options)
end

#create_compute_quota(params = {}) ⇒ Types::CreateComputeQuotaResponse

Creates a compute allocation definition. This defines how compute is allocated, shared, and borrowed for the specified entities; specifically, how to lend and borrow idle compute and how to assign a fair-share weight to those entities.

Examples:

Request syntax with placeholder values


resp = client.create_compute_quota({
  name: "EntityName", # required
  description: "EntityDescription",
  cluster_arn: "ClusterArn", # required
  compute_quota_config: { # required
    compute_quota_resources: [
      {
        instance_type: "ml.p4d.24xlarge", # required, accepts ml.p4d.24xlarge, ml.p4de.24xlarge, ml.p5.48xlarge, ml.trn1.32xlarge, ml.trn1n.32xlarge, ml.g5.xlarge, ml.g5.2xlarge, ml.g5.4xlarge, ml.g5.8xlarge, ml.g5.12xlarge, ml.g5.16xlarge, ml.g5.24xlarge, ml.g5.48xlarge, ml.c5.large, ml.c5.xlarge, ml.c5.2xlarge, ml.c5.4xlarge, ml.c5.9xlarge, ml.c5.12xlarge, ml.c5.18xlarge, ml.c5.24xlarge, ml.c5n.large, ml.c5n.2xlarge, ml.c5n.4xlarge, ml.c5n.9xlarge, ml.c5n.18xlarge, ml.m5.large, ml.m5.xlarge, ml.m5.2xlarge, ml.m5.4xlarge, ml.m5.8xlarge, ml.m5.12xlarge, ml.m5.16xlarge, ml.m5.24xlarge, ml.t3.medium, ml.t3.large, ml.t3.xlarge, ml.t3.2xlarge, ml.g6.xlarge, ml.g6.2xlarge, ml.g6.4xlarge, ml.g6.8xlarge, ml.g6.16xlarge, ml.g6.12xlarge, ml.g6.24xlarge, ml.g6.48xlarge, ml.gr6.4xlarge, ml.gr6.8xlarge, ml.g6e.xlarge, ml.g6e.2xlarge, ml.g6e.4xlarge, ml.g6e.8xlarge, ml.g6e.16xlarge, ml.g6e.12xlarge, ml.g6e.24xlarge, ml.g6e.48xlarge, ml.p5e.48xlarge, ml.p5en.48xlarge, ml.trn2.48xlarge
        count: 1, # required
      },
    ],
    resource_sharing_config: {
      strategy: "Lend", # required, accepts Lend, DontLend, LendAndBorrow
      borrow_limit: 1,
    },
    preempt_team_tasks: "Never", # accepts Never, LowerPriority
  },
  compute_quota_target: { # required
    team_name: "ComputeQuotaTargetTeamName", # required
    fair_share_weight: 1,
  },
  activation_state: "Enabled", # accepts Enabled, Disabled
  tags: [
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
})

Response structure


resp.compute_quota_arn #=> String
resp.compute_quota_id #=> String
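
A minimal sketch; the quota name, cluster ARN, team name, instance count, and borrow limit below are hypothetical placeholders:


resp = client.create_compute_quota({
  name: "team-a-quota",
  cluster_arn: "arn:aws:sagemaker:us-west-2:123456789012:cluster/abc123example", # hypothetical ARN
  compute_quota_config: {
    compute_quota_resources: [
      { instance_type: "ml.g5.8xlarge", count: 4 },
    ],
    resource_sharing_config: { strategy: "LendAndBorrow", borrow_limit: 50 },
  },
  compute_quota_target: { team_name: "team-a", fair_share_weight: 10 },
})
resp.compute_quota_arn #=> String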

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    Name of the compute allocation definition.

  • :description (String)

    Description of the compute allocation definition.

  • :cluster_arn (required, String)

    ARN of the cluster.

  • :compute_quota_config (required, Types::ComputeQuotaConfig)

    Configuration of the compute allocation definition. This includes the resource sharing option, and the setting to preempt low priority tasks.

  • :compute_quota_target (required, Types::ComputeQuotaTarget)

    The target entity to allocate compute resources to.

  • :activation_state (String)

    The state of the compute allocation. Use this to enable or disable the compute allocation.

    Default is Enabled.

  • :tags (Array<Types::Tag>)

    Tags of the compute allocation definition.

Returns:

See Also:



# File 'gems/aws-sdk-sagemaker/lib/aws-sdk-sagemaker/client.rb', line 2398

def create_compute_quota(params = {}, options = {})
  req = build_request(:create_compute_quota, params)
  req.send_request(options)
end

#create_context(params = {}) ⇒ Types::CreateContextResponse

Creates a context. A context is a lineage tracking entity that represents a logical grouping of other tracking or experiment entities. Some examples are an endpoint and a model package. For more information, see Amazon SageMaker ML Lineage Tracking.

Examples:

Request syntax with placeholder values


resp = client.create_context({
  context_name: "ContextName", # required
  source: { # required
    source_uri: "SourceUri", # required
    source_type: "String256",
    source_id: "String256",
  },
  context_type: "String256", # required
  description: "ExperimentDescription",
  properties: {
    "StringParameterValue" => "StringParameterValue",
  },
  tags: [
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
})

Response structure


resp.context_arn #=> String
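
A minimal sketch that creates a context for an endpoint; the context name, endpoint ARN, and context type below are hypothetical placeholders:


resp = client.create_context({
  context_name: "my-endpoint-context",
  source: { source_uri: "arn:aws:sagemaker:us-west-2:123456789012:endpoint/my-endpoint" }, # hypothetical ARN
  context_type: "Endpoint",
})
resp.context_arn #=> String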

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :context_name (required, String)

    The name of the context. Must be unique to your account in an Amazon Web Services Region.

  • :source (required, Types::ContextSource)

    The source type, ID, and URI.

  • :context_type (required, String)

    The context type.

  • :description (String)

    The description of the context.

  • :properties (Hash<String,String>)

    A list of properties to add to the context.

  • :tags (Array<Types::Tag>)

    A list of tags to apply to the context.

Returns:

See Also:



# File 'gems/aws-sdk-sagemaker/lib/aws-sdk-sagemaker/client.rb', line 2465

def create_context(params = {}, options = {})
  req = build_request(:create_context, params)
  req.send_request(options)
end

#create_data_quality_job_definition(params = {}) ⇒ Types::CreateDataQualityJobDefinitionResponse

Creates a definition for a job that monitors data quality and drift. For information about model monitor, see Amazon SageMaker Model Monitor.

Examples:

Request syntax with placeholder values


resp = client.create_data_quality_job_definition({
  job_definition_name: "MonitoringJobDefinitionName", # required
  data_quality_baseline_config: {
    baselining_job_name: "ProcessingJobName",
    constraints_resource: {
      s3_uri: "S3Uri",
    },
    statistics_resource: {
      s3_uri: "S3Uri",
    },
  },
  data_quality_app_specification: { # required
    image_uri: "ImageUri", # required
    container_entrypoint: ["ContainerEntrypointString"],
    container_arguments: ["ContainerArgument"],
    record_preprocessor_source_uri: "S3Uri",
    post_analytics_processor_source_uri: "S3Uri",
    environment: {
      "ProcessingEnvironmentKey" => "ProcessingEnvironmentValue",
    },
  },
  data_quality_job_input: { # required
    endpoint_input: {
      endpoint_name: "EndpointName", # required
      local_path: "ProcessingLocalPath", # required
      s3_input_mode: "Pipe", # accepts Pipe, File
      s3_data_distribution_type: "FullyReplicated", # accepts FullyReplicated, ShardedByS3Key
      features_attribute: "String",
      inference_attribute: "String",
      probability_attribute: "String",
      probability_threshold_attribute: 1.0,
      start_time_offset: "MonitoringTimeOffsetString",
      end_time_offset: "MonitoringTimeOffsetString",
      exclude_features_attribute: "ExcludeFeaturesAttribute",
    },
    batch_transform_input: {
      data_captured_destination_s3_uri: "DestinationS3Uri", # required
      dataset_format: { # required
        csv: {
          header: false,
        },
        json: {
          line: false,
        },
        parquet: {
        },
      },
      local_path: "ProcessingLocalPath", # required
      s3_input_mode: "Pipe", # accepts Pipe, File
      s3_data_distribution_type: "FullyReplicated", # accepts FullyReplicated, ShardedByS3Key
      features_attribute: "String",
      inference_attribute: "String",
      probability_attribute: "String",
      probability_threshold_attribute: 1.0,
      start_time_offset: "MonitoringTimeOffsetString",
      end_time_offset: "MonitoringTimeOffsetString",
      exclude_features_attribute: "ExcludeFeaturesAttribute",
    },
  },
  data_quality_job_output_config: { # required
    monitoring_outputs: [ # required
      {
        s3_output: { # required
          s3_uri: "MonitoringS3Uri", # required
          local_path: "ProcessingLocalPath", # required
          s3_upload_mode: "Continuous", # accepts Continuous, EndOfJob
        },
      },
    ],
    kms_key_id: "KmsKeyId",
  },
  job_resources: { # required
    cluster_config: { # required
      instance_count: 1, # required
      instance_type: "ml.t3.medium", # required, accepts ml.t3.medium, ml.t3.large, ml.t3.xlarge, ml.t3.2xlarge, ml.m4.xlarge, ml.m4.2xlarge, ml.m4.4xlarge, ml.m4.10xlarge, ml.m4.16xlarge, ml.c4.xlarge, ml.c4.2xlarge, ml.c4.4xlarge, ml.c4.8xlarge, ml.p2.xlarge, ml.p2.8xlarge, ml.p2.16xlarge, ml.p3.2xlarge, ml.p3.8xlarge, ml.p3.16xlarge, ml.c5.xlarge, ml.c5.2xlarge, ml.c5.4xlarge, ml.c5.9xlarge, ml.c5.18xlarge, ml.m5.large, ml.m5.xlarge, ml.m5.2xlarge, ml.m5.4xlarge, ml.m5.12xlarge, ml.m5.24xlarge, ml.r5.large, ml.r5.xlarge, ml.r5.2xlarge, ml.r5.4xlarge, ml.r5.8xlarge, ml.r5.12xlarge, ml.r5.16xlarge, ml.r5.24xlarge, ml.g4dn.xlarge, ml.g4dn.2xlarge, ml.g4dn.4xlarge, ml.g4dn.8xlarge, ml.g4dn.12xlarge, ml.g4dn.16xlarge, ml.g5.xlarge, ml.g5.2xlarge, ml.g5.4xlarge, ml.g5.8xlarge, ml.g5.16xlarge, ml.g5.12xlarge, ml.g5.24xlarge, ml.g5.48xlarge, ml.r5d.large, ml.r5d.xlarge, ml.r5d.2xlarge, ml.r5d.4xlarge, ml.r5d.8xlarge, ml.r5d.12xlarge, ml.r5d.16xlarge, ml.r5d.24xlarge
      volume_size_in_gb: 1, # required
      volume_kms_key_id: "KmsKeyId",
    },
  },
  network_config: {
    enable_inter_container_traffic_encryption: false,
    enable_network_isolation: false,
    vpc_config: {
      security_group_ids: ["SecurityGroupId"], # required
      subnets: ["SubnetId"], # required
    },
  },
  role_arn: "RoleArn", # required
  stopping_condition: {
    max_runtime_in_seconds: 1, # required
  },
  tags: [
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
})

Response structure


resp.job_definition_arn #=> String
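
A minimal sketch that monitors an existing endpoint; the job definition name, container image URI, endpoint name, bucket paths, local paths, and role ARN below are hypothetical placeholders:


resp = client.create_data_quality_job_definition({
  job_definition_name: "my-data-quality-job-definition",
  data_quality_app_specification: {
    image_uri: "123456789012.dkr.ecr.us-west-2.amazonaws.com/my-monitoring-image:latest", # hypothetical image
  },
  data_quality_job_input: {
    endpoint_input: {
      endpoint_name: "my-endpoint",                  # hypothetical endpoint
      local_path: "/opt/ml/processing/input",
    },
  },
  data_quality_job_output_config: {
    monitoring_outputs: [
      {
        s3_output: {
          s3_uri: "s3://amzn-s3-demo-bucket/monitoring/output/",
          local_path: "/opt/ml/processing/output",
        },
      },
    ],
  },
  job_resources: {
    cluster_config: {
      instance_count: 1,
      instance_type: "ml.m5.xlarge",
      volume_size_in_gb: 20,
    },
  },
  role_arn: "arn:aws:iam::123456789012:role/MonitoringExecutionRole", # hypothetical role
})
resp.job_definition_arn #=> String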

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :job_definition_name (required, String)

    The name for the monitoring job definition.

  • :data_quality_baseline_config (Types::DataQualityBaselineConfig)

    Configures the constraints and baselines for the monitoring job.

  • :data_quality_app_specification (required, Types::DataQualityAppSpecification)

    Specifies the container that runs the monitoring job.

  • :data_quality_job_input (required, Types::DataQualityJobInput)

    A list of inputs for the monitoring job. Currently, endpoints and batch transform jobs are supported as monitoring inputs.

  • :data_quality_job_output_config (required, Types::MonitoringOutputConfig)

    The output configuration for monitoring jobs.

  • :job_resources (required, Types::MonitoringResources)

    Identifies the resources to deploy for a monitoring job.

  • :network_config (Types::MonitoringNetworkConfig)

    Specifies networking configuration for the monitoring job.

  • :role_arn (required, String)

    The Amazon Resource Name (ARN) of an IAM role that Amazon SageMaker can assume to perform tasks on your behalf.

  • :stopping_condition (Types::MonitoringStoppingCondition)

    A time limit for how long the monitoring job is allowed to run before stopping.

  • :tags (Array<Types::Tag>)

    (Optional) An array of key-value pairs. For more information, see Using Cost Allocation Tags in the Amazon Web Services Billing and Cost Management User Guide.

Returns:

See Also:



# File 'gems/aws-sdk-sagemaker/lib/aws-sdk-sagemaker/client.rb', line 2630

def create_data_quality_job_definition(params = {}, options = {})
  req = build_request(:create_data_quality_job_definition, params)
  req.send_request(options)
end

#create_device_fleet(params = {}) ⇒ Struct

Creates a device fleet.

Examples:

Request syntax with placeholder values


resp = client.create_device_fleet({
  device_fleet_name: "EntityName", # required
  role_arn: "RoleArn",
  description: "DeviceFleetDescription",
  output_config: { # required
    s3_output_location: "S3Uri", # required
    kms_key_id: "KmsKeyId",
    preset_deployment_type: "GreengrassV2Component", # accepts GreengrassV2Component
    preset_deployment_config: "String",
  },
  tags: [
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
  enable_iot_role_alias: false,
})
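
A minimal sketch; the fleet name, role ARN, and output location below are hypothetical placeholders:


client.create_device_fleet({
  device_fleet_name: "demo-fleet",
  role_arn: "arn:aws:iam::123456789012:role/EdgeManagerFleetRole",          # hypothetical role
  output_config: { s3_output_location: "s3://amzn-s3-demo-bucket/edge-fleet/" },
})
# Returns an empty response (Struct).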

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :device_fleet_name (required, String)

    The name of the device fleet.

  • :role_arn (String)

    The Amazon Resource Name (ARN) of the IAM role that has access to Amazon Web Services Internet of Things (IoT).

  • :description (String)

    A description of the fleet.

  • :output_config (required, Types::EdgeOutputConfig)

    The output configuration for storing sample data collected by the fleet.

  • :tags (Array<Types::Tag>)

    Creates tags for the specified fleet.

  • :enable_iot_role_alias (Boolean)

    Whether to create an Amazon Web Services IoT Role Alias during device fleet creation. The name of the role alias generated will match this pattern: "SageMakerEdge-DeviceFleetName".

    For example, if your device fleet is called "demo-fleet", the name of the role alias will be "SageMakerEdge-demo-fleet".

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-sagemaker/lib/aws-sdk-sagemaker/client.rb', line 2689

def create_device_fleet(params = {}, options = {})
  req = build_request(:create_device_fleet, params)
  req.send_request(options)
end

#create_domain(params = {}) ⇒ Types::CreateDomainResponse

Creates a Domain. A domain consists of an associated Amazon Elastic File System volume, a list of authorized users, and a variety of security, application, policy, and Amazon Virtual Private Cloud (VPC) configurations. Users within a domain can share notebook files and other artifacts with each other.

EFS storage

When a domain is created, an EFS volume is created for use by all of the users within the domain. Each user receives a private home directory within the EFS volume for notebooks, Git repositories, and data files.

SageMaker uses the Amazon Web Services Key Management Service (Amazon Web Services KMS) to encrypt the EFS volume attached to the domain with an Amazon Web Services managed key by default. For more control, you can specify a customer managed key. For more information, see Protect Data at Rest Using Encryption.

VPC configuration

All traffic between the domain and the Amazon EFS volume is through the specified VPC and subnets. For other traffic, you can specify the AppNetworkAccessType parameter. AppNetworkAccessType corresponds to the network access type that you choose when you onboard to the domain. The following options are available:

  • PublicInternetOnly - Non-EFS traffic goes through a VPC managed by Amazon SageMaker, which allows internet access. This is the default value.

  • VpcOnly - All traffic is through the specified VPC and subnets. Internet access is disabled by default. To allow internet access, you must specify a NAT gateway.

    When internet access is disabled, you won't be able to run an Amazon SageMaker Studio notebook or to train or host models unless your VPC has an interface endpoint to the SageMaker API and runtime, or a NAT gateway, and your security groups allow outbound connections.

NFS traffic over TCP on port 2049 needs to be allowed in both inbound and outbound rules in order to launch an Amazon SageMaker Studio app successfully.

For more information, see Connect Amazon SageMaker Studio Notebooks to Resources in a VPC.
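
As a rough sketch of the VpcOnly option described above, a domain restricted to a customer-managed VPC might be created as follows. All names, IDs, the region, and the role ARN are hypothetical placeholders.

client = Aws::SageMaker::Client.new(region: "us-east-1") # hypothetical region

resp = client.create_domain({
  domain_name: "example-vpconly-domain",        # hypothetical name
  auth_mode: "IAM",
  default_user_settings: {
    execution_role: "arn:aws:iam::123456789012:role/SageMakerExecutionRole", # hypothetical role
    security_groups: ["sg-0123456789abcdef0"],  # must allow NFS over TCP 2049 and outbound HTTPS
  },
  subnet_ids: ["subnet-0123456789abcdef0"],     # hypothetical private subnet
  vpc_id: "vpc-0123456789abcdef0",              # hypothetical VPC
  app_network_access_type: "VpcOnly",           # all non-EFS traffic stays in the specified VPC
})

resp.domain_arn #=> String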

Examples:

Request syntax with placeholder values


resp = client.create_domain({
  domain_name: "DomainName", # required
  auth_mode: "SSO", # required, accepts SSO, IAM
  default_user_settings: { # required
    execution_role: "RoleArn",
    security_groups: ["SecurityGroupId"],
    sharing_settings: {
      notebook_output_option: "Allowed", # accepts Allowed, Disabled
      s3_output_path: "S3Uri",
      s3_kms_key_id: "KmsKeyId",
    },
    jupyter_server_app_settings: {
      default_resource_spec: {
        sage_maker_image_arn: "ImageArn",
        sage_maker_image_version_arn: "ImageVersionArn",
        sage_maker_image_version_alias: "ImageVersionAlias",
        instance_type: "system", # accepts system, ml.t3.micro, ml.t3.small, ml.t3.medium, ml.t3.large, ml.t3.xlarge, ml.t3.2xlarge, ml.m5.large, ml.m5.xlarge, ml.m5.2xlarge, ml.m5.4xlarge, ml.m5.8xlarge, ml.m5.12xlarge, ml.m5.16xlarge, ml.m5.24xlarge, ml.m5d.large, ml.m5d.xlarge, ml.m5d.2xlarge, ml.m5d.4xlarge, ml.m5d.8xlarge, ml.m5d.12xlarge, ml.m5d.16xlarge, ml.m5d.24xlarge, ml.c5.large, ml.c5.xlarge, ml.c5.2xlarge, ml.c5.4xlarge, ml.c5.9xlarge, ml.c5.12xlarge, ml.c5.18xlarge, ml.c5.24xlarge, ml.p3.2xlarge, ml.p3.8xlarge, ml.p3.16xlarge, ml.p3dn.24xlarge, ml.g4dn.xlarge, ml.g4dn.2xlarge, ml.g4dn.4xlarge, ml.g4dn.8xlarge, ml.g4dn.12xlarge, ml.g4dn.16xlarge, ml.r5.large, ml.r5.xlarge, ml.r5.2xlarge, ml.r5.4xlarge, ml.r5.8xlarge, ml.r5.12xlarge, ml.r5.16xlarge, ml.r5.24xlarge, ml.g5.xlarge, ml.g5.2xlarge, ml.g5.4xlarge, ml.g5.8xlarge, ml.g5.16xlarge, ml.g5.12xlarge, ml.g5.24xlarge, ml.g5.48xlarge, ml.g6.xlarge, ml.g6.2xlarge, ml.g6.4xlarge, ml.g6.8xlarge, ml.g6.12xlarge, ml.g6.16xlarge, ml.g6.24xlarge, ml.g6.48xlarge, ml.g6e.xlarge, ml.g6e.2xlarge, ml.g6e.4xlarge, ml.g6e.8xlarge, ml.g6e.12xlarge, ml.g6e.16xlarge, ml.g6e.24xlarge, ml.g6e.48xlarge, ml.geospatial.interactive, ml.p4d.24xlarge, ml.p4de.24xlarge, ml.trn1.2xlarge, ml.trn1.32xlarge, ml.trn1n.32xlarge, ml.p5.48xlarge, ml.m6i.large, ml.m6i.xlarge, ml.m6i.2xlarge, ml.m6i.4xlarge, ml.m6i.8xlarge, ml.m6i.12xlarge, ml.m6i.16xlarge, ml.m6i.24xlarge, ml.m6i.32xlarge, ml.m7i.large, ml.m7i.xlarge, ml.m7i.2xlarge, ml.m7i.4xlarge, ml.m7i.8xlarge, ml.m7i.12xlarge, ml.m7i.16xlarge, ml.m7i.24xlarge, ml.m7i.48xlarge, ml.c6i.large, ml.c6i.xlarge, ml.c6i.2xlarge, ml.c6i.4xlarge, ml.c6i.8xlarge, ml.c6i.12xlarge, ml.c6i.16xlarge, ml.c6i.24xlarge, ml.c6i.32xlarge, ml.c7i.large, ml.c7i.xlarge, ml.c7i.2xlarge, ml.c7i.4xlarge, ml.c7i.8xlarge, ml.c7i.12xlarge, ml.c7i.16xlarge, ml.c7i.24xlarge, ml.c7i.48xlarge, ml.r6i.large, ml.r6i.xlarge, ml.r6i.2xlarge, ml.r6i.4xlarge, ml.r6i.8xlarge, ml.r6i.12xlarge, ml.r6i.16xlarge, ml.r6i.24xlarge, ml.r6i.32xlarge, ml.r7i.large, ml.r7i.xlarge, ml.r7i.2xlarge, ml.r7i.4xlarge, ml.r7i.8xlarge, ml.r7i.12xlarge, ml.r7i.16xlarge, ml.r7i.24xlarge, ml.r7i.48xlarge, ml.m6id.large, ml.m6id.xlarge, ml.m6id.2xlarge, ml.m6id.4xlarge, ml.m6id.8xlarge, ml.m6id.12xlarge, ml.m6id.16xlarge, ml.m6id.24xlarge, ml.m6id.32xlarge, ml.c6id.large, ml.c6id.xlarge, ml.c6id.2xlarge, ml.c6id.4xlarge, ml.c6id.8xlarge, ml.c6id.12xlarge, ml.c6id.16xlarge, ml.c6id.24xlarge, ml.c6id.32xlarge, ml.r6id.large, ml.r6id.xlarge, ml.r6id.2xlarge, ml.r6id.4xlarge, ml.r6id.8xlarge, ml.r6id.12xlarge, ml.r6id.16xlarge, ml.r6id.24xlarge, ml.r6id.32xlarge
        lifecycle_config_arn: "StudioLifecycleConfigArn",
      },
      lifecycle_config_arns: ["StudioLifecycleConfigArn"],
      code_repositories: [
        {
          repository_url: "RepositoryUrl", # required
        },
      ],
    },
    kernel_gateway_app_settings: {
      default_resource_spec: {
        sage_maker_image_arn: "ImageArn",
        sage_maker_image_version_arn: "ImageVersionArn",
        sage_maker_image_version_alias: "ImageVersionAlias",
        instance_type: "system", # accepts system, ml.t3.micro, ml.t3.small, ml.t3.medium, ml.t3.large, ml.t3.xlarge, ml.t3.2xlarge, ml.m5.large, ml.m5.xlarge, ml.m5.2xlarge, ml.m5.4xlarge, ml.m5.8xlarge, ml.m5.12xlarge, ml.m5.16xlarge, ml.m5.24xlarge, ml.m5d.large, ml.m5d.xlarge, ml.m5d.2xlarge, ml.m5d.4xlarge, ml.m5d.8xlarge, ml.m5d.12xlarge, ml.m5d.16xlarge, ml.m5d.24xlarge, ml.c5.large, ml.c5.xlarge, ml.c5.2xlarge, ml.c5.4xlarge, ml.c5.9xlarge, ml.c5.12xlarge, ml.c5.18xlarge, ml.c5.24xlarge, ml.p3.2xlarge, ml.p3.8xlarge, ml.p3.16xlarge, ml.p3dn.24xlarge, ml.g4dn.xlarge, ml.g4dn.2xlarge, ml.g4dn.4xlarge, ml.g4dn.8xlarge, ml.g4dn.12xlarge, ml.g4dn.16xlarge, ml.r5.large, ml.r5.xlarge, ml.r5.2xlarge, ml.r5.4xlarge, ml.r5.8xlarge, ml.r5.12xlarge, ml.r5.16xlarge, ml.r5.24xlarge, ml.g5.xlarge, ml.g5.2xlarge, ml.g5.4xlarge, ml.g5.8xlarge, ml.g5.16xlarge, ml.g5.12xlarge, ml.g5.24xlarge, ml.g5.48xlarge, ml.g6.xlarge, ml.g6.2xlarge, ml.g6.4xlarge, ml.g6.8xlarge, ml.g6.12xlarge, ml.g6.16xlarge, ml.g6.24xlarge, ml.g6.48xlarge, ml.g6e.xlarge, ml.g6e.2xlarge, ml.g6e.4xlarge, ml.g6e.8xlarge, ml.g6e.12xlarge, ml.g6e.16xlarge, ml.g6e.24xlarge, ml.g6e.48xlarge, ml.geospatial.interactive, ml.p4d.24xlarge, ml.p4de.24xlarge, ml.trn1.2xlarge, ml.trn1.32xlarge, ml.trn1n.32xlarge, ml.p5.48xlarge, ml.m6i.large, ml.m6i.xlarge, ml.m6i.2xlarge, ml.m6i.4xlarge, ml.m6i.8xlarge, ml.m6i.12xlarge, ml.m6i.16xlarge, ml.m6i.24xlarge, ml.m6i.32xlarge, ml.m7i.large, ml.m7i.xlarge, ml.m7i.2xlarge, ml.m7i.4xlarge, ml.m7i.8xlarge, ml.m7i.12xlarge, ml.m7i.16xlarge, ml.m7i.24xlarge, ml.m7i.48xlarge, ml.c6i.large, ml.c6i.xlarge, ml.c6i.2xlarge, ml.c6i.4xlarge, ml.c6i.8xlarge, ml.c6i.12xlarge, ml.c6i.16xlarge, ml.c6i.24xlarge, ml.c6i.32xlarge, ml.c7i.large, ml.c7i.xlarge, ml.c7i.2xlarge, ml.c7i.4xlarge, ml.c7i.8xlarge, ml.c7i.12xlarge, ml.c7i.16xlarge, ml.c7i.24xlarge, ml.c7i.48xlarge, ml.r6i.large, ml.r6i.xlarge, ml.r6i.2xlarge, ml.r6i.4xlarge, ml.r6i.8xlarge, ml.r6i.12xlarge, ml.r6i.16xlarge, ml.r6i.24xlarge, ml.r6i.32xlarge, ml.r7i.large, ml.r7i.xlarge, ml.r7i.2xlarge, ml.r7i.4xlarge, ml.r7i.8xlarge, ml.r7i.12xlarge, ml.r7i.16xlarge, ml.r7i.24xlarge, ml.r7i.48xlarge, ml.m6id.large, ml.m6id.xlarge, ml.m6id.2xlarge, ml.m6id.4xlarge, ml.m6id.8xlarge, ml.m6id.12xlarge, ml.m6id.16xlarge, ml.m6id.24xlarge, ml.m6id.32xlarge, ml.c6id.large, ml.c6id.xlarge, ml.c6id.2xlarge, ml.c6id.4xlarge, ml.c6id.8xlarge, ml.c6id.12xlarge, ml.c6id.16xlarge, ml.c6id.24xlarge, ml.c6id.32xlarge, ml.r6id.large, ml.r6id.xlarge, ml.r6id.2xlarge, ml.r6id.4xlarge, ml.r6id.8xlarge, ml.r6id.12xlarge, ml.r6id.16xlarge, ml.r6id.24xlarge, ml.r6id.32xlarge
        lifecycle_config_arn: "StudioLifecycleConfigArn",
      },
      custom_images: [
        {
          image_name: "ImageName", # required
          image_version_number: 1,
          app_image_config_name: "AppImageConfigName", # required
        },
      ],
      lifecycle_config_arns: ["StudioLifecycleConfigArn"],
    },
    tensor_board_app_settings: {
      default_resource_spec: {
        sage_maker_image_arn: "ImageArn",
        sage_maker_image_version_arn: "ImageVersionArn",
        sage_maker_image_version_alias: "ImageVersionAlias",
        instance_type: "system", # accepts system, ml.t3.micro, ml.t3.small, ml.t3.medium, ml.t3.large, ml.t3.xlarge, ml.t3.2xlarge, ml.m5.large, ml.m5.xlarge, ml.m5.2xlarge, ml.m5.4xlarge, ml.m5.8xlarge, ml.m5.12xlarge, ml.m5.16xlarge, ml.m5.24xlarge, ml.m5d.large, ml.m5d.xlarge, ml.m5d.2xlarge, ml.m5d.4xlarge, ml.m5d.8xlarge, ml.m5d.12xlarge, ml.m5d.16xlarge, ml.m5d.24xlarge, ml.c5.large, ml.c5.xlarge, ml.c5.2xlarge, ml.c5.4xlarge, ml.c5.9xlarge, ml.c5.12xlarge, ml.c5.18xlarge, ml.c5.24xlarge, ml.p3.2xlarge, ml.p3.8xlarge, ml.p3.16xlarge, ml.p3dn.24xlarge, ml.g4dn.xlarge, ml.g4dn.2xlarge, ml.g4dn.4xlarge, ml.g4dn.8xlarge, ml.g4dn.12xlarge, ml.g4dn.16xlarge, ml.r5.large, ml.r5.xlarge, ml.r5.2xlarge, ml.r5.4xlarge, ml.r5.8xlarge, ml.r5.12xlarge, ml.r5.16xlarge, ml.r5.24xlarge, ml.g5.xlarge, ml.g5.2xlarge, ml.g5.4xlarge, ml.g5.8xlarge, ml.g5.16xlarge, ml.g5.12xlarge, ml.g5.24xlarge, ml.g5.48xlarge, ml.g6.xlarge, ml.g6.2xlarge, ml.g6.4xlarge, ml.g6.8xlarge, ml.g6.12xlarge, ml.g6.16xlarge, ml.g6.24xlarge, ml.g6.48xlarge, ml.g6e.xlarge, ml.g6e.2xlarge, ml.g6e.4xlarge, ml.g6e.8xlarge, ml.g6e.12xlarge, ml.g6e.16xlarge, ml.g6e.24xlarge, ml.g6e.48xlarge, ml.geospatial.interactive, ml.p4d.24xlarge, ml.p4de.24xlarge, ml.trn1.2xlarge, ml.trn1.32xlarge, ml.trn1n.32xlarge, ml.p5.48xlarge, ml.m6i.large, ml.m6i.xlarge, ml.m6i.2xlarge, ml.m6i.4xlarge, ml.m6i.8xlarge, ml.m6i.12xlarge, ml.m6i.16xlarge, ml.m6i.24xlarge, ml.m6i.32xlarge, ml.m7i.large, ml.m7i.xlarge, ml.m7i.2xlarge, ml.m7i.4xlarge, ml.m7i.8xlarge, ml.m7i.12xlarge, ml.m7i.16xlarge, ml.m7i.24xlarge, ml.m7i.48xlarge, ml.c6i.large, ml.c6i.xlarge, ml.c6i.2xlarge, ml.c6i.4xlarge, ml.c6i.8xlarge, ml.c6i.12xlarge, ml.c6i.16xlarge, ml.c6i.24xlarge, ml.c6i.32xlarge, ml.c7i.large, ml.c7i.xlarge, ml.c7i.2xlarge, ml.c7i.4xlarge, ml.c7i.8xlarge, ml.c7i.12xlarge, ml.c7i.16xlarge, ml.c7i.24xlarge, ml.c7i.48xlarge, ml.r6i.large, ml.r6i.xlarge, ml.r6i.2xlarge, ml.r6i.4xlarge, ml.r6i.8xlarge, ml.r6i.12xlarge, ml.r6i.16xlarge, ml.r6i.24xlarge, ml.r6i.32xlarge, ml.r7i.large, ml.r7i.xlarge, ml.r7i.2xlarge, ml.r7i.4xlarge, ml.r7i.8xlarge, ml.r7i.12xlarge, ml.r7i.16xlarge, ml.r7i.24xlarge, ml.r7i.48xlarge, ml.m6id.large, ml.m6id.xlarge, ml.m6id.2xlarge, ml.m6id.4xlarge, ml.m6id.8xlarge, ml.m6id.12xlarge, ml.m6id.16xlarge, ml.m6id.24xlarge, ml.m6id.32xlarge, ml.c6id.large, ml.c6id.xlarge, ml.c6id.2xlarge, ml.c6id.4xlarge, ml.c6id.8xlarge, ml.c6id.12xlarge, ml.c6id.16xlarge, ml.c6id.24xlarge, ml.c6id.32xlarge, ml.r6id.large, ml.r6id.xlarge, ml.r6id.2xlarge, ml.r6id.4xlarge, ml.r6id.8xlarge, ml.r6id.12xlarge, ml.r6id.16xlarge, ml.r6id.24xlarge, ml.r6id.32xlarge
        lifecycle_config_arn: "StudioLifecycleConfigArn",
      },
    },
    r_studio_server_pro_app_settings: {
      access_status: "ENABLED", # accepts ENABLED, DISABLED
      user_group: "R_STUDIO_ADMIN", # accepts R_STUDIO_ADMIN, R_STUDIO_USER
    },
    r_session_app_settings: {
      default_resource_spec: {
        sage_maker_image_arn: "ImageArn",
        sage_maker_image_version_arn: "ImageVersionArn",
        sage_maker_image_version_alias: "ImageVersionAlias",
        instance_type: "system", # accepts system, ml.t3.micro, ml.t3.small, ml.t3.medium, ml.t3.large, ml.t3.xlarge, ml.t3.2xlarge, ml.m5.large, ml.m5.xlarge, ml.m5.2xlarge, ml.m5.4xlarge, ml.m5.8xlarge, ml.m5.12xlarge, ml.m5.16xlarge, ml.m5.24xlarge, ml.m5d.large, ml.m5d.xlarge, ml.m5d.2xlarge, ml.m5d.4xlarge, ml.m5d.8xlarge, ml.m5d.12xlarge, ml.m5d.16xlarge, ml.m5d.24xlarge, ml.c5.large, ml.c5.xlarge, ml.c5.2xlarge, ml.c5.4xlarge, ml.c5.9xlarge, ml.c5.12xlarge, ml.c5.18xlarge, ml.c5.24xlarge, ml.p3.2xlarge, ml.p3.8xlarge, ml.p3.16xlarge, ml.p3dn.24xlarge, ml.g4dn.xlarge, ml.g4dn.2xlarge, ml.g4dn.4xlarge, ml.g4dn.8xlarge, ml.g4dn.12xlarge, ml.g4dn.16xlarge, ml.r5.large, ml.r5.xlarge, ml.r5.2xlarge, ml.r5.4xlarge, ml.r5.8xlarge, ml.r5.12xlarge, ml.r5.16xlarge, ml.r5.24xlarge, ml.g5.xlarge, ml.g5.2xlarge, ml.g5.4xlarge, ml.g5.8xlarge, ml.g5.16xlarge, ml.g5.12xlarge, ml.g5.24xlarge, ml.g5.48xlarge, ml.g6.xlarge, ml.g6.2xlarge, ml.g6.4xlarge, ml.g6.8xlarge, ml.g6.12xlarge, ml.g6.16xlarge, ml.g6.24xlarge, ml.g6.48xlarge, ml.g6e.xlarge, ml.g6e.2xlarge, ml.g6e.4xlarge, ml.g6e.8xlarge, ml.g6e.12xlarge, ml.g6e.16xlarge, ml.g6e.24xlarge, ml.g6e.48xlarge, ml.geospatial.interactive, ml.p4d.24xlarge, ml.p4de.24xlarge, ml.trn1.2xlarge, ml.trn1.32xlarge, ml.trn1n.32xlarge, ml.p5.48xlarge, ml.m6i.large, ml.m6i.xlarge, ml.m6i.2xlarge, ml.m6i.4xlarge, ml.m6i.8xlarge, ml.m6i.12xlarge, ml.m6i.16xlarge, ml.m6i.24xlarge, ml.m6i.32xlarge, ml.m7i.large, ml.m7i.xlarge, ml.m7i.2xlarge, ml.m7i.4xlarge, ml.m7i.8xlarge, ml.m7i.12xlarge, ml.m7i.16xlarge, ml.m7i.24xlarge, ml.m7i.48xlarge, ml.c6i.large, ml.c6i.xlarge, ml.c6i.2xlarge, ml.c6i.4xlarge, ml.c6i.8xlarge, ml.c6i.12xlarge, ml.c6i.16xlarge, ml.c6i.24xlarge, ml.c6i.32xlarge, ml.c7i.large, ml.c7i.xlarge, ml.c7i.2xlarge, ml.c7i.4xlarge, ml.c7i.8xlarge, ml.c7i.12xlarge, ml.c7i.16xlarge, ml.c7i.24xlarge, ml.c7i.48xlarge, ml.r6i.large, ml.r6i.xlarge, ml.r6i.2xlarge, ml.r6i.4xlarge, ml.r6i.8xlarge, ml.r6i.12xlarge, ml.r6i.16xlarge, ml.r6i.24xlarge, ml.r6i.32xlarge, ml.r7i.large, ml.r7i.xlarge, ml.r7i.2xlarge, ml.r7i.4xlarge, ml.r7i.8xlarge, ml.r7i.12xlarge, ml.r7i.16xlarge, ml.r7i.24xlarge, ml.r7i.48xlarge, ml.m6id.large, ml.m6id.xlarge, ml.m6id.2xlarge, ml.m6id.4xlarge, ml.m6id.8xlarge, ml.m6id.12xlarge, ml.m6id.16xlarge, ml.m6id.24xlarge, ml.m6id.32xlarge, ml.c6id.large, ml.c6id.xlarge, ml.c6id.2xlarge, ml.c6id.4xlarge, ml.c6id.8xlarge, ml.c6id.12xlarge, ml.c6id.16xlarge, ml.c6id.24xlarge, ml.c6id.32xlarge, ml.r6id.large, ml.r6id.xlarge, ml.r6id.2xlarge, ml.r6id.4xlarge, ml.r6id.8xlarge, ml.r6id.12xlarge, ml.r6id.16xlarge, ml.r6id.24xlarge, ml.r6id.32xlarge
        lifecycle_config_arn: "StudioLifecycleConfigArn",
      },
      custom_images: [
        {
          image_name: "ImageName", # required
          image_version_number: 1,
          app_image_config_name: "AppImageConfigName", # required
        },
      ],
    },
    canvas_app_settings: {
      time_series_forecasting_settings: {
        status: "ENABLED", # accepts ENABLED, DISABLED
        amazon_forecast_role_arn: "RoleArn",
      },
      model_register_settings: {
        status: "ENABLED", # accepts ENABLED, DISABLED
        cross_account_model_register_role_arn: "RoleArn",
      },
      workspace_settings: {
        s3_artifact_path: "S3Uri",
        s3_kms_key_id: "KmsKeyId",
      },
      identity_provider_o_auth_settings: [
        {
          data_source_name: "SalesforceGenie", # accepts SalesforceGenie, Snowflake
          status: "ENABLED", # accepts ENABLED, DISABLED
          secret_arn: "SecretArn",
        },
      ],
      direct_deploy_settings: {
        status: "ENABLED", # accepts ENABLED, DISABLED
      },
      kendra_settings: {
        status: "ENABLED", # accepts ENABLED, DISABLED
      },
      generative_ai_settings: {
        amazon_bedrock_role_arn: "RoleArn",
      },
      emr_serverless_settings: {
        execution_role_arn: "RoleArn",
        status: "ENABLED", # accepts ENABLED, DISABLED
      },
    },
    code_editor_app_settings: {
      default_resource_spec: {
        sage_maker_image_arn: "ImageArn",
        sage_maker_image_version_arn: "ImageVersionArn",
        sage_maker_image_version_alias: "ImageVersionAlias",
        instance_type: "system", # accepts system, ml.t3.micro, ml.t3.small, ml.t3.medium, ml.t3.large, ml.t3.xlarge, ml.t3.2xlarge, ml.m5.large, ml.m5.xlarge, ml.m5.2xlarge, ml.m5.4xlarge, ml.m5.8xlarge, ml.m5.12xlarge, ml.m5.16xlarge, ml.m5.24xlarge, ml.m5d.large, ml.m5d.xlarge, ml.m5d.2xlarge, ml.m5d.4xlarge, ml.m5d.8xlarge, ml.m5d.12xlarge, ml.m5d.16xlarge, ml.m5d.24xlarge, ml.c5.large, ml.c5.xlarge, ml.c5.2xlarge, ml.c5.4xlarge, ml.c5.9xlarge, ml.c5.12xlarge, ml.c5.18xlarge, ml.c5.24xlarge, ml.p3.2xlarge, ml.p3.8xlarge, ml.p3.16xlarge, ml.p3dn.24xlarge, ml.g4dn.xlarge, ml.g4dn.2xlarge, ml.g4dn.4xlarge, ml.g4dn.8xlarge, ml.g4dn.12xlarge, ml.g4dn.16xlarge, ml.r5.large, ml.r5.xlarge, ml.r5.2xlarge, ml.r5.4xlarge, ml.r5.8xlarge, ml.r5.12xlarge, ml.r5.16xlarge, ml.r5.24xlarge, ml.g5.xlarge, ml.g5.2xlarge, ml.g5.4xlarge, ml.g5.8xlarge, ml.g5.16xlarge, ml.g5.12xlarge, ml.g5.24xlarge, ml.g5.48xlarge, ml.g6.xlarge, ml.g6.2xlarge, ml.g6.4xlarge, ml.g6.8xlarge, ml.g6.12xlarge, ml.g6.16xlarge, ml.g6.24xlarge, ml.g6.48xlarge, ml.g6e.xlarge, ml.g6e.2xlarge, ml.g6e.4xlarge, ml.g6e.8xlarge, ml.g6e.12xlarge, ml.g6e.16xlarge, ml.g6e.24xlarge, ml.g6e.48xlarge, ml.geospatial.interactive, ml.p4d.24xlarge, ml.p4de.24xlarge, ml.trn1.2xlarge, ml.trn1.32xlarge, ml.trn1n.32xlarge, ml.p5.48xlarge, ml.m6i.large, ml.m6i.xlarge, ml.m6i.2xlarge, ml.m6i.4xlarge, ml.m6i.8xlarge, ml.m6i.12xlarge, ml.m6i.16xlarge, ml.m6i.24xlarge, ml.m6i.32xlarge, ml.m7i.large, ml.m7i.xlarge, ml.m7i.2xlarge, ml.m7i.4xlarge, ml.m7i.8xlarge, ml.m7i.12xlarge, ml.m7i.16xlarge, ml.m7i.24xlarge, ml.m7i.48xlarge, ml.c6i.large, ml.c6i.xlarge, ml.c6i.2xlarge, ml.c6i.4xlarge, ml.c6i.8xlarge, ml.c6i.12xlarge, ml.c6i.16xlarge, ml.c6i.24xlarge, ml.c6i.32xlarge, ml.c7i.large, ml.c7i.xlarge, ml.c7i.2xlarge, ml.c7i.4xlarge, ml.c7i.8xlarge, ml.c7i.12xlarge, ml.c7i.16xlarge, ml.c7i.24xlarge, ml.c7i.48xlarge, ml.r6i.large, ml.r6i.xlarge, ml.r6i.2xlarge, ml.r6i.4xlarge, ml.r6i.8xlarge, ml.r6i.12xlarge, ml.r6i.16xlarge, ml.r6i.24xlarge, ml.r6i.32xlarge, ml.r7i.large, ml.r7i.xlarge, ml.r7i.2xlarge, ml.r7i.4xlarge, ml.r7i.8xlarge, ml.r7i.12xlarge, ml.r7i.16xlarge, ml.r7i.24xlarge, ml.r7i.48xlarge, ml.m6id.large, ml.m6id.xlarge, ml.m6id.2xlarge, ml.m6id.4xlarge, ml.m6id.8xlarge, ml.m6id.12xlarge, ml.m6id.16xlarge, ml.m6id.24xlarge, ml.m6id.32xlarge, ml.c6id.large, ml.c6id.xlarge, ml.c6id.2xlarge, ml.c6id.4xlarge, ml.c6id.8xlarge, ml.c6id.12xlarge, ml.c6id.16xlarge, ml.c6id.24xlarge, ml.c6id.32xlarge, ml.r6id.large, ml.r6id.xlarge, ml.r6id.2xlarge, ml.r6id.4xlarge, ml.r6id.8xlarge, ml.r6id.12xlarge, ml.r6id.16xlarge, ml.r6id.24xlarge, ml.r6id.32xlarge
        lifecycle_config_arn: "StudioLifecycleConfigArn",
      },
      custom_images: [
        {
          image_name: "ImageName", # required
          image_version_number: 1,
          app_image_config_name: "AppImageConfigName", # required
        },
      ],
      lifecycle_config_arns: ["StudioLifecycleConfigArn"],
      app_lifecycle_management: {
        idle_settings: {
          lifecycle_management: "ENABLED", # accepts ENABLED, DISABLED
          idle_timeout_in_minutes: 1,
          min_idle_timeout_in_minutes: 1,
          max_idle_timeout_in_minutes: 1,
        },
      },
      built_in_lifecycle_config_arn: "StudioLifecycleConfigArn",
    },
    jupyter_lab_app_settings: {
      default_resource_spec: {
        sage_maker_image_arn: "ImageArn",
        sage_maker_image_version_arn: "ImageVersionArn",
        sage_maker_image_version_alias: "ImageVersionAlias",
        instance_type: "system", # accepts system, ml.t3.micro, ml.t3.small, ml.t3.medium, ml.t3.large, ml.t3.xlarge, ml.t3.2xlarge, ml.m5.large, ml.m5.xlarge, ml.m5.2xlarge, ml.m5.4xlarge, ml.m5.8xlarge, ml.m5.12xlarge, ml.m5.16xlarge, ml.m5.24xlarge, ml.m5d.large, ml.m5d.xlarge, ml.m5d.2xlarge, ml.m5d.4xlarge, ml.m5d.8xlarge, ml.m5d.12xlarge, ml.m5d.16xlarge, ml.m5d.24xlarge, ml.c5.large, ml.c5.xlarge, ml.c5.2xlarge, ml.c5.4xlarge, ml.c5.9xlarge, ml.c5.12xlarge, ml.c5.18xlarge, ml.c5.24xlarge, ml.p3.2xlarge, ml.p3.8xlarge, ml.p3.16xlarge, ml.p3dn.24xlarge, ml.g4dn.xlarge, ml.g4dn.2xlarge, ml.g4dn.4xlarge, ml.g4dn.8xlarge, ml.g4dn.12xlarge, ml.g4dn.16xlarge, ml.r5.large, ml.r5.xlarge, ml.r5.2xlarge, ml.r5.4xlarge, ml.r5.8xlarge, ml.r5.12xlarge, ml.r5.16xlarge, ml.r5.24xlarge, ml.g5.xlarge, ml.g5.2xlarge, ml.g5.4xlarge, ml.g5.8xlarge, ml.g5.16xlarge, ml.g5.12xlarge, ml.g5.24xlarge, ml.g5.48xlarge, ml.g6.xlarge, ml.g6.2xlarge, ml.g6.4xlarge, ml.g6.8xlarge, ml.g6.12xlarge, ml.g6.16xlarge, ml.g6.24xlarge, ml.g6.48xlarge, ml.g6e.xlarge, ml.g6e.2xlarge, ml.g6e.4xlarge, ml.g6e.8xlarge, ml.g6e.12xlarge, ml.g6e.16xlarge, ml.g6e.24xlarge, ml.g6e.48xlarge, ml.geospatial.interactive, ml.p4d.24xlarge, ml.p4de.24xlarge, ml.trn1.2xlarge, ml.trn1.32xlarge, ml.trn1n.32xlarge, ml.p5.48xlarge, ml.m6i.large, ml.m6i.xlarge, ml.m6i.2xlarge, ml.m6i.4xlarge, ml.m6i.8xlarge, ml.m6i.12xlarge, ml.m6i.16xlarge, ml.m6i.24xlarge, ml.m6i.32xlarge, ml.m7i.large, ml.m7i.xlarge, ml.m7i.2xlarge, ml.m7i.4xlarge, ml.m7i.8xlarge, ml.m7i.12xlarge, ml.m7i.16xlarge, ml.m7i.24xlarge, ml.m7i.48xlarge, ml.c6i.large, ml.c6i.xlarge, ml.c6i.2xlarge, ml.c6i.4xlarge, ml.c6i.8xlarge, ml.c6i.12xlarge, ml.c6i.16xlarge, ml.c6i.24xlarge, ml.c6i.32xlarge, ml.c7i.large, ml.c7i.xlarge, ml.c7i.2xlarge, ml.c7i.4xlarge, ml.c7i.8xlarge, ml.c7i.12xlarge, ml.c7i.16xlarge, ml.c7i.24xlarge, ml.c7i.48xlarge, ml.r6i.large, ml.r6i.xlarge, ml.r6i.2xlarge, ml.r6i.4xlarge, ml.r6i.8xlarge, ml.r6i.12xlarge, ml.r6i.16xlarge, ml.r6i.24xlarge, ml.r6i.32xlarge, ml.r7i.large, ml.r7i.xlarge, ml.r7i.2xlarge, ml.r7i.4xlarge, ml.r7i.8xlarge, ml.r7i.12xlarge, ml.r7i.16xlarge, ml.r7i.24xlarge, ml.r7i.48xlarge, ml.m6id.large, ml.m6id.xlarge, ml.m6id.2xlarge, ml.m6id.4xlarge, ml.m6id.8xlarge, ml.m6id.12xlarge, ml.m6id.16xlarge, ml.m6id.24xlarge, ml.m6id.32xlarge, ml.c6id.large, ml.c6id.xlarge, ml.c6id.2xlarge, ml.c6id.4xlarge, ml.c6id.8xlarge, ml.c6id.12xlarge, ml.c6id.16xlarge, ml.c6id.24xlarge, ml.c6id.32xlarge, ml.r6id.large, ml.r6id.xlarge, ml.r6id.2xlarge, ml.r6id.4xlarge, ml.r6id.8xlarge, ml.r6id.12xlarge, ml.r6id.16xlarge, ml.r6id.24xlarge, ml.r6id.32xlarge
        lifecycle_config_arn: "StudioLifecycleConfigArn",
      },
      custom_images: [
        {
          image_name: "ImageName", # required
          image_version_number: 1,
          app_image_config_name: "AppImageConfigName", # required
        },
      ],
      lifecycle_config_arns: ["StudioLifecycleConfigArn"],
      code_repositories: [
        {
          repository_url: "RepositoryUrl", # required
        },
      ],
      app_lifecycle_management: {
        idle_settings: {
          lifecycle_management: "ENABLED", # accepts ENABLED, DISABLED
          idle_timeout_in_minutes: 1,
          min_idle_timeout_in_minutes: 1,
          max_idle_timeout_in_minutes: 1,
        },
      },
      emr_settings: {
        assumable_role_arns: ["RoleArn"],
        execution_role_arns: ["RoleArn"],
      },
      built_in_lifecycle_config_arn: "StudioLifecycleConfigArn",
    },
    space_storage_settings: {
      default_ebs_storage_settings: {
        default_ebs_volume_size_in_gb: 1, # required
        maximum_ebs_volume_size_in_gb: 1, # required
      },
    },
    default_landing_uri: "LandingUri",
    studio_web_portal: "ENABLED", # accepts ENABLED, DISABLED
    custom_posix_user_config: {
      uid: 1, # required
      gid: 1, # required
    },
    custom_file_system_configs: [
      {
        efs_file_system_config: {
          file_system_id: "FileSystemId", # required
          file_system_path: "FileSystemPath",
        },
        f_sx_lustre_file_system_config: {
          file_system_id: "FileSystemId", # required
          file_system_path: "FileSystemPath",
        },
      },
    ],
    studio_web_portal_settings: {
      hidden_ml_tools: ["DataWrangler"], # accepts DataWrangler, FeatureStore, EmrClusters, AutoMl, Experiments, Training, ModelEvaluation, Pipelines, Models, JumpStart, InferenceRecommender, Endpoints, Projects, InferenceOptimization, PerformanceEvaluation, HyperPodClusters, LakeraGuard, Comet, DeepchecksLLMEvaluation, Fiddler
      hidden_app_types: ["JupyterServer"], # accepts JupyterServer, KernelGateway, DetailedProfiler, TensorBoard, CodeEditor, JupyterLab, RStudioServerPro, RSessionGateway, Canvas
      hidden_instance_types: ["system"], # accepts system, ml.t3.micro, ml.t3.small, ml.t3.medium, ml.t3.large, ml.t3.xlarge, ml.t3.2xlarge, ml.m5.large, ml.m5.xlarge, ml.m5.2xlarge, ml.m5.4xlarge, ml.m5.8xlarge, ml.m5.12xlarge, ml.m5.16xlarge, ml.m5.24xlarge, ml.m5d.large, ml.m5d.xlarge, ml.m5d.2xlarge, ml.m5d.4xlarge, ml.m5d.8xlarge, ml.m5d.12xlarge, ml.m5d.16xlarge, ml.m5d.24xlarge, ml.c5.large, ml.c5.xlarge, ml.c5.2xlarge, ml.c5.4xlarge, ml.c5.9xlarge, ml.c5.12xlarge, ml.c5.18xlarge, ml.c5.24xlarge, ml.p3.2xlarge, ml.p3.8xlarge, ml.p3.16xlarge, ml.p3dn.24xlarge, ml.g4dn.xlarge, ml.g4dn.2xlarge, ml.g4dn.4xlarge, ml.g4dn.8xlarge, ml.g4dn.12xlarge, ml.g4dn.16xlarge, ml.r5.large, ml.r5.xlarge, ml.r5.2xlarge, ml.r5.4xlarge, ml.r5.8xlarge, ml.r5.12xlarge, ml.r5.16xlarge, ml.r5.24xlarge, ml.g5.xlarge, ml.g5.2xlarge, ml.g5.4xlarge, ml.g5.8xlarge, ml.g5.16xlarge, ml.g5.12xlarge, ml.g5.24xlarge, ml.g5.48xlarge, ml.g6.xlarge, ml.g6.2xlarge, ml.g6.4xlarge, ml.g6.8xlarge, ml.g6.12xlarge, ml.g6.16xlarge, ml.g6.24xlarge, ml.g6.48xlarge, ml.g6e.xlarge, ml.g6e.2xlarge, ml.g6e.4xlarge, ml.g6e.8xlarge, ml.g6e.12xlarge, ml.g6e.16xlarge, ml.g6e.24xlarge, ml.g6e.48xlarge, ml.geospatial.interactive, ml.p4d.24xlarge, ml.p4de.24xlarge, ml.trn1.2xlarge, ml.trn1.32xlarge, ml.trn1n.32xlarge, ml.p5.48xlarge, ml.m6i.large, ml.m6i.xlarge, ml.m6i.2xlarge, ml.m6i.4xlarge, ml.m6i.8xlarge, ml.m6i.12xlarge, ml.m6i.16xlarge, ml.m6i.24xlarge, ml.m6i.32xlarge, ml.m7i.large, ml.m7i.xlarge, ml.m7i.2xlarge, ml.m7i.4xlarge, ml.m7i.8xlarge, ml.m7i.12xlarge, ml.m7i.16xlarge, ml.m7i.24xlarge, ml.m7i.48xlarge, ml.c6i.large, ml.c6i.xlarge, ml.c6i.2xlarge, ml.c6i.4xlarge, ml.c6i.8xlarge, ml.c6i.12xlarge, ml.c6i.16xlarge, ml.c6i.24xlarge, ml.c6i.32xlarge, ml.c7i.large, ml.c7i.xlarge, ml.c7i.2xlarge, ml.c7i.4xlarge, ml.c7i.8xlarge, ml.c7i.12xlarge, ml.c7i.16xlarge, ml.c7i.24xlarge, ml.c7i.48xlarge, ml.r6i.large, ml.r6i.xlarge, ml.r6i.2xlarge, ml.r6i.4xlarge, ml.r6i.8xlarge, ml.r6i.12xlarge, ml.r6i.16xlarge, ml.r6i.24xlarge, ml.r6i.32xlarge, ml.r7i.large, ml.r7i.xlarge, ml.r7i.2xlarge, ml.r7i.4xlarge, ml.r7i.8xlarge, ml.r7i.12xlarge, ml.r7i.16xlarge, ml.r7i.24xlarge, ml.r7i.48xlarge, ml.m6id.large, ml.m6id.xlarge, ml.m6id.2xlarge, ml.m6id.4xlarge, ml.m6id.8xlarge, ml.m6id.12xlarge, ml.m6id.16xlarge, ml.m6id.24xlarge, ml.m6id.32xlarge, ml.c6id.large, ml.c6id.xlarge, ml.c6id.2xlarge, ml.c6id.4xlarge, ml.c6id.8xlarge, ml.c6id.12xlarge, ml.c6id.16xlarge, ml.c6id.24xlarge, ml.c6id.32xlarge, ml.r6id.large, ml.r6id.xlarge, ml.r6id.2xlarge, ml.r6id.4xlarge, ml.r6id.8xlarge, ml.r6id.12xlarge, ml.r6id.16xlarge, ml.r6id.24xlarge, ml.r6id.32xlarge
      hidden_sage_maker_image_version_aliases: [
        {
          sage_maker_image_name: "sagemaker_distribution", # accepts sagemaker_distribution
          version_aliases: ["ImageVersionAliasPattern"],
        },
      ],
    },
    auto_mount_home_efs: "Enabled", # accepts Enabled, Disabled, DefaultAsDomain
  },
  domain_settings: {
    security_group_ids: ["SecurityGroupId"],
    r_studio_server_pro_domain_settings: {
      domain_execution_role_arn: "RoleArn", # required
      r_studio_connect_url: "String",
      r_studio_package_manager_url: "String",
      default_resource_spec: {
        sage_maker_image_arn: "ImageArn",
        sage_maker_image_version_arn: "ImageVersionArn",
        sage_maker_image_version_alias: "ImageVersionAlias",
        instance_type: "system", # accepts system, ml.t3.micro, ml.t3.small, ml.t3.medium, ml.t3.large, ml.t3.xlarge, ml.t3.2xlarge, ml.m5.large, ml.m5.xlarge, ml.m5.2xlarge, ml.m5.4xlarge, ml.m5.8xlarge, ml.m5.12xlarge, ml.m5.16xlarge, ml.m5.24xlarge, ml.m5d.large, ml.m5d.xlarge, ml.m5d.2xlarge, ml.m5d.4xlarge, ml.m5d.8xlarge, ml.m5d.12xlarge, ml.m5d.16xlarge, ml.m5d.24xlarge, ml.c5.large, ml.c5.xlarge, ml.c5.2xlarge, ml.c5.4xlarge, ml.c5.9xlarge, ml.c5.12xlarge, ml.c5.18xlarge, ml.c5.24xlarge, ml.p3.2xlarge, ml.p3.8xlarge, ml.p3.16xlarge, ml.p3dn.24xlarge, ml.g4dn.xlarge, ml.g4dn.2xlarge, ml.g4dn.4xlarge, ml.g4dn.8xlarge, ml.g4dn.12xlarge, ml.g4dn.16xlarge, ml.r5.large, ml.r5.xlarge, ml.r5.2xlarge, ml.r5.4xlarge, ml.r5.8xlarge, ml.r5.12xlarge, ml.r5.16xlarge, ml.r5.24xlarge, ml.g5.xlarge, ml.g5.2xlarge, ml.g5.4xlarge, ml.g5.8xlarge, ml.g5.16xlarge, ml.g5.12xlarge, ml.g5.24xlarge, ml.g5.48xlarge, ml.g6.xlarge, ml.g6.2xlarge, ml.g6.4xlarge, ml.g6.8xlarge, ml.g6.12xlarge, ml.g6.16xlarge, ml.g6.24xlarge, ml.g6.48xlarge, ml.g6e.xlarge, ml.g6e.2xlarge, ml.g6e.4xlarge, ml.g6e.8xlarge, ml.g6e.12xlarge, ml.g6e.16xlarge, ml.g6e.24xlarge, ml.g6e.48xlarge, ml.geospatial.interactive, ml.p4d.24xlarge, ml.p4de.24xlarge, ml.trn1.2xlarge, ml.trn1.32xlarge, ml.trn1n.32xlarge, ml.p5.48xlarge, ml.m6i.large, ml.m6i.xlarge, ml.m6i.2xlarge, ml.m6i.4xlarge, ml.m6i.8xlarge, ml.m6i.12xlarge, ml.m6i.16xlarge, ml.m6i.24xlarge, ml.m6i.32xlarge, ml.m7i.large, ml.m7i.xlarge, ml.m7i.2xlarge, ml.m7i.4xlarge, ml.m7i.8xlarge, ml.m7i.12xlarge, ml.m7i.16xlarge, ml.m7i.24xlarge, ml.m7i.48xlarge, ml.c6i.large, ml.c6i.xlarge, ml.c6i.2xlarge, ml.c6i.4xlarge, ml.c6i.8xlarge, ml.c6i.12xlarge, ml.c6i.16xlarge, ml.c6i.24xlarge, ml.c6i.32xlarge, ml.c7i.large, ml.c7i.xlarge, ml.c7i.2xlarge, ml.c7i.4xlarge, ml.c7i.8xlarge, ml.c7i.12xlarge, ml.c7i.16xlarge, ml.c7i.24xlarge, ml.c7i.48xlarge, ml.r6i.large, ml.r6i.xlarge, ml.r6i.2xlarge, ml.r6i.4xlarge, ml.r6i.8xlarge, ml.r6i.12xlarge, ml.r6i.16xlarge, ml.r6i.24xlarge, ml.r6i.32xlarge, ml.r7i.large, ml.r7i.xlarge, ml.r7i.2xlarge, ml.r7i.4xlarge, ml.r7i.8xlarge, ml.r7i.12xlarge, ml.r7i.16xlarge, ml.r7i.24xlarge, ml.r7i.48xlarge, ml.m6id.large, ml.m6id.xlarge, ml.m6id.2xlarge, ml.m6id.4xlarge, ml.m6id.8xlarge, ml.m6id.12xlarge, ml.m6id.16xlarge, ml.m6id.24xlarge, ml.m6id.32xlarge, ml.c6id.large, ml.c6id.xlarge, ml.c6id.2xlarge, ml.c6id.4xlarge, ml.c6id.8xlarge, ml.c6id.12xlarge, ml.c6id.16xlarge, ml.c6id.24xlarge, ml.c6id.32xlarge, ml.r6id.large, ml.r6id.xlarge, ml.r6id.2xlarge, ml.r6id.4xlarge, ml.r6id.8xlarge, ml.r6id.12xlarge, ml.r6id.16xlarge, ml.r6id.24xlarge, ml.r6id.32xlarge
        lifecycle_config_arn: "StudioLifecycleConfigArn",
      },
    },
    execution_role_identity_config: "USER_PROFILE_NAME", # accepts USER_PROFILE_NAME, DISABLED
    docker_settings: {
      enable_docker_access: "ENABLED", # accepts ENABLED, DISABLED
      vpc_only_trusted_accounts: ["AccountId"],
    },
    amazon_q_settings: {
      status: "ENABLED", # accepts ENABLED, DISABLED
      q_profile_arn: "QProfileArn",
    },
  },
  subnet_ids: ["SubnetId"], # required
  vpc_id: "VpcId", # required
  tags: [
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
  app_network_access_type: "PublicInternetOnly", # accepts PublicInternetOnly, VpcOnly
  home_efs_file_system_kms_key_id: "KmsKeyId",
  kms_key_id: "KmsKeyId",
  app_security_group_management: "Service", # accepts Service, Customer
  tag_propagation: "ENABLED", # accepts ENABLED, DISABLED
  default_space_settings: {
    execution_role: "RoleArn",
    security_groups: ["SecurityGroupId"],
    jupyter_server_app_settings: {
      default_resource_spec: {
        sage_maker_image_arn: "ImageArn",
        sage_maker_image_version_arn: "ImageVersionArn",
        sage_maker_image_version_alias: "ImageVersionAlias",
        instance_type: "system", # accepts system, ml.t3.micro, ml.t3.small, ml.t3.medium, ml.t3.large, ml.t3.xlarge, ml.t3.2xlarge, ml.m5.large, ml.m5.xlarge, ml.m5.2xlarge, ml.m5.4xlarge, ml.m5.8xlarge, ml.m5.12xlarge, ml.m5.16xlarge, ml.m5.24xlarge, ml.m5d.large, ml.m5d.xlarge, ml.m5d.2xlarge, ml.m5d.4xlarge, ml.m5d.8xlarge, ml.m5d.12xlarge, ml.m5d.16xlarge, ml.m5d.24xlarge, ml.c5.large, ml.c5.xlarge, ml.c5.2xlarge, ml.c5.4xlarge, ml.c5.9xlarge, ml.c5.12xlarge, ml.c5.18xlarge, ml.c5.24xlarge, ml.p3.2xlarge, ml.p3.8xlarge, ml.p3.16xlarge, ml.p3dn.24xlarge, ml.g4dn.xlarge, ml.g4dn.2xlarge, ml.g4dn.4xlarge, ml.g4dn.8xlarge, ml.g4dn.12xlarge, ml.g4dn.16xlarge, ml.r5.large, ml.r5.xlarge, ml.r5.2xlarge, ml.r5.4xlarge, ml.r5.8xlarge, ml.r5.12xlarge, ml.r5.16xlarge, ml.r5.24xlarge, ml.g5.xlarge, ml.g5.2xlarge, ml.g5.4xlarge, ml.g5.8xlarge, ml.g5.16xlarge, ml.g5.12xlarge, ml.g5.24xlarge, ml.g5.48xlarge, ml.g6.xlarge, ml.g6.2xlarge, ml.g6.4xlarge, ml.g6.8xlarge, ml.g6.12xlarge, ml.g6.16xlarge, ml.g6.24xlarge, ml.g6.48xlarge, ml.g6e.xlarge, ml.g6e.2xlarge, ml.g6e.4xlarge, ml.g6e.8xlarge, ml.g6e.12xlarge, ml.g6e.16xlarge, ml.g6e.24xlarge, ml.g6e.48xlarge, ml.geospatial.interactive, ml.p4d.24xlarge, ml.p4de.24xlarge, ml.trn1.2xlarge, ml.trn1.32xlarge, ml.trn1n.32xlarge, ml.p5.48xlarge, ml.m6i.large, ml.m6i.xlarge, ml.m6i.2xlarge, ml.m6i.4xlarge, ml.m6i.8xlarge, ml.m6i.12xlarge, ml.m6i.16xlarge, ml.m6i.24xlarge, ml.m6i.32xlarge, ml.m7i.large, ml.m7i.xlarge, ml.m7i.2xlarge, ml.m7i.4xlarge, ml.m7i.8xlarge, ml.m7i.12xlarge, ml.m7i.16xlarge, ml.m7i.24xlarge, ml.m7i.48xlarge, ml.c6i.large, ml.c6i.xlarge, ml.c6i.2xlarge, ml.c6i.4xlarge, ml.c6i.8xlarge, ml.c6i.12xlarge, ml.c6i.16xlarge, ml.c6i.24xlarge, ml.c6i.32xlarge, ml.c7i.large, ml.c7i.xlarge, ml.c7i.2xlarge, ml.c7i.4xlarge, ml.c7i.8xlarge, ml.c7i.12xlarge, ml.c7i.16xlarge, ml.c7i.24xlarge, ml.c7i.48xlarge, ml.r6i.large, ml.r6i.xlarge, ml.r6i.2xlarge, ml.r6i.4xlarge, ml.r6i.8xlarge, ml.r6i.12xlarge, ml.r6i.16xlarge, ml.r6i.24xlarge, ml.r6i.32xlarge, ml.r7i.large, ml.r7i.xlarge, ml.r7i.2xlarge, ml.r7i.4xlarge, ml.r7i.8xlarge, ml.r7i.12xlarge, ml.r7i.16xlarge, ml.r7i.24xlarge, ml.r7i.48xlarge, ml.m6id.large, ml.m6id.xlarge, ml.m6id.2xlarge, ml.m6id.4xlarge, ml.m6id.8xlarge, ml.m6id.12xlarge, ml.m6id.16xlarge, ml.m6id.24xlarge, ml.m6id.32xlarge, ml.c6id.large, ml.c6id.xlarge, ml.c6id.2xlarge, ml.c6id.4xlarge, ml.c6id.8xlarge, ml.c6id.12xlarge, ml.c6id.16xlarge, ml.c6id.24xlarge, ml.c6id.32xlarge, ml.r6id.large, ml.r6id.xlarge, ml.r6id.2xlarge, ml.r6id.4xlarge, ml.r6id.8xlarge, ml.r6id.12xlarge, ml.r6id.16xlarge, ml.r6id.24xlarge, ml.r6id.32xlarge
        lifecycle_config_arn: "StudioLifecycleConfigArn",
      },
      lifecycle_config_arns: ["StudioLifecycleConfigArn"],
      code_repositories: [
        {
          repository_url: "RepositoryUrl", # required
        },
      ],
    },
    kernel_gateway_app_settings: {
      default_resource_spec: {
        sage_maker_image_arn: "ImageArn",
        sage_maker_image_version_arn: "ImageVersionArn",
        sage_maker_image_version_alias: "ImageVersionAlias",
        instance_type: "system", # accepts system, ml.t3.micro, ml.t3.small, ml.t3.medium, ml.t3.large, ml.t3.xlarge, ml.t3.2xlarge, ml.m5.large, ml.m5.xlarge, ml.m5.2xlarge, ml.m5.4xlarge, ml.m5.8xlarge, ml.m5.12xlarge, ml.m5.16xlarge, ml.m5.24xlarge, ml.m5d.large, ml.m5d.xlarge, ml.m5d.2xlarge, ml.m5d.4xlarge, ml.m5d.8xlarge, ml.m5d.12xlarge, ml.m5d.16xlarge, ml.m5d.24xlarge, ml.c5.large, ml.c5.xlarge, ml.c5.2xlarge, ml.c5.4xlarge, ml.c5.9xlarge, ml.c5.12xlarge, ml.c5.18xlarge, ml.c5.24xlarge, ml.p3.2xlarge, ml.p3.8xlarge, ml.p3.16xlarge, ml.p3dn.24xlarge, ml.g4dn.xlarge, ml.g4dn.2xlarge, ml.g4dn.4xlarge, ml.g4dn.8xlarge, ml.g4dn.12xlarge, ml.g4dn.16xlarge, ml.r5.large, ml.r5.xlarge, ml.r5.2xlarge, ml.r5.4xlarge, ml.r5.8xlarge, ml.r5.12xlarge, ml.r5.16xlarge, ml.r5.24xlarge, ml.g5.xlarge, ml.g5.2xlarge, ml.g5.4xlarge, ml.g5.8xlarge, ml.g5.16xlarge, ml.g5.12xlarge, ml.g5.24xlarge, ml.g5.48xlarge, ml.g6.xlarge, ml.g6.2xlarge, ml.g6.4xlarge, ml.g6.8xlarge, ml.g6.12xlarge, ml.g6.16xlarge, ml.g6.24xlarge, ml.g6.48xlarge, ml.g6e.xlarge, ml.g6e.2xlarge, ml.g6e.4xlarge, ml.g6e.8xlarge, ml.g6e.12xlarge, ml.g6e.16xlarge, ml.g6e.24xlarge, ml.g6e.48xlarge, ml.geospatial.interactive, ml.p4d.24xlarge, ml.p4de.24xlarge, ml.trn1.2xlarge, ml.trn1.32xlarge, ml.trn1n.32xlarge, ml.p5.48xlarge, ml.m6i.large, ml.m6i.xlarge, ml.m6i.2xlarge, ml.m6i.4xlarge, ml.m6i.8xlarge, ml.m6i.12xlarge, ml.m6i.16xlarge, ml.m6i.24xlarge, ml.m6i.32xlarge, ml.m7i.large, ml.m7i.xlarge, ml.m7i.2xlarge, ml.m7i.4xlarge, ml.m7i.8xlarge, ml.m7i.12xlarge, ml.m7i.16xlarge, ml.m7i.24xlarge, ml.m7i.48xlarge, ml.c6i.large, ml.c6i.xlarge, ml.c6i.2xlarge, ml.c6i.4xlarge, ml.c6i.8xlarge, ml.c6i.12xlarge, ml.c6i.16xlarge, ml.c6i.24xlarge, ml.c6i.32xlarge, ml.c7i.large, ml.c7i.xlarge, ml.c7i.2xlarge, ml.c7i.4xlarge, ml.c7i.8xlarge, ml.c7i.12xlarge, ml.c7i.16xlarge, ml.c7i.24xlarge, ml.c7i.48xlarge, ml.r6i.large, ml.r6i.xlarge, ml.r6i.2xlarge, ml.r6i.4xlarge, ml.r6i.8xlarge, ml.r6i.12xlarge, ml.r6i.16xlarge, ml.r6i.24xlarge, ml.r6i.32xlarge, ml.r7i.large, ml.r7i.xlarge, ml.r7i.2xlarge, ml.r7i.4xlarge, ml.r7i.8xlarge, ml.r7i.12xlarge, ml.r7i.16xlarge, ml.r7i.24xlarge, ml.r7i.48xlarge, ml.m6id.large, ml.m6id.xlarge, ml.m6id.2xlarge, ml.m6id.4xlarge, ml.m6id.8xlarge, ml.m6id.12xlarge, ml.m6id.16xlarge, ml.m6id.24xlarge, ml.m6id.32xlarge, ml.c6id.large, ml.c6id.xlarge, ml.c6id.2xlarge, ml.c6id.4xlarge, ml.c6id.8xlarge, ml.c6id.12xlarge, ml.c6id.16xlarge, ml.c6id.24xlarge, ml.c6id.32xlarge, ml.r6id.large, ml.r6id.xlarge, ml.r6id.2xlarge, ml.r6id.4xlarge, ml.r6id.8xlarge, ml.r6id.12xlarge, ml.r6id.16xlarge, ml.r6id.24xlarge, ml.r6id.32xlarge
        lifecycle_config_arn: "StudioLifecycleConfigArn",
      },
      custom_images: [
        {
          image_name: "ImageName", # required
          image_version_number: 1,
          app_image_config_name: "AppImageConfigName", # required
        },
      ],
      lifecycle_config_arns: ["StudioLifecycleConfigArn"],
    },
    jupyter_lab_app_settings: {
      default_resource_spec: {
        sage_maker_image_arn: "ImageArn",
        sage_maker_image_version_arn: "ImageVersionArn",
        sage_maker_image_version_alias: "ImageVersionAlias",
        instance_type: "system", # accepts system, ml.t3.micro, ml.t3.small, ml.t3.medium, ml.t3.large, ml.t3.xlarge, ml.t3.2xlarge, ml.m5.large, ml.m5.xlarge, ml.m5.2xlarge, ml.m5.4xlarge, ml.m5.8xlarge, ml.m5.12xlarge, ml.m5.16xlarge, ml.m5.24xlarge, ml.m5d.large, ml.m5d.xlarge, ml.m5d.2xlarge, ml.m5d.4xlarge, ml.m5d.8xlarge, ml.m5d.12xlarge, ml.m5d.16xlarge, ml.m5d.24xlarge, ml.c5.large, ml.c5.xlarge, ml.c5.2xlarge, ml.c5.4xlarge, ml.c5.9xlarge, ml.c5.12xlarge, ml.c5.18xlarge, ml.c5.24xlarge, ml.p3.2xlarge, ml.p3.8xlarge, ml.p3.16xlarge, ml.p3dn.24xlarge, ml.g4dn.xlarge, ml.g4dn.2xlarge, ml.g4dn.4xlarge, ml.g4dn.8xlarge, ml.g4dn.12xlarge, ml.g4dn.16xlarge, ml.r5.large, ml.r5.xlarge, ml.r5.2xlarge, ml.r5.4xlarge, ml.r5.8xlarge, ml.r5.12xlarge, ml.r5.16xlarge, ml.r5.24xlarge, ml.g5.xlarge, ml.g5.2xlarge, ml.g5.4xlarge, ml.g5.8xlarge, ml.g5.16xlarge, ml.g5.12xlarge, ml.g5.24xlarge, ml.g5.48xlarge, ml.g6.xlarge, ml.g6.2xlarge, ml.g6.4xlarge, ml.g6.8xlarge, ml.g6.12xlarge, ml.g6.16xlarge, ml.g6.24xlarge, ml.g6.48xlarge, ml.g6e.xlarge, ml.g6e.2xlarge, ml.g6e.4xlarge, ml.g6e.8xlarge, ml.g6e.12xlarge, ml.g6e.16xlarge, ml.g6e.24xlarge, ml.g6e.48xlarge, ml.geospatial.interactive, ml.p4d.24xlarge, ml.p4de.24xlarge, ml.trn1.2xlarge, ml.trn1.32xlarge, ml.trn1n.32xlarge, ml.p5.48xlarge, ml.m6i.large, ml.m6i.xlarge, ml.m6i.2xlarge, ml.m6i.4xlarge, ml.m6i.8xlarge, ml.m6i.12xlarge, ml.m6i.16xlarge, ml.m6i.24xlarge, ml.m6i.32xlarge, ml.m7i.large, ml.m7i.xlarge, ml.m7i.2xlarge, ml.m7i.4xlarge, ml.m7i.8xlarge, ml.m7i.12xlarge, ml.m7i.16xlarge, ml.m7i.24xlarge, ml.m7i.48xlarge, ml.c6i.large, ml.c6i.xlarge, ml.c6i.2xlarge, ml.c6i.4xlarge, ml.c6i.8xlarge, ml.c6i.12xlarge, ml.c6i.16xlarge, ml.c6i.24xlarge, ml.c6i.32xlarge, ml.c7i.large, ml.c7i.xlarge, ml.c7i.2xlarge, ml.c7i.4xlarge, ml.c7i.8xlarge, ml.c7i.12xlarge, ml.c7i.16xlarge, ml.c7i.24xlarge, ml.c7i.48xlarge, ml.r6i.large, ml.r6i.xlarge, ml.r6i.2xlarge, ml.r6i.4xlarge, ml.r6i.8xlarge, ml.r6i.12xlarge, ml.r6i.16xlarge, ml.r6i.24xlarge, ml.r6i.32xlarge, ml.r7i.large, ml.r7i.xlarge, ml.r7i.2xlarge, ml.r7i.4xlarge, ml.r7i.8xlarge, ml.r7i.12xlarge, ml.r7i.16xlarge, ml.r7i.24xlarge, ml.r7i.48xlarge, ml.m6id.large, ml.m6id.xlarge, ml.m6id.2xlarge, ml.m6id.4xlarge, ml.m6id.8xlarge, ml.m6id.12xlarge, ml.m6id.16xlarge, ml.m6id.24xlarge, ml.m6id.32xlarge, ml.c6id.large, ml.c6id.xlarge, ml.c6id.2xlarge, ml.c6id.4xlarge, ml.c6id.8xlarge, ml.c6id.12xlarge, ml.c6id.16xlarge, ml.c6id.24xlarge, ml.c6id.32xlarge, ml.r6id.large, ml.r6id.xlarge, ml.r6id.2xlarge, ml.r6id.4xlarge, ml.r6id.8xlarge, ml.r6id.12xlarge, ml.r6id.16xlarge, ml.r6id.24xlarge, ml.r6id.32xlarge
        lifecycle_config_arn: "StudioLifecycleConfigArn",
      },
      custom_images: [
        {
          image_name: "ImageName", # required
          image_version_number: 1,
          app_image_config_name: "AppImageConfigName", # required
        },
      ],
      lifecycle_config_arns: ["StudioLifecycleConfigArn"],
      code_repositories: [
        {
          repository_url: "RepositoryUrl", # required
        },
      ],
      app_lifecycle_management: {
        idle_settings: {
          lifecycle_management: "ENABLED", # accepts ENABLED, DISABLED
          idle_timeout_in_minutes: 1,
          min_idle_timeout_in_minutes: 1,
          max_idle_timeout_in_minutes: 1,
        },
      },
      emr_settings: {
        assumable_role_arns: ["RoleArn"],
        execution_role_arns: ["RoleArn"],
      },
      built_in_lifecycle_config_arn: "StudioLifecycleConfigArn",
    },
    space_storage_settings: {
      default_ebs_storage_settings: {
        default_ebs_volume_size_in_gb: 1, # required
        maximum_ebs_volume_size_in_gb: 1, # required
      },
    },
    custom_posix_user_config: {
      uid: 1, # required
      gid: 1, # required
    },
    custom_file_system_configs: [
      {
        efs_file_system_config: {
          file_system_id: "FileSystemId", # required
          file_system_path: "FileSystemPath",
        },
        f_sx_lustre_file_system_config: {
          file_system_id: "FileSystemId", # required
          file_system_path: "FileSystemPath",
        },
      },
    ],
  },
})

Response structure


resp.domain_arn #=> String
resp.url #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :domain_name (required, String)

    A name for the domain.

  • :auth_mode (required, String)

    The mode of authentication that members use to access the domain.

  • :default_user_settings (required, Types::UserSettings)

    The default settings to use to create a user profile when UserSettings isn't specified in the call to the CreateUserProfile API.

    SecurityGroups is aggregated when specified in both calls. For all other settings in UserSettings, the values specified in CreateUserProfile take precedence over those specified in CreateDomain.

  • :domain_settings (Types::DomainSettings)

    A collection of Domain settings.

  • :subnet_ids (required, Array<String>)

    The VPC subnets that the domain uses for communication.

  • :vpc_id (required, String)

    The ID of the Amazon Virtual Private Cloud (VPC) that the domain uses for communication.

  • :tags (Array<Types::Tag>)

    Tags to associate with the Domain. Each tag consists of a key and an optional value. Tag keys must be unique per resource. Tags are searchable using the Search API.

    Tags that you specify for the Domain are also added to all Apps that the Domain launches.

  • :app_network_access_type (String)

    Specifies the VPC used for non-EFS traffic. The default value is PublicInternetOnly.

    • PublicInternetOnly - Non-EFS traffic is through a VPC managed by Amazon SageMaker, which allows direct internet access

    • VpcOnly - All traffic is through the specified VPC and subnets

  • :home_efs_file_system_kms_key_id (String)

    Deprecated: use KmsKeyId instead.

  • :kms_key_id (String)

    SageMaker uses Amazon Web Services KMS to encrypt EFS and EBS volumes attached to the domain with an Amazon Web Services managed key by default. For more control, specify a customer managed key.

  • :app_security_group_management (String)

    The entity that creates and manages the required security groups for inter-app communication in VpcOnly mode. Required when CreateDomain.AppNetworkAccessType is VpcOnly and DomainSettings.RStudioServerProDomainSettings.DomainExecutionRoleArn is provided. If setting up the domain for use with RStudio, this value must be set to Service.

  • :tag_propagation (String)

    Indicates whether custom tag propagation is supported for the domain. Defaults to DISABLED.

  • :default_space_settings (Types::DefaultSpaceSettings)

    The default settings for shared spaces that users create in the domain.

Returns:

  • (Types::CreateDomainResponse)

See Also:



# File 'gems/aws-sdk-sagemaker/lib/aws-sdk-sagemaker/client.rb', line 3165

def create_domain(params = {}, options = {})
  req = build_request(:create_domain, params)
  req.send_request(options)
end

#create_edge_deployment_plan(params = {}) ⇒ Types::CreateEdgeDeploymentPlanResponse

Creates an edge deployment plan, consisting of multiple stages. Each stage may have a different deployment configuration and devices.

Examples:

Request syntax with placeholder values


resp = client.create_edge_deployment_plan({
  edge_deployment_plan_name: "EntityName", # required
  model_configs: [ # required
    {
      model_handle: "EntityName", # required
      edge_packaging_job_name: "EntityName", # required
    },
  ],
  device_fleet_name: "EntityName", # required
  stages: [
    {
      stage_name: "EntityName", # required
      device_selection_config: { # required
        device_subset_type: "PERCENTAGE", # required, accepts PERCENTAGE, SELECTION, NAMECONTAINS
        percentage: 1,
        device_names: ["DeviceName"],
        device_name_contains: "DeviceName",
      },
      deployment_config: {
        failure_handling_policy: "ROLLBACK_ON_FAILURE", # required, accepts ROLLBACK_ON_FAILURE, DO_NOTHING
      },
    },
  ],
  tags: [
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
})

Response structure


resp.edge_deployment_plan_arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :edge_deployment_plan_name (required, String)

    The name of the edge deployment plan.

  • :model_configs (required, Array<Types::EdgeDeploymentModelConfig>)

    List of models associated with the edge deployment plan.

  • :device_fleet_name (required, String)

    The device fleet used for this edge deployment plan.

  • :stages (Array<Types::DeploymentStage>)

    List of stages of the edge deployment plan. The number of stages is limited to 10 per deployment.

  • :tags (Array<Types::Tag>)

    List of tags with which to tag the edge deployment plan.
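
    Putting the parameters above together, a phased rollout might start with a small percentage-based canary stage followed by a stage that covers all remaining devices. This is a minimal sketch; the plan, model, fleet, and stage names are hypothetical placeholders, and client is assumed to be a configured Aws::SageMaker::Client.

client.create_edge_deployment_plan({
  edge_deployment_plan_name: "demo-plan",            # hypothetical
  model_configs: [
    {
      model_handle: "demo-model",                    # hypothetical
      edge_packaging_job_name: "demo-packaging-job", # hypothetical packaging job
    },
  ],
  device_fleet_name: "demo-fleet",                   # hypothetical fleet
  stages: [
    {
      stage_name: "canary",
      device_selection_config: { device_subset_type: "PERCENTAGE", percentage: 10 },
      deployment_config: { failure_handling_policy: "ROLLBACK_ON_FAILURE" },
    },
    {
      stage_name: "full-rollout",
      device_selection_config: { device_subset_type: "PERCENTAGE", percentage: 100 },
      deployment_config: { failure_handling_policy: "DO_NOTHING" },
    },
  ],
})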

Returns:

  • (Types::CreateEdgeDeploymentPlanResponse)

See Also:



# File 'gems/aws-sdk-sagemaker/lib/aws-sdk-sagemaker/client.rb', line 3234

def create_edge_deployment_plan(params = {}, options = {})
  req = build_request(:create_edge_deployment_plan, params)
  req.send_request(options)
end

#create_edge_deployment_stage(params = {}) ⇒ Struct

Creates a new stage in an existing edge deployment plan.

Examples:

Request syntax with placeholder values


resp = client.create_edge_deployment_stage({
  edge_deployment_plan_name: "EntityName", # required
  stages: [ # required
    {
      stage_name: "EntityName", # required
      device_selection_config: { # required
        device_subset_type: "PERCENTAGE", # required, accepts PERCENTAGE, SELECTION, NAMECONTAINS
        percentage: 1,
        device_names: ["DeviceName"],
        device_name_contains: "DeviceName",
      },
      deployment_config: {
        failure_handling_policy: "ROLLBACK_ON_FAILURE", # required, accepts ROLLBACK_ON_FAILURE, DO_NOTHING
      },
    },
  ],
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :edge_deployment_plan_name (required, String)

    The name of the edge deployment plan.

  • :stages (required, Array<Types::DeploymentStage>)

    List of stages to be added to the edge deployment plan.

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-sagemaker/lib/aws-sdk-sagemaker/client.rb', line 3273

def create_edge_deployment_stage(params = {}, options = {})
  req = build_request(:create_edge_deployment_stage, params)
  req.send_request(options)
end

#create_edge_packaging_job(params = {}) ⇒ Struct

Starts a SageMaker Edge Manager model packaging job. Edge Manager will use the model artifacts from the Amazon Simple Storage Service bucket that you specify. After the model has been packaged, Amazon SageMaker saves the resulting artifacts to an S3 bucket that you specify.

Examples:

Request syntax with placeholder values


resp = client.create_edge_packaging_job({
  edge_packaging_job_name: "EntityName", # required
  compilation_job_name: "EntityName", # required
  model_name: "EntityName", # required
  model_version: "EdgeVersion", # required
  role_arn: "RoleArn", # required
  output_config: { # required
    s3_output_location: "S3Uri", # required
    kms_key_id: "KmsKeyId",
    preset_deployment_type: "GreengrassV2Component", # accepts GreengrassV2Component
    preset_deployment_config: "String",
  },
  resource_key: "KmsKeyId",
  tags: [
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :edge_packaging_job_name (required, String)

    The name of the edge packaging job.

  • :compilation_job_name (required, String)

    The name of the SageMaker Neo compilation job that will be used to locate model artifacts for packaging.

  • :model_name (required, String)

    The name of the model.

  • :model_version (required, String)

    The version of the model.

  • :role_arn (required, String)

    The Amazon Resource Name (ARN) of an IAM role that enables Amazon SageMaker to download and upload the model, and to contact SageMaker Neo.

  • :output_config (required, Types::EdgeOutputConfig)

    Provides information about the output location for the packaged model.

  • :resource_key (String)

    The Amazon Web Services KMS key to use when encrypting the EBS volume the edge packaging job runs on.

  • :tags (Array<Types::Tag>)

    Creates tags for the packaging job.
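
    Combining the required parameters above, a minimal packaging request might look like the following sketch. All names, the role ARN, and the bucket are hypothetical placeholders, and the referenced SageMaker Neo compilation job is assumed to have already completed.

client.create_edge_packaging_job({
  edge_packaging_job_name: "demo-packaging-job",                      # hypothetical
  compilation_job_name: "demo-neo-compilation-job",                   # hypothetical, completed Neo job
  model_name: "demo-model",
  model_version: "1.0",
  role_arn: "arn:aws:iam::123456789012:role/SageMakerEdgeRole",       # hypothetical role
  output_config: {
    s3_output_location: "s3://amzn-s3-demo-bucket/edge-packaging/",   # hypothetical bucket
  },
})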

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-sagemaker/lib/aws-sdk-sagemaker/client.rb', line 3340

def create_edge_packaging_job(params = {}, options = {})
  req = build_request(:create_edge_packaging_job, params)
  req.send_request(options)
end

#create_endpoint(params = {}) ⇒ Types::CreateEndpointOutput

Creates an endpoint using the endpoint configuration specified in the request. SageMaker uses the endpoint to provision resources and deploy models. You create the endpoint configuration with the CreateEndpointConfig API.

Use this API to deploy models using SageMaker hosting services.

You must not delete an EndpointConfig that is in use by an endpoint that is live or while the UpdateEndpoint or CreateEndpoint operations are being performed on the endpoint. To update an endpoint, you must create a new EndpointConfig.

The endpoint name must be unique within an Amazon Web Services Region in your Amazon Web Services account.

When it receives the request, SageMaker creates the endpoint, launches the resources (ML compute instances), and deploys the model(s) on them.

When you call CreateEndpoint, a load call is made to DynamoDB to verify that your endpoint configuration exists. When you read data from a DynamoDB table supporting eventually consistent reads, the response might not reflect the results of a recently completed write operation and can include stale data. If the dependent entities are not yet in DynamoDB, this causes a validation error; if you repeat your read request after a short time, the response should return the latest data. Retry logic is therefore recommended to handle these possible issues. We also recommend that customers call DescribeEndpointConfig before calling CreateEndpoint to minimize the potential impact of a DynamoDB eventually consistent read.

When SageMaker receives the request, it sets the endpoint status to Creating. After it creates the endpoint, it sets the status to InService. SageMaker can then process incoming requests for inferences. To check the status of an endpoint, use the DescribeEndpoint API.
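
As a hedged sketch of the two recommendations above, you could verify the configuration before creating the endpoint and then poll until the endpoint reaches InService. The configuration and endpoint names are hypothetical placeholders, client is assumed to be a configured Aws::SageMaker::Client, and the polling step assumes the :endpoint_in_service waiter shipped with this SDK.

config_name   = "my-endpoint-config"   # hypothetical, created with create_endpoint_config
endpoint_name = "my-endpoint"          # hypothetical

# Confirm the configuration is visible first, to reduce the chance of a
# validation error caused by an eventually consistent read.
client.describe_endpoint_config(endpoint_config_name: config_name)

resp = client.create_endpoint({
  endpoint_name: endpoint_name,
  endpoint_config_name: config_name,
})
resp.endpoint_arn #=> String

# Poll until the endpoint status moves from Creating to InService.
client.wait_until(:endpoint_in_service, endpoint_name: endpoint_name)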

If any of the models hosted at this endpoint get model data from an Amazon S3 location, SageMaker uses Amazon Web Services Security Token Service to download model artifacts from the S3 path you provided. Amazon Web Services STS is activated in your Amazon Web Services account by default. If you previously deactivated Amazon Web Services STS for a region, you need to reactivate Amazon Web Services STS for that region. For more information, see Activating and Deactivating Amazon Web Services STS in an Amazon Web Services Region in the Amazon Web Services Identity and Access Management User Guide.

To add the IAM role policies for using this API operation, go to the IAM console and choose Roles in the left navigation pane. Search for the IAM role that you want to grant access to use the CreateEndpoint and CreateEndpointConfig API operations, and add the following policies to the role.

  • Option 1: For full SageMaker access, search for and attach the AmazonSageMakerFullAccess policy.

  • Option 2: To grant limited access to an IAM role, paste the following Action elements manually into the role's JSON policy document:

    "Action": ["sagemaker:CreateEndpoint", "sagemaker:CreateEndpointConfig"]

    "Resource": [
      "arn:aws:sagemaker:region:account-id:endpoint/endpointName",
      "arn:aws:sagemaker:region:account-id:endpoint-config/endpointConfigName"
    ]

    For more information, see SageMaker API Permissions: Actions, Permissions, and Resources Reference.

Examples:

Request syntax with placeholder values


resp = client.create_endpoint({
  endpoint_name: "EndpointName", # required
  endpoint_config_name: "EndpointConfigName", # required
  deployment_config: {
    blue_green_update_policy: {
      traffic_routing_configuration: { # required
        type: "ALL_AT_ONCE", # required, accepts ALL_AT_ONCE, CANARY, LINEAR
        wait_interval_in_seconds: 1, # required
        canary_size: {
          type: "INSTANCE_COUNT", # required, accepts INSTANCE_COUNT, CAPACITY_PERCENT
          value: 1, # required
        },
        linear_step_size: {
          type: "INSTANCE_COUNT", # required, accepts INSTANCE_COUNT, CAPACITY_PERCENT
          value: 1, # required
        },
      },
      termination_wait_in_seconds: 1,
      maximum_execution_timeout_in_seconds: 1,
    },
    rolling_update_policy: {
      maximum_batch_size: { # required
        type: "INSTANCE_COUNT", # required, accepts INSTANCE_COUNT, CAPACITY_PERCENT
        value: 1, # required
      },
      wait_interval_in_seconds: 1, # required
      maximum_execution_timeout_in_seconds: 1,
      rollback_maximum_batch_size: {
        type: "INSTANCE_COUNT", # required, accepts INSTANCE_COUNT, CAPACITY_PERCENT
        value: 1, # required
      },
    },
    auto_rollback_configuration: {
      alarms: [
        {
          alarm_name: "AlarmName",
        },
      ],
    },
  },
  tags: [
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
})

Response structure


resp.endpoint_arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :endpoint_name (required, String)

    The name of the endpoint. The name must be unique within an Amazon Web Services Region in your Amazon Web Services account. The name is case-insensitive in CreateEndpoint, but the case is preserved and must be matched in InvokeEndpoint.

  • :endpoint_config_name (required, String)

    The name of an endpoint configuration. For more information, see CreateEndpointConfig.

  • :deployment_config (Types::DeploymentConfig)

    The deployment configuration for an endpoint, which contains the desired deployment strategy and rollback configurations.

  • :tags (Array<Types::Tag>)

    An array of key-value pairs. You can use tags to categorize your Amazon Web Services resources in different ways, for example, by purpose, owner, or environment. For more information, see Tagging Amazon Web Services Resources.

Returns:

  • (Types::CreateEndpointOutput)

See Also:



# File 'gems/aws-sdk-sagemaker/lib/aws-sdk-sagemaker/client.rb', line 3531

def create_endpoint(params = {}, options = {})
  req = build_request(:create_endpoint, params)
  req.send_request(options)
end

#create_endpoint_config(params = {}) ⇒ Types::CreateEndpointConfigOutput

Creates an endpoint configuration that SageMaker hosting services uses to deploy models. In the configuration, you identify one or more models, created using the CreateModel API, to deploy and the resources that you want SageMaker to provision. Then you call the CreateEndpoint API.

Use this API if you want to use SageMaker hosting services to deploy models into production.

In the request, you define a ProductionVariant for each model that you want to deploy. Each ProductionVariant parameter also describes the resources that you want SageMaker to provision. This includes the number and type of ML compute instances to deploy.

If you are hosting multiple models, you also assign a VariantWeight to specify how much traffic you want to allocate to each model. For example, suppose that you want to host two models, A and B, and you assign traffic weight 2 for model A and 1 for model B. SageMaker distributes two-thirds of the traffic to model A, and one-third to model B.
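
As a minimal sketch of that weighting, an endpoint configuration hosting the two models might look like the following; the configuration name, model names, and instance type are hypothetical:


# Sketch: weights 2.0 and 1.0 split traffic two-thirds vs. one-third between variants.
client.create_endpoint_config({
  endpoint_config_name: "two-model-config",
  production_variants: [
    {
      variant_name: "VariantA",
      model_name: "model-a",          # a model created with the CreateModel API (hypothetical name)
      initial_instance_count: 1,
      instance_type: "ml.m5.large",
      initial_variant_weight: 2.0,    # receives 2/3 of the traffic
    },
    {
      variant_name: "VariantB",
      model_name: "model-b",          # a model created with the CreateModel API (hypothetical name)
      initial_instance_count: 1,
      instance_type: "ml.m5.large",
      initial_variant_weight: 1.0,    # receives 1/3 of the traffic
    },
  ],
})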

When you call CreateEndpoint, a load call is made to DynamoDB to verify that your endpoint configuration exists. When you read data from a DynamoDB table supporting Eventually Consistent Reads, the response might not reflect the results of a recently completed write operation and might include some stale data. If the dependent entities are not yet in DynamoDB, this causes a validation error. If you repeat your read request after a short time, the response should return the latest data, so retry logic is recommended to handle these possible issues. We also recommend that customers call DescribeEndpointConfig before calling CreateEndpoint to minimize the potential impact of a DynamoDB eventually consistent read.
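
A minimal sketch of that recommendation, reusing the hypothetical configuration name from the sketch above: verify the configuration with describe_endpoint_config, then create the endpoint, retrying briefly if the request is rejected before the eventually consistent read catches up. In practice you would narrow the rescue to the specific validation error your application observes.


# Sketch only: confirm the endpoint config is visible, then create the endpoint,
# retrying a few times in case of a read-after-write race in DynamoDB.
client.describe_endpoint_config(endpoint_config_name: "two-model-config")

attempts = 0
begin
  resp = client.create_endpoint({
    endpoint_name: "two-model-endpoint",        # hypothetical endpoint name
    endpoint_config_name: "two-model-config",
  })
  puts resp.endpoint_arn
rescue Aws::SageMaker::Errors::ServiceError
  attempts += 1
  raise if attempts > 3
  sleep(2 * attempts) # simple backoff before retrying
  retry
end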

Examples:

Request syntax with placeholder values


resp = client.create_endpoint_config({
  endpoint_config_name: "EndpointConfigName", # required
  production_variants: [ # required
    {
      variant_name: "VariantName", # required
      model_name: "ModelName",
      initial_instance_count: 1,
      instance_type: "ml.t2.medium", # accepts ml.t2.medium, ml.t2.large, ml.t2.xlarge, ml.t2.2xlarge, ml.m4.xlarge, ml.m4.2xlarge, ml.m4.4xlarge, ml.m4.10xlarge, ml.m4.16xlarge, ml.m5.large, ml.m5.xlarge, ml.m5.2xlarge, ml.m5.4xlarge, ml.m5.12xlarge, ml.m5.24xlarge, ml.m5d.large, ml.m5d.xlarge, ml.m5d.2xlarge, ml.m5d.4xlarge, ml.m5d.12xlarge, ml.m5d.24xlarge, ml.c4.large, ml.c4.xlarge, ml.c4.2xlarge, ml.c4.4xlarge, ml.c4.8xlarge, ml.p2.xlarge, ml.p2.8xlarge, ml.p2.16xlarge, ml.p3.2xlarge, ml.p3.8xlarge, ml.p3.16xlarge, ml.c5.large, ml.c5.xlarge, ml.c5.2xlarge, ml.c5.4xlarge, ml.c5.9xlarge, ml.c5.18xlarge, ml.c5d.large, ml.c5d.xlarge, ml.c5d.2xlarge, ml.c5d.4xlarge, ml.c5d.9xlarge, ml.c5d.18xlarge, ml.g4dn.xlarge, ml.g4dn.2xlarge, ml.g4dn.4xlarge, ml.g4dn.8xlarge, ml.g4dn.12xlarge, ml.g4dn.16xlarge, ml.r5.large, ml.r5.xlarge, ml.r5.2xlarge, ml.r5.4xlarge, ml.r5.12xlarge, ml.r5.24xlarge, ml.r5d.large, ml.r5d.xlarge, ml.r5d.2xlarge, ml.r5d.4xlarge, ml.r5d.12xlarge, ml.r5d.24xlarge, ml.inf1.xlarge, ml.inf1.2xlarge, ml.inf1.6xlarge, ml.inf1.24xlarge, ml.dl1.24xlarge, ml.c6i.large, ml.c6i.xlarge, ml.c6i.2xlarge, ml.c6i.4xlarge, ml.c6i.8xlarge, ml.c6i.12xlarge, ml.c6i.16xlarge, ml.c6i.24xlarge, ml.c6i.32xlarge, ml.m6i.large, ml.m6i.xlarge, ml.m6i.2xlarge, ml.m6i.4xlarge, ml.m6i.8xlarge, ml.m6i.12xlarge, ml.m6i.16xlarge, ml.m6i.24xlarge, ml.m6i.32xlarge, ml.r6i.large, ml.r6i.xlarge, ml.r6i.2xlarge, ml.r6i.4xlarge, ml.r6i.8xlarge, ml.r6i.12xlarge, ml.r6i.16xlarge, ml.r6i.24xlarge, ml.r6i.32xlarge, ml.g5.xlarge, ml.g5.2xlarge, ml.g5.4xlarge, ml.g5.8xlarge, ml.g5.12xlarge, ml.g5.16xlarge, ml.g5.24xlarge, ml.g5.48xlarge, ml.g6.xlarge, ml.g6.2xlarge, ml.g6.4xlarge, ml.g6.8xlarge, ml.g6.12xlarge, ml.g6.16xlarge, ml.g6.24xlarge, ml.g6.48xlarge, ml.g6e.xlarge, ml.g6e.2xlarge, ml.g6e.4xlarge, ml.g6e.8xlarge, ml.g6e.12xlarge, ml.g6e.16xlarge, ml.g6e.24xlarge, ml.g6e.48xlarge, ml.p4d.24xlarge, ml.c7g.large, ml.c7g.xlarge, ml.c7g.2xlarge, ml.c7g.4xlarge, ml.c7g.8xlarge, ml.c7g.12xlarge, ml.c7g.16xlarge, ml.m6g.large, ml.m6g.xlarge, ml.m6g.2xlarge, ml.m6g.4xlarge, ml.m6g.8xlarge, ml.m6g.12xlarge, ml.m6g.16xlarge, ml.m6gd.large, ml.m6gd.xlarge, ml.m6gd.2xlarge, ml.m6gd.4xlarge, ml.m6gd.8xlarge, ml.m6gd.12xlarge, ml.m6gd.16xlarge, ml.c6g.large, ml.c6g.xlarge, ml.c6g.2xlarge, ml.c6g.4xlarge, ml.c6g.8xlarge, ml.c6g.12xlarge, ml.c6g.16xlarge, ml.c6gd.large, ml.c6gd.xlarge, ml.c6gd.2xlarge, ml.c6gd.4xlarge, ml.c6gd.8xlarge, ml.c6gd.12xlarge, ml.c6gd.16xlarge, ml.c6gn.large, ml.c6gn.xlarge, ml.c6gn.2xlarge, ml.c6gn.4xlarge, ml.c6gn.8xlarge, ml.c6gn.12xlarge, ml.c6gn.16xlarge, ml.r6g.large, ml.r6g.xlarge, ml.r6g.2xlarge, ml.r6g.4xlarge, ml.r6g.8xlarge, ml.r6g.12xlarge, ml.r6g.16xlarge, ml.r6gd.large, ml.r6gd.xlarge, ml.r6gd.2xlarge, ml.r6gd.4xlarge, ml.r6gd.8xlarge, ml.r6gd.12xlarge, ml.r6gd.16xlarge, ml.p4de.24xlarge, ml.trn1.2xlarge, ml.trn1.32xlarge, ml.trn1n.32xlarge, ml.trn2.48xlarge, ml.inf2.xlarge, ml.inf2.8xlarge, ml.inf2.24xlarge, ml.inf2.48xlarge, ml.p5.48xlarge, ml.p5e.48xlarge, ml.m7i.large, ml.m7i.xlarge, ml.m7i.2xlarge, ml.m7i.4xlarge, ml.m7i.8xlarge, ml.m7i.12xlarge, ml.m7i.16xlarge, ml.m7i.24xlarge, ml.m7i.48xlarge, ml.c7i.large, ml.c7i.xlarge, ml.c7i.2xlarge, ml.c7i.4xlarge, ml.c7i.8xlarge, ml.c7i.12xlarge, ml.c7i.16xlarge, ml.c7i.24xlarge, ml.c7i.48xlarge, ml.r7i.large, ml.r7i.xlarge, ml.r7i.2xlarge, ml.r7i.4xlarge, ml.r7i.8xlarge, ml.r7i.12xlarge, ml.r7i.16xlarge, ml.r7i.24xlarge, ml.r7i.48xlarge
      initial_variant_weight: 1.0,
      accelerator_type: "ml.eia1.medium", # accepts ml.eia1.medium, ml.eia1.large, ml.eia1.xlarge, ml.eia2.medium, ml.eia2.large, ml.eia2.xlarge
      core_dump_config: {
        destination_s3_uri: "DestinationS3Uri", # required
        kms_key_id: "KmsKeyId",
      },
      serverless_config: {
        memory_size_in_mb: 1, # required
        max_concurrency: 1, # required
        provisioned_concurrency: 1,
      },
      volume_size_in_gb: 1,
      model_data_download_timeout_in_seconds: 1,
      container_startup_health_check_timeout_in_seconds: 1,
      enable_ssm_access: false,
      managed_instance_scaling: {
        status: "ENABLED", # accepts ENABLED, DISABLED
        min_instance_count: 1,
        max_instance_count: 1,
      },
      routing_config: {
        routing_strategy: "LEAST_OUTSTANDING_REQUESTS", # required, accepts LEAST_OUTSTANDING_REQUESTS, RANDOM
      },
      inference_ami_version: "al2-ami-sagemaker-inference-gpu-2", # accepts al2-ami-sagemaker-inference-gpu-2
    },
  ],
  data_capture_config: {
    enable_capture: false,
    initial_sampling_percentage: 1, # required
    destination_s3_uri: "DestinationS3Uri", # required
    kms_key_id: "KmsKeyId",
    capture_options: [ # required
      {
        capture_mode: "Input", # required, accepts Input, Output, InputAndOutput
      },
    ],
    capture_content_type_header: {
      csv_content_types: ["CsvContentType"],
      json_content_types: ["JsonContentType"],
    },
  },
  tags: [
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
  kms_key_id: "KmsKeyId",
  async_inference_config: {
    client_config: {
      max_concurrent_invocations_per_instance: 1,
    },
    output_config: { # required
      kms_key_id: "KmsKeyId",
      s3_output_path: "DestinationS3Uri",
      notification_config: {
        success_topic: "SnsTopicArn",
        error_topic: "SnsTopicArn",
        include_inference_response_in: ["SUCCESS_NOTIFICATION_TOPIC"], # accepts SUCCESS_NOTIFICATION_TOPIC, ERROR_NOTIFICATION_TOPIC
      },
      s3_failure_path: "DestinationS3Uri",
    },
  },
  explainer_config: {
    clarify_explainer_config: {
      enable_explanations: "ClarifyEnableExplanations",
      inference_config: {
        features_attribute: "ClarifyFeaturesAttribute",
        content_template: "ClarifyContentTemplate",
        max_record_count: 1,
        max_payload_in_mb: 1,
        probability_index: 1,
        label_index: 1,
        probability_attribute: "ClarifyProbabilityAttribute",
        label_attribute: "ClarifyLabelAttribute",
        label_headers: ["ClarifyHeader"],
        feature_headers: ["ClarifyHeader"],
        feature_types: ["numerical"], # accepts numerical, categorical, text
      },
      shap_config: { # required
        shap_baseline_config: { # required
          mime_type: "ClarifyMimeType",
          shap_baseline: "ClarifyShapBaseline",
          shap_baseline_uri: "Url",
        },
        number_of_samples: 1,
        use_logit: false,
        seed: 1,
        text_config: {
          language: "af", # required, accepts af, sq, ar, hy, eu, bn, bg, ca, zh, hr, cs, da, nl, en, et, fi, fr, de, el, gu, he, hi, hu, is, id, ga, it, kn, ky, lv, lt, lb, mk, ml, mr, ne, nb, fa, pl, pt, ro, ru, sa, sr, tn, si, sk, sl, es, sv, tl, ta, tt, te, tr, uk, ur, yo, lij, xx
          granularity: "token", # required, accepts token, sentence, paragraph
        },
      },
    },
  },
  shadow_production_variants: [
    {
      variant_name: "VariantName", # required
      model_name: "ModelName",
      initial_instance_count: 1,
      instance_type: "ml.t2.medium", # accepts ml.t2.medium, ml.t2.large, ml.t2.xlarge, ml.t2.2xlarge, ml.m4.xlarge, ml.m4.2xlarge, ml.m4.4xlarge, ml.m4.10xlarge, ml.m4.16xlarge, ml.m5.large, ml.m5.xlarge, ml.m5.2xlarge, ml.m5.4xlarge, ml.m5.12xlarge, ml.m5.24xlarge, ml.m5d.large, ml.m5d.xlarge, ml.m5d.2xlarge, ml.m5d.4xlarge, ml.m5d.12xlarge, ml.m5d.24xlarge, ml.c4.large, ml.c4.xlarge, ml.c4.2xlarge, ml.c4.4xlarge, ml.c4.8xlarge, ml.p2.xlarge, ml.p2.8xlarge, ml.p2.16xlarge, ml.p3.2xlarge, ml.p3.8xlarge, ml.p3.16xlarge, ml.c5.large, ml.c5.xlarge, ml.c5.2xlarge, ml.c5.4xlarge, ml.c5.9xlarge, ml.c5.18xlarge, ml.c5d.large, ml.c5d.xlarge, ml.c5d.2xlarge, ml.c5d.4xlarge, ml.c5d.9xlarge, ml.c5d.18xlarge, ml.g4dn.xlarge, ml.g4dn.2xlarge, ml.g4dn.4xlarge, ml.g4dn.8xlarge, ml.g4dn.12xlarge, ml.g4dn.16xlarge, ml.r5.large, ml.r5.xlarge, ml.r5.2xlarge, ml.r5.4xlarge, ml.r5.12xlarge, ml.r5.24xlarge, ml.r5d.large, ml.r5d.xlarge, ml.r5d.2xlarge, ml.r5d.4xlarge, ml.r5d.12xlarge, ml.r5d.24xlarge, ml.inf1.xlarge, ml.inf1.2xlarge, ml.inf1.6xlarge, ml.inf1.24xlarge, ml.dl1.24xlarge, ml.c6i.large, ml.c6i.xlarge, ml.c6i.2xlarge, ml.c6i.4xlarge, ml.c6i.8xlarge, ml.c6i.12xlarge, ml.c6i.16xlarge, ml.c6i.24xlarge, ml.c6i.32xlarge, ml.m6i.large, ml.m6i.xlarge, ml.m6i.2xlarge, ml.m6i.4xlarge, ml.m6i.8xlarge, ml.m6i.12xlarge, ml.m6i.16xlarge, ml.m6i.24xlarge, ml.m6i.32xlarge, ml.r6i.large, ml.r6i.xlarge, ml.r6i.2xlarge, ml.r6i.4xlarge, ml.r6i.8xlarge, ml.r6i.12xlarge, ml.r6i.16xlarge, ml.r6i.24xlarge, ml.r6i.32xlarge, ml.g5.xlarge, ml.g5.2xlarge, ml.g5.4xlarge, ml.g5.8xlarge, ml.g5.12xlarge, ml.g5.16xlarge, ml.g5.24xlarge, ml.g5.48xlarge, ml.g6.xlarge, ml.g6.2xlarge, ml.g6.4xlarge, ml.g6.8xlarge, ml.g6.12xlarge, ml.g6.16xlarge, ml.g6.24xlarge, ml.g6.48xlarge, ml.g6e.xlarge, ml.g6e.2xlarge, ml.g6e.4xlarge, ml.g6e.8xlarge, ml.g6e.12xlarge, ml.g6e.16xlarge, ml.g6e.24xlarge, ml.g6e.48xlarge, ml.p4d.24xlarge, ml.c7g.large, ml.c7g.xlarge, ml.c7g.2xlarge, ml.c7g.4xlarge, ml.c7g.8xlarge, ml.c7g.12xlarge, ml.c7g.16xlarge, ml.m6g.large, ml.m6g.xlarge, ml.m6g.2xlarge, ml.m6g.4xlarge, ml.m6g.8xlarge, ml.m6g.12xlarge, ml.m6g.16xlarge, ml.m6gd.large, ml.m6gd.xlarge, ml.m6gd.2xlarge, ml.m6gd.4xlarge, ml.m6gd.8xlarge, ml.m6gd.12xlarge, ml.m6gd.16xlarge, ml.c6g.large, ml.c6g.xlarge, ml.c6g.2xlarge, ml.c6g.4xlarge, ml.c6g.8xlarge, ml.c6g.12xlarge, ml.c6g.16xlarge, ml.c6gd.large, ml.c6gd.xlarge, ml.c6gd.2xlarge, ml.c6gd.4xlarge, ml.c6gd.8xlarge, ml.c6gd.12xlarge, ml.c6gd.16xlarge, ml.c6gn.large, ml.c6gn.xlarge, ml.c6gn.2xlarge, ml.c6gn.4xlarge, ml.c6gn.8xlarge, ml.c6gn.12xlarge, ml.c6gn.16xlarge, ml.r6g.large, ml.r6g.xlarge, ml.r6g.2xlarge, ml.r6g.4xlarge, ml.r6g.8xlarge, ml.r6g.12xlarge, ml.r6g.16xlarge, ml.r6gd.large, ml.r6gd.xlarge, ml.r6gd.2xlarge, ml.r6gd.4xlarge, ml.r6gd.8xlarge, ml.r6gd.12xlarge, ml.r6gd.16xlarge, ml.p4de.24xlarge, ml.trn1.2xlarge, ml.trn1.32xlarge, ml.trn1n.32xlarge, ml.trn2.48xlarge, ml.inf2.xlarge, ml.inf2.8xlarge, ml.inf2.24xlarge, ml.inf2.48xlarge, ml.p5.48xlarge, ml.p5e.48xlarge, ml.m7i.large, ml.m7i.xlarge, ml.m7i.2xlarge, ml.m7i.4xlarge, ml.m7i.8xlarge, ml.m7i.12xlarge, ml.m7i.16xlarge, ml.m7i.24xlarge, ml.m7i.48xlarge, ml.c7i.large, ml.c7i.xlarge, ml.c7i.2xlarge, ml.c7i.4xlarge, ml.c7i.8xlarge, ml.c7i.12xlarge, ml.c7i.16xlarge, ml.c7i.24xlarge, ml.c7i.48xlarge, ml.r7i.large, ml.r7i.xlarge, ml.r7i.2xlarge, ml.r7i.4xlarge, ml.r7i.8xlarge, ml.r7i.12xlarge, ml.r7i.16xlarge, ml.r7i.24xlarge, ml.r7i.48xlarge
      initial_variant_weight: 1.0,
      accelerator_type: "ml.eia1.medium", # accepts ml.eia1.medium, ml.eia1.large, ml.eia1.xlarge, ml.eia2.medium, ml.eia2.large, ml.eia2.xlarge
      core_dump_config: {
        destination_s3_uri: "DestinationS3Uri", # required
        kms_key_id: "KmsKeyId",
      },
      serverless_config: {
        memory_size_in_mb: 1, # required
        max_concurrency: 1, # required
        provisioned_concurrency: 1,
      },
      volume_size_in_gb: 1,
      model_data_download_timeout_in_seconds: 1,
      container_startup_health_check_timeout_in_seconds: 1,
      enable_ssm_access: false,
      managed_instance_scaling: {
        status: "ENABLED", # accepts ENABLED, DISABLED
        min_instance_count: 1,
        max_instance_count: 1,
      },
      routing_config: {
        routing_strategy: "LEAST_OUTSTANDING_REQUESTS", # required, accepts LEAST_OUTSTANDING_REQUESTS, RANDOM
      },
      inference_ami_version: "al2-ami-sagemaker-inference-gpu-2", # accepts al2-ami-sagemaker-inference-gpu-2
    },
  ],
  execution_role_arn: "RoleArn",
  vpc_config: {
    security_group_ids: ["SecurityGroupId"], # required
    subnets: ["SubnetId"], # required
  },
  enable_network_isolation: false,
})

Response structure


resp.endpoint_config_arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :endpoint_config_name (required, String)

    The name of the endpoint configuration. You specify this name in a CreateEndpoint request.

  • :production_variants (required, Array<Types::ProductionVariant>)

    An array of ProductionVariant objects, one for each model that you want to host at this endpoint.

  • :data_capture_config (Types::DataCaptureConfig)

    Configuration to control how SageMaker captures inference data.

  • :tags (Array<Types::Tag>)

    An array of key-value pairs. You can use tags to categorize your Amazon Web Services resources in different ways, for example, by purpose, owner, or environment. For more information, see Tagging Amazon Web Services Resources.

  • :kms_key_id (String)

    The Amazon Resource Name (ARN) of an Amazon Web Services Key Management Service key that SageMaker uses to encrypt data on the storage volume attached to the ML compute instance that hosts the endpoint.

    The KmsKeyId can be any of the following formats:

    • Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab

    • Key ARN: arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab

    • Alias name: alias/ExampleAlias

    • Alias name ARN: arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias

    The KMS key policy must grant permission to the IAM role that you specify in your CreateEndpoint and UpdateEndpoint requests. For more information, refer to the Amazon Web Services Key Management Service section Using Key Policies in Amazon Web Services KMS.

    Certain Nitro-based instances include local storage, depending on the instance type. Local storage volumes are encrypted using a hardware module on the instance. You can't request a KmsKeyId when using an instance type with local storage. If any of the models that you specify in the ProductionVariants parameter use Nitro-based instances with local storage, do not specify a value for the KmsKeyId parameter. If you specify a value for KmsKeyId when using any Nitro-based instances with local storage, the call to CreateEndpointConfig fails.

    For a list of instance types that support local instance storage, see Instance Store Volumes.

    For more information about local instance storage encryption, see SSD Instance Store Volumes.

  • :async_inference_config (Types::AsyncInferenceConfig)

    Specifies configuration for how an endpoint performs asynchronous inference. This field is required for your endpoint to be invoked using InvokeEndpointAsync.

  • :explainer_config (Types::ExplainerConfig)

    A member of CreateEndpointConfig that enables explainers.

  • :shadow_production_variants (Array<Types::ProductionVariant>)

    An array of ProductionVariant objects, one for each model that you want to host at this endpoint in shadow mode with production traffic replicated from the model specified on ProductionVariants. If you use this field, you can only specify one variant for ProductionVariants and one variant for ShadowProductionVariants.

  • :execution_role_arn (String)

    The Amazon Resource Name (ARN) of an IAM role that Amazon SageMaker can assume to perform actions on your behalf. For more information, see SageMaker Roles.

    To be able to pass this role to Amazon SageMaker, the caller of this action must have the iam:PassRole permission.

  • :vpc_config (Types::VpcConfig)

    Specifies an Amazon Virtual Private Cloud (VPC) that your SageMaker jobs, hosted models, and compute resources have access to. You can control access to and from your resources by configuring a VPC. For more information, see Give SageMaker Access to Resources in your Amazon VPC.

  • :enable_network_isolation (Boolean)

    Sets whether all model containers deployed to the endpoint are isolated. If they are, no inbound or outbound network calls can be made to or from the model containers.

Returns:

See Also:



# File 'gems/aws-sdk-sagemaker/lib/aws-sdk-sagemaker/client.rb', line 3857

def create_endpoint_config(params = {}, options = {})
  req = build_request(:create_endpoint_config, params)
  req.send_request(options)
end

#create_experiment(params = {}) ⇒ Types::CreateExperimentResponse

Creates a SageMaker experiment. An experiment is a collection of trials that are observed, compared and evaluated as a group. A trial is a set of steps, called trial components, that produce a machine learning model.

In the Studio UI, trials are referred to as run groups and trial components are referred to as runs.

The goal of an experiment is to determine the components that produce the best model. Multiple trials are performed, each one isolating and measuring the impact of a change to one or more inputs, while keeping the remaining inputs constant.

When you use SageMaker Studio or the SageMaker Python SDK, all experiments, trials, and trial components are automatically tracked, logged, and indexed. When you use the Amazon Web Services SDK for Python (Boto), you must use the logging APIs provided by the SDK.

You can add tags to experiments, trials, and trial components, and then use the Search API to search for the tags.

To add a description to an experiment, specify the optional Description parameter. To add a description later, or to change the description, call the UpdateExperiment API.

To get a list of all your experiments, call the ListExperiments API. To view an experiment's properties, call the DescribeExperiment API. To get a list of all the trials associated with an experiment, call the ListTrials API. To create a trial, call the CreateTrial API.
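
As a brief sketch of the tagging workflow mentioned above, you could create a tagged experiment and later find it with the Search API; the experiment name and tag values here are hypothetical:


# Sketch: create an experiment with a tag, then search experiments by that tag.
client.create_experiment({
  experiment_name: "churn-prediction",
  description: "Compare feature sets for the churn model",
  tags: [{ key: "project", value: "churn" }],
})

resp = client.search({
  resource: "Experiment",
  search_expression: {
    filters: [{ name: "Tags.project", operator: "Equals", value: "churn" }],
  },
})
resp.results.each { |record| puts record.experiment.experiment_name }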

Examples:

Request syntax with placeholder values


resp = client.create_experiment({
  experiment_name: "ExperimentEntityName", # required
  display_name: "ExperimentEntityName",
  description: "ExperimentDescription",
  tags: [
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
})

Response structure


resp.experiment_arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :experiment_name (required, String)

    The name of the experiment. The name must be unique in your Amazon Web Services account and is not case-sensitive.

  • :display_name (String)

    The name of the experiment as displayed. The name doesn't need to be unique. If you don't specify DisplayName, the value in ExperimentName is displayed.

  • :description (String)

    The description of the experiment.

  • :tags (Array<Types::Tag>)

    A list of tags to associate with the experiment. You can use the Search API to search on the tags.

Returns:

See Also:



# File 'gems/aws-sdk-sagemaker/lib/aws-sdk-sagemaker/client.rb', line 3950

def create_experiment(params = {}, options = {})
  req = build_request(:create_experiment, params)
  req.send_request(options)
end

#create_feature_group(params = {}) ⇒ Types::CreateFeatureGroupResponse

Create a new FeatureGroup. A FeatureGroup is a group of Features defined in the FeatureStore to describe a Record.

The FeatureGroup defines the schema and features contained in the FeatureGroup. A FeatureGroup definition is composed of a list of Features, a RecordIdentifierFeatureName, an EventTimeFeatureName and configurations for its OnlineStore and OfflineStore. Check Amazon Web Services service quotas to see the FeatureGroups quota for your Amazon Web Services account.

Note that it can take approximately 10-15 minutes to provision an OnlineStore FeatureGroup with the InMemory StorageType.

You must include at least one of OnlineStoreConfig or OfflineStoreConfig to create a FeatureGroup.
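
For example, a minimal online-store-only request might look like the following sketch; all names are hypothetical, and the full placeholder syntax is shown below:


# Sketch: an online-only feature group with two features (hypothetical names).
client.create_feature_group({
  feature_group_name: "customers",
  record_identifier_feature_name: "customer_id",
  event_time_feature_name: "event_time",
  feature_definitions: [
    { feature_name: "customer_id", feature_type: "String" },
    { feature_name: "event_time", feature_type: "Fractional" }, # Unix timestamp in seconds
  ],
  online_store_config: { enable_online_store: true },
})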

Examples:

Request syntax with placeholder values


resp = client.create_feature_group({
  feature_group_name: "FeatureGroupName", # required
  record_identifier_feature_name: "FeatureName", # required
  event_time_feature_name: "FeatureName", # required
  feature_definitions: [ # required
    {
      feature_name: "FeatureName", # required
      feature_type: "Integral", # required, accepts Integral, Fractional, String
      collection_type: "List", # accepts List, Set, Vector
      collection_config: {
        vector_config: {
          dimension: 1, # required
        },
      },
    },
  ],
  online_store_config: {
    security_config: {
      kms_key_id: "KmsKeyId",
    },
    enable_online_store: false,
    ttl_duration: {
      unit: "Seconds", # accepts Seconds, Minutes, Hours, Days, Weeks
      value: 1,
    },
    storage_type: "Standard", # accepts Standard, InMemory
  },
  offline_store_config: {
    s3_storage_config: { # required
      s3_uri: "S3Uri", # required
      kms_key_id: "KmsKeyId",
      resolved_output_s3_uri: "S3Uri",
    },
    disable_glue_table_creation: false,
    data_catalog_config: {
      table_name: "TableName", # required
      catalog: "Catalog", # required
      database: "Database", # required
    },
    table_format: "Default", # accepts Default, Glue, Iceberg
  },
  throughput_config: {
    throughput_mode: "OnDemand", # required, accepts OnDemand, Provisioned
    provisioned_read_capacity_units: 1,
    provisioned_write_capacity_units: 1,
  },
  role_arn: "RoleArn",
  description: "Description",
  tags: [
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
})

Response structure


resp.feature_group_arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :feature_group_name (required, String)

    The name of the FeatureGroup. The name must be unique within an Amazon Web Services Region in an Amazon Web Services account.

    The name:

    • Must start with an alphanumeric character.

    • Can only include alphanumeric characters, underscores, and hyphens. Spaces are not allowed.

  • :record_identifier_feature_name (required, String)

    The name of the Feature whose value uniquely identifies a Record defined in the FeatureStore. Only the latest record per identifier value will be stored in the OnlineStore. RecordIdentifierFeatureName must be the name of one of the feature definitions.

    You use the RecordIdentifierFeatureName to access data in a FeatureStore.

    This name:

    • Must start with an alphanumeric character.

    • Can only contain alphanumeric characters, hyphens, and underscores. Spaces are not allowed.

  • :event_time_feature_name (required, String)

    The name of the feature that stores the EventTime of a Record in a FeatureGroup.

    An EventTime is a point in time when a new event occurs that corresponds to the creation or update of a Record in a FeatureGroup. All Records in the FeatureGroup must have a corresponding EventTime.

    An EventTime can be a String or Fractional.

    • Fractional: EventTime feature values must be a Unix timestamp in seconds.

    • String: EventTime feature values must be an ISO-8601 string. The following formats are supported: yyyy-MM-dd'T'HH:mm:ssZ and yyyy-MM-dd'T'HH:mm:ss.SSSZ, where yyyy, MM, and dd represent the year, month, and day respectively, and HH, mm, ss, and, if applicable, SSS represent the hour, minute, second, and milliseconds respectively. 'T' and Z are constants.

  • :feature_definitions (required, Array<Types::FeatureDefinition>)

    A list of Feature names and types. A name and type are required for each Feature.

    Valid FeatureTypes are Integral, Fractional, and String.

    FeatureNames cannot be any of the following: is_deleted, write_time, api_invocation_time

    You can create up to 2,500 FeatureDefinitions per FeatureGroup.

  • :online_store_config (Types::OnlineStoreConfig)

    You can turn the OnlineStore on or off by specifying True for the EnableOnlineStore flag in OnlineStoreConfig.

    You can also include an Amazon Web Services KMS key ID (KMSKeyId) for at-rest encryption of the OnlineStore.

    The default value is False.

  • :offline_store_config (Types::OfflineStoreConfig)

    Use this to configure an OfflineFeatureStore. This parameter allows you to specify:

    • The Amazon Simple Storage Service (Amazon S3) location of an OfflineStore.

    • A configuration for an Amazon Web Services Glue or Amazon Web Services Hive data catalog.

    • A KMS encryption key to encrypt the Amazon S3 location used for the OfflineStore. If a KMS encryption key is not specified, all data at rest is encrypted by default with an Amazon Web Services KMS key. By defining your bucket-level key for SSE, you can reduce Amazon Web Services KMS request costs by up to 99 percent.

    • Format for the offline store table. Supported formats are Glue (Default) and Apache Iceberg.

    To learn more about this parameter, see OfflineStoreConfig.

  • :throughput_config (Types::ThroughputConfig)

    Used to set feature group throughput configuration. There are two modes: ON_DEMAND and PROVISIONED. With on-demand mode, you are charged for data reads and writes that your application performs on your feature group. You do not need to specify read and write throughput because Feature Store accommodates your workloads as they ramp up and down. You can switch a feature group to on-demand only once in a 24 hour period. With provisioned throughput mode, you specify the read and write capacity per second that you expect your application to require, and you are billed based on those limits. Exceeding provisioned throughput will result in your requests being throttled.

    Note: PROVISIONED throughput mode is supported only for feature groups that are offline-only, or use the Standard tier online store.

  • :role_arn (String)

    The Amazon Resource Name (ARN) of the IAM execution role used to persist data into the OfflineStore if an OfflineStoreConfig is provided.

  • :description (String)

    A free-form description of a FeatureGroup.

  • :tags (Array<Types::Tag>)

    Tags used to identify Features in each FeatureGroup.

Returns:

See Also:



# File 'gems/aws-sdk-sagemaker/lib/aws-sdk-sagemaker/client.rb', line 4175

def create_feature_group(params = {}, options = {})
  req = build_request(:create_feature_group, params)
  req.send_request(options)
end

#create_flow_definition(params = {}) ⇒ Types::CreateFlowDefinitionResponse

Creates a flow definition.

Examples:

Request syntax with placeholder values


resp = client.create_flow_definition({
  flow_definition_name: "FlowDefinitionName", # required
  human_loop_request_source: {
    aws_managed_human_loop_request_source: "AWS/Rekognition/DetectModerationLabels/Image/V3", # required, accepts AWS/Rekognition/DetectModerationLabels/Image/V3, AWS/Textract/AnalyzeDocument/Forms/V1
  },
  human_loop_activation_config: {
    human_loop_activation_conditions_config: { # required
      human_loop_activation_conditions: "HumanLoopActivationConditions", # required
    },
  },
  human_loop_config: {
    workteam_arn: "WorkteamArn", # required
    human_task_ui_arn: "HumanTaskUiArn", # required
    task_title: "FlowDefinitionTaskTitle", # required
    task_description: "FlowDefinitionTaskDescription", # required
    task_count: 1, # required
    task_availability_lifetime_in_seconds: 1,
    task_time_limit_in_seconds: 1,
    task_keywords: ["FlowDefinitionTaskKeyword"],
    public_workforce_task_price: {
      amount_in_usd: {
        dollars: 1,
        cents: 1,
        tenth_fractions_of_a_cent: 1,
      },
    },
  },
  output_config: { # required
    s3_output_path: "S3Uri", # required
    kms_key_id: "KmsKeyId",
  },
  role_arn: "RoleArn", # required
  tags: [
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
})

Response structure


resp.flow_definition_arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :flow_definition_name (required, String)

    The name of your flow definition.

  • :human_loop_request_source (Types::HumanLoopRequestSource)

    Container for configuring the source of human task requests. Use this parameter to specify whether Amazon Rekognition or Amazon Textract is used as an integration source.

  • :human_loop_activation_config (Types::HumanLoopActivationConfig)

    An object containing information about the events that trigger a human workflow.

  • :human_loop_config (Types::HumanLoopConfig)

    An object containing information about the tasks the human reviewers will perform.

  • :output_config (required, Types::FlowDefinitionOutputConfig)

    An object containing information about where the human review results will be uploaded.

  • :role_arn (required, String)

    The Amazon Resource Name (ARN) of the role needed to call other services on your behalf. For example, arn:aws:iam::1234567890:role/service-role/AmazonSageMaker-ExecutionRole-20180111T151298.

  • :tags (Array<Types::Tag>)

    An array of key-value pairs that contain metadata to help you categorize and organize a flow definition. Each tag consists of a key and a value, both of which you define.

Returns:

See Also:



# File 'gems/aws-sdk-sagemaker/lib/aws-sdk-sagemaker/client.rb', line 4266

def create_flow_definition(params = {}, options = {})
  req = build_request(:create_flow_definition, params)
  req.send_request(options)
end

#create_hub(params = {}) ⇒ Types::CreateHubResponse

Create a hub.

Examples:

Request syntax with placeholder values


resp = client.create_hub({
  hub_name: "HubName", # required
  hub_description: "HubDescription", # required
  hub_display_name: "HubDisplayName",
  hub_search_keywords: ["HubSearchKeyword"],
  s3_storage_config: {
    s3_output_path: "S3OutputPath",
  },
  tags: [
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
})

Response structure


resp.hub_arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :hub_name (required, String)

    The name of the hub to create.

  • :hub_description (required, String)

    A description of the hub.

  • :hub_display_name (String)

    The display name of the hub.

  • :hub_search_keywords (Array<String>)

    The searchable keywords for the hub.

  • :s3_storage_config (Types::HubS3StorageConfig)

    The Amazon S3 storage configuration for the hub.

  • :tags (Array<Types::Tag>)

    Any tags to associate with the hub.

Returns:

See Also:



# File 'gems/aws-sdk-sagemaker/lib/aws-sdk-sagemaker/client.rb', line 4321

def create_hub(params = {}, options = {})
  req = build_request(:create_hub, params)
  req.send_request(options)
end

#create_hub_content_reference(params = {}) ⇒ Types::CreateHubContentReferenceResponse

Create a hub content reference to add a model from the JumpStart public hub to a private hub.

Examples:

Request syntax with placeholder values


resp = client.create_hub_content_reference({
  hub_name: "HubNameOrArn", # required
  sage_maker_public_hub_content_arn: "SageMakerPublicHubContentArn", # required
  hub_content_name: "HubContentName",
  min_version: "HubContentVersion",
  tags: [
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
})

Response structure


resp.hub_arn #=> String
resp.hub_content_arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :hub_name (required, String)

    The name of the hub to add the hub content reference to.

  • :sage_maker_public_hub_content_arn (required, String)

    The ARN of the public hub content to reference.

  • :hub_content_name (String)

    The name of the hub content to reference.

  • :min_version (String)

    The minimum version of the hub content to reference.

  • :tags (Array<Types::Tag>)

    Any tags associated with the hub content to reference.

Returns:

See Also:



# File 'gems/aws-sdk-sagemaker/lib/aws-sdk-sagemaker/client.rb', line 4373

def create_hub_content_reference(params = {}, options = {})
  req = build_request(:create_hub_content_reference, params)
  req.send_request(options)
end

#create_human_task_ui(params = {}) ⇒ Types::CreateHumanTaskUiResponse

Defines the settings you will use for the human review workflow user interface. Reviewers will see a three-panel interface with an instruction area, the item to review, and an input area.
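
The ui_template content is a Liquid/HTML worker template. As a rough sketch, the example below passes a minimal, hypothetical classification template inline; the task UI name and the template itself are illustrative only:


# Sketch only: a minimal, hypothetical worker template using the crowd HTML elements.
template = <<~HTML
  <script src="https://assets.crowd.aws/crowd-html-elements.js"></script>
  <crowd-form>
    <crowd-classifier
      name="sentiment"
      categories="['Positive', 'Negative', 'Neutral']"
      header="Classify the sentiment of the text">
      <classification-target>{{ task.input.text }}</classification-target>
    </crowd-classifier>
  </crowd-form>
HTML

resp = client.create_human_task_ui({
  human_task_ui_name: "sentiment-review-ui", # hypothetical name
  ui_template: { content: template },
})
puts resp.human_task_ui_arn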

Examples:

Request syntax with placeholder values


resp = client.create_human_task_ui({
  human_task_ui_name: "HumanTaskUiName", # required
  ui_template: { # required
    content: "TemplateContent", # required
  },
  tags: [
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
})

Response structure


resp.human_task_ui_arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :human_task_ui_name (required, String)

    The name of the user interface you are creating.

  • :ui_template (required, Types::UiTemplate)

    The Liquid template for the worker user interface.

  • :tags (Array<Types::Tag>)

    An array of key-value pairs that contain metadata to help you categorize and organize a human review workflow user interface. Each tag consists of a key and a value, both of which you define.

Returns:

See Also:



# File 'gems/aws-sdk-sagemaker/lib/aws-sdk-sagemaker/client.rb', line 4420

def create_human_task_ui(params = {}, options = {})
  req = build_request(:create_human_task_ui, params)
  req.send_request(options)
end

#create_hyper_parameter_tuning_job(params = {}) ⇒ Types::CreateHyperParameterTuningJobResponse

Starts a hyperparameter tuning job. A hyperparameter tuning job finds the best version of a model by running many training jobs on your dataset using the algorithm you choose and values for hyperparameters within ranges that you specify. It then chooses the hyperparameter values that result in a model that performs the best, as measured by an objective metric that you choose.

A hyperparameter tuning job automatically creates Amazon SageMaker experiments, trials, and trial components for each training job that it runs. You can view these entities in Amazon SageMaker Studio. For more information, see View Experiments, Trials, and Trial Components.

Do not include any security-sensitive information, including account access IDs, secrets, or tokens, in any hyperparameter field. If the use of security-sensitive credentials is detected, SageMaker rejects your training job request and returns an exception error.

Examples:

Request syntax with placeholder values


resp = client.create_hyper_parameter_tuning_job({
  hyper_parameter_tuning_job_name: "HyperParameterTuningJobName", # required
  hyper_parameter_tuning_job_config: { # required
    strategy: "Bayesian", # required, accepts Bayesian, Random, Hyperband, Grid
    strategy_config: {
      hyperband_strategy_config: {
        min_resource: 1,
        max_resource: 1,
      },
    },
    hyper_parameter_tuning_job_objective: {
      type: "Maximize", # required, accepts Maximize, Minimize
      metric_name: "MetricName", # required
    },
    resource_limits: { # required
      max_number_of_training_jobs: 1,
      max_parallel_training_jobs: 1, # required
      max_runtime_in_seconds: 1,
    },
    parameter_ranges: {
      integer_parameter_ranges: [
        {
          name: "ParameterKey", # required
          min_value: "ParameterValue", # required
          max_value: "ParameterValue", # required
          scaling_type: "Auto", # accepts Auto, Linear, Logarithmic, ReverseLogarithmic
        },
      ],
      continuous_parameter_ranges: [
        {
          name: "ParameterKey", # required
          min_value: "ParameterValue", # required
          max_value: "ParameterValue", # required
          scaling_type: "Auto", # accepts Auto, Linear, Logarithmic, ReverseLogarithmic
        },
      ],
      categorical_parameter_ranges: [
        {
          name: "ParameterKey", # required
          values: ["ParameterValue"], # required
        },
      ],
      auto_parameters: [
        {
          name: "ParameterKey", # required
          value_hint: "ParameterValue", # required
        },
      ],
    },
    training_job_early_stopping_type: "Off", # accepts Off, Auto
    tuning_job_completion_criteria: {
      target_objective_metric_value: 1.0,
      best_objective_not_improving: {
        max_number_of_training_jobs_not_improving: 1,
      },
      convergence_detected: {
        complete_on_convergence: "Disabled", # accepts Disabled, Enabled
      },
    },
    random_seed: 1,
  },
  training_job_definition: {
    definition_name: "HyperParameterTrainingJobDefinitionName",
    tuning_objective: {
      type: "Maximize", # required, accepts Maximize, Minimize
      metric_name: "MetricName", # required
    },
    hyper_parameter_ranges: {
      integer_parameter_ranges: [
        {
          name: "ParameterKey", # required
          min_value: "ParameterValue", # required
          max_value: "ParameterValue", # required
          scaling_type: "Auto", # accepts Auto, Linear, Logarithmic, ReverseLogarithmic
        },
      ],
      continuous_parameter_ranges: [
        {
          name: "ParameterKey", # required
          min_value: "ParameterValue", # required
          max_value: "ParameterValue", # required
          scaling_type: "Auto", # accepts Auto, Linear, Logarithmic, ReverseLogarithmic
        },
      ],
      categorical_parameter_ranges: [
        {
          name: "ParameterKey", # required
          values: ["ParameterValue"], # required
        },
      ],
      auto_parameters: [
        {
          name: "ParameterKey", # required
          value_hint: "ParameterValue", # required
        },
      ],
    },
    static_hyper_parameters: {
      "HyperParameterKey" => "HyperParameterValue",
    },
    algorithm_specification: { # required
      training_image: "AlgorithmImage",
      training_input_mode: "Pipe", # required, accepts Pipe, File, FastFile
      algorithm_name: "ArnOrName",
      metric_definitions: [
        {
          name: "MetricName", # required
          regex: "MetricRegex", # required
        },
      ],
    },
    role_arn: "RoleArn", # required
    input_data_config: [
      {
        channel_name: "ChannelName", # required
        data_source: { # required
          s3_data_source: {
            s3_data_type: "ManifestFile", # required, accepts ManifestFile, S3Prefix, AugmentedManifestFile
            s3_uri: "S3Uri", # required
            s3_data_distribution_type: "FullyReplicated", # accepts FullyReplicated, ShardedByS3Key
            attribute_names: ["AttributeName"],
            instance_group_names: ["InstanceGroupName"],
          },
          file_system_data_source: {
            file_system_id: "FileSystemId", # required
            file_system_access_mode: "rw", # required, accepts rw, ro
            file_system_type: "EFS", # required, accepts EFS, FSxLustre
            directory_path: "DirectoryPath", # required
          },
        },
        content_type: "ContentType",
        compression_type: "None", # accepts None, Gzip
        record_wrapper_type: "None", # accepts None, RecordIO
        input_mode: "Pipe", # accepts Pipe, File, FastFile
        shuffle_config: {
          seed: 1, # required
        },
      },
    ],
    vpc_config: {
      security_group_ids: ["SecurityGroupId"], # required
      subnets: ["SubnetId"], # required
    },
    output_data_config: { # required
      kms_key_id: "KmsKeyId",
      s3_output_path: "S3Uri", # required
      compression_type: "GZIP", # accepts GZIP, NONE
    },
    resource_config: {
      instance_type: "ml.m4.xlarge", # accepts ml.m4.xlarge, ml.m4.2xlarge, ml.m4.4xlarge, ml.m4.10xlarge, ml.m4.16xlarge, ml.g4dn.xlarge, ml.g4dn.2xlarge, ml.g4dn.4xlarge, ml.g4dn.8xlarge, ml.g4dn.12xlarge, ml.g4dn.16xlarge, ml.m5.large, ml.m5.xlarge, ml.m5.2xlarge, ml.m5.4xlarge, ml.m5.12xlarge, ml.m5.24xlarge, ml.c4.xlarge, ml.c4.2xlarge, ml.c4.4xlarge, ml.c4.8xlarge, ml.p2.xlarge, ml.p2.8xlarge, ml.p2.16xlarge, ml.p3.2xlarge, ml.p3.8xlarge, ml.p3.16xlarge, ml.p3dn.24xlarge, ml.p4d.24xlarge, ml.p4de.24xlarge, ml.p5.48xlarge, ml.p5e.48xlarge, ml.p5en.48xlarge, ml.c5.xlarge, ml.c5.2xlarge, ml.c5.4xlarge, ml.c5.9xlarge, ml.c5.18xlarge, ml.c5n.xlarge, ml.c5n.2xlarge, ml.c5n.4xlarge, ml.c5n.9xlarge, ml.c5n.18xlarge, ml.g5.xlarge, ml.g5.2xlarge, ml.g5.4xlarge, ml.g5.8xlarge, ml.g5.16xlarge, ml.g5.12xlarge, ml.g5.24xlarge, ml.g5.48xlarge, ml.g6.xlarge, ml.g6.2xlarge, ml.g6.4xlarge, ml.g6.8xlarge, ml.g6.16xlarge, ml.g6.12xlarge, ml.g6.24xlarge, ml.g6.48xlarge, ml.g6e.xlarge, ml.g6e.2xlarge, ml.g6e.4xlarge, ml.g6e.8xlarge, ml.g6e.16xlarge, ml.g6e.12xlarge, ml.g6e.24xlarge, ml.g6e.48xlarge, ml.trn1.2xlarge, ml.trn1.32xlarge, ml.trn1n.32xlarge, ml.trn2.48xlarge, ml.m6i.large, ml.m6i.xlarge, ml.m6i.2xlarge, ml.m6i.4xlarge, ml.m6i.8xlarge, ml.m6i.12xlarge, ml.m6i.16xlarge, ml.m6i.24xlarge, ml.m6i.32xlarge, ml.c6i.xlarge, ml.c6i.2xlarge, ml.c6i.8xlarge, ml.c6i.4xlarge, ml.c6i.12xlarge, ml.c6i.16xlarge, ml.c6i.24xlarge, ml.c6i.32xlarge, ml.r5d.large, ml.r5d.xlarge, ml.r5d.2xlarge, ml.r5d.4xlarge, ml.r5d.8xlarge, ml.r5d.12xlarge, ml.r5d.16xlarge, ml.r5d.24xlarge, ml.t3.medium, ml.t3.large, ml.t3.xlarge, ml.t3.2xlarge, ml.r5.large, ml.r5.xlarge, ml.r5.2xlarge, ml.r5.4xlarge, ml.r5.8xlarge, ml.r5.12xlarge, ml.r5.16xlarge, ml.r5.24xlarge
      instance_count: 1,
      volume_size_in_gb: 1, # required
      volume_kms_key_id: "KmsKeyId",
      keep_alive_period_in_seconds: 1,
      instance_groups: [
        {
          instance_type: "ml.m4.xlarge", # required, accepts ml.m4.xlarge, ml.m4.2xlarge, ml.m4.4xlarge, ml.m4.10xlarge, ml.m4.16xlarge, ml.g4dn.xlarge, ml.g4dn.2xlarge, ml.g4dn.4xlarge, ml.g4dn.8xlarge, ml.g4dn.12xlarge, ml.g4dn.16xlarge, ml.m5.large, ml.m5.xlarge, ml.m5.2xlarge, ml.m5.4xlarge, ml.m5.12xlarge, ml.m5.24xlarge, ml.c4.xlarge, ml.c4.2xlarge, ml.c4.4xlarge, ml.c4.8xlarge, ml.p2.xlarge, ml.p2.8xlarge, ml.p2.16xlarge, ml.p3.2xlarge, ml.p3.8xlarge, ml.p3.16xlarge, ml.p3dn.24xlarge, ml.p4d.24xlarge, ml.p4de.24xlarge, ml.p5.48xlarge, ml.p5e.48xlarge, ml.p5en.48xlarge, ml.c5.xlarge, ml.c5.2xlarge, ml.c5.4xlarge, ml.c5.9xlarge, ml.c5.18xlarge, ml.c5n.xlarge, ml.c5n.2xlarge, ml.c5n.4xlarge, ml.c5n.9xlarge, ml.c5n.18xlarge, ml.g5.xlarge, ml.g5.2xlarge, ml.g5.4xlarge, ml.g5.8xlarge, ml.g5.16xlarge, ml.g5.12xlarge, ml.g5.24xlarge, ml.g5.48xlarge, ml.g6.xlarge, ml.g6.2xlarge, ml.g6.4xlarge, ml.g6.8xlarge, ml.g6.16xlarge, ml.g6.12xlarge, ml.g6.24xlarge, ml.g6.48xlarge, ml.g6e.xlarge, ml.g6e.2xlarge, ml.g6e.4xlarge, ml.g6e.8xlarge, ml.g6e.16xlarge, ml.g6e.12xlarge, ml.g6e.24xlarge, ml.g6e.48xlarge, ml.trn1.2xlarge, ml.trn1.32xlarge, ml.trn1n.32xlarge, ml.trn2.48xlarge, ml.m6i.large, ml.m6i.xlarge, ml.m6i.2xlarge, ml.m6i.4xlarge, ml.m6i.8xlarge, ml.m6i.12xlarge, ml.m6i.16xlarge, ml.m6i.24xlarge, ml.m6i.32xlarge, ml.c6i.xlarge, ml.c6i.2xlarge, ml.c6i.8xlarge, ml.c6i.4xlarge, ml.c6i.12xlarge, ml.c6i.16xlarge, ml.c6i.24xlarge, ml.c6i.32xlarge, ml.r5d.large, ml.r5d.xlarge, ml.r5d.2xlarge, ml.r5d.4xlarge, ml.r5d.8xlarge, ml.r5d.12xlarge, ml.r5d.16xlarge, ml.r5d.24xlarge, ml.t3.medium, ml.t3.large, ml.t3.xlarge, ml.t3.2xlarge, ml.r5.large, ml.r5.xlarge, ml.r5.2xlarge, ml.r5.4xlarge, ml.r5.8xlarge, ml.r5.12xlarge, ml.r5.16xlarge, ml.r5.24xlarge
          instance_count: 1, # required
          instance_group_name: "InstanceGroupName", # required
        },
      ],
      training_plan_arn: "TrainingPlanArn",
    },
    hyper_parameter_tuning_resource_config: {
      instance_type: "ml.m4.xlarge", # accepts ml.m4.xlarge, ml.m4.2xlarge, ml.m4.4xlarge, ml.m4.10xlarge, ml.m4.16xlarge, ml.g4dn.xlarge, ml.g4dn.2xlarge, ml.g4dn.4xlarge, ml.g4dn.8xlarge, ml.g4dn.12xlarge, ml.g4dn.16xlarge, ml.m5.large, ml.m5.xlarge, ml.m5.2xlarge, ml.m5.4xlarge, ml.m5.12xlarge, ml.m5.24xlarge, ml.c4.xlarge, ml.c4.2xlarge, ml.c4.4xlarge, ml.c4.8xlarge, ml.p2.xlarge, ml.p2.8xlarge, ml.p2.16xlarge, ml.p3.2xlarge, ml.p3.8xlarge, ml.p3.16xlarge, ml.p3dn.24xlarge, ml.p4d.24xlarge, ml.p4de.24xlarge, ml.p5.48xlarge, ml.p5e.48xlarge, ml.p5en.48xlarge, ml.c5.xlarge, ml.c5.2xlarge, ml.c5.4xlarge, ml.c5.9xlarge, ml.c5.18xlarge, ml.c5n.xlarge, ml.c5n.2xlarge, ml.c5n.4xlarge, ml.c5n.9xlarge, ml.c5n.18xlarge, ml.g5.xlarge, ml.g5.2xlarge, ml.g5.4xlarge, ml.g5.8xlarge, ml.g5.16xlarge, ml.g5.12xlarge, ml.g5.24xlarge, ml.g5.48xlarge, ml.g6.xlarge, ml.g6.2xlarge, ml.g6.4xlarge, ml.g6.8xlarge, ml.g6.16xlarge, ml.g6.12xlarge, ml.g6.24xlarge, ml.g6.48xlarge, ml.g6e.xlarge, ml.g6e.2xlarge, ml.g6e.4xlarge, ml.g6e.8xlarge, ml.g6e.16xlarge, ml.g6e.12xlarge, ml.g6e.24xlarge, ml.g6e.48xlarge, ml.trn1.2xlarge, ml.trn1.32xlarge, ml.trn1n.32xlarge, ml.trn2.48xlarge, ml.m6i.large, ml.m6i.xlarge, ml.m6i.2xlarge, ml.m6i.4xlarge, ml.m6i.8xlarge, ml.m6i.12xlarge, ml.m6i.16xlarge, ml.m6i.24xlarge, ml.m6i.32xlarge, ml.c6i.xlarge, ml.c6i.2xlarge, ml.c6i.8xlarge, ml.c6i.4xlarge, ml.c6i.12xlarge, ml.c6i.16xlarge, ml.c6i.24xlarge, ml.c6i.32xlarge, ml.r5d.large, ml.r5d.xlarge, ml.r5d.2xlarge, ml.r5d.4xlarge, ml.r5d.8xlarge, ml.r5d.12xlarge, ml.r5d.16xlarge, ml.r5d.24xlarge, ml.t3.medium, ml.t3.large, ml.t3.xlarge, ml.t3.2xlarge, ml.r5.large, ml.r5.xlarge, ml.r5.2xlarge, ml.r5.4xlarge, ml.r5.8xlarge, ml.r5.12xlarge, ml.r5.16xlarge, ml.r5.24xlarge
      instance_count: 1,
      volume_size_in_gb: 1,
      volume_kms_key_id: "KmsKeyId",
      allocation_strategy: "Prioritized", # accepts Prioritized
      instance_configs: [
        {
          instance_type: "ml.m4.xlarge", # required, accepts ml.m4.xlarge, ml.m4.2xlarge, ml.m4.4xlarge, ml.m4.10xlarge, ml.m4.16xlarge, ml.g4dn.xlarge, ml.g4dn.2xlarge, ml.g4dn.4xlarge, ml.g4dn.8xlarge, ml.g4dn.12xlarge, ml.g4dn.16xlarge, ml.m5.large, ml.m5.xlarge, ml.m5.2xlarge, ml.m5.4xlarge, ml.m5.12xlarge, ml.m5.24xlarge, ml.c4.xlarge, ml.c4.2xlarge, ml.c4.4xlarge, ml.c4.8xlarge, ml.p2.xlarge, ml.p2.8xlarge, ml.p2.16xlarge, ml.p3.2xlarge, ml.p3.8xlarge, ml.p3.16xlarge, ml.p3dn.24xlarge, ml.p4d.24xlarge, ml.p4de.24xlarge, ml.p5.48xlarge, ml.p5e.48xlarge, ml.p5en.48xlarge, ml.c5.xlarge, ml.c5.2xlarge, ml.c5.4xlarge, ml.c5.9xlarge, ml.c5.18xlarge, ml.c5n.xlarge, ml.c5n.2xlarge, ml.c5n.4xlarge, ml.c5n.9xlarge, ml.c5n.18xlarge, ml.g5.xlarge, ml.g5.2xlarge, ml.g5.4xlarge, ml.g5.8xlarge, ml.g5.16xlarge, ml.g5.12xlarge, ml.g5.24xlarge, ml.g5.48xlarge, ml.g6.xlarge, ml.g6.2xlarge, ml.g6.4xlarge, ml.g6.8xlarge, ml.g6.16xlarge, ml.g6.12xlarge, ml.g6.24xlarge, ml.g6.48xlarge, ml.g6e.xlarge, ml.g6e.2xlarge, ml.g6e.4xlarge, ml.g6e.8xlarge, ml.g6e.16xlarge, ml.g6e.12xlarge, ml.g6e.24xlarge, ml.g6e.48xlarge, ml.trn1.2xlarge, ml.trn1.32xlarge, ml.trn1n.32xlarge, ml.trn2.48xlarge, ml.m6i.large, ml.m6i.xlarge, ml.m6i.2xlarge, ml.m6i.4xlarge, ml.m6i.8xlarge, ml.m6i.12xlarge, ml.m6i.16xlarge, ml.m6i.24xlarge, ml.m6i.32xlarge, ml.c6i.xlarge, ml.c6i.2xlarge, ml.c6i.8xlarge, ml.c6i.4xlarge, ml.c6i.12xlarge, ml.c6i.16xlarge, ml.c6i.24xlarge, ml.c6i.32xlarge, ml.r5d.large, ml.r5d.xlarge, ml.r5d.2xlarge, ml.r5d.4xlarge, ml.r5d.8xlarge, ml.r5d.12xlarge, ml.r5d.16xlarge, ml.r5d.24xlarge, ml.t3.medium, ml.t3.large, ml.t3.xlarge, ml.t3.2xlarge, ml.r5.large, ml.r5.xlarge, ml.r5.2xlarge, ml.r5.4xlarge, ml.r5.8xlarge, ml.r5.12xlarge, ml.r5.16xlarge, ml.r5.24xlarge
          instance_count: 1, # required
          volume_size_in_gb: 1, # required
        },
      ],
    },
    stopping_condition: { # required
      max_runtime_in_seconds: 1,
      max_wait_time_in_seconds: 1,
      max_pending_time_in_seconds: 1,
    },
    enable_network_isolation: false,
    enable_inter_container_traffic_encryption: false,
    enable_managed_spot_training: false,
    checkpoint_config: {
      s3_uri: "S3Uri", # required
      local_path: "DirectoryPath",
    },
    retry_strategy: {
      maximum_retry_attempts: 1, # required
    },
    environment: {
      "HyperParameterTrainingJobEnvironmentKey" => "HyperParameterTrainingJobEnvironmentValue",
    },
  },
  training_job_definitions: [
    {
      definition_name: "HyperParameterTrainingJobDefinitionName",
      tuning_objective: {
        type: "Maximize", # required, accepts Maximize, Minimize
        metric_name: "MetricName", # required
      },
      hyper_parameter_ranges: {
        integer_parameter_ranges: [
          {
            name: "ParameterKey", # required
            min_value: "ParameterValue", # required
            max_value: "ParameterValue", # required
            scaling_type: "Auto", # accepts Auto, Linear, Logarithmic, ReverseLogarithmic
          },
        ],
        continuous_parameter_ranges: [
          {
            name: "ParameterKey", # required
            min_value: "ParameterValue", # required
            max_value: "ParameterValue", # required
            scaling_type: "Auto", # accepts Auto, Linear, Logarithmic, ReverseLogarithmic
          },
        ],
        categorical_parameter_ranges: [
          {
            name: "ParameterKey", # required
            values: ["ParameterValue"], # required
          },
        ],
        auto_parameters: [
          {
            name: "ParameterKey", # required
            value_hint: "ParameterValue", # required
          },
        ],
      },
      static_hyper_parameters: {
        "HyperParameterKey" => "HyperParameterValue",
      },
      algorithm_specification: { # required
        training_image: "AlgorithmImage",
        training_input_mode: "Pipe", # required, accepts Pipe, File, FastFile
        algorithm_name: "ArnOrName",
        metric_definitions: [
          {
            name: "MetricName", # required
            regex: "MetricRegex", # required
          },
        ],
      },
      role_arn: "RoleArn", # required
      input_data_config: [
        {
          channel_name: "ChannelName", # required
          data_source: { # required
            s3_data_source: {
              s3_data_type: "ManifestFile", # required, accepts ManifestFile, S3Prefix, AugmentedManifestFile
              s3_uri: "S3Uri", # required
              s3_data_distribution_type: "FullyReplicated", # accepts FullyReplicated, ShardedByS3Key
              attribute_names: ["AttributeName"],
              instance_group_names: ["InstanceGroupName"],
            },
            file_system_data_source: {
              file_system_id: "FileSystemId", # required
              file_system_access_mode: "rw", # required, accepts rw, ro
              file_system_type: "EFS", # required, accepts EFS, FSxLustre
              directory_path: "DirectoryPath", # required
            },
          },
          content_type: "ContentType",
          compression_type: "None", # accepts None, Gzip
          record_wrapper_type: "None", # accepts None, RecordIO
          input_mode: "Pipe", # accepts Pipe, File, FastFile
          shuffle_config: {
            seed: 1, # required
          },
        },
      ],
      vpc_config: {
        security_group_ids: ["SecurityGroupId"], # required
        subnets: ["SubnetId"], # required
      },
      output_data_config: { # required
        kms_key_id: "KmsKeyId",
        s3_output_path: "S3Uri", # required
        compression_type: "GZIP", # accepts GZIP, NONE
      },
      resource_config: {
        instance_type: "ml.m4.xlarge", # accepts ml.m4.xlarge, ml.m4.2xlarge, ml.m4.4xlarge, ml.m4.10xlarge, ml.m4.16xlarge, ml.g4dn.xlarge, ml.g4dn.2xlarge, ml.g4dn.4xlarge, ml.g4dn.8xlarge, ml.g4dn.12xlarge, ml.g4dn.16xlarge, ml.m5.large, ml.m5.xlarge, ml.m5.2xlarge, ml.m5.4xlarge, ml.m5.12xlarge, ml.m5.24xlarge, ml.c4.xlarge, ml.c4.2xlarge, ml.c4.4xlarge, ml.c4.8xlarge, ml.p2.xlarge, ml.p2.8xlarge, ml.p2.16xlarge, ml.p3.2xlarge, ml.p3.8xlarge, ml.p3.16xlarge, ml.p3dn.24xlarge, ml.p4d.24xlarge, ml.p4de.24xlarge, ml.p5.48xlarge, ml.p5e.48xlarge, ml.p5en.48xlarge, ml.c5.xlarge, ml.c5.2xlarge, ml.c5.4xlarge, ml.c5.9xlarge, ml.c5.18xlarge, ml.c5n.xlarge, ml.c5n.2xlarge, ml.c5n.4xlarge, ml.c5n.9xlarge, ml.c5n.18xlarge, ml.g5.xlarge, ml.g5.2xlarge, ml.g5.4xlarge, ml.g5.8xlarge, ml.g5.16xlarge, ml.g5.12xlarge, ml.g5.24xlarge, ml.g5.48xlarge, ml.g6.xlarge, ml.g6.2xlarge, ml.g6.4xlarge, ml.g6.8xlarge, ml.g6.16xlarge, ml.g6.12xlarge, ml.g6.24xlarge, ml.g6.48xlarge, ml.g6e.xlarge, ml.g6e.2xlarge, ml.g6e.4xlarge, ml.g6e.8xlarge, ml.g6e.16xlarge, ml.g6e.12xlarge, ml.g6e.24xlarge, ml.g6e.48xlarge, ml.trn1.2xlarge, ml.trn1.32xlarge, ml.trn1n.32xlarge, ml.trn2.48xlarge, ml.m6i.large, ml.m6i.xlarge, ml.m6i.2xlarge, ml.m6i.4xlarge, ml.m6i.8xlarge, ml.m6i.12xlarge, ml.m6i.16xlarge, ml.m6i.24xlarge, ml.m6i.32xlarge, ml.c6i.xlarge, ml.c6i.2xlarge, ml.c6i.8xlarge, ml.c6i.4xlarge, ml.c6i.12xlarge, ml.c6i.16xlarge, ml.c6i.24xlarge, ml.c6i.32xlarge, ml.r5d.large, ml.r5d.xlarge, ml.r5d.2xlarge, ml.r5d.4xlarge, ml.r5d.8xlarge, ml.r5d.12xlarge, ml.r5d.16xlarge, ml.r5d.24xlarge, ml.t3.medium, ml.t3.large, ml.t3.xlarge, ml.t3.2xlarge, ml.r5.large, ml.r5.xlarge, ml.r5.2xlarge, ml.r5.4xlarge, ml.r5.8xlarge, ml.r5.12xlarge, ml.r5.16xlarge, ml.r5.24xlarge
        instance_count: 1,
        volume_size_in_gb: 1, # required
        volume_kms_key_id: "KmsKeyId",
        keep_alive_period_in_seconds: 1,
        instance_groups: [
          {
            instance_type: "ml.m4.xlarge", # required, accepts ml.m4.xlarge, ml.m4.2xlarge, ml.m4.4xlarge, ml.m4.10xlarge, ml.m4.16xlarge, ml.g4dn.xlarge, ml.g4dn.2xlarge, ml.g4dn.4xlarge, ml.g4dn.8xlarge, ml.g4dn.12xlarge, ml.g4dn.16xlarge, ml.m5.large, ml.m5.xlarge, ml.m5.2xlarge, ml.m5.4xlarge, ml.m5.12xlarge, ml.m5.24xlarge, ml.c4.xlarge, ml.c4.2xlarge, ml.c4.4xlarge, ml.c4.8xlarge, ml.p2.xlarge, ml.p2.8xlarge, ml.p2.16xlarge, ml.p3.2xlarge, ml.p3.8xlarge, ml.p3.16xlarge, ml.p3dn.24xlarge, ml.p4d.24xlarge, ml.p4de.24xlarge, ml.p5.48xlarge, ml.p5e.48xlarge, ml.p5en.48xlarge, ml.c5.xlarge, ml.c5.2xlarge, ml.c5.4xlarge, ml.c5.9xlarge, ml.c5.18xlarge, ml.c5n.xlarge, ml.c5n.2xlarge, ml.c5n.4xlarge, ml.c5n.9xlarge, ml.c5n.18xlarge, ml.g5.xlarge, ml.g5.2xlarge, ml.g5.4xlarge, ml.g5.8xlarge, ml.g5.16xlarge, ml.g5.12xlarge, ml.g5.24xlarge, ml.g5.48xlarge, ml.g6.xlarge, ml.g6.2xlarge, ml.g6.4xlarge, ml.g6.8xlarge, ml.g6.16xlarge, ml.g6.12xlarge, ml.g6.24xlarge, ml.g6.48xlarge, ml.g6e.xlarge, ml.g6e.2xlarge, ml.g6e.4xlarge, ml.g6e.8xlarge, ml.g6e.16xlarge, ml.g6e.12xlarge, ml.g6e.24xlarge, ml.g6e.48xlarge, ml.trn1.2xlarge, ml.trn1.32xlarge, ml.trn1n.32xlarge, ml.trn2.48xlarge, ml.m6i.large, ml.m6i.xlarge, ml.m6i.2xlarge, ml.m6i.4xlarge, ml.m6i.8xlarge, ml.m6i.12xlarge, ml.m6i.16xlarge, ml.m6i.24xlarge, ml.m6i.32xlarge, ml.c6i.xlarge, ml.c6i.2xlarge, ml.c6i.8xlarge, ml.c6i.4xlarge, ml.c6i.12xlarge, ml.c6i.16xlarge, ml.c6i.24xlarge, ml.c6i.32xlarge, ml.r5d.large, ml.r5d.xlarge, ml.r5d.2xlarge, ml.r5d.4xlarge, ml.r5d.8xlarge, ml.r5d.12xlarge, ml.r5d.16xlarge, ml.r5d.24xlarge, ml.t3.medium, ml.t3.large, ml.t3.xlarge, ml.t3.2xlarge, ml.r5.large, ml.r5.xlarge, ml.r5.2xlarge, ml.r5.4xlarge, ml.r5.8xlarge, ml.r5.12xlarge, ml.r5.16xlarge, ml.r5.24xlarge
            instance_count: 1, # required
            instance_group_name: "InstanceGroupName", # required
          },
        ],
        training_plan_arn: "TrainingPlanArn",
      },
      hyper_parameter_tuning_resource_config: {
        instance_type: "ml.m4.xlarge", # accepts ml.m4.xlarge, ml.m4.2xlarge, ml.m4.4xlarge, ml.m4.10xlarge, ml.m4.16xlarge, ml.g4dn.xlarge, ml.g4dn.2xlarge, ml.g4dn.4xlarge, ml.g4dn.8xlarge, ml.g4dn.12xlarge, ml.g4dn.16xlarge, ml.m5.large, ml.m5.xlarge, ml.m5.2xlarge, ml.m5.4xlarge, ml.m5.12xlarge, ml.m5.24xlarge, ml.c4.xlarge, ml.c4.2xlarge, ml.c4.4xlarge, ml.c4.8xlarge, ml.p2.xlarge, ml.p2.8xlarge, ml.p2.16xlarge, ml.p3.2xlarge, ml.p3.8xlarge, ml.p3.16xlarge, ml.p3dn.24xlarge, ml.p4d.24xlarge, ml.p4de.24xlarge, ml.p5.48xlarge, ml.p5e.48xlarge, ml.p5en.48xlarge, ml.c5.xlarge, ml.c5.2xlarge, ml.c5.4xlarge, ml.c5.9xlarge, ml.c5.18xlarge, ml.c5n.xlarge, ml.c5n.2xlarge, ml.c5n.4xlarge, ml.c5n.9xlarge, ml.c5n.18xlarge, ml.g5.xlarge, ml.g5.2xlarge, ml.g5.4xlarge, ml.g5.8xlarge, ml.g5.16xlarge, ml.g5.12xlarge, ml.g5.24xlarge, ml.g5.48xlarge, ml.g6.xlarge, ml.g6.2xlarge, ml.g6.4xlarge, ml.g6.8xlarge, ml.g6.16xlarge, ml.g6.12xlarge, ml.g6.24xlarge, ml.g6.48xlarge, ml.g6e.xlarge, ml.g6e.2xlarge, ml.g6e.4xlarge, ml.g6e.8xlarge, ml.g6e.16xlarge, ml.g6e.12xlarge, ml.g6e.24xlarge, ml.g6e.48xlarge, ml.trn1.2xlarge, ml.trn1.32xlarge, ml.trn1n.32xlarge, ml.trn2.48xlarge, ml.m6i.large, ml.m6i.xlarge, ml.m6i.2xlarge, ml.m6i.4xlarge, ml.m6i.8xlarge, ml.m6i.12xlarge, ml.m6i.16xlarge, ml.m6i.24xlarge, ml.m6i.32xlarge, ml.c6i.xlarge, ml.c6i.2xlarge, ml.c6i.8xlarge, ml.c6i.4xlarge, ml.c6i.12xlarge, ml.c6i.16xlarge, ml.c6i.24xlarge, ml.c6i.32xlarge, ml.r5d.large, ml.r5d.xlarge, ml.r5d.2xlarge, ml.r5d.4xlarge, ml.r5d.8xlarge, ml.r5d.12xlarge, ml.r5d.16xlarge, ml.r5d.24xlarge, ml.t3.medium, ml.t3.large, ml.t3.xlarge, ml.t3.2xlarge, ml.r5.large, ml.r5.xlarge, ml.r5.2xlarge, ml.r5.4xlarge, ml.r5.8xlarge, ml.r5.12xlarge, ml.r5.16xlarge, ml.r5.24xlarge
        instance_count: 1,
        volume_size_in_gb: 1,
        volume_kms_key_id: "KmsKeyId",
        allocation_strategy: "Prioritized", # accepts Prioritized
        instance_configs: [
          {
            instance_type: "ml.m4.xlarge", # required, accepts ml.m4.xlarge, ml.m4.2xlarge, ml.m4.4xlarge, ml.m4.10xlarge, ml.m4.16xlarge, ml.g4dn.xlarge, ml.g4dn.2xlarge, ml.g4dn.4xlarge, ml.g4dn.8xlarge, ml.g4dn.12xlarge, ml.g4dn.16xlarge, ml.m5.large, ml.m5.xlarge, ml.m5.2xlarge, ml.m5.4xlarge, ml.m5.12xlarge, ml.m5.24xlarge, ml.c4.xlarge, ml.c4.2xlarge, ml.c4.4xlarge, ml.c4.8xlarge, ml.p2.xlarge, ml.p2.8xlarge, ml.p2.16xlarge, ml.p3.2xlarge, ml.p3.8xlarge, ml.p3.16xlarge, ml.p3dn.24xlarge, ml.p4d.24xlarge, ml.p4de.24xlarge, ml.p5.48xlarge, ml.p5e.48xlarge, ml.p5en.48xlarge, ml.c5.xlarge, ml.c5.2xlarge, ml.c5.4xlarge, ml.c5.9xlarge, ml.c5.18xlarge, ml.c5n.xlarge, ml.c5n.2xlarge, ml.c5n.4xlarge, ml.c5n.9xlarge, ml.c5n.18xlarge, ml.g5.xlarge, ml.g5.2xlarge, ml.g5.4xlarge, ml.g5.8xlarge, ml.g5.16xlarge, ml.g5.12xlarge, ml.g5.24xlarge, ml.g5.48xlarge, ml.g6.xlarge, ml.g6.2xlarge, ml.g6.4xlarge, ml.g6.8xlarge, ml.g6.16xlarge, ml.g6.12xlarge, ml.g6.24xlarge, ml.g6.48xlarge, ml.g6e.xlarge, ml.g6e.2xlarge, ml.g6e.4xlarge, ml.g6e.8xlarge, ml.g6e.16xlarge, ml.g6e.12xlarge, ml.g6e.24xlarge, ml.g6e.48xlarge, ml.trn1.2xlarge, ml.trn1.32xlarge, ml.trn1n.32xlarge, ml.trn2.48xlarge, ml.m6i.large, ml.m6i.xlarge, ml.m6i.2xlarge, ml.m6i.4xlarge, ml.m6i.8xlarge, ml.m6i.12xlarge, ml.m6i.16xlarge, ml.m6i.24xlarge, ml.m6i.32xlarge, ml.c6i.xlarge, ml.c6i.2xlarge, ml.c6i.8xlarge, ml.c6i.4xlarge, ml.c6i.12xlarge, ml.c6i.16xlarge, ml.c6i.24xlarge, ml.c6i.32xlarge, ml.r5d.large, ml.r5d.xlarge, ml.r5d.2xlarge, ml.r5d.4xlarge, ml.r5d.8xlarge, ml.r5d.12xlarge, ml.r5d.16xlarge, ml.r5d.24xlarge, ml.t3.medium, ml.t3.large, ml.t3.xlarge, ml.t3.2xlarge, ml.r5.large, ml.r5.xlarge, ml.r5.2xlarge, ml.r5.4xlarge, ml.r5.8xlarge, ml.r5.12xlarge, ml.r5.16xlarge, ml.r5.24xlarge
            instance_count: 1, # required
            volume_size_in_gb: 1, # required
          },
        ],
      },
      stopping_condition: { # required
        max_runtime_in_seconds: 1,
        max_wait_time_in_seconds: 1,
        max_pending_time_in_seconds: 1,
      },
      enable_network_isolation: false,
      enable_inter_container_traffic_encryption: false,
      enable_managed_spot_training: false,
      checkpoint_config: {
        s3_uri: "S3Uri", # required
        local_path: "DirectoryPath",
      },
      retry_strategy: {
        maximum_retry_attempts: 1, # required
      },
      environment: {
        "HyperParameterTrainingJobEnvironmentKey" => "HyperParameterTrainingJobEnvironmentValue",
      },
    },
  ],
  warm_start_config: {
    parent_hyper_parameter_tuning_jobs: [ # required
      {
        hyper_parameter_tuning_job_name: "HyperParameterTuningJobName",
      },
    ],
    warm_start_type: "IdenticalDataAndAlgorithm", # required, accepts IdenticalDataAndAlgorithm, TransferLearning
  },
  tags: [
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
  autotune: {
    mode: "Enabled", # required, accepts Enabled
  },
})

Response structure


resp.hyper_parameter_tuning_job_arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :hyper_parameter_tuning_job_name (required, String)

    The name of the tuning job. This name is the prefix for the names of all training jobs that this tuning job launches. The name must be unique within the same Amazon Web Services account and Amazon Web Services Region. The name must have 1 to 32 characters. Valid characters are a-z, A-Z, 0-9, and : + = @ _ % - (hyphen). The name is not case sensitive.

  • :hyper_parameter_tuning_job_config (required, Types::HyperParameterTuningJobConfig)

    The HyperParameterTuningJobConfig object that describes the tuning job, including the search strategy, the objective metric used to evaluate training jobs, ranges of parameters to search, and resource limits for the tuning job. For more information, see How Hyperparameter Tuning Works.

  • :training_job_definition (Types::HyperParameterTrainingJobDefinition)

    The HyperParameterTrainingJobDefinition object that describes the training jobs that this tuning job launches, including static hyperparameters, input data configuration, output data configuration, resource configuration, and stopping condition.

  • :training_job_definitions (Array<Types::HyperParameterTrainingJobDefinition>)

    A list of the HyperParameterTrainingJobDefinition objects launched for this tuning job.

  • :warm_start_config (Types::HyperParameterTuningJobWarmStartConfig)

    Specifies the configuration for starting the hyperparameter tuning job using one or more previous tuning jobs as a starting point. The results of previous tuning jobs are used to inform which combinations of hyperparameters to search over in the new tuning job.

    All training jobs launched by the new hyperparameter tuning job are evaluated by using the objective metric. If you specify IDENTICAL_DATA_AND_ALGORITHM as the WarmStartType value for the warm start configuration, the training job that performs the best in the new tuning job is compared to the best training jobs from the parent tuning jobs. From these, the training job that performs the best as measured by the objective metric is returned as the overall best training job.

    All training jobs launched by parent hyperparameter tuning jobs and the new hyperparameter tuning jobs count against the limit of training jobs for the tuning job.

  • :tags (Array<Types::Tag>)

    An array of key-value pairs. You can use tags to categorize your Amazon Web Services resources in different ways, for example, by purpose, owner, or environment. For more information, see Tagging Amazon Web Services Resources.

    Tags that you specify for the tuning job are also added to all training jobs that the tuning job launches.

  • :autotune (Types::Autotune)

    Configures SageMaker Automatic model tuning (AMT) to automatically find optimal parameters for the following fields:

    • ParameterRanges: The names and ranges of parameters that a hyperparameter tuning job can optimize.

    • ResourceLimits: The maximum resources that can be used for a training job. These resources include the maximum number of training jobs, the maximum runtime of a tuning job, and the maximum number of training jobs to run at the same time.

    • TrainingJobEarlyStoppingType: A flag that specifies whether or not to use early stopping for training jobs launched by a hyperparameter tuning job.

    • RetryStrategy: The number of times to retry a training job.

    • Strategy: Specifies how hyperparameter tuning chooses the combinations of hyperparameter values to use for the training jobs that it launches.

    • ConvergenceDetected: A flag to indicate that Automatic model tuning (AMT) has detected model convergence.
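
For orientation, the warm start and Autotune settings described above can be combined as in the following sketch. The tuning job names are placeholders, and tuning_config and training_definition stand in for request hashes built as shown in the placeholder syntax; this is a minimal illustration, not a complete configuration.

# Hedged sketch: warm-start a new tuning job from a hypothetical parent job
# and let Autotune assist with the search settings. "image-cls-tuning-v1"/"-v2"
# are placeholder job names; tuning_config and training_definition are assumed
# to be fully built request hashes.
resp = client.create_hyper_parameter_tuning_job({
  hyper_parameter_tuning_job_name: "image-cls-tuning-v2",
  hyper_parameter_tuning_job_config: tuning_config,
  training_job_definition: training_definition,
  warm_start_config: {
    parent_hyper_parameter_tuning_jobs: [
      { hyper_parameter_tuning_job_name: "image-cls-tuning-v1" },
    ],
    warm_start_type: "TransferLearning",
  },
  autotune: { mode: "Enabled" },
})
puts resp.hyper_parameter_tuning_job_arn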

Returns:

  • (Types::CreateHyperParameterTuningJobResponse)

See Also:



# File 'gems/aws-sdk-sagemaker/lib/aws-sdk-sagemaker/client.rb', line 4919

def create_hyper_parameter_tuning_job(params = {}, options = {})
  req = build_request(:create_hyper_parameter_tuning_job, params)
  req.send_request(options)
end

#create_image(params = {}) ⇒ Types::CreateImageResponse

Creates a custom SageMaker image. A SageMaker image is a set of image versions. Each image version represents a container image stored in Amazon ECR. For more information, see Bring your own SageMaker image.

Examples:

Request syntax with placeholder values


resp = client.create_image({
  description: "ImageDescription",
  display_name: "ImageDisplayName",
  image_name: "ImageName", # required
  role_arn: "RoleArn", # required
  tags: [
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
})

Response structure


resp.image_arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :description (String)

    The description of the image.

  • :display_name (String)

    The display name of the image. If not provided, ImageName is displayed.

  • :image_name (required, String)

    The name of the image. Must be unique to your account.

  • :role_arn (required, String)

    The ARN of an IAM role that enables Amazon SageMaker to perform tasks on your behalf.

  • :tags (Array<Types::Tag>)

    A list of tags to apply to the image.
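
A minimal call, using only the required fields, might look like the sketch below; the image name and role ARN are placeholders for values in your account.

# Hedged sketch: create the image record and keep its ARN. The image must
# exist before you can add versions with create_image_version.
resp = client.create_image({
  image_name: "my-custom-image",                            # placeholder
  role_arn: "arn:aws:iam::123456789012:role/SageMakerRole", # placeholder
})
image_arn = resp.image_arn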

Returns:

  • (Types::CreateImageResponse)

See Also:



# File 'gems/aws-sdk-sagemaker/lib/aws-sdk-sagemaker/client.rb', line 4977

def create_image(params = {}, options = {})
  req = build_request(:create_image, params)
  req.send_request(options)
end

#create_image_version(params = {}) ⇒ Types::CreateImageVersionResponse

Creates a version of the SageMaker image specified by ImageName. The version represents the Amazon ECR container image specified by BaseImage.

Examples:

Request syntax with placeholder values


resp = client.create_image_version({
  base_image: "ImageBaseImage", # required
  client_token: "ClientToken", # required
  image_name: "ImageName", # required
  aliases: ["SageMakerImageVersionAlias"],
  vendor_guidance: "NOT_PROVIDED", # accepts NOT_PROVIDED, STABLE, TO_BE_ARCHIVED, ARCHIVED
  job_type: "TRAINING", # accepts TRAINING, INFERENCE, NOTEBOOK_KERNEL
  ml_framework: "MLFramework",
  programming_lang: "ProgrammingLang",
  processor: "CPU", # accepts CPU, GPU
  horovod: false,
  release_notes: "ReleaseNotes",
})

Response structure


resp.image_version_arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :base_image (required, String)

    The registry path of the container image to use as the starting point for this version. The path is an Amazon ECR URI in the following format:

    <acct-id>.dkr.ecr.<region>.amazonaws.com/<repo-name[:tag] or [@digest]>

  • :client_token (required, String)

    A unique ID. If not specified, the Amazon Web Services CLI and Amazon Web Services SDKs, such as the SDK for Python (Boto3), add a unique value to the call.

    A suitable default value is auto-generated. You should normally not need to pass this option.

  • :image_name (required, String)

    The ImageName of the Image to create a version of.

  • :aliases (Array<String>)

    A list of aliases created with the image version.

  • :vendor_guidance (String)

    The stability of the image version, specified by the maintainer.

    • NOT_PROVIDED: The maintainers did not provide a status for image version stability.

    • STABLE: The image version is stable.

    • TO_BE_ARCHIVED: The image version is set to be archived. Custom image versions that are set to be archived are automatically archived after three months.

    • ARCHIVED: The image version is archived. Archived image versions are not searchable and are no longer actively supported.

  • :job_type (String)

    Indicates SageMaker job type compatibility.

    • TRAINING: The image version is compatible with SageMaker training jobs.

    • INFERENCE: The image version is compatible with SageMaker inference jobs.

    • NOTEBOOK_KERNEL: The image version is compatible with SageMaker notebook kernels.

  • :ml_framework (String)

    The machine learning framework vended in the image version.

  • :programming_lang (String)

    The supported programming language and its version.

  • :processor (String)

    Indicates CPU or GPU compatibility.

    • CPU: The image version is compatible with CPU.

    • GPU: The image version is compatible with GPU.

  • :horovod (Boolean)

    Indicates Horovod compatibility.

  • :release_notes (String)

    The maintainer description of the image version.
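
As a rough illustration of the base_image format and the auto-generated client token noted above, the sketch below adds a version to an existing image; the account ID, Region, repository, tag, and image name are placeholders.

# Hedged sketch: the base_image string follows the ECR URI format described
# for :base_image, and :client_token is omitted so the SDK can supply one.
resp = client.create_image_version({
  base_image: "123456789012.dkr.ecr.us-west-2.amazonaws.com/my-repo:latest", # placeholder
  image_name: "my-custom-image", # placeholder; must already exist
})
puts resp.image_version_arn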

Returns:

  • (Types::CreateImageVersionResponse)

See Also:



# File 'gems/aws-sdk-sagemaker/lib/aws-sdk-sagemaker/client.rb', line 5082

def create_image_version(params = {}, options = {})
  req = build_request(:create_image_version, params)
  req.send_request(options)
end

#create_inference_component(params = {}) ⇒ Types::CreateInferenceComponentOutput

Creates an inference component, which is a SageMaker hosting object that you can use to deploy a model to an endpoint. In the inference component settings, you specify the model, the endpoint, and how the model utilizes the resources that the endpoint hosts. You can optimize resource utilization by tailoring how the required CPU cores, accelerators, and memory are allocated. You can deploy multiple inference components to an endpoint, where each inference component contains one model and the resource utilization needs for that individual model. After you deploy an inference component, you can directly invoke the associated model when you use the InvokeEndpoint API action.

Examples:

Request syntax with placeholder values


resp = client.create_inference_component({
  inference_component_name: "InferenceComponentName", # required
  endpoint_name: "EndpointName", # required
  variant_name: "VariantName",
  specification: { # required
    model_name: "ModelName",
    container: {
      image: "ContainerImage",
      artifact_url: "Url",
      environment: {
        "EnvironmentKey" => "EnvironmentValue",
      },
    },
    startup_parameters: {
      model_data_download_timeout_in_seconds: 1,
      container_startup_health_check_timeout_in_seconds: 1,
    },
    compute_resource_requirements: {
      number_of_cpu_cores_required: 1.0,
      number_of_accelerator_devices_required: 1.0,
      min_memory_required_in_mb: 1, # required
      max_memory_required_in_mb: 1,
    },
    base_inference_component_name: "InferenceComponentName",
  },
  runtime_config: {
    copy_count: 1, # required
  },
  tags: [
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
})

Response structure


resp.inference_component_arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :inference_component_name (required, String)

    A unique name to assign to the inference component.

  • :endpoint_name (required, String)

    The name of an existing endpoint where you host the inference component.

  • :variant_name (String)

    The name of an existing production variant where you host the inference component.

  • :specification (required, Types::InferenceComponentSpecification)

    Details about the resources to deploy with this inference component, including the model, container, and compute resources.

  • :runtime_config (Types::InferenceComponentRuntimeConfig)

    Runtime settings for a model that is deployed with an inference component.

  • :tags (Array<Types::Tag>)

    A list of key-value pairs associated with the model. For more information, see Tagging Amazon Web Services resources in the Amazon Web Services General Reference.
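
For orientation, the sketch below deploys one copy of an existing model as an inference component; the component, endpoint, variant, and model names are placeholders, and the resource requirements are illustrative values only.

# Hedged sketch: host an existing model on an existing endpoint as an
# inference component with modest CPU and memory requirements.
resp = client.create_inference_component({
  inference_component_name: "my-inference-component", # placeholder
  endpoint_name: "my-endpoint",                       # placeholder; must already exist
  variant_name: "AllTraffic",                         # placeholder production variant
  specification: {
    model_name: "my-model",                           # placeholder; created with create_model
    compute_resource_requirements: {
      number_of_cpu_cores_required: 1.0,
      min_memory_required_in_mb: 1024,
    },
  },
  runtime_config: { copy_count: 1 },
})
puts resp.inference_component_arn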

Returns:

  • (Types::CreateInferenceComponentOutput)

See Also:



# File 'gems/aws-sdk-sagemaker/lib/aws-sdk-sagemaker/client.rb', line 5177

def create_inference_component(params = {}, options = {})
  req = build_request(:create_inference_component, params)
  req.send_request(options)
end

#create_inference_experiment(params = {}) ⇒ Types::CreateInferenceExperimentResponse

Creates an inference experiment using the configurations specified in the request.

Use this API to set up and schedule an experiment to compare model variants on an Amazon SageMaker inference endpoint. For more information about inference experiments, see Shadow tests.

Amazon SageMaker begins your experiment at the scheduled time and routes traffic to your endpoint's model variants based on your specified configuration.

While the experiment is in progress or after it has concluded, you can view metrics that compare your model variants. For more information, see View, monitor, and edit shadow tests.

Examples:

Request syntax with placeholder values


resp = client.create_inference_experiment({
  name: "InferenceExperimentName", # required
  type: "ShadowMode", # required, accepts ShadowMode
  schedule: {
    start_time: Time.now,
    end_time: Time.now,
  },
  description: "InferenceExperimentDescription",
  role_arn: "RoleArn", # required
  endpoint_name: "EndpointName", # required
  model_variants: [ # required
    {
      model_name: "ModelName", # required
      variant_name: "ModelVariantName", # required
      infrastructure_config: { # required
        infrastructure_type: "RealTimeInference", # required, accepts RealTimeInference
        real_time_inference_config: { # required
          instance_type: "ml.t2.medium", # required, accepts ml.t2.medium, ml.t2.large, ml.t2.xlarge, ml.t2.2xlarge, ml.t3.medium, ml.t3.large, ml.t3.xlarge, ml.t3.2xlarge, ml.m4.xlarge, ml.m4.2xlarge, ml.m4.4xlarge, ml.m4.10xlarge, ml.m4.16xlarge, ml.m5.xlarge, ml.m5.2xlarge, ml.m5.4xlarge, ml.m5.12xlarge, ml.m5.24xlarge, ml.m5d.large, ml.m5d.xlarge, ml.m5d.2xlarge, ml.m5d.4xlarge, ml.m5d.8xlarge, ml.m5d.12xlarge, ml.m5d.16xlarge, ml.m5d.24xlarge, ml.c4.xlarge, ml.c4.2xlarge, ml.c4.4xlarge, ml.c4.8xlarge, ml.c5.xlarge, ml.c5.2xlarge, ml.c5.4xlarge, ml.c5.9xlarge, ml.c5.18xlarge, ml.c5d.xlarge, ml.c5d.2xlarge, ml.c5d.4xlarge, ml.c5d.9xlarge, ml.c5d.18xlarge, ml.p2.xlarge, ml.p2.8xlarge, ml.p2.16xlarge, ml.p3.2xlarge, ml.p3.8xlarge, ml.p3.16xlarge, ml.p3dn.24xlarge, ml.g4dn.xlarge, ml.g4dn.2xlarge, ml.g4dn.4xlarge, ml.g4dn.8xlarge, ml.g4dn.12xlarge, ml.g4dn.16xlarge, ml.r5.large, ml.r5.xlarge, ml.r5.2xlarge, ml.r5.4xlarge, ml.r5.8xlarge, ml.r5.12xlarge, ml.r5.16xlarge, ml.r5.24xlarge, ml.g5.xlarge, ml.g5.2xlarge, ml.g5.4xlarge, ml.g5.8xlarge, ml.g5.16xlarge, ml.g5.12xlarge, ml.g5.24xlarge, ml.g5.48xlarge, ml.inf1.xlarge, ml.inf1.2xlarge, ml.inf1.6xlarge, ml.inf1.24xlarge, ml.trn1.2xlarge, ml.trn1.32xlarge, ml.trn1n.32xlarge, ml.inf2.xlarge, ml.inf2.8xlarge, ml.inf2.24xlarge, ml.inf2.48xlarge, ml.p4d.24xlarge, ml.p4de.24xlarge, ml.p5.48xlarge, ml.m6i.large, ml.m6i.xlarge, ml.m6i.2xlarge, ml.m6i.4xlarge, ml.m6i.8xlarge, ml.m6i.12xlarge, ml.m6i.16xlarge, ml.m6i.24xlarge, ml.m6i.32xlarge, ml.m7i.large, ml.m7i.xlarge, ml.m7i.2xlarge, ml.m7i.4xlarge, ml.m7i.8xlarge, ml.m7i.12xlarge, ml.m7i.16xlarge, ml.m7i.24xlarge, ml.m7i.48xlarge, ml.c6i.large, ml.c6i.xlarge, ml.c6i.2xlarge, ml.c6i.4xlarge, ml.c6i.8xlarge, ml.c6i.12xlarge, ml.c6i.16xlarge, ml.c6i.24xlarge, ml.c6i.32xlarge, ml.c7i.large, ml.c7i.xlarge, ml.c7i.2xlarge, ml.c7i.4xlarge, ml.c7i.8xlarge, ml.c7i.12xlarge, ml.c7i.16xlarge, ml.c7i.24xlarge, ml.c7i.48xlarge, ml.r6i.large, ml.r6i.xlarge, ml.r6i.2xlarge, ml.r6i.4xlarge, ml.r6i.8xlarge, ml.r6i.12xlarge, ml.r6i.16xlarge, ml.r6i.24xlarge, ml.r6i.32xlarge, ml.r7i.large, ml.r7i.xlarge, ml.r7i.2xlarge, ml.r7i.4xlarge, ml.r7i.8xlarge, ml.r7i.12xlarge, ml.r7i.16xlarge, ml.r7i.24xlarge, ml.r7i.48xlarge, ml.m6id.large, ml.m6id.xlarge, ml.m6id.2xlarge, ml.m6id.4xlarge, ml.m6id.8xlarge, ml.m6id.12xlarge, ml.m6id.16xlarge, ml.m6id.24xlarge, ml.m6id.32xlarge, ml.c6id.large, ml.c6id.xlarge, ml.c6id.2xlarge, ml.c6id.4xlarge, ml.c6id.8xlarge, ml.c6id.12xlarge, ml.c6id.16xlarge, ml.c6id.24xlarge, ml.c6id.32xlarge, ml.r6id.large, ml.r6id.xlarge, ml.r6id.2xlarge, ml.r6id.4xlarge, ml.r6id.8xlarge, ml.r6id.12xlarge, ml.r6id.16xlarge, ml.r6id.24xlarge, ml.r6id.32xlarge, ml.g6.xlarge, ml.g6.2xlarge, ml.g6.4xlarge, ml.g6.8xlarge, ml.g6.12xlarge, ml.g6.16xlarge, ml.g6.24xlarge, ml.g6.48xlarge
          instance_count: 1, # required
        },
      },
    },
  ],
  data_storage_config: {
    destination: "DestinationS3Uri", # required
    kms_key: "KmsKeyId",
    content_type: {
      csv_content_types: ["CsvContentType"],
      json_content_types: ["JsonContentType"],
    },
  },
  shadow_mode_config: { # required
    source_model_variant_name: "ModelVariantName", # required
    shadow_model_variants: [ # required
      {
        shadow_model_variant_name: "ModelVariantName", # required
        sampling_percentage: 1, # required
      },
    ],
  },
  kms_key: "KmsKeyId",
  tags: [
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
})

Response structure


resp.inference_experiment_arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name for the inference experiment.

  • :type (required, String)

    The type of the inference experiment that you want to run. The following types of experiments are possible:

    • ShadowMode: You can use this type to validate a shadow variant. For more information, see Shadow tests.


  • :schedule (Types::InferenceExperimentSchedule)

    The duration for which you want the inference experiment to run. If you don't specify this field, the experiment automatically starts immediately upon creation and concludes after 7 days.

  • :description (String)

    A description for the inference experiment.

  • :role_arn (required, String)

    The ARN of the IAM role that Amazon SageMaker can assume to access model artifacts and container images, and manage Amazon SageMaker Inference endpoints for model deployment.

  • :endpoint_name (required, String)

    The name of the Amazon SageMaker endpoint on which you want to run the inference experiment.

  • :model_variants (required, Array<Types::ModelVariantConfig>)

    An array of ModelVariantConfig objects. There is one for each variant in the inference experiment. Each ModelVariantConfig object in the array describes the infrastructure configuration for the corresponding variant.

  • :data_storage_config (Types::InferenceExperimentDataStorageConfig)

    The Amazon S3 location and configuration for storing inference request and response data.

    This is an optional parameter that you can use for data capture. For more information, see Capture data.

  • :shadow_mode_config (required, Types::ShadowModeConfig)

    The configuration for the ShadowMode inference experiment type. Use this field to specify a production variant that takes all the inference requests, and a shadow variant to which Amazon SageMaker replicates a percentage of the inference requests. For the shadow variant, also specify the percentage of requests that Amazon SageMaker replicates.

  • :kms_key (String)

    The Amazon Web Services Key Management Service (Amazon Web Services KMS) key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance that hosts the endpoint. The KmsKey can be any of the following formats:

    • KMS key ID

      "1234abcd-12ab-34cd-56ef-1234567890ab"

    • Amazon Resource Name (ARN) of a KMS key

      "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"

    • KMS key Alias

      "alias/ExampleAlias"

    • Amazon Resource Name (ARN) of a KMS key Alias

      "arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias"

    If you use a KMS key ID or an alias of your KMS key, the Amazon SageMaker execution role must include permissions to call kms:Encrypt. If you don't provide a KMS key ID, Amazon SageMaker uses the default KMS key for Amazon S3 for your role's account. Amazon SageMaker uses server-side encryption with KMS managed keys for OutputDataConfig. If you use a bucket policy with an s3:PutObject permission that only allows objects with server-side encryption, set the condition key of s3:x-amz-server-side-encryption to "aws:kms". For more information, see KMS managed Encryption Keys in the Amazon Simple Storage Service Developer Guide.

    The KMS key policy must grant permission to the IAM role that you specify in your CreateEndpoint and UpdateEndpoint requests. For more information, see Using Key Policies in Amazon Web Services KMS in the Amazon Web Services Key Management Service Developer Guide.

  • :tags (Array<Types::Tag>)

    Array of key-value pairs. You can use tags to categorize your Amazon Web Services resources in different ways, for example, by purpose, owner, or environment. For more information, see Tagging your Amazon Web Services Resources.
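
The shadow mode configuration described above can be sketched on its own as follows; the variant names are placeholders and must match entries in :model_variants, and the 50 percent sampling rate is illustrative.

# Hedged sketch: the production variant serves every request while SageMaker
# replicates half of them to the shadow variant for comparison.
shadow_mode_config = {
  source_model_variant_name: "production-variant", # placeholder
  shadow_model_variants: [
    {
      shadow_model_variant_name: "shadow-variant",  # placeholder
      sampling_percentage: 50,
    },
  ],
}
# Pass this hash as the :shadow_mode_config field of create_inference_experiment.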

Returns:

  • (Types::CreateInferenceExperimentResponse)

See Also:



# File 'gems/aws-sdk-sagemaker/lib/aws-sdk-sagemaker/client.rb', line 5376

def create_inference_experiment(params = {}, options = {})
  req = build_request(:create_inference_experiment, params)
  req.send_request(options)
end

#create_inference_recommendations_job(params = {}) ⇒ Types::CreateInferenceRecommendationsJobResponse

Starts a recommendation job. You can create either an instance recommendation or load test job.

Examples:

Request syntax with placeholder values


resp = client.create_inference_recommendations_job({
  job_name: "RecommendationJobName", # required
  job_type: "Default", # required, accepts Default, Advanced
  role_arn: "RoleArn", # required
  input_config: { # required
    model_package_version_arn: "ModelPackageArn",
    model_name: "ModelName",
    job_duration_in_seconds: 1,
    traffic_pattern: {
      traffic_type: "PHASES", # accepts PHASES, STAIRS
      phases: [
        {
          initial_number_of_users: 1,
          spawn_rate: 1,
          duration_in_seconds: 1,
        },
      ],
      stairs: {
        duration_in_seconds: 1,
        number_of_steps: 1,
        users_per_step: 1,
      },
    },
    resource_limit: {
      max_number_of_tests: 1,
      max_parallel_of_tests: 1,
    },
    endpoint_configurations: [
      {
        instance_type: "ml.t2.medium", # accepts ml.t2.medium, ml.t2.large, ml.t2.xlarge, ml.t2.2xlarge, ml.m4.xlarge, ml.m4.2xlarge, ml.m4.4xlarge, ml.m4.10xlarge, ml.m4.16xlarge, ml.m5.large, ml.m5.xlarge, ml.m5.2xlarge, ml.m5.4xlarge, ml.m5.12xlarge, ml.m5.24xlarge, ml.m5d.large, ml.m5d.xlarge, ml.m5d.2xlarge, ml.m5d.4xlarge, ml.m5d.12xlarge, ml.m5d.24xlarge, ml.c4.large, ml.c4.xlarge, ml.c4.2xlarge, ml.c4.4xlarge, ml.c4.8xlarge, ml.p2.xlarge, ml.p2.8xlarge, ml.p2.16xlarge, ml.p3.2xlarge, ml.p3.8xlarge, ml.p3.16xlarge, ml.c5.large, ml.c5.xlarge, ml.c5.2xlarge, ml.c5.4xlarge, ml.c5.9xlarge, ml.c5.18xlarge, ml.c5d.large, ml.c5d.xlarge, ml.c5d.2xlarge, ml.c5d.4xlarge, ml.c5d.9xlarge, ml.c5d.18xlarge, ml.g4dn.xlarge, ml.g4dn.2xlarge, ml.g4dn.4xlarge, ml.g4dn.8xlarge, ml.g4dn.12xlarge, ml.g4dn.16xlarge, ml.r5.large, ml.r5.xlarge, ml.r5.2xlarge, ml.r5.4xlarge, ml.r5.12xlarge, ml.r5.24xlarge, ml.r5d.large, ml.r5d.xlarge, ml.r5d.2xlarge, ml.r5d.4xlarge, ml.r5d.12xlarge, ml.r5d.24xlarge, ml.inf1.xlarge, ml.inf1.2xlarge, ml.inf1.6xlarge, ml.inf1.24xlarge, ml.dl1.24xlarge, ml.c6i.large, ml.c6i.xlarge, ml.c6i.2xlarge, ml.c6i.4xlarge, ml.c6i.8xlarge, ml.c6i.12xlarge, ml.c6i.16xlarge, ml.c6i.24xlarge, ml.c6i.32xlarge, ml.m6i.large, ml.m6i.xlarge, ml.m6i.2xlarge, ml.m6i.4xlarge, ml.m6i.8xlarge, ml.m6i.12xlarge, ml.m6i.16xlarge, ml.m6i.24xlarge, ml.m6i.32xlarge, ml.r6i.large, ml.r6i.xlarge, ml.r6i.2xlarge, ml.r6i.4xlarge, ml.r6i.8xlarge, ml.r6i.12xlarge, ml.r6i.16xlarge, ml.r6i.24xlarge, ml.r6i.32xlarge, ml.g5.xlarge, ml.g5.2xlarge, ml.g5.4xlarge, ml.g5.8xlarge, ml.g5.12xlarge, ml.g5.16xlarge, ml.g5.24xlarge, ml.g5.48xlarge, ml.g6.xlarge, ml.g6.2xlarge, ml.g6.4xlarge, ml.g6.8xlarge, ml.g6.12xlarge, ml.g6.16xlarge, ml.g6.24xlarge, ml.g6.48xlarge, ml.g6e.xlarge, ml.g6e.2xlarge, ml.g6e.4xlarge, ml.g6e.8xlarge, ml.g6e.12xlarge, ml.g6e.16xlarge, ml.g6e.24xlarge, ml.g6e.48xlarge, ml.p4d.24xlarge, ml.c7g.large, ml.c7g.xlarge, ml.c7g.2xlarge, ml.c7g.4xlarge, ml.c7g.8xlarge, ml.c7g.12xlarge, ml.c7g.16xlarge, ml.m6g.large, ml.m6g.xlarge, ml.m6g.2xlarge, ml.m6g.4xlarge, ml.m6g.8xlarge, ml.m6g.12xlarge, ml.m6g.16xlarge, ml.m6gd.large, ml.m6gd.xlarge, ml.m6gd.2xlarge, ml.m6gd.4xlarge, ml.m6gd.8xlarge, ml.m6gd.12xlarge, ml.m6gd.16xlarge, ml.c6g.large, ml.c6g.xlarge, ml.c6g.2xlarge, ml.c6g.4xlarge, ml.c6g.8xlarge, ml.c6g.12xlarge, ml.c6g.16xlarge, ml.c6gd.large, ml.c6gd.xlarge, ml.c6gd.2xlarge, ml.c6gd.4xlarge, ml.c6gd.8xlarge, ml.c6gd.12xlarge, ml.c6gd.16xlarge, ml.c6gn.large, ml.c6gn.xlarge, ml.c6gn.2xlarge, ml.c6gn.4xlarge, ml.c6gn.8xlarge, ml.c6gn.12xlarge, ml.c6gn.16xlarge, ml.r6g.large, ml.r6g.xlarge, ml.r6g.2xlarge, ml.r6g.4xlarge, ml.r6g.8xlarge, ml.r6g.12xlarge, ml.r6g.16xlarge, ml.r6gd.large, ml.r6gd.xlarge, ml.r6gd.2xlarge, ml.r6gd.4xlarge, ml.r6gd.8xlarge, ml.r6gd.12xlarge, ml.r6gd.16xlarge, ml.p4de.24xlarge, ml.trn1.2xlarge, ml.trn1.32xlarge, ml.trn1n.32xlarge, ml.trn2.48xlarge, ml.inf2.xlarge, ml.inf2.8xlarge, ml.inf2.24xlarge, ml.inf2.48xlarge, ml.p5.48xlarge, ml.p5e.48xlarge, ml.m7i.large, ml.m7i.xlarge, ml.m7i.2xlarge, ml.m7i.4xlarge, ml.m7i.8xlarge, ml.m7i.12xlarge, ml.m7i.16xlarge, ml.m7i.24xlarge, ml.m7i.48xlarge, ml.c7i.large, ml.c7i.xlarge, ml.c7i.2xlarge, ml.c7i.4xlarge, ml.c7i.8xlarge, ml.c7i.12xlarge, ml.c7i.16xlarge, ml.c7i.24xlarge, ml.c7i.48xlarge, ml.r7i.large, ml.r7i.xlarge, ml.r7i.2xlarge, ml.r7i.4xlarge, ml.r7i.8xlarge, ml.r7i.12xlarge, ml.r7i.16xlarge, ml.r7i.24xlarge, ml.r7i.48xlarge
        serverless_config: {
          memory_size_in_mb: 1, # required
          max_concurrency: 1, # required
          provisioned_concurrency: 1,
        },
        inference_specification_name: "InferenceSpecificationName",
        environment_parameter_ranges: {
          categorical_parameter_ranges: [
            {
              name: "String64", # required
              value: ["String128"], # required
            },
          ],
        },
      },
    ],
    volume_kms_key_id: "KmsKeyId",
    container_config: {
      domain: "String",
      task: "String",
      framework: "String",
      framework_version: "RecommendationJobFrameworkVersion",
      payload_config: {
        sample_payload_url: "S3Uri",
        supported_content_types: ["RecommendationJobSupportedContentType"],
      },
      nearest_model_name: "String",
      supported_instance_types: ["String"],
      supported_endpoint_type: "RealTime", # accepts RealTime, Serverless
      data_input_config: "RecommendationJobDataInputConfig",
      supported_response_mime_types: ["RecommendationJobSupportedResponseMIMEType"],
    },
    endpoints: [
      {
        endpoint_name: "EndpointName",
      },
    ],
    vpc_config: {
      security_group_ids: ["RecommendationJobVpcSecurityGroupId"], # required
      subnets: ["RecommendationJobVpcSubnetId"], # required
    },
  },
  job_description: "RecommendationJobDescription",
  stopping_conditions: {
    max_invocations: 1,
    model_latency_thresholds: [
      {
        percentile: "String64",
        value_in_milliseconds: 1,
      },
    ],
    flat_invocations: "Continue", # accepts Continue, Stop
  },
  output_config: {
    kms_key_id: "KmsKeyId",
    compiled_output_config: {
      s3_output_uri: "S3Uri",
    },
  },
  tags: [
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
})

Response structure


resp.job_arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :job_name (required, String)

    A name for the recommendation job. The name must be unique within the Amazon Web Services Region and within your Amazon Web Services account. The job name is passed down to the resources created by the recommendation job. The names of resources (such as the model, endpoint configuration, endpoint, and compilation) that are prefixed with the job name are truncated at 40 characters.

  • :job_type (required, String)

    Defines the type of recommendation job. Specify Default to initiate an instance recommendation and Advanced to initiate a load test. If left unspecified, Amazon SageMaker Inference Recommender will run an instance recommendation (DEFAULT) job.

  • :role_arn (required, String)

    The Amazon Resource Name (ARN) of an IAM role that enables Amazon SageMaker to perform tasks on your behalf.

  • :input_config (required, Types::RecommendationJobInputConfig)

    Provides information about the versioned model package Amazon Resource Name (ARN), the traffic pattern, and endpoint configurations.

  • :job_description (String)

    Description of the recommendation job.

  • :stopping_conditions (Types::RecommendationJobStoppingConditions)

    A set of conditions for stopping a recommendation job. If any of the conditions are met, the job is automatically stopped.

  • :output_config (Types::RecommendationJobOutputConfig)

    Provides information about the output artifacts and the KMS key to use for Amazon S3 server-side encryption.

  • :tags (Array<Types::Tag>)

    The metadata that you apply to Amazon Web Services resources to help you categorize and organize them. Each tag consists of a key and a value, both of which you define. For more information, see Tagging Amazon Web Services Resources in the Amazon Web Services General Reference.
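
A minimal Default (instance recommendation) job can be sketched as below; the job name, role ARN, and model package ARN are placeholders for resources in your account.

# Hedged sketch: request instance recommendations for a versioned model package.
resp = client.create_inference_recommendations_job({
  job_name: "my-recommendation-job",                        # placeholder
  job_type: "Default",
  role_arn: "arn:aws:iam::123456789012:role/SageMakerRole", # placeholder
  input_config: {
    model_package_version_arn: "arn:aws:sagemaker:us-west-2:123456789012:model-package/my-model/1", # placeholder
  },
})
puts resp.job_arn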

Returns:

  • (Types::CreateInferenceRecommendationsJobResponse)

See Also:



# File 'gems/aws-sdk-sagemaker/lib/aws-sdk-sagemaker/client.rb', line 5539

def create_inference_recommendations_job(params = {}, options = {})
  req = build_request(:create_inference_recommendations_job, params)
  req.send_request(options)
end

#create_labeling_job(params = {}) ⇒ Types::CreateLabelingJobResponse

Creates a job that uses workers to label the data objects in your input dataset. You can use the labeled data to train machine learning models.

You can select your workforce from one of three providers:

  • A private workforce that you create. It can include employees, contractors, and outside experts. Use a private workforce when you want the data to stay within your organization or when a specific set of skills is required.

  • One or more vendors that you select from the Amazon Web Services Marketplace. Vendors provide expertise in specific areas.

  • The Amazon Mechanical Turk workforce. This is the largest workforce, but it should only be used for public data or data that has been stripped of any personally identifiable information.

You can also use automated data labeling to reduce the number of data objects that need to be labeled by a human. Automated data labeling uses active learning to determine if a data object can be labeled by machine or if it needs to be sent to a human worker. For more information, see Using Automated Data Labeling.

The data objects to be labeled are contained in an Amazon S3 bucket. You create a manifest file that describes the location of each object. For more information, see Using Input and Output Data.

The output can be used as the manifest file for another labeling job or as training data for your machine learning models.

You can use this operation to create a static labeling job or a streaming labeling job. A static labeling job stops if all data objects in the input manifest file identified in ManifestS3Uri have been labeled. A streaming labeling job runs perpetually until it is manually stopped, or remains idle for 10 days. You can send new data objects to an active (InProgress) streaming labeling job in real time. To learn how to create a static labeling job, see Create a Labeling Job (API) in the Amazon SageMaker Developer Guide. To learn how to create a streaming labeling job, see Create a Streaming Labeling Job.

Examples:

Request syntax with placeholder values


resp = client.create_labeling_job({
  labeling_job_name: "LabelingJobName", # required
  label_attribute_name: "LabelAttributeName", # required
  input_config: { # required
    data_source: { # required
      s3_data_source: {
        manifest_s3_uri: "S3Uri", # required
      },
      sns_data_source: {
        sns_topic_arn: "SnsTopicArn", # required
      },
    },
    data_attributes: {
      content_classifiers: ["FreeOfPersonallyIdentifiableInformation"], # accepts FreeOfPersonallyIdentifiableInformation, FreeOfAdultContent
    },
  },
  output_config: { # required
    s3_output_path: "S3Uri", # required
    kms_key_id: "KmsKeyId",
    sns_topic_arn: "SnsTopicArn",
  },
  role_arn: "RoleArn", # required
  label_category_config_s3_uri: "S3Uri",
  stopping_conditions: {
    max_human_labeled_object_count: 1,
    max_percentage_of_input_dataset_labeled: 1,
  },
  labeling_job_algorithms_config: {
    labeling_job_algorithm_specification_arn: "LabelingJobAlgorithmSpecificationArn", # required
    initial_active_learning_model_arn: "ModelArn",
    labeling_job_resource_config: {
      volume_kms_key_id: "KmsKeyId",
      vpc_config: {
        security_group_ids: ["SecurityGroupId"], # required
        subnets: ["SubnetId"], # required
      },
    },
  },
  human_task_config: { # required
    workteam_arn: "WorkteamArn", # required
    ui_config: { # required
      ui_template_s3_uri: "S3Uri",
      human_task_ui_arn: "HumanTaskUiArn",
    },
    pre_human_task_lambda_arn: "LambdaFunctionArn",
    task_keywords: ["TaskKeyword"],
    task_title: "TaskTitle", # required
    task_description: "TaskDescription", # required
    number_of_human_workers_per_data_object: 1, # required
    task_time_limit_in_seconds: 1, # required
    task_availability_lifetime_in_seconds: 1,
    max_concurrent_task_count: 1,
    annotation_consolidation_config: {
      annotation_consolidation_lambda_arn: "LambdaFunctionArn", # required
    },
    public_workforce_task_price: {
      amount_in_usd: {
        dollars: 1,
        cents: 1,
        tenth_fractions_of_a_cent: 1,
      },
    },
  },
  tags: [
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
})

Response structure


resp.labeling_job_arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :labeling_job_name (required, String)

    The name of the labeling job. This name is used to identify the job in a list of labeling jobs. Labeling job names must be unique within an Amazon Web Services account and region. LabelingJobName is not case sensitive. For example, Example-job and example-job are considered the same labeling job name by Ground Truth.

  • :label_attribute_name (required, String)

    The attribute name to use for the label in the output manifest file. This is the key for the key/value pair formed with the label that a worker assigns to the object. The LabelAttributeName must meet the following requirements.

    • The name can't end with "-metadata".

    • If you are using one of the following built-in task types, the attribute name must end with "-ref". If the task type you are using is not listed below, the attribute name must not end with "-ref".

      • Image semantic segmentation (SemanticSegmentation), and adjustment (AdjustmentSemanticSegmentation) and verification (VerificationSemanticSegmentation) labeling jobs for this task type.

      • Video frame object detection (VideoObjectDetection), and adjustment and verification (AdjustmentVideoObjectDetection) labeling jobs for this task type.

      • Video frame object tracking (VideoObjectTracking), and adjustment and verification (AdjustmentVideoObjectTracking) labeling jobs for this task type.

      • 3D point cloud semantic segmentation (3DPointCloudSemanticSegmentation), and adjustment and verification (Adjustment3DPointCloudSemanticSegmentation) labeling jobs for this task type.

      • 3D point cloud object tracking (3DPointCloudObjectTracking), and adjustment and verification (Adjustment3DPointCloudObjectTracking) labeling jobs for this task type.

    If you are creating an adjustment or verification labeling job, you must use a different LabelAttributeName than the one used in the original labeling job. The original labeling job is the Ground Truth labeling job that produced the labels that you want verified or adjusted. To learn more about adjustment and verification labeling jobs, see Verify and Adjust Labels.

  • :input_config (required, Types::LabelingJobInputConfig)

    Input data for the labeling job, such as the Amazon S3 location of the data objects and the location of the manifest file that describes the data objects.

    You must specify at least one of the following: S3DataSource or SnsDataSource.

    • Use SnsDataSource to specify an SNS input topic for a streaming labeling job. If you do not specify an SNS input topic ARN, Ground Truth will create a one-time labeling job that stops after all data objects in the input manifest file have been labeled.

    • Use S3DataSource to specify an input manifest file for both streaming and one-time labeling jobs. Adding an S3DataSource is optional if you use SnsDataSource to create a streaming labeling job.

    If you use the Amazon Mechanical Turk workforce, your input data should not include confidential information, personal information or protected health information. Use ContentClassifiers to specify that your data is free of personally identifiable information and adult content.

  • :output_config (required, Types::LabelingJobOutputConfig)

    The location of the output data and the Amazon Web Services Key Management Service key ID for the key used to encrypt the output data, if any.

  • :role_arn (required, String)

    The Amazon Resource Name (ARN) that Amazon SageMaker assumes to perform tasks on your behalf during data labeling. You must grant this role the necessary permissions so that Amazon SageMaker can successfully complete data labeling.

  • :label_category_config_s3_uri (String)

    The S3 URI of the file, referred to as a label category configuration file, that defines the categories used to label the data objects.

    For 3D point cloud and video frame task types, you can add label category attributes and frame attributes to your label category configuration file. To learn how, see Create a Labeling Category Configuration File for 3D Point Cloud Labeling Jobs.

    For named entity recognition jobs, in addition to "labels", you must provide worker instructions in the label category configuration file using the "instructions" parameter: "instructions": {"shortInstruction":"<h1>Add header</h1><p>Add Instructions</p>", "fullInstruction":"<p>Add additional instructions.</p>"}. For details and an example, see Create a Named Entity Recognition Labeling Job (API).

    For all other built-in task types and custom tasks, your label category configuration file must be a JSON file in the following format. Identify the labels you want to use by replacing label_1, label_2,...,label_n with your label categories.

    {
      "document-version": "2018-11-28",
      "labels": [{"label": "label_1"}, {"label": "label_2"}, ... {"label": "label_n"}]
    }

    Note the following about the label category configuration file:

    • For image classification and text classification (single and multi-label) you must specify at least two label categories. For all other task types, the minimum number of label categories required is one.

    • Each label category must be unique; you cannot specify duplicate label categories.

    • If you create a 3D point cloud or video frame adjustment or verification labeling job, you must include auditLabelAttributeName in the label category configuration. Use this parameter to enter the LabelAttributeName of the labeling job you want to adjust or verify annotations of.

  • :stopping_conditions (Types::LabelingJobStoppingConditions)

    A set of conditions for stopping the labeling job. If any of the conditions are met, the job is automatically stopped. You can use these conditions to control the cost of data labeling.

  • :labeling_job_algorithms_config (Types::LabelingJobAlgorithmsConfig)

    Configures the information required to perform automated data labeling.

  • :human_task_config (required, Types::HumanTaskConfig)

    Configures the labeling task and how it is presented to workers, including, but not limited to, price, keywords, and batch size (task count).

  • :tags (Array<Types::Tag>)

    An array of key/value pairs. For more information, see Using Cost Allocation Tags in the Amazon Web Services Billing and Cost Management User Guide.
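
The label category configuration file described under :label_category_config_s3_uri can be generated and uploaded before the labeling job is created, as in the hedged sketch below. It assumes the aws-sdk-s3 gem is available, and the bucket, key, Region, and label names are placeholders.

# Hedged sketch: write the label category configuration JSON to Amazon S3 and
# reference it from :label_category_config_s3_uri.
require "json"
require "aws-sdk-s3"

label_config = {
  "document-version" => "2018-11-28",
  "labels" => [{ "label" => "cat" }, { "label" => "dog" }], # placeholder categories
}

s3 = Aws::S3::Client.new(region: "us-west-2") # placeholder Region
s3.put_object(
  bucket: "my-labeling-bucket",                # placeholder
  key: "config/label-categories.json",         # placeholder
  body: JSON.pretty_generate(label_config)
)
# Then pass "s3://my-labeling-bucket/config/label-categories.json"
# as :label_category_config_s3_uri in create_labeling_job.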

Returns:

  • (Types::CreateLabelingJobResponse)

See Also:



# File 'gems/aws-sdk-sagemaker/lib/aws-sdk-sagemaker/client.rb', line 5848

def create_labeling_job(params = {}, options = {})
  req = build_request(:create_labeling_job, params)
  req.send_request(options)
end

#create_mlflow_tracking_server(params = {}) ⇒ Types::CreateMlflowTrackingServerResponse

Creates an MLflow Tracking Server using a general purpose Amazon S3 bucket as the artifact store. For more information, see Create an MLflow Tracking Server.

Examples:

Request syntax with placeholder values


resp = client.create_mlflow_tracking_server({
  tracking_server_name: "TrackingServerName", # required
  artifact_store_uri: "S3Uri", # required
  tracking_server_size: "Small", # accepts Small, Medium, Large
  mlflow_version: "MlflowVersion",
  role_arn: "RoleArn", # required
  automatic_model_registration: false,
  weekly_maintenance_window_start: "WeeklyMaintenanceWindowStart",
  tags: [
    {
      key: "TagKey", # required
      value: "TagValue", # required
    },
  ],
})

Response structure


resp.tracking_server_arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :tracking_server_name (required, String)

    A unique string identifying the tracking server name. This string is part of the tracking server ARN.

  • :artifact_store_uri (required, String)

    The S3 URI for a general purpose bucket to use as the MLflow Tracking Server artifact store.

  • :tracking_server_size (String)

    The size of the tracking server you want to create. You can choose from "Small", "Medium", and "Large". The default MLflow Tracking Server configuration size is "Small". You can choose a size depending on the projected use of the tracking server, such as the volume of data logged, number of users, and frequency of use.

    We recommend using a small tracking server for teams of up to 25 users, a medium tracking server for teams of up to 50 users, and a large tracking server for teams of up to 100 users.

  • :mlflow_version (String)

    The version of MLflow that the tracking server uses. To see which MLflow versions are available to use, see How it works.

  • :role_arn (required, String)

    The Amazon Resource Name (ARN) for an IAM role in your account that the MLflow Tracking Server uses to access the artifact store in Amazon S3. The role should have AmazonS3FullAccess permissions. For more information on IAM permissions for tracking server creation, see Set up IAM permissions for MLflow.

  • :automatic_model_registration (Boolean)

    Whether to enable or disable automatic registration of new MLflow models to the SageMaker Model Registry. To enable automatic model registration, set this value to True. To disable automatic model registration, set this value to False. If not specified, AutomaticModelRegistration defaults to False.

  • :weekly_maintenance_window_start (String)

    The day and time of the week in Coordinated Universal Time (UTC) 24-hour standard time that weekly maintenance updates are scheduled. For example: TUE:03:30.

  • :tags (Array<Types::Tag>)

    Tags consisting of key-value pairs used to manage metadata for the tracking server.
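
A minimal call using the size and maintenance window formats described above is sketched below; the server name, artifact store URI, and role ARN are placeholders.

# Hedged sketch: create a small tracking server with a Tuesday 03:30 UTC
# maintenance window and automatic model registration disabled.
resp = client.create_mlflow_tracking_server({
  tracking_server_name: "my-tracking-server",                    # placeholder
  artifact_store_uri: "s3://my-mlflow-artifacts/",               # placeholder
  tracking_server_size: "Small",
  role_arn: "arn:aws:iam::123456789012:role/MlflowTrackingRole", # placeholder
  automatic_model_registration: false,
  weekly_maintenance_window_start: "TUE:03:30",
})
puts resp.tracking_server_arn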

Returns:

  • (Types::CreateMlflowTrackingServerResponse)

See Also:



# File 'gems/aws-sdk-sagemaker/lib/aws-sdk-sagemaker/client.rb', line 5945

def create_mlflow_tracking_server(params = {}, options = {})
  req = build_request(:create_mlflow_tracking_server, params)
  req.send_request(options)
end

#create_model(params = {}) ⇒ Types::CreateModelOutput

Creates a model in SageMaker. In the request, you name the model and describe a primary container. For the primary container, you specify the Docker image that contains inference code, artifacts (from prior training), and a custom environment map that the inference code uses when you deploy the model for predictions.

Use this API to create a model if you want to use SageMaker hosting services or run a batch transform job.

To host your model, you create an endpoint configuration with the CreateEndpointConfig API, and then create an endpoint with the CreateEndpoint API. SageMaker then deploys all of the containers that you defined for the model in the hosting environment.

To run a batch transform using your model, you start a job with the CreateTransformJob API. SageMaker uses your model and your dataset to get inferences which are then saved to a specified S3 location.

In the request, you also provide an IAM role that SageMaker can assume to access model artifacts and the Docker image for deployment on ML compute hosting instances or for batch transform jobs. In addition, you also use the IAM role to manage permissions the inference code needs. For example, if the inference code accesses any other Amazon Web Services resources, you grant necessary permissions via this role.
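
The hosting workflow described above (create the model, reference it from an endpoint configuration, then create the endpoint) is sketched below. This is a minimal illustration with placeholder names, ARNs, and URIs; see CreateEndpointConfig and CreateEndpoint for the full set of options.

# Hedged sketch of the hosting workflow; all names, ARNs, and URIs are placeholders.
client.create_model({
  model_name: "my-model",
  execution_role_arn: "arn:aws:iam::123456789012:role/SageMakerRole",
  primary_container: {
    image: "123456789012.dkr.ecr.us-west-2.amazonaws.com/my-inference:latest",
    model_data_url: "s3://my-bucket/model.tar.gz",
  },
})

client.create_endpoint_config({
  endpoint_config_name: "my-endpoint-config",
  production_variants: [
    {
      variant_name: "AllTraffic",
      model_name: "my-model",
      instance_type: "ml.m5.large",
      initial_instance_count: 1,
    },
  ],
})

client.create_endpoint({
  endpoint_name: "my-endpoint",
  endpoint_config_name: "my-endpoint-config",
})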

Examples:

Request syntax with placeholder values


resp = client.create_model({
  model_name: "ModelName", # required
  primary_container: {
    container_hostname: "ContainerHostname",
    image: "ContainerImage",
    image_config: {
      repository_access_mode: "Platform", # required, accepts Platform, Vpc
      repository_auth_config: {
        repository_credentials_provider_arn: "RepositoryCredentialsProviderArn", # required
      },
    },
    mode: "SingleModel", # accepts SingleModel, MultiModel
    model_data_url: "Url",
    model_data_source: {
      s3_data_source: {
        s3_uri: "S3ModelUri", # required
        s3_data_type: "S3Prefix", # required, accepts S3Prefix, S3Object
        compression_type: "None", # required, accepts None, Gzip
        model_access_config: {
          accept_eula: false, # required
        },
        hub_access_config: {
          hub_content_arn: "HubContentArn", # required
        },
        manifest_s3_uri: "S3ModelUri",
      },
    },
    additional_model_data_sources: [
      {
        channel_name: "AdditionalModelChannelName", # required
        s3_data_source: { # required
          s3_uri: "S3ModelUri", # required
          s3_data_type: "S3Prefix", # required, accepts S3Prefix, S3Object
          compression_type: "None", # required, accepts None, Gzip
          model_access_config: {
            accept_eula: false, # required
          },
          hub_access_config: {
            hub_content_arn: "HubContentArn", # required
          },
          manifest_s3_uri: "S3ModelUri",
        },
      },
    ],
    environment: {
      "EnvironmentKey" => "EnvironmentValue",
    },
    model_package_name: "VersionedArnOrName",
    inference_specification_name: "InferenceSpecificationName",
    multi_model_config: {
      model_cache_setting: "Enabled", # accepts Enabled, Disabled
    },
  },
  containers: [
    {
      container_hostname: "ContainerHostname",
      image: "ContainerImage",
      image_config: {
        repository_access_mode: "Platform", # required, accepts Platform, Vpc
        repository_auth_config: {
          repository_credentials_provider_arn: "RepositoryCredentialsProviderArn", # required
        },
      },
      mode: "SingleModel", # accepts SingleModel, MultiModel
      model_data_url: "Url",
      model_data_source: {
        s3_data_source: {
          s3_uri: "S3ModelUri", # required
          s3_data_type: "S3Prefix", # required, accepts S3Prefix, S3Object
          compression_type: "None", # required, accepts None, Gzip
          model_access_config: {
            accept_eula: false, # required
          },
          hub_access_config: {
            hub_content_arn: "HubContentArn", # required
          },
          manifest_s3_uri: "S3ModelUri",
        },
      },
      additional_model_data_sources: [
        {
          channel_name: "AdditionalModelChannelName", # required
          s3_data_source: { # required
            s3_uri: "S3ModelUri", # required
            s3_data_type: "S3Prefix", # required, accepts S3Prefix, S3Object
            compression_type: "None", # required, accepts None, Gzip
            model_access_config: {
              accept_eula: false, # required
            },
            hub_access_config: {
              hub_content_arn: "HubContentArn", # required
            },
            manifest_s3_uri: "S3ModelUri",
          },
        },
      ],
      environment: {
        "EnvironmentKey" => "EnvironmentValue",
      },
      model_package_name: "VersionedArnOrName",
      inference_specification_name: "InferenceSpecificationName",
      multi_model_config: {
        model_cache_setting: "Enabled"