Class: Aws::SageMaker::Types::OutputConfig
- Inherits: Struct
Overview
When passing OutputConfig as input to an Aws::Client method, you can use a vanilla Hash:
{
  s3_output_location: "S3Uri", # required
  target_device: "lambda", # accepts lambda, ml_m4, ml_m5, ml_c4, ml_c5, ml_p2, ml_p3, ml_g4dn, ml_inf1, jetson_tx1, jetson_tx2, jetson_nano, jetson_xavier, rasp3b, imx8qm, deeplens, rk3399, rk3288, aisage, sbe_c, qcs605, qcs603, sitara_am57x, amba_cv22, x86_win32, x86_win64, coreml
  target_platform: {
    os: "ANDROID", # required, accepts ANDROID, LINUX
    arch: "X86_64", # required, accepts X86_64, X86, ARM64, ARM_EABI, ARM_EABIHF
    accelerator: "INTEL_GRAPHICS", # accepts INTEL_GRAPHICS, MALI, NVIDIA
  },
  compiler_options: "CompilerOptions",
}
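As a minimal sketch (assuming hypothetical job, role, and bucket names), this hash can be passed as the :output_config parameter of Aws::SageMaker::Client#create_compilation_job:

require 'aws-sdk'

sagemaker = Aws::SageMaker::Client.new(region: "us-east-1")

sagemaker.create_compilation_job(
  compilation_job_name: "my-compilation-job",        # hypothetical name
  role_arn: "arn:aws:iam::123456789012:role/MyRole", # hypothetical role
  input_config: {
    s3_uri: "s3://my-bucket/model/model.tar.gz",     # hypothetical path
    data_input_config: '{"data": [1, 3, 224, 224]}',
    framework: "PYTORCH",
  },
  output_config: {
    s3_output_location: "s3://my-bucket/compiled/",  # hypothetical path
    target_device: "ml_c5",
  },
  stopping_condition: { max_runtime_in_seconds: 900 },
)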
Contains information about the output location for the compiled model and the target device that the model runs on. TargetDevice and TargetPlatform are mutually exclusive, so you must specify one or the other. If the device you want to use is not in the TargetDevice list, use TargetPlatform to describe the platform of your edge device, and CompilerOptions if there are specific settings that are required or recommended for that TargetPlatform.
Instance Attribute Summary
- #compiler_options ⇒ String
Specifies additional parameters for compiler options in JSON format.
- #s3_output_location ⇒ String
Identifies the S3 bucket where you want Amazon SageMaker to store the model artifacts.
- #target_device ⇒ String
Identifies the target device or the machine learning instance that you want to run your model on after the compilation has completed.
- #target_platform ⇒ Types::TargetPlatform
Contains information about a target platform that you want your model to run on, such as OS, architecture, and accelerators.
Instance Attribute Details
#compiler_options ⇒ String
Specifies additional parameters for compiler options in JSON format. The compiler options are TargetPlatform specific. They are required for NVIDIA accelerators and highly recommended for CPU compilations; in all other cases, CompilerOptions is optional.

CPU: Compilation for CPU supports the following compiler options.
- mcpu: CPU micro-architecture. For example, {'mcpu': 'skylake-avx512'}
- mattr: CPU flags. For example, {'mattr': ['+neon', '+vfpv4']}

ARM: Details of ARM CPU compilations.
- NEON: NEON is an implementation of the Advanced SIMD extension used in ARMv7 processors. For example, add {'mattr': ['+neon']} to the compiler options if compiling for an ARM 32-bit platform with NEON support.

NVIDIA: Compilation for NVIDIA GPU supports the following compiler options.
- gpu-code: Specifies the targeted architecture.
- trt-ver: Specifies the TensorRT version in x.y.z format.
- cuda-ver: Specifies the CUDA version in x.y format.
For example, {'gpu-code': 'sm_72', 'trt-ver': '6.0.1', 'cuda-ver': '10.1'}

ANDROID: Compilation for the Android OS supports the following compiler options:
- ANDROID_PLATFORM: Specifies the Android API level. Available levels range from 21 to 29. For example, {'ANDROID_PLATFORM': 28}.
- mattr: Add {'mattr': ['+neon']} to the compiler options if compiling for an ARM 32-bit platform with NEON support.

INFERENTIA: Compilation for target ml_inf1 uses compiler options passed in as a JSON string. For example, "CompilerOptions": "\"--verbose 1 --num-neuroncores 2 -O2\"". For information about supported compiler options, see Neuron Compiler CLI.

CoreML: Compilation for the CoreML OutputConfig$TargetDevice supports the following compiler options:
- class_labels: Specifies the classification labels file name inside the input tar.gz file. For example, {"class_labels": "imagenet_labels_1000.txt"}. Labels inside the txt file should be separated by newlines.
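Because compiler_options is a JSON-formatted string rather than a hash, one way to build it is to encode a Ruby hash with JSON.generate. The sketch below reuses the NVIDIA values above; the S3 path is a hypothetical placeholder:

require 'json'

# compiler_options must be a JSON string, so encode the option hash
# before assigning it (values taken from the NVIDIA example above).
output_config = {
  s3_output_location: "s3://my-bucket/compiled/", # hypothetical path
  target_platform: { os: "LINUX", arch: "ARM64", accelerator: "NVIDIA" },
  compiler_options: JSON.generate(
    { "gpu-code" => "sm_72", "trt-ver" => "6.0.1", "cuda-ver" => "10.1" }
  ),
}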
#s3_output_location ⇒ String
Identifies the S3 bucket where you want Amazon SageMaker to store the model artifacts. For example, s3://bucket-name/key-name-prefix.
#target_device ⇒ String
Identifies the target device or the machine learning instance that you want to run your model on after the compilation has completed. Alternatively, you can specify the OS, architecture, and accelerator using the TargetPlatform fields instead.
Possible values:
- lambda
- ml_m4
- ml_m5
- ml_c4
- ml_c5
- ml_p2
- ml_p3
- ml_g4dn
- ml_inf1
- jetson_tx1
- jetson_tx2
- jetson_nano
- jetson_xavier
- rasp3b
- imx8qm
- deeplens
- rk3399
- rk3288
- aisage
- sbe_c
- qcs605
- qcs603
- sitara_am57x
- amba_cv22
- x86_win32
- x86_win64
- coreml
#target_platform ⇒ Types::TargetPlatform
Contains information about a target platform that you want your model to run on, such as OS, architecture, and accelerators. It is an alternative to TargetDevice.

The following examples show how to configure the TargetPlatform and CompilerOptions JSON strings for popular target platforms:
Raspberry Pi 3 Model B+
"TargetPlatform": {"Os": "LINUX", "Arch": "ARM_EABIHF"},
"CompilerOptions": {'mattr': ['+neon']}
Jetson TX2
"TargetPlatform": {"Os": "LINUX", "Arch": "ARM64", "Accelerator": "NVIDIA"},
"CompilerOptions": {'gpu-code': 'sm_62', 'trt-ver': '6.0.1', 'cuda-ver': '10.0'}
EC2 m5.2xlarge instance OS
"TargetPlatform": {"Os": "LINUX", "Arch": "X86_64", "Accelerator": "NVIDIA"},
"CompilerOptions": {'mcpu': 'skylake-avx512'}
RK3399
"TargetPlatform": {"Os": "LINUX", "Arch": "ARM64", "Accelerator": "MALI"}
ARMv7 phone (CPU)
"TargetPlatform": {"Os": "ANDROID", "Arch": "ARM_EABI"},
"CompilerOptions": {'ANDROID_PLATFORM': 25, 'mattr': ['+neon']}
ARMv8 phone (CPU)
"TargetPlatform": {"Os": "ANDROID", "Arch": "ARM64"},
"CompilerOptions": {'ANDROID_PLATFORM': 29}