Common Data Formats for Training - Amazon SageMaker


Common Data Formats for Training

To prepare for training, you can preprocess your data using a variety of AWS services, including Amazon EMR, Amazon Redshift, AWS Glue, Amazon Relational Database Service, and Amazon Athena. After preprocessing, publish the data to an Amazon S3 bucket. For training, the data needs to go through a series of conversions and transformations, including:

  • Training data serialization (handled by you)

  • Training data deserialization (handled by the algorithm)

  • Model serialization (handled by the algorithm)

  • Trained model deserialization (optional, handled by you)
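The first step, serializing the training data, is handled by you. A minimal sketch using only Python's standard library (the file name and column layout are illustrative; built-in algorithms that accept CSV expect no header record and the target in the first column):

```python
import csv

# Serialize training data (step 1, handled by you): write observations
# to a CSV file with the target variable in the first column and no
# header record.
rows = [
    [1, 0.5, 3.2],   # label, feature_1, feature_2
    [0, 1.1, 0.7],
]

with open("train.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)

# The file would then be published to an Amazon S3 bucket (for example
# with the AWS CLI or boto3) before starting the training job.
```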

When using Amazon SageMaker in the training portion of an algorithm, make sure that you upload all of your data at once. If more data is added to that location, you must initiate a new training call to construct a brand-new model.

Content Types Supported by Built-In Algorithms

The following table lists some of the commonly supported ContentType values and the algorithms that use them:

ContentTypes for Built-in Algorithms

ContentType                        Algorithm
application/x-image                Object Detection Algorithm, Semantic Segmentation
application/x-recordio             Object Detection Algorithm
application/x-recordio-protobuf    Factorization Machines, K-Means, k-NN, Latent Dirichlet Allocation, Linear Learner, NTM, PCA, RCF, Sequence-to-Sequence
application/jsonlines              BlazingText, DeepAR
image/jpeg                         Object Detection Algorithm, Semantic Segmentation
image/png                          Object Detection Algorithm, Semantic Segmentation
text/csv                           IP Insights, K-Means, k-NN, Latent Dirichlet Allocation, Linear Learner, NTM, PCA, RCF, XGBoost
text/libsvm                        XGBoost

For a summary of the parameters supported by each algorithm, see the documentation for the individual algorithms.

Using Pipe Mode

In Pipe mode, your training job streams data directly from Amazon Simple Storage Service (Amazon S3). Streaming can provide faster start times for training jobs and better throughput. This is in contrast to File mode, in which your data from Amazon S3 is stored on the training instance volumes. File mode uses disk space to store both your final model artifacts and your full training dataset. By streaming in your data directly from Amazon S3 in Pipe mode, you reduce the size of the Amazon Elastic Block Store volumes of your training instances. Pipe mode needs only enough disk space to store your final model artifacts. See AlgorithmSpecification for additional details on the training input mode.
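As a sketch, the input mode is selected through the TrainingInputMode field of AlgorithmSpecification in a create_training_job request; the image URI below is a placeholder, and only the fields relevant to choosing the mode are shown:

```python
# Hypothetical fragment of a request passed to boto3's
# sagemaker_client.create_training_job(**request); other required
# fields (RoleArn, InputDataConfig, and so on) are omitted.
request = {
    "AlgorithmSpecification": {
        "TrainingImage": "<algorithm-image-uri>",  # placeholder
        "TrainingInputMode": "Pipe",  # stream from S3 instead of "File"
    },
}
```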

Using CSV Format

Many Amazon SageMaker algorithms support training with data in CSV format. To use data in CSV format for training, in the input data channel specification, specify text/csv as the ContentType. Amazon SageMaker requires that a CSV file doesn't have a header record and that the target variable is in the first column. To run unsupervised learning algorithms that don't have a target, specify the number of label columns in the content type. For example, in this case 'content_type=text/csv;label_size=0'. For more information, see Now use Pipe mode with CSV datasets for faster training on Amazon SageMaker built-in algorithms.
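A channel for an unsupervised algorithm might declare its content type as follows; this is a sketch of one InputDataConfig entry for create_training_job, with a placeholder bucket name:

```python
# Sketch of one entry in InputDataConfig for create_training_job.
# label_size=0 tells the algorithm the CSV rows carry no target column.
channel = {
    "ChannelName": "train",
    "ContentType": "text/csv;label_size=0",
    "DataSource": {
        "S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://amzn-s3-demo-bucket/train/",  # placeholder
            "S3DataDistributionType": "FullyReplicated",
        }
    },
}
```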

Using RecordIO Format

In the protobuf recordIO format, SageMaker converts each observation in the dataset into a binary representation as a set of 4-byte floats, then loads it in the protobuf values field. If you are using Python for your data preparation, we strongly recommend that you use these existing transformations. However, if you are using another language, the protobuf definition file below provides the schema that you use to convert your data into the SageMaker protobuf format.

Note

For an example that shows how to convert the commonly used NumPy array into the protobuf recordIO format, see An Introduction to Factorization Machines with MNIST.

syntax = "proto2";

package aialgs.data;

option java_package = "com.amazonaws.aialgorithms.proto";
option java_outer_classname = "RecordProtos";

// A sparse or dense rank-R tensor that stores data as floats (float32).
message Float32Tensor {
    // Each value in the vector. If keys is empty, this is treated as a
    // dense vector.
    repeated float values = 1 [packed = true];

    // If keys is not empty, the vector is treated as sparse, with
    // each key specifying the location of the value in the sparse vector.
    repeated uint64 keys = 2 [packed = true];

    // An optional shape that allows the vector to represent a matrix.
    // For example, if shape = [ 10, 20 ], floor(keys[i] / 20) gives the row,
    // and keys[i] % 20 gives the column.
    // This also supports n-dimensional tensors.
    // Note: If the tensor is sparse, you must specify this value.
    repeated uint64 shape = 3 [packed = true];
}

// A sparse or dense rank-R tensor that stores data as doubles (float64).
message Float64Tensor {
    // Each value in the vector. If keys is empty, this is treated as a
    // dense vector.
    repeated double values = 1 [packed = true];

    // If this is not empty, the vector is treated as sparse, with
    // each key specifying the location of the value in the sparse vector.
    repeated uint64 keys = 2 [packed = true];

    // An optional shape that allows the vector to represent a matrix.
    // For example, if shape = [ 10, 20 ], floor(keys[i] / 20) gives the row,
    // and keys[i] % 20 gives the column.
    // This also supports n-dimensional tensors.
    // Note: If the tensor is sparse, you must specify this value.
    repeated uint64 shape = 3 [packed = true];
}

// A sparse or dense rank-R tensor that stores data as 32-bit ints (int32).
message Int32Tensor {
    // Each value in the vector. If keys is empty, this is treated as a
    // dense vector.
    repeated int32 values = 1 [packed = true];

    // If this is not empty, the vector is treated as sparse with
    // each key specifying the location of the value in the sparse vector.
    repeated uint64 keys = 2 [packed = true];

    // An optional shape that allows the vector to represent a matrix.
    // For example, if shape = [ 10, 20 ], floor(keys[i] / 20) gives the row,
    // and keys[i] % 20 gives the column.
    // This also supports n-dimensional tensors.
    // Note: If the tensor is sparse, you must specify this value.
    repeated uint64 shape = 3 [packed = true];
}

// Support for storing binary data for parsing in other ways (such as JPEG/etc).
// This is an example of another type of value and may not immediately be supported.
message Bytes {
    repeated bytes value = 1;

    // If the content type of the data is known, stores it.
    // This allows for the possibility of using decoders for common formats
    // in the future.
    optional string content_type = 2;
}

message Value {
    oneof value {
        // The numbering assumes the possible use of:
        // - float16, float128
        // - int8, int16, int32
        Float32Tensor float32_tensor = 2;
        Float64Tensor float64_tensor = 3;
        Int32Tensor int32_tensor = 7;
        Bytes bytes = 9;
    }
}

message Record {
    // Map from the name of the feature to the value.
    //
    // For vectors and libsvm-like datasets,
    // a single feature with the name `values`
    // should be specified.
    map<string, Value> features = 1;

    // An optional set of labels for this record.
    // Similar to the features field above, the key used for
    // generic scalar / vector labels should be 'values'.
    map<string, Value> label = 2;

    // A unique identifier for this record in the dataset.
    //
    // Whilst not necessary, this allows better
    // debugging where there are data issues.
    //
    // This is not used by the algorithm directly.
    optional string uid = 3;

    // Textual metadata describing the record.
    //
    // This may include JSON-serialized information
    // about the source of the record.
    //
    // This is not used by the algorithm directly.
    optional string metadata = 4;

    // An optional serialized JSON object that allows per-record
    // hyper-parameters/configuration/other information to be set.
    //
    // The meaning/interpretation of this field is defined by
    // the algorithm author and may not be supported.
    //
    // This is used to pass additional inference configuration
    // when batch inference is used (e.g. types of scores to return).
    optional string configuration = 5;
}
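Serialized Record messages are framed in a recordIO container before upload. SageMaker's Python modules already implement this conversion, so prefer those helpers. Purely as an illustration of the framing, the following sketch assumes the MXNet RecordIO convention (a 0xced7230a magic number, a length field, the payload, then zero-padding to a 4-byte boundary); treat these details as an assumption:

```python
import struct

# Assumed MXNet RecordIO framing: magic number, payload length,
# payload bytes, zero-padding to a 4-byte boundary. In practice the
# payload would be Record.SerializeToString() from your protobuf library.
_MAGIC = 0xCED7230A

def write_recordio(stream, payload: bytes) -> None:
    """Wrap one serialized record in the assumed recordIO framing."""
    stream.write(struct.pack("<II", _MAGIC, len(payload)))
    stream.write(payload)
    stream.write(b"\x00" * ((4 - len(payload) % 4) % 4))  # pad to 4 bytes

def read_recordio(stream):
    """Yield record payloads back out of a recordIO stream."""
    while True:
        header = stream.read(8)
        if len(header) < 8:
            return
        magic, length = struct.unpack("<II", header)
        assert magic == _MAGIC, "not a recordIO record boundary"
        yield stream.read(length)
        stream.read((4 - length % 4) % 4)  # skip padding
```

Round-tripping a payload through these two helpers recovers the original bytes, which is a quick way to sanity-check the framing.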

After you create the protocol buffer, store it in an Amazon S3 location that Amazon SageMaker can access and that can be passed as part of InputDataConfig in create_training_job.

Note

For all Amazon SageMaker algorithms, the ChannelName in InputDataConfig must be set to train. Some algorithms also support a validation or test input channels. These are typically used to evaluate the model's performance by using a hold-out dataset. Hold-out datasets are not used in the initial training but can be used to further tune the model.
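A sketch of an InputDataConfig with both a train and a validation channel, following the create_training_job channel layout; the S3 URIs are placeholders:

```python
# Two input channels: "train" is required; "validation" holds the
# hold-out dataset used to evaluate and further tune the model.
input_data_config = [
    {
        "ChannelName": "train",
        "ContentType": "application/x-recordio-protobuf",
        "DataSource": {"S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://amzn-s3-demo-bucket/train/",       # placeholder
        }},
    },
    {
        "ChannelName": "validation",
        "ContentType": "application/x-recordio-protobuf",
        "DataSource": {"S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://amzn-s3-demo-bucket/validation/",  # placeholder
        }},
    },
]
```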

Trained Model Deserialization

Amazon SageMaker models are stored as model.tar.gz in the S3 bucket specified in the OutputDataConfig S3OutputPath parameter of the create_training_job call. The S3 bucket must be in the same AWS Region as the notebook instance. You can specify most of these model artifacts when creating a hosted model. You can also open and review them in your notebook instance. When model.tar.gz is untarred, it contains model_algo-1, which is a serialized Apache MXNet object. For example, you use the following to load the k-means model into memory and view it:

import mxnet as mx
print(mx.ndarray.load('model_algo-1'))
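Before the serialized object can be loaded, the archive has to be downloaded from S3 and untarred. A minimal sketch with Python's standard library (the local paths are illustrative):

```python
import tarfile

def extract_model(archive_path: str, dest: str = ".") -> None:
    """Untar a SageMaker model.tar.gz so the serialized model
    (for example model_algo-1) can be inspected locally."""
    with tarfile.open(archive_path, "r:gz") as tar:
        tar.extractall(path=dest)
```

For example, extract_model("model.tar.gz", "model") would leave model/model_algo-1 on disk, ready for the mx.ndarray.load call above.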