

# AWS Glue PySpark transforms reference
<a name="aws-glue-programming-python-transforms"></a>

AWS Glue provides the following built-in transforms that you can use in PySpark ETL operations. Your data passes from transform to transform in a data structure called a *DynamicFrame*, which is an extension of the Apache Spark SQL `DataFrame`. The `DynamicFrame` contains your data, and you reference its schema to process your data.

In addition, most of these transforms also exist as methods of the `DynamicFrame` class. For more information, see [DynamicFrame transforms](aws-glue-api-crawler-pyspark-extensions-dynamic-frame.md#aws-glue-api-crawler-pyspark-extensions-dynamic-frame-_transforms). A minimal sketch of the two calling styles follows the list below.
+ [GlueTransform base class](aws-glue-api-crawler-pyspark-transforms-GlueTransform.md)
+ [ApplyMapping class](aws-glue-api-crawler-pyspark-transforms-ApplyMapping.md)
+ [DropFields class](aws-glue-api-crawler-pyspark-transforms-DropFields.md)
+ [DropNullFields class](aws-glue-api-crawler-pyspark-transforms-DropNullFields.md)
+ [ErrorsAsDynamicFrame class](aws-glue-api-crawler-pyspark-transforms-ErrorsAsDynamicFrame.md)
+ [EvaluateDataQuality class](aws-glue-api-crawler-pyspark-transforms-EvaluateDataQuality.md)
+ [FillMissingValues class](aws-glue-api-crawler-pyspark-transforms-fillmissingvalues.md)
+ [Filter class](aws-glue-api-crawler-pyspark-transforms-filter.md)
+ [FindIncrementalMatches class](aws-glue-api-crawler-pyspark-transforms-findincrementalmatches.md)
+ [FindMatches class](aws-glue-api-crawler-pyspark-transforms-findmatches.md)
+ [FlatMap class](aws-glue-api-crawler-pyspark-transforms-flat-map.md)
+ [Join class](aws-glue-api-crawler-pyspark-transforms-join.md)
+ [Map class](aws-glue-api-crawler-pyspark-transforms-map.md)
+ [MapToCollection class](aws-glue-api-crawler-pyspark-transforms-MapToCollection.md)
+ [mergeDynamicFrame](aws-glue-api-crawler-pyspark-extensions-dynamic-frame.md#aws-glue-api-crawler-pyspark-extensions-dynamic-frame-merge)
+ [Relationalize class](aws-glue-api-crawler-pyspark-transforms-Relationalize.md)
+ [RenameField class](aws-glue-api-crawler-pyspark-transforms-RenameField.md)
+ [ResolveChoice class](aws-glue-api-crawler-pyspark-transforms-ResolveChoice.md)
+ [SelectFields class](aws-glue-api-crawler-pyspark-transforms-SelectFields.md)
+ [SelectFromCollection class](aws-glue-api-crawler-pyspark-transforms-SelectFromCollection.md)
+ [Simplify_ddb_json class](aws-glue-api-crawler-pyspark-transforms-simplify-ddb-json.md)
+ [Spigot class](aws-glue-api-crawler-pyspark-transforms-spigot.md)
+ [SplitFields class](aws-glue-api-crawler-pyspark-transforms-SplitFields.md)
+ [SplitRows class](aws-glue-api-crawler-pyspark-transforms-SplitRows.md)
+ [Unbox class](aws-glue-api-crawler-pyspark-transforms-Unbox.md)
+ [UnnestFrame class](aws-glue-api-crawler-pyspark-transforms-UnnestFrame.md)
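
For example, a transform such as `DropFields` can be invoked either through its transform class or through the corresponding `DynamicFrame` method. The following is a minimal sketch; the catalog database, table, and field names (`example_db`, `example_table`, `example_field`) are placeholders, not values from this guide.

```
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.transforms import DropFields

sc = SparkContext()
glueContext = GlueContext(sc)

# Read a DynamicFrame from the Data Catalog (placeholder database/table names).
dyf = glueContext.create_dynamic_frame.from_catalog(
    database="example_db", table_name="example_table"
)

# Class form of the transform.
trimmed = DropFields.apply(frame=dyf, paths=["example_field"])

# Equivalent DynamicFrame method form.
trimmed_alt = dyf.drop_fields(paths=["example_field"])
```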

## Data integration transforms
<a name="aws-glue-programming-python-di-transforms"></a>

For AWS Glue 4.0 and later, create or update the job arguments with `key: --enable-glue-di-transforms, value: true`.
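
For example, you might set this argument when creating a job with the AWS CLI. The following is a sketch; the job name, IAM role, and script location are placeholders.

```
aws glue create-job \
    --name my-di-transforms-job \
    --role arn:aws:iam::111122223333:role/MyGlueJobRole \
    --glue-version "4.0" \
    --command '{"Name": "glueetl", "ScriptLocation": "s3://amzn-s3-demo-bucket/scripts/job.py"}' \
    --default-arguments '{"--enable-glue-di-transforms": "true"}'
```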

Example job script:

```
from pyspark.context import SparkContext
from pyspark.sql import SparkSession

from awsgluedi.transforms import *

sc = SparkContext()
# Create a SparkSession from the SparkContext so the DataFrame API is available.
spark = SparkSession(sc)

# Sample input with a single numeric column, including a null value.
input_df = spark.createDataFrame(
    [(5,), (0,), (-1,), (2,), (None,)],
    ["source_column"],
)

try:
    # Write "Even" or "Not even" to target_column for each value in source_column.
    df_output = math_functions.IsEven.apply(
        data_frame=input_df,
        spark_context=sc,
        source_column="source_column",
        target_column="target_column",
        value=None,
        true_string="Even",
        false_string="Not even",
    )
    df_output.show()
except Exception:
    print("Unexpected error happened")
    raise
```

Example session using notebooks:

```
%idle_timeout 2880
%glue_version 4.0
%worker_type G.1X
%number_of_workers 5
%region eu-west-1
```

```
%%configure
{
    "--enable-glue-di-transforms": "true"
}
```

```
from pyspark.context import SparkContext
from pyspark.sql import SparkSession
from awsgluedi.transforms import *

sc = SparkContext()
# Create a SparkSession from the SparkContext so the DataFrame API is available.
spark = SparkSession(sc)

# Sample input with a single numeric column, including a null value.
input_df = spark.createDataFrame(
    [(5,), (0,), (-1,), (2,), (None,)],
    ["source_column"],
)

try:
    # Write "Even" or "Not even" to target_column for each value in source_column.
    df_output = math_functions.IsEven.apply(
        data_frame=input_df,
        spark_context=sc,
        source_column="source_column",
        target_column="target_column",
        value=None,
        true_string="Even",
        false_string="Not even",
    )
    df_output.show()
except Exception:
    print("Unexpected error happened")
    raise
```

Example session using the AWS CLI:

```
aws glue create-session --default-arguments "--enable-glue-di-transforms=true"
```
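
The command above shows only the relevant argument. A complete `create-session` call also needs a session ID, an IAM role, and a command; the following sketch uses placeholder values for those.

```
aws glue create-session \
    --id my-di-transforms-session \
    --role arn:aws:iam::111122223333:role/MyGlueSessionRole \
    --command '{"Name": "glueetl", "PythonVersion": "3"}' \
    --glue-version 4.0 \
    --default-arguments '{"--enable-glue-di-transforms": "true"}'
```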

DI transforms:
+ [FlagDuplicatesInColumn class](aws-glue-api-pyspark-transforms-FlagDuplicatesInColumn.md)
+ [FormatPhoneNumber class](aws-glue-api-pyspark-transforms-FormatPhoneNumber.md)
+ [FormatCase class](aws-glue-api-pyspark-transforms-FormatCase.md)
+ [FillWithMode class](aws-glue-api-pyspark-transforms-FillWithMode.md)
+ [FlagDuplicateRows class](aws-glue-api-pyspark-transforms-FlagDuplicateRows.md)
+ [RemoveDuplicates class](aws-glue-api-pyspark-transforms-RemoveDuplicates.md)
+ [MonthName class](aws-glue-api-pyspark-transforms-MonthName.md)
+ [IsEven class](aws-glue-api-pyspark-transforms-IsEven.md)
+ [CryptographicHash class](aws-glue-api-pyspark-transforms-CryptographicHash.md)
+ [Decrypt class](aws-glue-api-pyspark-transforms-Decrypt.md)
+ [Encrypt class](aws-glue-api-pyspark-transforms-Encrypt.md)
+ [IntToIp class](aws-glue-api-pyspark-transforms-IntToIp.md)
+ [IpToInt class](aws-glue-api-pyspark-transforms-IpToInt.md)

### Maven: Bundling the plugin with Spark applications
<a name="aws-glue-programming-python-di-transforms-maven"></a>

When developing a Spark application locally, you can bundle the transforms dependency with your Spark application and the Spark distribution (version 3.3) by adding the plugin dependency to your Maven `pom.xml`.

```
<repositories>
   ...
    <repository>
        <id>aws-glue-etl-artifacts</id>
        <url>https://aws-glue-etl-artifacts.s3.amazonaws.com/release/</url>
    </repository>
</repositories>
...
<dependencies>
    ...
    <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>AWSGlueTransforms</artifactId>
        <version>4.0.0</version>
    </dependency>
</dependencies>
```
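
Alternatively, for a quick local test you could let `spark-submit` resolve the artifact at launch time through its standard `--packages` and `--repositories` options. This is a sketch rather than a documented workflow, and the application script name is a placeholder.

```
spark-submit \
    --repositories https://aws-glue-etl-artifacts.s3.amazonaws.com/release/ \
    --packages com.amazonaws:AWSGlueTransforms:4.0.0 \
    my_di_transforms_app.py
```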

You can also download the binaries directly from the AWS Glue Maven artifacts and include them in your Spark application as follows.

```
#!/bin/bash
sudo wget -v https://aws-glue-etl-artifacts.s3.amazonaws.com/release/com/amazonaws/AWSGlueTransforms/4.0.0/AWSGlueTransforms-4.0.0.jar -P /usr/lib/spark/jars/
```