@Generated(value="com.amazonaws:aws-java-sdk-code-generator") public class SparkSQL extends Object implements Serializable, Cloneable, StructuredPojo
Specifies a transform where you enter a SQL query using Spark SQL syntax to transform the data. The output is a single DynamicFrame.
| Constructor and Description |
|---|
| `SparkSQL()` |
| Modifier and Type | Method and Description |
|---|---|
| `SparkSQL` | `clone()` |
| `boolean` | `equals(Object obj)` |
| `List<String>` | `getInputs()` The data inputs identified by their node names. |
| `String` | `getName()` The name of the transform node. |
| `List<GlueSchema>` | `getOutputSchemas()` Specifies the data schema for the SparkSQL transform. |
| `List<SqlAlias>` | `getSqlAliases()` A list of aliases. |
| `String` | `getSqlQuery()` A SQL query that must use Spark SQL syntax and return a single data set. |
| `int` | `hashCode()` |
| `void` | `marshall(ProtocolMarshaller protocolMarshaller)` Marshalls this structured data using the given `ProtocolMarshaller`. |
| `void` | `setInputs(Collection<String> inputs)` The data inputs identified by their node names. |
| `void` | `setName(String name)` The name of the transform node. |
| `void` | `setOutputSchemas(Collection<GlueSchema> outputSchemas)` Specifies the data schema for the SparkSQL transform. |
| `void` | `setSqlAliases(Collection<SqlAlias> sqlAliases)` A list of aliases. |
| `void` | `setSqlQuery(String sqlQuery)` A SQL query that must use Spark SQL syntax and return a single data set. |
| `String` | `toString()` Returns a string representation of this object. |
| `SparkSQL` | `withInputs(Collection<String> inputs)` The data inputs identified by their node names. |
| `SparkSQL` | `withInputs(String... inputs)` The data inputs identified by their node names. |
| `SparkSQL` | `withName(String name)` The name of the transform node. |
| `SparkSQL` | `withOutputSchemas(Collection<GlueSchema> outputSchemas)` Specifies the data schema for the SparkSQL transform. |
| `SparkSQL` | `withOutputSchemas(GlueSchema... outputSchemas)` Specifies the data schema for the SparkSQL transform. |
| `SparkSQL` | `withSqlAliases(Collection<SqlAlias> sqlAliases)` A list of aliases. |
| `SparkSQL` | `withSqlAliases(SqlAlias... sqlAliases)` A list of aliases. |
| `SparkSQL` | `withSqlQuery(String sqlQuery)` A SQL query that must use Spark SQL syntax and return a single data set. |
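The set*/with* pairs above follow the generated-POJO convention of the SDK: the setters replace the backing list outright, while the varargs withers append to it and return `this` for chaining. A minimal self-contained sketch of that convention, using a hypothetical stand-in class rather than the generated `SparkSQL` itself (which requires the aws-java-sdk-glue dependency):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.List;

// Hypothetical stand-in illustrating the documented semantics of the
// generated model class: setInputs replaces the list, while the varargs
// withInputs appends to it and returns `this` for chaining.
public class FluentListDemo {
    private List<String> inputs;

    public void setInputs(Collection<String> inputs) {
        // Replaces any existing values outright.
        this.inputs = (inputs == null) ? null : new ArrayList<>(inputs);
    }

    public FluentListDemo withInputs(String... values) {
        // Appends to the existing list (if any), as the NOTE in the
        // method documentation describes.
        if (this.inputs == null) {
            this.inputs = new ArrayList<>(values.length);
        }
        this.inputs.addAll(Arrays.asList(values));
        return this;
    }

    public List<String> getInputs() {
        return inputs;
    }

    public static void main(String[] args) {
        FluentListDemo node = new FluentListDemo();
        node.withInputs("nodeA").withInputs("nodeB"); // appends: [nodeA, nodeB]
        System.out.println(node.getInputs());
        node.setInputs(List.of("nodeC"));             // replaces: [nodeC]
        System.out.println(node.getInputs());
    }
}
```

This is why the varargs withers carry the NOTE about overriding: chained `withInputs(...)` calls accumulate values, and only `setInputs` (or the Collection-taking wither) discards what was there before.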
public void setName(String name)

The name of the transform node.

Parameters:
name - The name of the transform node.

public String getName()

The name of the transform node.

public SparkSQL withName(String name)

The name of the transform node.

Parameters:
name - The name of the transform node.

public List<String> getInputs()
The data inputs identified by their node names. You can associate a table name with each input node to use in the SQL query. The name you choose must meet the Spark SQL naming restrictions.
public void setInputs(Collection<String> inputs)

The data inputs identified by their node names. You can associate a table name with each input node to use in the SQL query. The name you choose must meet the Spark SQL naming restrictions.

Parameters:
inputs - The data inputs identified by their node names. You can associate a table name with each input node to use in the SQL query. The name you choose must meet the Spark SQL naming restrictions.

public SparkSQL withInputs(String... inputs)

The data inputs identified by their node names. You can associate a table name with each input node to use in the SQL query. The name you choose must meet the Spark SQL naming restrictions.

NOTE: This method appends the values to the existing list (if any). Use setInputs(java.util.Collection) or withInputs(java.util.Collection) if you want to override the existing values.

Parameters:
inputs - The data inputs identified by their node names. You can associate a table name with each input node to use in the SQL query. The name you choose must meet the Spark SQL naming restrictions.

public SparkSQL withInputs(Collection<String> inputs)

The data inputs identified by their node names. You can associate a table name with each input node to use in the SQL query. The name you choose must meet the Spark SQL naming restrictions.

Parameters:
inputs - The data inputs identified by their node names. You can associate a table name with each input node to use in the SQL query. The name you choose must meet the Spark SQL naming restrictions.

public void setSqlQuery(String sqlQuery)
A SQL query that must use Spark SQL syntax and return a single data set.

Parameters:
sqlQuery - A SQL query that must use Spark SQL syntax and return a single data set.

public String getSqlQuery()

A SQL query that must use Spark SQL syntax and return a single data set.

public SparkSQL withSqlQuery(String sqlQuery)

A SQL query that must use Spark SQL syntax and return a single data set.

Parameters:
sqlQuery - A SQL query that must use Spark SQL syntax and return a single data set.

public List<SqlAlias> getSqlAliases()
A list of aliases. An alias allows you to specify what name to use in the SQL for a given input. For example, you have a datasource named "MyDataSource". If you specify From as MyDataSource, and Alias as SqlName, then in your SQL you can do:

select * from SqlName

and that gets data from MyDataSource.
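The From/Alias mechanics above amount to a name substitution: the Alias is the table name used in the SQL text, and it resolves to the From datasource node. A toy sketch of that mapping (a hypothetical illustration, not the SDK's SqlAlias class):

```java
import java.util.List;

// Toy sketch of the From/Alias mapping: the Alias is the name used in the
// SQL text, and it resolves to the From datasource node. This is a
// hypothetical illustration, not the SDK's SqlAlias class.
public class AliasDemo {
    record SqlAliasPair(String from, String alias) {}

    static String resolve(String sql, List<SqlAliasPair> aliases) {
        for (SqlAliasPair a : aliases) {
            // Substitute each alias with the datasource it stands for.
            sql = sql.replaceAll("\\b" + a.alias() + "\\b", a.from());
        }
        return sql;
    }

    public static void main(String[] args) {
        List<SqlAliasPair> aliases =
                List.of(new SqlAliasPair("MyDataSource", "SqlName"));
        System.out.println(resolve("select * from SqlName", aliases));
        // prints: select * from MyDataSource
    }
}
```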
public void setSqlAliases(Collection<SqlAlias> sqlAliases)

A list of aliases. An alias allows you to specify what name to use in the SQL for a given input. For example, you have a datasource named "MyDataSource". If you specify From as MyDataSource, and Alias as SqlName, then in your SQL you can do:

select * from SqlName

and that gets data from MyDataSource.

Parameters:
sqlAliases - A list of aliases. An alias allows you to specify what name to use in the SQL for a given input. For example, you have a datasource named "MyDataSource". If you specify From as MyDataSource, and Alias as SqlName, then in your SQL you can do: select * from SqlName and that gets data from MyDataSource.
public SparkSQL withSqlAliases(SqlAlias... sqlAliases)

A list of aliases. An alias allows you to specify what name to use in the SQL for a given input. For example, you have a datasource named "MyDataSource". If you specify From as MyDataSource, and Alias as SqlName, then in your SQL you can do:

select * from SqlName

and that gets data from MyDataSource.

NOTE: This method appends the values to the existing list (if any). Use setSqlAliases(java.util.Collection) or withSqlAliases(java.util.Collection) if you want to override the existing values.

Parameters:
sqlAliases - A list of aliases. An alias allows you to specify what name to use in the SQL for a given input. For example, you have a datasource named "MyDataSource". If you specify From as MyDataSource, and Alias as SqlName, then in your SQL you can do: select * from SqlName and that gets data from MyDataSource.
public SparkSQL withSqlAliases(Collection<SqlAlias> sqlAliases)

A list of aliases. An alias allows you to specify what name to use in the SQL for a given input. For example, you have a datasource named "MyDataSource". If you specify From as MyDataSource, and Alias as SqlName, then in your SQL you can do:

select * from SqlName

and that gets data from MyDataSource.

Parameters:
sqlAliases - A list of aliases. An alias allows you to specify what name to use in the SQL for a given input. For example, you have a datasource named "MyDataSource". If you specify From as MyDataSource, and Alias as SqlName, then in your SQL you can do: select * from SqlName and that gets data from MyDataSource.
public List<GlueSchema> getOutputSchemas()

Specifies the data schema for the SparkSQL transform.

public void setOutputSchemas(Collection<GlueSchema> outputSchemas)

Specifies the data schema for the SparkSQL transform.

Parameters:
outputSchemas - Specifies the data schema for the SparkSQL transform.

public SparkSQL withOutputSchemas(GlueSchema... outputSchemas)

Specifies the data schema for the SparkSQL transform.

NOTE: This method appends the values to the existing list (if any). Use setOutputSchemas(java.util.Collection) or withOutputSchemas(java.util.Collection) if you want to override the existing values.

Parameters:
outputSchemas - Specifies the data schema for the SparkSQL transform.

public SparkSQL withOutputSchemas(Collection<GlueSchema> outputSchemas)

Specifies the data schema for the SparkSQL transform.

Parameters:
outputSchemas - Specifies the data schema for the SparkSQL transform.

public String toString()

Returns a string representation of this object.

Overrides:
toString in class Object

See Also:
Object.toString()
public void marshall(ProtocolMarshaller protocolMarshaller)

Marshalls this structured data using the given ProtocolMarshaller.

Specified by:
marshall in interface StructuredPojo

Parameters:
protocolMarshaller - Implementation of ProtocolMarshaller used to marshall this object's data.