

 Amazon Redshift will no longer support the creation of new Python UDFs starting with Patch 198. Existing Python UDFs will continue to function until June 30, 2026. For more information, see the [blog post](https://aws.amazon.com/blogs/big-data/amazon-redshift-python-user-defined-functions-will-reach-end-of-support-after-june-30-2026/). 

# Configuring connections in Amazon Redshift
<a name="configuring-connections"></a>

In the following sections, learn how to configure JDBC, Python, and ODBC connections to connect to your cluster from SQL client tools. This section also describes how to use Secure Sockets Layer (SSL) and server certificates to encrypt communication between the client and server. 

## JDBC, Python, and ODBC drivers for Amazon Redshift
<a name="connecting-drivers"></a>

To work with data in your cluster, you must have JDBC, Python, or ODBC drivers for connectivity from your client computer or instance. Code your applications to use JDBC, Python, or ODBC data access API operations, and use SQL client tools that support either JDBC, Python, or ODBC.

Amazon Redshift offers JDBC, Python, and ODBC drivers for download. These drivers are supported by AWS Support. PostgreSQL drivers are not tested and not supported by the Amazon Redshift team. Use the Amazon Redshift–specific drivers when connecting to an Amazon Redshift cluster. The Amazon Redshift drivers have the following advantages:
+ Support for IAM, SSO, and federated authentication.
+ Support for new Amazon Redshift data types.
+ Support for authentication profiles.
+ Improved performance in conjunction with Amazon Redshift enhancements.

 For more information about how to download the JDBC and ODBC drivers and configure connections to your cluster, see [Configuring a connection for JDBC driver version 2.x for Amazon Redshift](jdbc20-install.md), [Amazon Redshift Python connector](python-redshift-driver.md), and [Configuring a connection for ODBC driver version 2.x for Amazon Redshift](odbc20-install.md). 

For more information about managing IAM identities, including best practices for IAM roles, see [Identity and access management in Amazon Redshift](redshift-iam-authentication-access-control.md).

# Finding your cluster connection string
<a name="connecting-connection-string"></a>

To connect to your cluster with your SQL client tool, you must have the cluster connection string. You can find the cluster connection string in the Amazon Redshift console, on a cluster's details page.

**To find the connection string for a cluster**

1. Sign in to the AWS Management Console and open the Amazon Redshift console at [https://console.aws.amazon.com/redshiftv2/](https://console.aws.amazon.com/redshiftv2/).

1. On the navigation menu, choose **Clusters**, then choose the cluster name from the list to open its details.

1. The **JDBC URL** and **ODBC URL** connection strings are available, along with additional details, in the **General information** section. Each string is based on the AWS Region where the cluster runs. Choose the icon next to the appropriate connection string to copy it.

To connect to a cluster endpoint, you can use the cluster endpoint URL from a [DescribeClusters API request](https://docs.aws.amazon.com/redshift/latest/APIReference/API_DescribeClusters.html). The following is an example of a cluster endpoint URL.

```
mycluster.cmeaswqeuae.us-east-2.redshift.amazonaws.com
```

If you have set up a custom domain name for your cluster, you can also use that to connect to your cluster. For more information about creating a custom domain name, see [Setting up a custom domain name](https://docs.aws.amazon.com/redshift/latest/mgmt/connecting-connection-CNAME-connect.html).

**Note**  
When you connect, don't use the IP address of a cluster node or the IP address of the VPC endpoint. Always use the Redshift endpoint to avoid an unnecessary outage. The only exception to using the endpoint URL is when you use a custom domain name. For more information, see [Using a custom domain name for client connections](https://docs.aws.amazon.com/redshift/latest/mgmt/connecting-connection-CNAME.html).

# Configuring a connection for JDBC driver version 2.x for Amazon Redshift
<a name="jdbc20-install"></a>

You can use a JDBC driver version 2.x connection to connect to your Amazon Redshift cluster from many third-party SQL client tools. The Amazon Redshift JDBC connector provides an open source solution. You can browse the source code, request enhancements, report issues, and provide contributions. 

For the latest information about JDBC driver changes, see the [change log](https://github.com/aws/amazon-redshift-jdbc-driver/blob/master/CHANGELOG.md).

By default, the Amazon Redshift JDBC driver is configured to use TCP keepalives to prevent connections from timing out. You can specify when the driver starts sending keepalive packets or turn off the feature by setting the relevant properties in the connection URL. For more information about the syntax of the connection URL, see [Building the connection URL](jdbc20-build-connection-url.md).


| Property | Description | 
| --- | --- | 
|  `TCPKeepAlive`  |  To turn off TCP keepalives, set this property to `FALSE`.  | 
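
For example, to turn off TCP keepalives, you might append the property to the connection URL (the endpoint shown is a placeholder):

```
jdbc:redshift://examplecluster.abc123xyz789.us-east-2.redshift.amazonaws.com:5439/dev;TCPKeepAlive=FALSE
```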

**Topics**
+ [Download the Amazon Redshift JDBC driver, version 2.1](jdbc20-download-driver.md)
+ [Installing the Amazon Redshift JDBC driver, version 2.2](jdbc20-install-driver.md)
+ [Getting the JDBC URL](jdbc20-obtain-url.md)
+ [Building the connection URL](jdbc20-build-connection-url.md)
+ [Configuring a JDBC connection with Apache Maven](configure-jdbc20-connection-with-maven.md)
+ [Configuring authentication and SSL](jdbc20-configure-authentication-ssl.md)
+ [Configuring logging](jdbc20-configuring-logging.md)
+ [Data type conversions](jdbc20-data-type-mapping.md)
+ [Using prepared statement support](jdbc20-prepared-statement-support.md)
+ [Differences between the 2.2 and 1.x versions of the JDBC driver](jdbc20-jdbc10-driver-differences.md)
+ [Creating initialization (.ini) files for JDBC driver version 2.x](jdbc20-ini-file.md)
+ [Options for JDBC driver version 2.x configuration](jdbc20-configuration-options.md)
+ [Previous versions of JDBC driver version 2.x](jdbc20-previous-driver-version-20.md)

# Download the Amazon Redshift JDBC driver, version 2.1
<a name="jdbc20-download-driver"></a>

**Note**  
The Amazon Redshift JDBC 2.x driver isn't designed to be thread-safe. Two or more threads concurrently attempting to use the same connection can lead to deadlocks, errors, incorrect results, or other unexpected behaviors.  
If you do have a multi-threaded application, we recommend that you synchronize access to the driver to avoid concurrent access.

Amazon Redshift offers drivers for tools that are compatible with the JDBC 4.2 API. The class name for this driver is `com.amazon.redshift.Driver`.

For detailed information about how to install the JDBC driver, reference the JDBC driver libraries, and register the driver class, see the following topics. 

For each computer where you use the Amazon Redshift JDBC driver version 2.x, make sure that the Java Runtime Environment (JRE) 8.0 is installed. 

If you use the Amazon Redshift JDBC driver for database authentication, make sure that you have AWS SDK for Java 1.11.118 or later in your Java class path. If you don't have the AWS SDK for Java installed, download the ZIP file that contains the JDBC 4.2–compatible driver and the driver-dependent libraries for the AWS SDK:
+ [JDBC 4.2–compatible driver version 2.x and AWS SDK driver–dependent libraries](https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.2.5/redshift-jdbc42-2.2.5.zip) 

  This ZIP file contains the JDBC 4.2–compatible driver version 2.x and AWS SDK for Java 1.x driver–dependent library files. Unzip the dependent jar files to the same location as the JDBC driver. Only the JDBC driver needs to be in CLASSPATH.

  This ZIP file doesn't include the complete AWS SDK for Java 1.x. However, it includes the AWS SDK for Java 1.x driver–dependent libraries that are required for AWS Identity and Access Management (IAM) database authentication.

  Use this Amazon Redshift JDBC driver with the AWS SDK that is required for IAM database authentication.

  To install the complete AWS SDK for Java 1.x, see [AWS SDK for Java 1.x](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/welcome.html) in the *AWS SDK for Java Developer Guide*. 
+ [JDBC 4.2–compatible driver version 2.x (without the AWS SDK)](https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.2.5/redshift-jdbc42-2.2.5.jar) 

Review the JDBC driver version 2.x software license and change log file: 
+ [JDBC driver version 2.x license](https://github.com/aws/amazon-redshift-jdbc-driver/blob/master/LICENSE) 
+ [JDBC driver version 2.x change log](https://github.com/aws/amazon-redshift-jdbc-driver/blob/master/CHANGELOG.md)

JDBC drivers version 1.2.27.1051 and later support Amazon Redshift stored procedures. For more information, see [Creating stored procedures in Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/dg/stored-procedure-overview.html) in the *Amazon Redshift Database Developer Guide*. 

# Installing the Amazon Redshift JDBC driver, version 2.2
<a name="jdbc20-install-driver"></a>

To install the Amazon Redshift JDBC 4.2–compatible driver version 2.x and driver–dependent libraries for AWS SDK, extract the files from the ZIP archive to the directory of your choice. 

To install the Amazon Redshift JDBC 4.2–compatible driver version 2.x (without the AWS SDK), copy the JAR file to the directory of your choice.

To access an Amazon Redshift data store using the Amazon Redshift JDBC driver, perform the configuration described in the following topics.

**Topics**
+ [Referencing the JDBC driver libraries](jdbc20-driver-libraries.md)
+ [Registering the driver class](jdbc20-register-driver-class.md)

# Referencing the JDBC driver libraries
<a name="jdbc20-driver-libraries"></a>

The JDBC application or Java code that you use to connect to your data must access the driver JAR files. In the application or code, specify all the JAR files that you extracted from the ZIP archive. 

## Using the driver in a JDBC application
<a name="jdbc20-use-driver-jdbc-app"></a>

JDBC applications usually provide a set of configuration options for adding a list of driver library files. Use the provided options to include all the JAR files from the ZIP archive as part of the driver configuration in the application. For more information, see the documentation for your JDBC application. 

## Using the driver in Java code
<a name="jdbc20-use-driver-java-code"></a>

You must include all the driver library files in the class path. This is the path that the Java Runtime Environment searches for classes and other resource files. For more information, see the appropriate Java SE documentation to set the class path for your operating system. 
+ Windows: [https://docs.oracle.com/javase/7/docs/technotes/tools/windows/classpath.html](https://docs.oracle.com/javase/7/docs/technotes/tools/windows/classpath.html)
+ Linux and Solaris: [https://docs.oracle.com/javase/7/docs/technotes/tools/solaris/classpath.html](https://docs.oracle.com/javase/7/docs/technotes/tools/solaris/classpath.html)
+ macOS: The default macOS class path is the directory in which the JDBC driver is installed.

# Registering the driver class
<a name="jdbc20-register-driver-class"></a>

Make sure that you register the appropriate class for your application. You use the following classes to connect the Amazon Redshift JDBC driver to Amazon Redshift data stores:
+ `Driver` classes extend `java.sql.Driver`.
+ `DataSource` classes extend `javax.sql.DataSource` and `javax.sql.ConnectionPoolDataSource`.

The driver supports the following fully qualified class names that are independent of the JDBC version:
+ `com.amazon.redshift.jdbc.Driver`
+ `com.amazon.redshift.jdbc.DataSource`

The following example shows how to use the DriverManager class to establish a connection for JDBC 4.2.

```
private static Connection connectViaDM() throws Exception
{
    Connection connection = null;
    connection = DriverManager.getConnection(CONNECTION_URL);
    return connection;
}
```

The following example shows how to use the `DataSource` class to establish a connection.

```
private static Connection connectViaDS() throws Exception
{
    Connection connection = null;
    DataSource ds = new com.amazon.redshift.jdbc.DataSource();
    ds.setURL(CONNECTION_URL);
    connection = ds.getConnection();
    return connection;
}
```

# Getting the JDBC URL
<a name="jdbc20-obtain-url"></a>

Before you can connect to your Amazon Redshift cluster from a SQL client tool, you need to know the JDBC URL of your cluster. The JDBC URL has the following format: `jdbc:redshift://endpoint:port/database`.

The fields of the preceding format have the following values.


| Field | Value | 
| --- | --- | 
|  `endpoint`  |  The endpoint of the Amazon Redshift cluster.  | 
|  `port`  |  The port number that you specified when you launched the cluster. The default port is 5439.  | 
|  `database`  |  The database that you created for your cluster.  | 

The following is an example JDBC URL: `jdbc:redshift://examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com:5439/dev` 

If your URL values contain any of the following URI reserved characters, the values must be URL encoded:
+ ;
+ /
+ :
+ @
+ [
+ ]
+ &
+ =
+ ?
+ an empty space

For example, if your `PWD` value is `password:password`, a connection URL using that value would look something like the following:

`jdbc:redshift://redshift.company.us-west-1.redshift.amazonaws.com:9000/dev;UID=amazon;PWD=password%3Apassword`
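
One way to produce the encoded value is Java's standard `URLEncoder`. Note that it performs form encoding, where a space becomes `+`, so the sketch below rewrites `+` to `%20` for use inside a connection URL:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class UrlValueEncoder {
    // Percent-encodes a connection URL value. URLEncoder form-encodes
    // a space as '+', so rewrite it to %20 for URL use.
    static String encode(String value) {
        return URLEncoder.encode(value, StandardCharsets.UTF_8).replace("+", "%20");
    }

    public static void main(String[] args) {
        System.out.println(encode("password:password")); // password%3Apassword
    }
}
```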

For information about how to get your JDBC connection, see [Finding your cluster connection string](connecting-connection-string.md). 

If the client computer fails to connect to the database, you can troubleshoot possible issues. For more information, see [Troubleshooting connection issues in Amazon Redshift](troubleshooting-connections.md). 

# Building the connection URL
<a name="jdbc20-build-connection-url"></a>

Use the connection URL to supply connection information to the data store that you are accessing. The following is the format of the connection URL for the Amazon Redshift JDBC driver version 2.x. Here, [Host] is the endpoint of the Amazon Redshift server and [Port] is the number of the Transmission Control Protocol (TCP) port that the server uses to listen for client requests.

```
jdbc:redshift://[Host]:[Port]
```

The following is the format of a connection URL that specifies some optional settings.

```
jdbc:redshift://[Host]:[Port]/[database];[Property1]=[Value];
[Property2]=[Value];
```

If your URL values contain any of the following URI reserved characters, the values must be URL encoded:
+ ;
+ /
+ :
+ @
+ [
+ ]
+ &
+ =
+ ?
+ an empty space

For example, if your `PWD` value is `password:password`, a connection URL using that value would look something like the following:

`jdbc:redshift://redshift.company.us-west-1.redshift.amazonaws.com:9000/dev;UID=amazon;PWD=password%3Apassword`

For example, suppose that you want to connect to port 9000 on an Amazon Redshift cluster in the US West (N. California) Region on AWS. You also want to access the database named `dev` and authenticate the connection using a database username and password. In this case, you use the following connection URL.

```
jdbc:redshift://redshift.company.us-west-1.redshift.amazonaws.com:9000/dev;UID=amazon;PWD=amazon
```

You can use the following characters to separate the configuration options from the rest of the URL string:
+ ;
+ ?

For example, the following URL strings are equivalent:

```
jdbc:redshift://my_host:5439/dev;ssl=true;defaultRowFetchSize=100
```

```
jdbc:redshift://my_host:5439/dev?ssl=true;defaultRowFetchSize=100
```

You can use the following characters to separate configuration options from each other in the URL string:
+ ;
+ &

For example, the following URL strings are equivalent:

```
jdbc:redshift://my_host:5439/dev;ssl=true;defaultRowFetchSize=100
```

```
jdbc:redshift://my_host:5439/dev;ssl=true&defaultRowFetchSize=100
```

The following URL example specifies a log level of 6 and the path for the logs.

```
jdbc:redshift://redshift.amazonaws.com:5439/dev;DSILogLevel=6;LogPath=/home/user/logs;
```

Don't duplicate properties in the connection URL.

For a complete list of the configuration options that you can specify, see [Options for JDBC driver version 2.x configuration](jdbc20-configuration-options.md). 
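
To illustrate these rules, here is a hypothetical helper (not part of the driver) that assembles a connection URL with `;` separators and rejects duplicate properties:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: builds a connection URL from a base endpoint plus an ordered
// set of options, separated by ';', refusing duplicate property names.
public class RedshiftUrl {
    private final String base;
    private final Map<String, String> options = new LinkedHashMap<>();

    RedshiftUrl(String host, int port, String database) {
        this.base = "jdbc:redshift://" + host + ":" + port + "/" + database;
    }

    RedshiftUrl option(String name, String value) {
        if (options.putIfAbsent(name, value) != null) {
            throw new IllegalArgumentException("Duplicate property: " + name);
        }
        return this;
    }

    String build() {
        StringBuilder sb = new StringBuilder(base);
        options.forEach((k, v) -> sb.append(';').append(k).append('=').append(v));
        return sb.toString();
    }

    public static void main(String[] args) {
        String url = new RedshiftUrl("my_host", 5439, "dev")
            .option("ssl", "true")
            .option("defaultRowFetchSize", "100")
            .build();
        System.out.println(url);
        // jdbc:redshift://my_host:5439/dev;ssl=true;defaultRowFetchSize=100
    }
}
```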

**Note**  
When you connect, don't use the IP address of a cluster node or the IP address of the VPC endpoint. Always use the Redshift endpoint to avoid an unnecessary outage. The only exception to using the endpoint URL is when you use a custom domain name. For more information, see [Using a custom domain name for client connections](https://docs.aws.amazon.com/redshift/latest/mgmt/connecting-connection-CNAME.html).

# Configuring a JDBC connection with Apache Maven
<a name="configure-jdbc20-connection-with-maven"></a>

Apache Maven is a software project management and comprehension tool. The AWS SDK for Java supports Apache Maven projects. For more information, see [Using the SDK with Apache Maven](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/setup-project-maven.html) in the *AWS SDK for Java Developer Guide.* 

If you use Apache Maven, you can configure and build your projects to use an Amazon Redshift JDBC driver to connect to your Amazon Redshift cluster. To do this, add the JDBC driver as a dependency in your project's `pom.xml` file. If you use Maven to build your project and want to use a JDBC connection, take the steps in the following section. 

**To configure the JDBC driver as a Maven dependency**

1. Add either the Amazon repository or the Maven Central repository to the repositories section of your `pom.xml` file.
**Note**  
The URL in the following code example returns an error if used in a browser. Use this URL only in the context of a Maven project.

   To connect using Secure Sockets Layer (SSL), add the following repository to your `pom.xml` file.

   ```
   <repositories>
       <repository>
         <id>redshift</id>
         <url>https://s3.amazonaws.com/redshift-maven-repository/release</url>
       </repository>
   </repositories>
   ```

   For a Maven Central repository, add the following to your `pom.xml` file.

   ```
   <repositories>
       <repository>
         <id>redshift</id>
         <url>https://repo1.maven.org/maven2</url>
       </repository>
   </repositories>
   ```

1. Declare the version of the driver that you want to use in the dependencies section of your `pom.xml` file.

   Amazon Redshift offers drivers for tools that are compatible with the JDBC 4.2 API. For information about the functionality supported by these drivers, see [Download the Amazon Redshift JDBC driver, version 2.1](jdbc20-download-driver.md). 

   Replace `driver-version` in the following example with your driver version, for example `2.1.0.1`. For a JDBC 4.2–compatible driver, use the following. 

   ```
   <dependency>
      <groupId>com.amazon.redshift</groupId>
      <artifactId>redshift-jdbc42</artifactId>
      <version>driver-version</version>
   </dependency>
   ```

   The class name for this driver is `com.amazon.redshift.Driver`.

The Amazon Redshift Maven drivers need the following optional dependencies when you use IAM database authentication. 

```
<dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-java-sdk-core</artifactId>
      <version>1.12.23</version>
      <scope>runtime</scope>
      <optional>true</optional>
</dependency>
    <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-java-sdk-redshift</artifactId>
      <version>1.12.23</version>
      <scope>runtime</scope>
      <optional>true</optional>
</dependency>
<dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-java-sdk-sts</artifactId>
      <version>1.12.23</version>
      <scope>runtime</scope>
      <optional>true</optional>
</dependency>
```

To upgrade or change the Amazon Redshift JDBC driver to the latest version, first modify the version section of the dependency to the latest version of the driver. Then clean your project with the Maven Clean Plugin, as shown following. 

```
mvn clean
```

# Configuring authentication and SSL
<a name="jdbc20-configure-authentication-ssl"></a>

To protect data from unauthorized access, Amazon Redshift data stores require all connections to be authenticated using user credentials. Some data stores also require connections to be made over the Secure Sockets Layer (SSL) protocol, either with or without one-way authentication.

The Amazon Redshift JDBC driver version 2.x provides full support for these authentication protocols. 

The SSL version that the driver supports depends on the JVM version that you are using. For information about the SSL versions that are supported by each version of Java, see [Diagnosing TLS, SSL, and HTTPS](https://blogs.oracle.com/java-platform-group/diagnosing-tls,-ssl,-and-https) on the Java Platform Group Product Management Blog. 

The SSL version used for the connection is the highest version that is supported by both the driver and the server, which is determined at connection time.

Configure the Amazon Redshift JDBC driver version 2.x to authenticate your connection according to the security requirements of the Redshift server that you are connecting to. 

You must always provide your Redshift username and password to authenticate the connection. Depending on whether SSL is enabled and required on the server, you might also need to configure the driver to connect through SSL. Or you might use one-way SSL authentication so that the client (the driver itself) verifies the identity of the server. 

You provide the configuration information to the driver in the connection URL. For more information about the syntax of the connection URL, see [Building the connection URL](jdbc20-build-connection-url.md). 

*SSL* indicates TLS/SSL, both Transport Layer Security and Secure Sockets Layer. The driver supports industry-standard versions of TLS/SSL. 

## Configuring IAM authentication
<a name="jdbc20-configure-iam-authentication"></a>

If you are connecting to an Amazon Redshift server using IAM authentication, set the following properties as part of your data source connection string. 

 For more information on IAM authentication, see [Identity and access management in Amazon Redshift](redshift-iam-authentication-access-control.md).

To use IAM authentication, use one of the following connection string formats:


| Connection string | Description | 
| --- | --- | 
|  `jdbc:redshift:iam://[host]:[port]/[db]`  |  A regular connection string. The driver infers the ClusterID and Region from the host.  | 
|  `jdbc:redshift:iam://[cluster-id]:[region]/[db]`  |  The driver retrieves host information, given the ClusterID and Region.  | 
|  `jdbc:redshift:iam://[host]/[db]`  |  The driver defaults to port 5439 and infers the ClusterID and Region from the host. Make sure that access is allowed to the port that you selected when creating, modifying, or migrating the cluster.  | 
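
As an illustration, a small sketch (the helper names are hypothetical) can assemble each of the three IAM URL shapes from its parts:

```java
// Sketch: the three jdbc:redshift:iam:// connection string shapes.
public class IamUrls {
    // Regular connection string; driver infers ClusterID and Region from the host.
    static String fromHost(String host, int port, String db) {
        return "jdbc:redshift:iam://" + host + ":" + port + "/" + db;
    }

    // Driver retrieves host information from the ClusterID and Region.
    static String fromClusterId(String clusterId, String region, String db) {
        return "jdbc:redshift:iam://" + clusterId + ":" + region + "/" + db;
    }

    // Driver defaults to port 5439.
    static String fromHostDefaultPort(String host, String db) {
        return "jdbc:redshift:iam://" + host + "/" + db;
    }

    public static void main(String[] args) {
        System.out.println(fromClusterId("examplecluster", "us-west-2", "dev"));
        // jdbc:redshift:iam://examplecluster:us-west-2/dev
    }
}
```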

## Specifying profiles
<a name="jdbc20-aws-credentials-profiles"></a>

If you are using IAM authentication, you can specify any additional required or optional connection properties under a profile name. By doing this, you can avoid putting certain information directly in the connection string. You specify the profile name in your connection string using the Profile property. 

Profiles can be added to the AWS credentials file. The default location for this file is: `~/.aws/credentials` 

You can change the default value by setting the path in the following environment variable: `AWS_CREDENTIAL_PROFILES_FILE` 

 For more information about profiles, see [Working with AWS Credentials](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html) in the *AWS SDK for Java*. 
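
As a sketch, a named profile in the credentials file might look like the following (the keys shown are AWS's documented example values, and the profile name is illustrative):

```ini
[redshift-profile]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```

You could then reference this profile in the connection string with `Profile=redshift-profile`.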

## Using instance profile credentials
<a name="jdbc20-instance-profile-credentials"></a>

If you are running an application on an Amazon EC2 instance that is associated with an IAM role, you can connect using the instance profile credentials. 

To do this, use one of the IAM connection string formats in the preceding table, and set the dbuser connection property to the Amazon Redshift username that you are connecting as. 

For more information about instance profiles, see [Access Management](https://docs.aws.amazon.com/IAM/latest/UserGuide/access.html) in the *IAM User Guide*. 

## Using credential providers
<a name="jdbc20-aws-credentials-provider"></a>

The driver also supports credential provider plugins from the following services: 
+ AWS IAM Identity Center
+ Active Directory Federation Service (ADFS)
+ JSON Web Tokens (JWT) Service
+ Microsoft Azure Active Directory (AD) Service and Browser Microsoft Azure Active Directory (AD) Service
+ Okta Service
+ PingFederate Service 
+ Browser SAML for SAML services such as Okta, Ping, or ADFS

If you use one of these services, the connection URL needs to specify the following properties: 
+ **plugin_name** – The fully qualified class path for your credentials provider plugin class.
+ **idp_host** – The host for the service that you are using to authenticate into Amazon Redshift.
+ **idp_port** – The port that the host for the authentication service listens at. Not required for Okta.
+ **user** – The username for the idp_host server.
+ **password** – The password associated with the idp_host username.
+ **DbUser** – The Amazon Redshift username you are connecting as.
+ **ssl_insecure** – Indicates whether the IdP server certificate should be verified.
+ **client_id** – The client ID associated with the username in the Azure AD portal. Only used for Azure AD.
+ **client_secret** – The client secret associated with the client ID in the Azure AD portal. Only used for Azure AD.
+ **idp_tenant** – The Azure AD tenant ID for your Amazon Redshift application. Only used for Azure AD.
+ **app_id** – The Okta app ID for your Amazon Redshift application. Only used for Okta.
+ **app_name** – The optional Okta app name for your Amazon Redshift application. Only used for Okta.
+ **partner_spid** – The optional partner SPID (service provider ID) value. Only used for PingFederate.
+ **idc_region** – The AWS Region where the AWS IAM Identity Center instance is located. Only used for AWS IAM Identity Center.
+ **issuer_url** – The AWS IAM Identity Center server's instance endpoint. Only used for AWS IAM Identity Center.

If you are using a browser plugin for one of these services, the connection URL can also include: 
+ **login_url** – The URL for the resource on the identity provider's website when using the Security Assertion Markup Language (SAML) or Azure AD services through a browser plugin. This parameter is required if you are using a browser plugin.
+ **listen_port** – The port that the driver uses to get the SAML response from the identity provider when using the SAML, Azure AD, or AWS IAM Identity Center services through a browser plugin.
+ **idp_response_timeout** – The amount of time, in seconds, that the driver waits for the SAML response from the identity provider when using the SAML, Azure AD, or AWS IAM Identity Center services through a browser plugin.

For information on additional connection string properties, see [Options for JDBC driver version 2.x configuration](jdbc20-configuration-options.md). 

# Using username and password only
<a name="jdbc20-authentication-username-password"></a>

If the server you are connecting to doesn't use SSL, then you only need to provide your Redshift username and password to authenticate the connection. 

**To configure authentication using your Redshift username and password only**

1. Set the `UID` property to your Redshift username for accessing the Amazon Redshift server.

1. Set the PWD property to the password corresponding to your Redshift username.
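
A minimal sketch of these two steps, assuming placeholder credentials and endpoint, builds the property set that you would pass to `DriverManager.getConnection` together with the cluster URL:

```java
import java.util.Properties;

// Sketch: collect UID/PWD as connection properties instead of
// embedding them in the connection URL.
public class BasicAuthProps {
    static Properties credentials(String username, String password) {
        Properties props = new Properties();
        props.setProperty("UID", username); // Redshift username
        props.setProperty("PWD", password); // password for that username
        return props;
    }

    public static void main(String[] args) {
        Properties props = credentials("amazon", "examplePassword");
        // With the driver on the class path, you would then connect with:
        // DriverManager.getConnection("jdbc:redshift://<endpoint>:5439/dev", props);
        System.out.println(props.getProperty("UID")); // amazon
    }
}
```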

# Using SSL without identity verification
<a name="jdbc20-use-ssl-without-identity-verification"></a>

If the server you are connecting to uses SSL but doesn't require identity verification, then you can configure the driver to use a non-validating SSL factory. 

**To configure an SSL connection without identity verification**

1. Set the `UID` property to your Redshift username for accessing the Amazon Redshift server.

1. Set the `PWD` property to the password corresponding to your Redshift username.

1. Set the `SSLFactory` property to `com.amazon.redshift.ssl.NonValidatingFactory`.

# Using one-way SSL authentication
<a name="jdbc20-use-one-way-SSL-authentication"></a>

If the server you are connecting to uses SSL and has a certificate, then you can configure the driver to verify the identity of the server using one-way authentication. 

One-way authentication requires a signed, trusted SSL certificate for verifying the identity of the server. You can configure the driver to use a specific certificate or access a TrustStore that contains the appropriate certificate. If you don't specify a certificate or TrustStore, then the driver uses the default Java TrustStore (typically either `jssecacerts` or `cacerts`). 

**To configure one-way SSL authentication**

1. Set the UID property to your Redshift username for accessing the Amazon Redshift server.

1. Set the PWD property to the password corresponding to your Redshift username.

1. Set the SSL property to true.

1. Set the SSLRootCert property to the location of your root CA certificate.

1. If you aren't using one of the default Java TrustStores, then do one of the following:
   + To specify a server certificate, set the SSLRootCert property to the full path of the certificate.
   + To specify a TrustStore, do the following:

     1. Use the keytool program to add the server certificate to the TrustStore that you want to use.

     1. Specify the TrustStore and password to use when starting the Java application using the driver. For example:

        ```
        -Djavax.net.ssl.trustStore=[TrustStoreName]
        -Djavax.net.ssl.trustStorePassword=[TrustStorePassword]
        -Djavax.net.ssl.trustStoreType=[TrustStoreType]
        ```

1. Choose one:
   + To validate the certificate, set the SSLMode property to verify-ca.
   + To validate the certificate and verify the host name in the certificate, set the SSLMode property to verify-full.
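
A minimal sketch of these steps, assuming a placeholder certificate path, collects the properties programmatically rather than in the connection URL:

```java
import java.util.Properties;

// Sketch: connection properties for one-way SSL where the driver
// validates the server certificate and verifies the host name.
public class OneWaySslProps {
    static Properties oneWaySsl(String user, String password, String rootCertPath) {
        Properties props = new Properties();
        props.setProperty("UID", user);
        props.setProperty("PWD", password);
        props.setProperty("SSL", "true");               // require SSL
        props.setProperty("SSLRootCert", rootCertPath); // root CA certificate location
        props.setProperty("SSLMode", "verify-full");    // validate cert and host name
        return props;
    }

    public static void main(String[] args) {
        Properties props = oneWaySsl("amazon", "examplePassword", "/path/to/root.crt");
        System.out.println(props.getProperty("SSLMode")); // verify-full
    }
}
```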

# Configuring logging
<a name="jdbc20-configuring-logging"></a>

You can turn on logging in the driver to assist in diagnosing issues.

You can log driver information by using the following methods:
+ To save logged information in .log files, see [Using log files](jdbc20-using-log-files.md).
+ To send logged information to the LogStream or LogWriter specified in the DriverManager, see [Using LogStream or LogWriter](jdbc20-logstream-option.md). 

You provide the configuration information to the driver in the connection URL. For more information about the syntax of the connection URL, see [Building the connection URL](jdbc20-build-connection-url.md).

# Using log files
<a name="jdbc20-using-log-files"></a>

Only turn on logging long enough to capture an issue. Logging decreases performance and can consume a large quantity of disk space. 

Set the LogLevel key in your connection URL to turn on logging and specify the amount of detail included in log files. The following table lists the logging levels provided by the Amazon Redshift JDBC driver version 2.x, in order from least verbose to most verbose. 


| LogLevel value | Description | 
| --- | --- | 
|  1  |  Log severe error events that will lead the driver to abort.  | 
|  2  |  Log error events that might allow the driver to continue running.  | 
|  3  |  Log events that might result in an error if action is not taken. This level of logging and the levels of logging above this level also log the user's queries.  | 
|  4  |  Log general information that describes the progress of the driver.  | 
|  5  |  Log detailed information that is useful for debugging the driver.  | 
|  6  |  Log all driver activity.  | 

**To set up logging that uses log files**

1. Set the LogLevel property to the desired level of information to include in log files.

1. Set the LogPath property to the full path to the folder where you want to save log files. 

   For example, the following connection URL enables logging level 3 and saves the log files in the C:\temp folder: `jdbc:redshift://redshift.company.us-west-1.redshift.amazonaws.com:9000/Default;LogLevel=3;LogPath=C:\temp`

1. To make sure that the new settings take effect, restart your JDBC application and reconnect to the server.

   The Amazon Redshift JDBC driver produces the following log files in the location specified in the LogPath property:
   + A `redshift_jdbc.log` file that logs driver activity that is not specific to a connection.
   + A `redshift_jdbc_connection_[Number].log` file for each connection made to the database, where `[Number]` is a number that identifies each log file. This file logs driver activity that is specific to the connection.

If the LogPath value is invalid, then the driver sends the logged information to the standard output stream (`System.out`).
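The two logging settings can be appended to a base connection URL as key-value pairs. The following sketch (the endpoint is a placeholder) builds such a URL, doubling the backslashes in a Windows path so they survive string escaping:

```java
public class LogUrl {
    // Append LogLevel and LogPath to a base JDBC URL. The endpoint used
    // in testing is a placeholder; backslashes in the path are doubled.
    static String withLogging(String baseUrl, int logLevel, String logPath) {
        String escapedPath = logPath.replace("\\", "\\\\");
        return baseUrl + ";LogLevel=" + logLevel + ";LogPath=" + escapedPath;
    }
}
```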

# Using LogStream or LogWriter
<a name="jdbc20-logstream-option"></a>

Only turn on logging long enough to capture an issue. Logging decreases performance and can consume a large quantity of disk space. 

Set the LogLevel key in your connection URL to turn on logging and specify the amount of detail sent to the LogStream or LogWriter specified in the DriverManager. 

**To turn on logging that uses the LogStream or LogWriter:**

1. To configure the driver to log general information that describes the progress of the driver, set the LogLevel property to 4 (the INFO level).

1. To make sure that the new settings take effect, restart your JDBC application and reconnect to the server.

# Data type conversions
<a name="jdbc20-data-type-mapping"></a>

The Amazon Redshift JDBC driver version 2.x supports many common data formats, converting between Amazon Redshift, SQL, and Java data types.

The following table lists the supported data type mappings.


| Amazon Redshift type | SQL type | Java type | 
| --- | --- | --- | 
|  BIGINT  |  SQL\_BIGINT  |  Long  | 
|  BOOLEAN  |  SQL\_BIT  |  Boolean  | 
|  CHAR  |  SQL\_CHAR  |  String  | 
|  DATE  |  SQL\_TYPE\_DATE  |  java.sql.Date  | 
|  DECIMAL  |  SQL\_NUMERIC  |  BigDecimal  | 
|  DOUBLE PRECISION  |  SQL\_DOUBLE  |  Double  | 
|  GEOMETRY  |  SQL\_LONGVARBINARY  |  byte[]  | 
|  INTEGER  |  SQL\_INTEGER  |  Integer  | 
|  OID  |  SQL\_BIGINT  |  Long  | 
|  SUPER  |  SQL\_LONGVARCHAR  |  String  | 
|  REAL  |  SQL\_REAL  |  Float  | 
|  SMALLINT  |  SQL\_SMALLINT  |  Short  | 
|  TEXT  |  SQL\_VARCHAR  |  String  | 
|  TIME  |  SQL\_TYPE\_TIME  |  java.sql.Time  | 
|  TIMETZ  |  SQL\_TYPE\_TIME  |  java.sql.Time  | 
|  TIMESTAMP  |  SQL\_TYPE\_TIMESTAMP  |  java.sql.Timestamp  | 
|  TIMESTAMPTZ  |  SQL\_TYPE\_TIMESTAMP  |  java.sql.Timestamp  | 
|  VARCHAR  |  SQL\_VARCHAR  |  String  | 
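For illustration only (this lookup table is not part of the driver API), a few of the mappings above can be expressed as a simple map from Amazon Redshift type names to Java type names:

```java
import java.util.Map;

public class RedshiftTypeMap {
    // A small sample of the Amazon Redshift -> Java type mappings from
    // the table above, expressed as a lookup table for illustration.
    static final Map<String, String> JAVA_TYPE = Map.of(
        "BIGINT", "Long",
        "BOOLEAN", "Boolean",
        "DECIMAL", "BigDecimal",
        "GEOMETRY", "byte[]",
        "SUPER", "String",
        "TIMESTAMPTZ", "java.sql.Timestamp",
        "VARCHAR", "String"
    );
}
```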

# Using prepared statement support
<a name="jdbc20-prepared-statement-support"></a>

The Amazon Redshift JDBC driver supports prepared statements. You can use prepared statements to improve the performance of parameterized queries that need to be run multiple times during the same connection.

A *prepared statement* is a SQL statement that is compiled on the server side but not run immediately. The compiled statement is stored on the server as a PreparedStatement object until you close the object or the connection. While that object exists, you can run the prepared statement as many times as needed using different parameter values, without having to compile the statement again. This reduced overhead enables the set of queries to be run more quickly.

For more information about prepared statements, see "Using Prepared Statements" in [JDBC Basics tutorial from Oracle](https://docs.oracle.com/javase/tutorial/jdbc/basics/prepared.html).

You can prepare a statement that contains multiple queries. For example, the following prepared statement contains two INSERT queries:

```
PreparedStatement pstmt = conn.prepareStatement("INSERT INTO
MyTable VALUES (1, 'abc'); INSERT INTO CompanyTable VALUES
(1, 'abc');");
```

Take care that these queries don't depend on the results of other queries that are specified within the same prepared statement. Because queries don't run during the prepare step, the results have not been returned yet, and aren't available to other queries in the same prepared statement.

For example, the following prepared statement, which creates a table and then inserts values into that newly-created table, is not allowed:

```
PreparedStatement pstmt = conn.prepareStatement("CREATE
TABLE MyTable(col1 int, col2 varchar); INSERT INTO myTable
VALUES (1, 'abc');");
```

If you try to prepare this statement, the server returns an error stating that the destination table (myTable) doesn't exist yet. The CREATE query must be run before the INSERT query can be prepared.
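The benefit of preparing once and running many times can be sketched independently of the driver. The following toy cache "compiles" a statement only the first time its SQL text is seen, mirroring how the server stores a PreparedStatement object per statement; it is illustrative only, since the real compilation happens on the Amazon Redshift server:

```java
import java.util.HashMap;
import java.util.Map;

public class PrepareOnce {
    private final Map<String, Integer> compiled = new HashMap<>();
    private int compileCount = 0;

    // "Prepare" a statement: compile it only on first use, so repeated
    // executions with different parameter values skip the compile step.
    int prepare(String sql) {
        return compiled.computeIfAbsent(sql, s -> ++compileCount);
    }

    int compilations() {
        return compileCount;
    }
}
```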

# Differences between the 2.2 and 1.x versions of the JDBC driver
<a name="jdbc20-jdbc10-driver-differences"></a>

This section describes the differences in the information returned by the 2.2 and 1.x versions of the JDBC driver. The JDBC driver version 1.x is discontinued.

The following table lists the DatabaseMetadata information returned by the getDatabaseProductName() and getDatabaseProductVersion() functions for each version of the JDBC driver. JDBC driver version 2.2 obtains the values while establishing the connection. JDBC driver version 1.x obtains the values as a result of a query.


| JDBC driver version | getDatabaseProductName() result | getDatabaseProductVersion() result | 
| --- | --- | --- | 
|  2.2  |  Redshift  |  8.0.2  | 
|  1.x  |  PostgreSQL  |  08.00.0002  | 

The following table lists the DatabaseMetadata information returned by the getTypeInfo function for each version of the JDBC driver. 


| JDBC driver version | getTypeInfo result | 
| --- | --- | 
|  2.2  |  Consistent with Redshift datatypes  | 
|  1.x  |  Consistent with PostgreSQL datatypes  | 

# Creating initialization (.ini) files for JDBC driver version 2.x
<a name="jdbc20-ini-file"></a>

By using initialization (.ini) files for Amazon Redshift JDBC driver version 2.x, you can specify system level configuration parameters. For example, federated IdP authentication parameters can vary for each application. The .ini file provides a common location for SQL clients to get the required configuration parameters. 

You can create a JDBC driver version 2.x initialization (.ini) file that contains configuration options for SQL clients. The default name of the file is `rsjdbc.ini`. The JDBC driver version 2.x checks for the .ini file in the following locations, listed in order of precedence:
+ `IniFile` parameter in the connection URL or in the connection property dialog box of the SQL client. Be sure that the `IniFile` parameter contains the full path to the .ini file, including the file name. For information about the `IniFile` parameter, see [IniFile](jdbc20-configuration-options.md#jdbc20-inifile-option). If the `IniFile` parameter incorrectly specifies the location of the .ini file, an error displays.
+ An environment variable such as AMAZON\_REDSHIFT\_JDBC\_INI\_FILE that contains the full path, including the file name. You can use `rsjdbc.ini` or specify a different file name. If the AMAZON\_REDSHIFT\_JDBC\_INI\_FILE environment variable incorrectly specifies the location of the .ini file, an error displays.
+ Directory where the driver JAR file is located.
+ User home directory.
+ Temp directory of the system.

You can organize the .ini file into sections, for example [DRIVER]. Each section contains key-value pairs that specify various connection parameters. You can use the `IniSection` parameter to specify a section in the .ini file. For information about the `IniSection` parameter, see [IniSection](jdbc20-configuration-options.md#jdbc20-inisection-option). 

Following is an example of the .ini file format, with sections for [DRIVER], [DEV], [QA], and [PROD]. The [DRIVER] section can apply to any connection.

```
[DRIVER]
key1=val1
key2=val2

[DEV]
key1=val1
key2=val2

[QA]
key1=val1
key2=val2

[PROD]
key1=val1
key2=val2
```
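A minimal sketch of reading such a file into per-section key-value maps follows; it assumes only `[section]` headers and `key=value` lines, and does not handle comments:

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class IniParser {
    // Parse "[section]" headers and "key=value" pairs into nested maps.
    // Blank lines are skipped; comments are not handled in this sketch.
    static Map<String, Map<String, String>> parse(String text) {
        Map<String, Map<String, String>> sections = new LinkedHashMap<>();
        Map<String, String> current = null;
        for (String raw : text.split("\\R")) {
            String line = raw.trim();
            if (line.isEmpty()) continue;
            if (line.startsWith("[") && line.endsWith("]")) {
                current = new HashMap<>();
                sections.put(line.substring(1, line.length() - 1), current);
            } else if (current != null && line.contains("=")) {
                String[] kv = line.split("=", 2);
                current.put(kv[0].trim(), kv[1].trim());
            }
        }
        return sections;
    }
}
```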

The JDBC driver version 2.x loads configuration parameters from the following locations, listed in order of precedence:
+ Default configuration parameters in the application code.
+ [DRIVER] section properties from the .ini file, if included.
+ Custom section configuration parameters, if the `IniSection` option is provided in the connection URL or in the connection property dialog box of the SQL client.
+ Properties from the connection property object specified in the `getConnection` call.
+ Configuration parameters specified in the connection URL.
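This layering can be sketched as successive map merges, with later sources overriding keys set by earlier ones; the driver's actual merge logic may differ in detail:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ParamPrecedence {
    // Merge parameter sources in order of increasing precedence:
    // each later map overrides keys already set by earlier ones.
    @SafeVarargs
    static Map<String, String> resolve(Map<String, String>... sources) {
        Map<String, String> merged = new LinkedHashMap<>();
        for (Map<String, String> source : sources) {
            merged.putAll(source);
        }
        return merged;
    }
}
```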

# Options for JDBC driver version 2.x configuration
<a name="jdbc20-configuration-options"></a>

Following, you can find descriptions for the options that you can specify for version 2.2 of the Amazon Redshift JDBC driver. Configuration options are not case sensitive.

You can set configuration properties using the connection URL. For more information, see [Building the connection URL](jdbc20-build-connection-url.md).

**Topics**
+ [AccessKeyID](#jdbc20-accesskeyid-option)
+ [AllowDBUserOverride](#jdbc20-allowdbuseroverride-option)
+ [App\_ID](#jdbc20-app-id-option)
+ [App\_Name](#jdbc20-app-name-option)
+ [ApplicationName](#jdbc20-applicationname-option)
+ [AuthProfile](#jdbc20-authprofile-option)
+ [AutoCreate](#jdbc20-autocreate-option)
+ [Client\_ID](#jdbc20-client_id-option)
+ [Client\_Secret](#jdbc20-client_secret-option)
+ [ClusterID](#jdbc20-clusterid-option)
+ [Compression](#jdbc20-compression-option)
+ [connectTimeout](#jdbc20-connecttimeout-option)
+ [connectionTimezone](#jdbc20-connecttimezone-option)
+ [databaseMetadataCurrentDbOnly](#jdbc20-databasemetadatacurrentdbonly-option)
+ [DbUser](#jdbc20-dbuser-option)
+ [DbGroups](#jdbc20-dbgroups-option)
+ [DBNAME](#jdbc20-dbname-option)
+ [defaultRowFetchSize](#jdbc20-defaultrowfetchsize-option)
+ [DisableIsValidQuery](#jdbc20-disableisvalidquery-option)
+ [enableFetchRingBuffer](#jdbc20-enablefetchringbuffer-option)
+ [enableMultiSqlSupport](#jdbc20-enablemultisqlsupport-option)
+ [fetchRingBufferSize](#jdbc20-fetchringbuffersize-option)
+ [ForceLowercase](#jdbc20-forcelowercase-option)
+ [groupFederation](#jdbc20-groupFederation-option)
+ [HOST](#jdbc20-host-option)
+ [IAMDisableCache](#jdbc20-iamdisablecache-option)
+ [IAMDuration](#jdbc20-iamduration-option)
+ [Idc\_Client\_Display\_Name](#jdbc20-idc_client_display_name)
+ [Idc\_Region](#jdbc20-idc_region)
+ [IdP\_Host](#jdbc20-idp_host-option)
+ [IdP\_Partition](#jdbc20-idp_partition-option)
+ [IdP\_Port](#jdbc20-idp_port-option)
+ [IdP\_Tenant](#jdbc20-idp_tenant-option)
+ [IdP\_Response\_Timeout](#jdbc20-idp_response_timeout-option)
+ [IniFile](#jdbc20-inifile-option)
+ [IniSection](#jdbc20-inisection-option)
+ [isServerless](#jdbc20-isserverless-option)
+ [Issuer\_Url](#jdbc20-issuer-url)
+ [Listen\_Port](#jdbc20-listen-port)
+ [Login\_URL](#jdbc20-login_url-option)
+ [loginTimeout](#jdbc20-logintimeout-option)
+ [loginToRp](#jdbc20-logintorp-option)
+ [LogLevel](#jdbc20-loglevel-option)
+ [LogPath](#jdbc20-logpath-option)
+ [OverrideSchemaPatternType](#jdbc20-override-schema-pattern-type)
+ [Partner\_SPID](#jdbc20-partner_spid-option)
+ [Password](#jdbc20-password-option)
+ [Plugin\_Name](#jdbc20-plugin_name-option)
+ [PORT](#jdbc20-port-option)
+ [Preferred\_Role](#jdbc20-preferred_role-option)
+ [Profile](#jdbc20-profile-option)
+ [PWD](#jdbc20-pwd-option)
+ [queryGroup](#jdbc20-querygroup-option)
+ [readOnly](#jdbc20-readonly-option)
+ [Region](#jdbc20-region-option)
+ [reWriteBatchedInserts](#jdbc20-rewritebatchedinserts-option)
+ [reWriteBatchedInsertsSize](#jdbc20-rewritebatchedinsertssize-option)
+ [roleArn](#jdbc20-rolearn-option)
+ [roleSessionName](#jdbc20-roleaessionname-option)
+ [scope](#jdbc20-scope-option)
+ [SecretAccessKey](#jdbc20-secretaccesskey-option)
+ [SessionToken](#jdbc20-sessiontoken-option)
+ [serverlessAcctId](#jdbc20-serverlessacctid-option)
+ [serverlessWorkGroup](#jdbc20-serverlessworkgroup-option)
+ [socketFactory](#jdbc20-socketfactory-option)
+ [socketTimeout](#jdbc20-sockettimeout-option)
+ [SSL](#jdbc20-ssl-option)
+ [SSL\_Insecure](#jdbc20-ssl_insecure-option)
+ [SSLCert](#jdbc20-sslcert-option)
+ [SSLFactory](#jdbc20-sslfactory-option)
+ [SSLKey](#jdbc20-sslkey-option)
+ [SSLMode](#jdbc20-sslmode-option)
+ [SSLPassword](#jdbc20-sslpassword-option)
+ [SSLRootCert](#jdbc20-sslrootcert-option)
+ [StsEndpointUrl](#jdbc20-stsendpointurl-option)
+ [tcpKeepAlive](#jdbc20-tcpkeepalive-option)
+ [token](#jdbc20-token-option)
+ [token\_type](#jdbc20-token-type-option)
+ [UID](#jdbc20-uid-option)
+ [User](#jdbc20-user-option)
+ [webIdentityToken](#jdbc20-webidentitytoken-option)

## AccessKeyID
<a name="jdbc20-accesskeyid-option"></a>
+ **Default Value** – None
+ **Data Type** – String

You can specify this parameter to enter the IAM access key for the user or role. You can usually locate the key by looking at an existing connection string or user profile. If you specify this parameter, you must also specify the `SecretAccessKey` parameter. If passed in the JDBC URL, AccessKeyID must be URL encoded.

This parameter is optional.
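URL encoding can be done with the standard library before the key is embedded in the connection URL; in this sketch the value is a made-up placeholder, not a real credential:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class KeyEncoder {
    // URL-encode a value before embedding it in the JDBC URL, so
    // characters such as '+' and '/' survive URL parsing.
    static String encode(String value) {
        return URLEncoder.encode(value, StandardCharsets.UTF_8);
    }
}
```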

## AllowDBUserOverride
<a name="jdbc20-allowdbuseroverride-option"></a>
+ **Default Value** – 0
+ **Data Type** – String

This option specifies whether the driver uses the `DbUser` value from the SAML assertion or the value that is specified in the `DbUser` connection property in the connection URL. 

This parameter is optional.

**1**  
The driver uses the `DbUser` value from the SAML assertion.  
If the SAML assertion doesn't specify a value for `DBUser`, the driver uses the value specified in the `DBUser` connection property. If the connection property also doesn't specify a value, the driver uses the value specified in the connection profile.

**0**  
The driver uses the `DBUser` value specified in the `DBUser` connection property.  
If the `DBUser` connection property doesn't specify a value, the driver uses the value specified in the connection profile. If the connection profile also doesn't specify a value, the driver uses the value from the SAML assertion.

## App\_ID
<a name="jdbc20-app-id-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The Okta-provided unique ID associated with your Amazon Redshift application. 

This parameter is required if authenticating through the Okta service.

## App\_Name
<a name="jdbc20-app-name-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The name of the Okta application that you use to authenticate the connection to Amazon Redshift. 

This parameter is optional.

## ApplicationName
<a name="jdbc20-applicationname-option"></a>
+ **Default Value** – null
+ **Data Type** – String

The name of the application to pass to Amazon Redshift for audit purposes. 

This parameter is optional.

## AuthProfile
<a name="jdbc20-authprofile-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The name of the authentication profile to use for connecting to Amazon Redshift. 

This parameter is optional.

## AutoCreate
<a name="jdbc20-autocreate-option"></a>
+ **Default Value** – false
+ **Data Type** – Boolean

This option specifies whether the driver causes a new user to be created when the specified user doesn't exist. 

This parameter is optional.

**true**  
If the user specified by either `DBUser` or unique ID (UID) doesn't exist, a new user with that name is created.

**false**  
The driver doesn't cause new users to be created. If the specified user doesn't exist, the authentication fails.

## Client\_ID
<a name="jdbc20-client_id-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The client ID to use when authenticating the connection using the Azure AD service. 

This parameter is required if authenticating through the Azure AD service.

## Client\_Secret
<a name="jdbc20-client_secret-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The Client Secret to use when authenticating the connection using the Azure AD service. 

This parameter is required if authenticating through the Azure AD service.

## ClusterID
<a name="jdbc20-clusterid-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The name of the Amazon Redshift cluster that you want to connect to. The driver attempts to detect this parameter from the given host. If you're using a Network Load Balancer (NLB) and connecting via IAM, the driver will fail to detect it, so you can set it using this connection option. 

This parameter is optional.

## Compression
<a name="jdbc20-compression-option"></a>
+ **Default Value** – off
+ **Data Type** – String

The compression method used for wire protocol communication between the Amazon Redshift server and the client or driver.

This parameter is optional.

You can specify the following values:
+ **lz4**

  Sets the compression method used for wire protocol communication with Amazon Redshift to lz4.
+ **off**

  Doesn't use compression for wire protocol communication with Amazon Redshift.

## connectTimeout
<a name="jdbc20-connecttimeout-option"></a>
+ **Default Value** – 10
+ **Data Type** – Integer

The timeout value to use for socket connect operations. If the time required to establish an Amazon Redshift connection exceeds this value, the connection is considered unavailable. The timeout is specified in seconds. A value of 0 means that no timeout is specified.

This parameter is optional.

## connectionTimezone
<a name="jdbc20-connecttimezone-option"></a>
+ **Default Value** – LOCAL
+ **Data Type** – String

The session level timezone.

This parameter is optional.

You can specify the following values:

**LOCAL**  
Configures the session level timezone to the LOCAL JVM timezone.

**SERVER**  
Configures the session level timezone to the timezone set for the user on the Amazon Redshift server. You can configure session level timezones for users with the following command:  

```
ALTER USER
[...]
SET TIMEZONE TO [...];
```

## databaseMetadataCurrentDbOnly
<a name="jdbc20-databasemetadatacurrentdbonly-option"></a>
+ **Default Value** – true
+ **Data Type** – Boolean

This option specifies whether the metadata API retrieves data from all accessible databases or only from the connected database. 

This parameter is optional.

You can specify the following values:

**true**  
The application retrieves metadata from a single database.

**false**  
The application retrieves metadata from all accessible databases.

## DbUser
<a name="jdbc20-dbuser-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The user ID to use with your Amazon Redshift account. You can use an ID that doesn't currently exist if you have enabled the AutoCreate property. 

This parameter is optional.

## DbGroups
<a name="jdbc20-dbgroups-option"></a>
+ **Default Value** – PUBLIC
+ **Data Type** – String

A comma-separated list of existing database group names that `DBUser` joins for the current session. 

This parameter is optional.

## DBNAME
<a name="jdbc20-dbname-option"></a>
+ **Default Value** – null
+ **Data Type** – String

The name of the database to connect to. You can use this option to specify the database name in the JDBC connection URL. 

This parameter is required. You must specify the database name, either in the connection URL or in the connection properties of the client application.

## defaultRowFetchSize
<a name="jdbc20-defaultrowfetchsize-option"></a>
+ **Default Value** – 0
+ **Data Type** – Integer

This option specifies a default value for getFetchSize. 

This parameter is optional.

You can specify the following values:

**0**  
Fetch all rows in a single operation.

**Positive integer**  
Number of rows to fetch from the database for each fetch iteration of the ResultSet.

## DisableIsValidQuery
<a name="jdbc20-disableisvalidquery-option"></a>
+ **Default Value** – False
+ **Data Type** – Boolean

This option specifies whether the driver submits a new database query when using the Connection.isValid() method to determine whether the database connection is active. 

This parameter is optional.

**true**  
The driver doesn't submit a query when using Connection.isValid() to determine whether the database connection is active. This may cause the driver to incorrectly identify the database connection as active if the database server has shut down unexpectedly.

**false**  
The driver submits a query when using Connection.isValid() to determine whether the database connection is active.

## enableFetchRingBuffer
<a name="jdbc20-enablefetchringbuffer-option"></a>
+ **Default Value** – true
+ **Data Type** – Boolean

This option specifies that the driver fetches rows using a ring buffer on a separate thread. The fetchRingBufferSize parameter specifies the ring buffer size. 

The ring buffer implements automatic memory management in JDBC to prevent out-of-memory (OOM) errors during data retrieval operations. The ring buffer monitors the actual size of buffered data in real-time, ensuring total memory usage by the driver stays within defined limits. When buffer capacity is reached, the driver pauses data fetching operations, preventing memory overflow without requiring manual intervention. This built-in safeguard eliminates OOM errors automatically, with no configuration needed from users.

If a transaction contains a Statement with multiple SQL commands separated by semicolons, the fetch ring buffer is disabled for that transaction. The value of enableFetchRingBuffer itself doesn't change.

This parameter is optional.

**Note**  
When the ring buffer is disabled and the fetch size is not properly configured, out-of-memory (OOM) issues may occur. For more information about configuring fetch size, see [here](https://docs.aws.amazon.com/redshift/latest/dg/set-the-JDBC-fetch-size-parameter.html).

## enableMultiSqlSupport
<a name="jdbc20-enablemultisqlsupport-option"></a>
+ **Default Value** – true
+ **Data Type** – Boolean

This option specifies whether to process multiple SQL commands separated by semicolons in a Statement. 

This parameter is optional.

You can specify the following values:

**true**  
The driver processes multiple SQL commands, separated by semicolons, in a Statement object.

**false**  
The driver returns an error for multiple SQL commands in a single Statement.

## fetchRingBufferSize
<a name="jdbc20-fetchringbuffersize-option"></a>
+ **Default Value** – 1G
+ **Data Type** – String

This option specifies the size of the ring buffer used while fetching the result set. You can specify a size in bytes, for example 1K for 1 KB, 5000 for 5,000 bytes, 1M for 1 MB, 1G for 1 GB, and so on. You can also specify a percentage of heap memory. The driver stops fetching rows upon reaching the limit. Fetching resumes when the application reads rows and frees space in the ring buffer. 

This parameter is optional.
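A sketch of how such size strings might be interpreted follows; the suffix rules follow the description above under the assumption of binary multiples (1K = 1,024 bytes), and the percentage form is omitted for brevity:

```java
public class RingBufferSize {
    // Parse "1K", "1M", "1G", or a plain byte count into bytes, per the
    // suffix convention described above. Percentages are not handled.
    static long toBytes(String size) {
        String s = size.trim().toUpperCase();
        char suffix = s.charAt(s.length() - 1);
        switch (suffix) {
            case 'K': return Long.parseLong(s.substring(0, s.length() - 1)) * 1024L;
            case 'M': return Long.parseLong(s.substring(0, s.length() - 1)) * 1024L * 1024L;
            case 'G': return Long.parseLong(s.substring(0, s.length() - 1)) * 1024L * 1024L * 1024L;
            default:  return Long.parseLong(s);
        }
    }
}
```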

## ForceLowercase
<a name="jdbc20-forcelowercase-option"></a>
+ **Default Value** – false
+ **Data Type** – Boolean

This option specifies whether the driver lowercases all database groups (DbGroups) sent from the identity provider to Amazon Redshift when using single sign-on authentication. 

This parameter is optional.

**true**  
The driver lowercases all database groups that are sent from the identity provider.

**false**  
The driver doesn't alter database groups.

## groupFederation
<a name="jdbc20-groupFederation-option"></a>
+ **Default Value** – false
+ **Data Type** – Boolean

This option specifies whether to use Amazon Redshift IDP groups. This is supported by the GetClusterCredentialsV2 API. 

This parameter is optional.

**true**  
Use Amazon Redshift Identity Provider (IDP) groups.

**false**  
Use STS API and GetClusterCredentials for user federation and explicitly specify DbGroups for the connection.

## HOST
<a name="jdbc20-host-option"></a>
+ **Default Value** – null
+ **Data Type** – String

The host name of the Amazon Redshift server to connect to. You can use this option to specify the host name in the JDBC connection URL. 

This parameter is required. You must specify the host name, either in the connection URL or in the connection properties of the client application.

## IAMDisableCache
<a name="jdbc20-iamdisablecache-option"></a>
+ **Default Value** – false
+ **Data Type** – Boolean

This option specifies whether the IAM credentials are cached.

This parameter is optional.

**true**  
The IAM credentials aren't cached.

**false**  
The IAM credentials are cached. This improves performance when requests to the API gateway are throttled, for instance.

## IAMDuration
<a name="jdbc20-iamduration-option"></a>
+ **Default Value** – 900
+ **Data Type** – Integer

The length of time, in seconds, until the temporary IAM credentials expire. 
+ **Minimum value** – 900
+ **Maximum value** – 3,600

This parameter is optional.

## Idc\_Client\_Display\_Name
<a name="jdbc20-idc_client_display_name"></a>
+ **Default Value** – Amazon Redshift JDBC driver
+ **Data Type** – String

The display name to be used for the client that's using BrowserIdcAuthPlugin.

This parameter is optional.

## Idc\_Region
<a name="jdbc20-idc_region"></a>
+ **Default Value** – None
+ **Data Type** – String

The AWS region where the IAM Identity Center instance is located.

This parameter is required only when authenticating using `BrowserIdcAuthPlugin` in the plugin\_name configuration option.

## IdP\_Host
<a name="jdbc20-idp_host-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The IdP (identity provider) host you are using to authenticate into Amazon Redshift. This can be specified in either the connection string or in a profile. 

This parameter is optional.

## IdP\_Partition
<a name="jdbc20-idp_partition-option"></a>
+ **Default Value** – None
+ **Data Type** – String

Specifies the cloud partition where your identity provider (IdP) is configured. This determines which IdP authentication endpoint the driver connects to.

If this parameter is left blank, the driver defaults to the commercial partition. Possible values are:
+  `us-gov`: Use this value if your IdP is configured in Azure Government. For example, Azure AD Government uses the endpoint `login.microsoftonline.us`.
+  `cn`: Use this value if your IdP is configured in the China cloud partition. For example, Azure AD China uses the endpoint `login.chinacloudapi.cn`. 

This parameter is optional.

## IdP\_Port
<a name="jdbc20-idp_port-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The port used by an IdP (identity provider). You can specify the port in either the connection string or in a profile. The default port is 5439. Depending on the port you selected when creating, modifying or migrating the cluster, allow access to the selected port. 

This parameter is optional.

## IdP\_Tenant
<a name="jdbc20-idp_tenant-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The Azure AD tenant ID for your Amazon Redshift application. 

This parameter is required if authenticating through the Azure AD service.

## IdP\_Response\_Timeout
<a name="jdbc20-idp_response_timeout-option"></a>
+ **Default Value** – 120
+ **Data Type** – Integer

The amount of time, in seconds, that the driver waits for the SAML response from the identity provider when using the SAML or Azure AD services through a browser plugin. 

This parameter is optional.

## IniFile
<a name="jdbc20-inifile-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The full path of the .ini file, including file name. For example:

```
IniFile="C:\tools\rsjdbc.ini"
```

For information about the .ini file, see [Creating initialization (.ini) files for JDBC driver version 2.x](jdbc20-ini-file.md).

This parameter is optional.

## IniSection
<a name="jdbc20-inisection-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The name of a section in the .ini file containing the configuration options. For information about the .ini file, see [Creating initialization (.ini) files for JDBC driver version 2.x](jdbc20-ini-file.md). 

The following example specifies the [Prod] section of the .ini file:

```
IniSection="Prod"
```

This parameter is optional.

## isServerless
<a name="jdbc20-isserverless-option"></a>
+ **Default Value** – false
+ **Data Type** – Boolean

This option specifies whether the Amazon Redshift endpoint host is a serverless instance. The driver attempts to detect this parameter from the given host. If you're using a Network Load Balancer (NLB), the driver will fail to detect it, so you can set it here. 

This parameter is optional.

**true**  
The Amazon Redshift endpoint host is a serverless instance.

**false**  
The Amazon Redshift endpoint host is a provisioned cluster.

## Issuer\_Url
<a name="jdbc20-issuer-url"></a>
+ **Default Value** – None
+ **Data Type** – String

Points to the AWS IAM Identity Center server's instance endpoint. 

This parameter is required only when authenticating using `BrowserIdcAuthPlugin` in the plugin\_name configuration option.

## Listen\_Port
<a name="jdbc20-listen-port"></a>
+ **Default Value** – 7890
+ **Data Type** – Integer

The port that the driver uses to receive the SAML response from the identity provider or authorization code when using SAML, Azure AD, or AWS Identity Center services through a browser plugin.

This parameter is optional.

## Login\_URL
<a name="jdbc20-login_url-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The URL for the resource on the identity provider's website when using the SAML or Azure AD services through a browser plugin. 

This parameter is required if authenticating with the SAML or Azure AD services through a browser plugin.

## loginTimeout
<a name="jdbc20-logintimeout-option"></a>
+ **Default Value** – 0
+ **Data Type** – Integer

The number of seconds to wait before timing out when connecting and authenticating to the server. If establishing the connection takes longer than this threshold, then the connection is aborted. 

When this property is set to 0, connections don't time out.

This parameter is optional.

## loginToRp
<a name="jdbc20-logintorp-option"></a>
+ **Default Value** – `urn:amazon:webservices`
+ **Data Type** – String

The relying party trust that you want to use for the AD FS authentication type. 

This parameter is optional.

## LogLevel
<a name="jdbc20-loglevel-option"></a>
+ **Default Value** – 0
+ **Data Type** – Integer

Use this property to turn on or turn off logging in the driver and to specify the amount of detail included in log files. 

Enable logging only long enough to capture an issue. Logging decreases performance and can consume a large quantity of disk space.

This parameter is optional.

Set the parameter to one of the following values:

**0**  
Disable all logging.

**1**  
Enable logging on the FATAL level, which logs very severe error events that will lead the driver to abort.

**2**  
Enable logging on the ERROR level, which logs error events that might still allow the driver to continue running.

**3**  
Enable logging on the WARNING level, which logs events that might result in an error if action is not taken.

**4**  
Enable logging on the INFO level, which logs general information that describes the progress of the driver.

**5**  
Enable logging on the DEBUG level, which logs detailed information that is useful for debugging the driver.

**6**  
Enable logging on the TRACE level, which logs all driver activity.

When logging is enabled, the driver produces the following log files in the location specified in the `LogPath` property:
+ **`redshift_jdbc.log`** – File that logs driver activity that is not specific to a connection.
+ **`redshift_jdbc_connection_[Number].log`** – File for each connection made to the database, where `[Number]` is a number that distinguishes each log file from the others. This file logs driver activity that is specific to the connection. 

If the LogPath value is invalid, the driver sends the logged information to the standard output stream, `System.out`.
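
For example, a connection URL along the following lines turns on DEBUG-level logging; the cluster endpoint and log folder are hypothetical placeholders:

```
jdbc:redshift://examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com:5439/dev?LogLevel=5&LogPath=/tmp/redshift-logs
```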

## LogPath
<a name="jdbc20-logpath-option"></a>
+ **Default Value** – The current working directory.
+ **Data Type** – String

The full path to the folder where the driver saves log files when logging is enabled with the LogLevel property. 

To be sure that the connection URL is compatible with all JDBC applications, we recommend that you escape the backslashes (\\) in your file path by typing another backslash.

This parameter is optional.

## OverrideSchemaPatternType
<a name="jdbc20-override-schema-pattern-type"></a>
+ **Default Value** – null
+ **Data Type** – Integer

This option specifies whether to override the type of query used in getTables calls.

**0**  
No Schema Universal Query

**1**  
Local Schema Query

**2**  
External Schema Query

This parameter is optional.

## Partner_SPID
<a name="jdbc20-partner_spid-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The partner SPID (service provider ID) value to use when authenticating the connection using the PingFederate service. 

This parameter is optional.

## Password
<a name="jdbc20-password-option"></a>
+ **Default Value** – None
+ **Data Type** – String

When connecting using IAM authentication through an IdP, this is the password for the IdP_Host server. When using standard authentication, this can be used for the Amazon Redshift database password instead of PWD. 

This parameter is optional.

## Plugin_Name
<a name="jdbc20-plugin_name-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The fully qualified class name to implement a specific credentials provider plugin. 

This parameter is optional.

The following provider options are supported:
+ **`AdfsCredentialsProvider`** – Active Directory Federation Service.
+ **`AzureCredentialsProvider`** – Microsoft Azure Active Directory (AD) Service.
+ **`BasicJwtCredentialsProvider`** – JSON Web Tokens (JWT) Service.
+ **`BasicSamlCredentialsProvider`** – Security Assertion Markup Language (SAML) credentials which you can use with many SAML service providers.
+ **`BrowserAzureCredentialsProvider`** – Browser Microsoft Azure Active Directory (AD) Service.
+ **`BrowserAzureOAuth2CredentialsProvider`** – Browser Microsoft Azure Active Directory (AD) Service for Native Authentication.
+ **`BrowserIdcAuthPlugin`** – An authorization plugin using AWS IAM Identity Center.
+ **`BrowserSamlCredentialsProvider`** – Browser SAML for SAML services such as Okta, Ping, or ADFS.
+ **`IdpTokenAuthPlugin`** – An authorization plugin that accepts an AWS IAM Identity Center token or OpenID Connect (OIDC) JSON-based identity tokens (JWT) from any web identity provider linked to AWS IAM Identity Center.
+ **`OktaCredentialsProvider`** – Okta Service.
+ **`PingCredentialsProvider`** – PingFederate Service.
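
As an illustration, a connection URL can name one of these providers through the Plugin_Name property. The cluster endpoint below is a hypothetical placeholder, and the fully qualified class name assumes the driver's `com.amazon.redshift.plugin` package:

```
jdbc:redshift:iam://examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com:5439/dev?Plugin_Name=com.amazon.redshift.plugin.BrowserSamlCredentialsProvider
```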

## PORT
<a name="jdbc20-port-option"></a>
+ **Default Value** – null
+ **Data Type** – Integer

The port of the Amazon Redshift server to connect to. You can use this option to specify the port in the JDBC connection URL. 

This parameter is optional.

## Preferred_Role
<a name="jdbc20-preferred_role-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The IAM role that you want to assume during the connection to Amazon Redshift. 

This parameter is optional.

## Profile
<a name="jdbc20-profile-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The name of the profile to use for IAM authentication. This profile contains any additional connection properties not specified in the connection string. 

This parameter is optional.

## PWD
<a name="jdbc20-pwd-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The password corresponding to the Amazon Redshift username that you provided using the property UID. 

This parameter is optional.

## queryGroup
<a name="jdbc20-querygroup-option"></a>
+ **Default Value** – null
+ **Data Type** – String

This option assigns a query to a queue at runtime by assigning your query to the appropriate query group. The query group is set for the session. All queries that run on the connection belong to this query group. 

This parameter is optional.

## readOnly
<a name="jdbc20-readonly-option"></a>
+ **Default Value** – false
+ **Data Type** – Boolean

This property specifies whether the driver is in read-only mode. 

This parameter is optional.

**true**  
The connection is in read-only mode and cannot write to the data store.

**false**  
The connection is not in read-only mode and can write to the data store.

## Region
<a name="jdbc20-region-option"></a>
+ **Default Value** – null
+ **Data Type** – String

This option specifies the AWS Region where the cluster is located. If you specify the StsEndPoint option, the Region option is ignored. The Redshift `GetClusterCredentials` API operation also uses the Region option. 

This parameter is optional.

## reWriteBatchedInserts
<a name="jdbc20-rewritebatchedinserts-option"></a>
+ **Default Value** – false
+ **Data Type** – Boolean

This option enables optimization to rewrite and combine compatible INSERT statements into batches. 

This parameter is optional.

## reWriteBatchedInsertsSize
<a name="jdbc20-rewritebatchedinsertssize-option"></a>
+ **Default Value** – 128
+ **Data Type** – Integer

This option specifies the batch size that the driver uses when the reWriteBatchedInserts optimization is enabled. The value must be a power of 2. 

This parameter is optional.

## roleArn
<a name="jdbc20-rolearn-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The Amazon Resource Name (ARN) of the role. Make sure to specify this parameter when you specify BasicJwtCredentialsProvider for the Plugin_Name option. You specify the ARN in the following format: 

`arn:partition:service:region:account-id:resource-id`

This parameter is required if you specify BasicJwtCredentialsProvider for the Plugin_Name option.

## roleSessionName
<a name="jdbc20-roleaessionname-option"></a>
+ **Default Value** – jwt_redshift_session
+ **Data Type** – String

An identifier for the assumed role session. Typically, you pass the name or identifier that is associated with the user of your application. The temporary security credentials that your application uses are associated with that user. You can specify this parameter when you specify BasicJwtCredentialsProvider for the Plugin_Name option. 

This parameter is optional.

## scope
<a name="jdbc20-scope-option"></a>
+ **Default Value** – None
+ **Data Type** – String

A space-separated list of scopes to which the user can consent. You specify this parameter so that your Microsoft Azure application can get consent for APIs that you want to call. You can specify this parameter when you specify BrowserAzureOAuth2CredentialsProvider for the Plugin_Name option. 

This parameter is required for the BrowserAzureOAuth2CredentialsProvider plug-in.

## SecretAccessKey
<a name="jdbc20-secretaccesskey-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The IAM access key for the user or role. If this is specified, then AccessKeyID must also be specified. If passed in the JDBC URL, SecretAccessKey must be URL encoded. 

This parameter is optional.
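
If you build the connection URL yourself, you can URL encode the secret with a standard library call. In this sketch, the key pair is the non-functional example pair from the AWS documentation, and the cluster endpoint is a hypothetical placeholder:

```python
from urllib.parse import quote

# Example (non-functional) credentials from the AWS documentation.
access_key_id = "AKIAIOSFODNN7EXAMPLE"
secret_access_key = "wJalrXUtnFIEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"

# Percent-encode every reserved character, including "/" and "+",
# so the secret survives unchanged inside the URL.
encoded_secret = quote(secret_access_key, safe="")

url = (
    "jdbc:redshift:iam://examplecluster.abc123xyz789.us-west-2"
    ".redshift.amazonaws.com:5439/dev"
    f"?AccessKeyID={access_key_id}&SecretAccessKey={encoded_secret}"
)
```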

## SessionToken
<a name="jdbc20-sessiontoken-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The temporary IAM session token associated with the IAM role you are using to authenticate. If passed in the JDBC URL, the temporary IAM session token must be URL encoded. 

This parameter is optional.

## serverlessAcctId
<a name="jdbc20-serverlessacctid-option"></a>
+ **Default Value** – null
+ **Data Type** – String

The Amazon Redshift Serverless account ID. The driver attempts to detect this parameter from the given host. If you're using a Network Load Balancer (NLB), the driver will fail to detect it, so you can set it here. 

This parameter is optional.

## serverlessWorkGroup
<a name="jdbc20-serverlessworkgroup-option"></a>
+ **Default Value** – null
+ **Data Type** – String

The Amazon Redshift Serverless workgroup name. The driver attempts to detect this parameter from the given host. If you're using a Network Load Balancer (NLB), the driver will fail to detect it, so you can set it here. 

This parameter is optional.

## socketFactory
<a name="jdbc20-socketfactory-option"></a>
+ **Default Value** – null
+ **Data Type** – String

This option specifies a socket factory for socket creation. 

This parameter is optional.

## socketTimeout
<a name="jdbc20-sockettimeout-option"></a>
+ **Default Value** – 0
+ **Data Type** – Integer

The number of seconds to wait during socket read operations before timing out. If the operation takes longer than this threshold, then the connection is closed. When this property is set to 0, the connection doesn't time out. 

This parameter is optional.

## SSL
<a name="jdbc20-ssl-option"></a>
+ **Default Value** – TRUE
+ **Data Type** – String

Use this property to turn on or turn off SSL for the connection. 

This parameter is optional.

You can specify the following values:

**TRUE**  
The driver connects to the server through SSL.

**FALSE**  
The driver connects to the server without using SSL. This option is not supported with IAM authentication.

Alternatively, you can configure the AuthMech property.

## SSL_Insecure
<a name="jdbc20-ssl_insecure-option"></a>
+ **Default Value** – true
+ **Data Type** – String

This property indicates whether the IdP host's server certificate should be verified.

This parameter is optional.

You can specify the following values:

**true**  
The driver doesn't check the authenticity of the IDP server certificate.

**false**  
The driver checks the authenticity of the IDP server certificate.

## SSLCert
<a name="jdbc20-sslcert-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The full path of a .pem or .crt file containing additional trusted CA certificates for verifying the Amazon Redshift server instance when using SSL. 

This parameter is required if SSLKey is specified.

## SSLFactory
<a name="jdbc20-sslfactory-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The SSL factory to use when connecting to the server through TLS/SSL without using a server certificate. 

## SSLKey
<a name="jdbc20-sslkey-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The full path of the .der file containing the PKCS8 key file for verifying the certificates specified in SSLCert. 

This parameter is required if SSLCert is specified.

## SSLMode
<a name="jdbc20-sslmode-option"></a>
+ **Default Value** – verify-ca
+ **Data Type** – String

Use this property to specify how the driver validates certificates when TLS/SSL is enabled. 

This parameter is optional.

You can specify the following values:

**verify-ca**  
The driver verifies that the certificate comes from a trusted certificate authority (CA).

**verify-full**  
The driver verifies that the certificate comes from a trusted CA and that the host name in the certificate matches the host name specified in the connection URL.

## SSLPassword
<a name="jdbc20-sslpassword-option"></a>
+ **Default Value** – 0
+ **Data Type** – String

The password for the encrypted key file specified in SSLKey. 

This parameter is required if SSLKey is specified and the key file is encrypted.

## SSLRootCert
<a name="jdbc20-sslrootcert-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The full path of a .pem or .crt file containing the root CA certificate for verifying the Amazon Redshift Server instance when using SSL. 
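
For example, a URL that requires full certificate verification against a custom root CA might look like the following; the cluster endpoint and certificate path are hypothetical placeholders:

```
jdbc:redshift://examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com:5439/dev?SSL=true&SSLMode=verify-full&SSLRootCert=/home/user/root.crt
```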

## StsEndpointUrl
<a name="jdbc20-stsendpointurl-option"></a>
+ **Default Value** – null
+ **Data Type** – String

You can specify an AWS Security Token Service (AWS STS) endpoint. If you specify this option, the Region option is ignored. You can only specify a secure protocol (HTTPS) for this endpoint. 

## tcpKeepAlive
<a name="jdbc20-tcpkeepalive-option"></a>
+ **Default Value** – TRUE
+ **Data Type** – String

Use this property to turn on or turn off TCP keepalives. 

This parameter is optional.

You can specify the following values:

**TRUE**  
The driver uses TCP keepalives to prevent connections from timing out.

**FALSE**  
The driver doesn't use TCP keepalives.

## token
<a name="jdbc20-token-option"></a>
+ **Default Value** – None
+ **Data Type** – String

An AWS IAM Identity Center provided access token or an OpenID Connect (OIDC) JSON Web Token (JWT) provided by a web identity provider that's linked with AWS IAM Identity Center. Your application must generate this token by authenticating the user of your application with AWS IAM Identity Center or an identity provider linked with AWS IAM Identity Center. 

This parameter works with `IdpTokenAuthPlugin`.

## token_type
<a name="jdbc20-token-type-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The type of token that is being used in `IdpTokenAuthPlugin`.

You can specify the following values:

**ACCESS_TOKEN**  
Enter this if you use an AWS IAM Identity Center provided access token.

**EXT_JWT**  
Enter this if you use an OpenID Connect (OIDC) JSON Web Token (JWT) provided by a web-based identity provider that's integrated with AWS IAM Identity Center.

This parameter works with `IdpTokenAuthPlugin`.

## UID
<a name="jdbc20-uid-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The database username that you use to access the database.

This parameter is required.

## User
<a name="jdbc20-user-option"></a>
+ **Default Value** – None
+ **Data Type** – String

When connecting using IAM authentication through an IdP, this is the username for the idp_host server. When using standard authentication, this can be used for the Amazon Redshift database username. 

This parameter is optional.

## webIdentityToken
<a name="jdbc20-webidentitytoken-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The OAuth 2.1 access token or OpenID Connect ID token that is provided by the identity provider. Your application must get this token by authenticating the user of your application with a web identity provider. Make sure to specify this parameter when you specify BasicJwtCredentialsProvider for the Plugin_Name option. 

This parameter is required if you specify BasicJwtCredentialsProvider for the Plugin_Name option.

# Previous versions of JDBC driver version 2.x
<a name="jdbc20-previous-driver-version-20"></a>

Download a previous version of the Amazon Redshift JDBC driver version 2.x only if your tool requires a specific version of the driver. 

The following previous versions of the JDBC 4.2–compatible driver version 2.x are available:
+ [https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.2.4/redshift-jdbc42-2.2.4.zip](https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.2.4/redshift-jdbc42-2.2.4.zip) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.2.3/redshift-jdbc42-2.2.3.zip](https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.2.3/redshift-jdbc42-2.2.3.zip) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.2.2/redshift-jdbc42-2.2.2.zip](https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.2.2/redshift-jdbc42-2.2.2.zip) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.2.1/redshift-jdbc42-2.2.1.zip](https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.2.1/redshift-jdbc42-2.2.1.zip) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.2.0/redshift-jdbc42-2.2.0.zip](https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.2.0/redshift-jdbc42-2.2.0.zip) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.34/redshift-jdbc42-2.1.0.34.zip](https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.34/redshift-jdbc42-2.1.0.34.zip) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.33/redshift-jdbc42-2.1.0.33.zip](https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.33/redshift-jdbc42-2.1.0.33.zip) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.32/redshift-jdbc42-2.1.0.32.zip](https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.32/redshift-jdbc42-2.1.0.32.zip) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.30/redshift-jdbc42-2.1.0.30.zip](https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.30/redshift-jdbc42-2.1.0.30.zip) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.29/redshift-jdbc42-2.1.0.29.zip](https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.29/redshift-jdbc42-2.1.0.29.zip) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.28/redshift-jdbc42-2.1.0.28.zip](https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.28/redshift-jdbc42-2.1.0.28.zip) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.26/redshift-jdbc42-2.1.0.26.zip](https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.26/redshift-jdbc42-2.1.0.26.zip) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.25/redshift-jdbc42-2.1.0.25.zip](https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.25/redshift-jdbc42-2.1.0.25.zip) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.24/redshift-jdbc42-2.1.0.24.zip](https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.24/redshift-jdbc42-2.1.0.24.zip) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.23/redshift-jdbc42-2.1.0.23.zip](https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.23/redshift-jdbc42-2.1.0.23.zip) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.22/redshift-jdbc42-2.1.0.22.zip](https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.22/redshift-jdbc42-2.1.0.22.zip) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.21/redshift-jdbc42-2.1.0.21.zip](https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.21/redshift-jdbc42-2.1.0.21.zip) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.20/redshift-jdbc42-2.1.0.20.zip](https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.20/redshift-jdbc42-2.1.0.20.zip) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.19/redshift-jdbc42-2.1.0.19.zip](https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.19/redshift-jdbc42-2.1.0.19.zip) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.18/redshift-jdbc42-2.1.0.18.zip](https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.18/redshift-jdbc42-2.1.0.18.zip) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.17/redshift-jdbc42-2.1.0.17.zip](https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.17/redshift-jdbc42-2.1.0.17.zip) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.16/redshift-jdbc42-2.1.0.16.zip](https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.16/redshift-jdbc42-2.1.0.16.zip) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.15/redshift-jdbc42-2.1.0.15.zip](https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.15/redshift-jdbc42-2.1.0.15.zip) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.14/redshift-jdbc42-2.1.0.14.zip](https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.14/redshift-jdbc42-2.1.0.14.zip) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.13/redshift-jdbc42-2.1.0.13.zip](https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.13/redshift-jdbc42-2.1.0.13.zip) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.12/redshift-jdbc42-2.1.0.12.zip](https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.12/redshift-jdbc42-2.1.0.12.zip) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.11/redshift-jdbc42-2.1.0.11.zip](https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.11/redshift-jdbc42-2.1.0.11.zip) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.10/redshift-jdbc42-2.1.0.10.zip](https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.10/redshift-jdbc42-2.1.0.10.zip) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.9/redshift-jdbc42-2.1.0.9.zip](https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.9/redshift-jdbc42-2.1.0.9.zip) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.8/redshift-jdbc42-2.1.0.8.zip](https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.8/redshift-jdbc42-2.1.0.8.zip) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.7/redshift-jdbc42-2.1.0.7.zip](https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.7/redshift-jdbc42-2.1.0.7.zip) 

# Amazon Redshift Python connector
<a name="python-redshift-driver"></a>

By using the Amazon Redshift connector for Python, you can integrate your work with [the AWS SDK for Python (Boto3)](https://github.com/boto/boto3), as well as with pandas and Numerical Python (NumPy). For more information on pandas, see the [pandas GitHub repository](https://github.com/pandas-dev/pandas). For more information on NumPy, see the [NumPy GitHub repository](https://github.com/numpy/numpy). 

The Amazon Redshift Python connector provides an open source solution. You can browse the source code, request enhancements, report issues, and provide contributions. 

To use the Amazon Redshift Python connector, make sure that you have Python version 3.6 or later. For more information, see the [Amazon Redshift Python driver license agreement](https://github.com/aws/amazon-redshift-python-driver/blob/master/LICENSE). 

The Amazon Redshift Python connector provides the following:
+ AWS Identity and Access Management (IAM) authentication. For more information, see [Identity and access management in Amazon Redshift](redshift-iam-authentication-access-control.md).
+ Identity provider authentication using federated API access. Federated API access is supported for corporate identity providers such as the following:
  + Azure AD. For more information, see the AWS Big Data blog post [Federate Amazon Redshift access with Microsoft Azure AD single sign-on](https://aws.amazon.com/blogs/big-data/federate-amazon-redshift-access-with-microsoft-azure-ad-single-sign-on/).
  + Active Directory Federation Services. For more information, see the AWS Big Data blog post [Federate access to your Amazon Redshift cluster with Active Directory Federation Services (AD FS): Part 1](https://aws.amazon.com/blogs/big-data/federate-access-to-your-amazon-redshift-cluster-with-active-directory-federation-services-ad-fs-part-1/). 
  + Okta. For more information, see the AWS Big Data blog post [Federate Amazon Redshift access with Okta as an identity provider](https://aws.amazon.com/blogs/big-data/federate-amazon-redshift-access-with-okta-as-an-identity-provider/).
  + PingFederate. For more information, see the [PingFederate site](https://www.pingidentity.com/en/software/pingfederate.html).
  + JumpCloud. For more information, see the [JumpCloud site](https://jumpcloud.com/).
+ Amazon Redshift data types.

The Amazon Redshift Python connector implements Python Database API Specification 2.0. For more information, see [PEP 249—Python Database API Specification v2.0](https://www.python.org/dev/peps/pep-0249/) on the Python website.
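Because the connector follows the DB-API 2.0 pattern, a basic query takes the familiar connect/cursor/execute shape. The following is a minimal sketch; the endpoint and credentials are hypothetical placeholders:

```python
# Hypothetical connection parameters -- replace with your own cluster's values.
conn_params = {
    "host": "examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com",
    "database": "dev",
    "port": 5439,
    "user": "awsuser",
    "password": "my_password",
}

def run_query(sql):
    # Imported inside the function so the sketch can be read
    # without the package installed.
    import redshift_connector

    conn = redshift_connector.connect(**conn_params)
    try:
        cursor = conn.cursor()
        cursor.execute(sql)
        return cursor.fetchall()  # DB-API 2.0 result rows
    finally:
        conn.close()
```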

**Topics**
+ [Installing the Amazon Redshift Python connector](python-driver-install.md)
+ [Configuration options for the Amazon Redshift Python connector](python-configuration-options.md)
+ [Importing the Python connector](python-start-import.md)
+ [Integrating the Python connector with NumPy](python-connect-integrate-numpy.md)
+ [Integrating the Python connector with pandas](python-connect-integrate-pandas.md)
+ [Using identity provider plugins](python-connect-identity-provider-plugins.md)
+ [Examples of using the Amazon Redshift Python connector](python-connect-examples.md)
+ [API reference for the Amazon Redshift Python connector](python-api-reference.md)

# Installing the Amazon Redshift Python connector
<a name="python-driver-install"></a>

You can use any of the following methods to install the Amazon Redshift Python connector:
+ Python Package Index (PyPI)
+ Conda
+ Cloning the GitHub repository

## Installing the Python connector from PyPI
<a name="python-pip-install-pypi"></a>

To install the Python connector from the Python Package Index (PyPI), you can use pip. To do this, run the following command.

```
$ pip install redshift_connector
```

You can install the connector within a virtual environment. To do this, run the following command.

```
$ pip install redshift_connector
```

Optionally, you can install pandas and NumPy with the connector.

```
$ pip install 'redshift_connector[full]'
```

For more information on pip, see the [pip site](https://pip.pypa.io/en/stable/).

## Installing the Python connector from Conda
<a name="python-pip-install-from-conda"></a>

You can install the Python connector from Anaconda.org.

```
$ conda install -c conda-forge redshift_connector
```

## Installing the Python connector by cloning the GitHub repository from AWS
<a name="python-pip-install-from-source"></a>

To install the Python connector from source, clone the GitHub repository from AWS. After you install Python and virtualenv, set up your environment and install the required dependencies by running the following commands.

```
$ git clone https://github.com/aws/amazon-redshift-python-driver.git
$ cd amazon-redshift-python-driver
$ virtualenv venv
$ . venv/bin/activate
$ python -m pip install -r requirements.txt
$ python -m pip install -e .
$ python -m pip install redshift_connector
```

# Configuration options for the Amazon Redshift Python connector
<a name="python-configuration-options"></a>

Following, you can find descriptions for the options that you can specify for the Amazon Redshift Python connector. The options below apply to the latest available connector version unless specified otherwise.

## access_key_id
<a name="python-access-key-id-option"></a>
+ **Default value** – None
+ **Data type** – String

The access key for the IAM role or user configured for IAM database authentication. 

This parameter is optional.

## allow_db_user_override
<a name="python-allow-db-user-override-option"></a>
+ **Default value** – False
+ **Data type** – Boolean

**True**  
Specifies that the connector uses the `DbUser` value from the Security Assertion Markup Language (SAML) assertion.

**False**  
Specifies that the value in the `DbUser` connection parameter is used.

This parameter is optional.

## app_name
<a name="python-app-name-option"></a>
+ **Default value** – None
+ **Data type** – String

The name of the identity provider (IdP) application used for authentication. 

This parameter is optional.

## application_name
<a name="python-application_name-option"></a>
+ **Default value** – None
+ **Data type** – String

The name of the client application to pass to Amazon Redshift for audit purposes. The application name that you provide appears in the `application_name` column of the [SYS_CONNECTION_LOG](https://docs.aws.amazon.com/redshift/latest/dg/SYS_CONNECTION_LOG.html) table. This helps you track and troubleshoot connection sources when debugging issues.

This parameter is optional.

## auth_profile
<a name="python-auth-profile-option"></a>
+ **Default value** – None
+ **Data type** – String

The name of an Amazon Redshift authentication profile that stores connection properties as JSON. For more information about naming connection parameters, see the `RedshiftProperty` class. The `RedshiftProperty` class stores connection parameters provided by the end user and, if applicable, generated during the IAM authentication process (for example, temporary IAM credentials). For more information, see the [RedshiftProperty class](https://github.com/aws/amazon-redshift-python-driver/blob/master/redshift_connector/redshift_property.py#L9). 

This parameter is optional.

## auto_create
<a name="python-auto-create-option"></a>
+ **Default value** – False
+ **Data type** – Boolean

A value that indicates whether to create the user if the user doesn't exist. 

This parameter is optional.

## client_id
<a name="python-client-id-option"></a>
+ **Default value** – None
+ **Data type** – String

The client ID from Azure IdP. 

This parameter is optional.

## client_secret
<a name="python-client-secret-option"></a>
+ **Default value** – None
+ **Data type** – String

The client secret from Azure IdP. 

This parameter is optional.

## cluster_identifier
<a name="python-cluster-identifier-option"></a>
+ **Default value** – None
+ **Data type** – String

The cluster identifier of the Amazon Redshift cluster. 

This parameter is optional.

## credentials_provider
<a name="python-credential-provider-option"></a>
+ **Default value** – None
+ **Data type** – String

The IdP that is used for authenticating with Amazon Redshift. Following are valid values: 
+ `AdfsCredentialsProvider`
+ `AzureCredentialsProvider`
+ `BrowserAzureCredentialsProvider`
+ `BrowserAzureOAuth2CredentialsProvider`
+ `BrowserIdcAuthPlugin` – An authorization plugin using AWS IAM Identity Center.
+ `BrowserSamlCredentialsProvider`
+ `IdpTokenAuthPlugin` – An authorization plugin that accepts an AWS IAM Identity Center token or OpenID Connect (OIDC) JSON-based identity tokens (JWT) from any web identity provider linked to the AWS IAM Identity Center.
+ `PingCredentialsProvider`
+ `OktaCredentialsProvider`

This parameter is optional.

## database
<a name="python-database-option"></a>
+ **Default value** – None
+ **Data type** – String

The name of the database to which you want to connect. 

This parameter is required.

## database_metadata_current_db_only
<a name="python-database-metadata-current-db-only-option"></a>
+ **Default value** – True
+ **Data type** – Boolean

A value that indicates whether an application supports multidatabase datashare catalogs. The default value of True indicates that the application doesn't support multidatabase datashare catalogs for backward compatibility. 

This parameter is optional.

## db_groups
<a name="python-db-groups-option"></a>
+ **Default value** – None
+ **Data type** – String

A comma-separated list of existing database group names that the user indicated by DbUser joins for the current session. 

This parameter is optional.

## db_user
<a name="python-db-user-option"></a>
+ **Default value** – None
+ **Data type** – String

The user ID to use with Amazon Redshift. 

This parameter is optional.

## endpoint_url
<a name="python-endpoint-url-option"></a>
+ **Default value** – None
+ **Data type** – String

The Amazon Redshift endpoint URL. This option is only for AWS internal use. 

This parameter is optional.

## group_federation
<a name="python-group-federation-option"></a>
+ **Default value** – False
+ **Data type** – Boolean

This option specifies whether to use Amazon Redshift IDP groups.

This parameter is optional.

**true**  
Use Amazon Redshift Identity Provider (IDP) groups.

**false**  
Use the STS API and GetClusterCredentials for user federation and specify **db_groups** for the connection.

## host
<a name="python-host-option"></a>
+ **Default value** – None
+ **Data type** – String

The hostname of the Amazon Redshift cluster. 

This parameter is optional.

## iam
<a name="python-iam-option"></a>
+ **Default value** – False
+ **Data type** – Boolean

Specifies whether IAM authentication is enabled. 

This parameter is required.

## iam\_disable\_cache
<a name="python-iam-disable-cache-option"></a>
+ **Default value** – False
+ **Data type** – Boolean

This option specifies whether the IAM credentials are cached. By default, the IAM credentials are cached. This improves performance when requests to the API gateway are throttled. 

This parameter is optional.

## idc\_client\_display\_name
<a name="python-idc_client_display_name-option"></a>
+ **Default Value** – Amazon Redshift Python connector
+ **Data Type** – String

The display name to be used for the client that's using BrowserIdcAuthPlugin.

This parameter is optional.

## idc\_region
<a name="python-idc_region"></a>
+ **Default Value** – None
+ **Data Type** – String

The AWS Region where the AWS IAM Identity Center instance is located.

This parameter is required only when authenticating using `BrowserIdcAuthPlugin` in the credentials\_provider configuration option.

## idp\_partition
<a name="python-idp_partition-option"></a>
+ **Default Value** – None
+ **Data Type** – String

Specifies the cloud partition where your identity provider (IdP) is configured. This determines which IdP authentication endpoint the driver connects to.

If this parameter is left blank, the driver defaults to the commercial partition. Possible values are:
+  `us-gov`: Use this value if your IdP is configured in Azure Government. For example, Azure AD Government uses the endpoint `login.microsoftonline.us`.
+  `cn`: Use this value if your IdP is configured in the China cloud partition. For example, Azure AD China uses the endpoint `login.chinacloudapi.cn`. 

This parameter is optional.
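The endpoint selection described above can be sketched as a simple lookup. This is an illustration only; `pick_idp_endpoint` is a hypothetical helper, not part of the connector:

```python
# Illustrative mapping from idp_partition to the Azure AD endpoint the
# driver targets. pick_idp_endpoint is a hypothetical helper, not part
# of redshift_connector.
AZURE_AD_ENDPOINTS = {
    None: "login.microsoftonline.com",     # commercial partition (default)
    "us-gov": "login.microsoftonline.us",  # Azure Government
    "cn": "login.chinacloudapi.cn",        # China partition
}

def pick_idp_endpoint(idp_partition=None):
    return AZURE_AD_ENDPOINTS[idp_partition]

print(pick_idp_endpoint())          # login.microsoftonline.com
print(pick_idp_endpoint("us-gov"))  # login.microsoftonline.us
```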

## idpPort
<a name="python-idp-port-option"></a>
+ **Default value** – 7890
+ **Data type** – Integer

The listen port to which the IdP sends the SAML assertion. 

This parameter is required.

## idp\_response\_timeout
<a name="python-idp-response-timeout-option"></a>
+ **Default value** – 120
+ **Data type** – Integer

The timeout, in seconds, for retrieving the SAML assertion from the IdP. 

This parameter is required.

## idp\_tenant
<a name="python-idp-tenant-option"></a>
+ **Default value** – None
+ **Data type** – String

The IdP tenant. 

This parameter is optional.

## issuer\_url
<a name="python-issuer_url"></a>
+ **Default Value** – None
+ **Data Type** – String

 Points to the AWS IAM Identity Center server's instance endpoint. 

This parameter is required only when authenticating using `BrowserIdcAuthPlugin` in the credentials\_provider configuration option.

## listen\_port
<a name="python-listen-port-option"></a>
+ **Default value** – 7890
+ **Data type** – Integer

The port that the driver uses to receive the SAML response from the identity provider or authorization code when using SAML, Azure AD, or AWS IAM Identity Center services through a browser plugin.

This parameter is optional.

## login\_url
<a name="python-login-url-option"></a>
+ **Default value** – None
+ **Data type** – String

The single sign-on URL for the IdP. 

This parameter is optional.

## max\_prepared\_statements
<a name="python-max-prepared-statements-option"></a>
+ **Default value** – 1000
+ **Data type** – Integer

The maximum number of prepared statements that will be cached per connection. Setting this parameter to 0 disables the caching mechanism. Entering a negative number for this parameter sets it to the default value. 

This parameter is optional.
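The sentinel behavior described above (0 disables caching, a negative value falls back to the default) can be sketched as follows; `effective_max` is a hypothetical helper for illustration, not connector API:

```python
DEFAULT_MAX_PREPARED_STATEMENTS = 1000  # documented default

def effective_max(value):
    # Hypothetical helper: a negative value falls back to the default,
    # 0 disables statement caching, and any positive value is used as-is.
    return DEFAULT_MAX_PREPARED_STATEMENTS if value < 0 else value

print(effective_max(-1))   # 1000 (default)
print(effective_max(0))    # 0    (caching disabled)
print(effective_max(250))  # 250
```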

## numeric\_to\_float
<a name="python-numeric-to-float-option"></a>
+ **Default value** – False
+ **Data type** – Boolean

This option specifies if the connector converts numeric data type values from decimal.Decimal to float. By default, the connector receives numeric data type values as decimal.Decimal and does not convert them. 

We don't recommend enabling numeric\_to\_float for use cases that require precision, as results may be rounded. 

For more information on decimal.Decimal and the tradeoffs between it and float, see [decimal — Decimal fixed point and floating point arithmetic](https://docs.python.org/3/library/decimal.html) on the Python website. 

This parameter is optional.
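The precision tradeoff is easy to demonstrate in plain Python, independent of the connector:

```python
from decimal import Decimal

# decimal.Decimal (the connector's default) keeps exact decimal values
exact = Decimal("0.1") + Decimal("0.2")
print(exact)   # 0.3

# float (what conversion produces) carries binary rounding error
approx = 0.1 + 0.2
print(approx)  # 0.30000000000000004
```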

## partner\_sp\_id
<a name="python-partner-sp-id-option"></a>
+ **Default value** – None
+ **Data type** – String

The Partner SP ID used for authentication with Ping. 

This parameter is optional.

## password
<a name="python-password-option"></a>
+ **Default value** – None
+ **Data type** – String

The password to use for authentication. 

This parameter is optional.

## port
<a name="python-port-option"></a>
+ **Default value** – 5439
+ **Data type** – Integer

The port number of the Amazon Redshift cluster. 

This parameter is required.

## preferred\_role
<a name="python-preferred-role-option"></a>
+ **Default value** – None
+ **Data type** – String

The IAM role preferred for the current connection. 

This parameter is optional.

## principal\_arn
<a name="python-principal-arn-option"></a>
+ **Default value** – None
+ **Data type** – String

The Amazon Resource Name (ARN) of the user or IAM role for which you are generating a policy. We recommend that you attach a policy to a role and then assign the role to your user for access. 

This parameter is optional.

## profile
<a name="python-profile-option"></a>
+ **Default value** – None
+ **Data type** – String

The name of a profile in an AWS credentials file that contains AWS credentials. 

This parameter is optional.

## provider\_name
<a name="python-provider_name-option"></a>
+ **Default value** – None
+ **Data type** – String

The name of the Redshift Native Authentication Provider. 

This parameter is optional.

## region
<a name="python-region-option"></a>
+ **Default value** – None
+ **Data type** – String

The AWS Region where the cluster is located. 

This parameter is optional.

## role\_arn
<a name="python-role-arn-option"></a>
+ **Default value** – None
+ **Data type** – String

The Amazon Resource Name (ARN) of the role that the caller is assuming. This parameter is used by the provider indicated by `JwtCredentialsProvider`. 

For the `JwtCredentialsProvider` provider, this parameter is mandatory. Otherwise, this parameter is optional.

## role\_session\_name
<a name="python-role-session-name-option"></a>
+ **Default value** – jwt\_redshift\_session
+ **Data type** – String

An identifier for the assumed role session. Typically, you pass the name or identifier that is associated with the user who is using your application. The temporary security credentials that your application uses are associated with that user. This parameter is used by the provider indicated by `JwtCredentialsProvider`. 

This parameter is optional.

## scope
<a name="python-scope-option"></a>
+ **Default value** – None
+ **Data type** – String

A space-separated list of scopes to which the user can consent. You specify this parameter so that your application can get consent for APIs that you want to call. You can specify this parameter when you specify BrowserAzureOAuth2CredentialsProvider for the credentials\_provider option.

This parameter is required for the BrowserAzureOAuth2CredentialsProvider plug-in.

## secret\_access\_key\_id
<a name="python-secret-access-key-id-option"></a>
+ **Default value** – None
+ **Data type** – String

The secret access key for the IAM role or user configured for IAM database authentication. 

This parameter is optional.

## session\_token
<a name="python-session-token-option"></a>
+ **Default value** – None
+ **Data type** – String

The session token for the IAM role or user configured for IAM database authentication. This parameter is required if temporary AWS credentials are being used. 

This parameter is optional.

## serverless\_acct\_id
<a name="python-serverless-acct-id-option"></a>
+ **Default value** – None
+ **Data type** – String

The Amazon Redshift Serverless account ID.

This parameter is optional.

## serverless\_work\_group
<a name="python-serverless-work-group-option"></a>
+ **Default value** – None
+ **Data type** – String

The Amazon Redshift Serverless workgroup name.

This parameter is optional.

## ssl
<a name="python-ssl-option"></a>
+ **Default value** – True
+ **Data type** – Boolean

Specifies whether Secure Sockets Layer (SSL) is enabled. 

This parameter is required.

## ssl\_insecure
<a name="python-ssl-insecure-option"></a>
+ **Default value** – False
+ **Data type** – Boolean

A value that specifies whether to disable the verification of the IdP host's server SSL certificate. Setting this parameter to True will disable the verification of the IdP host's server SSL certificate. We recommend that you keep the default value of False in production environments.

This parameter is optional.

## sslmode
<a name="python-sslmode-option"></a>
+ **Default value** – verify-ca
+ **Data type** – String

The security of the connection to Amazon Redshift. You can specify either of the following: 
+ verify-ca
+ verify-full

This parameter is required.

## tcp\_keepalive
<a name="python-tcp_keepalive-option"></a>
+ **Default value** – True
+ **Data type** – Boolean

Whether to use TCP keepalives to keep connections from timing out. You can specify the following values:
+ True: The driver will use TCP keepalives to keep connections from timing out.
+ False: The driver won’t use TCP keepalives.

This parameter is optional.

## tcp\_keepalive\_count
<a name="python-tcp_keepalive_count-option"></a>
+ **Default value** – None
+ **Data type** – Integer

The number of unacknowledged probes to send before considering the connection inactive. For example, setting the value to 3 means that the driver will send 3 unanswered keepalive packets before determining that the connection is no longer active.

If this parameter is not specified, Amazon Redshift uses the system's default value.

This parameter is optional.

## tcp\_keepalive\_interval
<a name="python-tcp_keepalive_interval-option"></a>
+ **Default value** – None
+ **Data type** – Integer

The interval, in seconds, between subsequent keepalive probes when the driver doesn't receive acknowledgement for the previous probe. If you specify this parameter, it must be a positive integer.

If this parameter is not specified, Amazon Redshift uses the system's default value.

This parameter is optional.

## tcp\_keepalive\_idle
<a name="python-tcp_keepalive_idle-option"></a>
+ **Default value** – None
+ **Data type** – Integer

The duration of inactivity, in seconds, after which the driver sends the first keepalive probe. For example, setting the value to 120 means that the driver will wait for 2 minutes of inactivity before sending the first keepalive packet. If you specify this parameter, it must be a positive integer. 

If this parameter is not specified, Amazon Redshift uses the system's default value.

This parameter is optional.
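Taken together, the three keepalive options bound how long a silently dead connection can linger: roughly the idle time plus the probe interval multiplied by the probe count. A back-of-the-envelope sketch with hypothetical values:

```python
# Hypothetical keepalive settings, all in seconds (or probe counts)
tcp_keepalive_idle = 120     # inactivity before the first probe
tcp_keepalive_interval = 10  # gap between unanswered probes
tcp_keepalive_count = 3      # unanswered probes before giving up

# Worst-case time before a dead connection is detected
worst_case = tcp_keepalive_idle + tcp_keepalive_interval * tcp_keepalive_count
print(worst_case)  # 150
```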

## timeout
<a name="python-timeout-option"></a>
+ **Default value** – None
+ **Data type** – Integer

The number of seconds before the connection to the server times out. 

This parameter is optional.

## token
<a name="python-token-option"></a>
+ **Default Value** – None
+ **Data Type** – String

An AWS IAM Identity Center provided access token or an OpenID Connect (OIDC) JSON Web Token (JWT) provided by a web identity provider that's linked with AWS IAM Identity Center. Your application must generate this token by authenticating the user of your application with AWS IAM Identity Center or an identity provider linked with AWS IAM Identity Center. 

This parameter works with `IdpTokenAuthPlugin`.

## token\_type
<a name="python-token_type-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The type of token that is being used in `IdpTokenAuthPlugin`.

You can specify the following values:

**ACCESS\_TOKEN**  
Enter this if you use an AWS IAM Identity Center provided access token.

**EXT\_JWT**  
Enter this if you use an OpenID Connect (OIDC) JSON Web Token (JWT) provided by a web-based identity provider that's integrated with AWS IAM Identity Center.

This parameter works with `IdpTokenAuthPlugin`.
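One way to tell the two token shapes apart: an EXT\_JWT value is three base64url-encoded segments joined by dots, while an Identity Center access token is an opaque string. A toy, unsigned JWT for illustration only (not a usable credential):

```python
import base64
import json

def b64url(obj):
    # base64url-encode a JSON object, without padding
    raw = json.dumps(obj).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

# header.payload.signature; the signature is empty in this unsigned toy token
jwt = ".".join([b64url({"alg": "none"}), b64url({"sub": "example-user"}), ""])

# A JWT always has exactly two dots separating its three segments
print(jwt.count("."))  # 2
```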

## user
<a name="python-user-option"></a>
+ **Default value** – None
+ **Data type** – String

The user name to use for authentication. 

This parameter is optional.

## web\_identity\_token
<a name="python-web-identity-token-option"></a>
+ **Default value** – None
+ **Data type** – String

The OAuth 2.0 access token or OpenID Connect ID token that is provided by the identity provider. Make sure that your application gets this token by authenticating the user who is using your application with a web identity provider. The provider indicated by `JwtCredentialsProvider` uses this parameter. 

For the `JwtCredentialsProvider` provider, this parameter is mandatory. Otherwise, this parameter is optional.

# Importing the Python connector
<a name="python-start-import"></a>

To import the Python connector, run the following command.

```
>>> import redshift_connector
```

To connect to an Amazon Redshift cluster using AWS credentials, run the following command.

```
conn = redshift_connector.connect(
    host='examplecluster.abc123xyz789.us-west-1.redshift.amazonaws.com',
    port=5439,
    database='dev',
    user='awsuser',
    password='my_password'
 )
```

# Integrating the Python connector with NumPy
<a name="python-connect-integrate-numpy"></a>

Following is an example of integrating the Python connector with NumPy.

```
>>> import numpy
#Connect to the cluster
>>> import redshift_connector
>>> conn = redshift_connector.connect(
     host='examplecluster.abc123xyz789.us-west-1.redshift.amazonaws.com',
     port=5439,
     database='dev',
     user='awsuser',
     password='my_password'
  )
  
# Create a Cursor object
>>> cursor = conn.cursor()

# Query and receive result set
>>> cursor.execute("select * from book")

>>> result: numpy.ndarray = cursor.fetch_numpy_array()
>>> print(result)
```

Following is the result.

```
[['One Hundred Years of Solitude' 'Gabriel García Márquez']
['A Brief History of Time' 'Stephen Hawking']]
```

# Integrating the Python connector with pandas
<a name="python-connect-integrate-pandas"></a>

Following is an example of integrating the Python connector with pandas.

```
>>> import pandas

#Connect to the cluster
>>> import redshift_connector
>>> conn = redshift_connector.connect(
     host='examplecluster.abc123xyz789.us-west-1.redshift.amazonaws.com',
     port=5439,
     database='dev',
     user='awsuser',
     password='my_password'
  )
  
# Create a Cursor object
>>> cursor = conn.cursor()

# Query and receive result set
>>> cursor.execute("select * from book")
>>> result: pandas.DataFrame = cursor.fetch_dataframe()
>>> print(result)
```

# Using identity provider plugins
<a name="python-connect-identity-provider-plugins"></a>

For general information on how to use identity provider plugins, see [Options for providing IAM credentials](options-for-providing-iam-credentials.md). For more information about managing IAM identities, including best practices for IAM roles, see [Identity and access management in Amazon Redshift](redshift-iam-authentication-access-control.md).

## Authentication using the ADFS identity provider plugin
<a name="python-connect-identity-provider-active-dir"></a>

Following is an example of using the Active Directory Federation Service (ADFS) identity provider plugin to authenticate a user connecting to an Amazon Redshift database.

```
>>> con = redshift_connector.connect(
    iam=True,
    database='dev',
    host='my-testing-cluster.abc.us-east-2.redshift.amazonaws.com',
    cluster_identifier='my-testing-cluster',
    credentials_provider='AdfsCredentialsProvider',
    user='brooke@myadfshostname.com',
    password='Hunter2',
    idp_host='myadfshostname.com'
)
```

## Authentication using the Azure identity provider plugin
<a name="python-connect-identity-provider-azure"></a>

Following is an example of authentication using the Azure identity provider plugin. You can create values for a `client_id` and `client_secret` for an Azure Enterprise application as shown following. 

```
>>> con = redshift_connector.connect(
    iam=True,
    database='dev',
    host='my-testing-cluster.abc.us-east-2.redshift.amazonaws.com',
    cluster_identifier='my-testing-cluster',
    credentials_provider='AzureCredentialsProvider',
    user='brooke@myazure.org',
    password='Hunter2',
    idp_tenant='my_idp_tenant',
    client_id='my_client_id',
    client_secret='my_client_secret',
    preferred_role='arn:aws:iam:123:role/DataScientist'
)
```

## Authentication using the AWS IAM Identity Center identity provider plugin
<a name="python-connect-identity-provider-aws-idc"></a>

 Following is an example of authentication using the AWS IAM Identity Center identity provider plugin. 

```
with redshift_connector.connect(
    credentials_provider='BrowserIdcAuthPlugin',
    host='my-testing-cluster.abc.us-east-2.redshift.amazonaws.com',
    database='dev',
    idc_region='us-east-1',
    issuer_url='https://identitycenter.amazonaws.com/ssoins-790723ebe09c86f9',
    idp_response_timeout=60,
    listen_port=8100,
    idc_client_display_name='Test Display Name',
    # port value of 5439 is specified by default
) as conn:
    ...
```

## Authentication using Azure Browser identity provider plugin
<a name="python-connect-identity-provider-azure-browser"></a>

Following is an example of using the Azure Browser identity provider plugin to authenticate a user connecting to an Amazon Redshift database.

Multi-factor authentication occurs in the browser, where the sign-in credentials are provided by the user.

```
>>> con = redshift_connector.connect(
    iam=True,
    database='dev',
    host='my-testing-cluster.abc.us-east-2.redshift.amazonaws.com',
    cluster_identifier='my-testing-cluster',
    credentials_provider='BrowserAzureCredentialsProvider',
    idp_tenant='my_idp_tenant',
    client_id='my_client_id',
)
```

## Authentication using the Okta identity provider plugin
<a name="python-connect-identity-provider-okta"></a>

Following is an example of authentication using the Okta identity provider plugin. You can obtain the values for `idp_host`, `app_id`, and `app_name` through the Okta application.

```
>>> con = redshift_connector.connect(
    iam=True,
    database='dev',
    host='my-testing-cluster.abc.us-east-2.redshift.amazonaws.com',
    cluster_identifier='my-testing-cluster',
    credentials_provider='OktaCredentialsProvider',
    user='brooke@myazure.org',
    password='hunter2',
    idp_host='my_idp_host',
    app_id='my_first_appetizer',
    app_name='dinner_party'
)
```

## Authentication using JumpCloud with a generic SAML browser identity provider plugin
<a name="python-connect-identity-provider-jumpcloud"></a>

Following is an example of using JumpCloud with a generic SAML browser identity provider plugin for authentication.

The password parameter is required. However, you can pass an empty string for it because multi-factor authentication occurs in the browser.

```
>>> con = redshift_connector.connect(
    iam=True,
    database='dev',
    host='my-testing-cluster.abc.us-east-2.redshift.amazonaws.com',
    cluster_identifier='my-testing-cluster',
    credentials_provider='BrowserSamlCredentialsProvider',
    user='brooke@myjumpcloud.org',
    password='',
    login_url='https://sso.jumpcloud.com/saml2/plustwo_melody'
)
```

# Examples of using the Amazon Redshift Python connector
<a name="python-connect-examples"></a>

Following are examples of how to use the Amazon Redshift Python connector. To run them, you must first install the Python connector. For more information on installing the Amazon Redshift Python connector, see [Installing the Amazon Redshift Python connector](python-driver-install.md). For more information on configuration options you can use with the Python connector, see [Configuration options for the Amazon Redshift Python connector](python-configuration-options.md).

**Topics**
+ [Connecting to and querying an Amazon Redshift cluster using AWS credentials](#python-connect-cluster)
+ [Enabling autocommit](#python-connect-enable-autocommit)
+ [Configuring cursor paramstyle](#python-connect-config-paramstyle)
+ [Using COPY to copy data from an Amazon S3 bucket and UNLOAD to write data to it](#python-connect-copy-unload-s3)

## Connecting to and querying an Amazon Redshift cluster using AWS credentials
<a name="python-connect-cluster"></a>

The following example guides you through connecting to an Amazon Redshift cluster using your AWS credentials, then querying a table and retrieving the query results.

```
#Connect to the cluster
>>> import redshift_connector
>>> conn = redshift_connector.connect(
     host='examplecluster.abc123xyz789.us-west-1.redshift.amazonaws.com',
     database='dev',
     port=5439,
     user='awsuser',
     password='my_password'
  )
  
# Create a Cursor object
>>> cursor = conn.cursor()

# Query a table using the Cursor
>>> cursor.execute("select * from book")
                
#Retrieve the query result set
>>> result: tuple = cursor.fetchall()
>>> print(result)
 >> (['One Hundred Years of Solitude', 'Gabriel García Márquez'], ['A Brief History of Time', 'Stephen Hawking'])
```

## Enabling autocommit
<a name="python-connect-enable-autocommit"></a>

The autocommit property is off by default, following the Python Database API Specification. You can use the following commands to turn on the connection's autocommit property after performing a rollback command to make sure that a transaction is not in progress.

```
#Connect to the cluster
>>> import redshift_connector
>>> conn = redshift_connector.connect(...)

# Run a rollback command
>>> conn.rollback()

# Turn on autocommit
>>> conn.autocommit = True
>>> conn.run("VACUUM")

# Turn off autocommit
>>> conn.autocommit = False
```

## Configuring cursor paramstyle
<a name="python-connect-config-paramstyle"></a>

The paramstyle for a cursor can be modified via cursor.paramstyle. The default paramstyle used is `format`. Valid values for paramstyle are `qmark`, `numeric`, `named`, `format`, and `pyformat`.

The following are examples of using various paramstyles to pass parameters to a sample SQL statement.

```
# qmark
redshift_connector.paramstyle = 'qmark'
sql = 'insert into foo(bar, jar) VALUES(?, ?)'
cursor.execute(sql, (1, "hello world"))

# numeric
redshift_connector.paramstyle = 'numeric'
sql = 'insert into foo(bar, jar) VALUES(:1, :2)'
cursor.execute(sql, (1, "hello world"))

# named
redshift_connector.paramstyle = 'named'
sql = 'insert into foo(bar, jar) VALUES(:p1, :p2)'
cursor.execute(sql, {"p1":1, "p2":"hello world"})

# format
redshift_connector.paramstyle = 'format'
sql = 'insert into foo(bar, jar) VALUES(%s, %s)'
cursor.execute(sql, (1, "hello world"))

# pyformat
redshift_connector.paramstyle = 'pyformat'
sql = 'insert into foo(bar, jar) VALUES(%(bar)s, %(jar)s)'
cursor.execute(sql, {"bar": 1, "jar": "hello world"})
```

## Using COPY to copy data from an Amazon S3 bucket and UNLOAD to write data to it
<a name="python-connect-copy-unload-s3"></a>

The following example shows how to copy data from an Amazon S3 bucket into a table and then unload from that table back into the bucket.

A text file named `category_csv.txt` containing the following data is uploaded to an Amazon S3 bucket:

```
12,Shows,Musicals,Musical theatre
13,Shows,Plays,"All ""non-musical"" theatre"
14,Shows,Opera,"All opera, light, and ""rock"" opera"
15,Concerts,Classical,"All symphony, concerto, and choir concerts"
```
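The doubled quotation marks in this file are standard CSV escaping, which the COPY command's `csv` option understands. Python's built-in `csv` module parses the same convention:

```python
import csv
import io

# The second record from category_csv.txt, with CSV-escaped quotes
sample = '13,Shows,Plays,"All ""non-musical"" theatre"\n'
row = next(csv.reader(io.StringIO(sample)))
print(row[3])  # All "non-musical" theatre
```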

Following is an example of the Python code, which first connects to the Amazon Redshift database. It then creates a table called `category` and copies the CSV data from the S3 bucket into the table.

```
#Connect to the cluster and create a Cursor
>>> import redshift_connector
>>> with redshift_connector.connect(...) as conn:
...     with conn.cursor() as cursor:
...
...         #Create an empty table
...         cursor.execute("create table category (catid int, catgroup varchar, catname varchar, catdesc varchar)")
...
...         #Use COPY to copy the contents of the S3 bucket into the empty table
...         cursor.execute("copy category from 's3://testing/category_csv.txt' iam_role 'arn:aws:iam::123:role/RedshiftCopyUnload' csv;")
...
...         #Retrieve the contents of the table
...         cursor.execute("select * from category")
...         print(cursor.fetchall())
...
...         #Use UNLOAD to copy the contents of the table into the S3 bucket
...         cursor.execute("unload ('select * from category') to 's3://testing/unloaded_category_csv.txt' iam_role 'arn:aws:iam::123:role/RedshiftCopyUnload' csv;")
 >> ([12, 'Shows', 'Musicals', 'Musical theatre'], [13, 'Shows', 'Plays', 'All "non-musical" theatre'], [14, 'Shows', 'Opera', 'All opera, light, and "rock" opera'], [15, 'Concerts', 'Classical', 'All symphony, concerto, and choir concerts'])
```

If you don't have `autocommit` set to true, commit with `conn.commit()` after running the `execute()` statements.

The data is unloaded into the file `unloaded_category_csv.text0000_part00` in the S3 bucket, with the following content:

```
12,Shows,Musicals,Musical theatre
13,Shows,Plays,"All ""non-musical"" theatre"
14,Shows,Opera,"All opera, light, and ""rock"" opera"
15,Concerts,Classical,"All symphony, concerto, and choir concerts"
```

# API reference for the Amazon Redshift Python connector
<a name="python-api-reference"></a>

Following, you can find a description of the Amazon Redshift Python connector API operations.

## redshift\_connector
<a name="python-api-redshift_connector"></a>

Following, you can find a description of the `redshift_connector` API operation.

`connect(user, database, password[, port, …])`  
Establishes a connection to an Amazon Redshift cluster. This function validates user input, optionally authenticates using an identity provider plugin, and then constructs a connection object.

`apilevel`  
The DBAPI level supported, currently "2.0".

`paramstyle`  
The database API parameter style to use globally.

## Connection
<a name="python-api-connection"></a>

Following, you can find a description of the connection API operations for the Amazon Redshift Python connector.

`__init__(user, password, database[, host, …])`  
Initializes a raw connection object.

`cursor`  
Creates a cursor object bound to this connection.

`commit`  
Commits the current database transaction.

`rollback`  
Rolls back the current database transaction.

`close`  
Closes the database connection.

`execute(cursor, operation, vals)`  
Runs the specified SQL command. You can provide the parameters as a sequence or as a mapping, depending upon the value of `redshift_connector.paramstyle`.

`run(sql[, stream])`  
Runs the specified SQL command. Optionally, you can provide a stream for use with the COPY command.

`xid(format_id, global_transaction_id, …)`  
Creates a transaction ID. Only the `global_transaction_id` parameter is used in PostgreSQL; `format_id` and `branch_qualifier` are not used. The `global_transaction_id` can be any string identifier supported by PostgreSQL. Returns a tuple of (`format_id`, `global_transaction_id`, `branch_qualifier`).

`tpc_begin(xid)`  
Begins a TPC transaction with a transaction ID `xid` consisting of a format ID, global transaction ID, and branch qualifier. 

`tpc_prepare`  
Performs the first phase of a transaction started with .tpc\_begin().

`tpc_commit([xid])`  
When called with no arguments, .tpc\_commit commits a TPC transaction previously prepared with .tpc\_prepare().

`tpc_rollback([xid])`  
When called with no arguments, .tpc\_rollback rolls back a TPC transaction.

`tpc_recover`  
Returns a list of pending transaction IDs suitable for use with .tpc\_commit(xid) or .tpc\_rollback(xid).

## Cursor
<a name="python-api-cursor"></a>

Following, you can find a description of the cursor API operation.

`__init__(connection[, paramstyle])`  
Initializes a raw cursor object.

`insert_data_bulk(filename, table_name, parameter_indices, column_names, delimiter, batch_size)`  
Runs a bulk INSERT statement.

`execute(operation[, args, stream, …])`  
Runs a database operation.

`executemany(operation, param_sets)`  
Prepares a database operation, and then runs it for all parameter sequences or mappings provided.

`fetchone`  
Fetches the next row of a query result set.

`fetchmany([num])`  
Fetches the next set of rows of a query result.

`fetchall`  
Fetches all remaining rows of a query result.

`close`  
Closes the cursor now. 

`__iter__`  
A cursor object can be iterated to retrieve the rows from a query.

`fetch_dataframe([num])`  
Returns a dataframe of the last query results.

`write_dataframe(df, table)`  
Writes the same structure dataframe into an Amazon Redshift database.

`fetch_numpy_array([num])`  
Returns a NumPy array of the last query results.

`get_catalogs`  
Amazon Redshift doesn't support multiple catalogs from a single connection. Amazon Redshift only returns the current catalog.

`get_tables([catalog, schema_pattern, …])`  
Returns the unique public tables that are user-defined within the system.

`get_columns([catalog, schema_pattern, …])`  
Returns a list of all columns in a specific table in an Amazon Redshift database.

## AdfsCredentialsProvider plugin
<a name="python-adfs-credentials-plugin"></a>

Following is the syntax for the AdfsCredentialsProvider plugin API operation for the Amazon Redshift Python connector. 

```
redshift_connector.plugin.AdfsCredentialsProvider()
```

## AzureCredentialsProvider plugin
<a name="python-azure-credentials-plugin"></a>

Following is the syntax for the AzureCredentialsProvider plugin API operation for the Amazon Redshift Python connector.

```
redshift_connector.plugin.AzureCredentialsProvider()
```

## BrowserAzureCredentialsProvider plugin
<a name="python-browser-azure-credentials-plugin"></a>

Following is the syntax for the BrowserAzureCredentialsProvider plugin API operation for the Amazon Redshift Python connector.

```
redshift_connector.plugin.BrowserAzureCredentialsProvider()
```

## BrowserSamlCredentialsProvider plugin
<a name="python-browser-saml-credentials-plugin"></a>

Following is the syntax for the BrowserSamlCredentialsProvider plugin API operation for the Amazon Redshift Python connector.

```
redshift_connector.plugin.BrowserSamlCredentialsProvider()
```

## OktaCredentialsProvider plugin
<a name="python-okta-credentials-plugin"></a>

Following is the syntax for the OktaCredentialsProvider plugin API operation for the Amazon Redshift Python connector.

```
redshift_connector.plugin.OktaCredentialsProvider()
```

## PingCredentialsProvider plugin
<a name="python-ping-credentials-plugin"></a>

Following is the syntax for the PingCredentialsProvider plugin API operation for the Amazon Redshift Python connector.

```
redshift_connector.plugin.PingCredentialsProvider()
```

## SamlCredentialsProvider plugin
<a name="python-saml-credentials-plugin"></a>

Following is the syntax for the SamlCredentialsProvider plugin API operation for the Amazon Redshift Python connector.

```
redshift_connector.plugin.SamlCredentialsProvider()
```

# Amazon Redshift integration for Apache Spark
<a name="spark-redshift-connector"></a>

 [Apache Spark](https://aws.amazon.com/emr/features/spark/) is a distributed processing framework and programming model that helps you do machine learning, stream processing, or graph analytics. Similar to Apache Hadoop, Spark is an open-source, distributed processing system commonly used for big data workloads. Spark has an optimized directed acyclic graph (DAG) execution engine and actively caches data in-memory. This can boost performance, especially for certain algorithms and interactive queries. 

 This integration provides you with a Spark connector you can use to build Apache Spark applications that read from and write to data in Amazon Redshift and Amazon Redshift Serverless. These applications don't compromise on application performance or transactional consistency of the data. This integration is automatically included in [Amazon EMR](https://docs.aws.amazon.com/emr/latest/ReleaseGuide/) and [AWS Glue](https://docs.aws.amazon.com/glue/latest/dg/), so you can immediately run Apache Spark jobs that access and load data into Amazon Redshift as part of your data ingestion and transformation pipelines. 

Currently, you can use Spark versions 3.3.x, 3.4.x, 3.5.x, and 4.0.0 with this integration.

 This integration provides the following: 
+  AWS Identity and Access Management (IAM) authentication. For more information, see [ Identity and access management in Amazon Redshift.](https://docs.aws.amazon.com/redshift/latest/mgmt/redshift-iam-authentication-access-control.html) 
+ Predicate and query pushdown to improve performance.
+  Amazon Redshift data types. 
+ Connectivity to Amazon Redshift and Amazon Redshift Serverless.

## Considerations and limitations when using the Spark connector
<a name="spark-redshift-connector-considerations"></a>
+ The `tempdir` URI points to an Amazon S3 location. This temp directory is not cleaned up automatically and can add additional cost. We recommend using [Amazon S3 lifecycle policies](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lifecycle-mgmt.html), described in the *Amazon Simple Storage Service User Guide*, to define retention rules for the Amazon S3 bucket. 
+  By default, copies between Amazon S3 and Redshift don't work if the S3 bucket and Redshift cluster are in different AWS Regions. To use separate AWS Regions, set the `tempdir_region` parameter to the Region of the S3 bucket used for the `tempdir`.
+ Cross-Region writes between Amazon S3 and Redshift don't work if you write Parquet data with the `tempformat` parameter.
+ We recommend using [Amazon S3 server-side encryption](https://docs.aws.amazon.com/AmazonS3/latest/userguide/serv-side-encryption.html) to encrypt the Amazon S3 buckets used. 
+ We recommend [ blocking public access to Amazon S3 buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-control-block-public-access.html). 
+ We recommend that the Amazon Redshift cluster not be publicly accessible. 
+  We recommend turning on [ Amazon Redshift audit logging](https://docs.aws.amazon.com/redshift/latest/mgmt/db-auditing.html). 
+  We recommend turning on [ Amazon Redshift at-rest encryption](https://docs.aws.amazon.com/redshift/latest/mgmt/security-server-side-encryption.html). 
+  We recommend turning on SSL for the JDBC connection from Spark on Amazon EMR to Amazon Redshift. 
+ We recommend passing an IAM role using the parameter `aws_iam_role` for the Amazon Redshift authentication parameter.

# Authentication with the Spark connector
<a name="redshift-spark-connector-authentication"></a>

The following diagram describes the authentication between Amazon S3, Amazon Redshift, the Spark driver, and Spark executors.

![\[This is a diagram of the spark connector authentication.\]](http://docs.aws.amazon.com/redshift/latest/mgmt/images/spark-connector-authentication.png)


## Authentication between Redshift and Spark
<a name="redshift-spark-authentication"></a>

 You can use the Amazon Redshift–provided JDBC driver version 2.x to connect to Amazon Redshift with the Spark connector by specifying sign-in credentials. To use IAM, [configure your JDBC URL to use IAM authentication](https://docs.aws.amazon.com/redshift/latest/mgmt/generating-iam-credentials-configure-jdbc-odbc.html). To connect to a Redshift cluster from Amazon EMR or AWS Glue, make sure that your IAM role has the necessary permissions to retrieve temporary IAM credentials. The following list describes all of the permissions that your IAM role needs to retrieve credentials and run Amazon S3 operations. 
+ [ Redshift:GetClusterCredentials](https://docs.aws.amazon.com/redshift/latest/APIReference/API_GetClusterCredentials.html) (for provisioned Redshift clusters)
+ [ Redshift:DescribeClusters](https://docs.aws.amazon.com/redshift/latest/APIReference/API_DescribeClusters.html) (for provisioned Redshift clusters)
+ [ Redshift:GetWorkgroup](https://docs.aws.amazon.com/redshift-serverless/latest/APIReference/API_GetWorkgroup.html) (for Amazon Redshift Serverless workgroups)
+ [ Redshift:GetCredentials](https://docs.aws.amazon.com/redshift-serverless/latest/APIReference/API_GetCredentials.html) (for Amazon Redshift Serverless workgroups)
+ [ s3:ListBucket](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListBuckets.html)
+ [ s3:GetBucket](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetBucket.html)
+ [ s3:GetObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html)
+ [ s3:PutObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html)
+ [ s3:GetBucketLifecycleConfiguration](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketLifecycleConfiguration.html)

 For more information about GetClusterCredentials, see [ IAM policies for GetClusterCredentials](https://docs.aws.amazon.com/redshift/latest/mgmt/redshift-iam-access-control-identity-based.html#redshift-policy-resources.getclustercredentials-resources). 

You also must make sure that Amazon Redshift can assume the IAM role during `COPY` and `UNLOAD` operations.


```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "redshift.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
```


If you’re using the latest JDBC driver, the driver automatically manages the transition from an Amazon Redshift self-signed certificate to an ACM certificate. However, you must [specify the SSL options in the JDBC URL](https://docs.aws.amazon.com/redshift/latest/mgmt/jdbc20-configuration-options.html#jdbc20-ssl-option). 
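As an illustration of how the IAM scheme and SSL options compose, the following hypothetical helper builds an IAM-authenticated JDBC URL. The `ssl` and `sslmode` option names should be verified against the JDBC driver's configuration reference for your driver version.

```python
def build_jdbc_url(host, port, database, ssl_mode="verify-full"):
    """Compose a Redshift IAM JDBC URL with SSL options (illustrative)."""
    base = f"jdbc:redshift:iam://{host}:{port}/{database}"
    # Verify these option names against the driver documentation
    # for your driver version.
    return f"{base}?ssl=true&sslmode={ssl_mode}"

url = build_jdbc_url(
    "examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com",
    5439,
    "dev",
)
```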

 The following is an example of how to specify the JDBC driver URL and `aws_iam_role` to connect to Amazon Redshift. 

```
df.write \
  .format("io.github.spark_redshift_community.spark.redshift") \
  .option("url", "jdbc:redshift:iam://<the-rest-of-the-connection-string>") \
  .option("dbtable", "<your-table-name>") \
  .option("tempdir", "s3a://<your-bucket>/<your-directory-path>") \
  .option("aws_iam_role", "<your-aws-role-arn>") \
  .mode("error") \
  .save()
```

## Authentication between Amazon S3 and Spark
<a name="spark-s3-authentication"></a>

 If you’re using an IAM role to authenticate between Spark and Amazon S3, use one of the following methods: 
+ The AWS SDK for Java will automatically attempt to find AWS credentials by using the default credential provider chain implemented by the DefaultAWSCredentialsProviderChain class. For more information, see [ Using the Default Credential Provider Chain](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html#credentials-default).
+ You can specify AWS keys via [Hadoop configuration properties](https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md). For example, if your `tempdir` configuration points to an `s3n://` filesystem, set the `fs.s3n.awsAccessKeyId` and `fs.s3n.awsSecretAccessKey` properties in a Hadoop XML configuration file, or call `sc.hadoopConfiguration.set()` to change Spark's global Hadoop configuration.

For example, if you are using the s3n filesystem, add:

```
sc.hadoopConfiguration.set("fs.s3n.awsAccessKeyId", "YOUR_KEY_ID")
sc.hadoopConfiguration.set("fs.s3n.awsSecretAccessKey", "YOUR_SECRET_ACCESS_KEY")
```

For the s3a filesystem, add:

```
sc.hadoopConfiguration.set("fs.s3a.access.key", "YOUR_KEY_ID")
sc.hadoopConfiguration.set("fs.s3a.secret.key", "YOUR_SECRET_ACCESS_KEY")
```

If you’re using Python, use the following operations:

```
sc._jsc.hadoopConfiguration().set("fs.s3n.awsAccessKeyId", "YOUR_KEY_ID")
sc._jsc.hadoopConfiguration().set("fs.s3n.awsSecretAccessKey", "YOUR_SECRET_ACCESS_KEY")
```
+ Encode authentication keys in the `tempdir` URL. For example, the URI `s3n://ACCESSKEY:SECRETKEY@bucket/path/to/temp/dir` encodes the key pair (`ACCESSKEY`, `SECRETKEY`).

## Authentication between Redshift and Amazon S3
<a name="redshift-s3-authentication"></a>

 If you’re using the COPY and UNLOAD commands in your query, you also must grant Amazon S3 access to Amazon Redshift to run queries on your behalf. To do so, first [authorize Amazon Redshift to access other AWS services](https://docs.aws.amazon.com/redshift/latest/mgmt/authorizing-redshift-service.html), then authorize the [ COPY and UNLOAD operations using IAM roles](https://docs.aws.amazon.com/redshift/latest/mgmt/copy-unload-iam-role.html). 

As a best practice, we recommend attaching permissions policies to an IAM role and then assigning it to users and groups as needed. For more information, see [Identity and access management in Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/mgmt/redshift-iam-authentication-access-control.html).

## Integration with AWS Secrets Manager
<a name="redshift-secrets-manager-authentication"></a>

You can retrieve your Redshift username and password credentials from a stored secret in AWS Secrets Manager. To automatically supply Redshift credentials, use the `secret.id` parameter. For more information about how to create a Redshift credentials secret, see [Create an AWS Secrets Manager database secret](https://docs.aws.amazon.com/secretsmanager/latest/userguide/create_database_secret.html).
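As a sketch, the connector options might be assembled as follows. The secret ARN, cluster endpoint, and bucket are placeholders; `secret.id` is the connector parameter named above.

```python
# Sketch: Spark connector options that source credentials from a
# Secrets Manager secret. All values below are placeholders.
options = {
    "url": "jdbc:redshift://examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com:5439/dev",
    "dbtable": "event",
    "tempdir": "s3a://example-bucket/temp/",
    "secret.id": "arn:aws:secretsmanager:us-west-2:123456789012:secret:redshift-creds",
}

# With PySpark available, the options would be applied like this:
# df = (spark.read
#       .format("io.github.spark_redshift_community.spark.redshift")
#       .options(**options)
#       .load())
```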

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/mgmt/redshift-spark-connector-authentication.html)

**Note**  
 Acknowledgement: This documentation contains sample code and language developed by the [Apache Software Foundation](http://www.apache.org/) licensed under the [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0). 

# Performance improvements with pushdown
<a name="spark-redshift-connector-pushdown"></a>

 The Spark connector automatically applies predicate and query pushdown to optimize for performance. This support means that if you’re using a supported function in your query, the Spark connector will turn the function into a SQL query and run the query in Amazon Redshift. This optimization results in less data being retrieved, so Apache Spark can process less data and have better performance. By default, pushdown is automatically activated. To deactivate it, set `autopushdown` to false. 

```
import sqlContext.implicits._

val sample = sqlContext.read
    .format("io.github.spark_redshift_community.spark.redshift")
    .option("url", jdbcURL)
    .option("tempdir", tempS3Dir)
    .option("dbtable", "event")
    .option("autopushdown", "false")
    .load()
```

 The following functions are supported with pushdown. If you’re using a function that’s not in this list, the Spark connector will perform the function in Spark instead of Amazon Redshift, resulting in unoptimized performance. For a complete list of functions in Spark, see [Built-in Functions](https://spark.apache.org/docs/latest/api/sql/index.html). 
+ Aggregation functions
  + avg
  + count
  + max
  + min
  + sum
  + stddev_samp
  + stddev_pop
  + var_samp
  + var_pop
+ Boolean operators
  + in
  + isnull
  + isnotnull
  + contains
  + endswith
  + startswith
+ Logical operators
  + and
  + or
  + not (or !)
+ Mathematical functions
  + +
  + -
  + *
  + /
  + - (unary)
  + abs
  + acos
  + asin
  + atan
  + ceil
  + cos
  + exp
  + floor
  + greatest
  + least
  + log10
  + pi
  + pow
  + round
  + sin
  + sqrt
  + tan
+ Miscellaneous functions
  + cast
  + coalesce
  + decimal
  + if
  + in
+ Relational operators
  + !=
  + =
  + >
  + >=
  + <
  + <=
+ String functions
  + ascii
  + lpad
  + rpad
  + translate
  + upper
  + lower
  + length
  + trim
  + ltrim
  + rtrim
  + like
  + substring
  + concat
+ Time and date functions
  + add_months
  + date
  + date_add
  + date_sub
  + date_trunc
  + timestamp
  + trunc
+ Mathematical operations
  + CheckOverflow
  + PromotePrecision
+ Relational operations
  + Aliases (for example, AS)
  + CaseWhen
  + Distinct
  + InSet
  + Joins and cross joins
  + Limits
  + Unions, union all
  + ScalarSubquery
  + Sorts (ascending and descending)
  + UnscaledValue
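
Because unsupported functions silently fall back to Spark-side execution, it can help to audit a query's functions before expecting pushdown. The following hypothetical helper (not part of the connector) checks function names against a sample of the list above.

```python
# Hypothetical helper: flag functions that would NOT be pushed down.
# The set below is a sample of the supported list, not exhaustive.
PUSHDOWN_FUNCS = {
    "avg", "count", "max", "min", "sum",
    "abs", "ceil", "floor", "round", "sqrt",
    "upper", "lower", "trim", "substring", "concat",
    "coalesce", "cast",
}

def unsupported_functions(used):
    """Return the functions that would run in Spark, not in Redshift."""
    return sorted(set(used) - PUSHDOWN_FUNCS)

# regexp_replace is not in the supported list, so it stays in Spark.
leftover = unsupported_functions(["sum", "upper", "regexp_replace"])
```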

# Other configuration options
<a name="spark-redshift-connector-other-config"></a>

On this page, you can find descriptions for the options that you can specify for the Amazon Redshift Spark connector.

## Maximum size of string columns
<a name="spark-redshift-connector-other-config-max-size"></a>

Redshift creates string columns as text columns when creating tables, and stores them as VARCHAR(256). If you want columns that support larger sizes, you can use `maxlength` to specify the maximum length of string columns. The following is an example of how to specify `maxlength`. 

```
columnLengthMap.foreach { case (colName, length) =>
  val metadata = new MetadataBuilder().putLong("maxlength", length).build()
  df = df.withColumn(colName, df(colName).as(colName, metadata))
}
```

## Column type
<a name="spark-redshift-connector-other-config-column-type"></a>

To set a column type, use the `redshift_type` field.

```
columnTypeMap.foreach { case (colName, colType) =>
  val metadata = new MetadataBuilder().putString("redshift_type", colType).build()
  df = df.withColumn(colName, df(colName).as(colName, metadata))
}
```

## Compression encoding on a column
<a name="spark-redshift-connector-other-config-compression-encoding"></a>

 To use a specific compression encoding on a column, use the `encoding` field. For a full list of supported compression encodings, see [Compression encodings](https://docs.aws.amazon.com/redshift/latest/dg/c_Compression_encodings.html). 

## Description for a column
<a name="spark-redshift-connector-other-config-description"></a>

To set a description, use the `description` field.

## Unloading results in text format
<a name="spark-redshift-connector-other-config-unload-as-text"></a>

 By default, the result is unloaded to Amazon S3 in Parquet format. To unload the result as a pipe-delimited text file, specify the following option. 

```
.option("unload_s3_format", "TEXT")
```

## Pushdown statements
<a name="spark-redshift-connector-other-config-lazy-pushdown"></a>

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/mgmt/spark-redshift-connector-other-config.html)

## Connector parameters
<a name="spark-redshift-connector-other-config-spark-parameters"></a>

The parameter map or `OPTIONS` in Spark SQL supports the following settings.

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/mgmt/spark-redshift-connector-other-config.html)

**Note**  
 Acknowledgement: This documentation contains sample code and language developed by the [Apache Software Foundation](http://www.apache.org/) licensed under the [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0). 

# Supported data types
<a name="spark-redshift-connector-data-types"></a>

The following data types in Amazon Redshift are supported with the Spark connector. For a complete list of supported data types in Amazon Redshift, see [Data types](https://docs.aws.amazon.com/redshift/latest/dg/c_Supported_data_types.html). If a data type is not in the table below, it's not supported in the Spark connector.

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/mgmt/spark-redshift-connector-data-types.html)

## Complex data types
<a name="spark-redshift-connector-complex-data-types"></a>

 You can use the Spark connector to read and write Spark complex data types such as `ArrayType`, `MapType`, and `StructType` to and from Redshift SUPER data type columns. If you provide a schema during a read operation, the data in the column is converted to its corresponding complex types in Spark, including any nested types. Additionally, if `autopushdown` is enabled, projection of nested attributes, map values, and array indices is pushed down to Redshift, so the entire nested data structure no longer needs to be unloaded when accessing just a portion of the data. 

When you write DataFrames from the connector, any column of type `MapType` (using `StringType`), `StructType`, or `ArrayType` is written to a Redshift SUPER data type column. When writing these nested data structures, the `tempformat` parameter must be of type `CSV`, `CSV GZIP`, or `PARQUET`. Using `AVRO` will cause an exception. Writing a `MapType` data structure that has a key type other than `StringType` will also cause an exception. 
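Those two failure modes can be caught before a write with a pre-flight check. The following helper is hypothetical (not part of the connector API) and simply mirrors the rules stated above.

```python
# Hypothetical pre-flight check mirroring the rules above: writes of
# nested (SUPER-bound) columns require a CSV, CSV GZIP, or PARQUET
# tempformat, and MapType keys must be StringType.
NESTED_OK_TEMPFORMATS = {"CSV", "CSV GZIP", "PARQUET"}

def validate_nested_write(tempformat, map_key_type=None):
    if tempformat.upper() not in NESTED_OK_TEMPFORMATS:
        raise ValueError(f"tempformat {tempformat!r} is not supported for nested types")
    if map_key_type is not None and map_key_type != "StringType":
        raise ValueError("MapType keys must be StringType")
    return True

ok = validate_nested_write("CSV", map_key_type="StringType")
```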

### StructType
<a name="spark-redshift-connector-complex-data-types-examples-structtype"></a>

The following example demonstrates how to create a table with a SUPER data type that contains a struct.

```
create table contains_super (a super);
```

You can then use the connector to query a `StringType` field `hello` from the SUPER column `a` in the table using a schema like in the following example.

```
import org.apache.spark.sql.types._

val sc = // existing SparkContext
val sqlContext = new SQLContext(sc)

val schema = StructType(StructField("a", StructType(StructField("hello", StringType) ::Nil)) :: Nil)

val helloDF = sqlContext.read
.format("io.github.spark_redshift_community.spark.redshift")
.option("url", jdbcURL )
.option("tempdir", tempS3Dir)
.option("dbtable", "contains_super")
.schema(schema)
.load().selectExpr("a.hello")
```

The following example demonstrates how to write a struct to the column `a`.

```
import org.apache.spark.sql.types._
import org.apache.spark.sql._

val sc = // existing SparkContext
val sqlContext = new SQLContext(sc)

val schema = StructType(StructField("a", StructType(StructField("hello", StringType) ::Nil)) :: Nil)
val data = sc.parallelize(Seq(Row(Row("world"))))
val mydf = sqlContext.createDataFrame(data, schema)

mydf.write.format("io.github.spark_redshift_community.spark.redshift").
option("url", jdbcUrl).
option("dbtable", tableName).
option("tempdir", tempS3Dir).
option("tempformat", "CSV").
mode(SaveMode.Append).save
```

### MapType
<a name="spark-redshift-connector-complex-data-types-examples-maptype"></a>

If you prefer to use a `MapType` to represent your data, you can use a `MapType` data structure in your schema and retrieve the value corresponding to a key in the map. Note that all keys in your `MapType` data structure must be of type String, and all of the values must be of the same type, such as int. 

The following example demonstrates how to get the value of the key `hello` in the column `a`.

```
import org.apache.spark.sql.types._

val sc = // existing SparkContext
val sqlContext = new SQLContext(sc)

val schema = StructType(StructField("a", MapType(StringType, IntegerType))::Nil)

val helloDF = sqlContext.read
    .format("io.github.spark_redshift_community.spark.redshift")
    .option("url", jdbcURL )
    .option("tempdir", tempS3Dir)
    .option("dbtable", "contains_super")
    .schema(schema)
    .load().selectExpr("a['hello']")
```

### ArrayType
<a name="spark-redshift-connector-complex-data-types-examples-arraytype"></a>

If the column contains an array instead of a struct, you can use the connector to query the first element in the array.

```
import org.apache.spark.sql.types._

val sc = // existing SparkContext
val sqlContext = new SQLContext(sc)

val schema = StructType(StructField("a", ArrayType(IntegerType)):: Nil)

val helloDF = sqlContext.read
    .format("io.github.spark_redshift_community.spark.redshift")
    .option("url", jdbcURL )
    .option("tempdir", tempS3Dir)
    .option("dbtable", "contains_super")
    .schema(schema)
    .load().selectExpr("a[0]")
```

### Limitations
<a name="spark-redshift-connector-complex-data-types-limitations"></a>

Using complex data types with the Spark connector has the following limitations:
+ All nested struct field names and map keys must be lowercase. If you're querying for complex field names that contain uppercase letters, you can work around this by omitting the schema and using the `from_json` Spark function to convert the returned string locally.
+ Any map fields used in read or write operations must have only `StringType` keys.
+ Only `CSV`, `CSV GZIP`, and `PARQUET` are supported `tempformat` values for writing complex types to Redshift. Attempting to use `AVRO` throws an exception.

# Configuring a connection for ODBC driver version 2.x for Amazon Redshift
<a name="odbc20-install"></a>

You can use an ODBC connection to connect to your Amazon Redshift cluster from many third-party SQL client tools and applications. If your client tool supports JDBC, you can choose to use that type of connection rather than ODBC due to the ease of configuration that JDBC provides. However, if your client tool doesn't support JDBC, you can follow the steps in this section to set up an ODBC connection on your client computer or Amazon EC2 instance.

Amazon Redshift provides 64-bit ODBC drivers for Linux, Windows, and macOS operating systems; the 32-bit ODBC drivers are discontinued and will receive no further updates except urgent security patches.

For the latest information about ODBC driver changes, see the [change log](https://github.com/aws/amazon-redshift-odbc-driver/blob/master/CHANGELOG.md).

**Topics**
+ [Getting the ODBC URL](odbc20-getting-url.md)
+ [Using an Amazon Redshift ODBC driver on Microsoft Windows](odbc20-install-config-win.md)
+ [Using an Amazon Redshift ODBC driver on Linux](odbc20-install-config-linux.md)
+ [Using an Amazon Redshift ODBC driver on Apple macOS](odbc20-install-config-mac.md)
+ [Authentication methods](odbc20-authentication-ssl.md)
+ [Data types conversions](odbc20-converting-data-types.md)
+ [ODBC driver options](odbc20-configuration-options.md)
+ [Previous ODBC driver versions](odbc20-previous-versions.md)

# Getting the ODBC URL
<a name="odbc20-getting-url"></a>

Amazon Redshift displays the ODBC URL for your cluster in the Amazon Redshift console. This URL contains the information required to set up the connection between your client computer and the database.

An ODBC URL has the following format: 

```
Driver={driver}; Server=endpoint_host; Database=database_name; UID=user_name; PWD=password; Port=port_number
```

The preceding format's fields have the following values:


| Field | Value | 
| --- | --- | 
| Driver | The name of the 64-bit ODBC driver to use: Amazon Redshift ODBC Driver (x64) | 
| Server | The endpoint host of the Amazon Redshift cluster. | 
| Database | The database that you created for your cluster. | 
| UID | The user name of a database user account that has permission to connect to the database. Although this permission is granted at the database level rather than the cluster level, you can use the Redshift admin user account that you set up when you launched the cluster. | 
| PWD | The password for the database user account to connect to the database. | 
| Port | The port number that you specified when you launched the cluster. If you have a firewall, ensure that this port is open for you to use. | 

The following is an example ODBC URL: 

```
Driver={Amazon Redshift ODBC Driver (x64)}; Server=examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com; Database=dev; UID=adminuser; PWD=insert_your_admin_user_password_here; Port=5439
```

For information on where to find the ODBC URL, see [Finding your cluster connection string](https://docs.aws.amazon.com/redshift/latest/mgmt/configuring-connections.html#connecting-connection-string). 
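The field table maps one-to-one onto the connection string. As an illustration, a small hypothetical helper (not part of any driver) can assemble the URL from those fields:

```python
def build_odbc_url(server, database, uid, pwd, port=5439,
                   driver="Amazon Redshift ODBC Driver (x64)"):
    """Assemble an ODBC connection string from the fields above."""
    return (f"Driver={{{driver}}}; Server={server}; Database={database}; "
            f"UID={uid}; PWD={pwd}; Port={port}")

url = build_odbc_url(
    "examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com",
    "dev", "adminuser", "example-password")
```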

# Using an Amazon Redshift ODBC driver on Microsoft Windows
<a name="odbc20-install-config-win"></a>

You must install the Amazon Redshift ODBC driver on client computers accessing an Amazon Redshift data warehouse. For each computer where you install the driver, there are the following minimum requirements: 
+ Administrator rights on the machine. 
+ The machine meets the following system requirements:
  + One of the following operating systems:
    + Windows 10 or 8.1.
    + Windows Server 2019, 2016, or 2012.
  + 100 MB of available disk space.
  + Visual C++ Redistributable for Visual Studio 2015 for 64-bit Windows installed. You can download the installation package at [Download Visual C++ Redistributable for Visual Studio 2022](https://visualstudio.microsoft.com/downloads/#microsoft-visual-c-redistributable-for-visual-studio-2022) on the Microsoft website.

# Downloading and installing the Amazon Redshift ODBC driver
<a name="odbc20-install-win"></a>

Use the following procedure to download and install the Amazon Redshift ODBC driver for Windows operating systems. Only use a different driver if you're running a third-party application that is certified for use with Amazon Redshift, and that application requires that specific driver.

To download and install the ODBC driver: 

1. Download the following driver: [64-bit ODBC driver version 2.1.15.0](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.15.0/AmazonRedshiftODBC64-2.1.15.0.msi) 

   The name for this driver is **Amazon Redshift ODBC Driver (x64)**.

1. Review the [ Amazon Redshift ODBC driver version 2.x license](https://github.com/aws/amazon-redshift-odbc-driver/blob/master/LICENSE).

1. Double-click the .msi file, then follow the steps in the wizard to install the driver.

# Creating a system DSN entry for an ODBC connection
<a name="odbc20-dsn-win"></a>

After you download and install the ODBC driver, add a data source name (DSN) entry to the client computer or Amazon EC2 instance. SQL client tools can use this data source to connect to the Amazon Redshift database. 

We recommend that you create a system DSN instead of a user DSN. Some applications load the data using a different database user account, and might not be able to detect user DSNs that are created under another database user account.

**Note**  
For authentication using AWS Identity and Access Management (IAM) credentials or identity provider (IdP) credentials, additional steps are required. For more information, see [ Configure a JDBC or ODBC connection to use IAM credentials](https://docs.aws.amazon.com/redshift/latest/mgmt/generating-iam-credentials-configure-jdbc-odbc.html).

To create a system DSN entry for an ODBC connection:

1. In the **Start** menu, type "ODBC Data Sources." Choose **ODBC Data Sources**.

   Make sure that you choose the ODBC Data Source Administrator that has the same bitness as the client application that you are using to connect to Amazon Redshift. 

1. In the **ODBC Data Source Administrator**, choose the **Drivers** tab and locate the driver **Amazon Redshift ODBC Driver (x64)**.

1. Choose the **System DSN** tab to configure the driver for all users on the computer, or the **User DSN** tab to configure the driver for your database user account only.

1. Choose **Add**. The **Create New Data Source** window opens.

1. Choose the **Amazon Redshift ODBC driver (x64)**, and then choose **Finish**. The **Amazon Redshift ODBC Driver DSN Setup** window opens.

1. Under the **Connection Settings** section, enter the following information: 
   + 

**Data source name**  
 Enter a name for the data source. For example, if you followed the *Amazon Redshift Getting Started Guide*, you might type `exampleclusterdsn` to make it easy to remember the cluster that you associate with this DSN. 
   + 

**Server**  
 Specify the endpoint host for your Amazon Redshift cluster. You can find this information in the Amazon Redshift console on the cluster's details page. For more information, see [ Configuring connections in Amazon Redshift ](https://docs.aws.amazon.com/redshift/latest/mgmt/configuring-connections.html). 
   + 

**Port**  
 Enter the port number that the database uses. This is the port that you specified when you created, modified, or migrated the cluster; make sure that access to this port is allowed. 
   + 

**Database**  
 Enter the name of the Amazon Redshift database. If you launched your cluster without specifying a database name, enter `dev`. Otherwise, use the name that you chose during the launch process. If you followed the *Amazon Redshift Getting Started Guide*, enter `dev`. 

1. Under the **Authentication** section, specify the configuration options to configure standard or IAM authentication. 

1. Choose **SSL Options** and specify a value for the following:
   + 

**Authentication mode**  
Choose a mode for handling Secure Sockets Layer (SSL). In a test environment, you might use `prefer`. However, for production environments and when secure data exchange is required, use `verify-ca` or `verify-full`.
   + 

**Min TLS**  
Optionally, choose the minimum version of TLS/SSL that the driver allows the data store to use for encrypting connections. For example, if you specify TLS 1.2, TLS 1.1 can't be used to encrypt connections. The default version is TLS 1.2.

1.  In the **Proxy** tab, specify any proxy connection setting. 

1. In the **Cursor** tab, specify options on how to return query results to your SQL client tool or application. 

1. In **Advanced Options**, specify values for `logLevel`, `logPath`, `compression`, and other options. 

1. Choose **Test**. If the client computer can connect to the Amazon Redshift database, the following message appears: **Connection successful**. If the client computer fails to connect to the database, you can troubleshoot possible issues by generating a log file and contacting AWS support. For information on generating logs, see (LINK). 

1.  Choose **OK**. 

# Using an Amazon Redshift ODBC driver on Linux
<a name="odbc20-install-config-linux"></a>

You must install the Amazon Redshift ODBC driver on client computers accessing an Amazon Redshift data warehouse. For each computer where you install the driver, there are the following minimum requirements: 
+ Root access on the machine.
+ One of the following distributions:
  + Red Hat® Enterprise Linux® (RHEL) 8 or later
  + CentOS 8 or later.
+ 150 MB of available disk space.
+ unixODBC 2.2.14 or later.
+ glibc 2.26 or later.

# Downloading and installing the Amazon Redshift ODBC driver
<a name="odbc20-install-linux"></a>

To download and install the Amazon Redshift ODBC driver version 2.x for Linux:

1.  Download the following driver: 
   + [x86 64-bit RPM driver version 2.1.15.0](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.15.0/AmazonRedshiftODBC-64-bit-2.1.15.0.x86_64.rpm) 
   + [ARM 64-bit RPM driver version 2.1.15.0](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.15.0/AmazonRedshiftODBC-64-bit-2.1.15.0.aarch64.rpm) 
**Note**  
32-bit ODBC drivers are discontinued. Further updates will not be released, except for urgent security patches.

1.  Go to the location where you downloaded the package, and then run one of the following commands. Use the command that corresponds to your Linux distribution. 

   On RHEL and CentOS operating systems, run the following command:

   ```
   yum --nogpgcheck localinstall RPMFileName
   ```

   Replace `RPMFileName` with the RPM package file name. For example, the following command demonstrates installing the 64-bit driver:

   ```
   yum --nogpgcheck localinstall AmazonRedshiftODBC-64-bit-2.x.xx.xxxx.x86_64.rpm
   ```

# Using an ODBC driver manager to configure the ODBC driver
<a name="odbc20-config-linux"></a>

On Linux, you use an ODBC driver manager to configure the ODBC connection settings. ODBC driver managers use configuration files to define and configure ODBC data sources and drivers. The ODBC driver manager that you use depends on the operating system that you use.

## Configuring the ODBC driver using unixODBC driver manager
<a name="odbc20-config-unixodbc-linux"></a>

The following files are required to configure the Amazon Redshift ODBC driver: 
+ `amazon.redshiftodbc.ini`
+ `odbc.ini`
+ `odbcinst.ini`

 If you installed to the default location, the `amazon.redshiftodbc.ini` configuration file is located in `/opt/amazon/redshiftodbcx64`.

 Additionally, under `/opt/amazon/redshiftodbcx64`, you can find sample `odbc.ini` and `odbcinst.ini` files. You can use these files as examples for configuring the Amazon Redshift ODBC driver and the data source name (DSN).

 We don't recommend using the Amazon Redshift ODBC driver installation directory for the configuration files. The sample files in the installed directory are for example purposes only. If you reinstall the Amazon Redshift ODBC driver at a later time, or upgrade to a newer version, the installation directory is overwritten. You will lose any changes that you might have made to files in the installation directory.

 To avoid this, copy the `amazon.redshiftodbc.ini` file to a directory other than the installation directory. If you copy this file to the user's home directory, add a period (.) to the beginning of the file name to make it a hidden file.

 For the `odbc.ini` and `odbcinst.ini` files, either use the configuration files in the user's home directory or create new versions in another directory. By default, your Linux operating system should have an `odbc.ini` file and an `odbcinst.ini` file in the user's home directory (`/home/$USER` or `~/.`). These default files are hidden files, which is indicated by the dot (.) in front of each file name. These files appear only when you use the `-a` flag to list the directory contents.

 Whichever option you choose for the `odbc.ini` and `odbcinst.ini` files, modify the files to add driver and DSN configuration information. If you create new files, you also need to set environment variables to specify where these configuration files are located.

 By default, ODBC driver managers are configured to use hidden versions of the `odbc.ini` and `odbcinst.ini` configuration files (named `.odbc.ini` and `.odbcinst.ini`) located in the home directory. They also are configured to use the `amazon.redshiftodbc.ini` file in the driver installation directory. If you store these configuration files elsewhere, set the environment variables described following so that the driver manager can locate the files.

 If you are using unixODBC, do the following: 
+  Set `ODBCINI` to the full path and file name of the `odbc.ini` file. 
+  Set `ODBCSYSINI` to the full path of the directory that contains the `odbcinst.ini` file. 
+  Set `AMAZONREDSHIFTODBCINI` to the full path and file name of the `amazon.redshiftodbc.ini` file. 

The following is an example of setting the values above:

```
export ODBCINI=/usr/local/odbc/odbc.ini 
export ODBCSYSINI=/usr/local/odbc 
export AMAZONREDSHIFTODBCINI=/etc/amazon.redshiftodbc.ini
```
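The driver manager can locate the files only if these variables point at paths that actually exist. The following sketch is not part of the driver and its helper name is invented for illustration; it checks each variable before an ODBC application starts:

```python
import os

# Sketch: confirm that the unixODBC environment variables described above
# are set and point at existing paths. Illustrative helper only.
REQUIRED_VARS = ["ODBCINI", "ODBCSYSINI", "AMAZONREDSHIFTODBCINI"]

def missing_odbc_config(env):
    """Return the names of variables that are unset or point at missing paths."""
    missing = []
    for name in REQUIRED_VARS:
        path = env.get(name)
        if not path or not os.path.exists(path):
            missing.append(name)
    return missing

if __name__ == "__main__":
    problems = missing_odbc_config(os.environ)
    if problems:
        print("Check these ODBC settings:", ", ".join(problems))
```

Running this before your application starts makes a misconfigured `ODBCINI` fail fast instead of surfacing later as a driver-not-found error.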

## Configuring a connection using a data source name (DSN) on Linux
<a name="odbc20-dsn-linux"></a>

When connecting to your data store using a data source name (DSN), configure the `odbc.ini` file to define data source names (DSNs). Set the properties in the `odbc.ini` file to create a DSN that specifies the connection information for your data store.

On Linux operating systems, use the following format:

```
[ODBC Data Sources]
driver_name=dsn_name

[dsn_name]
Driver=path/driver_file
Host=cluster_endpoint
Port=port_number
Database=database_name
locale=locale
```

The following example shows the configuration for `odbc.ini` with the 64-bit ODBC driver on Linux operating systems.

```
[ODBC Data Sources]
Amazon_Redshift_x64=Amazon Redshift ODBC Driver (x64)

[Amazon_Redshift_x64]
Driver=/opt/amazon/redshiftodbcx64/librsodbc64.so
Host=examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com
Port=5932
Database=dev
locale=en-US
```
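Once the DSN is defined, client code references it by name. The following sketch builds the semicolon-separated `KEY=VALUE` connection string that ODBC client libraries such as pyodbc accept; the DSN name matches the example above, while the user name and password are placeholders.

```python
# Sketch: build a DSN-based ODBC connection string. The DSN name matches the
# odbc.ini example above; UID and PWD are placeholder values.
def dsn_connection_string(dsn, uid, pwd):
    # ODBC connection strings are semicolon-separated KEY=VALUE pairs.
    return f"DSN={dsn};UID={uid};PWD={pwd}"

conn_str = dsn_connection_string("Amazon_Redshift_x64", "awsuser", "example-password")
# With pyodbc installed (an assumption, not part of the driver setup):
# import pyodbc
# conn = pyodbc.connect(conn_str)
```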

## Configuring a connection without a DSN on Linux
<a name="odbc20-no-dsn-linux"></a>

 To connect to your data store through a connection that doesn't have a DSN, define the driver in the `odbcinst.ini` file. Then provide a DSN-less connection string in your application.

On Linux operating systems, use the following format:

```
[ODBC Drivers]
driver_name=Installed
...

[driver_name]
Description=driver_description
Driver=path/driver_file
...
```

The following example shows the configuration for `odbcinst.ini` with the 64-bit ODBC driver on Linux operating systems.

```
[ODBC Drivers]
Amazon Redshift ODBC Driver (x64)=Installed

[Amazon Redshift ODBC Driver (x64)]
Description=Amazon Redshift ODBC Driver (64-bit)
Driver=/opt/amazon/redshiftodbcx64/librsodbc64.so
```
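In a DSN-less connection, the application supplies every property itself and names the driver registered in `odbcinst.ini`. A sketch of such a string, with placeholder host, database, and credentials:

```python
# Sketch: a DSN-less connection string that names the driver registered in
# the odbcinst.ini example above. Host, port, database, and credentials are
# placeholder values.
def dsnless_connection_string(driver, host, port, database, uid, pwd):
    # The driver name is wrapped in braces because it contains spaces.
    return (f"Driver={{{driver}}};Host={host};Port={port};"
            f"Database={database};UID={uid};PWD={pwd}")

conn_str = dsnless_connection_string(
    "Amazon Redshift ODBC Driver (x64)",
    "examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com",
    5439,
    "dev",
    "awsuser",
    "example-password",
)
```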

# Using an Amazon Redshift ODBC driver on Apple macOS
<a name="odbc20-install-config-mac"></a>

You must install the Amazon Redshift ODBC driver on client computers accessing an Amazon Redshift data warehouse. For each computer where you install the driver, there are the following minimum requirements: 
+ Root access on the machine. 
+ Apple macOS System Requirements:
  + A 64-bit version of Apple macOS version 11.7 or later (such as macOS Big Sur, Monterey, or Ventura) is required. The Amazon Redshift ODBC driver supports only 64-bit client applications.
  + 150 MB of available disk space.
  + The driver supports applications built with iODBC 3.52.9 or unixODBC 2.3.7.

# Downloading and installing the Amazon Redshift ODBC driver
<a name="odbc20-install-mac"></a>

Use the following procedure to download and install the Amazon Redshift ODBC driver on Apple macOS. Only use a different driver if you're running a third-party application that is certified for use with Amazon Redshift, and that application requires that specific driver.

To download and install the ODBC driver: 

1. Download the following driver: [64-bit ODBC driver version 2.1.15.0](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.15.0/AmazonRedshiftODBC-64-bit.2.1.15.0.universal.pkg) 

   This driver is supported on both x86_64 and arm64 architectures. The name for this driver is **Amazon Redshift ODBC Driver (x64)**.

1. Review the [ Amazon Redshift ODBC driver version 2.x license](https://github.com/aws/amazon-redshift-odbc-driver/blob/master/LICENSE).

1. Double-click the .pkg file, then follow the steps in the wizard to install the driver. Alternatively, run the following command:

   ```
   sudo installer -pkg PKGFileName -target /
   ```

   Replace `PKGFileName` with the pkg package file name. For example, the following command demonstrates installing the 64-bit driver:

   ```
   sudo installer -pkg ./AmazonRedshiftODBC-64-bit.X.X.XX.X.universal.pkg -target /
   ```

# Using an ODBC driver manager to configure the ODBC driver
<a name="odbc20-config-mac"></a>

On Mac, you use an ODBC driver manager to configure the ODBC connection settings. ODBC driver managers use configuration files to define and configure ODBC data sources and drivers. The ODBC driver manager that you use depends on the operating system that you use.

## Configuring the ODBC driver using iODBC or unixODBC driver manager
<a name="odbc20-config-iodbc-mac"></a>

The following files are required to configure the Amazon Redshift ODBC driver: 
+ `amazon.redshiftodbc.ini`
+ `odbc.ini`
+ `odbcinst.ini`

 If you installed to the default location, the `amazon.redshiftodbc.ini` configuration file is located in `/opt/amazon/redshiftodbcx64`.

 Additionally, under `/opt/amazon/redshiftodbcx64`, you can find sample `odbc.ini` and `odbcinst.ini` files. You can use these files as examples for configuring the Amazon Redshift ODBC driver and the data source name (DSN). The sample files in the installed directory are for example purposes only.

 We don't recommend using the Amazon Redshift ODBC driver installation directory for the configuration files. If you reinstall the Amazon Redshift ODBC driver at a later time, or upgrade to a newer version, the installation directory is overwritten. You will lose any changes that you might have made to files in the installation directory.

 To avoid this, copy the `odbc.ini`, `odbcinst.ini`, and `amazon.redshiftodbc.ini` files to a directory other than the installation directory. If you copy these files to the user's home directory, add a period (.) to the beginning of each file name to make them hidden files.

 Modify the files to add DSN configuration information. When you create new files, you also need to set environment variables to specify where these configuration files are located.

The following is an example of setting the environment variables:

```
export ODBCINI=/Library/ODBC/odbc.ini
export ODBCSYSINI=/Library/ODBC
export ODBCINSTINI=${ODBCSYSINI}/odbcinst.ini
```

For command-line applications, add the export commands to your shell startup file (for example, `~/.bash_profile` or `~/.zshrc`). 

For supported driver manager versions, see [Using an Amazon Redshift ODBC driver on Apple macOS](https://docs.aws.amazon.com/redshift/latest/mgmt/odbc20-install-config-mac.html). 

### Configuring a connection using a data source name (DSN) on Apple macOS
<a name="odbc20-dsn-mac"></a>

When connecting to your data store using a data source name (DSN), configure the `odbc.ini` file to define data source names (DSNs). Set the properties in the `odbc.ini` file to create a DSN that specifies the connection information for your Redshift data warehouse.

On Apple macOS, use the following format:

```
[ODBC Data Sources]
driver_name=dsn_name

[dsn_name]
Driver=path/driver_file
Host=cluster_endpoint
Port=port_number
Database=database_name
locale=locale
```

The following example shows the configuration for `odbc.ini` with the 64-bit ODBC driver on Apple macOS.

```
[ODBC Data Sources]
Amazon_Redshift_x64=Amazon Redshift ODBC Driver (x64)

[Amazon_Redshift_x64]
Driver=/opt/amazon/redshiftodbcx64/librsodbc64.dylib
Host=examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com
Port=5932
Database=dev
locale=en-US
```

### Configuring a connection without a DSN on Apple macOS
<a name="odbc20-no-dsn-mac"></a>

 To connect to your Redshift data warehouse through a connection that doesn't have a DSN, define the driver in the `odbcinst.ini` file. Then provide a DSN-less connection string in your application.

On Apple macOS, use the following format:

```
[ODBC Drivers]
driver_name=Installed
...

[driver_name]
Description=driver_description
Driver=path/driver_file
...
```

The following example shows the configuration for `odbcinst.ini` with the 64-bit ODBC driver on Apple macOS.

```
[ODBC Drivers]
Amazon Redshift ODBC Driver (x64)=Installed

[Amazon Redshift ODBC Driver (x64)]
Description=Amazon Redshift ODBC Driver (64-bit)
Driver=/opt/amazon/redshiftodbcx64/librsodbc64.dylib
```

# Authentication methods
<a name="odbc20-authentication-ssl"></a>

To protect data from unauthorized access, Amazon Redshift data stores require all connections to be authenticated using user credentials.

The following table illustrates the required and optional connection options for each authentication method that can be used to connect to the Amazon Redshift ODBC driver version 2.x:


| Authentication Method | Required | Optional | 
| --- | --- | --- | 
|  Standard  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/mgmt/odbc20-authentication-ssl.html)  |   | 
|  IAM Profile  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/mgmt/odbc20-authentication-ssl.html)  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/mgmt/odbc20-authentication-ssl.html)   **ClusterID** and **Region** must be set in **Host** if they are not set separately.    | 
|  IAM Credentials  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/mgmt/odbc20-authentication-ssl.html)  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/mgmt/odbc20-authentication-ssl.html)   **ClusterID** and **Region** must be set in **Host** if they are not set separately.    | 
|  AD FS  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/mgmt/odbc20-authentication-ssl.html)  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/mgmt/odbc20-authentication-ssl.html)   **ClusterID** and **Region** must be set in **Host** if they are not set separately.    | 
|  Azure AD  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/mgmt/odbc20-authentication-ssl.html)  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/mgmt/odbc20-authentication-ssl.html)   **ClusterID** and **Region** must be set in **Host** if they are not set separately.    | 
|  JWT  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/mgmt/odbc20-authentication-ssl.html)  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/mgmt/odbc20-authentication-ssl.html)  | 
|  Okta  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/mgmt/odbc20-authentication-ssl.html)  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/mgmt/odbc20-authentication-ssl.html)   **ClusterID** and **Region** must be set in **Host** if they are not set separately.    | 
|  Ping Federate  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/mgmt/odbc20-authentication-ssl.html)  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/mgmt/odbc20-authentication-ssl.html)   **ClusterID** and **Region** must be set in **Host** if they are not set separately.    | 
|  Browser Azure AD  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/mgmt/odbc20-authentication-ssl.html)  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/mgmt/odbc20-authentication-ssl.html)   **ClusterID** and **Region** must be set in **Host** if they are not set separately.    | 
|  Browser SAML  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/mgmt/odbc20-authentication-ssl.html)  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/mgmt/odbc20-authentication-ssl.html)   **ClusterID** and **Region** must be set in **Host** if they are not set separately.    | 
|  Auth Profile  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/mgmt/odbc20-authentication-ssl.html)  |   | 
|  Browser Azure AD OAUTH2  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/mgmt/odbc20-authentication-ssl.html)  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/mgmt/odbc20-authentication-ssl.html)   **ClusterID** and **Region** must be set in **Host** if they are not set separately.    | 
|  AWS IAM Identity Center  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/mgmt/odbc20-authentication-ssl.html)  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/mgmt/odbc20-authentication-ssl.html)  | 
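As an illustration of how these methods map to connection properties, the sketch below assembles a connection string for the IAM Credentials method from driver options documented later in this section (**IAM**, **AccessKeyID**, **SecretAccessKey**, **ClusterID**). Every value shown is a placeholder.

```python
# Sketch: an IAM Credentials connection string built from driver options
# documented later in this section. All values are placeholders; as the
# table notes, ClusterID and Region can instead be embedded in Host.
options = {
    "Driver": "{Amazon Redshift ODBC Driver (x64)}",
    "Database": "dev",
    "IAM": "1",  # enable IAM authentication
    "AccessKeyID": "AKIAIOSFODNN7EXAMPLE",
    "SecretAccessKey": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
    "ClusterID": "examplecluster",
    "Region": "us-west-2",
}
conn_str = ";".join(f"{key}={value}" for key, value in options.items())
```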

## Using an external credentials service
<a name="odbc20-authentication-external"></a>

In addition to built-in support for AD FS, Azure AD, and Okta, the Windows version of the Amazon Redshift ODBC driver also provides support for other credentials services. The driver can authenticate connections using any SAML-based credential provider plugin of your choice. 

To configure an external credentials service on Windows:

1. Create an IAM profile that specifies the credential provider plugin and other authentication parameters as needed. The profile must be ASCII-encoded, and must contain the following key-value pair, where `PluginPath` is the full path to the plugin application: 

   ```
   plugin_name = PluginPath
   ```

   For example:

   ```
   plugin_name = C:\Users\kjson\myapp\CredServiceApp.exe 
   ```

   For information on how to create a profile, see [ Using a Configuration Profile ](https://docs.aws.amazon.com/redshift/latest/mgmt/options-for-providing-iam-credentials.html#using-configuration-profile) in the Amazon Redshift Cluster Management Guide.

1. Configure the driver to use this profile. The driver detects and uses the authentication settings specified in the profile.

# Data types conversions
<a name="odbc20-converting-data-types"></a>

The Amazon Redshift ODBC driver version 2.x supports many common data formats, converting between Amazon Redshift and SQL data types.

The following table lists the supported data type mappings.


| Amazon Redshift type | SQL type | 
| --- | --- | 
|  BIGINT  |  SQL_BIGINT  | 
|  BOOLEAN  |  SQL_BIT  | 
|  CHAR  |  SQL_CHAR  | 
|  DATE  |  SQL_TYPE_DATE  | 
|  DECIMAL  |  SQL_NUMERIC  | 
|  DOUBLE PRECISION  |  SQL_DOUBLE  | 
|  GEOGRAPHY  |  SQL_LONGVARBINARY  | 
|  GEOMETRY  |  SQL_LONGVARBINARY  | 
|  INTEGER  |  SQL_INTEGER  | 
|  REAL  |  SQL_REAL  | 
|  SMALLINT  |  SQL_SMALLINT  | 
|  SUPER  |  SQL_LONGVARCHAR  | 
|  TEXT  |  SQL_LONGVARCHAR  | 
|  TIME  |  SQL_TYPE_TIME  | 
|  TIMETZ  |  SQL_TYPE_TIME  | 
|  TIMESTAMP  |  SQL_TYPE_TIMESTAMP  | 
|  TIMESTAMPTZ  |  SQL_TYPE_TIMESTAMP  | 
|  VARBYTE  |  SQL_LONGVARBINARY  | 
|  VARCHAR  |  SQL_VARCHAR  | 

# ODBC driver options
<a name="odbc20-configuration-options"></a>

You can use driver configuration options to control the behavior of the Amazon Redshift ODBC driver. Driver options are not case sensitive.

In Microsoft Windows, you typically set driver options when you configure a data source name (DSN). You can also set driver options in the connection string when you connect programmatically, or by adding or changing registry keys in `HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI\your_DSN`.

In Linux, you set driver configuration options in your `odbc.ini` and `amazon.redshiftodbc.ini` files. Configuration options set in an `amazon.redshiftodbc.ini` file apply to all connections. In contrast, configuration options set in an `odbc.ini` file are specific to a connection. Configuration options set in `odbc.ini` take precedence over configuration options set in `amazon.redshiftodbc.ini`.
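The precedence rule above can be sketched with Python's standard `configparser`: when the same option appears in both files, the per-connection `odbc.ini` value wins. The file contents here are illustrative only.

```python
import configparser

# Sketch: per-connection settings in odbc.ini override driver-wide settings
# in amazon.redshiftodbc.ini. File contents are illustrative only.
driver_wide = configparser.ConfigParser()
driver_wide.read_string("[Driver]\nKeepAlive=0\n")

per_connection = configparser.ConfigParser()
per_connection.read_string("[my_dsn]\nKeepAlive=1\n")

# Resolve an option the way the driver does: the DSN value wins when present,
# otherwise the driver-wide value applies.
keepalive = per_connection["my_dsn"].get(
    "KeepAlive", driver_wide["Driver"].get("KeepAlive"))
```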

Following are descriptions for the options that you can specify for the Amazon Redshift ODBC version 2.x driver:

## AccessKeyID
<a name="odbc20-accesskeyid-option"></a>
+ **Default Value** – None
+ **Data Type** – String

 The IAM access key for the user or role. If you set this parameter, you must also specify **SecretAccessKey**.

This parameter is optional.

## app_id
<a name="odbc20-app-id-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The Okta-provided unique ID associated with your Amazon Redshift application.

This parameter is optional.

## ApplicationName
<a name="odbc20-application_name-option"></a>
+ **Default value** – None
+ **Data type** – String

The name of the client application to pass to Amazon Redshift for audit purposes. The application name that you provide appears in the `application_name` column of the [SYS_CONNECTION_LOG](https://docs.aws.amazon.com/redshift/latest/dg/SYS_CONNECTION_LOG.html) table. This helps track and troubleshoot connection sources when debugging issues.

This parameter is optional.

## app_name
<a name="odbc20-app-name-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The name of the Okta application that you use to authenticate the connection to Amazon Redshift.

This parameter is optional.

## AuthProfile
<a name="odbc20-authprofile-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The authentication profile used to manage the connection settings. If you set this parameter, you must also set **AccessKeyID** and **SecretAccessKey**. 

This parameter is optional.

## AuthType
<a name="odbc20-authtype-option"></a>
+ **Default Value** – Standard
+ **Data Type** – String

This option specifies the authentication mode that the driver uses when you configure a DSN using the Amazon Redshift ODBC Driver DSN Setup dialog box: 
+  Standard: Standard authentication using your Amazon Redshift user name and password. 
+  AWS Profile: IAM authentication using a profile.
+  AWS IAM Credentials: IAM authentication using IAM credentials. 
+  Identity Provider: AD FS: IAM authentication using Active Directory Federation Services (AD FS). 
+  Identity Provider: Auth Plugin: An authorization plugin that accepts an AWS IAM Identity Center token or OpenID Connect (OIDC) JSON-based identity tokens (JWT) from any web identity provider linked to AWS IAM Identity Center.
+  Identity Provider: Azure AD: IAM authentication using an Azure AD portal. 
+  Identity Provider: JWT: IAM authentication using a JSON Web Token (JWT). 
+  Identity Provider: Okta: IAM authentication using Okta. 
+  Identity Provider: PingFederate: IAM authentication using PingFederate. 

This option is available only when you configure a DSN using the Amazon Redshift ODBC Driver DSN Setup dialog box in the Windows driver. When you configure a connection using a connection string or on a non-Windows machine, the driver automatically determines whether to use Standard, AWS Profile, or AWS IAM Credentials authentication based on your specified credentials. To use an identity provider, you must set the **plugin_name** property. 

This parameter is required.

## AutoCreate
<a name="odbc20-autocreate-option"></a>
+ **Default Value** – 0
+ **Data Type** – Boolean

A boolean specifying whether the driver creates a new user when the specified user does not exist. 
+  1 | TRUE: If the user specified by the **UID** does not exist, the driver creates a new user. 
+  0 | FALSE: The driver does not create a new user. If the specified user does not exist, the authentication fails. 

This parameter is optional.

## CaFile
<a name="odbc20-cafile-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The file path to the CA certificate file used for some forms of IAM authentication. 

 This parameter is only available on Linux.

This parameter is optional.

## client_id
<a name="odbc20-client-id-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The client ID associated with your Amazon Redshift application in Azure AD. 

This parameter is required if authenticating through the Azure AD service.

## client_secret
<a name="odbc20-client-secret-option"></a>
+ **Default Value** – None
+ **Data Type** – String

 The secret key associated with your Amazon Redshift application in Azure AD. 

This parameter is required if authenticating through the Azure AD service.

## ClusterId
<a name="odbc20-clusterid-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The name of the Amazon Redshift cluster you want to connect to. It is used in IAM authentication. The Cluster ID is not specified in the **Server** parameter.

This parameter is optional.

## compression
<a name="odbc20-compression-option"></a>
+ **Default Value** – off
+ **Data Type** – String

The compression method used for wire protocol communication between the Amazon Redshift server and the client or driver.

You can specify the following values:
+ lz4: Sets the compression method used for wire protocol communication with Amazon Redshift to `lz4`. 
+ zstd: Sets the compression method used for wire protocol communication with Amazon Redshift to `zstd`. 
+  off: Doesn't use compression for wire protocol communication with Amazon Redshift. 

This parameter is optional.

## Database
<a name="odbc20-database-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The name of the Amazon Redshift database that you want to access.

This parameter is required.

## DatabaseMetadataCurrentDbOnly
<a name="odbc20-database-metadata-option"></a>
+ **Default Value** – 1
+ **Data Type** – Boolean

A boolean specifying whether the driver returns metadata from multiple databases and clusters.
+ 1 | TRUE: The driver returns metadata only from the current database. 
+ 0 | FALSE: The driver returns metadata across multiple Amazon Redshift databases and clusters. 

This parameter is optional.

## dbgroups_filter
<a name="odbc20-dbgroups-filter-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The regular expression you can specify to filter out DbGroups that are received from the SAML response to Amazon Redshift when using Azure, Browser Azure, and Browser SAML authentication types. 

This parameter is optional.

## Driver
<a name="odbc20-driver-option"></a>
+ **Default Value** – Amazon Redshift ODBC Driver (x64)
+ **Data Type** – String

The name of the driver. The only supported value is **Amazon Redshift ODBC Driver (x64)**.

This parameter is required if you do not set **DSN**.

## DSN
<a name="odbc20-dsn-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The name of the driver data source name. The application specifies the DSN in the SQLDriverConnect API.

This parameter is required if you do not set **Driver**.

## EndpointUrl
<a name="odbc20-endpointurl-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The overriding endpoint used to communicate with the Amazon Redshift Coral Service for IAM authentication.

This parameter is optional.

## ForceLowercase
<a name="odbc20-forcelowercase-option"></a>
+ **Default Value** – 0
+ **Data Type** – Boolean

A boolean specifying whether the driver lowercases all DbGroups sent from the identity provider to Amazon Redshift when using single sign-on authentication. 
+  1 | TRUE: The driver lowercases all DbGroups that are sent from the identity provider. 
+  0 | FALSE: The driver does not alter DbGroups. 

This parameter is optional.

## group_federation
<a name="odbc20-group-federation-option"></a>
+ **Default Value** – 0
+ **Data Type** – Boolean

A boolean specifying whether the `getClusterCredentialsWithIAM` API is used for obtaining temporary cluster credentials in provisioned clusters. This option lets IAM users integrate with Redshift database roles in provisioned clusters. Note that this option does not apply to Redshift Serverless namespaces.
+  1 | TRUE: The driver uses the `getClusterCredentialsWithIAM` API for obtaining temporary cluster credentials in provisioned clusters. 
+  0 | FALSE: The driver uses the default `getClusterCredentials` API for obtaining temporary cluster credentials in provisioned clusters. 

This parameter is optional.

## https_proxy_host
<a name="odbc20-https-proxy-host-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The host name or IP address of the proxy server through which you want to pass IAM authentication processes.

This parameter is optional.

## https_proxy_password
<a name="odbc20-https-proxy-password-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The password that you use to access the proxy server. It’s used for IAM authentication.

This parameter is optional.

## https_proxy_port
<a name="odbc20-https-proxy-port-option"></a>
+ **Default Value** – None
+ **Data Type** – Integer

The number of the port that the proxy server uses to listen for client connections. It’s used for IAM authentication.

This parameter is optional.

## https_proxy_username
<a name="odbc20-https-proxy-username-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The user name that you use to access the proxy server. It's used for IAM authentication.

This parameter is optional.

## IAM
<a name="odbc20-iam-option"></a>
+ **Default Value** – 0
+ **Data Type** – Boolean

A boolean specifying whether the driver uses an IAM authentication method to authenticate the connection. 
+  1 | TRUE: The driver uses one of the IAM authentication methods (an access key and secret key pair, a profile, or a credentials service). 
+  0 | FALSE: The driver uses standard authentication (your database user name and password). 

This parameter is optional.

## idc_client_display_name
<a name="odbc20-idc_client_display_name-option"></a>
+ **Default Value** – Amazon Redshift ODBC driver
+ **Data Type** – String

The display name to be used for the client that's using BrowserIdcAuthPlugin.

This parameter is optional.

## idc_region
<a name="odbc20-idc_region"></a>
+ **Default Value** – None
+ **Data Type** – String

The AWS region where the AWS IAM Identity Center instance is located.

This parameter is required only when authenticating using `BrowserIdcAuthPlugin` in the plugin_name configuration option.

## idp_host
<a name="odbc20-idp-host-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The IdP (identity provider) host you are using to authenticate into Amazon Redshift.

This parameter is optional.

## idp_port
<a name="odbc20-idp-port-option"></a>
+ **Default Value** – None
+ **Data Type** – Integer

The port for an IdP (identity provider) you are using to authenticate into Amazon Redshift. Depending on the port you selected when creating, modifying, or migrating the cluster, allow access to that port. 

This parameter is optional.

## idp_response_timeout
<a name="odbc20-idp-response-timeout-option"></a>
+ **Default Value** – 120
+ **Data Type** – Integer

The number of seconds that the driver waits for the SAML response from the identity provider when using SAML or Azure AD services through a browser plugin. 

This parameter is optional.

## idp_tenant
<a name="odbc20-idp-tenant-option"></a>
+ **Default Value** – None
+ **Data Type** – String

 The Azure AD tenant ID associated with your Amazon Redshift application.

This parameter is required if authenticating through the Azure AD service.

## idp_partition
<a name="odbc20-idp-partition-option"></a>
+ **Default Value** – None
+ **Data Type** – String

Specifies the cloud partition where your identity provider (IdP) is configured. This determines which IdP authentication endpoint the driver connects to.

If this parameter is left blank, the driver defaults to the commercial partition. Possible values are:
+ `us-gov`: Use this value if your IdP is configured in Azure Government. For example, Azure AD Government uses the endpoint `login.microsoftonline.us`.
+ `cn`: Use this value if your IdP is configured in the China cloud partition. For example, Azure AD China uses the endpoint `login.chinacloudapi.cn`.

This parameter is optional.

## idp_use_https_proxy
<a name="odbc20-idp-use-https-proxy-option"></a>
+ **Default Value** – 0
+ **Data Type** – Boolean

A boolean specifying whether the driver passes the authentication processes for identity providers (IdP) through a proxy server. 
+  1 | TRUE: The driver passes IdP authentication processes through a proxy server. 
+  0 | FALSE: The driver does not pass IdP authentication processes through a proxy server. 

This parameter is optional.

## InstanceProfile
<a name="odbc20-instanceprofile-option"></a>
+ **Default Value** – 0
+ **Data Type** – Boolean

A boolean specifying whether the driver uses the Amazon EC2 instance profile, when configured to use a profile for authentication.
+  1 | TRUE: The driver uses the Amazon EC2 instance profile. 
+  0 | FALSE: The driver uses the chained roles profile specified by the Profile Name option (**Profile**) instead. 

This parameter is optional.

## issuer\$1url
<a name="odbc20-issuer_url"></a>
+ **Default Value** – None
+ **Data Type** – String

 Points to the AWS IAM Identity Center server's instance endpoint. 

This parameter is required only when authenticating using `BrowserIdcAuthPlugin` in the plugin_name configuration option.

## KeepAlive
<a name="odbc20-keepalive-option"></a>
+ **Default Value** – 1
+ **Data Type** – Boolean

A boolean specifying whether the driver uses TCP keepalives to prevent connections from timing out.
+ 1 | TRUE: The driver uses TCP keepalives to prevent connections from timing out.
+ 0 | FALSE: The driver does not use TCP keepalives.

This parameter is optional.

## KeepAliveCount
<a name="odbc20-keepalivecount-option"></a>
+ **Default Value** – 0
+ **Data Type** – Integer

The number of TCP keepalive packets that can be lost before the connection is considered broken. When this parameter is set to 0, the driver uses the system default for this setting. 

This parameter is optional.

## KeepAliveInterval
<a name="odbc20-keepaliveinterval-option"></a>
+ **Default Value** – 0
+ **Data Type** – Integer

The number of seconds between each TCP keepalive retransmission. When this parameter is set to 0, the driver uses the system default for this setting. 

This parameter is optional.

## KeepAliveTime
<a name="odbc20-keepalivetime-option"></a>
+ **Default Value** – 0
+ **Data Type** – Integer

The number of seconds of inactivity before the driver sends a TCP keepalive packet. When this parameter is set to 0, the driver uses the system default for this setting. 

This parameter is optional.
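As a worked example, the keepalive options above can be combined in a connection string. The values shown are illustrative, not recommendations:

```
KeepAlive=1;KeepAliveTime=60;KeepAliveInterval=5;KeepAliveCount=3;
```

With these settings, the driver sends a keepalive probe after 60 seconds of inactivity, retransmits every 5 seconds, and considers the connection broken after 3 lost probes.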

## listen_port
<a name="odbc20-listen-port-option"></a>
+ **Default Value** – 7890
+ **Data Type** – Integer

The port that the driver uses to receive the SAML response from the identity provider or authorization code when using SAML, Azure AD, or AWS IAM Identity Center services through a browser plugin.

This parameter is optional.

## login_url
<a name="odbc20-login-url-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The URL for the resource on the identity provider's website when using the generic Browser SAML plugin.

This parameter is required if authenticating with the SAML or Azure AD services through a browser plugin.

## loginToRp
<a name="odbc20-logintorp-option"></a>
+ **Default Value** – urn:amazon:webservices
+ **Data Type** – String

The relying party trust that you want to use for the AD FS authentication type.

This parameter is optional.

## LogLevel
<a name="odbc20-loglevel-option"></a>
+ **Default Value** – 0
+ **Data Type** – Integer

Use this property to enable or disable logging in the driver and to specify the amount of detail included in log files. We recommend that you enable logging only long enough to capture an issue, because logging decreases performance and can consume a large amount of disk space.

 Set the property to one of the following values:
+ 0: OFF. Disables all logging.
+ 1: ERROR. Logs error events that might allow the driver to continue running but produce an error.
+ 2: API_CALL. Logs ODBC API function calls with function argument values.
+ 3: INFO. Logs general information that describes the progress of the driver.
+ 4: MSG_PROTOCOL. Logs detailed information about the driver's message protocol.
+ 5: DEBUG. Logs all driver activity.
+ 6: DEBUG_APPEND. Keeps appending logs for all driver activity instead of overwriting them.

When logging is enabled, the driver produces the following log files at the location you specify in the **LogPath** property: 
+  A `redshift_odbc.log.1` file that logs driver activity that takes place during handshake of a connection. 
+  A `redshift_odbc.log` file for all driver activities after a connection is made to the database. 

This parameter is optional.
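For example, to capture detailed driver activity while reproducing an issue, you might temporarily add the following to the connection string (the log folder shown is a placeholder; see **LogPath**):

```
LogLevel=5;LogPath=C:\Users\example\redshift-logs;
```

Remember to set `LogLevel=0` again after you capture the issue.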

## LogPath
<a name="odbc20-logpath-option"></a>
+ **Default Value** – The OS-specific TEMP directory
+ **Data Type** – String

The full path to the folder where the driver saves log files when **LogLevel** is higher than 0.

This parameter is optional.

## Min_TLS
<a name="odbc20-min-tls-option"></a>
+ **Default Value** – 1.2
+ **Data Type** – String

 The minimum version of TLS/SSL that the driver allows the data store to use for encrypting connections. For example, if TLS 1.2 is specified, TLS 1.1 cannot be used to encrypt connections.

Min_TLS accepts the following values:
+  1.0: The connection must use at least TLS 1.0. 
+  1.1: The connection must use at least TLS 1.1. 
+  1.2: The connection must use at least TLS 1.2. 

This parameter is optional.

## partner_spid
<a name="odbc20-partner-spid-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The partner SPID (service provider ID) value to use when authenticating the connection using the PingFederate service.

This parameter is optional.

## Password | PWD
<a name="odbc20-password-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The password corresponding to the database user name that you provided in the User field (**UID** | **User** | **LogonID**).

This parameter is optional.

## plugin_name
<a name="odbc20-plugin-name-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The credentials provider plugin name that you want to use for authentication. 

 The following values are supported: 
+  `ADFS`: Use Active Directory Federation Services for authentication. 
+  `AzureAD`: Use Microsoft Azure Active Directory (AD) Service for authentication. 
+  `BrowserAzureAD`: Use a browser plugin for the Microsoft Azure Active Directory (AD) Service for authentication. 
+  `BrowserIdcAuthPlugin`: An authorization plugin using AWS IAM Identity Center. 
+  `BrowserSAML`: Use a browser plugin for SAML services such as Okta or Ping for authentication. 
+  `IdpTokenAuthPlugin`: An authorization plugin that accepts an AWS IAM Identity Center token or OpenID Connect (OIDC) JSON-based identity tokens (JWT) from any web identity provider linked to AWS IAM Identity Center. 
+  `JWT`: Use a JSON Web Token (JWT) for authentication. 
+  `Ping`: Use the PingFederate service for authentication. 
+  `Okta`: Use the Okta service for authentication. 

This parameter is optional.
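For example, a connection string that authenticates through a generic browser-based SAML provider combines this option with the **login_url** and **listen_port** options described in this section (the URL shown is a placeholder for your IdP's sign-in resource):

```
plugin_name=BrowserSAML;login_url=https://example.okta.com/app/example/sso/saml;listen_port=7890;
```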

## Port | PortNumber
<a name="odbc20-port-option"></a>
+ **Default Value** – 5439
+ **Data Type** – Integer

The number of the TCP port that the Amazon Redshift server uses to listen for client connections. 

This parameter is optional.

## preferred_role
<a name="odbc20-preferred-role-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The role you want to assume during the connection to Amazon Redshift. It’s used for IAM authentication.

This parameter is optional.

## Profile
<a name="odbc20-profile-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The name of the user AWS profile used to authenticate into Amazon Redshift.
+ If the Use Instance Profile parameter (the **InstanceProfile** property) is set to 1 | TRUE, that setting takes precedence and the driver uses the Amazon EC2 instance profile instead.
+ The default location for the credentials file that contains profiles is `~/.aws/credentials`. The `AWS_SHARED_CREDENTIALS_FILE` environment variable can be used to point to a different credentials file.

This parameter is optional.

## provider_name
<a name="odbc20-provider-name-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The authentication provider that the user created by using the CREATE IDENTITY PROVIDER statement. It's used in native Amazon Redshift authentication.

This parameter is optional.

## ProxyHost
<a name="odbc20-proxyhost-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The host name or IP address of the proxy server that you want to connect through.

This parameter is optional.

## ProxyPort
<a name="odbc20-proxyport-option"></a>
+ **Default Value** – None
+ **Data Type** – Integer

The number of the port that the proxy server uses to listen for client connections.

This parameter is optional.

## ProxyPwd
<a name="odbc20-proxypwd-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The password that you use to access the proxy server. 

This parameter is optional.

## ProxyUid
<a name="odbc20-proxyuid-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The user name that you use to access the proxy server.

This parameter is optional.

## ReadOnly
<a name="odbc20-readonly-option"></a>
+ **Default Value** – 0
+ **Data Type** – Boolean

A boolean specifying whether the driver is in read-only mode. 
+ 1 | TRUE: The connection is in read-only mode, and cannot write to the data store.
+ 0 | FALSE: The connection is not in read-only mode, and can write to the data store.

This parameter is optional.

## region
<a name="odbc20-region-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The AWS region that your cluster is in. 

This parameter is optional.

## SecretAccessKey
<a name="odbc20-secretaccesskey-option"></a>
+ **Default Value** – None
+ **Data Type** – String

 The IAM secret key for the user or role. If you set this parameter, you must also set **AccessKeyID**. 

This parameter is optional.

## SessionToken
<a name="odbc20-sessiontoken-option"></a>
+ **Default Value** – None
+ **Data Type** – String

 The temporary IAM session token associated with the IAM role that you are using to authenticate. 

This parameter is optional.

## Server | HostName | Host
<a name="odbc20-server-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The endpoint server to connect to.

This parameter is required.

## ssl_insecure
<a name="odbc20-ssl-insecure-option"></a>
+ **Default Value** – 0
+ **Data Type** – Boolean

A boolean specifying whether the driver checks the authenticity of the IdP server certificate.
+ 1 | TRUE: The driver does not check the authenticity of the IdP server certificate.
+ 0 | FALSE: The driver checks the authenticity of the IdP server certificate.

This parameter is optional.

## SSLMode
<a name="odbc20-sslmode-option"></a>
+ **Default Value** – `verify-ca`
+ **Data Type** – String

The SSL certificate verification mode to use when connecting to Amazon Redshift. The following values are possible: 
+  `verify-full`: Connect only using SSL, a trusted certificate authority, and a server name that matches the certificate. 
+  `verify-ca`: Connect only using SSL and a trusted certificate authority. 
+  `require`: Connect only using SSL. 
+  `prefer`: Connect using SSL if available. Otherwise, connect without using SSL. 
+  `allow`: By default, connect without using SSL. If the server requires SSL connections, then use SSL. 
+  `disable`: Connect without using SSL. 

This parameter is optional.
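For example, to require an SSL connection with full certificate verification and at least TLS 1.2, a connection string might combine this option with **Min_TLS**:

```
SSLMode=verify-full;Min_TLS=1.2;
```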

## StsConnectionTimeout
<a name="odbc20-stsconnectiontimeout-option"></a>
+ **Default Value** – 0
+ **Data Type** – Integer

The maximum wait time for IAM connections, in seconds. If set to 0 or not specified, the driver waits 60 seconds for each AWS STS call. 

This parameter is optional.

## StsEndpointUrl
<a name="odbc20-stsendpointurl-option"></a>
+ **Default Value** – None
+ **Data Type** – String

This option specifies the overriding endpoint used to communicate with the AWS Security Token Service (AWS STS). 

This parameter is optional.

## token
<a name="jdbc20-token-option"></a>
+ **Default Value** – None
+ **Data Type** – String

An AWS IAM Identity Center provided access token or an OpenID Connect (OIDC) JSON Web Token (JWT) provided by a web identity provider that's linked with AWS IAM Identity Center. Your application must generate this token by authenticating the user of your application with AWS IAM Identity Center or an identity provider linked with AWS IAM Identity Center. 

This parameter works with `IdpTokenAuthPlugin`.

## token_type
<a name="jdbc20-token-type-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The type of token that is being used in `IdpTokenAuthPlugin`.

You can specify the following values:

**ACCESS_TOKEN**  
Enter this if you use an AWS IAM Identity Center provided access token.

**EXT_JWT**  
Enter this if you use an OpenID Connect (OIDC) JSON Web Token (JWT) provided by a web-based identity provider that's integrated with AWS IAM Identity Center.

This parameter works with `IdpTokenAuthPlugin`.
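For example, a connection string that uses `IdpTokenAuthPlugin` combines the **token** and **token_type** options. The token value is a placeholder for the access token that your application obtains from IAM Identity Center:

```
plugin_name=IdpTokenAuthPlugin;token=<identity-center-access-token>;token_type=ACCESS_TOKEN;
```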

## UID | User | LogonID
<a name="odbc20-uid-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The user name that you use to access the Amazon Redshift server.

This parameter is required if you use database authentication.

## UseUnicode
<a name="odbc20-useunicode-option"></a>
+ **Default Value** – 0
+ **Data Type** – Boolean

A boolean specifying whether the driver returns Amazon Redshift data as Unicode or regular SQL types.
+ 1 | TRUE: The driver returns wide SQL types for character data types.
  + SQL_WCHAR is returned instead of SQL_CHAR.
  + SQL_WVARCHAR is returned instead of SQL_VARCHAR.
  + SQL_WLONGVARCHAR is returned instead of SQL_LONGVARCHAR.
+ 0 | FALSE: The driver returns regular SQL types for character data types.
  + SQL_CHAR is returned instead of SQL_WCHAR.
  + SQL_VARCHAR is returned instead of SQL_WVARCHAR.
  + SQL_LONGVARCHAR is returned instead of SQL_WLONGVARCHAR.

This parameter is optional. It is available in driver versions 2.1.15 and later.

## web_identity_token
<a name="odbc20-web-identity-token-option"></a>
+ **Default Value** – None
+ **Data Type** – String

The OAuth token that is provided by the identity provider. It's used in the JWT plugin.

This parameter is required if you set the **plugin_name** parameter to BasicJwtCredentialsProvider.

# Previous ODBC driver versions
<a name="odbc20-previous-versions"></a>

Download a previous version of the Amazon Redshift ODBC driver version 2.x only if your tool requires a specific version of the driver. 

## Use previous ODBC driver versions for Microsoft Windows
<a name="odbc20-previous-versions-windows"></a>

The following are the previous versions of the Amazon Redshift ODBC driver version 2.x for Microsoft Windows: 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.14.0/AmazonRedshiftODBC64-2.1.14.0.msi](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.14.0/AmazonRedshiftODBC64-2.1.14.0.msi) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.13.0/AmazonRedshiftODBC64-2.1.13.0.msi](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.13.0/AmazonRedshiftODBC64-2.1.13.0.msi) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.12.0/AmazonRedshiftODBC64-2.1.12.0.msi](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.12.0/AmazonRedshiftODBC64-2.1.12.0.msi) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.11.0/AmazonRedshiftODBC64-2.1.11.0.msi](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.11.0/AmazonRedshiftODBC64-2.1.11.0.msi) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.10.0/AmazonRedshiftODBC64-2.1.10.0.msi](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.10.0/AmazonRedshiftODBC64-2.1.10.0.msi) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.9.0/AmazonRedshiftODBC64-2.1.9.0.msi](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.9.0/AmazonRedshiftODBC64-2.1.9.0.msi) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.8.0/AmazonRedshiftODBC64-2.1.8.0.msi](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.8.0/AmazonRedshiftODBC64-2.1.8.0.msi) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.7.0/AmazonRedshiftODBC64-2.1.7.0.msi](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.7.0/AmazonRedshiftODBC64-2.1.7.0.msi) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.6.0/AmazonRedshiftODBC64-2.1.6.0.msi](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.6.0/AmazonRedshiftODBC64-2.1.6.0.msi) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.4.0/AmazonRedshiftODBC64-2.1.4.0.msi](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.4.0/AmazonRedshiftODBC64-2.1.4.0.msi) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.3.0/AmazonRedshiftODBC64-2.1.3.0.msi](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.3.0/AmazonRedshiftODBC64-2.1.3.0.msi) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.2.0/AmazonRedshiftODBC64-2.1.2.0.msi](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.2.0/AmazonRedshiftODBC64-2.1.2.0.msi) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.1.0/AmazonRedshiftODBC64-2.1.1.0.msi](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.1.0/AmazonRedshiftODBC64-2.1.1.0.msi) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.0.0/AmazonRedshiftODBC64-2.1.0.0.msi](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.0.0/AmazonRedshiftODBC64-2.1.0.0.msi) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.0.1.0/AmazonRedshiftODBC64-2.0.1.0.msi](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.0.1.0/AmazonRedshiftODBC64-2.0.1.0.msi) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.0.0.11/AmazonRedshiftODBC64-2.0.0.11.msi](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.0.0.11/AmazonRedshiftODBC64-2.0.0.11.msi) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.0.0.9/AmazonRedshiftODBC64-2.0.0.9.msi](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.0.0.9/AmazonRedshiftODBC64-2.0.0.9.msi) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.0.0.8/AmazonRedshiftODBC64-2.0.0.8.msi](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.0.0.8/AmazonRedshiftODBC64-2.0.0.8.msi) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.0.0.7/AmazonRedshiftODBC64-2.0.0.7.msi](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.0.0.7/AmazonRedshiftODBC64-2.0.0.7.msi) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.0.0.6/AmazonRedshiftODBC64-2.0.0.6.msi](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.0.0.6/AmazonRedshiftODBC64-2.0.0.6.msi) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.0.0.5/AmazonRedshiftODBC64-2.0.0.5.msi](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.0.0.5/AmazonRedshiftODBC64-2.0.0.5.msi) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.0.0.3/AmazonRedshiftODBC64-2.0.0.3.msi](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.0.0.3/AmazonRedshiftODBC64-2.0.0.3.msi) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.0.0.1/AmazonRedshiftODBC64-2.0.0.1.msi](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.0.0.1/AmazonRedshiftODBC64-2.0.0.1.msi) 

## Use previous ODBC driver versions for Linux
<a name="odbc20-previous-versions-linux"></a>

The following are the previous versions of the Amazon Redshift ODBC driver version 2.x for Linux: 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.14.0/AmazonRedshiftODBC-64-bit-2.1.14.0.x86_64.rpm](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.14.0/AmazonRedshiftODBC-64-bit-2.1.14.0.x86_64.rpm)
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.14.0/AmazonRedshiftODBC-64-bit-2.1.14.0.aarch64.rpm](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.14.0/AmazonRedshiftODBC-64-bit-2.1.14.0.aarch64.rpm)
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.13.0/AmazonRedshiftODBC-64-bit-2.1.13.0.x86_64.rpm](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.13.0/AmazonRedshiftODBC-64-bit-2.1.13.0.x86_64.rpm)
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.13.0/AmazonRedshiftODBC-64-bit-2.1.13.0.aarch64.rpm](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.13.0/AmazonRedshiftODBC-64-bit-2.1.13.0.aarch64.rpm)
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.12.0/AmazonRedshiftODBC-64-bit-2.1.12.0.x86_64.rpm](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.12.0/AmazonRedshiftODBC-64-bit-2.1.12.0.x86_64.rpm)
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.12.0/AmazonRedshiftODBC-64-bit-2.1.12.0.aarch64.rpm](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.12.0/AmazonRedshiftODBC-64-bit-2.1.12.0.aarch64.rpm)
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.11.0/AmazonRedshiftODBC-64-bit-2.1.11.0.x86_64.rpm](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.11.0/AmazonRedshiftODBC-64-bit-2.1.11.0.x86_64.rpm)
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.11.0/AmazonRedshiftODBC-64-bit-2.1.11.0.aarch64.rpm](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.11.0/AmazonRedshiftODBC-64-bit-2.1.11.0.aarch64.rpm)
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.10.0/AmazonRedshiftODBC-64-bit-2.1.10.0.x86_64.rpm](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.10.0/AmazonRedshiftODBC-64-bit-2.1.10.0.x86_64.rpm)
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.9.0/AmazonRedshiftODBC-64-bit-2.1.9.0.x86_64.rpm](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.9.0/AmazonRedshiftODBC-64-bit-2.1.9.0.x86_64.rpm)
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.8.0/AmazonRedshiftODBC-64-bit-2.1.8.0.x86_64.rpm](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.8.0/AmazonRedshiftODBC-64-bit-2.1.8.0.x86_64.rpm)
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.7.0/AmazonRedshiftODBC-64-bit-2.1.7.0.x86_64.rpm](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.7.0/AmazonRedshiftODBC-64-bit-2.1.7.0.x86_64.rpm)
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.6.0/AmazonRedshiftODBC-64-bit-2.1.6.0.x86_64.rpm](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.6.0/AmazonRedshiftODBC-64-bit-2.1.6.0.x86_64.rpm)
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.4.0/AmazonRedshiftODBC-64-bit-2.1.4.0.x86_64.rpm](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.4.0/AmazonRedshiftODBC-64-bit-2.1.4.0.x86_64.rpm)
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.3.0/AmazonRedshiftODBC-64-bit-2.1.3.0.x86_64.rpm](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.3.0/AmazonRedshiftODBC-64-bit-2.1.3.0.x86_64.rpm)
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.2.0/AmazonRedshiftODBC-64-bit-2.1.2.0.x86_64.rpm](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.2.0/AmazonRedshiftODBC-64-bit-2.1.2.0.x86_64.rpm)
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.1.0/AmazonRedshiftODBC-64-bit-2.1.1.0.x86_64.rpm](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.1.0/AmazonRedshiftODBC-64-bit-2.1.1.0.x86_64.rpm)
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.0.0/AmazonRedshiftODBC-64-bit-2.1.0.0.x86_64.rpm](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.0.0/AmazonRedshiftODBC-64-bit-2.1.0.0.x86_64.rpm)
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.0.1.0/AmazonRedshiftODBC-64-bit-2.0.1.0.x86_64.rpm](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.0.1.0/AmazonRedshiftODBC-64-bit-2.0.1.0.x86_64.rpm)
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.0.0.11/AmazonRedshiftODBC-64-bit-2.0.0.11.x86_64.rpm](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.0.0.11/AmazonRedshiftODBC-64-bit-2.0.0.11.x86_64.rpm)
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.0.0.9/AmazonRedshiftODBC-64-bit-2.0.0.9.x86_64.rpm](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.0.0.9/AmazonRedshiftODBC-64-bit-2.0.0.9.x86_64.rpm)
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.0.0.8/AmazonRedshiftODBC-64-bit-2.0.0.8.x86_64.rpm](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.0.0.8/AmazonRedshiftODBC-64-bit-2.0.0.8.x86_64.rpm)
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.0.0.7/AmazonRedshiftODBC-64-bit-2.0.0.7.x86_64.rpm](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.0.0.7/AmazonRedshiftODBC-64-bit-2.0.0.7.x86_64.rpm)
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.0.0.6/AmazonRedshiftODBC-64-bit-2.0.0.6.x86_64.rpm](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.0.0.6/AmazonRedshiftODBC-64-bit-2.0.0.6.x86_64.rpm)
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.0.0.5/AmazonRedshiftODBC-64-bit-2.0.0.5.x86_64.rpm](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.0.0.5/AmazonRedshiftODBC-64-bit-2.0.0.5.x86_64.rpm)
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.0.0.3/AmazonRedshiftODBC-64-bit-2.0.0.3.x86_64.rpm](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.0.0.3/AmazonRedshiftODBC-64-bit-2.0.0.3.x86_64.rpm)
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.0.0.1/AmazonRedshiftODBC-64-bit-2.0.0.1.x86_64.rpm](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.0.0.1/AmazonRedshiftODBC-64-bit-2.0.0.1.x86_64.rpm)

## Use previous ODBC driver versions for Apple macOS
<a name="odbc20-previous-versions-mac"></a>

The following are the previous versions of the Amazon Redshift ODBC driver version 2.x for Apple macOS: 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.14.0/AmazonRedshiftODBC-64-bit.2.1.14.0.universal.pkg](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.14.0/AmazonRedshiftODBC-64-bit.2.1.14.0.universal.pkg) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.13.0/AmazonRedshiftODBC-64-bit.2.1.13.0.universal.pkg](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.13.0/AmazonRedshiftODBC-64-bit.2.1.13.0.universal.pkg) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.12.0/AmazonRedshiftODBC-64-bit.2.1.12.0.universal.pkg](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/2.1.12.0/AmazonRedshiftODBC-64-bit.2.1.12.0.universal.pkg) 

# Configuring an ODBC driver version 1.x connection
<a name="configure-odbc-connection"></a>

You can use an ODBC connection to connect to your Amazon Redshift cluster from many third-party SQL client tools and applications. To do this, set up the connection on your client computer or Amazon EC2 instance. If your client tool supports JDBC, you might choose to use that type of connection rather than ODBC due to the ease of configuration that JDBC provides. However, if your client tool doesn't support JDBC, follow the steps in this section to configure an ODBC connection. 

Amazon Redshift provides 64-bit ODBC drivers for Linux, Windows, and macOS X operating systems. The 32-bit ODBC drivers are discontinued. Further updates will not be released, except for urgent security patches. 

For the latest information about ODBC driver functionality and prerequisites, see [Amazon Redshift ODBC driver release notes](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.6.3.1008/Release+Notes.pdf). 

For installation and configuration information for Amazon Redshift ODBC drivers, see the [Amazon Redshift ODBC connector installation and configuration guide](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.6.3.1008/Amazon+Redshift+ODBC+Connector+Install+Guide.pdf). 

**Topics**
+ [Getting the ODBC URL](obtain-odbc-url.md)
+ [Using an Amazon Redshift ODBC driver on Microsoft Windows](install-odbc-driver-windows.md)
+ [Using an Amazon Redshift ODBC driver on Linux](install-odbc-driver-linux.md)
+ [Using an Amazon Redshift ODBC driver on macOS X](install-odbc-driver-mac.md)
+ [ODBC driver options](configure-odbc-options.md)
+ [Previous ODBC driver versions](odbc-previous-versions.md)

# Getting the ODBC URL
<a name="obtain-odbc-url"></a>

Amazon Redshift displays the ODBC URL for your cluster in the Amazon Redshift console. This URL contains the information to set up the connection between your client computer and the database.

 An ODBC URL has the following format: `Driver={driver};Server=endpoint;Database=database_name;UID=user_name;PWD=password;Port=port_number` 

The fields of the format shown preceding have the following values.

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/mgmt/obtain-odbc-url.html)

 The fields in the preceding tables can contain the following special characters:

```
[]{}(),;?*=!@ 
```

 If you use these special characters you must enclose the value in curly braces. For example, the password value `Your;password123` in a connection string is represented as `PWD={Your;password123};`. 

Because `Field=value` pairs are separated by semicolons, the combination of `}` and `;`, with any number of spaces in between, is considered the end of a `Field={value};` pair. We recommend that you avoid the sequence `};` in your field values. For example, if you set your password value as `PWD={This is a passwor} ;d};`, your password would be interpreted as `This is a passwor} ;`, and the connection URL would produce an error.
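The brace-escaping rule above can be sketched as a small helper function. This is a hypothetical illustration of the rule, not part of the driver; the function name is invented for this example:

```python
def escape_odbc_value(value: str) -> str:
    """Wrap a connection-string value in curly braces when it
    contains ODBC special characters, per the rule above."""
    special = set("[]{}(),;?*=!@")
    if any(ch in special for ch in value):
        # Note: values containing the sequence "};" cannot be
        # represented safely and should be avoided entirely.
        return "{" + value + "}"
    return value

# Build a Field=value pair for a password that contains a semicolon:
pwd = escape_odbc_value("Your;password123")
print(f"PWD={pwd};")  # PWD={Your;password123};
```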

The following is an example ODBC URL.

```
Driver={Amazon Redshift (x64)};
Server=examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com;
Database=dev;
UID=adminuser;
PWD=insert_your_admin_user_password_here;
Port=5439
```

For information about how to get your ODBC connection, see [Finding your cluster connection string](connecting-connection-string.md). 

# Using an Amazon Redshift ODBC driver on Microsoft Windows
<a name="install-odbc-driver-windows"></a>

You install the Amazon Redshift ODBC driver on client computers accessing an Amazon Redshift data warehouse. Each computer where you install the driver must meet a list of minimum system requirements. For information about minimum system requirements, see the [Amazon Redshift ODBC connector installation and configuration guide](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.6.3.1008/Amazon+Redshift+ODBC+Connector+Install+Guide.pdf). 

**Topics**
+ [Downloading and installing the Amazon Redshift ODBC driver](odbc-driver-windows-how-to-install.md)
+ [Creating a system DSN entry for an ODBC connection](create-dsn-odbc-windows.md)

# Downloading and installing the Amazon Redshift ODBC driver
<a name="odbc-driver-windows-how-to-install"></a>

Use the following procedure to download the Amazon Redshift ODBC drivers for Windows operating systems. Only use a driver other than these if you're running a third-party application that is certified for use with Amazon Redshift and that requires a specific driver. 

**To install the ODBC driver**

1. Download one of the following, depending on the system architecture of your SQL client tool or application: 
   + [64-bit ODBC driver version 1.6.3](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.6.3.1008/AmazonRedshiftODBC64-1.6.3.1008.msi) 

     The name for this driver is Amazon Redshift (x64).
   + [32-bit ODBC driver version 1.4.52](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.4.52.1000/AmazonRedshiftODBC32-1.4.52.1000.msi) 

     The name for this driver is Amazon Redshift (x86). The 32-bit ODBC drivers are discontinued. Further updates will not be released, except for urgent security patches.
**Note**  
Download the MSI package that corresponds to the system architecture of your SQL client tool or application. For example, if your SQL client tool is 64-bit, install the 64-bit driver.

    Then download and review the [Amazon Redshift ODBC and JDBC driver license agreement](https://s3.amazonaws.com/redshift-downloads/drivers/Amazon+Redshift+ODBC+and+JDBC+Driver+License+Agreement.pdf). 

1.  Double-click the .msi file, and then follow the steps in the wizard to install the driver. 

# Creating a system DSN entry for an ODBC connection
<a name="create-dsn-odbc-windows"></a>

After you download and install the ODBC driver, add a data source name (DSN) entry to the client computer or Amazon EC2 instance. SQL client tools use this data source to connect to the Amazon Redshift database. 

We recommend that you create a system DSN instead of a user DSN. Some applications load the data using a different user account. These applications might not be able to detect user DSNs that are created under another user account.

**Note**  
For authentication using AWS Identity and Access Management (IAM) credentials or identity provider (IdP) credentials, additional steps are required. For more information, see [Step 5: Configure a JDBC or ODBC connection to use IAM credentials](generating-iam-credentials-steps.md#generating-iam-credentials-configure-jdbc-odbc).

For information about how to create a system DSN entry, see the [Amazon Redshift ODBC connector installation and configuration guide](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.6.3.1008/Amazon+Redshift+ODBC+Connector+Install+Guide.pdf). 

**To create a system DSN entry for an ODBC connection on Windows**

1. In the **Start** menu, open **ODBC Data Sources**.

   Make sure that you choose the ODBC Data Source Administrator that has the same bitness as the client application that you are using to connect to Amazon Redshift.

1. In the **ODBC Data Source Administrator**, choose the **Drivers** tab and verify that the driver that you installed is listed:
   + **Amazon Redshift ODBC Driver (64-bit)**
   + **Amazon Redshift ODBC Driver (32-bit)**

1.  Choose the **System DSN** tab to configure the driver for all users on the computer, or the **User DSN** tab to configure the driver for your user account only. 

1.  Choose **Add**. The **Create New Data Source** window opens. 

1.  Choose the **Amazon Redshift** ODBC driver, and then choose **Finish**. The **Amazon Redshift ODBC Driver DSN Setup** window opens.

1. Under **Connection Settings**, enter the following information:
<a name="rs-mgmt-dsn"></a>
**Data source name**  
Enter a name for the data source. You can use any name that you want to identify the data source later when you create the connection to the cluster. For example, if you followed the *Amazon Redshift Getting Started Guide*, you might type `exampleclusterdsn` to make it easy to remember the cluster that you associate with this DSN.
<a name="rs-mgmt-server"></a>
**Server**  
Specify the endpoint for your Amazon Redshift cluster. You can find this information in the Amazon Redshift console on the cluster's details page. For more information, see [Configuring connections in Amazon Redshift](configuring-connections.md).
<a name="rs-mgmt-port"></a>
**Port**  
Enter the port number that the database uses. Use the port that the cluster was configured to use when it was launched or modified.
<a name="rs-mgmt-database"></a>
**Database**  
Enter the name of the Amazon Redshift database. If you launched your cluster without specifying a database name, enter `dev`. Otherwise, use the name that you chose during the launch process. If you followed the *Amazon Redshift Getting Started Guide*, enter `dev`.

1. Under **Authentication**, specify the configuration options to configure standard or IAM authentication. For information about authentication options, see "Configuring Authentication on Windows" in *Amazon Redshift ODBC Connector Installation and Configuration Guide*. 

1. Under **SSL Settings**, specify a value for the following:
<a name="rs-mgmt-ssl-authentication"></a>
**SSL authentication**  
Choose a mode for handling Secure Sockets Layer (SSL). In a test environment, you might use `prefer`. However, for production environments and when secure data exchange is required, use `verify-ca` or `verify-full`. For more information about using SSL on Windows, see "Configuring SSL Verification on Windows" in *Amazon Redshift ODBC Connector Installation and Configuration Guide*. 

1. Under **Additional Options**, specify options on how to return query results to your SQL client tool or application. For more information, see "Configuring Additional Options on Windows" in *Amazon Redshift ODBC Connector Installation and Configuration Guide*. 

1. In **Logging Options**, specify values for the logging option. For more information, see "Configuring Logging Options on Windows" in *Amazon Redshift ODBC Connector Installation and Configuration Guide*. 

   Then choose **OK**.

1. Under **Data Type Options**, specify values for data types. For more information, see "Configuring Data Type Options on Windows" in *Amazon Redshift ODBC Connector Installation and Configuration Guide*. 

   Then choose **OK**.

1. Choose **Test**. If the client computer can connect to the Amazon Redshift database, you see the following message: **Connection successful**. 

    If the client computer fails to connect to the database, you can troubleshoot possible issues. For more information, see [Troubleshooting connection issues in Amazon Redshift](troubleshooting-connections.md). 

1. Configure TCP keepalives on Windows to prevent connections from timing out. For information about how to configure TCP keepalives on Windows, see *Amazon Redshift ODBC Connector Installation and Configuration Guide*.

1. To help with troubleshooting, configure logging. For information about how to configure logging on Windows, see *Amazon Redshift ODBC Connector Installation and Configuration Guide*. 

# Using an Amazon Redshift ODBC driver on Linux
<a name="install-odbc-driver-linux"></a>

You install the Amazon Redshift ODBC driver on client computers accessing an Amazon Redshift data warehouse. Each computer where you install the driver must meet a list of minimum system requirements. For information about minimum system requirements, see the [Amazon Redshift ODBC connector installation and configuration guide](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.6.3.1008/Amazon+Redshift+ODBC+Connector+Install+Guide.pdf). 

**Topics**
+ [Downloading and installing the Amazon Redshift ODBC driver](odbc-driver-linux-how-to-install.md)
+ [Using an ODBC driver manager to configure the driver](odbc-driver-configure-linux.md)

# Downloading and installing the Amazon Redshift ODBC driver
<a name="odbc-driver-linux-how-to-install"></a>

Use the steps in this section to download and install the Amazon Redshift ODBC drivers on a supported Linux distribution. The installation process installs the driver files in the following directories: 
+ `/opt/amazon/redshiftodbc/lib/64` (for the 64-bit driver)
+ `/opt/amazon/redshiftodbc/lib/32` (for the 32-bit driver)
+ `/opt/amazon/redshiftodbc/ErrorMessages`
+ `/opt/amazon/redshiftodbc/Setup`<a name="rs-mgmt-install-odbc-drivers-linux"></a>

**To install the Amazon Redshift ODBC driver**

1. Download one of the following, depending on the system architecture of your SQL client tool or application: 
   + [64-bit RPM driver version 1.6.3](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.6.3.1008/AmazonRedshiftODBC-64-bit-1.6.3.1008-1.x86_64.rpm) 
   + [64-bit Debian driver version 1.6.3](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.6.3.1008/AmazonRedshiftODBC-64-bit-1.6.3.1008-1.x86_64.deb) 
   + [32-bit driver version 1.4.52](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.4.52.1000/AmazonRedshiftODBC-32-bit-1.4.52.1000-1.i686.rpm) 

   The name for each of these drivers is Amazon Redshift ODBC driver. The 32-bit ODBC drivers are discontinued. Further updates will not be released, except for urgent security patches.
**Note**  
Download the package that corresponds to the system architecture of your SQL client tool or application. For example, if your client tool is 64-bit, install a 64-bit driver.

    Then download and review the [Amazon Redshift ODBC and JDBC driver license agreement](https://s3.amazonaws.com/redshift-downloads/drivers/Amazon+Redshift+ODBC+and+JDBC+Driver+License+Agreement.pdf). 

1. Go to the location where you downloaded the package, and then run one of the following commands. Use the command that corresponds to your Linux distribution. 
   + On RHEL and CentOS operating systems, run the following command.

     ```
     yum --nogpgcheck localinstall RPMFileName
     ```

     Replace *`RPMFileName`* with the RPM package file name. For example, the following command demonstrates installing the 64-bit driver.

     ```
     yum --nogpgcheck localinstall AmazonRedshiftODBC-64-bit-1.x.xx.xxxx-x.x86_64.rpm
     ```
   + On SLES, run the following command.

     ```
     zypper install RPMFileName
     ```

     Replace *`RPMFileName`* with the RPM package file name. For example, the following command demonstrates installing the 64-bit driver.

     ```
     zypper install AmazonRedshiftODBC-1.x.x.xxxx-x.x86_64.rpm
     ```
   + On Debian, run the following command.

     ```
     sudo apt install ./DEBFileName.deb
     ```

     Replace `DEBFileName.deb` with the Debian package file name. For example, the following command demonstrates installing the 64-bit driver.

     ```
     sudo apt install ./AmazonRedshiftODBC-1.x.x.xxxx-x.x86_64.deb
     ```

**Important**  
When you have finished installing the drivers, configure them for use on your system. For more information on driver configuration, see [Using an ODBC driver manager to configure the driver](odbc-driver-configure-linux.md).

# Using an ODBC driver manager to configure the driver
<a name="odbc-driver-configure-linux"></a>

On Linux operating systems, you use an ODBC driver manager to configure the ODBC connection settings. ODBC driver managers use configuration files to define and configure ODBC data sources and drivers. The ODBC driver manager that you use depends on the operating system that you use. For Linux, it's the unixODBC driver manager.

For more information about the supported ODBC driver managers for configuring the Amazon Redshift ODBC drivers on Linux operating systems, see [Using an Amazon Redshift ODBC driver on Linux](install-odbc-driver-linux.md). Also, see "Specifying ODBC Driver Managers on Non-Windows Machines" in the [Amazon Redshift ODBC connector installation and configuration guide](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.6.3.1008/Amazon+Redshift+ODBC+Connector+Install+Guide.pdf). 

Three files are required for configuring the Amazon Redshift ODBC driver: `amazon.redshiftodbc.ini`, `odbc.ini`, and `odbcinst.ini`.

If you installed to the default location, the `amazon.redshiftodbc.ini` configuration file is located in one of the following directories:
+ `/opt/amazon/redshiftodbc/lib/64` (for the 64-bit driver on Linux operating systems)
+ `/opt/amazon/redshiftodbc/lib/32` (for the 32-bit driver on Linux operating systems)

Additionally, under `/opt/amazon/redshiftodbc/Setup` on Linux, there are sample `odbc.ini` and `odbcinst.ini` files. You can use these files as examples for configuring the Amazon Redshift ODBC driver and the data source name (DSN).

We don't recommend using the Amazon Redshift ODBC driver installation directory for the configuration files. The sample files in the `Setup` directory are for example purposes only. If you reinstall the Amazon Redshift ODBC driver at a later time, or upgrade to a newer version, the installation directory is overwritten. You then lose any changes that you might have made to those files.

To avoid this, copy the `amazon.redshiftodbc.ini` file to a directory other than the installation directory. If you copy this file to the user's home directory, add a period (.) to the beginning of the file name to make it a hidden file.

For the `odbc.ini` and `odbcinst.ini` files, either use the configuration files in the user's home directory or create new versions in another directory. By default, your Linux operating system should have an `odbc.ini` file and an `odbcinst.ini` file in the user's home directory (`/home/$USER` or `~/`). These default files are hidden, as indicated by the dot (.) in front of each file name, and they display only when you use the `-a` flag to list the directory contents.

Whichever option you choose for the `odbc.ini` and `odbcinst.ini` files, modify the files to add driver and DSN configuration information. If you create new files, you also need to set environment variables to specify where these configuration files are located. 

By default, ODBC driver managers are configured to use hidden versions of the `odbc.ini` and `odbcinst.ini` configuration files (named .`odbc.ini` and .`odbcinst.ini`) located in the home directory. They also are configured to use the `amazon.redshiftodbc.ini` file in the `/lib` subfolder of the driver installation directory. If you store these configuration files elsewhere, set the environment variables described following so that the driver manager can locate the files. For more information, see "Specifying the Locations of the Driver Configuration Files" in the [Amazon Redshift ODBC connector installation and configuration guide](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.6.3.1008/Amazon+Redshift+ODBC+Connector+Install+Guide.pdf). 
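As a concrete sketch (not from the connector guide), the following keeps all three configuration files in one example directory, `~/odbc-config`, and points the driver manager at them. `ODBCINI` and `ODBCSYSINI` are the standard unixODBC variables; `AMAZONREDSHIFTODBCINI` is the driver-specific one.

```shell
# Keep the configuration files together in an example directory
# and set the environment variables the driver manager reads.
mkdir -p "$HOME/odbc-config"
export ODBCINI="$HOME/odbc-config/odbc.ini"                              # DSN definitions
export ODBCSYSINI="$HOME/odbc-config"                                    # directory containing odbcinst.ini
export AMAZONREDSHIFTODBCINI="$HOME/odbc-config/amazon.redshiftodbc.ini" # driver-wide settings
```

To make the settings persist across sessions, add the `export` lines to your shell profile (for example, `~/.bashrc`).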

## Creating a data source name on Linux operating systems
<a name="configure-odbc-ini-file"></a>

 When connecting to your data store using a data source name (DSN), configure the `odbc.ini` file to define DSNs. Set the properties in the `odbc.ini` file to create a DSN that specifies the connection information for your data store.

For information about how to configure the `odbc.ini` file, see "Creating a Data Source Name on a Non-Windows Machine" in the [Amazon Redshift ODBC connector installation and configuration guide](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.6.3.1008/Amazon+Redshift+ODBC+Connector+Install+Guide.pdf). 

 Use the following format on Linux operating systems.

```
[ODBC Data Sources]
driver_name=dsn_name

[dsn_name]
Driver=path/driver_file

Host=cluster_endpoint
Port=port_number
Database=database_name
locale=locale
```

The following example shows the configuration for odbc.ini with the 64-bit ODBC driver on Linux operating systems.

```
[ODBC Data Sources]
Amazon_Redshift_x64=Amazon Redshift (x64)

[Amazon Redshift (x64)]
Driver=/opt/amazon/redshiftodbc/lib/64/libamazonredshiftodbc64.so
Host=examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com
Port=5932
Database=dev
locale=en-US
```

The following example shows the configuration for odbc.ini with the 32-bit ODBC driver on Linux operating systems.

```
[ODBC Data Sources]
Amazon_Redshift_x32=Amazon Redshift (x86)

[Amazon Redshift (x86)]
Driver=/opt/amazon/redshiftodbc/lib/32/libamazonredshiftodbc32.so
Host=examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com
Port=5932
Database=dev
locale=en-US
```
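The DSN definition shown above can be written and sanity-checked from the shell. The following is a sketch, not part of the connector guide: the file goes in a temporary directory for illustration (normally you'd use `~/.odbc.ini` or the file named by `ODBCINI`), and the commented `isql` line assumes the unixODBC tools are installed and the cluster is reachable.

```shell
# Write the 64-bit DSN from the example above to a demo odbc.ini,
# then confirm the DSN section exists.
mkdir -p /tmp/odbc-demo-dsn
cat > /tmp/odbc-demo-dsn/odbc.ini <<'EOF'
[ODBC Data Sources]
Amazon_Redshift_x64=Amazon Redshift (x64)

[Amazon Redshift (x64)]
Driver=/opt/amazon/redshiftodbc/lib/64/libamazonredshiftodbc64.so
Host=examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com
Port=5932
Database=dev
locale=en-US
EOF

grep -c '^\[Amazon Redshift (x64)\]' /tmp/odbc-demo-dsn/odbc.ini   # prints 1
# With unixODBC installed and the file in place as ~/.odbc.ini, test the DSN:
#   isql -v "Amazon Redshift (x64)" "$DB_USER" "$DB_PASSWORD"
```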

## Configuring a connection without a DSN on Linux operating systems
<a name="configure-odbcinst-ini-file"></a>

To connect to your data store through a connection that doesn't have a DSN, define the driver in the `odbcinst.ini` file. Then provide a DSN-less connection string in your application.

For information about how to configure the `odbcinst.ini` file in this case, see "Configuring a DSN-less Connection on a Non-Windows Machine" in the [Amazon Redshift ODBC connector installation and configuration guide](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.6.3.1008/Amazon+Redshift+ODBC+Connector+Install+Guide.pdf). 

Use the following format on Linux operating systems.

```
[ODBC Drivers]
driver_name=Installed
...

[driver_name]
Description=driver_description
Driver=path/driver_file
...
```

The following example shows the `odbcinst.ini` configuration for the 64-bit driver installed in the default directories on Linux operating systems.

```
[ODBC Drivers]
Amazon Redshift (x64)=Installed

[Amazon Redshift (x64)]
Description=Amazon Redshift ODBC Driver (64-bit)
Driver=/opt/amazon/redshiftodbc/lib/64/libamazonredshiftodbc64.so
```

The following example shows the `odbcinst.ini` configuration for the 32-bit driver installed in the default directories on Linux operating systems.

```
[ODBC Drivers]
Amazon Redshift (x86)=Installed

[Amazon Redshift (x86)]
Description=Amazon Redshift ODBC Driver (32-bit)
Driver=/opt/amazon/redshiftodbc/lib/32/libamazonredshiftodbc32.so
```
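To make the DSN-less flow concrete, the following sketch (not from the connector guide) writes the 64-bit driver registration to a demo `odbcinst.ini` and shows the shape of the connection string an application would pass. The attribute names (`Driver`, `Server`, `Port`, `Database`, `UID`, `PWD`) are common ODBC keywords; all values are placeholders.

```shell
# Register the 64-bit driver (default install path from above) in a demo odbcinst.ini.
mkdir -p /tmp/odbc-demo-driver
cat > /tmp/odbc-demo-driver/odbcinst.ini <<'EOF'
[ODBC Drivers]
Amazon Redshift (x64)=Installed

[Amazon Redshift (x64)]
Description=Amazon Redshift ODBC Driver (64-bit)
Driver=/opt/amazon/redshiftodbc/lib/64/libamazonredshiftodbc64.so
EOF

# Shape of a DSN-less connection string (placeholder values):
CONN='Driver={Amazon Redshift (x64)};Server=examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com;Port=5439;Database=dev;UID=awsuser;PWD=your_password'
echo "$CONN"
```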

## Configuring environment variables
<a name="rs-mgmt-config-global-env-variables"></a>

Use the correct ODBC driver manager to load the correct driver. To do this, set the library path environment variable. For more information, see "Specifying ODBC Driver Managers on Non-Windows Machines" in the [Amazon Redshift ODBC connector installation and configuration guide](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.6.3.1008/Amazon+Redshift+ODBC+Connector+Install+Guide.pdf). 

By default, ODBC driver managers are configured to use hidden versions of the `odbc.ini` and `odbcinst.ini` configuration files (named .`odbc.ini` and .`odbcinst.ini`) located in the home directory. They also are configured to use the `amazon.redshiftodbc.ini` file in the `/lib` subfolder of the driver installation directory. If you store these configuration files elsewhere, set the environment variables so that the driver manager can locate the files. For more information, see "Specifying the Locations of the Driver Configuration Files" in *Amazon Redshift ODBC Connector Installation and Configuration Guide*. 

## Configuring connection features
<a name="connection-config-features"></a>

You can configure the following connection features for your ODBC setting:
+ Configure the ODBC driver to provide credentials and authenticate the connection to the Amazon Redshift database.
+ Configure the ODBC driver to connect to a socket enabled with Secure Sockets Layer (SSL), if you are connecting to an Amazon Redshift server that has SSL enabled.
+ Configure the ODBC driver to connect to Amazon Redshift through a proxy server.
+ Configure the ODBC driver to use a query processing mode to prevent queries from consuming too much memory.
+ Configure the ODBC driver to pass IAM authentication processes through a proxy server.
+ Configure the ODBC driver to use TCP keepalives to prevent connections from timing out.

For information about these connection features, see the [Amazon Redshift ODBC connector installation and configuration guide](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.6.3.1008/Amazon+Redshift+ODBC+Connector+Install+Guide.pdf). 

# Using an Amazon Redshift ODBC driver on macOS X
<a name="install-odbc-driver-mac"></a>

You install the driver on client computers accessing an Amazon Redshift data warehouse. Each computer where you install the driver must meet a list of minimum system requirements. For information about minimum system requirements, see the [Amazon Redshift ODBC connector installation and configuration guide](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.6.3.1008/Amazon+Redshift+ODBC+Connector+Install+Guide.pdf). 

**Topics**
+ [Downloading and installing the Amazon Redshift ODBC driver](odbc-driver-mac-how-to-install.md)
+ [Use an ODBC driver manager to configure the driver](odbc-driver-configure-mac.md)

# Downloading and installing the Amazon Redshift ODBC driver
<a name="odbc-driver-mac-how-to-install"></a>

Use the steps in this section to download and install the Amazon Redshift ODBC driver on a supported version of macOS X. The installation process installs the driver files in the following directories: 
+ `/opt/amazon/redshift/lib/universal`
+ `/opt/amazon/redshift/ErrorMessages`
+ `/opt/amazon/redshift/Setup`<a name="rs-mgmt-install-odbc-drivers-mac"></a>

**To install the Amazon Redshift ODBC driver on macOS X**

1. To install the Amazon Redshift ODBC driver on macOS X, download the [macOS driver version 1.6.3](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.6.3.1008/AmazonRedshiftODBC-64-bit.1.6.3.1008.universal.pkg). 

   Then download and review the [Amazon Redshift ODBC and JDBC driver license agreement](https://s3.amazonaws.com/redshift-downloads/drivers/Amazon+Redshift+ODBC+and+JDBC+Driver+License+Agreement.pdf). 

1. Double-click **AmazonRedshiftODBC.pkg** to run the installer.

1. Follow the steps in the installer to complete the driver installation process. To perform the installation, agree to the terms of the license agreement.

**Important**  
When you have finished installing the driver, configure it for use on your system. For more information on driver configuration, see [Use an ODBC driver manager to configure the driver](odbc-driver-configure-mac.md).

# Use an ODBC driver manager to configure the driver
<a name="odbc-driver-configure-mac"></a>

On macOS X operating systems, you use an ODBC driver manager to configure the ODBC connection settings. ODBC driver managers use configuration files to define and configure ODBC data sources and drivers. The ODBC driver manager that you use depends on the operating system that you use. For macOS X operating systems, it's the iODBC driver manager.

For more information about the supported ODBC driver managers for configuring the Amazon Redshift ODBC drivers on macOS X operating systems, see [Using an Amazon Redshift ODBC driver on macOS X](install-odbc-driver-mac.md). Also, see "Specifying ODBC Driver Managers on Non-Windows Machines" in the [Amazon Redshift ODBC connector installation and configuration guide](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.6.3.1008/Amazon+Redshift+ODBC+Connector+Install+Guide.pdf). 

Three files are required for configuring the Amazon Redshift ODBC driver: `amazon.redshiftodbc.ini`, `odbc.ini`, and `odbcinst.ini`.

If you installed to the default location, the `amazon.redshiftodbc.ini` configuration file is located in `/opt/amazon/redshift/lib`.

Additionally, under `/opt/amazon/redshift/Setup` on macOS X, there are sample `odbc.ini` and `odbcinst.ini` files. You can use these files as examples for configuring the Amazon Redshift ODBC driver and the data source name (DSN).

We don't recommend using the Amazon Redshift ODBC driver installation directory for the configuration files. The sample files in the `Setup` directory are for example purposes only. If you reinstall the Amazon Redshift ODBC driver at a later time, or upgrade to a newer version, the installation directory is overwritten. You then lose any changes that you might have made to those files.

To avoid this, copy the `amazon.redshiftodbc.ini` file to a directory other than the installation directory. If you copy this file to the user's home directory, add a period (.) to the beginning of the file name to make it a hidden file.

For the `odbc.ini` and `odbcinst.ini` files, either use the configuration files in the user's home directory or create new versions in another directory. By default, your macOS X operating system should have an `odbc.ini` file and an `odbcinst.ini` file in the user's home directory (`/Users/$USER` or `~/`). These default files are hidden, as indicated by the dot (.) in front of each file name, and they display only when you use the `-a` flag to list the directory contents.

Whichever option you choose for the `odbc.ini` and `odbcinst.ini` files, modify the files to add driver and DSN configuration information. If you create new files, you also need to set environment variables to specify where these configuration files are located. 

By default, ODBC driver managers are configured to use hidden versions of the `odbc.ini` and `odbcinst.ini` configuration files (named .`odbc.ini` and .`odbcinst.ini`) located in the home directory. They also are configured to use the `amazon.redshiftodbc.ini` file in the `/lib` subfolder of the driver installation directory. If you store these configuration files elsewhere, set the environment variables described following so that the driver manager can locate the files. For more information, see "Specifying the Locations of the Driver Configuration Files" in the [Amazon Redshift ODBC connector installation and configuration guide](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.6.3.1008/Amazon+Redshift+ODBC+Connector+Install+Guide.pdf). 

## Creating a data source name on macOS X operating systems
<a name="configure-odbc-ini-file"></a>

 When connecting to your data store using a data source name (DSN), configure the `odbc.ini` file to define DSNs. Set the properties in the `odbc.ini` file to create a DSN that specifies the connection information for your data store.

For information about how to configure the `odbc.ini` file, see "Creating a Data Source Name on a Non-Windows Machine" in the [Amazon Redshift ODBC connector installation and configuration guide](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.6.3.1008/Amazon+Redshift+ODBC+Connector+Install+Guide.pdf). 

Use the following format on macOS X operating systems.

```
[ODBC Data Sources]
driver_name=dsn_name

[dsn_name]
Driver=path/lib/amazonredshiftodbc.dylib

Host=cluster_endpoint
Port=port_number
Database=database_name
locale=locale
```

 The following example shows the configuration for `odbc.ini` on macOS X operating systems.

```
[ODBC Data Sources]
Amazon_Redshift_dylib=Amazon Redshift DSN for macOS X

[Amazon Redshift DSN for macOS X]
Driver=/opt/amazon/redshift/lib/amazonredshiftodbc.dylib
Host=examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com
Port=5932
Database=dev
locale=en-US
```
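The macOS X DSN above can also be written and checked from the shell. This sketch is not part of the connector guide: the file goes in a temporary directory for illustration, and the commented `iodbctest` line assumes the iODBC tools are installed and the cluster is reachable.

```shell
# Write the macOS X DSN from the example above to a demo odbc.ini and confirm it.
mkdir -p /tmp/odbc-demo-mac
cat > /tmp/odbc-demo-mac/odbc.ini <<'EOF'
[ODBC Data Sources]
Amazon_Redshift_dylib=Amazon Redshift DSN for macOS X

[Amazon Redshift DSN for macOS X]
Driver=/opt/amazon/redshift/lib/amazonredshiftodbc.dylib
Host=examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com
Port=5932
Database=dev
locale=en-US
EOF

grep -c '^Driver=' /tmp/odbc-demo-mac/odbc.ini   # prints 1
# With the iODBC tools installed and the DSN in place, test the connection:
#   iodbctest "DSN=Amazon Redshift DSN for macOS X;UID=your_user;PWD=your_password"
```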

## Configuring a connection without a DSN on macOS X operating systems
<a name="configure-odbcinst-ini-file"></a>

To connect to your data store through a connection that doesn't have a DSN, define the driver in the `odbcinst.ini` file. Then provide a DSN-less connection string in your application.

For information about how to configure the `odbcinst.ini` file in this case, see "Configuring a DSN-less Connection on a Non-Windows Machine" in the [Amazon Redshift ODBC connector installation and configuration guide](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.6.3.1008/Amazon+Redshift+ODBC+Connector+Install+Guide.pdf). 

Use the following format on macOS X operating systems.

```
[ODBC Drivers]
driver_name=Installed
...

[driver_name]
Description=driver_description
Driver=path/lib/amazonredshiftodbc.dylib
...
```

The following example shows the `odbcinst.ini` configuration for the driver installed in the default directory on macOS X operating systems.

```
[ODBC Drivers]
Amazon RedshiftODBC DSN=Installed

[Amazon RedshiftODBC DSN]
Description=Amazon Redshift ODBC Driver for macOS X
Driver=/opt/amazon/redshift/lib/amazonredshiftodbc.dylib
```

## Configuring environment variables
<a name="rs-mgmt-config-global-env-variables"></a>

Use the correct ODBC driver manager to load the correct driver. To do this, set the library path environment variable. For more information, see "Specifying ODBC Driver Managers on Non-Windows Machines" in the [Amazon Redshift ODBC connector installation and configuration guide](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.6.3.1008/Amazon+Redshift+ODBC+Connector+Install+Guide.pdf). 

By default, ODBC driver managers are configured to use hidden versions of the `odbc.ini` and `odbcinst.ini` configuration files (named .`odbc.ini` and .`odbcinst.ini`) located in the home directory. They also are configured to use the `amazon.redshiftodbc.ini` file in the `/lib` subfolder of the driver installation directory. If you store these configuration files elsewhere, set the environment variables so that the driver manager can locate the files. For more information, see "Specifying the Locations of the Driver Configuration Files" in *Amazon Redshift ODBC Connector Installation and Configuration Guide*. 

## Configuring connection features
<a name="connection-config-features"></a>

You can configure the following connection features for your ODBC setting:
+ Configure the ODBC driver to provide credentials and authenticate the connection to the Amazon Redshift database.
+ Configure the ODBC driver to connect to a socket enabled with Secure Sockets Layer (SSL), if you are connecting to an Amazon Redshift server that has SSL enabled.
+ Configure the ODBC driver to connect to Amazon Redshift through a proxy server.
+ Configure the ODBC driver to use a query processing mode to prevent queries from consuming too much memory.
+ Configure the ODBC driver to pass IAM authentication processes through a proxy server.
+ Configure the ODBC driver to use TCP keepalives to prevent connections from timing out.

For information about these connection features, see the [Amazon Redshift ODBC connector installation and configuration guide](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.6.3.1008/Amazon+Redshift+ODBC+Connector+Install+Guide.pdf). 

# ODBC driver options
<a name="configure-odbc-options"></a>

You can use configuration options to control the behavior of the Amazon Redshift ODBC driver.

In Microsoft Windows, you typically set driver options when you configure a data source name (DSN). You can also set driver options in the connection string when you connect programmatically, or by adding or changing registry keys in `HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI\your_DSN`. For more information about configuring a DSN, see [Using an Amazon Redshift ODBC driver on Microsoft Windows](install-odbc-driver-windows.md).

In macOS X, you set driver configuration options in your `odbc.ini` and `amazon.redshiftodbc.ini` files, as described in [Use an ODBC driver manager to configure the driver](odbc-driver-configure-mac.md). Configuration options set in an `amazon.redshiftodbc.ini` file apply to all connections. In contrast, configuration options set in an `odbc.ini` file are specific to a connection. Configuration options set in `odbc.ini` take precedence over configuration options set in `amazon.redshiftodbc.ini`.

For information about how to set up ODBC driver configuration options, see the [Amazon Redshift ODBC connector installation and configuration guide](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.6.3.1008/Amazon+Redshift+ODBC+Connector+Install+Guide.pdf). 

# Previous ODBC driver versions
<a name="odbc-previous-versions"></a>

Download a previous version of the Amazon Redshift ODBC driver only if your tool requires a specific version of the driver. 

## Previous ODBC driver versions for Windows
<a name="odbc-previous-versions-windows"></a>

The following are the 64-bit drivers: 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.6.3.1006/AmazonRedshiftODBC64-1.6.3.1006.msi](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.6.3.1006/AmazonRedshiftODBC64-1.6.3.1006.msi) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.6.1.1000/AmazonRedshiftODBC64-1.6.1.1000.msi](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.6.1.1000/AmazonRedshiftODBC64-1.6.1.1000.msi) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.5.20.1024/AmazonRedshiftODBC64-1.5.20.1024.msi](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.5.20.1024/AmazonRedshiftODBC64-1.5.20.1024.msi) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.5.16.1019/AmazonRedshiftODBC64-1.5.16.1019.msi](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.5.16.1019/AmazonRedshiftODBC64-1.5.16.1019.msi) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.5.9.1011/AmazonRedshiftODBC64-1.5.9.1011.msi](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.5.9.1011/AmazonRedshiftODBC64-1.5.9.1011.msi) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.5.7.1007/AmazonRedshiftODBC64-1.5.7.1007.msi](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.5.7.1007/AmazonRedshiftODBC64-1.5.7.1007.msi) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.4.65.1000/AmazonRedshiftODBC64-1.4.65.1000.msi](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.4.65.1000/AmazonRedshiftODBC64-1.4.65.1000.msi) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.4.62.1000/AmazonRedshiftODBC64-1.4.62.1000.msi](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.4.62.1000/AmazonRedshiftODBC64-1.4.62.1000.msi) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.4.59.1000/AmazonRedshiftODBC64-1.4.59.1000.msi](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.4.59.1000/AmazonRedshiftODBC64-1.4.59.1000.msi) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.4.56.1000/AmazonRedshiftODBC64-1.4.56.1000.msi](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.4.56.1000/AmazonRedshiftODBC64-1.4.56.1000.msi) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.4.53.1000/AmazonRedshiftODBC64-1.4.53.1000.msi](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.4.53.1000/AmazonRedshiftODBC64-1.4.53.1000.msi) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.4.52.1000/AmazonRedshiftODBC64-1.4.52.1000.msi](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.4.52.1000/AmazonRedshiftODBC64-1.4.52.1000.msi) 

32-bit drivers are discontinued and previous versions are not supported.

## Previous ODBC driver versions for Linux
<a name="odbc-previous-versions-linux"></a>

The following are the versions of the 64-bit driver: 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.6.3.1006/AmazonRedshiftODBC-64-bit-1.6.3.1006-1.x86_64.rpm](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.6.3.1006/AmazonRedshiftODBC-64-bit-1.6.3.1006-1.x86_64.rpm) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.6.1.1000/AmazonRedshiftODBC-64-bit-1.6.1.1000-1.x86_64.rpm](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.6.1.1000/AmazonRedshiftODBC-64-bit-1.6.1.1000-1.x86_64.rpm) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.5.20.1024/AmazonRedshiftODBC-64-bit-1.5.20.1024-1.x86_64.rpm](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.5.20.1024/AmazonRedshiftODBC-64-bit-1.5.20.1024-1.x86_64.rpm) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.5.16.1019/AmazonRedshiftODBC-64-bit-1.5.16.1019-1.x86_64.rpm](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.5.16.1019/AmazonRedshiftODBC-64-bit-1.5.16.1019-1.x86_64.rpm) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.5.9.1011/AmazonRedshiftODBC-64-bit-1.5.9.1011-1.x86_64.rpm](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.5.9.1011/AmazonRedshiftODBC-64-bit-1.5.9.1011-1.x86_64.rpm) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.5.7.1007/AmazonRedshiftODBC-64-bit-1.5.7.1007-1.x86_64.rpm](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.5.7.1007/AmazonRedshiftODBC-64-bit-1.5.7.1007-1.x86_64.rpm) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.4.65.1000/AmazonRedshiftODBC-64-bit-1.4.65.1000-1.x86_64.rpm](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.4.65.1000/AmazonRedshiftODBC-64-bit-1.4.65.1000-1.x86_64.rpm) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.4.62.1000/AmazonRedshiftODBC-64-bit-1.4.62.1000-1.x86_64.rpm](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.4.62.1000/AmazonRedshiftODBC-64-bit-1.4.62.1000-1.x86_64.rpm) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.4.59.1000/AmazonRedshiftODBC-64-bit-1.4.59.1000-1.x86_64.rpm](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.4.59.1000/AmazonRedshiftODBC-64-bit-1.4.59.1000-1.x86_64.rpm) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.4.59.1000/AmazonRedshiftODBC-64-bit-1.4.59.1000-1.x86_64.deb](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.4.59.1000/AmazonRedshiftODBC-64-bit-1.4.59.1000-1.x86_64.deb) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.4.56.1000/AmazonRedshiftODBC-64-bit-1.4.56.1000-1.x86_64.rpm](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.4.56.1000/AmazonRedshiftODBC-64-bit-1.4.56.1000-1.x86_64.rpm) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.4.56.1000/AmazonRedshiftODBC-64-bit-1.4.56.1000-1.x86_64.deb](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.4.56.1000/AmazonRedshiftODBC-64-bit-1.4.56.1000-1.x86_64.deb) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.4.52.1000/AmazonRedshiftODBC-64-bit-1.4.52.1000-1.x86_64.rpm](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.4.52.1000/AmazonRedshiftODBC-64-bit-1.4.52.1000-1.x86_64.rpm) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.4.52.1000/AmazonRedshiftODBC-64-bit-1.4.52.1000-1.x86_64.deb](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.4.52.1000/AmazonRedshiftODBC-64-bit-1.4.52.1000-1.x86_64.deb) 

32-bit drivers are discontinued and previous versions are not supported.
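The Linux packages above can be fetched and installed from the command line. The following is a minimal sketch for an RPM-based distribution; the version shown is one of those listed above, and the `curl` and `yum` steps are commented out so you can verify the URL first.

```shell
# Build the download URL for a specific previous driver version (RPM-based distros).
VERSION=1.6.3.1006
URL="https://s3.amazonaws.com/redshift-downloads/drivers/odbc/${VERSION}/AmazonRedshiftODBC-64-bit-${VERSION}-1.x86_64.rpm"
echo "$URL"
# curl -fSLO "$URL"                                                   # download the package
# sudo yum install -y "AmazonRedshiftODBC-64-bit-${VERSION}-1.x86_64.rpm"  # install it
```

On Debian-based systems, substitute the `.deb` package name from the list above and install it with `dpkg` or `apt` instead of `yum`.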

## Previous ODBC driver versions for macOS X
<a name="odbc-previous-versions-mac"></a>

The following are the versions of the Amazon Redshift ODBC driver for macOS X: 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.6.3.1006/AmazonRedshiftODBC-64-bit.1.6.3.1006.universal.pkg](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.6.3.1006/AmazonRedshiftODBC-64-bit.1.6.3.1006.universal.pkg) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.6.1.1000/AmazonRedshiftODBC-64-bit.1.6.1.1000.universal.pkg](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.6.1.1000/AmazonRedshiftODBC-64-bit.1.6.1.1000.universal.pkg) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.5.20.1024/AmazonRedshiftODBC-1.5.20.1024.arm64.dmg](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.5.20.1024/AmazonRedshiftODBC-1.5.20.1024.arm64.dmg) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.5.20.1024/AmazonRedshiftODBC-1.5.20.1024.x86_64.dmg](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.5.20.1024/AmazonRedshiftODBC-1.5.20.1024.x86_64.dmg) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.5.16.1019/AmazonRedshiftODBC-1.5.16.1019.x86_64.dmg](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.5.16.1019/AmazonRedshiftODBC-1.5.16.1019.x86_64.dmg) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.5.9.1011/AmazonRedshiftODBC-1.5.9.1011.x86_64.dmg](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.5.9.1011/AmazonRedshiftODBC-1.5.9.1011.x86_64.dmg) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.5.7.1007/AmazonRedshiftODBC-1.5.7.1007.x86_64.dmg](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.5.7.1007/AmazonRedshiftODBC-1.5.7.1007.x86_64.dmg) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.4.65.1000/AmazonRedshiftODBC-1.4.65.1000.dmg](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.4.65.1000/AmazonRedshiftODBC-1.4.65.1000.dmg) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.4.62.1000/AmazonRedshiftODBC-1.4.62.1000.dmg](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.4.62.1000/AmazonRedshiftODBC-1.4.62.1000.dmg) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.4.59.1000/AmazonRedshiftODBC-1.4.59.1000.dmg](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.4.59.1000/AmazonRedshiftODBC-1.4.59.1000.dmg) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.4.56.1000/AmazonRedshiftODBC-1.4.56.1000.dmg](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.4.56.1000/AmazonRedshiftODBC-1.4.56.1000.dmg) 
+ [https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.4.52.1000/AmazonRedshiftODBC-1.4.52.1000.dmg](https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.4.52.1000/AmazonRedshiftODBC-1.4.52.1000.dmg) 