AWS services or capabilities described in AWS documentation may vary by Region. See Getting Started with Amazon AWS for the differences applicable to the China (Beijing) Region.
Implementation for accessing KinesisFirehose
Amazon Data Firehose was previously known as Amazon Kinesis Data Firehose.
Amazon Data Firehose is a fully managed service that delivers real-time streaming data to destinations such as Amazon Simple Storage Service (Amazon S3), Amazon OpenSearch Service, Amazon Redshift, Splunk, and various other supported destinations.
Namespace: Amazon.KinesisFirehose
Assembly: AWSSDK.KinesisFirehose.dll
Version: 3.x.y.z
public class AmazonKinesisFirehoseClient : AmazonServiceClient, IAmazonKinesisFirehose, IAmazonService, IDisposable
The AmazonKinesisFirehoseClient type exposes the following members.
Name | Description | |
---|---|---|
AmazonKinesisFirehoseClient() |
Constructs AmazonKinesisFirehoseClient with the credentials loaded from the application's default configuration, and if unsuccessful from the Instance Profile service on an EC2 instance. Example App.config with credentials set. <?xml version="1.0" encoding="utf-8" ?> <configuration> <appSettings> <add key="AWSProfileName" value="AWS Default"/> </appSettings> </configuration> |
|
AmazonKinesisFirehoseClient(RegionEndpoint) |
Constructs AmazonKinesisFirehoseClient with the credentials loaded from the application's default configuration, and if unsuccessful from the Instance Profile service on an EC2 instance. Example App.config with credentials set. <?xml version="1.0" encoding="utf-8" ?> <configuration> <appSettings> <add key="AWSProfileName" value="AWS Default"/> </appSettings> </configuration> |
|
AmazonKinesisFirehoseClient(AmazonKinesisFirehoseConfig) |
Constructs AmazonKinesisFirehoseClient with the credentials loaded from the application's default configuration, and if unsuccessful from the Instance Profile service on an EC2 instance. Example App.config with credentials set. <?xml version="1.0" encoding="utf-8" ?> <configuration> <appSettings> <add key="AWSProfileName" value="AWS Default"/> </appSettings> </configuration> |
|
AmazonKinesisFirehoseClient(AWSCredentials) |
Constructs AmazonKinesisFirehoseClient with AWS Credentials |
|
AmazonKinesisFirehoseClient(AWSCredentials, RegionEndpoint) |
Constructs AmazonKinesisFirehoseClient with AWS Credentials |
|
AmazonKinesisFirehoseClient(AWSCredentials, AmazonKinesisFirehoseConfig) |
Constructs AmazonKinesisFirehoseClient with AWS Credentials and an AmazonKinesisFirehoseClient Configuration object. |
|
AmazonKinesisFirehoseClient(string, string) |
Constructs AmazonKinesisFirehoseClient with AWS Access Key ID and AWS Secret Key |
|
AmazonKinesisFirehoseClient(string, string, RegionEndpoint) |
Constructs AmazonKinesisFirehoseClient with AWS Access Key ID and AWS Secret Key |
|
AmazonKinesisFirehoseClient(string, string, AmazonKinesisFirehoseConfig) |
Constructs AmazonKinesisFirehoseClient with AWS Access Key ID, AWS Secret Key and an AmazonKinesisFirehoseClient Configuration object. |
|
AmazonKinesisFirehoseClient(string, string, string) |
Constructs AmazonKinesisFirehoseClient with AWS Access Key ID and AWS Secret Key |
|
AmazonKinesisFirehoseClient(string, string, string, RegionEndpoint) |
Constructs AmazonKinesisFirehoseClient with AWS Access Key ID and AWS Secret Key |
|
AmazonKinesisFirehoseClient(string, string, string, AmazonKinesisFirehoseConfig) |
Constructs AmazonKinesisFirehoseClient with AWS Access Key ID, AWS Secret Key and an AmazonKinesisFirehoseClient Configuration object. |
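A minimal construction sketch combining the overloads above with the standard credential-resolution chain (assumes the AWSSDK.KinesisFirehose NuGet package; the Region and timeout values are illustrative, not required):

```csharp
using System;
using Amazon;
using Amazon.KinesisFirehose;

// Resolves credentials from the default chain (app config, environment,
// shared credentials file, then the EC2 instance profile).
var client = new AmazonKinesisFirehoseClient(RegionEndpoint.USEast1);

// Or pass an explicit config object when finer control is needed.
var config = new AmazonKinesisFirehoseConfig
{
    RegionEndpoint = RegionEndpoint.USEast1,
    Timeout = TimeSpan.FromSeconds(30) // illustrative value
};
var configuredClient = new AmazonKinesisFirehoseClient(config);
```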
Name | Type | Description | |
---|---|---|---|
Config | Amazon.Runtime.IClientConfig | Inherited from Amazon.Runtime.AmazonServiceClient. |
Name | Description | |
---|---|---|
CreateDeliveryStream(CreateDeliveryStreamRequest) |
Creates a Firehose delivery stream. By default, you can create up to 50 delivery streams per Amazon Web Services Region.
This is an asynchronous operation that immediately returns. The initial status of
the delivery stream is CREATING. After the delivery stream is created, its status
is ACTIVE and it now accepts data.
If the status of a delivery stream is CREATING_FAILED, this status doesn't change,
and you can't invoke CreateDeliveryStream again on it. However, you can invoke
DeleteDeliveryStream to delete it.
A Firehose delivery stream can be configured to receive records directly from providers
using PutRecord or PutRecordBatch, or it can be configured to use an
existing Kinesis stream as its source. To specify a Kinesis data stream as input,
set the DeliveryStreamType parameter to KinesisStreamAsSource, and provide the Kinesis stream Amazon Resource Name (ARN) and role ARN in the KinesisStreamSourceConfiguration parameter. To create a delivery stream with server-side encryption (SSE) enabled, include DeliveryStreamEncryptionConfigurationInput in your request. This is optional. You can also invoke StartDeliveryStreamEncryption to turn on SSE for an existing delivery stream that doesn't have SSE enabled.
A delivery stream is configured with a single destination, such as Amazon Simple Storage
Service (Amazon S3), Amazon Redshift, Amazon OpenSearch Service, Amazon OpenSearch
Serverless, Splunk, and any custom HTTP endpoint or HTTP endpoints owned by or supported
by third-party service providers, including Datadog, Dynatrace, LogicMonitor, MongoDB,
New Relic, and Sumo Logic. You must specify only one of the following destination
configuration parameters: ExtendedS3DestinationConfiguration, S3DestinationConfiguration, ElasticsearchDestinationConfiguration, RedshiftDestinationConfiguration, HttpEndpointDestinationConfiguration, or SplunkDestinationConfiguration.
When you specify S3DestinationConfiguration, you can also provide the following optional values: BufferingHints, EncryptionConfiguration, and CompressionFormat. By default, if no BufferingHints value is provided, Firehose buffers data up to 5 MB or for 5 minutes, whichever condition is satisfied first. A few notes about Amazon Redshift as a destination: an Amazon Redshift destination requires an S3 bucket as intermediate location. Firehose first delivers data to Amazon S3 and then uses COPY syntax to load data into an Amazon Redshift cluster. The compression formats SNAPPY or ZIP cannot be specified in RedshiftDestinationConfiguration.S3Configuration because the Amazon Redshift COPY operation that reads from the S3 bucket doesn't support these compression formats.
Firehose assumes the IAM role that is configured as part of the destination. The role should allow the Firehose principal to assume the role, and the role should have permissions that allow the service to deliver the data. For more information, see Grant Firehose Access to an Amazon S3 Destination in the Amazon Firehose Developer Guide. |
|
CreateDeliveryStreamAsync(CreateDeliveryStreamRequest, CancellationToken) |
Creates a Firehose delivery stream. By default, you can create up to 50 delivery streams per Amazon Web Services Region.
This is an asynchronous operation that immediately returns. The initial status of
the delivery stream is CREATING. After the delivery stream is created, its status
is ACTIVE and it now accepts data.
If the status of a delivery stream is CREATING_FAILED, this status doesn't change,
and you can't invoke CreateDeliveryStream again on it. However, you can invoke
DeleteDeliveryStream to delete it.
A Firehose delivery stream can be configured to receive records directly from providers
using PutRecord or PutRecordBatch, or it can be configured to use an
existing Kinesis stream as its source. To specify a Kinesis data stream as input,
set the DeliveryStreamType parameter to KinesisStreamAsSource, and provide the Kinesis stream Amazon Resource Name (ARN) and role ARN in the KinesisStreamSourceConfiguration parameter. To create a delivery stream with server-side encryption (SSE) enabled, include DeliveryStreamEncryptionConfigurationInput in your request. This is optional. You can also invoke StartDeliveryStreamEncryption to turn on SSE for an existing delivery stream that doesn't have SSE enabled.
A delivery stream is configured with a single destination, such as Amazon Simple Storage
Service (Amazon S3), Amazon Redshift, Amazon OpenSearch Service, Amazon OpenSearch
Serverless, Splunk, and any custom HTTP endpoint or HTTP endpoints owned by or supported
by third-party service providers, including Datadog, Dynatrace, LogicMonitor, MongoDB,
New Relic, and Sumo Logic. You must specify only one of the following destination
configuration parameters: ExtendedS3DestinationConfiguration, S3DestinationConfiguration, ElasticsearchDestinationConfiguration, RedshiftDestinationConfiguration, HttpEndpointDestinationConfiguration, or SplunkDestinationConfiguration.
When you specify S3DestinationConfiguration, you can also provide the following optional values: BufferingHints, EncryptionConfiguration, and CompressionFormat. By default, if no BufferingHints value is provided, Firehose buffers data up to 5 MB or for 5 minutes, whichever condition is satisfied first. A few notes about Amazon Redshift as a destination: an Amazon Redshift destination requires an S3 bucket as intermediate location. Firehose first delivers data to Amazon S3 and then uses COPY syntax to load data into an Amazon Redshift cluster. The compression formats SNAPPY or ZIP cannot be specified in RedshiftDestinationConfiguration.S3Configuration because the Amazon Redshift COPY operation that reads from the S3 bucket doesn't support these compression formats.
Firehose assumes the IAM role that is configured as part of the destination. The role should allow the Firehose principal to assume the role, and the role should have permissions that allow the service to deliver the data. For more information, see Grant Firehose Access to an Amazon S3 Destination in the Amazon Firehose Developer Guide. |
|
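As a sketch, creating a DirectPut stream with an S3 destination might look like the following (stream name, bucket ARN, and role ARN are placeholders; the role must already grant Firehose access to the bucket):

```csharp
using System;
using Amazon.KinesisFirehose;
using Amazon.KinesisFirehose.Model;

var client = new AmazonKinesisFirehoseClient();

var response = await client.CreateDeliveryStreamAsync(new CreateDeliveryStreamRequest
{
    DeliveryStreamName = "my-stream",                  // placeholder
    DeliveryStreamType = DeliveryStreamType.DirectPut, // records arrive via PutRecord/PutRecordBatch
    ExtendedS3DestinationConfiguration = new ExtendedS3DestinationConfiguration
    {
        BucketARN = "arn:aws:s3:::my-bucket",                    // placeholder
        RoleARN = "arn:aws:iam::123456789012:role/firehose-role" // placeholder
    }
});

// The call returns while the stream is still CREATING; poll
// DescribeDeliveryStream until the status is ACTIVE before sending data.
Console.WriteLine(response.DeliveryStreamARN);
```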
DeleteDeliveryStream(string) |
Deletes a delivery stream and its data.
You can delete a delivery stream only if it is in one of the following states: ACTIVE, DELETING, CREATING_FAILED, or DELETING_FAILED. You can't delete a delivery stream that is in the CREATING state. To check the state of a delivery stream, use DescribeDeliveryStream.
DeleteDeliveryStream is an asynchronous API. When an API request to DeleteDeliveryStream
succeeds, the delivery stream is marked for deletion, and it goes into the DELETING state. While the delivery stream is in the DELETING state, the service might continue to accept records, but it doesn't make any guarantees with respect to delivering the data. Therefore, as a best practice, first stop any applications that are sending records before you delete a delivery stream.
Removal of a delivery stream that is in the DELETING state is a low priority operation for the service. A stream may persist in the DELETING state for several minutes. Therefore, as a best practice, applications should not wait for streams in the DELETING state to be removed. |
|
DeleteDeliveryStream(DeleteDeliveryStreamRequest) |
Deletes a delivery stream and its data.
You can delete a delivery stream only if it is in one of the following states: ACTIVE, DELETING, CREATING_FAILED, or DELETING_FAILED. You can't delete a delivery stream that is in the CREATING state. To check the state of a delivery stream, use DescribeDeliveryStream.
DeleteDeliveryStream is an asynchronous API. When an API request to DeleteDeliveryStream
succeeds, the delivery stream is marked for deletion, and it goes into the DELETING state. While the delivery stream is in the DELETING state, the service might continue to accept records, but it doesn't make any guarantees with respect to delivering the data. Therefore, as a best practice, first stop any applications that are sending records before you delete a delivery stream.
Removal of a delivery stream that is in the DELETING state is a low priority operation for the service. A stream may persist in the DELETING state for several minutes. Therefore, as a best practice, applications should not wait for streams in the DELETING state to be removed. |
|
DeleteDeliveryStreamAsync(string, CancellationToken) |
Deletes a delivery stream and its data.
You can delete a delivery stream only if it is in one of the following states: ACTIVE, DELETING, CREATING_FAILED, or DELETING_FAILED. You can't delete a delivery stream that is in the CREATING state. To check the state of a delivery stream, use DescribeDeliveryStream.
DeleteDeliveryStream is an asynchronous API. When an API request to DeleteDeliveryStream
succeeds, the delivery stream is marked for deletion, and it goes into the DELETING state. While the delivery stream is in the DELETING state, the service might continue to accept records, but it doesn't make any guarantees with respect to delivering the data. Therefore, as a best practice, first stop any applications that are sending records before you delete a delivery stream.
Removal of a delivery stream that is in the DELETING state is a low priority operation for the service. A stream may persist in the DELETING state for several minutes. Therefore, as a best practice, applications should not wait for streams in the DELETING state to be removed. |
|
DeleteDeliveryStreamAsync(DeleteDeliveryStreamRequest, CancellationToken) |
Deletes a delivery stream and its data.
You can delete a delivery stream only if it is in one of the following states: ACTIVE, DELETING, CREATING_FAILED, or DELETING_FAILED. You can't delete a delivery stream that is in the CREATING state. To check the state of a delivery stream, use DescribeDeliveryStream.
DeleteDeliveryStream is an asynchronous API. When an API request to DeleteDeliveryStream
succeeds, the delivery stream is marked for deletion, and it goes into the DELETING state. While the delivery stream is in the DELETING state, the service might continue to accept records, but it doesn't make any guarantees with respect to delivering the data. Therefore, as a best practice, first stop any applications that are sending records before you delete a delivery stream.
Removal of a delivery stream that is in the DELETING state is a low priority operation for the service. A stream may persist in the DELETING state for several minutes. Therefore, as a best practice, applications should not wait for streams in the DELETING state to be removed. |
|
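A minimal deletion sketch (stream name is a placeholder); note that the call returns while the stream is still in the DELETING state:

```csharp
using Amazon.KinesisFirehose;
using Amazon.KinesisFirehose.Model;

var client = new AmazonKinesisFirehoseClient();

await client.DeleteDeliveryStreamAsync(new DeleteDeliveryStreamRequest
{
    DeliveryStreamName = "my-stream", // placeholder
    AllowForceDelete = true           // also forces deletion when a KMS grant can't be retired
});
// Deletion is asynchronous; don't block waiting for the stream to disappear.
```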
DescribeDeliveryStream(DescribeDeliveryStreamRequest) |
Describes the specified delivery stream and its status. For example, after your delivery
stream is created, call DescribeDeliveryStream to see whether the delivery stream is ACTIVE and therefore ready for data to be sent to it.
If the status of a delivery stream is CREATING_FAILED, this status doesn't change, and you can't invoke CreateDeliveryStream again on it. However, you can invoke DeleteDeliveryStream to delete it. If the status is DELETING_FAILED, you can force deletion by invoking DeleteDeliveryStream again but with AllowForceDelete set to true. |
|
DescribeDeliveryStreamAsync(DescribeDeliveryStreamRequest, CancellationToken) |
Describes the specified delivery stream and its status. For example, after your delivery
stream is created, call DescribeDeliveryStream to see whether the delivery stream is ACTIVE and therefore ready for data to be sent to it.
If the status of a delivery stream is CREATING_FAILED, this status doesn't change, and you can't invoke CreateDeliveryStream again on it. However, you can invoke DeleteDeliveryStream to delete it. If the status is DELETING_FAILED, you can force deletion by invoking DeleteDeliveryStream again but with AllowForceDelete set to true. |
|
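The create-then-poll pattern described above can be sketched as follows (stream name and poll interval are placeholders):

```csharp
using System;
using System.Threading.Tasks;
using Amazon.KinesisFirehose;
using Amazon.KinesisFirehose.Model;

var client = new AmazonKinesisFirehoseClient();

// Poll until the stream leaves the CREATING state.
DeliveryStreamStatus status;
do
{
    await Task.Delay(TimeSpan.FromSeconds(10));
    var describe = await client.DescribeDeliveryStreamAsync(new DescribeDeliveryStreamRequest
    {
        DeliveryStreamName = "my-stream" // placeholder
    });
    status = describe.DeliveryStreamDescription.DeliveryStreamStatus;
} while (status == DeliveryStreamStatus.CREATING);

if (status != DeliveryStreamStatus.ACTIVE)
    throw new InvalidOperationException($"Stream entered status {status}");
```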
DetermineServiceOperationEndpoint(AmazonWebServiceRequest) |
Returns the endpoint that will be used for a particular request. |
|
Dispose() | Inherited from Amazon.Runtime.AmazonServiceClient. | |
ListDeliveryStreams() |
Lists your delivery streams in alphabetical order of their names.
The number of delivery streams might be too large to return using a single call to
ListDeliveryStreams. You can limit the number of delivery streams returned, using the Limit parameter. To determine whether there are more delivery streams to list, check the value of HasMoreDeliveryStreams in the output. If there are more delivery streams to list, you can request them by calling this operation again and setting the ExclusiveStartDeliveryStreamName parameter to the name of the last delivery stream returned in the last call. |
|
ListDeliveryStreams(ListDeliveryStreamsRequest) |
Lists your delivery streams in alphabetical order of their names.
The number of delivery streams might be too large to return using a single call to
ListDeliveryStreams. You can limit the number of delivery streams returned, using the Limit parameter. To determine whether there are more delivery streams to list, check the value of HasMoreDeliveryStreams in the output. If there are more delivery streams to list, you can request them by calling this operation again and setting the ExclusiveStartDeliveryStreamName parameter to the name of the last delivery stream returned in the last call. |
|
ListDeliveryStreamsAsync(CancellationToken) |
Lists your delivery streams in alphabetical order of their names.
The number of delivery streams might be too large to return using a single call to
ListDeliveryStreams. You can limit the number of delivery streams returned, using the Limit parameter. To determine whether there are more delivery streams to list, check the value of HasMoreDeliveryStreams in the output. If there are more delivery streams to list, you can request them by calling this operation again and setting the ExclusiveStartDeliveryStreamName parameter to the name of the last delivery stream returned in the last call. |
|
ListDeliveryStreamsAsync(ListDeliveryStreamsRequest, CancellationToken) |
Lists your delivery streams in alphabetical order of their names.
The number of delivery streams might be too large to return using a single call to
ListDeliveryStreams. You can limit the number of delivery streams returned, using the Limit parameter. To determine whether there are more delivery streams to list, check the value of HasMoreDeliveryStreams in the output. If there are more delivery streams to list, you can request them by calling this operation again and setting the ExclusiveStartDeliveryStreamName parameter to the name of the last delivery stream returned in the last call. |
|
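Because results are paginated via HasMoreDeliveryStreams and ExclusiveStartDeliveryStreamName, a full listing can be sketched as:

```csharp
using System.Collections.Generic;
using Amazon.KinesisFirehose;
using Amazon.KinesisFirehose.Model;

var client = new AmazonKinesisFirehoseClient();
var names = new List<string>();

var request = new ListDeliveryStreamsRequest { Limit = 10 };
ListDeliveryStreamsResponse response;
do
{
    response = await client.ListDeliveryStreamsAsync(request);
    names.AddRange(response.DeliveryStreamNames);
    // Resume after the last name returned in this page.
    if (response.DeliveryStreamNames.Count > 0)
        request.ExclusiveStartDeliveryStreamName = response.DeliveryStreamNames[^1];
} while (response.HasMoreDeliveryStreams);
```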
ListTagsForDeliveryStream(ListTagsForDeliveryStreamRequest) |
Lists the tags for the specified delivery stream. This operation has a limit of five transactions per second per account. |
|
ListTagsForDeliveryStreamAsync(ListTagsForDeliveryStreamRequest, CancellationToken) |
Lists the tags for the specified delivery stream. This operation has a limit of five transactions per second per account. |
|
PutRecord(string, Record) |
Writes a single data record into an Amazon Firehose delivery stream. To write multiple data records into a delivery stream, use PutRecordBatch. Applications using these operations are referred to as producers. By default, each delivery stream can take in up to 2,000 transactions per second, 5,000 records per second, or 5 MB per second. If you use PutRecord and PutRecordBatch, the limits are an aggregate across these two operations for each delivery stream. For more information about limits and how to request an increase, see Amazon Firehose Limits. Firehose accumulates and publishes a particular metric for a customer account in one minute intervals. It is possible that the bursts of incoming bytes/records ingested to a delivery stream last only for a few seconds. Due to this, the actual spikes in the traffic might not be fully visible in the customer's 1 minute CloudWatch metrics. You must specify the name of the delivery stream and the data record when using PutRecord. The data record consists of a data blob that can be up to 1,000 KiB in size, and any kind of data. For example, it can be a segment from a log file, geographic location data, website clickstream data, and so on.
Firehose buffers records before delivering them to the destination. To disambiguate
the data blobs at the destination, a common solution is to use delimiters in the data,
such as a newline (\n) or some other character unique within the data. This allows the consumer application to parse individual data items when reading the data from the destination.
The PutRecord operation returns a RecordId, which is a unique string assigned to each record. Producer applications can use this ID for purposes such as auditability and investigation.
If the PutRecord operation throws a ServiceUnavailableException, the API is automatically reinvoked (retried) 3 times. If the exception persists, it is possible that the throughput limits have been exceeded for the delivery stream. Re-invoking the Put API operations (for example, PutRecord and PutRecordBatch) can result in data duplicates. For larger data assets, allow for a longer time out before retrying Put API operations. Data records sent to Firehose are stored for 24 hours from the time they are added to a delivery stream as it tries to send the records to the destination. If the destination is unreachable for more than 24 hours, the data is no longer available. Don't concatenate two or more base64 strings to form the data fields of your records. Instead, concatenate the raw data, then perform base64 encoding. |
|
PutRecord(PutRecordRequest) |
Writes a single data record into an Amazon Firehose delivery stream. To write multiple data records into a delivery stream, use PutRecordBatch. Applications using these operations are referred to as producers. By default, each delivery stream can take in up to 2,000 transactions per second, 5,000 records per second, or 5 MB per second. If you use PutRecord and PutRecordBatch, the limits are an aggregate across these two operations for each delivery stream. For more information about limits and how to request an increase, see Amazon Firehose Limits. Firehose accumulates and publishes a particular metric for a customer account in one minute intervals. It is possible that the bursts of incoming bytes/records ingested to a delivery stream last only for a few seconds. Due to this, the actual spikes in the traffic might not be fully visible in the customer's 1 minute CloudWatch metrics. You must specify the name of the delivery stream and the data record when using PutRecord. The data record consists of a data blob that can be up to 1,000 KiB in size, and any kind of data. For example, it can be a segment from a log file, geographic location data, website clickstream data, and so on.
Firehose buffers records before delivering them to the destination. To disambiguate
the data blobs at the destination, a common solution is to use delimiters in the data,
such as a newline (\n) or some other character unique within the data. This allows the consumer application to parse individual data items when reading the data from the destination.
The PutRecord operation returns a RecordId, which is a unique string assigned to each record. Producer applications can use this ID for purposes such as auditability and investigation.
If the PutRecord operation throws a ServiceUnavailableException, the API is automatically reinvoked (retried) 3 times. If the exception persists, it is possible that the throughput limits have been exceeded for the delivery stream. Re-invoking the Put API operations (for example, PutRecord and PutRecordBatch) can result in data duplicates. For larger data assets, allow for a longer time out before retrying Put API operations. Data records sent to Firehose are stored for 24 hours from the time they are added to a delivery stream as it tries to send the records to the destination. If the destination is unreachable for more than 24 hours, the data is no longer available. Don't concatenate two or more base64 strings to form the data fields of your records. Instead, concatenate the raw data, then perform base64 encoding. |
|
PutRecordAsync(string, Record, CancellationToken) |
Writes a single data record into an Amazon Firehose delivery stream. To write multiple data records into a delivery stream, use PutRecordBatch. Applications using these operations are referred to as producers. By default, each delivery stream can take in up to 2,000 transactions per second, 5,000 records per second, or 5 MB per second. If you use PutRecord and PutRecordBatch, the limits are an aggregate across these two operations for each delivery stream. For more information about limits and how to request an increase, see Amazon Firehose Limits. Firehose accumulates and publishes a particular metric for a customer account in one minute intervals. It is possible that the bursts of incoming bytes/records ingested to a delivery stream last only for a few seconds. Due to this, the actual spikes in the traffic might not be fully visible in the customer's 1 minute CloudWatch metrics. You must specify the name of the delivery stream and the data record when using PutRecord. The data record consists of a data blob that can be up to 1,000 KiB in size, and any kind of data. For example, it can be a segment from a log file, geographic location data, website clickstream data, and so on.
Firehose buffers records before delivering them to the destination. To disambiguate
the data blobs at the destination, a common solution is to use delimiters in the data,
such as a newline (\n) or some other character unique within the data. This allows the consumer application to parse individual data items when reading the data from the destination.
The PutRecord operation returns a RecordId, which is a unique string assigned to each record. Producer applications can use this ID for purposes such as auditability and investigation.
If the PutRecord operation throws a ServiceUnavailableException, the API is automatically reinvoked (retried) 3 times. If the exception persists, it is possible that the throughput limits have been exceeded for the delivery stream. Re-invoking the Put API operations (for example, PutRecord and PutRecordBatch) can result in data duplicates. For larger data assets, allow for a longer time out before retrying Put API operations. Data records sent to Firehose are stored for 24 hours from the time they are added to a delivery stream as it tries to send the records to the destination. If the destination is unreachable for more than 24 hours, the data is no longer available. Don't concatenate two or more base64 strings to form the data fields of your records. Instead, concatenate the raw data, then perform base64 encoding. |
|
PutRecordAsync(PutRecordRequest, CancellationToken) |
Writes a single data record into an Amazon Firehose delivery stream. To write multiple data records into a delivery stream, use PutRecordBatch. Applications using these operations are referred to as producers. By default, each delivery stream can take in up to 2,000 transactions per second, 5,000 records per second, or 5 MB per second. If you use PutRecord and PutRecordBatch, the limits are an aggregate across these two operations for each delivery stream. For more information about limits and how to request an increase, see Amazon Firehose Limits. Firehose accumulates and publishes a particular metric for a customer account in one minute intervals. It is possible that the bursts of incoming bytes/records ingested to a delivery stream last only for a few seconds. Due to this, the actual spikes in the traffic might not be fully visible in the customer's 1 minute CloudWatch metrics. You must specify the name of the delivery stream and the data record when using PutRecord. The data record consists of a data blob that can be up to 1,000 KiB in size, and any kind of data. For example, it can be a segment from a log file, geographic location data, website clickstream data, and so on.
Firehose buffers records before delivering them to the destination. To disambiguate
the data blobs at the destination, a common solution is to use delimiters in the data,
such as a newline (\n) or some other character unique within the data. This allows the consumer application to parse individual data items when reading the data from the destination.
The PutRecord operation returns a RecordId, which is a unique string assigned to each record. Producer applications can use this ID for purposes such as auditability and investigation.
If the PutRecord operation throws a ServiceUnavailableException, the API is automatically reinvoked (retried) 3 times. If the exception persists, it is possible that the throughput limits have been exceeded for the delivery stream. Re-invoking the Put API operations (for example, PutRecord and PutRecordBatch) can result in data duplicates. For larger data assets, allow for a longer time out before retrying Put API operations. Data records sent to Firehose are stored for 24 hours from the time they are added to a delivery stream as it tries to send the records to the destination. If the destination is unreachable for more than 24 hours, the data is no longer available. Don't concatenate two or more base64 strings to form the data fields of your records. Instead, concatenate the raw data, then perform base64 encoding. |
|
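The newline-delimiter advice above can be sketched as (stream name and payload are placeholders):

```csharp
using System;
using System.IO;
using System.Text;
using Amazon.KinesisFirehose;
using Amazon.KinesisFirehose.Model;

var client = new AmazonKinesisFirehoseClient();

// Append a newline so the consumer can split records at the destination.
var payload = Encoding.UTF8.GetBytes("{\"event\":\"click\"}\n");

var response = await client.PutRecordAsync(new PutRecordRequest
{
    DeliveryStreamName = "my-stream", // placeholder
    Record = new Record { Data = new MemoryStream(payload) }
});

// Unique ID assigned to this record; useful for auditing.
Console.WriteLine(response.RecordId);
```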
PutRecordBatch(string, List<Record>) |
Writes multiple data records into a delivery stream in a single call, which can achieve higher throughput per producer than when writing single records. To write single data records into a delivery stream, use PutRecord. Applications using these operations are referred to as producers. Firehose accumulates and publishes a particular metric for a customer account in one minute intervals. It is possible that the bursts of incoming bytes/records ingested to a delivery stream last only for a few seconds. Due to this, the actual spikes in the traffic might not be fully visible in the customer's 1 minute CloudWatch metrics. For information about service quota, see Amazon Firehose Quota. Each PutRecordBatch request supports up to 500 records. Each record in the request can be as large as 1,000 KB (before base64 encoding), up to a limit of 4 MB for the entire request. These limits cannot be changed. You must specify the name of the delivery stream and the data record when using PutRecord. The data record consists of a data blob that can be up to 1,000 KB in size, and any kind of data. For example, it could be a segment from a log file, geographic location data, website clickstream data, and so on.
Firehose buffers records before delivering them to the destination. To disambiguate
the data blobs at the destination, a common solution is to use delimiters in the data,
such as a newline (\n) or some other character unique within the data. This allows the consumer application to parse individual data items when reading the data from the destination.
The PutRecordBatch response includes a count of failed records, FailedPutCount, and an array of responses, RequestResponses. Even if the PutRecordBatch call succeeds, the value of FailedPutCount may be greater than zero, indicating that there are records for which the operation didn't succeed. Each entry in the RequestResponses array provides additional information about the processed record. It directly correlates with a record in the request array using the same ordering, from the top to the bottom. The response array always includes the same number of records as the request array. RequestResponses includes both successfully and unsuccessfully processed records. Firehose tries to process all records in each PutRecordBatch request. A single record failure does not stop the processing of subsequent records.
A successfully processed record includes a RecordId value, which is unique for the record. An unsuccessfully processed record includes ErrorCode and ErrorMessage values. ErrorCode reflects the type of error, and is one of the following values: ServiceUnavailableException or InternalFailure. ErrorMessage provides more detailed information about the error.
If there is an internal server error or a timeout, the write might have completed
or it might have failed. If FailedPutCount is greater than 0, retry the request, resending only those records that might have failed processing. This minimizes the possible duplicate records and also reduces the total bytes sent (and corresponding charges). We recommend that you handle any duplicates at the destination.
If PutRecordBatch throws ServiceUnavailableException, the API is automatically reinvoked (retried) 3 times. If the exception persists, it is possible that the throughput limits have been exceeded for the delivery stream. Re-invoking the Put API operations (for example, PutRecord and PutRecordBatch) can result in data duplicates. For larger data assets, allow for a longer time out before retrying Put API operations. Data records sent to Firehose are stored for 24 hours from the time they are added to a delivery stream as it attempts to send the records to the destination. If the destination is unreachable for more than 24 hours, the data is no longer available. Don't concatenate two or more base64 strings to form the data fields of your records. Instead, concatenate the raw data, then perform base64 encoding. |
|
PutRecordBatch(PutRecordBatchRequest) |
Writes multiple data records into a delivery stream in a single call, which can achieve higher throughput per producer than when writing single records. To write single data records into a delivery stream, use PutRecord. Applications using these operations are referred to as producers. Firehose accumulates and publishes a particular metric for a customer account in one minute intervals. It is possible that the bursts of incoming bytes/records ingested to a delivery stream last only for a few seconds. Due to this, the actual spikes in the traffic might not be fully visible in the customer's 1 minute CloudWatch metrics. For information about service quota, see Amazon Firehose Quota. Each PutRecordBatch request supports up to 500 records. Each record in the request can be as large as 1,000 KB (before base64 encoding), up to a limit of 4 MB for the entire request. These limits cannot be changed. You must specify the name of the delivery stream and the data record when using PutRecord. The data record consists of a data blob that can be up to 1,000 KB in size, and any kind of data. For example, it could be a segment from a log file, geographic location data, website clickstream data, and so on.
Firehose buffers records before delivering them to the destination. To disambiguate
the data blobs at the destination, a common solution is to use delimiters in the data,
such as a newline (\n) or some other character unique within the data. This allows the consumer application to parse individual data items when reading the data from the destination.
The PutRecordBatch response includes a count of failed records, FailedPutCount, and an array of responses, RequestResponses. Even if the PutRecordBatch call succeeds, the value of FailedPutCount may be greater than zero, indicating that there are records for which the operation didn't succeed. Each entry in the RequestResponses array provides additional information about the processed record. It directly correlates with a record in the request array using the same ordering, from the top to the bottom. The response array always includes the same number of records as the request array. RequestResponses includes both successfully and unsuccessfully processed records. Firehose tries to process all records in each PutRecordBatch request. A single record failure does not stop the processing of subsequent records.
A successfully processed record includes a RecordId value, which is unique for the record. An unsuccessfully processed record includes ErrorCode and ErrorMessage values. ErrorCode reflects the type of error, and is one of the following values: ServiceUnavailableException or InternalFailure. ErrorMessage provides more detailed information about the error.
If there is an internal server error or a timeout, the write might have completed
or it might have failed. If FailedPutCount is greater than 0, retry the request, resending only those records that might have failed processing. This minimizes the possible duplicate records and also reduces the total bytes sent (and corresponding charges). We recommend that you handle any duplicates at the destination.
If PutRecordBatch throws ServiceUnavailableException, the API is automatically reinvoked (retried) 3 times. If the exception persists, it is possible that the throughput limits have been exceeded for the delivery stream. Re-invoking the Put API operations (for example, PutRecord and PutRecordBatch) can result in data duplicates. For larger data assets, allow for a longer time out before retrying Put API operations. Data records sent to Firehose are stored for 24 hours from the time they are added to a delivery stream as it attempts to send the records to the destination. If the destination is unreachable for more than 24 hours, the data is no longer available. Don't concatenate two or more base64 strings to form the data fields of your records. Instead, concatenate the raw data, then perform base64 encoding. |
|
PutRecordBatchAsync(string, List<Record>, CancellationToken) |
Writes multiple data records into a delivery stream in a single call, which can achieve higher throughput per producer than when writing single records. To write single data records into a delivery stream, use PutRecord. Applications using these operations are referred to as producers. Firehose accumulates and publishes a particular metric for a customer account in one minute intervals. It is possible that the bursts of incoming bytes/records ingested to a delivery stream last only for a few seconds. Due to this, the actual spikes in the traffic might not be fully visible in the customer's 1 minute CloudWatch metrics. For information about service quota, see Amazon Firehose Quota. Each PutRecordBatch request supports up to 500 records. Each record in the request can be as large as 1,000 KB (before base64 encoding), up to a limit of 4 MB for the entire request. These limits cannot be changed. You must specify the name of the delivery stream and the data record when using PutRecord. The data record consists of a data blob that can be up to 1,000 KB in size, and any kind of data. For example, it could be a segment from a log file, geographic location data, website clickstream data, and so on.
Firehose buffers records before delivering them to the destination. To disambiguate
the data blobs at the destination, a common solution is to use delimiters in the data,
such as a newline (\n) or some other character unique within the data. This allows the consumer application to parse individual data items when reading the data from the destination.
The PutRecordBatch response includes a count of failed records, FailedPutCount, and an array of responses, RequestResponses. Even if the PutRecordBatch call succeeds, the value of FailedPutCount may be greater than zero, indicating that there are records for which the operation didn't succeed. Each entry in the RequestResponses array provides additional information about the processed record. It directly correlates with a record in the request array using the same ordering, from the top to the bottom. The response array always includes the same number of records as the request array. RequestResponses includes both successfully and unsuccessfully processed records. Firehose tries to process all records in each PutRecordBatch request. A single record failure does not stop the processing of subsequent records.
A successfully processed record includes a RecordId value, which is unique for the record. An unsuccessfully processed record includes ErrorCode and ErrorMessage values. ErrorCode reflects the type of error, and is one of the following values: ServiceUnavailableException or InternalFailure. ErrorMessage provides more detailed information about the error.
If there is an internal server error or a timeout, the write might have completed
or it might have failed. If FailedPutCount is greater than 0, retry the request, resending only those records that might have failed processing. This minimizes the possible duplicate records and also reduces the total bytes sent (and corresponding charges). We recommend that you handle any duplicates at the destination.
If PutRecordBatch throws ServiceUnavailableException, the API is automatically reinvoked (retried) 3 times. If the exception persists, it is possible that the throughput limits have been exceeded for the delivery stream. Re-invoking the Put API operations (for example, PutRecord and PutRecordBatch) can result in data duplicates. For larger data assets, allow for a longer time out before retrying Put API operations. Data records sent to Firehose are stored for 24 hours from the time they are added to a delivery stream as it attempts to send the records to the destination. If the destination is unreachable for more than 24 hours, the data is no longer available. Don't concatenate two or more base64 strings to form the data fields of your records. Instead, concatenate the raw data, then perform base64 encoding. |
|
PutRecordBatchAsync(PutRecordBatchRequest, CancellationToken) |
Writes multiple data records into a delivery stream in a single call, which can achieve higher throughput per producer than when writing single records. To write single data records into a delivery stream, use PutRecord. Applications using these operations are referred to as producers. Firehose accumulates and publishes a particular metric for a customer account in one minute intervals. It is possible that the bursts of incoming bytes/records ingested to a delivery stream last only for a few seconds. Due to this, the actual spikes in the traffic might not be fully visible in the customer's 1 minute CloudWatch metrics. For information about service quota, see Amazon Firehose Quota. Each PutRecordBatch request supports up to 500 records. Each record in the request can be as large as 1,000 KB (before base64 encoding), up to a limit of 4 MB for the entire request. These limits cannot be changed. You must specify the name of the delivery stream and the data record when using PutRecord. The data record consists of a data blob that can be up to 1,000 KB in size, and any kind of data. For example, it could be a segment from a log file, geographic location data, website clickstream data, and so on.
Firehose buffers records before delivering them to the destination. To disambiguate
the data blobs at the destination, a common solution is to use delimiters in the data,
such as a newline (\n) or some other character unique within the data. This allows the consumer application to parse individual data items when reading the data from the destination.
The PutRecordBatch response includes a count of failed records, FailedPutCount, and an array of responses, RequestResponses. Even if the PutRecordBatch call succeeds, the value of FailedPutCount may be greater than zero, indicating that there are records for which the operation didn't succeed. Each entry in the RequestResponses array provides additional information about the processed record. It directly correlates with a record in the request array using the same ordering, from the top to the bottom. The response array always includes the same number of records as the request array. RequestResponses includes both successfully and unsuccessfully processed records. Firehose tries to process all records in each PutRecordBatch request. A single record failure does not stop the processing of subsequent records.
A successfully processed record includes a RecordId value, which is unique for the record. An unsuccessfully processed record includes ErrorCode and ErrorMessage values. ErrorCode reflects the type of error, and is one of the following values: ServiceUnavailableException or InternalFailure. ErrorMessage provides more detailed information about the error.
If there is an internal server error or a timeout, the write might have completed
or it might have failed. If FailedPutCount is greater than 0, retry the request, resending only those records that might have failed processing. This minimizes the possible duplicate records and also reduces the total bytes sent (and corresponding charges). We recommend that you handle any duplicates at the destination.
If PutRecordBatch throws ServiceUnavailableException, the API is automatically reinvoked (retried) 3 times. If the exception persists, it is possible that the throughput limits have been exceeded for the delivery stream. Re-invoking the Put API operations (for example, PutRecord and PutRecordBatch) can result in data duplicates. For larger data assets, allow for a longer time out before retrying Put API operations. Data records sent to Firehose are stored for 24 hours from the time they are added to a delivery stream as it attempts to send the records to the destination. If the destination is unreachable for more than 24 hours, the data is no longer available. Don't concatenate two or more base64 strings to form the data fields of your records. Instead, concatenate the raw data, then perform base64 encoding. |
|
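The selective-retry guidance above (resend only entries that failed) can be sketched as (stream name and payloads are placeholders):

```csharp
using System.Collections.Generic;
using System.IO;
using System.Text;
using Amazon.KinesisFirehose;
using Amazon.KinesisFirehose.Model;

var client = new AmazonKinesisFirehoseClient();

var records = new List<Record>();
for (int i = 0; i < 10; i++)
{
    var bytes = Encoding.UTF8.GetBytes($"{{\"n\":{i}}}\n"); // newline-delimited
    records.Add(new Record { Data = new MemoryStream(bytes) });
}

var response = await client.PutRecordBatchAsync("my-stream", records); // placeholder name

if (response.FailedPutCount > 0)
{
    // RequestResponses is ordered like the request; keep only failed entries.
    var retries = new List<Record>();
    for (int i = 0; i < response.RequestResponses.Count; i++)
        if (response.RequestResponses[i].ErrorCode != null)
            retries.Add(records[i]);
    // Back off, then resend 'retries' (retry loop omitted for brevity).
}
```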
StartDeliveryStreamEncryption(StartDeliveryStreamEncryptionRequest) |
Enables server-side encryption (SSE) for the delivery stream.
This operation is asynchronous. It returns immediately. When you invoke it, Firehose
first sets the encryption status of the stream to ENABLING, and then to ENABLED. The encryption status of a delivery stream is the Status field in DeliveryStreamEncryptionConfiguration. If the operation fails, the encryption status changes to ENABLING_FAILED. You can continue to read and write data to your delivery stream while the encryption status is ENABLING, but the data is not encrypted. It can take up to 5 seconds after the encryption status changes to ENABLED before all records written to the delivery stream are encrypted. To check the encryption status of a delivery stream, use DescribeDeliveryStream.
Even if encryption is currently enabled for a delivery stream, you can still invoke
this operation on it to change the ARN of the CMK or both its type and ARN. If you
invoke this method to change the CMK, and the old CMK is of type CUSTOMER_MANAGED_CMK, Firehose schedules the grant it had on the old CMK for retirement. If the new CMK is of type CUSTOMER_MANAGED_CMK, Firehose creates a grant that enables it to use the new CMK to encrypt and decrypt data and to manage the grant.
For the KMS grant creation to be successful, the Firehose API operations StartDeliveryStreamEncryption and CreateDeliveryStream should not be called with session credentials that are more than 6 hours old.
If a delivery stream already has encryption enabled and then you invoke this operation
to change the ARN of the CMK or both its type and ARN and you get ENABLING_FAILED, this only means that the attempt to change the CMK failed. In this case, encryption remains enabled with the old CMK.
If the encryption status of your delivery stream is ENABLING_FAILED, you can invoke this operation again with a valid CMK. The CMK must be enabled and the key policy mustn't explicitly deny the permission for Firehose to invoke KMS encrypt and decrypt operations.
You can enable SSE for a delivery stream only if it's a delivery stream that uses DirectPut as its source.
The StartDeliveryStreamEncryption and StopDeliveryStreamEncryption operations have a combined limit of 25 calls per delivery stream per 24 hours. For example, you reach the limit if you call StartDeliveryStreamEncryption 13 times and StopDeliveryStreamEncryption 12 times for the same delivery stream in a 24-hour period. |
|
StartDeliveryStreamEncryptionAsync(StartDeliveryStreamEncryptionRequest, CancellationToken) |
Enables server-side encryption (SSE) for the delivery stream.
This operation is asynchronous. It returns immediately. When you invoke it, Firehose
first sets the encryption status of the stream to ENABLING, and then to ENABLED. The encryption status of a delivery stream is the Status field in DeliveryStreamEncryptionConfiguration. If the operation fails, the encryption status changes to ENABLING_FAILED. You can continue to read and write data to your delivery stream while the encryption status is ENABLING, but the data is not encrypted. It can take up to 5 seconds after the encryption status changes to ENABLED before all records written to the delivery stream are encrypted. To check the encryption status of a delivery stream, use DescribeDeliveryStream.
Even if encryption is currently enabled for a delivery stream, you can still invoke
this operation on it to change the ARN of the CMK or both its type and ARN. If you
invoke this method to change the CMK, and the old CMK is of type CUSTOMER_MANAGED_CMK, Firehose schedules the grant it had on the old CMK for retirement. If the new CMK is of type CUSTOMER_MANAGED_CMK, Firehose creates a grant that enables it to use the new CMK to encrypt and decrypt data and to manage the grant.
For the KMS grant creation to be successful, the Firehose API operations StartDeliveryStreamEncryption and CreateDeliveryStream should not be called with session credentials that are more than 6 hours old.
If a delivery stream already has encryption enabled and then you invoke this operation
to change the ARN of the CMK or both its type and ARN and you get ENABLING_FAILED, this only means that the attempt to change the CMK failed. In this case, encryption remains enabled with the old CMK.
If the encryption status of your delivery stream is ENABLING_FAILED, you can invoke this operation again with a valid CMK. The CMK must be enabled and the key policy mustn't explicitly deny the permission for Firehose to invoke KMS encrypt and decrypt operations.
You can enable SSE for a delivery stream only if it's a delivery stream that uses DirectPut as its source.
The StartDeliveryStreamEncryption and StopDeliveryStreamEncryption operations have a combined limit of 25 calls per delivery stream per 24 hours. For example, you reach the limit if you call StartDeliveryStreamEncryption 13 times and StopDeliveryStreamEncryption 12 times for the same delivery stream in a 24-hour period. |
|
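A minimal SSE-enablement sketch using an Amazon Web Services owned CMK (stream name is a placeholder; the stream must use DirectPut as its source):

```csharp
using Amazon.KinesisFirehose;
using Amazon.KinesisFirehose.Model;

var client = new AmazonKinesisFirehoseClient();

await client.StartDeliveryStreamEncryptionAsync(new StartDeliveryStreamEncryptionRequest
{
    DeliveryStreamName = "my-stream", // placeholder
    DeliveryStreamEncryptionConfigurationInput = new DeliveryStreamEncryptionConfigurationInput
    {
        // Use KeyType.CUSTOMER_MANAGED_CMK plus KeyARN to bring your own key.
        KeyType = KeyType.AWS_OWNED_CMK
    }
});
// The status moves ENABLING -> ENABLED; check it via DescribeDeliveryStream.
```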
StopDeliveryStreamEncryption(StopDeliveryStreamEncryptionRequest) |
Disables server-side encryption (SSE) for the delivery stream.
This operation is asynchronous. It returns immediately. When you invoke it, Firehose
first sets the encryption status of the stream to DISABLING, and then to DISABLED. You can continue to read and write data to your stream while its status is DISABLING. It can take up to 5 seconds after the encryption status changes to DISABLED before all records written to the delivery stream are no longer subject to encryption. To check the encryption state of a delivery stream, use DescribeDeliveryStream.
If SSE is enabled using a customer managed CMK and then you invoke StopDeliveryStreamEncryption, Firehose schedules the related KMS grant for retirement and then retires it after it ensures that it is finished delivering records to the destination.
The StartDeliveryStreamEncryption and StopDeliveryStreamEncryption operations have a combined limit of 25 calls per delivery stream per 24 hours. For example, you reach the limit if you call StartDeliveryStreamEncryption 13 times and StopDeliveryStreamEncryption 12 times for the same delivery stream in a 24-hour period. |
|
StopDeliveryStreamEncryptionAsync(StopDeliveryStreamEncryptionRequest, CancellationToken) |
Disables server-side encryption (SSE) for the delivery stream.
This operation is asynchronous. It returns immediately. When you invoke it, Firehose
first sets the encryption status of the stream to DISABLING, and then to DISABLED. You can continue to read and write data to your stream while its status is DISABLING. It can take up to 5 seconds after the encryption status changes to DISABLED before all records written to the delivery stream are no longer subject to encryption. To check the encryption state of a delivery stream, use DescribeDeliveryStream.
If SSE is enabled using a customer managed CMK and then you invoke StopDeliveryStreamEncryption, Firehose schedules the related KMS grant for retirement and then retires it after it ensures that it is finished delivering records to the destination.
The StartDeliveryStreamEncryption and StopDeliveryStreamEncryption operations have a combined limit of 25 calls per delivery stream per 24 hours. For example, you reach the limit if you call StartDeliveryStreamEncryption 13 times and StopDeliveryStreamEncryption 12 times for the same delivery stream in a 24-hour period. |
|
TagDeliveryStream(TagDeliveryStreamRequest) |
Adds or updates tags for the specified delivery stream. A tag is a key-value pair that you can define and assign to Amazon Web Services resources. If you specify a tag that already exists, the tag value is replaced with the value that you specify in the request. Tags are metadata. For example, you can add friendly names and descriptions or other types of information that can help you distinguish the delivery stream. For more information about tags, see Using Cost Allocation Tags in the Amazon Web Services Billing and Cost Management User Guide. Each delivery stream can have up to 50 tags. This operation has a limit of five transactions per second per account. |
|
TagDeliveryStreamAsync(TagDeliveryStreamRequest, CancellationToken) |
Adds or updates tags for the specified delivery stream. A tag is a key-value pair that you can define and assign to Amazon Web Services resources. If you specify a tag that already exists, the tag value is replaced with the value that you specify in the request. Tags are metadata. For example, you can add friendly names and descriptions or other types of information that can help you distinguish the delivery stream. For more information about tags, see Using Cost Allocation Tags in the Amazon Web Services Billing and Cost Management User Guide. Each delivery stream can have up to 50 tags. This operation has a limit of five transactions per second per account. |
|
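A minimal tagging call looks like the sketch below. The stream name and tag values are placeholders; note that re-sending an existing key replaces its value, per the semantics described above.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.KinesisFirehose;
using Amazon.KinesisFirehose.Model;

class TagExample
{
    static async Task Main()
    {
        var client = new AmazonKinesisFirehoseClient();

        // Add or update tags on a delivery stream (max 50 tags per stream,
        // five TagDeliveryStream transactions per second per account).
        await client.TagDeliveryStreamAsync(new TagDeliveryStreamRequest
        {
            DeliveryStreamName = "my-delivery-stream", // placeholder name
            Tags = new List<Tag>
            {
                new Tag { Key = "Environment", Value = "Production" },
                new Tag { Key = "CostCenter",  Value = "12345" }
            }
        });
    }
}
```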
UntagDeliveryStream(UntagDeliveryStreamRequest) |
Removes tags from the specified delivery stream. Removed tags are deleted, and you can't recover them after this operation successfully completes. If you specify a tag that doesn't exist, the operation ignores it. This operation has a limit of five transactions per second per account. |
|
UntagDeliveryStreamAsync(UntagDeliveryStreamRequest, CancellationToken) |
Removes tags from the specified delivery stream. Removed tags are deleted, and you can't recover them after this operation successfully completes. If you specify a tag that doesn't exist, the operation ignores it. This operation has a limit of five transactions per second per account. |
|
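Untagging takes only the tag keys, not key-value pairs; keys that do not exist are ignored. A sketch, again with a placeholder stream name:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.KinesisFirehose;
using Amazon.KinesisFirehose.Model;

class UntagExample
{
    static async Task Main()
    {
        var client = new AmazonKinesisFirehoseClient();

        // Remove tags by key. Removed tags cannot be recovered, and
        // nonexistent keys are silently ignored.
        await client.UntagDeliveryStreamAsync(new UntagDeliveryStreamRequest
        {
            DeliveryStreamName = "my-delivery-stream", // placeholder name
            TagKeys = new List<string> { "CostCenter" }
        });
    }
}
```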
UpdateDestination(UpdateDestinationRequest) |
Updates the specified destination of the specified delivery stream. Use this operation to change the destination type (for example, to replace the Amazon S3 destination with Amazon Redshift) or change the parameters associated with a destination (for example, to change the bucket name of the Amazon S3 destination). The update might not occur immediately. The target delivery stream remains active while the configurations are updated, so data writes to the delivery stream can continue during this process. The updated configurations are usually effective within a few minutes. Switching between Amazon OpenSearch Service and other services is not supported. For an Amazon OpenSearch Service destination, you can only update to another Amazon OpenSearch Service destination.
If the destination type is the same, Firehose merges the configuration parameters
specified with the destination configuration that already exists on the delivery stream.
If any of the parameters are not specified in the call, the existing values are retained.
For example, in the Amazon S3 destination, if EncryptionConfiguration is not
specified, then the existing EncryptionConfiguration is maintained on the destination. If the destination type is not the same, for example, changing the destination from Amazon S3 to Amazon Redshift, Firehose does not merge any parameters. In this case, all parameters must be specified.
Firehose uses CurrentDeliveryStreamVersionId to avoid race conditions and conflicting merges. This is a required field, and the service updates the configuration only if the existing configuration has a version that matches. After the update is applied successfully, the version number is updated, and can be retrieved using DescribeDeliveryStream. Use the new version number in the next call to UpdateDestination. |
|
UpdateDestinationAsync(UpdateDestinationRequest, CancellationToken) |
Updates the specified destination of the specified delivery stream. Use this operation to change the destination type (for example, to replace the Amazon S3 destination with Amazon Redshift) or change the parameters associated with a destination (for example, to change the bucket name of the Amazon S3 destination). The update might not occur immediately. The target delivery stream remains active while the configurations are updated, so data writes to the delivery stream can continue during this process. The updated configurations are usually effective within a few minutes. Switching between Amazon OpenSearch Service and other services is not supported. For an Amazon OpenSearch Service destination, you can only update to another Amazon OpenSearch Service destination.
If the destination type is the same, Firehose merges the configuration parameters
specified with the destination configuration that already exists on the delivery stream.
If any of the parameters are not specified in the call, the existing values are retained.
For example, in the Amazon S3 destination, if EncryptionConfiguration is not
specified, then the existing EncryptionConfiguration is maintained on the destination. If the destination type is not the same, for example, changing the destination from Amazon S3 to Amazon Redshift, Firehose does not merge any parameters. In this case, all parameters must be specified.
Firehose uses CurrentDeliveryStreamVersionId to avoid race conditions and conflicting merges. This is a required field, and the service updates the configuration only if the existing configuration has a version that matches. After the update is applied successfully, the version number is updated, and can be retrieved using DescribeDeliveryStream. Use the new version number in the next call to UpdateDestination. |
Name | Description | |
---|---|---|
AfterResponseEvent | Inherited from Amazon.Runtime.AmazonServiceClient. | |
BeforeRequestEvent | Inherited from Amazon.Runtime.AmazonServiceClient. | |
ExceptionEvent | Inherited from Amazon.Runtime.AmazonServiceClient. |
.NET Core App:
Supported in: 3.1
.NET Standard:
Supported in: 2.0
.NET Framework:
Supported in: 4.5, 4.0, 3.5