-ContentRedaction_PiiEntityType <String[]>
Specify which types of personally identifiable information (PII) you want to redact in your transcript. You can include as many types as you'd like, or you can select ALL.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | ContentRedaction_PiiEntityTypes |
Specify if you want only a redacted transcript, or if you want a redacted and an unredacted transcript. When you choose redacted, Amazon Transcribe creates only a redacted transcript. When you choose redacted_and_unredacted, Amazon Transcribe creates a redacted and an unredacted transcript (as two separate files).
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Specify the category of information you want to redact; PII (personally identifiable information) is the only valid value. You can use PiiEntityTypes to choose which types of PII you want to redact.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
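A minimal sketch of combining the content-redaction settings above in a single request. The cmdlet name (Start-TRSTranscriptionJob) and the -LanguageCode, -Media_MediaFileUri, -ContentRedaction_RedactionType, and -ContentRedaction_RedactionOutput parameter names are assumptions (their headings are not shown in this excerpt); the job name and bucket are placeholders.

# Assumed cmdlet/parameter names; redact all PII types and keep only the redacted transcript
Start-TRSTranscriptionJob `
    -TranscriptionJobName 'my-redacted-job' `
    -LanguageCode 'en-US' `
    -Media_MediaFileUri 's3://DOC-EXAMPLE-BUCKET/my-media-file.flac' `
    -ContentRedaction_RedactionType 'PII' `
    -ContentRedaction_PiiEntityType 'ALL' `
    -ContentRedaction_RedactionOutput 'redacted'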
This parameter overrides confirmation prompts to force the cmdlet to continue its operation. This parameter should always be used with caution.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Enables automatic language identification in your transcription job request. If you include IdentifyLanguage, you can optionally include a list of language codes, using LanguageOptions, that you think may be present in your media file. Including language options can improve transcription accuracy.
If you want to apply a custom language model, a custom vocabulary, or a custom vocabulary filter to your automatic language identification request, include LanguageIdSettings with the relevant sub-parameters (VocabularyName, LanguageModelName, and VocabularyFilterName).
Note that you must include one of LanguageCode, IdentifyLanguage, or IdentifyMultipleLanguages in your request. If you include more than one of these parameters, your transcription job fails.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
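A sketch of automatic language identification restricted to a short list of candidate languages. The cmdlet name and the -IdentifyLanguage, -LanguageOption, and -Media_MediaFileUri parameter names are assumptions (the LanguageOptions alias documented later in this section suggests -LanguageOption); the values are placeholders.

# Assumed cmdlet/parameter names; let Amazon Transcribe identify the language from two candidates
Start-TRSTranscriptionJob `
    -TranscriptionJobName 'my-language-id-job' `
    -Media_MediaFileUri 's3://DOC-EXAMPLE-BUCKET/my-media-file.flac' `
    -IdentifyLanguage $true `
    -LanguageOption 'en-US','es-US'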
-IdentifyMultipleLanguage <Boolean>
Enables automatic multi-language identification in your transcription job request. Use this parameter if your media file contains more than one language. If you include IdentifyMultipleLanguages, you can optionally include a list of language codes, using LanguageOptions, that you think may be present in your media file. Including language options can improve transcription accuracy.
If you want to apply a custom vocabulary or a custom vocabulary filter to your automatic language identification request, include LanguageIdSettings with the relevant sub-parameters (VocabularyName and VocabularyFilterName).
Note that you must include one of LanguageCode, IdentifyLanguage, or IdentifyMultipleLanguages in your request. If you include more than one of these parameters, your transcription job fails.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | IdentifyMultipleLanguages |
-JobExecutionSettings_AllowDeferredExecution <Boolean>
Allows you to enable job queuing when your concurrent request limit is exceeded. When AllowDeferredExecution is set to true, transcription job requests are placed in a queue until the number of jobs falls below the concurrent request limit. If AllowDeferredExecution is set to false and the number of transcription job requests exceeds the concurrent request limit, you get a LimitExceededException error.
Note that job queuing is enabled by default for Call Analytics jobs.
If you include AllowDeferredExecution in your request, you must also include DataAccessRoleArn.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
-JobExecutionSettings_DataAccessRoleArn <String>
The Amazon Resource Name (ARN) of an IAM role that has permissions to access the Amazon S3 bucket that contains your input files. If the role you specify doesn't have the appropriate permissions to access the specified Amazon S3 location, your request fails.
IAM role ARNs have the format arn:partition:iam::account:role/role-name-with-path. For example: arn:aws:iam::111122223333:role/Admin. For more information, see IAM ARNs.
Note that if you include DataAccessRoleArn in your request, you must also include AllowDeferredExecution.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
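A sketch of job queuing, pairing AllowDeferredExecution with the DataAccessRoleArn it requires, as described above. The cmdlet name and the -LanguageCode and -Media_MediaFileUri parameter names are assumptions; the role ARN, bucket, and job name are placeholders.

# Assumed cmdlet/parameter names; queue the job instead of failing at the concurrent request limit
Start-TRSTranscriptionJob `
    -TranscriptionJobName 'my-queued-job' `
    -LanguageCode 'en-US' `
    -Media_MediaFileUri 's3://DOC-EXAMPLE-BUCKET/my-media-file.flac' `
    -JobExecutionSettings_AllowDeferredExecution $true `
    -JobExecutionSettings_DataAccessRoleArn 'arn:aws:iam::111122223333:role/ExampleTranscribeRole'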
The language code that represents the language spoken in the input media file.
If you're unsure of the language spoken in your media file, consider using IdentifyLanguage or IdentifyMultipleLanguages to enable automatic language identification.
Note that you must include one of LanguageCode, IdentifyLanguage, or IdentifyMultipleLanguages in your request. If you include more than one of these parameters, your transcription job fails.
For a list of supported languages and their associated language codes, refer to the Supported languages table.
To transcribe speech in Modern Standard Arabic (ar-SA), your media file must be encoded at a sample rate of 16,000 Hz or higher.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
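A minimal request with an explicit language code and no automatic language identification. The cmdlet name and the -LanguageCode and -Media_MediaFileUri parameter names are assumptions; the job name and bucket are placeholders.

# Assumed cmdlet/parameter names; do not also pass IdentifyLanguage or IdentifyMultipleLanguage
Start-TRSTranscriptionJob `
    -TranscriptionJobName 'my-first-transcription' `
    -LanguageCode 'en-US' `
    -Media_MediaFileUri 's3://DOC-EXAMPLE-BUCKET/my-media-file.flac'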
If using automatic language identification (IdentifyLanguage) in your request and you want to apply a custom language model, a custom vocabulary, or a custom vocabulary filter, include LanguageIdSettings with the relevant sub-parameters (VocabularyName, LanguageModelName, and VocabularyFilterName).
You can specify two or more language codes that represent the languages you think may be present in your media; including more than five is not recommended. Each language code you include can have an associated custom language model, custom vocabulary, and custom vocabulary filter. The languages you specify must match the languages of the specified custom language models, custom vocabularies, and custom vocabulary filters.
To include language options using IdentifyLanguage without including a custom language model, a custom vocabulary, or a custom vocabulary filter, use LanguageOptions instead of LanguageIdSettings. Including language options can improve the accuracy of automatic language identification.
If you want to include a custom language model with your request but do not want to use automatic language identification, use the ModelSettings_LanguageModelName parameter instead.
If you want to include a custom vocabulary or a custom vocabulary filter (or both) with your request but do not want to use automatic language identification, use the Settings_VocabularyName or Settings_VocabularyFilterName parameter (or both) instead.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | LanguageIdSettings |
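A hedged sketch of per-language identification settings. The service API models LanguageIdSettings as a map of language code to settings, so a hashtable keyed by language code with Amazon.TranscribeService.Model.LanguageIdSettings values is assumed here; the exact .NET type this parameter accepts is not shown in this excerpt, and the vocabulary and model names are placeholders.

# Assumed shape: hashtable keyed by language code, values are LanguageIdSettings objects
$enSettings = New-Object Amazon.TranscribeService.Model.LanguageIdSettings
$enSettings.VocabularyName = 'my-en-vocabulary'
$enSettings.LanguageModelName = 'my-en-language-model'

Start-TRSTranscriptionJob `
    -TranscriptionJobName 'my-language-id-settings-job' `
    -Media_MediaFileUri 's3://DOC-EXAMPLE-BUCKET/my-media-file.flac' `
    -IdentifyLanguage $true `
    -LanguageIdSetting @{ 'en-US' = $enSettings }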
You can specify two or more language codes that represent the languages you think may be present in your media; including more than five is not recommended. If you're unsure what languages are present, do not include this parameter.
If you include LanguageOptions in your request, you must also include IdentifyLanguage.
For more information, refer to Supported languages.
To transcribe speech in Modern Standard Arabic (ar-SA), your media file must be encoded at a sample rate of 16,000 Hz or higher.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | LanguageOptions |
The Amazon S3 location of the media file you want to transcribe. For example:
s3://DOC-EXAMPLE-BUCKET/my-media-file.flac
s3://DOC-EXAMPLE-BUCKET/media-files/my-media-file.flac
Note that the Amazon S3 bucket that contains your input media must be located in the same Amazon Web Services Region where you're making your transcription request.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
-Media_RedactedMediaFileUri <String>
The Amazon S3 location of the media file you want to redact. For example:
s3://DOC-EXAMPLE-BUCKET/my-media-file.flac
s3://DOC-EXAMPLE-BUCKET/media-files/my-media-file.flac
Note that the Amazon S3 bucket that contains your input media must be located in the same Amazon Web Services Region where you're making your transcription request.
RedactedMediaFileUri is only supported for Call Analytics (StartCallAnalyticsJob) transcription requests.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Specify the format of your input media file.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
-MediaSampleRateHertz <Int32>
The sample rate, in Hertz, of the audio track in your input media file.
If you don't specify the media sample rate, Amazon Transcribe determines it for you. If you specify the sample rate, it must match the rate detected by Amazon Transcribe; if there's a mismatch between the value you specify and the value detected, your job fails. Therefore, in most cases, it's advised to omit MediaSampleRateHertz and let Amazon Transcribe determine the sample rate.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
-ModelSettings_LanguageModelName <String>
The name of the custom language model you want to use when processing your transcription job. Note that language model names are case sensitive.
The language of the specified language model must match the language code you specify in your transcription request. If the languages don't match, the language model isn't applied. There are no errors or warnings associated with a language mismatch.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
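A sketch of applying a custom language model to a job with an explicit language code (no automatic language identification), as described above. The cmdlet name and the -LanguageCode and -Media_MediaFileUri parameter names are assumptions; the model name, bucket, and job name are placeholders.

# Assumed cmdlet/parameter names; the model's language must match LanguageCode
Start-TRSTranscriptionJob `
    -TranscriptionJobName 'my-clm-job' `
    -LanguageCode 'en-US' `
    -Media_MediaFileUri 's3://DOC-EXAMPLE-BUCKET/my-media-file.flac' `
    -ModelSettings_LanguageModelName 'MyCustomLanguageModel'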
The name of the Amazon S3 bucket where you want your transcription output stored. Do not include the S3:// prefix of the specified bucket.
If you want your output to go to a sub-folder of this bucket, specify it using the OutputKey parameter; OutputBucketName only accepts the name of a bucket.
For example, if you want your output stored in S3://DOC-EXAMPLE-BUCKET, set OutputBucketName to DOC-EXAMPLE-BUCKET. However, if you want your output stored in S3://DOC-EXAMPLE-BUCKET/test-files/, set OutputBucketName to DOC-EXAMPLE-BUCKET and OutputKey to test-files/.
Note that Amazon Transcribe must have permission to use the specified location. You can change Amazon S3 permissions using the Amazon Web Services Management Console. See also Permissions Required for IAM User Roles.
If you don't specify OutputBucketName, your transcript is placed in a service-managed Amazon S3 bucket and you are provided with a URI to access your transcript.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
-OutputEncryptionKMSKeyId <String>
The KMS key you want to use to encrypt your transcription output.
If using a key located in the current Amazon Web Services account, you can specify your KMS key in one of four ways:
- Use the KMS key ID itself. For example, 1234abcd-12ab-34cd-56ef-1234567890ab.
- Use an alias for the KMS key ID. For example, alias/ExampleAlias.
- Use the Amazon Resource Name (ARN) for the KMS key ID. For example, arn:aws:kms:region:account-ID:key/1234abcd-12ab-34cd-56ef-1234567890ab.
- Use the ARN for the KMS key alias. For example, arn:aws:kms:region:account-ID:alias/ExampleAlias.
If using a key located in a different Amazon Web Services account than the current Amazon Web Services account, you can specify your KMS key in one of two ways:
- Use the ARN for the KMS key ID. For example, arn:aws:kms:region:account-ID:key/1234abcd-12ab-34cd-56ef-1234567890ab.
- Use the ARN for the KMS key alias. For example, arn:aws:kms:region:account-ID:alias/ExampleAlias.
If you don't specify an encryption key, your output is encrypted with the default Amazon S3 key (SSE-S3).
If you specify a KMS key to encrypt your output, you must also specify an output location using the OutputLocation parameter. Note that the user making the request must have permission to use the specified KMS key.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
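A sketch of encrypting output with a KMS key alias, which also requires an explicit output location. The cmdlet name and the -LanguageCode, -Media_MediaFileUri, and -OutputBucketName parameter names are assumptions; the alias, bucket, and job name are placeholders.

# Assumed cmdlet/parameter names; the caller must have permission to use the specified key
Start-TRSTranscriptionJob `
    -TranscriptionJobName 'my-encrypted-job' `
    -LanguageCode 'en-US' `
    -Media_MediaFileUri 's3://DOC-EXAMPLE-BUCKET/my-media-file.flac' `
    -OutputBucketName 'DOC-EXAMPLE-BUCKET' `
    -OutputEncryptionKMSKeyId 'alias/ExampleAlias'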
Use in combination with OutputBucketName to specify the output location of your transcript and, optionally, a unique name for your output file. The default name for your transcription output is the same as the name you specified for your transcription job (TranscriptionJobName).
Here are some examples of how you can use OutputKey:
- If you specify 'DOC-EXAMPLE-BUCKET' as the OutputBucketName and 'my-transcript.json' as the OutputKey, your transcription output path is s3://DOC-EXAMPLE-BUCKET/my-transcript.json.
- If you specify 'my-first-transcription' as the TranscriptionJobName, 'DOC-EXAMPLE-BUCKET' as the OutputBucketName, and 'my-transcript' as the OutputKey, your transcription output path is s3://DOC-EXAMPLE-BUCKET/my-transcript/my-first-transcription.json.
- If you specify 'DOC-EXAMPLE-BUCKET' as the OutputBucketName and 'test-files/my-transcript.json' as the OutputKey, your transcription output path is s3://DOC-EXAMPLE-BUCKET/test-files/my-transcript.json.
- If you specify 'my-first-transcription' as the TranscriptionJobName, 'DOC-EXAMPLE-BUCKET' as the OutputBucketName, and 'test-files/my-transcript' as the OutputKey, your transcription output path is s3://DOC-EXAMPLE-BUCKET/test-files/my-transcript/my-first-transcription.json.
If you specify the name of an Amazon S3 bucket sub-folder that doesn't exist, one is created for you.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
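A sketch reproducing the third bullet above. The cmdlet name and the -LanguageCode, -Media_MediaFileUri, -OutputBucketName, and -OutputKey parameter names are assumptions; the values are placeholders.

# Assumed cmdlet/parameter names; output lands at s3://DOC-EXAMPLE-BUCKET/test-files/my-transcript.json
Start-TRSTranscriptionJob `
    -TranscriptionJobName 'my-first-transcription' `
    -LanguageCode 'en-US' `
    -Media_MediaFileUri 's3://DOC-EXAMPLE-BUCKET/my-media-file.flac' `
    -OutputBucketName 'DOC-EXAMPLE-BUCKET' `
    -OutputKey 'test-files/my-transcript.json'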
Changes the cmdlet behavior to return the value passed to the TranscriptionJobName parameter. The -PassThru parameter is deprecated; use -Select '^TranscriptionJobName' instead. This parameter will be removed in a future version.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Use the -Select parameter to control the cmdlet output. The default value is 'TranscriptionJob'. Specifying -Select '*' will result in the cmdlet returning the whole service response (Amazon.TranscribeService.Model.StartTranscriptionJobResponse). Specifying the name of a property of type Amazon.TranscribeService.Model.StartTranscriptionJobResponse will result in that property being returned. Specifying -Select '^ParameterName' will result in the cmdlet returning the selected cmdlet parameter value.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
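A sketch of shaping the cmdlet output with -Select, using the '^TranscriptionJobName' form documented above. The cmdlet name and the -LanguageCode and -Media_MediaFileUri parameter names are assumptions; the values are placeholders.

# Assumed cmdlet/parameter names; return only the job name instead of the TranscriptionJob object
Start-TRSTranscriptionJob `
    -TranscriptionJobName 'my-first-transcription' `
    -LanguageCode 'en-US' `
    -Media_MediaFileUri 's3://DOC-EXAMPLE-BUCKET/my-media-file.flac' `
    -Select '^TranscriptionJobName'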
-Settings_ChannelIdentification <Boolean>
Enables channel identification in multi-channel audio.
Channel identification transcribes the audio on each channel independently, then appends the output for each channel into one transcript.
You can't include both ShowSpeakerLabels and ChannelIdentification in the same request. Including both parameters returns a BadRequestException.
For more information, see Transcribing multi-channel audio.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
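A sketch of transcribing multi-channel audio with channel identification enabled (and, per the note above, without ShowSpeakerLabels). The cmdlet name and the -LanguageCode and -Media_MediaFileUri parameter names are assumptions; the values are placeholders.

# Assumed cmdlet/parameter names; transcribe each channel independently, then merge into one transcript
Start-TRSTranscriptionJob `
    -TranscriptionJobName 'my-multichannel-job' `
    -LanguageCode 'en-US' `
    -Media_MediaFileUri 's3://DOC-EXAMPLE-BUCKET/my-media-file.flac' `
    -Settings_ChannelIdentification $true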
-Settings_MaxAlternative <Int32>
Indicate the maximum number of alternative transcriptions you want Amazon Transcribe to include in your transcript.
If you select a number greater than the number of alternative transcriptions generated by Amazon Transcribe, only the actual number of alternative transcriptions are included.
If you include MaxAlternatives in your request, you must also include ShowAlternatives with a value of true.
For more information, see Alternative transcriptions.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | Settings_MaxAlternatives |
-Settings_MaxSpeakerLabel <Int32>
Specify the maximum number of speakers you want to identify in your media.
Note that if your media contains more speakers than the specified number, multiple speakers will be identified as a single speaker.
If you specify the MaxSpeakerLabels field, you must set the ShowSpeakerLabels field to true.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | Settings_MaxSpeakerLabels |
-Settings_ShowAlternative <Boolean>
To include alternative transcriptions within your transcription output, include ShowAlternatives in your transcription request.
If you have multi-channel audio and do not enable channel identification, your audio is transcribed in a continuous manner and your transcript does not separate the speech by channel.
If you include ShowAlternatives, you must also include MaxAlternatives, which is the maximum number of alternative transcriptions you want Amazon Transcribe to generate.
For more information, see Alternative transcriptions.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | Settings_ShowAlternatives |
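A sketch combining ShowAlternatives with the MaxAlternatives it requires. The cmdlet name and the -LanguageCode and -Media_MediaFileUri parameter names are assumptions; the values are placeholders.

# Assumed cmdlet/parameter names; request up to three alternative transcriptions
Start-TRSTranscriptionJob `
    -TranscriptionJobName 'my-alternatives-job' `
    -LanguageCode 'en-US' `
    -Media_MediaFileUri 's3://DOC-EXAMPLE-BUCKET/my-media-file.flac' `
    -Settings_ShowAlternative $true `
    -Settings_MaxAlternative 3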
-Settings_ShowSpeakerLabel <Boolean>
Enables speaker identification (diarization) in your transcription output. Speaker identification labels the speech from individual speakers in your media file.
If you enable ShowSpeakerLabels in your request, you must also include MaxSpeakerLabels.
You can't include both ShowSpeakerLabels and ChannelIdentification in the same request. Including both parameters returns a BadRequestException.
For more information, see Identifying speakers (diarization).
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | Settings_ShowSpeakerLabels |
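A sketch of speaker diarization, pairing ShowSpeakerLabels with the MaxSpeakerLabels it requires. The cmdlet name and the -LanguageCode and -Media_MediaFileUri parameter names are assumptions; the values are placeholders.

# Assumed cmdlet/parameter names; label speech from up to four distinct speakers
Start-TRSTranscriptionJob `
    -TranscriptionJobName 'my-diarization-job' `
    -LanguageCode 'en-US' `
    -Media_MediaFileUri 's3://DOC-EXAMPLE-BUCKET/my-media-file.flac' `
    -Settings_ShowSpeakerLabel $true `
    -Settings_MaxSpeakerLabel 4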
Specify how you want your vocabulary filter applied to your transcript.
To replace words with ***, choose mask.
To delete words, choose remove.
To flag words without changing them, choose tag.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
-Settings_VocabularyFilterName <String>
The name of the custom vocabulary filter you want to use in your transcription job request. This name is case sensitive, cannot contain spaces, and must be unique within an Amazon Web Services account.
Note that if you include VocabularyFilterName in your request, you must also include VocabularyFilterMethod.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
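A sketch pairing a custom vocabulary filter with a filter method, as the note above requires. The cmdlet name and the -LanguageCode, -Media_MediaFileUri, and -Settings_VocabularyFilterMethod parameter names are assumptions; the filter name and other values are placeholders.

# Assumed cmdlet/parameter names; mask filtered words with *** in the transcript
Start-TRSTranscriptionJob `
    -TranscriptionJobName 'my-filtered-job' `
    -LanguageCode 'en-US' `
    -Media_MediaFileUri 's3://DOC-EXAMPLE-BUCKET/my-media-file.flac' `
    -Settings_VocabularyFilterName 'my-vocabulary-filter' `
    -Settings_VocabularyFilterMethod 'mask'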
-Settings_VocabularyName <String>
The name of the custom vocabulary you want to use in your transcription job request. This name is case sensitive, cannot contain spaces, and must be unique within an Amazon Web Services account.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Specify the output format for your subtitle file; if you select both WebVTT (vtt) and SubRip (srt) formats, two output files are generated.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | Subtitles_Formats |
-Subtitles_OutputStartIndex <Int32>
Specify the starting value that is assigned to the first subtitle segment.
The default start index for Amazon Transcribe is 0, which differs from the more widely used standard of 1. If you're uncertain which value to use, we recommend choosing 1, as this may improve compatibility with other services.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
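A sketch requesting both subtitle formats with a start index of 1. The cmdlet name and the -LanguageCode, -Media_MediaFileUri, and -Subtitles_Format parameter names are assumptions (the Subtitles_Formats alias above suggests -Subtitles_Format); the values are placeholders.

# Assumed cmdlet/parameter names; generate WebVTT and SubRip subtitles, numbering segments from 1
Start-TRSTranscriptionJob `
    -TranscriptionJobName 'my-subtitles-job' `
    -LanguageCode 'en-US' `
    -Media_MediaFileUri 's3://DOC-EXAMPLE-BUCKET/my-media-file.flac' `
    -Subtitles_Format 'vtt','srt' `
    -Subtitles_OutputStartIndex 1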
Adds one or more custom tags, each in the form of a key:value pair, to a new transcription job at the time you start this new job.
To learn more about using tags with Amazon Transcribe, refer to Tagging resources.
Required? | False |
Position? | Named |
Accept pipeline input? | True (ByPropertyName) |
Aliases | Tags |
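A sketch of tagging a new job. The cmdlet name, the -Tag parameter name (suggested by the Tags alias above), the -LanguageCode and -Media_MediaFileUri names, and the Amazon.TranscribeService.Model.Tag type are all assumptions; the key, value, and other values are placeholders.

# Assumed cmdlet/parameter/type names; attach a Department tag to the new transcription job
$tag = New-Object Amazon.TranscribeService.Model.Tag
$tag.Key = 'Department'
$tag.Value = 'Sales'

Start-TRSTranscriptionJob `
    -TranscriptionJobName 'my-tagged-job' `
    -LanguageCode 'en-US' `
    -Media_MediaFileUri 's3://DOC-EXAMPLE-BUCKET/my-media-file.flac' `
    -Tag $tag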
-TranscriptionJobName <String>
A unique name, chosen by you, for your transcription job. The name you specify is also used as the default name of your transcription output file. If you want to specify a different name for your transcription output, use the OutputKey parameter.
This name is case sensitive, cannot contain spaces, and must be unique within an Amazon Web Services account. If you try to create a new job with the same name as an existing job, you get a ConflictException error.
Required? | True |
Position? | 1 |
Accept pipeline input? | True (ByValue, ByPropertyName) |