
There are more AWS SDK examples available in the AWS Doc SDK Examples GitHub repository.


Synthesize speech from text with Amazon Polly using an AWS SDK

The following code examples show how to synthesize speech from text with Amazon Polly.

.NET
AWS SDK for .NET
Note

There's more on GitHub. Find the complete example and learn how to set up and run it in the AWS Code Examples Repository.

using System;
using System.IO;
using System.Threading.Tasks;
using Amazon.Polly;
using Amazon.Polly.Model;

public class SynthesizeSpeech
{
    public static async Task Main()
    {
        string outputFileName = "speech.mp3";
        string text = "Twas brillig, and the slithy toves did gyre and gimbol in the wabe";

        var client = new AmazonPollyClient();
        var response = await PollySynthesizeSpeech(client, text);

        WriteSpeechToStream(response.AudioStream, outputFileName);
    }

    /// <summary>
    /// Calls the Amazon Polly SynthesizeSpeechAsync method to convert text
    /// to speech.
    /// </summary>
    /// <param name="client">The Amazon Polly client object used to connect
    /// to the Amazon Polly service.</param>
    /// <param name="text">The text to convert to speech.</param>
    /// <returns>A SynthesizeSpeechResponse object that includes an AudioStream
    /// object with the converted text.</returns>
    private static async Task<SynthesizeSpeechResponse> PollySynthesizeSpeech(IAmazonPolly client, string text)
    {
        var synthesizeSpeechRequest = new SynthesizeSpeechRequest()
        {
            OutputFormat = OutputFormat.Mp3,
            VoiceId = VoiceId.Joanna,
            Text = text,
        };

        var synthesizeSpeechResponse = await client.SynthesizeSpeechAsync(synthesizeSpeechRequest);
        return synthesizeSpeechResponse;
    }

    /// <summary>
    /// Writes the AudioStream returned from the call to
    /// SynthesizeSpeechAsync to a file in MP3 format.
    /// </summary>
    /// <param name="audioStream">The AudioStream returned from the
    /// call to the SynthesizeSpeechAsync method.</param>
    /// <param name="outputFileName">The full path to the file in which to
    /// save the audio stream.</param>
    private static void WriteSpeechToStream(Stream audioStream, string outputFileName)
    {
        var outputStream = new FileStream(
            outputFileName,
            FileMode.Create,
            FileAccess.Write);
        byte[] buffer = new byte[2 * 1024];
        int readBytes;

        while ((readBytes = audioStream.Read(buffer, 0, 2 * 1024)) > 0)
        {
            outputStream.Write(buffer, 0, readBytes);
        }

        // Flushes the buffer to avoid losing the last second or so of
        // the synthesized text.
        outputStream.Flush();
        Console.WriteLine($"Saved {outputFileName} to disk.");
    }
}

Synthesize speech from text, including speech marks, with Amazon Polly using an AWS SDK.

using System;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using Amazon.Polly;
using Amazon.Polly.Model;

public class SynthesizeSpeechMarks
{
    public static async Task Main()
    {
        var client = new AmazonPollyClient();
        string outputFileName = "speechMarks.json";

        var synthesizeSpeechRequest = new SynthesizeSpeechRequest()
        {
            OutputFormat = OutputFormat.Json,
            SpeechMarkTypes = new List<string>
            {
                SpeechMarkType.Viseme,
                SpeechMarkType.Word,
            },
            VoiceId = VoiceId.Joanna,
            Text = "This is a sample text to be synthesized.",
        };

        try
        {
            using (var outputStream = new FileStream(outputFileName, FileMode.Create, FileAccess.Write))
            {
                var synthesizeSpeechResponse = await client.SynthesizeSpeechAsync(synthesizeSpeechRequest);
                var buffer = new byte[2 * 1024];
                int readBytes;

                var inputStream = synthesizeSpeechResponse.AudioStream;
                while ((readBytes = inputStream.Read(buffer, 0, 2 * 1024)) > 0)
                {
                    outputStream.Write(buffer, 0, readBytes);
                }
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine($"Error: {ex.Message}");
        }
    }
}
  • For API details, see SynthesizeSpeech in the AWS SDK for .NET API Reference.

Python
SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run it in the AWS Code Examples Repository.

import json
import logging

from botocore.exceptions import ClientError

logger = logging.getLogger(__name__)


class PollyWrapper:
    """Encapsulates Amazon Polly functions."""

    def __init__(self, polly_client, s3_resource):
        """
        :param polly_client: A Boto3 Amazon Polly client.
        :param s3_resource: A Boto3 Amazon Simple Storage Service (Amazon S3)
                            resource.
        """
        self.polly_client = polly_client
        self.s3_resource = s3_resource
        self.voice_metadata = None

    def synthesize(
            self, text, engine, voice, audio_format, lang_code=None,
            include_visemes=False):
        """
        Synthesizes speech or speech marks from text, using the specified voice.

        :param text: The text to synthesize.
        :param engine: The kind of engine used. Can be standard or neural.
        :param voice: The ID of the voice to use.
        :param audio_format: The audio format to return for synthesized speech.
                             When speech marks are synthesized, the output format
                             is JSON.
        :param lang_code: The language code of the voice to use. This has an
                          effect only when a bilingual voice is selected.
        :param include_visemes: When True, a second request is made to
                                Amazon Polly to synthesize a list of visemes,
                                using the specified text and voice. A viseme
                                represents the visual position of the face and
                                mouth when saying part of a word.
        :return: The audio stream that contains the synthesized speech and a
                 list of visemes that are associated with the speech audio.
        """
        try:
            kwargs = {
                'Engine': engine, 'OutputFormat': audio_format, 'Text': text,
                'VoiceId': voice}
            if lang_code is not None:
                kwargs['LanguageCode'] = lang_code
            response = self.polly_client.synthesize_speech(**kwargs)
            audio_stream = response['AudioStream']
            logger.info("Got audio stream spoken by %s.", voice)
            visemes = None
            if include_visemes:
                kwargs['OutputFormat'] = 'json'
                kwargs['SpeechMarkTypes'] = ['viseme']
                response = self.polly_client.synthesize_speech(**kwargs)
                visemes = [
                    json.loads(v) for v in
                    response['AudioStream'].read().decode().split() if v]
                logger.info("Got %s visemes.", len(visemes))
        except ClientError:
            logger.exception("Couldn't get audio stream.")
            raise
        else:
            return audio_stream, visemes
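The wrapper above is an excerpt, so it doesn't show how the class is constructed or called. The following is a minimal usage sketch, not part of the official example: it assumes Boto3 can find AWS credentials and a default Region, and that the class is importable from a module named polly_wrapper (a hypothetical name chosen for illustration).

import boto3

# Hypothetical module name; place the PollyWrapper class wherever suits your project.
from polly_wrapper import PollyWrapper

# The wrapper takes a Polly client and an S3 resource (the S3 resource is used
# by other methods of the full example, not by synthesize itself).
polly_wrapper = PollyWrapper(boto3.client('polly'), boto3.resource('s3'))

# Request MP3 audio plus viseme speech marks for a short sentence.
audio_stream, visemes = polly_wrapper.synthesize(
    text='Hello from Amazon Polly.',
    engine='neural',
    voice='Joanna',
    audio_format='mp3',
    include_visemes=True)

# The audio stream is a botocore StreamingBody; write it to a local MP3 file.
with open('hello.mp3', 'wb') as audio_file:
    audio_file.write(audio_stream.read())

print(f"Wrote hello.mp3 and received {len(visemes)} visemes.")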
  • For API details, see SynthesizeSpeech in the AWS SDK for Python (Boto3) API Reference.

Rust
SDK for Rust
Note

This documentation is for an SDK in a preview release. The SDK is subject to change and should not be used in production.

Note

There's more on GitHub. Find the complete example and learn how to set up and run it in the AWS Code Examples Repository.

async fn synthesize(client: &Client, filename: &str) -> Result<(), Error> {
    let content = fs::read_to_string(filename);

    let resp = client
        .synthesize_speech()
        .output_format(OutputFormat::Mp3)
        .text(content.unwrap())
        .voice_id(VoiceId::Joanna)
        .send()
        .await?;

    // Get MP3 data from response and save it
    let mut blob = resp
        .audio_stream
        .collect()
        .await
        .expect("failed to read data");

    let parts: Vec<&str> = filename.split('.').collect();
    let out_file = format!("{}{}", String::from(parts[0]), ".mp3");

    let mut file = tokio::fs::File::create(out_file)
        .await
        .expect("failed to create file");

    file.write_all_buf(&mut blob)
        .await
        .expect("failed to write to file");

    Ok(())
}
  • For API details, see SynthesizeSpeech in the AWS SDK for Rust API reference.