Synthesize speech from text using Amazon Polly with an AWS SDK - AWS SDK Code Examples

There are more AWS SDK examples available in the AWS Doc SDK Examples repository on GitHub.


Synthesize speech from text using Amazon Polly with an AWS SDK

The following code examples show how to synthesize speech from text using Amazon Polly.
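
If you want a quick end-to-end picture before the language-specific examples, the following minimal Python (Boto3) sketch shows the flow that every example below follows: call the SynthesizeSpeech API with the text, a voice ID, and an output format, then save the returned audio stream. The sample text and output file name are placeholders, and the sketch assumes your AWS credentials and Region are already configured.

import boto3

# Create an Amazon Polly client; credentials and Region are assumed to come
# from your environment or shared AWS configuration.
polly = boto3.client("polly")

# Ask Amazon Polly to synthesize a short sample of text as MP3 audio.
response = polly.synthesize_speech(
    Text="Hello from Amazon Polly.",  # placeholder text
    VoiceId="Joanna",
    OutputFormat="mp3",
)

# The audio is returned as a streaming body; write it to a local file.
with open("hello.mp3", "wb") as audio_file:  # hypothetical output path
    audio_file.write(response["AudioStream"].read())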

.NET
AWS SDK for .NET
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

using System;
using System.IO;
using System.Threading.Tasks;
using Amazon.Polly;
using Amazon.Polly.Model;

public class SynthesizeSpeech
{
    public static async Task Main()
    {
        string outputFileName = "speech.mp3";
        string text = "Twas brillig, and the slithy toves did gyre and gimbol in the wabe";

        var client = new AmazonPollyClient();
        var response = await PollySynthesizeSpeech(client, text);

        WriteSpeechToStream(response.AudioStream, outputFileName);
    }

    /// <summary>
    /// Calls the Amazon Polly SynthesizeSpeechAsync method to convert text
    /// to speech.
    /// </summary>
    /// <param name="client">The Amazon Polly client object used to connect
    /// to the Amazon Polly service.</param>
    /// <param name="text">The text to convert to speech.</param>
    /// <returns>A SynthesizeSpeechResponse object that includes an AudioStream
    /// object with the converted text.</returns>
    private static async Task<SynthesizeSpeechResponse> PollySynthesizeSpeech(IAmazonPolly client, string text)
    {
        var synthesizeSpeechRequest = new SynthesizeSpeechRequest()
        {
            OutputFormat = OutputFormat.Mp3,
            VoiceId = VoiceId.Joanna,
            Text = text,
        };

        var synthesizeSpeechResponse =
            await client.SynthesizeSpeechAsync(synthesizeSpeechRequest);

        return synthesizeSpeechResponse;
    }

    /// <summary>
    /// Writes the AudioStream returned from the call to
    /// SynthesizeSpeechAsync to a file in MP3 format.
    /// </summary>
    /// <param name="audioStream">The AudioStream returned from the
    /// call to the SynthesizeSpeechAsync method.</param>
    /// <param name="outputFileName">The full path to the file in which to
    /// save the audio stream.</param>
    private static void WriteSpeechToStream(Stream audioStream, string outputFileName)
    {
        var outputStream = new FileStream(
            outputFileName,
            FileMode.Create,
            FileAccess.Write);
        byte[] buffer = new byte[2 * 1024];
        int readBytes;

        while ((readBytes = audioStream.Read(buffer, 0, 2 * 1024)) > 0)
        {
            outputStream.Write(buffer, 0, readBytes);
        }

        // Flushes the buffer to avoid losing the last second or so of
        // the synthesized text.
        outputStream.Flush();
        Console.WriteLine($"Saved {outputFileName} to disk.");
    }
}

Synthesize speech from text with speech marks using Amazon Polly and an AWS SDK.

using System;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using Amazon.Polly;
using Amazon.Polly.Model;

public class SynthesizeSpeechMarks
{
    public static async Task Main()
    {
        var client = new AmazonPollyClient();
        string outputFileName = "speechMarks.json";

        var synthesizeSpeechRequest = new SynthesizeSpeechRequest()
        {
            OutputFormat = OutputFormat.Json,
            SpeechMarkTypes = new List<string>
            {
                SpeechMarkType.Viseme,
                SpeechMarkType.Word,
            },
            VoiceId = VoiceId.Joanna,
            Text = "This is a sample text to be synthesized.",
        };

        try
        {
            using (var outputStream = new FileStream(outputFileName, FileMode.Create, FileAccess.Write))
            {
                var synthesizeSpeechResponse = await client.SynthesizeSpeechAsync(synthesizeSpeechRequest);
                var buffer = new byte[2 * 1024];
                int readBytes;

                var inputStream = synthesizeSpeechResponse.AudioStream;
                while ((readBytes = inputStream.Read(buffer, 0, 2 * 1024)) > 0)
                {
                    outputStream.Write(buffer, 0, readBytes);
                }
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine($"Error: {ex.Message}");
        }
    }
}
  • For API details, see SynthesizeSpeech in the AWS SDK for .NET API Reference.
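
The speech-mark request pattern in the preceding example is not specific to the .NET SDK. For comparison, a minimal Python (Boto3) sketch of the same request might look like the following; when speech marks are requested, the output format must be json, and the returned stream contains one JSON object per line. The sample text is the same placeholder used above.

import json

import boto3

polly = boto3.client("polly")

# Request word and viseme speech marks instead of audio. When speech marks
# are requested, OutputFormat must be "json".
response = polly.synthesize_speech(
    Text="This is a sample text to be synthesized.",
    VoiceId="Joanna",
    OutputFormat="json",
    SpeechMarkTypes=["word", "viseme"],
)

# The stream contains newline-delimited JSON, one speech mark per line.
marks = [
    json.loads(line)
    for line in response["AudioStream"].read().decode().splitlines()
    if line
]
print(f"Received {len(marks)} speech marks.")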

Java
SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import javazoom.jl.decoder.JavaLayerException;
import software.amazon.awssdk.core.ResponseInputStream;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.polly.PollyClient;
import software.amazon.awssdk.services.polly.model.DescribeVoicesRequest;
import software.amazon.awssdk.services.polly.model.Voice;
import software.amazon.awssdk.services.polly.model.DescribeVoicesResponse;
import software.amazon.awssdk.services.polly.model.OutputFormat;
import software.amazon.awssdk.services.polly.model.PollyException;
import software.amazon.awssdk.services.polly.model.SynthesizeSpeechRequest;
import software.amazon.awssdk.services.polly.model.SynthesizeSpeechResponse;
import java.io.IOException;
import java.io.InputStream;
import javazoom.jl.player.advanced.AdvancedPlayer;
import javazoom.jl.player.advanced.PlaybackEvent;
import javazoom.jl.player.advanced.PlaybackListener;

/**
 * Before running this Java V2 code example, set up your development
 * environment, including your credentials.
 *
 * For more information, see the following documentation topic:
 *
 * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
 */
public class PollyDemo {
    private static final String SAMPLE = "Congratulations. You have successfully built this working demo "
            + " of Amazon Polly in Java Version 2. Have fun building voice enabled apps with Amazon Polly (that's me!), and always "
            + " look at the AWS website for tips and tricks on using Amazon Polly and other great services from AWS";

    public static void main(String args[]) {
        PollyClient polly = PollyClient.builder()
                .region(Region.US_WEST_2)
                .build();

        talkPolly(polly);
        polly.close();
    }

    public static void talkPolly(PollyClient polly) {
        try {
            DescribeVoicesRequest describeVoiceRequest = DescribeVoicesRequest.builder()
                    .engine("standard")
                    .build();

            DescribeVoicesResponse describeVoicesResult = polly.describeVoices(describeVoiceRequest);
            Voice voice = describeVoicesResult.voices().stream()
                    .filter(v -> v.name().equals("Joanna"))
                    .findFirst()
                    .orElseThrow(() -> new RuntimeException("Voice not found"));

            InputStream stream = synthesize(polly, SAMPLE, voice, OutputFormat.MP3);
            AdvancedPlayer player = new AdvancedPlayer(stream,
                    javazoom.jl.player.FactoryRegistry.systemRegistry().createAudioDevice());

            player.setPlayBackListener(new PlaybackListener() {
                public void playbackStarted(PlaybackEvent evt) {
                    System.out.println("Playback started");
                    System.out.println(SAMPLE);
                }

                public void playbackFinished(PlaybackEvent evt) {
                    System.out.println("Playback finished");
                }
            });

            // play it!
            player.play();

        } catch (PollyException | JavaLayerException | IOException e) {
            System.err.println(e.getMessage());
            System.exit(1);
        }
    }

    public static InputStream synthesize(PollyClient polly, String text, Voice voice, OutputFormat format)
            throws IOException {
        SynthesizeSpeechRequest synthReq = SynthesizeSpeechRequest.builder()
                .text(text)
                .voiceId(voice.id())
                .outputFormat(format)
                .build();

        ResponseInputStream<SynthesizeSpeechResponse> synthRes = polly.synthesizeSpeech(synthReq);
        return synthRes;
    }
}
  • For API details, see SynthesizeSpeech in the AWS SDK for Java 2.x API Reference.

Python
SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import json
import logging

from botocore.exceptions import ClientError

logger = logging.getLogger(__name__)


class PollyWrapper:
    """Encapsulates Amazon Polly functions."""

    def __init__(self, polly_client, s3_resource):
        """
        :param polly_client: A Boto3 Amazon Polly client.
        :param s3_resource: A Boto3 Amazon Simple Storage Service (Amazon S3)
                            resource.
        """
        self.polly_client = polly_client
        self.s3_resource = s3_resource
        self.voice_metadata = None

    def synthesize(
        self, text, engine, voice, audio_format, lang_code=None, include_visemes=False
    ):
        """
        Synthesizes speech or speech marks from text, using the specified voice.

        :param text: The text to synthesize.
        :param engine: The kind of engine used. Can be standard or neural.
        :param voice: The ID of the voice to use.
        :param audio_format: The audio format to return for synthesized speech. When
                             speech marks are synthesized, the output format is JSON.
        :param lang_code: The language code of the voice to use. This has an effect
                          only when a bilingual voice is selected.
        :param include_visemes: When True, a second request is made to Amazon Polly
                                to synthesize a list of visemes, using the specified
                                text and voice. A viseme represents the visual position
                                of the face and mouth when saying part of a word.
        :return: The audio stream that contains the synthesized speech and a list
                 of visemes that are associated with the speech audio.
        """
        try:
            kwargs = {
                "Engine": engine,
                "OutputFormat": audio_format,
                "Text": text,
                "VoiceId": voice,
            }
            if lang_code is not None:
                kwargs["LanguageCode"] = lang_code
            response = self.polly_client.synthesize_speech(**kwargs)
            audio_stream = response["AudioStream"]
            logger.info("Got audio stream spoken by %s.", voice)
            visemes = None
            if include_visemes:
                kwargs["OutputFormat"] = "json"
                kwargs["SpeechMarkTypes"] = ["viseme"]
                response = self.polly_client.synthesize_speech(**kwargs)
                visemes = [
                    json.loads(v)
                    for v in response["AudioStream"].read().decode().split()
                    if v
                ]
                logger.info("Got %s visemes.", len(visemes))
        except ClientError:
            logger.exception("Couldn't get audio stream.")
            raise
        else:
            return audio_stream, visemes
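
The wrapper above expects the caller to supply a Boto3 Polly client and an S3 resource (the S3 resource is stored for other operations in the full example and is not used by synthesize). A minimal usage sketch, assuming default credentials and Region, might look like this; the sample text and output file name are placeholders.

import boto3

# Both service objects below are assumptions of this sketch.
polly_wrapper = PollyWrapper(boto3.client("polly"), boto3.resource("s3"))

# Synthesize MP3 audio plus viseme speech marks in a single call.
audio_stream, visemes = polly_wrapper.synthesize(
    text="Hello from the PollyWrapper sketch.",  # placeholder text
    engine="standard",
    voice="Joanna",
    audio_format="mp3",
    include_visemes=True,
)

with open("wrapper_output.mp3", "wb") as audio_file:  # hypothetical output path
    audio_file.write(audio_stream.read())
print(f"Wrote audio and received {len(visemes)} visemes.")
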
  • For API details, see SynthesizeSpeech in the AWS SDK for Python (Boto3) API Reference.

Ruby
SDK for Ruby
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

require "aws-sdk-polly" # In v2: require 'aws-sdk' begin # Get the filename from the command line if ARGV.empty? puts "You must supply a filename" exit 1 end filename = ARGV[0] # Open file and get the contents as a string if File.exist?(filename) contents = IO.read(filename) else puts "No such file: " + filename exit 1 end # Create an Amazon Polly client using # credentials from the shared credentials file ~/.aws/credentials # and the configuration (region) from the shared configuration file ~/.aws/config polly = Aws::Polly::Client.new resp = polly.synthesize_speech({ output_format: "mp3", text: contents, voice_id: "Joanna", }) # Save output # Get just the file name # abc/xyz.txt -> xyx.txt name = File.basename(filename) # Split up name so we get just the xyz part parts = name.split(".") first_part = parts[0] mp3_file = first_part + ".mp3" IO.copy_stream(resp.audio_stream, mp3_file) puts "Wrote MP3 content to: " + mp3_file rescue StandardError => ex puts "Got error:" puts "Error message:" puts ex.message end
  • For API details, see SynthesizeSpeech in the AWS SDK for Ruby API Reference.

Rust
SDK for Rust
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

async fn synthesize(client: &Client, filename: &str) -> Result<(), Error> {
    let content = fs::read_to_string(filename);

    let resp = client
        .synthesize_speech()
        .output_format(OutputFormat::Mp3)
        .text(content.unwrap())
        .voice_id(VoiceId::Joanna)
        .send()
        .await?;

    // Get MP3 data from response and save it
    let mut blob = resp
        .audio_stream
        .collect()
        .await
        .expect("failed to read data");

    let parts: Vec<&str> = filename.split('.').collect();
    let out_file = format!("{}{}", String::from(parts[0]), ".mp3");

    let mut file = tokio::fs::File::create(out_file)
        .await
        .expect("failed to create file");

    file.write_all_buf(&mut blob)
        .await
        .expect("failed to write to file");

    Ok(())
}
  • For API details, see SynthesizeSpeech in the AWS SDK for Rust API reference.