
Using the Speech Recognition Sample Level

Before you can use the Speech Recognition Sample Level, you must prepare the necessary Cloud Canvas resources. The sample level for the Cloud Gem Speech Recognition is located in the Lumberyard \dev\CloudGemSamples\Levels\SpeechToTextSample directory. The name of the sample level is SpeechToTextSample.

Preparing the Sample Level

To prepare the sample level, you use the Cloud Gem Portal to import the sample Amazon Lex bot that is included with Lumberyard. For more information about bots, see Bots, Intents, Slots, and Elicitations.

To prepare the sample level

  1. In the Lumberyard Project Configurator, select the CloudGemSamples project as the default project. If you want to use a different project, ensure that the Cloud Gem Speech Recognition and Cloud Gem Text-to-Speech gems are selected, and then rebuild your project.

  2. In Lumberyard Editor, choose AWS, Cloud Canvas, Resource Manager.

  3. Click Upload all resources and follow the prompts to create your project stack, deploy the required resources, and create the Cloud Gem Portal in an AWS region that Amazon Lex supports. These operations might take some time to complete.

  4. In Lumberyard Editor, choose AWS, Open Cloud Gem Portal.

  5. In the Launching the Cloud Gem Portal dialog box, copy your temporary credentials.

  6. In your web browser, use your temporary password to sign in to the Cloud Gem Portal, and then change your password when prompted.

  7. In the Cloud Gem Portal, choose Speech Recognition.

    
    (Image: Choose Speech Recognition)
  8. In the Cloud Gem Portal, click Create Bot.

    
    (Image: Click Create Bot)

    In the preview version of the gem, this feature imports Amazon Lex bot files, which are in .json format.

  9. From the file explorer, select the following file from your Lumberyard installation: dev\CloudGemSamples\Levels\SpeechToTextSample\lex_test.json.

    When the import is finished, LYTestBot appears in the list of bots.

    
    (Image: LYTestBot imported)

    The Status column shows BUILDING and then changes to READY when processing is completed. At this point, the sample level is ready to use.
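
If you prefer to watch the bot's build status from a script instead of refreshing the Cloud Gem Portal, you can poll Amazon Lex directly. The following is a minimal sketch using the AWS SDK for Python (boto3); it assumes your credentials target the same Lex-supported region as your deployment and that the imported bot kept the LYTestBot name shown in the portal.

    import time

    import boto3

    # Assumes credentials and region match the Cloud Canvas deployment.
    lex_models = boto3.client("lex-models")

    def wait_until_ready(bot_name="LYTestBot", poll_seconds=10):
        """Poll the bot's build status until Amazon Lex reports READY."""
        while True:
            status = lex_models.get_bot(name=bot_name, versionOrAlias="$LATEST")["status"]
            print("Bot status:", status)
            if status in ("READY", "READY_BASIC_TESTING"):
                return
            if status == "FAILED":
                raise RuntimeError("Bot build failed; check the Cloud Gem Portal for details.")
            time.sleep(poll_seconds)

    wait_until_ready()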

Trying the Speech Recognition Sample

The sample level uses a mini map of a simple multiplayer online battle arena (MOBA) game. In a MOBA game, a team member might want to ask for help or warn another team member about the presence of an opponent in a particular location.

The sample level uses the RequestHelp and Ping intents and the MapLocation slot to implement this functionality. The intents and slot are specified in the lex_test.json file that you imported.
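
To see how an utterance maps to these intents and slots, you can send text to the imported bot yourself. The sketch below uses the AWS SDK for Python (boto3) to post a phrase and print what Amazon Lex recognizes. The bot name (LYTestBot), the $LATEST test alias, and the user ID are assumptions for illustration; in the game, the Cloud Gem Speech Recognition gem makes the equivalent runtime calls for you.

    import boto3

    lex_runtime = boto3.client("lex-runtime")

    # LYTestBot and the $LATEST test alias are assumptions for illustration;
    # substitute a published alias if your deployment uses one.
    response = lex_runtime.post_text(
        botName="LYTestBot",
        botAlias="$LATEST",
        userId="speech-to-text-sample-test",
        inputText="Ping the middle lane",
    )

    # For the sample's Ping intent, the MapLocation slot should resolve to "middle".
    print("Intent:", response.get("intentName"))
    print("Slots:", response.get("slots"))
    print("Bot message:", response.get("message"))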

To try the speech recognition sample level

  1. In Lumberyard Editor, close the Cloud Canvas Resource Manager.

  2. Choose File, Open, Levels, SpeechToTextSample.

  3. Click Play Game or press Ctrl+G.

    The MOBA mini map appears.

    
    (Image: MOBA map)
  4. Click and hold the Hold To Talk icon and ask for help. Because the sample uses Amazon Lex, your spoken request must include the word "help" but does not require any particular phrasing. Release the mouse button when you are finished speaking. The built-in voice conveys your request by saying "I need some help here."

    Note

    The preview version of the speech recognition gem does not have a wake word.

  5. Click and hold the Hold To Talk icon and say "ping" to warn about a danger in a location on the map. In your spoken request, include one of the words "top," "middle," or "bottom" to specify a location on the map. Your phrasing does not have to be exact. Release the mouse button when you are finished speaking.

    
    (Image: Hold the button to talk)

    If you said "Ping the middle lane," the middle of the map is highlighted with a target-like animation, and the voice says "Watch the middle lane." The box in the upper left contains a transcript of the speech input and the intents and slots that were recognized.

  6. Try other locations, including "me" and "myself." Using natural speech, vary your phrasing to confirm that your wording does not have to be exact.

  7. Click and hold the Hold To Talk icon again and say "ping" without specifying a location. You are asked "Where should I ping?" In your follow-up response, include one of the words "top," "middle," or "bottom." The level responds with the animation and voice as before.

  8. To test the intents without using a microphone, type a phrase in the text box on the bottom left and then click Send Debug Text or press Enter.

    
    (Image: Send debug text)

    As before, if you type only "ping" without specifying a location, you are prompted for one. A scripted version of this follow-up exchange appears after this procedure.

    
    (Image: Follow-up question)
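
The follow-up question in steps 7 and 8 comes from Amazon Lex eliciting the missing MapLocation slot. The sketch below reproduces that exchange with two text calls, again using boto3 and the assumed LYTestBot name and $LATEST test alias: the first call sends "ping" with no location and returns an ElicitSlot dialog state, and the second call answers the follow-up so the intent can complete.

    import boto3

    lex_runtime = boto3.client("lex-runtime")
    user_id = "speech-to-text-sample-test"   # the same userId keeps both turns in one conversation

    def say(text):
        """Send one line of text to the bot and return the Lex response."""
        return lex_runtime.post_text(
            botName="LYTestBot",      # assumed bot name from the import
            botAlias="$LATEST",       # assumed test alias
            userId=user_id,
            inputText=text,
        )

    # First turn: "ping" with no location. Lex cannot fill the MapLocation slot,
    # so it asks a follow-up question and reports dialogState "ElicitSlot".
    first = say("ping")
    print(first.get("dialogState"), "-", first.get("message"))

    # Second turn: answer the follow-up. The slot is filled and the intent completes.
    second = say("middle")
    print(second.get("intentName"), second.get("slots"), second.get("dialogState"))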

To learn more about the Cloud Gem Portal for the Speech Recognition cloud gem, see Speech Recognition Cloud Gem Portal (Preview).