
Interface Asset - Speech Recognition

    The Speech Recognition interface asset uses the Microsoft Speech Platform to capture the spoken word. With it you can gather information (like a visitor's name) or use spoken phrases to trigger one or more actions that control your IntuiFace experience.


    • Speech Recognition is only available on Windows PCs.
    • Speech Recognition cannot be trained to recognize - or reject - specific voices. It accepts all voices equally.

    Supported languages

    This interface asset (IA) works with all of the languages supported by the Microsoft Speech Platform SDK 11. There are currently 26 languages on the list.

    Use the following link to download Runtime Languages for Version 11 of the Speech Platform Runtime: Microsoft Speech Platform Runtime Languages v11.0

    To specify the language to be detected, manually enter its Language-Country Code value (for example, en-US for English - United States) into the Language to detect property of the Speech Recognition IA.

    Expression patterns

    The Speech Recognition IA attempts to detect the pattern you specify in the "Expression" parameter of the "Expression is recognized" trigger. You can use three kinds of patterns:

    • An exact sentence which has to be spoken word for word.
      • For example: Play introduction video.
    • A sentence with a list of possible choices. Use square brackets to define the options, each surrounded by double quotes and separated by commas.
      • For example: Go to scene ["Introduction", "Main menu", "Conclusion"]
    • A sentence with a wildcard. Use an asterisk (*) to represent any sequence of one or more words.
      • For example: Hello my name is *
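    Conceptually, these three pattern kinds map onto simple regular expressions. The following Python sketch is an illustration only (it is not IntuiFace code): it converts each pattern kind into a regex and shows how the variable part of a choice or wildcard could be recovered.

```python
import re

def pattern_to_regex(expression):
    """Convert an Expression pattern into a compiled regular expression.

    Handles the three pattern kinds described above: exact sentences,
    ["choice", "lists"] in square brackets, and * wildcards. The choice
    or wildcard match is captured in a group, mirroring the "Variable
    part" parameter of the "Expression is recognized" trigger.
    """
    parts = []
    # Split the pattern into literal text, [ ... ] choice lists, and * wildcards
    for token in re.split(r'(\[[^\]]*\]|\*)', expression):
        if token.startswith('['):
            # Choice list: match any one of the quoted options
            options = re.findall(r'"([^"]*)"', token)
            parts.append('(' + '|'.join(re.escape(o) for o in options) + ')')
        elif token == '*':
            # Wildcard: match one or more words
            parts.append(r'(\S+(?:\s+\S+)*)')
        else:
            parts.append(re.escape(token))
    return re.compile('^' + ''.join(parts) + '$', re.IGNORECASE)

# Exact sentence, matched word for word (case-insensitive)
pattern_to_regex("Play introduction video").match("play introduction video")

# Choice list: group(1) holds the recognized option, e.g. "Main menu"
pattern_to_regex('Go to scene ["Introduction", "Main menu", "Conclusion"]') \
    .match("Go to scene Main menu")

# Wildcard: group(1) holds the words after the fixed text, e.g. "Ada Lovelace"
pattern_to_regex("Hello my name is *").match("Hello my name is Ada Lovelace")
```

    Note that the wildcard must match at least one word: "Hello my name is" alone would not satisfy the pattern "Hello my name is *".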

    Add the Speech Recognition IA to your experience in Composer and then drag-and-drop it into the active scene. A default template will be created, enabling you to test all three patterns.


    Properties, Triggers & Actions

    Properties

    • Language to detect: Specify the language code to use in your experience. See above.
    • Recognize navigation commands: Only available for English. If enabled, IntuiFace will try to recognize the following expressions and execute the corresponding actions:
      • Go to <scene name>
      • Go to next
      • Go to previous
    • Detect speech: Has a value of 'true' if the Speech Recognition IA is actively listening for expressions.

    The following properties are read-only and thus only accessible via binding.

    • Encountered an error: True / False value specifying whether the Speech Recognition IA encountered an error.
    • Error message: The description of the error. For troubleshooting, keep in mind that the typical causes are a missing language package or microphone issues.
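    The navigation commands listed above amount to a tiny fixed grammar. As a hedged sketch of how such phrases could be parsed (the scene_names list and the returned command tuples are hypothetical, not part of the IA):

```python
def parse_navigation_command(phrase, scene_names):
    """Map an English navigation phrase to a (command, argument) pair.

    Mirrors the expressions listed above:
      "Go to next"         -> ("next", None)
      "Go to previous"     -> ("previous", None)
      "Go to <scene name>" -> ("goto", <scene name>)
    Returns None when the phrase is not a navigation command.
    """
    stripped = phrase.strip()
    if not stripped.lower().startswith("go to "):
        return None
    target = stripped[6:]                 # text after "Go to "
    if target.lower() == "next":
        return ("next", None)
    if target.lower() == "previous":
        return ("previous", None)
    for name in scene_names:              # match against known scene names
        if target.lower() == name.lower():
            return ("goto", name)
    return None

# Example usage with hypothetical scene names
scenes = ["Introduction", "Main menu", "Conclusion"]
parse_navigation_command("Go to Main menu", scenes)   # -> ("goto", "Main menu")
parse_navigation_command("Go to next", scenes)        # -> ("next", None)
```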


    Triggers

    • Expression is recognized: Raised when the specified expression is recognized. See section above for the list of supported patterns. You can create as many of these triggers as you have expressions to recognize. The following read-only parameters are available to actions through binding:
      • Expression: the complete recognized expression.
      • Recognition confidence: a value between 0 and 1 (e.g. 0.85 means 85% confidence).
      • Variable part: If the trigger expression included a choice or wildcard pattern, this parameter will contain the recognized variable part.
    • Is in progress: Raised when speech is detected. The following read-only parameters are available to actions through binding:
      • Detected speech: The speech currently recognized.
      • Recognition confidence: a value between 0 and 1 (e.g. 0.85 means 85% confidence).
    • Is started: Raised when the "Start recognition" action is called.
    • Is stopped: Raised when the "Stop recognition" action is called.
    • No expression is recognized: Raised when speech is detected but none of the expressions specified by your "Expression is recognized" triggers is recognized. The following read-only parameter is available to actions through binding:
      • Detected speech: The words detected.
    • Error is encountered: Raised when an error is detected. The following read-only parameter is available to actions through binding:
      • Error Message: the description of the error.
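    The trigger flow described above can be pictured as a small dispatcher: detected speech is checked against each registered expression, and if none matches, the "No expression is recognized" path fires instead. This Python sketch is a hypothetical illustration of that flow (the handler mapping and confidence threshold are assumptions, not IntuiFace internals):

```python
def dispatch(detected_speech, confidence, handlers, on_no_match, threshold=0.5):
    """Simulate the trigger flow described above.

    handlers: dict mapping an exact expression to a callback, standing in
    for the "Expression is recognized" triggers you define.
    on_no_match: callback standing in for "No expression is recognized".
    threshold: hypothetical minimum confidence below which nothing fires.
    """
    if confidence < threshold:
        return                               # too uncertain: fire nothing
    handler = handlers.get(detected_speech.lower())
    if handler:
        handler(detected_speech, confidence)  # "Expression is recognized"
    else:
        on_no_match(detected_speech)          # "No expression is recognized"
```

    In your own experience you would of course wire these paths through triggers in Composer rather than code; the sketch only makes the branching explicit.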


    Actions

    • Start recognition: Begin listening for expressions specified by all "Expression is recognized" triggers.
    • Stop recognition: Stop listening for expressions.


    Usage sample

    You can download a sample illustrating the Speech Recognition Interface Asset in action from our Marketplace.