NLU essentials
Natural language understanding (NLU) is a key component of a rich conversational voice experience for your end users. NLUaaS (NLU as a service) uses engines hosted by Nuance and is accessible through a single gRPC interface.
More info: Just want to get started? If you are already familiar with these concepts, or simply want to get started right away, you can jump to the Prerequisites from Mix.
How it works
NLUaaS accepts input from the user. This input can be text written by the user or the result of user speech transcribed into text by automatic speech recognition (ASR).
NLUaaS applies transformation rules to the text of the user input and formats the output for display or further processing.
It derives domain-specific meaning from text using natural language technology based on artificial intelligence (AI) and machine learning.
It interprets the text of the input with the aid of Nuance data packs and a semantic model created in Mix.nlu, returning a semantic interpretation.
Your client application can use this result to drive the next human-machine turn.
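As a rough sketch, a single interpretation turn from a Python client might look like the following. The stub and message names (RuntimeStub, InterpretRequest, ResourceReference, InterpretationInput), the server hostname, and the model URN are assumptions based on stubs generated from the NLUaaS proto files; check the gRPC setup documentation and your Mix project's runtime specs for the exact values.

```python
import grpc

# Assumed module names for stubs generated from the NLUaaS proto files
# (nuance/nlu/v1/runtime.proto); adjust to match your generated code.
from nuance.nlu.v1 import runtime_pb2, runtime_pb2_grpc

# OAuth access token obtained from the Mix authorization endpoint (not shown here)
token = "<access_token>"
call_creds = grpc.access_token_call_credentials(token)
channel_creds = grpc.composite_channel_credentials(
    grpc.ssl_channel_credentials(), call_creds)

# Placeholder hostname; use the NLUaaS endpoint for your geography.
with grpc.secure_channel("<nlu_server>:443", channel_creds) as channel:
    stub = runtime_pb2_grpc.RuntimeStub(channel)
    request = runtime_pb2.InterpretRequest(
        # Reference to the semantic model trained in Mix.nlu; the URN below is
        # illustrative -- copy the exact value from your project's runtime specs.
        model=runtime_pb2.ResourceReference(
            uri="urn:nuance-mix:tag:model/<context_tag>/mix.nlu?=language=eng-USA"),
        # Text input: typed by the user or transcribed from speech by ASR
        input=runtime_pb2.InterpretationInput(
            text="I'd like to order a dozen blueberry pies"),
    )
    response = stub.Interpret(request)
    print(response.result)
```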
Languages and locales
Your Mix project is defined for a set of languages and locales, and brings in Nuance data packs for these languages and locales. Your NLU model is trained on domain-specific data for the same languages and locales. At runtime, the NLU service can then interpret domain-specific user input in any of the languages and locales supported by your Mix project and model.
Intents and entities
NLUaaS interpretation results consist of one or more hypotheses of the meaning of the user input in relation to the specified NLU model. Each hypothesis contains intents and entities, along with NLU’s confidence in the hypothesis.
An intent is the overall meaning of the user input in a form an application can understand, for example PAY_BILL, PLACE_ORDER, BOOK_FLIGHT, or GET_INFO. See Interpretation results: Intents for some examples.
Entities (also known as concepts) define the meaning of individual words within the input. They represent categories of things that are important to the intent. For example, the PLACE_ORDER intent might have entities such as PRODUCT, FLAVOR, and QTY. Each entity contains a set of values. For example, the FLAVOR entity might have values such as chocolate, strawberry, blueberry, vanilla, and so on.
At runtime, NLU interprets the sentence I’d like to order a dozen blueberry pies as:
- Intent: PLACE_ORDER
- Entities:
- dozen = QTY: 12
- blueberry = FLAVOR: Blueberry
- pie = PRODUCT: Pie
List entities have specific values, while other types of entities have values defined by a grammar file and/or a regular expression. See Interpretation results: Entities for examples.
Intents and entities that can be interpreted by the semantic model are defined in Mix.nlu.
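As a sketch of how a client application might consume such a result, the snippet below routes a dialog turn on the highest-confidence hypothesis. The dictionary shape is a simplified, hypothetical stand-in for the actual NLUaaS result messages, which are documented under Interpretation results.

```python
# Hypothetical, simplified shape of an interpretation result, mirroring the
# pie-order example above; the real NLUaaS result messages are richer.
result = {
    "literal": "I'd like to order a dozen blueberry pies",
    "interpretations": [
        {
            "intent": "PLACE_ORDER",
            "confidence": 0.98,
            "entities": {
                "QTY": [{"literal": "dozen", "value": "12"}],
                "FLAVOR": [{"literal": "blueberry", "value": "Blueberry"}],
                "PRODUCT": [{"literal": "pie", "value": "Pie"}],
            },
        }
    ],
}

def handle_turn(result: dict) -> None:
    """Drive the next turn based on the highest-confidence hypothesis."""
    top = result["interpretations"][0]  # hypotheses ordered by confidence
    if top["intent"] == "PLACE_ORDER":
        qty = top["entities"]["QTY"][0]["value"]
        flavor = top["entities"]["FLAVOR"][0]["value"]
        product = top["entities"]["PRODUCT"][0]["value"]
        print(f"Ordering {qty} {flavor} {product}(s)")

handle_turn(result)
```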
Extending the model
For more flexibility, you can extend your semantic model with wordsets containing additional terms for dynamic list entities. For entities with a small number of terms (fewer than 100), an inline wordset can be included with the interpretation request at runtime. The wordset is compiled at runtime and used as a resource to improve interpretation. This is simple and convenient, but for large wordsets (hundreds of items or more), it can lead to latency issues.
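As an illustration, an inline wordset for a dynamic list entity can be built as a small JSON object keyed by the entity name. The entry fields shown here (a literal surface form plus a canonical value) are illustrative; the exact schema, and how the serialized wordset is attached to the interpretation request as an inline wordset resource, are described under Wordsets in the Related topics below.

```python
import json

# Illustrative inline wordset extending a dynamic list entity named FLAVOR
# with terms not present in the trained model.
flavor_wordset = {
    "FLAVOR": [
        {"literal": "key lime", "value": "KeyLime"},
        {"literal": "pumpkin spice", "value": "PumpkinSpice"},
    ]
}

# Serialized to JSON, the wordset is sent with the interpretation request
# alongside the semantic model reference.
inline_wordset_json = json.dumps(flavor_wordset)
print(inline_wordset_json)
```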
As an alternative to the inline approach for larger wordsets, wordset files can be uploaded to Mix ahead of time to be compiled using a separate Wordset API. The compiled wordsets are saved in Mix and can later be accessed at runtime as an external reference. For large wordsets, this can significantly reduce latency in interpretation requests. If in doubt about which approach to take, test the latency with inline wordsets.
Related topics
- Add intents and entities to your model
- Sample Python wordsets client
- Wordset app development
- Wordset service messages
- Wordsets