Training an SLM
Once your training file is ready, use the sgc compiler to generate an SLM grammar from the training data. The basic command is:
> sgc -train TrainingFile.xml
Where TrainingFile.xml is the name of the XML-format training file.
This command produces a compiled binary grammar file (in the example, TrainingFile.gram). With no parser to assign meanings, this grammar will return only the literal text recognized from user input.
When the training file contains parameters that write .fsm and wordlist output (see below), add the -no_gram option to suppress the default binary grammar output:
> sgc -train TrainingFile.xml -no_gram
The following options can also be specified with the sgc command:
-train <file>
Specifies a file that contains SLM training data. This option requires a training file, not an SRGS speech grammar. The resulting output is a binary grammar that is a simple loop over all vocabulary words in the training file. Cannot be used with -load_arpa.

-load_arpa <file>
Specifies a file that contains SLM training data written in ARPA format. When this option is specified, an input grammar is not allowed. The resulting binary grammar is a simple loop over all vocabulary words in the training file. Cannot be used with -train. The Recognizer uses the Katz backoff formula: if an n-gram does not exist in the language model, the Recognizer uses the (n-1)-gram likelihood with its backoff weight. (The formula is written out after this table.)

-language <lang>
This option is available with -load_arpa. It specifies the recognition language to use for the trained model. Use this option when reading an ARPA file that contains no default language.

-langver <versions>
Specifies the language versions to be used during the compile. For example:
-langver en.us 9.0.0
-langver en.us 9.0.0,fr.ca 10.0.0

-no_gram
This option is available with -train. It suppresses output of the binary grammar file; use it when configuration parameters inside the training file write FSM and wordlist output. See below.

-test <file>
Specifies an input file of test sentences. The compiler reports perplexity measurements for each sentence provided. See Tuning perplexity.
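For reference, the Katz backoff rule mentioned under -load_arpa can be written as follows. This is the standard textbook formulation, not product-specific notation:

P(w_i \mid w_{i-n+1}, \ldots, w_{i-1}) =
\begin{cases}
P^{*}(w_i \mid w_{i-n+1}, \ldots, w_{i-1}) & \text{if the } n\text{-gram is in the model} \\
\alpha(w_{i-n+1}, \ldots, w_{i-1}) \, P(w_i \mid w_{i-n+2}, \ldots, w_{i-1}) & \text{otherwise}
\end{cases}

Here P* is the discounted n-gram probability stored in the ARPA file, and alpha is the backoff weight stored with the (n-1)-gram context.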
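For example, to train a model and report perplexity on a held-out test set in the same run, you might combine -train and -test like this (both file names are placeholders):

> sgc -train TrainingFile.xml -test TestSentences.txt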
Configuration parameters inside the training file determine other output files, such as the .fsm and wordlist files used by the wrapper grammars below.
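This section does not show those training-file parameters themselves. As a rough sketch only, a training file that writes .fsm and wordlist output might look like the following; the element names and the reuse of the swirec_fsm_grammar and swirec_fsm_wordlist parameter names are assumptions, not confirmed syntax:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative sketch only; element and parameter names are assumed. -->
<training xml:lang="en-US" version="1.0">
  <!-- Assumed parameters that write the FSM and wordlist files
       (making -no_gram appropriate on the sgc command line) -->
  <param name="swirec_fsm_grammar"><value>myslm.fsm</value></param>
  <param name="swirec_fsm_wordlist"><value>myslm.wordlist</value></param>
  <!-- Training sentences -->
  <sentence>i want to pay my bill</sentence>
  <sentence>check my balance</sentence>
</training>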
Using SLMs in applications (wrapper grammars)
Any grammar that includes a statistical model is informally known as a wrapper grammar. The wrapper can be an SRGS grammar, but it is typically a robust parsing grammar or an SSM wrapper.
To use an SLM as a component inside a grammar, use the <meta> element in the grammar header to include the .fsm and wordlist files. Once you’ve created a wrapper, you can compile it, test it with the parseTool utility, and use it in a runtime application. An example of a wrapper grammar appears below:
<?xml version="1.0" encoding="UTF-8"?>
<grammar xml:lang="en-US" version="1.0"
xmlns="http://www.w3.org/2001/06/grammar"
mode="voice" root="product">
<meta name="swirec_fsm_grammar" content="myslm.fsm"/>
<meta name="swirec_fsm_wordlist" content="myslm.wordlist"/>
<meta name="swirec_acoustic_adapt_suppress_adaptation"
content="0 0 1 "/>
<rule id="product">
...
</rule>
</grammar>
The wrapper suppresses acoustic adaptation. Nuance recommends suppression because the per-utterance error rate is typically higher in an SLM context than with other grammars.
In the example, the "product" rule is used only for the semantic interpretation of result sentences generated by the SLM in myslm.fsm. The SLM controls all language decoding; the root rule then determines semantics.
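To exercise the wrapper end to end, compile it and run test sentences through parseTool. The sequence below is a sketch: the file names are placeholders, and invoking sgc directly on the wrapper source is an assumption about the usual compile step rather than syntax documented in this section:

> sgc wrapper.grxml
> parseTool wrapper.gram -test_sentences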
The following grammar has a root rule that uses SRGS to assign meaning:
<?xml version="1.0" encoding="UTF-8"?>
<grammar xml:lang="en-US" version="1.0"
xmlns="http://www.w3.org/2001/06/grammar"
mode="voice" root="dummy">
<meta name="swirec_fsm_grammar" content="myslm.fsm"/>
<meta name="swirec_fsm_wordlist" content="myslm.wordlist"/>
<rule id="dummy">
my bill <tag> foo="bar" </tag>
</rule>
</grammar>
The following grammar has no root rule. It contains a <meta> element to set the parser to NULL. This grammar returns the literal text recognized.
<?xml version="1.0" encoding="UTF-8"?>
<grammar xml:lang="en-US" version="1.0"
xmlns="http://www.w3.org/2001/06/grammar" mode="voice">
<meta name="swirec_fsm_parser" content="NULL"/>
<meta name="swirec_fsm_grammar" content="myslm.fsm"/>
<meta name="swirec_fsm_wordlist" content="myslm.wordlist"/>
</grammar>
Testing an SLM
After training and wrapping the SLM, run parseTool with the -test_sentences and -compute_lm options to see how the compiled grammar operates. As you enter sentences, the relative difference in scores between sentences matters more than the actual numeric values. A good model returns higher language model scores for sentences you expect to be more common. The output below is reformatted (and some lines removed) for clarity:
> parseTool script_sample.gram -t_s -c_l
next sentence: apple pie