author    oej <oej@f38db490-d61c-443f-a65b-d21fe96a405b>  2006-04-13 07:08:44 +0000
committer oej <oej@f38db490-d61c-443f-a65b-d21fe96a405b>  2006-04-13 07:08:44 +0000
commit    ebd4a3762503e7a5ba17188969edf20176713ab5 (patch)
tree      c85f19d2098dadfbbc5b82e5614e6e1646dddbab /doc
parent    44688449adeef86455b89c4a125c7bb91a62f60a (diff)
Formatting fixes
git-svn-id: http://svn.digium.com/svn/asterisk/trunk@19703 f38db490-d61c-443f-a65b-d21fe96a405b
Diffstat (limited to 'doc')
-rw-r--r--  doc/speechrec.txt  297
1 file changed, 173 insertions(+), 124 deletions(-)
diff --git a/doc/speechrec.txt b/doc/speechrec.txt
index 19fff17c0..1c1b04cba 100644
--- a/doc/speechrec.txt
+++ b/doc/speechrec.txt
@@ -1,78 +1,116 @@
-Generic Speech Recognition API
+The Asterisk Speech Recognition API
+===================================
-*** NOTE: To use the API, you must load the res_speech.so module before any connectors. For your convenience, there is a preload line commented out in the modules.conf sample file. ***
+The generic speech recognition engine is implemented in the res_speech.so module.
+This module connects through the API to speech recognition software that is
+not included in the module.
-Dialplan Applications:
+To use the API, you must load the res_speech.so module before any connectors.
+For your convenience, there is a preload line commented out in the modules.conf
+sample file.
-The dialplan API is based around a single speech utilities application file, which exports many applications to be used for speech recognition. These include an application to prepare for speech recognition, activate a grammar, and play back a sound file while waiting for the person to speak. Using a combination of these applications you can easily make a dialplan use speech recognition without worrying about what speech recognition engine is being used.
+* Dialplan Applications:
+------------------------
-SpeechCreate(Engine Name):
+The dialplan API is based around a single speech utilities application file,
+which exports many applications to be used for speech recognition. These include
+applications to prepare for speech recognition, activate a grammar, and play back a
+sound file while waiting for the person to speak. Using a combination of these applications
+you can easily make a dialplan use speech recognition without worrying about what
+speech recognition engine is being used.
-This application creates information to be used by all the other applications. It must be called before doing any speech recognition activities such as activating a grammar. It takes the engine name to use as the argument, if not specified the default engine will be used.
+- SpeechCreate(Engine Name):
-If an error occurs are you are not able to create an object, the variable ERROR will be set to 1. You can then exit your speech recognition specific context and play back an error message, or resort to a DTMF based IVR.
+This application creates information to be used by all the other applications.
+It must be called before doing any speech recognition activities such as activating a
+grammar. It takes the engine name to use as the argument; if not specified, the default
+engine will be used.
-SpeechLoadGrammar(Grammar Name|Path):
+If an error occurs and you are not able to create an object, the variable ERROR will be
+set to 1. You can then exit your speech recognition specific context and play back an
+error message, or resort to a DTMF-based IVR.
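A minimal sketch of that fallback (the context names and the main-menu grammar are hypothetical, not part of the API):

```
[speech-menu]
exten => s,1,SpeechCreate()
; If the engine could not be created, fall back to a DTMF-based menu
exten => s,2,GotoIf($["${ERROR}" = "1"]?dtmf-menu,s,1)
exten => s,3,SpeechActivateGrammar(main-menu)
```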
-Loads grammar locally on a channel. Note that the grammar is only available as long as the channel exists, and you must call SpeechUnloadGrammar before all is done or you may cause a memory leak. First argument is the grammar name that it will be loaded as and second argument is the path to the grammar.
+- SpeechLoadGrammar(Grammar Name|Path):
-SpeechUnloadGrammar(Grammar Name):
+Loads a grammar locally on a channel. Note that the grammar is only available as long as the
+channel exists, and you must call SpeechUnloadGrammar before you are done or you may cause a
+memory leak. The first argument is the name the grammar will be loaded as, and the second
+argument is the path to the grammar.
-Unloads a locally loaded grammar and frees any memory used by it. The only argument is the name of the grammar to unload.
+- SpeechUnloadGrammar(Grammar Name):
-SpeechActivateGrammar(Grammar Name):
+Unloads a locally loaded grammar and frees any memory used by it. The only argument is the
+name of the grammar to unload.
-This activates the specified grammar to be recognized by the engine. A grammar tells the speech recognition engine what to recognize, and how to portray it back to you in the dialplan. The grammar name is the only argument to this application.
+- SpeechActivateGrammar(Grammar Name):
-SpeechStart():
+This activates the specified grammar to be recognized by the engine. A grammar tells the
+speech recognition engine what to recognize, and how to portray it back to you in the
+dialplan. The grammar name is the only argument to this application.
-Tell the speech recognition engine that it should start trying to get results from audio being fed to it. This has no arguments.
+- SpeechStart():
-SpeechBackground(Sound File|Timeout):
+Tell the speech recognition engine that it should start trying to get results from audio
+being fed to it. This has no arguments.
-This application plays a sound file and waits for the person to speak. Once they start speaking playback of the file stops, and silence is heard. Once they stop talking the processing sound is played to indicate the speech recognition engine is working. Note it is possible to have more then one result. The first argument is the sound file and the second is the timeout. Note the timeout will only start once the sound file has stopped playing.
+- SpeechBackground(Sound File|Timeout):
-SpeechDeactivateGrammar(Grammar Name):
+This application plays a sound file and waits for the person to speak. Once they start
+speaking, playback of the file stops and silence is heard. Once they stop talking, the
+processing sound is played to indicate the speech recognition engine is working. Note that it
+is possible to have more than one result. The first argument is the sound file and the second
+is the timeout. Note that the timeout will only start once the sound file has stopped playing.
-This deactivates the specified grammar so that it is no longer recognized. The only argument is the grammar name to deactivate.
+- SpeechDeactivateGrammar(Grammar Name):
-SpeechProcessingSound(Sound File):
+This deactivates the specified grammar so that it is no longer recognized. The
+only argument is the grammar name to deactivate.
-This changes the processing sound that SpeechBackground plays back when the speech recognition engine is processing and working to get results. It takes the sound file as the only argument.
+- SpeechProcessingSound(Sound File):
-SpeechDestroy():
+This changes the processing sound that SpeechBackground plays back when the speech
+recognition engine is processing and working to get results. It takes the sound file as the
+only argument.
-This destroys the information used by all the other speech recognition applications. If you call this application but end up wanting to recognize more speech, you must call SpeechCreate again before calling any other application. It takes no arguments.
+- SpeechDestroy():
-Getting Result Information:
+This destroys the information used by all the other speech recognition applications.
+If you call this application but end up wanting to recognize more speech, you must call
+SpeechCreate again before calling any other application. It takes no arguments.
-The speech recognition utilities module exports several dialplan functions that you can use to examine results.
+* Getting Result Information:
+-----------------------------
-${SPEECH(status)}:
+The speech recognition utilities module exports several dialplan functions that you can use to
+examine results.
-Returns 1 if SpeechCreate has been called. This uses the same check that applications do to see if a speech object is setup. If it returns 0 then you know you can not use other speech applications.
+- ${SPEECH(status)}:
-${SPEECH(spoke)}:
+Returns 1 if SpeechCreate has been called. This uses the same check that applications do to see if a
+speech object is set up. If it returns 0 then you know you cannot use other speech applications.
+
+- ${SPEECH(spoke)}:
Returns 1 if the speaker spoke something, or 0 if they were silent.
-${SPEECH(results)}:
+- ${SPEECH(results)}:
Returns the number of results that are available.
-${SPEECH_SCORE(result number)}:
+- ${SPEECH_SCORE(result number)}:
Returns the score of a result.
-${SPEECH_TEXT(result number)}:
+- ${SPEECH_TEXT(result number)}:
Returns the recognized text of a result.
-${SPEECH_GRAMMAR(result number)}:
+- ${SPEECH_GRAMMAR(result number)}:
Returns the matched grammar of the result.
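For instance, after SpeechBackground returns, the results could be inspected like this (a sketch; the NoOp lines merely log to the console):

```
exten => s,1,NoOp(Number of results: ${SPEECH(results)})
exten => s,2,NoOp(Best: ${SPEECH_TEXT(0)} score ${SPEECH_SCORE(0)} grammar ${SPEECH_GRAMMAR(0)})
```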
-Dialplan Flow:
+* Dialplan Flow:
+-----------------
1. Create a speech recognition object using SpeechCreate()
2. Activate your grammars using SpeechActivateGrammar(Grammar Name)
@@ -82,163 +120,174 @@ Dialplan Flow:
6. Deactivate your grammars using SpeechDeactivateGrammar(Grammar Name)
7. Destroy your speech recognition object using SpeechDestroy()
-Dialplan Examples:
+* Dialplan Examples:
-This is pretty cheeky in that it does not confirmation of results. As well the way the grammar is written it returns the person's extension instead of their name so we can just do a Goto based on the result text.
+This is pretty cheeky in that it does no confirmation of results. As well, the way the
+grammar is written, it returns the person's extension instead of their name, so we can
+just do a Goto based on the result text.
-company-directory.gram
+- Grammar: company-directory.gram
-#ABNF 1.0;
-language en-US;
-mode voice;
-tag-format <lumenvox/1.0>;
+ #ABNF 1.0;
+ language en-US;
+ mode voice;
+ tag-format <lumenvox/1.0>;
-root $company_directory;
+ root $company_directory;
-$josh = (Joshua | Josh) [Colp]:"6066";
-$mark = Mark [Spencer] | Markster:"4569";
-$kevin = Kevin [Fleming]:"2567";
+ $josh = (Joshua | Josh) [Colp]:"6066";
+ $mark = Mark [Spencer] | Markster:"4569";
+ $kevin = Kevin [Fleming]:"2567";
-$company_directory = ($josh | $mark | $kevin) { $ = parseInt($$) };
+ $company_directory = ($josh | $mark | $kevin) { $ = parseInt($$) };
-dialplan logic
+- Dialplan logic
-[dial-by-name]
-exten => s,1,SpeechCreate()
-exten => s,2,SpeechActivateGrammar(company-directory)
-exten => s,3,SpeechStart()
-exten => s,4,SpeechBackground(who-would-you-like-to-dial)
-exten => s,5,SpeechDeactivateGrammar(company-directory)
-exten => s,6,SpeechDestroy()
-exten => s,7,Goto(internal-extensions-${SPEECH_TEXT(0)})
+ [dial-by-name]
+ exten => s,1,SpeechCreate()
+ exten => s,2,SpeechActivateGrammar(company-directory)
+ exten => s,3,SpeechStart()
+ exten => s,4,SpeechBackground(who-would-you-like-to-dial)
+ exten => s,5,SpeechDeactivateGrammar(company-directory)
+ exten => s,6,SpeechDestroy()
+ exten => s,7,Goto(internal-extensions-${SPEECH_TEXT(0)})
-Useful Dialplan Tidbits:
+- Useful Dialplan Tidbits:
-A simple macro that can be used for confirm of a result. Requires some sound files. ARG1 is equal to the file to play back after "I heard..." is played.
+A simple macro that can be used for confirmation of a result. Requires some sound files.
+ARG1 is equal to the file to play back after "I heard..." is played.
-[macro-speech-confirm]
-exten => s,1,SpeechActivateGrammar(yes_no)
-exten => s,2,Set(OLDTEXT0=${SPEECH_TEXT(0)})
-exten => s,3,Playback(heard)
-exten => s,4,Playback(${ARG1})
-exten => s,5,SpeechStart()
-exten => s,6,SpeechBackground(correct)
-exten => s,7,Set(CONFIRM=${SPEECH_TEXT(0)})
-exten => s,8,GotoIf($["${SPEECH_TEXT(0)}" = "1"]?9:10)
-exten => s,9,Set(CONFIRM=yes)
-exten => s,10,Set(${CONFIRMED}=${OLDTEXT0})
-exten => s,11,SpeechDeactivateGrammar(yes_no)
+ [macro-speech-confirm]
+ exten => s,1,SpeechActivateGrammar(yes_no)
+ exten => s,2,Set(OLDTEXT0=${SPEECH_TEXT(0)})
+ exten => s,3,Playback(heard)
+ exten => s,4,Playback(${ARG1})
+ exten => s,5,SpeechStart()
+ exten => s,6,SpeechBackground(correct)
+ exten => s,7,Set(CONFIRM=${SPEECH_TEXT(0)})
+ exten => s,8,GotoIf($["${SPEECH_TEXT(0)}" = "1"]?9:10)
+ exten => s,9,Set(CONFIRM=yes)
+ exten => s,10,Set(${CONFIRMED}=${OLDTEXT0})
+ exten => s,11,SpeechDeactivateGrammar(yes_no)
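The macro above might be invoked along these lines (the sound file and target contexts are hypothetical):

```
exten => s,8,Macro(speech-confirm,the-name-we-heard)
exten => s,9,GotoIf($["${CONFIRM}" = "yes"]?dial,s,1:retry,s,1)
```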
-C API
+* The Asterisk Speech Recognition C API
+---------------------------------------
-The module res_speech.so exports a C based API that any developer can use to speech recognize enable their application. The API gives greater control, but requires the developer to do more on their end in comparison to the dialplan speech utilities.
+The module res_speech.so exports a C based API that any developer can use to add speech
+recognition to their application. The API gives greater control, but requires the
+developer to do more on their end in comparison to the dialplan speech utilities.

For all API calls that return an integer value, a non-zero value indicates an error has occurred.
-Creating a speech structure:
+- Creating a speech structure:
-struct ast_speech *ast_speech_new(char *engine_name, int format)
+ struct ast_speech *ast_speech_new(char *engine_name, int format)
-struct ast_speech *speech = ast_speech_new(NULL, AST_FORMAT_SLINEAR);
+ struct ast_speech *speech = ast_speech_new(NULL, AST_FORMAT_SLINEAR);
-This will create a new speech structure that will be returned to you. The speech recognition engine name is optional and if NULL the default one will be used. As well for now format should always be AST_FORMAT_SLINEAR.
+This will create a new speech structure that will be returned to you. The speech recognition
+engine name is optional, and if NULL the default one will be used. As well, for now the
+format should always be AST_FORMAT_SLINEAR.
-Activating a grammar:
+- Activating a grammar:
-int ast_speech_grammar_activate(struct ast_speech *speech, char *grammar_name)
+ int ast_speech_grammar_activate(struct ast_speech *speech, char *grammar_name)
-res = ast_speech_grammar_activate(speech, "yes_no");
+ res = ast_speech_grammar_activate(speech, "yes_no");
This activates the specified grammar on the speech structure passed to it.
-Start recognizing audio:
+- Start recognizing audio:
-void ast_speech_start(struct ast_speech *speech)
+ void ast_speech_start(struct ast_speech *speech)
-ast_speech_start(speech);
+ ast_speech_start(speech);
-This essentially tells the speech recognition engine that you will be feeding audio to it from then on. It MUST be called every time before you start feeding audio to the speech structure.
+This essentially tells the speech recognition engine that you will be feeding audio to it from
+then on. It MUST be called every time before you start feeding audio to the speech structure.
-Send audio to be recognized:
+- Send audio to be recognized:
-int ast_speech_write(struct ast_speech *speech, void *data, int len)
+ int ast_speech_write(struct ast_speech *speech, void *data, int len)
-res = ast_speech_write(speech, fr->data, fr->datalen);
+ res = ast_speech_write(speech, fr->data, fr->datalen);
-This writes audio to the speech structure that will then be recognized. It must be written signed linear only at this time. In the future other formats may be supported.
+This writes audio to the speech structure that will then be recognized. Audio must be written
+as signed linear only at this time. In the future other formats may be supported.
-Checking for results:
+- Checking for results:
-The way the generic speech recognition API is written is that the speech structure will undergo state changes to indicate progress of recognition. The states are outlined below:
+The way the generic speech recognition API is written is that the speech structure will
+undergo state changes to indicate progress of recognition. The states are outlined below:
-AST_SPEECH_STATE_NOT_READY - The speech structure is not ready to accept audio
-AST_SPEECH_STATE_READY - You may write audio to the speech structure
-AST_SPEECH_STATE_WAIT - No more audio should be written, and results will be available soon.
-AST_SPEECH_STATE_DONE - Results are available and the speech structure can only be used again by calling ast_speech_start
+ AST_SPEECH_STATE_NOT_READY - The speech structure is not ready to accept audio
+ AST_SPEECH_STATE_READY - You may write audio to the speech structure
+ AST_SPEECH_STATE_WAIT - No more audio should be written, and results will be available soon.
+ AST_SPEECH_STATE_DONE - Results are available and the speech structure can only be used again by
+ calling ast_speech_start
-It is up to you to monitor these states. Current state is available via a variable on the speech structure. (state)
+It is up to you to monitor these states. The current state is available via the state
+variable on the speech structure.
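Putting these calls together, a feed-and-wait loop might look like the following sketch. It is a fragment rather than a complete Asterisk module: get_next_frame() is a hypothetical stand-in for reading audio frames from the channel, and error handling is omitted.

```c
/* Sketch of a typical recognition pass (fragment, not a complete module). */
ast_speech_start(speech);  /* must be called before each feeding session */

/* Feed audio while the engine will accept it. */
while (speech->state == AST_SPEECH_STATE_READY) {
	struct ast_frame *fr = get_next_frame(chan);  /* hypothetical frame source */
	ast_speech_write(speech, fr->data, fr->datalen);
}

/* AST_SPEECH_STATE_WAIT: engine is still working; stop feeding and wait. */
while (speech->state == AST_SPEECH_STATE_WAIT) {
	/* sleep briefly or continue servicing the channel here */
}

if (speech->state == AST_SPEECH_STATE_DONE) {
	struct ast_speech_result *results = ast_speech_results_get(speech);
	/* examine results, then free them */
	ast_speech_results_free(results);
}
```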
-Knowing when to stop playback:
+- Knowing when to stop playback:
-If you are playing back a sound file to the user and you want to know when to stop play back because the individual started talking use the following.
+If you are playing back a sound file to the user and you want to know when to stop playback
+because the individual started talking, use the following:
-ast_test_flag(speech, AST_SPEECH_QUIET) - This will return a positive value when the person has started talking.
+ ast_test_flag(speech, AST_SPEECH_QUIET) - This will return a positive value when the person has started talking.
-Getting results:
+- Getting results:
-struct ast_speech_result *ast_speech_results_get(struct ast_speech *speech)
+ struct ast_speech_result *ast_speech_results_get(struct ast_speech *speech)
-struct ast_speech_result *results = ast_speech_results_get(speech);
+ struct ast_speech_result *results = ast_speech_results_get(speech);
This will return a linked list of result structures. A result structure looks like the following:
-struct ast_speech_result {
- /*! Recognized text */
- char *text;
- /*! Result score */
- int score;
- /*! Matched grammar */
- char *grammar;
- /*! List information */
- struct ast_speech_result *next;
-};
+ struct ast_speech_result {
+ char *text; /*!< Recognized text */
+ int score; /*!< Result score */
+ char *grammar; /*!< Matched grammar */
+ struct ast_speech_result *next; /*!< List information */
+ };
-Freeing a set of results:
+- Freeing a set of results:
-int ast_speech_results_free(struct ast_speech_result *result)
+ int ast_speech_results_free(struct ast_speech_result *result)
-res = ast_speech_results_free(results);
+ res = ast_speech_results_free(results);
This will free all results on a linked list. The results MAY NOT be used afterward, as the memory will have been freed.
-Deactivating a grammar:
+- Deactivating a grammar:
-int ast_speech_grammar_deactivate(struct ast_speech *speech, char *grammar_name)
+ int ast_speech_grammar_deactivate(struct ast_speech *speech, char *grammar_name)
-res = ast_speech_grammar_deactivate(speech, "yes_no");
+ res = ast_speech_grammar_deactivate(speech, "yes_no");
This deactivates the specified grammar on the speech structure.
-Destroying a speech structure:
+- Destroying a speech structure:
-int ast_speech_destroy(struct ast_speech *speech)
+ int ast_speech_destroy(struct ast_speech *speech)
-res = ast_speech_destroy(speech);
+ res = ast_speech_destroy(speech);
This will free all memory associated with the speech structure and destroy it with the speech recognition engine.
-Loading a grammar on a speech structure:
+- Loading a grammar on a speech structure:
-int ast_speech_grammar_load(struct ast_speech *speech, char *grammar_name, char *grammar)
+ int ast_speech_grammar_load(struct ast_speech *speech, char *grammar_name, char *grammar)
-res = ast_speech_grammar_load(speech, "builtin:yes_no", "yes_no");
+ res = ast_speech_grammar_load(speech, "builtin:yes_no", "yes_no");
-Unloading a grammar on a speech structure:
+- Unloading a grammar on a speech structure:
-If you load a grammar on a speech structure it is preferred that you unload it as well, or you may cause a memory leak. Don't say I didn't warn you.
+If you load a grammar on a speech structure it is preferred that you unload it as well,
+or you may cause a memory leak. Don't say I didn't warn you.
-int ast_speech_grammar_unload(struct ast_speech *speech, char *grammar_name)
+ int ast_speech_grammar_unload(struct ast_speech *speech, char *grammar_name)
-res = ast_speech_grammar_unload(speech, "yes_no");
+ res = ast_speech_grammar_unload(speech, "yes_no");
This unloads the specified grammar from the speech structure.