Clarissa is a spoken dialogue interface meant for command situations, for astronauts or Mars pioneers. NASA studied the mission chatter from the sixties and seventies and discovered, much to their surprise, that about 80% of the dialogue between Houston control and the mission was centered on things that could have been handled inside the vehicle itself, like locating items on board, and had nothing to do with Houston's actual role in the affair. It was all kitchen talk.

This poses a distinct problem for the pioneers, because they are going to need to stay connected with their base station. You have not seen lag until you have seen Earth-to-Mars lag: try 5 minutes one way, and the same back. The kitchen talk will be no more, but they still need to talk to base. The computer in the habitat will have valuable information not only about their world but about their own systems (they will walk around with a laptop on their back, I think).

Clarissa listens to spoken language, and when it recognizes a command word it intercepts it and executes it with an appropriate directed dialogue. For example, two people may be talking when one of them needs a rover sample pickup. So that person says the name of the rover mid-conversation, usually some kind of off-beat name so it doesn't false-trigger. Let's call the rover "Thibodeaux" (think: SL names... yep). You get a response from the rover: "Yes?" "Thibodeaux, go to site number five." The rover responds "On my way," the dialogue closes, and you can continue on with your conversation. It will wait for you again.
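
The flow above can be sketched as a little state machine, assuming the recognizer hands you one utterance at a time as text. The rover name, the commands, and the replies here are just illustrations, not Clarissa's actual vocabulary:

```python
# Toy wake-word-gated dispatch: ordinary chatter is ignored until the
# rover's name opens a dialogue; one command later, the dialogue closes.

COMMANDS = {
    "go to site number five": "On my way",
    "return to base": "Returning to base",
}

class RoverDialogue:
    def __init__(self, name):
        self.name = name.lower()
        self.open = False          # dialogue closed: chatter is ignored

    def hear(self, utterance):
        """Return the rover's reply, or None if it keeps quiet."""
        text = utterance.lower().strip().rstrip(".")
        if text.startswith(self.name):
            rest = text[len(self.name):].lstrip(", ")
            if not rest:           # bare name: open the dialogue
                self.open = True
                return "Yes?"
            self.open = False      # name + command in one breath
            return COMMANDS.get(rest, "Say again?")
        if self.open:              # dialogue open: next utterance is a command
            self.open = False
            return COMMANDS.get(text, "Say again?")
        return None                # kitchen talk: stay quiet

rover = RoverDialogue("Thibodeaux")
rover.hear("so anyway, about those core samples")  # -> None (ignored)
rover.hear("Thibodeaux")                           # -> "Yes?"
rover.hear("go to site number five")               # -> "On my way"
rover.hear("back to the core samples")             # -> None (dialogue closed)
```

The one design point that matters is the gate: nothing is treated as a command unless the name opened a dialogue first, which is why the off-beat name helps.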

Here is a question for you. What if, after you built something like that, you found out in desert testing that while the recognizer handled a long command easily (and those are usually harder to recognize, up to a certain point), e.g. "Thibodeaux, proceed to checkpoint five. Pick up samples. Return to base," recognition got +worse+ on a simple, +normal+ short command like "Yes" or "No"? So the question is: why would something like this happen? To restate: why would speech recognition tuned to listen to you speaking all the time get confused if you suddenly did a change-up and spoke normally?
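
One plausible piece of the answer (my illustration, not the original system's): a recognizer constrained to known phrases can use every extra word as evidence, so a long command survives a misheard word because the rest of the phrase pins it down, while a one-word "Yes" or "No" has no redundancy at all. A toy word-overlap matcher, with a made-up phrase list, shows the asymmetry:

```python
# Toy phrase matcher: score each known phrase by how many of its words
# appear in what was heard, and pick the best. Phrases are illustrative.

PHRASES = [
    "proceed to checkpoint five",
    "pick up samples",
    "return to base",
    "yes",
    "no",
]

def best_match(heard):
    """Return (best phrase, overlap score) for a heard transcript."""
    heard_words = set(heard.split())
    scored = [(sum(w in heard_words for w in p.split()), p) for p in PHRASES]
    score, phrase = max(scored)
    return phrase, score

# One misheard word in a long command: three words of evidence remain,
# so the right phrase still wins.
best_match("proceed to checkpoint nine")  # -> ("proceed to checkpoint five", 3)

# The same single-word error in a short command ("no" heard as "go"):
# the score is 0, so the recognizer is purely guessing.
best_match("go")  # -> (some phrase, 0)
```

That is only the redundancy half of the story; it doesn't by itself explain why tuning for long commands would actively hurt short ones, which is the sting in the original question.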

This post was brought to you by the Coconino County Extension Service. Bringing you clean water, fresh air, and GPS since 1918.


Lee said…
I just love it when men speak hi-tech gibberish to me. Maybe if someone was like a gangsta and the speech recognition got all confused when the homey said, "Yo Yo Yo" instead of "No no no"?