Voice Input in Aliveforms

In Aliveforms, logic units can accept voice input using the Web Speech API's SpeechRecognition interface. This lets users respond by speaking, which is useful for building tutorials, language-learning apps, and other educational apps.

Set up

For this we need a local logic unit. First, we initialize speech recognition.

Input Type: Execute JavaScript
Screen Index: -1
if ('SpeechRecognition' in window || 'webkitSpeechRecognition' in window) {
  window.recognition = new (window.SpeechRecognition || window.webkitSpeechRecognition)();

  window.recognition.continuous = false;
  window.recognition.lang = 'en-US';
  window.recognition.interimResults = false;

} else {
  alert('Speech recognition is not supported in your browser. Please use a modern browser like Google Chrome.');
}
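The feature check above can be factored into a small helper so it can be reused and tested without a browser. This is a sketch, not part of Aliveforms; the function name `speechSupported` is hypothetical.

```javascript
// Hypothetical helper: returns true if the given window-like object
// exposes either the standard or the webkit-prefixed SpeechRecognition.
function speechSupported(win) {
  return 'SpeechRecognition' in win || 'webkitSpeechRecognition' in win;
}

// Plain objects standing in for `window`:
speechSupported({ webkitSpeechRecognition: function () {} }); // true
speechSupported({});                                          // false
```

In the logic unit itself you would call it as `speechSupported(window)` before constructing the recognizer.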

Now we can use it as in the following example. Here we run it on ALL screens, and we use it for text inputs only.

Input Type: Execute JavaScript
Screen Index: ALL
// Start listening shortly after the screen loads.
setTimeout(function () {
  window.recognition.start();
}, 300);

window.recognition.onresult = (event) => {
  console.log(event.results);
  const result = event.results[0][0].transcript;
  const confidence = event.results[0][0].confidence;

  $.Alert(`You said: ${result} (confidence: ${confidence})`);

  if (screen !== -1) {
    // On input screens, store the transcript and fill the text input,
    // unless the user said "pass".
    inputs[screen] = result;
    if (result.toLowerCase() !== "pass") {
      document.getElementById('ti').value = result;
    }
    setTimeout(function () {
      $.PressNext();
    }, 500);
  } else {
    // On the welcome screen (index -1), saying "start" advances the form.
    if (result.toLowerCase() === "start") {
      setTimeout(function () {
        $.PressNext();
      }, 500);
    }
  }
};

window.recognition.onend = () => {
  $.Alert('Speech recognition ended');
};

window.recognition.onerror = (event) => {
  $.Alert(`Speech recognition error: ${event.error}`);
};