Speech, Synthesizers, and Dialogflow
At the same time SiriKit was announced, Apple also unveiled the Speech framework, the underlying voice recognition system that Siri uses. What does the Speech framework offer? It recognizes both live and prerecorded speech, creates transcriptions and alternative interpretations of the recognized text, and reports confidence levels indicating how accurate each transcription is. That sounds similar to what Siri does, so what’s the difference between SiriKit and the Speech framework?
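To make those capabilities concrete, here is a minimal sketch of transcribing a prerecorded audio file with the Speech framework. It assumes the app has already been granted speech recognition authorization and that `audioURL` points to a valid audio file; error handling is kept to a bare minimum.

```swift
import Speech

// Hypothetical file URL; in a real app this would point at a bundled
// or recorded audio file.
let audioURL = URL(fileURLWithPath: "/path/to/recording.m4a")

// Create a recognizer for a specific locale. The initializer is
// failable, so guard against unsupported locales.
guard let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")) else {
    fatalError("Speech recognition is not supported for this locale")
}

// A URL-based request transcribes prerecorded audio; for live audio
// you would use SFSpeechAudioBufferRecognitionRequest instead.
let request = SFSpeechURLRecognitionRequest(url: audioURL)

recognizer.recognitionTask(with: request) { result, error in
    guard let result = result else {
        print("Recognition failed: \(error?.localizedDescription ?? "unknown error")")
        return
    }

    // The best transcription as a single string.
    print("Best: \(result.bestTranscription.formattedString)")

    // Alternative interpretations of the same audio.
    for transcription in result.transcriptions {
        print("Alternative: \(transcription.formattedString)")
    }

    // Per-segment confidence scores (0.0 to 1.0).
    for segment in result.bestTranscription.segments {
        print("\(segment.substring) — confidence \(segment.confidence)")
    }
}
```

Note that before any of this runs, the app must call `SFSpeechRecognizer.requestAuthorization(_:)` and include the `NSSpeechRecognitionUsageDescription` key in its Info.plist; otherwise recognition requests will fail.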