How Machines Learn at Spokestack
1 Model, 5 Platforms: AutoML, Personal ML, and HuggingFace Transformers Made Easy
Machine learning for voice is the core of what Spokestack does. Not only do we make it easy for all developers to use, but we're also always advancing the state of the art. This week, we pull back the curtain on our machine learning backend.
Our model creation service uses AutoML techniques to generate a single model that runs on the edge, on mobile, in the browser, and in the cloud. Once you train your custom wake word, keyword, or text-to-speech model, you can run it everywhere!
Take a deep dive with our case study on how we automatically convert machine learning models from TensorFlow to TensorFlow.js. Those custom models you create in Spokestack Maker utilize transfer learning for rapid testing and prototyping (or just for your personal use at home or in your side projects)—learn more. Finally, if you’re curious about the hype around HuggingFace Transformers, walk through our tutorial on creating a Wikipedia-powered Q&A voice bot that incorporates both Spokestack and Transformers. “Tell me about Lady Ada Lovelace …”
Converting a TensorFlow Model to TensorFlow.js in Python
We use several different types of TensorFlow models: big models for the cloud, small TensorFlow Lite models for mobile devices, and even TensorFlow.js models for browsers.
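As a rough sketch of the browser side of that workflow: a model exported in TensorFlow's SavedModel format can be converted for TensorFlow.js with the `tensorflowjs_converter` command-line tool, which ships with the `tensorflowjs` pip package. The directory paths below are placeholders, not the ones used in our pipeline.

```shell
# Install the converter (part of the tensorflowjs pip package)
pip install tensorflowjs

# Convert a TensorFlow SavedModel into a TensorFlow.js model
tensorflowjs_converter \
    --input_format=tf_saved_model \
    ./my_saved_model \
    ./web_model
```

The output directory contains a `model.json` plus binary weight shards that a browser app can load with TensorFlow.js.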
What Are Personal AI Models?
The personal models created with Spokestack Maker are only as good as the data they're trained on. Our automatic model trainer adjusts to make the best use of the data you provide.
Building a Question Answering Bot with Python
This tutorial will teach you how to use Spokestack and HuggingFace's Transformers library to build a voice interface for a question-answering service using data from Wikipedia.
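The shape of such a bot can be sketched without the heavy dependencies. In the tutorial itself, answer extraction is handled by a Transformers question-answering model; the `answer_question` function below is a deliberately naive stand-in (word-overlap scoring over sentences) meant only to show the extractive-QA flow, and all names and text are illustrative.

```python
import re

def answer_question(question: str, context: str) -> str:
    """Toy extractive QA: return the context sentence that shares the
    most words with the question. A real bot would call a HuggingFace
    Transformers question-answering model at this step instead."""
    sentences = re.split(r"(?<=[.!?])\s+", context)
    q_words = set(re.findall(r"\w+", question.lower()))

    def overlap(sentence: str) -> int:
        return len(q_words & set(re.findall(r"\w+", sentence.lower())))

    return max(sentences, key=overlap)

# Stand-in for a passage retrieved from Wikipedia.
wiki_snippet = (
    "Ada Lovelace was an English mathematician. "
    "She is often regarded as the first computer programmer. "
    "Her notes describe an algorithm for the Analytical Engine."
)

print(answer_question("Who was the first computer programmer?", wiki_snippet))
# → She is often regarded as the first computer programmer.
```

In a full voice bot, the question would arrive via speech recognition and the selected answer would be read back with text-to-speech.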