Over the Christmas break, in between migrating some data for customers, I started to explore Python for Data Science and AI. This led me down some great rabbit holes, such as purchasing the book Machine Learning for Time Series Forecasting with Python by Francesca Lazzeri, PhD, exploring Natural Language Processing using neural networks, and leveraging Speech Recognition.
With these tools under my belt, I set myself the simple goal of building my own Voice Assistant… Little did I know, this would open up a world of fun and obsession…
The issue I faced was really a first world problem: Google Home didn’t have the commands I wanted… or at least didn’t use the phrases I wanted.
So, I drew up this quick idea and started building away.
First, my focus was on getting a decent NLP engine that could understand an Australian accent. The wonderful thing is that Python has so many pre-made modules; I was able to leverage the NLTK module, which saved me from having to design my own and find copious amounts of training data.
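To give a feel for what NLTK takes off your plate, here is a minimal preprocessing sketch: lowercasing, tokenising, and stemming a phrase so that variations like “reading” and “reads” collapse to the same feature. The phrases are just illustrative examples, not my actual training data.

```python
import re
from nltk.stem import PorterStemmer  # stemming needs no corpus downloads

stemmer = PorterStemmer()

def preprocess(phrase):
    # Lowercase, strip punctuation, split into tokens, then stem each one
    # so "reading", "reads" and "read" all map to the same stem.
    tokens = re.findall(r"[a-z']+", phrase.lower())
    return [stemmer.stem(t) for t in tokens]

print(preprocess("Reading my emails, please!"))
```

Normalising phrases like this before they reach the model is what lets a small amount of training data cover many ways of saying the same thing.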
The next job was building the custom neural network to handle my phrases and commands. This is where tools like numpy and Keras come into their own, letting you prototype and develop your own AI models and evaluate the results. The model now matched a phrase to my intent and triggered the correct command. Success!
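The core idea of intent matching can be sketched in numpy alone: encode each phrase as a bag-of-words vector and train a small softmax classifier over intents. This is a toy stand-in for the Keras model, with made-up phrases and intent names, but it shows the phrase-to-intent mapping at work.

```python
import numpy as np

# Toy training data: phrases mapped to intents (hypothetical examples).
TRAINING = [
    ("read my emails", "read_email"),
    ("check my inbox", "read_email"),
    ("compose an email", "send_email"),
    ("write a message", "send_email"),
]

# Bag-of-words vocabulary and intent labels built from the training set.
vocab = sorted({w for phrase, _ in TRAINING for w in phrase.split()})
intents = sorted({intent for _, intent in TRAINING})

def vectorise(phrase):
    words = phrase.split()
    return np.array([1.0 if w in words else 0.0 for w in vocab])

X = np.stack([vectorise(p) for p, _ in TRAINING])
y = np.array([intents.index(i) for _, i in TRAINING])

# A single dense layer with softmax output, trained by gradient descent --
# the same shape of model you would express in a couple of Keras lines.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(len(vocab), len(intents)))
b = np.zeros(len(intents))

for _ in range(500):
    logits = X @ W + b
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = exp / exp.sum(axis=1, keepdims=True)
    grad = probs.copy()
    grad[np.arange(len(y)), y] -= 1.0      # cross-entropy gradient w.r.t. logits
    W -= 0.5 * (X.T @ grad) / len(y)
    b -= 0.5 * grad.mean(axis=0)

def predict(phrase):
    logits = vectorise(phrase) @ W + b
    return intents[int(np.argmax(logits))]

print(predict("read my emails"))
```

Swapping this hand-rolled layer for a Keras `Sequential` model is what makes it easy to experiment with deeper architectures and evaluate the results.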
With the commands now being matched, it was just a matter of pairing each command with its own Command Module containing the code to do things, such as reading my emails from Office 365 thanks to Graph API, or even composing emails on my behalf. The modular nature of this approach is allowing me to expand it one command at a time.
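The Command Module pattern boils down to a registry that maps each intent to a handler function. A minimal sketch, with hypothetical handler names standing in for the real modules (the actual email one wraps Microsoft Graph API calls):

```python
# Hypothetical command registry: each intent maps to a small Command
# Module -- represented here as a plain function.

def read_emails():
    # Placeholder: the real module fetches mail from Office 365 via Graph API.
    return "Reading your latest emails..."

def compose_email():
    # Placeholder: the real module drafts and sends mail on my behalf.
    return "Composing an email on your behalf..."

COMMANDS = {
    "read_email": read_emails,
    "send_email": compose_email,
}

def dispatch(intent):
    # Look up the module for a matched intent; fail gracefully otherwise.
    handler = COMMANDS.get(intent)
    if handler is None:
        return "Sorry, I don't know that command yet."
    return handler()

print(dispatch("read_email"))
```

Adding a new capability is then just a matter of writing one more function and registering it, which is exactly what makes the one-command-at-a-time expansion painless.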
My next challenge is taking the above architecture and turning it into an API application, letting me centralise the AI brain and opening doors such as developing my own hardware on top of Arduino or Raspberry Pi platforms. On top of that, I plan to use Azure’s voice recognition features to lock it down to my voice pattern, allowing me to control who can command my AI.
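Centralising the brain behind an API could look something like the stdlib-only sketch below: a single endpoint that accepts an intent and returns the command’s reply. The endpoint path and payload shape are assumptions for illustration; in practice a framework like Flask or FastAPI would be a natural fit.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical intent handlers; the real ones would wrap the Command Modules.
COMMANDS = {"read_email": lambda: "Reading your latest emails..."}

class AssistantAPI(BaseHTTPRequestHandler):
    def do_POST(self):
        # A single endpoint that any client device can POST an intent to.
        if self.path != "/command":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        handler = COMMANDS.get(payload.get("intent"))
        reply = handler() if handler else "Unknown command"
        body = json.dumps({"reply": reply}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

if __name__ == "__main__":
    # A Raspberry Pi or Arduino-based mic node just needs to reach this port.
    HTTPServer(("127.0.0.1", 8080), AssistantAPI).serve_forever()
```

With the brain behind an HTTP boundary like this, speaker verification (such as Azure’s) can sit in front as a gatekeeper before any intent is dispatched.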