Getting Ahead While Learning On The Edge

“With the power of edge AI in the palm of your hand, your business will be irresistible.”

Marketing copy for artificial intelligence companies all seems to read like that. Everyone, apparently, offers cloud-scale, AI-driven business intelligence analytics. As impressive as it sounds, we’re not sure the marketing mumbo jumbo means anything. But what does AI actually look like on edge devices nowadays?

Staying on the edge means that the actual AI evaluation, and even fine-tuning, runs locally on the user’s device rather than in some cloud environment. This is a double win, both for the business and for the user. Privacy is easier to preserve, since less information is sent back to a central location. In addition, the AI can work in situations where a server may not be reachable at all, or may not respond quickly enough.

Google and Apple each have their own AI libraries, ML Kit and Core ML, respectively. There are tools to convert TensorFlow, PyTorch, XGBoost, and LibSVM models to the Core ML and ML Kit formats. But other solutions try to provide a platform-agnostic layer for training and evaluation. We’ve covered TensorFlow Lite (TFL) before, a trimmed-down version of TensorFlow, which has matured considerably since 2017.

For this article, we will look at PyTorch Live (PTL), a slimmed-down framework for getting PyTorch models onto smartphones. Unlike TFL (which can run on a Raspberry Pi or in a browser), PTL focuses entirely on Android and iOS and offers tight integration. It is built on a React Native environment, which means it leans heavily on the Node.js world.

No clouds needed

At the moment, PTL is still early days. It runs on macOS (although Apple Silicon isn’t supported yet), and Windows and Linux compatibility is apparently imminent. It comes with a simple CLI that makes starting a new project relatively painless. After installation and project creation, the experience is smooth; a few commands take care of everything. The tutorial was straightforward, and soon we had a demo that could recognize handwritten numbers.

It was time to take the tutorial further and create a custom model. Using the EMNIST dataset, we trained a resnet9 model on a character dataset with the help of a handy GitHub repo. Once we had a model, it was easy enough to use the PyTorch utilities to export it for the lite runtime. With a few changes to the code (which is reloaded live in the simulator), it recognizes characters instead of numbers.
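The export step above can be sketched with the stock PyTorch mobile utilities. Here we use a small stand-in CNN instead of the full trained resnet9, and the file name `emnist.ptl` is our own choice; the trace/optimize/save sequence is what matters.

```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

# Small stand-in for the trained resnet9; EMNIST's "balanced"
# split has 47 character classes.
model = torch.nn.Sequential(
    torch.nn.Conv2d(1, 8, 3, padding=1),
    torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(8, 47),
)
model.eval()

# Trace with a dummy 28x28 grayscale input, apply the mobile
# optimization passes, and save for the lite interpreter that
# PyTorch ships on the phone.
example = torch.rand(1, 1, 28, 28)
traced = torch.jit.trace(model, example)
optimized = optimize_for_mobile(traced)
optimized._save_for_lite_interpreter("emnist.ptl")
```

The `.ptl` file is then bundled with the app and loaded by the on-device runtime in place of the tutorial’s number-recognition model.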

We have no doubt that anyone a little deeper into the world of machine learning could take it much further. PTL has other exciting demos, such as on-device speech recognition and live video segmentation and recognition. Overall, the experience was pleasant, and the use case we were trying to implement was relatively simple.

If you are already in the smartphone React Native world, PTL seems easy to pick up and use. Beyond that, much remains unsupported. TensorFlow Lite was similarly limited when we first covered it, and has since matured and gained new platforms and features, becoming a powerful library with broad platform support. Eventually, we’ll see what kind of library PyTorch Live turns out to be. The beta version already has support for GPUs and neural engines.
