How 3 developers used Core ML to run AI models on an iPhone

Apple introduced Core ML, a framework designed to speed up on-device machine learning tasks, alongside iOS 11 in June 2017. The Cupertino company shipped its first chip purpose-built for AI, the A11 Bionic, in last year’s iPhone X. And at the 2018 Worldwide Developers Conference (WWDC), it took the wraps off Core ML 2, a new and improved version of the framework, and Create ML, a GPU-accelerated tool for native AI model training on Macs.

That’s impressive progress in a year, but the fact remains that smartphone AI is in its infancy; there are plenty of challenges to overcome. To get a better sense of them, VentureBeat spoke to iOS developers using Core ML today for language translation, object detection, and style transfer. Here’s what they had to say.


Digital Masterpieces

“We were using techniques developed at the Institute,” he said — specifically for style transfer, a technique of recomposing paintings, pictures, or sketches in the style of other images. “We already had implementations for these neural networks and artificial intelligence techniques [on desktop], so we thought it would be cool to port them to the iPhone.”
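In practice, that kind of port comes down to converting the trained network into an .mlmodel file and wrapping it in a few lines of Swift. Below is a rough sketch of how an image-to-image model such as a style-transfer network might be invoked through Vision; the StyleTransfer class name stands in for whatever class Xcode generates from a real model, and actual inputs and outputs will vary by network.

```swift
import CoreML
import UIKit
import Vision

// Rough sketch: run an image-to-image Core ML model (e.g. style transfer)
// through Vision. "StyleTransfer" is a placeholder for the class Xcode
// generates from the .mlmodel file; real models differ in inputs and outputs.
func stylize(_ image: UIImage, completion: @escaping (UIImage?) -> Void) {
    guard let cgImage = image.cgImage,
          let visionModel = try? VNCoreMLModel(for: StyleTransfer().model) else {
        completion(nil)
        return
    }

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // Image-output models come back as pixel buffer observations.
        guard let observation = request.results?.first as? VNPixelBufferObservation else {
            completion(nil)
            return
        }
        completion(UIImage(ciImage: CIImage(cvPixelBuffer: observation.pixelBuffer)))
    }

    // Keep inference off the main thread; Core ML picks the compute device itself.
    DispatchQueue.global(qos: .userInitiated).async {
        let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
        try? handler.perform([request])
    }
}
```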

In machine learning, neural networks — computing architectures loosely based on the physiology of the human brain — consist of layers that contain nodes. Nodes are akin to neurons — they’re where data fed into the network is combined with coefficients called weights, which assign significance to the inputs by dampening or amplifying them. Deep neural networks have many such layers and nodes, and depth generally (though not always) correlates with accuracy.
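To make that concrete, here is roughly what a single node computes, as a toy sketch with made-up weights and a ReLU activation rather than any particular production network:

```swift
// Toy sketch of a single node: a weighted sum of inputs plus a bias,
// passed through an activation function (ReLU here) that dampens or
// amplifies the signal before it flows to the next layer.
struct Node {
    var weights: [Double]
    var bias: Double

    func activate(_ inputs: [Double]) -> Double {
        let sum = zip(weights, inputs).reduce(bias) { $0 + $1.0 * $1.1 }
        return max(0, sum) // ReLU: negative sums are dampened to zero
    }
}

let node = Node(weights: [0.8, -0.5, 0.3], bias: 0.1)
print(node.activate([1.0, 2.0, 3.0])) // (0.8 - 1.0 + 0.9) + 0.1 = 0.8
```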

That’s not to say the Polarr team hasn’t tried. It has pushed iOS devices to the limit in the lab, with models that ramp processor and graphics chip usage up to 100 percent within the first five seconds of runtime. But the drawbacks have been severe — the entire operating system becomes unresponsive, the screen and back panel become uncomfortably hot, and the battery drains quickly.

In fairness to Apple, Core ML 2 introduces a number of improvements. It’s 30 percent faster thanks to a technique called batch prediction. It supports 16-bit floating point and all levels of quantization down to 1 bit. And it ships with a model converter that works with Facebook’s Caffe and Caffe2, Keras, scikit-learn, XGBoost, LibSVM, and Google’s TensorFlow Lite frameworks.
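Batch prediction shows up in the API as MLBatchProvider: instead of looping over single predictions, a developer hands the model a whole array of inputs and lets Core ML schedule them together. A rough sketch, assuming a compiled model on disk and a single "image" input (both placeholders):

```swift
import CoreML
import CoreVideo

// Rough sketch of Core ML 2 batch prediction: wrap the inputs in an
// MLBatchProvider so the framework can schedule them as one batch instead
// of one prediction(from:) call per image. The model URL and the "image"
// input name are placeholders for a real, compiled model.
func classify(batch pixelBuffers: [CVPixelBuffer],
              compiledModelURL: URL) throws -> MLBatchProvider {
    let model = try MLModel(contentsOf: compiledModelURL)

    let inputs: [MLFeatureProvider] = try pixelBuffers.map { buffer in
        try MLDictionaryFeatureProvider(
            dictionary: ["image": MLFeatureValue(pixelBuffer: buffer)]
        )
    }

    let batch = MLArrayBatchProvider(array: inputs)
    return try model.predictions(from: batch, options: MLPredictionOptions())
}
```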

“For the future, it would make sense for Apple to start releasing more pretrained models based on their own big data,” he wrote in an email. “Smaller developers don’t have access to large amounts of data or the resources to do long training runs, so there are some things that are impractical even if you have the expertise. For example, it would be nice if there was a way to build an image classifier that didn’t require you to bring your own dataset — ideally you’d just declare the objects you want it to recognize and it just works — Apple’s cloud builds the model for you, using their data.”
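Today, the bring-your-own-dataset path he describes looks roughly like this with Create ML on a Mac; the directory layout (one subfolder per label) and the file paths below are placeholders:

```swift
import CreateML
import Foundation

// Rough Create ML sketch (runs on a Mac, e.g. in an Xcode playground):
// train an image classifier from a folder whose subfolders name the labels,
// check it against a held-out set, then export an .mlmodel for the iOS app.
// All paths here are placeholders.
let trainingDir = URL(fileURLWithPath: "/Users/me/Datasets/Flowers/Train")
let testingDir  = URL(fileURLWithPath: "/Users/me/Datasets/Flowers/Test")

let classifier = try MLImageClassifier(
    trainingData: .labeledDirectories(at: trainingDir)
)

let evaluation = classifier.evaluation(on: .labeledDirectories(at: testingDir))
print("Held-out error rate: \(evaluation.classificationError)")

try classifier.write(to: URL(fileURLWithPath: "/Users/me/FlowerClassifier.mlmodel"))
```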

“The thing most on top of my mind is to understand GPU/CPU and thermal throttling,” he wrote in an email. “For example, you know [how] a lot of PC games … tell you the FPS and plot a curve for extended gameplay? As a developer, it would be amazing if CoreML could do that for models. You’d load it up, it would tell you when the GPU starts to throttle, and based on that produce a list of performance reports for other devices (iPhone 5, 6, 7, 8, X). If Apple doesn’t do this, I’m sure some startup will be formed just for this sort of thing.”
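No such profiler exists in Core ML today, but iOS does expose a coarse signal developers can watch themselves: ProcessInfo's thermal state, combined with simple timing around each prediction. Here is a sketch of what that homegrown instrumentation might look like; the measurement and logging scheme is illustrative, not an Apple API.

```swift
import Foundation

// Homegrown version of the profiling described above: watch ProcessInfo's
// thermal state for signs of throttling and log how long each prediction
// takes. The measurement and logging scheme is illustrative, not an Apple API.
final class ThermalProfiler {
    private var observer: NSObjectProtocol?

    func start() {
        observer = NotificationCenter.default.addObserver(
            forName: ProcessInfo.thermalStateDidChangeNotification,
            object: nil,
            queue: .main
        ) { _ in
            // Raw values: 0 = nominal, 1 = fair, 2 = serious, 3 = critical.
            // Throttling ramps up as the state climbs.
            print("Thermal state changed: \(ProcessInfo.processInfo.thermalState.rawValue)")
        }
    }

    // Wrap each Core ML prediction call in this to chart latency over time,
    // much like an FPS counter in a PC game.
    func measure<T>(_ label: String, _ work: () throws -> T) rethrows -> T {
        let start = CFAbsoluteTimeGetCurrent()
        defer {
            let milliseconds = (CFAbsoluteTimeGetCurrent() - start) * 1000
            print("\(label) took \(String(format: "%.1f", milliseconds)) ms")
        }
        return try work()
    }

    deinit {
        if let observer = observer {
            NotificationCenter.default.removeObserver(observer)
        }
    }
}
```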