Serdar Yegulalp
Senior Writer

ONNX makes machine learning models portable, shareable

news
Sep 08, 2017 | 2 mins
Analytics | Data Management | Machine Learning

Microsoft and Facebook's machine learning model format aims to let devs choose frameworks freely and share trained models without hassle


Microsoft and Facebook have announced a joint project to make it easier for data analysts to exchange trained models between different machine learning frameworks.

The Open Neural Network Exchange (ONNX) format is meant to provide a common way to represent the data used by neural networks. Most frameworks have their own model format, and a model can be moved to another framework only by way of a conversion tool.

ONNX lets models be swapped freely between frameworks without that conversion step: a model trained in one framework can be used for inference by another.

Microsoft claims the ONNX format provides advantages above and beyond not having to convert between model formats. For instance, it allows developers to choose frameworks that reflect the job and workflow at hand, since each framework tends to be optimized for different use cases: “fast training, supporting flexible network architectures, inferencing on mobile devices, etc.”

Facebook notes that a few key frameworks are already on board to start supporting ONNX. Caffe2 and PyTorch (both Facebook projects) and Cognitive Toolkit (a Microsoft project) will add support sometime in September. This, according to Facebook, “will allow models trained in one of these frameworks to be exported to another for inference.”
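The announcement doesn’t include code, but as a rough illustration of the intended workflow, here is a minimal sketch of exporting a PyTorch model to the ONNX format using PyTorch’s torch.onnx.export API. The toy model below is hypothetical, standing in for a real trained network:

    import torch
    import torch.nn as nn

    # Hypothetical toy model standing in for a real trained network
    model = nn.Sequential(nn.Linear(10, 5), nn.ReLU(), nn.Linear(5, 2))
    model.eval()

    # torch.onnx.export traces the model with a dummy input and writes
    # the traced graph to disk in the ONNX format
    dummy_input = torch.randn(1, 10)
    torch.onnx.export(model, dummy_input, "model.onnx")

The resulting model.onnx file could then be loaded by any ONNX-aware framework or runtime and used for inference.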

The first wave of ONNX-supporting releases won’t cover everything out of the gate. In PyTorch’s case, Facebook notes that “some of the more advanced programs in PyTorch such as those with dynamic flow control” won’t benefit fully from ONNX support yet.

It’s not immediately clear how ONNX model sizes shape up against those already in common use. Apple’s Core ML format, for instance, was designed so that small but accurate models could be deployed to and served from end-user devices like the iPhone. But Core ML is proprietary. One of ONNX’s long-term goals is to make it easier to deliver models for inference to many kinds of targets.


Serdar Yegulalp is a senior writer at InfoWorld, covering software development and operations tools, machine learning, containerization, and reviews of products in those categories. Before joining InfoWorld, Serdar wrote for the original Windows Magazine, InformationWeek, the briefly resurrected Byte, and a slew of other publications. When he's not covering IT, he's writing SF and fantasy published under his own personal imprint, Infinimata Press.
