Take AI to the Edge with Xnor’s Computer Vision

Go beyond conventional deep learning with Xnor’s Binarized ML Models

[Figure: Deep neural network]

Traditional deep learning models rely on GPUs to process 32-bit floating-point operations. As a result, Deep Neural Networks (DNNs) typically perform best when they run in a cloud-based data center, which adds bandwidth and data-usage costs.

Xnor retrains DNN models into a proprietary binary neural network called Xnor-Net. By reducing precision to a single bit, Xnor's AI models can be processed with binary operations, which are standard instructions on low-cost CPU platforms.
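To make that concrete, here is a minimal sketch (ours, not Xnor's proprietary code) of how a dot product between two vectors of +1/-1 values collapses into an XNOR followed by a population count, operations that map to single instructions on most CPUs:

```python
def binary_dot(a_bits: int, b_bits: int, n: int) -> int:
    """Dot product of two {-1, +1} vectors packed as n-bit integers.

    Encoding: bit 1 -> +1, bit 0 -> -1. The XNOR of two bits equals 1
    exactly when the underlying values multiply to +1, so a population
    count of the XNOR result tallies the matching positions.
    """
    xnor = ~(a_bits ^ b_bits) & ((1 << n) - 1)  # keep only the low n bits
    matches = bin(xnor).count("1")
    return 2 * matches - n  # matches minus mismatches

# a = [+1, -1, +1, -1] and b = [+1, +1, -1, -1], packed MSB-first
a = 0b1010
b = 0b1100
print(binary_dot(a, b, 4))  # 0, same as 1*1 + (-1)*1 + 1*(-1) + (-1)*(-1)
```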

[Figure: Xnor-Net]

Historically, binarization has come at the cost of accuracy. Xnor-Net, however, matches the performance of conventional 32-bit models.
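One published ingredient behind that accuracy comes from the original XNOR-Net paper (Rastegari et al., 2016): each real-valued weight filter W is approximated as alpha * sign(W), where the scaling factor alpha is the mean absolute value of W. A minimal sketch:

```python
import numpy as np

def binarize_weights(w: np.ndarray):
    """Approximate a real-valued filter w as alpha * sign(w).

    Per Rastegari et al. (2016), alpha = mean(|w|) minimizes the squared
    error ||w - alpha * b||^2 over b in {-1, +1}^n; this scaling recovers
    much of the accuracy lost to naive 1-bit quantization.
    """
    b = np.sign(w)
    b[b == 0] = 1.0             # map sign(0) to +1 so b stays binary
    alpha = np.abs(w).mean()
    return alpha, b

w = np.array([0.8, -0.3, 0.5, -0.6])
alpha, b = binarize_weights(w)
print(alpha)       # 0.55
print(alpha * b)   # [ 0.55 -0.55  0.55 -0.55], the binary approximation
```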

This makes it possible to run AI on edge devices, yielding models that are 10x faster, 30x more power efficient, and 15x more memory efficient than conventional solutions.

[Figure: Binary convolutions]

With Xnor’s Binarized AI Models, developers can build powerful, innovative computer-vision applications that run extremely well in resource-constrained environments. These include solar-powered chips, battery-operated MCUs, and even batteryless devices that harvest energy directly from radio-frequency (RF) and WiFi signals.

Here are a few of the advantages of Xnor-Net:

Power and Memory Efficient

The Xnor-Net Binary Neural Network delivers state-of-the-art accuracy with models that are 10x faster, 20x more power efficient, and 15x more memory efficient than conventional deep learning models.

Optimized Loss Functions

Conventional models are often trained to optimize for just one or two object classes. Xnor’s binarized models boost accuracy by optimizing their loss functions over the distribution of all possible categories.
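Xnor does not disclose its exact loss functions, so the sketch below simply illustrates the general idea with a standard softmax cross-entropy against a soft target: the loss is computed over a full probability distribution across every category rather than a single positive class:

```python
import numpy as np

def softmax_cross_entropy(logits: np.ndarray, target: np.ndarray) -> float:
    """Cross-entropy between predicted and target distributions.

    Because `target` is a full probability vector over every category,
    the loss penalizes the model's scores for all classes at once
    rather than for a single positive label.
    """
    shifted = logits - logits.max()                 # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum())
    return float(-(target * log_probs).sum())

# Four categories; the soft target spreads mass across all of them.
logits = np.array([2.0, 0.5, 0.1, -1.0])
target = np.array([0.7, 0.2, 0.05, 0.05])
print(softmax_cross_entropy(logits, target))
```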

Sparse Model Design

We use a variety of techniques to prune operations and parameters, reducing model size and minimizing the math required for accurate results. This makes AI possible on low-cost, resource-constrained embedded devices.
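Xnor does not say which pruning techniques it uses; magnitude pruning is one common approach, sketched here purely for illustration:

```python
import numpy as np

def magnitude_prune(w: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of weights.

    A standard pruning technique (illustrative, not necessarily Xnor's):
    weights near zero contribute little to the output, so removing them
    shrinks the model with minimal accuracy loss.
    """
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy()
    threshold = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    pruned = w.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

w = np.array([0.9, -0.01, 0.4, 0.02, -0.7, 0.03])
print(magnitude_prune(w, sparsity=0.5))  # [ 0.9  0.   0.4  0.  -0.7  0. ]
```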

Compact Model Design

Our compact design reduces the model size to less than 1 MB, so our computer-vision apps can run on embedded devices without sacrificing performance or accuracy.
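The storage arithmetic behind sub-megabyte models is straightforward. The parameter count below is hypothetical, chosen only to show the scale of the savings:

```python
# Hypothetical network with 8 million parameters (illustrative only).
params = 8_000_000

fp32_bytes = params * 4        # 32-bit floats: 4 bytes per weight
binary_bytes = params // 8     # 1-bit weights: 8 weights per byte
# (Per-filter scaling factors add a small overhead, negligible here.)

print(f"32-bit model: {fp32_bytes / 1e6:.0f} MB")    # 32 MB
print(f"1-bit model:  {binary_bytes / 1e6:.0f} MB")  # 1 MB
```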