Beyond traditional deep learning

Binary convolutional neural networks

Traditional deep learning AI models are compute intensive. They rely on GPUs and are often restricted to running in cloud data centers. Xnor's novel approach retrains models into a proprietary binary neural network called Xnor-Net. The result is state-of-the-art accuracy from models that are 10x faster and 20-200x more power efficient, and that need 8-15x less memory, than traditional deep learning models.
Binary Convolutional Neural Network: 10x faster, 30x more power efficient, 15x less memory

Xnorized models deliver state-of-the-art performance

Any deep neural network can benefit from binarization and Xnor's other optimizations. Models already developed include detection, recognition, tracking, segmentation, face recognition, scene classification, pose estimation, image enhancement, emotion recognition, and voice command.

Xnor technology optimizes AI performance from battery-powered devices to GPUs

Hardware spectrum, from lowest to highest cost: Raspberry Pi-0 ($), Pine 64 ($), NanoPi A64 ($$), Raspberry Pi-3 ($$), Mobile CPU ($$$), Desktop CPU ($$$), Server GPU ($$$$). Traditional AI only runs at the server-GPU end of this spectrum; the Xnor.ai deep learning framework runs across the entire range.

Deep learning at the edge

How do we drive deep learning models all the way down to 1 bit?

An engineer looks at an electronics board

Traditional deep learning models for convolutional networks use 32-bit floating-point operations processed on GPUs, which are generally cost prohibitive for everyday products.

Xnor reduces this precision down to a single bit and processes the data using binary operations like XNOR and pop-count, which are standard instructions on low-cost CPU platforms.
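As a rough sketch of the idea (not Xnor's production kernel), the C snippet below computes a binarized dot product: each vector of {-1, +1} values is packed into 64-bit words (bit = 1 encoding +1 is an assumed convention here, and the function names are hypothetical), and one XNOR followed by one popcount replaces dozens of floating-point multiply-accumulates.

    #include <stdint.h>
    #include <stdio.h>

    /* Minimal sketch of a binarized dot product. Each vector of {-1, +1}
       values is assumed pre-packed into 64-bit words, with bit = 1
       encoding +1 and bit = 0 encoding -1. */
    static int binary_dot(const uint64_t *w, const uint64_t *x, int n_words)
    {
        int matches = 0;
        for (int i = 0; i < n_words; i++) {
            /* XNOR sets a bit wherever the two encodings agree; popcount
               (a GCC/Clang builtin) counts the agreeing positions. */
            matches += __builtin_popcountll(~(w[i] ^ x[i]));
        }
        /* Each agreement contributes +1, each disagreement -1, so
           dot = matches - (total_bits - matches). */
        int total_bits = 64 * n_words;
        return 2 * matches - total_bits;
    }

    int main(void)
    {
        uint64_t w = 0xF0F0F0F0F0F0F0F0ULL;
        uint64_t x = 0xFF00FF00FF00FF00ULL;
        printf("dot = %d\n", binary_dot(&w, &x, 1)); /* prints dot = 0 */
        return 0;
    }

On a 64-bit CPU, a single XNOR plus popcount covers 64 multiply-accumulates at once, which is where the speed and power savings come from.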

To maintain high accuracy while binarizing the parameters, we train the models with proprietary learning and optimization techniques to deliver state-of-the-art performance while utilizing the least amount of CPU and memory resources. This allows high-performance machine learning models to run well in any environment, from the most resource-constrained, low-cost, battery-operated MCUs to high-end GPUs and servers.
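The proprietary training techniques are not public, but the published XNOR-Net paper (Rastegari et al., 2016; see the papers linked below) describes the core binarization step: a real-valued weight filter W is approximated as alpha * B, where B = sign(W) and the scaling factor alpha is the mean of |W|. A hypothetical standalone sketch in C:

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Sketch of XNOR-Net-style weight binarization (Rastegari et al., 2016):
       approximate W by alpha * B with B = sign(W) and alpha = mean(|W|).
       This is an illustrative standalone function, not Xnor's code. */
    static float binarize(const float *w, int8_t *b, int n)
    {
        float alpha = 0.0f;
        for (int i = 0; i < n; i++) {
            b[i] = (w[i] >= 0.0f) ? 1 : -1;  /* B = sign(W) */
            alpha += fabsf(w[i]);
        }
        return alpha / (float)n;             /* optimal scale: mean of |W| */
    }

    int main(void)
    {
        float w[4] = { 0.5f, -0.25f, 0.75f, -1.0f };
        int8_t b[4];
        float alpha = binarize(w, b, 4);
        printf("alpha = %.3f, B = [%d %d %d %d]\n",
               alpha, b[0], b[1], b[2], b[3]); /* alpha = 0.625 */
        return 0;
    }

In the published approach, the real-valued weights are kept and updated during training as usual; the binarized copies are what the forward pass, and ultimately the deployed model, actually uses.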

Read Papers

YOLO

You Only Look Once

Xnor's founding team developed YOLO, a leading open source object detection model used in real-world applications. We use a proprietary, high-performance, binarized version of YOLO in our models for enterprise customers.

We build with hardware in mind

We build state-of-the-art models designed to run smartly and efficiently in resource-constrained environments. Our team of deep learning researchers works side by side with our core engineers and hardware engineers to constantly iterate and optimize, ensuring models reach stunning levels of efficiency across hardware platforms ranging from simple ARM-based microcontrollers to the most powerful x86 server CPUs to application-specific FPGAs and neural net accelerators.

An illustration depicting machine learning algorithms being optimized for computer architecture

AI-enable your product with Xnor

For enterprise customers

Xnor delivers custom AI solutions tailored for your needs

  • We can create and train new binary models using your existing training datasets
  • We can start with your ML models and convert them into highly efficient binary neural network models
  • If desired, we can annotate your raw images and videos to create training datasets
  • Our custom models run on our lightweight and highly optimized proprietary inference engine

How can we build AI for you?

Get In Touch
For developers

Our developer platform (available winter 2018) allows everyone, even non-AI experts, to train and use models automatically and integrate AI into their products.

Interested in early access?

Sign Up