Any deep neural network can benefit from binarization and Xnor's other optimizations. Models already developed include object detection, recognition, tracking, segmentation, face recognition, scene classification, pose estimation, image enhancement, emotion recognition, and voice command.
Binary convolutional neural networks
Xnorized models deliver state-of-the-art performance
Xnor optimizes AI performance from battery-powered devices to GPUs
Deep learning at the edge
How do we drive deep learning models all the way down to 1 bit?
Traditional convolutional neural network models use 32-bit floating-point operations processed on GPUs, which are generally cost-prohibitive for everyday products.
Xnor reduces precision down to a single bit and processes models using binary operations such as XNOR and popcount, which are standard instructions on low-cost CPU platforms.
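To see why XNOR and popcount can stand in for multiply-accumulate, consider a sketch (not Xnor's implementation) of a binary dot product: when values are constrained to +1/-1 and packed into bitmasks, the positions where two vectors agree can be counted with a single XOR and popcount, recovering the full dot product.

```python
def binarize(vec):
    # Pack a +/-1 vector into an integer bitmask: bit i is 1 if vec[i] > 0.
    bits = 0
    for i, v in enumerate(vec):
        if v > 0:
            bits |= 1 << i
    return bits

def binary_dot(a_bits, b_bits, n):
    # XOR marks the positions where the signs disagree; popcount sums them.
    # dot = agreements - disagreements = n - 2 * popcount(a XOR b)
    disagreements = bin(a_bits ^ b_bits).count("1")
    return n - 2 * disagreements

# The packed form gives the same result as the real-valued dot product:
a = [+1, -1, +1, +1]
b = [+1, +1, -1, +1]
print(binary_dot(binarize(a), binarize(b), len(a)))  # -> 0, same as sum(x*y)
```

Because the XOR and popcount each process an entire machine word at once, one pair of instructions replaces up to 64 floating-point multiply-adds, which is where the speed and energy savings on commodity CPUs come from.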
To maintain high accuracy while binarizing the parameters, we train the models with proprietary learning and optimization techniques to deliver state-of-the-art performance while utilizing the least amount of CPU and memory resources. This allows high-performance machine learning models to run well in any environment, from the most resource-constrained, low-cost battery-operated MCUs to high-end GPUs and servers.
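Xnor's training techniques are proprietary, but the binary-networks literature offers a standard baseline for training through a non-differentiable sign function: the straight-through estimator. The sketch below (an illustration, not Xnor's method) binarizes weights on the forward pass and passes gradients through where the real-valued weight is within the clipping range.

```python
import numpy as np

def binarize_forward(w):
    # Forward pass: replace real-valued weights with their signs (+1/-1).
    return np.sign(w)

def binarize_backward(w, grad_out):
    # Backward pass (straight-through estimator): treat sign() as identity,
    # passing the gradient through unchanged where |w| <= 1 and zeroing it
    # elsewhere so saturated weights stop moving.
    return grad_out * (np.abs(w) <= 1.0)
```

During training the real-valued weights are kept and updated with these gradients; only the binarized copies are used for inference, so the deployed model needs just one bit per weight.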
You Only Look Once
Xnor's founding team developed YOLO, a leading open-source object detection model used in real-world applications. We use a proprietary, high-performance, binarized version of YOLO in our models for enterprise customers.
We build with hardware in mind
We build state-of-the-art models designed to run smartly and efficiently in resource-constrained environments. Our team of deep learning researchers works side-by-side with our core engineers and hardware engineers to constantly iterate and optimize, ensuring models reach stunning levels of efficiency across hardware platforms ranging from simple ARM-based microcontrollers to the most powerful x86 server CPUs to application-specific FPGAs and neural net accelerators.
AI-enable your product with Xnor
Xnor delivers custom AI solutions tailored for your needs
- We can create and train new binary models using your existing training datasets
- We can start with your ML models and convert them into highly efficient binary neural network models
- If desired, we can annotate your raw images and videos to create training datasets
- Our custom models run on our lightweight and highly optimized proprietary inference engine
How can we build AI for you?
Our developer platform (available winter 2018) allows everyone, even non-AI experts, to train models automatically and integrate AI into their products.
Interested in early access?