Recently I got my hands on a production sample of a new development board by Seeed Studio, the ReComputer ODYSSEY - X86J4105. It is a mini PC built around the Intel® Celeron® J4105, a quad-core CPU running at 1.5-2.5GHz, and it also has an ATSAMD21 ARM® Cortex®-M0+ coprocessor for easier development of real-time applications. The board itself is unlike your common SBC (single-board computer) and bears more resemblance to the LattePanda series, both in size and system specifications. Here is the full list of specs for the board:
- Intel® Celeron® J4105, Quad-Core 1.5-2.5GHz
- Dual-Band 2.4GHz/5GHz WiFi / Bluetooth 5.0
- Intel® UHD Graphics 600
- Dual Gigabit Ethernet
- Integrated Arduino Coprocessor ATSAMD21 ARM® Cortex®-M0+
- Raspberry Pi 40-Pin Compatible
- 2 x M.2 PCIe (B Key and M Key)
- Supports Windows 10 & Linux
- Compatible with Grove Ecosystem
My channel and blog are mostly focused on computer vision and machine learning inference on the edge, so I decided to run a series of simple benchmarks to see how the board performs on common computer vision tasks.
For the tests I am using aXeleRate, a Keras-based framework for AI on the edge. It allows us to quickly train the models and run inference 10 times, outputting the average time for a single inference. As usual, the first result is discarded from the average time calculation, since it includes one-time warm-up overhead.
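The timing logic can be sketched roughly like this (a minimal illustration, not aXeleRate's actual code; `run_inference` is a stand-in for whatever callable executes the model):

```python
import time

def average_inference_time(run_inference, n_runs=10):
    """Time n_runs calls of run_inference(); discard the first
    (warm-up) result and return the mean of the rest, in seconds."""
    timings = []
    for _ in range(n_runs):
        start = time.perf_counter()
        run_inference()
        timings.append(time.perf_counter() - start)
    # The first run includes one-time setup cost, so drop it
    return sum(timings[1:]) / len(timings[1:])

# Dummy workload standing in for a model forward pass
avg = average_inference_time(lambda: sum(i * i for i in range(10000)))
print(f"average inference time: {avg * 1000:.3f} ms")
```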
Here is the table of results. As a little bonus, I decided to throw in Mozilla's DeepSpeech speech recognition framework into the test results (WIP):
All the above results are from tests with FLOAT32 Keras models. We can also try real-time inference with .tflite models using the example scripts in the example/scripts/raspberry_pi folder. Here is the result video - keep in mind that the models are not fully trained, but that doesn't influence the inference speed, since it depends only on the number of parameters and not on their values.
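To see why inference speed depends only on the model's shape, note that a forward pass performs a fixed number of multiply-accumulate operations (MACs) regardless of what the weight values are. A toy calculation for a stack of fully-connected layers (my own illustration, not code from aXeleRate):

```python
def dense_macs(layer_sizes):
    """Multiply-accumulate operations for a stack of dense layers,
    e.g. [784, 128, 10] -> 784*128 + 128*10 MACs per forward pass.
    The count depends only on layer shapes, never on weight values."""
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

# An untrained network and a fully trained one with the same
# architecture cost exactly the same number of operations to run:
print(dense_macs([784, 128, 10]))  # 101632 MACs either way
```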
As we can see, the performance of the ODYSSEY - X86J4105 is comparable to the Raspberry Pi 4. This is not surprising, since the X86J4105 is not a dedicated board for machine learning inference like the Nvidia Jetson Nano or Google Coral board, and inference is done on the CPU. On the other hand, if your project is not focused on machine learning and you're using the board as a NAS, IoT gateway or media center, it will still deliver good machine learning performance out of the box. And unlike with the Jetson Nano or Coral board, you do not need to convert or optimize the models for a hardware accelerator.
The application scenarios I can envision are:
- AI-enabled NAS - you could set up a script to sort the pictures backed up from your phone/camera into categories.
- IoT gateway that runs inference on sensor data in real time (sensor data analysis usually uses time-series prediction, so you might need LSTM cells - in this case the Google Coral or Jetson Nano don't have much of an edge over the ODYSSEY - X86J4105).
- Server for a website that uses a simple IoT/machine learning inference application. Here, as in the examples above, the X86J4105's fast dual Ethernet and rich connectivity options come in especially handy.
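The first scenario can be sketched like this - a hypothetical script that moves images into per-category folders based on a classifier's prediction. The `classify` function here is a self-contained stand-in; in a real setup you would replace it with a call to your trained model:

```python
from pathlib import Path
import shutil

def classify(image_path):
    """Stand-in for a real image classifier; fakes a label from the
    filename so this sketch runs without a model."""
    return "cats" if "cat" in image_path.name else "other"

def sort_pictures(inbox, sorted_root):
    """Move every .jpg in `inbox` into a subfolder of `sorted_root`
    named after its predicted category."""
    sorted_root = Path(sorted_root)
    for image in Path(inbox).glob("*.jpg"):
        category_dir = sorted_root / classify(image)
        category_dir.mkdir(parents=True, exist_ok=True)
        shutil.move(str(image), str(category_dir / image.name))
```

On a NAS, this could run on a schedule (e.g. via cron) against the folder your phone syncs to.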
You can easily reproduce the results by installing Miniconda (a cross-platform Python package manager), following the steps here. After that, create a new environment with Python 3.7:
```
conda create -n tf python=3.7
```
Switch to the new environment:
```
conda activate tf
```
And install the latest development version of aXeleRate with:
```
pip install git+https://github.com/AIWintermuteAI/aXeleRate
```
Run python tests_training_and_inference_time.py -t classifier to train and test the classifier model, and python tests_training_and_inference_time.py -t detector to train and test the detector model. Change the architecture in the tests_training_and_inference_time.py file to try different architectures (Full Yolo, Tiny Yolo, MobileNet, SqueezeNet, VGG16, ResNet50, and Inception3). The script will output the average inference time after all the samples in the dataset are processed.
For real-time inference, download the models and labels here and put them in the corresponding folders. To run these, you will need to install the Python TensorFlow Lite interpreter - follow the official TensorFlow guide on how to do that.
The hardware for this article was kindly provided by Seeed Studio. Check out the ReComputer series and other hardware for makers at the Seeed Studio store!
Stay tuned for more videos and articles!