It's been almost one year since I started tinkering with the NXP Rapid IoT Prototyping Kit, which I was given for participating in the "Revolutionize Your IoT Prototyping" contest, and which I nicknamed "rIoT" for short. In the spirit of rapid prototyping, until now I have developed mostly with the suggested online graphical tool, NXP Rapid IoT Studio (or with the Atmosphere IoT Platform), with which I wrote MIDI rIoT! and reproduced the Talking Weather Station.
I love the hardware of the Rapid IoT kit, which includes a number of sensors (buttons, touch controller, temperature, humidity, accelerometer, gyroscope, magnetometer, air quality, battery level), a few actuators (LEDs, LCD, buzzer), and both wired (USB, UART, I2C, SPI) and wireless (BLE/Zigbee/Thread and NFC) interfaces, all in a compact form factor and at a fairly reasonable price of about $50.
However, as a hardware guy I feel a bit far from the transistors, and limited by the functionality exposed by Rapid IoT Studio. For instance: control of the LCD graphics is limited to the predefined presets, DSP functions are limited to some sub-functions on the graphical datapath, and RTC functions are not exposed even though they are actually present in the SDK. It seems close to impossible to add drivers for new devices that are not supported by Atmosphere, and I could not even configure the BLE GATT service and characteristic UUIDs needed to implement a standard BLE-MIDI controller. I soon started to feel too constrained by the software limitations of such graphical development tools: I feel the need for deeper hardware control!
However, developing with MCUXpresso also almost inevitably requires buying the Hexiwear Docking Station, which, while being a good development tool, is a bit expensive ($39) compared to the cost of the bare kit ($50).
On top of that, when you start developing with the SDK, you immediately have to deal with the FreeRTOS abstraction layer, the NXP middleware, and the different implementations of the low-level drivers. In short: development becomes much less "rapid", as there is a real learning curve.
Moreover, when you start projecting yourself into moonshot territory (for some time I have toyed with the idea of porting ARM's Keyword Spotting on MCU demo, which runs on the FRDM-K64 board and uses the same MCU), you realize that the rIoT is not supported by the SDK Builder. That means no updates to the NXP SDK (including the ARM CMSIS library), and no support for toolchains and IDEs other than MCUXpresso (the NXP SDK usually supports IAR, Keil and GCC Make).
Furthermore, the LCD library provided is Segger's emWin, which comes as a precompiled binary. When I tried to include C++ sources from other projects, my builds failed, probably due to compiler settings that I was not able to figure out.
Moreover, I had recently read that Arduino's latest official boards are finally based on ARM MCUs, with ARM's Mbed development platform underneath the Arduino library. Even though some ARM boards were already unofficially available, I figured that this makes all Mbed-supported boards immediately usable (or sort of) with Arduino.
In particular, the Hexiwear bears a striking similarity to the rIoT: the two are meant to use the same docking station, are based on the same MCU, and share some identical or similar sensors.
So I started to poke around with Mbed... and figured out that the Hexiwear codebase can be used to program the Rapid IoT. I have dubbed this effort "rIoTwear mbed".
Unfortunately, building with the online IDE results in binary files that are not accepted by the rIoT's default bootloader, so I really needed to get my hands on a Hexiwear docking station to move forward.
But then I found out that, fortunately, Mbed supports a whole breed of other IDEs/toolchains. I finally had my first success exporting the blinky demo (actually an older version of it, based on the blocking wait_ms() rather than the non-blocking thread_sleep_for()) and finding that it worked right away with IAR Embedded Workbench on my Windows work PC.
Here is an unofficial version that works out of the box.
Potentially, this is the right thread implementation for the K64?
Unfortunately, my copy of IAR is tied to a network license. When I tried it at home, my VPN connection was not working for some reason, so I could not reach the license server, and I turned towards open source again.
In particular, I noticed the support for Visual Studio Code, which I had already tried and appreciated for some other Python projects. Also, since this is a spare-time project, I wanted to be able to develop on my personal Ubuntu 18.04 machine. I found that only two things needed to be modified, neither of which is mentioned in the tutorial.
First, the GDB server: arm-none-eabi-gdb now seems deprecated in favor of gdb-multiarch. To reflect this change, I had to modify the corresponding line of the launch.json file in the .vscode folder, as below.
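For reference, the change in my launch.json looked roughly like this (the exact field names depend on the exporter version, so treat this as a sketch rather than a drop-in file):

```json
{
    "name": "Debug",
    "type": "cppdbg",
    "request": "launch",
    // before: "miDebuggerPath": "arm-none-eabi-gdb",
    "miDebuggerPath": "gdb-multiarch"
}
```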
The other issue I had was the compiler not finding the file mbed_config.h.
cc1: fatal error: /filer/workspace_data/exports/d/d8e6c0dafc780e4e27535649b9338717/mbed-os-example-blinky/mbed_config.h: No such file or directory
In order to fix this, I had to modify the Makefile to provide the exact path to the file in ASM_FLAGS (presumably used when assembling), as below.
# ASM_FLAGS += /filer/workspace_data/exports/d/d8e6c0dafc780e4e27535649b9338717/mbed-os-example-blinky/mbed_config.h
ASM_FLAGS += /media/marco/DATA/programming/mbed/mbed-os-example-blinky_vscode_gcc_arm_hexiwear/mbed-os-example-blinky/mbed_config.h
And there goes the blinky!
A note on the same procedure on Windows 10: it is currently not working on my PC, with compilation failing, probably due to a conflict between the different MinGW versions that I have indirectly installed along with other open-source software on my machine.
I found some "cures" on a few forums, but I am afraid they would mess up my configuration, so I will let others debug this issue for the time being.
See my demo that shows how to blink the red, green and blue LEDs.
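In essence the demo boils down to something like the sketch below (the LED1–LED3 aliases are assumed to map to the rIoT's RGB LED via the Hexiwear target definition; verify against the Rapid IoT schematics before trusting them). This is board-only code, shown as an illustration:

```cpp
// Mbed OS 5 sketch: cycle the red, green and blue LEDs of the rIoT.
#include "mbed.h"

DigitalOut red(LED1);    // assumed pin mapping from the Hexiwear target
DigitalOut green(LED2);
DigitalOut blue(LED3);

int main() {
    while (true) {
        red = !red;
        wait_ms(500);    // older blocking API, as in the exported demo;
        green = !green;  // newer Mbed code would use thread_sleep_for(500)
        wait_ms(500);
        blue = !blue;
        wait_ms(500);
    }
}
```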
See the tutorial on how to use printf() statements:
LCD: LPM013M126A – JDI Color Memory LCD
I have therefore created my own example, mapping the correct pins for the rIoT.
SPI Flash: MT25QL128ABA
Mbed natively supports SPI flash block devices. However, the feature needs to be enabled in the compilation configuration, and based on the provided documentation this seems to be possible neither with the online compiler nor with IAR.
On top of that, the SPI flash device is shared at boot time between the K64 and the KW41, and a handshake protocol has been implemented so that the two MCUs do not conflict while accessing the peripheral. This protocol needs to be implemented if you plan to use the flash on the Rapid IoT.
I have attached to this project the code from the NXP SDK for the SPI bus sharing between the K64 and the KW41.
Air quality: AMS CCS811
I found a couple of implementations of the CCS811 driver, but neither of them seems to work very well with the Rapid IoT and Mbed 5. It might be a better idea to re-implement the driver starting from either the driver in the Rapid IoT SDK or the original SparkFun driver for Arduino.
Temperature/Humidity: AMS ENS210
I could not find an available mbed driver for this device, so I have attached the SDK driver from MCUXpresso, from which one can be implemented. Similarly, I found some Arduino libraries.
NFC is supported by mbed ( https://os.mbed.com/docs/mbed-os/v5.14/reference/nfc-technology.html ), but the specific chipset in the Rapid IoT is not. Some work is required to get it working.
There is a ready-made repository of CMSIS5 for mbed, even though its documentation mentions that "some work is needed".
The project implements "keyword spotting", i.e. recognizing when one of a few keywords has been pronounced, as a first step toward implementing a voice interface.
However, a GitHub repository is available that includes the original code, which can also run on the Freedom K64 board.
In theory this should run on the Rapid IoT too, even though the device lacks a microphone input. A Mic Click board could be used for that purpose.
One of the interesting developments around Mbed is that the newest Arduino Nano 33 boards support the Arduino API through an Mbed compatibility layer. This means the compatibility layer can (at least in theory) be applied to any Mbed-supported board.
Based on the description in the tutorial, it should be feasible to port the Arduino API to an Mbed project, which would give direct access to the Arduino library of peripheral drivers.
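As a sketch of what such a port amounts to, the familiar Arduino calls can be thin wrappers around Mbed driver objects. The shim below is my own hypothetical illustration (the real ArduinoCore-mbed layer is far more elaborate) and is board-only code:

```cpp
// Hypothetical Arduino-on-Mbed shim: pinMode()/digitalWrite() on top of
// Mbed's DigitalOut. Illustration only, not the actual ArduinoCore-mbed.
#include "mbed.h"
#include <map>

static std::map<PinName, DigitalOut *> outputs;

void pinMode(PinName pin, int mode) {
    // Only OUTPUT is sketched here; INPUT would create a DigitalIn instead.
    if (outputs.find(pin) == outputs.end())
        outputs[pin] = new DigitalOut(pin);
}

void digitalWrite(PinName pin, int value) {
    auto it = outputs.find(pin);
    if (it != outputs.end())
        *(it->second) = value;
}

int main() {
    pinMode(LED1, 1 /* OUTPUT */);
    while (true) {
        digitalWrite(LED1, 1);
        thread_sleep_for(500);
        digitalWrite(LED1, 0);
        thread_sleep_for(500);
    }
}
```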
One of the key points of Arduino is that it includes a bootloader, so it does not require a somewhat expensive programmer such as a J-Link or a CMSIS-DAP probe, which are available in the Hexiwear docking station.
There are examples of how to implement a bootloader with mbed.
Mbed already supports BLE through a dedicated class.
For this to be usable, the Host Controller Interface (called FSCI on NXP products), which on the Rapid IoT runs over a UART port, needs to be implemented.
Some documentation is available on the FSCI interface on the NXP community forums.
FSCI on KW40
Develop FSCI for KW36
Arduino has its own BLE API.
The Freedom board FRDM-KW41Z is supported by mbed.
Therefore, in principle, it should be possible to use the same process explained here to target the KW41 chip too. However, to get the wireless protocols working, the connectivity stack would have to be rewritten within Mbed. That does not sound like an easy task, and for the purposes of the Rapid IoT kit it probably adds little value. What I personally find more interesting is the possibility of using the FRDM-KW41Z board as a BLE-enabled Mbed/Arduino board.