There is no doubt that a fire is easier to extinguish at an early stage, before it has had a chance to spread; the key to eliminating fires is therefore a quick response from the emergency services. One of the most important improvements in fire safety is the development of methods for early fire detection and automatic monitoring of fire-hazardous areas.
Using the NXP HoverGames drone development kit and the QGroundControl software, I propose to build an autonomous drone capable of monitoring a defined area to detect the beginning of a fire.
The drone patrols a given area autonomously around the clock, following a mission pre-defined in QGroundControl. It uses an on-board camera and machine learning algorithms to detect fire hazards. When the drone detects a possible fire, the GPS coordinates are sent to the dispatch service desk.
The drone also offers payload flexibility: a 3D-printed modular payload release system allows various devices to be attached to it. A search and rescue crew might attach a gas measurement device to monitor air quality before entering the fire area, drop a map with instructions to help trapped people get out of the fire zone, or even deliver first-aid kits to victims.
The flight controller unit cannot provide enough computational power to run machine learning algorithms, so it cannot perform real-time computer vision in the air on its own. For this reason, we rely on a companion computer architecture.
Companion computers interface and communicate with the flight controller using the MAVLink protocol. A companion computer therefore receives all the MAVLink data produced by the autopilot (including GPS data) and can use it to make intelligent decisions during flight. There are two major parts to a companion computer setup: hardware and software.
The companion computer hardware refers to the specific computer hardware being used, typically a small ARM-based single-board computer. For this project, I have used an NVIDIA Jetson Nano board.
The companion computer software refers to the programs and tools that run on the companion computer. They take in MAVLink telemetry from the flight controller and can route and process the telemetry data. For this project, I have used the DroneKit suite.
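To illustrate how the companion computer consumes autopilot telemetry, here is a minimal sketch. The `decode_global_position` helper and the connection string are assumptions made for this example; the scaling factors come from the MAVLink GLOBAL_POSITION_INT message definition, which encodes latitude/longitude as integers scaled by 1e7 and altitude in millimetres.

```python
# Sketch: decoding raw MAVLink GLOBAL_POSITION_INT fields on the companion
# computer. Scaling factors (1e7 for lat/lon, 1e3 for altitude) follow the
# MAVLink message definition.

def decode_global_position(lat_int, lon_int, alt_mm):
    """Convert raw GLOBAL_POSITION_INT fields to degrees and metres."""
    return lat_int / 1e7, lon_int / 1e7, alt_mm / 1000.0

if __name__ == "__main__":
    # With DroneKit, the equivalent high-level access would be (requires a
    # live vehicle, so it is only shown as a comment here):
    #   from dronekit import connect
    #   vehicle = connect("/dev/ttyUSB0", baud=57600, wait_ready=True)
    #   print(vehicle.location.global_frame)
    lat, lon, alt = decode_global_position(473977419, 85455938, 48800)
    print(lat, lon, alt)  # 47.3977419 8.5455938 48.8
```

DroneKit exposes already-decoded attributes, but understanding the raw scaling helps when reading MAVLink messages directly.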
The NVIDIA Jetson Nano is a developer kit consisting of a SoM (System on Module) and a reference carrier board. It is primarily aimed at embedded systems that need high processing power for machine learning, machine vision, and video processing applications.
The Jetbot expansion board is a valuable add-on for Jetson Nano boards. It takes 3x 18650 batteries, with up to 12.6 V output, and provides protection against over-charge, over-discharge, over-current, and short circuit. An onboard APW7313 voltage regulator provides a stable 5 V supply for the Jetson Nano, and a 0.91-inch 128×32 pixel OLED can display the robot's IP address, RAM usage, battery life, etc.
The Jetson Nano with the Jetbot expansion board is attached to the drone frame, with the camera pointing down.
For this project, we are running the Jetson Nano Developer Kit in a "headless" configuration. That means you do not hook up a monitor directly to the Jetson Nano Developer Kit. This method conserves memory resources on the Jetson Nano and has the added benefit of eliminating the requirement for extra hardware, i.e. a monitor, keyboard, and mouse.
In addition, we will further simplify the configuration by using "USB Device Mode". In this mode, the Jetson Nano Developer Kit connects directly to a computer through a USB cable. This eliminates the need for a network connection on the Jetson Nano, as well as the need to determine its IP address on the network: it is always 192.168.55.1:8888 in this mode.
- Download the NVIDIA DLI Jetson Nano SD Card Image
- Download, install, and launch Etcher
- Use Etcher to write the Jetson Nano Developer Kit SD Card Image to your microSD card.
- Connect the Raspberry Pi v2 Camera to the MIPI CSI port
- Reboot the Jetson Nano
- Open the following address: 192.168.55.1:8888. The JupyterLab server running on the Jetson Nano will open with a login prompt the first time.
- Enter the password:
- Launch a terminal session
- Install required Python packages :
pip3 install tensorflow
pip3 install tflearn
pip3 install opencv-python
For this project, I have used a Convolutional Neural Network (CNN). A CNN is a type of neural network model that extracts higher-level representations of the image content. Unlike classical image recognition, where you define the image features yourself, a CNN takes the image's raw pixel data, trains the model, and then extracts the features automatically for better classification. The custom dataset used for training and evaluation can be found on Durham Collections (together with the trained network models). In addition, standard datasets such as furg-fire-dataset were also used for training and evaluation. You can download the Python scripts and the weights file needed to run the CNN with the following commands.
git clone https://github.com/amri1978/hovergames
wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1KyIWAka3K31wE82R3jgIkKQE5FWWQhVw' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1KyIWAka3K31wE82R3jgIkKQE5FWWQhVw" -O 'fire.data-00000-of-00001' && rm -rf /tmp/cookies.txt
To test the fire detection CNN without all the drone-related parts, you can use the fire.py script (usage: fire.py input_video_file -s). Use the '-s' switch to save the output file. The output video indicates when a fire is detected: the frame border is red with the word FIRE in the top left; otherwise the frame is green and the word CLEAR is shown.
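The red/green frame annotation described above can be sketched with NumPy alone; this is an illustration, not the actual fire.py code, and the 10-pixel border width is an assumption (label text would typically be drawn with cv2.putText on top).

```python
import numpy as np

def annotate(frame, fire_detected, border=10):
    """Paint a coloured border on a BGR frame: red for FIRE, green for CLEAR.

    A sketch of the overlay described above; the border width is arbitrary
    and the FIRE/CLEAR label itself is omitted for brevity.
    """
    colour = (0, 0, 255) if fire_detected else (0, 255, 0)  # BGR order
    out = frame.copy()
    out[:border, :] = colour    # top edge
    out[-border:, :] = colour   # bottom edge
    out[:, :border] = colour    # left edge
    out[:, -border:] = colour   # right edge
    return out

frame = np.zeros((480, 640, 3), dtype=np.uint8)
fire_frame = annotate(frame, True)
print(fire_frame[0, 0].tolist())  # [0, 0, 255] -> red in BGR
```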
If you are not familiar with CNNs, stride, and max pooling, you could read this excellent tutorial:
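To give a quick intuition of what max pooling and stride do, here is a small self-contained NumPy example (the 4×4 input values are arbitrary): a 2×2 max pool with stride 2 keeps the maximum of each non-overlapping 2×2 block, halving the spatial resolution while preserving the strongest activations.

```python
import numpy as np

def max_pool_2x2(x):
    """2x2 max pooling with stride 2 on a 2-D array (even sizes assumed)."""
    h, w = x.shape
    # Reshape into non-overlapping 2x2 blocks, then take the max of each block.
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

x = np.array([[1, 3, 2, 4],
              [5, 6, 1, 0],
              [7, 2, 9, 8],
              [0, 1, 3, 2]])
print(max_pool_2x2(x))
# [[6 4]
#  [7 9]]
```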
To use the fire detection CNN with the HoverGames drone, connect the Jetson Nano to the FMU board with a USB cable.
Set up a WiFi USB module on the companion computer according to https://dev.px4.io/v1.9.0/en/qgc/video_streaming_wifi_broadcast.html. Then:
- In QGroundControl, define a mission to scan the area to be monitored by the drone
- Download the mission to the drone
- Launch dronepatrolHeadless.py (you can add it to a startup script so it is launched when the Jetson Nano is powered on)
The script is commented. The process is to :
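The detection-to-alert logic inside the patrol script can be sketched as below. This is a hedged illustration rather than the actual dronepatrolHeadless.py: the FireAlerter class and the consecutive-frame threshold are assumptions made for this example, to show how spurious single-frame detections can be filtered before GPS coordinates are sent to dispatch.

```python
class FireAlerter:
    """Debounce per-frame CNN verdicts before raising a fire alert.

    Hypothetical helper: requires `needed` consecutive positive frames
    before reporting, so one noisy frame does not trigger dispatch.
    """

    def __init__(self, needed=5):
        self.needed = needed
        self.streak = 0  # current run of consecutive fire frames

    def update(self, fire_detected):
        """Feed one frame's CNN verdict; return True when an alert should be sent."""
        self.streak = self.streak + 1 if fire_detected else 0
        return self.streak >= self.needed

alerter = FireAlerter(needed=3)
verdicts = [True, True, False, True, True, True]
alerts = [alerter.update(v) for v in verdicts]
print(alerts)  # [False, False, False, False, False, True]
```

When `update` returns True, the script would read the current GPS position from the autopilot telemetry and forward it to the dispatch desk.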
There is another version of the script (dronepatrol.py) for the desktop version of the Jetson OS. I have used this version to produce the demo videos below:
The payload delivery system is based on a 3D-printed model and a small 9 g servo (Hextronix HXT900 or equivalent). The 3D model consists of 4 parts:
- A bottom part to which the servo is attached
- A piston (or cylinder)
- A ring that holds the payload and is blocked by the piston
- A top part
The servo arm is attached to the piston: when the servo rotates counter-clockwise, it pulls the piston back, freeing the ring to drop and releasing the payload.
The servo can be connected to auxiliary channel 5 or 6 and can easily be actuated using the AUX passthrough channel settings.
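As an illustration of the release command, here is a hedged sketch mapping a release/hold state to a servo PWM pulse width. The 1000/2000 µs endpoints are typical hobby-servo values and an assumption here, as is the `release_pwm` helper name; with AUX passthrough the transmitter channel value is simply forwarded to the AUX output driving the servo.

```python
# Typical hobby-servo PWM endpoints (assumed values; tune for your servo).
PWM_HOLD = 1000     # microseconds -> servo arm keeps the piston in place
PWM_RELEASE = 2000  # microseconds -> servo rotates, pulling the piston back

def release_pwm(release):
    """Return the PWM pulse width (us) for the payload servo."""
    return PWM_RELEASE if release else PWM_HOLD

# With pymavlink, a direct servo command would look like this (shown as a
# comment only, since it needs a live MAVLink connection):
#   from pymavlink import mavutil
#   master = mavutil.mavlink_connection("/dev/ttyUSB0", baud=57600)
#   master.mav.command_long_send(
#       master.target_system, master.target_component,
#       mavutil.mavlink.MAV_CMD_DO_SET_SERVO, 0,
#       5,                   # servo/AUX output number
#       release_pwm(True),   # pulse width in microseconds
#       0, 0, 0, 0, 0)
print(release_pwm(True))  # 2000
```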