The illegal wildlife trade is worth tens of billions of dollars each year and harms legitimate businesses and tourism around the world; the illicit trade in wildlife and plants is estimated at $70-213 billion a year and erodes the natural resources of countries and the wealth of businesses worldwide. Moreover, human interactions with wildlife are a defining experience of human existence. Humans have competed with other species for habitat and resources, and have innovated and adapted to become the dominant ecological force on the planet, leading to human-wildlife conflict. This conflict has contributed to the extinction of numerous species, changes in ecosystem structure and function, and immeasurable loss of human life, crops, livestock, and property.
We thus built Wild Eye to help mitigate wildlife trafficking and human-wildlife conflict by employing the latest technologies in deep learning and the Internet of Things to detect these illegal activities at their onset and raise an alarm.
Sustainable Development Goal 15 of the 2030 Agenda for Sustainable Development is devoted to "protect, restore and promote sustainable use of terrestrial ecosystems, sustainably manage forests, combat desertification, and halt and reverse land degradation and halt biodiversity loss".
Conflicts between people and wildlife currently rank amongst the main threats to conservation in Africa. In Kenya, for instance, one of the real challenges to conservation is how to enhance and sustain co-existence between people and wild animals. The mitigation of this conflict is central to the conservation and restoration of many species, and debates over how and whether to coexist with other animals drive social, economic, and political conflict within and among human communities.
With our system deployed across wildlife protected zones, it will provide data on exactly where species are, what they are doing, and how large their populations are. These data can be used to build up a picture of whole communities of species, including how they are structured and how species interact over space and time.
The development of our networked camera traps, capable of analyzing images on the edge in near real-time, will provide a new tool in the fight against poaching. Our new software tool and statistical models will also make it much easier and faster to obtain high-quality information from the thousands of images that camera traps can quickly generate. This is improving our understanding of human impacts on wildlife and helping land managers make better decisions at both small and large scales.
The smart camera trap is an integration of two major technologies:
- Internet of Things (IoT)
- Deep Learning
Internet connectivity varies widely around the world, and it only worsens in wildlife protected zones. Current solutions such as GSM are expensive in terms of both power consumption and subscription costs. That is why we opted for a reliable, low-power, inexpensive solution with good coverage, and Sigfox connectivity was our answer.
While the applications of IoT and AI are independently interesting, their combined use cases hold even greater potential: the Internet of Things becomes smarter when artificial intelligence supplies the key ingredient of discovering meaningful insights in data. Machine learning at the edge improves things further, because we do not send captured images to the cloud for analysis and back. Instead, communication is one-way: all analysis is done on the device at the edge, and only meaningful inferences are sent to the Sigfox backend. This improves operational efficiency, avoids latency problems, and sidesteps the security risks of the public cloud, since edge computing keeps sensitive information in the local IT ecosystem.
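As a minimal sketch of the one-way communication above: Sigfox uplink messages are limited to 12 bytes, so an edge device cannot send images, only compact inference results. The helper below is hypothetical (the field layout and function name are our own, not part of the project's code) and shows one way to pack a detection into a few bytes for uplink.

```python
import struct

# Hypothetical payload format: Sigfox uplinks carry at most 12 bytes,
# so we encode only inference metadata, never the image itself.
def pack_inference(species_id: int, confidence: float, battery_pct: int) -> bytes:
    """Encode species class (1 byte), confidence scaled to 0-255 (1 byte),
    and battery percentage (1 byte) into a 3-byte payload."""
    conf_byte = max(0, min(255, int(confidence * 255)))
    return struct.pack("BBB", species_id, conf_byte, battery_pct)

payload = pack_inference(species_id=17, confidence=0.92, battery_pct=80)
assert len(payload) <= 12  # fits in a single Sigfox message
```

A real deployment would add fields such as a camera ID or timestamp, but the same principle applies: the edge model classifies the image, and only this compact summary leaves the device.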
The collected data is then displayed on a user-friendly dashboard where we keep track of all activities happening around our wildlife protected areas.
By integrating these technologies, we believe we will change how camera traps are perceived: today they are a tedious, labor-intensive affair, since they are usually manual and one needs to physically retrieve the SD card from each device and analyze the images one by one on a computer.
Location: Cameras should be positioned on suitable trees within a 250 m radius of a predetermined GPS point at the center of each 5 km² grid cell. When no trees are available, cameras can be placed on metal poles.
Naming: Camera locations should be given unique letter-number identifiers based on rows (letters) and columns (numbers): i.e., A01, A02, B01, B02.
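The row-letter / column-number naming convention above can be generated programmatically. The snippet below is an illustrative helper (the function name is our own) that enumerates identifiers for a grid of a given size.

```python
from string import ascii_uppercase

def grid_ids(rows: int, cols: int) -> list:
    """Return camera IDs like A01, A02, ..., B01 for a rows x cols grid:
    rows map to letters, columns to zero-padded numbers."""
    return [f"{ascii_uppercase[r]}{c + 1:02d}"
            for r in range(rows) for c in range(cols)]

# e.g. a 2 x 2 grid:
print(grid_ids(2, 2))  # ['A01', 'A02', 'B01', 'B02']
```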
Placement: Cameras should be positioned to view animals walking on a nearby game trail, but should not be placed deliberately at den sites or water holes, nor baited to attract carnivores or other rare species. Set cameras ~75-100 cm above ground level to prioritize capture of medium to large vertebrates, and point each camera in the direction that minimizes obstructions rather than with respect to compass direction.
Housing: Cameras are housed in steel cases to protect them from animals and the elements. Small silica gel packets should also be inserted inside the cameras themselves to absorb moisture from dew or rain. The casing is jungle green in color to camouflage with the surroundings.
To run the application:
1. Clone the repository
2. For the IoT project, install the necessary packages as detailed in the README.md file at the root of each folder.
3. For the AI project, we used the Snapshot Serengeti dataset (Season 7). We created a Data Science Virtual Machine on Microsoft Azure to download the data (~640 GB) and train the model, which was then exported.
4. To compile the model into a graph for use on the Neural Compute Stick (NCS), you will need a Linux-based machine.
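As a rough sketch of step 4, assuming the original Intel Movidius NCSDK toolchain: the `mvNCCompile` tool converts an exported TensorFlow checkpoint into a graph file the NCS can execute. The file and node names below are placeholders and depend on how the model was exported.

```shell
# Assumes the NCSDK is installed on a Linux machine.
# model.meta, the input node and output node names are placeholders
# for whatever the exported Serengeti model actually uses.
mvNCCompile model.meta -s 12 -in input -on output -o graph
```

The resulting `graph` file is then loaded onto the NCS by the edge application at runtime.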