This is the first piece in our series on FPGA use cases for edge computing. This article outlines the current state of drones, examines two areas in which drones must improve to achieve their promised potential, and explains how FPGAs fit into the development cycle.
Toward autonomous drones
Currently, drones cannot perform complex maneuvers or operations without operator intervention and supervision. In most instances, drones still require an individual to pre-program their flight paths in order to operate. Drone swarms used for entertainment at events are a good example of the current state of affairs. These systems are programmed in advance, and are therefore limited in how they can be deployed.
Pre-programming makes it impossible for drones to overcome issues independently if communication is lost or sensors fail. Advancements are still needed to enable autonomous object avoidance and path mapping, among a host of other capabilities. However, as the technology improves, humans will play less of a role in actual flight operations.
Future use cases for drones cover a wide array of industries, from agriculture and delivery to mapping and search and rescue. One thing all these use cases have in common is that to reach maximum commercial potential, they need autonomous drones (AD) that can operate independently and are capable of self-learning. To achieve this, the drone must be able to process large amounts of data, including visual feature extraction and description, and must have the capacity to act on and build upon the data it gathers. For this to run on an edge device, high-speed computation and power efficiency are critical.
Studies have shown that FPGAs are a natural candidate for this, despite being notoriously difficult to program. Research has shown that FPGAs can achieve computational performance of between 3.7x and 16.4x over CPUs, and between 2x and 4x over GPUs.
Add power efficiency, where the FPGA implementation held a 98% advantage over the CPU and a 90% advantage over the GPU, and the promise FPGAs hold is clear.
How FPGAs can enable autonomous drones
Common future use cases of an AD require the drone to be capable of self-navigation. In dynamic urban environments, there is an ever-increasing requirement for a drone to re-plan and adjust its route in real time.
To date, path planning algorithms have primarily been software-based implementations, which usually do not take into account some of the real-time constraints required for autonomous operation.
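To make the real-time constraint concrete, here is a minimal sketch (not from the article; the grid model and function names are hypothetical) of re-planning on an occupancy grid: the drone plans a path with A*, a sensor reports a new obstacle, and the planner runs again against the updated grid.

```python
import heapq

def astar(grid, start, goal):
    """A* search on a 4-connected occupancy grid (0 = free, 1 = blocked).
    Returns a list of cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan-distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_set = [(h(start), 0, start, None)]  # (f-cost, g-cost, cell, parent)
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:          # already expanded with a better cost
            continue
        came_from[cur] = parent
        if cur == goal:               # reconstruct the path via parent links
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), cur))
    return None

# Re-planning: a newly sensed obstacle invalidates the current path,
# so the planner must run again before the drone reaches it.
grid = [[0] * 5 for _ in range(5)]
path = astar(grid, (0, 0), (4, 4))
grid[2][2] = 1                        # sensor reports a new obstacle mid-flight
if path and (2, 2) in path:
    path = astar(grid, (0, 0), (4, 4))
```

The point of the sketch is the loop structure, not the algorithm choice: every sensor update can trigger a fresh search, which is why hardware that finishes each search within the control cycle matters.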
Recent developments in the autonomous scene have shifted toward a process called simultaneous localization and mapping (SLAM), due to its advantages in location awareness.
Simultaneous localization and mapping (SLAM)
Using SLAM, drones can build their own internal map for navigation by aligning the sensor data they collect with data they already have. This is especially essential in GPS-denied environments and in areas that require high-precision maneuvers.
The important point about SLAM is that it has to run in real time. Each incoming sensor reading (e.g. a camera frame) must be processed immediately, so that no backlog forms and the data can be used by the system right away.
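The alignment step at the heart of SLAM can be illustrated with a deliberately tiny toy (not from the article; the landmark coordinates and function name are made up): given a stored map of landmark positions and a new scan taken after the drone has drifted, brute-force search over small shifts finds the offset that best matches the scan to the map.

```python
def best_offset(map_points, scan_points, search=2):
    """Toy scan matching: try every (dx, dy) shift within a small window
    and return the one that aligns the most scan points with the map.
    Real SLAM front-ends do this far more cleverly, but the idea is the same."""
    map_set = set(map_points)
    best, best_score = (0, 0), -1
    for dx in range(-search, search + 1):
        for dy in range(-search, search + 1):
            score = sum((x + dx, y + dy) in map_set for x, y in scan_points)
            if score > best_score:
                best, best_score = (dx, dy), score
    return best

# Stored landmark map, and a new scan taken after the drone drifted by (1, 0):
landmarks = [(2, 3), (5, 1), (7, 4)]
scan = [(x - 1, y) for x, y in landmarks]        # same landmarks, shifted view
dx, dy = best_offset(landmarks, scan)
corrected = [(x + dx, y + dy) for x, y in scan]  # scan re-aligned to the map
```

Even this toy shows why the workload suits parallel hardware: every candidate shift is scored independently, so an FPGA can evaluate them concurrently instead of looping over them one at a time.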
Experimentally, the desired efficiency has only been achieved with a complete implementation of the proposed architecture on an FPGA, which achieved the 10 Hz update frequency of a typical autopilot system. Despite the need to process large amounts of data in a highly iterative process, the ability to re-plan the unmanned aerial vehicle (UAV) path in real time improved the feasibility of an autonomous UAV.
For autonomous navigation of drones, it is essential to detect potential obstacles not only in the direction of flight but in the entire local environment. While there exist systems that have vision based obstacle detection, most of them are limited to a single direction of perception. Extending these systems to a multi-directional sensing approach would involve a multi-camera system and even more demand for computational power and energy.
Extended Kalman filter
An extended Kalman filter (EKF) is a commonly used algorithm for obstacle avoidance. It allows for non-linear state prediction, which suits most real-world measurements involving topology, location or obstacles.
The EKF uses all available measurements from sensors to produce an estimated state (e.g. location) of the system at the current time. With sufficient data collected, the EKF can predict the state of the system while simultaneously incorporating updated measurements from different sensors. Both the predicted and the measured states feed back into the filter to 'train' the system; a better-'trained' system will have a better avoidance rate.
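The predict/update cycle described above can be sketched in a few lines. This is a minimal, hand-rolled example (not the article's implementation; the motion model, beacon, and noise values are assumptions): the state is a 2-D position, the motion model applies a known displacement, and the measurement is the range to a beacon at the origin, whose square root makes the problem non-linear and hence calls for the EKF's Jacobian linearization.

```python
import math

def ekf_step(state, P, control, z, q=0.01, r=0.1):
    """One EKF predict/update cycle on a 2-D position state.
    Motion model: position += control (known displacement).
    Measurement: range to a beacon at the origin, z = sqrt(x^2 + y^2)."""
    # --- predict ---
    x, y = state[0] + control[0], state[1] + control[1]
    # The motion Jacobian F is the identity here, so the covariance
    # simply accumulates process noise q on the diagonal.
    P = [[P[0][0] + q, P[0][1]], [P[1][0], P[1][1] + q]]
    # --- update ---
    rng = math.hypot(x, y)          # predicted measurement h(x)
    H = [x / rng, y / rng]          # Jacobian of h, a 1x2 row vector
    # Innovation covariance S = H P H^T + r (a scalar here)
    S = (H[0] * (H[0] * P[0][0] + H[1] * P[1][0])
         + H[1] * (H[0] * P[0][1] + H[1] * P[1][1]) + r)
    # Kalman gain K = P H^T / S (2x1)
    K = [(P[0][0] * H[0] + P[0][1] * H[1]) / S,
         (P[1][0] * H[0] + P[1][1] * H[1]) / S]
    innov = z - rng                 # measurement residual
    x, y = x + K[0] * innov, y + K[1] * innov
    # Covariance update P = (I - K H) P
    IKH = [[1 - K[0] * H[0], -K[0] * H[1]],
           [-K[1] * H[0], 1 - K[1] * H[1]]]
    P = [[IKH[0][0] * P[0][0] + IKH[0][1] * P[1][0],
          IKH[0][0] * P[0][1] + IKH[0][1] * P[1][1]],
         [IKH[1][0] * P[0][0] + IKH[1][1] * P[1][0],
          IKH[1][0] * P[0][1] + IKH[1][1] * P[1][1]]]
    return (x, y), P

# The drone believes it is at (3, 4) (range 5 to the beacon); a range
# reading of 5.2 nudges the estimate outward and shrinks the covariance.
state, P = (3.0, 4.0), [[0.5, 0.0], [0.0, 0.5]]
state, P = ekf_step(state, P, control=(0.0, 0.0), z=5.2)
```

Note that even this two-state filter is dominated by small matrix products (H P H^T, P H^T, (I - K H) P); a realistic drone state with attitude, velocity and sensor biases multiplies those matrix dimensions, which is exactly the matrix-heavy workload the next paragraph refers to.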
The math of the EKF deals heavily with matrices, for which parallel processing has a significant advantage over sequential processing. As a result, FPGAs are postulated to be the ideal option among the hardware candidates, with the best results for weight payload, computational power and energy consumption.
Bringing it all together
The features discussed above are just two key components of the future of drones, with more to come as the use cases evolve. Being able to efficiently accommodate, process and act on this deluge of data on an embedded system will be key to unlocking that future.
FPGAs have exceedingly strong merits in both scenarios we have touched on, and although they are difficult to program, housing the path planning, obstacle avoidance and decision processing in a single energy-efficient FPGA could be a key piece of infrastructure for the future.