The number of unmanned systems in the world is steadily increasing. Today, this diversity includes “self-managed” planes, vehicles, drones, seagoing ships, and spacecraft. Each device is designed to solve unique problems, hence the variety of control and monitoring interfaces (Howard, 2013). Machine functionality that will never be used should not be built; in other words, every interface decision must be justified by the requirements. For example, spacecraft, among the most complex unmanned systems, should not rely on real-time control (Howard, 2013).

Uncrewed aerial vehicles stand apart for their high technical complexity, which stems from the variety of incoming signals (Maza, Caballero, Molina, Peña, & Ollero, 2009). To simplify operation of such systems, a dual-channel scheme is most often combined with a range of voice, head-movement, and sound recognition sensors. Furthermore, the interfaces of modern aerial drones are being enhanced with the latest artificial intelligence and machine learning technologies to achieve greater machine autonomy (Togootogtokh, Micheloni, Foresti, & Martinel, 2020). Position control buttons play a vital role in the interfaces of aerial and underwater drones (SiriumUAVmedia, 2013). For ground vehicles, confined to the ground plane, the interface requires noticeably less functionality.

Despite significant improvements, modern systems can hardly be called fully autonomous. Whether the device is an underwater bathyscaphe, a Mars rover, or a quadcopter, a control center exists in the form of a human or a computer program. Nearly every unmanned device relies on GPS or Wi-Fi signals to obtain location information. Nevertheless, the tendency is to simplify the human-machine interface while raising the degree of autonomy: most tasks either run without direct operator control or are hidden from the main screen.
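The GPS positioning mentioned above typically reaches the control software as standard NMEA 0183 text sentences emitted by the receiver. A minimal sketch of how an unmanned platform might decode one such sentence into a latitude and longitude follows; the function name is illustrative, and checksum validation is omitted for brevity.

```python
def parse_gga(sentence):
    """Parse a NMEA 0183 $GPGGA sentence into (lat, lon) in decimal
    degrees. Checksum verification is omitted in this sketch."""
    fields = sentence.split(",")
    if fields[0] != "$GPGGA":
        raise ValueError("not a GGA sentence")

    def dm_to_deg(dm, hemisphere, deg_digits):
        # NMEA encodes position as ddmm.mmmm (lat) or dddmm.mmmm (lon)
        degrees = float(dm[:deg_digits])
        minutes = float(dm[deg_digits:])
        value = degrees + minutes / 60.0
        # South and West hemispheres are negative in decimal degrees
        return -value if hemisphere in ("S", "W") else value

    lat = dm_to_deg(fields[2], fields[3], 2)
    lon = dm_to_deg(fields[4], fields[5], 3)
    return lat, lon

# Classic example sentence from the NMEA documentation:
lat, lon = parse_gga("$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47")
```

The same decoding step applies whether the position feed drives a Mars rover simulation or a consumer quadcopter's return-to-home feature.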
Future technologies will further simplify the interaction between user and hardware. For example, DJI drones offer the ActiveTrack function for following an object in sight (DJI, 2017). ActiveTrack relies on neural networks, easing the operator's workload. In addition, the manufacturer is committed to maximizing device agility and improving the safety of control. For example, with DJI Goggles, unmanned devices are steered by head movement (DJI Goggles, 2020). The world is steadily moving toward artificial intelligence; therefore, future interfaces may work on principles similar to Neuralink's, providing complete immersion for the user.
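To make the idea of a follow-me feature concrete, the sketch below shows one plausible way such a function could turn a detected bounding box into flight commands: a proportional controller that steers the drone to keep the subject centered in the frame. This is not DJI's actual implementation; the detector (a neural network in ActiveTrack's case) is out of scope, and all names and gain values here are illustrative assumptions.

```python
def track_error(frame_w, frame_h, bbox):
    """Return the normalized (x, y) offset of a bounding box centre
    from the image centre, each component in [-1, 1]."""
    x, y, w, h = bbox
    cx, cy = x + w / 2.0, y + h / 2.0
    return (2.0 * cx / frame_w - 1.0, 2.0 * cy / frame_h - 1.0)

def control_command(frame_w, frame_h, bbox, k_yaw=30.0, k_pitch=15.0):
    """Proportional controller (illustrative gains): map the tracking
    error to a yaw rate and a pitch rate that re-centre the subject."""
    ex, ey = track_error(frame_w, frame_h, bbox)
    # Subject right of centre -> positive yaw; subject below centre
    # (ey > 0 in image coordinates) -> pitch the camera/drone down.
    return k_yaw * ex, -k_pitch * ey

# Subject has drifted into the right half of a 1280x720 frame,
# so the controller commands a positive (rightward) yaw rate:
yaw_rate, pitch_rate = control_command(1280, 720, (800, 300, 160, 120))
```

Running the detector and this control loop on every frame is what lets the operator simply select a subject and let the drone do the rest.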
References
DJI. (2017). Film like a pro: DJI drone “ActiveTrack.” Web.
DJI Goggles. (2020). Web.
Howard, C. (2013). Flexible and integrated unmanned command & control. Military Aerospace, 24(11), 1-10.
Maza, I., Caballero, F., Molina, R., Peña, N., & Ollero, A. (2009). Multimodal interface technologies for UAV ground control stations. Web.
SiriumUAVmedia. (2013). LARS UAV ground control station. Web.
Togootogtokh, E., Micheloni, C., Foresti, G. L., & Martinel, N. (2020). An efficient UAV-based artificial intelligence framework for real-time visual tasks. Web.