1 Introduction

Creative industries frequently require precise control of cameras. Many devices, such as cranes, rails or portable frames, are used to obtain interesting shots. These devices often have drawbacks: they are complex to install, handle and remove; they have a limited movement space; and they are invasive on the scene being recorded. Devices like the Steadicam avoid some of these drawbacks, but they cannot be used in every situation and they cannot leave the ground.

On the other hand, unmanned aerial vehicles (UAVs) or drones, also known as Remotely Piloted Aircraft Systems (RPAS) when considered as a whole system (including the ground control station), obviate these drawbacks. When recording takes place indoors, say on a television or movie set, drones can provide shots that are not available to current auxiliary devices, thanks to their stability and precision (Castillo et al. 2007).

Drone navigation requires knowledge of the position of the drone at all times. In outdoor flights, drones can use GPS location systems. When working indoors, GPS does not have the accuracy needed for a safe flight, and therefore an Indoor Positioning System (IPS) is needed. Furthermore, due to the smaller spaces and the increased risk of damage to property and people in case of an accident, much higher accuracy is required. Typically, the necessary accuracy is on the order of tens of centimetres, roughly two orders of magnitude more precise than standard GPS.

If the drone incorporates an indoor positioning system, it is possible to control it remotely and autonomously. This control must provide the mission to the drone, monitor that mission and ensure all safety features. To perform all these tasks, an Intelligent Flight Control System (IFCS) is necessary. This system must be integrated with the IPS, since it requires knowledge of the position to generate a map of the interior space. It must also be integrated with the drone, since it must be able to control it with guarantees. Consequently, a drone mission consists of a sequence of actions that the drone must perform, and therefore the drone must be equipped with different flight modes. These modes are one of the strengths of this chapter, since they are the key to security (Fig. 1).

Fig. 1

Main areas to be considered in the design of an autonomous indoor drone. Source: own elaboration

Currently, indoor drone navigation is mostly performed using commercial, off-the-shelf solutions, both for drone control (Hussein et al. 2015) and for trajectory tracking (Santana et al. 2014). This latter aspect is one of the most interesting as far as research is concerned (Martínez and Tomás-Rodríguez 2014).

The drone’s real-time navigation and control features are especially relevant to the creative industry. Safety is a key factor in both outdoor and indoor flight environments. In most countries, outdoor drone flights over populated areas are heavily restricted, and common outdoor flight environments are non-populated areas with few valuable elements around. Indoor flights, by contrast, are not regulated, yet it is common to find valuable elements such as paintings, sculptures, lamps and furniture. Combined with smaller spaces and the presence of people, this means that indoor drone flights should meet higher levels of security than outdoor flights. This aspect therefore determines, to a large extent, all questions relating to the design of both the control architecture and the drone.

2 Drone Characteristics

The core of the indoor system is the drone, which must have the necessary components to meet the safety requirements. In this section, we present a drone classification and review the components directly related to the drone.

2.1 Drone Classification

Drones can be classified into different form factors and configurations following several approaches. A very frequent classification takes into account the way the drone generates lift and, separately, the way it takes off. A basic scheme comprises drones in the following configurations: Multi-rotor, Fixed-Wing and VTOL (Vertical Take-Off and Landing).

Each of these configurations has pros and cons and specific applications where it is the most suitable option. Since the scope of this book is on how technology is applied or used in indoor drones, it is worth focusing on the type of drones which are more oriented to be used in limited airspace.

These types of drones are the multi-rotor or multi-copter types, which have a list of advantages that can be summarised as follows.

  • Vertical take-off and landing, which minimises the ground space required for operation. This advantage is smaller when compared with small, lightweight Fixed-Wing aircraft, which can be launched by hand or with small catapults and landed with a parachute or a controlled loss of lift.

  • Possibility of stationary or very low speed flights.

  • Better manoeuvrability and accuracy while flying. Fixed-wing systems fly wide curvilinear trajectories with a large turning radius, as well as restricted ascent and descent speeds. In contrast, multi-rotors can follow any trajectory on a 3D path, allowing a better approach to the target.

  • Due to their configuration and design, they can generally carry heavier and bulkier payloads relative to the aircraft’s size.

Multi-rotors can also be classified by the number of motors:

  • Bi-copter: two motors. They need adjustable pitch on motors and propellers in order to fly in a balanced manner.

  • Tri-copter: three arms with a motor at the end. The tail motor must have a mechanical system to counteract the torque generated by the other two motors.

  • Quad-copters: four arms with a motor at the end.

  • Hexa-copter: six arms and six motors.

  • Octocopter: eight motors either on eight arms or four arms (biaxial configuration).

  • There are also multi-rotors with 10, 12 and up to 18 motors, most of them still at the trial stage and not yet having reached commercial status.

When drones are used specifically in indoor environments, a number of restrictions must be addressed and solved with new developments of the technology. This is the case for positioning and navigation in airspace where the main positioning reference, the well-known GPS (Global Positioning System) or, in general terms, GNSS (Global Navigation Satellite System) devices, has limited coverage and an unreliable signal.

Nowadays, almost every positioning device in the RPAS market works in GNSS mode, which allows data to be received from different satellite constellations (GPS, GLONASS, etc.). They can also receive WAAS (Wide Area Augmentation System) or EGNOS (European Geostationary Navigation Overlay System) differential corrections, both of which are based on SBAS (Satellite Based Augmentation System), which provides differential positioning that increases the accuracy of the GPS signal in almost every case.

This system can also correct typical errors in GPS signals, such as clock drift, ionospheric influence or errors in the orbital tracks. Using these corrections, positioning accuracy improves from 2–5 m to around 1 m. This level of precision is enough to navigate and provide geo-referenced images.

RTK (Real Time Kinematic) systems improve positioning and navigation, reaching centimetre-level accuracy. The main factors to be considered when selecting RTK GNSS systems are:

  • Concurrence

  • Frequencies (L1 and/or L2 in the GPS case)

  • Time to “fix” the signal

This system needs its own reference station or a supplier of differential corrections, which are transmitted via radio or cellular network. The GNSS receiver used as reference shall have, at the very least, the same features and performance as the mobile receiver: if the mobile receiver is a dual-frequency GNSS receiver (L1 and L2), the base receiver shall also be a dual-frequency GNSS device, or the performance of the whole system will be reduced.

The starting sequence (automatic) is the process through which the receiver solves ambiguities and goes from autonomous accuracy to centimetre precision. In L1-only devices, it takes about 20 minutes, an aspect that must be considered for the operation, as it can cause delays when the signal is lost and the sequence has to be restarted. Dual-frequency devices complete this process faster.

With regard to indoor positioning, where the GPS signal is not available, other systems must be considered. One that should be highlighted is UWB (Ultra-Wide Band) technology, which uses radio-frequency signals over a very wide band of the spectrum to measure the distance to fixed beacons and derive the position by trilateration. This type of signal can penetrate walls and provides better accuracy than alternatives such as GPS or Wi-Fi (IEEE 802.11).
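
As an illustration of how this kind of beacon ranging can be turned into a position, the following minimal sketch (in Python, not the implementation described here) recovers a 3D position from measured distances to fixed UWB anchors by non-linear least squares; the anchor coordinates, the initial guess and the use of SciPy are assumptions made only for the example.

    import numpy as np
    from scipy.optimize import least_squares

    def uwb_position(anchors, distances, initial_guess=(0.0, 0.0, 1.0)):
        """Estimate the tag position from ranges to fixed anchors.

        anchors: (N, 3) array of beacon coordinates in metres.
        distances: N measured ranges in metres.
        """
        anchors = np.asarray(anchors, dtype=float)
        distances = np.asarray(distances, dtype=float)

        def residuals(p):
            # Difference between predicted and measured ranges for a candidate position p.
            return np.linalg.norm(anchors - p, axis=1) - distances

        return least_squares(residuals, np.asarray(initial_guess, dtype=float)).x

    # Example: four anchors around a room and the ranges they report.
    anchors = [(0, 0, 2.5), (6, 0, 0.5), (6, 4, 2.5), (0, 4, 0.5)]
    print(uwb_position(anchors, [2.92, 4.30, 4.95, 3.24]))  # approximately (2.0, 1.5, 1.0)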

Bluetooth systems can be an alternative, although their accuracy is very low and they are usually discarded for most applications.

2.2 Navigation Systems

A drone mission is a set of actions that a drone must perform in a specific environment. Typical actions include taking a picture, starting a video recording, moving to another point in space or changing the pose of the drone. All these actions must be controlled; consequently, the drone’s navigation system is what controls the drone mission.
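
To make the idea concrete, the following sketch (our own illustration, not part of the system described here) represents a mission as an ordered list of actions that the navigation system executes one by one; the action names and fields are hypothetical.

    from dataclasses import dataclass, field
    from typing import List, Optional, Tuple

    @dataclass
    class Action:
        kind: str                                             # e.g. "goto", "take_picture", "start_video"
        target: Optional[Tuple[float, float, float]] = None   # position or orientation, when relevant

    @dataclass
    class Mission:
        actions: List[Action] = field(default_factory=list)

        def run(self, execute) -> None:
            # The navigation system walks through the actions in order;
            # `execute` is supplied by the flight control layer.
            for action in self.actions:
                execute(action)

    # Example mission: fly to a point, take a picture, then start recording.
    mission = Mission([
        Action("goto", (2.0, 1.5, 1.2)),
        Action("take_picture"),
        Action("start_video"),
    ])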

2.2.1 Inertial Measurement Unit (IMU)

2.2.1.1 Gyroscope

There are a number of gyroscopes that use different physical phenomena to measure rotation rate (MEMS, laser gyros, mechanical, etc.), although the type most used in RPAS is the MEMS type, due to its advantages in size, weight and, just as importantly, price (as it is also used in the mobile phone market).

MEMS gyroscopes keep a mass in constant vibration and periodically measure its deviation from the initial plane of vibration. These deviations provide a measure of the Coriolis force experienced by the mass, which is caused by the rotation of the gyroscope.

2.2.1.2 Accelerometer

As with gyroscopes, the main type of accelerometer is the MEMS type, and it works in a very similar way. The inner part of the sensor is a comb-like structure that forms a series of capacitors. The mobile electrodes are attached to a known mass, which is also connected to a device that returns the mass to a known position. When the sensor experiences acceleration, the mass moves and changes the capacitance of the capacitors. Measuring this capacitance allows the acceleration experienced by the sensor to be estimated.

2.2.1.3 Magnetometers

The most commonly used magnetometers are also of the MEMS type and are based on the Hall effect. This effect is the potential difference that appears across a current-carrying conductor placed in a magnetic field, because the magnetic force deflects the charge carriers to one side of the conductor. By measuring this potential difference, the magnetic field at the sensor can be estimated, and from it the heading with respect to the Earth’s magnetic field.
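
The chapter does not detail how these IMU readings are fused, but as one common illustration, the sketch below applies a basic complementary filter: the gyroscope provides the short-term change in attitude and the accelerometer (via the gravity vector) corrects the long-term drift. The filter constant and the interfaces are assumptions made for the example; a magnetometer-based heading could be blended in similarly.

    import math

    def accel_angles(ax, ay, az):
        """Roll and pitch (rad) implied by the gravity vector seen by the accelerometer."""
        roll = math.atan2(ay, az)
        pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
        return roll, pitch

    def complementary_filter(roll, pitch, gx, gy, ax, ay, az, dt, alpha=0.98):
        """Blend integrated gyro rates (rad/s) with accelerometer angles; alpha weights the gyro."""
        acc_roll, acc_pitch = accel_angles(ax, ay, az)
        roll = alpha * (roll + gx * dt) + (1.0 - alpha) * acc_roll
        pitch = alpha * (pitch + gy * dt) + (1.0 - alpha) * acc_pitch
        return roll, pitch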

2.2.2 Barometer

Most barometers used in RPAS are MEMS-based piezoelectric barometers. A piezoelectric element is fixed over the only opening of a cavity, completely sealing it. Inside this cavity, an air mass is trapped at the reference pressure, generally 101,325 Pa (standard atmospheric pressure). Pressure differences between the air in the cavity and the outside atmosphere produce forces on the piezoelectric element, causing a deflection and hence a potential difference that can be measured. From this potential difference, corrected with the sensor’s temperature, the atmospheric pressure can be deduced.
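
As a worked example of how such a pressure reading is used, the sketch below converts static pressure to an approximate altitude with the standard barometric formula; the constants assume the International Standard Atmosphere and a known reference pressure at the take-off level.

    def pressure_to_altitude(pressure_pa, reference_pa=101325.0):
        """Approximate altitude (m) above the reference level for a given static pressure (Pa)."""
        return 44330.0 * (1.0 - (pressure_pa / reference_pa) ** (1.0 / 5.255))

    # e.g. pressure_to_altitude(100129.0) is roughly 100 m above the reference level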

2.2.3 Ultrasounds

Ultrasound sensors are based on the same principle as sonar. They emit a very high-frequency sound wave and measure the time the echo takes to return to the sensor. By multiplying this time by the speed of sound and dividing by 2, the sensor obtains the distance to surrounding obstacles. In RPAS, they are generally used as altimeters (near the ground) or to avoid large obstacles such as walls or ceilings.
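
The computation is simple enough to show directly; the speed of sound below is the usual value for air at about 20 °C and would in practice be corrected for temperature.

    SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

    def echo_time_to_distance(echo_time_s):
        """Distance to the obstacle: half of the round trip travelled at the speed of sound."""
        return echo_time_s * SPEED_OF_SOUND / 2.0

    # e.g. an echo received after 5.8 ms corresponds to roughly 1 m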

2.2.4 Infrared

The use of infrared sensors outdoors is very restricted due to interference from sunlight, so they are mainly used indoors for measuring very short distances (when landing) or distances to very large obstacles (walls or ceilings).

2.2.5 Optical Flow

Optical flow cameras measure the ground speed of an RPAS using images of the terrain it is flying over, in a way very similar to an optical computer mouse. A low-resolution camera (below 1 MP) takes pictures at a very high refresh rate and feeds an algorithm that extracts patterns from the images. The change in pattern between two consecutive images, together with the time elapsed between them, gives the horizontal speed of the RPAS and hence its new position starting from a known initial position. In principle, any camera with a sufficient refresh rate is valid for this purpose; what matters is the algorithm and the quality of the measurements obtained.
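
The chapter does not specify which algorithm such a sensor uses; as one plausible illustration, the sketch below estimates horizontal ground velocity from two consecutive greyscale frames with OpenCV’s dense Farneback optical flow, converting pixels to metres with an assumed altitude and focal length (in pixels) under a pinhole camera model.

    import cv2
    import numpy as np

    def ground_velocity(prev_gray, curr_gray, dt, altitude_m, focal_px):
        """Mean pixel displacement between frames, scaled to metres per second."""
        flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        dx_px = float(np.mean(flow[..., 0]))
        dy_px = float(np.mean(flow[..., 1]))
        metres_per_px = altitude_m / focal_px   # ground sampling distance at this altitude
        return dx_px * metres_per_px / dt, dy_px * metres_per_px / dt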

2.2.6 Stereoscopic Cameras

Stereoscopic cameras use photogrammetric principles to obtain the distance to a set of points. Once common features are detected in two pictures taken from two separate viewpoints, the position of those features can be computed and the whole pictures can be referenced. The result is a map of distances with the same resolution as the cameras used. This information is generally used for obstacle avoidance, although it can also be used for 3D mapping of the environment.
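
As a sketch of the underlying relation (depth = focal length × baseline / disparity), the following code computes a depth map from a rectified greyscale stereo pair with OpenCV’s block matcher; the focal length and baseline are calibration values assumed to be known.

    import cv2
    import numpy as np

    def depth_map(left_gray, right_gray, focal_px, baseline_m):
        """Per-pixel depth in metres at the resolution of the input images."""
        stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
        disparity = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0  # fixed-point output
        disparity[disparity <= 0] = np.nan      # mark invalid or unmatched pixels
        return focal_px * baseline_m / disparity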

3 Indoor RPAS System Architecture

A drone is a distributed system composed of several connected devices that together provide the whole service of recording video or photographs in indoor scenarios using RPAS. There are two main working areas: the Drone and the Environment Infrastructure. Each area contains components of the same three systems: the Intelligent Flight Control System (IFCS), the RPAS system and the IPS system (see Fig. 2).

Fig. 2

Indoor drone distributed system: the two main areas (Drone and Environment Infrastructure) and the three systems involved in each area (IFCS, RPAS and IPS). In each system, the components connect with one another, forming a distributed system. Source: own elaboration

3.1 Environment Infrastructure

This is the ground system in charge of supporting the drone’s operability. It is composed of several subsystems: the IPS anchors that support the drone’s indoor positioning, the radio remote control used to fly the drone manually, and the Ground Control System (GCS). The GCS is in charge of receiving the whole cloud of points of the scene and its surroundings from the Virtual Environment Mapping (VEM) Manager and of generating the flight plan by means of the Flight Planning System. Finally, the Record and Flight Control System controls and monitors the drone flight.

3.2 Drone

Similar to the Environment Infrastructure, the drone has components corresponding to the three subsystems presented above. Concerning the IPS, the drone incorporates four antennas to receive the anchor signals and a board (called a “tag”) that processes the signals and computes the position. Related to the RPAS, the drone includes all the sensors, such as distance sensors and the Inertial Measurement Unit (IMU), a multiplexer system that selects the source controlling the drone (manual or automatic flight), and the Flight Control System (FCS), which is in charge of controlling all the flight parameters: drone position and orientation, camera parameters and gimbal parameters. Finally, the third system is the On-board Control System (OCS), which also includes the VEM manager (synchronised with the GCS). It is in charge of detecting the cloud of points in front of the drone, sending it back to the GCS, receiving the flight plan, transferring it to the FCS and controlling the flight plan according to the operator’s requirements.

Concerning the recording system, there is a gimbal, a Recording Camera and a VEM Camera, which is usually an RGBD camera (Munera et al. 2015). The gimbal is a specific actuator in charge of orienting the Recording Camera towards its point of interest regardless of the position and orientation of the drone. The Recording Camera (RCam) records the high-resolution video using professional parameters. The VEM Camera is a ZCam that provides the drone with the cloud of points of the objects in front of the camera. Both the gimbal and the RCam are controlled directly by the FCS, either through a flight plan or in real time during a flight via the OCS, which is commanded from the GCS. This chapter focuses only on VEM management, both in the OCS and in the GCS, during the environment scanning phase.

The virtual map of the environment where the drone has to fly is obtained with the same drone used in the recording phase. This carries several advantages: a single drone covers the whole recording operation; there is no need to use different devices for different actions (calibration, scanning, tuning and recording); and there is less equipment to carry, store and move, which simplifies the operation.
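
The hand-off between the on-board and ground components during this phase can be summarised with a purely illustrative sketch; the class and method names below mirror the components in the text (OCS, GCS, FCS, VEM), but the interfaces themselves are assumptions made for the example.

    class OnboardControlSystem:
        """OCS: captures the point cloud via the VEM camera and talks to the GCS."""
        def scan_environment(self, vem_camera):
            return vem_camera.capture_point_cloud()   # hypothetical VEM camera interface

        def send_point_cloud(self, gcs, cloud):
            gcs.receive_point_cloud(cloud)

        def load_flight_plan(self, fcs, plan):
            fcs.plan = plan                            # flown later in Mi.F.M. or G.F.M.

    class GroundControlSystem:
        """GCS: stores the scanned environment and produces a flight plan."""
        def __init__(self):
            self.cloud = None

        def receive_point_cloud(self, cloud):
            self.cloud = cloud

        def generate_flight_plan(self):
            # Stand-in for the Flight Planning System: in practice the operator
            # edits waypoints around the scanned geometry on a PC or tablet.
            return [(0.0, 0.0, 1.0), (2.0, 0.0, 1.0), (2.0, 2.0, 1.5)]

    class FlightControlSystem:
        """FCS: holds the plan and the flight parameters during execution."""
        def __init__(self):
            self.plan = []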

4 Intelligent Flight Indoor Drone Navigation

Safety is the main axis of the system design, and there are several layers of safety throughout the whole architecture. The lowest level of safety is the hardware level: it concerns all the selected hardware devices, the connectors that transmit data from one module to another, power cables, motors, batteries and so on.

The basic software layer, composed of drivers, the operating system and the low-level FCS, is outside our control and is not the aim of this work. On top of this layer, there is another level in charge of how the drone behaves while flying. There are several flight modes, depending on the degree of autonomy of the drone and the level of safety of the mission to be accomplished (a minimal sketch of these modes is given after the list):

  1. Manual Flight Mode (M.F.M.). The pilot has complete control of the drone. The drone movements have no restrictions, and the pilot can instruct it to go anywhere, regardless of any sensor reading or map configuration. In this mode the reactive mode (see below) is cancelled and the human pilot has full control of the drone.

  2. Reactive Flight Mode (R.F.M.). This is a defensive flight mode, performed by the Flight Control System (FCS) taking into account the readings of the proximity sensors. It is a priority flight mode that is always active unless the human pilot expressly cancels it when flying in Manual Flight Mode. It is active by default for all the other kinds of flight modes: Assisted, Mixed and Smart.

  3. Assisted Flight Mode (A.F.M.). The pilot controls the drone, and he or she can take the drone out of the established flight plan. The difference with respect to the Manual Flight Mode (M.F.M.) is that the reactive mode is engaged, so the pilot cannot crash into the environment even if he or she tries.

  4. Deliberative Flight Mode (D.F.M.). This is an A.F.M. in which the drone is additionally not allowed to move into no-flight zones such as populated areas, areas with hanging cables and similar spaces.

  5. Mixed Flight Mode (Mi.F.M.). The user can explicitly stop and move forward or backward at different speeds along the flight plan. The metaphor is like having a virtual rail along the trajectory of the flight plan: the drone behaves like a 3D virtual dolly. It requires the virtual map to have been captured and the flight plan defined.

  6. Guided Flight Mode (G.F.M.). Automatic flight that considers obstacles, restricted areas and surrounding architecture. This mode is completely autonomous. It is supervised by humans, but humans do not control the drone. A human pilot can pass to any other kind of flight mode from this one. The drone moves automatically along the trajectory of the flight plan, behaving like a 3D virtual dolly as in Mi.F.M., but completely autonomously. It has the same requirements as Mi.F.M.

  7. Smart Flight Mode (S.F.M.). This mode can be engaged when the drone has left the flight plan and the pilot wants it to return to the predefined flight plan, selecting the shortest itinerary and considering obstacles, restricted areas and surrounding architecture in real time. This mode is completely autonomous. It is supervised by humans, but humans do not control the drone. The implementation of this mode is beyond the goals of this work. It requires the virtual map, the flight plan and 3D sensors.

  8. Emergency Flight Mode (E.F.M.). In the event of a loss of the IPS datalink, radio contact, engine failures or a battery level below the safety threshold, a defensive failsafe behaviour is executed. Depending on the type of failure and the position of the RPAS, it will start a slow landing or return automatically to the starting point (Return To Launch, RTL mode). It requires environment scanning.
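
The following minimal sketch (our own simplification, not the system’s actual code) encodes the modes listed above as an enumeration and captures the rule that the reactive layer (R.F.M.) is active in every mode except pure manual flight.

    from enum import Enum, auto

    class FlightMode(Enum):
        MANUAL = auto()        # M.F.M.  - full, unrestricted pilot control
        ASSISTED = auto()      # A.F.M.  - manual control with reactive protection
        DELIBERATIVE = auto()  # D.F.M.  - A.F.M. plus no-flight zones
        MIXED = auto()         # Mi.F.M. - "virtual rail" along the flight plan
        GUIDED = auto()        # G.F.M.  - autonomous flight along the plan
        SMART = auto()         # S.F.M.  - autonomous return to the plan
        EMERGENCY = auto()     # E.F.M.  - failsafe landing or return to launch

    def reactive_layer_active(mode: FlightMode) -> bool:
        """R.F.M. is always engaged unless the pilot flies in pure Manual mode."""
        return mode is not FlightMode.MANUAL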

5 Security vs. Flight Modes

There is a relationship between the flight modes of the drone and the required security level (see Fig. 3). Notice that R.F.M. is not a flight mode selectable by the user, but rather a cross-cutting safety feature for all flight modes except M.F.M., where the human pilot has full control of the drone. It is therefore not included as a flight mode on the horizontal axis.

Fig. 3

Relationship between flight modes and security levels. Source: own elaboration

Notice that the Emergency Flight Mode can only activate the RTL mode without entering S.F.M. if the drone is on the planned path. In this mode, the drone can perform an emergency landing in any situation if it does not have enough battery to return home through the shortest available path. This is an exceptional mode that can be reached from any other state of the drone.

If the drone is not following the flight plan, a straight-line return trajectory could be dangerous, since the drone could collide with an obstacle or fly into a no-flight zone. If it detects an obstacle, the drone cannot decide where to move; it cannot recalculate an alternative RTL trajectory in real time. If the battery level is really low, the drone can land on any safe landing point or area specified in the map. This requires environment scanning, since this mode needs knowledge of the path and of potential obstacles (walls, the environment cloud of points, and so on) to determine the return path to a landing point or area (typically the take-off point).

Observe that M.F.M. and A.F.M. may be used in any situation and do not require any kind of environment scanning. Interestingly, they are the flight modes currently used when flying a drone in indoor scenarios with any off-the-shelf commercial drone. Finally, these modes do not require an accurate IPS, since the flight is completely manual.

In the case of the D.F.M., this mode does not require any scanning of the surroundings, since it only has to avoid restricted areas. The D.F.M. improves security for indoor drone navigation over a completely free M.F.M., since it takes into account both the R.F.M. and the disallowed areas. It is important to note that this mode cannot be achieved with current off-the-shelf commercial drones, since the restricted areas have to be edited by the pilot in a GUI on a PC or tablet and later transferred to the drone FCS so that those areas are avoided when flying. This mode requires an accurate IPS: current GPS has a resolution of around 10 m when flying outdoors and, moreover, there are many situations where GPS reception is poor or even non-existent when working indoors.

Notice that (Mi, G, S, E).F.M. (summarised in the sketch after this list):

  1. Require scanning of the environment, since these flight modes have to avoid not only restricted areas but also walls, columns, furniture or any area or volume that the director considers dangerous or unavailable.

  2. Improve security for drone navigation in indoor scenarios, preventing the pilot from crashing into the surrounding walls and furniture.

  3. Cannot be achieved with any current off-the-shelf commercial drone, since forbidden areas, allowed flying paths and flight plans have to be edited by the pilot in a GUI on a PC or tablet and later transferred to the drone FCS so that those areas are avoided when flying.

  4. Require an accurate IPS, since current GPS has a resolution of around 10 m when flying outdoors and, moreover, there are many situations where GPS reception is poor or non-existent when working indoors.
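
The requirements discussed in this section can be condensed into a lookup table; the sketch below is only an illustration of that summary, not an interface of the actual system.

    # Per-mode requirements as described in the text: whether an environment scan
    # and an accurate IPS are needed before the mode can be engaged.
    REQUIREMENTS = {
        "M.F.M.":  {"scan": False, "ips": False},
        "A.F.M.":  {"scan": False, "ips": False},
        "D.F.M.":  {"scan": False, "ips": True},   # only no-flight zones, but accurate positioning
        "Mi.F.M.": {"scan": True,  "ips": True},
        "G.F.M.":  {"scan": True,  "ips": True},
        "S.F.M.":  {"scan": True,  "ips": True},
        "E.F.M.":  {"scan": True,  "ips": True},
    }

    def can_engage(mode, environment_scanned, accurate_ips_available):
        req = REQUIREMENTS[mode]
        return ((environment_scanned or not req["scan"])
                and (accurate_ips_available or not req["ips"]))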

6 Flight Mode State Machine

A drone may be understood as a state machine that operates in different flight modes: each flight mode is a state of this state machine. Figure 4 shows a state-transition diagram of how an indoor drone should work.

Fig. 4

State-transition diagram of all the flight modes. Source: own elaboration

Notice that there is a group formed by the (M, A, D).F.M. on the left side. This is the manual group, while the group formed by the (Mi, G, S).F.M. on the right side is the automatic one.

The manual group has three characteristics:

  1. It is controlled directly by the human pilot. The drone does not move if the human pilot gives no order (it hovers in loiter mode).

  2. The human pilot can change from one mode to any other simply by activating or deactivating the defensive/reactive flight mode or the no-flight zones, depending on the safety risks the pilot wants to assume.

  3. D.F.M. has the highest security level of all the manual flight modes: it does not allow the pilot to take the drone into no-flight zones and does not allow the drone to collide with walls. It is therefore the flight mode reached as soon as the user enters manual mode while the drone is in G.F.M.

Notice that S.F.M. is a transition state that allows the drone to return to the original flight plan path while avoiding obstacles. There are several choices for returning to the original path: going to the nearest path point, to the point where the drone is supposed to be at that moment according to the flight plan, to the point where the drone left the flight plan, or to a given checkpoint. This mode is the only way to return from a manual flight mode to an automatic flight plan. While the drone is returning to the original path (S.F.M.), the user can take control of the drone again, passing to D.F.M., the next lower security level; this is why there is a bidirectional arrow between both flight modes. It is the pilot’s responsibility to later reduce the security level by passing to (M, A).F.M. Once the drone has reached the flight plan trajectory, it enters G.F.M. and starts to follow the plan as if nothing had happened.

Observe that the automatic group is not directly controlled by the human pilot, except in Mi.F.M., which is a semi-automatic mode in which the drone cannot move outside the flight plan path. There is therefore no need to go to S.F.M. from G.F.M. to correct anything, so that arrow is one-way. Consequently, the human pilot can move forward and backward along the defined flight plan trajectory by changing from G.F.M. to Mi.F.M., and can additionally move the drone out of the trajectory by changing from G.F.M. to D.F.M.

Note that in the current implementation S.F.M. is not available. So, when the drone is moved manually out of the path, it stays in D.F.M. and cannot return to G.F.M. until it has landed and been reset.

The diagram shows that manual flight always takes priority over any other flight mode. Manual flight is always available and may be seen as an escape mode when the drone is in a risky situation.

Some final remarks about the flight modes: it is important that transitions from any state to another are controlled from the Flight Plan Manager at the ground base. Also, if the manual radio control is touched, either deliberately or accidentally, the drone will abandon any automatic flight mode and enter D.F.M. for safety. Finally, once the drone is in any manual flight mode, the pilot can change to any other manual flight mode by selecting the new mode on the Flight Plan Manager device.
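
To close the section, the transition rules described above can be written down as a small table checked before any mode change. This is a simplified illustration of the state machine in Fig. 4 rather than its actual implementation; S.F.M. is included even though the current implementation does not yet provide it.

    # Allowed transitions between flight modes, following the state-transition
    # diagram described in this section (simplified).
    ALLOWED = {
        "M.F.M.":  {"A.F.M.", "D.F.M."},            # manual group: the pilot switches freely
        "A.F.M.":  {"M.F.M.", "D.F.M."},
        "D.F.M.":  {"M.F.M.", "A.F.M.", "S.F.M."},  # S.F.M. is the only way back to the plan
        "Mi.F.M.": {"G.F.M."},                      # the "virtual rail" semi-automatic mode
        "G.F.M.":  {"Mi.F.M.", "D.F.M."},           # touching the radio control drops to D.F.M.
        "S.F.M.":  {"D.F.M.", "G.F.M."},            # D.F.M. if the pilot takes over; G.F.M. once the plan is reached
    }

    def next_mode(current, requested):
        """E.F.M. can be reached from any state; every other change is checked against the table."""
        if requested == "E.F.M." or requested in ALLOWED.get(current, set()):
            return requested
        return current   # refuse the change and remain in the current mode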

7 Conclusions

This chapter has presented the characteristics of drones and the specific aspects to be considered for drone flights in indoor environments. These characteristics are architecture-oriented in order to achieve the autonomous navigation of indoor drones whose mission is to record video and pictures with very high-definition cameras. To achieve this type of mission, the system requires the different flight modes that have been proposed: manual, reactive, deliberative and intelligent. Safety is the main axis of the system design, and the relationship between the drone’s flight modes and the required security level has been presented. Finally, the chapter has shown how an indoor drone should work using these flight modes.