1 Introduction

Industry 4.0, or the fourth industrial revolution, is a frequently discussed topic nowadays. In Industry 4.0, most of the work should be carried out by autonomous robots, which will require feedback to enable safe orientation in the working area, coping with obstacles and, at the same time, cooperation with humans. Additional sensors will be needed to provide autonomous robots with more accurate machine-vision information about the surrounding objects. Thus, tracking of robot movements in industrial, i.e. non-cooperative, environments has to be considered.

Different imaging techniques are commonly used for robot navigation in industry, such as time of arrival, structured lighting, photogrammetry, stereoscopy, and laser triangulation, together with others exploited for monitoring and quality control. Several industrial machine-vision and monitoring systems used for employee safety and for robot navigation in the industrial environment are described in [1]. Depending on the tracked target, these systems can be oriented towards mapping, localization, obstacle detection [2], and terrain classification [3], or towards the eye-in-hand configuration, in which new images are acquired by changing the camera position. A time-of-flight (ToF) camera method for robot monitoring in the eye-in-hand configuration was published in [4]. In [5], an overview and comparison of machine-vision techniques used for robot navigation are provided in terms of accuracy, range, sensor weight, safety, processing delay, and their influence on the environment. Machine-vision techniques combined with robotics have thus become a main research priority in industrial scenarios. Several methods based on different technologies and devoted to various environments are currently under research. The main techniques include acoustic [6, 7], optical [8], and radio-frequency (RF) [9] methods.

In the last decade, a great effort has been devoted to research on RF sensors, specifically the application of ultra-wideband (UWB) radars or sensors to target localization. A particularly interesting feature of UWB sensors is their capability to detect not only targets located in line-of-sight (LOS) conditions but also targets situated behind a non-conductive obstacle (non-line-of-sight (NLOS) scenarios) [9, 10]. In addition, it has been shown (e.g. in [11]) that UWB radar can detect targets in dust, smoke, or other types of non-cooperative environments. As sensor or radar devices, different kinds of RF illuminators have been proposed. They include conventional UWB radars (e.g. impulse radar, noise and pseudo-noise radar, frequency-modulated continuous-wave (FMCW) Doppler radar, stepped-frequency radar, etc.) [12] and an unconventional approach based on the application of WiFi systems [13, 14]. In this chapter, UWB systems are described in terms of their application possibilities and the exploitation of UWB illuminating signals for measurement. In these systems, the occupied frequency band is considerably wide. However, in combination with the extremely low power, the spectrum occupation is not that significant and brings the advantage of improved system robustness with respect to transmission-channel parameters.

Localization of moving robots in 2D space was discussed in [15] and in 3D space in [16,17,18]. This chapter deals with a UWB localization technology exploited for robot gripper tracking. Because of very strict UWB emission limits [19, 20], it is designed for short-range, i.e. indoor industrial, sensing. Thanks to the huge bandwidth of UWB systems, their fine range resolution, and their material penetration capabilities, UWB sensors working in high-frequency bands possess the already mentioned ability to detect not only targets located in LOS but also objects situated in NLOS scenarios. Based on this potential of UWB technology and the recent state of the art in UWB radar signal processing, several basic applications of UWB sensors for robot localization in either LOS or NLOS scenarios were proposed.

There are many theoretical approaches to object localization in 2D and 3D. They include, for example, angle-of-arrival (AOA) measurement, time-of-arrival (TOA) measurement, and received-signal-strength (RSS) measurement. The experimental results devoted to target localization using these methods and radar systems for the outlined scenarios have shown that these solutions can provide reliable single-target localization. However, experience with a robust M-sequence UWB radar gathered during a number of measurement campaigns dedicated to the localization of multiple moving targets has shown that often only the target located near the radar antennas is clearly visible. The other targets can also be detected, but with less reliability than those located closer to the radar antennas. This effect results in an essential reduction of the localization system performance. This problem has been analyzed in depth in [21, 22]. It has been shown in these articles that the mentioned difficulties appearing in multiple-target scenarios can be explained as a consequence of a complex environment and of the phenomenon of mutually shielding targets. Note that this phenomenon is a result of electromagnetic wave propagation, and hence it is practically invariant with regard to the applied sensor devices. Possible solutions to this problem have been proposed in [23]. In this chapter, we show that an efficient approach to improving target detection and localization is to use several radars forming a UWB sensor network (UWB-SN) for monitoring the area of interest. Diversity is the main idea behind this approach. By spacing the particular radar systems, i.e. the sensor network nodes with their transmitting (Tx) and receiving (Rx) antennas, in such a way that a sufficient target angular spread is ensured, the UWB-SN is able to utilize the spatial diversity of the target scattering to decrease the complexity of the monitored environment. The experimental analyses of the performance of a UWB-SN with a centralized architecture presented in [23,24,25,26] have shown that the UWB-SN is a very promising technology for monitoring multiple targets, e.g. a human in coexistence with a robot, also in complex industrial environments.

Theoretical aspects of the exploitation and performance of UWB-SNs have been studied e.g. in [15]. In that article, it was shown that the Cramer-Rao lower bound of the target position estimate obtained by a UWB-SN is inversely proportional to the number of measurements executed by M transmitters and N receivers. The results confirmed that the application of a UWB-SN increases the target detection probability [23] as well as improves the target localization accuracy. Person and robot localization methods based on UWB-SNs have been experimentally studied in [15, 24,25,26,27]. The methods introduced in these papers especially include the TOA method combined with a modified version of a multiple moving target tracking (MTT) system (TOA-MTT) [24], the application of a probability hypothesis density filter [27], and an imaging method implemented for a distributed multi-static UWB radar system [28]. The comparison of these methods in terms of complexity, reliability, and accuracy has indicated that the TOA-MTT method can be considered very promising. The reader interested in this topic can find additional details e.g. in [24, 29, 30].

The short list of localization methods given in the previous paragraph shows that the fundamental methods and algorithms for target localization by a UWB-SN are available. On the other hand, a real-time hardware and software implementation of a UWB-SN is necessary for any real-life application. Some sensor network solutions are available for indoor scenarios in [18] and for outdoor sports and lifestyle applications in [31]. Both these sensor networks were designed for the localization of targets carrying active markers. That is the main difference between the UWB-SNs discussed in [18, 31] and the UWB-SN described in this chapter.

Based on the wide range of promising applications of UWB sensors mentioned in this chapter, considering the advantages of UWB technologies and target localization methods for both person localization [25] and robot localization [26], and taking into account the current state of the art in UWB-SN implementation, we have developed testing equipment for a wireless real-time UWB-SN. The equipment can be used for the development of a whole range of UWB-SN applications for the detection, localization, and tracking of moving targets. In this chapter, the novel hardware and software structure of this equipment, represented by a sensor network with a centralized architecture, is introduced. The M-sequence UWB radar as the localization sensor, a low-cost ARM-based quad-core microcomputer (ARM-MC) as the embedded processor of each sensor network node, and a narrow-band wireless transmission system forming the communication infrastructure of the UWB-SN are proposed as the key components of the network. The software solution of the UWB-SN is based on an optimized Yocto Linux operating system and signal processing algorithms implemented in ANSI C. We see the novelty presented in this chapter especially in the proposed hardware and software solutions of the UWB-SN testing equipment; these solutions are therefore described in detail in the following parts of the chapter. For the estimation of the target location, the TOA-MTT signal processing procedure is used. This localization approach is too complex to be covered in a single chapter, so it is described only briefly here. The performance of the proposed UWB-SN concept is illustrated by its application to the data obtained in an experiment aimed at tracking a moving robot gripper in an NLOS scenario. The results show that the introduced concept of the UWB-SN can be considered interesting and promising.

2 Monitoring System Architecture and Hardware

When designing the described UWB-SN, it is crucial to select proper UWB sensors, i.e. radar devices. As mentioned earlier, radar devices using various illuminators have been developed. They include conventional UWB radars such as impulse radar, noise and pseudo-noise radar, FMCW Doppler radar, stepped-sine radar, etc., or unconventional approaches based on the application of WiFi systems.

The scenario of the UWB-SN described in this chapter includes non-cooperative localization and tracking of a moving robot gripper, and the developed UWB-SN is applicable to LOS as well as NLOS scenarios. The main concept of target localization in the UWB-SN is based on the assumption that the raw radar data are represented by the impulse responses (IRs) of the environment through which the electromagnetic waves propagate from the transmitting (Tx) to the receiving (Rx) antennas of the sensors. Considering these requirements, the M-sequence UWB radar was chosen as the sensor for the UWB-SN [9]. The reasons for this decision can be briefly expressed as follows.

Experimental testing of the M-sequence sensor and the developed signal processing algorithms has proved that their combination allows us to localize and track the robot gripper. Moreover, the M-sequence sensor has additional interesting operational properties. Its operational parameters, such as the system bandwidth (up to several GHz), range resolution (1–3 cm), and unambiguous range (17–40 m), are controlled by the system clock frequency. Therefore, a single radar system can be simply adapted to various scenarios. Additionally, the transmit power of a particular radar sensor is usually between 1 and 10 mW and, thanks to the M-sequence signal, the transmitted energy is uniformly distributed along the whole period of the M-sequence.
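For orientation, two first-order relations (standard radar rules of thumb, not values taken from the sensor datasheet) link these parameters to the clock: the range resolution follows from the occupied bandwidth \( B \), and the unambiguous range from the sequence length \( M \) (in chips) and the clock frequency \( f_{c} \):

$$ \Delta r \approx \frac{c}{2B}, \qquad r_{ua} \approx \frac{c \cdot M}{2 f_{c}} $$

For example, \( B = 5 \) GHz gives \( \Delta r \approx 3 \) cm, in line with the resolution quoted above.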

A unique property of M-sequence UWB radars is their ability to operate simultaneously over the same frequency band and at the same time, while their mutual interference remains negligible. The detailed analysis of this property in [32] showed that the effect is caused by the clock frequency deviation between the individual system clock sources of the particular radars. It has been demonstrated in [32] that a difference of only a few kHz between the clock frequencies of the radars ensures negligible correlation between the identical M-sequences transmitted by them, and therefore their mutual interference can be neglected. This property of M-sequence systems was considered a major advantage during their development. Therefore, no special algorithms for time and frequency synchronization of the individual sensors are needed to control the UWB-SN. The ability of M-sequence systems to operate properly also under narrowband interference and in coexistence with other sources of electromagnetic radiation, such as WiFi, cellular phones, radio and TV broadcast, or the communication infrastructure of the UWB-SN itself, is considered a further significant advantage over other systems [9, 33].

The proposed UWB-SN has a star topology with one central node (CN) and several UWB sensor nodes (SNs), as shown in Fig. 1. In the proposed network, the CN operates in the master mode, while the SNs operate as slaves. As described in the next section, each SN uses as its sensor the UWB radar manufactured by Ilmsens GmbH™, radiating a pseudo-random maximum-length binary sequence (M-sequence). As mentioned above, the M-sequence radar has multiple advantages which make it an ideal choice for short-range sensing. The UWB radar node monitors the area of interest and gathers a continuous series of IRs (i.e. raw radar data) of the environment through which the electromagnetic waves propagate from the transmitting to the receiving antennas of the sensor. Each SN, equipped with an ARM-MC, collects the raw radar data from the UWB radar and, by executing the algorithms described in Sect. 3, is able to estimate the coordinates of the moving targets [x, y, optionally z]. The ARM-MC performs all computationally expensive processing in the SN and acts as its signal processing core. Because it is additionally equipped with an embedded microcontroller containing an RF communication interface (MCU-RF), it also ensures the real-time communication between the SN and the CN. The considered UWB-SN has a centralized architecture [24,25,26] and all locally estimated target coordinates received from the SNs are processed by the data fusion in the CN. The data provided to the data fusion in the CN are time-synchronized and transformed into a common coordinate system. The sets of such data from each SN represent the input data for the MTT system. The hardware functions and the complete data processing flow performed in all components of the sensor network are described in the following sections of this chapter.

Fig. 1

Star topology of the proposed UWB sensor network. UWB radar and signal processing unit are included in each sensor node, data fusion and estimation of the target location are implemented in the central node equipped with the personal computer

2.1 M-sequence Based UWB Systems

The M-sequence based systems described in this chapter can be classified as devices with a continuously radiated electromagnetic wave (CW). The stimulation signal is emitted continuously, in contrast to conventional impulse radars, which transmit short pulses. As shown in the simplified block schematic of the M-sequence generator (Fig. 2), the clock signal (CLK) is modulated by a pseudo-random bit sequence. A linear feedback shift register (LFSR) consisting of sequential logic circuits is used to generate the pseudo-random sequence [9].

Fig. 2

Simplified block schematic of the M-sequence generator

The basic parameter of this pseudo-random signal is its periodicity given by the number of shift register bits. The period M is given by:

$$ M = 2^{N} - 1 $$
(1)

where N is the number of shift register bits, dependent on the system used, usually 9, 12, or even 15 for the most recent systems. The periodicity can be considered an advantage because it allows the use of receiving circuits working at lower frequencies (subsampling mode) than the clock frequency of the M-sequence generator in the transmitter. The principles are described in more detail in [34, 35]. The transmitted signal is a noise-like stochastic signal. Its waveform in the time domain after the sampling process is depicted in Fig. 3.
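As an illustration of Eq. (1), the following sketch generates one period of an M-sequence with a software LFSR. The 9-bit order matches the 511-chip sequence mentioned later for the m:explore front-end, but the feedback polynomial (x⁹ + x⁴ + 1, one known primitive polynomial) and the seed are illustrative assumptions, not the configuration of the actual hardware generator.

```c
#include <stdio.h>
#include <stdint.h>

/* Generate one period of a 9-bit M-sequence (M = 2^9 - 1 = 511 chips).
 * Feedback polynomial x^9 + x^4 + 1 is a known primitive polynomial and
 * yields a maximal-length sequence; the taps of a particular radar
 * front-end may differ. Chips are mapped to the bipolar values +1/-1
 * that modulate the RF clock. */
#define ORDER 9
#define M_LEN ((1u << ORDER) - 1u)

static void generate_mseq(int8_t chips[M_LEN])
{
    uint16_t lfsr = 0x1FF;                       /* any non-zero 9-bit seed */
    for (uint32_t i = 0; i < M_LEN; i++) {
        chips[i] = (lfsr & 1u) ? 1 : -1;         /* output chip = LSB       */
        uint16_t fb = ((lfsr >> 0) ^ (lfsr >> 4)) & 1u;
        lfsr = (lfsr >> 1) | (fb << (ORDER - 1));
    }
}

int main(void)
{
    int8_t chips[M_LEN];
    generate_mseq(chips);
    for (int i = 0; i < 16; i++)                 /* print the first chips   */
        printf("%+d ", chips[i]);
    printf("...\n");
    return 0;
}
```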

Fig. 3

Part of the generated (top) and processed (bottom) pseudo-random binary sequence waveform

The maximum length binary sequence (MLBS), also known as the M-sequence, is a special type of pseudo-random binary sequence (PRBS). The term pseudo-random is used because the stimulation signal \( x\left( t \right) \) has a certain period (dependent on the LFSR length) after which the bit sequence repeats. Therefore, it is not a truly random but a quasi- or pseudo-random signal. Nevertheless, its parameters are very close to those of truly random signals. As an example, the autocorrelation function \( R_{xx} \left( \tau \right) \) can be mentioned:

$$ R_{xx} \left( \tau \right) = \frac{1}{T}\mathop \smallint \nolimits_{T} x\left( t \right)x\left( {t + \tau } \right)dt $$
(2)

The autocorrelation function has a short, impulse-like shape (similar to the Dirac impulse), and the power spectrum of the pseudo-random signal \( x\left( t \right) \) is given by the Fourier transform of its autocorrelation function:

$$ S_{xx} \left( f \right) = \mathop \smallint \nolimits_{ - \infty }^{\infty } R_{xx} \left( \tau \right)e^{ - j2\pi f\tau } d\tau $$
(3)

The PRBS signal [36] consists of elementary pulses (chips), whose number depends on the generator LFSR length and which are pseudo-randomly distributed within a single signal period.

A signal whose autocorrelation function is narrow has a large bandwidth. Therefore, it is appropriate to use such a signal as the stimulation signal, i.e. the signal transmitted by UWB sensors, whose resolution depends on the clock signal frequency. By investigating the cross-correlation function \( R_{xy} \left( \tau \right) \) between the transmitted (stimulation) signal and the received signal, we can obtain various information about the space illuminated by the stimulation signal. The cross-correlation function is defined by:

$$ R_{xy} \left( \tau \right) = \frac{1}{T}\mathop \smallint \nolimits_{T} y\left( t \right)x\left( {t + \tau } \right)dt $$
(4)

The information of interest is contained in the IR \( h\left( \tau \right) \), especially in its shape, position, magnitude, and other parameters which can be read directly from the outline of the IR (Fig. 4). When observing a target, the IR includes information about the behaviour of the target placed between the antennas (e.g. between the feeding points of the transmitting Tx and receiving Rx UWB sensor antennas) under the influence of the radiated signal. The IR and the auto- and cross-correlation functions are related by the convolution:

Fig. 4

Real shape of the received impulse response

$$ R_{xy} \left( \tau \right) = h\left( \tau \right) \otimes R_{xx} \left( \tau \right) $$
(5)

For the relationship between the transmitted stimulation signal \( x\left( t \right) \) and the system response \( y\left( t \right) \), we can write:

$$ y\left( t \right) = h\left( t \right) \otimes x\left( t \right) $$
(6)

Then, if the autocorrelation function approaches the Dirac impulse \( \delta \left( \tau \right) \), Eq. (5) simplifies to:

$$ R_{xy} \left( \tau \right) \sim h\left( \tau \right)\quad {\text{for}}\quad R_{xx} \left( \tau \right) = \delta \left( \tau \right) $$
(7)

From the practical point of view, Eq. (7) means that the cross-correlation function of the received and transmitted stimulation signals is proportional to the IR of the object under test (OUT) illuminated by the stimulation signal \( x\left( t \right) \).
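As a minimal sketch of Eqs. (2)–(7), the following function estimates the impulse response by circularly cross-correlating one period of the received signal with the transmitted M-sequence. A direct O(M²) correlation is used here purely for clarity; the real systems use the fast Hadamard transform discussed later in this chapter.

```c
#include <stddef.h>

/* Estimate the impulse response h[lag] ~ Rxy(lag) by circular
 * cross-correlation of one period of the received signal y[] with the
 * transmitted M-sequence x[] (Eq. (4) in discrete form, with the 1/T
 * normalization becoming 1/M). Direct O(M^2) correlation for
 * illustration only. */
void estimate_ir(const float *x, const float *y, float *h, size_t m)
{
    for (size_t lag = 0; lag < m; lag++) {
        double acc = 0.0;
        for (size_t n = 0; n < m; n++)
            acc += (double)y[n] * (double)x[(n + lag) % m];
        h[lag] = (float)(acc / (double)m);
    }
}
```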

Regarding the operating frequencies, these devices are capable of operation from tens of Hz up to tens of GHz. Therefore, they easily cover the whole UWB band. The generated stimulation signal waveform in the time domain is depicted in Fig. 3, where a part of the signal period measured in real time can be seen. The spectrum of this signal is shown in Fig. 5, and the received signal in the form of the impulse response is shown in Fig. 4. Standard IR processing algorithms may consider various IR parameters, such as its rising and falling edges, width, or amplitude.

Fig. 5

The M-sequence spectrum in baseband

2.1.1 UWB Sensor Systems and Radar Device Selection

The UWB devices operating with a continuously transmitted electromagnetic wave modulated by the M-sequence can be connected with each other to realize cooperative sensor networks and systems. We can imagine the UWB sensor system as one or more sensors using an ultra-wide frequency spectrum in a defined band, in our case the baseband up to approximately 6 GHz. Individual sensors (i.e. sensor nodes) may be connected by different interfaces into the UWB-SN. These interfaces can be classified according to their use as wireless (RF communication modules operating in unlicensed ISM bands) and wired. The wireless interfaces transmit mostly synchronization pulses, because the devices in the UWB sensor network require proper synchronization for their correct function. Wired interfaces, on the other hand, distribute the clock signal to all nodes, and synchronization is performed by each node autonomously.

The proper selection of UWB sensors (i.e. radar devices) belongs to the key decisions taken within UWB-SN development. We chose the M-sequence UWB radar as the sensor of the UWB-SN [9]. Experimental testing of this UWB sensor and the developed data processing methods has shown that their combination provides target localization with very good reliability and accuracy. Additionally, this M-sequence UWB sensor offers interesting operational properties as well. Its operational parameters, such as the operating bandwidth, range resolution, and unambiguous range, can be controlled by the system clock frequency; hence, the same radar system can be simply adapted to different scenarios. As already mentioned, thanks to the difference between the system clock frequencies (only a few kHz), M-sequence UWB radars using the same M-sequence can operate simultaneously while their mutual interference remains negligible. They provide good performance even under narrowband interference. This property is very beneficial especially if the sensor is used in an environment where other sources of electromagnetic radiation coexist. All these properties of M-sequence UWB radar systems have been exploited to advantage within the UWB-SN development.

2.1.2 Basic UWB Sensor Node Design

The basic configuration of the UWB sensors used nowadays (depicted in Fig. 6) is the result of the development of M-sequence UWB devices [37, 38]. We build our SN node around the UWB M-sequence radar front-end SH-3100 m:explore. The scheme of the UWB sensor can be divided into the following blocks according to their functions:

Fig. 6

Basic block diagram of a UWB sensor equipped with the 15 bit LFSR

  • Transmitter

    • Input and output amplifiers connected to the clock signal input and the M-sequence generator output (m:explore: −7 dBm RF output power).

    • The M-sequence generator of the stimulation signal. It may include a 9- or 12-bit (in the latest version, 15-bit) LFSR. In the case of a 9-bit LFSR, the range covers several meters. If the 15-bit version is used, the sensing range can reach tens to hundreds of meters. The sensor resolution can be adjusted by the clock frequency, and even a resolution of tenths of a millimeter can be achieved under optimal conditions. Because of their sensing range, these UWB sensors are also classified as short-range devices (m:explore: 9th-order M-sequence/511-chip UWB transmit signal).

    • Synchronization circuits with binary divider (m:explore 13.312 GHz system clock rate).

    • Modulator used for optional modulation of the output signal (m:explore 0.1–6.0 GHz UWB bandwidth in baseband operation).

  • Receiver or a pair of receivers

    • Similarly to the transmitter, the receiver has amplifiers on its inputs and outputs. However, a cascade of a low-noise amplifier (LNA), buffers, and other front-end circuits is placed at the input of the receiver (m:explore: RF input power 0 dBm maximum signal level, input 1 dB compression point P1dB ≈ −15 dBm; signal levels above −24 dBm may cause nonlinearities; instantaneous dynamic range > 135 dB(s)).

    • Wideband sampling gates with a track-and-hold amplifier (THA) responsible for tracking and measurement of the instantaneous input signal level, i.e. equivalent-time sampling.

The parts of the transmitter and the receiver are described in more detail in [9]. The actual device concept depends on the given application and the working conditions of the sensor. For example, through-wall measurements and object localization in 2D space use the conventional block schematic shown in Fig. 6. For other scenarios, e.g. a sensor network for 3D mapping, n such sensor topologies can be used; alternatively, n receivers, or n receivers and m transmitters, can be used [39].

2.1.3 Modifications of UWB Sensor Devices

As mentioned above, UWB sensors, being wideband devices, are basically multi-purpose and principally modular devices, both in terms of the clock-signal time-base settings and in terms of the number of transmitters and receivers; e.g. more transmitters and receivers can be used in a Multiple Input Multiple Output (MIMO) configuration. Such systems can be applied mainly in distributed configurations of sensors, for example to improve the monitoring of moving targets and its accuracy in complex environments. Thanks to the relatively simple structure and the robust timing and synchronization system, it is easy to extend the devices to an arbitrary number of input and output channels and therefore to an arbitrary number of sensor network nodes. The solution is depicted in Fig. 7. This type of modified UWB sensor node has been successfully used as a wideband system for non-invasive scanning in medical applications [40].

Fig. 7

Block diagram of a multi-channel structure of the UWB sensor

However, the standard structure of the sensors with a single transmitter, extended with low-pass filters for UWB systems [41], and two or more receivers, as shown in Fig. 6, is used in most cases. The basic version of the UWB sensor can be extended by circuits for wireless synchronization between the transmitter and receiver. The synchronization over a wireless link (Fig. 8) adds a degree of freedom in terms of arbitrary positioning of the UWB sensor transmitter and receiver. The main problems of such systems are contact interruptions and interference from external sources, which cause difficulties in narrowband channels. A possible solution for the realization of such a sensor could be the use of narrowband microwave systems combined with a narrow radiation angle of the antennas used for synchronization. Moreover, if the transmitter moves with respect to the receiver, it is necessary to ensure stable visibility conditions. Such a system could be used under laboratory conditions, but it is not as straightforward in real environments. In this case, the solution for the synchronization of two sensors described in the next section is more viable.

Fig. 8

Transmitter and receiver synchronization by the wireless link

2.2 Architecture of Embedded UWB Sensor Node

We build our SN around an ARM-based quad-core microcomputer (ARM-MC) with an optimized Linux operating system. It is connected to the UWB M-sequence radar front-end SH-3100 m:explore (UWB radar) described in the previous sections and to a specialized communication microcontroller with an RF transceiver, as shown in Fig. 9.

Fig. 9

Sensor node block diagram. ARM-MC is connected to the MCU-RF via standard UART interface. The M-sequence UWB radar is connected by the standard USB interface

The ARM-MC controls the UWB radar, performs all computationally intensive signal processing tasks executed by the SN, and communicates with the CN via the MCU-RF. The UWB radar front-end is connected to the ARM-MC via the standard USB port, as shown in Fig. 10, where the complete SN hardware is depicted. The MCU-RF, communicating in the short-range devices (SRD) RF band, is connected to the ARM-MC via a standard UART interface.

Fig. 10

Complete UWB sensor node hardware with M-sequence UWB radar (SH-3100 m:explore), signal processing core and RF communication unit with RF antenna for SRD frequency band, 2 receiving and 1 transmitting UWB antennas

The UWB radar front-end monitors the area of interest and provides a continuous series of impulse responses (raw radar data) of the environment through which the electromagnetic waves transmitted by the radar propagate from the transmitting to the receiving antennas of the sensor. The ARM-MC receives the raw radar output data, computes the impulse responses by deconvolution of the raw radar data with the transmitted M-sequence (using an optimized fast Hadamard transform [42]), and estimates the coordinates of each detected person/object by executing the algorithms described in the next subsection. The MCU-RF is responsible for reliable low-rate real-time communication with the CN in the SRD RF band. All main SN components (with the exception of the UWB radar described in the previous section) are based on off-the-shelf low-cost development boards. The main hardware and software features of these SN components are summarized in the following subsections.
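The core operation of this deconvolution is the fast (Walsh-)Hadamard transform. The in-place butterfly below is a minimal sketch of that transform for a power-of-two length; the input/output permutations that match the Hadamard matrix rows to the specific M-sequence, and the extension of the 2^N − 1 chip sequence to length 2^N, are omitted here and belong to the optimized implementation of [42, 45].

```c
#include <stddef.h>

/* In-place fast Walsh-Hadamard transform of length n (n must be a power
 * of two). In M-sequence radar processing this butterfly replaces the
 * O(M^2) correlation with an O(M log M) computation; the chip- and
 * sample-permutation stages that adapt it to a particular M-sequence
 * are omitted in this sketch. */
void fwht(float *a, size_t n)
{
    for (size_t len = 1; len < n; len <<= 1) {
        for (size_t i = 0; i < n; i += (len << 1)) {
            for (size_t j = i; j < i + len; j++) {
                float u = a[j];
                float v = a[j + len];
                a[j]       = u + v;
                a[j + len] = u - v;
            }
        }
    }
}
```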

2.2.1 Signal Processing Core with Embedded Linux

The ARM-MC signal processing core is based on the low-cost 8.5 cm × 5.6 cm embedded Raspberry Pi 3 Model B minicomputer [43] equipped with a Broadcom BCM2837 system on chip (SoC). The SoC includes a 64-bit quad-core ARM Cortex-A53 (ARMv8) CPU running at 1.2 GHz, a 32-bit VideoCore IV GPU running at 400 MHz, 1 GB of LPDDR2 RAM, and several standard interfaces including USB, HDMI, Ethernet, a microSD card interface, UARTs, and GPIOs.

The separation of data processing (performed by the CPU) and communication/synchronization (performed by the MCU) significantly simplified the SN software development. In order to simplify the SN software development further and to ensure simple portability to alternative signal processing core hardware platforms in the future, we decided to use an operating system (OS). We chose Yocto Linux [44]. Yocto Linux is a minimalistic Linux distribution with a very small footprint; it is built for the target embedded platform rather than distributed as a pre-compiled image. We used a minimal Yocto image with just a USB storage driver installed. We also developed a custom USB driver for the M-sequence UWB radar front-end. The Yocto Linux, the signal processing functions, and the complete custom sensor node functions occupy less than 40 MB and are stored on the SD card. The OS and processing functions are loaded into the ARM-MC DRAM memory during the sensor node start-up and booting phase.

The data processing algorithms are executed by the quad-core CPU of the ARM-MC. We implemented all the processing blocks shown in Fig. 11 as portable ANSI C functions using the standard GCC compiler.

Fig. 11

Hardware blocks and signal processing software blocks embedded in UWB sensor node

This provides high flexibility for cross-platform software development and its future extensions and optimizations. We implemented the signal processing functions as dynamically linkable libraries, so they can be easily updated or modified in the final sensor node system. The measured impulse responses are obtained from the Correlation block, which performs deconvolution of the received UWB raw radar signals using a highly optimized Hadamard transform and signal component permutations [45]. The Background subtraction block extracts the weak dynamic signals representing moving persons or objects from the static reflections. This operation significantly improves the input signal-to-noise ratio, which is required for the proper operation of the subsequent signal processing algorithms. The Detection, Localization, and Tracking blocks execute a series of operations with the goal of locally estimating the [x, y, z] coordinates of the moving targets in the analyzed area.

All computations are performed in 32-bit floating-point arithmetic using the quad-core CPU of the ARM-MC. The implemented software also allows the computed impulse responses to be stored locally in DRAM memory in real time and transferred to the CN in off-line mode. This mode of operation is useful for the development and testing of optimized signal processing algorithms that require records of real-time data from several SNs.

The source codes and compiler settings for building the ARM-MC software are optimized to exploit the single instruction multiple data (SIMD) based NEON extension available in ARMv8 CPUs. Thanks to the multi-core architecture and the 1.2 GHz clock frequency of the CPU, the real-time data processing capability of the hardware is not a limiting factor, and there is even a reserve for more complex algorithms to be implemented in the future.

2.2.2 Communication and Synchronization Interface of Sensor Node

In the proposed UWB-SN, data are transmitted from each SN to the CN for further signal processing. Each SN transmits the [x, y, z] coordinates of up to 10 detected moving persons/objects (a limit defined in our current software implementation). The coordinates are expressed as 12-bit numbers, so each SN transmits up to 240 bits per IR. For 15–25 IR/s, we need to transmit 3600–6000 bits/s. We use the license-free narrow-band SRD band 868–870 MHz for this low-rate communication. Since the used RF communication channel can introduce errors, we use a CRC check of the received packets and an automatic repeat request protocol for error correction of corrupted or lost packets. Since the SNs are synchronized using the RF control return channel shown in Fig. 1, we use only one forward RF channel (created by aggregating several narrow-band channels), which all SNs share using time division multiplex (TDM) access.
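To illustrate why the required data rate stays this low, the sketch below packs the 12-bit target coordinates of one IR into a byte payload and computes a CRC over it. The packet framing, the CRC polynomial (CRC-16-CCITT is assumed here), and the field order are illustrative assumptions, not the actual on-air protocol of the MCU-RF firmware.

```c
#include <stdint.h>
#include <stddef.h>

/* Pack 12-bit coordinate words into a byte buffer, two values per 3 bytes.
 * buf must hold at least 3 * ((n + 1) / 2) bytes. Returns the payload
 * length in bytes. */
size_t pack_coords(const uint16_t *coords, size_t n, uint8_t *buf)
{
    size_t o = 0;
    for (size_t i = 0; i < n; i += 2) {
        uint16_t a = coords[i] & 0x0FFF;
        uint16_t b = (i + 1 < n) ? (coords[i + 1] & 0x0FFF) : 0;
        buf[o++] = (uint8_t)(a >> 4);                 /* top 8 bits of a    */
        buf[o++] = (uint8_t)((a << 4) | (b >> 8));    /* low 4 of a, top 4 of b */
        buf[o++] = (uint8_t)(b & 0xFF);               /* low 8 bits of b    */
    }
    return o;
}

/* CRC-16-CCITT (polynomial 0x1021, init 0xFFFF), assumed here as the
 * integrity check appended to the payload before transmission. */
uint16_t crc16_ccitt(const uint8_t *data, size_t len)
{
    uint16_t crc = 0xFFFF;
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)data[i] << 8;
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}
```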

As RF transceivers, we use Analog Devices ADuCRF101 MCU-RF devices. The ADuCRF101 integrates a powerful RF transceiver for the SRD frequency band with an ARM Cortex-M3 MCU [46]. The integrated MCU runs our custom firmware, which performs data buffering, data transmission and reception, and error detection and correction. This MCU runs in parallel with the ARM-MC and provides great flexibility and modularity for SN development.

The locally estimated target [x, y, z] coordinates computed by the ARM-MC are sent to the MCU-RF via a standard UART interface, as shown in Fig. 9. The MCU-RF also performs wireless synchronization of the SNs using pilot signals (special synchronization packets) transmitted by the CN via the RF interface. The ARM-MC and MCU-RF shown in Fig. 9 are integrated into a compact metallic case.

The MCU-RF firmware is developed as a bare-metal application using Keil MDK [47], the development environment for ARM Cortex devices. The use of a separate MCU for RF communication allows real-time synchronization of the SNs with a precision better than 100 µs, which is much better than required for the proper detection and tracking of moving targets. Future firmware extensions can be easily added, as the RF communication represents a transparent communication channel for the ARM-MC.

2.2.3 Communication and Synchronization Interface of Central Node

Our UWB-SN uses the star topology with one CN and several UWB SNs, as shown in Fig. 1. The CN works as a master and the SNs are slaves in the proposed network. The data fusion is implemented in the CN using a standard personal computer (PC) with an external communication and synchronization module. Due to the high software modularity of our main signal processing hardware shown in Fig. 12, we use the same hardware as the communication and synchronization front-end of our CN. The functionality of this front-end is changed by modifying the Yocto Linux application and the MCU-RF firmware so as to provide the required behaviour.

Fig. 12

Main signal data processing hardware components of the developed sensor node placed in the metallic case: Raspberry Pi quad-core ARMv8 CPU-based microcomputer (right), ADuCRF101-based RF transceiver with antenna for communication in the SRD frequency band (left)

3 Wireless UWB Sensor Network Application

In this case, the UWB sensor was designed mainly for the localization and tracking of a robot arm (or its predefined part) moving in a defined space in the [x, y, z] coordinates behind an obstacle. This solution has high application potential, especially in situations where conventional sensors have a high failure probability or where their functionality may degrade because of a non-cooperative environment with e.g. low visibility or dust, or even in case of fire. Another advantage of the proposed solution is that the UWB sensor can be located outside the working area of the robot arm and does not require direct visibility.

For this application scenario, a configuration of UWB sensors exploiting four receiving and two transmitting antennas was used. A pair of receiving antennas was connected to the first SN, which monitored the moving target in the horizontal plane. The other pair of receiving antennas was connected to the second SN, which observed the target movements in the vertical direction. Two transmitting antennas radiating the illumination signal were connected to the two SNs as well. Further details concerning their arrangement will be described later in this chapter. In this way, the sensor network was created from two UWB radars which were strictly synchronized to allow further processing of the received signals. Finally, the data were combined and the resulting movement of the robot arm in the spatial coordinates was displayed. By connecting an additional sensor into the UWB-SN, it is possible to effectively improve the target detection and localization by widening the observation angle under which the target is identified. Such a sensor network is able to exploit the spatial diversity of the target scattering to decrease the complexity of the monitored environment.

3.1 Radar Signal Processing

In the case of robot monitoring using UWB sensors, the raw radar signals (data) gathered by such sensors are represented by a set of IRs of the environment through which the electromagnetic waves emitted by the radar propagate from the transmitting to the receiving antenna. This set of IRs is usually referred to as a radargram [9, 48]. The robot coordinates, trajectory (localization results), and track (tracking results) can then be obtained by sequential processing of the particular impulse responses of the radargram.

An analysis of a robot echo has shown that such signals represent non-stationary components of the raw radar signals. The robot echo can therefore be revealed by detecting the time changes between adjacent impulse responses of the radargram, which are due to the non-stationarity of the target echo. In the scenario discussed in this chapter, the radar range resolution (about 1–3 cm) is finer than the physical dimensions of the robot (about 100 cm) to be tracked. The robot body can thus be considered a distributed target, and hence the UWB sensor can receive several backscatters from the robot body. If we summarize these findings and compare the scenario of robot tracking with that of tracking a moving person (e.g. [48]), we can see that robot tracking by a UWB sensor is very similar to the tracking of a single moving person. Therefore, for robot tracking, the radar signal processing procedure originally introduced for the tracking of a single moving person (e.g. [29, 48]) can be directly applied. In the following, this procedure adapted for robot tracking by a single UWB sensor will be referred to as the radar signal processing procedure for robot tracking (RTP). Its block diagram for the case of a single UWB sensor is sketched in Fig. 13.

Fig. 13

Block schematic diagram of the overall procedure: system setup, preparing input data, data processing and final output

The RTP consists of six basic signal processing phases: background subtraction, target detection, time-of-arrival (TOA) estimation including TOA association, wall effect compensation, target localization, and target tracking. The particular phases are implemented using appropriate signal processing methods. In the following parts of this section, we would like to provide the reader with a short outline of this procedure. With that intent, the significance of the particular phases of the RTP will be outlined, and lists of the signal processing methods most frequently used within the particular phases will be given. Because of the complexity of the discussed procedure of moving-target observation, its detailed description is beyond the scope of this chapter and hence will not be presented here. The reader can find its comprehensive description especially in [29, 48].

3.1.1 Background Subtraction

The analysis of raw radar data has shown that it is impossible to directly identify any moving targets in the obtained radargrams. This stems from the fact that the target echo-to-noise-and-clutter ratio (ENCR) is usually very low, and hence simple signal processing methods cannot be used directly to detect the moving robot. To detect such a target, the ENCR has to be increased. For that purpose, background subtraction methods can be used. They help to reject, in particular, stationary and correlated clutter, such as antenna coupling, impedance mismatch responses, and ambient static clutter, and in this way they make it possible to detect the moving-target echo. The analyses of different background subtraction methods performed in [29] have shown that, thanks to its good performance, high robustness, and low computational complexity, the method of exponential averaging can be used for the background subtraction with advantage.
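A minimal sketch of exponential averaging is given below: the background estimate is an exponentially weighted average of past impulse responses and is subtracted from the current one. The forgetting factor alpha is an illustrative parameter of this sketch, not the setting used in the reported experiments.

```c
#include <stddef.h>

/* Exponential-averaging background subtraction applied to one impulse
 * response of m samples. background[] holds the running estimate updated
 * as b <- alpha*b + (1-alpha)*ir; out[] receives the motion component
 * ir - b. alpha close to 1 means a slowly adapting background. */
void background_subtract(const float *ir, float *background, float *out,
                         size_t m, float alpha)
{
    for (size_t k = 0; k < m; k++) {
        background[k] = alpha * background[k] + (1.0f - alpha) * ir[k];
        out[k] = ir[k] - background[k];
    }
}
```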

3.1.2 Detection

Detection is the next phase of the RTP, following the background subtraction. Detection methods analyse the radargram with the subtracted background and decide whether a signal scattered by a moving target is present or absent in the analysed impulse response. The most important group of detectors applied in radar signal processing is represented by constant false alarm rate (CFAR) detectors. They are based on the Neyman-Pearson optimum criterion, which provides the maximum probability of detection for a given false alarm rate. It is well known that there are many varieties of CFAR detectors [49, 50]. In [51], a CFAR detector developed especially for UWB radar signal processing has been proposed. In spite of its simple structure and the assumption of a Gaussian clutter model, it has proven to have very good and robust performance in many scenarios of moving person detection [29]. That is the reason why we have also applied it within the detection phase of the RTP.
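For illustration, a cell-averaging CFAR sketch over one background-subtracted impulse response is shown below. The specific detector of [51] differs in its clutter model and threshold derivation; the window sizes and the scaling factor here are placeholder parameters.

```c
#include <stddef.h>

/* Cell-averaging CFAR over the squared samples of one impulse response.
 * For each cell, the noise level is estimated from 'train' reference
 * cells on each side, skipping 'guard' cells around the cell under test;
 * detections[k] is set to 1 if the cell power exceeds scale * noise
 * estimate. Window sizes and scaling factor are illustrative only. */
void ca_cfar(const float *x, unsigned char *detections, size_t m,
             size_t guard, size_t train, float scale)
{
    for (size_t k = 0; k < m; k++) {
        double noise = 0.0;
        size_t count = 0;
        for (size_t i = 1 + guard; i <= guard + train; i++) {
            if (k >= i)    { noise += (double)x[k - i] * x[k - i]; count++; }
            if (k + i < m) { noise += (double)x[k + i] * x[k + i]; count++; }
        }
        double cut = (double)x[k] * x[k];              /* cell under test */
        detections[k] = (count > 0 && cut > scale * noise / count) ? 1 : 0;
    }
}
```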

3.1.3 TOA Estimation

The TOA can be defined as the time interval necessary for an electromagnetic wave to propagate along the path transmitting antenna Tx – target – receiving antenna Rx. As mentioned above, the moving robot can be considered a distributed target. As a result, the detector output for a distributed target can be complex, because several backscatters corresponding to the same target can be detected, with a unique TOA estimated for each detected backscatter. In order to handle this issue, an effective algorithm of TOA estimation for a distributed target has been proposed in [52]. Its basic idea consists in substituting the distributed target with a suitable point target; the position of the distributed target can then be determined using the same approach as for a point target. Note that this algorithm, referred to as the trace connection method, provides not only the TOA estimation for the distributed target but also the association of the data received from the two receiving channels and a deghosting operation essential for multiple-target detection and tracking scenarios. Because of the complexity of the trace connection method, its detailed description is beyond the scope of this chapter and hence will not be presented here. The reader can find its detailed description in [52].

3.1.4 Wall Effect Compensation

The propagation of electromagnetic waves through non-metallic obstacles (e.g. through a wall) results in a delay of the signals reflected by targets moving behind the obstacle, which means that the TOAs estimated in the previous phase of the radar signal processing are time-shifted because of the obstacle/wall presence. Their correction can be achieved by subtracting the mentioned delay, whose estimation is the task of the wall effect compensation phase [53]. The methods referred to as the trace correction of the first and second kind [53] provide promising results in this area. The trace correction of the first kind is a very simple but quite efficient approach to wall effect compensation. The trace correction of the second kind, on the other hand, is a more complex but more efficient solution.

Our experience with these methods has shown that the trace correction of the first kind represents a good trade-off between complexity and performance. Therefore, this method is usually applied for the compensation of the wall effect. For its utilization, the wall parameters, such as its permittivity, permeability, and thickness, have to be known in advance. Here, we would like to note that these parameters can be estimated very efficiently, e.g. by UWB radar, using the method described in [54]. If the walls considered in the analysed scenario are thin and have a small relative permittivity, the wall effect can be treated as negligible. Under such conditions, no wall effect compensation methods have to be employed.
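As a first-order illustration (an assumption for normal incidence through a homogeneous wall, not the exact correction of [53]), a wall of thickness \( d_{w} \) and relative permittivity \( \varepsilon_{r} \) adds to the measured TOA an extra delay of approximately

$$ \Delta t \approx \frac{d_{w} \left( \sqrt{\varepsilon_{r}} - 1 \right)}{c} $$

which the trace correction subtracts from the TOA estimates of targets located behind the wall.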

3.1.5 Target Localization

The aim of the target localization phase is to determine the target coordinates in a defined coordinate system. Because the considered UWB sensor is equipped with one transmitting and two receiving antennas, the target coordinates can be obtained using the direct computation method [29], which represents a form of multilateration.
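A minimal sketch of such a direct computation is given below. It assumes a particular antenna placement (Tx at the origin, both Rx antennas on the x-axis, target in the half-plane y > 0), which is an illustrative choice and not necessarily the layout analysed in [29, 55].

```c
#include <math.h>

/* Direct computation of target coordinates from two bistatic range
 * measurements r1 = |Tx-T| + |T-Rx1| and r2 = |Tx-T| + |T-Rx2|
 * (r = c * TOA). Assumed geometry: Tx at the origin, Rx1 at (x1, 0),
 * Rx2 at (x2, 0), target in the half-plane y > 0. Returns 0 on success,
 * -1 if the measurements are inconsistent with the geometry. */
int localize_direct(double r1, double r2, double x1, double x2,
                    double *x, double *y)
{
    if (r1 <= 0.0 || r2 <= 0.0)
        return -1;
    double denom = 2.0 * (r2 * x1 - r1 * x2);
    if (fabs(denom) < 1e-9)
        return -1;
    /* Eliminate the Tx-target distance d0 from the two ellipse equations. */
    double xs = (r1 * (r2 * r2 - x2 * x2) - r2 * (r1 * r1 - x1 * x1)) / denom;
    double d0 = (r1 * r1 - x1 * x1 + 2.0 * x1 * xs) / (2.0 * r1);
    double ys2 = d0 * d0 - xs * xs;
    if (d0 <= 0.0 || ys2 < 0.0)
        return -1;
    *x = xs;
    *y = sqrt(ys2);
    return 0;
}
```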

The analysis of target localization by a UWB radar equipped with one transmitting and two receiving antennas based on the direct computation method [55] has confirmed that the distance between the transmitting and receiving antennas, the TOA estimation error, and the target position in the monitored area are the key factors determining the localization error. It has been shown in [55] that the localization error can range from zero up to several meters, even for very simple scenarios. These conclusions indicate that cooperative localization of the target based on a UWB sensor network (UWB-SN) could be a promising solution for improving the accuracy of the target localization [23, 25].

The key issue of target localization by a UWB-SN lies in the fusion of the data obtained by the particular sensor network nodes. The outputs of the nodes can be represented by the estimated target coordinates or by sets of estimated TOAs. In the case of moving robot localization (single-target localization), four basic approaches to the fusion of the data obtained by the particular nodes can be applied: a simple arithmetic average of the estimated target coordinates provided by the particular nodes, the method of joining intersections of the ellipses (JIEM) [36], the time-of-arrival complementing method (TOACOM) [56], and multiple target tracking (MTT) systems [56, 62].

3.1.6 Target Tracking

Target tracking provides a new estimate of the target location based on its previous positions. Usually, target tracking results in a decrease of the target trajectory error, including trajectory smoothing [48]. Note that the result of the target tracking phase is usually referred to as a target track.

Most tracking systems utilize a number of basic and advanced modifications of Kalman filters, such as linear, nonlinear, and extended Kalman filters, and particle filters [57, 58]. Besides Kalman filter theory, more complex systems such as single and multiple target tracking systems can be used for target tracking [29, 50, 62]. Note that, unlike tracking filters, tracking systems also solve tasks such as target-to-track association, target track estimation, and target track maintenance. Our experience with tracking systems has shown that a single target tracking (STT) system or an MTT system can be a strong tool for target tracking and for the data fusion employed by the UWB-SN.
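To make the role of the tracking filter concrete, the sketch below implements a linear Kalman filter with a constant-velocity model for 2D target coordinates (the x and y coordinates are tracked by two independent filters). The state layout, the time step, and the noise variances are illustrative assumptions and do not reproduce the STT/MTT systems of [29, 50, 62].

```c
#include <stdio.h>

/* One-dimensional constant-velocity Kalman filter (state: position p and
 * velocity v). Two independent instances track the x and y coordinates.
 * dt, process noise q, and measurement noise r are illustrative values. */
typedef struct {
    double p, v;          /* state estimate   */
    double P[2][2];       /* state covariance */
} kf1d_t;

static void kf1d_init(kf1d_t *kf, double p0)
{
    kf->p = p0; kf->v = 0.0;
    kf->P[0][0] = 1.0; kf->P[0][1] = 0.0;
    kf->P[1][0] = 0.0; kf->P[1][1] = 1.0;
}

/* Predict with p <- p + v*dt, then correct with the position measurement z. */
static void kf1d_step(kf1d_t *kf, double z, double dt, double q, double r)
{
    /* Prediction: P <- F P F^T + Q with F = [[1, dt], [0, 1]], Q = q*I */
    kf->p += kf->v * dt;
    double P00 = kf->P[0][0] + dt * (kf->P[1][0] + kf->P[0][1])
                 + dt * dt * kf->P[1][1] + q;
    double P01 = kf->P[0][1] + dt * kf->P[1][1];
    double P10 = kf->P[1][0] + dt * kf->P[1][1];
    double P11 = kf->P[1][1] + q;
    /* Update with the position measurement z (H = [1, 0]) */
    double S = P00 + r;
    double K0 = P00 / S, K1 = P10 / S;
    double innov = z - kf->p;
    kf->p += K0 * innov;
    kf->v += K1 * innov;
    kf->P[0][0] = (1.0 - K0) * P00;   kf->P[0][1] = (1.0 - K0) * P01;
    kf->P[1][0] = P10 - K1 * P00;     kf->P[1][1] = P11 - K1 * P01;
}

int main(void)
{
    kf1d_t kx, ky;
    kf1d_init(&kx, 0.0);
    kf1d_init(&ky, 0.0);
    /* Example: feed a few hypothetical localization outputs [x, y] at 20 IR/s. */
    double meas[5][2] = { {0.02, 0.01}, {0.05, 0.03}, {0.11, 0.04},
                          {0.14, 0.07}, {0.21, 0.09} };
    for (int k = 0; k < 5; k++) {
        kf1d_step(&kx, meas[k][0], 0.05, 1e-3, 1e-2);
        kf1d_step(&ky, meas[k][1], 0.05, 1e-3, 1e-2);
        printf("track: x=%.3f y=%.3f\n", kx.p, ky.p);
    }
    return 0;
}
```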

3.2 Experiments with UWB Sensor Network

In various measurement scenarios with the UWB radar, a different number and different types of receiving antennas can be used [59]. Two receivers (with four Rx antennas in total) were used for better signal processing in the localization of the target of interest [60]. In this way, we gain more flexibility for better recognition of every movement of the robot gripper in 2D or 3D space (its [x, y, z] coordinates [61]) in the monitored area.

For a better description of the scenario, refer to Fig. 14, where the UWB radar, the antennas, the simulated obstacle, and the robot arm can be seen. The basic configuration of the measured scenario is shown in Fig. 14 (left); the robot gripper was moving within a simulated industrial process running on a small model of a production line, as seen in Fig. 14 (right). The UWB radar, i.e. the sensor, was situated in three different positions: on the top of, in front of, and on the side of the moving robot gripper, each at the same distance from the base of the robot. At each position, the antennas were placed at a distance of 1.2 m, and the distance between the transmitting and receiving antennas was 0.4 m. Hence, the actual measurement setup was equipped with nine antennas, arranged according to the measurement scenario (see Fig. 14). However, for the purpose of experimental testing, a sensor network of only two UWB sensors (in front and on the side) was created.

Fig. 14

Basic sketch of measured scenario (left). Through-wall experimental sensor network application based on the M-sequence UWB radar system (right)

The robot gripper moved between two reference points P1 and P2 determined by its control software. During the robot gripper operation, the control program was intentionally modified so that two movements exhibited a small deviation. During the measurements, 10 periods were recorded, and the recording time was about 40 s. Three different kinds of measurement utilizing the same simulated process under test were performed, specifically two horizontal and one vertical measurement of the robot gripper movement. The results presented below concern only the horizontal robot movement; similar results were obtained for the vertical movement as well. The purpose of the experimental testing was the validation of the proper functionality and performance of our UWB-SN (i.e. of the embedded signal processing hardware in each SN and of the real-time capability of the narrow-band RF communication between the SNs and the CN) for short-range tracking of the moving robot gripper.

The described measurement approach is based on data processing tools that use M-sequence active sensing for the surveillance system. This method uses a sophisticated algorithm for the detection and tracking of targets moving in a specific area. The process of detection and localization of the target in a single [x, y] plane requires preprocessed data. The results of the processing are shown in Figs. 15, 16, 17 and 18.

Fig. 15

Raw radar data from 1st UWB sensor located in front of the moving robot gripper

Fig. 16

Raw radar data from 2nd UWB sensor located on the side of the moving robot gripper

Fig. 17

Preprocessed data after background subtraction and normalization from 1st UWB sensor located in front of the moving robot gripper

Fig. 18

Preprocessed data after background subtraction and normalization from 2nd UWB sensor located on the side of the moving robot gripper

In the first step, raw radar data from both receivers were gathered. The data from both receivers should have the same parameters and hence no significant differences, because the system monitors the same scenario under test. Data from the two receivers Rx1 and Rx2 (for the two horizontal systems, in front of and on the side of the moving robot gripper) are depicted. In Figs. 15, 16, 17 and 18, the radar data are shown in the form of radargrams. Figures 15 and 16 represent the raw received data without any processing, in front of and on the side of the moving robot gripper, respectively. A lot of reflections from static objects can be observed, and it is very hard to identify patterns that indicate any discontinuities in the motion of the robot gripper. In Figs. 17 and 18, significantly better results obtained after applying the background subtraction algorithm can be seen. The data are also depicted after normalization to the range from −1 to 1, where −1 represents the minimal signal energy that can be detected by the UWB radar and 1 is the maximum of the range.

These pictures show partial results of the signal processing, illustrating the periodic movements of the robot gripper. In both cases (perpendicular and parallel), the result of the remote observation of the robot gripper operating with simulated deviations is shown. During the robot gripper operation, the control program was intentionally modified so that two movements of the robot gripper exhibit a small deviation of approximately 0.02 m. The deviation was set at the sixth and eighth movement periods. The two samples caused by the simulated movement error are more prominent in Figs. 15 and 17, but they are visible in Figs. 16 and 18 as well, at the 6th and 8th period of the robot gripper movement. These periods can be recognized as a change in the position of the main peak of the impulse response. In a deeper analysis, the amplitude, width, and timing of the rising and falling edges of the IR may also be taken into account for further evaluation.

After gathering and preprocessing the raw data, the data processing toolbox with the additional algorithms described in Sect. 3.1 is used (see Figs. 19, 20, 21 and 22). As is evident from the previous figures, the data from the two receivers Rx1 and Rx2 show no significant differences. Therefore, data from only one receiver (for both scenarios, in front of and on the side of the moving robot gripper) are depicted in the following experimental results. In Fig. 19, the outputs of the constant false alarm rate (CFAR) detector are shown. Here the measured data are converted to binary values. In this step, it is necessary to set the sensitivity of the algorithm by setting the decision level of the CFAR detector. From the binary data, the time of arrival (TOA) of the signal is calculated (see Fig. 20).

Fig. 19

Raw radar data after applying CFAR detector. From 1st UWB sensor located in front of the moving robot gripper (left). From 2nd UWB sensor located on the side of the moving robot gripper (right)

Fig. 20

Data after applied TOA algorithm. From 1st UWB sensor located in front of the moving robot gripper (left). From 2nd UWB sensor located on the side of the moving robot gripper (right)

Fig. 21

Target localization. From 1st UWB sensor located in front of the moving robot gripper (left). From 2nd UWB sensor located on the side of the moving robot gripper (right)

Fig. 22

Tracking of the robot gripper movement obtained from the tracking algorithm. From 1st UWB sensor located in front of the moving robot gripper (left). From 2nd UWB sensor located on the side of the moving robot gripper (right)

The estimated target coordinates after time-synchronization and transformation into the common coordinate system are depicted for each sensor separately in Fig. 21. Afterwards, the target track (see Fig. 22) can be calculated. Tracking of the robot gripper in a magnified view is shown in Fig. 23. As can be seen, high-resolution and stable tracking of the moving target is confirmed. Nevertheless, a considerable deviation in the tracking of the moving robot gripper is also visible. It could have been caused by the very complex, non-cooperative environment in which the robot gripper moved. These processed data are used for testing and further evaluation of new algorithms and hardware developed in our group. The performance of our UWB-SN for short-range tracking of a moving target was tested by comparing the target coordinates received by the CN with the reference results saved in the control software of the robot gripper movement. Hence, for a better comparison, the real movement between the two endpoints with known predefined coordinates (grey points P1 and P2) is also shown in Fig. 24 (right). By comparing the known robot arm trajectory between the given endpoints with the trajectory estimated by the tracking algorithms, it is possible to assess the accuracy of the described method.

Fig. 23

Detailed view of the tracking of the robot gripper movement obtained from the tracking algorithm. From 1st UWB sensor located in front of the moving robot gripper (left). From 2nd UWB sensor located on the side of the moving robot gripper (right)

Fig. 24

Experimental confirmation of the robot gripper movement estimation (left) and its detailed view (right) by data fusion method [24], from both 1st UWB sensor and 2nd UWB sensor located in front of and on the side of the moving robot gripper, respectively

Note that the described experimental results were acquired by the UWB-SN working with real-time wireless communication channels from the SNs to the CN. All computations executed in the embedded processing core of each SN were also performed in real time. Only the CN results shown in Fig. 24 were computed off-line using the Matlab UWB-processing toolbox developed by our research group. These results demonstrate the significantly improved tracking estimation of the proposed UWB-SN approach.

4 Conclusion

Robots have become a core element of Industry 4.0, and flexibility can be incorporated into them by sensor technologies in order to meet the requirements and functionalities of new applications. It is necessary to increase accuracy and collaborative work with humans, which means making decisions in real time. For these purposes, visual feedback is the key issue, and 3D machine vision is therefore the future of robotics. In this chapter, the basic idea of wireless UWB sensing has been explained. Digital signal processing and all parts of the whole UWB system have been described as well. From the experimental results, it can be seen that the chosen hardware and the methods of radar signal processing work very well. In future work, a more complex wireless UWB sensor network with more UWB sensors performing 3D measurements and advanced data processing for a moving robot cooperating with humans will be deployed and tested.

This chapter describes wireless UWB-SN hardware for robot gripper monitoring behind an obstacle. It uses a star topology with a centralized CN. Each SN uses an M-sequence wireless UWB radar front-end with a 12th-order pseudo-random M-sequence developed by ILMSENS GmbH, as well as an embedded controller. The radar signal pre-processing and the locally executed signal processing algorithms are performed by the 64-bit quad-core ARM Cortex-A53 CPU clocked at up to 1.2 GHz available on the Raspberry Pi 3 ARM-MC. The ARM-MC runs a Yocto Linux OS optimized for embedded systems. The implemented radar signal processing algorithms are linked as dynamic libraries and can be easily modified in future development.

Data streams of compressed target coordinates are sent from each SN to the CN using narrowband FSK modulation in the standard SRD radio frequency band (868–870 MHz), which is a feature of the MCU-RF. The MCU-RF also performs wireless synchronization of the SNs using pilot signals transmitted by the CN via the RF interface. The experimental results confirmed the correctness of the implemented algorithms and the real-time capability of the complete UWB-SN operation. The proposed concept of advanced radar signal preprocessing executed locally in each SN makes it possible to use only low-rate narrow-band RF communication with the CN.

From the experimental results it can be seen that the selected methods of signal processing for estimating the robot gripper movement behind the obstacle work very well, and that using more sensors connected into the UWB sensor network is an efficient approach. The UWB-SN improves the target detection and localization capabilities by widening the identification angle of the targets. The sensor network is able to utilize the spatial diversity of the target scattering in order to decrease the complexity of the monitored environment and to improve the sensing accuracy. The developed UWB-SN has a modular architecture that can be easily extended from the hardware as well as the software point of view. Our future development will concentrate on the improvement of the embedded signal processing algorithms required to provide visual feedback to robots, enabling their safe orientation in the working area, coping with obstacles, and at the same time cooperating with, as well as avoiding contact with, humans.