1 Introduction

Owing to the rapid growth of communication bandwidth and other computing resources, citizen science, or crowd science, is now considered a powerful tool for gathering and analyzing scientific data. In particular, the rapid penetration of sensor-rich smartphones and IoT sensors makes it possible to retrieve real-time data about urban environments. Their sensor data are useful in ordinary times, but they will also play a critical role in disaster monitoring [1]. Because conventional expensive sensors cannot be deployed with sufficient density, understanding unfolding events and analyzing data at fine granularity can be achieved only through user participatory sensing.

Smartphones and IoT sensors can be very useful for mitigating the impact of disasters if we can effectively handle the huge amounts of data they produce. We need to make their data easier to handle by applying algorithmic and statistical approaches such as aggregation, indexing, filtering, compression, data mining, and machine learning. We also need to make the data more useful by activating robust technological infrastructures for collecting and communicating accurate contextual data reliably.

In this paper, we propose a robust and resilient sensing environment that extends and integrates cooperative location inference and participatory sensing using smartphones and IoT devices. First, it is very important to conserve the battery life of mobile devices in disaster situations, as people use them to access and share critical disaster-related information and to communicate with family members and friends. It is therefore highly desirable to determine the locations of mobile devices with minimum energy consumption. One energy-efficient localization technique for mobile devices is to use wireless location reference points and pedestrian dead reckoning rather than GPS. However, there is currently no robust pervasive infrastructure of location reference points. We use IoT devices to activate such an infrastructure. In particular, we propose a cooperative location inference mechanism that automatically determines the locations of IoT devices, thereby turning them into ubiquitous location reference points.

Second, we develop a user participatory sensing environment for mitigating the impacts of disasters based on the IoT-supported location infrastructure. The proposed environment has three key advantages over existing participatory sensing environments: (1) it facilitates the collection of geo-tagged sensor data from smartphones and IoT sensors with lower battery consumption; (2) it allows citizens to collect data before, during, and after a disaster using smartphones, omnidirectional cameras, and environmental sensors to build an integrated large-scale database; and (3) it applies algorithmic and statistical approaches such as aggregation, indexing, filtering, compression, data mining, and machine learning to deliver relevant information, such as safety-enhancing route recommendations, at citizens’ fingertips.

2 Related Work

We now review existing user participatory environments for disaster detection and mitigation. People use social media tools to respond to natural disasters such as earthquakes, floods, and hurricanes. These tools are often used to collect (or “sense”) critical information by organizing and coordinating volunteers. This form of crowdsourcing enables swift sharing of disaster information, although it has certain limitations in terms of data quality as well as ease of collaboration and coordination [2]. Olteanu et al. analyzed tweets from various recent crises and showed their substantial variability across crises [3]. We can exploit social big data in a more informed manner as we deepen our understanding of the kinds of information crowds generate in various crisis situations.

Crowdsourced disaster information is often linked to location information and can be visualized on a map. For example, volunteers monitored wildfires in Santa Barbara by plotting text reports, photos, and videos on a digital map [4]. Crowds can generate such maps well before authoritative information becomes available, an important benefit that can outweigh the cost of error-prone crowdsourced data. Notably, not only grassroots organizations but also governmental agencies now exploit crowdsourcing. For example, the Federal Emergency Management Agency (FEMA) in the U.S. recently introduced a crowdsourcing feature in its mobile app [5].

Smartphones are often used as social and participatory platforms for collecting disaster-relevant information. Moreover, a number of experimental projects explore the use of ubiquitous smartphone sensors to infer critical information such as shaking, infrastructural damage, and fires in earthquakes. Smartphone accelerometers can measure and communicate shaking intensity quickly and cheaply, with much higher spatial resolution than professionally managed high-quality sensor networks such as K-NET in Japan. Naito et al. have shown that smartphone accelerometers are particularly effective for monitoring shaking with a seismic intensity over 2 on the Japanese seven-stage seismic scale [6]. Monitoring strong shaking in buildings with high spatial resolution can be extremely useful for analyzing the cumulative impact of shaking on buildings and even for designing safer physical structures. The Community Sense and Response (CSR) system exploits accelerometers in smartphones and dedicated devices to monitor shaking cheaply and to infer complex spatial patterns of shaking with a machine learning mechanism [7]. Interestingly, the Citizen Seismology project detects earthquakes quickly by sensing web traffic on a popular earthquake website and Twitter messages [8, 9].

Fires, which can be triggered by earthquakes, often cause significant damage to inhabitants. Early detection of the locations of fires is very important for predicting their spread and making appropriate evacuation plans in time. However, relatively few projects explore smartphone-based fire detection. Some recent high-end smartphones, such as the Samsung Galaxy S4, are equipped with temperature and humidity sensors that can be useful for detecting the high temperature and low humidity near fires, as well as their temporal variances. Amjad’s recent project exploits such high-end smartphones to build FireDitector, which infers occurrences of fires in indoor environments using a naive Bayes classifier with data from the smartphone’s temperature, humidity, pressure, and light sensors [10].
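
The exact features and training data behind FireDitector are not given here, so the following is only an illustrative sketch of the general approach (a naive Bayes classifier over the four named sensor channels) with hypothetical readings:

```python
from sklearn.naive_bayes import GaussianNB
import numpy as np

# Hypothetical training rows: [temperature (C), humidity (%), pressure (hPa), light (lux)]
X_train = np.array([
    [22.0, 45.0, 1013.0, 300.0],   # normal indoor conditions
    [23.5, 50.0, 1012.0, 250.0],
    [55.0, 12.0, 1011.0, 900.0],   # hot, dry, bright: fire-like readings
    [60.0,  8.0, 1010.0, 1200.0],
])
y_train = np.array([0, 0, 1, 1])   # 0 = no fire, 1 = fire

clf = GaussianNB().fit(X_train, y_train)

# Classify a fresh reading from the smartphone's sensors.
reading = np.array([[48.0, 15.0, 1011.5, 800.0]])
print("fire probability:", clf.predict_proba(reading)[0, 1])
```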

Although the existing literature reports many successful cases of user participatory sensing for disaster detection and mitigation, most existing systems use energy-hungry localization mechanisms, such as ones that rely heavily on GPS. When stationary sensors are used, someone has to specify the locations of the devices at the time of deployment; however, deployment processes are often not clearly defined.

3 Cooperative Location Inference with IoT Devices

There will be as many as 26 billion Internet of Things (IoT) devices in 5 years [11]. As we discussed earlier, IoT devices can be extremely useful for collecting environmental information before, during, and after disasters. Moreover, they can cooperate with the personal and wearable devices that citizens carry around. For example, IoT devices could help smartphones detect their context more accurately by providing useful reference data.

Smartphones can use IoT devices as location reference points or “location tags” if they can identify nearby IoT devices by using short-range radio, visual recognition, audio detection, etc. Our proposed mechanism considers two types of location tags: (T1) tags that already know their accurate locations and (T2) tags that do not. In addition, location tags have onstage and offstage states: the system uses onstage tags to compute location information, and trains offstage tags until they are ready to “go on stage.”

We now consider a physical space in which onstage T1/T2 tags and offstage T2 tags coexist. Let L be the location estimate of an offstage tag. Our system collects location information from the smartphones that are in proximity to the tag, and incrementally computes L as follows:

$$\begin{aligned} L_{i+1} = \frac{(i \cdot L_i) + S_{i+1}}{i+1} \end{aligned}$$

It obtains the new location estimate \(L_{i+1}\) from the smartphone location \(S_{i+1}\) and the existing estimate \(L_i\) \((i \ge 0)\). This computation can be triggered periodically, using the best smartphone location \(S_{i+1}\) in each interval. When multiple smartphones are nearby, \(S_{i+1}\) is a weighted sum of their location information. Note that our system currently uses RSSI (Received Signal Strength Indicator) both to select the best \(S_{i+1}\) within each interval and to assign a weight to each smartphone.
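
As a minimal sketch of this incremental update (coordinates as 2-D tuples; the RSSI-to-weight mapping below is our assumption, since the text does not specify one):

```python
def update_estimate(L_i, i, S_next):
    """Fold the (i+1)-th smartphone location S_{i+1} into the running
    estimate L_i, following the equation above. i = samples so far."""
    if i == 0:
        return S_next
    return ((i * L_i[0] + S_next[0]) / (i + 1),
            (i * L_i[1] + S_next[1]) / (i + 1))

def weighted_location(phones):
    """Combine the locations reported by several nearby smartphones in one
    interval into a single S_{i+1}, weighted by RSSI. `phones` is a list of
    ((x, y), rssi_dbm) pairs; the linear rescaling of RSSI into weights is
    an illustrative choice, not the paper's."""
    base = min(rssi for _, rssi in phones)
    weights = [rssi - base + 1.0 for _, rssi in phones]
    total = sum(weights)
    x = sum(w * p[0] for (p, _), w in zip(phones, weights)) / total
    y = sum(w * p[1] for (p, _), w in zip(phones, weights)) / total
    return (x, y)
```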

An offstage tag is turned into an onstage tag when its error estimate becomes smaller than a threshold value. We estimate the error using the maximum likelihood estimator of the corresponding covariance matrix. We then derive an ellipse that contains the tag’s real location with 95 % confidence, and use the area of the ellipse as the tag’s error estimate.
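
One plausible way to compute this error estimate, assuming 2-D coordinates and the standard chi-square quantile for a 95 % confidence region, is sketched below:

```python
import numpy as np
from scipy.stats import chi2

def error_estimate(samples):
    """Area of the 95% confidence ellipse derived from the maximum
    likelihood covariance estimate of the collected 2-D location samples
    (an n x 2 array). A smaller area means a more trustworthy location."""
    cov = np.cov(samples, rowvar=False, bias=True)  # ML estimator (divides by n)
    s = chi2.ppf(0.95, df=2)                        # ~5.991 for 2 degrees of freedom
    return np.pi * s * np.sqrt(np.linalg.det(cov))

# An offstage tag "goes on stage" once this area drops below a threshold,
# e.g.: if error_estimate(samples) < THRESHOLD_M2: promote(tag)
# (THRESHOLD_M2 and promote() are hypothetical names.)
```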

Such a localization mechanism provides multiple benefits. First, because it infers the locations of IoT devices automatically, people do not always have to define device locations at deployment time. IoT devices can eventually be associated with location information, and the data they produce will be geotagged regardless of whether the devices are indoors or outdoors, whether they have GPS modules, and so on. We can then accumulate a large body of georeferenced data that can be used to detect the points of critical events, such as fires or collapses, and possibly to guide firefighters quickly to people in need of rescue, help citizens evacuate successfully, and assess and predict damage accurately. Moreover, location-tagged IoT devices can provide nearby smartphones with accurate location information, which the smartphones can use to improve their own location estimates without consuming much energy. Because the proposed mechanism does not rely on GPS, it is particularly useful in buildings, underground passages, and urban canyons.

4 User Participatory Sensing

Making participatory sensing useful in disaster situations requires practical solutions to fundamental problems such as energy-efficient sensing, integration of mobile and stationary sensing, integration of sensing in everyday and emergency situations, and privacy preservation. We describe our approaches to these issues based on our experience developing relevant prototypes.

4.1 Energy Efficient Sensing

Some computational processes consume more energy than others, so we can save energy by turning off energy-consuming functions most of the time. Our approach to energy-conserving participatory sensing exploits energy-efficient sensors such as accelerometers to detect the appropriate timing for turning energy-hungry sensors, communication modules, and computational processes on and off.

One of our ongoing research projects aims to record the daily interactions of a person by using the Bluetooth module in a smartphone as a sensor [12]. Although Bluetooth is superior to other direct-communication methods owing to its usable identifier (MAC address) and its useful communication range of approximately 10 m, energy consumption remains a problem. We developed a method that reduces the energy consumption of Bluetooth beaconing by leveraging the 3-axis accelerometers equipped on smartphones. The method also improves the robustness of social-link detection, which tends to fail because of collisions, by using the similarity of acceleration data and of the sets of collected Bluetooth MAC addresses.

The detailed method for finding other smartphones while limiting energy consumption is illustrated in Fig. 1. First, the method recognizes whether a user is “staying” using an accelerometer, based on the method proposed by Ravi et al. [13]. Second, it recognizes whether the user is “talking” using a microphone; the method does not use speech recognition, only the volume of sound. Finally, it senses proximity using the inquiry mode of Bluetooth, which is normally used to search for unpaired devices, and collects the MAC addresses of nearby phones for a certain number of seconds. A minimal sketch of this cascaded gating appears below.
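
In the sketch, the window features and threshold values are hypothetical placeholders, and we assume the inquiry fires only when both of the cheaper checks pass, following the cascade in Fig. 1:

```python
import statistics

def should_run_inquiry(accel_magnitudes, mic_volumes,
                       accel_var_thresh=0.05, volume_thresh=0.2):
    """Cascaded gate following Fig. 1: cheap always-on sensors decide
    whether the energy-hungry Bluetooth inquiry should run at all."""
    staying = statistics.variance(accel_magnitudes) < accel_var_thresh
    if not staying:                 # user is moving: skip the inquiry
        return False
    talking = max(mic_volumes) > volume_thresh  # volume only, no speech recognition
    return talking                  # inquire only while staying and talking
```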

The proposed method predicts a social link robustly against failures of Bluetooth inquiry. In the following equation, \(s_{ij}(B, t)\) is the strength of the social link between person i and person j from time t to \(t + T\), where \(B_{it}\) and \(B_{jt}\) represent the sets of MAC addresses collected by each phone. Even when one smartphone cannot find the other directly via Bluetooth, the equation indicates how close together the two smartphones are likely to be.

$$\begin{aligned} s_{ij}(B, t) = \left\{ \begin{array}{ll} 1 &{}(\textit{Found}) \\ \frac{|B_{it} \cap B_{jt}|}{|B_{it} \cup B_{jt}|} &{}(\textit{Not found}) \end{array} \right. \end{aligned}$$
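
In code, the link-strength computation is straightforward (MAC addresses represented as string sets):

```python
def link_strength(found, B_i, B_j):
    """s_ij(B, t): 1 if phone j was found directly by phone i's Bluetooth
    inquiry; otherwise the Jaccard similarity of the MAC-address sets the
    two phones collected during [t, t + T]."""
    if found:
        return 1.0
    union = B_i | B_j
    return len(B_i & B_j) / len(union) if union else 0.0

# Two phones that miss each other but see the same neighbors still
# receive a high link strength:
print(link_strength(False, {"aa", "bb", "cc"}, {"aa", "bb", "dd"}))  # 0.5
```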

We have shown through preliminary evaluation studies that the proposed approach reduces energy consumption. We believe this technique should be extended and integrated with various mobile sensing and communication tools for disaster detection and mitigation.

Fig. 1. Flowchart of the proposed sensing method

4.2 Integration of Mobile and Stationary Sensing

When disasters occur, we will most likely seek ways to best utilize all available tools and datasets in complementary ways to minimize the negative impacts on citizens. It is therefore very important to develop optimal strategies and best practices for using various technologies and resources in combination.

In a previous project, we combined stationary wireless sensor network systems and user participatory sensing to collect fine-grained environmental information, thereby enhancing the safety of citizens in extremely hot urban environments [14]. The sensor systems are deployed in an urban area of about 600\(\,\times \,\)600 m\(^{2}\) near a railway station in Tatebayashi City, Japan. There are two independent sensor systems: a wireless sensor network (WSN) that gathers temperature and humidity information and a distributed camera system that detects pedestrian traffic flows. Combined sensor nodes that measure temperature and humidity have been installed on utility poles alongside the streets; 40 such nodes have been deployed in the target area. The sensor nodes transfer data to a sink node and then to a central server using the IEEE 802.15.4 protocol. Stereo cameras have been installed near the streets so that they can conveniently capture scenes of pedestrian crowds. The captured scenes are delivered to a local PC, where a detection program recognizes the traffic flows and velocities of pedestrians; the sensed data are then transferred to the central server over a wireless link. Six stereo cameras have been deployed in the target area.

One of the most important issues in this type of integrated sensing is the spatial and temporal coverage of sensor data. One might opt to eliminate redundancy; however, redundant measurements can be useful for assuring the quality of crowd-sensed data. This has to be supported by the data management mechanisms on the cloud, which we discuss in Sect. 5.

4.3 Integration of Sensing in Everyday and Emergency Situations

User participatory sensing generally requires citizens to interact with mobile sensing tools, and the amount of work users are expected to perform differs among tools. Opportunistic sensing tools only require users to install and activate them, unless users occasionally deactivate and reactivate the tools to save energy or memory space, or to protect privacy. Other data collection tools may require users to enter text or numbers, select items from menus, take photos, record sound or video clips, and so on. However, it is questionable how much time and attention citizens can spare for such operations during a devastating crisis. To address this issue, we argue for an approach that integrates sensing in everyday and emergency situations.

We have sought to identify useful data that can be collected in everyday life and used to facilitate participatory sensing during disasters. One such kind of data is omnidirectional camera images along urban streets. In everyday situations, such data can be used, for example, to recommend pleasant green routes for taking a walk. The same data could be used to assess damage and recommend safer routes in disaster situations, potentially combined with complementary participatory sensing during disasters.

Inexpensive omnidirectional cameras such as the Ricoh Theta and Kodak Pixpro are increasingly popular, and people can take 360-degree photographs using smartphones as well. If citizens are motivated to capture and share geo-tagged omnidirectional images of streets in their everyday lives, the accumulated images can serve as frames of reference for assessing the impact of disasters.

We have developed a system with which citizens capture omnidirectional images along urban streets, and which extracts the amount of visible green to recommend pleasant walking routes. The system first processes omnidirectional images using the Lambert azimuthal equal-area projection. As shown in Fig. 2, it then applies an edge detector and analyzes fractal dimension to find vegetation in the images. Finally, the amount of green in each image is determined with a color-based filtering technique: color histograms constructed from sample images of vegetation are used to compute the percentage of vegetation in each omnidirectional image. “Green routes” can then be recommended based on the resulting georeferenced data.
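
The full pipeline (projection, edge detection, fractal analysis) is beyond a short sketch, but the final color-based filtering step might look as follows; the fixed HSV bounds are a simplifying assumption, since the system builds its filter from histograms of sample vegetation images:

```python
import cv2
import numpy as np

def green_ratio(image_path, lower=(35, 40, 40), upper=(85, 255, 255)):
    """Fraction of pixels in an (already projected) image that fall within
    a green HSV range, used as the per-image vegetation score."""
    img = cv2.imread(image_path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower, np.uint8), np.array(upper, np.uint8))
    return float(np.count_nonzero(mask)) / mask.size

# Georeferenced scores from many frames can then be aggregated per street
# segment to rank candidate "green routes".
```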

Fig. 2. Extracting the amount of vegetation from omnidirectional images

Although we have focused on green routes, other information can be extracted from omnidirectional images using different image processing and spatial analysis techniques. By opening up such everyday applications of omnidirectional street images, we expect to grow useful location-indexed datasets that can be quickly retrieved and used in disaster situations.

4.4 Privacy Preservation

If people have any concerns about privacy preservation in user participatory sensing, they are discouraged from joining participatory sensing applications; if the privacy preservation mechanism cannot be easily understood, that too discourages them. In light of these issues, we have proposed a perturbation technique called the negative survey [15] and several extensions of it. The negative survey and its extensions can be applied to user participatory sensing in disaster situations. A typical example is the use of privacy-preserving smartphones as seismometers to complement the existing infrastructure deployed by K-NET [16]. Early and detailed fire detection, as well as detection of people flow in disaster situations, is also within our scope. We have also proposed mechanisms for protecting location privacy [17], which make it difficult to trace the trajectory of a specific node. Since the degree of location privacy is not yet well defined, we are now tackling this issue and trying to redefine it [18].
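
As a brief illustration of the negative survey idea [15]: each participant reports one of the c - 1 categories they do not belong to, chosen uniformly at random, and the collector reconstructs the category counts with the standard unbiased estimator. The shaking-intensity bins below are a hypothetical use case:

```python
import numpy as np

def reconstruct_counts(neg_counts, c):
    """Estimate true per-category counts from negative answers. With each of
    n participants naming one of the c - 1 categories they are NOT in, the
    unbiased estimate for category i is n - (c - 1) * y_i, where y_i is the
    number of negative answers naming category i."""
    neg = np.asarray(neg_counts, dtype=float)
    n = neg.sum()                    # one answer per participant
    return n - (c - 1) * neg

# 60 phones each report one intensity bin they did NOT observe:
print(reconstruct_counts([10, 30, 20], c=3))  # -> [40.  0. 20.]
```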

5 System Architecture for Providing Integrated Services

To use the data collected through user participatory sensing effectively, we briefly describe methods to (1) build an environmental data warehouse (EDW) that works as an infrastructure providing comprehensive and predictive environmental information, (2) integrate heterogeneous environmental information from multi-modal sensors into an aggregate value that facilitates further processing, and (3) determine optimal path plans in continuously varying environments.

Figure 3 shows the overall architecture. Raw multi-modal sensor data are input into the fact tables of the EDW, where a multidimensional data model and a data prediction method are applied. The dimensional information of space and time is extracted and aggregated into dimension tables. Because the EDW contains predictive functions, it can provide historical, current, and future environmental information.

Fig. 3. Overall architecture of the proposed methods

The walkable space of pedestrians is modeled as a street network: intersections are treated as nodes, and the walkable street segments between them as edges. Map matching is applied to associate sensor data with the proper street edges.

To integrate the multi-modal sensor data consistently and flexibly, we propose a novel multi-factor cost (MFC) model. Aggregate cost rates for edges are calculated by applying the MFC model, and the cost of an edge, as accessed by the path planning (PP) engine described below, is the product of the aggregate cost rate and the travel time for that edge.

Based on these two solutions, the optimal path planning problem is solved in a time-dependent network by applying a dynamic programming method. The PP engine receives path queries submitted by pedestrians in real time. We have developed a prototype client application running on an Android smartphone: a map view is displayed on the phone, the pedestrian specifies her origin and destination by touching the screen, and the path planned on a server is displayed on the map view to navigate her to the destination.
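
The text specifies only that a dynamic programming method is used, so the following Dijkstra-style search over a time-dependent network is an illustrative sketch; the graph interface is our assumption, and the search is only exact when the network behaves well (e.g., satisfies the FIFO property):

```python
import heapq

def plan_path(graph, origin, dest, t0):
    """Least-cost path where an edge's cost is the product of its aggregate
    MFC cost rate and its travel time, both evaluated at the departure time.
    graph[u] = list of (v, travel_time_fn, cost_rate_fn), where each fn maps
    a departure time to a value."""
    best = {origin: 0.0}
    queue = [(0.0, t0, origin, [origin])]
    while queue:
        cost, t, u, path = heapq.heappop(queue)
        if u == dest:
            return cost, path
        if cost > best.get(u, float("inf")):
            continue                            # stale queue entry
        for v, tt_fn, rate_fn in graph.get(u, []):
            tt = tt_fn(t)
            new_cost = cost + rate_fn(t) * tt   # MFC edge cost
            if new_cost < best.get(v, float("inf")):
                best[v] = new_cost
                heapq.heappush(queue, (new_cost, t + tt, v, path + [v]))
    return None
```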

This architecture has been used to integrate the data from a wireless sensor network (WSN) gathering temperature and humidity information and a distributed camera system detecting pedestrian traffic flows [19], thereby recommending comfortable and safe navigation routes in an extremely hot urban environment.

6 Conclusion

We have proposed a robust and resilient sensing environment that extends and integrates cooperative location inference and user participatory sensing. The proposed participatory sensing environment supports energy-efficient sensing, integrated sensing in everyday and emergency situations using mobile and stationary sensors, and privacy preservation. In particular, it encourages proactive engagement in disaster mitigation by means of everyday data collection, and the automated location inference also facilitates end-user deployment of IoT sensors.

User participatory sensing has important roles to play even where high-quality sensors and simulation systems are in place. Disaster-monitoring infrastructures are often matters of national or regional concern, and infrastructures such as the Japanese K-NET are deployed and managed under budgetary restrictions that may compromise the spatial resolution of the sensors. In the Japanese context, it is particularly important to consider the complementary relationship between cheap, quick, and dense crowd sensing and reliable infrastructural sensors. Moreover, as people often face a scarcity of information in disaster situations, providing more data through crowd sensing can help reduce the false negatives of failing to issue alarms and warnings.

Computer-based simulation systems help us understand how things behave in disaster situations without our actually experiencing them in the real world. Connecting simulations to real-world events could effectively narrow the space of what-if explorations for pertinent decision-making. Crowd sensing can then play a significant role in making simulations useful in time-critical disaster situations, as it provides a way to feed real-world information into simulations quickly, well before authoritative information becomes available. Microscopic simulations of shaking and fires at the building scale also require fine-grained feeds of real-world data that crowd sensing could cater for well. Furthermore, simulations could be used to make crowd-sensing systems, including crowd behaviors and computational processing mechanisms, smarter; for example, simulation results could be used to request sensing tasks efficiently by prioritizing data collection based on the most critical goals, such as saving lives.

We expect to extend our current results into a systematic yet flexible environment rather than a complex, monolithic system, so that the proposed mechanisms can be adapted easily to different disaster situations and different external systems.