Users plan optimization for participatory urban texture documentation

Published in GeoInformatica.

Abstract

We envision participatory texture documentation (PTD) as a process in which a group of users (dedicated individuals and/or the general public) with camera-equipped mobile phones participates in the collaborative collection of urban texture information. PTD enables inexpensive, scalable and high-quality urban texture documentation. We propose to implement PTD in two steps. In the first step, termed viewpoint selection, a minimum number of viewpoints in the urban environment are selected from which the texture of the entire urban environment (the part visible to cameras) can be captured with a desirable quality. In the second step, called viewpoint assignment, the selected viewpoints are assigned to the participating users such that, given a limited number of users with various constraints (e.g., restricted available time), the users can collectively capture the maximum amount of texture information within a limited time interval. In this paper, we define each of these steps and prove that both are NP-hard problems. Accordingly, we propose efficient algorithms for the viewpoint selection and assignment problems. We study, profile and verify our proposed solutions comparatively, by both rigorous analysis and extensive experiments.
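Viewpoint selection asks for a minimum set of viewpoints whose combined visibility covers the texture of the environment, which has the structure of minimum set cover (a problem also cited in the references [4]). The sketch below is not the paper's algorithm; it only illustrates the set-cover flavor of the problem with a standard greedy heuristic, using made-up viewpoint and cell identifiers.

```python
# Hypothetical sketch: greedy set-cover heuristic for viewpoint selection.
# Each candidate viewpoint covers a set of texture cells; we repeatedly pick
# the viewpoint covering the most not-yet-covered cells.

def greedy_viewpoint_selection(coverage):
    """coverage: dict mapping viewpoint id -> set of covered cell ids."""
    uncovered = set().union(*coverage.values())
    selected = []
    while uncovered:
        # Pick the viewpoint covering the largest number of uncovered cells.
        best = max(coverage, key=lambda v: len(coverage[v] & uncovered))
        gain = coverage[best] & uncovered
        if not gain:
            break  # remaining cells are not visible from any viewpoint
        selected.append(best)
        uncovered -= gain
    return selected

viewpoints = {
    "v1": {"c1", "c2", "c3"},
    "v2": {"c3", "c4"},
    "v3": {"c4", "c5", "c6"},
}
print(greedy_viewpoint_selection(viewpoints))  # -> ['v1', 'v3']
```

The greedy rule gives the classic logarithmic approximation guarantee for set cover; the viewpoint assignment step, by contrast, is related to the team orienteering problem [5, 39] and is not sketched here.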


Notes

  1. http://www.forum.nokia.com/devices/N95/

  2. The camera focal point is the point to which all parallel rays are focused. The distance between the camera lens and the focal point is the focal length [21].

References

  1. Banaei-Kashani F, Shirani-Mehr H, Pan B, Bopp N, Nocera L, Shahabi C (2010) Geosim: a geospatial data collection system for participatory urban texture documentation. Special Issue of IEEE Data Eng Bull 33(2):40–45

  2. Blum A, Chawla S, Karger DR, Lane T, Meyerson A, Minkoff M (2003) Approximation algorithms for orienteering and discounted-reward TSP. In: FOCS, pp 46–55

  3. Borgstrom PH, Singh A, Jordan BL, Sukhatme GS, Batalin MA, Kaiser WJ (2008) Energy based path planning for a novel cabled robotic system. In: IROS, pp 1745–1751

  4. Chakravarty S, Shekhawat A (1992) Parallel and serial heuristics for the minimum set cover problem. J Supercomput 5(4):331–345

  5. Chao I, Golden BL, Wasil EA (1996) The team orienteering problem. Eur J Oper Res 88(3):464–474

  6. Chao IM, Golden BL, Wasil EA (1996) A fast and effective heuristic for the orienteering problem. Eur J Oper Res 88(3):475–489

  7. Chekuri C, Korula N, Pál M (2008) Improved algorithms for orienteering and related problems. In: SODA 2008. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, pp 661–670

  8. Chekuri C, Pal M (2005) A recursive greedy algorithm for walks in directed graphs. In: FOCS 2005. IEEE Computer Society, Washington, DC, USA, pp 245–253

  9. Chen K, Har-Peled S (2008) The euclidean orienteering problem revisited. SIAM J Comput 38(1):385–397

  10. Cormen TH, Leiserson CE, Rivest RL, Stein C (2001) Introduction to algorithms, 2nd edn. McGraw-Hill Science/Engineering/Math

  11. Dhillon SS, Chakrabarty K (2003) Sensor placement for effective coverage and surveillance in distributed sensor networks. Wireless communications and networking, 2003. WCNC 2003, vol 3. IEEE, pp 1609–1614

  12. Engel J, Pasewaldt S, Trapp M, Döllner J (2012) An immersive visualization system for virtual 3d city models. In: Proceedings of the 20th international conference on GeoInformatics. IEEE GRSS

  13. Evans W, Kirkpatrick D, Townsend G (1997) Right triangular irregular networks. Tech. rep., Algorithmica, Tucson, AZ, USA

  14. Fischetti M, Gonzalez JJS, Toth P (1998) Solving the orienteering problem through branch-and-cut. INFORMS J Comput 10(2):133–148

  15. Forsyth DA, Ponce J (2002) Computer vision: a modern approach. Prentice Hall Professional Technical Reference

  16. Fowler RJ, Little JJ (1979) Automatic extraction of irregular network digital terrain models. In: SIGGRAPH ’79: proceedings of the 6th annual conference on Computer graphics and interactive techniques. ACM, New York, NY, USA, pp 199–207

  17. http://infolab.usc.edu/projects/GeoSIM

  18. Leachtenauer JC, Driggers RG (2001) Surveillance and reconnaissance imaging systems: modeling and performance prediction. Artech House, Boston

  19. Guestrin C, Krause A, Singh AP (2005) Near-optimal sensor placements in gaussian processes. In: ICML 2005. ACM, New York, NY, USA, pp 265–272

  20. Guillou E, Meneveaux D, Maisel E, Bouatouch K (2000) Using vanishing points for camera calibration and coarse 3d reconstruction from a single image. Vis Comput 16(7):396–410

  21. Hecht E (2002) Optics. Addison-Wesley

  22. Hörster E, Lienhart R (2006) On the optimal placement of multiple visual sensors. In: VSSN 2006. ACM, New York, NY, USA, pp 111–120

  23. Huang CF, Tseng YC (2003) The coverage problem in a wireless sensor network. In: WSNA 2003. ACM, New York, NY, USA, pp 115–121

  24. Krause A, Guestrin C (2009) Optimizing sensing: from water to the web. Comput 42:38–45

  25. Krause A, Rajagopal R, Gupta A, Guestrin C (2009) Simultaneous placement and scheduling of sensors. In: IPSN ’09: proceedings of the 2009 international conference on information processing in sensor networks. IEEE Computer Society, Washington, DC, USA, pp 181–192

  26. Lee DT, Lin AK (1986) Computational complexity of art gallery problems. IEEE Trans Inf Theory 32(2):276–282

  27. Lee JS, Hoh B (2010) Dynamic pricing incentive for participatory sensing. J Perv Mob Comp 6:693–708

  28. Meguerdichian S, Koushanfar F, Potkonjak M, Srivastava MB (2001) Coverage problems in wireless ad-hoc sensor networks. In: INFOCOM, pp 1380–1387

  29. Murray AT, Kim K, Davis JW, Machiraju R, Parent RE (2007) Coverage optimization to support security monitoring. Comput Environ Urban Syst 31(2):133–147

  30. Samet H, Sankaranarayanan J, Alborzi H (2008) Scalable network distance browsing in spatial databases. In: SIGMOD conference, pp 43–54

  31. Semmo A, Trapp M, Kyprianidis JE, Döllner J (2012) Interactive visualization of generalized virtual 3d city models using level-of-abstraction transitions. ACM GIS 31(3):885–894

  32. Shirani-Mehr H, Banaei-Kashani F, Shahabi C (2009) Efficient viewpoint assignment for urban texture documentation. In: GIS ’09: proceedings of the 17th ACM SIGSPATIAL international conference on advances in geographic information systems, pp 62–71

  33. Shirani-Mehr H, Banaei-Kashani F, Shahabi C (2009) Efficient viewpoint selection for urban texture documentation. In: Third international conference on geosensor networks

  34. Singh A, Krause A, Guestrin C, Kaiser WJ (2009) Efficient informative sensing using multiple robots. J Artif Intell Res 34(1):707–755

  35. Singh A, Krause A, Kaiser WJ (2009) Nonmyopic adaptive informative path planning for multiple robots. In: IJCAI ’09: proceedings of the 21st international joint conference on artificial intelligence. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, pp 1843–1850

  36. Spiegel MR (1992) Mathematical handbook of formulas and tables, 28th printing edn. McGraw Hill

  37. Tan PN, Steinbach M, Kumar V (2005) Introduction to Data Mining, 1st edn. Addison Wesley

  38. Tsai F, Lin HC (2007) Polygon-based texture mapping for cyber city 3d building models. Int J Geogr Inf Sci 21(9):965–981

  39. Vansteenwegen P, Souffriau W, Berghe GV, Oudheusden DV (2009) A guided local search metaheuristic for the team orienteering problem. Eur J Oper Res 196(1):118–127

  40. Wolff RW (1990) A note on pasta and anti-pasta for continuous-time markov chains. Oper Res 38(1):176–177

  41. Wu CH, Lee KC, Chung YC (2007) A delaunay triangulation based method for wireless sensor network deployment. Comput Commun 30(14–15):2744–2752

  42. Zhang B, Sukhatme GS (2008) Adaptive sampling with multiple mobile robots. In: IEEE international conference on robotics and automation


Author information


Correspondence to Houtan Shirani-Mehr.

Additional information

This research has been funded in part by NSF grant CNS-0831505 (CyberTrust), the NSF Integrated Media Systems Center (IMSC), the NSF Center for Embedded Networked Sensing (CENS), and in part by unrestricted cash and equipment gifts from Google and Microsoft. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

Appendix: Calculating \(A_{p,S_c}\)

The image of each cell c consists of a set of pixels P_c. With a pinhole model, the region represented by each pixel p ∈ P_c is a four-sided polygon whose area depends on a set of parameters S_c. S_c includes the camera focal length f, the camera view angle θ (the angle between the camera image plane and c), and the camera distance to c, denoted by d. Finally, we denote by \(A_{p,S_c}\) the area represented by p ∈ P_c on c under the setting S_c. Here, we elaborate on how to calculate \(A_{p,S_c}.\)

First, assume the camera image plane is parallel to c, i.e., the view angle θ = 0 (we consider the general case later). Consider a pixel p with height p_v and width p_h. In this case, the polygon represented by p is a rectangle with side lengths l_v (vertical side) and l_h (horizontal side). l_v is coplanar with c and has the length

$$ l_v=\frac{p_v\times d_c}{d_p}, $$
(5)

where d_c is the distance between the camera lens and the point o on c such that o, the camera lens and the center of p_v are collinear (Fig. 20), and d_p is the distance between the midpoint of p_v and the pinhole (both d_p and d_c can be calculated from the parameters in S_c). Equation 5 follows from the similarity of the triangles formed by l_v, p_v and the pinhole. In the same way we can find the length of l_h, which is also coplanar with c. Therefore, \(A_{p,S_c}\) is derived as follows:

$$ A_{p,S_c}=l_h\times l_v=\frac{p_h\times p_v\times d_c^2}{d_p^2}. $$
(6)

\(A_{p,S_c}\) is not exactly the same for all pixels in P_c; it depends on d_c and d_p. Note that p_h and p_v are intrinsic camera properties. Equation 6 holds for the set of cells whose images are completely captured on the camera image plane. To find this set, we calculate the camera field of view [15], i.e., the portion of scene space that actually projects onto the camera image plane. For a camera with image plane diameter I_d, the field of view is \(\varphi = 2\arctan\frac{I_d}{2f}\) [15]. The image of a cell within the field of view is completely captured on the image plane, and Eq. 6 holds for it.
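Equation 6 and the field-of-view formula can be sketched directly; the functions below follow the appendix symbols (p_h, p_v, d_c, d_p, f, I_d), while the numeric values in the usage lines are made up for illustration.

```python
import math

# Eq. 6 (parallel image plane, theta = 0): the area on cell c represented by
# one pixel scales with the squared ratio of the cell distance d_c to the
# pixel-to-pinhole distance d_p.

def pixel_area(p_h, p_v, d_c, d_p):
    """A_{p,S_c} on cell c for a pixel of width p_h and height p_v."""
    return p_h * p_v * (d_c / d_p) ** 2

def field_of_view(image_diameter, focal_length):
    """Camera field of view phi = 2*arctan(I_d / (2f)), in radians [15]."""
    return 2 * math.atan(image_diameter / (2 * focal_length))

# Illustrative values: a 2 um x 2 um pixel, cell 10 m away, pixel 5 mm
# from the pinhole, image plane diameter 8 mm, focal length 5 mm.
A = pixel_area(2e-6, 2e-6, d_c=10.0, d_p=5e-3)      # 1.6e-5 m^2
phi = field_of_view(image_diameter=8e-3, focal_length=5e-3)
```

As the code makes explicit, doubling d_c quadruples the area one pixel must represent, which is why texture quality degrades quadratically with viewpoint distance under this model.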

Fig. 20 A pixel p with the side length p_v, where p_v represents the length of l_v on cell c

We now consider the case in which the view angle is non-zero (−90° ≤ θ < 0 or 0 < θ ≤ 90°). In this case, the area represented by p is not necessarily a rectangle, but it is still a four-sided polygon in which each side corresponds to a side of p. Let the bottom side of p be denoted p_b. We explain how to calculate l_b (the polygon side represented by p_b); the approach extends to the other sides. Consider the plane that contains l_b and p_b; we call this plane \(P_\bot.\) The projection of θ on \(P_\bot\) is denoted \(\theta_\bot\) (Fig. 21). Based on the figure, l_b = x_1 + x_2, where x_1 and x_2 can be calculated by the law of sines [36], i.e.,

$$ x_1=\frac{d_{c_\bot}\sin\big(\frac{\phi_p}{2}\big)}{\cos\big(\frac{\phi_p}{2}-\theta_\bot\big)}~~~\text{and}~~~ x_2=\frac{d_{c_\bot}\sin\big(\frac{\phi_p}{2}\big)}{\cos\big(\frac{\phi_p}{2}+\theta_\bot\big)}. $$

\(\phi_p\) is the field of view of p (the field of view of a camera consisting only of the pixel p) and can be derived in the same way as the camera field of view. \(d_{c_\bot}\) is the distance between the camera lens and the point o at the intersection of l_b and the line passing through the projection of the camera lens on \(P_\bot\) and the center of p_b. Similarly, we can calculate the other sides of the polygon represented by p. Additionally, we can find the angles between the sides from the equations of the planes formed by the image sides and the pixel sides. Consequently, we can derive \(A_{p,S_c}.\)
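The tilted-plane side length can be checked numerically. The function below implements the law-of-sines expression for l_b = x_1 + x_2 with the appendix symbols (\(d_{c_\bot}\), \(\phi_p\), \(\theta_\bot\)); the values used are illustrative only.

```python
import math

# Side length l_b = x1 + x2 for a non-zero projected view angle theta_perp,
# per the law-of-sines expressions in the appendix.

def side_length(d_c_perp, phi_p, theta_perp):
    """l_b for the polygon side represented by pixel side p_b."""
    half = phi_p / 2
    x1 = d_c_perp * math.sin(half) / math.cos(half - theta_perp)
    x2 = d_c_perp * math.sin(half) / math.cos(half + theta_perp)
    return x1 + x2

# Sanity check: with theta_perp = 0 the two terms coincide and l_b reduces
# to 2 * d * tan(phi_p / 2), matching the parallel (theta = 0) case.
d, phi_p = 10.0, math.radians(1.0)
assert abs(side_length(d, phi_p, 0.0) - 2 * d * math.tan(phi_p / 2)) < 1e-12
```

Note that as \(\theta_\bot\) approaches \(90° - \phi_p/2\) the cosine in x_2 approaches zero and l_b diverges, reflecting the foreshortening of a nearly edge-on cell.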

Fig. 21 The top view of a camera pixel p and a cell c


Cite this article

Shirani-Mehr, H., Banaei-Kashani, F. & Shahabi, C. Users plan optimization for participatory urban texture documentation. Geoinformatica 17, 173–205 (2013). https://doi.org/10.1007/s10707-012-0166-7
