Measurement Techniques, Volume 57, Issue 2, pp 132–138

General Problems of Metrology and Measurement Technique

Aggregation of Preferences as a Method of Solving Problems in Metrology and Measurement Technique

We show that preference-aggregation procedures can be usefully applied to the processing of interlaboratory comparison results and to the organization of measurement data transmission over wireless sensor networks.


Keywords: aggregation of preferences, consensus relation, interlaboratory comparisons, wireless sensor networks
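The preference-aggregation idea referred to in the abstract can be illustrated with a minimal sketch: each participant (e.g., a laboratory in an interlaboratory comparison, or a node in a sensor network) submits a ranking of the same items, and a consensus ranking is chosen as the one minimizing the total Kendall distance to all input rankings (the Kemeny median). The brute-force search below is only feasible for small item sets, and the item names and input rankings are hypothetical, not taken from the article.

```python
from itertools import permutations

def kendall_distance(r1, r2):
    """Count discordant item pairs between two rankings (lists of items, best first)."""
    pos1 = {item: i for i, item in enumerate(r1)}
    pos2 = {item: i for i, item in enumerate(r2)}
    items = list(pos1)
    d = 0
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            a, b = items[i], items[j]
            # A pair is discordant when the two rankings order it oppositely.
            if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0:
                d += 1
    return d

def kemeny_consensus(rankings):
    """Brute-force Kemeny median: the candidate ranking minimizing the total
    Kendall distance to all input rankings (exponential in the item count)."""
    items = rankings[0]
    best, best_cost = None, float("inf")
    for cand in permutations(items):
        cost = sum(kendall_distance(list(cand), r) for r in rankings)
        if cost < best_cost:
            best, best_cost = list(cand), cost
    return best

# Hypothetical example: three "laboratories" rank four measured items A-D.
votes = [["A", "B", "C", "D"],
         ["B", "A", "C", "D"],
         ["A", "C", "B", "D"]]
print(kemeny_consensus(votes))  # → ['A', 'B', 'C', 'D']
```

Finding the exact Kemeny median is NP-hard in general, which is why practical applications of this approach rely on fixed-parameter or heuristic algorithms for larger item sets.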



Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  1. Tomsk National Research Polytechnic University, Tomsk, Russia
