Encyclopedia of Database Systems

2018 Edition
| Editors: Ling Liu, M. Tamer Özsu

Quorum Systems

  • Marta Patiño-Martínez
  • Bettina Kemme
Reference work entry
DOI: https://doi.org/10.1007/978-1-4614-8265-9_299


Synonyms: Continuous availability; Tolerance to network partitions


Data replication is a technique that provides high availability and scalability by introducing redundancy. The data remains available as long as some replicas are accessible, and, since the load can be distributed across replicas, adding more replicas potentially increases throughput. Challenges arise when the data has to be updated, because the replicas must be kept consistent. The most intuitive approach is to execute every write operation at all replicas. All replicas then always have the same state, and a read operation can be served by any single replica. The main problem with this Read-One-Write-All (ROWA) approach is that as soon as one replica becomes unavailable, write operations can no longer be performed. A further problem is that always executing every update on all replicas makes write operations very expensive.

Quorum systems address both these issues. They allow write operations to succeed if they execute...
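The idea truncated above can be illustrated concretely. The following is a minimal sketch (not from the entry itself) of quorum-based replication: with n replicas, a read quorum of size r, and a write quorum of size w, requiring r + w > n guarantees that every read quorum intersects every write quorum, and 2w > n guarantees that any two write quorums intersect, so the highest version seen in a read quorum is the latest write. The class name and structure are illustrative assumptions.

```python
class QuorumStore:
    """Illustrative quorum-replicated register with versioned replicas."""

    def __init__(self, n, r, w):
        # r + w > n: every read quorum overlaps every write quorum.
        # 2w > n: any two write quorums overlap (serializes versions).
        assert r + w > n and 2 * w > n, "quorum intersection violated"
        self.r, self.w = r, w
        # Each replica holds a (version, value) pair.
        self.replicas = [(0, None)] * n

    def write(self, value, available):
        # A write succeeds as long as w replicas are reachable -- it does
        # not need all n replicas, unlike ROWA.
        if len(available) < self.w:
            raise RuntimeError("write quorum unavailable")
        quorum = available[: self.w]
        # Pick a version higher than any seen in the write quorum.
        version = max(self.replicas[i][0] for i in quorum) + 1
        for i in quorum:
            self.replicas[i] = (version, value)

    def read(self, available):
        if len(available) < self.r:
            raise RuntimeError("read quorum unavailable")
        quorum = available[: self.r]
        # Because the read quorum intersects the last write quorum,
        # the highest-versioned replica holds the latest value.
        return max(self.replicas[i] for i in quorum)[1]
```

For example, with n = 5 and r = w = 3 (majority quorums), a write applied to replicas {0, 1, 2} is still observed by a read from replicas {2, 3, 4}, since the two quorums share replica 2.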


Recommended Reading

  1. Agrawal D, El Abbadi A. The generalized tree quorum protocol: an efficient approach for managing replicated data. ACM Trans Database Syst. 1992;17(4):689–717.
  2. Amir Y, Wool A. Optimal availability quorum systems: theory and practice. Inf Process Lett. 1998;65(5):223–28.
  3. Barbara D, Garcia-Molina H. The reliability of voting mechanisms. IEEE Trans Comput. 1987;36(10):1197–1208.
  4. Bernstein PA, Hadzilacos V, Goodman N. Concurrency control and recovery in database systems. Reading: Addison-Wesley; 1987.
  5. Cheung SY, Ahamad M, Ammar MH. The grid protocol: a high performance scheme for maintaining replicated data. In: Proceedings of the 6th International Conference on Data Engineering; 1990. p. 438–45.
  6. Corbett JC, Dean J, Epstein M, Fikes A, Frost C, Furman JJ, Ghemawat S, Gubarev A, Heiser C, Hochschild P, Hsieh WC, Kanthak S, Kogan E, Li H, Lloyd A, Melnik S, Mwaura D, Nagle D, Quinlan S, Rao R, Rolig L, Saito Y, Szymaniak M, Taylor C, Wang R, Woodford D. Spanner: Google’s globally distributed database. ACM Trans Comput Syst. 2013;31(3):8.
  7. Gifford DK. Weighted voting for replicated data. In: Proceedings of the 7th ACM Symposium on Operating Systems Principles; 1979. p. 150–62.
  8. Jiménez-Peris R, Patiño-Martínez M, Alonso G, Kemme B. Are quorums an alternative for data replication? ACM Trans Database Syst. 2003;28(3):257–94.
  9. Kumar A. Hierarchical quorum consensus: a new algorithm for managing replicated data. IEEE Trans Comput. 1991;40(9):996–1004.
  10. Lakshman A, Malik P. Cassandra: a decentralized structured storage system. Oper Syst Rev. 2010;44(2):35–40.
  11. Lamport L. The part-time parliament. ACM Trans Comput Syst. 1998;16(2):133–69.
  12. Maekawa M. A \(\sqrt {N}\) algorithm for mutual exclusion in decentralized systems. ACM Trans Comput Syst. 1985;3(2):145–59.
  13. Mahmoud HA, Nawab F, Pucher A, Agrawal D, El Abbadi A. Low-latency multi-datacenter databases using replicated commit. Proc VLDB Endow. 2013;6(9):661–72.
  14. Malkhi D, Reiter MK, Wool A. The load and availability of Byzantine quorum systems. SIAM J Comput. 2000;29(6):1889–1906.
  15. Naor M, Wool A. The load, capacity, and availability of quorum systems. SIAM J Comput. 1998;27(2):423–47.
  16. Peleg D, Wool A. The availability of quorum systems. Inf Comput. 1995;123(2):210–23.
  17. Rao J, Shekita EJ, Tata S. Using Paxos to build a scalable, consistent, and highly available datastore. Proc VLDB Endow. 2011;4(4):243–54.
  18. Thomas RH. A majority consensus approach to concurrency control for multiple copy databases. ACM Trans Database Syst. 1979;4(2):180–209.
  19. Tong Z, Kain RY. Vote assignments in weighted voting mechanisms. In: Proceedings of the IEEE Symposium on Reliable Distributed Systems (SRDS). West Lafayette: IEEE Computer Society Press; 1988.

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. Distributed Systems Lab, Universidad Politécnica de Madrid, Madrid, Spain
  2. ETSI Informáticos, Universidad Politécnica de Madrid (UPM), Madrid, Spain
  3. School of Computer Science, McGill University, Montreal, Canada