A First Comparative Characterization of Multi-cloud Connectivity in Today’s Internet

  • Conference paper
  • Passive and Active Measurement (PAM 2020)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 12048)

Abstract

Today’s enterprises are adopting multi-cloud strategies at an unprecedented pace. Here, a multi-cloud strategy specifies end-to-end connectivity between the multiple cloud providers (CPs) that an enterprise relies on to run its business. This adoption is fueled by the rapid build-out of global-scale private backbones by the large CPs, a rich private peering fabric that interconnects them, and the emergence of new third-party private connectivity providers (e.g., DataPipe, HopOne, etc.). However, little is known about the performance aspects, routing issues, and topological features associated with currently available multi-cloud connectivity options. To shed light on the tradeoffs between these available connectivity options, we take a cloud-to-cloud perspective and present in this paper the results of a cloud-centric measurement study of a coast-to-coast multi-cloud deployment that a typical modern enterprise located in the US may adopt. We deploy VMs in two regions (i.e., VA and CA) of each one of three large cloud providers (i.e., AWS, Azure, and GCP) and connect them using three different options: (i) transit provider-based best-effort public Internet (BEP), (ii) third-party provider-based private (TPP) connectivity, and (iii) CP-based private (CPP) connectivity. By performing active measurements in this real-world multi-cloud deployment, we provide new insights into variability in the performance of TPP, the stability in performance and topology of CPP, and the absence of transit providers for CPP.

Notes

  1. This is different from hybrid cloud computing, where a direct connection exists between a public cloud and private on-premises enterprise server(s).

  2. See Sect. 3.4 for more details.

  3. In Sect. 5 we highlight that our inter-cloud measurements do not exit the source and destination CPs’ networks.

  4. Note that these price points do not include the additional charges imposed by CPs for establishing connectivity to their networks.

  5. We do not have access to parameters such as the TCP timeout delay and the number of packets acknowledged by each ACK, which would be required by more elaborate TCP models (e.g., [54]).

  6. In an ideal setting, we should not experience any packet losses, as we limit our probing rate at the source.

  7. In the absence of information regarding the physical fiber paths, we rely on latency as a proxy measure of path length.

References

  1. A first comparative characterization of multi-cloud connectivity in today’s internet (2020). https://gitlab.com/onrg/multicloudcmp

  2. Ager, B., Chatzis, N., Feldmann, A., Sarrar, N., Uhlig, S., Willinger, W.: Anatomy of a large European IXP. In: SIGCOMM. ACM (2012)

  3. Marder, A., Luckie, M., Dhamdhere, A., Huffaker, B., Claffy, K., Smith, J.M.: Pushing the boundaries with bdrmapIT: mapping router ownership at internet scale. In: Internet Measurement Conference (IMC). ACM (2018)

  4. Amazon: AWS direct connect. https://aws.amazon.com/directconnect/

  5. Amazon: AWS direct connect partners. https://aws.amazon.com/directconnect/partners/

  6. Amazon: AWS transit gateway. https://aws.amazon.com/transit-gateway/

  7. Amazon: AWS direct connect pricing (2019). https://aws.amazon.com/directconnect/pricing/

  8. Amazon: EC2 instance pricing - Amazon web services (2019). https://aws.amazon.com/ec2/pricing/on-demand/

  9. Anwar, R., Niaz, H., Choffnes, D., Cunha, Í., Gill, P., Katz-Bassett, E.: Investigating interdomain routing policies in the wild. In: Internet Measurement Conference (IMC). ACM (2015)

  10. Augustin, B., et al.: Avoiding traceroute anomalies with paris traceroute. In: Internet Measurement Conference (IMC). ACM (2006)

  11. Augustin, B., Krishnamurthy, B., Willinger, W.: IXPs: mapped? In: Internet Measurement Conference (IMC). ACM (2009)

  12. Ausmees, K., John, A., Toor, S.Z., Hellander, A., Nettelblad, C.: BAMSI: a multi-cloud service for scalable distributed filtering of massive genome data. BMC Bioinf. 19, 240 (2018)

  13. CAIDA: the skitter project (2007). http://www.caida.org/tools/measurement/skitter/

  14. Calder, M., Fan, X., Hu, Z., Katz-Bassett, E., Heidemann, J., Govindan, R.: Mapping the expansion of Google’s serving infrastructure. In: Internet Measurement Conference (IMC). ACM (2013)

  15. Calder, M., Flavel, A., Katz-Bassett, E., Mahajan, R., Padhye, J.: Analyzing the performance of an anycast CDN. In: Internet Measurement Conference (IMC). ACM (2015)

  16. Calder, M., et al.: Odin: Microsoft’s scalable fault-tolerant CDN measurement system. In: NSDI. USENIX (2018)

  17. Chiu, Y.C., Schlinker, B., Radhakrishnan, A.B., Katz-Bassett, E., Govindan, R.: Are we one hop away from a better internet? In: Internet Measurement Conference (IMC). ACM (2015)

  18. CloudHarmony: Cloudharmony, transparency for the cloud. https://cloudharmony.com/

  19. CoreSite: The Coresite open cloud exchange. https://www.coresite.com/solutions/cloud-services/open-cloud-exchange

  20. Cunha, Í., et al.: Sibyl: a practical internet route oracle. In: NSDI. USENIX (2016)

  21. Demchenko, Y., et al.: Open Cloud Exchange (OCX): architecture and functional components. In: International Conference on Cloud Computing Technology and Science. IEEE (2013)

  22. Dhamdhere, A., Dovrolis, C.: The Internet is flat: modeling the transition from a transit hierarchy to a peering mesh. In: CoNEXT. ACM (2010)

  23. Durairajan, R., Barford, P., Sommers, J., Willinger, W.: InterTubes: a study of the US long-haul fiber-optic infrastructure. In: SIGCOMM. ACM (2015)

  24. Durairajan, R., Ghosh, S., Tang, X., Barford, P., Eriksson, B.: Internet Atlas: a geographic database of the Internet. In: HotPlanet. ACM (2013)

  25. Elshazly, H., Souilmi, Y., Tonellato, P.J., Wall, D.P., Abouelhoda, M.: MC-GenomeKey: a multicloud system for the detection and annotation of genomic variants. BMC Bioinf. 18, 49 (2017)

  26. Facebook: Building express backbone: Facebook’s new long-haul network (2017). https://code.fb.com/data-center-engineering/building-express-backbone-facebook-s-new-long-haul-network/

  27. Gill, P., Arlitt, M., Li, Z., Mahanti, A.: The Flattening Internet topology: natural evolution, unsightly barnacles or contrived collapse? In: Claypool, M., Uhlig, S. (eds.) PAM 2008. LNCS, vol. 4979, pp. 1–10. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-79232-1_1

  28. Google: GCP direct peering. https://cloud.google.com/interconnect/docs/how-to/direct-peering

  29. Google: Google supported service providers. https://cloud.google.com/interconnect/docs/concepts/service-providers

  30. Haq, O., Raja, M., Dogar, F.R.: Measuring and improving the reliability of wide-area cloud paths. In: WWW. ACM (2017)

  31. Hofmann, H., Kafadar, K., Wickham, H.: Letter-value plots: boxplots for large data. Technical report. had.co.nz (2011)

  32. Huffaker, B., Keys, K., Fomenkov, M., Claffy, K.: AS-to-organization dataset (2018). http://www.caida.org/research/topology/as2org/

  33. Hung, C.C., Ananthanarayanan, G., Golubchik, L., Yu, M., Zhang, M.: Wide-area analytics with multiple resources. In: EuroSys Conference. ACM (2018)

  34. Internet2: One-Way Ping (OWAMP) (2019). http://software.internet2.edu/owamp/

  35. Iyer, A.P., Panda, A., Chowdhury, M., Akella, A., Shenker, S., Stoica, I.: Monarch: gaining command on geo-distributed graph analytics. In: Hot Topics in Cloud Computing (HotCloud). USENIX (2018)

  36. Khalidi, Y.: How Microsoft builds its fast and reliable global network (2017). https://azure.microsoft.com/en-us/blog/how-microsoft-builds-its-fast-and-reliable-global-network/

  37. Klöti, R., Ager, B., Kotronis, V., Nomikos, G., Dimitropoulos, X.: A comparative look into public IXP datasets. In: SIGCOMM CCR (2016)

  38. Knight, S., Nguyen, H.X., Falkner, N., Bowden, R.A., Roughan, M.: The Internet topology zoo. In: JSAC. IEEE (2011)

  39. Krishna, A., Cowley, S., Singh, S., Kesterson-Townes, L.: Assembling your cloud orchestra: a field guide to multicloud management. https://www.ibm.com/thought-leadership/institute-business-value/report/multicloud

  40. Labovitz, C., Iekel-Johnson, S., McPherson, D., Oberheide, J., Jahanian, F.: Internet inter-domain traffic. In: SIGCOMM. ACM (2010)

  41. Li, A., Yang, X., Kandula, S., Zhang, M.: CloudCmp: comparing public cloud providers. In: Internet Measurement Conference (IMC). ACM (2010)

  42. Luckie, M.: Scamper: a scalable and extensible packet prober for active measurement of the Internet. In: Internet Measurement Conference (IMC). ACM (2010)

  43. Luckie, M., Dhamdhere, A., Huffaker, B., Clark, D., et al.: bdrmap: inference of borders between IP networks. In: Internet Measurement Conference (IMC). ACM (2016)

  44. Madhyastha, H.V., et al.: iPlane: an information plane for distributed services. In: OSDI. USENIX (2006)

  45. Mao, Z.M., Rexford, J., Wang, J., Katz, R.H.: Towards an accurate AS-level traceroute tool. In: SIGCOMM. ACM (2003)

  46. Marder, A., Smith, J.M.: MAP-IT: multipass accurate passive inferences from traceroute. In: Internet Measurement Conference (IMC). ACM (2016)

  47. Mathis, M., Semke, J., Mahdavi, J., Ott, T.: The macroscopic behavior of the TCP congestion avoidance algorithm. In: SIGCOMM CCR (1997)

  48. Megaport: Megaport pricing (2019). https://www.megaport.com/pricing/

  49. Megaport: Nine Common Scenarios of multi-cloud design (2019). https://knowledgebase.megaport.com/megaport-cloud-router/nine-common-scenarios-for-multicloud-design/

  50. Microsoft: Azure ExpressRoute. https://azure.microsoft.com/en-us/services/expressroute/

  51. Microsoft: Expressroute partners and peering locations. https://docs.microsoft.com/en-us/azure/expressroute/expressroute-locations

  52. Motamedi, R., Rejaie, R., Willinger, W.: A survey of techniques for Internet topology discovery. Commun. Surv. Tutor. 17, 1044–1065 (2014)

  53. PacketFabric: Cloud Connectivity (2019). https://www.packetfabric.com/packetcor#pricing

  54. Padhye, J., Firoiu, V., Towsley, D., Kurose, J.: Modeling TCP throughput: a simple model and its empirical validation. In: SIGCOMM CCR (1998)

  55. PCH: Packet Clearing House (2019). https://www.pch.net/

  56. PeeringDB: PeeringDB (2019). https://www.peeringdb.com/

  57. Pu, Q., et al.: Low latency geo-distributed data analytics. In: SIGCOMM CCR (2015)

  58. Pureport: Pricing - Pureport (2019). https://www.pureport.com/pricing/

  59. RIPE: RIPE RIS (2019)

  60. Schlinker, B., et al.: Engineering egress with edge fabric: steering oceans of content to the world. In: SIGCOMM. ACM (2017)

  61. Sermpezis, P., Nomikos, G., Dimitropoulos, X.A.: Re-mapping the Internet: bring the IXPs into play. CoRR (2017)

  62. Shavitt, Y., Shir, E.: DIMES: let the internet measure itself. In: SIGCOMM CCR. ACM (2005)

  63. Sherwood, R., Bender, A., Spring, N.: Discarte: a disjunctive internet cartographer. In: SIGCOMM. ACM (2008)

  64. Sherwood, R., Spring, N.: Touring the Internet in a TCP sidecar. In: SIGCOMM Conference on Measurement. ACM (2006)

  65. Spring, N., Mahajan, R., Wetherall, D.: Measuring ISP topologies with Rocketfuel. In: SIGCOMM (2002)

  66. Tariq, M.M.B., Dhamdhere, A., Dovrolis, C., Ammar, M.: Poisson versus periodic path probing (or, does PASTA matter?). In: Internet Measurement Conference (IMC). ACM (2005)

  67. University of Oregon: Route views project. http://www.routeviews.org/

  68. Viswanathan, R., Ananthanarayanan, G., Akella, A.: CLARINET: WAN-aware optimization for analytics queries. In: Operating Systems Design and Implementation (OSDI). USENIX (2016)

  69. Vulimiri, A., et al.: Wanalytics: geo-distributed analytics for a data intensive world. In: SIGMOD. ACM (2015)

  70. Wohlfart, F., Chatzis, N., Dabanoglu, C., Carle, G., Willinger, W.: Leveraging interconnections for performance: the serving infrastructure of a large CDN. In: SIGCOMM. ACM (2018)

  71. Yeganeh, B., Durairajan, R., Rejaie, R., Willinger, W.: How cloud traffic goes hiding: a study of Amazon’s peering fabric. In: Internet Measurement Conference (IMC). ACM (2019)

  72. ZDNet: Top cloud providers (2019). https://tinyurl.com/y526vneg

  73. Zhang, B., Liu, R., Massey, D., Zhang, L.: Collecting the Internet AS-level topology. In: SIGCOMM CCR. ACM (2005)

  74. Zhang, H., et al.: Guaranteeing deadlines for inter-data center transfers. Trans. Netw. (TON) 25, 579–595 (2017)

Author information

Correspondence to Bahador Yeganeh.

A Appendices

A.1 Representation of Results

Distributions in this paper are presented using letter-value plots [31]. Letter-value plots, like boxplots, summarize the distribution of data points but offer finer detail beyond the quartiles. The median is shown as a dark horizontal line, and the \(1/2^i\) quantiles are encoded by box width: the widest boxes surrounding the median represent the quartiles, the second-widest boxes the octiles, and so on. Distributions with low variance that are centered around a single value appear as narrow horizontal bars, while distributions with widely dispersed values appear as tall vertical bars.

Throughout this paper, we present full latency distributions wherever doing so is illustrative. Furthermore, we compare the latency characteristics of different paths using their median and variance, and we specifically refrain from relying on minimum latency, as it does not capture the stability and dynamics of latency along each path.
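
For readers who wish to reproduce this style of summary, the sketch below draws a letter-value plot of per-probe RTT samples using seaborn's boxenplot (seaborn's implementation of letter-value plots [31]) and prints the median and variance per path. The input file and column names are hypothetical placeholders, not part of our measurement pipeline.

```python
# Minimal sketch: letter-value ("boxen") plot of RTT samples per connectivity option.
# Assumes a CSV with columns "path" (e.g., BEP/TPP/CPP) and "rtt_ms"; the file name
# and column names are hypothetical placeholders.
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

samples = pd.read_csv("rtt_samples.csv")  # hypothetical input file
ax = sns.boxenplot(data=samples, x="path", y="rtt_ms")
ax.set_xlabel("Connectivity option")
ax.set_ylabel("RTT (ms)")

# Median and variance are the summary statistics we compare across paths.
print(samples.groupby("path")["rtt_ms"].agg(["median", "var"]))
plt.show()
```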

A.2 Preliminary Results on the E2C Perspective

We emulate an enterprise with a multi-cloud deployment by connecting a cloud router in the Phoenix, AZ region to a physical server hosted in a colocation facility in Phoenix, AZ.

TPP Routes Offer Better Latency than BEP Routes. Figure 6a shows the distribution of latency for our measured E2C paths. We observe that TPP routes consistently outperform their BEP counterparts, exhibiting both a lower latency baseline and less variation. We observe a median latency of 11 ms, 20 ms, and 21 ms for TPP routes towards GCP, AWS, and Azure VM instances in California, respectively. We also observe symmetric distributions on the reverse path but omit the results for brevity. In the case of our E2C paths, we always observe direct peerings between the upstream provider (e.g., Cox Communications (AS22773)) and the CP network. Relying on bdrmapIT to infer the peering points from the traceroutes associated with our E2C paths, we measure the latency on the peering hop. Figure 6b shows the distribution of latency for the peering hop for E2C paths originated from the CPs’ instances in CA towards our enterprise server in AZ. While the routing policies of GCP and Azure for E2C paths are similar to our observations for C2C paths, Amazon appears to hand off traffic near the destination, unlike its hot-potato tendency for C2C paths. We hypothesize that this change in AWS’ policy is intended to minimize operational costs via its Transit Gateway service, which gives customers and peering networks finer control over the egress/ingress point of traffic into AWS’ network [6]. In addition, observing an equal or lower minimum latency for TPP routes compared to BEP routes suggests that TPP routes are shorter than BEP paths (see Note 7). We also find (not shown here) that the average loss rate on TPP routes is \(6\times 10^{-4}\), an order of magnitude lower than the loss rate experienced on BEP routes (\(1.6\times 10^{-3}\)).
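
To make the peering-hop measurement concrete, the following sketch extracts the RTT at the first traceroute hop that maps to the destination CP's network. The hop list, addresses, and the direct per-hop ASN lookup are hypothetical, simplified stand-ins for the full bdrmapIT border inference used in the paper; the sketch only illustrates the idea.

```python
# Simplified sketch: latency at the inferred peering hop of a parsed traceroute.
# Each hop is (address, RTT in ms, origin ASN). The hop data and the per-hop ASN
# mapping are hypothetical placeholders; the paper uses bdrmapIT for border inference.
from typing import List, Optional, Tuple

Hop = Tuple[str, float, int]  # (address, rtt_ms, asn)

def peering_hop_rtt(hops: List[Hop], dst_cp_asn: int) -> Optional[float]:
    """Return the RTT at the first hop announced by the destination CP's ASN."""
    for _addr, rtt_ms, asn in hops:
        if asn == dst_cp_asn:
            return rtt_ms
    return None  # no hop mapped to the CP (e.g., an unresponsive border router)

# Hypothetical E2C traceroute from our AZ server towards an AWS VM in CA (AS16509).
trace = [
    ("10.0.0.1", 0.4, 64512),    # enterprise gateway (private ASN)
    ("68.1.0.1", 2.1, 22773),    # upstream transit provider (Cox, AS22773)
    ("52.95.0.1", 11.3, 16509),  # first AWS hop => inferred peering hop
    ("52.93.0.2", 20.8, 16509),  # inside AWS towards the CA VM
]
print(peering_hop_rtt(trace, dst_cp_asn=16509))  # -> 11.3
```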

Fig. 6.

(a) Distribution of latency for E2C paths between our server in AZ and CP instances in California through TPP and BEP routes. Outliers on the Y-axis have been deliberately cut off to improve the readability of the distributions. (b) Distribution of RTT on the inferred peering hop for E2C paths sourced from CP instances in California. (c) Distribution of throughput for E2C paths between our server in AZ and CP instances in California through TPP and BEP routes.

TPP Offers Consistent Throughput for E2C Paths. Figure 6c depicts the distribution of throughput for E2C paths between our server in AZ and CP instances in CA via TPP and BEP routes. While we observe very consistent throughput values near the purchased link capacity for TPP paths, BEP paths exhibit higher variability, which is expected given the best-effort nature of public Internet paths. Similar to the latency characteristics, we attribute the better throughput of TPP routes to their lower loss rates and shorter fiber paths from the enterprise server to the CPs’ instances in CA. Moreover, compared to the CPs’ connect locations, the third-party providers are often present in additional, distinct colocation facilities closer to the edge, which partially answers the question we posed earlier in Sect. 4.3.
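
To illustrate how the observed loss rates translate into achievable TCP throughput, the sketch below evaluates the Mathis et al. throughput model [47] (the simpler kind of model available to us given the parameters we can observe; cf. Note 5) at loss rates of the magnitude reported above. The MSS and RTT values are illustrative assumptions, not measured constants from our deployment.

```python
# Back-of-the-envelope TCP throughput via the Mathis et al. model [47]:
#   throughput ~= (MSS / RTT) * C / sqrt(p), with C = sqrt(3/2) for non-delayed ACKs.
# The MSS and RTT below are illustrative assumptions, not values from the paper.
import math

def mathis_throughput_mbps(mss_bytes: float, rtt_ms: float, loss_rate: float,
                           c: float = math.sqrt(1.5)) -> float:
    """Estimated TCP throughput in Mbit/s for a given MSS, RTT, and loss rate."""
    rtt_s = rtt_ms / 1000.0
    return (mss_bytes * 8 / rtt_s) * (c / math.sqrt(loss_rate)) / 1e6

# Loss rates of the magnitude observed on our E2C paths (TPP vs. BEP),
# with an assumed MSS of 1460 bytes and an RTT of 20 ms.
for label, p in [("TPP", 6e-4), ("BEP", 1.6e-3)]:
    print(f"{label}: ~{mathis_throughput_mbps(1460, 20, p):.0f} Mbit/s")
```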

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Yeganeh, B., Durairajan, R., Rejaie, R., Willinger, W. (2020). A First Comparative Characterization of Multi-cloud Connectivity in Today’s Internet. In: Sperotto, A., Dainotti, A., Stiller, B. (eds) Passive and Active Measurement. PAM 2020. Lecture Notes in Computer Science(), vol 12048. Springer, Cham. https://doi.org/10.1007/978-3-030-44081-7_12

  • DOI: https://doi.org/10.1007/978-3-030-44081-7_12

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-44080-0

  • Online ISBN: 978-3-030-44081-7
