Optimizing PySpark SQL

Chapter in PySpark SQL Recipes

Abstract

In this chapter, we look at various Spark SQL recipes that optimize SQL queries. Apache Spark is an open source framework developed with Big Data volumes in mind: it is designed to handle huge volumes of data in scenarios that call for horizontally scaling the available processing power. Before we cover the optimization techniques used in Apache Spark, you need to understand the basics of horizontal and vertical scaling.
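
As a point of orientation, the following is a minimal sketch, not taken from the chapter itself, of the basic workflow behind this kind of optimization work in PySpark SQL: registering data as a view, running a SQL query, and inspecting the plan the optimizer produces. The table and column names here are hypothetical.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("plan-inspection").getOrCreate()

# Hypothetical data registered as a temporary view.
df = spark.createDataFrame([(1, "a"), (2, "b"), (3, "a")], ["id", "grp"])
df.createOrReplaceTempView("records")

# Spark SQL rewrites the query into an optimized physical plan; explain()
# prints the plan so the effect of a tuning change can be verified.
result = spark.sql("SELECT grp, COUNT(*) AS n FROM records GROUP BY grp")
result.explain(True)  # parsed, analyzed, optimized, and physical plans

# Caching avoids recomputing a DataFrame that is reused across queries.
df.cache()

spark.stop()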

Copyright information

© 2019 Raju Kumar Mishra and Sundar Rajan Raman

Cite this chapter

Mishra, R.K., Raman, S.R. (2019). Optimizing PySpark SQL. In: PySpark SQL Recipes. Apress, Berkeley, CA. https://doi.org/10.1007/978-1-4842-4335-0_7
