Abstract
In this chapter, we look at Spark SQL recipes for optimizing SQL queries. Apache Spark is an open-source framework designed with Big Data volumes in mind: it is built to process huge datasets and is intended for scenarios that call for horizontal scaling of processing power. Before we cover the optimization techniques used in Apache Spark, you need to understand the basics of horizontal and vertical scaling.
Copyright information
© 2019 Raju Kumar Mishra and Sundar Rajan Raman
Cite this chapter
Mishra, R.K., Raman, S.R. (2019). Optimizing PySpark SQL. In: PySpark SQL Recipes. Apress, Berkeley, CA. https://doi.org/10.1007/978-1-4842-4335-0_7
Publisher Name: Apress, Berkeley, CA
Print ISBN: 978-1-4842-4334-3
Online ISBN: 978-1-4842-4335-0
eBook Packages: Professional and Applied Computing; Apress Access Books; Professional and Applied Computing (R0)