Abstract
To understand how memory management works, we need to establish a broader context. In the previous chapter we learned the theoretical basis of this topic. We could now go directly to the details of automatic memory management, how the Garbage Collector works, and where memory leaks may occur. But if we really want to "feel" the topic, it is worthwhile to spend a few more moments on a basic reminder of yet another aspect of it. This will allow us to better understand the various design decisions made by the creators of the Garbage Collector in .NET (as well as in other managed runtime environments). The creators of such mechanisms do not live in a vacuum; they have to adapt to the realities, that is, the limitations and mechanisms that govern computer hardware and operating systems. That is the aspect we are going to touch on now.
Notes
1. In a real CPU, the "buffer" for cache lines is the entire CPU cache, so it typically holds hundreds or thousands of 64-byte cache-line-sized entries.
2. However, even in .NET we can still design method calls with L1i (instruction) cache misses kept in mind. This mainly involves avoiding a lot of virtual calls and favoring repeated calls to the same method over a big set of data. We will see such an example in Chapter 10.
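The contrast that footnote 2 draws can be sketched in a few lines of C#. The types and method names below are illustrative assumptions, not code from the book: the first method dispatches a virtual call per element of a mixed collection, so the call target may change on every iteration; the second processes homogeneous batches, so the same (sealed, and thus potentially devirtualizable) method body is executed repeatedly, which tends to be friendlier to the instruction cache.

```csharp
using System;
using System.Collections.Generic;

public abstract class Shape
{
    public abstract double Area();
}

// Sealed types give the JIT a chance to devirtualize calls on concrete references.
public sealed class Circle : Shape
{
    public double R;
    public override double Area() => Math.PI * R * R;
}

public sealed class Square : Shape
{
    public double S;
    public override double Area() => S * S;
}

public static class AreaDemo
{
    // One virtual dispatch per element: with mixed element types the call
    // target keeps changing, which works against the L1i cache.
    public static double SumAreasVirtual(List<Shape> shapes)
    {
        double sum = 0;
        foreach (var s in shapes) sum += s.Area();
        return sum;
    }

    // Homogeneous batches: each loop repeatedly invokes the same concrete
    // method over a big set of data, the pattern the footnote recommends.
    public static double SumAreasBatched(List<Circle> circles, List<Square> squares)
    {
        double sum = 0;
        foreach (var c in circles) sum += c.Area();
        foreach (var q in squares) sum += q.Area();
        return sum;
    }
}
```

Both methods compute the same total; the difference is purely in how predictable the call targets are for the CPU front end, which is exactly the kind of consideration Chapter 10 returns to.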
Copyright information
© 2018 Konrad Kokosa
Cite this chapter
Kokosa, K. (2018). Low-Level Memory Management. In: Pro .NET Memory Management. Apress, Berkeley, CA. https://doi.org/10.1007/978-1-4842-4027-4_2
Publisher Name: Apress, Berkeley, CA
Print ISBN: 978-1-4842-4026-7
Online ISBN: 978-1-4842-4027-4
eBook Packages: Professional and Applied Computing, Apress Access Books, Professional and Applied Computing (R0)