This blog entry is dedicated to those who teach computer science at high schools and vocational schools.
In one of my Java programming classes, only a few of the students showed up. It didn’t seem sensible to continue with my regular lesson plan, so I had to decide ad hoc what to do instead. It was clear that I would have to pull something out of the hat. After a second or so, I thought it might be a good idea to take a closer look at memory management in Java. During that class, I wrote a simple quick ‘n’ dirty program that just eats up heap memory like there’s no tomorrow.
Here is how the essential lines of code work: First, an array a is declared to hold SIZE references to objects of type SomeObject. Then, SIZE objects of SomeObject are instantiated in a loop. There is nothing noteworthy about the class SomeObject, except that it contains about 25 (useless) attributes of type long. The primary purpose of these attributes is to waste heap memory.
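The original listing isn’t reproduced here, so the following is a minimal reconstruction along the lines described above. The value of SIZE and the names of the filler fields are my own guesses, not the original code.

```java
// Demo.java -- allocates SIZE objects and keeps them all reachable
// through the array, so the garbage collector can never reclaim them.
public class Demo {

    // About 25 long fields (~200 bytes per instance) whose only
    // purpose is to occupy heap memory.
    static class SomeObject {
        long f01, f02, f03, f04, f05, f06, f07, f08, f09, f10,
             f11, f12, f13, f14, f15, f16, f17, f18, f19, f20,
             f21, f22, f23, f24, f25;
    }

    // Illustrative value: large enough to exhaust a default-sized heap.
    static final int SIZE = 50_000_000;

    static SomeObject[] fill(int size) {
        SomeObject[] a = new SomeObject[size]; // SIZE references
        for (int i = 0; i < size; i++) {
            a[i] = new SomeObject(); // every object remains referenced
        }
        return a;
    }

    public static void main(String[] args) {
        fill(SIZE); // sooner or later: java.lang.OutOfMemoryError
    }
}
```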
If you run this program normally, sooner or later you will see the following error message.
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space at Demo.main(Demo.java:14)
This is because the JVM limits the default maximum heap size (about 1 GB on 32-bit JVMs). Run your program again with the following JVM option to push your computer to its limits.
java -Xmx8g Demo
Sooner or later your computer will all but freeze until you kill the process.
There is an interesting effect you can observe when you don’t keep the references stored in the array: garbage collection at its finest. As soon as an object no longer has any references pointing to it, the Java Virtual Machine automatically deallocates the associated memory. To explore this effect, use a fixed index, e.g. 0, inside the loop.
a[0] = new SomeObject();
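With that one-line change, each pass through the loop overwrites the only reference to the previous object, which then becomes unreachable and eligible for collection, so the heap footprint stays roughly constant no matter how long the loop runs. A minimal sketch of the modified program (class and method names, field count, and the iteration count are illustrative):

```java
// GcDemo.java -- the same allocation loop, but only slot 0 is ever used.
// Every new object replaces the previous one, which loses its last
// reference and may be reclaimed by the garbage collector.
public class GcDemo {

    static class SomeObject {
        long f01, f02, f03, f04, f05; // filler fields, as in the original
    }

    static long churn(long iterations) {
        SomeObject[] a = new SomeObject[1];
        long created = 0;
        for (long i = 0; i < iterations; i++) {
            a[0] = new SomeObject(); // previous object becomes garbage
            created++;
        }
        // Despite creating far more objects than would fit on the heap
        // at once, the loop finishes without an OutOfMemoryError.
        return created;
    }

    public static void main(String[] args) {
        System.out.println(churn(100_000_000L) + " objects created");
    }
}
```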
It is also worth opening a graphical system monitor before starting the program. Here is how it looked on my laptop (Intel i5, 4 GB of RAM).
The skyrocketing rise at t-59 in the uppermost diagram is where the program was started. From t-57 to t-45 the CPU is busy creating objects, which is also indicated by the rising purple line (physical RAM usage) in the middle diagram. At t-45 physical RAM is nearly exhausted, so the operating system starts swapping RAM contents to the hard disk. This involves time-consuming disk operations that proceed without the CPU, so the process is put to sleep every now and then, which lowers the average CPU usage. As the administrative overhead of memory management grows, CPU usage decreases further due to prolonged sleep periods of the process. Eventually, your system freezes from memory exhaustion, even though CPU usage remains relatively low.
When I got home from school late that evening, I thought that I could reuse this program in a different context, specifically in one of my lectures on Linux, though the underlying principles apply to any other operating system as well.
In my opinion, everyone who is about to touch, use, or even administer a computer system should be able to identify memory exhaustion. However, both identifying and debugging memory-related issues can be tricky and requires considerable background knowledge. As with many things in computer science, the whole topic seems to come down to a difficult balancing act between hiding confusing aspects on the one hand and establishing mental cross-links on the other. As far as I’m concerned, I’m still looking for a simple and “unscientific” approach to memory management that scales easily from high school to university level. If you have found one, please let me know.
I wish you a happy Christmas time!
— Andre M. Maier