LRDIMMs, RDIMMs, and Supermicro’s Latest Twin

Most servers in the datacenter, especially those running virtualization, database, and some HPC applications, are limited by memory more than anything else. There are several server memory options: UDIMMs, RDIMMs, LRDIMMs, and even HCDIMMs. RDIMMs are the most commonly used. In 2011 the LRDIMM emerged as the leading high capacity option, but only for those with huge budgets.

In our lab we have Supermicro's Twin 2U server (6027TR-D71FRF) from our Xeon E5 review, along with 16 Samsung LRDIMMs and RDIMMs. Dense servers and high capacity memory struck us as an interesting combination that is worth investigating.

What is the situation now in 2012? Are LRDIMMs still only an option for the happy few? Can a Twin node with high capacity memory make sense for virtualization loads? How much performance do you sacrifice when using LRDIMMs instead of RDIMMs? Does it pay off to combine LRDIMMs with Supermicro's Twins? Can you get away with less memory? After all, modern hypervisors such as ESXi 5 have a lot of tricks up their sleeves to save memory; even if you have less physical memory than you have allocated, chances are that your applications will still run fine. We measured bandwidth and latency, throughput and response times, scanned the market, and performed a small TCO study to answer these questions.