Google not only keeps all of its search indexes in memory, it keeps the entire Web in memory. Google has to do this to return search results, complete with snippets, quickly.
Facebook caches everything (or almost everything) in memory. Facebook has to do this to render pages complete with news feeds quickly.
LinkedIn keeps its entire social graph in memory. LinkedIn has to do this to show nearby connections with real-time updates and to rank search results by network distance.
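Ranking by network distance is essentially a breadth-first search over an in-memory adjacency list: the fewer hops between you and a result, the higher it sorts. A minimal sketch of the idea follows; the toy graph, names, and ranking scheme are illustrative assumptions, not LinkedIn's actual implementation.

```python
from collections import deque

# Hypothetical in-memory social graph: adjacency lists keyed by member id.
# This tiny graph is an assumption for illustration only.
graph = {
    "alice": ["bob", "carol"],
    "bob": ["alice", "dave"],
    "carol": ["alice", "dave"],
    "dave": ["bob", "carol", "erin"],
    "erin": ["dave"],
}

def network_distance(source):
    """Breadth-first search: hop count from source to every reachable member."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for neighbor in graph[node]:
            if neighbor not in dist:
                dist[neighbor] = dist[node] + 1
                queue.append(neighbor)
    return dist

def rank_results(source, candidates):
    """Order search hits by network distance; unreachable members sort last."""
    dist = network_distance(source)
    return sorted(candidates, key=lambda m: dist.get(m, float("inf")))

print(rank_results("alice", ["erin", "dave", "bob"]))
# With the toy graph above: bob is 1 hop, dave 2, erin 3.
```

A traversal like this touches memory at essentially random addresses, one hop after another, which is exactly why the graph has to live in RAM rather than on disk.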
Most web applications want rapid random access to many pieces of indexed data. They have only a hundred milliseconds or so to get what they need.
Only RAM, and its slower but persistent cousin the SSD, can deliver that. For most web applications, the data has to be sitting in memory to be accessed fast enough.
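The arithmetic behind that claim is simple. Using rough, commonly cited latency figures (assumptions, not measurements), a 100 ms budget allows only a handful of serial random disk seeks but on the order of a million RAM accesses:

```python
# Back-of-envelope check of the ~100 ms latency budget.
# Device latencies below are rough, commonly cited figures (assumptions).
BUDGET_NS = 100_000_000          # ~100 ms page budget, in nanoseconds

LATENCY_NS = {
    "disk seek": 10_000_000,     # ~10 ms per random disk seek
    "SSD read":     100_000,     # ~100 us per random SSD read
    "RAM access":       100,     # ~100 ns per random RAM access
}

for device, ns in LATENCY_NS.items():
    print(f"{device}: ~{BUDGET_NS // ns:,} serial random lookups per budget")
# disk seek: ~10, SSD read: ~1,000, RAM access: ~1,000,000
```

Parallelism across a cluster stretches these numbers, but the three to five orders of magnitude between disk and RAM are what make memory-resident data the only practical option for random access at page-render speed.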
What most web applications need is commodity clusters with lots of low-power, low-cost RAM. As for CPU, they need only something cheap and low power, just enough to filter data as they pull it out of memory. And they need disk only to bootstrap when they wake up.
But manufacturers keep adding more and more cores and asking for more and more power. They keep building bigger and bigger disks spinning faster and faster, eating more and more power.
Big web applications using commodity clusters are hungry for RAM. Give us servers loaded with lots and lots of low power RAM. And little else.