A new piece of research from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) proposes a new system for data centre caching built on flash memory, potentially making computing more energy-efficient and economical.
According to the researchers, their system, dubbed BlueCache, can keep up with the requests flooding a data centre by 'pipelining' them, allowing the next instructions to be fetched while the processor performs arithmetic operations.
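A simple analytical model illustrates why pipelining matters: if several flash accesses can be in flight at once, total time for a batch of requests shrinks roughly in proportion to the pipeline depth. This is a sketch of the general idea, not BlueCache's implementation; the latency figure and pipeline depth below are illustrative assumptions, not numbers from the paper.

```python
import math

def total_latency_us(n_requests, access_latency_us, pipeline_depth):
    """Toy model of pipelined flash access.

    With a pipeline of depth d, up to d accesses overlap, so the
    batch completes in roughly ceil(n/d) access times instead of n.
    """
    batches = math.ceil(n_requests / pipeline_depth)
    return batches * access_latency_us

# Illustrative numbers only: 1,000 requests at 200 microseconds each.
serial = total_latency_us(1000, 200, pipeline_depth=1)      # 200,000 us
pipelined = total_latency_us(1000, 200, pipeline_depth=8)   # 25,000 us
```

Under this model, an eight-deep pipeline cuts total batch time by a factor of eight, which is how a slower medium can keep up with a flood of requests.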
This helps offset flash memory's traditional weakness compared to RAM: speed. As slow as flash is relative to dynamic RAM (DRAM), users 'won't notice the difference between a request that takes .0002 seconds to process… and one that takes .0004 seconds because it involves a flash query', as MIT puts it. Per gigabyte of memory, flash consumes approximately 5% as much energy as RAM, and costs about one tenth as much.
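The ratios above can be turned into a back-of-the-envelope comparison for a cache of a given size. The baseline cost and power figures below are hypothetical placeholders; only the 1/10 cost and 5% energy ratios come from the article.

```python
# Hypothetical DRAM baselines (illustrative, not from the article).
DRAM_COST_PER_GB = 10.0   # dollars per GB, assumed
DRAM_POWER_PER_GB = 1.0   # watts per GB, assumed

# Ratios as reported in the article.
FLASH_COST_PER_GB = DRAM_COST_PER_GB / 10      # about one tenth the cost
FLASH_POWER_PER_GB = DRAM_POWER_PER_GB * 0.05  # about 5% the energy

def cache_footprint(capacity_gb, cost_per_gb, power_per_gb):
    """Return (total cost, total power) for a cache of the given size."""
    return capacity_gb * cost_per_gb, capacity_gb * power_per_gb

dram_cost, dram_power = cache_footprint(1000, DRAM_COST_PER_GB, DRAM_POWER_PER_GB)
flash_cost, flash_power = cache_footprint(1000, FLASH_COST_PER_GB, FLASH_POWER_PER_GB)
```

Whatever the absolute baselines, a terabyte-scale flash cache comes out an order of magnitude cheaper and a twentieth the power of its DRAM equivalent.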
Even with pipelining, however, the researchers had to deploy some 'clever engineering tricks' to make flash caching competitive with DRAM caching, with BlueCache ending up 4.2 times as fast as a default flash-based cache server. This included adding a few megabytes of DRAM for every million megabytes of flash, making the detection of cache misses more efficient.
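The point of that small DRAM allotment is that a compact in-memory index of which keys are stored can answer a miss immediately, without a slow flash lookup. The sketch below illustrates the idea with an ordinary Python dictionary standing in for the DRAM index and a list standing in for the flash device; it is a simplification under those assumptions, not the BlueCache design.

```python
class FlashBackedCache:
    """Toy key-value cache: values live on 'flash', the key index in DRAM.

    A lookup for an absent key is resolved entirely in the DRAM index,
    so cache misses never pay the flash access latency.
    """

    def __init__(self):
        self._index = {}    # stand-in for the small DRAM index: key -> offset
        self._flash = []    # stand-in for the flash device
        self.flash_reads = 0

    def put(self, key, value):
        self._index[key] = len(self._flash)
        self._flash.append(value)

    def get(self, key):
        offset = self._index.get(key)
        if offset is None:
            return None            # miss detected in DRAM; flash untouched
        self.flash_reads += 1      # only hits touch the slow medium
        return self._flash[offset]
```

Because the index stores only keys and offsets, a few megabytes of DRAM can cover a vastly larger flash store, which matches the ratio the researchers describe.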
“The viability of this type of system extends beyond caching, since many data-intensive applications use a [key value]-based software stack, which the MIT team has proven can now be eliminated,” said Vijay Balakrishnan, director of the data centre performance and ecosystem program at Samsung Semiconductor’s Memory Solutions Lab.
“By integrating programmable chips with flash and rewriting the software stack, they have demonstrated that a fully scalable, performance-enhancing storage technology, like the one described in the paper, can greatly improve upon prevailing architectures,” Balakrishnan added.
You can read the full MIT post here.