There’s a new technology on the block that’s going to shake up the storage world in a big way: SMB Direct.
SMB Direct is built upon Remote Direct Memory Access (RDMA), which enables very low-latency transfers between the memory of two computers without involving the operating system of either. When this technology is embedded in network interface cards and paired with a supporting protocol, it enables very low-latency access to files on a remote file share. I’ll avoid going into the depths of the technology (admittedly, because I don’t understand the details), except to say that it reduces the stack needed to talk to remote storage to far fewer components than conventional SAN block-level devices require. What I really want to focus on are the game-changing advantages this technology provides and the implications for SAN storage. When RDMA is combined with a network protocol that can take advantage of it, such as Microsoft’s SMB Direct, a whole host of advantages can be delivered:
- SMB Direct enables file shares to deliver performance superior to existing block-level SAN devices.
- Far less CPU is required to manage the IO stack.
- Storage networking can utilise existing Ethernet infrastructure to deliver the high-performance storage normally associated with Fibre Channel-connected SANs.
- Because RDMA-capable Ethernet adapters exist and work well, an entire organisation’s infrastructure can converge on Ethernet if it so wishes, reducing the need for Fibre Channel and InfiniBand technologies and their associated skills.
- Skills related to LUN presentation are not required.
- Protocols such as SMB Direct implement their own auto-discovery of targets and multi-pathing, removing the requirement for MPIO configuration.
- Utilising the same card type for storage as for networking enables greater IO capacity from servers and blades with limited slot counts, thereby increasing potential server density.
- Instead of pre-allocating fixed LUNs to servers, as current SANs require, it becomes feasible to use a single file share to store data from multiple servers without compromising performance. This results in less wasted storage, as no space needs to be pre-allocated to each connected server. As SAN storage is an expensive commodity, this will save quite a bit of money.
- An enterprise that uses RDMA-enabled protocols to talk to file shares instead of block-level SAN devices will experience a smoother, more predictable growth in their storage requirements.
- There will be an increase in demand for high-performance, low-latency centralised storage such as Violin Memory’s Windows Flash Array.
- The reduction in CPU utilisation will enable greater server consolidation.
- Storage networking skills will diminish in importance.
- Storage roles will increasingly focus on the storage media and less on the connectivity to it.
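To put rough numbers on the wastage argument above, here is a minimal sketch comparing pre-allocated per-server LUNs with a single shared file share. All of the figures (server count, LUN size, actual usage, headroom) are illustrative assumptions, not measurements from any real deployment.

```python
# Illustrative comparison: fixed per-server LUNs vs one shared file share.
# All numbers below are assumptions chosen for the example.

servers = 10
lun_size_tb = 2.0      # fixed LUN pre-allocated to each server
actual_use_tb = 0.8    # data each server actually stores, on average
headroom_tb = 2.0      # shared free headroom kept on the file share

# Block-level SAN: capacity is committed up front, per server.
san_provisioned = servers * lun_size_tb
san_idle = san_provisioned - servers * actual_use_tb

# Single shared file share: only the stored data plus shared headroom.
share_provisioned = servers * actual_use_tb + headroom_tb

print(f"SAN LUNs provisioned: {san_provisioned:.1f} TB "
      f"({san_idle:.1f} TB sitting idle)")
print(f"Shared file share:    {share_provisioned:.1f} TB")
```

Under these assumed numbers the per-server LUN model commits 20 TB (12 TB of it idle), while the shared file share needs only 10 TB, because unused headroom is pooled rather than locked to individual servers.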
SMB Direct is here to stay. Early adopters are likely to reap considerable advantages in cost savings, ease of management and scalability.