Of the many ways to add storage to your network, a storage area network, or SAN, is a study in contrasts. The technology promises to add scalable, inexpensive storage to your network quickly, but the costs can mount fast, particularly for resellers who must support it, and implementations can be notoriously difficult to pull off. On top of this, the standards surrounding SANs are still very much in flux, competent VARs to install and support these products can be hard to find, and the return on investment can be difficult for corporate IT departments to justify.
So why bother? Because for those resellers and customers who stick with the technology, the rewards can be great. It is a tired maxim that networks can never have too much disk storage, and it only grows truer as file-intensive operations such as imaging and video storage take off within enterprises.
“Our customers in the printing and publishing worlds have very demanding file storage and file management issues,” says Pat Taylor, president of ProActive Technologies Inc., based in Carrollton, Tex. “Their online storage systems regularly exceed a terabyte. Few industries consume storage like printers do.” That is a lot of disk space to manage, and it may not just be printers using it: interviews that Forrester Research has conducted over the past several years indicate that storage capacity is growing at 52% per year, with 30% of the sites contacted consuming more than 6 terabytes.
SANs create an independent high-speed network just for file storage, keeping the existing enterprise Ethernet for ordinary network traffic. The beauty of a SAN is that this new storage-only network can be shared with various application servers to keep up with their voracious file storage requirements, giving more headroom on the backbone Ethernet network and more room to grow these newer applications.
Resellers should note that the market for storage continues to be tremendous, even as the economy sputters. Gartner analyst James Opfer predicts about a $21 billion SAN market by 2005, and to be sure that covers the cost of a lot of hard disks and other components. So it could be a worthwhile business opportunity, if approached correctly.
SANs offer something that a pure Network Attached Storage (NAS) device doesn’t: the ability to scale up quickly, available at a moment’s notice for a particular application. Under software control, you can quickly aggregate individual hard disks into a single, large volume, or create different volumes that divide up a single hard disk. That software magic is part of the SAN system you purchase, and of course different systems have different controls and different routines for allocating new hard disk space. Indeed, perhaps the best combination is to add NAS servers to a SAN, so that you have the best of both worlds: simple setups with the NAS device, but the flexibility to arrange your storage needs with the SAN software.
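That pool-then-carve idea is easier to picture with a toy model. The sketch below is purely illustrative — the class and method names are invented for this article, not any vendor’s actual software: physical disks join a shared pool, and logical volumes are carved out of pooled capacity without regard to physical disk boundaries.

```python
# Toy model of SAN-style volume aggregation (illustrative only;
# real SAN software exposes vendor-specific tools, not this API).

class StoragePool:
    def __init__(self):
        self.disks = []          # list of (name, capacity_gb) tuples
        self.allocated_gb = 0    # pooled space already carved into volumes
        self.volumes = {}        # volume name -> size in GB

    def add_disk(self, name, capacity_gb):
        """Hot-add a physical disk; its capacity joins the shared pool."""
        self.disks.append((name, capacity_gb))

    def free_gb(self):
        return sum(cap for _, cap in self.disks) - self.allocated_gb

    def create_volume(self, name, size_gb):
        """Carve a logical volume out of pooled space; it may span
        several physical disks or use only part of one."""
        if size_gb > self.free_gb():
            raise ValueError("not enough pooled capacity")
        self.volumes[name] = size_gb
        self.allocated_gb += size_gb

pool = StoragePool()
pool.add_disk("disk0", 73)
pool.add_disk("disk1", 73)
pool.create_volume("print_jobs", 100)   # larger than any single disk
print(pool.free_gb())                   # -> 46 GB left for new volumes
```

Real SAN software layers striping, access control and failover on top of this, but the pool-then-carve model is the essence of what those allocation routines do.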
SANs have several different applications. First and foremost is speeding up backup operations: if you attach your tape servers to the SAN, then data can move more quickly from hard disk to tape, without interfering with any network users. This is because the Fibre Channel network (the typical connection used by most SANs) operates much faster than a standard Ethernet, and also because both the tape drives and the hard disks are attached directly to the network, without incurring any operating system overhead to transfer information between them.
SANs also can be used to archive data, or migrate files using hierarchical storage management techniques. They are also useful for a new breed of applications called data vaults, where large networks can archive their data to a remote server across the Internet.
The early SANs weren’t very flexible or very easy to set up. Resellers needed to decide which of two basic configurations to use. One method connects Just a Bunch of Disks (JBOD) enclosures directly to the SAN. This has the advantage of using relatively cheap SCSI disks that are plentifully available from numerous manufacturers. The second method relies on a special storage array, which can be more costly but is specifically designed for connecting to the SAN. Rounding out the SAN infrastructure is special Fibre Channel hardware and switchgear. Basically, you are setting up a second network of Fibre Channel host bus adapters that connects storage and application servers together, independent of the corporate standard Ethernet network. This network runs over a different set of cables and uses different network interface adapters in your servers: that means installing two sets of interfaces and two different sets of hubs and routers/switches, and managing them separately.
Resellers, when installing a SAN, will have to assign storage devices to particular application servers and set them up properly. That means familiarity with the underlying NT or Unix commands for managing disk drives, along with familiarity with SANs and how NT and Unix handle Fibre Channel adapters. “Historically, this mixed environment challenges the integrator by presenting compatibility issues with O/S drivers and ‘certified’ Fibre Channel host bus adapters,” says ProActive’s Pat Taylor.
This mixed operating system environment also presents its own challenges. “In the past, we have experienced a number of problems mixing and matching Fibre Channel products. In one installation, we had to move the customer from one SAN technology to another because of compatibility issues with his Silicon Graphics file server adapters, the Windows NT output servers, and the Fibre Channel switch that connected them,” says Taylor. One solution Taylor has found is partnering with the right vendors, among them hardware vendor QLogic and software vendor DataCore Software.
“Partnering with QLogic has helped us reduce those compatibility issues to a manageable level,” says Taylor. “In the absence of hard standards, QLogic has wrapped their arms around the compatibility issue — since they purchased Ancor, they can now provide end-to-end compatibility with their Fibre Channel switches and host bus adapters.” And since his company started selling DataCore software, “we have eliminated storage-related downtime and have the ability to consolidate storage resources and manage them from a single point without being tied to any particular storage manufacturer. This proved to be very attractive to our clientele.”
This is the real crux of the SAN dilemma: how best to manage all that disk storage out there on its own network. Unfortunately, the tools for the job aren’t yet at the level that a typical network management application such as HP OpenView provides for straight Ethernet infrastructure, although they are getting better.
Some signs that things are beginning to change for SAN management include storage vendor EMC announcing various initiatives to get its proprietary management software to run on other vendors’ storage arrays, along with sharing its programming interfaces with Compaq’s SAN systems. And a number of vendors have lined up behind a languishing protocol called DAFS to support better file transfer performance and interoperability over high-speed networks (see sidebar).
What to look for in a SAN reseller
Given the complexity of your average SAN, training is the key to success for any integrator thinking about entering this market. Augie Gonzalez, director of product marketing at DataCore Software, says: “We do the heavy lifting at the beginning so that the integrator can come up to speed and transition the customer from direct-attached storage to a well-managed SAN. We also offer web-based training and formal hands-on classes that span several days with our products so that resellers and integrators will be able to go in and configure and operate the equipment.”
Marc Farley echoes these comments on training. Farley, author of several storage networking books and president of consulting firm Building Storage Inc., says that resellers “should be prepared to invest considerable resources in training and education.” One vendor Farley recommends for particularly good training courses is InfinityIO.com, based in Half Moon Bay, Calif. The company offers a wide range of courses on SAN and Fibre Channel technologies and techniques.
Besides training, integrators should spend the time actually testing various SAN products for interoperability and configuration. “Interoperability is constantly getting better,” says Farley, “but it isn’t safe to assume that you can mix and match SAN gear and everything will automatically just work.”
Curtis Preston, who is a storage networking expert and president of The Storage Group, agrees with Farley. “Interoperability is far from an assumption at this point and will continue to be a problem for a while. So many times you can solve a connectivity problem when you swap out one host adapter for another from a different vendor. The VARs should know this already — they should not be selling switches and disk arrays that haven’t been tested to work together. Whenever possible they should be selling solutions that have been tested and proven together. I get furious with those vendors that don’t want to take responsibility for their gear and be able to support it down to the OS level.”
Testing a variety of SAN equipment for interoperability can bring some benefits for resellers, too. Farley says that “End users want to see alternatives to the single-vendor solution and resellers should be able to provide various packages that they have actually proven work together.”
Virtualization or visualization
There are two basic types of SAN management tools. The first, called storage virtualization, helps network administrators control how much disk storage they have and how it is divided up and presented to network users. This makes it easier to shift disk assets when a particularly storage-hungry application requires more room, for example. These applications can be very seductive, as you might imagine. Products from DataCore Software, SANavigator, Veritas’ SANPoint Control and Vixel’s SANsite are typical here.
But virtualization is only part of the SAN story. There is also storage visualization: the physical management of the entire specialized SAN infrastructure of switches, hubs, and storage enclosures from the level of the Fibre Channel network itself. These tools answer questions such as: Are these devices up and running, or do they have problems? Network management software vendors are just beginning to incorporate modules to support Fibre Channel in their management consoles.
In an email poll conducted last year, Network Computing magazine found that performance and quality of technical support topped the list of features respondents wanted when picking a SAN management solution. Cross-vendor support and ease of use were the top two physical SAN management features respondents desired; visualization was fourth on that list.
Why SANs rock
Given all these complexities, a working SAN can still be very worthwhile once it is in place. “Our customers can add drives to a storage pool during production hours without having to bring down the storage system or the servers attached to it. We just use the software to add the drives and it gets discovered on the network. It doesn’t get any easier than that, and this flexibility is what makes SANs so powerful,” says Taylor.
One new development with SANs is a protocol to help speed file transfers around networks. “Direct Access File System (DAFS) makes file transfers over Ethernet, or any other connection, happen at wire speed. This means if you have a 10-megabit network, you get close to 10-megabit transfers. This is unlike TCP/IP, which adds so much overhead that you never get anywhere near wire speed. Plus, with DAFS you won’t tax your machine’s processing resources with these file transfers either. In a sense, DAFS delivers block-level input/output performance, but does it at the file level,” says Steve Duplessie, founder and senior analyst with Enterprise Storage Group.
The idea is to short-circuit the data path a server needs to read and write a file, eliminating as much as possible the overhead imposed by the server’s central processor itself. This direct memory-to-memory transfer allows bulk data copying to bypass the normal protocol processing of an application or file server and move directly between appropriately aligned buffers on the communicating machines.
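The performance argument is worth splitting in two, and a quick back-of-the-envelope calculation shows why (the header sizes below are the standard ones; the link speed is simply the 10-megabit example from Duplessie’s quote). Protocol headers alone cost a bulk TCP/IP transfer only a few percent of the wire; the larger cost he describes is host-side processing, the data copies, checksums and interrupts that the DAFS memory-to-memory path avoids.

```python
# Header overhead of a bulk TCP/IP transfer on a 10 Mbit/s Ethernet link.
# Assumes full-size frames, no TCP/IP options, no retransmissions.

LINK_MBPS = 10.0
MTU = 1500                      # IP packet bytes carried per Ethernet frame
ETH_OVERHEAD = 18               # Ethernet header plus frame check sequence
IP_HDR, TCP_HDR = 20, 20        # basic IPv4 and TCP headers

file_bytes_per_frame = MTU - IP_HDR - TCP_HDR   # 1460 bytes of file data
wire_bytes_per_frame = MTU + ETH_OVERHEAD       # 1518 bytes on the wire

efficiency = file_bytes_per_frame / wire_bytes_per_frame
print(round(LINK_MBPS * efficiency, 2))         # -> 9.62 Mbit/s of file data
```

In other words, headers still leave roughly 9.6 of the 10 megabits for file data; the reason a server of this era saw far less than that over TCP/IP was the per-packet CPU work, which is exactly the overhead DAFS’s direct data placement is designed to eliminate.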
In addition to improving file transfer performance, this protocol will allow network-attached storage devices to take on more demanding applications and lighten the load of storage servers. This will be especially true as more applications such as databases and backup servers support this protocol, and several leading vendors have announced their support.
The DAFS protocol specifications were originally the work of engineers from Network Appliance, Seagate Technology and Intel and were published several years ago. However, recent activity by a number of vendors should see some products come to market this year. These vendors, including InfiniSwitch, Network Appliance, and Troika Networks, among others, have formed an industry consortium called the DAFS Collaborative, and maintain a web site with a wealth of white papers and details about their plans to implement the protocol (www.dafscollaborative.org). While the vendors in the collaborative have so far released mostly demonstrations, keep a sharp lookout for new products and developments in this area over the next several months. — D.S.