Storage 2001: Order from chaos
Storage-Area Network
Analysts expect the storage-area network (SAN) to be the greatest influence on new directions for storage technology. A SAN is essentially a pool of storage devices, such as hard disk, tape and optical drives, on a dedicated subnetwork that is shared by all systems on the primary Ethernet network.
To understand how a SAN works, picture a subnetwork within the main network (see chart, at right). That subnetwork, which closely mirrors a typical LAN, solely contains storage devices that operate independently of one another. It is where the enterprise's data is stored. All data traffic stays on the SAN until called by a server; only then is it switched to the client network.
SANs are the future of storage, but they aren't quite here yet. Some of the SAN pieces are available and can give users a taste of the benefits, such as higher throughput and longer connection distances with the Fibre Channel interconnect. But you can't put it all together yet.
"We're expecting a two- to four-year phase-in by high-end Unix shops first," says Andres Lofgren, an analyst at Giga Information Group in Cambridge, Mass. But the expense of interconnect devices such as routers and hubs required to build SANs will likely hold off typically price-sensitive Windows NT shops longer, according to Lofgren.
Moving storage to its own network satisfies one of the biggest wants of end users: speed. Dedicated, 100M bit/sec. bandwidth for data transfers should give users what they want and at the same time decongest the client network. But what the systems department at First Union Corp. in Charlotte, N.C., finds most compelling is a SAN's promise of flexibility.
Gary Fox, a systems consultant at First Union, expects SANs to drastically change the way he allocates disks to servers. Because servers will share all the storage devices on the SAN, Fox says he expects to better match a system's needs to its most suitable type of storage. "For systems that need fast access to mirrored disk, we'll add an EMC array," he says. "[For] systems that don't need fast access, we'll hang a number of 45G-byte drives off the SAN."
Fox also is looking for the ability to add whatever storage he needs and when he needs it. That's another primary benefit of SANs, according to Tom Lahive, an analyst at Dataquest in Lowell, Mass. You aren't locked into any particular vendor's solution with a SAN, and because storage is separate from servers, "you can buy [whichever] disk array fits your budget at that time," he says.
Cutting costs is always a top priority, but having the flexibility to buy disks from the vendor offering the best deal is only one step toward reducing total cost of ownership. The more significant impact comes from what SANs do to reduce network complexity.
Michael Zanga, senior NT engineer at Greenwich Capital Markets, Inc. in Greenwich, Conn., is already installing Fibre Channel, which lets him design a network in which his NT and Unix servers share storage devices. Zanga says his goal is to consolidate storage so none of it is special to any particular server. "I want to view our storage in the future as just being generic," he says.
Fox sees SANs as a way to free up his servers' expansion slots, which he has been maxing out as he adds storage. "We've filled up all the card slots on several servers because we had to connect them to additional [storage] arrays," he says.
Servers need only a single connection to the SAN and aren't troubled with file-serving duties, so they will likely have longer lives. "And because we will be able to transfer data within the SAN, it will take a load off the [wide-area network] and hopefully prolong the life of a lot of devices," Fox says.
Then there's the ability to scale only as your needs grow. "When your first $30,000 array maxes out, go buy another, as opposed to initially buying a $100,000 array," Lahive says. Vendors say to think of a SAN as a cloud: If you need more storage, just throw in another disk array.
That cloud analogy may be appropriate because right now the SAN isn't much more than a concept. Some of the products are in place but not enough to guarantee all of the promised benefits.
WHAT'S HERE
Switches, hubs, routers and the other interconnect devices for LANs, and thus for SANs, are all available. So are the interfaces: IBM's Escon, the dominant interface for mainframes, and IBM's Serial Storage Architecture are SAN candidates. But it's Fibre Channel that's emerging as the industry-standard SAN interface. And Fibre Channel can be deployed today, although its chip set isn't yet fully optimized to deliver the 100M bit/sec. performance the specification calls for, according to Lahive.
Fibre Channel has some advantages in that it's an outgrowth of SCSI and Ethernet, "meaning it can talk SCSI, the language of file I/O as well as [Internet Protocol] in a single interface," says James Staten, an industry analyst at Dataquest in San Jose, Calif. A more noticeable benefit is the distance it can span. SCSI is limited to 25 meters, but Fibre Channel extends to 10 kilometers.
WHAT'S NOT HERE
There are no standards yet to ensure that servers and storage devices from multiple vendors will communicate. The Storage Networking Industry Association says it doesn't expect to complete the standards for at least another year.
Also, there aren't any software utilities to manage the hardware devices in a SAN; much of today's management software comes from storage device vendors. "It's what they have used to differentiate themselves in the market," and it isn't prepared to manage other vendors' equipment, says Carolyn DiCenzo, principal analyst at Dataquest.
WHAT YOU CAN DO IN THE MEANTIME
Analysts recommend working on a Fibre Channel infrastructure for now. "You can still reap the benefits of increased distance and performance with Fibre connections now; improved connectivity will come later," Casey says. Switching to Fibre involves replacing the host bus adapter card in the servers and the controller on the storage subsystem. The cost of Fibre Channel is relatively high compared with Gigabit Ethernet, "but it's not cost-prohibitive" like Asynchronous Transfer Mode, Staten says.
Zanga is already preparing Greenwich Capital with Fibre Channel storage and is using its distance advantage to support an off-site server for disaster recovery.
Network-Attached Storage
Network-Attached Storage (NAS) is a technology complementary to SANs that's available today and able to deliver some of the same benefits. Where SANs are for the enterprise, NAS is essentially a mini-SAN for LAN segments.

Plummeting prices

Year     Worldwide total disk capacity     Overall average price
         shipped (terabytes)               per megabyte
1988     1,770                             $11.54
1992     8,180                             $3.00
1995     80,677                            $0.33
1998*    772,275                           $0.044
2001*    6,141,889                         $0.006

* Projected
Source: Disk/Trend, Inc., Mountain View, Calif.

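The Disk/Trend figures above imply steep compound rates in both directions. As a back-of-the-envelope sketch (derived here from the table's own numbers, not stated by Disk/Trend), the implied annual rates work out roughly as follows:

```python
# Sketch: compound annual rates implied by the Disk/Trend table (1988-2001).
# Compound growth over n years satisfies: end = start * (1 + r) ** n.

def annual_rate(start, end, years):
    """Implied compound annual rate (negative = decline)."""
    return (end / start) ** (1 / years) - 1

# Price per megabyte: $11.54 in 1988 -> $0.006 (projected) in 2001.
price_decline = annual_rate(11.54, 0.006, 2001 - 1988)

# Total capacity shipped: 1,770 TB in 1988 -> 6,141,889 TB (projected) in 2001.
capacity_growth = annual_rate(1_770, 6_141_889, 2001 - 1988)

print(f"price per MB falls about {-price_decline:.0%} per year")
print(f"capacity shipped grows about {capacity_growth:.0%} per year")
```

That is, the table projects prices falling roughly 44% per year while shipped capacity grows roughly 87% per year, which is the economic backdrop for the archiving-to-disk trend discussed later in the story.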
A NAS device is a specialized server that does nothing but serve up files. It attaches directly to the LAN like any other node and is as accessible as a network printer.
Hargreave started implementing NAS devices at Geneer because of their performance and lower cost. "Serving up files isn't complicated. You don't need a $20,000 server to do it," Hargreave says.
NAS devices also work well for workgroups with extraordinary storage demands. "We're planning on offloading groups that have a lot of PowerPoint presentations onto their own NAS," Hargreave says. Geneer is testing NAS devices in pockets but won't fully buy in to NAS until management utilities arrive.
Storage Resource Management
Storage Resource Management (SRM) software, which primarily performed backup and recovery and has typically come from storage device vendors, is largely unprepared to manage new environments and devices such as SANs and NASes.
"Now that users are starting to recentralize storage, data backup isn't the problem it once was. What's missing now are tools that can manage a variety of storage devices [for example, tape, optical disk, RAID], and [that have] the means to be proactive, to predict problems before they happen," DiCenzo says.
SRM has always been available on mainframes. Boole & Babbage, Inc.'s SpaceView and Sterling Software, Inc.'s Vantage remain the stalwarts, but SRM is only starting to emerge for open systems. "And much of what will work for SANs and NASes will come from new companies like HighGround [Systems in Boxboro, Mass.] that are dedicated to SRM," Staten says.
HighGround now has the only SRM product that sets alerts and thresholds and monitors disk consumption for Windows NT, according to DiCenzo. HighGround also is building the standard tool kit interface for managing removable storage in NT 5.0.
Although HighGround's software isn't ready for SAN or NAS environments, the company plans to have products available next year, according to Tom Rose, HighGround's vice president of marketing.
SRM will evolve much as Computer Associates International, Inc.'s CA Unicenter and Hewlett-Packard Co.'s HP OpenView did in network and systems management: into a command center that continually broadens the reach of what it can manage.
DiCenzo says she expects SRM eventually to tie in to the network and systems control. When that happens, more network administrators will likely be managing storage, "and that will be a real shift," she says.
Archives
Retrieving data from traditional archive media such as tape and optical disk has always been an arduous process. Unfortunately, improvements in those media have done little to improve data retrieval.
Virtual tape, one of the more notable advancements in tape technology, makes better use of a tape's capacity but doesn't make the data more accessible. Virtual tape systems, such as Virtual Storage Manager from Storage Technology Corp. in Louisville, Colo., and Virtual Tape Server from IBM, use disk arrays to first cache data sets and then stack them as virtual tape volumes. When the volume is full, it's transferred to tape, completely filling its capacity.
Tape's advantage over disk remains its lower cost, but that price edge is no longer enough for some users, especially as the cost of magnetic disk continues to drop, Staten says.
Fox began archiving to disk when he established a long-range plan to let customers request images of canceled checks over the World Wide Web. "Making it happen isn't as easy as it sounds, but if we archive on quick-access magnetic disk, we stand a chance. If it's on tape, forget it," Fox says.
Advancements in optical technology from companies such as Quinta Corp. in San Jose, Calif., may further squeeze tape's hold on archiving, according to Jim Porter, principal at Disk/Trend, Inc. in Mountain View, Calif. Quinta is developing what it calls Optical Assisted Winchester, which promises to extend the recording density far beyond the believed 40G-bits-per-square-in. limit of magnetic disks.
"The expectation is that [optical assisted technology] will top out in the hundreds of G-bits per square inch," Porter says.
The first products will likely be removable disk drives, Porter says. Quinta won't reveal dates for products, but the company says it expects to draw revenue from the technology within three years.
Disk Capacities
In the year 2001, desktop systems will be sporting 40G-, 60G-, even 80G-byte hard drives, Porter says. He bases his prediction on a conservative estimate that disk capacities will increase by at least 60% per year. The average in the past six years has been 73%.
IBM is paving the way for Porter's prediction. It continues to advance the sensitivity of magnetic-head technology to read smaller bits of recorded data, allowing data to be packed more tightly onto a disk.
Areal density, the amount of data that can be stored on a square inch of disk, is about 3G bits today, but IBM's latest Giant Magnetoresistive heads will support 10G bits per square inch and higher. "It's believed that [IBM] can eventually take magnetic recording up to 70G bits per square inch," Porter says. But if capacity does continue to increase at 60% per year, Porter says the physical limit of magnetic disk will be reached within 10 years.
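Porter's projections are straightforward compound-growth arithmetic. As a rough sketch (the ~10G-byte 1998 desktop drive and the ~3G-bit starting density are assumptions for illustration, not figures from Porter):

```python
import math

# Sketch of the article's disk-capacity projections under compound growth.
# Assumed starting points (not stated in the article): a ~10G-byte desktop
# drive and ~3G-bit/sq-in. areal density in 1998.

GROWTH = 0.60  # the "conservative" 60% annual increase Porter cites

# A 10G-byte drive growing 60% per year for three years (1998 -> 2001):
capacity_2001 = 10 * (1 + GROWTH) ** 3
print(f"projected 2001 desktop drive: about {capacity_2001:.0f}G bytes")

# Years for areal density to climb from ~3G bits to the believed
# 70G-bit/sq-in. limit of magnetic recording, at 60% growth per year:
years_to_limit = math.log(70 / 3) / math.log(1 + GROWTH)
print(f"years to magnetic limit: about {years_to_limit:.1f}")
```

Under these assumptions a drive reaches roughly 41G bytes by 2001, in line with the 40G-byte low end of the prediction, and the 70G-bit density ceiling arrives in about seven years, consistent with Porter's "within 10 years."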
Did you know?

1 byte = 8 bits
1 kilobyte = 1,000 bytes
1 megabyte = 1,000,000 bytes
1 gigabyte = 1,000,000,000 bytes
1 terabyte = 1,000,000,000,000 bytes
1 petabyte = 1,000,000,000,000,000 bytes
1 exabyte = 1,000,000,000,000,000,000 bytes
1 zettabyte = 1,000,000,000,000,000,000,000 bytes
1 yottabyte = 1,000,000,000,000,000,000,000,000 bytes

Source: Disk/Trend, Inc., Mountain View, Calif.
However, hybrid technologies from companies such as TeraStor Corp. in San Jose, Calif., are ready to set new boundaries for disk capacity.
TeraStor's Near Field Recording uses a combination of optical and magnetic drive technology to pack data far more densely than magnetic disk alone can. "It will eventually record data in the many hundreds of G-bits-per-square-inch range," Porter says.
TeraStor's first drives, which it expects to ship in the fourth quarter of this year, will have capacities of 10G bytes and 20G bytes. A 40G-byte drive will follow soon after. Preliminary pricing puts the cost of the 10G-byte drive between $700 and $800 and the 20G-byte drive between $1,000 and $1,200.
Burden is Computerworld's senior editor, features
Illustration: Larry Goode