SANs Help Manage Huge Data Flows

Data volumes are growing in excess of 100 percent per year. Where is all this data coming from? Think about it.

Insurance companies are all about data, be it customer data, actuarial data, demographic data, or the myriad other business and marketing data that carriers generate. Due to compliance and other regulatory requirements, companies are collecting and retaining data at a faster rate than ever.

That's why data loss that disrupts business-critical applications is at the forefront of the chief information officer's mind. Disaster recovery and data inaccessibility are major concerns in IT organizations today, and given the time it takes to back up server data, companies are keenly aware of their inability to respond quickly to data outages.

In the insurance industry, a data outage could leave an agent unable to provide a walk-in customer with a quotation because the system is inaccessible. That spontaneous customer might leave in frustration and go to a competitor for a policy. Or it might leave a customer service representative unable to pull up an angry customer's records, prompting that customer to hang up and start shopping for another insurer.

In addition, compliance mandates such as the Health Insurance Portability and Accountability Act have put a new emphasis on the importance of data. Virtually all organizations that electronically handle health information must comply with HIPAA and be able to demonstrate that compliance if audited. Other compliance requirements relating to financial reporting are also driving the influx of huge amounts of data.

As the amount of data under management began to grow dramatically in the 1990s, straining the resources of existing computing systems, companies tended to “throw more hardware” at the problem.

But simply adding more servers and storage devices compounded the problem of sprawling direct-attached storage environments and more often than not failed to satisfactorily address performance and availability issues. To provide a solution for managing the data onslaught, more and more organizations are moving from a direct-attached environment to a storage area network (SAN) environment.

The reason? There is an obvious need to better manage the ever-increasing volumes of business data, enhance security, and provide faster and more efficient backup capabilities.

A SAN is a dedicated, centrally managed, secure information infrastructure that enables any-to-any interconnection of servers and storage systems.

By moving from direct-attached storage architecture to a networked architecture, companies have gained considerable improvements in the performance, flexibility and manageability of their storage infrastructures.

While a SAN can't slow a company's data growth rate, it can reduce the overall cost of storage by providing an infrastructure in which existing storage resources are more efficiently utilized and centrally managed.

And the benefits gained from implementing a SAN are not limited to just the largest insurance organizations. Nearly one-third of the companies expected to deploy SANs over the next five years are smaller companies (fewer than 250 employees) and two-thirds of the companies expecting to deploy SANs have fewer than 5,000 employees.

If anything has limited the faster acceptance of SANs, it is the complexity associated with the technology. A myriad of connectivity components is required to build the SAN fabric, including servers, switches and storage devices, often supplied by a variety of manufacturers. Once integrated, today's SAN solutions are compelling, but they are anything but simple to install and maintain.

Consider the SAN to be a highway system through which critical data flows. If the traffic flow is interrupted due to any number of possible problems, including switch, router, server or storage device failure, there will be a system slowdown, or worse. An interruption in service can cost an insurance company anywhere from tens to hundreds of thousands of dollars a minute, depending on the size of the company and the time of day.

To proactively manage a SAN, it is crucial for organizations to monitor the overall health of the SAN on an ongoing basis, because waiting for problems to occur can result in disaster.

The various components in the SAN communicate by talking to each other in individual conversations or “exchanges.”

For example, a server asks a disk if it's active, and the disk replies “yes.” The server asks the disk for some data, and the disk sends it and asks for acknowledgement of receipt. The server sends an acknowledgement, completing the exchange.

If something is wrong in the SAN, such as a disk not responding as expected, the conversation is repeated until a successful outcome is reached. That repetition can go on for a long time, consuming SAN capacity and slowing user response times in the process.
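To make that concrete, here is a minimal sketch in Python of how such an exchange-with-retries might be modeled. Every name in it (the FlakyDisk class, the failure rate, the retry limit) is hypothetical and chosen purely for illustration; real Fibre Channel exchanges are considerably more involved. What the sketch demonstrates is that each failed attempt replays the entire conversation, and those replays are the wasted capacity described above.

    import random
    import time

    # Toy model of a SAN "exchange": one request/response conversation
    # between a server and a disk. All names and rates are hypothetical,
    # for illustration only.

    class FlakyDisk:
        """Simulates a disk that occasionally fails to respond."""

        def __init__(self, failure_rate=0.3):
            self.failure_rate = failure_rate

        def read(self, block):
            """Return data for a block, or None when the disk stays silent."""
            if random.random() < self.failure_rate:
                return None  # no reply -- the exchange must be repeated
            return f"data-for-block-{block}"

    def exchange(disk, block, max_retries=5):
        """Repeat the conversation until it succeeds or retries run out.

        Each failed attempt stands in for wasted SAN capacity and
        added user latency.
        """
        for attempt in range(1, max_retries + 1):
            data = disk.read(block)
            if data is not None:
                return data, attempt
            time.sleep(0.01)  # brief back-off before repeating the exchange
        raise TimeoutError(f"block {block}: no reply after {max_retries} attempts")

    if __name__ == "__main__":
        random.seed(7)
        disk = FlakyDisk()
        for block in range(5):
            data, attempts = exchange(disk, block)
            print(f"{data!r} received after {attempts} attempt(s)")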

By closely watching what is going on in the SAN at the level of individual conversations, SAN monitoring products can find problems before they lead to inefficiency or, worse, a SAN crash that takes the business out of service.

In the multi-vendor makeup of today's SAN implementations, it is often difficult to isolate and diagnose problem areas. Often vendor finger-pointing becomes the rule of the day and inhibits quick problem resolution.

Specialist SAN monitoring tools can look at the overall SAN (not individual components), find where problems are coming from, and indicate the most efficient way to address them.

With the comprehensive data available from SAN monitoring tools, network managers can easily identify problem areas and perform trend analysis. Better insight into the SAN's performance translates into less finger-pointing, quicker problem resolution, and more efficient use of SAN resources. The bottom line for companies is improved application performance and SAN uptime, which translates into greater user productivity and higher customer satisfaction.
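As a rough illustration of what conversation-level monitoring buys, the toy sketch below (Python again, with invented device names and alerting thresholds, not a depiction of any vendor's product) aggregates per-device exchange statistics and flags devices whose average latency or retry count has drifted out of bounds. This is the kind of early-warning trend analysis described above, reduced to a few lines.

    from collections import defaultdict
    from statistics import mean

    # Hypothetical exchange records: (device, response time in ms, retries).
    # A real SAN monitor would collect these from taps on the fabric;
    # they are hand-written here to illustrate the analysis.
    EXCHANGES = [
        ("disk-a", 2.1, 0), ("disk-a", 2.3, 0), ("disk-a", 2.2, 0),
        ("disk-b", 2.0, 0), ("disk-b", 9.5, 2), ("disk-b", 11.2, 3),
        ("disk-c", 2.4, 0), ("disk-c", 2.6, 1), ("disk-c", 2.5, 0),
    ]

    LATENCY_LIMIT_MS = 5.0  # invented alerting thresholds,
    RETRY_LIMIT = 1.0       # for illustration only

    def health_report(exchanges):
        """Aggregate per-device statistics and flag suspicious devices."""
        by_device = defaultdict(list)
        for device, latency_ms, retries in exchanges:
            by_device[device].append((latency_ms, retries))

        for device, samples in sorted(by_device.items()):
            avg_latency = mean(s[0] for s in samples)
            avg_retries = mean(s[1] for s in samples)
            flagged = avg_latency > LATENCY_LIMIT_MS or avg_retries > RETRY_LIMIT
            status = "INVESTIGATE" if flagged else "ok"
            print(f"{device}: {avg_latency:.1f} ms avg latency, "
                  f"{avg_retries:.1f} avg retries -> {status}")

    if __name__ == "__main__":
        health_report(EXCHANGES)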

SANs hold great promise for helping organizations maintain, access and secure corporate data. The companies that best manage and take advantage of this data to better understand and serve customers will be those that lead the marketplace.

Brian Staff is vice president of marketing for Finisar Network Tools (www.finisar.com), based in Sunnyvale, Calif. He can be reached at [email protected].


Reproduced from National Underwriter Edition, April 2, 2004. Copyright 2004 by The National Underwriter Company in the serial publication. All rights reserved. Copyright in this article as an independent work may be held by the author.

