In the 1986 "Star Trek" sequel "The Voyage Home," the crew travels back in time to 1986 to recover two humpback whales, the only Earth beings that can answer a destructive space probe orbiting the planet in a future where whales have become extinct.

In one scene, James Doohan’s character Scotty offers to share some future scientific knowledge with a materials manufacturer in exchange for the aluminum walls needed to enclose the whales for transport back to the future. When shown the computer on which he is to display the formula, Scotty is confounded when the machine doesn’t answer his "Hello, computer" request. It’s not until the scientist suggests using the keyboard that things start to happen.

This isn’t the first reference to voice recognition in computers. The first I can remember is the conversation between Dave and HAL 9000, the rebellious computer from Stanley Kubrick’s "2001: A Space Odyssey." The discussion is like any conversation you might have with a psychotic killer that happens to be a machine. At the end of the epic battle between man and machine, HAL degenerates into a mindless automaton as Dave removes his memory packets and his voice becomes less and less recognizable.

Communication between man and machine began early on simply as a way to give voice to the written word; scanners were given the ability to "read" the printed word with some semblance of clarity to help the visually impaired. As technology advanced and processing power grew exponentially, the capacity for computers to understand the spoken word started to evolve, and businesses began using the technology to improve customer service. Whether customers actually saw much improvement is arguable.

Voice recognition is everywhere today. Just last week I was on the phone to get some tech support from one of my vendors. As happens more and more often these days, the automated voice at the other end of the line wanted to gather some information to connect me with the right person. I could input the data it requested either through the keypad or by speaking into the phone. Always one to embrace technology, I opted for the voice input approach. I’ve done this many times and find that the degree of success depends on the quality of the voice connection, the background noise and the clarity of the speaker.

One or more of those weak links was an issue for me this last time and ended up crashing the system every time I tried to input information.

The most demanding use of voice recognition is as a total input device. Several months ago I required some minor hand surgery, which left me unable to touch type with both hands for about six weeks. Because I write for a living, I figured this would be a great excuse to try out one of the software packages that takes dictation.

After some research, I chose Dragon Dictate by Nuance, generally recognized as a leader in the field. After downloading and configuring the software, I started playing. I was impressed by its ease of use and the accuracy with which it turned my spoken words into written text. After a short while, however, I realized that I wasn’t as good as the software: I couldn’t write as well when dictating as I could when typing it in myself. I don’t know if dictation is a lost art (after all, how many executives are out there these days dictating letters to their secretaries?), or one I just never possessed.

These days, the more common use of voice recognition can be found inside your car. Microsoft’s Sync, for example, was a big sales feature for Ford cars when it first arrived on the market several years ago. You could control your navigation and entertainment systems and even handle emergency calls. Navigation systems themselves offered a voice interface. Both my wife’s Hyundai Santa Fe and my Toyota Highlander have voice-based navigation systems, but mine is a human voice that recorded hundreds of words and numbers to cover all of the foreseeable vocal needs the system would have. My wife’s system, on the other hand, is a true computer-simulated voice that sometimes places the emphasis on the wrong "sylLAble."

So there are many factors that go into an effective and successful voice recognition system, each with its strengths and weaknesses. It’s not just the extent of the system’s vocabulary and its degree of accuracy; with the recent introduction of Siri, we are now also considering the user’s experience and the system’s personality.

Siri is the name given to the built-in smart assistant found in Apple’s new iPhone 4S. Siri can send text messages, schedule appointments and reminders, call your friends and read aloud incoming messages, all through simple, naturally spoken commands. It also tells jokes (but won’t do "knock-knock" jokes), all part of the very robust personality Apple has created for Siri.

Once again, Apple has set the innovation bar very high. Siri doesn’t demand specific syntax; it doesn’t require training or limit you to certain words. Right out of the box, Siri is ready to do your bidding and make you laugh at the same time.

Even with the creative approach to voice recognition Apple has taken, the company recognizes that the technology and architecture behind Siri are in the very early stages. It is far from perfect and, on occasion, it may take a couple of tries before your request is accurately understood. You’ll notice that Siri is actually in beta, yet presented as a fully functioning feature. Despite the few glitches, I can only imagine what it will be capable of doing when it gets a full release.

While there are hundreds, if not thousands, of videos on YouTube touting the unique, and at times funny, nature of Siri’s personality, this step in the evolution of the man/machine interface is important. How we, and ultimately our customers, interact with the "Siris" in our future will determine how quick businesses will be to incorporate this technology.

If independent agents are going to successfully compete with direct writers and captive agents that have, by comparison, vastly greater resources, then they will need to leverage all types of technology and find "partners" in some unlikely places.

If you think about some of the basic skills Siri demonstrates, and then imagine training your own Siri-like assistant to perform some of the same tasks that many entry-level CSRs do, you can see the potential of voice recognition technology to extend your customer service reach.

We are still decades away from true artificial intelligence on a wide-scale basis, but these shades of AI are whetting our appetites for the future benefits it promises. In the meantime, consider not just what voice recognition systems can bring to your agency’s customer service efforts, but think about what opportunities shifting some of the more mundane tasks over to that "silent partner" can open up for your CSRs.

Or better yet, think about what your agency’s mobile app could do with Siri APIs. Could Siri complete a loss notice for you by having the client say, "Siri, I was just in a car accident. Can you help me?"
