Odd Tweet of the Day Award

Although it seems a bit early to be handing out an award that could cover the entire day (and it’s only 9:30 AM EST), the Odd Tweet of the Day Award goes to:

@femainfocus

I have no idea what this Tweet means, and I suspect it was posted in error – perhaps someone at FEMA thought they were typing into their IM client.

[Image: screenshot of the FEMA tweet]


It’s the Little Things

OK, I’ll admit it. I’ve said some things about Twilio in the past.

To be honest, initially I viewed Twilio as a low-rent competitor to VoiceXML and the family of W3C languages designed to support telephony applications. However, although I still harbor some doubts about Twilio, my attitude is changing – largely based on some of the features that Twilio now offers.

Voice Transcription:

Now this is pretty boss. The Twilio XML markup language (TwiML for short) not only lets developers record what a caller says (basic functionality that’s been in VoiceXML since 1.0), it also lets you have that recording transcribed (up to 2 minutes of speech). It’s an exciting feature, and although it’s a paid one, I’ll be trying it out in the near future.
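As a rough sketch of how this might look (assuming a Flask webhook at /voice; the /transcription callback path is just a placeholder I made up), the TwiML Record verb takes transcribe and transcribeCallback attributes:

```python
# Minimal sketch of a Twilio voice webhook that records the caller and asks
# for transcription. Assumes Flask is installed; /transcription is a
# hypothetical callback you would implement to receive the transcribed text.
from flask import Flask, Response

app = Flask(__name__)

@app.route("/voice", methods=["POST"])
def voice():
    twiml = (
        '<?xml version="1.0" encoding="UTF-8"?>'
        "<Response>"
        "<Say>Please leave a message after the tone.</Say>"
        # transcribe="true" requests transcription of the recording (a paid
        # feature); maxLength caps the recording at 120 seconds.
        '<Record transcribe="true" transcribeCallback="/transcription" maxLength="120"/>'
        "</Response>"
    )
    return Response(twiml, mimetype="text/xml")

if __name__ == "__main__":
    app.run(port=5000)
```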

Geographic Information:

When a call comes into your app, Twilio attempts to look up geographic data based on the ANI and DNIS used. This gives developers access to the city, state and zip of both the calling and called parties. Pretty sweet!
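For example, here’s a small hypothetical helper (the function name is mine, not Twilio’s) that pulls those fields out of the parameters Twilio posts with an incoming-call webhook – FromCity, FromState, FromZip and their To* counterparts:

```python
# Sketch of extracting the geographic fields Twilio includes with each
# incoming-call webhook. The helper name and return shape are my own.
def caller_location(params: dict) -> dict:
    return {
        "caller_city": params.get("FromCity", ""),
        "caller_state": params.get("FromState", ""),
        "caller_zip": params.get("FromZip", ""),
        "called_city": params.get("ToCity", ""),
        "called_state": params.get("ToState", ""),
        "called_zip": params.get("ToZip", ""),
    }

# Example with the kind of parameters Twilio might send:
print(caller_location({"FromCity": "SAN FRANCISCO", "FromState": "CA", "FromZip": "94102"}))
```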

Yes, it’s the little things like these that are making Twilio more attractive, and more likely to be used in an upcoming project. Let’s hope some of the big VoiceXML hosting providers sit up and take notice, and start to offer similar features to developers.

Automatic for the People

Like lots of others, I think it’s worth noting San Francisco’s innovative use of Twitter. San Francisco residents can now use Twitter to send a message to an operator at the City’s 311 call center and receive a Tweet back.

This is exactly the type of interactive use of Twitter by governments I had in mind when I wrote about Twitter 2.0 for the public sector a few months ago. Still, now that I see an actual use of Twitter by a government to interact with citizens, I’m wondering if this approach can be improved upon, to make it more efficient for governments while staying user-friendly for citizens.

While San Francisco’s use of Twitter is indeed convenient for citizens, it has many of the same cost implications for government as a traditional phone call. Tweets to 311 operators must still be processed “manually” – someone has to read the content of a Tweet (even if it’s prefiltered based on message content) and assign a follow-up action, or respond directly if it’s been assigned to them. And even though San Francisco is reportedly using the very interesting Twitter-CRM product CoTweet to make this process more efficient, I wonder if there isn’t a better way to do this.

I think this would be a perfect scenario to deploy an interactive IM/SMS BOT. Citizens could interact with an application to report common 311 service requests – potholes, traffic-light outages, abandoned vehicles, etc. As long as certain keywords or hashtags are used in the message content (something that probably needs to happen if Twitter is used anyway), it should be pretty easy to process these requests reliably and automatically. Moreover, using an IM/SMS BOT would allow the process to have multiple steps, with the application and the citizen exchanging information in turn.

For example, a citizen using a BOT to report a traffic signal outage could receive an automated response asking if there are any noticeable power outages in the vicinity, or telling them to send a follow-up message when the repair crew arrives (to audit response times). The possibilities are enormous.

Requests that could not be processed automatically could be routed to a live operator and handled the traditional way. This would allocate the finite resource of 311 operators more efficiently – humans would only step in when a request could not be handled automatically.
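Here’s a very rough sketch of what that keyword-based triage could look like – the hashtags, categories and canned replies are entirely made up, and a real system would open tickets in a CRM rather than just returning a string:

```python
# Hypothetical keyword/hashtag triage for incoming 311 messages: match a
# known request type and reply automatically, otherwise escalate to a live
# operator. Keywords, categories, and replies are illustrative only.
KEYWORDS = {
    "#pothole": "Pothole repair",
    "pothole": "Pothole repair",
    "#signal": "Traffic signal outage",
    "traffic light": "Traffic signal outage",
    "#abandoned": "Abandoned vehicle",
    "abandoned vehicle": "Abandoned vehicle",
}

def triage(message: str) -> str:
    text = message.lower()
    for keyword, category in KEYWORDS.items():
        if keyword in text:
            # Matched a known request type: open a ticket automatically and
            # ask the follow-up question described above.
            return f"Ticket opened: {category}. Reply DONE when the crew arrives."
    # No keyword matched: hand the message off to a human 311 operator.
    return "Thanks! An operator will review your request shortly."

print(triage("#signal out at 5th and Mission, no power outage nearby"))
print(triage("There is a weird smell on my street"))
```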

Here’s hoping that someday very soon, we’ll see a government go “automatic for the people” with 311 service requests.

VoiceGlue 0.10 Released

For those that don’t know, Voiceglue is an open source project that links Asterisk (the open source PBX) with OpenVXI (an open source VoiceXML platform currently under the stewardship of Vocalocity). Voiceglue makes it possible for Asterisk users to deploy a completely open source VoiceXML platform for building IVRs and other useful applications.

The Voiceglue project recently announced the release of version 0.10 – there are several new features in this release:

  • Improved audio caching
  • Cookie passing on audio fetching
  • Handles maxage and audiomaxage of 0 properly
  • Uses HTTP Content-Type for audio content when available
  • Defaults to not requiring access-control directive in returned data from data tag
  • New transfer method, new config file param “blind_xfer_method”
  • Auto-install support for Ubuntu 9.04 (Jaunty)

I’m especially interested in the last item – I’ve been meaning to set up a VM to play around with Ubuntu 9.04 for a few weeks now, and this is yet another good reason for doing so.

The new version of Voiceglue can be downloaded here.