Ever since I got the semantic bug back in the summer of 2008, I’ve been making my annual pilgrimage to San Francisco to get the latest and greatest on all things semantics with about 1200 of my fellow semtech nerds.
And it's always been a productive trip. Over the years at the annual confab, I got my first under-the-hood look at Google Rich Snippets, Facebook Open Graph, Siri, Watson, schema.org, and WikiData.
But wait, that should make for four, right? Yep! Because this is the first year that I attended two SemTechs – the original San Francisco June gathering, and for the first time, SemTech right here in my stomping grounds – New York City!
And the cherry on top? I was honored to co-present the Keynote with Andrew Nicklin – NYC’s “Open Data Tsar” and Director of R&D at NYC DoITT. ‘Twas a great joint presentation if I may say so myself.
Ontodia would not have been possible were it not for NYC's ambitious Digital Initiatives – NYCBigApps, the NYC Open Data Portal, its landmark Open Data law, and the support of various City agencies and institutions – NYCEDC, DoITT, NYC Digital and NYU-Poly. So it was only proper to kick off with a great overview of the Open Data program and its Digital Roadmap courtesy of Andrew.
And after Andrew’s setup, it was quite easy to make the case for Linked Open Data in NYC. Here’s the slideshare of my presentation.
This being the first year in NYC, the crowd was a bit smaller – about 300 or so with two tracks – but it was still packed to the gills with great content: the latest with IBM's Watson; rNews; using semtech to catch insider trading; how it's used to accelerate drug development; how it's being deployed on smartphones for mHealth; and how supercomputer maker Cray is even getting into the game with its YarcData Big Data Real-time Graph Analytics Appliance.
As the conference wrapped up, I was also invited by Tony and Eric to join them on the conference wrap-up panel along with Steve Hamby of Orbis Technologies and Elisa Kendall of Thematix, and we were asked to answer three questions:
- What were your personal takeaways from the conference?
- How has your thinking about semantic technologies evolved over time?
- A prediction for the next year.
It was a very interesting conversation with the audience chiming in. Here are my answers:
What were your personal takeaways from the conference?
And to me, this Data Tsunami is just the thing for Semantic Technology. Because at the end of the day, managing all this Data requires both machines and humans to collaborate. And that should be done by leveraging what “wetware” (our brains) and “hardware/software” (machines) are best suited for.
And at the intersection of these Data Movements is a ton of metadata that we can use to create Linked Data.
How has your thinking about semantic technologies evolved over time?
I used to head up the Knowledge Engineering practice of a small consultancy and led several engagements with major enterprise customers where we deployed or piloted Semantic Technologies. And for the most part, it was really hard! Not the technology, mind you – we tried to minimize risk by scoping our projects appropriately, but the expectation levels were often unrealistic and the appetite for experimentation very low.
Each organization has its idiosyncrasies, and these projects are often sold as a way to control the Chaos of Data that exists inside every major organization. Modeling this complexity requires several iterations, and sometimes the sponsors just didn't have the stomach or the patience to go through the process. Add to that the fact that most IT departments expect more of an OTS (Off-the-Shelf) mentality with a quick pay-off.
And that's the primary reason why I decided to jump ship and focus my efforts on the Consumer space. Eventually, with the Consumerization of IT, Ontodia will get back into the Enterprise through the backdoor.
Here’s Steve Jobs explaining why Apple didn’t pursue the Enterprise. [media id=2 width=640 height=385]
A prediction for the next year.
Before HTML, there was SGML. True to its name, it was a Standard Generalized Markup Language that is very large, powerful, and complex. And that very complexity is the reason why it's not in general use beyond some exacting industrial applications and standards bodies that use it as a metalanguage to describe other languages.
XML is a lightweight cut-down version of SGML which keeps enough of its functionality to make it useful but removes all the optional features which make SGML too complex to program for in a Web environment.
HTML is just one of the SGML or XML applications – the one most frequently used on the Web, and the simplest of the three.
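The "lightweight" point above is easy to see in code. Here's a minimal sketch (my own illustration, not from the talk) using Python's standard-library XML parser on a hypothetical snippet – the kind of few-line parsing that XML's cut-down design makes possible, and that full SGML tooling never made this easy:

```python
# Parsing a small XML document with nothing but the standard library.
# The <session> snippet below is a made-up example for illustration.
import xml.etree.ElementTree as ET

doc = """
<session track="open-data">
  <title>Linked Open Data in NYC</title>
  <speaker>Andrew Nicklin</speaker>
</session>
"""

root = ET.fromstring(doc)              # parse the string into an element tree
print(root.find("title").text)         # -> Linked Open Data in NYC
print(root.get("track"))               # -> open-data
print(root.find("speaker").text)       # -> Andrew Nicklin
```

Because XML dropped SGML's optional features (tag omission, markup minimization, and the like), a parser like this fits in any language's standard library – which is exactly why it won on the Web.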
And that very simplicity is what made the Web what it is today. And to me, the threshold, the acid test of when the Semantic Web will become reality is when we achieve that level of simplicity; when a high school kid can put together a semantic application.
And you know what? We're getting there! The tools are much more robust, and there are now frameworks that hide the complexity behind familiar Web 2.0 interfaces – one of which is Pediacities.
And with our launch targeted for the first half of 2013, we're hoping that Ontodia will help achieve that goal – what I'd like to call Web 2.5, as we make our way to the Semantic Web.