August 01, 2009

Google Faculty Summit

I just returned from the Google Faculty Summit, a gathering of ~100 professors (mostly computer scientists) in Mountain View from universities across North America, and a few from South America as well. Google holds this event annually, as part of their university relations effort, fostering recruiting channels and research collaborations between the company and academia.

One of the highlights of the 1.5-day program was an informal talk by Larry Page, Google co-founder (and U.Michigan alumnus). Larry discussed a wide range of topics, devoting surprising attention to artificial intelligence, which he argued is under-emphasized in computer science research these days. He expressed his opinion that much current AI research lacks the ambition to tackle the really fundamental problem, which he suggested would ultimately be solved with simple ideas brought to scale through a huge engineering effort. (I suspect that most AI researchers would broadly agree with this, though they might quarrel with sweeping characterizations of the field.) Larry also asserted that Google's algorithms for placing ads on content pages (a fundamental operation of their AdSense service) came out of their early efforts on more general AI text-understanding problems. On that basis, he credited AI research with half of Google's current revenue.

Visiting innovative companies is of course a worthwhile way for even the busiest academics to spend their time. In addition to learning about emerging technologies and making valuable connections, we get a glimpse of which problems these companies think are important, and where the real technical challenges lie. Reflecting on all the impressive work from Google we saw presented at this meeting (Google Earth, Flu Trends, Book Search, Statistical Machine Translation, Wave, just to name a few), however, it occurred to me that once Google recognizes that a problem is important and ripe for innovation, they are probably well on their way to producing great solutions. Perhaps a better strategy for academic researchers like myself is to try to diagnose where Google (and other companies) may have a blind spot--problems that are important and solvable, but that the industry players just don't see yet, or perhaps don't see how solving them would benefit their business. Regardless, it's clearly advantageous to be aware of what the capable people at Google and other cutting-edge technology companies are up to.

Posted by wellman at 07:55 AM

July 31, 2009

AAAI Asilomar Meeting

John Markoff's NYT article "Scientists Worry Machines May Outsmart Man" touched off a mini-firestorm this week. The article refers to a meeting of AI scientists held at Asilomar (a conference center near Monterey) in February to discuss societal implications of future AI technology. The provocative headline may have had something to do with the reverberations in other media outlets, and on the blogs. The most egregious example I saw was an entry by Dan Smith on popsci.com:

The long-awaited robot-led holocaust may happen any day now. That seems to be the finding of a secret conference of the world's top computer scientists, roboticists, and artificial intelligence researchers.

As it happens, I participated in this Asilomar meeting (which was not at all secret), and can assure anybody reading this that there was no finding that a "robot-led holocaust" is imminent. (And who exactly has been awaiting this?)

Ironically, one of the goals of this meeting was for the major AI professional organization to develop a stance for engaging the broader public in a reasoned discussion about societal implications of AI technology. Certainly artificial intelligence and other means of expanding computational scope in our world have transformative effects on the economy and society, and preparing for these is only responsible behavior. Many AI scientists are uncomfortable with the existing public dialog, which is to a large extent driven by science fiction writers and singularity prophets. Accordingly, the discussion tends to emphasize broad utopian or dystopian visions, rather than nearer-term practical implications of technology. As a result, the "debate" often takes on a hysterical (and therefore non-constructive) tone.

As current President of AAAI, my colleague Martha Pollack (Dean of the School of Information at U.Michigan) was called on to represent the AI field in this brouhaha, and pressed valiantly to counter the panic reflex. A telling episode was her appearance on Fox News, where the host framed the discussion with: "I'm scared. Tell me why I shouldn't be." Martha's appearance was a success in that by the end he said he was no longer frightened. But overall it is pretty depressing that the public discussion is at the level of whether or not we should panic. (This is not at all to pick on Fox News. They were "fair and balanced" in this instance, and completely typical of how the media handle AI.)

Posted by wellman at 10:27 AM