Machine Learning

by rrusczyk, Jun 29, 2009, 5:56 PM

After my post about the Netflix prize, AoPSer orl sent me a whole string of AI articles. Here they are for the AI fans:

- Forbes' article series "The AI Report" at http://www.forbes.com/2009/06/22/singularity-robots-computers-opinions-contributors-artificial-intelligence-09_land.html.

- Machines that can outwit the smartest brains: http://www.ft.com/cms/s/2/e62201f2-5ee2-11de-91ad-00144feabdc0.html

- Intelligent machines? Think again: http://www.ft.com/cms/s/0/8199fe78-91bb-11dd-b5cd-0000779fd18c.html

- Solving AI. We need a new language for artificial intelligence: http://www.technologyreview.com/computing/22128/?a=f

- Rise of the Robots--The Future of Artificial Intelligence: http://www.scientificamerican.com/article.cfm?id=rise-of-the-robots&sc=WR_20090325

- 'The future is going to be very exciting': http://www.guardian.co.uk/technology/2009/may/02/google-univeristy-ray-kurzweil-artificial-intelligence

As with any futurology, many of these articles must be taken with a very large grain of salt.

5 Comments

I am surprised that the second article you listed does not mention TD-Gammon, which has a fairly interesting history. It was developed in the late 1980s and early 1990s using a temporal-difference algorithm and achieved (essentially) world-class play. In addition, some of the moves it played went against the traditional view, which led to new insights into backgammon strategy.
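Roughly, the core of that family is the temporal-difference value update; here is a minimal tabular TD(0) sketch (a toy stand-in, not TD-Gammon's actual TD(λ)-trained neural network, and the self-play loop below is a hypothetical placeholder):

```python
# Minimal tabular TD(0) sketch -- a toy stand-in for the temporal-difference
# idea behind TD-Gammon (which actually trained a neural network with TD(lambda)).
from collections import defaultdict

ALPHA, GAMMA = 0.1, 1.0      # learning rate, discount factor
value = defaultdict(float)   # V(s), zero for states never seen before

def td0_update(state, reward, next_state, terminal):
    """Move V(state) toward the one-step bootstrapped target."""
    target = reward if terminal else reward + GAMMA * value[next_state]
    value[state] += ALPHA * (target - value[state])

# Hypothetical usage with a placeholder self-play loop; `env` with
# reset()/step() is assumed, and rewards arrive only when a game ends.
# state = env.reset()
# done = False
# while not done:
#     next_state, reward, done = env.step(choose_move(state))
#     td0_update(state, reward, next_state, done)
#     state = next_state
```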

There's a nice article here:
http://www.research.ibm.com/massive/tdl.html

by haoye, Jun 29, 2009, 7:15 PM

I know Arthur Morse who's in the AI report! He sends me emails! I met him when I used to live in Los Alamos and he taught me lots about AI, fractals and other stuff :)

by Poincare, Jun 29, 2009, 10:37 PM

Great links!

I'll reiterate my question from the previous post:

In which year, if ever, will an AI successfully read and complete a problem-solving test: the AMC 8? 10? 12? AIME? USAMO? Putnam?

Is there anything about these tests that would make them especially difficult for an AI to solve?

by djcordeiro, Jun 29, 2009, 11:17 PM

I always find it interesting that we try to do what computers are good at (computation, storage) while trying to make computers do what we are good at (pattern recognition followed by inference and decision making).

There are many names for these logic-based, probabilistic/statistical, and other numerical optimization approaches, ranging from AI, decision-making, and signal processing to machine learning, statistics, econometrics, operations research, etc.

The problem is that it only appears to be AI: almost all the time you fix some input data format, run a numerical optimization procedure, and make predictions, either directly for discriminative approaches or after estimating model parameters. This can be seen as a kind of "hard-coding".
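To make that pipeline concrete, here is a minimal sketch in the discriminative style (synthetic data, illustrative only, not any particular system): fix an input format, run a numerical optimization, and predict directly.

```python
# Minimal sketch of the "fix a data format, optimize, predict" pipeline:
# logistic regression trained by plain gradient descent on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))              # fixed input format: 2-D feature vectors
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # synthetic labels

w, b = np.zeros(2), 0.0
for _ in range(500):                       # numerical optimization (gradient descent)
    p = 1.0 / (1.0 + np.exp(-(X @ w + b))) # predicted probabilities
    w -= 0.5 * (X.T @ (p - y) / len(y))
    b -= 0.5 * np.mean(p - y)

# Direct prediction -- no separate generative model estimation step.
pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
print("training accuracy:", np.mean(pred == y))
```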

Just look at the DARPA autonomous car competition. Here Stanford's Sebastian Thrun discusses his approach. These cars carry a massive array of sensors and measurement devices, and still, if they encounter unseen environments, they can quickly be led to very sub-optimal decisions.

A similar thing holds for chat bots such as Eliza. I think you just need to feed in a huge number of specific phrases to make one pass the Turing test (a judge being unable to tell the difference between a machine and a human) for most humans.
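Something like this toy pattern-matcher captures the idea (nowhere near the real Eliza; the patterns and replies are made up, and real systems just scale the list up enormously):

```python
# Toy Eliza-style chat bot: a lookup table of patterns and canned replies.
import re

RULES = [
    (re.compile(r"\bi am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.I), "How long have you felt {0}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is that really the reason?"),
]

def reply(message):
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(*match.groups())
    return "Tell me more."   # fallback when no stored phrase matches

print(reply("I am worried about the Turing test"))
```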

All these scenarios share a common factor: wildly heterogeneous "things", i.e. input data formats, environment types (different road types, shading, reflections, etc.), and discussion topics. The obvious brute-force solution is to use databases of examples and handle the truly unseen ones by similarity to stored ones. It is similar to the situation with chess programs right now: even the top players can no longer consistently beat the top programs. The programs did not suddenly get "smart"; rather, they make use of a huge library of possible moves. People also draw on experience, but they are more inclined to apply heuristics because of resource limitations.
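In code, that brute-force "database of examples" strategy is essentially nearest-neighbor matching; a tiny sketch on made-up vectors and labels:

```python
# Brute-force "database of examples" approach: answer an unseen query by
# copying the label of the most similar stored example (1-nearest-neighbor).
import numpy as np

database = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])  # stored examples
labels   = ["calm road", "calm road", "rough terrain"]      # made-up labels

def most_similar(query):
    distances = np.linalg.norm(database - query, axis=1)    # similarity = distance
    return labels[int(np.argmin(distances))]

print(most_similar(np.array([4.2, 4.8])))   # -> "rough terrain"
```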

Another problem is that model parameter estimates are very sensitive. For example, running microarray experiments on patients in hospitals A and B may yield very different parameter estimates. Thus a very hot branch of machine learning right now is transfer learning (multi-task learning), which studies how estimates carry over across different datasets. In this sense it can also be interpreted as online learning for non-stationary distributions. Here is a video presentation on this topic by Massi Pontil from University College London.
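A minimal sketch of the multi-task idea, with synthetic datasets standing in for hospitals A and B: fit each task's parameters while pulling them toward a shared mean, so the estimates partially carry over across datasets. This is just one simple regularization-style formulation, not Pontil's specific method.

```python
# Toy multi-task (transfer) learning: two linear regression tasks whose
# weight vectors are coupled to a shared mean, so information transfers.
import numpy as np

rng = np.random.default_rng(1)
true_w = {"A": np.array([1.0, 2.0]), "B": np.array([1.2, 1.8])}  # similar tasks
data = {}
for t, w in true_w.items():
    X = rng.normal(size=(30, 2))
    data[t] = (X, X @ w + 0.1 * rng.normal(size=30))

W = {t: np.zeros(2) for t in data}   # per-task weight estimates
lam, lr = 1.0, 0.05                  # coupling strength, learning rate
for _ in range(2000):
    shared = np.mean(list(W.values()), axis=0)
    for t, (X, y) in data.items():
        grad = X.T @ (X @ W[t] - y) / len(y) + lam * (W[t] - shared)
        W[t] = W[t] - lr * grad

print({t: np.round(w, 2) for t, w in W.items()})
```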

by orl, Jun 30, 2009, 12:27 AM

Some time ago I read this article: Robot scientist makes discoveries without human help.

There have been a few attempts at this in the past, such as expert systems: experts feed knowledge into a system so that it can be retrieved and used by non-experts. But of course it never quite worked out. Most of the time the experts have some more urgent, higher-priority task with a greater incentive than sharing knowledge; somebody needs to maintain the system to keep it consistent and up-to-date; it needs to be user-friendly enough that people actually want to use it; and so on.

A few decades back people started to develop automated theorem-proving tools such as Isabelle. But I think that, much like the robot described above, these tools are mostly used to automate mundane, repetitive but well-structured cases. This is certainly a very efficient, rapid, and reliable way to work through many cases: for the robot, experimenting with yeast cells; for theorem provers, checking the many cases in problems such as the four color theorem, or commercial circuit design/verification.
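As a tiny illustration of that kind of automation, a decision procedure can discharge a routine goal with no hand-written proof steps (sketched here in Lean 4 rather than Isabelle, assuming a recent toolchain):

```lean
-- A routine, well-structured arithmetic goal discharged automatically by the
-- `omega` linear-arithmetic decision procedure (recent Lean 4 assumed;
-- Isabelle offers analogous automation such as `arith`).
example (a b : Nat) : a + b + a = 2 * a + b := by omega
```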


For example, many recent Fields Medal contributions went to people who applied a technique from field A to field B, possibly in an adapted form. Gian-Carlo Rota once said: "every mathematician has only a few tricks."

But to make non-trivial progress you need creativity. That is why many mathematicians consider math an art, and, not surprisingly, many of them also enjoy poetry, music, etc.

But then again, how much creativity do you need? And how much is merely technical work that could conceivably be done by robots and/or software? "Genius is one percent inspiration, ninety-nine percent perspiration." (Thomas Edison)

An interesting way to think about creativity is to view it as an intermediate step between the simple, homogeneous states of class I cellular automata and complex class IV structures such as Conway's Game of Life; see Stephen Wolfram's 1985 article "Two-Dimensional Cellular Automata".
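For concreteness, one step of Conway's Game of Life (the class IV example just mentioned) is nothing but a local counting rule applied everywhere on the grid; a small sketch:

```python
# One update step of Conway's Game of Life on a toroidal grid: each cell
# lives or dies based only on its eight neighbors -- a simple local rule
# that nevertheless produces Wolfram's "class IV" complex behavior.
import numpy as np

def life_step(grid):
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

# A glider on a 6x6 grid, advanced a few steps.
grid = np.zeros((6, 6), dtype=int)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1
for _ in range(4):
    grid = life_step(grid)
print(grid)
```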

by orl, Jun 30, 2009, 12:43 AM
