
Contemporary Artificial Intelligence (state of the art around AD 2000)
After World War II, engineering and mathematics departments began to explore the new science of computing. Before long, professors from a variety of disciplines realized that, with computers, it might be possible to build a machine that could think and process information like a human.

In the 1950s, scholars such as Herbert Simon and John McCarthy began to investigate subjects like automatic problem solving and computerizing human logic. After the historic Dartmouth Conference of 1956, where the term "artificial intelligence" was introduced, research labs were built around the country, with the largest at MIT and Stanford.
After a surge of activity from the 1970s through the mid-1980s, the "AI Winter" set in, leaving only a few main players to search for new solutions. While these labs may share an academic provenance, each approaches the problems of machine intelligence in unique, sometimes radically different ways.
While artificial intelligence got its start in academia, the massive explosion of computer use has made it clear to software and hardware companies that the current standard interface - with its windows, mice, icons, and cryptic commands - doesn't work the way humans do; otherwise, there would be little need for tech support call centers with staffs of thousands. A great deal of research has gone into building technology that can parse natural, everyday language - with the hope that, one day, computer users can simply explain what they want to their machines.