Computer learns like a human and (sort of) beats the Turing Test
There is a fundamental difference between the way computers learn and
the way humans learn. Humans can see one example and intuit what that
object or symbol might be used for and quickly identify similar things. A
computer can only arrive at the same conclusions after being fed
thousands and thousands of examples. This is usually referred to as
"machine learning."
That may, however, be about to change.
Scientists at New York University have figured out a way to not only
mimic how humans make those mental leaps, but to have computers recreate
simple symbols and drawings in such a way that they're almost
indistinguishable from those created by humans.
In a paper published this week in Science, researchers
describe how they built a “Bayesian Program Learning (BPL)” algorithm,
which turns concepts into simple computer programs and allows computers
to learn a large class of visual concepts from a single example.
In a digital alphabet, the letter “A” would be represented by code.
However, instead of a programmer writing the code, the computer
generates the code to represent the letter and then produces variations
based on that first letter.
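To make that idea concrete, here is a minimal Python sketch of the general approach. It is not the authors' actual BPL model, which treats characters as probabilistic compositions of strokes and is far more sophisticated; here a concept is simply stored as a tiny program, a list of strokes, and new examples are produced by re-running that program with a bit of random "motor noise." The Stroke and Concept classes and the letter_a example are purely illustrative names, not anything from the paper.

```python
# Illustrative sketch only: a "concept" as a small program that can redraw itself.
from __future__ import annotations
import random
from dataclasses import dataclass


@dataclass
class Stroke:
    # A stroke is a start point plus a sequence of (dx, dy) pen movements.
    start: tuple[float, float]
    moves: list[tuple[float, float]]


@dataclass
class Concept:
    # A concept (e.g. the letter "A") is just a program: a list of strokes
    # that, when executed, draws the character.
    strokes: list[Stroke]

    def generate(self, jitter: float = 0.05) -> list[list[tuple[float, float]]]:
        """Run the program to produce one new example, adding motor noise."""
        example = []
        for stroke in self.strokes:
            x, y = stroke.start
            points = [(x, y)]
            for dx, dy in stroke.moves:
                x += dx + random.gauss(0, jitter)
                y += dy + random.gauss(0, jitter)
                points.append((x, y))
            example.append(points)
        return example


# A crude "A": two slanted legs and a crossbar (hypothetical coordinates).
letter_a = Concept(strokes=[
    Stroke(start=(0.0, 0.0), moves=[(0.5, 1.0)]),   # left leg
    Stroke(start=(1.0, 0.0), moves=[(-0.5, 1.0)]),  # right leg
    Stroke(start=(0.25, 0.5), moves=[(0.5, 0.0)]),  # crossbar
])

# Each call yields a slightly different, human-looking variation.
for i in range(3):
    print(f"variation {i}:", letter_a.generate())
```

Re-running generate() a few times yields slightly different renderings of the same character, which is the flavor of "producing variations" the researchers describe, though their model also infers the program itself from a single example rather than having it hand-coded.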
Researchers said the model uses knowledge from previous concepts to
learn. For example, if the computer knows the Latin alphabet, that can
help it learn the similar Greek alphabet.
Brenden Lake is a Moore-Sloan Data Science Fellow at New York
University and the paper's lead author. He says the breakthrough came
when researchers noticed that, “If you ask a handful of people to draw a
novel character, there is remarkable consistency in the way people
draw.... They do not see characters as just static visual objects.
Instead people see richer structure... that describes how to efficiently
produce new examples of the concept.”
"We aimed to develop an algorithm with the same capability and then compare it with people.”
During
a presentation on their work, the scientists said they've not only
built a machine-learning program, “but what the program learns - its
concepts - are also programs. We think that is true for humans too:
your concepts are programs, or parts of programs,” said Joshua
Tenenbaum of the Department of Brain and Cognitive Sciences and the Center
for Brains, Minds and Machines at MIT.
More striking still, when the computer was asked to create fresh examples
based on the original concept and those images were placed alongside
examples drawn by humans,
judges often could not tell whether a person or a computer had created them.
In other words, the computational model passed a rough form of the Turing Test.
Legendary 20th-century mathematician Alan Turing (he broke the Enigma
code) posited that by the turn of the 21st century, an average interrogator
would have no more than a 70% chance of correctly telling whether they were
communicating with a machine or another human after five minutes of
questioning.
According to the study, “this approach can perform one-shot learning
in classification tasks at human-level accuracy and fool most judges in
visual Turing tests of its more creative abilities. For each visual
Turing test, fewer than 25% of judges [the paper notes that there were
35] performed significantly better than chance.”
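For a sense of what "significantly better than chance" means here: each judgement is a two-way choice, so chance accuracy is 50%, and a judge only counts as beating chance if their hit rate would be unlikely under a coin-flip model. The sketch below works that threshold out with a one-sided binomial test; the number of judgements per judge is an assumed figure for illustration, since the article does not report it.

```python
# Hedged illustration of "significantly better than chance" in a visual
# Turing test; the trial count per judge is an assumption, not a paper figure.
from math import comb


def binomial_p_value(successes: int, trials: int, p: float = 0.5) -> float:
    """One-sided probability of getting at least `successes` correct by chance."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(successes, trials + 1))


trials = 49  # hypothetical number of human-vs-machine judgements per judge
for correct in range(trials // 2 + 1, trials + 1):
    if binomial_p_value(correct, trials) < 0.05:
        print(f"A judge needs at least {correct}/{trials} correct "
              f"({correct / trials:.0%}) to beat chance at p < 0.05.")
        break
```

With 49 hypothetical judgements per judge, that works out to needing roughly 63% accuracy before a judge's performance stops looking like guessing.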
"Our results show that by reverse-engineering how people think about a
problem, we can develop better algorithms," Lake said in a release.
This machine-learning shortcut could have wide-reaching implications.
It could shorten the time it takes for computers to learn new
languages and recognize images, and help systems generate new, usable
designs based on existing ones without human input.
The research could also have significant impact on future artificial
intelligence innovation, including robotics. A robot that can make
logical leaps about things might someday be more adept at human-like
decision-making. Which is either a thrilling or terrifying thought.
Not everyone agrees that this is a breakthrough or even that the system beat the Turing test.
Allen Institute for Artificial Intelligence CEO Oren Etzioni told Mashable
that "They didn’t beat the Turing test any more than a calculator does
by out-multiplying a human," and the work is best classified as a
"scientific contribution."
"While the authors pose a fascinating research question, many
researchers have used related methods to achieve strong results. Still,
the paper is an invaluable reminder that we need methods that can
generalize from small numbers of examples both to model human abilities
and to move AI forward," said Etzioni.
Even if you do buy into the idea that this is a significant
advancement in the field of AI, applications for the work are years, if
not decades, away; even by the researchers' own measure, the program
still doesn’t see the same level of structural detail as humans. “It
lacks explicit knowledge of parallel lines, symmetry, optional elements
such as cross bars in '7's, and connections between the ends of strokes
and other strokes,” wrote the scientists.
Source: Mashable