View Full Version : Artificial brain '10 years away'

Jason California
07-22-2009, 09:44 PM
Artificial brain '10 years away'

By Jonathan Fildes
Technology reporter, BBC News, Oxford

http://newsimg.bbc.co.uk/media/images/46101000/jpg/_46101181_-5.jpg Professor Markram said he would send a hologram to talk at TED in 10 years

A detailed, functional artificial human brain can be built within the next 10 years, a leading scientist has claimed.
Henry Markram, director of the Blue Brain Project, has already simulated elements of a rat brain.
He told the TED Global conference in Oxford that a synthetic human brain would be of particular use finding treatments for mental illnesses.
Around two billion people are thought to suffer some kind of brain impairment, he said.
"It is not impossible to build a human brain and we can do it in 10 years," he said.
"And if we do succeed, we will send a hologram to TED to talk."
'Shared fabric'
The Blue Brain project was launched in 2005 and aims to reverse engineer the mammalian brain from laboratory data.
In particular, his team has focused on the neocortical column - repetitive units of the mammalian brain known as the neocortex.
http://newsimg.bbc.co.uk/media/images/45691000/jpg/_45691657_p360364-nerve_cell_growth-spl.jpg The team are trying to reverse engineer the brain

"It's a new brain," he explained. "The mammals needed it because they had to cope with parenthood, social interactions complex cognitive functions.
"It was so successful an evolution from mouse to man it expanded about a thousand fold in terms of the numbers of units to produce this almost frightening organ."
And that evolution continues, he said. "It is evolving at an enormous speed."
Over the last 15 years, Professor Markram and his team have picked apart the structure of the neocortical column.
"It's a bit like going and cataloguing a bit of the rainforest - how may trees does it have, what shape are the trees, how many of each type of tree do we have, what is the position of the trees," he said.
"But it is a bit more than cataloguing because you have to describe and discover all the rules of communication, the rules of connectivity."
The project now has a software model of "tens of thousands" of neurons - each one of which is different - which has allowed them to digitally construct an artificial neocortical column.
Although each neuron is unique, the team has found that the circuitry of different brains shares common patterns.
"Even though your brain may be smaller, bigger, may have different morphologies of neurons - we do actually share the same fabric," he said.
"And we think this is species specific, which could explain why we can't communicate across species."
World view
To make the model come alive, the team feeds the models and a few algorithms into a supercomputer.
"You need one laptop to do all the calculations for one neuron," he said. "So you need ten thousand laptops."
http://newsimg.bbc.co.uk/media/images/45690000/jpg/_45690145_f0013613-the_human_brain-spl.jpg The research could give insights into brain disease

Instead, he uses an IBM Blue Gene machine with 10,000 processors.
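The "one laptop per neuron" figure reflects the cost of integrating a detailed model at every time step. As a rough illustration of the kind of per-neuron arithmetic involved, here is a much simpler leaky integrate-and-fire neuron in Python; the parameters are generic textbook values, not anything from the Blue Brain Project itself.

```python
# A toy leaky integrate-and-fire neuron. Blue Brain's neurons are far
# more detailed (full morphologies, many ion channels), but this shows
# the shape of the computation each processor repeats every time step.
# All parameter values below are illustrative defaults.

def simulate_lif(input_current, dt=0.001, tau=0.02, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-65.0, r_m=10.0):
    """Integrate membrane voltage over time; return spike times in seconds."""
    v = v_rest
    spikes = []
    for step, i_ext in enumerate(input_current):
        # Leaky integration: voltage decays toward rest, driven by input.
        dv = (-(v - v_rest) + r_m * i_ext) * (dt / tau)
        v += dv
        if v >= v_thresh:          # threshold crossed: emit a spike
            spikes.append(step * dt)
            v = v_reset            # and reset the membrane
    return spikes
```

With a strong constant input the voltage repeatedly climbs to threshold and resets, producing a regular spike train; with no input it simply sits at rest.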
Simulations have started to give the researchers clues about how the brain works.
For example, they can show the brain a picture - say, of a flower - and follow the electrical activity in the machine.
"You excite the system and it actually creates its own representation," he said.
Ultimately, the aim would be to extract that representation and project it so that researchers could see directly how a brain perceives the world.
But as well as advancing neuroscience and philosophy, the Blue Brain project has other practical applications.
For example, by pooling all the world's neuroscience data on animals - to create a "Noah's Ark" - researchers may be able to build animal models.
"We cannot keep on doing animal experiments forever," said Professor Markram.
It may also give researchers new insights into diseases of the brain.
"There are two billion people on the planet affected by mental disorder," he told the audience.
The project may give insights into new treatments, he said.
The TED Global conference runs from 21 to 24 July in Oxford, UK.

07-22-2009, 09:57 PM
Hello Skynet.

Jason California
07-22-2009, 10:05 PM
Hello Skynet.

and on a related note,

Gadget Lab Hardware News and Reviews (http://www.wired.com/gadgetlab)
Robo-Ethicists Want to Revamp Asimov’s 3 Laws

By Priya Ganapati (priya_ganapati@wired.com)
July 22, 2009 | 12:00 am | Categories: R&D and Inventions (http://www.wired.com/gadgetlab/category/rd_and_inventions/)


Two years ago, a military robot used by the South African army killed nine soldiers (http://www.wired.com/dangerroom/2007/10/robot-cannon-ki/) after a malfunction. Earlier this year, a Swedish factory was fined after a robot machine injured one of the workers (though part of the blame was assigned to the worker). Robots have been found guilty of smaller offenses too, such as responding incorrectly to a request.
So how do you prevent problems like this from happening? Stop making psychopathic robots, say robot experts.
“If you build artificial intelligence but don’t think about its moral sense or create a conscious sense that feels regret for doing something wrong, then technically it is a psychopath,” says Josh Hall, a scientist who wrote the book Beyond AI: Creating the Conscience of a Machine.
For years, science fiction author Isaac Asimov's Three Laws of Robotics were regarded as sufficient for robotics enthusiasts. The laws, as first laid out in the short story "Runaround (http://www.rci.rutgers.edu/%7Ecfs/472_html/Intro/NYT_Intro/History/Runaround.html)," were simple: A robot may not injure a human being or allow one to come to harm; a robot must obey orders given by human beings; and a robot must protect its own existence. Each of the laws takes precedence over the ones following it, so that under Asimov's rules, a robot cannot be ordered to kill a human, and it must obey orders even if that would result in its own destruction.
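The precedence ordering described above is essentially a prioritized rule system, which can be sketched in a few lines of Python. Everything here - the predicates, the action dictionaries - is a hypothetical illustration of the idea, not any real robot-control API.

```python
# Each "law" is a predicate that flags an action as violating it.
# The action dictionaries and field names are invented for illustration.

def violates_first_law(action):
    # A robot may not injure a human being, or through inaction
    # allow a human being to come to harm.
    return action.get("harms_human", False)

def violates_second_law(action):
    # A robot must obey orders given by human beings.
    return action.get("disobeys_order", False)

def violates_third_law(action):
    # A robot must protect its own existence.
    return action.get("self_destructive", False)

# Listed in priority order: earlier laws take precedence over later ones.
LAWS = [violates_first_law, violates_second_law, violates_third_law]

def choose_action(candidates):
    """Pick the candidate with the lexicographically smallest violation
    profile: first minimize First Law violations, then Second, then Third.
    The robot therefore disobeys an order (Second Law) before it harms a
    human (First Law), and sacrifices itself (Third Law) before it
    disobeys."""
    return min(candidates, key=lambda a: tuple(law(a) for law in LAWS))
```

For instance, given a choice between an action that harms a human and one that merely disobeys an order, the selector returns the disobedient one - exactly the precedence Asimov described.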
But as robots have become more sophisticated and more integrated into human lives, Asimov's laws are just too simplistic, says Chien Hsun Chen, coauthor of a paper published in the International Journal of Social Robotics last month. The paper has sparked a discussion among robot experts who say it is time for humans to get to work on these ethical dilemmas.
Accordingly, robo-ethicists want to develop a set of guidelines that could outline how to punish a robot, decide who regulates them and even create a "legal machine language" that could help police the next generation of intelligent automated devices.
Even if robots are not entirely autonomous, there needs to be a clear path of responsibility laid out for their actions, says Leila Katayama, research scientist at open-source robotics developer Willow Garage. “We have to know who takes credit when the system does well and when it doesn’t,” she says. “That needs to be very transparent.”
A human-robot co-existence society could emerge by 2030, says Chen in his paper. Already, iRobot's Roomba robotic vacuum cleaner and Scooba floor cleaner are part of more than 3 million American households. The next generation of robots will be more sophisticated and is expected to provide services such as nursing, security, housework and education.
These machines will have the ability to make independent decisions and work reasonably unsupervised. That’s why, says Chen, it may be time to decide who regulates robots.
The rules for this new world will have to cover how humans should interact with robots and how robots should behave.
Responsibility for a robot’s actions is a one-way street today, says Hall. “So far, it’s always a case that if you build a machine that does something wrong it is your fault because you built the machine,” he says. “But there’s a clear day in the future that we will build machines that are complex enough to make decisions and we need to be ready for that.”
Assigning blame in case of a robot-related accident isn't always straightforward. Earlier this year, a Swedish factory was fined after a malfunctioning robot almost killed (http://www.thelocal.se/19120.html) a factory worker who was attempting to repair the machine generally used to lift heavy rocks. Thinking he had cut off the power supply, the worker approached the robot without any hesitation, but the robot came to life and grabbed the victim's head. In that case, the prosecutor held the factory liable for poor safety conditions but also laid part of the blame on the worker.
“Machines will evolve to a point where we will have to increasingly decide whether the fault for doing something wrong lies with someone who designed the machine or the machine itself,” says Hall.
Rules also need to govern social interaction between robots and humans, says Henrik Christensen (http://www.cc.gatech.edu/%7Ehic/Georgia-HomePage/Home.html), head of robotics at Georgia Institute of Technology’s College of Computing. For instance, robotics expert Hiroshi Ishiguro has created a bot (http://www.engadget.com/2006/07/21/hiroshi-ishiguro-builds-his-evil-android-twin-geminoid-hi-1/) based on his likeness. “There we are getting into the issue of how you want to interact with these robots,” says Christensen. “Should you be nice to a person and rude to their likeness? Is it okay to kick a robot dog but tell your kids to not do that with a normal dog? How do you tell your children about the difference?”
Christensen says the ethics of robot behavior and human interaction are not so much about protecting either party as about ensuring that the kind of interaction we have with robots is the "right thing."
Some of these guidelines will be hard-coded into the machines, others will become part of the software and a few will require independent monitoring agencies, say experts. That will also require creating a “legal machine language,” says Chen. That means a set of non-verbal rules, parts or all of which can be encoded in the robots. These rules would cover areas such as usability that would dictate, for instance, how close a robot can come to a human under various conditions, and safety guidelines that would conform to our current expectations of what is lawful.
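One concrete way to read the usability example above - how close a robot may come to a human under various conditions - is as a machine-checkable rule table. The contexts and distances below are invented purely to illustrate the idea; the paper does not specify any such encoding.

```python
# A hypothetical fragment of a "legal machine language": a context-
# dependent minimum approach distance. Contexts and limits are made up
# for illustration only.

MIN_DISTANCE_M = {
    "passing": 0.5,     # walking past a person in a corridor
    "handover": 0.2,    # handing an object to a person
    "idle": 1.0,        # no active task involving the person
}

def approach_allowed(context, distance_m):
    """Return True if the robot may be this close in this context.

    An unrecognized context falls back to the strictest limit, so the
    rule fails safe rather than permissive."""
    limit = MIN_DISTANCE_M.get(context, max(MIN_DISTANCE_M.values()))
    return distance_m >= limit
```

Because the rule is pure data plus a check, it is the kind of thing that could be hard-coded into a machine or audited by an independent monitoring agency, as the experts quoted above suggest.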
Still, the efforts to create a robot that can successfully interact with humans over time will likely be incomplete, say experts. "People have been trying to sum up what we mean by moral behavior in humans for thousands of years," says Hall. "Even if we get guidelines on robo-ethics the size of the federal code it would still fall short. Morality is impossible to write in formal terms."
Read the entire paper on human-robot co-existence (http://works.bepress.com/cgi/viewcontent.cgi?article=1000&context=weng_yueh_hsuan)

Wigner's Friend
07-22-2009, 10:12 PM
Sounds like somebody needs funding.

Taki Soma
07-22-2009, 10:16 PM

07-23-2009, 05:20 AM
This all leads to what we really want to know.
When do we get the hawt robots? :)


Stark Raving
07-23-2009, 05:56 AM
Robot Prostitutes 10.5 Years Away