Welcome to Ciencia en Canoa, an initiative created by Vanessa Restrepo Schild.

Friday, August 22, 2014

"Brain" In A Dish Acts As Autopilot Living Computer

A glass dish contains a "brain" -- a living network of 25,000 rat brain cells connected to an array of 60 electrodes. University of Florida/Ray Carson
A University of Florida scientist has grown a living “brain” that can fly a simulated plane, giving scientists a novel way to observe how brain cells function as a network.

The “brain” — a collection of 25,000 living neurons, or nerve cells, taken from a rat’s brain and cultured inside a glass dish — gives scientists a unique real-time window into the brain at the cellular level. By watching the brain cells interact, scientists hope to understand what causes neural disorders such as epilepsy and to determine noninvasive ways to intervene.
Thomas DeMarse holds a glass dish containing a living network of 25,000 rat brain cells connected to an array of 60 electrodes that can interact with a computer to fly a simulated F-22 fighter plane.
As living computers, they may someday be used to fly small unmanned airplanes or handle tasks that are dangerous for humans, such as search-and-rescue missions or bomb damage assessments.

“We’re interested in studying how brains compute,” said Thomas DeMarse, the UF assistant professor of biomedical engineering who designed the study. “If you think about your brain, and learning and the memory process, I can ask you questions about when you were 5 years old and you can retrieve information. That’s a tremendous capacity for memory. In fact, you perform fairly simple tasks that you would think a computer would easily be able to accomplish, but in fact it can’t.”

While computers are very fast at processing some kinds of information, they can’t approach the flexibility of the human brain, DeMarse said. In particular, brains can easily make certain kinds of computations — such as recognizing an unfamiliar piece of furniture as a table or a lamp — that are very difficult to program into today’s computers.

“If we can extract the rules of how these neural networks are doing computations like pattern recognition, we can apply that to create novel computing systems,” he said.
DeMarse’s experimental “brain” interacts with an F-22 fighter jet flight simulator through a specially designed plate called a multi-electrode array and a common desktop computer. “It’s essentially a dish with 60 electrodes arranged in a grid at the bottom,” DeMarse said. “Over that we put the living cortical neurons from rats, which rapidly begin to reconnect themselves, forming a living neural network — a brain.”

The brain and the simulator establish a two-way connection, similar to how neurons receive and interpret signals from each other to control our bodies. By observing how the nerve cells interact with the simulator, scientists can decode how a neural network establishes connections and begins to compute, DeMarse said.

When DeMarse first puts the neurons in the dish, they look like little more than grains of sand sprinkled in water. However, individual neurons soon begin to extend microscopic lines toward each other, making connections that represent neural processes. “You see one extend a process, pull it back, extend it out — and it may do that a couple of times, just sampling who’s next to it, until over time the connectivity starts to establish itself,” he said. “(The brain is) getting its network to the point where it’s a live computation device.”

To control the simulated aircraft, the neurons first receive information from the computer about flight conditions: whether the plane is flying straight and level or is tilted to the left or to the right.

The neurons then analyze the data and respond by sending signals to the plane’s controls. Those signals alter the flight path, and new information is sent to the neurons, creating a feedback system. “Initially when we hook up this brain to a flight simulator, it doesn’t know how to control the aircraft,” DeMarse said. “So you hook it up and the aircraft simply drifts randomly. And as the data come in, it slowly modifies the (neural) network so over time, the network gradually learns to fly the aircraft.”

Although the brain currently is able to control the pitch and roll of the simulated aircraft in weather conditions ranging from blue skies to stormy, hurricane-force winds, the underlying goal is a more fundamental understanding of how neurons interact as a network, DeMarse said. “There’s a lot of data out there that will tell you that the computation that’s going on here isn’t based on just one neuron.

“The computational property is actually an emergent property of hundreds or thousands of neurons cooperating to produce the amazing processing power of the brain.”

With José Principe, a UF distinguished professor of electrical engineering and director of UF’s Computational NeuroEngineering Laboratory, DeMarse has a $500,000 National Science Foundation grant to create a mathematical model that reproduces how the neurons compute.
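The learning loop described above — flight state in, corrective signals out, feedback gradually reshaping the network — can be caricatured in a few lines of Python. This is purely an illustrative toy (the class, gain, and noise levels are all invented); the real system stimulates and records living neurons through the electrode array rather than updating a software model.

```python
import random

class AdaptiveController:
    """Toy stand-in for the cultured network: maps pitch/roll errors to
    corrective signals, and strengthens its response based on feedback."""
    def __init__(self):
        self.gain = 0.0  # untrained: output is essentially random drift

    def respond(self, pitch_error, roll_error):
        # "firing": a correction proportional to the error, plus noise
        n1 = random.uniform(-0.1, 0.1)
        n2 = random.uniform(-0.1, 0.1)
        return -self.gain * pitch_error + n1, -self.gain * roll_error + n2

    def adapt(self, pitch_error, roll_error, rate=0.01):
        # feedback slowly reinforces corrective behavior, up to a cap
        self.gain = min(1.0, self.gain + rate * (abs(pitch_error) + abs(roll_error)))

def fly(controller, steps=500):
    pitch, roll = 1.0, -1.0                                # start tilted
    for _ in range(steps):
        d_pitch, d_roll = controller.respond(pitch, roll)  # signals to the controls
        pitch, roll = pitch + d_pitch, roll + d_roll       # flight path changes
        controller.adapt(pitch, roll)                      # new state fed back in
    return abs(pitch) + abs(roll)                          # residual deviation from level
```

With `rate=0` the aircraft just drifts, mirroring the untrained dish; with feedback on, the residual error shrinks as the gain builds up.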

ORIGINAL: U of Florida
by Jennifer Viegas  
Nov 27, 2012

Prepare to Be Shocked. Four predictions about how brain stimulation will make us smarter

Alvaro Dominguez

Several years ago, the Defense Advanced Research Projects Agency got wind of a technique called transcranial direct-current stimulation, or tDCS, which promised something extraordinary: a way to increase people’s performance in various capacities, from motor skills (in the case of recovering stroke patients) to language learning, all by stimulating their brains with electrical current. The simplest tDCS rigs are little more than nine-volt batteries hooked up to sponges embedded with metal and taped to a person’s scalp.

It’s only a short logical jump from the preceding applications to other potential uses of tDCS. What if, say, soldiers could be trained faster by hooking their heads up to a battery?

This is the kind of question DARPA was created to ask. So the agency awarded a grant to researchers at the University of New Mexico to test the hypothesis. They took a virtual-reality combat-training environment called Darwars Ambush—basically, a video game the military uses to train soldiers to respond to various situations—and captured still images. Then they Photoshopped in pictures of suspicious characters and partially concealed bombs. Subjects were shown the resulting tableaus, and were asked to decide very quickly whether each scene included signs of danger.

The first round of participants did all this inside an fMRI machine, which identified roughly the parts of their brains that were working hardest as they looked for threats. Then the researchers repeated the exercise with 100 new subjects, this time sticking electrodes over the areas of the brain that had been identified in the fMRI experiment, and ran two milliamps of current (nothing dangerous) to half of the subjects as they examined the images. The remaining subjects—the control group—got only a minuscule amount of current.

Under certain conditions, subjects receiving the full dose of current outperformed the others by a factor of two. And they performed especially well on tests administered an hour after training, indicating that what they’d learned was sticking. Simply put, running positive electrical current to the scalp was making people learn faster.
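Stripped of the neuroscience, the comparison reported here is a simple two-group design: full-dose subjects versus sham controls, scored on the detection task. A minimal sketch with invented per-subject accuracies, just to make the "factor of two" concrete (none of these numbers come from the study):

```python
def mean(xs):
    """Average of a list of scores."""
    return sum(xs) / len(xs)

# hypothetical per-subject accuracy on the threat-detection task
active = [0.82, 0.78, 0.90, 0.85, 0.88]  # full 2 mA stimulation group
sham   = [0.41, 0.44, 0.38, 0.45, 0.42]  # minuscule "sham" current group

ratio = mean(active) / mean(sham)
print(f"active/sham performance ratio: {ratio:.2f}")  # roughly 2x, as reported
```

The sham group matters because merely wearing electrodes could change behavior; the comparison isolates the effect of the current itself.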

Dozens of other studies have turned up additional evidence that brain stimulation can improve performance on specific tasks. In some cases, the gains are small—maybe 10 or 20 percent—and in others they are large, as in the DARPA study. Vince Clark, a University of New Mexico psychology professor who was involved with the DARPA work, told me that he’d tried every data-crunching tactic he could think of to explain away the effect of tDCS. “But it’s all there. It’s all real,” Clark said. “I keep trying to get rid of it, and it doesn’t go away.”

Now the intelligence-agency version of DARPA, known as IARPA, has created a program that will look at whether brain stimulation might be combined with exercise, nutrition, and games to even more dramatically enhance human performance. As Raja Parasuraman, a George Mason University psychology professor who is advising an IARPA team, puts it, “The end goal is to improve fluid intelligence—that is, to make people smarter.”

Whether or not IARPA finds a way to make spies smarter, the field of brain stimulation stands to shift our understanding of the neural structures and processes that underpin intelligence. Here, based on conversations with several neuroscientists on the cutting edge of the field, are four guesses about where all this might be headed. 

1. Brain stimulation will expand our understanding of the brain-mind connection.

The neural mechanisms of brain stimulation are just beginning to be understood, through work by Michael A. Nitsche and Walter Paulus at the University of Göttingen and by Marom Bikson at the City College of New York. Their findings suggest that adding current to the brain increases the plasticity of neurons, making it easier for them to form new connections. We don’t imagine our brains being so mechanistic. To fix a heart with simple plumbing techniques or to reset a bone is one thing. But you’re not supposed to literally flip an electrical switch and get better at spotting Waldo or learning Swahili, are you? And if flipping a switch does work, how will that affect our ideas about intelligence and selfhood?

Even if juicing the brain doesn’t magically increase IQ scores, it may temporarily and substantially improve performance on certain constituent tasks of intelligence, like memory retrieval and cognitive control. This in itself will pose significant ethical challenges, some of which echo dilemmas already being raised by “neuroenhancement” drugs like Provigil. Workers doing cognitively demanding tasks—air-traffic controllers, physicists, live-radio hosts—could find themselves in the same position as cyclists, weight lifters, and baseball players. They’ll either be surpassed by those willing to augment their natural abilities, or they’ll have to augment themselves.

2. DIY brain stimulation will be popular—and risky.

As word of research findings has spread, do-it-yourselfers on Reddit and elsewhere have traded tips on building simple rigs and where to place electrodes for particular effects. Researchers like the Wright State neuroscientist Michael Weisend have in turn gone on DIY podcasts to warn them off. There’s so much we don’t know. Is neurostimulation safe over long periods of time? Will we become addicted to it? Some scientists, like Stanford’s Teresa Iuculano and Oxford’s Roi Cohen Kadosh, warn that cognitive enhancement through electrical stimulation may “occur at the expense of other cognitive functions.” For example, when Iuculano and Cohen Kadosh applied electrical stimulation to subjects who were learning a code that paired various numbers with symbols, the test group memorized the symbols faster than the control group did. But they were slower when it came time to actually use the symbols to do arithmetic. Maybe thinking will prove to be a zero-sum game: we cannot add to our mental powers without also subtracting from them.

3. Electrical stimulation is just the beginning.

Scientists across the country are becoming interested in how other types of electromagnetic radiation might affect the brain. Some are looking at using alternating current at different frequencies, magnetic energy, ultrasound, even different types of sonic noise. There appear to be many ways of exciting the brain’s circuitry with various energetic technologies, but basic research is only in its infancy. “It’s so early,” Clark told me. “It’s very empirical now—see an effect and play with it.”

As we learn more about our neurons’ wiring, through efforts like President Obama’s BRAIN Initiative—a huge, multiagency attempt to map the brain—we may become better able to deliver energy to exactly the right spots, as opposed to bathing big portions of the brain in current or ultrasound. Early research suggests that such targeting could mean the difference between modest improvements and the startling DARPA results. It’s not hard to imagine a plethora of treatments tailored to specific types of learning, cognition, or mood—a bit of current here to boost working memory, some there to help with linguistic fluency, a dash of ultrasound to improve one’s sense of well-being.

4. The most important application may be clinical treatment.

City College’s Bikson worries that an emphasis on cognitive enhancement could overshadow therapies for the sick, which he sees as the more promising application of this technology. In his view, do-it-yourself tDCS is a sideshow—clinical tDCS could be used to treat people suffering from epilepsy, migraines, stroke damage, and depression. “The science and early medical trials suggest tDCS can have as large an impact as drugs and specifically treat those who have failed to respond to drugs,” he told me. “tDCS researchers go to work every day knowing the long-term goal is to reduce human suffering on a transformative scale.” To that end, many of them would like to see clinical trials test tDCS against leading drug therapies. “Hopefully the National Institutes of Health will do that,” Parasuraman, the George Mason professor, said. “I’d like to see straightforward, side-by-side competition between tDCS and antidepressants. May the best thing win.”

A Brief Chronicle of Cognitive Enhancement
  • 500 b.c.: Ancient Greek scholars wear rosemary in their hair, believing it to boost memory.
  • 1886: John Pemberton formulates the original Coca-Cola, with cocaine and caffeine. It’s advertised as a “brain tonic.”
  • 1955: The FDA licenses methylphenidate—a.k.a. Ritalin—for treating “hyperactivity.”
  • 1997: Julie Aigner-Clark launches Baby Einstein, a line of products claiming to “facilitate the development of the brain in infants.”
  • 1998: Provigil hits the U.S. market.
  • 2005: Lumosity, a San Francisco company devoted to online “brain training,” is founded.
  • 2020: A tDCS company starts an SAT-prep service for high-school students.
ORIGINAL: The Atlantic
Aug 13 2014

Podcast with Vanessa Restrepo Schild from SHD Medellín

Listen to our interview with Vanessa Restrepo Schild, organizer of SHD Medellín and president of the Kairos Society of Latin America.

Q&A with Science Hack Day Medellín Winner: Jimmy Alexander Garcia Caicedo

Matt Biddulph
Science Hack Day Medellín Winner, Jimmy Alexander Garcia Caicedo, talks about his experience at Science Hack Day and how his team came up with the winning hack - a motion-sensor activated conveyor belt for people with reduced mobility that works with clean and alternative energy.
What is your name and what do you do?
My name is Jimmy Alexander Garcia Caicedo. I teach technology in Medellín and am a systems engineer.
What is your background in science, if any, and what are your major areas of interest?
My experience is in educational robotics and innovation. I am interested in technology and eco-urbanism.
How did you first hear about Science Hack Day? 
I heard of Science Hack Day from a former student of my institution, and we wanted to participate to present an innovative idea (an automated and self-sustaining bridge) that we think solves a community problem. The Medellin event was my first Science Hack Day.
How did you come up with the idea for your hack?
Our project for Science Hack Day Medellín arose from a mobility problem that occurs in the community. We tailored an idea we had in the past to answer one of the four challenges described in the event to present the most innovative idea.
Have you worked with anyone in your team before attending the event?
Yes, I worked with my robotics club on a project for a science, technology and innovation fair in Medellin. This project provided the basis for building our hack, which we called SIMA (Automated Mobility System).
Did you have an idea of what type of hack you wanted to build before the event?
We knew we wanted to focus on solving a community problem. The trick was to build a prototype Automated Mobility System with robotics pieces; this allowed us to demonstrate the functionality of the idea on a smaller scale.
Could you describe your winning hack? 
Our innovative and winning idea consists of a motion-sensor activated conveyor belt for people with reduced mobility: the elderly, pregnant women, children, cyclists, etc. It’s an inclusive system that works with clean and alternative energy (solar and kinetic). It also serves to protect the environment, as it integrates the concept of eco-urbanism to improve appearance and reduce overheating of parts, thus ensuring better performance and durability.
Did you learn anything new at the Science Hack Day event?
Of course. Sharing ideas with people from other areas makes for many lessons in design, communication, programming, teamwork, solidarity and more.
How does Science Hack Day benefit a local scientific and tech community and/or the community as a whole?
Science and communities benefit greatly when people from different and diverse backgrounds and professions analyze, raise and develop innovative ideas to solve real community problems.
by aswartz

Thursday, August 21, 2014

Siri’s Inventors Are Building a Radical New AI That Does Anything You Ask

Viv was named after the Latin root meaning live. Its San Jose, California, offices are decorated with tchotchkes bearing the numbers six and five (VI and V in roman numerals). Ariel Zambelich

When Apple announced the iPhone 4S on October 4, 2011, the headlines were not about its speedy A5 chip or improved camera. Instead they focused on an unusual new feature: an intelligent assistant, dubbed Siri. At first Siri, endowed with a female voice, seemed almost human in the way she understood what you said to her and responded, an advance in artificial intelligence that seemed to place us on a fast track to the Singularity. She was brilliant at fulfilling certain requests, like “Can you set the alarm for 6:30?” or “Call Diane’s mobile phone.” And she had a personality: If you asked her if there was a God, she would demur with deft wisdom. “My policy is the separation of spirit and silicon,” she’d say.

Over the next few months, however, Siri’s limitations became apparent. Ask her to book a plane trip and she would point to travel websites—but she wouldn’t give flight options, let alone secure you a seat. Ask her to buy a copy of Lee Child’s new book and she would draw a blank, despite the fact that Apple sells it. Though Apple has since extended Siri’s powers—to make an OpenTable restaurant reservation, for example—she still can’t do something as simple as booking a table on the next available night in your schedule. She knows how to check your calendar and she knows how to use OpenTable. But putting those things together is, at the moment, beyond her.

Now a small team of engineers at a stealth startup called Viv Labs claims to be on the verge of realizing an advanced form of AI that removes those limitations. Whereas Siri can only perform tasks that Apple engineers explicitly implement, this new program, they say, will be able to teach itself, giving it almost limitless capabilities. In time, they assert, their creation will be able to use your personal preferences and a near-infinite web of connections to answer almost any query and perform almost any function.

“Siri is chapter one of a much longer, bigger story,” says Dag Kittlaus, one of Viv’s cofounders. He should know. Before working on Viv, he helped create Siri. So did his fellow cofounders, Adam Cheyer and Chris Brigham.

For the past two years, the team has been working on Viv Labs’ product—also named Viv, after the Latin root meaning live. Their project has been draped in secrecy, but the few outsiders who have gotten a look speak about it in rapturous terms. “The vision is very significant,” says Oren Etzioni, a renowned AI expert who heads the Allen Institute for Artificial Intelligence. “If this team is successful, we are looking at the future of intelligent agents and a multibillion-dollar industry.”

Viv is not the only company competing for a share of those billions. The field of artificial intelligence has become the scene of a frantic corporate arms race, with Internet giants snapping up AI startups and talent. Google recently paid a reported $500 million for the UK deep-learning company DeepMind and has lured AI legends Geoffrey Hinton and Ray Kurzweil to its headquarters in Mountain View, California. Facebook has its own deep-learning group, led by prize hire Yann LeCun from New York University. Their goal is to build a new generation of AI that can process massive troves of data to predict and fulfill our desires.

Viv strives to be the first consumer-friendly assistant that truly achieves that promise. It wants to be not only blindingly smart and infinitely flexible but omnipresent. Viv’s creators hope that some day soon it will be embedded in a plethora of Internet-connected everyday objects. Viv founders say you’ll access its artificial intelligence as a utility, the way you draw on electricity. Simply by speaking, you will connect to what they are calling “a global brain.” And that brain can help power a million different apps and devices.

“I’m extremely proud of Siri and the impact it’s had on the world, but in many ways it could have been more,” Cheyer says. “Now I want to do something bigger than mobile, bigger than consumer, bigger than desktop or enterprise. I want to do something that could fundamentally change the way software is built.”

Viv Labs is tucked behind an unmarked door on a middle floor of a generic glass office building in downtown San Jose. Visitors enter into a small suite and walk past a pool table to get to the single conference room, glimpsing on the way a handful of engineers staring into monitors on trestle tables. Once in the meeting room, Kittlaus—a product-whisperer whose career includes stints at Motorola and Apple—is usually the one to start things off.

He acknowledges that an abundance of voice-navigated systems already exists. In addition to Siri, there is Google Now, which can anticipate some of your needs, alerting you, for example, that you should leave 15 minutes sooner for the airport because of traffic delays. Microsoft, which has been pursuing machine-learning techniques for decades, recently came out with a Siri-like system called Cortana. Amazon uses voice technology in its Fire TV product.

But Kittlaus points out that all of these services are strictly limited. Cheyer elaborates: “Google Now has a huge knowledge graph—you can ask questions like ‘Where was Abraham Lincoln born?’ And it can name the city. You can also say, ‘What is the population?’ of a city and it’ll bring up a chart and answer. But you cannot say, ‘What is the population of the city where Abraham Lincoln was born?’” The system may have the data for both these components, but it has no ability to put them together, either to answer a query or to make a smart suggestion. Like Siri, it can’t do anything that coders haven’t explicitly programmed it to do.
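Cheyer’s example is easy to restate in code: each lookup works in isolation, and the missing piece is simply feeding one answer into the next query. A toy sketch (the two-entry "knowledge graph", the function names, and the population figure are all invented for illustration):

```python
# toy knowledge graph: each fact is answerable on its own
BIRTHPLACE = {"Abraham Lincoln": "Hodgenville"}
POPULATION = {"Hodgenville": 3206}

def birthplace(person):
    """Atomic query: where was this person born?"""
    return BIRTHPLACE[person]

def population(city):
    """Atomic query: how many people live in this city?"""
    return POPULATION[city]

# Siri or Google Now, circa 2014: each query answered in isolation
birthplace("Abraham Lincoln")   # the city
population("Hodgenville")       # the number

# the composition Viv aims for: one answer becomes the next query
population(birthplace("Abraham Lincoln"))
```

The data for both components is there; what the 2014-era assistants lacked was the step that routes the output of one query into the input of another.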

Viv breaks through those constraints by generating its own code on the fly, no programmers required. Take a complicated command like “Give me a flight to Dallas with a seat that Shaq could fit in.” Viv will parse the sentence and then it will perform its best trick: automatically generating a quick, efficient program to link third-party sources of information together—say, Kayak, SeatGuru, and the NBA media guide—so it can identify available flights with lots of legroom. And it can do all of this in a fraction of a second.
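One plausible reading of "automatically generating a quick, efficient program" is planning over a registry of services, each declaring what it consumes and produces, with a search for a chain from the user’s input to the requested goal. This is a speculative sketch of that idea, not Viv’s actual mechanism; the service names and type labels are invented:

```python
from collections import deque

# each "service" declares (name, input type, output type)
SERVICES = [
    ("flights", "route", "flight_list"),           # a Kayak-like flight search
    ("legroom", "flight_list", "roomy_flights"),   # a SeatGuru-like seat filter
]

def plan(start, goal, services):
    """Breadth-first search for a chain of services turning `start` into `goal`."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        current, chain = queue.popleft()
        if current == goal:
            return chain          # ordered list of services to invoke
        for name, inp, out in services:
            if inp == current and out not in seen:
                seen.add(out)
                queue.append((out, chain + [name]))
    return None                   # no chain of services reaches the goal

print(plan("route", "roomy_flights", SERVICES))  # ['flights', 'legroom']
```

The found chain is itself a small program: run the flight search, pipe its results through the seat filter, return what survives.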

Viv is an open system that will let innumerable businesses and applications become part of its boundless brain. The technical barriers are minimal, requiring brief “training” (in some cases, minutes) for Viv to understand the jargon of the specific topic. As Viv’s knowledge grows, so will its understanding; its creators have designed it based on three principles they call its “pillars”:
  • It will be taught by the world, 
  • it will know more than it is taught, and 
  • it will learn something every day. 
As with other AI products, that teaching involves using sophisticated algorithms to interpret the language and behavior of people using the system—the more people use it, the smarter it gets. By knowing who its users are and which services they interact with, Viv can sift through that vast trove of data and find new ways to connect and manipulate the information.

Kittlaus says the end result will be a digital assistant who knows what you want before you ask for it. He envisions someone unsteadily holding a phone to his mouth outside a dive bar at 2 am and saying, “I’m drunk.” Without any elaboration, Viv would contact the user’s preferred car service, dispatch it to the address where he’s half passed out, and direct the driver to take him home. No further consciousness required.

The founders of a stealth startup called Viv Labs—Adam Cheyer, Dag Kittlaus, and Chris Brigham—are building a Siri-like digital assistant that can process massive troves of data, teach itself, and write its own programs on the fly. The goal: to predict and fulfill our desires. Ariel Zambelich

If Kittlaus is in some ways the Steve Jobs of Viv—he is the only non-engineer on the 10-person team and its main voice on strategy and marketing—Cheyer is the company’s Steve Wozniak, the project’s key scientific mind. Unlike the whimsical creator of the Apple II, though, Cheyer is aggressively analytical in every facet of his life, even beyond the workbench. As a kid, he was a Rubik’s Cube champion, averaging 26 seconds a solution. When he encountered programming, he dove in headfirst. “I felt that computers were invented for me,” he says. And while in high school he discovered a regimen to force the world to bend to his will. “I live my life by what I call verbally stated goals,” he says. “I crystallize a feeling, a need, into words. I think about the words, and I tell everyone I meet, ‘This is what I’m doing.’ I say it, and then I believe it. By telling people, you’re committed to it, and they help you. And it works.”

He says he used the technique to land his early computing jobs, including the most significant—at SRI International, a Menlo Park think tank that invented the concept of computer windows and the mouse. It was there, in the early 2000s, that Cheyer led the engineering of a Darpa-backed AI effort to build “a humanlike system that could sense the world, understand it, reason about it, plan, communicate, and act.” The SRI-led team built what it called a Cognitive Assistant that Learns and Organizes, or CALO. They set some AI high-water marks, not least being the system’s ability to understand natural language. As the five-year program wound down, it was unclear what would happen next.

That was when Kittlaus, who had quit his job at Motorola, showed up at SRI as an entrepreneur in residence. When he saw a CALO-related prototype, he told Cheyer he could definitely build a business from it, calling it the perfect complement to the just-released iPhone. In 2007, with SRI’s blessing, they licensed the technology for a startup, taking on a third cofounder, an AI expert named Tom Gruber, and eventually renaming the system Siri.

The small team, which grew to include Chris Brigham, an engineer who had impressed Cheyer on CALO, moved to San Jose and worked for two years to get things right. “One of the hardest parts was the natural language understanding,” Cheyer says. Ultimately they had an iPhone app that could perform a host of interesting tasks—call a cab, book a table, get movie tickets—and carry on a conversation with brio. They released it publicly to users in February 2010. Three weeks later, Steve Jobs called. He wanted to buy the company.

“I was shocked at how well he knew our app,” Cheyer says. At first they declined to sell, but Jobs persisted. His winning argument was that Apple could expose Siri to a far wider audience than a startup could reach. He promised to promote it as a key element on every iPhone. Apple bought the company in April 2010 for a reported $200 million.

The core Siri team came to Apple with the project. But as Siri was honed into a product that millions could use in multiple languages, some members of the original team reportedly had difficulties with executives who were less respectful of their vision than Jobs was. Kitt­laus left Apple the day after the launch—the day Steve Jobs died. Cheyer departed several months later. “I do feel if Steve were alive, I would still be at Apple,” Cheyer says. “I’ll leave it at that.” (Gruber, the third Siri cofounder, remains at Apple.)

After several months, Kittlaus got back in touch with Cheyer and Brigham. They asked one another what they thought the world would be like in five years. As they drew ideas on a whiteboard in Kittlaus’ house, Brigham brought up the idea of a program that could put the things it knows together in new ways. As talks continued, they lit on the concept of a cloud-based intelligence, a global brain. “The only way to make this ubiquitous conversational assistant is to open it up to third parties to allow everyone to plug into it,” Brigham says.

In retrospect, they were re-creating Siri as it might have evolved had Apple never bought it. Before the sale, Siri had partnered with around 45 services, from to Yahoo; Apple had rolled Siri out with less than half a dozen. “Siri in 2014 is less capable than it was in 2010,” says Gary Morgenthaler, one of the funders of the original app.

Cheyer and Brigham tapped experts in various AI and coding niches to fill out their small group. To produce some of the toughest parts—the architecture to allow Viv to understand language and write its own programs—they brought in Mark Gabel from the University of Texas at Dallas. Another key hire was David Gondek, one of the creators of IBM’S Watson.

Funding came from Solina Chau, the partner (in business and otherwise) of the richest man in China, Li Ka-shing. Chau runs the venture firm Horizons Ventures. In addition to investing in Facebook, DeepMind, and Summly (bought by Yahoo), it helped fund the original Siri. When Viv’s founders asked Chau for $10 million, she said, “I’m in. Do you want me to wire it now?”

It’s early May, and Kittlaus is addressing the team at its weekly engineering meeting. “You can see the progress,” he tells the group, “see it get closer to the point where it just works.” Each engineer delineates the advances they’ve made and next steps. One explains how he has been refining Viv’s response to “Get me a ticket to the cheapest flight from SFO to Charles de Gaulle on July 2, with a return flight the following Monday.” In the past week, the engineer added an airplane-seating database. Using a laptop-based prototype of Viv that displays a virtual phone screen, he speaks into the microphone. Lufthansa Flight 455 fits the bill. “Seat 61G is available according to your preferences,” Viv replies, then purchases the seat using a credit card.

Viv’s founders don’t see it as just one product tied to a hardware manufacturer. They see it as a service that can be licensed. They imagine that everyone from TV manufacturers and car companies to app developers will want to incorporate Viv’s AI, just as PC manufacturers once clamored to boast of their Intel microprocessors. They envision its icon joining the pantheon of familiar symbols like Power On, Wi-Fi, and Bluetooth.

“Intelligence becomes a utility,” Kittlaus says. “Boy, wouldn’t it be nice if you could talk to everything, and it knew you, and it knew everything about you, and it could do everything?”

That would also be nice because it just might provide Viv with a business model. Kittlaus thinks Viv could be instrumental in what he calls “the referral economy.” He cites a factoid about that he learned from its CEO: The company arranges 50,000 dates a day. “What isn’t able to do is say, ‘Let me get you tickets for something. Would you like me to book a table? Do you want me to send Uber to pick her up? Do you want me to have flowers sent to the table?’” Viv could provide all those services—in exchange for a cut of the transactions that resulted.

Building that ecosystem will be a difficult task, one that Viv Labs could hasten considerably by selling out to one of the Internet giants. “Let me just cut through all the usual founder bullshit,” Kittlaus says. “What we’re really after is ubiquity. We want this to be everywhere, and we’re going to consider all paths along those lines.” To some associated with Viv Labs, selling the company would seem like a tired rerun. “I’m deeply hoping they build it,” says Bart Swanson, a Horizons adviser on Viv Labs’ board. “They will be able to control it only if they do it themselves.”

Whether they will succeed, of course, is not certain. “Viv is potentially very big, but it’s all still potential,” says Morgenthaler, the original Siri funder. A big challenge, he says, will be whether the thousands of third-party components work together—or whether they clash, leading to a confused Viv that makes boneheaded errors. Can Viv get it right? “The jury is out, but I have very high confidence,” he says. “I only have doubt as to when and how.”

Most of the carefully chosen outsiders who have seen early demos are similarly confident. One is Vishal Sharma, who until recently was VP of product for Google Now. When Cheyer showed him how Viv located the closest bottle of wine that paired well with a dish, he was blown away. “I don’t know any system in the world that could answer a question like that,” he says. “Many things can go wrong, but I would like to see something like this exist.”

Indeed, many things have to go right for Viv to make good on its founders’ promises. It has to prove that its code-making skills can scale to include petabytes of data. It has to continually get smarter through omnivorous learning. It has to win users despite not having a preexisting base like Google and Apple have. It has to lure developers who are already stressed adapting their wares to multiple platforms. And it has to be as seductive as Scarlett Johansson in Her so that people are comfortable sharing their personal information with a robot that might become one of the most important forces in their lives.

The inventors of Siri are confident that their next creation will eclipse the first. But whether and when that will happen is a question that even Viv herself cannot answer. Yet.



Joi Ito: Want to innovate? Become a "now-ist"

“Remember before the internet?” asks Joi Ito. “Remember when people used to try to predict the future?” In this engaging talk, the head of the MIT Media Lab skips the future predictions and instead shares a new approach to creating in the moment: building quickly and improving constantly, without waiting for permission or for proof that you have the right idea. This kind of bottom-up innovation is seen in the most fascinating, futuristic projects emerging today, and it starts, he says, with being open and alert to what’s going on around you right now. Don’t be a futurist, he suggests: be a now-ist.

Preparing Your Students for the Challenges of Tomorrow

Right now, you have students. Eventually, those students will become the citizens -- employers, employees, professionals, educators, and caretakers of our planet -- of the 21st century. Beyond mastery of standards, what can you do to help prepare them? What can you promote to be sure they are equipped with the skill sets they will need to take on challenges and opportunities that we can't yet even imagine?

Following are six tips to guide you in preparing your students for what they're likely to face in the years and decades to come.
1. Teach Collaboration as a Value and Skill Set
Students of today need new skills for the coming century that will make them ready to collaborate with others on a global level. Whatever they do, we can expect their work to include finding creative solutions to emerging challenges.
2. Evaluate Information Accuracy
New information is being discovered and disseminated at a phenomenal rate. It is predicted that 50 percent of the facts students are memorizing today will no longer be accurate or complete in the near future. Students need to know
  • how to find accurate information, and
  • how to use critical analysis to assess the veracity or bias of new information and its current or potential uses.
These are the executive functions that they need to develop and practice in the home and at school today, because without them, students will be unprepared to find, analyze, and use the information of tomorrow.
3. Teach Tolerance 
In order for collaboration to happen within a global community, job applicants of the future will be evaluated on their ability to communicate with, their openness to, and their tolerance for unfamiliar cultures and ideas. To foster these critical skills, today's students will need open discussions and experiences that can help them learn about and feel comfortable communicating with people of other cultures.
4. Help Students Learn Through Their Strengths 
Children are born with brains that want to learn. They're also born with different strengths -- and they grow best through those strengths. One size does not fit all in assessment and instruction. The current testing system and the curriculum that it has spawned leave behind the majority of students who might not be doing their best with the linear, sequential instruction required for this kind of testing. Look ahead on the curriculum map and help promote each student's interest in the topic beforehand. Use clever "front-loading" techniques that will pique their curiosity.

5. Use Learning Beyond the Classroom
New "learning" does not become permanent memory unless there is repeated stimulation of the new memory circuits in the brain's pathways. This is the "practice makes permanent" aspect of neuroplasticity, where the neural networks that are stimulated most develop more dendrites, synapses, and thicker myelin for more efficient information transmission. These stronger networks are less susceptible to pruning, and they become long-term memory holders. Students need to use what they learn repeatedly and in different, personally meaningful ways for short-term memory to become permanent knowledge that can be retrieved and used in the future. Help your students make memories permanent by providing opportunities for them to "transfer" school learning to real-life situations.
6. Teach Students to Use Their Brain Owner's Manual
The most important manual that you can share with your students is the owner's manual to their own brains. When they understand how their brains take in and store information (PDF, 139KB), they hold the keys to successfully operating the most powerful tool they'll ever own. When your students understand that, through neuroplasticity, they can change their own brains and intelligence, together you can build their resilience and willingness to persevere through the challenges that they will undoubtedly face in the future.

How are you preparing your students to thrive in the world they'll inhabit as adults?

ORIGINAL: Edutopia 
Judy Willis MD's Profile

August 20, 2014

domingo, 17 de agosto de 2014

Brainstorming Doesn't Work; Try This Technique Instead

Ever been in a meeting where one loudmouth's mediocre idea dominates?
Then you know brainstorming needs an overhaul.

Brainstorming, in its current form and by many metrics, doesn't work as well as the frequency of "team brainstorming meetings" would suggest it does. Early ideas tend to have disproportionate influence over the rest of the conversation.

Sharing ideas in groups isn't the problem; it's the "out-loud" part that, ironically, leads to groupthink instead of unique ideas. "As sexy as brainstorming is, with people popping like champagne with ideas, what actually happens is when one person is talking you're not thinking of your own ideas," Leigh Thompson, a management professor at the Kellogg School, told Fast Company. "Subconsciously you're already assimilating to my ideas."

That process is called "anchoring," and it crushes originality. "Early ideas tend to have disproportionate influence over the rest of the conversation," Loran Nordgren, also a professor at Kellogg, explained. "They establish the kinds of norms, or cement the idea of what are appropriate examples or potential solutions for the problem."

Because brainstorming favors the first ideas, it also breeds the least creative ideas, a phenomenon called conformity pressure. People hoping to look smart and productive will blurt out low-hanging fruit first. Everyone else then rallies around that idea both internally and externally. Unfortunately, that takes up time and energy, leaving a lot of the best thinking undeveloped. We've all been in meetings like this: Some jerk says the obvious thing before anyone else, taking all of the glory; everyone else harrumphs. Brainstorm session over.

To avoid these problems, both Thompson and Nordgren suggest another, quieter process: brainwriting. (The phrase, now used by Thompson, was coined by UT Arlington professor Paul Paulus.) The general principle is that idea generation should exist separate from discussion. Although the two professors have slightly different systems, they both offer the same general solution: write first, talk second.

Brainwriting works best if, before or at the beginning of the meeting, people write down their ideas. Then everyone comes together to share those ideas out loud in a systematic way. Thompson has her participants post all the ideas on a wall without anyone's name attached, and then everyone votes on the best ones. "It should be a meritocracy of ideas," she said. "It's not a popularity contest." Only after that do people talk.
Nordgren, via an app he developed called Candor, has people record their thoughts before the meeting. Then, everyone goes around in a circle saying each idea.
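The write first, talk second protocol is concrete enough to sketch in code. The following is purely illustrative Python, not Thompson's or Nordgren's actual tooling; the `brainwrite` function, its input format, and the per-person vote limit are all assumptions made for the example:

```python
from collections import Counter

def brainwrite(participants, votes_per_person=3):
    """One brainwriting round: silent writing, anonymous pooling, then voting."""
    # 1. Write first: every participant generates ideas alone, in silence,
    #    so early ideas cannot anchor anyone else's thinking.
    pool = []
    for person in participants:
        pool.extend(person["ideas"])
    # 2. Pool the ideas anonymously, presenting each distinct idea only once
    #    (as with Candor, ideas someone already offered are skipped).
    distinct = list(dict.fromkeys(pool))
    # 3. Vote on the anonymous pool -- a meritocracy of ideas,
    #    not a popularity contest.
    tally = Counter()
    for person in participants:
        for idea in person["votes"][:votes_per_person]:
            if idea in distinct:
                tally[idea] += 1
    # 4. Talk second: discussion begins only after the ranking exists.
    return tally.most_common()
```

Ranking before discussion is the point: the order comes from votes on nameless ideas, so the loudest voice gets no head start.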

This write first, discuss later system eliminates the anchoring problem because people think in a vacuum, unbiased by anyone else. Of course, people still jot down the most obvious ideas, which aren't necessarily bad ideas. But in brainstorming the goal is quantity, not quality. To avoid spending too much time on repetitive suggestions, people using Candor only present ideas someone else hasn't already said.

In her studies, Thompson found that brainwriting groups generated 20% more ideas and 42% more original ideas as compared to traditional brainstorming groups, she writes in her book Creative Conspiracy. "I was shocked to find there's not a single published study in which a face-to-face brainstorming group outperforms a brainwriting group," she said. In Nordgren's research he has found that the process leads to more diverse and candid ideas.

Discussion still has its merits, but should only take place after the group has generated a variety of distinct ideas with which to work. Raw ideas rarely work. It's the permutation and combination of the outlandish and banal that lead to the best proposals. "Usually the best idea that is selected at the end isn't exactly what anyone came up with at the beginning; the idea has been edited," Nordgren added.

The best part of introverted thinking, however, is that it cuts down on what I'll call the "loudmouth meeting-hog phenomenon." You know the type: the person who, along with one or two others, dominates the conversation. (Here Fast Company's Baratunde Thurston acts out this very scenario with Behance co-founder Scott Belsky.) Thompson's studies have found that in most meetings with traditional brainstorming, a few people do 60-75% of the talking. With brainwriting, everyone gets a chance.

ORIGINAL: FastCompany

Your Botanical Album. Semana del Árbol (Tree Week). Colombia

We want you not only to join and take part in this project through any of the following options, but also to enter the contest.

Here's how to enter the contest!

Send up to two photographs in which trees are the stars. Any kind of tree -- the important thing is that it's the star of the photo!

What do you need to do? Send your photos to the contest email address with the photographer's credit, the place where the photo was taken, and a name for it.

What will we do with the photos? We will publish them on our website and on the Semana del Árbol and Savia social media accounts (Facebook and Twitter), crediting the author.

For a photo to be published, keep in mind:
It must include an author, a location, and a name
Follow us on social media

On Facebook: 
Watch out for plagiarism!
Send at most two photos per person
Send them to the email address mentioned

What can you win?
We will publish 15 photos, but only the 3 chosen by Savia and Semana del Árbol will win prizes. So show off all your talents: skill in photography and love for the care of trees.

First place:
  • The book "Vienen los pájaros", a bird guide published by the Universidad Tecnológica de Pereira
  • The second volume of the Savia collection: Savia Amazonas – Orinoco
  • A Semana del Árbol T-shirt
Second and third place:
  • The second volume of the Savia collection: Savia Amazonas – Orinoco
  • A Semana del Árbol T-shirt

The three winners will be contacted by email and will receive their prizes by post.

Don't miss out; now, send us your photos!

ORIGINAL: Savia Botánica

sábado, 16 de agosto de 2014

A Thousand Kilobots Self-Assemble Into Complex Shapes

By Evan Ackerman
14 Aug 2014

Photo: Michael Rubenstein/Harvard University

When Harvard roboticists first introduced their Kilobots in 2011, they'd only made 25 of them. When we next saw the robots in 2013, they'd made 100. Now the researchers have built one thousand of them. That's a whole kilo of Kilobots, and probably the most robots that have ever been in the same place at the same time, ever.

The researchers—Michael Rubenstein, Alejandro Cornejo, and Professor Radhika Nagpal of Harvard's Self-Organizing Systems Research Group—describe their thousand-robot swarm in a paper published today in Science (they actually built 1024 robots, apparently following the computer science definition of "kilo").

Despite their menacing name (KILL-O-BOTS!) and the robot swarm nightmares they may induce in some people, these little guys are harmless. Each Kilobot [pictured below] is a small, cheap-ish ($14) device that can move around by vibrating its legs and communicate with other robots via infrared transmitters and receivers.

Photo: Michael Rubenstein/Harvard University

Two things are key to doing useful stuff with a swarm of robots like this.
  1. Thing one is having a lot of them, and one thousand most definitely qualifies as A LOT of them. In fact, researchers working on swarm robotics often rely on computer simulations (because digital robots are cheap!). When they build actual robots, their "swarm" consists of five robots or so. Maybe ten. Or a hundred, in rare cases. But a thousand Kilobots—that's a swarm. There are so many robots here that the importance of any individual robot is close to zero, which is a big part of the point of a swarm in the first place: robots can screw up, robots can break down, but there are so many of them that it just doesn't matter, because their collective behavior prevails.
  2. The second thing about swarm robotics is that you need the software and infrastructure to manage and control a huge number of robots. With a thousand robots, tasks that are trivial with a few robots rapidly become impossible. Charging is one example. Imagine if you had to manually plug 1,000 robots into their chargers. For their Kilobots, the Harvard researchers solve this problem by sandwiching them between two metal sheets and passing a current through them. Similarly, you can rapidly program the robots by beaming infrared signals at them. And because you can do these operations (charging and programming) for all the robots at once, even if you increase the size of the swarm, the time required remains the same.

So once you've got your robots, and your charging, and your programming, what can you do?

In biological systems, swarms organize and control themselves based on a set of very simple rules. With fish, for example, the rules are to stay close to the fish in front of you while turning toward your nearest neighbor to the side. With the Kilobots, the algorithm they use to create shapes is based on a similarly simple set of capabilities:
  • Edge-following, where a robot can move along the edge of a group by measuring distances from robots on the edge
  • Gradient formation, where a source robot can generate a gradient value message that increments as it propagates through the swarm, giving each robot a geodesic distance from the source
  • Localization, where the robots can form a local coordinate system using communication with, and measured distances to, neighbors
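Gradient formation, for example, behaves like a breadth-first flood outward from the source robot. Here is a minimal, centralized Python sketch of that idea; the function name, the coordinate inputs, and the `comm_radius` value are assumptions for illustration, not the Kilobots' actual distributed firmware:

```python
import math
from collections import deque

def gradient_formation(positions, source=0, comm_radius=1.5):
    """Assign each robot a hop count (gradient value) from a source robot.

    positions: list of (x, y) robot coordinates.
    A robot only hears neighbors within comm_radius; it adopts the
    smallest gradient it hears plus one, so the value approximates
    geodesic distance through the swarm.
    """
    n = len(positions)

    def neighbors(i):
        return [j for j in range(n)
                if j != i and math.dist(positions[i], positions[j]) <= comm_radius]

    gradient = {source: 0}
    queue = deque([source])
    while queue:
        i = queue.popleft()
        for j in neighbors(i):
            if j not in gradient:  # breadth-first order guarantees min hops
                gradient[j] = gradient[i] + 1
                queue.append(j)
    return gradient
```

On the real robots the same result emerges purely locally: each robot repeatedly broadcasts its current value and keeps the minimum it hears, plus one.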
Of these capabilities, the localization is both the trickiest and most important. The robots talk to each other by bouncing infrared light off of the surface that they operate on. They can tell how far away they are from other robots by measuring how much the brightness of the infrared light changes: the dimmer the light, the farther away they are. But they have no information about where exactly the light is coming from. So, to localize, they depend on an initial "seed" group of robots to define the origin of a coordinate system, and then subsequent robots can localize based on the relative brightness of the infrared pulses coming from at least three other robots that have already been localized. 
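In idealized form, that seed-based step is trilateration: three known positions plus three measured distances determine a fourth point. A sketch under the assumption of noise-free distance measurements (the real robots must cope with noisy infrared brightness readings):

```python
def trilaterate(anchors, distances):
    """Estimate an (x, y) position from three localized neighbors.

    anchors: three (x, y) positions of already-localized robots.
    distances: the measured distance to each anchor.
    Subtracting the first circle equation from the other two leaves a
    2x2 linear system, solved here by Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = distances
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    if det == 0:
        raise ValueError("anchors are collinear; position is ambiguous")
    return ((b1 * a22 - b2 * a12) / det,
            (a11 * b2 - a21 * b1) / det)
```

The collinearity check is why at least three non-collinear, already-localized robots are needed before a newcomer can fix its own coordinates.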
Image: Harvard University/Science
Collective self-assembly algorithm: 
Top left: A user-specified shape is given to robots in the form of a picture.
Top right: The algorithm relies on three primitive collective behaviors: edge-following, gradient formation, and localization.
Bottom: The self-assembly process by which a group of robots forms the user-defined shape.

Once the robots localize themselves, the procedure for forming an arbitrary shape is relatively straightforward: robots start to move around the perimeter of the swarm until they detect that they've entered the area where the shape will be formed. Then, each robot continues to move around the edge of whatever other robots are already in the shape until it detects that it's about to exit the shape, or until it bumps into the previous robot. And that's it: robots just keep moving, following these simple rules, until the shape is created.
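The stopping rule in that loop fits in a few lines. A grid-based sketch, where `shape` is the set of cells inside the target shape and `occupied` the cells already claimed by stopped robots (both sets, and the function itself, are my abstraction of the behavior described above, not the published algorithm verbatim):

```python
def should_stop(position, next_position, shape, occupied):
    """Decide whether an edge-following robot should stop where it is.

    A robot stops once it is inside the target shape and either
    (a) its next step would carry it out of the shape, or
    (b) it has bumped into the previously stopped robot.
    """
    inside = position in shape
    about_to_exit = next_position not in shape
    bumped = next_position in occupied
    return inside and (about_to_exit or bumped)
```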

With 1000 robots, you have to figure that they're not all going to do exactly what they're supposed to do all the time, which is why the researchers had to build as much flexibility as possible into the system. Lots of things can go wrong, from robots accidentally shoving other robots to robots breaking down for whatever reason. In these cases, more relatively simple rules allow the swarm as a whole to form the desired shape. A stalled robot might direct other robots to go around it, while a shoved robot might have to reposition itself. Individual robots don't necessarily have the sensors to determine when bad things like this happen, but through interacting with their neighbors and looking for patterns in the sensor data, the swarm as a whole is much more resilient.

Photo: Michael Rubenstein/Harvard University

Now that there are a thousand of these robots out there running around, what's next? We're fixated on these two sentences (the very first one and the very last one) from the Science paper:

"In nature, groups of thousands, millions, or trillions of individual elements can self-assemble into a wide variety of forms, purely through local interaction."

"This motivates new investigations into advanced collective algorithms capable of detecting malfunctioning robots and recovering from large-scale external damages, as well as new robot designs that, like army ants, can physically attach to each other to form stable self-assemblages."

Okay, so if I'm putting these two things together correctly, what the researchers are saying is that they're going to create a trillion Kilobots (trilobots?) that will form self-assembling structures that cannot be destroyed. Ever. By anything.

[ Science ]


H2prO - Portable Photocatalytic Electricity Generation and Water Purification Unit

Electricity and clean water, things that we easily have access to, are unfortunately luxuries for those in underdeveloped countries.

In fact, not only is there a lack of resources in third-world countries, but the whole world is also facing an energy crisis and water pollution. My objective is to find an eco-friendly and economical approach to solving both issues.

My device, H2PRO, relies on photocatalytic reactions to purify and sterilize wastewater and to generate electricity using the hydrogen produced. This sustainable process requires only titania and light. What's more, organic pollutants are not only decomposed but also enhance the reaction rate.


It is composed of two parts:
  • the upper unit, for photocatalytic water purification and hydrogen generation, which is connected to a fuel cell, and
  • the bottom unit, for further water filtration.
H2PRO's feasibility in removing organic pollutants proved excellent: almost 90% of the organic compound was decomposed after 2 hours. However, its electricity generation was quite unstable, even though the photocatalytic hydrogen yield was shown, theoretically and experimentally, to be satisfactory, and even better in the presence of organic pollutants. I will keep improving this device until stable electricity generation is achieved.
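For a sense of scale: if the degradation is assumed to follow first-order kinetics (a common model for photocatalytic decomposition, though the project does not state one), the 90%-in-2-hours figure implies a rate constant of roughly 1.15 per hour:

```python
import math

def first_order_rate_constant(fraction_removed, hours):
    """Back out k from C(t) = C0 * exp(-k * t), given a removal fraction."""
    # 90% removed in 2 h  =>  k = ln(1 / 0.1) / 2, about 1.15 per hour
    return math.log(1.0 / (1.0 - fraction_removed)) / hours
```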

In conclusion, I have successfully introduced a design for a portable electricity-generation and water-purification unit. Generally speaking, H2PRO has demonstrated its potential to feasibly provide clean water and sustainable energy to those in need. I will keep "practicalizing" the electricity-generation unit so that people can really benefit from my design one day.

Question / Proposal

Is it possible to create a portable device that purifies wastewater while generating electricity sustainably and affordably?

This question is raised not only because of the lack of clean water and electricity in third-world countries, but also because we are all facing an energy crisis and water pollution. My objective is to find an eco-friendly and economical approach to solving both issues.

Based on what I developed from investigating photocatalysis, I aim to create a device that puts the mechanisms into practice. In photocatalysis, not only is water purified and sterilized, but hydrogen is also produced through water-splitting, which can be used to generate electricity.
The entire sustainable process needs only titania and light; no additional power source is required. However, hydrogen production is generally low, since photoexcited electrons tend to fall back into the hole (i.e., photoinduced electron-hole recombination). Fortunately, this can be overcome by adding reductants, and some organic pollutants serve that purpose. Hence, I propose to combine the two mechanisms to enhance the yield and lower the cost of hydrogen generation, while efficient water purification is achieved at the same time.

Similar designs exist, but they require either an excessively complex use of technology or an external energy source, which means they are not sustainable, affordable, or manageable for users in underdeveloped countries. Nonetheless, I hypothesize that photocatalysis can be applied at a manageable scale that allows water purification and electricity production to be performed economically and sustainably in a portable device, as well as at a household level.
ORIGINAL: Google Science Fair