Welcome to Ciencia en Canoa, an initiative created by
Vanessa Restrepo Schild.

Thursday, July 31, 2014

RCA graduate develops an artificial leaf that’s capable of producing oxygen
Human beings have long looked up at the stars, wondering when mankind will finally be technologically advanced enough to colonize space. While staring heavenwards recently, we stumbled across this jaw-dropping development by Royal College of Art (RCA) graduate Julian Melchiorri: a synthetically developed leaf. The concept, called the Silk Leaf Project, is capable of absorbing water and carbon dioxide to produce oxygen, just the way a real plant does. Quoting Melchiorri: “NASA is researching different ways to produce oxygen for long-distance space journeys to let us live in space. This material could allow us to explore space much further than we can now.”

The Silk Leaf Project was developed as part of the Royal College of Art’s Innovation Design Engineering course in collaboration with Tufts University’s silk lab. Made from chloroplasts suspended in a matrix of silk protein, the leaf “has an amazing property of stabilizing molecules.” Like real plants, the leaves created by Melchiorri also require light and a small amount of water to produce oxygen. This is the first man-made biological leaf, and an idea like this could help us step beyond boundaries in technology and lifestyle. Melchiorri surely deserves a pat on the back for his brilliance!

ORIGINAL: Newlaunches

Wednesday, July 30, 2014

New molecule puts scientists a step closer to understanding hydrogen storage

Australian and Taiwanese scientists have discovered a new molecule which puts the science community one step closer to solving one of the barriers to development of cleaner, greener hydrogen fuel-cells as a viable power source for cars.

Scientists say that the newly discovered “28copper15hydride” puts us on a path to better understanding hydrogen – and potentially even how to get it into and out of a fuel system, stored in a manner that is stable and safe, overcoming Hindenburg-type risks.

“28copper15hydride” is certainly not a name that would be developed by a marketing guru, but while it would send many running for an encyclopaedia (or let’s face it, Wikipedia), it has some of the world’s most accomplished chemists intrigued.

Its discovery was recently featured on the cover of one of the world’s most prestigious chemistry journals, and details are being presented today by Australia’s Dr Alison Edwards at the 41st International Conference on Coordination Chemistry in Singapore, where 1,100 chemists have gathered.

The molecule was synthesised by a team led by Prof Chenwei Liu from the National Dong Hwa University in Taiwan, who developed a partial structure model.

The chemical structure determination was completed by the team at the Australian Nuclear Science and Technology Organisation (ANSTO) using KOALA, one of the world’s leading crystallography tools.

Most solid material is made of crystalline structures. The crystals are made up of regular arrangements of atoms stacked up like boxes in a tightly packed warehouse. The science of finding this arrangement, and structure of matter at the atomic level, is crystallography. ANSTO is Australia’s home of this science.
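The “boxes in a warehouse” picture can be made concrete: a crystal is just one atomic motif repeated on a regular grid, and crystallography works backwards from diffraction data to that grid. A minimal sketch of such a lattice in Python (numpy assumed; the lattice constant here is illustrative, not ANSTO data):

```python
import numpy as np

# A crystal: one atomic motif repeated on a regular 3D grid -- the
# "boxes in a tightly packed warehouse" of the text.
a = 3.6  # lattice constant (box edge length) in angstroms, illustrative

# All lattice points of a 3x3x3 block of a simple cubic crystal.
i, j, k = np.meshgrid(range(3), range(3), range(3), indexing="ij")
points = a * np.stack([i, j, k], axis=-1).reshape(-1, 3)

print(points.shape)  # 27 atoms, one (x, y, z) position each
```

Determining the arrangement means recovering `a` and the motif from how the crystal scatters radiation.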

ANSTO’s Dr Alison Edwards is a Chemical Crystallographer at the Bragg Institute (named after William Bragg and his Australian-born son Lawrence, who were pioneers in this field). She explains the very basic (elementary, if you will!) principles behind the discovery, and the discovery itself:

“Anyone with a textbook understanding of chemistry knows the term ‘hydride’ describes a compound which results when a hydrogen atom with a negative charge is combined with another element in the periodic table,” said Dr Edwards.

“This study revealed that mixing certain copper (Cu) compounds with a hydride of boron (borohydride, or BH4) created our newly discovered ‘Chinese Puzzle molecule’, with a new structure that has alternating layers of hydride and copper wrapped in an outer shell of protecting molecules.

“Using our leading KOALA instrument, we identified that this molecule actually contained no fewer than 15 hydrides in the core – almost double the eight we were expecting.

“This new molecule has an unprecedented metal hydride core; it is definitely different and much more stable than many previous hydride compounds. In fact, it is stable in air, which many others are not. So we see there is probably much more yet to learn about the properties, and potential, of hydride.”

The Chinese Puzzle Molecule – a twenty-eight copper, fifteen hydride core wrapped in dithiocarbamate
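The unwieldy name is literal stoichiometry: twenty-eight copper atoms around fifteen hydrides. As a quick back-of-the-envelope check (standard atomic weights; Python used purely as a calculator), the Cu28H15 core alone comes to roughly 1.8 kg/mol:

```python
# Standard atomic weights in g/mol, from the periodic table.
CU, H = 63.546, 1.008

# The Cu28H15 core of "28copper15hydride".
core_mass = 28 * CU + 15 * H
print(round(core_mass, 2))  # ~1794.41 g/mol
```

The copper dominates the mass entirely; the fifteen hydrides that make the molecule interesting contribute barely 1% of it, which is part of why they are so hard to locate without neutrons.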

The discovery puts us one step further along a path to developing distribution infrastructure - one of four obstacles to hydrogen fuel-cell technology as a viable power source for low-carbon motor vehicles, as cited by Professor Steven Chu, Nobel Laureate and former Secretary of Energy in the United States.

The four problems in using hydrogen as fuel can be broadly understood as:
  1. Efficiency, because the process of obtaining hydrogen - H2 - costs some of the actual energy content already stored in the source of the hydrogen;
  2. Transportation and a lack of adequate mechanism to store large volumes at high density;
  3. The fuel cell technology is not yet advanced enough; and
  4. The distribution infrastructure has not been established.
ANSTO’s KOALA is uniquely placed for developing a scientific understanding of hydrogen and the potential of hydrides, because its neutron source allows us to see the precise location of hydrogen in structures – something effectively invisible to X-rays.

“This improved understanding of one aspect of the nature of hydride provides a better fundamental understanding of hydrogen, which underpins potential technological developments – you cannot have a well-founded ‘hydrogen economy’ unless you understand hydrogen!” said Dr Edwards.

No one is claiming hydrogen-powered cars are imminent. Perhaps this puts us a step further down the road, but we don’t know how long the road is. What this research shows is hydrides may yet help us get hydrogen in and out of a fuel system, stored in a manner which is stable and safe – overcoming the Hindenburg-type risks.

As I said before, the implications from the research are actually broader and have impacts beyond car power sources.

“The same synthetic chemistry is being applied in the areas of gold and silver nanoparticle formation, which are currently believed to have wide-ranging potential applications in fields such as catalysis, medical diagnostics and therapeutics.”

Our result suggests there could be much more going on in gold and silver nanoclusters than is currently understood – or at the very least, there is more to be understood about the processes of nanoparticle formation. Through understanding the process, we have the prospect of controlling and even directing it.


Biohackers Are Growing Real Cheese In A Lab, No Cow Needed

If you're a vegan, cheese options are limited.

A team of Bay Area biohackers is trying to create a new option: real vegan cheese. That is, cheese derived from baker's yeast that has been modified to produce real milk proteins. Think of it as the cheese equivalent of lab-grown meat.

In order to get baker's yeast to produce milk proteins, the team scoured animal genomes to come up with milk-protein genetic sequences. Those sequences are then inserted into yeast, where they can produce milk protein.

Once the protein is purified, it needs to be mixed with a vegan milk-fat replacement, sugar (not lactose, so that the cheese will be edible by the lactose intolerant among us), and water to create vegan milk. Then the normal cheese-making process can commence.

The journey towards vegan cheese began a few years ago, when synthetic biologist Marc Juul started thinking about the genetic engineering possibilities. Now, Juul and a group of people from two Bay Area biohacker spaces, Counter Culture Labs and BioCurious, are trying to create a finished product in time for the International Genetically Engineered Machine competition--a global synthetic biology competition--in October. So far, they've raised over $16,000 on Indiegogo to do it.

The vegan cheese team does have a number of vegan and vegetarian members, as well as others passionate about the challenge and the prospect of having cheese that doesn't require the mistreatment of cows. "We're blessed in the Bay Area. There are lots of great cheeses produced north of San Francisco--small scale, organic, free-range, small cheese manufacturers. But that doesn’t hold for most cheese currently being made," says Patrik D'haeseleer, a computational biologist on the team.

The team wants to start with a cheddar or gouda to satisfy vegan cravings for hard cheese.

"There are lots of naturally occurring cheese proteins that have naturally occurring [positive] health effects. We can pick and choose variants we want to use," says D'haeseleer. He stresses that the end product is GMO free. While the yeast is genetically modified, the purified proteins secreted by the yeast are not. Rennet used in traditional cheese is produced in a similar manner, using GMO E. coli bacteria.

Research is still in the early stages. By October, the team hopes to have four of the casein (milk) proteins produced and verified, along with the enzyme that attaches phosphate groups to these proteins. Ideally, the team would also like to demonstrate that it can coagulate the ingredients into cheese.

"At that point, we might have a small amount of what we might call cheese, on the order of grams or milligrams. Then we can start talking about how to scale it up," says D'haeseleer. "When it gets into the art and science of cheesemaking, we would probably collaborate with a real cheesemaker at that point. That's a whole different skillset."

In theory, they can make vegan cheese from any mammal's DNA – including humans. If the team reaches its stretch goal of $20,000, it plans to create narwhal cheese, working with researchers at the University of California, Santa Cruz on genetic sequencing and analysis.

All of the research is going up on a public wiki, but some of the team members may eventually be interested in pursuing this full-time. "Ten years ago, this kind of science wouldn’t have been possible," says D'haeseleer. "For synthetic biology, it's gotten to the point where a team of biohackers like us can accomplish this."

ORIGINAL: FastCo Exist

Friday, July 25, 2014

Palm's Jeff Hawkins is building a brain-like AI. He told us why he thinks his life's work is right

Inside a big bet on future machine intelligence

Feature Jeff Hawkins has bet his reputation, fortune, and entire intellectual life on one idea: that he understands the brain well enough to create machines with an intelligence we recognize as our own.

If his bet is correct, the Palm Pilot inventor will father a new technology, one that becomes the crucible in which a general artificial intelligence is one day forged. If his bet is wrong, then Hawkins will have wasted his life. At 56 years old that might sting a little.

"I want to bring about intelligent machines, machine intelligence, accelerated greatly from where it was going to happen and I don't want to be consumed – I want to come out at the other end as a normal person with my sanity," Hawkins told The Register. "My mission, the mission of Numenta, is to be a catalyst for machine intelligence."

A catalyst, he says, staring intently at your correspondent, "is something which accelerates a reaction by a thousand or ten thousand or a million-fold, and doesn't get consumed in the process."

His goal is ambitious, to put it mildly.

Before we dig deep into Hawkins' idiosyncratic approach to artificial intelligence, it's worth outlining the state of current AI research, why his critics have a right to be skeptical of his grandiose claims, and how his approach is different to the one being touted by consumer web giants such as Google.

AI researcher Jeff Hawkins
The road to a successful, widely deployable framework for an artificial mind is littered with failed schemes, dead ends, and traps. No one has come to the end of it, yet. But while major firms like Google and Facebook, and small companies like Vicarious, are striding over well-worn paths, Hawkins believes he is taking a new approach that could take him and his colleagues at his company, Numenta, all the way.

For over a decade, Hawkins has poured his energy into amassing knowledge about the brain and into working out how to model it in software. Now, he believes he is on the cusp of a great period of invention that may yield some very powerful technology.

Some people believe in him, others doubt him, and some academics El Reg has spoken with are suspicious of his ideas.

One thing we have established is that the work to which Hawkins has dedicated his life has become an influential touchstone within the red-hot modern artificial intelligence industry. His 2004 book, On Intelligence, appears to have been read by and inspired many of the most prominent figures in AI, and the tech Numenta is creating may trounce other commercial efforts by much larger companies such as Google, Facebook, and Microsoft.

"I think Jeff is largely right in what he wrote in On Intelligence," explains Hawkins' former colleague Dileep George (now running his own AI startup, Vicarious, which recently received $40m in funding from Mark Zuckerberg, space pioneer Elon Musk, and actor-turned-VC Ashton Kutcher). "Hierarchical systems, associative memory, time and attention – I think all those ideas are correct."

One of Google's most prominent AI experts agrees: "Jeff Hawkins ... has served as inspiration to countless AI researchers, for which I give him a lot of credit," explains former Google brain king and current Stanford Professor Andrew Ng.

Some organizations have taken Hawkins' ideas and stealthily run with them, with schemes already underway at companies like IBM and federal organizations like DARPA to implement his ideas in silicon, paving the way for neuromorphic processors that process information in near–real time, develop representations of patterns, and make predictions. If successful, these chips will make Qualcomm's "neuromorphic" Zeroth processors look like toys.

He has also inspired software adaptations of his work, such as CEPT, which has built an intriguing natural language processing engine partly out of Hawkins' ideas.

How we think: time and hierarchy
Hawkins' idea is that to build systems that behave like the brain, you have to be able to 
  • take in a stream of changing information, 
  • recognize patterns in it without knowing anything about the input source, 
  • make predictions, and 
  • react accordingly. 
The only context you have for this analysis is an ability to observe how the stream of data changes over time.
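Those four requirements can be illustrated with a toy online learner, sketched below in Python. This is emphatically not Numenta's CLA, just a minimal stand-in that learns from a stream as it arrives, with no prior knowledge of the input source, and predicts what comes next:

```python
from collections import defaultdict

class StreamPredictor:
    """Toy online sequence learner: counts symbol transitions as a stream
    arrives and predicts the most likely next symbol. An illustration of
    learning from time-varying input only -- not Numenta's CLA."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.prev = None

    def observe(self, symbol):
        # Learn from the transition (prev -> symbol), then advance state.
        if self.prev is not None:
            self.counts[self.prev][symbol] += 1
        self.prev = symbol

    def predict(self):
        # Predict the most frequently seen successor of the current symbol.
        nxt = self.counts.get(self.prev)
        if not nxt:
            return None
        return max(nxt, key=nxt.get)

p = StreamPredictor()
for s in "ABCABCABC":
    p.observe(s)
print(p.predict())  # having repeatedly seen C -> A, it predicts "A"
```

The real CLA replaces these naive transition counts with sparse distributed representations and hierarchy, but the contract is the same: observe, predict, compare, adapt.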

Though this sounds similar to some of the data processing systems being worked on by researchers at Google, Microsoft, and Facebook, it has some subtle differences.

Part of it is heritage. Hawkins traces his ideas back to his own understanding of how our neocortex works, built from a synthesis of thousands of academic papers, chats with researchers, and his own work at two of his prior tech companies, Palm and Handspring. The inspiration for most other approaches, by contrast, is neural networks based on technology from the '80s, itself refined out of a 1940s paper [PDF], "A Logical Calculus of the Ideas Immanent in Nervous Activity".

"That may be the right thing to do, but it's not the way brains work and it's not the principles of intelligence and it's not going to lead to a system that can explore the world or systems that can have behavior," Hawkins tells us.

So far he has outlined the ideas for this approach in his influential On Intelligence, plus a white paper published in 2011, a set of open source algorithms called NuPIC based on his Hierarchical Temporal Memory approach, and hundreds of talks given at universities and at companies ranging from Google to small startups.

Six easy pieces and the one true algorithm
Hawkins' work has "popularized the hypothesis that much of intelligence might be due to one learning algorithm," explains Ng.

Part of why Hawkins' approach is so controversial is that rather than assembling a set of advanced software components for specific computing functions and lashing them together via ever more complex collections of software, Hawkins has dedicated his research to figuring out an implementation of a single, basic approach.
This approach stems from an observation that our brain doesn't appear to come preloaded with any specific instructions or routines, but rather is an architecture that is able to take in, process, and store an endless stream of information and develop higher-order understandings out of that.

The manifestation of Hawkins' approach is the Cortical Learning Algorithm, or CLA.

"People used to think the neocortex was divided into sensory regions and motor regions," he explains. "We know now that is not true – the whole neocortex is sensory and motor."

Ultimately, the CLA will be a single system that involves both sensory processing and motor control – brain functions that Hawkins believes must be fused together to create the possibility of consciousness. For now, most work has been done on the sensory layer, though he has recently made some breakthroughs on the motor integration as well.

To build his Cortical Learning Algorithm system, Hawkins says, he has developed six principles that define a cortical-like processor. These traits are
  • "on-line learning from streaming data", 
  • "hierarchy of memory regions", 
  • "sequence memory", 
  • "sparse distributed representations",
  •  "all regions are sensory and motor", and 
  • "attention".
These principles are based on his own study of the work being done by neuroscientists around the world.
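Of the six principles, "sparse distributed representations" is the easiest to make concrete: information is carried by a long binary vector in which only a few bits are active, and similarity is simply the overlap of active bits. A minimal sketch (the sizes are typical HTM-style choices, used here as an assumption, not a Numenta specification):

```python
import random

N, W = 2048, 40  # 2048 bits with ~2% active -- illustrative HTM-style sizes

def random_sdr(rng):
    """A sparse distributed representation: W active bits out of N."""
    return frozenset(rng.sample(range(N), W))

def overlap(a, b):
    """Similarity between two SDRs = number of shared active bits."""
    return len(a & b)

rng = random.Random(42)
a = random_sdr(rng)
b = random_sdr(rng)
print(overlap(a, a))  # an SDR overlaps fully with itself: 40
print(overlap(a, b))  # two unrelated random SDRs share almost no bits
```

The attraction of this encoding is robustness: because meaningful patterns share bits and random ones almost never do, a few flipped bits barely change what a representation means.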

Now, Hawkins says, Numenta is on the verge of a breakthrough that could see the small company birth a framework for building intelligent machines. And unlike the hysteria that greeted AI in the '70s and '80s as the defense industry pumped money into the field, this time may not be a false dawn.

"I am thrilled at the progress we're making," he told El Reg one sunny afternoon at Numenta's whiteboard-crammed offices in Redwood City, California. "It's accelerating. These things are compounding, and it feels like these things are all coming together very rapidly."

The approach Numenta has been developing is producing better and better results, he says, and the CLA is gaining broader capabilities. In the past months, Hawkins has gone through a period of fecund creativity, and has solved one of the main problems that have bedeviled his system (temporal pooling), he says. He sees 2014 as a critical year for the company.

He is confident that he has bet correctly – but it's been a hard road to get here.

That long, hard road
Hawkins' interest in the brain dates back to his childhood, as does his frustration with how it is studied.
Growing up, Hawkins spent time with his father in an old shipyard on the north shore of Long Island, inventing all manner of boats; his father was an inventor with the enthusiasm for creativity of a Dr. Seuss character. In high school, the young Hawkins developed an interest in biophysics and, as he recounts in his book On Intelligence, tried to find out more about the brain at a local library.

"My search for a satisfying brain book turned up empty. I came to realize that no one had any idea how the brain actually worked. There weren't even any bad or unproven theories; there simply were none," he wrote.
This realization sparked a lifelong passion to try to understand the grand, intricate system that makes people who they are, and to eventually model the brain and create machines built in the same manner.

Hawkins graduated from Cornell in 1979 with a Bachelor of Science in Electronic Engineering. After a stint at Intel, he applied to MIT to study artificial intelligence, but had his application rejected because he wanted to understand how brains work, rather than build artificial intelligence. After this he worked at laptop start-up GRiD Systems, but during this time "could not get my curiosity about the brain and intelligent machines out of my head," so he did a correspondence course in physiology and ultimately applied to and was accepted in the biophysics program at the University of California, Berkeley.

When Hawkins started at Berkeley in 1986, his ambition to study a theory of the brain collided with the university administration, which disagreed with his proposed course of study. Though Berkeley could not accommodate him, Hawkins spent almost two years ensconced in the school's many libraries, reading as much of the available neuroscience literature as possible.

This deep immersion in neuroscience became the lens through which Hawkins viewed the world, with his later business accomplishments – Palm, Handspring – all leading to valuable insights on how the brain works and why the brain behaves as it does.

The way Hawkins recounts his past makes it seem as if the creation of a billion-dollar business in Palm, and arguably the prototype of the modern smartphone in Handspring, was a footnote along his journey to understand the brain.

This makes more sense when viewed against what he did in 2002, when he founded the Redwood Neuroscience Institute (now part of the University of California at Berkeley and an epicenter of cutting-edge neuroscience research in its own right), and in 2005, when he founded Numenta with Palm/Handspring collaborator Donna Dubinsky and cofounder Dileep George.

These decades gave Hawkins the business acumen, money, and perspective needed to make a go at crafting his foundation for machine intelligence.

His media-savvy, confident approach appears to have stirred up some ill feeling among other academics who point out, correctly, that Hawkins hasn't published widely, nor has he invented many ideas on his own.
Numenta has also had troubles, partly due to Hawkins' idiosyncratic view on how the brain works.

In 2010, for example, Numenta cofounder Dileep George left to found his own company, Vicarious, to pick some of the more low-hanging fruit in the promising field of AI. From what we understand, this amicable separation stemmed from a difference of opinion between George and Hawkins, as George tended towards a more mathematical approach, and Hawkins to a more biological one.

Hawkins has also come in for a bit of a drubbing from the intelligentsia, with NYU psychology professor Gary Marcus dismissing Numenta's approach in a New Yorker article titled "Steamrolled by Big Data".

Other academics El Reg interviewed for this article did not want to be quoted, as they felt Hawkins' lack of peer-reviewed papers, combined with his entrepreneurial persona, reduced the credibility of his entire approach.

Hawkins brushes off these criticisms and believes they come down to a difference of opinion between him and the AI intelligentsia.

"These are complex biological systems that were not designed by mathematical principles [that are] very difficult to formalize completely," he told us.

"This reminds me a bit of the beginning of the computer era," he said. "If you go back to the 1930s and early '40s, when people first started thinking about computers, they were really interested in whether an algorithm would complete – they were looking for mathematical completeness, a mathematical proof that something like an algorithm would terminate. Today, when we build a computer, no one sits around saying, 'Let's look at the mathematical formalism of this computer.' It reminds me a little of that. We still have people saying 'You don't have enough math here!' There's some people that just don't like that."

Hawkins' confidence stems from the way Numenta has built its technology, which far from merely taking inspiration from the brain – as many other startups claim to do – is actively built as a digital implementation of everything Hawkins has learned about how the dense, napkin-sized sheet of cells that is our neocortex works.

"I know of no other cortical theories/models that incorporate any of the following: 
  • active dendrites, 
  • differences between proximal and distal dendrites, 
  • synapse growth and decay, 
  • potential synapses, 
  • dendrite growth, 
  • depolarization as a mode of prediction, 
  • mini-columns, 
  • multiple types of inhibition and their corresponding inhibitory neurons, 
  • etcetera. 
The new temporal pooling mechanism we are working on requires metabotropic receptors in the locations they are, and are not, found. Again, I don't know of any theories that have been reduced to practice that incorporate any, let alone all of these concepts," he wrote in a post to the discussion mailing list for NuPic, an open source implementation of Numenta's CLA, in February.

Deep learning is the new shallow learning
But for all the apparent rigor of Hawkins' approach, during the years he has worked on the technology there has been a fundamental change in the landscape of AI development: the rise of the consumer internet giants, and with them the appearance of cavernous stores of user data on which to train learning algorithms.

Google, for instance, was said in January of 2014 to be assembling the team required for the "Manhattan Project for AI", according to a source who spoke anonymously to online publication Re/code. But Hawkins thinks that for all its grand aims, Google's approach may be based on a flawed presumption.

The collective term for the approach pioneered by companies like Google, Microsoft, and Facebook is "Deep Learning", but Hawkins fears it may be another blind path.

"Deep learning could be the greatest thing in the world, but it's not a brain theory," he says.
Deep learning approaches, Hawkins says, encourage the industry to go about refining methods based on old technology, itself based on an oversimplified version of the neurons in a brain.

Because of the vast stores of user data available, the companies are all compelled to approach the quest of creating artificial intelligence through building machines that compute over certain types of data.

In many cases, much of the development at places like Google, Microsoft, and Facebook has revolved around vision – a dead end, according to Hawkins.

"Where the whole community got tripped up – and I'm talking fifty years tripped up – is vision," Hawkins explains. "They said, 'Your eyes are moving all the time, your head is moving, the world is moving – let us focus on a simpler problem: spatial inference in vision'. This turns out to be a very small subset of what vision is. Vision turns out to be an inference problem. What that did is they threw out the most important part of vision – you must learn first how to do time-based vision."

The acquisitions these companies have made speak to this apparent flaw.

Google, for instance, hired AI luminary and University of Toronto professor Geoff Hinton and his startup DNNresearch last year to have him apply his "Deep Belief Networks" approach to Google's AI efforts.
In a talk given at the University of Toronto last year, Hinton said he believed more advanced AI should be based on existing approaches, rather than a rethought understanding of the brain.

"The kind of neural inspiration I like is when making it more like the brain works better," Hinton said. "There's lots of people who say you ought to make it more like the brain – like Henry Markram [of the European Union's brain simulation project], for example. He says, 'Give me a billion dollars and I'll make something like the brain,' but he doesn't actually know how to make it work – he just knows how to make something more and more like the brain. That seems to me not the right approach. What we should do is stick with things that actually work and make them more like the brain, and notice when making them more like the brain is actually helpful. There's not much point in making things work worse."

Hawkins vehemently disagrees with this point, and believes that basing approaches on existing methods means Hinton and other AI researchers are not going to be able to imbue their systems with the generality needed for true machine intelligence.

Another influential Googler agrees.
"We have neuroscientists in our team so we can be biologically inspired but are not slavish to it," Google Fellow Jeff Dean (creator of MapReduce, the Google File System, and now a figure in Google's own "Brain Project" team, also known as its AI division) told us this year.

"I'm surprised by how few people believe they need to understand how the brain works to build intelligent machines," Hawkins says. "I'm disappointed by this."

Hinton's foundational technologies, for example, are Boltzmann machines - advanced "stochastic recurrent neural network" tools that try to mimic some of the characteristics of the brain, which sit at the heart of Hinton's "Deep Belief Networks" (2006).
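For readers who haven't met the machinery under discussion: a restricted Boltzmann machine is two layers of stochastic binary units joined by symmetric weights, typically trained by contrastive divergence. A minimal numpy sketch of one CD-1 update follows; the sizes are toy values and this is an illustration of the standard recipe, not Hinton's production systems:

```python
import numpy as np

rng = np.random.default_rng(0)

n_visible, n_hidden = 6, 3
W = rng.normal(0, 0.1, size=(n_visible, n_hidden))  # symmetric weights
b = np.zeros(n_visible)                             # visible biases
c = np.zeros(n_hidden)                              # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample(p):
    # Stochastic binary units: fire with probability p.
    return (rng.random(p.shape) < p).astype(float)

# One step of contrastive divergence (CD-1) on a single binary input.
v0 = np.array([1, 0, 1, 1, 0, 0], dtype=float)
ph0 = sigmoid(v0 @ W + c)       # hidden activation probabilities
h0 = sample(ph0)                # stochastic hidden sample
pv1 = sigmoid(h0 @ W.T + b)     # "reconstruction" of the input
ph1 = sigmoid(pv1 @ W + c)      # hidden probabilities for the reconstruction

lr = 0.1
W += lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))  # weight update
b += lr * (v0 - pv1)
c += lr * (ph0 - ph1)

print(W.shape)  # (6, 3)
```

Each unit here is a single sigmoid probability, which is exactly the simplification Hawkins objects to in the quote below it: real neurons have dendrites, synapse growth, and timing, none of which appear in this model.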

"The neurons in a restricted Boltzmann machine are not even close [to the brain] – it's not even an approximation," Hawkins explains.

Even Google is not sure about which way to bet on how to build a mind, as illustrated by its buy of UK company "DeepMind Technologies" earlier this year.

That company's founder, Demis Hassabis, has done detailed work on fundamental neuroscience, and has built technology out of this understanding. In 2010, it was reported that he mentioned both Hawkins' Hierarchical Temporal Memory and Hinton's Deep Belief Nets when giving a talk on viable general artificial intelligence approaches.

Facebook has gone down similar paths by hiring the influential artificial intelligence academic Yann LeCun to help it "predict what a user is going to do next," among other things.

Microsoft has developed significant capabilities as well, with systems like the Siri-beater "Cortana" and various endeavors by the company's research division, MSR.

Though the techniques these various researchers employ differ, they all depend on training a model over a large dataset, and then selectively retraining it as the information changes.

These AI efforts are built around dealing with problems backed up by large and relatively predictable datasets. This has yielded some incredible inventions, such as
  • reasonable natural language processing, 
  • image detection, and 
  • video tagging.
It has not and cannot, however, yield a framework for a general intelligence, as it doesn't have the necessary architecture for data 
  • apprehension, 
  • analysis, 
  • retention, and 
  • recognition 
that our own brains do, Hawkins claims.
Hawkins' focus on time is why he believes his approach will win – something that the consumer internet giants are slowly waking up to.

It's all about time
"I would say that Hawkins is focusing more on how things unfold over time, which I think is very important," Google's research director Peter Norvig told El Reg via email, "while most of the current deep learning work assumes a static representation, unchanging over time. I suspect that as we scale up the applications (i.e., from still images to video sequences, and from extracting noun-phrase entities in text to dealing with whole sentences denoting actions), that there will be more emphasis on the unfolding of dynamic processes over time."

Another former Googler concurs, with Andrew Ng telling us via email, "Hawkins' work places a huge emphasis on learning from sequences. While most deep learning researchers also think that learning from sequences is important, we just haven't figured out ways to do so that we're happy with yet."
Geoff Hinton echoes this praise. "He has great insights about the types of computation the brain must be doing," he tells us – but argues that Jeff Hawkins' actual algorithmic contributions have been "disappointing" so far.

An absolutely crucial ingredient to AI
Time "is one hundred per cent crucial" to the creation of true artificial intelligence, Hawkins tells us. "If you accept the fact that intelligent machines are going to work on the principles of the neocortex, it is the entire thing, basically. The only way."

"The brain does two things: 
  • it does inference, which is recognizing patterns, and 
  • it does behavior, which is generating patterns or generating motor behavior,"
Hawkins explains. "Ninety-nine percent of inference is time-based – language, audition, touch – it's all time-based. You can't understand touch without moving your hand. The order in which patterns occur is very important."

Numenta's approach relies on time. Its Cortical Learning Algorithm (white paper) amounts to an engine for
  • processing streams of information, 
  • classifying them, 
  • learning to spot differences, and 
  • using time-based patterns to make predictions about the future.
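The steps above can be made concrete with a minimal Python sketch of time-based stream learning. It is not Numenta's CLA: the `SequencePredictor` class and its transition-counting approach are hypothetical simplifications of mine, meant only to show the contract of feeding a stream, predicting what comes next, and flagging the unexpected.

```python
from collections import defaultdict

class SequencePredictor:
    """Toy stream learner: records which value tends to follow which,
    predicts the most likely successor, and flags surprises.
    (Illustrative only - far simpler than Numenta's CLA.)"""

    def __init__(self):
        self.transitions = defaultdict(lambda: defaultdict(int))
        self.prev = None

    def observe(self, value):
        """Feed one value from the stream; return True if it was unexpected."""
        surprise = False
        if self.prev is not None:
            seen = self.transitions[self.prev]
            # Unexpected if this context is known but this successor is new.
            surprise = bool(seen) and value not in seen
            seen[value] += 1
        self.prev = value
        return surprise

    def predict(self):
        """Most frequently observed successor of the current value, if any."""
        seen = self.transitions.get(self.prev)
        return max(seen, key=seen.get) if seen else None

stream = ["low", "high", "low", "high", "low", "high", "low", "low"]
p = SequencePredictor()
flags = [p.observe(v) for v in stream]  # only the final repeated "low" surprises
```

A real CLA builds sparse, hierarchical representations rather than literal transition counts, but the interface is the same: a stream goes in, predictions and anomaly flags come out.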
As mentioned above, there are several efforts underway at companies like IBM and federal research agencies like DARPA to implement Hawkins' systems in custom processors, and these schemes all recognize the importance of Hawkins' reliance on time.

"What I found intriguing about [his approach] – time is not an afterthought. In all of these [other] things, time has been an afterthought," one source currently working on implementing Hawkins' ideas tells us.

So far, Hawkins has used his system to 
  • predict diverse phenomena such as hourly energy use and stock trading volumes, and 
  • detect anomalies in data streams.
Numenta's commercial product, Grok, detects anomalies in computer servers running on Amazon's cloud service.
Hawkins described to us one way to understand the power of this type of pattern recognition. "Imagine you are listening to a musician," he suggested. "After hearing her play for several days, you learn the kind of music she plays, how talented she is, how much she improvises, and how many mistakes she makes. Your brain learns her style, and then has expectations about what she will play and what it will sound like. As you continue to listen to her play, you will detect if her style changes, if the type of music she plays changes, or if she starts making more errors. The same kind of patterns exist in machine-generated data, and Grok will detect changes."

Here again the wider AI community appears to be dovetailing with Hawkins' ideas, with one of Andrew Ng's former Stanford students, Honglak Lee, having published a paper called "A classification-based polyphonic piano transcription approach using learned feature representations" in 2011. However, the method of implementation is different.

Obscurity through biology
Part of the reason Hawkins' technology is not more widely known is that, for current uses, it is hard to demonstrate a vast lead over rival approaches. For all of Hawkins' belief in the tech, there is no convincing killer application that other approaches can't match. The point, Hawkins says, is that the CLA's internal structure removes stumbling blocks that lie in the future of other approaches.

Hawkins believes the CLA's implicit dependence on time means that eventually it will become the dominant approach.

"At the bottom of the [neocortex's] hierarchy are fast-changing patterns and they form sequences – some of them are predictable and some of them are not – and what the neocortex is doing is trying to understand the set of patterns here and give it a constant representation – a name for the sequence, if you will – and it forms that as the next level of the hierarchy so the next level up is more stable," Hawkins explains.

"Changing patterns lead to changing representations in the hierarchy that are more stable, and then it learns the changes in those patterns, and as you go up the hierarchy it forms more and more stable representations of the world and they also tend to be independent of your body position and your senses."

Illustration: A comparison between biological neurons and HTM cells
A comparison between Hawkins' Hierarchical Temporal Memory cells (right), 
a neural network neuron (center), and the brain's own neuron (left)

He believes his technology is more effective than the approaches taken by his rivals due to its use of sparse distributed representations as an input device to a storage system he terms "sequence memory".

Sequence memory refers to how information makes its way into the brain as a stream of information that comes in from both external stimuli and internal stimuli, such as signals from the broader body.

Sparse Distributed Representations (SDRs) are partially based on the work of mathematician Pentti Kanerva on "Sparse Distributed Memory" [PDF].

They refer to how the brain represents and stores information. They are designed to mimic the way our brain is believed to encode memories, which is through neuron firings across a very large area in response to inputs. To achieve this, SDRs are written, roughly, as a 2000-bit string of which perhaps two percent are active. This means that you don't need to read all active bits in an SDR to say that it is similar to another, because it merely needs to share a few of the activated bits to be considered similar, due to the sparsity.

Hawkins believes SDRs give input data inherent meaning through this representation approach.

"This means that if two vectors have 1s in the same position, they are semantically similar. Vectors can therefore be expressed in degrees of similarity rather than simply being identical or different. These large vectors can be stored accurately even using a subsampled index of, say, 10 of 2,000 bits. This makes SDR memory fault tolerant to gaps in data. SDRs also exhibit properties that reliably allow the neocortex to determine if a new input is unexpected," the company's commercial website for Grok says.
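The overlap arithmetic behind that claim is easy to demonstrate. In the sketch below, an SDR is modeled as the set of its active bit positions; the sizes follow the rough figures in the text (2,000 bits, about two percent active), and the helper names are mine, not Numenta's.

```python
import random

N_BITS = 2000      # width of each SDR
N_ACTIVE = 40      # roughly two percent of the bits are active

def random_sdr(seed):
    """Model an SDR as the set of indices of its active bits."""
    return set(random.Random(seed).sample(range(N_BITS), N_ACTIVE))

def overlap(a, b):
    """Similarity = number of active bits the two SDRs share."""
    return len(a & b)

a = random_sdr(1)
b = random_sdr(2)                # an unrelated pattern
a_damaged = set(sorted(a)[:-5])  # a with 5 of its active bits lost

print(overlap(a, a_damaged))     # 35
print(overlap(a, b))             # typically 0 or 1: two random 40-of-2000
                                 # patterns share well under one bit on average
```

This is why a subsampled index of the active bits is enough for a match, and why gaps in the data degrade similarity only gradually rather than breaking it outright.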

But what are the drawbacks?
So if Hawkins thinks he has the theory and is on the way to building the technology, and other companies are implementing it, then why are we even calling what he is doing a "bet"? The answer comes down to credibility.

Hawkins' idiosyncratic nature and decision to synthesize insights from two different fields – neuroscience and computer science – are his strengths, but also his drawbacks.

"No one knows how the cortex works, so there is no way to know if Jeff is on the right track or not," Dr. Terry Sejnowski, the laboratory head of the Computational Neurobiology Laboratory at the SALK Institute for Biological Studies, tells us. "To the extent that [Hawkins] incorporates new data into his models he may have a shot, and there will be a flood of data coming from the BRAIN Initiative that was announced by Obama last April."

Hawkins says that this response is typical of the academic community, and that there is enough data available to learn about the brain. You just have to look for it.

"We're not going to replicate the neocortex, we're not going to simulate the neocortex, we just need to understand how it works in sufficient detail so we can say 'A-ha!' and build things like it," Hawkins says. "There is an incredible amount of unassimilated data that exists. Fifty years of papers. Thousands of papers a year. It's unbelievable, and it's always the next set of papers that people think is going to do it. ... it's not true that you have to wait for that stuff."

The root of the problems Hawkins faces may be his approach, which stems more from biology than from mathematics. His old colleague and cofounder of Numenta, Dileep George, confirms this.

"I think Jeff is largely right in what he wrote in On Intelligence," George told us. "There are different approaches on how to bring those ideas. Jeff has an angle on it; we have a different angle on it; the rest of the community have another perspective on it."

These ideas are echoed by Google's Norvig. "Hawkins, at least in his general-public-facing-persona, seems to be more driven by duplicating what the brain does, while the deep learning researchers take some concepts from the brain, but then mostly are trying to optimize mathematical equations," he told us via email.
"I live in the middle," Hawkins explains. "Where I know the neuroscience details very very well, and I have a theoretical framework, and I bounce back and forth between these over and over again."

The future
What he is doing today, Hawkins reckons, "is maybe 5 per cent of how humans learn."
He believes that during the coming year he will begin work on the next major area of development for his technology: action.

For Hawkins' machines to gain independence – the ability, say, to not only recognize and classify patterns, but actively tune themselves to hunt for specific bits of information – the motor component needs to be integrated, he explains.

"What we've proven so far – I say built and tested and put into a product – is pure sensor. It's like an ear listening to sounds that doesn't have a chance to move," he tells us.

If you can add in the motor component, "an entire world opens up," he says.
"For example, I could have something like a web bot – an internet crawler. Today's web crawlers are really stupid, they're like wall-following rats. They just go up and down the length," he says.

"If I wanted to look and understand the web, I could have a virtual system that is basically moving through cyberspace thinking about 'What is the structure here? How do I model this?' And so that's an example of a behavioral system that has no physical presence. It basically says, 'OK, I'm looking at this data, now where do I go next to look? Oh, I'm going to follow this link and do that in an intelligent way'."

By creating this technology, Hawkins hopes to dramatically accelerate the speed with which generally applicable artificial intelligence is developed and integrated into our world.

It's taken a lot to get here, and the older Hawkins gets and the more rival companies spend, the bigger the stakes get. As of 2014, he is still betting his life on the fact that he is right and they are wrong. ®

ORIGINAL: The Register
By Jack Clark

Wednesday, July 23, 2014

Ángela Restrepo Moreno: "The mother of scientists"


Ángela Restrepo, winner in the Bio - Life and Environmental Sciences category.

As a girl, Ángela Restrepo Moreno would spend hours staring at the display case in her grandfather's pharmacy, behind which sat a yellow-and-black apparatus that drew her in like a magnet. It was a microscope much like the one Louis Pasteur used for his studies in the nineteenth century.

This led this courageous woman to become a scientist at a time when women had only two possible paths: becoming nuns or housewives. When Ángela finished secondary school in 1951, there was nowhere to study microbiology in Medellín. Her outlook brightened when a Bacteriology course opened. She then earned a master's degree at Tulane University in the United States. Years later she completed her doctorate, and on her return she began diagnosing diseases caused by fungi and microbes in a small room that the Hospital Pablo Tobón Uribe lent to her and other researchers. Thanks to the work of these scientists and to donations, it is today a four-story building that everyone knows as the Corporación para Investigaciones Biológicas. Ángela Restrepo is now a world authority on the fungus Paracoccidioides brasiliensis.

For her work, Ángela Restrepo has received a long list of honors. Twenty years ago she joined the Misión de Sabios which, together with the educator Carlos E. Vasco, the writer Gabriel García Márquez, and the scientist Rodolfo Llinás, proposed an educational roadmap to catapult the country toward development.

Dr. Restrepo is known for having planted the seed of research in some twenty physicians and microbiologists who now hold doctorates and who regard her as their mother, in science as in life. "I never worried about being a single woman, because of the multiplication of scientists I have watched grow at my side," she says.

July 5, 2014

Bamboo Engineering

MIT scientists, along with architects and wood processors from England and Canada, are looking for ways to turn bamboo into a construction material more akin to wood composites, like plywood.

Such bamboo products are currently being developed by several companies; the MIT project intends to gain a better understanding of these materials, so that bamboo can be more effectively used.

Video: Melanie Gonick, MIT News

Additional footage courtesy of MyBoringChannel and Frank Ross 

Music sampled from: Radio Silence 

Saturday, July 19, 2014

Meet the electric life forms that live on pure energy

Photo credit: Shewanella oneidensis / U.S. Department of Energy

Unlike any other life on Earth, these extraordinary bacteria use energy in its purest form – they eat and breathe electrons – and they are everywhere

STICK an electrode in the ground, pump electrons down it, and they will come: living cells that eat electricity. We have known bacteria to survive on a variety of energy sources, but none as weird as this. Think of Frankenstein's monster, brought to life by galvanic energy, except these "electric bacteria" are very real and are popping up all over the place.

Unlike any other living thing on Earth, electric bacteria use energy in its purest form – naked electricity in the shape of electrons harvested from rocks and metals. We already knew about two types, Shewanella and Geobacter. Now, biologists are showing that they can entice many more out of rocks and marine mud by tempting them with a bit of electrical juice. Experiments growing bacteria on battery electrodes demonstrate that these novel, mind-boggling forms of life are essentially eating and excreting electricity.

That should not come as a complete surprise, says Kenneth Nealson at the University of Southern California, Los Angeles. We know that life, when you boil it right down, is a flow of electrons: "You eat sugars that have excess electrons, and you breathe in oxygen that willingly takes them." Our cells break down the sugars, and the electrons flow through them in a complex set of chemical reactions until they are passed on to electron-hungry oxygen.

In the process, cells make ATP, a molecule that acts as an energy storage unit for almost all living things. Moving electrons around is a key part of making ATP. "Life's very clever," says Nealson. "It figures out how to suck electrons out of everything we eat and keep them under control." In most living things, the body packages the electrons up into molecules that can safely carry them through the cells until they are dumped on to oxygen.

"That's the way we make all our energy and it's the same for every organism on this planet," says Nealson. "Electrons must flow in order for energy to be gained. This is why when someone suffocates another person they are dead within minutes. You have stopped the supply of oxygen, so the electrons can no longer flow."

The discovery of electric bacteria shows that some very basic forms of life can do away with sugary middlemen and handle the energy in its purest form – electrons, harvested from the surface of minerals. "It is truly foreign, you know," says Nealson. "In a sense, alien."

Nealson's team is one of a handful that is now growing these bacteria directly on electrodes, keeping them alive with electricity and nothing else – neither sugars nor any other kind of nutrient. The highly dangerous equivalent in humans, he says, would be for us to power up by shoving our fingers in a DC electrical socket.

To grow these bacteria, the team collects sediment from the seabed, brings it back to the lab, and inserts electrodes into it.

First they measure the natural voltage across the sediment, before applying a slightly different one. A slightly higher voltage offers an excess of electrons; a slightly lower voltage means the electrode will readily accept electrons from anything willing to pass them off. Bugs in the sediments can either "eat" electrons from the higher voltage, or "breathe" electrons on to the lower-voltage electrode, generating a current. That current is picked up by the researchers as a signal of the type of life they have captured.

"Basically, the idea is to take sediment, stick electrodes inside and then ask 'OK, who likes this?'," says Nealson.
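The logic of the experiment described above can be captured in a few lines. This is a hypothetical helper of mine (the function name and millivolt numbers are illustrative, not the Nealson lab's protocol) that classifies what an applied electrode potential offers the bugs in the sediment:

```python
def electrode_mode(applied_mv, natural_mv, tolerance_mv=10.0):
    """Classify an electrode relative to the sediment's natural voltage:
    above it, the electrode offers excess electrons ('food');
    below it, the electrode readily accepts electrons ('oxygen')."""
    delta = applied_mv - natural_mv
    if delta > tolerance_mv:
        return "source: bugs can eat electrons from the electrode"
    if delta < -tolerance_mv:
        return "sink: bugs can breathe electrons onto the electrode"
    return "neutral: near the natural potential, no push either way"

# Example readings (illustrative numbers only):
print(electrode_mode(-150.0, -200.0))  # slightly higher voltage -> source
print(electrode_mode(-260.0, -200.0))  # slightly lower voltage -> sink
```

In the real setup, the signal runs the other way too: the current the captured organisms draw or deliver is what tells the researchers which kind of life they have caught.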

Shocking breath

At the Goldschmidt geoscience conference in Sacramento, California, last month, Shiue-lin Li of Nealson's lab presented results of experiments growing electricity breathers in sediment collected from Santa Catalina harbour in California. Yamini Jangir, also from the University of Southern California, presented separate experiments which grew electricity breathers collected from a well in Death Valley in the Mojave Desert in California.

Over at the University of Minnesota in St Paul, Daniel Bond and his colleagues have published experiments showing that they could grow a type of bacteria that harvested electrons from an iron electrode (mBio). That research, says Jangir's supervisor Moh El-Naggar, may be the most convincing example we have so far of electricity eaters grown on a supply of electrons with no added food.

But Nealson says there is much more to come. His PhD student Annette Rowe has identified up to eight different kinds of bacteria that consume electricity. Those results are being submitted for publication.

Nealson is particularly excited that Rowe has found so many types of electric bacteria, all very different to one another, and none of them anything like Shewanella or Geobacter. "This is huge. What it means is that there's a whole part of the microbial world that we don't know about."

Discovering this hidden biosphere is precisely why Jangir and El-Naggar want to cultivate electric bacteria. "We're using electrodes to mimic their interactions," says El-Naggar. "Culturing the 'unculturables', if you will." The researchers plan to install a battery inside a gold mine in South Dakota to see what they can find living down there.

NASA is also interested in things that live deep underground because such organisms often survive on very little energy and they may suggest modes of life in other parts of the solar system.

Electric bacteria could have practical uses here on Earth, however, such as creating biomachines that do useful things like clean up sewage or contaminated groundwater while drawing their own power from their surroundings. Nealson calls them self-powered useful devices, or SPUDs.

Practicality aside, another exciting prospect is to use electric bacteria to probe fundamental questions about life, such as what is the bare minimum of energy needed to maintain life.

For that we need the next stage of experiments, says Yuri Gorby, a microbiologist at the Rensselaer Polytechnic Institute in Troy, New York: bacteria should be grown not on a single electrode but between two. These bacteria would effectively eat electrons from one electrode, use them as a source of energy, and discard them on to the other electrode.

Gorby believes bacterial cells that both eat and breathe electrons will soon be discovered. "An electric bacterium grown between two electrodes could maintain itself virtually forever," says Gorby. "If nothing is going to eat it or destroy it then, theoretically, we should be able to maintain that organism indefinitely."

It may also be possible to vary the voltage applied to the electrodes, putting the energetic squeeze on cells to the point at which they are just doing the absolute minimum to stay alive. In this state, the cells may not be able to reproduce or grow, but they would still be able to run repairs on cell machinery. "For them, the work that energy does would be maintaining life – maintaining viability," says Gorby.

How much juice do you need to keep a living electric bacterium going? Answer that question, and you've answered one of the most fundamental existential questions there is.

This article appeared in print under the headline "The electricity eaters"

Leader: "Spark of life revisited thanks to electric bacteria"

Wire in the mud

Electric bacteria come in all shapes and sizes. A few years ago, biologists discovered that some produce hair-like filaments that act as wires, ferrying electrons back and forth between the cells and their wider environment. They dubbed them microbial nanowires.

Lars Peter Nielsen and his colleagues at Aarhus University in Denmark have found that tens of thousands of electric bacteria can join together to form daisy chains that carry electrons over several centimetres – a huge distance for a bacterium only 3 or 4 micrometres long. It means that bacteria living in, say, seabed mud where no oxygen penetrates, can access oxygen dissolved in the seawater simply by holding hands with their friends.

Such bacteria are showing up everywhere we look, says Nielsen. One way to find out if you're in the presence of these electron munchers is to put clumps of dirt in a shallow dish full of water, and gently swirl it. The dirt should fall apart. If it doesn't, it's likely that cables made of bacteria are holding it together.

Nielsen can spot the glimmer of the cables when he pulls soil apart and holds it up to sunlight (see video). 

Flexible biocables

It's more than just a bit of fun. Early work shows that such cables conduct electricity about as well as the wires that connect your toaster to the mains. That could open up interesting research avenues involving flexible, lab-grown biocables.

ORIGINAL: New Scientist
by Catherine Brahic
16 July 2014

Wednesday, July 16, 2014

"Apis Mellifera: Honey Bee" a high-speed short

Last week I wanted to film something in high-speed (I shoot something every week to keep it fresh). My Bullfrog film had done well on the internet and I wanted to step up and challenge myself. I have wanted to film bees for quite a while, and luckily for me there happened to be an apiary in my town. Allen Lindahl, owner of Hillside Bee's, stepped up and allowed me to film his hives. It was 92 degrees out (33 °C) and the sun was beating down, but I was told sunny days are when the bees are most active. Without a bee suit, I was ready to shoot.

I was able to get pretty close to one of the hives (about one and a half feet), which was perfect for using the Canon 100mm Macro IS. I primarily filmed with the Canon 30-105mm Cinema zoom lens wide open. I also used a 300mm Tamron and a Nikon 50mm. I had my trusty Sound Devices Pix 240i as a field monitor and for recording ProRes via the HD-SDI out of the Photron BC2 HD/2K.

It was very hard to track the bees as they fly very fast and were getting a little bothered by how close I was to the hives. I was only stung three times, which is pretty remarkable given my proximity and my lens poking almost into the entrance of the hive. I shot for approx 2.5 hours each day. It was so hot I got a pretty bad sunburn and the camera was hot enough to cook a fat porterhouse. There were a few intimidating moments when bees started landing on my arms, face, in my ear and on my eye. I just stayed still and they went on their way, with the exception of the three stings (1 on the arm, 1 on the neck and 1 under my ear). Bees are actually quite docile and would prefer not to sting. They just want to make honey.

Shot/Dir/Edit by: Michael Sutton @MNS1974

Equipment used:
  • Camera: Photron Fastcam BC2 HD/2K high-speed S35 camera system w/ custom trigger & batteries (1000-6800fps) 2K, HD (1080p & 720p) and SD
  • Lenses: Canon 30-105mm Cine zoom, Canon 100mm Macro, Nikon 50mm, 300mm Tamron SP
  • Recorder: Sound Devices Pix 240i w/ Sandisk CF cards
  • Support: Kessler Crane Carbon Fiber Stealth, Manfrotto 516 head w/546GBK tripod
Music Licensed via:

Special thanks to:
  • Mike Cohen
  • Allen Lindahl of Hillside Bee's
  • Heather Sutton
  • Eric Kessler and Chris Beller of Kessler Crane
  • Contact: Michael Sutton
  • website: 
  • email: mike at frozenprosperity dot com
  • phone: listed on website
  • twitter: @MNS1974 &@frozenpros

ORIGINAL: "Apis Mellifera: Honey Bee" a high-speed short
from Michael N Sutton / @MNS1974 on Vimeo.

Monday, July 14, 2014

All Children Are Born Geniuses, but Are Crushed by Society and Education

By Francisco Lira

In this interview, theoretical physicist Michio Kaku tells the story of a heartbreaking question his daughter asked him. According to Michio, all children are born geniuses until society crushes the scientific spirit, and he offers a simple hypothesis for why children do not like science. We all know the magic was lost at some point in our childhood!

Saturday, July 12, 2014

How D-Wave Built Quantum Computing Hardware for the Next Generation

Photo: D-Wave Systems

One second is here and gone before most of us can think about it. But a delay of one second can seem like an eternity in a quantum computer capable of running calculations in millionths of a second. That's why engineers at D-Wave Systems worked hard to eliminate the one-second computing delay that existed in the D-Wave One—the first-generation version of what the company describes as the world's first commercial quantum computer.

Such lessons learned from operating D-Wave One helped shape the hardware design of D-Wave Two, a second-generation machine that has already been leased by customers such as Google, NASA, and Lockheed Martin. Such machines have not yet proven that they can definitively outperform classical computers in a way that would support D-Wave's particular approach to building quantum computers. But the hardware design philosophy behind D-Wave's quantum computing architecture points to how researchers could build increasingly powerful quantum computers in the future.

"We have room for increasing the complexity of the D-Wave chip," says Jeremy Hilton, vice president of processor development at D-Wave Systems. "If we can fix the number of control lines per processor regardless of size, we can call it truly scalable quantum computing technology."

D-Wave recently explained the hardware design choices it made in going from D-Wave One to D-Wave Two in the June 2014 issue of the journal IEEE Transactions on Applied Superconductivity. Such details illustrate the engineering challenges that researchers still face in building a practical quantum computer capable of surpassing classical computers. (See IEEE Spectrum's overview of the D-Wave machines' performance from the December 2013 issue.)
Photo: D-Wave Systems

Quantum computing holds the promise of speedily solving tough problems that ordinary computers would take practically forever to crack. Unlike classical computing that represents information as bits of either a 1 or 0, quantum computers take advantage of quantum bits (qubits) that can exist as both a 1 and 0 at the same time, enabling them to perform many simultaneous calculations.

Classical computer hardware has relied upon silicon transistors that can switch between "on" and "off" to represent the 1 or 0 in digital information. By comparison, D-Wave's quantum computing hardware relies on metal loops of niobium that have tiny electrical currents running through them. A current running counterclockwise through the loop creates a tiny magnetic field pointing up, whereas a clockwise current leads to a magnetic field pointing down. Those two magnetic field states represent the equivalent of 1 or 0.

The niobium loops become superconductors when chilled to frigid temperatures of 20 millikelvin (-273 degrees C). At such low temperatures, the currents and magnetic fields can enter the strange quantum state known as "superposition" that allows them to represent both 1 and 0 states simultaneously. That allows D-Wave to use these "superconducting qubits" as the building blocks for making a quantum computing machine. Each loop also contains a number of Josephson junctions—two layers of superconductor separated by a thin insulating layer—that act as a framework of switches for routing magnetic pulses to the correct locations.

But a bunch of superconducting qubits and their connecting couplers – separate superconducting loops that allow qubits to exchange information – won't do any computing all by themselves. D-Wave initially thought it would rely on analog control lines that could apply a magnetic field to the superconducting qubits and control their quantum states in that manner. However, the company realized early in development that it would need at least six or seven control lines per qubit to make the computer programmable. The dream of eventually building more powerful machines with thousands of qubits would become an "impossible engineering challenge" with such design requirements, Hilton says.

The solution came in the form of digital-to-analog flux converters (DACs) – each about the size of a human red blood cell, at 10 micrometers in width – that act as control devices and sit directly on the quantum computer chip. Such devices can replace control lines by acting as a form of programmable magnetic memory that produces a static magnetic field to affect nearby qubits. D-Wave can reprogram the DACs digitally to change the "bias" of their magnetic fields, which in turn affects the quantum computing operations.

Most researchers have focused on building quantum computers using the traditional logic-gate model of computing. But D-Wave has focused on a more specialized approach known as "quantum annealing" —a method of tackling optimization problems. Solving optimization problems means finding the lowest "valley" that represents the best solution in a problem "landscape" with peaks and valleys. In practical terms, D-Wave starts a group of qubits in their lowest energy state and then gradually turns on interactions between the qubits, which encodes a quantum algorithm. When the qubits settle back down in their new lowest-energy state, D-Wave can read out the qubits to get the results.
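Quantum annealing itself needs D-Wave's hardware, but the optimization picture it targets (settling into the lowest "valley" of a problem landscape) can be illustrated with classical simulated annealing. The sketch below is that classical analogue only, not a simulation of qubits; the landscape and parameters are made up for the example.

```python
import math
import random

def simulated_annealing(energy, start, neighbor, steps=20000, t0=2.0):
    """Classical analogy to annealing: wander the landscape, accepting
    uphill moves less and less often as the 'temperature' cools, and
    settle into a low-energy valley. Tracks the best point seen."""
    rng = random.Random(0)
    x, e = start, energy(start)
    best_x, best_e = x, e
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9  # linear cooling schedule
        cand = neighbor(x, rng)
        ce = energy(cand)
        # Always accept downhill moves; accept uphill ones with a
        # probability that shrinks as the temperature drops.
        if ce < e or rng.random() < math.exp((e - ce) / t):
            x, e = cand, ce
            if e < best_e:
                best_x, best_e = x, e
    return best_x, best_e

# A bumpy 1-D landscape whose deepest valley is at x = 0 (energy -1),
# with shallower local valleys the walker must escape.
energy = lambda x: 0.1 * x * x - math.cos(3 * x)
result_x, result_e = simulated_annealing(
    energy, start=4.0, neighbor=lambda x, r: x + r.uniform(-0.5, 0.5))
```

D-Wave's machine performs the analogous settling with superconducting qubits and quantum effects rather than thermal hops, which is what makes it "quantum" annealing.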

Both the D-Wave One (128 qubits) and D-Wave Two (512 qubits) processors have DACs. But the circuitry setup of D-Wave One created some problems between the programming DAC phase and the quantum annealing operations phase. Specifically, the D-Wave One programming phase temporarily raised the temperature to as much as 500 millikelvin, which only dropped back down to the 20 millikelvin temperature necessary for quantum annealing after one second. That's a significant delay for a machine that can perform quantum annealing in just 20 microseconds (20 millionths of a second).

By simplifying the hardware architecture and adding some more control lines, D-Wave managed to largely eliminate the temperature rise. That in turn reduced the post-programming delay to about 10 milliseconds (10 thousandths of a second)— a "factor of 100 improvement achieved within one processor generation," Hilton says. D-Wave also managed to reduce the physical size of the DAC "footprint" by about 50 percent in D-Wave Two.

Building ever-larger arrays of qubits continues to challenge D-Wave's engineers. They must always be aware of how their hardware design—packed with many classical computing components—can affect the fragile quantum states and lead to errors or noise that overwhelms the quantum annealing operations.

"We were nervous about going down this path," Hilton says. "This architecture requires the qubits and the quantum devices to be intermingled with all these big classical objects. The threat you worry about is noise and impact of all this stuff hanging around the qubits. Traditional experiments in quantum computing have qubits in almost perfect isolation. But if you want quantum computing to be scalable, it will have to be immersed in a sea of computing complexity."

Still, D-Wave's current hardware architecture, code-named "Chimera," should be capable of supporting quantum computing machines of up to 8,000 qubits, Hilton says. The company is also working on building a larger processor containing 1,000 qubits.

"The architecture isn’t necessarily going to stay the same, because we're constantly learning about performance and other factors," Hilton says. "But each time we implement a generation, we try to give it some legs so we know it’s extendable."

By Jeremy Hsu
11 Jul 2014