Welcome to Ciencia en Canoa, an initiative created by
Vanessa Restrepo Schild.

Wednesday, July 23, 2014

Ángela Restrepo Moreno: "The Mother of Scientists"


Ángela Restrepo, winner in the Bio Sciences – Life and Environment category.

As a girl, Ángela Restrepo Moreno would spend hours gazing at the display case in her grandfather's pharmacy, behind which sat a yellow-and-black instrument that drew her in like a magnet. It was a microscope much like the one Louis Pasteur used for his studies in the 19th century.

That fascination led this courageous woman to become a scientist at a time when women had only two possible paths: becoming nuns or housewives. When Ángela finished secondary school in 1951, there was nowhere to study microbiology in Medellín. Her prospects brightened when a Bacteriology program opened. She then earned a master's degree at Tulane University in the United States. Years later she completed her doctorate, and on her return she began diagnosing diseases caused by fungi and microbes in a small room that the Hospital Pablo Tobón Uribe lent to her and other researchers. Thanks to the work of those scientists and to donations, it is today a four-story building that everyone knows as the Corporación para Investigaciones Biológicas. Ángela Restrepo is now a world authority on the fungus Paracoccidioides brasiliensis.

For her work, Ángela Restrepo has received a long list of honors. Twenty years ago she joined the Misión de Sabios, which, together with the educator Carlos E. Vasco, the writer Gabriel García Márquez, and the scientist Rodolfo Llinás, proposed an educational roadmap to catapult the country toward development.

Dr. Restrepo is known for having planted the seed of research in some twenty physicians and microbiologists who now hold doctorates and who regard her as their mother, in science as in life. "I never felt distressed about being a single woman, because of the multiplication of scientists I have watched grow at my side," she says.

July 5, 2014

Bamboo Engineering

MIT scientists, along with architects and wood processors from England and Canada, are looking for ways to turn bamboo into a construction material more akin to wood composites, like plywood.

Such bamboo products are currently being developed by several companies; the MIT project intends to gain a better understanding of these materials, so that bamboo can be more effectively used.

Video: Melanie Gonick, MIT News

Additional footage courtesy of MyBoringChannel and Frank Ross 

Music sampled from: Radio Silence 

Saturday, July 19, 2014

Meet the electric life forms that live on pure energy

Photo credit: Shewanella oneidensis / U.S. Department of Energy

Unlike any other life on Earth, these extraordinary bacteria use energy in its purest form – they eat and breathe electrons – and they are everywhere

STICK an electrode in the ground, pump electrons down it, and they will come: living cells that eat electricity. We have known bacteria to survive on a variety of energy sources, but none as weird as this. Think of Frankenstein's monster, brought to life by galvanic energy, except these "electric bacteria" are very real and are popping up all over the place.

Unlike any other living thing on Earth, electric bacteria use energy in its purest form – naked electricity in the shape of electrons harvested from rocks and metals. We already knew about two types, Shewanella and Geobacter. Now, biologists are showing that they can entice many more out of rocks and marine mud by tempting them with a bit of electrical juice. Experiments growing bacteria on battery electrodes demonstrate that these novel, mind-boggling forms of life are essentially eating and excreting electricity.

That should not come as a complete surprise, says Kenneth Nealson at the University of Southern California, Los Angeles. We know that life, when you boil it right down, is a flow of electrons: "You eat sugars that have excess electrons, and you breathe in oxygen that willingly takes them." Our cells break down the sugars, and the electrons flow through them in a complex set of chemical reactions until they are passed on to electron-hungry oxygen.

In the process, cells make ATP, a molecule that acts as an energy storage unit for almost all living things. Moving electrons around is a key part of making ATP. "Life's very clever," says Nealson. "It figures out how to suck electrons out of everything we eat and keep them under control." In most living things, the body packages the electrons up into molecules that can safely carry them through the cells until they are dumped on to oxygen.

"That's the way we make all our energy and it's the same for every organism on this planet," says Nealson. "Electrons must flow in order for energy to be gained. This is why when someone suffocates another person they are dead within minutes. You have stopped the supply of oxygen, so the electrons can no longer flow."

The discovery of electric bacteria shows that some very basic forms of life can do away with sugary middlemen and handle the energy in its purest form – electrons, harvested from the surface of minerals. "It is truly foreign, you know," says Nealson. "In a sense, alien."

Nealson's team is one of a handful that is now growing these bacteria directly on electrodes, keeping them alive with electricity and nothing else – neither sugars nor any other kind of nutrient. The highly dangerous equivalent in humans, he says, would be for us to power up by shoving our fingers in a DC electrical socket.

To grow these bacteria, the team collects sediment from the seabed, brings it back to the lab, and inserts electrodes into it.

First they measure the natural voltage across the sediment, before applying a slightly different one. A slightly higher voltage offers an excess of electrons; a slightly lower voltage means the electrode will readily accept electrons from anything willing to pass them off. Bugs in the sediments can either "eat" electrons from the higher voltage, or "breathe" electrons on to the lower-voltage electrode, generating a current. That current is picked up by the researchers as a signal of the type of life they have captured.
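The eat-or-breathe readout just described can be sketched as a toy model: compare the applied voltage with the sediment's natural voltage and interpret the sign of the resulting current. The function name and all voltage values below are invented for illustration, not measurements from the experiments.

```python
# Toy model of the electrode experiment: the offset between applied and
# natural voltage decides which kind of electron traffic the electrode sees.

def classify_bugs(natural_mv: float, applied_mv: float) -> str:
    """Guess the kind of microbial electron traffic at an electrode."""
    delta = applied_mv - natural_mv
    if delta > 0:
        # Electrode offers excess electrons: electron "eaters" can feed on it.
        return "electron eaters (electrons flow from electrode into cells)"
    elif delta < 0:
        # Electrode readily accepts electrons: "breathers" dump them onto it.
        return "electron breathers (cells generate a current onto electrode)"
    return "no net current at the natural voltage"

print(classify_bugs(natural_mv=-200.0, applied_mv=-100.0))  # eaters
print(classify_bugs(natural_mv=-200.0, applied_mv=-300.0))  # breathers
```

The measured current is the experimental signal; its direction tells the researchers which lifestyle they have captured.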

"Basically, the idea is to take sediment, stick electrodes inside and then ask 'OK, who likes this?'," says Nealson.

Shocking breath

At the Goldschmidt geoscience conference in Sacramento, California, last month, Shiue-lin Li of Nealson's lab presented results of experiments growing electricity breathers in sediment collected from Santa Catalina harbour in California. Yamini Jangir, also from the University of Southern California, presented separate experiments which grew electricity breathers collected from a well in Death Valley in the Mojave Desert in California.

Over at the University of Minnesota in St Paul, Daniel Bond and his colleagues have published experiments in the journal mBio showing that they could grow a type of bacteria that harvested electrons from an iron electrode. That research, says Jangir's supervisor Moh El-Naggar, may be the most convincing example we have so far of electricity eaters grown on a supply of electrons with no added food.

But Nealson says there is much more to come. His PhD student Annette Rowe has identified up to eight different kinds of bacteria that consume electricity. Those results are being submitted for publication.

Nealson is particularly excited that Rowe has found so many types of electric bacteria, all very different to one another, and none of them anything like Shewanella or Geobacter. "This is huge. What it means is that there's a whole part of the microbial world that we don't know about."

Discovering this hidden biosphere is precisely why Jangir and El-Naggar want to cultivate electric bacteria. "We're using electrodes to mimic their interactions," says El-Naggar. "Culturing the 'unculturables', if you will." The researchers plan to install a battery inside a gold mine in South Dakota to see what they can find living down there.

NASA is also interested in things that live deep underground because such organisms often survive on very little energy and they may suggest modes of life in other parts of the solar system.

Electric bacteria could have practical uses here on Earth, however, such as creating biomachines that do useful things like clean up sewage or contaminated groundwater while drawing their own power from their surroundings. Nealson calls them self-powered useful devices, or SPUDs.

Practicality aside, another exciting prospect is to use electric bacteria to probe fundamental questions about life, such as what is the bare minimum of energy needed to maintain life.

For that we need the next stage of experiments, says Yuri Gorby, a microbiologist at the Rensselaer Polytechnic Institute in Troy, New York: bacteria should be grown not on a single electrode but between two. These bacteria would effectively eat electrons from one electrode, use them as a source of energy, and discard them on to the other electrode.

Gorby believes bacterial cells that both eat and breathe electrons will soon be discovered. "An electric bacterium grown between two electrodes could maintain itself virtually forever," says Gorby. "If nothing is going to eat it or destroy it then, theoretically, we should be able to maintain that organism indefinitely."

It may also be possible to vary the voltage applied to the electrodes, putting the energetic squeeze on cells to the point at which they are just doing the absolute minimum to stay alive. In this state, the cells may not be able to reproduce or grow, but they would still be able to run repairs on cell machinery. "For them, the work that energy does would be maintaining life – maintaining viability," says Gorby.

How much juice do you need to keep a living electric bacterium going? Answer that question, and you've answered one of the most fundamental existential questions there is.

This article appeared in print under the headline "The electricity eaters"

Leader: "Spark of life revisited thanks to electric bacteria"

Wire in the mud

Electric bacteria come in all shapes and sizes. A few years ago, biologists discovered that some produce hair-like filaments that act as wires, ferrying electrons back and forth between the cells and their wider environment. They dubbed them microbial nanowires.

Lars Peter Nielsen and his colleagues at Aarhus University in Denmark have found that tens of thousands of electric bacteria can join together to form daisy chains that carry electrons over several centimetres – a huge distance for a bacterium only 3 or 4 micrometres long. It means that bacteria living in, say, seabed mud where no oxygen penetrates, can access oxygen dissolved in the seawater simply by holding hands with their friends.

Such bacteria are showing up everywhere we look, says Nielsen. One way to find out if you're in the presence of these electron munchers is to put clumps of dirt in a shallow dish full of water, and gently swirl it. The dirt should fall apart. If it doesn't, it's likely that cables made of bacteria are holding it together.

Nielsen can spot the glimmer of the cables when he pulls soil apart and holds it up to sunlight (see video). 

Flexible biocables

It's more than just a bit of fun. Early work shows that such cables conduct electricity about as well as the wires that connect your toaster to the mains. That could open up interesting research avenues involving flexible, lab-grown biocables.

ORIGINAL: New Scientist
by Catherine Brahic
16 July 2014

Wednesday, July 16, 2014

"Apis Mellifera: Honey Bee" a high-speed short

Last week I wanted to film something in high-speed (I shoot something every week to keep it fresh).
My Bullfrog film had done well on the internet and I wanted to step up and challenge myself. I had wanted to film bees for quite a while, and luckily for me there happened to be an apiary in my town. Allen Lindahl, owner of Hillside Bees, stepped up and allowed me to film his hives. It was 92 degrees out (33 °C) and the sun was beating down, but I was told sunny days are when the bees are most active. Without a bee suit, I was ready to shoot. I was able to get pretty close to one of the hives (about one and a half feet), which was perfect for using the Canon 100mm Macro IS. I primarily filmed with the Canon 30-105mm Cinema zoom lens wide open. I also used a 300mm Tamron and a Nikon 50mm. I had my trusty Sound Devices Pix 240i as a field monitor and for recording ProRes via the HD-SDI out of the Photron BC2 HD/2K.

It was very hard to track the bees, as they fly very fast and were getting a little bothered by how close I was to the hives. I was stung only three times, which is pretty remarkable given my proximity and my lens poking almost into the entranceway of the hive. I shot for approximately 2.5 hours each day. It was so hot I got a pretty bad sunburn, and the camera was hot enough to cook a fat porterhouse. There were a few intimidating moments when bees started landing on my arms, face, in my ear, and on my eye. I just stayed still and they went on their way, with the exception of the three stings (one on the arm, one on the neck, and one under my ear). Bees are actually quite docile and would prefer not to sting. They just want to make honey.

Shot/Dir/Edit by: Michael Sutton @MNS1974

Equipment used:
  • Camera: Photron Fastcam BC2 HD/2K high-speed S35 camera system w/ custom trigger & batteries (1000-6800fps) 2K, HD (1080p & 720p) and SD
  • Lenses: Canon 30-105mm Cine zoom, Canon 100mm Macro, Nikon 50mm, 300mm Tamron SP
  • Recorder: Sound Devices Pix 240i w/ Sandisk CF cards
  • Support: Kessler Crane Carbon Fiber Stealth, Manfrotto 516 head w/546GBK tripod
Music Licensed via:

Special thanks to:
  • Mike Cohen
  • Allen Lindahl of Hillside Bees
  • Heather Sutton
  • Eric Kessler and Chris Beller of Kessler Crane
  • Contact: Michael Sutton
  • website: 
  • email: mike at frozenprosperity dot com
  • phone: listed on website
  • twitter: @MNS1974 & @frozenpros

ORIGINAL: "Apis Mellifera: Honey Bee" a high-speed short
from Michael N Sutton / @MNS1974 on Vimeo.

Monday, July 14, 2014

All Children Are Born Geniuses, But They Are Crushed by Society and Education

By Francisco Lira

In this interview, theoretical physicist Michio Kaku tells the story of a heartbreaking question his daughter asked him. According to Michio, all children are born geniuses until society crushes their scientific spirit, and he offers a simple hypothesis for why children come to dislike science. We all know the magic was lost somewhere in our childhood!

Saturday, July 12, 2014

How D-Wave Built Quantum Computing Hardware for the Next Generation

Photo: D-Wave Systems

One second is here and gone before most of us can think about it. But a delay of one second can seem like an eternity in a quantum computer capable of running calculations in millionths of a second. That's why engineers at D-Wave Systems worked hard to eliminate the one-second computing delay that existed in the D-Wave One—the first-generation version of what the company describes as the world's first commercial quantum computer.

Such lessons learned from operating D-Wave One helped shape the hardware design of D-Wave Two, a second-generation machine that has already been leased by customers such as Google, NASA, and Lockheed Martin. Such machines have not yet proven that they can definitively outperform classical computers in a way that would support D-Wave's particular approach to building quantum computers. But the hardware design philosophy behind D-Wave's quantum computing architecture points to how researchers could build increasingly more powerful quantum computers in the future.

"We have room for increasing the complexity of the D-Wave chip," says Jeremy Hilton, vice president of processor development at D-Wave Systems. "If we can fix the number of control lines per processor regardless of size, we can call it truly scalable quantum computing technology."

D-Wave recently explained the hardware design choices it made in going from D-Wave One to D-Wave Two in the June 2014 issue of the journal IEEE Transactions on Applied Superconductivity. Such details illustrate the engineering challenges that researchers still face in building a practical quantum computer capable of surpassing classical computers. (See IEEE Spectrum's overview of the D-Wave machines' performance from the December 2013 issue.)

Quantum computing holds the promise of speedily solving tough problems that ordinary computers would take practically forever to crack. Unlike classical computing that represents information as bits of either a 1 or 0, quantum computers take advantage of quantum bits (qubits) that can exist as both a 1 and 0 at the same time, enabling them to perform many simultaneous calculations.

Classical computer hardware has relied upon silicon transistors that can switch between "on" and "off" to represent the 1 or 0 in digital information. By comparison, D-Wave's quantum computing hardware relies on metal loops of niobium that have tiny electrical currents running through them. A current running counterclockwise through the loop creates a tiny magnetic field pointing up, whereas a clockwise current leads to a magnetic field pointing down. Those two magnetic field states represent the equivalent of 1 or 0.

The niobium loops become superconductors when chilled to frigid temperatures of 20 millikelvin (-273 degrees C). At such low temperatures, the currents and magnetic fields can enter the strange quantum state known as "superposition" that allows them to represent both 1 and 0 states simultaneously. That allows D-Wave to use these "superconducting qubits" as the building blocks for making a quantum computing machine. Each loop also contains a number of Josephson junctions—two layers of superconductor separated by a thin insulating layer—that act as a framework of switches for routing magnetic pulses to the correct locations.

But a bunch of superconducting qubits and their connecting couplers—separate superconducting loops that allow qubits to exchange information—won't do any computing all by themselves. D-Wave initially thought it would rely on analog control lines that could apply a magnetic field to the superconducting qubits and control their quantum states in that manner. However, the company realized early in development that a programmable computer would need at least six or seven control lines per qubit. The dream of eventually building more powerful machines with thousands of qubits would become an "impossible engineering challenge" with such design requirements, Hilton says.

The solution came in the form of digital-to-analog flux converters (DACs)—each about the size of a human red blood cell at 10 micrometers in width—that act as control devices and sit directly on the quantum computer chip. Such devices can replace control lines by acting as a form of programmable magnetic memory that produces a static magnetic field to affect nearby qubits. D-Wave can reprogram the DACs digitally to change the "bias" of their magnetic fields, which in turn affects the quantum computing operations.

Most researchers have focused on building quantum computers using the traditional logic-gate model of computing. But D-Wave has focused on a more specialized approach known as "quantum annealing" —a method of tackling optimization problems. Solving optimization problems means finding the lowest "valley" that represents the best solution in a problem "landscape" with peaks and valleys. In practical terms, D-Wave starts a group of qubits in their lowest energy state and then gradually turns on interactions between the qubits, which encodes a quantum algorithm. When the qubits settle back down in their new lowest-energy state, D-Wave can read out the qubits to get the results.
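The "lowest valley" search can be illustrated with its classical cousin, simulated annealing. This is only an analogy: D-Wave's hardware anneals with quantum effects, while the sketch below walks an invented 1-D energy landscape on an ordinary CPU, accepting occasional uphill hops while the "temperature" is high and settling into a valley as it cools.

```python
import math
import random

def landscape(x: float) -> float:
    """A bumpy 1-D 'problem landscape' with several valleys (invented)."""
    return 0.1 * x * x + math.sin(3 * x)

def anneal(steps: int = 20000, start_temp: float = 2.0, seed: int = 42) -> float:
    rng = random.Random(seed)
    x = rng.uniform(-4.0, 4.0)
    best = x
    for i in range(steps):
        temp = start_temp * (1.0 - i / steps) + 1e-9  # cool down gradually
        candidate = x + rng.gauss(0.0, 0.5)           # small random hop
        delta = landscape(candidate) - landscape(x)
        # Always accept downhill moves; accept uphill moves with a
        # probability that shrinks as the temperature drops.
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            x = candidate
        if landscape(x) < landscape(best):
            best = x  # remember the lowest point visited
    return best

x_min = anneal()
print(f"lowest valley found near x = {x_min:.2f} (energy {landscape(x_min):.2f})")
```

Reading out the qubits after annealing corresponds here to reporting the lowest-energy point the walk settled into.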

Both the D-Wave One (128 qubits) and D-Wave Two (512 qubits) processors have DACs. But the circuitry setup of D-Wave One created some problems between the programming DAC phase and the quantum annealing operations phase. Specifically, the D-Wave One programming phase temporarily raised the temperature to as much as 500 millikelvin, which only dropped back down to the 20 millikelvin temperature necessary for quantum annealing after one second. That's a significant delay for a machine that can perform quantum annealing in just 20 microseconds (20 millionths of a second).

By simplifying the hardware architecture and adding some more control lines, D-Wave managed to largely eliminate the temperature rise. That in turn reduced the post-programming delay to about 10 milliseconds (10 thousandths of a second)—a "factor of 100 improvement achieved within one processor generation," Hilton says. D-Wave also managed to reduce the physical size of the DAC "footprint" by about 50 percent in D-Wave Two.

Building ever-larger arrays of qubits continues to challenge D-Wave's engineers. They must always be aware of how their hardware design—packed with many classical computing components—can affect the fragile quantum states and lead to errors or noise that overwhelms the quantum annealing operations.

"We were nervous about going down this path," Hilton says. "This architecture requires the qubits and the quantum devices to be intermingled with all these big classical objects. The threat you worry about is noise and impact of all this stuff hanging around the qubits. Traditional experiments in quantum computing have qubits in almost perfect isolation. But if you want quantum computing to be scalable, it will have to be immersed in a sea of computing complexity."

Still, D-Wave's current hardware architecture, code-named "Chimera," should be capable of building quantum computing machines of up to 8000 qubits, Hilton says. The company is also working on building a larger processor containing 1000 qubits.

"The architecture isn’t necessarily going to stay the same, because we're constantly learning about performance and other factors," Hilton says. "But each time we implement a generation, we try to give it some legs so we know it’s extendable."

By Jeremy Hsu
11 Jul 2014

The Man Who Rewrote the Tree of Life

Carl Woese may be the greatest scientist you’ve never heard of. “Woese is to biology what Einstein is to physics,” says Norman Pace, a microbiologist at the University of Colorado, Boulder. A physicist-turned-microbiologist, Woese specialized in the fundamental molecules of life—nucleic acids—but his ambitions were hardly microscopic. He wanted to create a family tree of all life on Earth.

Woese certainly wasn’t the first person with this ambition. The desire to classify every living thing is ageless. The Ancient Greeks and Romans worked to develop a system of classifying life. The Jewish people, in writing the Book of Genesis, set Adam to the task of naming all the animals in the Garden of Eden. And in the mid-1700s, Swedish botanist Carl von Linné published Systema Naturae, introducing the world to a system of Latin binomials—Genus species—that scientists use to this day.
Carl Woese in his later years

What Woese was proposing wasn’t to replace Linnaean classification, but to refine it. During the late 1960s, when Woese first started thinking about this problem as a young professor at the University of Illinois, biologists were relying a lot on guesswork to determine how organisms were related to each other, especially microbes. At the time, researchers used the shapes of microbes—their morphologies—and how they turned food into energy—their metabolisms—to sort them into bins. Woese was underwhelmed. To him, the morphology-metabolism approach was like trying to create a genealogical history using only photographs and drawings. Are people with dimples on their right cheeks and long ring fingers all members of the same family? Maybe, but probably not.

“If you wanted to build a tree of life prior to what Woese did, there was no way to put something together that was based upon actual data,” says Jonathan Eisen, an evolutionary microbiologist at the University of California, Davis.

Just as outward appearances aren’t the best way to determine family relations, Woese believed that morphology and metabolism were inadequate classifiers for life on Earth. Instead, he figured that DNA could sketch a much more accurate picture. Today, that approach may seem like common sense. But in the late 60s and early 70s, this was no easy task. Gene sequencing was a time-consuming, tedious task. Entire PhDs were granted for sequencing just one gene. To create his tree of life, Woese would need to sequence the same gene in hundreds, if not thousands, of different species.

So Woese toiled in his lab, sometimes with his postdoc George Fox but often alone, hunched over a light box with a magnifying glass, sequencing genes nucleotide by nucleotide. It took more than a decade. “When Woese first announced his results, I thought he was exaggerating at first,” Fox recalls. “Carl liked to think big, and I thought this was just another of his crazy ideas. But then I looked at the data and the enormity of what we had discovered hit me.”

Woese and Fox published their results in 1977 in a well-respected journal, the Proceedings of the National Academy of Sciences. They had essentially rewritten the tree of life. But Woese still had a problem: few scientists believed him. He would spend the rest of his life working to convince the biological community that his work was correct.

Animal, Vegetable, Mineral

Following the publication of Linnaeus’s treatise in the 18th century, taxonomy progressed incrementally. The Swedish botanist had originally sorted things into three “kingdoms” of the natural world: animal, vegetable, and mineral. He placed organisms in their appropriate cubbyholes by looking at similarities in appearance. Plants with the same number of pollen-producing stamens were all lumped together, animals with the same number of teeth per jaw were grouped, and so on. With no knowledge of evolution and natural selection, he didn’t have a better way to comprehend the genealogy of life on Earth. Woese believed that DNA could unlock the hidden relationships between different organisms.

The publication of Darwin’s On the Origin of Species in 1859, combined with advances in microscopy, forced scientists to revise Linnaeus’s original three kingdoms to include the tiniest critters, including newly visible ones like amoebae and E. coli. Scientists wrestled with how to integrate microbial wildlife into the tree of life for the next 100 years. By the mid-20th century, however, biologists and taxonomists had mostly settled on a tree with five major branches: protists, fungi, plants, animals, and bacteria. It’s the classification system that many people learned in high school biology class.

Woese and other biologists weren’t convinced, though. Originally a physics major at Amherst College in Massachusetts, Woese received a PhD in biophysics from Yale in 1953, and he believed that there had to be a more objective, data-driven way to classify life. He was particularly interested in how microbes fit into the classification of life, which had escaped a rigorous genealogy up until that point.

He arrived at the University of Illinois Urbana-Champaign as a microbiologist in the mid-1960s, shortly after James Watson and Francis Crick won the Nobel prize for their characterization of DNA’s double-helix form. It was the heyday of DNA. Woese was enthralled. He believed that DNA could unlock the hidden relationships between different organisms. In 1969, Woese wrote a letter to Crick, stating that:

…this can be done by using the cell’s ‘internal fossil record’—i.e., the primary structures of various genes. Therefore, what I want to do is to determine primary structures for a number of genes in a very diverse group of organisms, on the hope that by deducing rather ancient ancestor sequences for these genes, one will eventually be in the position of being able to see features of the cell’s evolution.

This type of thinking was “radically new,” says Pace. “No one else was thinking in this direction at the time, to look for sequence-based evidence of life’s diversity.”
Evolution’s Timekeeper

Although the field of genetics was still quite young, biologists had already figured out some of the basics of how evolution worked at the molecular level. When a cell copies its DNA before dividing in two, the copies aren’t perfectly identical. Mistakes inevitably creep in. Over time, this can lead to significant changes in the sequence of nucleotides and the proteins they code for. By finding genes with sites that mutate at a known rate—say 4 mutations per site per million years—scientists could use them as an evolutionary clock that would give biologists an idea of how much time had passed since two species last shared a common ancestor.
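The clock arithmetic described above is simple enough to work through. If a gene accumulates substitutions at a roughly constant rate r per site per million years, then two lineages that split T million years ago differ at about d = 2rT of their sites, since each lineage mutates independently. The rate and difference used below are illustrative numbers, not measured values.

```python
# Back-of-the-envelope molecular clock: solve d = 2 * r * T for T.

def divergence_time_mya(fraction_different: float,
                        rate_per_site_per_myr: float) -> float:
    """Estimate millions of years since two species shared an ancestor."""
    # Divide by 2 because mutations accumulate along both lineages.
    return fraction_different / (2 * rate_per_site_per_myr)

# Example: 2% sequence difference with a clock ticking at
# 0.00001 substitutions per site per million years.
print(divergence_time_mya(0.02, 1e-5))  # roughly 1000 million years
```

Inverting the same formula (d = 2rT) is how a gene with a known rate becomes a timekeeper: measure d between two sequences and read off T.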

To create his evolutionary tree of life, then, Woese would need to choose a gene that was present in every known organism, one that was copied from generation to generation with a high degree of precision and mutated very slowly, so he would be able to track it over billions of years of evolution.

“This would let him make a direct measure of evolutionary history,” Pace says. “By tracking these gene sequences over time, he could calculate the evolutionary distance between two organisms and make a map of how life on Earth may have evolved.”

Some of the most ancient genes are those coding for molecules known as ribosomal RNAs. In ribosomes, parts of the cell that float around the soupy cytoplasm, proteins and ribosomal RNA, or rRNA, work together to crank out proteins. Each ribosome is composed of large and small subunits, which are similar in both simple, single-celled prokaryotes and more complex eukaryotes. Woese had several different rRNA molecules to choose from in the various subunits, which are classified based on their length. At around 120 nucleotides long, 5S rRNA wasn’t big enough to use to compare lots of different organisms. On the other end of the spectrum, 23S rRNA was more than 2300 nucleotides long, making it far too difficult for Woese to sequence using the technologies of the time. The Goldilocks molecule—long enough to allow for meaningful comparisons but not too long and difficult to sequence—was 16S rRNA in prokaryotes and its slightly longer eukaryotic equivalent, 18S rRNA. Woese decided to use these to create his quantitative tree of life.

His choice was especially fortuitous, Eisen says, because of several factors inherent in 16S rRNA that Woese couldn’t have been aware of at the time, including its ability to measure evolutionary time on several different time scales. Certain parts of the 16S rRNA molecule mutate at different speeds. Changes to 16S rRNA are, on the whole, still extremely slow (humans share about 50% of their 16S rRNA sequence with the bacterium E. coli), but one portion mutates much more slowly than the other. It’s as if the 16S rRNA clock has both an hour hand and a minute hand. The very slowly evolving “hour hand” lets biologists study the long-term changes to the molecule, whereas the more quickly evolving “minute hand” provides a more recent history. “This gives this gene an advantage because it lets us ask questions about deep evolutionary history and more recent history at the same time,” Eisen says.

Letter by letter

Selecting the gene was just Woese’s first challenge. Now he had to sequence it in a variety of different organisms. In the late 60s and early 70s, when Woese began his work, DNA sequencing was far from automated. Everything, down to the last nucleotide, had to be done by hand. Woese used a method to catalog short pieces of RNA developed in 1965 by British scientist Frederick Sanger, which used enzymes to chop RNA into small pieces. These small pieces were sequenced, and then scientists had to reassemble the overlapping pieces to determine the overall sequence of the entire molecule—a process that was tedious, expensive, and time-consuming, but that was seen as a minor annoyance to a workhorse like Woese, Fox says. “All he cared about was getting the answer.”
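The "reassemble the overlapping pieces" step can be sketched as a greedy merge: repeatedly join the pair of fragments with the largest suffix/prefix overlap until one sequence remains. The real 1960s-70s cataloguing was done by hand and was far subtler; this toy assumes distinct, error-free fragments and is only meant to illustrate the principle.

```python
# Greedy overlap assembly of short RNA fragments (toy example).

def overlap(a: str, b: str) -> int:
    """Length of the longest suffix of a that is also a prefix of b."""
    for n in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:n]):
            return n
    return 0

def assemble(fragments: list[str]) -> str:
    """Merge fragments pairwise, largest overlap first."""
    frags = fragments[:]
    while len(frags) > 1:
        n, a, b = max(((overlap(a, b), a, b)
                       for a in frags for b in frags if a != b),
                      key=lambda t: t[0])
        frags.remove(a)
        frags.remove(b)
        frags.append(a + b[n:])  # glue b onto a, dropping the shared part
    return frags[0]

pieces = ["GGAUC", "AUCCGA", "CGAUU"]
print(assemble(pieces))  # -> "GGAUCCGAUU"
```

Here "AUC" joins the first two fragments and "CGA" joins the result to the third, recovering one continuous sequence from three overlapping reads.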

Woese started with prokaryotes, the single-celled organisms that were his primary area of interest. He and his lab began by growing bacteria in a solution of radioactive phosphate, which the cells incorporated into the backbones of their RNA molecules. This made the 16S rRNA radioactive. Then, Woese and Fox extracted the RNA from the cells and chopped it into smaller pieces using enzymes that acted like scissors. The enzymatic scissors would only cut at certain sequences. If a cut site was present in one organism but missing in a second, the scissors would pass over the second organism’s sequence, leaving a longer fragment.

Since RNA’s sugar-phosphate backbone is negatively charged, the researchers could use a process known as electrophoresis to separate the different length pieces. As electricity coursed through gels containing samples, it pulled the smaller, lighter bits farther through the gels than the longer, heavier chunks. The result was distinct bands of different lengths of RNA. Woese and Fox then exposed each gel to photographic paper over several days. The radioactive bands in the gel transferred marks to the paper. This created a Piet Mondrian-esque masterpiece of black bands on a white background. Each different organism left its own mark. “To Carl, each spot was a puzzle that he would solve,” Fox says.

After developing each image, Woese and Fox returned to the gel and neatly cut out each individual blotch that contained fragments of a certain length. They then chopped up these fragments with another set of enzymes until they were about five to 15 nucleotides long, a length that made sequencing easier. For some of the longer fragments, it took several iterations of the process before they were successfully sequenced. The sequences were then recorded on a set of 80-column IBM punch cards. The cards were then run through a large computer to compare band patterns and RNA sequences among different organisms to determine evolutionary relationships. At the beginning, it took Woese and Fox months to obtain a single 16S rRNA fingerprint.
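The catalog comparison the punch cards performed can be sketched with a simple association coefficient; Woese's published comparisons used a score of roughly this form, though the fragment catalogs below are invented for illustration:

```python
# Sketch: comparing two fragment catalogs with an association
# coefficient, S = 2 * |shared| / (|A| + |B|). Catalogs are invented.

def association(catalog_a: set, catalog_b: set) -> float:
    """1.0 for identical catalogs, 0.0 when nothing is shared."""
    shared = len(catalog_a & catalog_b)
    return 2 * shared / (len(catalog_a) + len(catalog_b))

bacterium_1 = {"AAUCG", "CCGUA", "GGAUC", "UUCGA"}
bacterium_2 = {"AAUCG", "CCGUA", "GGAUC", "ACGUU"}
methanogen  = {"AAUCG", "GCGCA", "UAGGU", "CAUUC"}

print(association(bacterium_1, bacterium_2))  # 0.75: close relatives
print(association(bacterium_1, methanogen))   # 0.25: distant lineage
```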

“This process was a huge breakthrough,” says Peter Moore, an RNA chemist at Yale University who worked with Woese on other research relating to RNA’s structure. “It gave biologists a tool for sorting through microorganisms and a conceptual way to understand the relationships between them. At the time, the field was just a total disaster area. Nobody knew what the hell was going on.”

By the spring of 1976, Woese and Fox had created fingerprints for a variety of bacterial species. Then they turned to an oddball group of prokaryotes: methanogens. These microbes produce methane when they break down food for energy. Because even tiny amounts of oxygen are toxic to these prokaryotes, Woese and Fox had to grow them under special conditions.

After months of trial and error, the two scientists finally obtained an RNA fingerprint of one type of methanogen. When they analyzed its fingerprint, however, it looked nothing like that of any of the bacteria Woese and Fox had previously examined. All of the previous bacterial gels contained two large splotches at the bottom; these were entirely absent from the new gels. Woese knew instantly what this meant.

To fellow microbiologist Ralph Wolfe, who worked in the lab next door, Woese announced, “I don’t even think these are bacteria, Wolfe.”

He dropped the full bombshell on Fox. “The methanogens didn’t have any of the spots he was expecting to see. When he realized this wasn’t a mistake, he just went nuts. He ran into my lab and told me we had discovered a new form of life,” Fox recalls.

The New Kingdom

The methanogens Woese and Fox had analyzed looked superficially like other bacteria, yet their RNA told a different story, sharing more in common with nucleus-containing eukaryotes than with other bacteria. After more analysis of his RNA data, Woese concluded that what he was tentatively calling Archaea (from the Greek for primitive) wasn’t a minor twig on the tree of life, but a new main branch. It wasn’t just Bacteria and Eukarya anymore.

To prove to their critics that these prokaryotes really were a separate domain on the tree of life, Woese and Fox knew the branch needed more than just methanogens. Fox knew enough about methanogen biology to know that their unique RNA fingerprint wasn’t the only thing that made them strange. For one thing, their cell walls lacked a mesh-like outer layer made of peptidoglycan. Nearly every other bacterium Fox could think of contained peptidoglycan in its cell wall. Then he recalled a strange fact he had learned as a graduate student: another group of prokaryotes, the salt-loving halophiles, also lacked peptidoglycan.

Grand Prismatic Spring in Yellowstone National Park is home to many species of thermophilic archaea.

Fox turned to the research literature to search for other references to prokaryotes that lack peptidoglycan. He found two additional examples: Thermoplasma and Sulfolobus. Other than the missing peptidoglycan, these organisms and the methanogens seemed nothing alike. Methanogens were found everywhere from wetlands to the digestive tracts of animals, halophiles flourished in salt, Thermoplasma liked things really hot, and Sulfolobus was often found in volcanoes and hot, acidic springs.

Despite their apparent differences, they all metabolized food in the same, unusual way—unlike anything seen in other bacteria—and the fats in the cell membrane were alike, too. When Woese and Fox sequenced the 16S rRNA of these organisms, they found that these prokaryotes were most similar to the methanogens.

“Once we had the fingerprints, it all fell together,” Fox says.

Woese believed his findings were going to revolutionize biology, so he organized a press conference when the paper was published in PNAS in 1977. It landed Woese on the front page of the New York Times, and created animosity among many biologists. “The write-ups were ludicrous and the reporters got it all wrong,” Wolfe says. “No biologists wanted anything to do with him.”

It wasn’t just distaste for what looked like a publicity stunt that was working against Woese. He had spent most of the last decade holed up in his third floor lab, poring over RNA fingerprints. His reclusive nature had given him the reputation of a crank. It also didn’t help that he had single-handedly demoted many biologists’ favorite species. Thanks to Woese, Wolfe says, “Microbes occupy nearly all of the tree. Then you have one branch at the very end where all the animals and plants were. And the biologists just couldn’t believe that all the plants and all the animals were really just one tiny twig on one branch.”

Although some specialists were quick to adopt Woese’s new scheme, the rest of biology remained openly hostile to the idea. It wasn’t until the mid-1980s that other microbiologists began to warm to it, and it took well over another decade for other areas of biology to follow suit. Woese grew increasingly bitter that so many other scientists were so quick to reject his claims. He knew his research and ideas were solid. But he was left to respond to what seemed like an endless stream of criticism. Shying away from these attacks, Woese retreated to his office for the next two decades.

“He was a brash, iconoclastic outsider, and his message did not go down well,” says Moore, the Yale RNA chemist.

Woese’s cause wasn’t helped by his inability to engage critics in dialogue and discussion. Both reticent and abrupt, he preferred his lab over conferences and presentations. In place of public appearances to address his detractors, he sent salvos of op-eds and letters to the editor. Still, nothing seemed to help. The task of publicly supporting this new tree of life fell to Woese’s close colleagues, especially Norman Pace.

But as technology improved, scientists began to obtain the sequences of an increasing number of 16S rRNAs from different organisms. More and more of their analyses supported Woese’s hypothesis. As sequencing data poured in from around the world, it became clear to nearly everyone in biology that Woese’s initial tree had, in fact, been correct.

Now, when scientists try to discover unknown microbial species, the first gene they sequence is 16S rRNA. “It’s become one of the fundamentals of biology,” Wolfe says. “After more than 20 years, Woese was finally vindicated.”

Woese died on December 30, 2012, at the age of 84 of complications from pancreatic cancer. At the time of his death, he had won some of biology’s most prestigious awards and had become one of the field’s most respected scientists. Thanks to Woese’s legacy, we now know that most of the world’s biodiversity is hidden from view, among the tiny microbes that live unseen in and around us, and in them, the story of how life first evolved on this planet.


Photo credits: Jason Lindsey/University of Illinois, Tim Bocek/Flickr (CC BY-NC-SA)
Carrie Arnold

Carrie Arnold is a freelance science writer living in Virginia. She has written about many aspects of the living world for publications including Scientific American, Discover, New Scientist, Science News, and more.

30 Apr 2014

jueves, 10 de julio de 2014

Mathematicians Solve The Topological Mystery Behind The “Brazuca” World Cup Football

Just in time for the World Cup final

The 1970 World Cup in Mexico is famous in footballing terms because of its attacking style of play and because it was won by a talented Brazilian team featuring Pelé, the player widely regarded as the best in history.

But this World Cup was also significant for topological reasons. It was the first to feature a ball with icosahedral symmetry — the famous Adidas Telstar, which the company made by stitching together 12 black pentagonal panels and 20 white hexagonal panels.

That’s important for topologists because icosahedral symmetry is one of only three Platonic symmetry groups, the other two being tetrahedral and octahedral symmetry groups.

What’s interesting about this ball is that it has a molecular analogue in the form of the fullerene C60—the famous football-shaped molecule, which was discovered later, in 1985. This too is made up of pentagonal and hexagonal carbon rings that together take on the exact shape of the Telstar.

Adidas continued to use balls with icosahedral symmetry throughout the 1970s, 80s and 90s until the 2006 World Cup in Germany, when it introduced a ball with tetrahedral symmetry, known as the Teamgeist. Unlike previous balls, this one was made using 14 curved panels that together made it topologically equivalent to a truncated octahedron.

Just like the Telstar, the Teamgeist has a molecular analogue too in the form of fullerenes that obey tetrahedral symmetry.

But what of the third and final Platonic symmetry group involving octahedral symmetry? It turns out that Adidas has addressed this shortcoming with its current World Cup ball: the Brazuca now being used in Brazil.

The Adidas Brazuca is made from six panels each with a four-leaf clover shape that knit together like a jigsaw to form a sphere. This ball turns out to have octahedral symmetry, finally completing all of the Platonic symmetry groups.

But here’s the thing: nobody knows whether the Adidas Brazuca has a molecular analogue in the form of a fullerene with octahedral symmetry. Certainly nobody has ever seen a fullerene with this kind of symmetry. Indeed, topologists have never worked out whether it is even possible for a fullerene to form this shape given that it must be constructed from pentagons and hexagons in such a way that no two pentagons touch.
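The constraint behind that puzzle is worth spelling out. Euler's polyhedron formula (V - E + F = 2) forces any classical fullerene, built only from pentagons and hexagons with three bonds per carbon, to contain exactly 12 pentagons, however many hexagons it has. A quick check:

```python
# Euler's formula check for a trivalent cage of p pentagons and h hexagons:
# F = p + h faces, E = (5p + 6h)/2 edges, V = (5p + 6h)/3 vertices.
# V - E + F = 2 holds only when p = 12, for any hexagon count h.

def euler_characteristic(p: int, h: int) -> float:
    faces = p + h
    edges = (5 * p + 6 * h) / 2      # each edge borders two faces
    vertices = (5 * p + 6 * h) / 3   # three faces meet at each carbon atom
    return vertices - edges + faces

for h in (0, 20, 100):
    assert euler_characteristic(12, h) == 2   # closes into a sphere
assert euler_characteristic(11, 20) != 2      # 11 pentagons never closes

print(euler_characteristic(12, 20))  # 2.0: the C60 case, 12 pentagons and 20 hexagons
```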

At least until now. Today, Yuan-Jia Fan and Bih-Yaw Jin at the National Taiwan University in Taipei announce the happy news that fullerenes can indeed form ball-shaped molecules with octahedral symmetry, just like the Adidas Brazuca.

Their method is relatively straightforward. They start with a flat sheet of graphene made from hexagonal rings of carbon. From this, they cut a template shape, such as a triangle, that they then glue together to form various ball-shaped molecules.

To keep track of the symmetry, their approach is to paste the template shapes onto the faces of polyhedra with known symmetry. The adjacent carbon hexagons then knit together to form a polyhedron with the same symmetry. However, where the vertices of the triangles meet, shapes other than hexagons can form too, such as pentagons.
An important part of this problem is that pentagons cannot be adjacent in fullerenes, so any construction that produces adjacent pentagons is ruled out.

So for example, Yuan-Jia and Bih-Yaw construct a fullerene with icosahedral symmetry by cutting 20 equilateral triangles out of graphene and then pasting them onto the triangular faces of an icosahedron. This creates a shape made of 20 hexagons with 12 pentagons at the vertices of the icosahedron. In other words, this cut-and-paste procedure produces C60, the molecular analogue of the Telstar.

The trick for creating a fullerene with octahedral symmetry is to start with a polyhedron that already has this symmetry, such as a cube with the corners cut off and the edges bevelled, known as a cantellated cube.

The next question is what template shape they should paste onto this cube. Yuan-Jia and Bih-Yaw try several, showing that a remarkable variety of shapes with octahedral symmetry can be formed in this way.

However, the climax of their paper comes when they finally discover a shape that has both octahedral symmetry and physically feasible bonds for a fullerene. The new molecule, in addition to the hexagons and pentagons associated with traditional fullerenes, also has four- and eight-membered carbon rings. Indeed, it turns out that there are four structural types of octahedral fullerenes.

The take-home message from Yuan-Jia and Bih-Yaw’s work is that the Brazuca ball does have a fullerene that is its molecular analogue, just like its predecessors at all the World Cups dating back to 1970.

That will come as a great relief for topologists and football fans alike. Indeed, chemists might be interested too. The next stage in this work will be for somebody with graphene and plenty of time on their hands to go out and actually make one of these molecules, perhaps in time for the 2018 World Cup in Russia.

Ref: From the “Brazuca” ball to Octahedral Fullerenes: Their Construction and Classification



Can The Human Brain Project Succeed?

Image: Getty Images

An ambitious effort to build human brain simulation capability is meeting with some very human resistance. On Monday, a group of researchers sent an open letter to the European Commission protesting the management of the Human Brain Project, one of two Flagship initiatives selected last year to receive as much as €1 billion over the course of 10 years (the other award went to a far less controversy-courting project devoted to graphene).

The letter, which now has more than 450 signatories, questions the direction of the project and calls for a careful, unbiased review. Although he’s not mentioned by name in the letter, news reports cited resistance to the path chosen by project leader Henry Markram of the Swiss Federal Institute of Technology in Lausanne. One particularly polarizing change was the recent elimination of a subproject, called Cognitive Architectures, as the project made its bid for the next round of funding.

According to Markram, the fuss all comes down to differences in scientific culture. He has described the project, which aims to build six different computing platforms for use by researchers, as an attempt to build a kind of CERN for brain research, a means by which disparate disciplines and vast amounts of data can be brought together. This is a "methodological paradigm shift" for neuroscientists accustomed to individual research grants, Markram told Science, and that's what he says the letter signers are having trouble with.

But some question the main goals of the project, and whether we're actually capable of achieving them at this point. The program's Brain Simulation Platform aims to build the technology needed to reconstruct the mouse brain and eventually the human brain in a supercomputer. Part of the challenge there is technological. Markram has said that an exascale-level machine (one capable of executing 1000 or more petaflops) would be needed to "get a first draft of the human brain", and the energy requirements of such machines are daunting.

Crucially, some experts say that even if we had the computational might to simulate the brain, we're not ready to. "The main apparent goal of building the capacity to construct a larger-scale simulation of the human brain is radically premature," signatory Peter Dayan, who directs a computational neuroscience department at University College London, told the Guardian. He called the project a "waste of money" that "can't but fail from a scientific perspective". To Science, he said "the notion that we know enough about the brain to know what we should simulate is crazy, quite frankly.”

This last comment resonated with me, as it reminded me of a feature that Steve Furber of the University of Manchester wrote for IEEE Spectrum a few years ago. Furber, one of the co-founders of the mobile chip design powerhouse ARM, is now in the process of stringing a million or so of the low-power processors together to build a massively parallel computer capable of simulating 1 billion neurons, about 1% as many as are contained in the human brain.

Furber and his collaborators designed their computing architecture quite carefully in order to take into account the fact that there are still a host of open questions when it comes to basic brain operation. General-purpose computers are power-hungry and slow when it comes to brain simulation. Analog circuitry, which is also on the Human Brain Project's list, might better mimic the way neurons actually operate, but, he wrote,

as speedy and efficient as analog circuits are, they’re not very flexible; their basic behavior is pretty much baked right into them. And that’s unfortunate, because neuroscientists still don’t know for sure which biological details are crucial to the brain’s ability to process information and which can safely be abstracted away.

The Human Brain Project's website admits that exascale computing will be hard to reach: "even in 2020, we expect that supercomputers will have no more than 200 petabytes." To make up for the shortfall, it says, "what we plan to do is build fast random-access storage systems next to the supercomputer, store the complete detailed model there, and then allow our multi-scale simulation software to call in a mix of detailed or simplified models (models of neurons, synapses, circuits, and brain regions) that matches the needs of the research and the available computing power. This is a pragmatic strategy that allows us to keep building ever more detailed models, while keeping our simulations to the level of detail we can support with our current supercomputers."

This does sound like a flexible approach. But, as is par for the course with any ambitious research project, particularly one that involves a great amount of synthesis of disparate fields, it's not yet clear whether it will pay off.

And any big changes in direction may take a while. Although the proposal for the second round of funding will be reviewed this year, according to Science, which reached out to the European Commission, the first review of the project itself won't begin until January 2015.

Rachel Courtland can be found on Twitter at @rcourt.

ORIGINAL: Spectrum
By Rachel Courtland
Posted 9 Jul 2014 | 17:00 GMT

DARPA Wants a Memory Prosthetic for Injured Vets—and Wants It Now

Photo: Getty Images
No one will ever fault DARPA, the Defense Department's mad science wing, for not being ambitious enough. Over the next four years, the first grantees in its Restoring Active Memory (RAM) program are expected to develop and test prosthetic memory devices that can be implanted in the human brain. 

It's hoped that such synthetic devices can help veterans with traumatic brain injuries, and other people whose natural memory function is impaired. The two teams, led by researchers Itzhak Fried at UCLA and Mike Kahana at the University of Pennsylvania, will start with the fundamentals. 
They'll look for neural signals associated with the formation and recall of memories, and they'll work on computational models to describe how neurons carry out these processes, and to determine how an artificial device can replicate them. They'll also work with partners to develop real hardware suitable for the human brain. Such devices should ultimately be capable of recording the electrical activity of neurons, processing the information, and then stimulating other neurons as needed.

The RAM research derives from an engineering approach to memory that's gaining traction. (Spectrum covered the work of one of its leading proponents, Ted Berger, in the recent article The End of Disability.) If the brain is essentially a collection of circuits, the thinking goes, a memory is formed by the sequential actions of many neurons. If a person has a brain injury that knocks out some of those neurons, the whole circuit may malfunction, and the person will experience memory problems. But if electrodes can pick up the signal in the neurons upstream from the problem spot, and then convey that signal around the damage to intact neurons downstream, then the memory should function as normal.
In a press briefing yesterday, program manager Justin Sanchez said that the first human experiments will be conducted with hospitalized epilepsy patients who have electrodes implanted in their brains as they await surgery (this is done so their doctors can pinpoint the origin of their seizures). Since epilepsy patients often experience memory loss as well, Sanchez said they're a natural fit for the research. Eventually trials would include military servicemembers who suffer the aftereffects of traumatic brain injuries, and finally civilians with similar injuries. 
DARPA recently decided to beef up its research in biological technologies, spurred in part by the needs of veterans returning from Iraq and Afghanistan. But it seems likely that the agency's increased attention to programs like RAM was also prompted by the recognition that neural engineering is one of the most exciting frontiers in science, with neural technologies advancing faster than the science that guides them.

The RAM program is part of the overarching federal BRAIN Initiative, announced with much fanfare by President Obama in 2013. With a first-year budget of $110 million parceled out to three agencies and considerable cooperation from deep-pocketed private institutions, you can expect this decade to be a brainy one.

ORIGINAL: Spectrum
By Eliza Strickland
9 Jul 2014