Welcome to Ciencia en Canoa, an initiative created by
Vanessa Restrepo Schild.

Sunday, May 24, 2015

The Concrete Of The Future Is Here, And It Repairs Its Own Cracks

Henk Jonkers, a microbiologist at Delft University of Technology in the Netherlands, has been working on perfecting a bioconcrete since 2006.

Source: cnn.com
The "healing agent" contains bacteria that can produce limestone, and it's mixed into regular concrete.
Source: cnn.com

When the concrete cracks and water gets in, the bacteria are activated. Here is a test piece of the concrete on day one.
Source: cnn.com
After two months, the improvement in the large crack is significant.
Source: cnn.com
Concrete is the world's most popular building material and this small change can help extend the life of future structures.
Source: cnn.com
For all the details about this amazing “living concrete,” visit CNN.

ORIGINAL: Distractify

Oslo builds world's first bumblebee highway

A bumblebee lands on an Echinacea flower. Photo: Swallowtail Garden Seeds/Flickr
The Norwegian capital has inaugurated the world's first 'bumblebee highway', a corridor through the city with pollen stations every 250 meters.

“The idea is to create a route through the city with enough feeding stations for the bumblebees all the way,” Tonje Waaktaar Gamst of the Oslo Garden Society told local paper Osloby. “Enough food will also help the bumblebees withstand manmade environmental stress better.”

Bumblebees and other pollinating insects struggle in urban environments where there are few flowers rich in nectar, effectively starving them.

Gamst and the team have placed flowerpots on rooftops and balconies along a route from east to west through the city.

During the last few years, bees, bumblebees and other insects have suffered, with many colonies dying out, causing damage to agriculture that depends on the insects.

Although Norway is not as hard hit as the US, six out of 35 Norwegian bumblebee species are close to extinction.

Oslo's municipality is co-operating with environmental organisations, the public, and companies, which are asked to plant bumblebee-friendly flowers on their property.

To help the insects along, the organisation BiBy (Bee Town) has created an app, where the public can see the “grey areas”, long stretches with no food for bees, in order to encourage the planting of flowers in areas that don’t have nearby parks.

“It will be easy to see barriers and obstacles on the map. The goal is to inspire people to fill these gaps,” Agnes Lyche Melvær of BiBy told Osloby.

The public will also be able to upload pictures of their projects to improve the situation for bees and bumblebees, such as flowerpots and bee hotels.

“Some bee species like to live in solitary rooms. They need small hollows, like a crack in an old tree trunk. It’s very important to have some old wood lying around,” says Melvær.

ORIGINAL: The Local - Norway
22 May 2015

Friday, May 22, 2015

Profile: Kira Radinsky, the Prophet of the Web

Using machine intelligence and data mining, this entrepreneur can make predictions about future events

Photo: Kira Radinsky
Kira Radinsky is a rising star in predictive analytics. She combines the use of artificial intelligence and online data mining to predict likely futures for individuals, societies, and businesses. Radinsky made headlines two years ago for developing a series of algorithms that dissect words and phrases in traditional and social media, Web activity, and search trends to warn of possible disasters, geopolitical events, and disease outbreaks. Her system predicted Cuba’s first cholera epidemic in decades and early Arab Spring riots.

The algorithm—which grew out of Radinsky’s Ph.D. work at Technion–Israel Institute of Technology, in Haifa—looks for clues and historical patterns inferred from online behavior and news articles. “During an event, people search [for related topics] much more than usual,” she says. “The system looks for other times we saw a spike in that same topic and analyzes what was going on in the places that had this same spike.”

With the 2012 Cuba cholera outbreak, for example, the system “learned,” by perusing news items, that cholera outbreaks occurred after droughts in Angola in 2006 and large storms in Africa in 2007. The algorithm also noted other factors, such as location, water coverage, population density, and economy. When Cuba presented similar conditions, the system warned of an outbreak, and a few weeks later, cholera was reported.
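As a rough sketch of this kind of precursor matching (not Radinsky's actual system; the events, features, and similarity threshold below are all invented for illustration), present conditions can be compared against annotated historical events:

```python
# Toy illustration of precursor matching (all events and features invented).
def jaccard(a, b):
    """Similarity between two feature sets."""
    return len(a & b) / len(a | b)

# Historical events annotated with conditions and what followed.
history = [
    ({"drought", "low_water_coverage", "poor_economy"}, "cholera_outbreak"),
    ({"large_storm", "high_density", "poor_economy"}, "cholera_outbreak"),
    ({"drought", "high_income"}, "no_outbreak"),
]

def warn(current, threshold=0.5):
    """Return outcomes of past events whose conditions resemble the present."""
    return {outcome
            for conditions, outcome in history
            if jaccard(current, conditions) >= threshold}

alerts = warn({"large_storm", "poor_economy", "high_density"})
print(alerts)  # {'cholera_outbreak'}
```

A real system would weight features, mine them automatically from news archives and search logs, and report calibrated probabilities rather than a bare set of outcomes.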

Radinsky has parlayed her research into a 3-year-old startup, SalesPredict, with 20 people on staff and offices near Tel Aviv and in San Francisco. SalesPredict focuses on helping businesses, including several Fortune 100 companies, find new opportunities for sales or retain existing customers and clients. The company has raised US $4 million from venture funding and has been doubling its revenue every quarter—except the first quarter of 2015, when it tripled.

“It’s big data meets game theory,” Radinsky says. “There was so much data at each company about their historical sales, but very few actions were taken to scientifically analyze it.”

As a youth, Radinsky wanted to be a scientist. But the summer before high school, she botched a research project at a science camp. “It was biology and computers together. I was in charge of feeding the cells. I was not good at that, and I killed those cells by mistake. After that, they had me process the data instead—and it turned out to be something I enjoyed.”

Radinsky enrolled at Technion in 2002 at just 15, earning extra cash by programming websites on the side. At 18, she did her mandatory Israel Defense Forces duty as a computer scientist in its intelligence division. She completed bachelor’s and master’s degrees in computer science in 2009, and a Ph.D. in machine learning and artificial intelligence in 2012, at 26.

Since graduating, she’s served full-time as SalesPredict’s chief technical officer. “Getting into predictive analytics usually requires master’s and Ph.D. degrees in data mining and machine learning. Data scientists have strong math and statistical backgrounds. It’s a very academically oriented field,” she says. “For my company, it helps if you’ve worked for a company like Google or have strong engineering skills on how to scale machine learning systems.”

Radinsky believes in developing this technology to benefit not only business but also society. “Mining patterns in data can lead humanity to a new era of faster scientific discoveries. We are working on several pro bono activities in the medical domain to apply machine learning algorithms to predict cancers and find the driving causes of diseases.”

This article originally appeared in print as “Kira Radinsky.”


ORIGINAL: IEEE Spectrum
By Susan Karlin
21 May 2015

Google a step closer to developing machines with human-like intelligence

Algorithms developed by Google to encode thoughts could lead to computers with ‘common sense’ within a decade, says leading AI scientist

Joaquin Phoenix and his virtual girlfriend in the film Her. Professor Hinton thinks that there’s no reason why computers couldn’t become our friends, or even flirt with us. Photograph: Allstar/Warner Bros/Sportsphoto Ltd.

Computers will have developed “common sense” within a decade and we could be counting them among our friends not long afterwards, one of the world’s leading AI scientists has predicted.

Professor Geoff Hinton, who was hired by Google two years ago to help develop intelligent operating systems, said that the company is on the brink of developing algorithms with the capacity for logic, natural conversation and even flirtation.

The researcher told the Guardian that Google is working on a new type of algorithm designed to encode thoughts as sequences of numbers – something he described as “thought vectors”.

Although the work is at an early stage, he said there is a plausible path from the current software to a more sophisticated version that would have something approaching human-like capacity for reasoning and logic. “Basically, they’ll have common sense.”

The idea that thoughts can be captured and distilled down to cold sequences of digits is controversial, Hinton said. “There’ll be a lot of people who argue against it, who say you can’t capture a thought like that,” he added. “But there’s no reason why not. I think you can capture a thought by a vector.”

Hinton, who is due to give a talk at the Royal Society in London on Friday, believes that the “thought vector” approach will help crack two of the central challenges in artificial intelligence:

  • mastering natural, conversational language, and 
  • the ability to make leaps of logic.
He painted a picture of the near-future in which people will chat with their computers, not only to extract information, but for fun – reminiscent of the film Her, in which Joaquin Phoenix falls in love with his intelligent operating system.

“It’s not that far-fetched,” Hinton said. “I don’t see why it shouldn’t be like a friend. I don’t see why you shouldn’t grow quite attached to them.”

In the past two years, scientists have already made significant progress toward overcoming these challenges.

Richard Socher, an artificial intelligence scientist at Stanford University, recently developed a program called NaSent that he taught to recognise human sentiment by training it on 12,000 sentences taken from the film review website Rotten Tomatoes.
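NaSent itself is a recursive neural network that operates over parse trees; as a far simpler illustration of learning sentiment from labeled sentences, here is a toy bag-of-words scorer (the training snippets are invented):

```python
# A deliberately tiny bag-of-words sentiment scorer -- NOT Socher's recursive
# neural network, just an illustration of learning sentiment from labeled text.
from collections import Counter

train = [("a moving and brilliant film", 1),
         ("tedious and lifeless", 0),
         ("brilliant acting, moving story", 1),
         ("a tedious mess", 0)]

# Count how often each word appears under each label.
counts = {0: Counter(), 1: Counter()}
for text, label in train:
    counts[label].update(text.split())

def predict(text):
    """Pick the label whose training words better match the input."""
    scores = {label: sum(c[w] for w in text.split()) for label, c in counts.items()}
    return max(scores, key=scores.get)

print(predict("a brilliant film"))  # 1 (positive)
```

Unlike this sketch, NaSent composes the sentiment of phrases up a sentence's parse tree, which lets it handle negation and contrast that bag-of-words models miss.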

Part of the initial motivation for developing “thought vectors” was to improve translation software, such as Google Translate, which currently uses dictionaries to translate individual words and searches through previously translated documents to find typical translations for phrases. Although these methods often provide the rough meaning, they are also prone to delivering nonsense and dubious grammar.

Thought vectors, Hinton explained, work at a higher level by extracting something closer to actual meaning.

The technique works by ascribing each word a set of numbers (or vector) that define its position in a theoretical “meaning space” or cloud. A sentence can be looked at as a path between these words, which can in turn be distilled down to its own set of numbers, or thought vector.

The “thought” serves as the bridge between the two languages because it can be transferred into the French version of the meaning space and decoded back into a new path between words.

The key is working out which numbers to assign each word in a language – this is where deep learning comes in. Initially the positions of words within each cloud are ordered at random and the translation algorithm begins training on a dataset of translated sentences.

At first the translations it produces are nonsense, but a feedback loop provides an error signal that allows the position of each word to be refined until eventually the positions of words in the cloud capture the way humans use them – effectively a map of their meanings.

Hinton said that the idea that language can be deconstructed with almost mathematical precision is surprising, but true. “If you take the vector for Paris and subtract the vector for France and add Italy, you get Rome,” he said. “It’s quite remarkable.”
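The analogy arithmetic can be sketched with hand-crafted toy vectors (real embeddings are learned from data and have hundreds of dimensions; these 2-D values are invented so the analogy works out exactly):

```python
# Toy demonstration of the "Paris - France + Italy = Rome" analogy.
# These 2-D vectors are hand-crafted for illustration; real embeddings
# are learned and have hundreds of dimensions.
words = {
    "France": (1.0, 1.0), "Paris": (1.0, 2.0),
    "Italy":  (3.0, 1.0), "Rome":  (3.0, 2.0),
}

def add(u, v): return (u[0] + v[0], u[1] + v[1])
def sub(u, v): return (u[0] - v[0], u[1] - v[1])

def nearest(v):
    """Word whose vector is closest (squared Euclidean distance) to v."""
    return min(words, key=lambda w: (words[w][0] - v[0])**2 + (words[w][1] - v[1])**2)

target = add(sub(words["Paris"], words["France"]), words["Italy"])
print(nearest(target))  # Rome
```

In real embedding spaces the analogy vector rarely lands exactly on a word, so the nearest-neighbour step (usually by cosine similarity) is what makes "you get Rome" work.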

Dr Hermann Hauser, a Cambridge computer scientist and entrepreneur, said that Hinton and others could be on the way to solving what programmers call the “genie problem”.

“With machines at the moment, you get exactly what you wished for,” Hauser said. “The problem is we’re not very good at wishing for the right thing. When you look at humans, the recognition of individual words isn’t particularly impressive; the important bit is figuring out what the guy wants.”

“Hinton is our number one guru in the world on this at the moment,” he added.

Some aspects of communication are likely to prove more challenging, Hinton predicted. “Irony is going to be hard to get,” he said. “You have to be master of the literal first. But then, Americans don’t get irony either. Computers are going to reach the level of Americans before Brits.”

A flirtatious program would “probably be quite simple” to create, however. “It probably wouldn’t be subtly flirtatious to begin with, but it would be capable of saying borderline politically incorrect phrases,” he said.

Many of the recent advances in AI have sprung from the field of deep learning, which Hinton has been working on since the 1980s. At its core is the idea that computer programs learn how to carry out tasks by training on huge datasets, rather than being taught a set of inflexible rules.

With the advent of huge datasets and powerful processors, the approach pioneered by Hinton decades ago has come into the ascendency and underpins the work of Google’s artificial intelligence arm, DeepMind, and similar programs of research at Facebook and Microsoft.

Hinton played down concerns about the dangers of AI raised by those such as the American entrepreneur Elon Musk, who has described the technologies under development as humanity’s greatest existential threat. “The risk of something seriously dangerous happening is in the five year timeframe. Ten years at most,” Musk warned last year.

“I’m more scared about the things that have already happened,” said Hinton in response. “The NSA is already bugging everything that everybody does. Each time there’s a new revelation from Snowden, you realise the extent of it.”

“I am scared that if you make the technology work better, you help the NSA misuse it more,” he added. “I’d be more worried about that than about autonomous killer robots.”


ORIGINAL: The Guardian

Wednesday, May 20, 2015

Can We Identify Every Kind of Cell in the Body?

A microscopic quest to find out what we’re really made of.

WHY IT MATTERS
There is still no accurate atlas of human cell types.
A microfluidic device (at center) can carry out experiments on individual cells.
How many types of cells are there in the human body? Textbooks say a couple of hundred. But the true number is undoubtedly far larger.

Aviv Regev. Regev received her M.Sc. from Tel Aviv University, studying biology, computer science, and mathematics in the Interdisciplinary Program for the Fostering of Excellence. She received her Ph.D. in computational biology from Tel Aviv University. Photo: Broad Institute
Piece by piece, a new, more detailed catalogue of cell types is emerging from labs like that of Aviv Regev at the Broad Institute, in Cambridge, Massachusetts, which are applying recent advances in single-cell genomics to study individual cells at a speed and scale previously unthinkable.

The technology applied at the Broad uses fluidic systems to separate cells on microscopic conveyor belts and then submits them to detailed genetic analysis, at the rate of thousands per day. Scientists expect such technologies to find use in medical applications where small differences between cells have big consequences, including cell-based drug screens, stem-cell research, cancer treatment, and basic studies of how tissues develop.

Regev says she has been working with the new methods to classify cells in mouse retinas and human brain tumors, and she is finding cell types never seen before. “We don’t really know what we’re made of,” she says.

Other labs are racing to produce their own surveys and improve the underlying technology. Today a team led by Stephen Quake of Stanford University published its own survey of 466 individual brain cells, calling it “a first step” toward a comprehensive cellular atlas of the human brain.

Such surveys have only recently become possible, scientists say. “A couple of years ago, the challenge was to get any useful data from single cells,” says Sten Linnarsson, a single-cell biologist at the Karolinska Institute in Stockholm, Sweden. In March, Linnarsson’s group used the new techniques to map several thousand cells from a mouse’s brain, identifying 47 kinds, including some subtypes never seen before.

Historically, the best way to study a single cell was to look at it through a microscope. In cancer hospitals, that’s how pathologists decide if cells are cancerous or not: they stain them with dyes, some first introduced in the early 1900s, and consider their location and appearance. Current methods distinguish about 300 different types, says Richard Conroy, a research official at the National Institutes of Health.

Individual cells are captured and separated in bubbles of liquid, readying them for analysis.
The new technology works instead by cataloguing messenger RNA molecules inside a cell. These messages are the genetic material the nucleus sends out to make proteins. Linnarsson’s method attaches a unique molecular bar code to every RNA molecule in each cell. The result is a gene expression profile, amounting to a fingerprint of a cell that reflects its molecular activity rather than what it looks like.

“Previously, cells were defined by one or two markers,” says Linnarsson. “Now we can say what is the full complement of genes expressed in those cells.”

Although researchers determined how to accurately sequence RNA from a single cell a few years ago, it’s only more recently that clever innovations in chemistry and microfluidics have led to an explosion of data. A California company, Cellular Research, showed this year that it could sort cells into micro-wells and then measure the RNA of 3,000 separate cells at once, at a cost of a few pennies per cell.

Scientists think the new single-cell methods could overturn previous research findings. That is because previous gene expression studies were based on tissue samples or blood specimens containing thousands, even millions, of cells. Studying such blended mixtures meant researchers were seeing averages, says Eric Lander, head of the Broad Institute.

“Single-cell genomics has come of age in an unbelievable way in just the last 18 months,” Lander told an audience at the National Institutes of Health this year. “And once you realize we are at the point of doing individual cells, how could you ever put up with a fruit smoothie? It is just nuts to be doing genomics on smoothies.”
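A tiny numerical example shows what the smoothie metaphor means in practice (the numbers are invented):

```python
# Why "smoothie" genomics misleads: two cell subpopulations with opposite
# expression of a gene average out to an unremarkable middle value.
type_a = [10.0] * 50   # 50 cells expressing the gene highly
type_b = [0.0] * 50    # 50 cells not expressing it at all

bulk_average = sum(type_a + type_b) / 100
print(bulk_average)  # 5.0 -- looks like uniform moderate expression

# Single-cell profiles reveal the true bimodal structure:
distinct_levels = sorted(set(type_a + type_b))
print(distinct_levels)  # [0.0, 10.0]
```

The bulk measurement of 5.0 describes no actual cell in the sample, which is exactly why averaged tissue studies can miss rare or novel cell types.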

Lander, one of the leaders of the Human Genome Project, says it may be time to turn pilot projects like those Regev is leading into a wider effort to create a definitive atlas—one cataloguing all human cell types by gene activity and tracking them from the embryo all the way to adulthood.

It’s a little premature to declare a national or international project until there’s been more piloting, but I think it’s an idea that’s very much in the air,” Lander said in a phone interview. “I think [in two years] we’re going to be in the position where it would be crazy not to have this information. If we had a periodic table of the cells, we would be able to figure out, so to speak, the atomic composition of any given sample.

Gene profiles might eventually be combined with other efforts to study single cells. Paul Allen, Microsoft’s cofounder, said last December he would be spending $100 million to create a new scientific institute, the Allen Institute for Cell Science. It will study stem cells and video their behavior under microscopes as they develop into various cell types, with the ultimate goal of creating a massive animated model. Rick Horwitz, who leads that effort, says that it will serve as a kind of Google Earth for exploring a cell’s life cycle.


The eventual payoff of collecting all this data, says Garry Nolan, an immunologist at Stanford University, won’t be just a catalogue of cell types, but a deeper understanding of how cells work together. “The single-cell approach is a way station that needs to be understood on the way to understanding the greater system,” he says. “In 50 years, we’ll probably be measuring every molecule in the cell dynamically.”

ORIGINAL: MIT Tech Review
May 18, 2015

An Extraordinary Glimpse into the First 21 Days of a Bee’s Life in 60 Seconds



In an attempt to better understand exactly what happens as a bee grows from an egg into an adult insect, photographer Anand Varma teamed up with the bee lab at UC Davis to film the first three weeks of a bee’s life in unprecedented detail, all condensed into a 60-second clip. The video above presented by National Geographic doesn’t include commentary, but Varma explains everything in a TED talk included below.

The primary goal in photographing the bees was to learn how they interact with an invasive parasitic mite that has quickly become the greatest threat to bee colonies. Scientists have learned to breed mite-resistant bees, which they are now trying to introduce into the wild. Learn more about it in this video:


ORIGINAL: This Is Colossal
May 20, 2015 

Tuesday, May 19, 2015

Navy turns seawater into fuel and nobody cares

Navy researchers at the U.S. Naval Research Laboratory (NRL), Materials Science and Technology Division, demonstrate proof-of-concept of novel NRL technologies developed for the recovery of carbon dioxide (CO2) and hydrogen (H2) from seawater and conversion to a liquid hydrocarbon fuel.

Flying a radio-controlled replica of the historic WWII P-51 Mustang red-tail aircraft—of the legendary Tuskegee Airmen—NRL researchers (l to r) Dr. Jeffrey Baldwin, Dr. Dennis Hardy, Dr. Heather Willauer, and Dr. David Drab (crouched), successfully demonstrate a novel liquid hydrocarbon fuel to power the aircraft's unmodified two-stroke internal combustion engine. The test provides proof-of-concept for an NRL developed process to extract carbon dioxide (CO2) and produce hydrogen gas (H2) from seawater, subsequently catalytically converting the CO2 and H2 into fuel by a gas-to-liquids process.
(Photo: U.S. Naval Research Laboratory)
Fueled by a liquid hydrocarbon—a component of NRL's novel gas-to-liquid (GTL) process that uses CO2 and H2 as feedstock—the research team demonstrated sustained flight of a radio-controlled (RC) P-51 replica of the legendary Red Tail Squadron, powered by an off-the-shelf (OTS) and unmodified two-stroke internal combustion engine.

Using an innovative and proprietary NRL electrolytic cation exchange module (E-CEM), both dissolved and bound CO2 are removed from seawater at 92 percent efficiency by re-equilibrating carbonate and bicarbonate to CO2 and simultaneously producing H2. The gases are then converted to liquid hydrocarbons by a metal catalyst in a reactor system.
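The proprietary details of the E-CEM are not public, but the underlying steps are textbook carbonate chemistry: acidifying seawater shifts bicarbonate and carbonate toward dissolved CO2, while electrolysis supplies the protons and yields H2 at the cathode.

```latex
% Standard carbonate equilibria and water electrolysis (textbook chemistry,
% not NRL's proprietary process details):
\begin{align*}
\mathrm{HCO_3^- + H^+} &\rightleftharpoons \mathrm{H_2CO_3} \rightleftharpoons \mathrm{CO_2 + H_2O} \\
\mathrm{CO_3^{2-} + 2\,H^+} &\rightleftharpoons \mathrm{CO_2 + H_2O} \\
\text{cathode: } \mathrm{2\,H_2O + 2\,e^-} &\rightarrow \mathrm{H_2 + 2\,OH^-}
\end{align*}
```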

"In close collaboration with the Office of Naval Research P38 Naval Reserve program, NRL has developed a game changing technology for extracting, simultaneously, CO2 and H2 from seawater," said Dr. Heather Willauer, NRL research chemist. "This is the first time technology of this nature has been demonstrated with the potential for transition, from the laboratory, to full-scale commercial implementation."

CO2 in the air and in seawater is an abundant carbon resource, but the concentration in the ocean (100 milligrams per liter [mg/L]) is about 140 times greater than that in air, and 1/3 the concentration of CO2 from a stack gas (296 mg/L). Two to three percent of the CO2 in seawater is dissolved CO2 gas in the form of carbonic acid, one percent is carbonate, and the remaining 96 to 97 percent is bound in bicarbonate.
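The "about 140 times" figure checks out with back-of-the-envelope arithmetic (the 400 ppm atmospheric concentration and 25 °C molar volume below are my assumptions, not from the article):

```python
# Back-of-the-envelope check of the "about 140x" claim.
# Assumptions (mine, not the article's): ~400 ppm CO2 in air by volume,
# molar volume 24.45 L/mol at 25 C and 1 atm, molar mass of CO2 = 44.01 g/mol.
ppm_co2_air = 400e-6           # volume fraction of CO2 in air
molar_volume = 24.45           # L/mol
molar_mass = 44.01             # g/mol

air_mg_per_L = ppm_co2_air / molar_volume * molar_mass * 1000  # mg CO2 per L of air
seawater_mg_per_L = 100.0                                      # figure from the article

ratio = seawater_mg_per_L / air_mg_per_L
print(round(air_mg_per_L, 2), round(ratio))  # ~0.72 mg/L of air, ratio ~139
```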

NRL has made significant advances in the development of a gas-to-liquids (GTL) synthesis process to convert CO2 and H2 from seawater to a fuel-like fraction of C9-C16 molecules. In the first patented step, an iron-based catalyst has been developed that can achieve CO2 conversion levels up to 60 percent and decrease unwanted methane production in favor of longer-chain unsaturated hydrocarbons (olefins). These value-added hydrocarbons from this process serve as building blocks for the production of industrial chemicals and designer fuels.
E-CEM Carbon Capture Skid. The E-CEM was mounted onto a portable skid along with a reverse osmosis unit, power supply, pump, proprietary carbon dioxide recovery system, and hydrogen stripper to form a carbon capture system [dimensions of 63" x 36" x 60"].
(Photo: U.S. Naval Research Laboratory)
In the second step these olefins can be converted to compounds of higher molecular weight using controlled polymerization. The resulting liquid contains hydrocarbon molecules in the C9-C16 carbon range, suitable for use as a possible renewable replacement for petroleum-based jet fuel.

The predicted cost of jet fuel using these technologies is in the range of $3-$6 per gallon, and with sufficient funding and partnerships, this approach could be commercially viable within the next seven to ten years. Pursuing remote land-based options would be the first step towards a future sea-based solution.

NRL operates a lab-scale fixed-bed catalytic reactor system and the outputs of this prototype unit have confirmed the presence of the required C9-C16 molecules in the liquid. This lab-scale system is the first step towards transitioning the NRL technology into commercial modular reactor units that may be scaled-up by increasing the length and number of reactors.

The process efficiencies, and the capability to simultaneously produce large quantities of H2 and process the seawater without the need for additional chemicals or pollutants, have made these technologies far superior to previously developed and tested membrane and ion exchange technologies for recovery of CO2 from seawater or air.

Here’s a video that shows the R/C P-51 flight:


ORIGINAL: NRL Navy
May 18, 2015

NASA Publishes the World's Largest Photo. It Will Disturb Your Notion of the Universe

Don't try to download this on your computer!
This past January 15, NASA released an image of the Andromeda galaxy, the closest galaxy to planet Earth. This true work of art was captured, in part, with the NASA/ESA Hubble. How did they do it? 411 photos were taken and mounted side by side to create the largest photograph ever taken. The result? A 1.5-billion-pixel file that requires about 4.3 GB of disk space:


First & Last photo by Cory Poole: https://www.facebook.com/CoryPoolePho...

Music is 'Koda - The Last Stand' from Silk Music...
Listen: http://bit.ly/1ySODuV
Download: http://bit.ly/1CKxuE3
More from Silk Music: http://bit.ly/1ySE7p7
Super-high resolution image of Andromeda from Hubble (NASA/ESA): http://www.spacetelescope.org/images/heic1502a/

Space is crazy.

ORIGINAL: UpSocl
By Bárbara Samaniego

Sunday, May 17, 2015

Silicon Chips That See Are Going to Make Your Smartphone Brilliant

Many gadgets will be able to understand images and video thanks to chips designed to run powerful artificial-intelligence algorithms.

WHY IT MATTERS
Many applications for mobile computers could be more powerful with advanced image recognition.

Many of the devices around us may soon acquire powerful new abilities to understand images and video, thanks to hardware designed for the machine-learning technique called deep learning.

Companies like Google have made breakthroughs in image and face recognition through deep learning, using giant data sets and powerful computers (see “10 Breakthrough Technologies 2013: Deep Learning”). Now two leading chip companies and the Chinese search giant Baidu say hardware is coming that will bring the technique to phones, cars, and more.

Chip manufacturers don’t typically disclose their new features in advance. But at a conference on computer vision Tuesday, Synopsys, a company that licenses software and intellectual property to the biggest names in chip making, showed off a new image-processor core tailored for deep learning. It is expected to be added to chips that power smartphones, cameras, and cars. The core would occupy about one square millimeter of space on a chip made with one of the most commonly used manufacturing technologies.

Pierre Paulin, a director of R&D at Synopsys, told MIT Technology Review that the new processor design will be made available to his company’s customers this summer. Many have expressed strong interest in getting hold of hardware to help deploy deep learning, he said.

Synopsys showed a demo in which the new design recognized speed-limit signs in footage from a car. Paulin also presented results from using the chip to run a deep-learning network trained to recognize faces. It didn’t hit the accuracy levels of the best research results, which have been achieved on powerful computers, but it came pretty close, he said. “For applications like video surveillance it performs very well,” he said. The specialized core uses significantly less power than a conventional chip would need to do the same task.

The new core could add a degree of visual intelligence to many kinds of devices, from phones to cheap security cameras. It wouldn’t allow devices to recognize tens of thousands of objects on their own, but Paulin said they might be able to recognize dozens.

That might lead to novel kinds of camera or photo apps. Paulin said the technology could also enhance car, traffic, and surveillance cameras. For example, a home security camera could start sending data over the Internet only when a human entered the frame. “You can do fancier things like detecting if someone has fallen on the subway,” he said.
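Such a gate could look like the following sketch, where `person_score` stands in for the on-chip deep-learning classifier (everything here is invented for illustration):

```python
# Sketch of an on-device gate: only ship frames upstream when a person is
# detected locally.  `person_score` is a stand-in for an embedded
# deep-learning classifier, invented here for illustration.
def person_score(frame):
    # Pretend classifier: in reality this would run the on-chip network.
    return frame.get("person_confidence", 0.0)

def frames_to_upload(frames, threshold=0.8):
    """Keep only frames the local model is confident contain a person."""
    return [f["id"] for f in frames if person_score(f) >= threshold]

stream = [{"id": 1, "person_confidence": 0.1},
          {"id": 2, "person_confidence": 0.95},
          {"id": 3, "person_confidence": 0.4}]
print(frames_to_upload(stream))  # [2]
```

Running the classifier locally and transmitting only gated frames is what saves bandwidth and power, and, as the researchers note later, also limits how much raw personal data leaves the device.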

Jeff Gehlhaar, vice president of technology at Qualcomm Research, spoke at the event about his company’s work on getting deep learning running on apps for existing phone hardware. He declined to discuss whether the company is planning to build support for deep learning into its chips. But speaking about the industry in general, he said that such chips are surely coming. Being able to use deep learning on mobile chips will be vital to helping robots navigate and interact with the world, he said, and to efforts to develop autonomous cars.

“I think you will see custom hardware emerge to solve these problems,” he said. “Our traditional approaches to silicon are going to run out of gas, and we’ll have to roll up our sleeves and do things differently.” Gehlhaar didn’t indicate how soon that might be. Qualcomm has said that its coming generation of mobile chips will include software designed to bring deep learning to camera and other apps (see “Smartphones Will Soon Learn to Recognize Faces and More”).

Ren Wu, a researcher at the Chinese search company Baidu, also said chips that support deep learning are needed to bring the capabilities of powerful research computers into daily use. “You need to deploy that intelligence everywhere, at any place or any time,” he said.

Being able to do things like analyze images on a device without connecting to the Internet can make apps faster and more energy-efficient because it isn’t necessary to send data to and fro, said Wu. He and Qualcomm’s Gehlhaar both said that making mobile devices more intelligent could temper the privacy implications of some apps by reducing the volume of personal data such as photos transmitted off a device.

“You want the intelligence to filter out the raw data and only send the important information, the metadata, to the cloud,” said Wu.


ORIGINAL: Tech Review
May 14, 2015

Rethinking the water cycle

How moving to a circular economy can preserve our most vital resource.

Horseshoe Bend. Photo: Harmon Photography
Three billion people will join the global consumer class over the next two decades, accelerating the degradation of natural resources and escalating competition for them. Nowhere is this growing imbalance playing out more acutely than in the water sector. Already, scarcity is so pronounced that we cannot reach many of our desired economic, social, and environmental goals. If we continue business as usual, global demand for water will exceed viable resources by 40 percent by 2030.

Many experts have claimed that wasteful treatment of water results from dysfunctional political or economic systems and ill-defined markets. But the real issue is that water has been pushed into a linear model in which it becomes successively more polluted as it travels through the system, rendering future use impossible. This practice transforms our most valuable and universal resource into a worthless trickle, creating high costs for subsequent users and society at large. Since the linear model is economically and environmentally unsustainable, we must instead view water as part of a circular economy, where it retains full value after each use and eventually returns to the system. And rather than focus solely on purification, we should attempt to prevent contamination or create a system in which water circulates in closed loops, allowing repeated use. These shifts will require radical solutions grounded in a complete mind-set change, but they must happen immediately, given the urgency of the situation.

A new, ‘circular’ perspective on water management
The global water crisis is real and graphically manifest. It’s apparent in
  • rivers that no longer reach the sea, such as the Colorado;
  • exhausted aquifers in the Arabian Peninsula and elsewhere; and
  • polluted water sources like Lake Tai, one of the largest freshwater reserves in China.
The root of this challenge is the violation of the zero-waste imperative—the principle that lies at the heart of any circular economy. It rests on these three basic beliefs:
  • All durables, which are products with a long or infinite life span, must retain their value and be reused but never discarded or downcycled (broken down into parts and repurposed into new products of lesser value).
  • All consumables, which are products with a short life span, should be used as often as possible before safely returning to the biosphere.
  • Natural resources may only be used to the extent that they can be regenerated.
Even countries with advanced water-management systems violate these fundamental rules. They often fail to purify water before discharging it back into the environment because cleanup costs are high or prohibitive, even when energy or valuable chemicals could be extracted. The substances contained in the water then become pollutants. Equally troubling, any volume of water removed from the system is seldom replaced with return flow of the same quality.

When considering a redesign that will create a new, circular water system, we can take three different views:
  • the product perspective, which calls for a strict distinction between water as a consumable and water as a durable, since there are different strategies for reducing waste in each category
  • the resource perspective, which calls for a balance between withdrawals and return flows
  • the utility perspective, which focuses on maximizing the value of our existing water infrastructure by increasing utilization and ensuring better recovery and refurbishment of assets
Water as a product
If we consider water to be a product—something that is processed, enriched, and delivered—we must follow the same strict design rules applied to any other product in a circular economy.

When water is treated as a durable, it should be kept in a closed loop under zero-liquid-discharge conditions and reused as much as possible. The major goal is not to keep water free of contaminants but to manage the integrity of the closed-loop cycle. Situations that favor the durable view include those in which it would be too costly to dispose of the solvents and re-create them—for instance, when water contains highly specific water-borne solvents, electroplating baths, acids, and alkaline solutions used in heavy-duty cleaning. The Pearl Gas to Liquids complex in Qatar, for example, requires large volumes of water to convert gas to hydrocarbon liquids, including kerosene and base oil. To help prevent waste in a country plagued by shortages and droughts, the complex has a water-recycling plant—the largest of its kind—that can process 45,000 cubic meters of water per day without discharging any liquids.

When water is treated as a consumable, it must be kept pure and only brought into solution or suspension with matter that is easy or profitable to extract. For instance, consumable water should not be mixed with estrogenic hormones, toxic ink found on poor-quality toilet paper, or textile dyes. All water, including freshwater and gray water (household waste water still fit for agriculture or industrial use), should flow into subsequent cascades, where it may be used for another purpose. Whenever possible, energy and nutrients should be extracted from consumable water; there are now many revolutionary new techniques to help with this process, as well as other innovations that encourage reuse. Consider the following:

Our ability to extract energy. It is now commercially viable to generate heat and power from sludge and other organic wastes through thermal hydrolysis, which involves boiling them at high pressure followed by rapid decompression. This process sterilizes the sludge and makes it more biodegradable. Facilities at the forefront of this movement include the Billund BioRefinery in Denmark.

Our ability to extract nutrients. We can now recover a wide variety of substances from water, reducing both waste and costs. For instance, the potassium hydroxide that is used to neutralize the hydrofluoric acid in alkylation units can be extracted, decreasing costs for this substance by up to 75 percent. Substances can also be removed from sludge, such as polyhydroxyalkanoates and other biodegradable polyesters. The technology has advanced so much that value can be obtained from substances that were formerly only regarded as contaminants. For instance, ammonia removed from water can be used in the production of ammonium sulfate fertilizer, rather than simply discarded.

Our ability to reuse water. We are witnessing significant improvements in membrane-based treatments that separate water from contaminants, allowing for reuse and commercialization at grand scale. Many types of water benefit from this treatment, from gray water to Singapore’s branded NEWater, which is high-grade reclaimed water. In fact, NEWater is so pure that it is mainly used by wafer-fabrication plants, which have more stringent quality standards than those used for drinking water. In addition to innovative membrane-based technologies, experts have developed new source-separation systems that reduce mixing between chemical-carrying industrial and household waste water, making purification easier.

Although we should celebrate these improvements in treating water and safely returning it to the system, the creation of a truly circular economy will eventually require an even more radical step: preventing impurities and contamination in the first place. In the European Union, for instance, 95 kilograms of nitrate per hectare are washed from fields into rivers (more than the 80 kilograms allowed). Stopping this runoff would reduce both waste and contamination.

Water as a resource
Water can come in the form of a finite stock or a renewable flow. As one example, water used for agriculture in Saudi Arabia comes almost exclusively from fossil aquifers that will be depleted in a few decades. Since these stocks are difficult to regenerate, future Saudi agriculture efforts must eventually involve new irrigation sources, such as gray water, and follow more stringent guidelines for reducing waste.

Luckily, most hydrological systems are flow systems, such as rivers or replenishable aquifers. Water from such systems can be withdrawn or consumed as long as the volume taken does not exceed the natural replenishment rate or cut into the minimum “environmental flow” required to keep the ecosystem intact. Nothing is more circular than managing the water balance of a river basin in a rigorous and integrated fashion. Investing in strategies that promote the vitality of a watershed is also circular, including those that involve
  • better forest management (protection, reforestation, and forest-fuel-reduction programs that help control or eliminate wildfires), 
  • improved agricultural practices (such as no-tillage farming), and 
  • restoration of wetlands.
The list of highly successful watershed-protection programs is long, ranging in location from New York’s Catskill Mountains to Bogotá, and many additional opportunities exist.
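The basin rule described above (withdrawals must not exceed natural replenishment minus the minimum environmental flow) can be expressed as a simple balance check. The following is an illustrative Python sketch with made-up numbers, not something from the original article:

```python
# Water-balance check for a river basin, per the circular rule above.
# All figures are hypothetical, in millions of cubic meters per year.

def sustainable_withdrawal(natural_inflow: float, environmental_flow: float) -> float:
    """Volume that can be taken without cutting into the minimum
    environmental flow needed to keep the ecosystem intact."""
    return max(natural_inflow - environmental_flow, 0.0)

def is_circular(withdrawal: float, natural_inflow: float,
                environmental_flow: float) -> bool:
    """A basin is managed circularly if withdrawals stay within the
    sustainable volume."""
    return withdrawal <= sustainable_withdrawal(natural_inflow, environmental_flow)

# Hypothetical basin: 500 units of annual inflow, 350 reserved as environmental flow.
print(sustainable_withdrawal(500.0, 350.0))  # 150.0
print(is_circular(180.0, 500.0, 350.0))      # False: the basin is being over-drafted
```

The same check extends naturally to stocks like fossil aquifers, where the replenishment rate is close to zero and the sustainable withdrawal therefore approaches zero as well.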

Technologies that help balance supply and demand can also help water (both stock and flow) become part of a circular model. These include drip-irrigation systems that promote conservation by delivering water directly to root zones, irrigation scheduling, new technologies for steel dedusting that use air instead of water, and the application of Leadership in Energy & Environmental Design (LEED) principles, which mandate the inclusion of water-saving devices.

Water as an infrastructure system
Our global water networks and treatment plants, which are worth approximately $140 billion, consume about 10 to 15 percent of national power production. Following the principles of a circular economy, we must maximize the benefits derived from these deployed assets. These approaches may help:

Using existing assets for more services. Utilities have many options here. For instance,

  • they could allow telecommunication companies to install fiber cables through their trenches for a fee and then charge for their maintenance, or 
  • they could use their sewage systems and wastewater-treatment facilities to collect and treat preprocessed food waste with sewage sludge.

Using the latter technique, New York State has begun a program that has the potential to process 500 tons of food waste daily, generating heat for 5,200 homes. Utilities could also provide their data to governments or other interested parties for use in various initiatives, such as those related to healthcare or flood management.

Selling performance, not water. Instead of selling water and charging by the cubic meter, utilities could pay consumers for curbing use and then sell the conserved volume, termed “nega water,” back to the system. Such an effort, and similar initiatives, would require a major overhaul of rate-setting mechanisms. Utilities should also promote conservation by selling double-flush toilets and similar devices, or by offering different levels of service, pricing, and convenience that encourage consumers to reduce use. Rate-setting mechanisms should likewise reward utilities for undertaking water-conservation efforts of their own.
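One way to picture the “nega water” idea: conserved volume is the gap between a consumer’s baseline and actual use; the utility rebates the consumer for it and resells that volume to the system. Here is a minimal Python sketch assuming that simple baseline-minus-actual definition and hypothetical rates (none of these figures appear in the article):

```python
# "Nega water" settlement sketch; rates and volumes are invented for illustration.

def nega_water_settlement(baseline_m3: float, actual_m3: float,
                          rebate_per_m3: float, resale_price_per_m3: float):
    """Return (consumer_rebate, utility_margin) for one billing period."""
    conserved = max(baseline_m3 - actual_m3, 0.0)  # volume freed up by conservation
    rebate = conserved * rebate_per_m3             # paid out to the consumer
    margin = conserved * (resale_price_per_m3 - rebate_per_m3)  # kept by the utility
    return rebate, margin

# Household cuts use from 20 to 14 m^3/month; rebate 0.50, resale 0.80 per m^3.
rebate, margin = nega_water_settlement(20.0, 14.0, 0.50, 0.80)
print(round(rebate, 2), round(margin, 2))  # 3.0 1.8
```

The sketch makes the rate-setting problem concrete: the scheme only works if the resale price of conserved water exceeds the rebate paid for it, which is exactly the kind of mechanism overhaul the paragraph above calls for.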

Driving asset recovery. Utilities should establish asset-recovery centers and create procedures that promote reuse of equipment, including standardizing their pipes and meters so they can be easily recovered and refurbished. Utilities should also begin tracking assets, which will make such reuse easier.

Optimizing resource efficiency. Finally, utilities should invest in ever more efficient operations and use green power, ideally generated in-house, whenever possible. They should be given incentives for doing so—something that does not typically happen today. In many cases, anaerobic digestion of sludge alone produces biogas that covers more than 60 percent of the energy consumed at wastewater-treatment plants.

Next-generation moves for water-system management
Innovators, responsible operators, and committed system developers are spearheading the creation of new technological solutions, pilot cases, and initiatives to improve water management. Many of the technologies are already generating profits or will be soon. These include the bespoke polymers created during the biological digestion of wastewater, as well as vapor-transfer irrigation systems that use low-cost plastic tubes that allow water vapor to pass but not liquid water or solutes, making saltwater irrigation possible.

Equally important, leaders are also rethinking their institutional approach to water management. Many of their solutions are only being applied at small scale, however, and this must change over the next ten years to meet the water-resource challenge. So how can the water sector drive the much-needed system-level transition from today’s linear model to tomorrow’s circular design? What are the attractive, integrated plays? Five ideas stand out:
  1. Product-design partnerships. Even in 2015, there is no dialogue between producers—say, of atrazine herbicides, antimicrobial disinfectants, or detergent metabolites—and wastewater operators. Their relationship resembles that between a distant water source and a sink, with diluted accountabilities. As the cost of treatment mounts, pressure will increase on producers to reduce contamination, especially as new technologies make it easier to identify their source. Shouldn’t wastewater operators help by offering their expertise to producers and initiating product-design partnerships to ensure that water stays pure after use?
  2. Resource-positive utilities. Wastewater utilities are ubiquitous, visible, and largely similar. They could soon become energy positive thanks to technical advances related to sludge methanization, waste-heat recovery, potassium hydroxide reduction, or on-site distributed power generation. Who will champion further advances, including those that aim to convert wastewater to energy, integrate grids, and recover nutrients?
  3. Management for yield. Water is a powerful driver of yield in almost any industrial process and the extraction of raw materials. Improved site-level water management can increase beverage yields by 5 percent and oil-well productivity by 20 percent, largely benefitting the bottom line. It can also convey many other advantages, such as reduced heat or nutrient loss during processing. Taken together, these advantages can turn water into a major value driver. For instance, one pulp-and-paper producer discovered that it could improve margins by 7 percentage points through better water management, leaving a much more circular operation behind. Who will help other companies find such value?
  4. Basin management. From Évian-les-Bains to Quito, floodplain protection is a viable method for reducing the risk of flooding and preventing freshwater contamination. But attempts to improve basin management often fail because they require sophisticated multiparty contracts and a deep knowledge of hydrology and engineering. Who will help connect interested parties and minimize the bureaucracy associated with basin-management agreements?
  5. Local organic nutrient cycles. Most communities are struggling to handle low-quality sludge and fragmented, contaminated streams of organic waste coming from households and businesses. Simultaneously, agriculture experts are exploring new sources for nutrients, since mineral fertilizer will soon be in short supply. If we aggregate local organic waste flows, we could help communities deal with their problem while also creating vibrant local markets for fertilizer components. Who will create and manage the local organic nutrient cycle of the future?
Each of these plays represents a new way of looking at water and a huge business opportunity. They give the industry a chance to reposition itself and develop a new generation of designers, power engineers, yield managers, ecosystem-services marketers, and synthesis-gas tycoons.

The shift to a circular water economy holds much promise. It would replace scarcity with abundance and greatly reduce the resources needed to run our global water infrastructure. At some point, a circular water economy might even eliminate rapidly growing cleanup costs because no harmful substances would ever be added to the water supply. Since water is the single most important shared resource across all supply chains, and wastewater is the largest untapped waste category—as big as all solid-waste categories taken together—it is the natural starting point for the circular revolution. The water sector’s advanced technologies and proven record of multistakeholder agreements also lend themselves to circular solutions. We must capture this unique opportunity now, before localized droughts and shortages become a global crisis.

About the author
Martin Stuchtey is a director in McKinsey’s Munich office and leads the McKinsey Center for Business and Environment.

The author wishes to thank Ellen MacArthur for her contributions to this paper. The Ellen MacArthur Foundation promotes educational and business initiatives that further the development of the circular economy. He would also like to thank Laurent Auguste, the senior executive vice president of innovation and markets at Veolia.


ORIGINAL: McKinsey
by Martin Stuchtey
May 2015