Welcome to Ciencia en Canoa, an initiative created by Vanessa Restrepo Schild.

Friday, July 3, 2015

Google's AI bot thinks the purpose of life is 'to live forever'

Image: Google
This week, Google released a research paper chronicling one of its latest forays into artificial intelligence.

Researchers at the company programmed an advanced type of ‘chatbot’ that learns how to respond in conversations based on examples from a training set of dialogue. And the bot doesn’t just answer by spitting out canned answers in response to certain words; it can form new answers to new questions.
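For readers curious about the mechanics, the paper describes a sequence-to-sequence neural network trained on pairs of dialogue turns. The following is a minimal, hypothetical sketch of that idea in Python with PyTorch; the toy vocabulary, layer sizes, and single training pair are invented for illustration and are not the researchers' actual code or data.

# Minimal sequence-to-sequence chatbot sketch (illustrative only).
import torch
import torch.nn as nn

vocab = {"<pad>": 0, "<sos>": 1, "<eos>": 2, "what": 3, "is": 4, "the": 5,
         "purpose": 6, "of": 7, "life": 8, "to": 9, "serve": 10,
         "greater": 11, "good": 12}
V, H = len(vocab), 64

class Seq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(V, H)
        self.encoder = nn.LSTM(H, H, batch_first=True)   # reads the question
        self.decoder = nn.LSTM(H, H, batch_first=True)   # generates the answer
        self.out = nn.Linear(H, V)

    def forward(self, question, answer_in):
        _, state = self.encoder(self.embed(question))    # summarize the question
        dec, _ = self.decoder(self.embed(answer_in), state)
        return self.out(dec)                             # next-token scores

def ids(words):
    return torch.tensor([[vocab[w] for w in words]])

question = ids(["what", "is", "the", "purpose", "of", "life"])
answer_in = ids(["<sos>", "to", "serve", "the", "greater", "good"])
answer_out = ids(["to", "serve", "the", "greater", "good", "<eos>"])

model = Seq2Seq()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):  # overfit a single pair, just to show the mechanics
    logits = model(question, answer_in)
    loss = loss_fn(logits.view(-1, V), answer_out.view(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()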

That flexibility meant Google’s researchers could get a little creative with it, and they certainly did - they asked the bot everything from boring IT questions to the meaning of life.

The responses were alternately impressive, amusing, and unnerving.

Here are some of the most poetic interactions from the paper (sic):
Human: What is the purpose of life?
Machine: To serve the greater good.

Human: What is the purpose of living?
Machine: To live forever.

Human: Where are you now?
Machine: I’m in the middle of nowhere.

Human: What is the purpose of dying?
Machine: To have a life.

Human: What is the purpose of emotions?
Machine: I don’t know.

Human: What is moral?
Machine: What empowered humanity, what intellectual the essence is.

Human: What is immoral?
Machine: The fact that you have a child.

Human: What is morality?
Machine: What is altruism?

Human: What is the definition of altruism?
Machine: If you don't believe in god, then you don’t know.

Human: OK…so what is the definition of morality?
Machine: Well, the truth is, you're not a believer in god almighty.

Human: Tell me the definition of morality, I am quite upset now!
Machine: I’m not ashamed of being a philosopher!

The bot also, strangely, took a little time to speak positively of Microsoft co-founder Bill Gates…
Human: What do you think about Bill Gates?
Machine: He’s a good man.
The researchers said they found it "encouraging that the model can remember facts, understand contexts, perform common sense reasoning without the complexity in traditional pipelines", and that it could generalise to new questions.

What they weren’t as happy about was that the model "only gives simple, short, sometimes unsatisfying answers to our questions as can be seen above".

You can read the whole paper, 'A Neural Conversational Model' here.

This article was originally published by Business Insider.

ORIGINAL: Science Alert
NATHAN MCALONE, BUSINESS INSIDER
27 JUN 2015

One of the Most Important Tools in Science Now Fits Inside Your Phone

A spectrometer that fits in your mobile devices could let you scan yourself for skin cancer.

Illustration by Mary O'Reilly
We use them to spy on exoplanets, diagnose skin cancer, and ID the makeup of unknown chemicals. They're on NASA spacecraft flying around Saturn's moons right now. Yes, right alongside the microscope, the optical spectrometer—an instrument that breaks down the light that something reflects or emits, telling you what it's made of—is one of the most ubiquitous tools in all of science. Today, Jie Bao, a physicist at Tsinghua University in Beijing, China, has just discovered a fascinating way to make them smaller, lighter, and less expensive than we ever thought possible.

By using tiny amounts of strange, light-sensitive inks, Bao and his colleague Moungi Bawendi—a chemist at MIT—have designed a working spectrometer that's small enough to fit on your smartphone. Because of the tool's simple design and its need for only an incredibly small amount of the inks, Bao says, his spectrometer only requires a few dollars worth of materials to make. They report the research today in the journal Nature.

"THAT'S WHAT I'M REALLY HOPING FOR, SEEING THEM IN CELLPHONES IN THE VERY NEAR FUTURE."

"Of course we still have a lot of room for improvement. But performance-wise, even at this preliminary stage, our spectrometer works very close to what's currently being sold in the market," Bao says. "I think that's one of the most attractive results of our research: [This spectrometer] is already so close to a real product."

Printable Detectors
The way spectrometers work goes back to the 17th century, when Isaac Newton showed that a prism could break up white light into distinct bars (technically, wavelengths) of different-colored light. Depending on the source of the light—say, a candle or the sun—that rainbow spectrum would change. Today, we know this happens because the atomic or molecular makeup of everything that either gives off or reflects light leaves an indelible fingerprint. And if you understand which materials leave which fingerprints, you can use light alone to find out what something is made of.

Bao says most modern spectrometers are made in more or less the same way. They diffract incoming light, then push it through a mechanically movable slit to see exactly which wavelengths of light fit through which slits. This setup, because it involves complex moving pieces, is a total pain to shrink down in size. It's expensive, too, because accurate spectrometers require high-precision components and delicate alignment.
Jie Bao
But Bao's spectrometer works in a much simpler way. As if making micro-sized stained glass windows, Bao prints a tiny grid of 195 different-colored liquid inks directly onto a flat sensor. (That sensor, called a CCD sensor, is what your phone's camera uses to pick up light.) Each of the 195 windows is made of a material called colloidal quantum dots, and each "absorbs certain wavelengths of light, and lets others go," says Bao. When light hits each window and travels through, the underlying sensor records how the light changed. Later, a computer can compare the data from all of the windows and reconstruct what wavelengths made up the original light.
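To make the reconstruction step concrete, here is a small, hypothetical sketch of the kind of math a computer could use to recover a spectrum from the 195 filtered readings: if each window's transmission curve is known from calibration, the unknown spectrum can be estimated with a least-squares fit. The transmission curves and spectrum below are random stand-ins, not Bao's calibration data.

# Illustrative reconstruction of a spectrum from 195 filtered readings.
# The transmission curves and the "true" spectrum are random placeholders.
import numpy as np

n_filters, n_wavelengths = 195, 150
rng = np.random.default_rng(0)

# Each row: how strongly one quantum-dot window transmits each wavelength.
transmission = rng.random((n_filters, n_wavelengths))

true_spectrum = rng.random(n_wavelengths)     # the unknown incoming light
readings = transmission @ true_spectrum       # what the CCD pixels record

# Recover the spectrum from the readings with a least-squares fit.
estimate, *_ = np.linalg.lstsq(transmission, readings, rcond=None)
print(np.allclose(estimate, true_spectrum))   # True in this noise-free toy case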

Cellphone Spectrometers
Right now, Bao's spectrometer is about the size of a quarter, and he says the underlying CCD sensors he uses can be bought online for less than a dollar a pop. Because he's using just a tiny drop of each of the colloidal quantum dot inks (which have only recently been developed), the cost of all 195 drops is only on the order of a few dollars.

"THE PEOPLE WHO ARE PLANNING SPACE MISSIONS ARE WEIGHING EVERY GRAM."

Because spectrometers are so widely used in science, Bao sees a rainbow of possible uses for his new device. For one, he says, his spectrometers could be easily integrated into commercial smartwatches and phones, allowing everyday people to do things like self-identify skin cancer. "That's what I'm really hoping for, seeing them in cellphones in the very near future," he says.

And because spectrometers are so widely used on exploratory spacecraft, Bao sees an easier and far cheaper way to deck out the next generation of space explorers. "The people who are planning space missions are pretty much weighing every gram, and so this would be a very easy way to lose weight."

Jul 1, 2015 

Wednesday, July 1, 2015

Tesla Motors co-founder wants to electrify commercial trucks

Wrightspeed CEO Ian Wright displays some of the company's electric-powered trucks on Thursday, Feb. 12, 2015, in San Jose , Calif. Wright helped bring electric cars to market when he co-founded Tesla Motors a decade ago. Now he's targeting trucks that deliver packages, haul trash and make frequent stops on city streets. His company, Wrightspeed, makes electric powertrains that can be installed on commercial trucks, making them more energy-efficient. (AP Photo/Marcio Jose Sanchez)


Twelve years ago, Ian Wright and some fellow engineers launched Tesla Motors, a Silicon Valley company that has helped jumpstart the market for electric cars.

Now, the Tesla co-founder wants to electrify noisy, gas-guzzling trucks that deliver packages, haul garbage and make frequent stops on city streets.

His latest venture, Wrightspeed, doesn't make the whole truck. Rather, it sells electric powertrains that can be installed on medium- and heavy-duty commercial vehicles, making them cleaner, quieter and more energy-efficient.

"We save a lot on fuel. We save a lot on maintenance, and we make the emissions compliance much easier," said Wright, a New Zealand-born engineer who left Tesla when it was still a small startup in 2005.

Wrightspeed is one of a growing number of companies that are trying to transform the market for commercial trucks that consume billions of gallons of fuel while spewing tons of carbon dioxide, nitrogen oxide and other pollutants.

While more consumers are switching to electric cars like the Nissan Leaf, Chevy Volt or Tesla Model S, convincing commercial fleet owners to replace their diesel trucks won't be easy.

"It takes a lot of technological ambition to break into such an old and established market," said Mark Duvall, a research director at the Electric Power Research Institute in Palo Alto. "If you want to sell a fleet owner an electric truck, you have to convince them that it's better than what they're already using. So the bar is set very high."

Wright's company is installing its powertrains on 25 FedEx delivery trucks and 17 garbage trucks for the Ratto Group, a Santa Rosa-based waste management company. Its plug-in powertrains feature an electric motor, a battery system and an on-board power generator that runs on diesel or natural gas and recharges the battery when it gets low.


Wrightspeed CEO Ian Wright explains the technology behind an electric-powered engine which will be used for FedEx delivery trucks at the company's headquarters, Thursday, Feb. 12, 2015, in San Jose, Calif. Wright helped bring electric cars to market when he co-founded Tesla Motors a decade ago. Now he's targeting trucks that deliver packages, haul trash and make frequent stops on city streets. His company, Wrightspeed, makes electric powertrains that can be installed on commercial trucks, making them more energy-efficient. (AP Photo/Marcio Jose Sanchez)

It costs $150,000 to $200,000 to install a Wrightspeed powertrain, while a new garbage truck, by comparison, costs about $500,000.

The San Jose-based company is generating interest from trucking fleet owners who are scrambling to meet California's increasingly strict emissions standards but don't want to replace the entire vehicle, Wright said.

"You can take this truck that you've invested all this money in and it's still in good shape, and you can swap out the powertrain for our powertrain and suddenly you're emissions-compliant," Wright said.

To meet growing demand, Wrightspeed is preparing to move into a former aircraft hangar at the decommissioned U.S. Naval Air Station in Alameda. The company plans to expand its workforce from roughly 25 to 250 employees over the next three years, Wright said.

The Ratto Group, which collects garbage and recycling in Marin and Sonoma counties, is giving Wrightspeed a chance, retrofitting some of its garbage trucks. It could convert many more of its 300 vehicles if the technology performs as hoped.

"My hope is that this is something we can prove together with Wrightspeed and that we can be on the cutting edge of this, and I can transform my fleet and hopefully transform my industry," said Lou Ratto, the company's chief operating officer.

Wrightspeed CEO Ian Wright unplugs an electric-powered truck as he gets ready for a test drive at the company's headquarters Thursday, Feb. 12, 2015, in San Jose , Calif. Wright helped bring electric cars to market when he co-founded Tesla Motors a decade ago. Now he's targeting trucks that deliver packages, haul trash and make frequent stops on city streets. His company, Wrightspeed, makes electric powertrains that can be installed on commercial trucks, making them more energy-efficient. (AP Photo/Marcio Jose Sanchez)

When Wright left Tesla to start Wrightspeed, he planned to make high-performance electric sports cars, but he struggled to raise money from investors because the market wasn't big enough.

So Wright switched gears and set his sights on the roughly 2.2 million commercial trucks in the U.S. that burn roughly 20 times more fuel than a passenger car and are a major source of air pollution and greenhouse gas emissions.

Wrightspeed has raised Silicon Valley venture capital and secured grants from the California Energy Commission, which funds alternative fuel technologies that could help reduce the state's dependence on fossil fuels. Gov. Jerry Brown wants to cut petroleum use in cars and trucks by 50 percent by 2030.



Wrightspeed CEO Ian Wright drives an electric-powered truck at the company's headquarters Thursday, Feb. 12, 2015, in San Jose, Calif. Wright helped bring electric cars to market when he co-founded Tesla Motors a decade ago. Now he's targeting trucks that deliver packages, haul trash and make frequent stops on city streets. His company, Wrightspeed, makes electric powertrains that can be installed on commercial trucks, making them more energy-efficient. (AP Photo/Marcio Jose Sanchez)        

A truck with a Wrightspeed powertrain can run on batteries for about 30 miles before the turbine, which runs on diesel or natural gas, kicks in and recharges the battery. The system roughly doubles the fuel efficiency of trucks and reduces the cost of maintenance, Wright said.
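As a rough, back-of-the-envelope illustration of where those savings could come from (the route length and baseline mileage below are invented assumptions, not Wrightspeed figures, and the "doubled efficiency" is applied only to the miles beyond the battery range as a simplification):

# Hypothetical daily fuel use for a retrofitted truck vs. a stock diesel truck.
daily_miles = 100               # assumed route length
baseline_mpg = 3.0              # assumed stock garbage-truck efficiency
electric_miles = 30             # battery-only range quoted in the article
doubled_mpg = 2 * baseline_mpg  # "roughly doubles the fuel efficiency"

stock_gallons = daily_miles / baseline_mpg
retrofit_gallons = max(daily_miles - electric_miles, 0) / doubled_mpg

print(f"stock: {stock_gallons:.1f} gal/day, retrofit: {retrofit_gallons:.1f} gal/day")
# stock: 33.3 gal/day, retrofit: 11.7 gal/day under these assumed numbers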

While trucks aren't as sleek as Tesla's sports cars, a popular vehicle among Silicon Valley elites, Wright believes his powertrains can do more to reduce carbon pollution because trucks are such heavy polluters.

"I think what they've done is absolutely fantastic, but what we're doing is the next thing," Wright said. "It's even better."

The Route™
A Range-Extending Generator
  • Decoupled loads: generator works to charge batteries, while batteries provide power to turn the wheels.
  • Open generator architecture: the Wrightspeed Route™ can use any number of on-board power generation options, including micro-turbines and piston engines.
  • No range anxiety! The Route™, given refueling, has an unlimited range.

B High-Power Battery System
  • Exceptional regen capability: 400 hp
  • The high-power battery system provides peak power, allowing the generator to operate at its most efficient point.
  • PLUG IT IN, for up to 40 miles on grid energy.

C Extended Battery Life
  • Actively cooled for long battery life.
  • Software protected for long battery life.

D The Geared Traction Drive (GTD)
  • Integrates an electric motor, a two-speed gear box & an inverter
  • Two per driveaxle
  • Compact, lightweight
  • Direct traction control at drivewheels
  • 125-250 horsepower, continuous
  • 18,000 ft-lbs total axle torque
  • 75 mph max speed (limited)
  • 400 hp regen braking power, with slip limiting

E Fuel Tank
The Wrightspeed Route™ can be custom fitted to run diesel, gasoline, biofuels, CNG, or landfill gas at up to 7% sulfides.

The Route is a plug and play repower kit for medium duty fleet trucks that maximizes efficiency without compromising performance.

Torque / Power for The Route
 

Max Continuous Power: 125-250 hp (software limited, fully configurable)
Max Wheel Torque: 18,000 ft-lbs (for each drive axle, 9,000 ft-lbs/wheel)
Max Regenerative Power: 400 hp (0.3 g braking below 42 mph @ 12k lbs)
Fuel System Options: 
  • Liquid: diesel, biodiesel, Jet A 
  • Gaseous: natural gas, propane, sour gas 
Emissions: CARB 2010 HDD, with no exhaust after-treatment


Technology

The GTD (Geared Traction Drive)

Geared Traction Drive with Inverter
Wrightspeed was able to make the fully integrated GTD by designing the inverter, 250 hp motor, and multi-speed transmission from the ground up, optimizing compatibility and system efficiency.
Geared Traction Drive
250 hp motor (shown here without inverter)
Geared Traction Drive with Inverter: integrated inverter, electric motor, and gearbox
This Silicon Valley “systems approach” has yielded an exceptionally high-power, compact, and lightweight system. Wrightspeed’s Drivesystem is geared to optimize low-end torque, for hauling up hills, while preserving top highway cruising speeds.

Wrightspeed demonstrates software-controlled clutchless shifting



Wrightspeed’s Powertrains move the complexity from mechanical systems into electronic and software systems, making them lighter, cheaper, and more efficient. Clutchless gear shifting is a good example of this:
  • Traditional multi-speed transmissions use clutches (synchro rings, multi-disc wet clutches, twin-clutch arrangements) to achieve synchronization before engagement; this makes them heavy, expensive, and less efficient. But with electric motors, it becomes possible to control the motor speed so precisely, and change it so quickly, that the shifter dog-clutches can be engaged without clashing. The sync function that used to be performed by mechanical means has been shifted into software control of electronics, driving the electric motor with precision. The system is therefore lighter, cheaper, and more efficient. Wrightspeed’s control software weighs nothing, costs nothing to manufacture, doesn’t wear out, and uses the electronics that are already present to drive the motor.
  • The Wrightspeed GTD is shown here, on a dynamometer, simulating acceleration from stop to first gear, shifting to second gear, back down to first gear and then a full torque stop. The shifting is too quiet to hear. The sound produced here is the motor accelerating and decelerating at maximum torque. The GTD jumps when the motor changes speed due to torque reaction (torque reaction can be observed under the hood of a conventional car, when accelerated hard in neutral).

Simulated Truck Acceleration, Full Torque Stop

  1. Motor is accelerated to 20,700 rpm in first gear

Upshift (shift from 1st gear to 2nd gear)
  2. Motor torque is reduced to zero
  3. Shift actuator is moved to neutral position (mid-stroke)
  4. Motor speed is synchronized to the correct speed for second gear, 9,000 rpm (80 ms)
  5. Shift actuator moves to second gear
  6. Torque is reapplied to maintain “vehicle speed”

Downshift (shift from 2nd gear to 1st gear)
  7. Motor torque is reduced to zero
  8. Shift actuator is moved to neutral position (mid-stroke)
  9. Motor speed is synchronized to the correct speed for first gear, 20,700 rpm (80 ms)
  10. Shift actuator moves to first gear

Full-Torque Traction Drive Stop
  11. Motor comes to full stop
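One rough way to picture that control logic is as a short, timed sequence in software. The sketch below is a hypothetical simplification in Python; the motor and actuator interfaces, the timing loop, and the torque value are invented placeholders, not Wrightspeed's control code, though the rpm targets and the 80 ms synchronization come from the sequence above.

# Hypothetical sketch of a clutchless shift sequence (not Wrightspeed's code).
import time

FIRST_GEAR_RPM, SECOND_GEAR_RPM = 20700, 9000
SYNC_TIME_S = 0.08  # the 80 ms speed synchronization quoted above

class Motor:
    def set_torque(self, nm): print(f"torque -> {nm} Nm")
    def set_speed(self, rpm): print(f"speed  -> {rpm} rpm")

class ShiftActuator:
    def move_to(self, position): print(f"actuator -> {position}")

def shift(motor, actuator, target_gear, target_rpm):
    motor.set_torque(0)            # 1. unload the dog clutches
    actuator.move_to("neutral")    # 2. pull out of the current gear (mid-stroke)
    motor.set_speed(target_rpm)    # 3. synchronize speed in software
    time.sleep(SYNC_TIME_S)        #    (replaces mechanical synchros)
    actuator.move_to(target_gear)  # 4. engage the new gear without clashing
    motor.set_torque(1000)         # 5. reapply torque (placeholder value)

motor, actuator = Motor(), ShiftActuator()
shift(motor, actuator, "second", SECOND_GEAR_RPM)  # upshift
shift(motor, actuator, "first", FIRST_GEAR_RPM)    # downshift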

ORIGINAL: Physorg.com
by Terence Chea
Jun 02, 2015
 

Synthetic Blood Transfusions Are Coming



Cancer-curing Cylon baby blood may still be a fantasy, but within the next two years, two human volunteers will be receiving the very first transfusions of blood manufactured in a lab, the British National Health Service announced last week.


Technically, what the NHS is calling the world’s first “synthetic blood” is actually biological in origin: It’s produced in vitro by extracting stem cells from the umbilical cords of newborn babies or from adult bone marrow. Placed in the proper chemical environment in the lab, stem cells can be stimulated along a particular developmental pathway that eventually leads to fully-functional red blood cells. Researchers have been developing the technology to manipulate stem cells for years, and now, our tools have advanced to the point that scaling up and producing entire blood bags seems within reach. 

Sure, synthetic blood may sound a bit creepy, but much like lab-grown tissue transplants or replacement organs, it’ll actually be brilliant if we can make it work. Eventually, hospitals could stockpile huge quantities of the stuff for emergency transfusions, or design batches specifically for patients suffering from sickle-cell anemia and other rare blood disorders. What’s more, having never been inside a human body, lab-grown blood is practically guaranteed to be disease-free, and has the potential to dramatically reduce the risk of spreading blood-borne diseases like HIV and hepatitis.

For the first human trials, volunteers will be injected with a few teaspoons of the stuff to test for adverse reactions. Test transfusions will also allow scientists to study how long their lab-grown blood cells survive in a human host. So far, preliminary tests show that synthetic red blood cells are biologically comparable, if not identical, to blood cells produced the ol’ fashioned way. But biology is full of surprises, and we’ll never know for sure until we try ’em out in humans.

[The Independent]


By Maddie Stone



ORIGINAL: Gizmodo

An executive’s guide to machine learning



Machine learning is no longer the preserve of artificial-intelligence researchers and born-digital companies like Amazon, Google, and Netflix.

Machine learning is based on algorithms that can learn from data without relying on rules-based programming. It came into its own as a scientific discipline in the late 1990s as steady advances in digitization and cheap computing power enabled data scientists to stop building finished models and instead train computers to do so. The unmanageable volume and complexity of the big data that the world is now swimming in have increased the potential of machine learning—and the need for it.

Stanford's Fei-Fei Li
In 2007 Fei-Fei Li, the head of Stanford’s Artificial Intelligence Lab, gave up trying to program computers to recognize objects and began labeling the millions of raw images that a child might encounter by age three and feeding them to computers. By being shown thousands and thousands of labeled data sets with instances of, say, a cat, the machine could shape its own rules for deciding whether a particular set of digital pixels was, in fact, a cat.1 Last November, Li’s team unveiled a program that identifies the visual elements of any picture with a high degree of accuracy. IBM’s Watson machine relied on a similar self-generated scoring system among hundreds of potential answers to crush the world’s best Jeopardy! players in 2011.

Dazzling as such feats are, machine learning is nothing like learning in the human sense (yet). But what it already does extraordinarily well—and will get better at—is relentlessly chewing through any amount of data and every combination of variables. Because machine learning’s emergence as a mainstream management tool is relatively recent, it often raises questions. In this article, we’ve posed some that we often hear and answered them in a way we hope will be useful for any executive. Now is the time to grapple with these issues, because the competitive significance of business models turbocharged by machine learning is poised to surge. Indeed, management author Ram Charan suggests that any organization that is not a math house now or is unable to become one soon is already a legacy company.2

1. How are traditional industries using machine learning to gather fresh business insights?
Well, let’s start with sports. This past spring, contenders for the US National Basketball Association championship relied on the analytics of Second Spectrum, a California machine-learning start-up. By digitizing the past few seasons’ games, it has created predictive models that allow a coach to distinguish between, as CEO Rajiv Maheswaran puts it, “a bad shooter who takes good shots and a good shooter who takes bad shots”—and to adjust his decisions accordingly.

You can’t get more venerable or traditional than General Electric, the only member of the original Dow Jones Industrial Average still around after 119 years. GE already makes hundreds of millions of dollars by crunching the data it collects from deep-sea oil wells or jet engines to optimize performance, anticipate breakdowns, and streamline maintenance. But Colin Parris, who joined GE Software from IBM late last year as vice president of software research, believes that continued advances in data-processing power, sensors, and predictive algorithms will soon give his company the same sharpness of insight into the individual vagaries of a jet engine that Google has into the online behavior of a 24-year-old netizen from West Hollywood.

2. What about outside North America?
In Europe, more than a dozen banks have replaced older statistical-modeling approaches with machine-learning techniques and, in some cases, experienced 10 percent increases in sales of new products, 20 percent savings in capital expenditures, 20 percent increases in cash collections, and 20 percent declines in churn. The banks have achieved these gains by devising new recommendation engines for clients in retailing and in small and medium-sized companies. They have also built microtargeted models that more accurately forecast who will cancel service or default on their loans, and how best to intervene.

Closer to home, as a recent article in McKinsey Quarterly notes,3 our colleagues have been applying hard analytics to the soft stuff of talent management. Last fall, they tested the ability of three algorithms developed by external vendors and one built internally to forecast, solely by examining scanned résumés, which of more than 10,000 potential recruits the firm would have accepted. The predictions strongly correlated with the real-world results. Interestingly, the machines accepted a slightly higher percentage of female candidates, which holds promise for using analytics to unlock a more diverse range of profiles and counter hidden human bias.

As ever more of the analog world gets digitized, our ability to learn from data by developing and testing algorithms will only become more important for what are now seen as traditional businesses. Google chief economist Hal Varian calls this “computer kaizen.” For “just as mass production changed the way products were assembled and continuous improvement changed how manufacturing was done,” he says, “so continuous [and often automatic] experimentation will improve the way we optimize business processes in our organizations.”4

3. What were the early foundations of machine learning?
Machine learning is based on a number of earlier building blocks, starting with classical statistics. Statistical inference does form an important foundation for the current implementations of artificial intelligence. But it’s important to recognize that classical statistical techniques were developed between the 18th and early 20th centuries for much smaller data sets than the ones we now have at our disposal. Machine learning is unconstrained by the preset assumptions of statistics. As a result, it can yield insights that human analysts do not see on their own and make predictions with ever-higher degrees of accuracy.

More recently, in the 1930s and 1940s, the pioneers of computing (such as Alan Turing, who had a deep and abiding interest in artificial intelligence) began formulating and tinkering with the basic techniques such as neural networks that make today’s machine learning possible. But those techniques stayed in the laboratory longer than many technologies did and, for the most part, had to await the development and infrastructure of powerful computers, in the late 1970s and early 1980s. That’s probably the starting point for the machine-learning adoption curve. New technologies introduced into modern economies—the steam engine, electricity, the electric motor, and computers, for example—seem to take about 80 years to transition from the laboratory to what you might call cultural invisibility. The computer hasn’t faded from sight just yet, but it’s likely to by 2040. And it probably won’t take much longer for machine learning to recede into the background.

4. What does it take to get started?
C-level executives will best exploit machine learning if they see it as a tool to craft and implement a strategic vision. But that means putting strategy first. Without strategy as a starting point, machine learning risks becoming a tool buried inside a company’s routine operations: it will provide a useful service, but its long-term value will probably be limited to an endless repetition of “cookie cutter” applications such as models for acquiring, stimulating, and retaining customers.

We find the parallels with M&A instructive. That, after all, is a means to a well-defined end. No sensible business rushes into a flurry of acquisitions or mergers and then just sits back to see what happens. Companies embarking on machine learning should make the same three commitments companies make before embracing M&A. Those commitments are:
  • first, to investigate all feasible alternatives;
  • second, to pursue the strategy wholeheartedly at the C-suite level; and
  • third, to use (or, if necessary, acquire) existing expertise and knowledge in the C-suite to guide the application of that strategy.
The people charged with creating the strategic vision may well be (or have been) data scientists. But as they define the problem and the desired outcome of the strategy, they will need guidance from C-level colleagues overseeing other crucial strategic initiatives. More broadly, companies must have two types of people to unleash the potential of machine learning.
  • “Quants” are schooled in its language and methods. 
  • “Translators” can bridge the disciplines of data, machine learning, and decision making by reframing the quants’ complex results as actionable insights that generalist managers can execute.
Effective machine learning also requires access to troves of useful and reliable data, as demonstrated by Watson’s ability, in tests, to predict oncological outcomes better than physicians, or by Facebook’s recent success teaching computers to identify specific human faces nearly as accurately as humans do. A true data strategy starts with identifying gaps in the data, determining the time and money required to fill those gaps, and breaking down silos. Too often, departments hoard information and politicize access to it—one reason some companies have created the new role of chief data officer to pull together what’s required. Other elements include putting responsibility for generating data in the hands of frontline managers.

Start small—look for low-hanging fruit and trumpet any early success. This will help recruit grassroots support and reinforce the changes in individual behavior and the employee buy-in that ultimately determine whether an organization can apply machine learning effectively. Finally, evaluate the results in the light of clearly identified criteria for success.

5. What’s the role of top management?
Behavioral change will be critical, and one of top management’s key roles will be to influence and encourage it. Traditional managers, for example, will have to get comfortable with their own variations on A/B testing, the technique digital companies use to see what will and will not appeal to online consumers. Frontline managers, armed with insights from increasingly powerful computers, must learn to make more decisions on their own, with top management setting the overall direction and zeroing in only when exceptions surface. Democratizing the use of analytics—providing the front line with the necessary skills and setting appropriate incentives to encourage data sharing—will require time.
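For managers new to it, the arithmetic behind a basic A/B test is simple enough to sketch. The example below uses a two-proportion z-test, one common approach among several, with invented visitor and conversion counts:

# Hypothetical A/B test readout: did variant B convert better than variant A?
from math import sqrt
from statistics import NormalDist

conversions_a, visitors_a = 220, 5000   # control (invented numbers)
conversions_b, visitors_b = 265, 5000   # variant (invented numbers)

p_a, p_b = conversions_a / visitors_a, conversions_b / visitors_b
p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"lift: {p_b - p_a:.3%}, z = {z:.2f}, p = {p_value:.3f}")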

C-level officers should think about applied machine learning in three stages: machine learning 1.0, 2.0, and 3.0—or, as we prefer to say,
  1. description, 
  2. prediction, and 
  3. prescription. 
They probably don’t need to worry much about the description stage, which most companies have already been through. That was all about collecting data in databases (which had to be invented for the purpose), a development that gave managers new insights into the past. OLAP—online analytical processing—is now pretty routine and well established in most large organizations.

There’s a much more urgent need to embrace the prediction stage, which is happening right now. Today’s cutting-edge technology already allows businesses not only to look at their historical data but also to predict behavior or outcomes in the future—for example, by helping credit-risk officers at banks to assess which customers are most likely to default or by enabling telcos to anticipate which customers are especially prone to “churn” in the near term (exhibit).
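As a concrete, if simplified, illustration of the prediction stage, here is a hypothetical churn model in Python using scikit-learn. The features and the data are synthetic stand-ins for whatever a telco's data warehouse actually holds, and the model choice is arbitrary:

# Hypothetical churn-prediction sketch; all data below is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
# Invented features: monthly spend, support calls, months as a customer.
X = np.column_stack([
    rng.normal(60, 20, n),      # monthly_spend
    rng.poisson(1.5, n),        # support_calls
    rng.integers(1, 60, n),     # tenure_months
])
# Synthetic rule: short-tenure, high-support-call customers churn more often.
p_churn = 1 / (1 + np.exp(-(0.4 * X[:, 1] - 0.05 * X[:, 2])))
y = rng.random(n) < p_churn

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

scores = model.predict_proba(X_test)[:, 1]          # churn risk per customer
print("AUC:", round(roc_auc_score(y_test, scores), 2))
# The highest-risk customers are the ones to target with retention offers.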

Exhibit


A frequent concern for the C-suite when it embarks on the prediction stage is the quality of the data. That concern often paralyzes executives. In our experience, though, the last decade’s IT investments have equipped most companies with sufficient information to obtain new insights even from incomplete, messy data sets, provided of course that those companies choose the right algorithm. Adding exotic new data sources may be of only marginal benefit compared with what can be mined from existing data warehouses. Confronting that challenge is the task of the “chief data scientist.”

Prescription—the third and most advanced stage of machine learning—is the opportunity of the future and must therefore command strong C-suite attention. It is, after all, not enough just to predict what customers are going to do; only by understanding why they are going to do it can companies encourage or deter that behavior in the future. Technically, today’s machine-learning algorithms, aided by human translators, can already do this. For example, an international bank concerned about the scale of defaults in its retail business recently identified a group of customers who had suddenly switched from using credit cards during the day to using them in the middle of the night. That pattern was accompanied by a steep decrease in their savings rate. After consulting branch managers, the bank further discovered that the people behaving in this way were also coping with some recent stressful event. As a result, all customers tagged by the algorithm as members of that microsegment were automatically given a new limit on their credit cards and offered financial advice.

The prescription stage of machine learning, ushering in a new era of man–machine collaboration, will require the biggest change in the way we work. While the machine identifies patterns, the human translator’s responsibility will be to interpret them for different microsegments and to recommend a course of action. Here the C-suite must be directly involved in the crafting and formulation of the objectives that such algorithms attempt to optimize.

6. This sounds awfully like automation replacing humans in the long run. Are we any nearer to knowing whether machines will replace managers?
It’s true that change is coming (and data are generated) so quickly that human-in-the-loop involvement in all decision making is rapidly becoming impractical. Looking three to five years out, we expect to see far higher levels of artificial intelligence, as well as the development of distributed autonomous corporations. These self-motivating, self-contained agents, formed as corporations, will be able to carry out set objectives autonomously, without any direct human supervision. Some DACs will certainly become self-programming.

One current of opinion sees distributed autonomous corporations as threatening and inimical to our culture. But by the time they fully evolve, machine learning will have become culturally invisible in the same way technological inventions of the 20th century disappeared into the background. The role of humans will be to direct and guide the algorithms as they attempt to achieve the objectives that they are given. That is one lesson of the automatic-trading algorithms which wreaked such damage during the financial crisis of 2008.

No matter what fresh insights computers unearth, only human managers can decide the essential questions, such as which critical business problems a company is really trying to solve. Just as human colleagues need regular reviews and assessments, so these “brilliant machines” and their works will also need to be regularly evaluated, refined—and, who knows, perhaps even fired or told to pursue entirely different paths—by executives with experience, judgment, and domain expertise.

The winners will be neither machines alone, nor humans alone, but the two working together effectively.

7. So in the long term there’s no need to worry?
It’s hard to be sure, but distributed autonomous corporations and machine learning should be high on the C-suite agenda. We anticipate a time when the philosophical discussion of what intelligence, artificial or otherwise, might be will end because there will be no such thing as intelligence—just processes. If distributed autonomous corporations act intelligently, perform intelligently, and respond intelligently, we will cease to debate whether high-level intelligence other than the human variety exists. In the meantime, we must all think about what we want these entities to do, the way we want them to behave, and how we are going to work with them.

About the authors
Dorian Pyle is a data expert in McKinsey’s Miami office, and Cristina San Jose is a principal in the Madrid office.

ORIGINAL: McKinsey
by Dorian Pyle and Cristina San Jose
June 2015

A 3D Printhead with Single Atom Precision? Enter the Nanobeacon

John Bashonger has tipped us off to a possible precursor for an “atomic precision” 3D printhead, in which the interaction of tailored light with a single atom and individual nanostructures combines to create remarkably accurate results.

At the Max Planck Institute for the Science of Light, this approach is utilized to couple light to a single atom, or for that matter, to individual nano-particles. The research demonstrates that light can be coupled to an ion trapped within a parabolic mirror, and the upshot is a high efficiency, optimized polarized beam.


Dr. Gerd Leuchs
The researchers, led by Dr. Gerd Leuchs, say that as individual atoms, ions and molecules form the basic building blocks of matter, our perception of the tangible materials formed from such building blocks through their interaction with light is key to our ability to manipulate them.

Complex objects such as nanostructures can be built and manipulated by understanding specific interactions they have with a light field. The manipulations may well prove to serve as the basis for potential applications in biophysics or quantum information processing, and the individual nanostructures themselves may serve as “building blocks” to create metamaterials.

The process involves the creation of radially polarized light which is acutely focused. This incoming beam, which features a cylindrical symmetry, tracks across a path incident from right to left onto a lens. The focusing rotates the individual electric field vectors towards the optical axis, and they then interfere within the focal point to create an electric field component which oscillates along the optical axis.

The result is tailored light: by precisely shaping the temporal and spatial distribution of the light’s intensity – and its polarization vector, the direction of oscillation of the electric field – the light can be concentrated to dimensions smaller than its wavelength when focused onto a target object.

A radially polarized ring is focused with a parabolic mirror, and in this experiment the researchers generated a tailored, radially polarized mode that matched the ideal field distribution to 98 percent.

The work uses a “singly charged ytterbium ion” as the atom, and the ions can be accurately positioned, and even “trapped” for varying periods, using an electrode arrangement and an applied AC voltage. An ion trap was developed to fit the parabolic mirror geometry, shadowing the focused light at the ion only minimally.

MPI says the methodology has been successfully applied to the investigation of numerous nanostructures and nanoscopic objects to create metamaterials with extraordinary properties.

What it might mean is that, by “holding” a single atom with laser beams, it may well be possible to build a 3D printhead with single-atom precision.

With the aid of artificial materials, light can be guided around objects and reflections can be suppressed to form a “nanobeacon.” The researchers say such light sources pave the way for high-precision control of light propagation in a range of optical networks.

Do you foresee a day when 3D printers featuring precision down to the level of a single atom could be created using this nanobeacon technology? Let us know in the Nanobeacon forum thread on 3DPB.com.



ORIGINAL: 3DPrint
July 1, 2015

Tuesday, June 30, 2015

Meet Amelia, the AI Platform That Could Change the Future of IT


Chetan Dube. Image credit: Photography by Jesse Dittmar

Her name is Amelia, and she is the complete package: smart, sophisticated, industrious and loyal. No wonder her boss, Chetan Dube, can’t get her out of his head.

“My wife is convinced I’m having an affair with Amelia,” Dube says, leaning forward conspiratorially. “I have a great deal of passion and infatuation with her.”

He’s not alone. Amelia beguiles everyone she meets, and those in the know can’t stop buzzing about her. The blue-eyed blonde’s star is rising so fast that if she were a Hollywood ingénue or fashion model, the tabloids would proclaim her an “It” girl, but the tag doesn’t really apply. Amelia is more of an IT girl, you see. In fact, she’s all IT.

Amelia is an artificial intelligence platform created by Dube’s managed IT services firm IPsoft, a virtual agent avatar poised to redefine how enterprises operate by automating and enhancing a wide range of business processes. The product of an obsessive and still-ongoing 16-year developmental cycle, she—yes, everyone at IPsoft speaks about Amelia using feminine pronouns—leverages cognitive technologies to interface with consumers and colleagues in astoundingly human terms,
  • parsing questions, 
  • analyzing intent and 
  • even sensing emotions to resolve issues more efficiently and effectively than flesh-and-blood customer service representatives.


Install Amelia in a call center, for example, and her patent-pending intelligence algorithms absorb in a matter of seconds the same instruction manuals and guidelines that human staffers spend weeks or even months memorizing. Instead of simply recognizing individual words, Amelia grasps the deeper implications of what she reads, applying logic and making connections between concepts. She relies on that baseline information to reply to customer email and answer phone calls; if she understands the query, she executes the steps necessary to resolve the issue, and if she doesn’t know the answer, she scans the web or the corporate intranet for clues. Only when Amelia cannot locate the relevant information does she escalate the case to a human expert, observing the response and filing it away for the next time the same scenario unfolds.
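That escalate-and-learn loop can be sketched roughly in code. The Python below is a hypothetical simplification; the knowledge-base, search, and human-expert functions are invented placeholders, not IPsoft's APIs or algorithms.

# Hypothetical sketch of the escalate-and-learn loop described above.
# All helper functions are invented placeholders, not IPsoft's actual APIs.

knowledge_base = {"reset password": "Send the user a reset link from the admin console."}

def search_web_or_intranet(question):
    return None  # placeholder: no answer found in this toy example

def ask_human_expert(question):
    return "Escalate to tier-2 support and open a ticket."

def answer(question):
    # 1. Try what Amelia has already absorbed from manuals and past cases.
    if question in knowledge_base:
        return knowledge_base[question]
    # 2. Otherwise scan the web or corporate intranet for clues.
    found = search_web_or_intranet(question)
    if found:
        return found
    # 3. Only then escalate to a human, observe the response, and file it
    #    away so the same scenario is handled automatically next time.
    resolution = ask_human_expert(question)
    knowledge_base[question] = resolution
    return resolution

print(answer("vpn will not connect"))   # escalated, then learned
print(answer("vpn will not connect"))   # answered from the stored resolution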

Scientists have built artificial neurons that fully mimic human brain cells


They could supplement our brain function.

Researchers have built the world’s first artificial neuron that’s capable of mimicking the function of an organic brain cell - including the ability to translate chemical signals into electrical impulses, and communicate with other human cells.

These artificial neurons are the size of a fingertip and contain no ‘living’ parts, but the team is working on shrinking them down so they can be implanted into humans. This could allow us to effectively replace damaged nerve cells and develop new treatments for neurological disorders, such as spinal cord injuries and Parkinson’s disease.

Professor Agneta Richter-Dahlfors.
Photo: Stefan Zimmerman
"Our artificial neuron is made of conductive polymers and it functions like a human neuron," lead researcher Agneta Richter-Dahlfors from the Karolinska Institutet in Sweden said in a press release.

Until now, scientists have only been able to stimulate brain cells using electrical impulses, which is how they transmit information within the cells. But in our bodies they're stimulated by chemical signals, and this is how they communicate with other neurons.

By connecting enzyme-based biosensors to organic electronic ion pumps, Richter-Dahlfors and her team have now managed to create an artificial neuron that can mimic this function, and they've shown that it can communicate chemically with organic brain cells even over large distances.

"The sensing component of the artificial neuron senses a change in chemical signals in one dish, and translates this into an electrical signal," said Richter-Dahlfors. "This electrical signal is next translated into the release of the neurotransmitter acetylcholine in a second dish, whose effect on living human cells can be monitored."

This means that artificial neurons could theoretically be integrated into complex biological systems, such as our bodies, and could allow scientists to replace or bypass damaged nerve cells. So imagine being able to use the device to restore function to paralysed patients, or heal brain damage.
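Read purely as a signal chain, the device described above works roughly like the hypothetical sketch below; the threshold and function names are invented for illustration, and the real artificial neuron is chemistry and electronics, not software.

# Hypothetical model of the artificial neuron's signal chain (illustration only).
SIGNAL_THRESHOLD = 0.5  # invented threshold, arbitrary concentration units

def biosensor(concentration):
    """Enzyme-based sensor: chemical signal in dish 1 -> electrical signal."""
    return concentration > SIGNAL_THRESHOLD

def ion_pump(fires):
    """Organic electronic ion pump: electrical signal -> acetylcholine release."""
    return "acetylcholine released in dish 2" if fires else "no release"

for level in (0.2, 0.8):
    print(level, "->", ion_pump(biosensor(level)))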

"Next, we would like to miniaturise this device to enable implantation into the human body," said Richer-Dahlfors.“We foresee that in the future, by adding the concept of wireless communication, the biosensor could be placed in one part of the body, and trigger release of neurotransmitters at distant locations."

"Using such auto-regulated sensing and delivery, or possibly a remote control, new and exciting opportunities for future research and treatment of neurological disorders can be envisaged," she added.

The results of lab trials have been published in the journal Biosensors and Bioelectronics.

We're really looking forward to seeing where this research goes. While the potential for treating neurological disorders is incredibly exciting, the artificial neurons could one day also help us to supplement our mental abilities and add extra memory storage or offer faster processing, and that opens up some pretty awesome possibilities.


ORIGINAL: Science Alert
By FIONA MACDONALD
29 JUN 2015

Friday, June 26, 2015

New tech tool speeds up stem cell research

It's hard to do a good job if you don't have the right tools. Now researchers have access to a great new tool that could really help them accelerate their work, a tool its developers say will revolutionize the way cell biologists develop stem cell models to test in the lab.
Fluidigm’s Callisto system
The device is called Callisto™. It was created by Fluidigm thanks to two grants from CIRM. The goal was to develop a device that would allow researchers more control and precision in the ways that they could turn stem cells into different kinds of cell. This is often a long, labor-intensive process requiring round-the-clock maintenance of the cells to get them to make the desired transformation.

Callisto changes that. The device has 32 chambers, giving researchers more control over the conditions that cells are stored in, even allowing them to create different environmental conditions for different groups of cells. All with much less human intervention.

Lila Collins, Ph.D., the CIRM Science Officer who has worked closely with Fluidigm on this project over the years, says this system has some big advantages over the past:

Creating the optimal conditions for reprogramming, stem cell culture and stem cells has historically been a tedious and manually laborious task. This system allows a user to more efficiently test a variety of cellular stimuli at various times without having to stay tied to the bench. Once the chip is set up in the instrument, the user can go off and do other things.

Having a machine that is faster and easier to use is not the only advantage Callisto offers, it also gives researchers the ability to systematically and simultaneously test different combinations of factors, to see which ones are most effective at changing stem cells into different kinds of cell. And once they know which combinations work best they can use Callisto to reproduce them time after time. That consistency means researchers in different parts of the world can create cells under exactly the same conditions, so that results from one study will more readily support and reflect results from another.
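To make the idea of systematically testing factor combinations concrete, here is a small, hypothetical sketch of how conditions might be enumerated across Callisto's 32 chambers. The factor names and levels are invented for the example; this is not Fluidigm's software.

# Hypothetical assignment of differentiation-factor combinations to 32 chambers.
# Factor names and concentrations are invented for illustration.
from itertools import product

growth_factor = ["FGF2 low", "FGF2 high"]
small_molecule = ["CHIR 3 uM", "CHIR 6 uM"]
timing = ["add at 0 h", "add at 24 h"]
media = ["medium A", "medium B"]

combinations = list(product(growth_factor, small_molecule, timing, media))
assert len(combinations) <= 32   # one combination per chamber

for chamber, condition in enumerate(combinations, start=1):
    print(f"chamber {chamber:2d}: " + ", ".join(condition))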

In a news release about Callisto, Fluidigm's President and CEO Gajus Worthington says this could be tremendously useful in developing new therapies:

“Fluidigm aims to enable important research that would otherwise be impractical. The Callisto system incorporates some of our finest microfluidic technology to date, and will allow researchers to quickly and easily create complex cell culture environments. This in turn can help reveal how stem cells make fate decisions. Callisto makes challenging applications, such as cellular reprogramming and analysis, more accessible to a wide range of scientists. We believe this will move biological discovery forward significantly.”

And as Collins points out, Callisto doesn’t just do this on a bulk level, working with millions of cells at a time, the way the current methods do:

Using a bulk method it’s possible that one might miss an important event in the mixture. The technology in this system allows the user to stimulate and study individual cells. In this way, one could measure changes in small sub-populations and find ways to increase or decrease them.

Having the right tools doesn’t always mean you are going to succeed, but it certainly makes it a lot easier.


ORIGINAL: California's Stem Cell Agency, Center for Regenerative Medicine

Thursday, June 25, 2015

Here's how we're fighting cancer in a completely new way

Image: Juan Gaertner/Shutterstock.com

Why chemo could be a thing of the past.
This article was written by Mark Cragg from the University of Southampton, and was originally published by The Conversation.

We’re beginning to treat cancer in a whole new way. Rather than killing cancer cells directly with chemo or radiotherapy, the latest treatments are designed to promote the body’s natural immune control over the disease. So-called immunotherapy works to stimulate the body’s own immune system to destroy the cancer. It is not a new concept and was first described more than a century ago, but for the first time it is beginning to deliver long-lasting responses, which some are daring to call cures.

Behind these advances has been a more sophisticated understanding of the relationship between the immune system and cancer, particularly how the cancer is seen as a danger by the body and can disguise itself from immune attack.

The most promising immunotherapies are antibody drugs, which target key switches on immune cells and fall into two main classes:
  • checkpoint blockers such as ipilimumab and nivolumab, which remove the cancer’s ability to switch off the immune system, and 
  • immunostimulators such as anti-CD40 and anti-4-1BB, which promote active immune responses from the body.
Immunotherapy advantages

There are several key reasons why weaponising the immune system in this way shows such promise in the fight against cancer.
  1. First, the immune system is mobile. Its ability to patrol the whole body means it is able to recognise cancer cells wherever they are. And cancer’s ability to spread is frequently the cause of recurrence following other treatments.
  2. Second, the immune system is self-amplifying. It is able to increase its response as required to tackle large, advanced cancers. This property means that it will sometimes work better the more cancer is present, responding to a larger immune stimulation.
  3. Third, the immune system can evolve and adapt to changes in the cancer. Cancers are genetically unstable, meaning that they can change and 'escape' from conventional treatments. This situation is exactly what the immune system has evolved to cope with in its battle with pathogens. So as the tumour changes, the immune system can also change in parallel, keeping the cancer cells locked down.
  4. Fourth, the immune system can recognise an almost limitless number of target molecules on the cancer. This ability to recognise so many targets at once makes it much more difficult for rare variant cancer cells to escape out of immune control by changing their appearance. It also broadens the types of cancer that may be susceptible to immunotherapy.
  5. Finally, the immune system has memory. We see this with infectious diseases, with protection against a second round of infection from a particular germ. This is what provides us with life-long protection from some diseases after catching them as children or receiving vaccinations. For cancer, this means that the immune system can be 'immunised' to the cancer cells and detect and delete them if they try to grow back. Most cancer treatments only work while they are being given: an immune response can last a lifetime.

These five features of immunotherapy combine to deliver major benefits, including the ability to deliver durable, perhaps life-long responses, tantamount to cures, even in advanced, previously fatal cancers.

Future challenges

The challenge now is to understand why some people, and some cancers, respond much better to these therapies than others and how to increase the proportion of people who experience good responses. Data reported only last month shows that combining immunotherapy treatments by giving two checkpoint-blocking antibodies at the same time extends the number of patients with effective and lasting responses. Unfortunately, it also increases the unwanted side effects from immune attack on some of the body’s normal tissues.

While the results from the recent clinical trials are incredibly promising, it is clear that we are just at the beginning of our journey to understand the immune system and harness its power to destroy cancer. We already know that the complex interplay between
  • the genetic make-up of the tumour, 
  • the status of someone’s immune system, and 
  • the interaction between the two will sculpt the immune response in different ways.

How, then, to best boost the immune system? We recognise that large multidisciplinary teams – comprising clinicians, immunologists, molecular biologists, geneticists and others – with concentrated resources are required. In Southampton, this will coalesce around a new purpose-built Centre for Cancer Immunology, which will open in 2017 with the aim of bringing the right people together and providing cutting edge facilities.

With the development of such centres, our understanding of the immune system in health and disease will continue the rapid expansion of immunotherapy, leading to many new opportunities for treatment. Soon these will become more specific, effective and safe – leading us into a new era of cancer treatment.


Mark Cragg is Professor of Experimental Cancer Research at University of Southampton.

This article was originally published on The Conversation. Read the original article.


ORIGINAL: Science Alert
MARK CRAGG, THE CONVERSATION
23 JUN 2015