Chris Lintott first met Kevin Schawinski in the summer of 2007 at the astrophysics department of the University of Oxford. Lintott had just finished a PhD at the University College of London on star formation in galaxies. He was also something of a minor celebrity in the astronomy community: he was one of the presenters of the BBC's astronomy programme The Sky at Night alongside Sir Patrick Moore, and had written a popular science book called Bang!: The Complete History of the Universe with Moore and Brian May, the Queen guitarist and astrophysicist. "I went to give a seminar talk as part of a job interview," Lintott recalls. "And this guy in a suit jumped up and started having a go at me because I hadn't checked my galaxy data properly. I thought it was some lecturer who I'd pissed off, but it turned out to be Kevin [Schawinski], who was a student at the time."
Most galaxies come in two shapes: elliptical or spiral. Elliptical galaxies range from perfectly spherical to a flattened, rugby-ball shape. Spirals, like the Milky Way, have a central bulge of stars surrounded by a thin disc of stars arranged in a spiral pattern known as "arms". The shape of a galaxy is an imprint of its history and of how it has interacted with other galaxies over billions of years of evolution. Why galaxies take these shapes, and how the two geometries relate to one another, is a mystery to astronomers. For a long time, astronomers assumed that spirals were young galaxies with an abundance of stellar nurseries where new stars were being formed; these regions typically emit hot, blue radiation. Elliptical galaxies, on the other hand, were thought to be predominantly old and replete with dying stars, which are colder and therefore redder. Schawinski was working on a theory that contradicted this paradigm. To prove it, he needed to find elliptical galaxies with blue regions, where star formation was taking place.
At the time, astronomers relied on computer algorithms to filter datasets of images of galaxies. The biggest bank of such images came from the Sloan Digital Sky Survey, which contained more than two million astronomical objects, nearly a million of which were galaxies, and had been taken by an automated robotic telescope in New Mexico with a two-metre mirror. The problem was that, while computers can easily filter galaxies by colour, it was impossible for an algorithm to pick out galaxies by shape. "It's really hard to teach a computer a pattern-recognition task like this," says Schawinski, now a professor of astronomy at the Swiss Federal Institute of Technology in Zurich. "It took computer scientists a decade to [teach a computer] to tell human faces apart, something every child can do the moment they open their eyes." The only way to prove his theory, Schawinski decided, was to look at each galaxy image, one by one.
Schawinski did it for a week, working 12 hours every day. He would go to his office in the morning, click through images of galaxies while listening to music, break for lunch, and continue until late in the evening. "When I attended Chris's seminar, I had just spent a week looking through fifty thousand galaxies," says Schawinski.
When Lintott moved to Oxford, he and Schawinski started debating the problem of how to classify datasets with millions of images. They weren't the only ones. "Kate Land, one of my colleagues, was intrigued by a recent paper which claimed most galaxies were rotating around a common axis," Lintott says. "Which is indeed puzzling, because the expectation was that these axes would be totally random." Land needed more data, which required looking at the rotation of tens of thousands of galaxies. "Out of the blue she asked me whether, if we put a laptop with galaxy images in the middle of a pub, people would classify them," Lintott recalls.
At the time, Nasa had launched a project called Stardust@home, which had recruited about 20,000 online volunteers to identify tracks made by interstellar dust in samples from a comet. "We thought that if people are going to look at dust tracks, then surely they'll look at galaxies," says Lintott. Once it was decided they would go ahead with the project, they built a website within days. The homepage displayed the image of a galaxy from the dataset. For each image, the volunteers were asked if the galaxy was a spiral or elliptical. If a spiral, they were asked if they could discern the direction of its arms and the direction of its rotation. There were also options for stars, unknown objects and overlapping galaxies.
The site, called Galaxy Zoo, launched on July 11, 2007. "We thought we would get at least some amateur astronomers," Lintott says. "I was planning to go to the British Astronomical Society, give a talk and get at least 50 of their members to classify some galaxies for us." Within 24 hours of its launch, Galaxy Zoo was receiving 60,000 classifications per hour. "The cable we were using melted and we were offline for a while," Schawinski says. "The project nearly died there." After ten days, users from all over the world had submitted eight million classifications. By November, every galaxy had been seen by an average of 40 people. Galaxy Zoo users weren't just classifying galactic shapes; they were making unexpected discoveries. Barely a month after launch, Dutch schoolteacher Hanny van Arkel discovered a strange green cluster that turned out to be a never-before-seen astronomical object. Christened Hanny's Voorwerp ("voorwerp" means "object" in Dutch), it remains the subject of intense scientific scrutiny. Later that year, a team of volunteers compiled evidence for a new type of galaxy -- blue and compact -- which they named Pea galaxies.
"When we did a survey of our volunteers we found out they weren't astronomers," Lintott says. "They weren't even huge science fans and weren't that interested in making new discoveries. The majority said they just wanted to make a contribution." With Galaxy Zoo, Schawinski and Lintott had built a powerful pattern-recognition machine composed entirely of people, one that could not only process data quickly and accurately, aggregating the results through a democratic statistical process, but also enable individual serendipitous discoveries, a fundamental component of scientific enquiry. With robotic telescopes spewing terabytes of images every year, they had found an answer to big data in a big crowd of volunteers. Since Galaxy Zoo's first discoveries, this pioneering approach of crowdsourcing science has gained a strong following not only with the general public but also within the scientific community. Today, there are hundreds of crowdsourcing projects with a variety of scientific goals, from identifying cancer cells in biological tissue to building nanoscale machines using DNA. These endeavours have produced breakthroughs, such as Schawinski and Lintott's discoveries on star formation, that have merited publication in the most reputed scientific journals. The biggest breakthrough, however, is not the scientific discoveries per se, but the method itself. Crowdsourcing science is a reinvention of the scientific method, a powerful new way of making discoveries and solving problems that would otherwise have remained undiscovered and unsolved.
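The "democratic statistical process" can be sketched as a simple majority vote over independent volunteer labels. This is only an illustrative sketch, not Galaxy Zoo's actual pipeline: the function name, the 0.8 "clean sample" threshold and the vote counts below are all assumptions for the example.

```python
from collections import Counter

def aggregate_votes(classifications, threshold=0.8):
    """Aggregate independent volunteer labels for one galaxy image.

    Returns the majority label, its vote fraction, and whether the
    fraction clears the threshold for a confident classification.
    """
    counts = Counter(classifications)
    label, votes = counts.most_common(1)[0]
    fraction = votes / len(classifications)
    return label, fraction, fraction >= threshold

# Roughly 40 volunteers look at the same image, as in Galaxy Zoo
votes = ["spiral"] * 34 + ["elliptical"] * 4 + ["unknown"] * 2
label, fraction, clean = aggregate_votes(votes)
print(label, round(fraction, 2), clean)  # spiral 0.85 True
```

Because each volunteer judges the image independently, occasional mistakes wash out in the aggregate, which is what lets a crowd of non-experts match or beat a single trained classifier.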
At around the time Lintott and his team were developing Galaxy Zoo, two computer scientists at the University of Washington in Seattle, Seth Cooper and Adrien Treuille, were trying to use online crowds to solve a problem in biochemistry called protein folding.
A protein is a chain of smaller molecules called amino acids. Its three-dimensional shape determines how it interacts with other proteins and, consequently, its function in the cell. Each protein has only one native structure, and finding that structure is a notoriously difficult problem: for a given chain of amino acids, there are millions of ways in which it could be folded into a three-dimensional shape. Biochemists know thousands of amino-acid sequences but struggle to work out how they fold into the three-dimensional structures found in nature.
Cooper and Treuille's lab had previously developed an algorithm which attempted to predict these structures. The algorithm, named Rosetta, required a lot of computer power, so it was adapted to run as a screensaver that online volunteers could install. The screensaver, called Rosetta@home, required no input from volunteers, so Cooper and Treuille had been brought in to turn it into a game. "With the screensaver, users could see the protein and how the computer was trying to fold it, but they couldn't interact with it," Cooper says. "We wanted to combine that computer power with human problem-solving."
Cooper and Treuille were the only computer scientists in their lab. They also had no idea about protein folding. "In some sense, we were forced to look at this very esoteric and abstract problem through the eyes of a child," Cooper says. "Biochemists often tell you that a protein looks right or wrong. It seemed that with enough training you can gain an intuition about how a protein folds. There are certain configurations that a computer never samples, but a person can just look at it and say, 'that's it'. That was the seed of the idea."
The game, called Foldit, was released in May 2008. Players start with a partially folded protein structure, arrived at by the Rosetta algorithm, and have to manipulate it by clicking, pulling and dragging amino acids until they arrive at its most stable shape. The algorithm calculates how stable the structure is; the more stable, the higher the score.
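The scoring idea can be caricatured in a few lines: compute an energy for a candidate configuration, then report its negative as the score, so that lower-energy (more stable) shapes score higher. The toy pairwise energy below is a crude stand-in, not Rosetta's actual energy function, and every name here is hypothetical.

```python
import math

def toy_energy(coords):
    """Toy pairwise energy over atom coordinates: pairs that clash
    (too close) are penalised heavily, pairs at a comfortable
    distance are mildly rewarded (a Lennard-Jones-style term with
    its minimum at distance 1.0)."""
    e = 0.0
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            d = math.dist(coords[i], coords[j])
            e += (1.0 / d) ** 12 - 2.0 * (1.0 / d) ** 6
    return e

def score(coords):
    """Foldit-style scoring: lower energy means a higher score."""
    return -toy_energy(coords)

comfortable = [(0.0, 0.0), (1.0, 0.0)]   # pair at the ideal distance
clashing = [(0.0, 0.0), (0.5, 0.0)]      # pair squeezed too close
print(score(comfortable) > score(clashing))  # True
```

A player dragging an amino acid out of a clash is, in effect, walking this energy downhill by eye, sampling configurations the algorithm's automated search might never try.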
"When we first trialled the game with the biochemists, they weren't particularly excited," Cooper says. "But then we added a leaderboard, where you could see each other's names and respective scores. After that, we had to shut down the game for a while because it was bringing all science to a halt."
Foldit turned the goal of solving one of biochemistry's hardest problems into a game that can be won by scoring points. Over the past five years, over 350,000 people have played Foldit; these players have been able to consistently fold proteins better than the best algorithms. "Most of these players didn't have a background in biochemistry and they were beating some of the biochemists who were playing the game," Cooper says. "They also discovered an algorithm similar to one that the scientists had been developing. It was more efficient than any previously published algorithms."
How A Failed Experiment On Rats Sparked A Billion-Dollar Infant-Care Breakthrough
WASHINGTON -- At a research lab at Duke University Department of Pharmacology in 1979, a group of scientists sparked a major breakthrough in infant care from a failed experiment on rats.
At the time, Dr. Saul Schanberg, a neuroscientist and physician, was running tests on newborn rats to measure growth-related markers (enzymes and hormones) in their brains. Together with Dr. Cynthia Kuhn and lab technician Gary Evoniuk, he kept getting puzzling results: with the rat pups separated from their mothers in order to run the experiments, their growth markers kept registering at low levels.
The team varied the trials. They used an anesthetized mother rat to feed the pups during and after the experimentation, and tried keeping the pups and mother in the same cage but with a divider to see if a lack of pheromones was the problem.
“The experiment failed,” Kuhn recalled.
So the team approached it from another angle. Instead of stabilizing the rat pups so they could run tests, they tried to figure out what was wrong with the pups in the first place. From a friend, Kuhn had heard theories that massaging the pups could produce positive results. Evoniuk, meanwhile, had watched mother rats groom their pups by vigorously licking them. He proposed doing essentially the same thing, minus the tongue.
The team began using a wet brush to rub the rat pups at different pressure levels. Eventually, they found the right one, and on cue, the deprivation effect was reversed.
"I said, 'Let’s give it a shot,' and it worked the first time and the second time," recalled Evoniuk. "It was just the touch.”
Though they had no way of knowing it, Schanberg’s team had taken the first step in a process that would upend conventional wisdom about post-natal care. Three and a half decades later, the theories that his team stumbled upon through failure would save billions of dollars in estimated medical costs and affect countless young parents’ lives.
On Thursday night, the team will be rewarded for its work. A coalition of business, university and scientific organizations will present the Golden Goose Award to them and other researchers with similar successful projects. It is a prize given for the purpose of shining a light on how research with odd-sounding origins (really, massaging rat pups?) can produce groundbreaking results. More broadly, it’s meant to showcase the importance of federally funded scientific research.
The work done by Schanberg’s team is inextricably tied to the support of taxpayers -- not just because the group operated from a grant of approximately $273,000 from the National Institutes of Health. As Kuhn and Evoniuk both argued, the breakthrough they were able to produce never could have happened with a private funding source. The demand for an immediate result or for profit wouldn’t have allowed them to pivot off the initial failure.
“It is not a straight path from point A to point B,” said Evoniuk. “There are all kinds of weird little detours. We were really following a detour from where this work started. The federal funding gave people like Saul the ability to follow their scientific instincts and try to find the answers to interesting questions that popped up.”
As Congress members head back to their districts before the midterm elections, fights over science funding appear to be low on the list of priorities. The two parties are in the midst of an informal truce, having put in place budget caps this past winter. And no one seems particularly eager to disrupt that truce, even if science advocates warn it needs upending.
While NIH's funding increased this year from last year, when sequestration forced an estimated $1.55 billion reduction, it still fell $714 million short of pre-sequestration levels. Adjusted for inflation, it was lower than every year but President George W. Bush's first year in office.
Surveying the climate, the American Academy of Arts & Sciences released a report this week showing that the United States "has slipped to tenth place" among economically advanced nations in overall research and development investment as a percentage of GDP. For science advocates, it was another cause for alarm. Young researchers, they argue, are leaving the field or the country. Projects that could yield tremendous biomedical breakthroughs aren't getting off the ground.
Looming over the Golden Goose awards ceremony is this reality: Would an experiment testing rat-pup massages ever survive this political climate? Would it be admonished as waste by deficit hawks in Congress?
“Researchers massaging rats sounds strange, but oddball science saves lives,” said Rep. Jim Cooper (D-Tenn.), who is participating in the awards ceremony. “In this instance, premature babies got a healthier start. If Congress abandons research funding, we could miss the next unexpected breakthrough.”
NIH funding was certainly critical to the successful research behind rat-pup massages. "Without the NIH none of this would have happened, zero," said Kuhn.
But serendipity also played a role. Not long after he made his discovery, Schanberg was at an NIH study section with Tiffany Field, a psychologist at the University of Miami School of Medicine. Field had also been doing research -- also funded by the NIH -- on massage therapies for prematurely born babies. But she was getting poor results.
"We were just sharing our data, basically," Field recalled of that conversation. "I was telling him we were having trouble getting any positive effects with the preemies. … He talked about how his lab technician had a eureka experiment when he saw his mother's tongue licking the babies."
The conclusion reached was that Field probably wasn't massaging the premature babies hard enough. Instead of applying "moderate pressure" (as Schanberg had been doing) she was applying more of a "soft stroking."
A study done on rats became a study on humans. Field changed her experiment and began to see results right away. Instead of the discomfort of a tickle-like sensation, the moderate pressure had a tonic effect, stimulating receptors: babies' heart rates slowed; the preemies seemed more relaxed; they were able to absorb food and gain weight; and they showed more evidence of growth hormone, an increase in insulin, greater bone density and greater movement of the GI tract. The magnitude of the finding was enormous.
"We published the data and we actually did a cost-benefit analysis at that point and determined we could save $4.8 billion per year by massaging all the preemies, because of all the significant cost savings for the hospital," Field recalled.
Her conclusion challenged the prevailing sentiment of the time that prematurely born babies should be left in incubators, fed intravenously, and not touched immediately after birth lest they become agitated and potentially harmed. But few people listened.
"The only person who paid attention to it was Hillary Clinton," she recalled, noting that Clinton, who was working on a health care reform initiative as First Lady, expressed interest in the research.
Since then, however, conceptions of post-natal care have changed. Subsequent studies have confirmed Field's findings, though others have questioned whether there is enough research, or the proper methodology, to draw sweeping conclusions. Nevertheless, whereas few people used massage therapies in the '80s and '90s, as of eight years ago 38 percent of natal care units were using them, said Field. The method is estimated to save $10,000 per infant -- roughly $4.7 billion a year.
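The two reported figures are at least consistent with each other, as a quick back-of-the-envelope check shows. The implied count of roughly 470,000 treated infants a year is our inference from the article's numbers, not a figure the article states.

```python
# Figures as reported in the article
savings_per_infant = 10_000   # dollars saved per treated infant
annual_total = 4.7e9          # dollars saved per year, in aggregate

# Implied number of infants treated per year
infants_per_year = annual_total / savings_per_infant
print(int(infants_per_year))  # 470000
```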
Those involved in the research still marvel that the chain of events started with a failed experiment on rats and turned on a fortuitous meeting between two scientists.
"We didn’t set out to figure out how to improve nursing care," said Kuhn. "But we wound up saving a lot of money and helped babies grow better, their cognitive outcome was better, they got out of the [intensive care units] sooner. … There was no downside."
"One thing led to another," said Evoniuk. "We were just kind of following an interesting question not thinking we were going to change medical practice."
Schanberg won't be around to receive his Golden Goose award Thursday night. He died in 2009, and his granddaughter will accept on his behalf. But those who worked with him say that his research remains a testament to the good results that an inquisitive mind and a respectable funding stream can produce. It's a story that scientists may find uplifting.
But it doesn't necessarily have a happy ending.
In the aftermath of her work with Schanberg, Field continued studying natal care, starting the Touch Research Institute at the University of Miami in 1992 with help from the NIH and Johnson & Johnson. Her work has been widely cited in medical journals and newspaper articles. But the funding streams have run dry, and now she's faced with the prospect of dramatically narrowing the scope of her lifelong work.
"We are faced with having to close the institute because we don’t have any NIH grants," she said. "It used to be a third of us would get the grants. Now they are funding at something like the seventh percentile."
What I find most interesting about typical visions of the future isn’t all the fanciful and borderline magical technology that hasn’t been invented yet, but rather how much of it actually already exists.
Consider something relatively straightforward, like a multi-touch interface on your closet door that allows you to easily browse and experiment with your wardrobe, offering suggestions based on prior behavior, your upcoming schedule and the weather in the locations where you are expected throughout the day. Or a car that, as it makes real-time navigational adjustments in order to compensate for traffic anomalies, also lets your co-workers know that you will be a few minutes late, and even takes the liberty of postponing the delivery of your regular triple-shot, lactose-free, synthetic vegan latte. There’s very little about these types of scenarios that isn’t entirely possible right now using technology that either already exists, or that could be developed relatively easily. So if the future is possible today, why is it still the future?
I believe there are two primary reasons. The first is a decidedly inconvenient fact that futurists, pundits and science fiction writers have a tendency to ignore: Technology isn’t so much about what’s possible as it is about what’s profitable. The primary reason we haven’t landed a human on Mars yet has less to do with the technical challenges of the undertaking, and far more to do with the costs associated with solving them. And the only reason the entire sum of human knowledge and scientific, artistic and cultural endeavor isn’t instantly available at every single person’s fingertips anywhere on the planet isn’t because we can’t figure out how to do it; it’s because we haven’t yet figured out the business models to support it. Technology and economics are so tightly intertwined, in fact, that it hardly even makes sense to consider them in isolation.
The second reason is the seemingly perpetual refusal of devices to play together nicely, or interoperate. Considering how much we still depend on sneakernets, cables and email attachments for something as simple as data dissemination, it will probably be a while before every single one of our devices is perpetually harmonized in a ceaseless chorus of digital kumbaya. Before our computers, phones, tablets, jewelry, accessories, appliances, cars, medical sensors, etc., can come together to form our own personal Voltrons, they all have to be able to detect each other’s presence, speak the same languages, and leverage the same services.
The two reasons I’ve just described as to why the future remains as such — profit motive and device isolation — are obviously not entirely unrelated. In fact, they could be considered two sides of the same Bitcoin. However, there’s still value in examining each individually before bringing them together into a unified theory of technological evolution.
Profitable, Not Possible
Even though manufacturing and distribution costs continue to come down, bringing a new and innovative product to market is still both expensive and surprisingly scary for publicly traded and historically risk-averse companies. Setting aside the occasional massively disruptive invention, the result is that the present continues to look suspiciously like a slightly enhanced or rehashed version of the past, rather than an entirely reimagined future.
This dynamic is something we have mostly come to accept as a tenet of our present technology, but conveniently disregard when contemplating the world of tomorrow. Inherent in our collective expectations of what lies ahead seems to be an emboldened corporate culture that has grown weary of conservative product iteration; R&D budgets unencumbered by intellectual property squabbles, investor demands, executive bonuses and golden parachutes; and massive investment in public infrastructure by municipalities that seem constantly on the verge of complete financial collapse – none of which, as we all know, are particularly reminiscent of the world we actually live in.
One of the staples of our collective vision of the future is various forms of implants: neurological enhancements to make us smarter, muscular augmentation to make us stronger, and subcutaneous sensors and transmitters to allow us to better integrate with and adapt to our environments. With every ocular implant that enables the blind to sense more light and higher resolution imagery; with every amputee who regains some independence through a fully articulated prosthetic; and with every rhesus monkey who learns to feed herself by controlling a robotic arm through a brain-computer interface, humanity seems to be nudging itself ever-closer to its cybernetic destiny.
There’s no doubt in my mind that it is possible to continue implanting electronics inside of humans, and organics inside of machines, until both parties eventually emerge as new and exponentially more capable species. However, what I’m not sure of yet is who will pay for all of it outside of research laboratories. Many medical procedures don’t seem to be enjoying the same trends toward availability and affordability as manufacturing processes, and as far as I can tell, insurance companies aren’t exactly becoming increasingly lavish or generous. As someone who is fortunate enough to have reasonably good benefits, but who still thinks long and hard about going to any kind of a doctor for any reason whatsoever due to perpetually increasing copays and deductibles (and perpetually decreasing quality of care), I can’t help regarding our future cybernetic selves with a touch of skepticism. The extent to which the common man will merge with machines in the foreseeable future will be influenced as much by economics and policy as by technological and medical breakthroughs. After all, almost a decade ago researchers had a vaccine that was 100 percent effective in preventing Ebola in monkeys, but until now, the profit motive wasn’t there to develop it further.
Let’s consider a more familiar and concrete data point: air travel. Growing up just a few miles from Dulles Airport outside of Washington, D.C., my friends and I frequently looked up to behold the sublime, delta-wing form of the Concorde as it passed overhead. I remember thinking that if one of the very first supersonic passenger jets entered service only three years after I was born, then surely by the time I grew up (and assuming the better part of the planet hadn’t been destroyed by a nuclear holocaust unleashed by itchy trigger fingers in the United States or Soviet Union), all consumer air travel would be supersonic. Thirty-eight years after the Concorde was introduced — and 11 years after the retirement of the entire fleet — I think it’s fair to say that air travel has not only failed to advance from the perspective of passengers, but unless you can afford a first- or business-class ticket, it has in fact gotten significantly worse.
It would be unfair of me not to acknowledge that many of us do enjoy in-flight access to dozens of cable channels through a primitive LCD touchscreen (which encourages passengers behind us to constantly poke at our seats, rudely dispelling any hope whatsoever of napping) as well as email-grade Wi-Fi (as opposed to a streaming-media-grade Internet connection), but somehow I’d hoped for a little more than the Food Network and the ability to send a tweet at 35,000 feet about how cool it is that I can send a tweet at 35,000 feet.
Novelty Is Not Progress
I’ve come to the conclusion over the last few years that it’s far too easy to confuse novelty with technological and cultural progress, and nothing in my lifetime has made that clearer than smartphones. It used to be that computers and devices were platforms — hardware and software stacks on top of which third-party solutions were meant to be built. Now, many devices and platforms are becoming much more like appliances, and applications feel more like marginally tolerated, value-add extensions. In some ways, this is a positive evolution, since appliances are generally things that all of us have, depend on, know how to use, and can reasonably afford. But let’s consider a few other attributes of appliances: they typically do only what their manufacturer intends; they are the very paragons of planned obsolescence; and they generally operate either entirely in isolation or only with hardware and services from the same manufacturer.
Admittedly, comparing a smartphone to a blender or a coffee maker isn’t entirely fair, since our phones and tablets are obviously far more versatile. In fact, every time I adjust my Nest thermostat with whatever device happens to be in my pocket, or use Shazam to sample an ambient track in a coffee shop, or search for a restaurant in an unfamiliar city and have my phone (or my watch) take me directly to it, I’m reminded that several conveniences and miracles of the future have managed to thoroughly permeate the present. But one of the tricks I’ve learned for evaluating current technologies is to consider them in the broader context of what I want the future to be. And when I contemplate the kind of future I think most of us want — one in which all our devices interoperate, and consumers have full control over the services those devices support and consume (but more on that in a moment) — there’s a lot about modern smartphones, tablets and the direction of computing in general to be very concerned about.
The reality is that novelty, and both technological and cultural progress, are only loosely related. Novelty is usually about interesting, creative or fun new products and services. It’s about iterative progress like eking out a few more minutes of battery life, or shaving off fractions of millimeters or grams, or introducing new colors or alternating between beveled and rounded edges. But true technological and cultural progress is about something much bigger and far more profound: the integration of disparate technologies and services into solutions that are far greater than the sum of their parts.
Progress is about increasing access to information and media as opposed to imposing artificial restrictions and draconian policies; it’s about empowering the world to do more than just shop more conveniently, or inadvertently disclose more highly targetable bits of personal information; it’s about trusting your customers to do the right thing, providing real and tangible value, and holding yourself accountable by giving all the stakeholders in your business the ability to walk away at any moment. And it’s about sometimes taking on a challenge not only for the promise of financial reward, but simply to see if it can be done, or because you happen to be in a unique position to do so, or because humanity will be the richer for it.
I know I’m probably coming across as a postmodern hippie here, but it’s these kinds of idealistic, and possibly even overambitious, aspirations that should be guiding us toward our collective future — even if we know that it isn’t fully attainable.
I want to be able to use my phone to start, debug and monitor my car and my motorcycle. I want the NFC chip in my phone to automatically unlock my workstations as I approach them — regardless of which operating systems I choose to use. I want to be able to pick which payment system my phone defaults to based on who provides the terms and security practices I’m most comfortable with. I want instant access to every piece of digital media on the planet on any device at any time (and I’m more than willing to pay a fair price for it). I want all my devices to integrate, federate and seamlessly collaborate, sharing bandwidth and sensor input, combining themselves like an array of radio telescopes into something bigger and more powerful than what each one represents individually. I want to pick and choose from dozens of different services for connectivity, telephony, media, payments, news, messaging, social networking, geolocation, authentication and every other service that exists now and that will exist tomorrow. I want to pick the PC, phone, tablet, set-top box, watch, eyewear and [insert nonspecific connected device here] that I like best, and be assured that they will all integrate on a deep level, rather than feeling like I’m constantly being penalized for daring to cross the sacred ecosystem barrier. I want a future limited only by what’s possible rather than by intellectual property disputes, petty corporate feuds, service contracts, shareholder value and artificial lock-in.
And more than anything else, I want a future that is as much about making us intellectually and culturally rich as it is about material wealth.
Free as in Speech
Although we are very clearly living in a time (and headed for a future) that is determined as much by what is profitable as what is possible, it’s important to acknowledge that there are plenty of inspiring exceptions. While it’s undeniable that the U.S. space program has recently fallen upon some difficult times (relying on the Russians to ferry astronauts to and from the ISS sure seemed like a good idea at the time), there’s nothing like watching robots conduct scientific experiments on Mars, or reading about the atmospheric composition of exoplanets, to put NASA’s spectacular portfolio of accomplishments into perspective. Starting as early as the late ’60s, academics, engineers, computer scientists and the Department of Defense all came together around the concept of interoperability, which ultimately led to the creation of the Internet and the World Wide Web, possibly two of the most politically, culturally and economically important and disruptive inventions in human history. And then there are collaborative resources like Wikipedia; open-source software projects like Linux, the various Apache projects, Bitcoin and Android; open hardware projects like Arduino, WikiHouse and the Hyperloop project; free and open access to GPS signals; and the myriad incredibly creative crowd-funded Kickstarter projects that seem to make the rounds weekly.
The reality of technology — and perhaps the reality of most things complex, interesting and rewarding enough to hold our collective attention — is that it is not governed by absolutes, but rather manifests itself as the aggregate of multiple and often competing dynamics. I’ve come to think of technology as kind of like the weather: It is somewhat predictable up to a point, and there are clearly patterns from which we can derive assumptions, but ultimately there are so many variables at play that the only way to know for sure what’s going to happen is to wait and see.
But there is one key way in which technology is not like the weather: We can control it. One of my favorite quotes comes from the famous computer scientist Alan Kay, who once observed that the best way to predict the future is to invent it. If we want to see a future in which devices freely interoperate, and consumers have choices as to what they do with those devices and the services they connect to, it is up to us to both demand and create it. If we choose instead to remain complicit, we will get a future concerned much more with maximizing profits than human potential. Clearly we need to strike the right balance.
Insofar as technology is a manifestation of our creative expression, it is not unlike free speech. And like free speech, we don’t have to always like or agree with what people choose to do with it, but we do have a collective and uncompromising responsibility to protect it.
If you’re unfamiliar with the concept, here’s a quick rundown. Traveling far into space is a tricky endeavor. With existing technology, traveling to a planet like Mars takes about 180 days, for example. Keeping a crew of people alive (and entertained) in space for that long isn’t hard, but it does require a lot of food, water, energy, and other supplies. This makes manned long-distance space travel extremely expensive, since hauling more supplies requires huge amounts of storage space, and thousands of additional dollars just to get all that stuff into orbit.
In theory, suspended animation would help solve this problem. If astronauts could be placed in a deep sleep during the journey, they would require far fewer resources along the way: they could simply be put to sleep at the beginning and woken back up when they arrive at their destination.
Now, with a manned mission to Mars likely in its sights, NASA has begun to explore the viability of such an idea, and has recently funded a study by Atlanta-based aerospace engineering firm SpaceWorks Enterprises to help work out the kinks in the process.
The bulk of the study revolves around placing humans in torpor — a state in which metabolic and physiological activity is drastically slowed down. To do this, the company has developed a three-stage system. Step one involves sedating the person and using a neuromuscular blockade to prevent movement. Step two is to physically lower the person’s body temperature by about 10 degrees Fahrenheit, thereby reducing cellular activity and metabolic rate by around 50 to 70 percent; this is achieved with the help of cooling pads and a nasally inhaled coolant that lowers the subject’s temperature from the inside out. Then, once in torpor, the subject is hooked up to an intravenous drip that supplies the body with all the nutrients needed to keep them alive.
Using these methods, SpaceWorks has reportedly managed to keep a person in stasis for a week — an impressive feat, but even so, there’s still much work to be done before the technology is ready for primetime. In addition to extending the length of the stasis period, the company has a handful of other hurdles to overcome. The potential onset of pneumonia, muscle atrophy, and bone loss has yet to be addressed, and the long-term effects of stasis on human organs are still largely unknown. SpaceWorks still has a long road ahead of it, but with a few more years of research, it’s not unreasonable to think that suspended animation, cryostasis, torpor, or whatever you want to call it, might finally bring a manned mission to Mars within reach.
Nigeria a model for quick action, scientists find
Ebola. The word brings fear of an unseen and potentially lethal enemy. But there are ways to stop its spread, say infectious disease scientists.
Quick intervention is needed, according to the researchers, who recently published their findings in the journal Eurosurveillance.
Analyzing Ebola cases in Nigeria, a country with success in containing the disease, the scientists estimated the rate of fatality, transmission progression, proportion of health care workers infected, and the effect of control interventions on the size of the epidemic.
Rapid response needed
"Rapid control is necessary, as is demonstrated by the Nigerian success story," says Arizona State University (ASU) scientist Gerardo Chowell, senior author of the paper.
"This is critically important for countries in the West Africa region that are not yet affected by the Ebola epidemic, as well as for countries in other regions of the world that risk importation of the disease."
The research is funded by the U.S. National Science Foundation (NSF)-National Institutes of Health (NIH)-Department of Agriculture (USDA) Ecology and Evolution of Infectious Diseases (EEID) Program.
"Controlling a deadly disease like Ebola requires understanding how it's likely to spread, and knowing the ways of managing that spread that are most likely to be effective," says Sam Scheiner, NSF EEID program director.
"Being able to respond quickly needs a foundation of knowledge acquired over many years. The work of these scientists is testimony to long-term funding by the EEID program."
Control measures in Nigeria
The largest Ebola outbreak to date is ongoing in West Africa, with more than 8,000 reported cases and 4,000 deaths. However, just 20 Ebola cases have been reported in Nigeria, with no new cases since early September.
All the cases in Nigeria stem from a single traveler returning from Liberia in July.
The study used epidemic modeling and computer simulations to project the size of the outbreak in Nigeria if control interventions had been implemented during various time periods after the initial case, and estimated how many cases had been prevented by the actual early interventions.
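The paper’s own model and fitted parameters aren’t reproduced here, but a projection of this kind can be sketched as a simple stochastic branching process in which the reproduction number drops once interventions begin. Everything in the sketch below is an illustrative assumption (the Poisson offspring distribution, the serial interval, the case cap) except the before/after reproduction numbers of 12 and 0.4, which echo the per-generation estimates reported later in this article.

```python
import math
import random

def simulate_outbreak(r0=12.0, r_control=0.4, intervention_day=3,
                      serial_interval=10, max_cases=5000, seed=1):
    """Project a final outbreak size with a toy branching process.

    Each active case infects a Poisson-distributed number of new cases;
    the mean drops from r0 to r_control once control measures are in
    place. Parameters are illustrative, not the paper's fitted values.
    """
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's algorithm: Poisson sampling with only the stdlib.
        L, k, p = math.exp(-lam), 0, 1.0
        while p > L:
            k += 1
            p *= rng.random()
        return k - 1

    day, active, total = 0, 1, 1  # one imported index case
    while active and total < max_cases:
        r = r0 if day < intervention_day else r_control
        new_cases = sum(poisson(r) for _ in range(active))
        active = new_cases
        total += new_cases
        day += serial_interval  # one disease generation per step
    return total

# Earlier interventions should yield far smaller projected outbreaks.
early = [simulate_outbreak(intervention_day=3, seed=s) for s in range(20)]
late = [simulate_outbreak(intervention_day=30, seed=s) for s in range(20)]
```

Running many such simulations for each intervention day and reporting the spread of final sizes is what produces case ranges like the ones quoted below for Nigeria.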
"This timely work demonstrates how computational simulations, informed by data from health care officials and the complex social web of contacts and activities, can be used to develop both preparedness plans and response scenarios," says Sylvia Spengler, program director in NSF's Directorate for Computer and Information Science and Engineering, which also supported the research.
Control measures implemented in Nigeria included holding all people showing Ebola symptoms in an isolation ward if they had had contact with the initial case. If Ebola was confirmed through testing, people diagnosed with the disease were moved to a treatment center.
Asymptomatic individuals were separated from those showing symptoms; those who tested negative without symptoms were discharged.
Those who tested negative but showed symptoms (fever, vomiting, sore throat and diarrhea) were observed and discharged after 21 days if they were then free of symptoms, while being kept apart from people who had tested positive.
Brief window of opportunity
Ebola transmission is dramatically influenced by how rapidly control measures are put into place.
"Actions taken by health authorities to contain the spread of disease sometimes can, perversely, spread it," says NSF-funded scientist Charles Perrings, also of ASU.
"In the Nigeria case, people who tested negative but had some of the symptoms were not put alongside others who tested positive," says Perrings. "So they had no incentive to flee, and their isolation did nothing to increase infection rates. Elsewhere in the region isolation policies have had a different effect."
The researchers found that the projected effect of control interventions in Nigeria ranged from 15-106 cases when interventions were put in place on day 3; 20-178 cases when implemented on day 10; 23-282 cases on day 20; 60-666 cases on day 30; 39-1,599 cases on day 40; and 93-2,771 on day 50.
The person who was initially infected generated 12 secondary cases in the first generation of the disease; five secondary cases were generated from those 12 in the second generation; and two secondary cases in the third generation.
That yields a rough estimate of the reproduction number by disease generation: about 12 during the first generation, declining to approximately 0.4 during the second and third.
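The arithmetic behind that estimate is simply the ratio of case counts in consecutive generations. A quick sketch, using the counts quoted above:

```python
# Case counts by generation in the Nigerian outbreak, as reported:
# the index case infected 12 people, those 12 infected 5, and those
# 5 infected 2.
cases_by_generation = [1, 12, 5, 2]

# Effective reproduction number per generation = new cases / sources.
reproduction_numbers = [
    cases_by_generation[g + 1] / cases_by_generation[g]
    for g in range(len(cases_by_generation) - 1)
]
print(reproduction_numbers)  # falls from 12.0 to roughly 0.42 and 0.4
```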
A reproductive number above 1.0 indicates that the disease has the potential to spread.
Recent estimates of the reproduction number for the ongoing Ebola epidemic in Sierra Leone and Liberia range between 1.5 and 2 (up to two new cases for each existing case), indicating that the outbreak has yet to be brought under control.
The effectiveness of the Nigerian response, scientists say, is illustrated by a dramatic decrease in the number of secondary cases over time.
The success story for Nigeria, they maintain, sets a hopeful example for other countries, including the United States.
I recently read, in quick succession, three books by Japanese authors that have all inspired debate about a "steady-state society." They are Yoshinori Hiroi's "Teijogata Shakai: Atarashii 'Yutakasa' no Koso" (Steady-State Society: A New Concept of "Prosperity"); Ittaka Kishida's "Mittsu no Junkan to Bunmeiron no Kagaku" (The Science of Three Cycles and Civilization); and Kazuo Mizuno's "Shihonshugi no Shuen, Rekishi no Kiki" (The End of Capitalism and the Crisis of History).
"Steady-state" is probably a term unfamiliar to many readers. In short, it refers to a society in which population and economy have reached their limits of growth, and whose people accept that reality and refrain from constantly pursuing higher output.
For the past several decades, the world has pursued "sustainable growth." The three authors, as it happens, all argue that two factors must be brought into the theory of "sustainable growth": the aging of society and the maintenance of a stable population. In this respect, they say, we must give up striving for continuous growth.
Although the historical perspectives and the points they emphasize differ slightly from book to book, the three authors, each viewing the 21st century as part of the long history of humanity, generally argue that today's world stands at the edge of an abyss the human race has never faced before.