The surprising discovery of "square ice", which forms at room temperature, was made by an international team of researchers last week.

The study was published in Nature by a team of scientists from the UK and Germany led by Andre Geim of the University of Manchester and G. Algara-Siller of the University of Ulm. The accompanying review article was written by Alan Soper of the Rutherford Appleton Laboratory in the UK.

"We didn't expect to find square ice ... We found there is something strange in terms of water going through [nanochannels]. It's going too fast. And you can't explain that by just imagining a very thin layer of liquid. Liquids do not behave in that way. The important thing to realize is that it is ice in the sense of a crystallized structure, it's not ice in the familiar sense in that it's something cold and from which you have to protect yourself," said Professor Irina Grigorieva, one of the researchers.

To study the molecular structure of water inside a transparent nanoscale capillary, the team used electron microscopy. This enabled them to view individual water molecules, chiefly because the nano-capillary was made from graphene, which is one atom thick and does not impair electron imaging. Graphene was also chosen for its unusual properties, such as electrical conductivity and extreme strength. It is a 2D form of carbon that, when rolled into a cylinder, forms a carbon nanotube, a material which, according to The Koyal Group Info Mag, is a subject of further study because of its unusual strength.

The scientists themselves admitted surprise at finding that small square-shaped ice crystals formed at room temperature where the graphene capillaries were narrowest (three atomic layers of water at most). The water molecules formed square lattices arranged in neat rows -- an arrangement uncharacteristic of a substance known for forming hexagonal structures in regular ice. This discovery may be just the first example of unexpected water behavior in nanostructures.

The Koyal Group Info Mag reports that scientists have been trying for decades to understand how water's structure is affected when it is confined in narrow channels. Until now, the question could be approached only through computer simulations, and even then the results do not agree with each other.

The team is also using computer simulations to determine how common this square ice actually is. From what they have learned, a sufficiently thin water layer could form square ice regardless of the chemical properties of the nanopore walls confining it. Since water is practically everywhere -- in microscopic pores and in monolayers on surfaces -- square ice is likely very common in nature.

Aside from more practical applications in water distillation, desalination and filtration, the finding also allows a better understanding of how water behaves at the molecular scale, which is important in nanotechnology work.
In legend, Yeti is a huge and furry human-resembling creature also referred to as the Abominable Snowman, but in science, Yeti is just a bear.

Now the question is: what kind of bear? A new study, published in the journal ZooKeys, concludes that hair sample "evidence" for Yeti actually comes from Himalayan brown bears.

The finding refutes an earlier study that the hair belonged to an unknown type of bear related to polar bears.

At the center of the controversy are DNA analysis studies. Prior research, led by Bryan Sykes at the University of Oxford, determined that hairs formerly attributed to Yeti belonged to a mysterious bear species that may not yet be known to science.

Sykes told Discovery News that his paper "refers to two Himalayan samples attributed to yetis and which turned out to be related to an ancient polar bear. This may be the source of the legend in the Himalayas."

The new study, however, calls this possibility into question. The research, in this case, was authored by Eliécer E. Gutiérrez of the Smithsonian Institution and Ronald Pine at the University of Kansas.

Gutiérrez and Pine found that genetic variation in brown bears makes it impossible to assign, with certainty, the samples tested by Sykes and his co-authors to either brown bears or to polar bears.

Because of genetic overlap, the samples could have come from either species, but because brown bears occur in the Himalayas, Gutiérrez and Pine think there is no reason to believe that the samples in question came from anything other than ordinary Himalayan brown bears.

For the new study, Gutiérrez and Pine also examined what the gene sequences analyzed suggest about how six present-day bear species — including the polar bear and the brown bear — and the extinct Eurasian cave bear might be related.

This opened up a new mystery, as DNA from an Asian black bear in Japan indicated that this bear was not closely related to the mainland members of that species. The researchers believe that this unexpectedly large evolutionary distance between the two geographic groups of the Asian black bear merits further study.

"In fact, a study looking at the genetic and morphological variability of Asian black bear populations throughout the geographic distribution of the species is yet to be conducted, and it would surely yield exciting results," Gutiérrez concluded.

As for Yeti, believers might point out that the studies only looked at hair samples, and not the footprints, photographs, recorded sounds and other "evidence" for the Abominable Snowman.
It may be 2015 already, but in 2014 we saw some truly amazing scientific discoveries. We landed a probe on a comet, discovered new particles that further our knowledge of the physics of the universe, and learned more about the properties of the wonder-material graphene, which could eventually transform everything from fuel cell technology to battery and computing power and more.

At Futurism.co, Alex Klokus created an infographic that highlights 48 of the most transformative scientific advancements and discoveries of last year. We've republished the graphic here with permission, but you can check out Futurism's interactive version to click through to a source for each story.

The Theory of Everything review: film depicting the life of Professor Stephen Hawking (The Koyal Group Info Mag)

It is going to be a battle of the boffins at the Oscars next year. Benedict Cumberbatch is a frontrunner for playing Alan Turing in The Imitation Game and Eddie Redmayne will be a powerful contender for his remarkable performance as Professor Stephen Hawking in The Theory Of Everything.

Playing Hawking from PhD student through to global superstardom as the author of A Brief History Of Time, Redmayne is outstanding, inhabiting Hawking’s stricken body and brilliant mind with complete conviction.

In the same way that The Imitation Game humanised an intimidatingly clever and remote figure, The Theory Of Everything reveals the man behind the icon: courageous, mischievous and funny, but also difficult and selfish.

It may not be a warts-and-all portrait (the picture is too genteel for that) but it’s a touching, humorous and inspirational insight into a man who refused to accept conventional boundaries, both of the mind and body.

We’re reminded quite how extraordinary it is that he’s still alive (now 72) when a doctor informs him, while at Cambridge University, that he has only two years to live. Told that his body will shut down as Motor Neurone Disease destroys his muscle function, Stephen asks about his brain. The doctor (Adam Godley) explains that it will continue to function normally but adds: “No one will know what your thoughts are.” The great mind will have no way to communicate.

Most people would have thrown in the towel and perhaps Stephen would have done were it not for Jane Wilde (a wonderful Felicity Jones), the girlfriend who refused to give up on him or let him give up.

Petite and seemingly demure, she’s determined and quietly tenacious, and the film is as much about her as it is about Hawking. The screenplay by Anthony McCarten is based on her memoir, Travelling To Infinity: My Life With Stephen, and it’s their relationship, which resulted in three children but ended in divorce, that forms the heart of the story, along with the role played by family friend and Jane’s eventual second husband, the bashful choirmaster Jonathan Hellyer Jones (Charlie Cox).

This potentially messy state of affairs is handled with great delicacy and is the source of the picture’s fascination, heart and charm. It’s some achievement: what might have seemed uncomfortable and intrusive is actually moving, tender and sweet.

The result is a very British love story between three people, all extraordinary in their own way, who are trying to find happiness and fulfilment in the most trying of circumstances. We don’t get wild explosions or tantrums or declarations of love but mostly silent, dignified struggle and unspoken desire.

Initially we witness the love affair between Hawking and Jane, who meet at Cambridge and strike up an instant rapport at a party despite having little in common. She’s a student of medieval Spanish poetry and a firm believer in God; he’s a “cosmologist”, which he describes as a “religion for intelligent atheists”.

Still, love conquers all against the backdrop of a firework display during a May Ball where they kiss. On paper it sounds very Hollywood and their courtship is seductively staged and performed but the pair are winningly British and their conversation is hardly the stuff of your average Hollywood romance. They natter about quantum physics, God and Einstein.

Hawking explains his ambition to discover an “equation that explains everything in the universe” as he begins to explore his fascination with “time”.

The scientific talk is cleverly handled with some imaginative visual cues like cream swirling in a coffee cup. We may not understand the details but the general gist is clear as Hawking makes some ground-breaking discoveries into the origins of the universe.

In any case it’s not the science that compels or intrigues; we know the man’s a genius. What we don’t know is the personal story behind the work and the rather strange and testing family life endured by his wife, who for years was denied help by her husband. “We’re just a normal family,” he insists.
With the help of highly sensitive particle detectors, some of the world’s most powerful lasers, and good-old-fashioned quantum mechanics, physicists from around the world made important discoveries this year.

From detecting elusive particles forged in the core of our sun to teleporting quantum data farther than ever before, these physicists’ scientific research has helped us better understand the universe in which we live as well as pave the way for a future of quantum computers, nuclear fusion, and more.

11. Multiple teams detected what could be our first hints of dark matter.

Although dark matter -- the mysterious substance that makes up most of the matter in the universe but is seemingly undetectable to us here on Earth -- is still shrouded in mystery, two important discoveries in 2014 shed the first rays of light on this elusive material.

10. For the first time, physicists figured out the chemical composition of the mysterious and extremely rare phenomenon of 'ball lightning.'

Reports of ball lightning stretch back as far as the 16th century, but until the 1960s most scientists refused to believe it was real. But it is real. Ball lightning is a floating sphere or disk of lightning up to 10 feet across that lasts only seconds.

9. An analogue of the theoretical radiation made by black holes was recreated in the lab.

Last October, Jeff Steinhauer, a physicist at the Technion-Israel Institute of Technology in Haifa, announced that he had created an analogue for a bizarre type of radiation that can, in theory, escape black holes.

8. An international group of physicists compressed quantum data for the first time in history.

You might grumble when your Internet connection is slow, but it would be infinitely slower if today's classical computers could not compress the information we're constantly sending back and forth.
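As a point of contrast, classical lossless compression is routine; here is a minimal Python illustration using the standard zlib module (illustrative only, and unrelated to the quantum scheme in the study):

    # Classical lossless compression with Python's standard zlib module.
    # Illustrative only -- quantum data compression rests on very different principles.
    import zlib

    message = b"the same request header, sent over and over again " * 100
    packed = zlib.compress(message)

    # Highly repetitive data shrinks dramatically.
    print(len(message), "bytes before,", len(packed), "bytes after")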

7. Physicists made powerful, stellar explosions called supernovas in the lab -- for science.

During a supernova, a star explodes, ejecting its guts across space and leaving only a ghostly halo of gas and dust, called a supernova remnant, behind. Astrophysicists have observed supernova remnants of all shapes and sizes but have yet to understand why they are all so different.

6. Powerful lasers compressed a diamond to simulate the centres of the giant planets Jupiter and Saturn.

Jupiter and Saturn are the two largest planets in our solar system, and yet what is inside them is mostly a mystery -- we don't even know if their centres are liquid or solid.

5. Researchers transferred information in light four times farther than ever before -- an important step to quantum computers.

If we are ever to have a digital world run by quantum computers, then we must learn how to transport information in the form of what scientists call quantum data, or qubits, encoded inside subatomic particles such as ions or photons (light particles).

4. Physicists developed a new and better kind of fibre optics to transfer information.

Traditionally, when you're trying to transfer particles of light through a fibre optic cable, the last thing you want is for the particles to move about in a disorderly manner. But there is an exception to this, which scientists at the University of Wisconsin-Milwaukee and Clemson University discovered for the first time this year.

3. A physics team discovered a new particle, 80 years after it was first predicted.

Nearly 80 years after it was first predicted, the Majorana fermion was finally observed. Physicists at Princeton University and the University of Texas at Austin announced their discovery last October in the journal Science.

2. The National Ignition Facility made a nuclear fusion reaction that produced more energy than it used up -- a first.

Nuclear fusion is a nuclear reaction that generates up to four times more energy than nuclear fission -- the process that fuels today's nuclear power plants. One big issue standing in the way of harnessing this energy for electrical power is that creating the reaction has always taken more energy than we got out of it -- until now.

1. We've figured out how the sun generates energy through nuclear fusion in its core.

Energy from the sun is essential for life on Earth. Yet we were not certain of how the sun's core works until just this year.
New research on healthy sleep might get you thinking twice about reading from your e-reader or tablet before dozing off at night.

According to a study from Brigham and Women's Hospital, people who read on a lit screen before sleeping tend to fall asleep later than those who read a paperback.

The study, published in Proceedings of the National Academy of Sciences, is the newest contribution to a growing number of studies pointing to backlit devices, like our mobile phones and tablets, as culprits in sleep problems.

Anne-Marie Chang, a neuroscientist who headed the project, said, "It seems that use of these devices in the evening before bedtime really has this negative impact on our sleep and on your circadian rhythms."

The study was conducted in a lab with 12 people who were monitored for 2 weeks. Every evening, they were asked to read for 4 hours -- the first 5 days from an iPad and the next 5 days from a paperback. Once the subjects went to bed at 10pm each night, they were closely monitored for physiological changes.

It turned out that when the subjects read on screens, their circadian rhythms were disrupted and melatonin production was suppressed, leading to less deep sleep and feelings of tiredness the next day.
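As an illustration only -- not the authors' actual analysis or data -- a within-subject design like this is typically evaluated with a paired comparison, since every participant reads under both conditions. A minimal Python sketch with made-up melatonin values:

    # Hypothetical paired (within-subject) comparison; the values are invented,
    # not the study's data.
    from scipy import stats

    # Made-up evening melatonin levels (pg/mL) for the 12 subjects,
    # after the iPad-reading nights vs. the print-reading nights.
    ipad = [18.2, 21.5, 15.9, 19.4, 22.1, 17.3, 20.8, 16.5, 19.9, 18.7, 21.0, 17.8]
    print_reading = [24.6, 27.1, 22.4, 25.0, 28.3, 23.9, 26.2, 22.8, 25.7, 24.1, 27.5, 23.3]

    # Each subject serves as their own control, so a paired t-test is appropriate.
    t_stat, p_value = stats.ttest_rel(ipad, print_reading)
    print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")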

Chang advised that the proper recommendation is to set aside electronic devices a couple of hours before sleeping and read printed books instead. E-book devices that do not have backlit screens would also be better. According to The Koyal Group Info Mag researchers, any device that gives off blue-wavelength light is problematic, as a person will tend to hold it close to the eyes.

Professor Charles Czeisler, one of the lead researchers, said, "The light emitted by most e-readers is shining directly into the eyes of the reader, whereas from a printed book or the original Kindle, the reader is only exposed to reflected light from the pages of the book. Sleep deficiency has been shown to increase the risk of cardiovascular diseases, metabolic diseases and cancer. Thus, the melatonin suppression that we saw in this study among participants when they were reading from the light-emitting e-reader concerns us."

Meanwhile, other scientists are cautioning the public against drawing conclusions from the study, because the critical changes were observed in a controlled laboratory environment rather than in a real-life setting.

An experiment conducted in a lab does not effectively mimic real life, where people are naturally exposed to sunlight. For instance, the low light in the lab might have made the subjects more sensitive to light from screens. As The Koyal Group Info Mag noted, a person exposed to mere room light all day could be more sensitive to the light from an e-book reader than a person who had been outside in sunlight all day.

However, they all agree that the blue-wavelength light emitted by devices like laptops, tablets and mobile phones has negative effects and should be avoided, at least before bedtime.

In related research conducted by Mariana Figueiro of Rensselaer Polytechnic Institute in 2012, subjects who used an e-reader device such as a tablet before going to sleep at night had lower melatonin levels after using it for 2 hours. But her team clarified that the lit screen is only one of the contributing factors.
On Monday, November 17, the US House of Representatives passed H.R. 5544, the Low Dose Radiation Research Act, which called for the National Academies to “conduct a study assessing the current status and development of a long-term strategy for low dose radiation research.”

Coincidentally that was the same day that the National Academy of Sciences hosted a publicly accessible, all day meeting to determine if there had been enough new developments in radiation health effects research to justify the formation of a new BEIR (Biological Effects of Ionizing Radiation) committee. If formed, that would be BEIR VIII, the latest in a series of committees performing a survey of available research on the health effects of atomic (now ionizing) radiation.

I had the pleasure of attending the meeting, which was held in the ornate NAS building on Constitution Avenue in Washington, DC. There were about 20 presenters talking about various aspects of the scientific and political considerations associated with the decision to form BEIR VIII. Several of the presenters had performed experimental research under the currently moribund Department of Energy’s Low Dose radiation research program.

That intriguing program was using modern genetics techniques to learn a great deal about the dynamic nature of DNA in organisms and about the ways that living tissues isolate and repair recurring damage that comes as a result of metabolic processes, heat, chemicals and ionizing radiation. It was defunded gradually beginning in 2009 and completely by 2011, with the money making its way to solar and wind energy research as the Office of Science shifted its priorities under a flat top line budget.

The agenda allocated a considerable amount of time for public comments. There were a couple of members of the audience interested in the science falsifying the “no safe dose” model who took advantage of the opportunities to speak, but so did a number of professional antinuclear activists from Maryland, Ohio, New York and Tennessee.

Need Better Results This Time

An epic struggle with important health, safety, cost and energy abundance implications is shaping up with regard to the way that the officially sanctioned science and regulatory bodies treat the risks and benefits associated with using ionizing radiation at low doses and dose rates for medical uses, industrial uses and power production.

We must make sure that this battle for science, hearts and minds is not as asymmetrical as the one fought between 1954 and 1964. One skirmish in the battle worth winning will be to encourage the passage of the Low Dose Radiation Research Act and the annual appropriations that will enable it to function long into the future.

Here is a brief version of that lengthy prior engagement, where there were huge winners and losers. Losers included truth, general prosperity, peace and the environment. Partial winners included people engaged in the global hydrocarbon economy in finance, exploration, extraction, refinement, transportation, tools, machines and retail distribution. There were also big financial winners in pharmaceuticals, medical devices, oncology, and agriculture.

Rockefeller Funded Survey

During a 1954 Rockefeller Foundation Board of Trustees meeting, several of the trustees asked the President of the National Academy of Sciences (NAS) if his esteemed organization would be willing to review what was known about the biological effects of atomic radiation.

The board did not have to pick up the phone or send a letter to make that request. Detlev Bronk, who was the serving president of the NAS, was already at the table as a full member of the Rockefeller Foundation Board of Trustees. The board agreed that, based on their interpretations of recent media coverage, the public was confused and not properly informed about the risks of radiation exposure and the potential benefits of the Atomic Age.

The tasking given to the NAS was to form a credible committee that would study the science and issue a report “in a form accessible to seriously concerned citizens.”1

Aside: For historical context, that Foundation board meeting took place within months after President Eisenhower made his “Atoms for Peace” speech in December 1953. That speech to the United Nations announced a shift in focus of the Atomic Age from weapons development to more productive applications like electrical power generation and ship propulsion.

At the time the request to the NAS was made, the Rockefeller Foundation had been funding radiation biology-related research for at least 30 years, including the Drosophila mutation experiments that Hermann Muller conducted during the 1920s at the University of Texas. Foundation board members and supported scientists had been following developments in atomic science since the earliest discoveries of radiation and the dense energy stored inside atomic nuclei.

In March 1948, the Tripartite Conferences on radiation protection, a group that included experienced radiation researchers and practitioners from the US, Canada and the UK, had determined that the permissible doses for humans should be reduced from 1 mGy/day (in SI units) to 0.5 mGy/day or 3 mGy/week.

That reduction was not made because of any noted negative health effects, but to provide an additional safety factor.

In between these two extremes there is a level of exposure — in the neighborhood of 0.1 r/day — which all experience to date show to be safe, but the time of observation of large numbers of people exposed at this rate under controlled conditions, is too short to permit a categorical assertion to this effect.2
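For readers unused to the older unit in the quote: assuming the standard rule of thumb that 1 r of exposure corresponds to roughly 8.77 mGy of absorbed dose in air (an assumption for illustration; the historical documents used the units loosely),

    0.1 r/day × 8.77 mGy/r ≈ 0.88 mGy/day ≈ 1 mGy/day

so the "safe" 0.1 r/day figure is essentially the 1 mGy/day permissible dose that the 1948 conference halved to 0.5 mGy/day (3 mGy/week) as an added safety factor.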

End Aside.

Biological Effects of Atomic Radiation

The first NAS Biological Effects of Atomic Radiation committee began its work in April 1955. There were six subcommittees, each of which authored a section of the committee’s report. The report was identified as a preliminary version that was to be followed with a more technically detailed report scheduled to appear within the next couple of years, if desired by responsible government agencies.

Unlike the documents supporting the permissible dose limits that came out of the Tripartite Commission mentioned in the aside above, the NAS BEAR 1 committee report, especially the section from the Genetics Committee, was professionally promoted and received extensive media coverage and public attention.

The NAS held a press conference announcing the release of the report and answering questions in Washington, DC on June 12. Among other media attention, that press conference resulted in no fewer than six related articles in the June 13, 1956 edition of the New York Times. Several additional articles were published during the following weeks. The selection of pieces included a lengthy article that started at the top of the right-hand column of the paper and continued with another 20-25 column inches on page 17.
“A cure for cancer” – the phrase is so often repeated, surely it must finally materialise? To anyone not familiar with the developing story of cancer research, the position seems tragically unsatisfactory. Billions of pounds and decades of work by thousands of researchers have produced much better prognoses for some cancers, but harsh forms of chemotherapy and radiotherapy are still the standard treatment and the much sought-after magic cure remains tantalisingly out of reach.

As Sue Armstrong points out at the beginning of her book, while we may naively wonder why so many people get cancer, researchers are asking "Why so few?". Every time a cell divides – skin and digestive-tract cells are constantly proliferating – there is a possibility of genetic errors. For cancer to develop, it requires the control mechanism in just one cell to be thrown into disorder, resulting in unlimited replication of that rogue cell. Considering the stupendous number of cell divisions occurring in the human body, the development of cancer is rare. Scientists have long suspected that there is a very powerful protective mechanism at work.
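To make the "Why so few?" point concrete, here is a rough back-of-envelope sketch in Python; every number is an assumption for illustration, not a figure from the book:

    # Rough illustration only -- both numbers below are assumptions.
    # A human body undergoes on the order of 10^16 cell divisions in a lifetime.
    divisions = 1e16
    # Suppose, absent any protective mechanism, even one division in a billion
    # left behind an unrepaired rogue cell capable of unlimited replication.
    p_rogue = 1e-9
    print("expected rogue cells per lifetime:", divisions * p_rogue)  # ~10 million

Even with an error rate that tiny, an unprotected body would accumulate millions of rogue cells, which is why researchers inferred that something very powerful must be suppressing them.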

P53 (the name refers to a protein of molecular weight 53 kilodaltons) is the cancer prophylactic for most multicellular organisms; it has been dubbed the guardian of the genome. While cancer has many causes and can be insidiously malignant throughout the body, p53 is the single most unifying factor in the disease: for most kinds of cancer to develop, p53’s suppressor activity has to have been disabled.

It has taken scientists a long time to establish some of the basic facts about cancer. In 1911 the pathologist Peyton Rous reported a virus that caused cancer in chickens. For decades this finding was dismissed: cancer, according to the official line, could not be caused by a virus. Rous lived long enough to see Francis Crick and James Watson’s double helix structure of 1953 establish DNA’s role at the heart of life and for his own theory to be subsequently vindicated; he received the Nobel prize in 1966 for his pioneering work.

How did we come to probe these minute molecular workings of nature? Most popular texts on genomics and molecular biology blithely report the results without offering any insight into how the scientists have reached their conclusions. Armstrong’s book has one of the best accounts I’ve read of how science is actually performed. She asks, what can they actually see? When it comes to a gene, which is only two nanometres wide, the answer is “nothing”; they work by inferring from experiments on things that they can see. As she says: “It is the ‘unseeable’ nature of molecular biology … that makes it so difficult to grasp.” She quotes one of her scientists, Peter Hall: “it’s based on faith, ultimately.” And even when scientists have a good sense of what their experiments are telling them, they’re up against the fact that life is an immensely complicated process: we can land a probe on a distant comet after a 10-year flight because the Newtonian clockwork of bodies in space is predictable. But all-embracing laws of biology are hard to find.

The process of discovery goes like this (and p53 is a classic example): something unexpected and odd turns up; investigation begins; its character gradually becomes clearer but its purpose remains a mystery; then evidence accumulates to suggest a function. That evidence is often misleading and, in the case of p53, a function diametrically opposed to the true one was ascribed to it for 10 years: it was thought to be a cancer-causing protein. Then came the moment of clarity and the potentially great unifying principle was born: in 1989, p53 was revealed as the master tumour suppressor – an order was established at last.

There are great hopes that our knowledge of p53 will lead to novel cancer treatments, but the pattern has grown much more complicated since then. In some situations p53 can cause cancer. For cancers to grow they need a mutated and disabled p53: in science, these cycles of discovery go on forever, and so will the battle between cancer and p53.

But progress is being made. One of the brightest hopes for therapy using p53 is in families with a predisposition to cancer. The reason for this blight is that the family members have each inherited a mutant copy of p53 and are therefore without the normal protection it provides. An experimental gene therapy (Advexin) already exists to correct this, but in 2008 the US regulatory body refused to license the treatment. A similar product, Gendicine, is licensed in China and approval for its clinical use is being sought in the US. One common story in today’s medical research is of remarkable possibilities constantly being blocked by a sluggish regulatory system and the skewed priorities of Big Pharma, which prefers to develop bestselling drugs that will have the widest use.

Armstrong’s book will offer many readers a sense of hope, but might also induce intense frustration at the long time it takes for discoveries in the lab to filter down to hospitals and the marketplace. Nevertheless, we can be sure that p53, even if it is not the “cure for cancer”, will have an honourable role to play in our attempts to find one.

Related Articles:
The Koyal Group Info Mag - SA conference to showcase exploration

Despite record mineral production in South Australia, exploration investment has fallen from $328 million to just $116 million over the last three years, according to SAEMC coordinator Dr. Kevin Wills. With the value of the state's mineral production reaching $7.5 billion in 2013-14, Dr. Wills said the 11th South Australian Exploration and Mining Conference (SAEMC) would provide a much-needed showcase for blue-sky exploration and mining in SA.

"Friday's agenda is therefore an open window onto opportunities to reverse this trend [of declining exploration] and attract new capital back into our exploration industries," Dr. Wills said.

The conference comes only weeks after the SA government announced a new strategy to create 5,000 new mining- and energy-related jobs and to reach $10 billion in mineral exports from the state within three years, as well as the new free trade agreement with China.

"We are conscious of how challenging it is for most companies in this revenue space, whatever their size, as commodity prices have been nothing short of volatile this year, with that volatility affecting share prices, the ability to attract new project capital, and investor sentiment," Dr. Wills said.

"But exploration and mining are a long-term game. It is about sustained assessment of geological opportunities, and about balanced partnerships and joint ventures that can focus on proper project targeting and value-for-money 'ground' expenditure to deliver commercial mining results."

Dr. Wills, who is also managing director of Flinders Exploration, said recognition of South Australia's untapped resource potential has lately been affirmed in new policy positions taken by the government and supported by industry.

"There is a shared desire for mining to become not just a greater contributor to the state's wealth, but to do so in a way that is sustainable and attracts new national and international partners," he said.
The Koyal Group Info Mag Review: In the Digital Age, Science Publishing Needs an Upgrade (Op-Ed)

Daniel Marovitz is CEO of Faculty of 1000. Faculty of 1000 is a publisher for life scientists and clinical researchers, and comprises three services: F1000Prime, F1000Research and F1000Posters. F1000Research is an open science publishing platform for life scientists that offers immediate publication and transparent peer review. Before that, he was the CEO and co-founder of Buzzumi, a cloud-based enterprise software company. He contributed this article to Live Science's Expert Voices: Op-Ed & Insights.

Quick quiz, which is bigger: the global music industry or scientific publishing? You may be surprised to learn that the music industry racks up $15 billion each year in sales, whereas scientific publishing quietly brings in $19 billion. This "under-the-radar" colossus gets very little attention, yet influences us all.

In many ways, published science tracks and influences the course of our species on this planet. It enables scientists to find out what other researchers are working on and what discoveries they have made. It helps governments decide where to invest and helps universities decide whom to hire. Most people don't give it a second thought but they should. All of us are consumers of science, and perhaps most crucially, all of us are eventually medical patients dependent on the discoveries published in medical journals. The way science is disseminated and the way articles are published is not just a geeky question for librarians — it impacts our society in profound ways.

Publishing science

The history of scientific journals dates back to 1665, when the French Journal des sçavans and the English Philosophical Transactions of the Royal Society first published research results. Around the same time, the first peer review process was recorded at the Royal Society of London. By the 20th century, peer review became common practice to help allocate scientific funding, and before the Internet, all scientific journals were published on paper.

Paper costs money to buy, more money to print, and even more money to transport. It made sense that journals worked hard to find the "best" studies because they were constrained to publishing 10 to 20 articles each month. They limited the number of pages authors could write and severely limited (and sometimes charged the authors extra for) color and additional images. The process was long and laborious for everyone involved, and was constrained by the limits and costs of a necessarily analog world.

You would naturally assume that the Internet Age would have changed all of that, but while all journals now publish online, most of the process is still based on a paper past. This means many perfectly sound articles are rejected, articles take too long to be published, and most articles are published with conclusions, but without the data that supports them. Enough data should be shared by authors to ensure that anyone can replicate their research efforts and achieve similar results.

Such processes seriously bias what is published, impacting all aspects of science and thus society: from new scientific discoveries and the development of new medicines, to scientists' livelihoods and how public money is spent.
