• Wallabag.it! - Save to Instapaper - Save to Pocket -

    50 years ago, U.S. commercial whaling was coming to an end

    cover of the March 6, 1971 issue

    Whale protection
    Science News, March 6, 1971

    Whaling by the single remaining United States whaling firm, the Del Monte Fishing Co. of San Francisco, will probably end as the result of a proposal … to terminate licensing for hunting the finback, sei and sperm whales. The three were placed on the endangered species list last year.

    Update

    During the 20th century, humans killed an estimated 2.9 million large whales. In response to those losses, countries eventually took action. Legislation passed in the 1970s effectively put a stop to commercial whaling in the United States. A worldwide ban followed in 1986, though some countries, including Japan, Norway and Iceland, continue to hunt the animals.

    The bans have helped whale populations recover, but not enough to move these three species off the U.S. endangered species list. Sperm whales have rebounded to an estimated 450,000 individuals, sei whales number around 50,000 and finback whales have reached about 100,000. Ship collisions now pose a bigger threat to the mammals than commercial whaling (SN Online: 7/29/14).

    in Science News on March 04, 2021 01:00 PM.

  •

    Dev session: James Knight, Thomas Nowotny: GeNN

    The GeNN simulator

    James Knight and Thomas Nowotny will introduce the GeNN simulation environment and discuss its development in this dev session.

    The abstract for the talk is below:

    Large-scale numerical simulations of brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. Similarly, spiking neural networks are also gaining traction in machine learning with the promise that neuromorphic hardware will eventually make them much more energy efficient than classical ANNs. In this dev session, we will present the GeNN (GPU-enhanced Neuronal Networks) framework [1], which aims to facilitate the use of graphics accelerators for computational models of large-scale spiking neuronal networks to address the challenge of efficient simulations. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. GeNN was originally developed as a pure C++ and CUDA library but, subsequently, we have added a Python interface and OpenCL backend. The Python interface has enabled us to develop a PyNN [2] frontend and we are also working on a Keras-inspired frontend for spike-based machine learning [3].

    In the session we will briefly cover the history and basic philosophy of GeNN and show some simple examples of how it is used and how it works inside. We will then talk in more depth about its development with a focus on testing for GPU-dependent software and some of the further developments such as Brian2GeNN [4].

    in INCF/OCNS Software Working Group on March 04, 2021 12:08 PM.

  •

    DNA databases are too white, so genetics doesn’t help everyone. How do we fix that?

    It’s been two decades since the Human Genome Project first unveiled a rough draft of our genetic instruction book. The promise of that medical moon shot was that doctors would soon be able to look at an individual’s DNA and prescribe the right medicines for that person’s illness or even prevent certain diseases.

    That promise, known as precision medicine, has yet to be fulfilled in any widespread way. True, researchers are getting clues about some genetic variants linked to certain conditions and some that affect how drugs work in the body. But many of those advances have benefited just one group: people whose ancestral roots stem from Europe. In other words, white people.

    Instead of a truly human genome that represents everyone, “what we have is essentially a European genome,” says Constance Hilliard, an evolutionary historian at the University of North Texas in Denton. “That data doesn’t work for anybody apart from people of European ancestry.”

    She’s talking about more than the Human Genome Project’s reference genome. That database is just one of many that researchers are using to develop precision medicine strategies. Often those genetic databases draw on data mainly from white participants. But race isn’t the issue. The problem is that collectively, those data add up to a catalog of genetic variants that don’t represent the full range of human genetic diversity.

    When people of African, Asian, Native American or Pacific Island ancestry get a DNA test to determine if they inherited a variant that may cause cancer or if a particular drug will work for them, they’re often left with more questions than answers. The results often reveal “variants of uncertain significance,” leaving doctors with too little useful information. This happens less often for people of European descent. That disparity could change if genetic research included a more diverse group of participants, researchers agree (SN: 9/17/16, p. 8).

    One solution, Hilliard suggests, is to make customized reference genomes for populations that face worse health outcomes, such as groups whose members die from cancer or heart disease at higher rates than others.

    And the more specific the better. For instance, African Americans who descended from enslaved people have geographic and ecological origins as well as evolutionary and social histories distinct from those of recent African immigrants to the United States. Those histories have left stamps in the DNA that can make a difference in people’s health today. The same goes for Indigenous people from various parts of the world and Latino people from Mexico versus the Caribbean or Central or South America.

    Researchers have made efforts to boost diversity among participants in genetic studies, but there is still a long way to go. How to involve more people of diverse backgrounds — which goes beyond race and ethnicity to include geographic, social and economic diversity — in genetic research is fraught with thorny ethical questions.

    To bring the public into the conversation, Science News posed some core questions to readers who watched a short video of Hilliard explaining her views.

    Again and again, respondents to our unscientific survey said that genetic research is important for improving medical care. But our mostly white respondents had mixed feelings about whether the solution is customized projects such as Hilliard proposes or a more generalized effort to add variants to the existing human reference genome. Many people were concerned that pointing out genetic differences may reinforce mistaken concepts of racial inferiority and superiority, and lead to more discrimination.

    Illustration of a strand of DNA. Delphine Lee

    Why is genetics so white?

    Some of our readers asked how genetic research got to this state in the first place. Why is genetic research so white and what do we do about it?

    Let’s start with the project that makes precision medicine even a possibility: the Human Genome Project, which produced the human reference genome, a sort of master blueprint of the genetic makeup of humans. The reference genome was built initially from the DNA of people who answered an ad in the Buffalo News in 1997.

    Although many people think the reference genome is mostly white, it’s not, says Valerie Schneider, a staff scientist at the U.S. National Library of Medicine and a member of the Genome Reference Consortium, the group charged with maintaining the reference genome. The database is a mishmash of more than 60 people’s DNA.

    An African American man, dubbed RP11, contributed 70 percent of the DNA in the reference genome. About half of his DNA was inherited from European ancestors, and half from ancestors from sub-Saharan Africa. Another 10 people, including at least one East Asian person and seven of European descent, together contributed about 23 percent of the DNA. And more than 50 people’s DNA is represented in the remaining 7 percent of the reference, Schneider says. Information about the racial and ethnic backgrounds of most of the contributors is unknown, she says.

    All humans have basically the same DNA. Any two people are 99.9 percent genetically identical. That’s why having a reference genome makes sense. But the 0.1 percent difference between individuals — all the spelling variations, typos, insertions and deletions sprinkled throughout the text of the human instruction book — contributes to differences in health and disease.

    Much of what is known about how that 0.1 percent genetic difference affects health comes from a type of research called genome-wide association studies, or GWAS. In such studies, scientists compare DNA from people with a particular disease with DNA from those who don’t have the disease. The aim is to uncover common genetic variants that might explain why one person is susceptible to that illness while another isn’t.

    In 2018, people of European ancestry made up more than 78 percent of GWAS participants, researchers reported in Cell in 2019. That’s an improvement from 2009, when 96 percent of participants had European ancestors, researchers reported in Nature.

    Most of the research funded by the major supporter of U.S. biomedical research, the National Institutes of Health, is done by scientists who identify as white, says Sam Oh, an epidemiologist at the University of California, San Francisco. Black and Hispanic researchers collectively receive about 6 percent of research project grants, according to NIH data.

    “Generally, the participants who are easier to recruit are people who look like the scientists themselves — people who share similar language, similar culture. It’s easier to establish a rapport and you may already have inroads into communities you’re trying to recruit,” Oh says.


    When origins matter

    Hilliard’s hypothesis is that precision medicine, which tailors treatments based on a person’s genetic data, lifestyle, environment and physiology, is more likely to succeed when researchers consider the histories of groups that have worse health outcomes. For instance, Black Americans descended from enslaved people have higher rates of kidney disease and high blood pressure, and higher death rates from certain cancers than other U.S. racial and ethnic groups.

    In her work as an evolutionary historian studying the people and cultures of West Africa, Hilliard may have uncovered one reason that African Americans descended from enslaved people die from certain types of breast and prostate cancers at higher rates than white people, but have lower rates of the brittle-bone disease osteoporosis. African Americans have a variant of a gene called TRPV6 that helps their cells take up calcium. Overactive TRPV6 is also a hallmark of those breast and prostate cancers that disproportionately kill Black people in the United States.

    The variant can be traced back to the ancestors of some African Americans: Niger-Congo–speaking West Africans. In that part of West Africa, the tsetse fly kills cattle, making dairy farming unsustainable. Those ancestral people typically consumed a scant 200 to 400 milligrams of calcium per day. The calcium-absorbing version of TRPV6 helped the body meet its calcium needs, Hilliard hypothesizes. Today, descendants of some of those people still carry the more absorbent version of the gene, but consume more than 800 milligrams of calcium each day.

    Assuming that African American women have the same dietary need for calcium as women of European descent may lead doctors to recommend higher calcium intake, which may inadvertently encourage growth of breast and prostate cancers, Hilliard reported in the Journal of Cancer Research & Therapy in 2018.

    “Nobody is connecting the dots,” Hilliard says, because most research has focused on the European version of TRPV6.


    One size doesn’t fit all

    Some doctors and researchers advocate for racialized medicine in which race is used as a proxy for a patient’s genetic makeup, and treatments are tailored accordingly. But racialized medicine can backfire. Take the blood thinner clopidogrel, sold under the brand name Plavix. It is prescribed to people at risk of heart attack or stroke. An enzyme called CYP2C19 converts the drug to its active form in the liver.

    Some versions of the enzyme don’t convert the drug to its active form very well, if at all. “If you have the enzyme gene variant that will not convert [the drug], you’re essentially taking a placebo, and you’re paying 10 times more for something that will not do what something else — aspirin — will do,” Oh says.

    The inactive versions are more common among Asians and Pacific Islanders than among people of African or European ancestry. But just saying that the drug won’t work for someone who ticked the Pacific Islander box on a medical history form is too simplistic. About 60 to 70 percent of people from the Melanesian island nation of Vanuatu carry the inactive forms. But only about 4 percent of fellow Pacific Islanders from Fiji and the Polynesian islands of Samoa, Tonga and the Cook Islands, and 8 percent of New Zealand’s Maori people have the inactive forms.

    Assuming that someone has a poorly performing enzyme based on their ethnicity is unhelpful, according to Nuala Helsby of the University of Auckland in New Zealand. These examples “reiterate the importance of assessing the individual patient rather than relying on inappropriate ethnicity-based assumptions for drug dosing decisions,” she wrote in the British Journal of Clinical Pharmacology in 2016.

    A far better approach than assuming either that ethnicity indicates genetic makeup or that everyone is like Europeans is to analyze a person’s DNA and have a precise reference genome to compare it against, Hilliard says. Deciding which genomes to create should be based on known health disparities.

    “We have to stop talking about race, and we have to stop talking about color blindness.” Instead, researchers need to consider the very particular circumstances and environments that a person’s ancestors adapted to, Hilliard stresses.


    What is diversity in genetics?

    Recruiting people from all over the world to participate in genetic research might seem like the way to increase diversity, but that’s a fallacy, Hilliard says. If you really want genetic diversity, look to Africa, she says.

    Humans originated in Africa, and the continent is home to the most genetically diverse people in the world. Ancestors of Europeans, Asians, Native Americans and Pacific Islanders carry only part of that diversity, so sequencing genomes from geographically dispersed people won’t capture the full range of variants. But sequencing genomes of 3 million people in Africa could accomplish that task, medical geneticist Ambroise Wonkam of the University of Cape Town in South Africa proposed February 10 in Nature (SN Online: 2/22/21).

    Wonkam is a leader in H3Africa, or Human Heredity and Health in Africa. That project has cataloged genetic diversity in sub-Saharan Africa by deciphering the genomes of 426 people representing 50 groups on the continent. The team found more than 3 million genetic variants that had never been seen before, the researchers reported October 28 in Nature. “What we found is that populations that are not well represented in current databases are where we got the most bang for the buck; you see so much more variation there,” says Neil Hanchard, a geneticist and physician at Baylor College of Medicine in Houston.

    What’s more, groups living side by side can be genetically distinct. For instance, the Berom of Nigeria, a large ethnic population of about 2 million people, has a genetic profile more similar to East African groups than to neighboring West African groups. In many genetic studies, scientists use another large Nigerian group, the Yoruba, “as the go-to for Africa. But that’s probably not representative of Nigeria, let alone Africa,” Hanchard says.

    That’s why Hilliard argues for separate reference genomes or similar tools for groups with health problems that may be linked to their genetic and localized geographic ancestry. For West Africa, for example, this might mean different reference datasets for groups from the coast and those from more inland regions, the birthplace of many African Americans’ ancestors.

    Some countries have begun building specialized reference genomes. China compiled a reference of the world’s largest ethnic group, Han Chinese. A recent analysis indicates that Han Chinese people can be divided into six subgroups hailing from different parts of the country. China’s genome project is also compiling data on nine ethnic minorities within its borders. Denmark, Japan and South Korea also are creating country-specific reference genomes and cataloging genetic variants that might contribute to health problems that their populations face. Whether this approach will improve medical care remains to be seen.

    People often have the notion that human groups exist as discrete, isolated populations, says Alice Popejoy, a public health geneticist and computational biologist at Stanford University. “But we really have, as a human species, been moving around and mixing and mingling for hundreds of thousands of years,” she says. “It gets very complicated when you start talking about different reference genomes for different groups.” There are no easy dividing lines. Even if separate reference genomes were built, it’s not clear how a doctor would decide which reference is appropriate for an individual patient.


    Discrimination worries

    One big drawback to Hilliard’s proposal may be social rather than scientific, according to some Science News readers.

    Many respondents to our survey expressed concern that even well-intentioned scientists might do research that ultimately increases bias and discrimination toward certain groups. As one reader put it, “The idea of diversity is being stretched into an arena where racial differences will be emphasized and commonalities minimized. This is truly the entry to a racist philosophy.”

    Another reader commented, “The fear is that any differences that are found would be exploited by those who want to denigrate others.” Another added, “The idea that there are large genetic differences between populations is a can of worms, isn’t it?”

    Indeed, the Chinese government has come under fire for using DNA to identify members of the Uighur Muslim ethnic group, singling them out for surveillance and sending some to “reeducation camps.”

    People need a better understanding of what it means when geneticists talk about human diversity, says Charles Rotimi, a genetic epidemiologist and director of the Center for Research on Genomics and Global Health at the U.S. National Human Genome Research Institute, or NHGRI, in Bethesda, Md. He suggests beginning with “our common ancestry, where we all started before we went to different environments.” Because the human genome is able to adapt to different environments, humans carry signatures of some of the geographic locations where their ancestors settled. “We need to understand how this influenced our biology and our history,” Rotimi says.

    Illustration of a DNA strand made of people. Expanding DNA databases to include a broader mix of people may reveal more variants relevant to some common diseases. Delphine Lee

    Researchers can work to understand the genetic diversity within our genome “without invoking old prejudices, without putting our own social constructs on it,” he says. “I don’t think the problem is the genome. I think the problem is humanity.”

    Lawrence Brody, director of NHGRI’s Division of Genomics and Society, agrees: “The scientists of today have to own the discrimination that happened in the generations before, like the Tuskegee experiment, even though we’re very far removed from that.” During the infamous Tuskegee experiment, African American men with syphilis were not given treatment that could have cured the infection.

    “We want the fruits of genetic research to be shared by everyone,” Brody says. It’s important to determine when genetic differences contribute to disease and when they don’t. Especially for common diseases, such as heart disease and diabetes, genetics may turn out to take a back seat to social and economic factors, such as access to health care and fresh foods, for example, or excessive stress, racism and racial biases in medical care. The only way to know what’s at play is to collect the data, and that includes making sure the data are as diverse as possible. “The ethical issue is to make sure you do it,” Brody says.

    Hilliard says that the argument that minorities become more vulnerable when they open themselves to genetic research is valid. “Genomics, like nuclear fusion, can be weaponized and dangerous,” she says in response to readers’ concerns. “Minorities can choose to be left out of the genomic revolution or they can make full use of it,” by adding their genetic data to the mix.


    Different priorities

    Certain groups are choosing to steer clear, even as scientists try to recruit them into genetic studies. The promise that the communities that donate their DNA will reap the benefits someday can be a hard sell.

    “We’re telling these communities that this is going to reduce health disparities,” says Keolu Fox, a Native Hawaiian and human geneticist at the University of California, San Diego. But so far, precision medicine has not produced drugs or led to health benefits for communities of color, he pointed out last July in the New England Journal of Medicine. “I’m really not seeing the impact on [Native Hawaiians], the Navajo Nation, on Cheyenne River, Standing Rock. In the Black and brown communities, the least, the last, the looked over, we’re not seeing the … impact,” Fox says.

    That’s because “we have a real basic infrastructure problem in this country.” Millions of people don’t have health care. “We have people on reservations that don’t have access to clean water, that don’t have the … internet,” he says. Improving infrastructure and access to health care would do much more to erase health disparities than any genetics project could right now, he says.

    Many Native American tribes have opted out of genetic research. “People ask, ‘How do we get Indigenous peoples comfortable with engaging with genomics?’ ” says Krystal Tsosie, a member of the Navajo (Diné) Nation, geneticist at Vanderbilt University in Nashville, and cofounder of the Native Biodata Consortium. “That should never be the question. It sounds coercive, and there’s always an intent in mind when you frame the question that way.” Instead, she says, researchers should be asking how to protect tribes that choose to engage in genetic research.


    And issues of privacy become a big deal for small groups, such as the 574 recognized Native American tribal nations in the United States, or isolated religious or cultural groups such as the Amish or Hutterites. If one member of such a group decides to give DNA to a genetic project, that submission may paint a genetic portrait of every member of the group. Such decisions shouldn’t be left in individual hands, Tsosie says; they should be made by the community.

    Hilliard says minorities’ resistance to participating in genetic research is about more than a fear of being singled out; it’s the result of being experimented on but seeing medical breakthroughs benefit only white people.

    “Medical researchers just need to accomplish something that benefits somebody other than Europeans,” she says. “If Blacks or Native Americans or other underrepresented groups saw even a single example of someone of their ethnicity actually being cured of the many [common] chronic diseases and specific cancers for which they are at high risk, that paranoia would evaporate overnight.”

    in Science News on March 04, 2021 11:00 AM.

  •

    University of New Mexico investigation finds manipulated data and images, prompts retractions

    A research group at the University of New Mexico has lost at least two papers after an inquiry found evidence of manipulated data. One article, “Large-Area Semiconducting Graphene Nanomesh Tailored by Interferometric Lithography,” appeared in 2015 in Scientific Reports, a Springer Nature title, and has been cited 25 times, according to Clarivate Analytics’ Web of …

    in Retraction watch on March 04, 2021 11:00 AM.

  •

    Advocates Of Equality For All Are More Likely To Show Prejudice Against Older Adults At Work

    By Emma Young

    Social justice movements, such as Black Lives Matter and #MeToo, have done huge amounts to address racism and sexism in our society. It’s now common for organisations to have diversity programmes, for example. As Ashley Martin at Stanford University and Michael S North at New York University note in their new paper in the Journal of Personality and Social Psychology, Facebook has famously invested millions of dollars in increasing diversity. However — and this is a big however — the pair’s work reveals that people who are keenest to advocate for women and racial minorities harbour more prejudice against a group that reports almost as much US workplace discrimination as these two: older people. 

    As the researchers note, Facebook’s Mark Zuckerberg has said that “younger people are just smarter” while Vinod Khosla, co-founder of Sun Microsystems, which has won awards for diversity and inclusion, has opined that “people over 45 basically die in terms of new ideas”. They are not alone in this attitude — broader society (or at least US and UK society, as a generalisation) finds ageism acceptable, too. “Ageism is so condoned in American culture that many do not see it as an ‘-ism’, in the same manner of other forms of prejudice,” the researchers note.  And yet, older people as a group are disadvantaged, and have more limited opportunities.

    To explore ageism and how it might interact with views on sexism and racism, the researchers ran a series of seven studies. A critical measure in each was a participant’s level of “egalitarian advocacy” — their belief in and commitment to creating a more egalitarian world. This was assessed using a scale that the researchers had developed, which gauged participants’ agreement with statements like “I feel angry when I think about the injustices and inequality in society”.

    In initial online studies, the team found that people who scored higher on this scale were more disapproving of racism and sexism but also more likely to endorse “succession-based ageism” — the idea that older people should step aside to improve younger people’s job opportunities.

    In a later study, 298 participants were given a theoretical scenario in which a company had a million-dollar fund for improving diversity, to split between a number of minority groups. The researchers found that the more that participants endorsed egalitarian advocacy, the more money they wanted to go to women and racial minorities — and the less they wanted to go to older people. Questionnaire results showed that this was driven by a belief that older people block women and racial minorities from getting ahead. “Together, this research suggests that when it comes to egalitarianism, equality for all may only mean equality for some,” write Martin and North.

    Is there any way, then, to encourage people to be less ageist? Well, the pair did also find that when participants were informed about the true, difficult economic state of many older people in the US — many of whom simply cannot afford to retire — this made a difference. In fact, after this intervention, participants who were more egalitarian showed the biggest reductions in their bias against older people.

    The pair then looked for possible nuances in prejudice against older people. They found that though biases were strongest against older White men, older Black women were also subject to them. (They did explore who participants had in mind when thinking about older people who “should” step aside, and though older White men certainly did feature, the group was much broader.) Egalitarians did, though, report “liking” and wanting to interact with older Black women more. So, as the researchers note, this particular older group was supported in other ways.

    Of course, older people represent a fast-growing demographic. This means there’s an urgent need for more research into ageism, and how it might be tackled. Current diversity initiatives focus far more frequently on race, gender and LGBTQ identities than on age, the researchers note. “As organisations contribute more money than ever before to increasing diversity, it is increasingly necessary to take stock of who is being included, versus who is being left behind,” the pair writes.

    Equality for (almost) all: Egalitarian advocacy predicts lower endorsement of sexism and racism, but not ageism.

    Emma Young (@EmmaELYoung) is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on March 04, 2021 10:00 AM.

  •

    People who have had COVID-19 might need only one shot of a coronavirus vaccine

    People who have already had COVID-19 — even if they didn’t show symptoms — may be able to get away with just a single dose of a two-dose coronavirus vaccine, a study of health care workers suggests.

    Researchers tested for antibodies in the blood of 59 health care workers who got vaccinated with either the Pfizer or Moderna vaccines. Some of the volunteers had COVID-19 eight to nine months before vaccination.

    “Their bodies remembered it, no problem,” and reacted very quickly to the vaccine, says Mohammad Sajadi, an infectious disease doctor at the Institute of Human Virology at the University of Maryland School of Medicine in Baltimore. After the first vaccine dose, antibody levels in people who previously had COVID-19, with or without symptoms, quickly shot up to more than 500 times the levels seen in people who were never infected.

    Those results, published March 1 in JAMA, suggest that people who have had COVID-19 could get one shot or be moved to the end of the line for vaccinations. An estimated 9 percent of people in the United States have had confirmed cases of COVID-19. Limiting those people to one dose of vaccine could free up 4 to 5 percent of vaccine doses, Sajadi says. 
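
    The 4 to 5 percent figure follows directly from the numbers in the article. A minimal back-of-the-envelope sketch (the population size below is hypothetical; the roughly 9 percent prior-infection rate and the two-dose regimen come from the article):

```python
# Back-of-the-envelope check of the article's dose-saving estimate.
# Population size is hypothetical; the ~9% prior-infection rate and the
# two-dose regimen come from the article.

population = 1_000_000
prior_infection_rate = 0.09   # fraction with confirmed prior COVID-19
doses_per_person = 2

total_doses = population * doses_per_person
doses_saved = population * prior_infection_rate  # one dose spared each

share_saved = doses_saved / total_doses
print(f"{doses_saved:,.0f} of {total_doses:,} doses saved ({share_saved:.1%})")
# → 90,000 of 2,000,000 doses saved (4.5%)
```

    One saved dose for 9 percent of people, out of two doses per person, works out to 4.5 percent of all doses — squarely within the 4 to 5 percent range Sajadi cites.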

    “Immunologically it makes sense,” he says. “With the ongoing pandemic and vaccine shortages, it makes sense, too. The cost of inaction is just too great” not to spare vaccine doses where possible.

    in Science News on March 03, 2021 10:47 PM.

  •

    This soft robot withstands crushing pressures at the ocean’s greatest depths

    Inspired by a strange fish that can withstand the punishing pressures of the deepest reaches of the ocean, scientists have devised a soft autonomous robot capable of keeping its fins flapping — even in the deepest part of the Mariana Trench.

    The team, led by roboticist Guorui Li of Zhejiang University in Hangzhou, China, successfully field-tested the robot’s ability to swim at depths ranging from 70 meters to nearly 11,000 meters, it reports March 4 in Nature.

    Challenger Deep is the lowest of the low, the deepest part of the Mariana Trench. It bottoms out at about 10,900 meters below sea level (SN: 12/11/12). The pressure from all that overlying water is about a thousand times the atmospheric pressure at sea level, translating to about 103 million pascals (or 15,000 pounds per square inch). “It’s about the equivalent of an elephant standing on top of your thumb,” says deep-sea physiologist and ecologist Mackenzie Gerringer of State University of New York at Geneseo, who was not involved in the new study.

    The tremendous pressures at these hadal depths — the deepest ocean zone, between 6,000 and 11,000 meters — present a tough engineering challenge, Gerringer says. Traditional deep-sea robots or manned submersibles are heavily reinforced with rigid metal frames so as not to crumple — but these vessels are bulky and cumbersome, and the risk of structural failure remains high.

    To design robots that can maneuver gracefully through shallower waters, scientists have previously looked to soft-bodied ocean creatures, such as the octopus, for inspiration (SN: 9/17/14). As it happens, such a deep-sea muse also exists: Pseudoliparis swirei, or the Mariana hadal snailfish, a mostly squishy, translucent fish that lives as much as 8,000 meters deep in the Mariana Trench.

    In 2018, researchers described three newly discovered species of deep-sea snailfish (one shown) found in the Pacific Ocean’s Atacama Trench, living at depths down to about 7,500 meters. Also found in the Mariana Trench, such fish are well adapted for living in high-pressure, deep-sea environments, with only partially hardened skulls and soft, streamlined, energy-efficient bodies. (Newcastle University)

    Gerringer, one of the researchers who first described the deep-sea snailfish in 2014, constructed a 3-D printed soft robot version of it several years later to better understand how it swims. Her robot contained a synthesized version of the watery goo inside the fish’s body that most likely adds buoyancy and helps it swim more efficiently (SN: 1/3/18).

    But devising a robot that can swim under extreme pressure to investigate the deep-sea environment is another matter. Autonomous exploration robots require electronics not only to power their movement, but also to perform various tasks, whether testing water chemistry, lighting up and filming the denizens of deep ocean trenches, or collecting samples to bring back to the surface. Under the squeeze of water pressure, these electronics can grind against one another.

    So Li and his colleagues decided to borrow one of the snailfish’s adaptations to high-pressure life: Its skull is not completely fused together with hardened bone. That extra bit of malleability allows the pressure on the skull to equalize. In a similar vein, the scientists decided to distribute the electronics — the “brain” — of their robot fish farther apart than they normally would, and then encase them in soft silicone to keep them from touching.

    The design of the new soft robot (left) was inspired by the deep-sea snailfish (illustrated, right), which is adapted to live in the very high-pressure environments of the deepest parts of the ocean. The snailfish’s skull is incompletely ossified, or hardened, which allows external and internal pressures to equalize. Spreading apart the robot’s sensitive electronics and encasing them in silicone keeps the parts from squeezing together. The robot’s flapping fins are inspired by the thin pectoral fins of the fish (although the real fish doesn’t use its fins to swim). (Li et al/Nature 2021)

    The team also designed a soft body that slightly resembles the snailfish, with two fins that the robot can use to propel itself through the water. (Gerringer notes that the actual snailfish doesn’t flap its fins, but wriggles its body like a tadpole.) To flap the fins, the robot is equipped with batteries that power artificial muscles: electrodes sandwiched between two membranes that deform in response to the electrical charge.

    The team tested the robot in several environments: 70 meters deep in a lake; about 3,200 meters deep in the South China Sea; and finally, at the very bottom of the ocean. The robot was allowed to swim freely in the first two trials. For the Challenger Deep trial, however, the researchers kept a tight grip, using the extendable arm of a deep-sea lander to hold the robot while it flapped its fins.

    This machine “pushes the boundaries of what can be achieved” with biologically inspired soft robots, write roboticists Cecilia Laschi of the National University of Singapore and Marcello Calisti of the University of Lincoln in England. The pair have a commentary on the research in the same issue of Nature. That said, the machine is still a long way from deployment, they note. It swims more slowly than other underwater robots, and doesn’t yet have the power to withstand powerful underwater currents. But it “lays the foundations” for future such robots to help answer lingering questions about these mysterious reaches of the ocean, they write.

    Researchers successfully ran a soft autonomous robot through several field tests at different depths in the ocean. At 3,224 meters deep in the South China Sea, the tests demonstrated that the robot could swim autonomously (free swim test). The team also tested the robot’s ability to move under even the most extreme pressures in the ocean. A deep-sea lander’s extendable arm held the robot as it flapped its fins at a depth of 10,900 meters in the Challenger Deep, the lowest part of the Mariana Trench (extreme pressure test). These tests suggest that such robots may, in future, be able to aid in autonomous exploration of the deepest parts of the ocean, the researchers say.

    Deep-sea trenches are known to be teeming with microbial life, which happily feeds on the bonanza of organic material — from algae to animal carcasses — that finds its way to the bottom of the sea. That microbial activity hints that the trenches may play a significant role in Earth’s carbon cycle, which is in turn linked to the planet’s regulation of its climate.

    The discovery of microplastics in Challenger Deep is also incontrovertible evidence that even the bottom of the ocean isn’t really that far away, Gerringer says (SN: 11/20/20). “We’re impacting these deep-water systems before we’ve even found out what’s down there. We have a responsibility to help connect these seemingly otherworldly systems, which are really part of our planet.”

    in Science News on March 03, 2021 06:58 PM.

  •

    Spatiotemporal Dynamics of the Brain at Rest: Exploring EEG Microstates as Electrophysiological Signatures of BOLD Resting State Networks

    This week on Journal Club session David Haydock will talk about a paper "Spatiotemporal Dynamics of the Brain at Rest: Exploring EEG Microstates as Electrophysiological Signatures of BOLD Resting State Networks".


    Neuroimaging research suggests that the resting cerebral physiology is characterized by complex patterns of neuronal activity in widely distributed functional networks. As studied using functional magnetic resonance imaging (fMRI) of the blood-oxygenation-level dependent (BOLD) signal, the resting brain activity is associated with slowly fluctuating hemodynamic signals (on the order of 10 s). More recently, multimodal functional imaging studies involving simultaneous acquisition of BOLD-fMRI and electroencephalography (EEG) data have suggested that the relatively slow hemodynamic fluctuations of some resting state networks (RSNs) evinced in the BOLD data are related to much faster (on the order of 100 ms) transient brain states reflected in EEG signals, which are referred to as "microstates".

    To further elucidate the relationship between microstates and RSNs, we developed a fully data-driven approach that combines information from simultaneously recorded, high-density EEG and BOLD-fMRI data. Using independent component analysis (ICA) of the combined EEG and fMRI data, we identified thirteen microstates and ten RSNs that are organized independently in their temporal and spatial characteristics, respectively. We hypothesized that the intrinsic brain networks that are active at rest would be reflected in both the EEG data and the fMRI data. To test this hypothesis, the rapid fluctuations associated with each microstate were correlated with the BOLD-fMRI signal associated with each RSN.

    We found that each RSN was characterized further by a specific electrophysiological signature involving from one to a combination of several microstates. Moreover, by comparing the time course of EEG microstates to that of the whole-brain BOLD signal, on a multi-subject group level, we unraveled for the first time a set of microstate-associated networks that correspond to a range of previously described RSNs, including visual, sensorimotor, auditory, attention, frontal, visceromotor and default mode networks. These results extend our understanding of the electrophysiological signature of BOLD RSNs and demonstrate the intrinsic connection between the fast neuronal activity and slow hemodynamic fluctuations.


    Papers:

    Date: 2021/03/05
    Time: 16:00
    Location: online

    in UH Biocomputation group on March 03, 2021 06:05 PM.

  •

    Three visions of the future, inspired by neuroscience’s past and present

    A century ago, science’s understanding of the brain was primitive, like astronomy before telescopes. Certain brain injuries were known to cause specific problems, like loss of speech or vision, but those findings offered a fuzzy view.

    Anatomists had identified nerve cells, or neurons, as key components of the brain and nervous system. But nobody knew how these cells collectively manage the brain’s sophisticated control of behavior, memory or emotions. And nobody knew how neurons communicate, or the intricacies of their connections. For that matter, the research field known as neuroscience — the science of the nervous system — did not exist, becoming known as such only in the 1960s.

    Over the last 100 years, brain scientists have built their telescopes. Powerful tools for peering inward have revealed cellular constellations. It’s likely that over 100 different kinds of brain cells communicate with dozens of distinct chemicals. A single neuron, scientists have discovered, can connect to tens of thousands of other cells.

    Yet neuroscience, though no longer in its infancy, is far from mature.

    Today, making sense of the brain’s vexing complexity is harder than ever. Advanced technologies and expanded computing capacity churn out torrents of information. “We have vastly more data … than we ever had before, period,” says Christof Koch, a neuroscientist at the Allen Institute in Seattle. Yet we still don’t have a satisfying explanation of how the brain operates. We may never understand brains in the way we understand rainbows, or black holes, or DNA.

    Deeper revelations may come from studying the vast arrays of neural connections that move information from one part of the brain to another. Using the latest brain mapping technologies, scientists have begun drawing detailed maps of those neural highways, compiling a comprehensive atlas of the brain’s communication systems, known as the connectome.

    Those maps are providing a more realistic picture than early work that emphasized the roles of certain brain areas over the connections among them, says Michael D. Fox, a neuroscientist who directs the Center for Brain Circuit Therapeutics at Brigham and Women’s Hospital in Boston.

    Scientists now know that the dot on the map is less important than the roads leading in and out.

    “With the building of the human connectome, this wiring diagram of the human brain, we all of a sudden had the resources and the tools to begin to look at [the brain] differently,” Fox says.

    Scientists are already starting to use these new brain maps to treat disorders. That’s the main goal of Fox’s center, dedicated to changing brain circuits in ways that alleviate disorders such as Parkinson’s disease, obsessive-compulsive disorder and depression. “Maybe for the first time in history, we’ve got the tools to map these symptoms onto human brain circuits, and we’ve got the tools to intervene and modulate these circuits,” Fox says.

    The goal sounds grandiose, but Fox doesn’t think it’s a stretch. “My deadline is a decade from now,” he says.

    Whether it’s 10 years from now or 50, by imagining what’s ahead, we can remind ourselves of the progress that’s already been made, of the neural galaxies that have been discovered and mapped. And we can allow ourselves a moment of wonder at what might come next.

    The three fictional vignettes that follow illustrate some of those future possibilities. No doubt they will be wrong in the details, but each is rooted in research that’s under way today, as described in the “reality checks” that follow each imagined scenario.

    Science future: brain bots

    What if nanobots could slide into the brain to end a bout of depression before it started? (Glenn Harvey)

    Sarah had made up her mind. After five years, she was going to get her neural net removed. The millions of nanobots in her brain had given her life back to her, by helping her mind to work again. They had done their job. It was time to get them out.

    After Sarah’s baby was born on the summer solstice, things got dark. The following months had tipped Sarah into a postpartum depression that kept her from enjoying her gorgeous, perfect little girl.

    Unable to feel much of anything, Sarah barely moved through those early days. She rarely looked at the baby. She forgot to eat. She would sit in a dark room for hours, air conditioner on full blast, staring at nothing. Those endless days stretched until an unseasonably hot September morning. Her mother watched the baby while Sarah’s husband drove her to the Institute for Neuroprosthetics, a low-slung brick building in the suburbs of Nashville.

    Inside, Sarah barely listened as the clinic coordinator described the technology again. An injection would deliver the nanobots to her blood. Then a tech would guide the bots, using a magnet, from her arm toward her head. A fast, strong pulse of ultrasound would open the blood-brain barrier temporarily, allowing an army of minuscule particles to slip in.

    Powered by the molecular motion inherent in the brain, the nanobots would spread out to form a web of microscopic electrodes. That neural network could pinpoint where Sarah’s brain circuitry was misfiring and repair it with precise but persuasive electrical nudges.

    Over the following weeks, Sarah’s nanobots learned the neural rhythms of her brain as she moved through her life with debilitating depression. With powerful computational help — and regular tinkering by the clinic technologist — the system soon learned to spot the earliest neural rumblings of a deteriorating mood. Once those warning signs were clear, Sarah’s web of nanobots could end budding episodes before they could take her down.

    Soon after the injection, Sarah’s laugh started to reappear, though sometimes at the wrong times. She recalled the day she and her husband took the baby to a family birthday party. In the middle of a story about her uncle’s dementia treatment, Sarah’s squawks of laughter silenced the room.

    Those closest to her understood, but most of her family and friends didn’t know about the millions of bots working to shore up her brain.

    After a few months and some adjustments, Sarah’s emotions evened out. The numb, cold depression was gone. Gone too were the inappropriate bursts of laughter, flashes of white rage and insatiable appetites. She was able to settle in with her new family, and feel — really feel — the joy of it all.

    But was this joy hers alone? Maybe it belonged to the army of tiny, ever-vigilant helpers, reworking and evening out her brain. Without her neural net, she might have been teary watching her daughter, still her baby, walk into her kindergarten classroom on the first day. Instead, Sarah waved, turned and went to work, feeling only slightly wistful, nothing more intense than that.

    The science supporting the success of neural nets was staggering. They could efficiently fix huge problems: addiction, dementia, eating disorders and more. But the science couldn’t answer bigger questions of identity and control — what it means to be a person.

    That search for herself is what drove Sarah back to the clinic, five years after she welcomed the nanobots in.

    Her technologist went over the simple extraction procedure: a quick ultrasound pulse to loosen the blood-brain barrier again, a strong magnet over the inside of Sarah’s elbow and a blood draw. He looked at her. “You ready?”

    She took a deep breath. “Yes.”

    Reality check: brain bots

    In this story, Sarah received a treatment that doesn’t exist in the real world. But the idea that scientists will be able to change certain brain networks — and improve health — is not fiction. It’s happening.

    Already, a technique known as deep brain stimulation, or DBS, uses electrodes surgically implanted in people’s brains to tweak the behavior of brain cells. Such electrode implants are helping reduce Parkinson’s tremors, epileptic seizures and uncontrollable movements caused by Tourette’s syndrome. Mood disorders like Sarah’s have been targeted too.

    Electrodes penetrate deep into the brain of a 58-year-old person to treat Parkinson’s disease. Deep brain stimulation is being improved and tested in movement disorders, obsessive-compulsive disorder and depression. (Zephyr/Science Source)

    The central idea of DBS — that the brain can be fixed by stimulating it — is not new. In the 1930s, psychiatrists discovered that a massive wallop of seizure-inducing electricity could sometimes relieve psychiatric symptoms. In the 1940s and 1950s, researchers studied whether more constrained electrical stimulation could help with disorders such as depression.

    In 1948, for instance, neurosurgeon J. Lawrence Pool of Columbia University’s Neurological Institute of New York implanted electrodes to stimulate the brain of a woman with severe Parkinson’s who had become depressed and lost weight. Soon, she began to “eat well, put on weight and react in a more cheerful manner,” Pool reported in 1954.

    The experiment ended three years later when one of the wires broke. “It is the writer’s conviction that focal controlled stimulation of the human brain is a new technique in psychosurgery that is here to stay,” Pool wrote.

    Compared with those early days, today’s scientists understand a lot more about how to selectively influence brain activity. But before a treatment such as Sarah’s is possible, two major challenges must be addressed: Doctors need better tools — nimble and powerful systems that are durable enough to work consistently inside the brain for years — and they need to know where in the brain to target the treatment. That location differs among disorders, and even among people.

    These are big problems, but the various pieces needed for this sort of precision healing are beginning to coalesce.

    The specs of the technology that will be capable of listening to brain activity and intervening as needed are anyone’s guess. Yet those nanobots that snuck into Sarah’s brain from her blood do have roots in current research. For example, Caltech’s Mikhail Shapiro and colleagues are working toward nanoscale robots that roam the body and act as doctors (SN: 10/10/20 & 10/24/20, p. 27).

    Other kinds of sensors are growing up, fast. In the last 20 years, electrodes have improved by an astonishing amount, becoming smaller, more flexible and less likely to scar the brain, says biomedical engineer Cynthia Chestek. When she began working on electrode development in the early 2000s, there were still seemingly unsolvable problems, she says, including the scars that big, stiff electrodes can leave, and the energy they require to operate. “We didn’t know if anybody was ever going to deal with them.”

    But those problems have largely been overcome, says Chestek, whose lab team at the University of Michigan in Ann Arbor develops carbon fiber electrodes. Imagine the future, Chestek says. “You could have thousands of electrodes safely interfacing with neurons. At that point, it becomes really standard medical practice.”

    Neural dust — minuscule electrodes powered by external ultrasounds — already can pick up nerve and muscle activity in rats. Neuropixels can record electrical activity from over 10,000 sites in mice’s brains. And mesh electrodes, called neural lace, have been injected into the brains of mice.

    Arrays of electrodes are getting smaller and more reliable, collecting an onslaught of data about brains at work. Shown is Neuropixels, an array created by the company Imec that contains nearly 1,000 electrodes. (IMEC)

    Once inside, these nets integrate into the tissue and record brain activity from many cells. So far, these mesh electrodes have captured neural activity over months as the mice have scurried around.

    Other systems under development can be controlled with magnets, light or ultrasound. There are still problems to solve, Chestek says, but none are insurmountable. “We just need to figure out the last set of practical tricks,” she says.

    Once scientists know how to reliably change brain activity, they need to know where to make the change. Precision targeting is complicated by the fact that ultimately, every part of the brain is connected to every other part, in a very Kevin Bacon way.

    Advances in tractography — the study of the physical connections among groups of nerve cells — are pointing to which parts of these neural highways could be targeted to deal with certain problems.

    Other studies of people with implanted electrodes reveal brain networks in action. When certain electrodes were stimulated, people experienced immediate and obvious changes in their moods (SN: 2/16/19, p. 22). Those electrodes were near the neural tracts that converge in a brain region just behind and above the eyes called the lateral orbitofrontal cortex.

    In the future, we might all have our personalized brain wiring diagrams mapped, Fox says. And perhaps for any symptom — anxiety, food craving or addiction — doctors could find the brain circuit responsible. “Now we’ve got our target,” he says. “We can either hold the neuromodulation tool outside your scalp, or implant a tool inside your head, and we’re going to fix that circuit.”

    The hurdles to building a nimble, powerful and precise system similar to the one that helped Sarah are high. But past successes suggest that innovative, aggressive research will find ways around current barriers. For people with mood disorders, addiction, dementia or any other ailment rooted in the brain, those advances can’t come soon enough.

    Science future: mind meld

    Does the future hold a way for humans to connect with, say, a bird, to get a memory boost? (Glenn Harvey)

    Sofia couldn’t sleep. Tomorrow was the big day. As the project manager for the Nobel Committee for Physiology or Medicine, she had overseen years of prize announcements, but never one like this.

    At 11:30 a.m. Central European Summer Time tomorrow, the prize would be given to a bird named Harry, a 16-year-old Clark’s nutcracker. Sofia smiled in the dark as she thought about how the news would land.

    Harry was to be recognized for benefiting humankind “in his role as a pioneering memory collective that enhances human minds.” Harry would share the prize (and the money) with his two human trainers.

    Tomorrow morning, the world would be buzzing, Sofia knew. But as with every Nobel Prize, the story began long before the announcement. Even in the 20th century, scientists had been dreaming of, and tinkering with, merging different kinds of minds.

    As the technology got more precise and less invasive, human-to-human links grew seamless, inspired by ancient and intriguing examples of conjoined twins with shared awareness. External headsets could send and receive signals between brains, such as “silent speech” and sights and sounds.

    Next, scientists began looking to other species’ brains for different types of skills that might boost our human abilities. Other animals have different ways of seeing, feeling, experiencing and remembering the world. That’s where Harry came in.

    Crows, ravens and other corvids have prodigious memories. That’s especially true for Clark’s nutcrackers. These gray and black birds can remember the locations of an estimated 10,000 seed stashes at any given time. These powerful memory abilities soon caught the eye of scientists eager to augment human memory.

    The scientists weren’t talking about remembering where the car is parked in the airport lot. They set their sights higher. Done right, these enhancements could allow a person to build stunningly complete internal maps of their world, remembering every place they had ever been. And it turned out that these memory feats didn’t just stop at physical locations. Strengthening one type of memory led to improvements in other kinds of memories too. The systems grew stronger all around.

    Harry wasn’t the first bird to link up with humans, but he has been one of the best. As a young bird, Harry underwent several years of intense training (aided by his favorite treat, whitebark pine seeds). Using a sophisticated implanted brain chip, he learned to merge his neural signals with those of a person who was having memory trouble or needed a temporary boost. The connection usually lasted for a few hours a day, but its effects endured. Noticeable improvements in people’s memories held fast for months after a session with Harry. The people who tried it called the change “breathtaking.” The bird had made history.

    By showing this sort of human-animal mind meld was possible, and beneficial, Harry and his trainers had helped create an entirely new field, one worthy of Nobel recognition, Sofia thought.

    Some scientists are now building on what Harry’s brain could do during these mingling sessions. Others are expanding to different animal abilities: allowing people to “see” in the dark like echolocating bats, or “taste” with their arms like octopuses. Imagine doctors being able to smell diseases, an olfactory skill borrowed from dogs. News outlets were already starting to run interviews with people who had augmented animal awareness.

    Still wide awake, Sofia’s mind ran back through the meetings she had held with her communications team over the last week. Tomorrow’s announcement would bring amusement and delight. But she also expected to hear strong objections, from religious groups, animal rights activists and even some ethicists concerned about species blurring. The team was prepared for protests, lots of them.

    In the middle of the night, these worries seemed a smidge more substantial to Sofia. Then she thought of Harry flitting around, hiding seeds, and the threat faded away. Sofia marveled at how far the science had come since she was a girl, and how far it was bound to go. Fully exhausted, she rolled over, ready to sleep, ready for tomorrow. She smiled again as she thought about what she’d tell people, if the chance arose: For better or worse, resistance is futile.

    Reality check: mind meld

    Accepting that a bird could win a Nobel Prize demands a pretty long flight of fancy. But scientists have already directly linked together multiple brains.

    Today, the technology that makes such connections possible is just getting off the ground. We are in the “Kitty Hawk” days of brain interface technologies, says computational neuroscientist Rajesh Rao of the University of Washington in Seattle, who is working on brain-based communication systems. In the future, these systems will inevitably fly higher.

    Such technology might even take people beyond the confines of their bodies, creating a sort of extended cognition, possibly enabling new abilities, Rao says. “This direct connection between brains — maybe that’s another way we can make a leap in our human evolution.”

    Rao helped organize a three-way direct brain chat, in which three people sent and received messages using only their minds while playing a game similar to Tetris. Signals from the thoughts of two players’ brains moved over the internet and into the back of the receiver’s brain via a burst of magnetic stimulation designed to mimic information coming from the eyes.

    Senders could transmit signals that told the receiver to rotate a piece, for instance, before dropping it down. Those results, published in 2019 in Scientific Reports, represent the first time multiple people have communicated directly with their brains.

    An EEG cap measures brain signals of a “sender” (shown) as she and two other people play a video game with their brains. Those signals form instructions that are sent directly to the brain of another player who can’t see the board but must decide what to do based on the instructions. (Mark Stone/Univ. of Washington)

    Other projects have looped in animals, though no birds yet. In 2019, people took control of six awake rats’ brains, guiding the animals’ movements through mazes via thought. A well-trained rat cyborg could reach turning accuracy of nearly 100 percent, the researchers reported.

    But those rats took commands from a person; they did not send information back. Continuous back-and-forth exchanges are a prerequisite for an accomplishment like Harry’s.

    These types of experiments are happening too. A recent study linked three monkeys’ brains, allowing their minds to collectively move an avatar arm on a 3-D screen. Each monkey was in charge of moving in two of three dimensions: left or right, up or down, and near or far. Those overlapping yet distinct jobs caused the networked monkeys to flounder initially. But soon enough, their neural cooperation became seamless as they learned to move the avatar arm to be rewarded with a sip of juice.

    With technological improvements, the variety of signals that can move between brains will increase. And with that, these brain collectives might be able to accomplish even more. “One brain can do only so much, but if you bring many brains together, directly connected in a network, it’s possible that they could create inventions that no single mind could think of by itself,” Rao says.

    Groups of brains might be extra good at certain jobs. A collective of surgeons, for instance, could pool their expertise for a particularly difficult operation. A collective of fast-thinking pilots could drive a drone over hostile territory. A collective of intelligence experts could sift through murky espionage material.

    Maybe one day, information from an animal’s brain might augment human brains — although it’s unlikely that the neural signals from a well-trained Clark’s nutcracker will be the top choice for a memory aid. Artificial intelligence, or even human intelligence, might make better memory partners. Whatever the source, these external “nodes” could ultimately expand and change a human brain’s connectome.

    Still, connecting brains directly is fraught with ethical questions. One aspect, the idea of an “extended mind,” poses particularly wild conundrums, says bioethicist Elisabeth Hildt of the Illinois Institute of Technology in Chicago.

    “Part of me is connected and extended to this other human being,” she says. “Is this me? Is this someone else? Am I doing this myself?”

    Some scientists think it’s too early to contemplate what it might feel like to have our minds dispersed across multiple brains (SN: 2/13/21, p. 24). Others disagree. “It may be too late if we wait until we understand the brain to study the ethics of brain interfacing,” Rao says. “The technology is already racing ahead.”

    So feel free to mull over how it would feel to connect minds with a bird. If you were the human who could link to the mind of Harry the Clark’s nutcracker, for instance, perhaps you might start to dream of flying.

    Science future: thoughts for sale

    Will people be willing to let their inner thoughts and interests be monitored, for a fee? (Illustration: Glenn Harvey)

    Javier had just been fired. “They’re done with me,” he told his coworker Marcus. “They’re done with the whole Signal program.”

    Marcus shook his head. “I’m sorry, man.”

    Javier went on: “It gets worse; they’re moving all of Signal’s data into the information market.”

    The two were in the transportation business. Javier was the director of neural systems engagement for Zou, an on-demand ride hailing and courier system in Los Angeles. After the self-driving industry imploded because of too many accidents, Zou drove into L.A. with a promise of safety — so the company needed to make sure its drivers were the best.

    That’s where Javier and his team came in. The ambitious idea of the Signal program was to incentivize drivers with cash, using their brain data, gathered by gray headsets.

    Drivers with alert and focused brains earned automatic bonuses; a green power bar on-screen in the car showed minute-to-minute earnings. Drivers whose brains appeared sluggish or aggressive didn’t earn extra. Instead, they were warned. If the problem continued, they were fired.

    This carrot-and-stick system, developed by Javier and his team, worked beautifully at first. But a few months in, accidents started creeping back up.

    The problem, it turned out, was the brain itself: It changes. Human brains learn, find creative solutions, remake themselves. Incentivized to maintain a certain type of brain activity, drivers’ brains quickly learned to produce those signals — even if they didn’t correspond to better driving. Neural work-arounds sparked a race that Javier ultimately lost.

    That failure was made worse by Zou’s latest plans. What had started as a driving experiment had morphed into an irresistible way for the company to make money. The plan was to gather and sell valuable data — information on how the drivers’ brains responded to a certain style of music, how excited drivers got when they saw a digital billboard for a vacation resort and how they reacted to a politician’s promises.

    Zou was going to require employees to wear the headsets when they weren’t driving. The caps would collect data while the drivers ate, while they grocery shopped and while they talked with their kids, slurping up personal neural details and selling them to the highest bidders.

    Of course, the employees could refuse. They could decide to take off the caps and quit. “But what kind of choice is that?” Javier asked. “Most of these drivers would open up their skulls for a paycheck.”

    Marcus shook his head, and then asked, “How much extra are they going to pay?”

    “Who knows,” Javier said. “Maybe nothing. Maybe they’ll just slip the data consent line into the standard contract.”

    The two men looked at each other and shook their heads in unison. There wasn’t much left to say.

    Reality check: thoughts for sale

    Javier’s fictional program, Signal, was built with information gleaned externally from drivers’ brains. Today’s technology isn’t there yet. But it’s tiptoeing closer.

    Some companies already sell brain monitoring systems made of electrodes that measure external brain waves with a method called electroencephalography. For now, these headsets are sold as wellness devices. For a few hundred dollars, you can own a headset that promises to fine-tune your meditation practice, help you make better decisions or even level up your golf game. EEG caps can measure alertness already; some controversial experiments have monitored schoolchildren as they listened to their teacher.

    These companies make big claims that have not been proven. “It is unclear whether consumer EEG devices can reveal much of anything,” ethicist Anna Wexler of the University of Pennsylvania argued in a commentary in Nature Biotechnology in 2019. Still, improvements in these devices, and in the algorithms that decode the signals they detect, may someday enable more sophisticated information to be reliably pulled from the brain.

    Other types of technology, such as functional MRI scans, can pull more detailed information from the brain.

    Complex visual scenes, including clips of movies that people were watching, can be extracted from brain scans. Psychologist Jack Gallant and colleagues at the University of California, Berkeley built captivating visual scenes using data from people’s brains as they lay in an fMRI scanner. A big red bird swooped by, elephants marched in a row and Steve Martin strolled across the screen, all impressionistic versions of images pulled from people’s brain activity.

    That work, published in 2011, foreshadowed ever more complex brain-reading tricks. More recently, researchers used fMRI signals to re-create faces that people were seeing.

    Visual scenes are one thing; will our more nebulous thoughts, beliefs and memories ever be accessible? It’s not impossible. Take a study from Japan, published in 2013. Scientists identified the contents of three sleeping people’s dreams, using an fMRI machine. But re-creating those dreams required hours of someone telling a scientist about other dreams first. To get the data they wanted, scientists first needed to be invited into the dreamers’ minds, in a way. Those three people were each awakened over 200 times early in the experiments and asked to describe what they had been dreaming about.

    More portable and more reliable ways to eavesdrop on the brain from the outside are moving forward fast, a swiftness that has prompted some ethicists, scientists and futurists to call for special protections of neural data. Debates over who can access our brain activity, and for what purposes, will only grow more intense as the technology improves.

    in Science News on March 03, 2021 03:55 PM.

  •

    Ghanaian chemist is finding toxic substances in unusual places

    Through her research and science diplomacy, an environmental chemist is changing the narrative in her native Ghana

    in Elsevier Connect on March 03, 2021 02:09 PM.

  •

    Societies' Update

    Information for reviewers about relevant Elsevier and industry developments, support and training.

    in Elsevier Connect on March 03, 2021 11:37 AM.

  •

    During Lockdown, Meaningful Activity Is More Fulfilling Than Simply Staying Busy

    By Emily Reynolds

    It’s not hard to find ways to stay busy during lockdown. Yes, many of us are spending lots of time at home and have evenings and weekends free from almost any kind of social activity — but we’re also juggling work, chores, childcare, life admin and the various emotional demands of living through a global pandemic.

    For some, in fact, staying busy has been an appealing prospect; indeed, hundreds of articles have been written with ideas on how to stay busy and distracted during the boredom of lockdown. But a new study from a team at Australia’s RMIT University, published in PLOS ONE, suggests that meaningful activity, rather than simply busyness, may be the way to stay emotionally stable during this period.

    All 95 participants were adults engaging in some form of social distancing. After sharing demographic data, participants rated themselves on a number of wellbeing measures, both as they were at the time of the study and as they were one month before the start of social distancing. These measures covered anxiety, panic, loneliness, depression, crying, cheerfulness, contentedness and laughter. Through these measures, the team was able to track change in both positive and negative affect.

    Participants also indicated how much time they had spent on various activities before and during social distancing, including being outside the home, talking online and off, engaging with childcare or other chores, working, watching TV, reading or other creative pursuits, and doing nothing. Participants also indicated how important they felt each activity was to them.

    When, during lockdown, participants began to engage in more activities they found meaningful, they saw a reduction in both positive and negative affect. But increased busyness with activities participants didn’t consider important was related to increases in both positive and negative mood.

    At first glance, this seems to suggest that busyness is more desirable overall — you might be crying more, but you’re also laughing, right? The team thinks not. Instead, they argue that the extremes induced by busy but meaningless activity suggest participants were unsettled and unable to regulate their emotions. In other words, while busyness can make you feel more agitated and therefore prone to mood swings, meaningful activity is more likely to calm you down, soothing both extremes.

    As senior author Lauren Saling puts it, “extreme emotions are not necessarily a good thing… when you’re doing what you love, it makes sense that you feel more balanced – simply keeping busy isn’t satisfying.”

    There are some obvious limitations to the work — for one, the fact that participants had to recall how they felt a month before lockdown started. It’s easy to imagine that at least some participants would have been unable to accurately recall how they felt, perhaps even romanticising fairly mundane activities which they were, at that point, barred from doing. And beyond individual emotions, how much were lockdown activities affecting general life satisfaction?

    We’re now nearly a year into various kinds of lockdown, so it’s easy to look at studies like this and beat yourself up, especially if you’re spending more time aimlessly playing video games than engaging in activities you know you’d probably find more meaningful. We’re all doing what we can to cope with distressing (and incredibly tedious) circumstances — but trying to balance meaning and mindlessness might be one way for us all to do that better.

    Increased meaningful activity while social distancing dampens affectivity; mere busyness heightens it: Implications for well-being during COVID-19

    Emily Reynolds is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on March 03, 2021 11:30 AM.

  •

    Legal researcher who claimed false affiliation up to 31 retractions

    A law researcher who has falsely claimed to have been affiliated with several institutions has lost eight more publications, bringing his retraction total to 31 and earning him a spot in the top 20 of our leaderboard. The most recent retractions for Dimitris Liakopoulos include The Regulation of Transnational Mergers in International and European Law, …

    in Retraction watch on March 03, 2021 11:00 AM.

  •

    Coronil and other peer-reviewed Ayurvedic scams

    "ऑथर ने किसी हित संघर्ष की घोषणा नहीं की है" ("The author has not declared any conflict of interest")

    in For Better Science on March 03, 2021 09:13 AM.

  •

    ‘Green’ burials are slowly gaining ground among environmentalists

    Despite “green” burials becoming increasingly available in North America, some older eco-conscious adults remain unaware of the option when planning for their deaths, a small study hints.

    Green burials do not use concrete vaults, embalm bodies or use pesticides or fertilizers at gravesites. Bodies are buried in a biodegradable container like a pinewood or wicker casket, or a cotton or silk shroud. Proponents of the small but growing trend argue it is more environmentally friendly and in line with how burials were done before the invention of the modern funeral home industry.

    But when researchers asked 20 residents of Lawrence, Kan., over the age of 60 who identify as environmentalists if they had considered green burial, most hadn’t heard of the practice. That’s despite the fact that green burial had been available in Lawrence for nearly a decade at the time. More than half of the survey participants planned on cremation, because they viewed it as the eco-friendliest option, the team reported online January 26 in Mortality.

    In 2008, Lawrence became the first U.S. city to allow green burials in a publicly owned cemetery. Several years later, at a meeting of an interfaith ecological community organization in the city, sociologist Paul Stock of the University of Kansas in Lawrence and his colleague Mary Kate Dennis noticed that most of the attendees were older adults. These people “live and breathe their environmentalism,” says Dennis, now a social work researcher at the University of Manitoba in Canada. “We were curious if it followed them all the way through to their burials.”

    That the majority of participants in the new survey leaned towards cremation aligns with national trends. Cremation recently surpassed traditional burial as the most popular death care choice in the United States. In July 2020, the National Funeral Directors Association (NFDA) projected the cremation rate that year would be 56 percent compared to 38 percent for casket burials. By 2040, the cremation rate is projected to grow to about 78 percent while the burial rate is estimated to shrink to about 16 percent.

    Cremation’s growing popularity can be traced to a number of factors, including affordability and concerns about traditional burial’s environmental impacts. But cremation comes with its own environmental cost, releasing hundreds of kilograms of carbon dioxide into the air per body.

    The preference for green burial, meanwhile, is small but growing. The Green Burial Council was founded in 2005 to establish green burial standards by certifying green burial sites. Now 14 percent of Americans over age 40 say they would choose green burial, the NFDA reports, and around 62 percent are open to exploring it.

    For those who go the green burial route, there now are a variety of commercially available choices. More adventurous options include a burial suit designed to sprout mushrooms as the body decomposes, an egg-shaped burial pod that eventually grows into a tree and human composting (SN: 2/16/20) — a one- to two-month process that turns the body into soil. In 2019, Washington became the first and only U.S. state to legalize human composting. 

    Conservation burial cemeteries take the green burial concept a step further by doubling as protected nature preserves. To date, the Green Burial Council has certified over 200 green burial sites and eight conservation burial sites in North America.

    Such initiatives showcase a growing awareness that death care choices can have a positive impact on ecosystems, says Lynne Carpenter-Boggs, a soil scientist at Washington State University in Pullman and a research advisor for the Seattle-based human composting company Recompose. But, she cautions, there is still little formal research comparing the environmental impacts of different death care choices.

    Stock and Dennis think this lack of research, coupled with a general lack of awareness of green burial as an available choice, could be the reason why many of the environmentalists they spoke with weren’t yet considering it. But as the option becomes more widely available, Dennis says, “it will be interesting to see how that shifts.”

    in Science News on March 02, 2021 03:00 PM.

  •

    Exploring the growing impact of Chinese research via open access

    As KeAi launches its 100th international OA journal, experts behind the Beijing-based CSPM-Elsevier partnership explain how quality and visibility are driving change

    in Elsevier Connect on March 02, 2021 02:23 PM.

  •

    Black hole visionaries push the boundaries of knowledge in a new film

    Black holes sit on the cusp of the unknowable. Anything that crosses a black hole’s threshold is lost forever, trapped by an extreme gravitational pull. That enigmatic quality makes the behemoths an enticing subject, scientists explain in the new documentary Black Holes: The Edge of All We Know.

    The film follows two teams working over the last several years to unveil the mystery-shrouded monstrosities. Scientists with the Event Horizon Telescope attempt to make the first image of a black hole’s shadow using a global network of telescopes. Meanwhile, a small group of theoretical physicists, anchored by Stephen Hawking — who was still alive when filming began — aim to solve a theoretical quandary called the black hole information paradox (SN: 5/16/14).

    When big discoveries happen, the camera is right there — allowing us to thrill in the moment when Event Horizon Telescope scientists first lay eyes on a black hole’s visage. And we triumph as the team unveils the result in 2019, a now-familiar orange, ring-shaped image depicting the supermassive black hole in the center of galaxy M87 (SN: 4/10/19). Likewise, scenes where Hawking questions his collaborators as they explain chalkboards full of equations prove mesmerizing. Viewers witness brilliant minds playing off one another, struggling with mistakes and dead ends in their calculations, punctuated by occasional, groundbreaking progress.

    Watch the trailer for Black Holes: The Edge of All We Know.

    Stunning cinematography and skillful editing lend energy to Black Holes, directed by Harvard physicist and historian Peter Galison and available on Apple TV, Amazon Prime Video and other on-demand platforms on March 2. When the Event Horizon Telescope team begins taking data, we’re treated to a crisp montage of telescopes around the world, all swiveling to catch a glimpse of the black hole. Later, bright sunbeams slice across an office floor while scientists muddle through calculations regarding the darkest objects of the cosmos. Such scenes are punctuated by delightfully strange black-and-white animations that evoke a pensiveness appropriate for contemplating cosmic oddities.

    There’s drama too: Event Horizon Telescope’s scientists wrestle with misbehaving equipment and curse uncooperative weather. The theoretical physicists grapple with the immense complexity of the cosmos on slow, distracted walks in the forest.

    Other research topics garner brief mentions, such as the study of gravitational waves from colliding black holes (SN: 1/21/21) and black hole analogs made using water vortices (SN: 6/12/17). The film treats these varied efforts to study black holes independently; some viewers may wish the dots were better connected.

    The film Black Holes: The Edge of All We Know features this water vortex, lit by green light. Scientists used such vortices along with other techniques to re-create the physics of black holes. (Credit: Giant Pictures)

    Still, Black Holes successfully leads viewers through a fascinating, understandable trek across the varied frontiers of black hole knowledge. As Harvard physicist Shep Doeleman of the Event Horizon Telescope team describes it in the film, “we are chasing down something that struggles with all of its might to be unseen.” Pulling us to the very rim of this fathomless abyss, Black Holes invites us to stand with scientists peering over the edge.

    in Science News on March 02, 2021 01:00 PM.

  •

    Companies Undermine “Sacred” Values Like Environmentalism When Co-opting Them For Profit Or Prestige

    By Emma Young

    How important is it to you to protect our planet’s wildest places? Would you put a price on it — or is it the kind of goal that just can’t be subject to a cost-benefit analysis? If the latter, then for you, protecting Earth’s wilds is a “sacred value”. Patriotism, or the protection of human lives, or diversity in the workplace can be sacred values, too. So what happens when a for-profit organisation embraces such values — is the pursuit of social or environmental values and profit a “win-win”, as is often claimed?

    A new paper in the Journal of Personality and Social Psychology: Interpersonal Relations and Group Processes suggests not — and it’s the value that suffers, as it becomes less sacred. If this is right, then businesses that co-opt such values in their advertising (think no end of outdoor clothing companies that make a big deal about caring for the wilderness, as just one example) are degrading the very values that they claim to promote.

    Rachel L Ruttan at the University of Toronto and Loran F Nordgren at Northwestern University base their conclusions on the results of seven studies with a total of 2,785 participants. In initial studies, participants who read a pro-environment message from a for-profit producer of agricultural equipment subsequently felt that environmentalism was less sacred than did participants who’d read the same message from the environmental body Conservation International. The “sacredness scale” used by the researchers assessed the extent to which people felt that a value (such as environmentalism) is “sacred”: something that should not be sacrificed, no matter the benefits, but rather defended under any circumstance.

    In a further study, participants who read that an organisation was launching a pro-environmental campaign to drive greater profits or for better PR subsequently felt that environmentalism was less sacred, compared with participants who’d read that the new campaign was driven by a desire for greater sustainability.

    Earlier work has suggested that when a value is overtly threatened, the desire of someone who upholds that value to protect it is reinforced. However, the researchers point out that the use of sacred values in the pursuit of corporate gain might not always seem like an obvious threat to those values. Indeed, when Ruttan and Nordgren explored this in two further studies, the sacredness of a value was only degraded when participants did not perceive an action as clearly threatening or undermining that value. So, to stay with the same example (though of course there are others), when an outdoor clothing company pledges to donate a portion of its profits to protecting areas of wilderness, though this might seem like a “good thing”, it could work insidiously to degrade the sanctity of the value of wilderness protection — which could ultimately weaken protection efforts.

    This possibility is supported by the results of the seventh study. In 2015, two US senators (including John McCain) issued a report documenting how the NFL was being paid by the US military to put on patriotic displays. This was widely picked up by US media. The researchers wondered whether people who were aware of this “paid patriotism” would subsequently hold patriotism as being less sacred. So they gave online participants a quiz that was ostensibly about their knowledge of the NFL, but which included this question: “Is it true or false that the term ‘Paid Patriotism’ was used to describe the practice of NFL teams accepting money to put on patriotic displays at games?”

    Next, the participants were asked for their opinion on a purported potential NFL policy change: that singing the national anthem at games would become voluntary, and the choice of individual teams, rather than mandatory. The researchers found that participants who were aware of paid patriotism were significantly less likely to feel that “no matter how great the benefits, this shouldn’t be done”; that is, they were less likely to hold patriotism sacred. (This link held when controlling for a range of factors, including political orientation, education and a history of military service.) The results suggest that an awareness of paid patriotism decreased the sanctity of the value of patriotism.

    “As markets increasingly interface with our sacred values — be it through advertisements, product offerings, promoting the strategic advantages of a diverse workforce, or Corporate Social Responsibility initiatives — it becomes increasingly important to understand the effects of using sacred values towards market aims on perceptions of and commitment to values,” the researchers write. 

    Clearly, there’s a lot more work to be done in this area — to investigate whether the same effects might hold in other countries, as well as in more real-world scenarios, for example. But it does suggest that if “do-gooder” actions and initiatives by companies are perceived to be driven by a desire to enhance the bottom line, this could undermine the very value that they’re endorsing — whether it springs from a “win-win” motivation, or not.

    Instrumental use erodes sacred values.

    Emma Young (@EmmaELYoung) is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on March 02, 2021 11:15 AM.

  •

    Predatory octopuses were drilling into clamshells at least 75 million years ago

    Tiny holes in three fossil clams reveal that by 75 million years ago, ancient octopuses were deviously drilling into their prey. The find pushes evidence of this behavior back 25 million years, scientists report February 22 in the Biological Journal of the Linnean Society.

    The clams, Nymphalucina occidentalis, once lived in what is now South Dakota, where an inland sea divided western and eastern North America. While examining the shells, now at the American Museum of Natural History in New York, paleontologists Adiël Klompmaker of the University of Alabama in Tuscaloosa and AMNH’s Neil Landman spotted telltale oval-shaped holes. Each hole was between 0.5 and 1 millimeter in diameter, thinner than a strand of spaghetti.

    A modern octopus uses a sharp ribbon of teeth called a radula on its tongue to drill a hole into thick-shelled prey — useful for when the shell is too tough for the octopus to pop apart with its suckers. The octopus then injects venom into the hole, paralyzing the prey and dissolving it a bit, which makes for easier eating. Octopus-drilled holes were previously found in shells dating to 50 million years ago, but the new find suggests this drilling habit evolved 25 million years earlier in their history.

    Such drill holes augment the scant fossil record of octopus evolution. The soft bodies of the clever, eight-armed Einsteins don’t lend themselves well to fossilization, tending instead to decay away (SN: 8/12/15). What fossils do exist — a handful of specimens dating to about 95 million years ago — suggest little change in the basic body plan from ancient to modern octopuses.

    The find also puts the evolution of octopus drilling squarely within the Mesozoic Marine Revolution, an escalation in the ancient arms race between ocean predators and prey (SN: 6/15/17). During the Mesozoic Era, which spanned 251 million to 66 million years ago, predators lurking near the seafloor became adept at crushing or boring holes into the shells of their prey.

    in Science News on March 02, 2021 11:00 AM.

  •

    Erdogan’s academic elites

    Önder Metin had a rogue PhD student whom he trusted "to ensure their academic growth". But "mistakes were made by mistake", and conclusions are never affected. Yet those who still complain will pay dearly.

    in For Better Science on March 02, 2021 06:00 AM.

  •

    Announcing the PLOS Scientific Advisory Council

    Author: Veronique Kiermer, Chief Scientific Officer

    We are delighted to announce the creation of the PLOS Scientific Advisory Council, a small group of active researchers with diverse perspectives to help us shape our efforts to promote Open Science, globally.

    PLOS, as a non-profit organization committed to empowering researchers to change research communication, cannot successfully pursue its mission without listening to the research community. We regularly survey and consult the research communities we work with, formally and informally, to guide our choices and developments. The organization’s governance, including our Board of Directors, has always involved active researchers. And we derive great insight and advice from our continuous exchange with the academic Editors of PLOS journals. 

    We’ve decided to take an additional formal step and create a forum where the researchers who contribute to PLOS through different channels can interact directly with each other, and to ensure that this forum includes voices from researchers around the globe. 

    We’ve created the PLOS Scientific Advisory Council, a small group of researchers representing varied scientific and career perspectives, who will advise PLOS executive and editorial leadership on strategic questions of scientific interest. 

    At this point, the Scientific Advisory Council is deliberately small (about a dozen individuals) to ensure in-depth discussions and engagement, but we’ve strived to include different disciplinary interests, career stages and geographic representation. The group includes four PLOS Board members who are themselves active researchers, two of our journals’ academic Editors-in-Chief, alongside researchers who are not formally associated with PLOS. 

    We are delighted to welcome the following members to the PLOS Scientific Advisory Council. To see their photos and full bios, please visit their page on our website: 

    Sue Biggins
    Fred Hutchinson Cancer Research Center and University of Washington, Seattle, USA

    Yung En Chee
    University of Melbourne, Australia

    Gregory Copenhaver
    University of North Carolina, Chapel Hill, USA

    Abdoulaye A. Djimde
    University of Bamako, Mali

    Robin Lovell-Badge
    The Francis Crick Institute, London, UK

    Direk Limmathurotsakul
    Mahidol University, Bangkok, Thailand

    Meredith Niles
    University of Vermont, Burlington, USA

    Jason Papin
    University of Virginia, Charlottesville, USA

    Simine Vazire (Chair)
    University of Melbourne, Australia

    Keith Yamamoto
    University of California, San Francisco, USA

    Veronique Kiermer (Secretary, ex officio)
    PLOS

    The post Announcing the PLOS Scientific Advisory Council appeared first on The Official PLOS Blog.

    in The Official PLOS Blog on March 01, 2021 08:11 PM.

  •

    “The right decision”: Group retracts Nature Chemical Biology paper after finding a key error

    Researchers in Australia have retracted a 2016 paper in Nature Chemical Biology after discovering a critical error in their research, bringing some closure to a gut-wrenching case for the scientists involved. As we reported in January, Nicola Smith, the senior author of the article, titled “Orphan receptor ligand discovery by pickpocketing pharmacological neighbors,” described learning …

    in Retraction watch on March 01, 2021 04:46 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    A music therapist seeks to tap into long-lost memories

    The upbeat song “Sweet Caroline” often prompts listeners to sing and dance. But when music therapist Alaine Reschke-Hernández played the song for one older patient, it evoked a sad memory and tears. That patient’s surprising reaction highlights how music, and the memories that come with it, can influence emotions, even years later.

    Strategically using music can improve well-being, particularly for older people, says Reschke-Hernández, of the University of Kentucky in Lexington. “Music is so connected and integrated with so many different elements of our life,” she says. From joyful celebrations to solemn ceremonies, music is part of meaningful events throughout life and becomes strongly associated with memory.

    In her own practice, Reschke-Hernández has seen the benefits of music therapy. But she wanted to see how far those benefits might go. She partnered with neuroscientists to find out if tapping into music-associated memories could positively influence the emotions of people with dementia.

    Music therapist Alaine Reschke-Hernández plays guitar for a patient. She is partnering with neuroscientists to better understand how music, memory and emotions are linked in people with dementia. (Photo: Andrea Mahoney)

    Because memories are so personal, music that evokes happiness in some may not do so in others. For this reason, the researchers asked participants — including 19 healthy older adults and 20 people with Alzheimer’s disease — to choose songs that evoked either sadness or happiness.

    After listening to the self-selected music, participants rated how they felt and stated whether they remembered listening to music. Both positive and negative emotions lingered for up to 20 minutes in both healthy adults and in participants with Alzheimer’s disease, whether they remembered listening to music or not, the team reported in November in the Journal of Alzheimer’s Disease.

    The findings suggest that emotional responses to music may not depend on memory recall, Reschke-Hernández says. “If we can help [people with dementia] have a lasting emotional response, or better emotional regulation using music, well, that’s fantastic,” she says.

    Such information could be important for doctors and caregivers. Music could have long-lasting effects on a patient’s emotions and well-being — in some cases negatively. Applying music therapy without this consideration might cause more harm than good, says Melita Belgrave, a music therapy professor at Arizona State University in Tempe. “If you’re using music, you can do harm if you’re not paying attention.”

    This type of research may also help scientists better understand memory loss in dementia. Some evidence suggests that brain regions involved in recognizing music remain relatively untouched by Alzheimer’s disease, and that music may boost autobiographical memories in people with the disease (SN: 6/15/15). The study “raises this question about other types of memory that seem to be relatively intact in people with dementia,” such as unconscious memory, says coauthor Edmarie Gúzman-Vélez, a neuroscientist at Massachusetts General Hospital and Harvard Medical School in Boston. “What can we do to tap into those?”

    Reschke-Hernández wants to continue building partnerships with neuroscientists and other researchers to understand exactly how music activates memories and the emotions associated with them. Eventually, she hopes to replicate the study with more diverse participants. “It’s not just looking at a diverse population of participants in terms of their culture or ethnicity, but also their age,” Reschke-Hernández emphasizes.

    For some dementia patients, music may be the perfect tool to explore, access and benefit from positive emotions activated by hidden memories. That could make these people’s lives better, she says, even if they don’t remember.

    in Science News on March 01, 2021 03:00 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    What happened when a group of sleuths flagged more than 30 papers with errors?

    Retraction Watch readers may recall the name Jennifer Byrne, whose work as a scientific sleuth we first wrote about four years ago, and have followed ever since. In a new paper in Scientometrics, Byrne, of New South Wales Health Pathology and the University of Sydney, working along with researchers including Cyril Labbé, known for his work … Continue reading What happened when a group of sleuths flagged more than 30 papers with errors?

    in Retraction watch on March 01, 2021 02:00 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    ‘Gory Details’ dives into the morbid, the taboo — and our minds

    Gory Details
    Erika Engelhaupt
    National Geographic, $26

    We tend to turn away, physically or metaphorically, from things we find unsavory: leggy insects, bodily fluids, conversations about death. But just because something is disgusting, morbid or taboo shouldn’t keep us from learning about it — and could even be a cue that we should, posits science journalist Erika Engelhaupt.

    In Gory Details, Engelhaupt takes on a range of such topics, everything from which mammals are most likely to murder members of their own species and the spotty history of research on female genitalia to how fecal transplants work and the psychology of why we find clowns creepy. She often uses science, history or both to break down what gives a particular topic its taboo or ick status. How else are you going to stop chills from running up your spine while reading about a woman who pulled 14 tiny worms out of her eye other than by learning the story of parasitic survival that landed them there?

    Regular Science News readers might recognize Engelhaupt’s name: She was an editor at the magazine from 2009 to 2014, and it was here that Gory Details was born as a blog before moving to National Geographic. The book includes updated and expanded versions of some blog posts, as well as plenty of new material.

    Science News caught up with Engelhaupt to talk about the book. The following conversation has been edited for clarity and brevity.

    SN: You’ve mentioned that when people learn your book title is Gory Details, they assume you write for kids.

    Engelhaupt: Yes. At some point, people are expected to grow up and not be interested in gross things anymore, and I reject that. I think actually we all are interested in a wide variety of gross things. It’s a matter of how you frame it. We may love watching murder mysteries and true crime and CSI-type shows. We don’t necessarily think of ourselves as being morbid because of it. But when it comes to things like biology, anatomy and subjects that are taboo involving sex or death, we hold ourselves to a different standard. I want people to read this book and walk away feeling like, you know what? It’s OK to be curious about things that we have considered off-limits for polite conversation.

    SN: Do you think you have a higher tolerance than most for “gross” topics?

    Engelhaupt: There is a quiz that you can take to see how easily disgusted you are. I’m totally average. I think maybe that’s part of why I’m so interested in these topics, because they gross me out just as much as everyone else.

    SN: You went to a conference on edible insects. This seemed like it was right at your limit of what you were willing to do in the name of Gory Details.

    Engelhaupt: It was. I felt the need to go where all of the scientists would be and really learn why they think we’re all going to be eating more insects in 20 years. It was a challenge for me. There’s a little bit of a thrill in doing something like eating that first mealworm. You know it’s not actually going to hurt you, but it’s gross and it’s new and it is exciting. The biggest challenge was the silkworm pupa, which was large and segmented and just looked so … insecty.

    SN: Do you have a favorite reporting field trip?

    Engelhaupt: Probably the most fun travel I did for the book was going to biologist Rob Dunn’s lab at North Carolina State University to find my own face mites. There are two species of little eight-legged mites that live on all of our faces — and elsewhere on our bodies, by the way. Seeing something that was living in my pores squiggling around on the microscope slide — for me, there’s nothing more fun than that. I still keep pictures on my cell phone of my face mites so that I can show them to people.

    SN: You write about “delusions of infestation,” where people believe their bodies are teeming with insects. I was struck by the stories of people with this condition, and that they seemed to have no other mental illness.

    Engelhaupt: A delusion is just a fixed idea that’s incorrect. When you hear that someone is delusional, you might think they’re schizophrenic or psychotic. There can be cases where there’s overlap with mental illness, but a lot of cases start off in a normal way. A person feels an itch, there’s a real physical sensation. It’s not too hard to imagine they’d think something is crawling on them and that it could be insects. It becomes extremely important to the person to convince people that they’re right and not crazy. So the person gets deeper and deeper into [the delusion], and it becomes harder and harder to get them to accept treatment.

    There are antipsychotic drugs that can help people let go of the idea and treatments that can solve underlying problems — skin problems, for example, or nerve problems that can cause the sensations. [Treatment with antipsychotics] makes it all sound very scary. That’s one reason this problem goes so unrecognized and untreated — because of the stigma around mental illness and because it seems like people must be crazy. Our squeamishness and fear of people who are experiencing this, our deep discomfort with it, has really created a trap for people.

    SN: You also write about a lot of new scientific research. Any standout papers where you thought, I have to write about this?

    Engelhaupt: A study where scientists fed different human bodily fluids to blowflies to see which ones the flies found tastiest. [The scientists] were looking at how flies might transfer human DNA picked up from bodily fluids to different parts of a crime scene. [DNA analysis] techniques are now so sensitive that we’re picking up DNA from fly poop. If the flies have previously eaten human blood or semen or saliva, there can be DNA from that person that gets pooped out by the fly. That [DNA] might get interpreted as blood spatter or get picked up incidentally at a crime scene and really confuse the situation. Who would have thought that you need to study fly poop to analyze DNA at a crime scene?

    SN: I was sure you were going to say the paper on the calorie count of a human, from the chapter on cannibalism.

    Engelhaupt: That’s one where it was a question I didn’t know I had until I saw that a scientist had answered it. And those are some of the kinds of things that I wanted to fill this book with: You didn’t know you wanted to know this, but I’m hoping that now you’re glad you do.


    Buy Gory Details from Bookshop.org. Science News is a Bookshop.org affiliate and will earn a commission on purchases made from links in this article.

    in Science News on March 01, 2021 01:00 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Americans Simultaneously Hold Both Positive And Negative Stereotypes About Atheists

    By Emily Reynolds

    What — or who — do you think about when you hear the word “atheist”? Someone scientific, rational, and open-minded? Or, instead, someone who lacks morality, or who is less trustworthy than your average religious person? Prior research hasn’t been wholly positive for non-believers, finding serious levels of distrust of atheists — even among atheists themselves.

    But the real picture might be slightly more complicated. According to a new study, published in Social Psychological and Personality Science, positive and negative stereotypes abound when it comes to atheists. And for many, these stereotypes exist at the same time: people can believe atheists to be fun and open-minded just as they find them to be immoral.

    Although past work has mainly looked at negative stereotypes of atheists, there is reason to believe that people could, simultaneously, hold positive stereotypes. You might perceive atheists as uninhibited and rebellious and want to invite them to a party or have them serve you at a restaurant, for example — but you may not want them to look after your children for exactly the same reason.

    The first study looked precisely at these traits, testing the hypothesis that people see atheists as fun, open-minded and scientific (and the religious as the opposite). Participants were randomly assigned to read three vignettes in which a character either displayed one of these three traits (e.g. fun), or its opposite (e.g. not fun). They were then asked which of two sentences about the character seemed more probable. One statement was standalone and the other was linked to religion or atheism: e.g. “Henry is a teacher” or “Henry is a teacher and [an atheist/believes in God]”.

    Participants were more likely to link being fun, open-minded and scientific to atheism, while they viewed the opposite traits as more representative of religious people. The extent to which this was true varied based on how religious the participants were themselves: people of high and low religiosity believed atheists were fun and scientific, but highly religious people were less likely to consider atheists as particularly open-minded. A second study replicated these findings, as well as adding another element: the stereotype of atheists as immoral, which participants tended to agree with. 

    A third study looked at the contexts in which people prefer atheists or religious people. Participants were presented with a choice between the two in a number of hypothetical situations, each related to the positive stereotypes identified in the first two studies: e.g. “which would you choose if you wanted to attend a fun party: one thrown by an atheist or by a religious person?”, “which would you choose if you wanted to have an open-minded political conversation with someone?” or “which would you choose if you wanted a tutor for a high-level college course in the physical sciences?” To disguise the purpose of the study, five distractor scenarios (e.g. hiring a mechanic) were included with five different targets (e.g. Mac users vs. PC users).

    As expected, participants preferred atheist partners in fun, open-minded and scientific situations. This was not the case for highly religious participants, however — while they had no bias towards religious or atheist party hosts, they did strongly favour conversational partners and science tutors who shared their beliefs.

    The team suggests that the results could have serious implications for non-believers: though being considered fun is hardly the worst thing in the world, it could also lead to negative ramifications like being passed over for career opportunities or being seen as an inappropriate romantic prospect. It would be interesting to explore how much positive and negative stereotypes actually stand in the way for atheists in the real world, and in what domains.

    Overall, the study paints an interesting picture of how we see both atheists and religious people — and how our own beliefs colour those pictures. While some certainly do have firm, rigid ideas about certain groups, these results suggest that for many of us stereotyping is somewhat more fluid.

    Is There Anything Good About Atheists? Exploring Positive and Negative Stereotypes of the Religious and Nonreligious

    Emily Reynolds is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on March 01, 2021 12:03 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    An ancient dog fossil helps trace humans’ path into the Americas

    An ancient bone from a dog, discovered in a cave in southeast Alaska, hints at when and how humans entered the Americas at the end of the Ice Age.

    The bone, just the fragment of a femur, comes from a dog that lived about 10,150 years ago, based on radiocarbon dating. That makes this dog fossil one of the oldest, or possibly the oldest, found in the Americas, researchers report in the Feb. 24 Proceedings of the Royal Society B.

    Analysis of DNA from the bone, which is about the same age as three ancient dogs previously found buried in the Midwest (SN: 4/16/18), suggests that the dog belonged to a lineage that split from Siberian dogs around 16,700 years ago. The timing of that split suggests that the dog’s ancestors, probably following along with humans, had left Asia by around that time.

    “Dogs’ movement and domestication is obviously very, very closely associated with humans. So the interesting thing is, if you’re following dogs’ movement, it can tell you something about humans as well,” says Charlotte Lindqvist, an evolutionary biologist at the University at Buffalo in New York.

    The new finding also adds to an ongoing debate about what route humans took after arriving in North America via a land bridge in Alaska. One long-held idea is that these first colonizers traveled inland through an ice-free corridor (SN: 8/8/18). But around 16,700 years ago, that corridor would have been covered in ice. Thus, the existence of this ancient dog supports an alternative idea — that these colonizers hugged the Pacific coast as they moved south, possibly traveling by boat.

    The bit of bone, smaller than a dime, was originally thought to be from a bear. But when Lindqvist and colleagues analyzed DNA from the bone, it turned out to be canine. Comparing the DNA with that from wolves, ancient dogs and modern dog breeds allowed the team to estimate when the dog last shared an ancestor with dogs from Siberia.

    This finding is a big deal, says Angela Perri, an archaeologist at Durham University in England, whose recent genetic research suggests that domesticated dogs accompanied the first humans into the Americas around 15,000 years ago. This new paper suggests that “at least around 16,700 years ago, humans and dogs seemed to be moving into the Americas,” she says. “And that would be almost 2,000 years earlier than we thought.”

    Kelsey Witt, a geneticist at Brown University in Providence, R.I., looks forward to additional discoveries of early American dogs. By finding more ancient fossils and studying more DNA, Witt says, “I think we’ll get a better picture of exactly how people migrated and exactly when dogs came through.”

    in Science News on March 01, 2021 11:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    “I absolutely stand by the validity of the science” says author of energy field paper now flagged by journal

    An integrative health journal has issued an expression of concern for an article it published two years ago last month about the “human biofield” and related topics after receiving complaints that the piece lacked scientific “validity.”  The article, “Energy Medicine: Current Status and Future Perspectives,” appeared in Global Advances in Health and Medicine, a SAGE … Continue reading “I absolutely stand by the validity of the science” says author of energy field paper now flagged by journal

    in Retraction watch on March 01, 2021 11:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Next Open NeuroFedora meeting: 1 March 1300 UTC

    Photo by William White on Unsplash.


    Please join us at the next regular Open NeuroFedora team meeting on Monday 1 March at 1300 UTC in #fedora-neuro on IRC (Freenode). The meeting is public and everyone is welcome to attend. You can join us over:

    You can use this link to convert the meeting time to your local time. Or, you can also use this command in the terminal:

    $ date --date='TZ="UTC" 1300 today'
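
    If GNU date isn’t available, the same conversion can be sketched in Python (3.9+, for the zoneinfo module); America/New_York below is just an example time zone, not part of the announcement:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # standard library in Python 3.9+

# Meeting time from the announcement: Monday 1 March 2021, 13:00 UTC
meeting_utc = datetime(2021, 3, 1, 13, 0, tzinfo=timezone.utc)

# Convert to a local zone (New York chosen purely as an example)
local = meeting_utc.astimezone(ZoneInfo("America/New_York"))
print(local.strftime("%Y-%m-%d %H:%M %Z"))
```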
    

    The meeting will be chaired by @major. The agenda for the meeting is:

    We hope to see you there!

    in NeuroFedora blog on March 01, 2021 09:48 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Alcohol Policies, Firearm Policies, and Suicide in the United States

    Background

    Alcohol and firearms are a dangerous combination and are commonly involved in suicide in the United States. The issue has only grown in importance as alcohol consumption, firearm sales, and suicides in the United States have all risen since the start of the COVID-19 pandemic. State alcohol and firearm policies might affect alcohol- and firearm-related suicide, but it is unknown how these policies specifically relate to such suicides, or how they might interact with one another.

    The study

    We conducted a cross-sectional study to assess relationships between alcohol policies, firearm policies, and U.S. suicides in 2015 involving alcohol, firearms, or both. We used the Alcohol Policy Scale, previously created and validated by our team, to assess alcohol policies and the Gun Law Scorecard from Giffords Law Center to quantify firearm policies. Suicide data came from the National Violent Death Reporting System. State- and individual-level GEE Poisson and logistic regression models assessed relationships between policies and firearm- and/or alcohol-involved suicides with a 1-year lag.

    Results

    In the United States in 2015, alcohol and/or firearms were involved in 63.9% of suicides. Higher alcohol and gun law scores were associated with reduced incidence rates and odds of suicides involving either alcohol or firearms. For example, a 10% increase in alcohol policy score was associated with a 28% reduction in the rate of suicides involving alcohol or firearms. Similarly, a 10% increase in gun policy score was associated with a 14% decrease in the rate of suicides involving firearms.

    These relationships were similar for suicides that involved alcohol and firearms. For example, a 10% increase in alcohol policy score was associated with a 52% reduction in the rate of suicides involving alcohol and firearms. A 10% increase in gun policy score was associated with a 26% reduction in the rate of suicides involving alcohol and firearms.
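    As a back-of-envelope illustration (the log-linear model form is our assumption, not stated in the post), the reported 28% rate reduction per 10% score increase implies a regression coefficient, from which the effect of a larger score change can be extrapolated:

```python
import math

# Reported association: a 10% (0.1-unit) increase in alcohol policy score
# corresponds to a 28% reduction in the rate of suicides involving
# alcohol or firearms, i.e. an incidence rate ratio (IRR) of 0.72.
irr_per_10pct = 1 - 0.28

# Implied coefficient on the 0-1 scaled policy score in a log-linear model
beta = math.log(irr_per_10pct) / 0.1

# Under that model, a 20% score increase multiplies the rate by 0.72 squared
irr_per_20pct = math.exp(beta * 0.2)
print(round(beta, 3), round(1 - irr_per_20pct, 3))
```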

    In addition, we found synergistic effects between alcohol and firearm policies, such that states with restrictive policies for both alcohol and firearms had the lowest odds of suicides involving alcohol and firearms.

    Conclusions and next steps

    Results of the study suggest that laws restricting firearms ownership among high-risk individuals, including those who drink excessively or have experienced alcohol-related criminal offenses, may reduce firearm suicides.

    We found restrictive alcohol and firearm policies to be associated with lower rates and odds of suicides involving alcohol, firearms, or both, which suggests that these policies may be a promising means of reducing suicide. The protective relationships were particularly striking for suicides involving both alcohol and firearms, as was the strong protective interaction between the alcohol and firearm policy variables, particularly for suicides involving alcohol. Taken in the context of the broader literature, these findings also suggest that laws restricting firearm ownership among high-risk individuals (so-called ‘may issue’ laws), including those who drink excessively or have experienced alcohol-related criminal offenses, may reduce firearm suicides.

    Because this was a cross-sectional analysis, this should be considered a hypothesis-generating study that cannot prove a causal association between alcohol or firearm policies and suicide. In future research, studies using multiple years of policy and suicide data would strengthen causal inference.

    Stronger alcohol and firearm policies are a promising means to prevent a leading and increasing cause of death in the U.S. The findings further suggest that strengthening both policy areas may have a synergistic impact on reducing suicides involving either alcohol, firearms, or both.

    The post Alcohol Policies, Firearm Policies, and Suicide in the United States appeared first on BMC Series blog.

    in BMC Series blog on March 01, 2021 07:11 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Overview of 'The Spike': an epic journey through failure, darkness, meaning, and spontaneity

    from Princeton University Press (March 9, 2021)


    THE SPIKE is a marvelously unique popular neuroscience book by Professor Mark Humphries, Chair of Computational Neuroscience at the University of Nottingham and Proprietor of The Spike blog on Medium. Humphries' novel approach to brain exposition is built around — well — the spike, the electrical signal neurons use to communicate. In this magical rendition, the 2.1-second journey through the brain takes 174 pages (plus Acknowledgments and Endnotes).

    I haven't read the entire book, so this is not a proper book review. But here's an overview of what I might expect. The Introduction is filled with inventive prose like, “We will wander through the splendor of the richly stocked prefrontal cortex and stand in terror before the wall of noise emanating from the basal ganglia.” (p. 10).


    Did You Know That Your Life Can Be Reduced To Spikes?

    Then there's the splendor and terror of a life reduced to spikes (p. 3):

    “All told, your lifespan is about thirty-four billion billion cortical spikes.”


    Spike Drama

    But will I grow weary of overly dramatic interpretations of spikes? “Our spike's arrival rips open bags of molecules stored at the end of the axon, forcing their contents to be dumped into the gap, and diffuse to the other side.” (p. 29-30).

    Waiting for me on the other side of burst vesicles are intriguing chapters on Failure (dead end spikes) and Dark Neurons, the numerous weirdos who remain silent while their neighbors are “screaming at the top of [their] lungs.” (p. 83). I anticipate this story like a good mystery novel with wry throwaway observations (p. 82):

    “Neuroimaging—functional MRI—shows us Technicolor images of the cortex, its regions lit up in a swirling riot of poorly chosen colors that make the Pantone people cry into their tasteful coffee mugs.”


    Pantone colors of 2021 are gray and yellow

     

    Wherever it ends up – with a mind-blowing new vision of the brain based on spontaneous spikes, or with just another opinion on predictive coding theory – I predict THE SPIKE will be an epic and entertaining journey. 

     


    in The Neurocritic on February 28, 2021 09:45 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    The Invisibility of COVID-19

    Why is it so hard to picture COVID-19?

    in Discovery magazine - Neuroskeptic on February 28, 2021 12:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    What you need to know about J&J’s newly authorized one-shot COVID-19 vaccine

    And then there were three: A single-shot vaccine is the latest weapon to join the battle against COVID-19 in the United States.

    On February 27, the U.S. Food and Drug Administration gave emergency use authorization for Johnson & Johnson’s vaccine against SARS-CoV-2, the coronavirus that causes COVID-19. South Africa is the only other country to OK Johnson & Johnson’s vaccine so far, though other countries are poised to follow suit.

    The FDA determined that Johnson & Johnson’s vaccine meets the criteria for safety and effectiveness and that there is clear evidence that it may prevent COVID-19, the agency said in a statement.

    “With today’s authorization, we are adding another vaccine in our medical toolbox to fight this virus,” said Peter Marks, director of the FDA’s Center for Biologics Evaluation and Research.

    Its authorization for emergency use in the United States – for people age 18 and older – follows similar authorizations in December for vaccines made by Moderna and by Pfizer and its German partner BioNTech.

    Shortages of vaccines make the addition of a third safe and effective vaccine welcome. “We’re still in the midst of this deadly pandemic,” says Archana Chatterjee, Dean of the Chicago Medical School at Rosalind Franklin University of Medicine and Science.

    “Authorization of this vaccine will help meet the needs at the moment,” she said February 26 after an FDA vaccine advisory committee unanimously voted to recommend Johnson & Johnson’s vaccine for emergency use.

    But even as the pharmaceutical company readies to ship out 4 million doses, questions remain about how well the public will embrace the new shot.

    On the one hand, people weary of struggling to set up not just one but two appointments to get the currently available double-dose vaccines may welcome one-stop shopping. And adding millions more vaccines to the pipeline should speed up efforts to get the vast majority of Americans protected.

    But on the other hand, its efficacy results fall short of those reported for two shots of the mRNA vaccines made by Moderna (94.1 percent) and Pfizer (95 percent) (SN: 1/29/21; SN: 11/16/20; SN: 11/18/20). In real-world situations, a single shot of Pfizer’s vaccine was 74 percent to 85 percent effective at preventing hospitalizations (SN: 2/26/21).

    In clinical trials, Johnson & Johnson’s vaccine was about 66 percent effective at preventing moderate and severe disease. Its efficacy rose to 85 percent when it came to preventing severe and critical cases requiring hospitalization.

    Here’s what you need to know about the vaccine, which was developed by Johnson & Johnson’s subsidiary Janssen Pharmaceuticals:

    How does it work?

    Researchers engineered a common cold virus called adenovirus 26 to carry instructions for making the coronavirus’s spike protein into human cells. The human cells make the viral protein, which goads the immune system to make antibodies and train immune cells to attack the coronavirus, should the person encounter it later.

    The engineered adenovirus 26, which has been altered so that it can’t cause disease, is the base for other vaccines made by Janssen, including an approved Ebola vaccine, and experimental vaccines against Zika, HIV and respiratory syncytial virus (RSV). Together, trials of those vaccines have tested the engineered virus in more than 193,000 people, including children, pregnant women and immunocompromised people. Those trials have shown that the technology has a good safety record.

    Why is this one less effective than other authorized vaccines?

    It may be unfair to directly compare the efficacy results. Johnson & Johnson’s vaccine was tested in the United States, South Africa, Brazil and other parts of Latin America when coronavirus variants that can escape some immune protection were circulating. Under the same conditions, the mRNA vaccines might be less effective, too.

    This is also a single-shot vaccine. Its efficacy is similar to that of a different two-dose adenovirus vaccine made by the University of Oxford and its partner AstraZeneca (SN: 11/23/20).

    Johnson & Johnson has begun testing whether a second dose of its vaccine can boost efficacy. If a second dose improves efficacy, researchers worry that the new information could sow confusion among those who have already gotten the shot.

    “If you bring out a single-dose vaccine … and later say that a second dose is clinically better enough that we recommend a second dose, you can see how that would be confusing,” Paul Offit, director of the Vaccine Education Center at Children’s Hospital of Philadelphia, said during the FDA vaccine advisory board meeting.

    Should I get the shot?

    Yes, the experts say.

    “We’re going to have to communicate effectively so people don’t feel they’re getting a second-rate product. It’s very good at what it does,” says Georges Benjamin, executive director of the American Public Health Association in Washington, D.C.

    Although the Johnson & Johnson vaccine didn’t prevent moderate or severe illness as well as the mRNA vaccines do, “it’s going to protect, no matter what, for the part of the disease that we really care about, which is hospitalization, severe disease and death,” Benjamin says. “There’s no difference.” 

    As of February 25, more than 52,000 people were hospitalized in the United States fighting COVID-19, according to the COVID Tracking Project. That’s down from record-setting daily peaks of more than 130,000 in early January and is the lowest level since early to mid-November. More than half a million people in the United States have now died from COVID-19.

    In Johnson & Johnson’s clinical trial, two of the 19,514 people in the vaccine group were hospitalized with COVID-19 starting 14 days after vaccination. That compares with 29 hospitalizations among the 19,544 people in the placebo group. None of the vaccinated people died, but there were seven deaths related to COVID-19 in the placebo group. Those numbers are small and some researchers say the data aren’t clear-cut on the benefits.
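    The efficacy percentages quoted throughout this article follow from a simple relationship: efficacy is one minus the ratio of the attack rate in the vaccinated group to the attack rate in the placebo group. As a rough, unadjusted sketch (the trial’s formal analysis uses more sophisticated statistics), the hospitalization counts above work out like this:

```python
def efficacy(cases_vaccine, n_vaccine, cases_placebo, n_placebo):
    """Crude vaccine efficacy: one minus the risk ratio between groups."""
    risk_vaccine = cases_vaccine / n_vaccine
    risk_placebo = cases_placebo / n_placebo
    return 1 - risk_vaccine / risk_placebo

# Hospitalizations starting 14 days after vaccination, per the trial
# figures above: 2 of 19,514 vaccinated vs. 29 of 19,544 on placebo.
print(f"{efficacy(2, 19514, 29, 19544):.0%}")  # → 93%
```

    With only 2 and 29 events, though, the point estimate carries wide uncertainty, which is why some researchers call the data less than clear-cut.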

    “The data indicate that the vaccine is effective, but doesn’t prove that the vaccine is especially effective against moderate to severe COVID,” said Diana Zuckerman, president of the National Center for Health Research, a Washington, D.C.–based think tank that analyzes health research.

    The data were also collected after only two months of follow-up. Normally, the FDA requires a year or more of data to fully approve a vaccine. Some questions about the vaccine can’t be answered with less than six months of data, Zuckerman said during a public comment period in the Feb. 26 advisory board hearing.  “Let’s be very honest with the public about what we do know and what we won’t know” for some time to come.

    For all the vaccines, no one knows how long immunity will last. And what’s already authorized might need to be tweaked if resistant variants become widespread. Booster shots may be needed, Benjamin says.

    Most people probably won’t be able to choose which vaccine they get, but if the choice is taking the Johnson & Johnson vaccine or waiting months for an mRNA vaccine, “to me that’s not a close call. You should get the J&J now,” says Robert Wachter, who chairs the Department of Medicine at the University of California, San Francisco. “The best vaccine is the one you get today.”

    How many people will be able to get the vaccine?

    The company fell short of its goal to deliver 10 million doses by the end of February. But it can have 20 million doses ready by the end of March and 100 million by the end of June, a company official told a subcommittee of the U.S. House of Representatives Energy and Commerce Committee on February 23.

    Because the vaccine is given as a single shot, each dose is enough to vaccinate a person. Pfizer’s and Moderna’s vaccines require two shots for complete efficacy.

    “The fact that it’s a single dose lends itself to be a game changer,” says Krishna Udayakumar, director of the Duke Global Health Institute in Durham, N.C.

    People who have a fear of needles, or those who can’t take time off work or don’t have transportation to vaccination sites might prefer a single shot over the two-dose mRNA vaccines.

    “We have poorly housed people who come to the ER,” Wachter says. “They don’t have a doctor. They don’t have a house and we’re going to try to vaccinate them and bring them back in a month? It’s just not going to work.” A single-dose vaccine would be ideal in that setting.

    Plus, the vaccine doesn’t require freezing. It can be stored in a standard refrigerator for up to three months. That makes it easier to use in places that don’t have easy access to freezers needed to keep the mRNA vaccines fresh.

    With the three authorized vaccines, the United States may have enough doses by the end of the summer to vaccinate everyone, Udayakumar says.

    The quicker the United States can vaccinate vulnerable populations, the sooner it might begin sharing vaccines with low-income countries through the World Health Organization’s COVAX program (SN: 2/26/20).

     “We still have 130 countries that have had zero vaccinations,” says Udayakumar. “In the U.S., we’ve purchased more vaccine than we could ever use.”

    in Science News on February 27, 2021 11:37 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Weekend reads: Prof plagiarized student, says investigation; universities mishandle allegations; what we should learn from ‘bad science’

    Before we present this week’s Weekend Reads, a question: Do you enjoy our weekly roundup? If so, we could really use your help. Would you consider a tax-deductible donation to support Weekend Reads, and our daily work? Thanks in advance. The week at Retraction Watch featured: Meet the postdoc who says he’s been trying to retract … Continue reading Weekend reads: Prof plagiarized student, says investigation; universities mishandle allegations; what we should learn from ‘bad science’

    in Retraction watch on February 27, 2021 02:08 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Dialogue With Dreamers

    Researchers claim that they can ask questions and receive answers from dreaming participants.

    in Discovery magazine - Neuroskeptic on February 27, 2021 12:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Global inequity in COVID-19 vaccination is more than a moral problem

    Months before the first COVID-19 vaccine was even approved, wealthy nations scrambled to secure hundreds of millions of advance doses for their citizens. By the end of 2020, Canada had bought up 338 million doses, enough to inoculate its population four times over. The United Kingdom snagged enough to cover a population three times its size. The United States reserved over 1.2 billion doses, and has already vaccinated about 14 percent of its residents.

    It’s a drastically different story for less wealthy nations. More than 200 have yet to administer a single dose. Only 55 doses in total have been delivered among the 29 lowest-income countries, all to Guinea. Only a few sub-Saharan African countries have begun systematic immunization programs.

    “The world is on the brink of a catastrophic moral failure, and the price of this failure will be paid with the lives and livelihoods in the world’s poorest countries,” Tedros Adhanom Ghebreyesus, director-general of the World Health Organization, said recently.

    COVAX, an international initiative tasked with ensuring more equitable access to COVID-19 vaccines, aims to redress this imbalance by securing deals that send shots to low-income countries free of charge. Despite new pledges of support from some of the wealthiest nations, COVAX is off to a slow start. Its first shipment of 600,000 shots was sent February 24, to Ghana. COVAX still needs nearly $23 billion to meet its goal of vaccinating 20 percent of the populations of participating countries by the end of the year.

    Such stark inequities don’t just raise moral questions of fairness. With vaccine demand still vastly outstripping supply, lopsided distribution could also ultimately prolong the pandemic, fuel the evolution of new, potentially vaccine-evading variants, and drag down the economies of rich and poor — and vaccinated and unvaccinated — nations alike.

    “I think the leaders of rich nations have done a very poor job explaining to their citizens why it’s so important that vaccines are distributed worldwide and not just within their own nation,” says Gavin Yamey, a global public health policy expert at Duke University. “No one is safe until all of us are safe, since an outbreak anywhere can become an outbreak everywhere.”

    Vaccine inequity could breed vaccine-evading variants

    Here’s why a new coronavirus outbreak anywhere can become an outbreak everywhere: Viruses mutate.

    It’s normal and happens by chance as a virus replicates inside a host. Most mutations are harmless, or hurt the virus itself. But every so often, a tiny genetic tweak makes the virus better at infecting hosts or dodging their immune response. The more a virus spreads, the more opportunity that one (or more likely a handful) of these tweaks could birth a new, more threatening strain.

    This is already happening. In December, scientists detected a new variant, dubbed B.1.1.7, in the United Kingdom. It soon became clear that it had acquired mutations that made it more infectious (SN: 1/27/21). In just a few months, that variant has circled the globe, popping up in more than 70 countries, including the United States.

    Another variant first detected in South Africa is also more transmissible — and appears to be slightly less affected by existing vaccines (SN: 1/27/21). It too has spread worldwide. Variants detected in California and New York are now raising concern too. As long as widespread viral transmission continues, new variants will emerge.

    “It’s uncertain at this point whether we’re going to have to continually chase this virus and develop more vaccines,” says William Moss, the executive director of the International Vaccine Access Center at Johns Hopkins Bloomberg School of Public Health.

    The more the virus replicates, the more opportunity it has to evolve around existing vaccines or natural immune responses to older variants, Moss says. Large pockets of unvaccinated people can serve as incubators for new variants. The longer such pockets persist, the greater the chance of variants accumulating changes that make them more and more resistant to vaccines. Eventually, such variants might invade well-vaccinated countries that thought themselves safe.

    Barely vaccinated populations might be especially fertile grounds for vaccine-evading variants, says Abraar Karan, an internal medicine physician at Harvard Medical School and Brigham and Women’s Hospital in Boston. In a vaccinated individual, mutations that even slightly evade the vaccine-induced immune response can get a foothold. Unless that variant completely evades vaccines, which is unlikely, its spread will be blunted by a well-vaccinated population. But if most of a region remains totally naïve to infection, that new variant could burn quickly through the largely unvaccinated population, fueling the changed virus’ spread to other regions.

    In Israel, where cases have fallen after more than 40 percent of the population has received at least one vaccine dose, the health ministry has reported at least three cases of reinfection by the South African variant in non-vaccinated people. That’s a very small sample, but indicative of the threat posed by uneven vaccination rates globally.

    “If we want to stop the spread we have to stop it everywhere, starting with the most vulnerable,” Karan says. “Otherwise we’re going to see continued outbreaks and suffering.”

    In areas where coronavirus transmission spikes, restrictions on businesses may be imposed to curb spread. Because international demand drives the global economy, shutdowns like these will slow overall recovery, experts say. Ian Forsyth/Getty Images

    “No economy is an island”

    Protecting people from getting sick is obviously a big driver of the rush to vaccinate in wealthy nations, many of which have been hit hard by the virus. Vaccines are also seen as a way out of the largest global economic downturn since World War II, roughly a 4.4 percent dip. But an inequitable distribution of vaccines could imperil a robust and quick recovery, experts say.

    If extended outbreaks, lockdowns, sickness and deaths continue in countries with less access to vaccines, all economies will suffer, says Selva Demiralp, an economist at Koç University in Istanbul. “No economy is an island,” she says, “and no economy will be fully recovered unless others are recovered, too.”

    Extreme vaccine inequity could cost the global economy more than $9 trillion in 2021, about half of which would come from rich nations, Demiralp and her colleagues reported January 25 in a paper published by the National Bureau of Economic Research. In that scenario, wealthy nations largely vaccinate their populations by midyear, but leave poorer nations out completely.

    Everybody takes a hit thanks to the interconnectedness of the global economy. The production process to build a Volkswagen or iPhone, for instance, spans continents. Disruptions to one link of that supply chain, say, steel manufacturing in Turkey, ripple throughout. Today’s marketplace is global, too: Diminished demand for goods in countries saddled with coronavirus restrictions will affect the bottom line of companies headquartered in wealthy nations. “As infections rise in a country, both supply and demand can decrease,” Demiralp says.

    She and her colleagues estimated these virus-induced fluctuations in supply and demand by combining a statistical model of how coronavirus spreads with vast amounts of economic data across 35 sectors in 65 countries. By tweaking the pace and extent of vaccination, the team estimated total costs to each country under different scenarios. The $9 trillion number represents extreme inequity. But less extreme gaps are still very expensive. 

    If rich countries vaccinate their entire populations in four months, while the lowest-income countries vaccinate half their population by the end of 2021, global gross domestic product this year will fall by between $1.8 and $3.8 trillion, with rich countries losing about half of that, the team calculated. 

    Those costs could be averted with a much smaller investment, on the order of tens to hundreds of billions of dollars, in distributing vaccines globally. “It’s a no brainer,” Demiralp says. “It’s not an act of charity. It’s economic rationality.”

    Evening the playing field

    COVAX is trying to even the vaccine playing field — but with limited success so far. There are a lot of hurdles, from securing scarce doses to ensuring that countries have the infrastructure to handle them. That could mean anything from equipping some countries with more ultracold refrigerators to store vaccines (SN: 11/20/20) to revamping mass vaccination programs designed for kids to work for adults too. “Equitable distribution will take a lot more than just securing vaccines,” says Angela Shen, a public health expert at Children’s Hospital of Philadelphia’s Vaccine Education Center.

    Three global public health powerhouses lead the international initiative: the Global Alliance for Vaccines and Immunization, the World Health Organization and the Coalition for Epidemic Preparedness Innovations. COVAX uses funds from governments and charitable organizations to buy up doses from pharmaceutical companies and distribute them to lower-income countries free of charge.

    For starters, COVAX plans to distribute 330 million doses to lower-income countries in the first half of the year, enough to vaccinate, on average, 3.3 percent of each population. Meanwhile, by June many rich nations will be well on their way to vaccinating most of their populations.

    All told, COVAX says it’s reserved 2.27 billion doses so far, enough to vaccinate 20 percent of the populations of 92 low-income countries by year’s end. Actually meeting that goal is contingent on raising $37 billion, and COVAX is not even halfway there yet. On February 19, several countries including the United States and Germany pledged to contribute an additional $4.3 billion to the effort. Still, COVAX is nearly $23 billion short.

    “Money is not the only challenge we face,” WHO’s Ghebreyesus said in a Feb. 22 news briefing. Deals between wealthy nations and pharmaceutical companies threaten to gobble up global vaccine supply, reducing COVAX’s access. “If there are no vaccines to buy, money is irrelevant.”

    People getting vaccinated, in any country, is something to be celebrated, says Yamey, of Duke University, “but it should disturb us to know that low-risk people are going to get vaccinated in rich countries well ahead of high-risk people in poor countries.” A more equitable rollout, Yamey says, would prioritize healthcare workers and vulnerable people in all countries. “I don’t see that happening in any scenario unfortunately.”

    Even if COVAX achieves its goal this year, these countries will be far from reaching herd immunity, the threshold at which enough people are immune to a pathogen to slow its spread (SN: 3/24/20). Estimates of that threshold range from 60 to 90 percent of a population.
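    That 60-to-90-percent range can be related to a textbook epidemiological approximation: the herd immunity threshold is roughly 1 − 1/R0, where R0 is the basic reproduction number, the average number of people one case infects in a fully susceptible population. A minimal sketch, with illustrative R0 values chosen here for the arithmetic rather than taken from the article:

```python
def herd_immunity_threshold(r0):
    """Classic approximation: the fraction of a population that must be
    immune so each case infects, on average, fewer than one person."""
    return 1 - 1 / r0

# Illustrative reproduction numbers (assumptions, not article figures):
print(f"{herd_immunity_threshold(2.5):.0%}")  # → 60%
print(f"{herd_immunity_threshold(10):.0%}")   # → 90%
```

    Real-world thresholds also depend on vaccine effectiveness and how evenly immunity is distributed, so published estimates vary.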

    “Many low-income nations won’t have widespread vaccination until 2023 or 2024, because they can’t get the doses,” Yamey says. “This inequity is due to hoarding of doses by rich nations, and that me-first, me-only approach ultimately goes against their long-term interests.”

    in Science News on February 26, 2021 04:50 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Phone Fears And Dolphin Directions: The Week’s Best Psychology Links

    Our weekly round-up of the best psychology coverage from elsewhere on the web

    In 2001, Air Transat Flight 236 lost engine power and made an emergency landing in the Azores. All passengers survived, but for 30 terrifying minutes, many thought they were going to die. Writing for Wired, Erika Hayasaki has the fascinating story of one of those passengers, Margaret McKinnon, a psychologist who then went on to study why some survivors developed PTSD and others did not — and who is now looking at the mental health of frontline workers during the pandemic.  


    Although we often think of mental health disorders as falling neatly into discrete categories — “depression”, say — the reality is that they can present quite differently in different people. So could treatments like brain stimulation be personalised to individuals? Kim Tingley takes a look for The New York Times.


    Many of us experience a feeling of dread when the phone rings — or when we have to make a call. At The Conversation, Ilham Sebah explains why we experience phone anxiety, and what we can do to get over it. (And, as we reported last year, you may actually get more out of phone calls than you expect).


    Undark has a nice podcast this week about the hope — and hype — surrounding the use of psychedelics as antidepressants. It’s a great place to start for a balanced overview of where the field is at.


    We reported earlier this week on the similarities between dolphin and human personalities — but do dolphins also have “handedness” like humans? Past work had suggested that the aquatic mammals showed behavioural asymmetries in their movements, preferring to spin rightward. But a new study casts doubt on those findings, writes researcher Kelly Jaakkola at Scientific American.


    “Mini-brains” — brain organoids grown from stem cells in the lab — are used to study the development of the human brain, though they are far more primitive than real brains. But researchers have reported a surprising finding: after around 9 months, even these basic organoids show changes in gene expression similar to that in human babies after birth. The results suggest that mini-brains may be useful for studying disorders that emerge after birth, rather than just prenatally, reports Kelly Servick at Science.


    A lot of human neuroscience is about finding a signal in all of the background noise — identifying which regions are more active during a certain task, say, while ignoring the other activity that is going on in the brain. But what if there is useful information in all of that noise? At Wired, Elizabeth Landau reports on studies that have linked patterns of white noise within EEG data to aspects of behaviour and cognition.

    Compiled by Matthew Warren (@MattBWarren), Editor of BPS Research Digest

    in The British Psychological Society - Research Digest on February 26, 2021 04:00 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Why arXiv needs a brand

    Pop quiz: There are so many “-Xivs” online and on social media.  Which ones are operated by arXiv?

    Answer: only arXiv.org and @arXiv

    arXiv is a highly valued tool known primarily through its digital presence. However, the use of arXiv’s name by other services has led to confusion. And despite decades of reliable service, arXiv’s inconsistent visuals and voice have projected an air of neglect. This jeopardizes our ability to raise funds for critical improvements.

    “As the role of arXiv in open science becomes more evident, its value should be made obvious lest we end up losing the system we cherish and rely upon so much,” said Alberto Accomazzi, PhD, ADS Program Manager.

    In 2020, Accomazzi joined nine other diverse community members to become part of an advisory group formed to support arXiv’s communications and identity project. The goal? To ensure that the way we present ourselves to the world reflects arXiv’s true nature as a professional, innovative tool created by and for researchers.

    Throughout the identity project, we:

    • assessed user feedback collected since 2016,
    • surveyed board members and 7,000 additional users about their perceptions of arXiv,
    • gathered ten diverse community members to serve as advisors,
    • contracted with a professional designer to produce a logo, and
    • are working with an accessibility consultant to address the needs of all arXiv readers and authors.

    To guide our branding efforts we focused on arXiv as a place of connection, linking together people and ideas, and connecting them with the world of open science. After many rounds of revision and refinement, arXiv’s first brand guide was produced, in addition to our new logo and usage guidelines, and we’d like to share them with you now.

    The intertwining arms at the heart of the logo represent arXiv as a place of connection.

    The arXiv logo looks to the future and nods to the past with a font that pays homage to arXiv’s birth in the ’90s while also being forward looking. The arms of the ‘X’ retain stylistic elements of the ‘chi’ in our name, with a lengthened top left and lower right branch. Symbolically, the intertwining of the arms at the heart of the logo captures the spirit of arXiv’s core value: arXiv is a place of connection, linking together people and ideas, and connecting them with the world of open science.

    The brand guide and usage guidelines ensure that we express arXiv’s identity with consistent quality and continuity across communication channels. By strengthening our identity in this way, arXiv will be recognizable and distinct from other services. Staff will save time by having access to clear, consistent guidelines, visual assets, and style sheets, and collaborators will know the expectations regarding arXiv logo usage.

    The arXiv community will notice that the main arXiv.org site remains the same at this time. That’s because the identity rollout and implementation process will be gradual, starting with official documents before moving to core arXiv services.

    “arXiv must take control of its identity to maintain its place and grow within the scholarly communications ecosystem,” said arXiv’s executive director Eleonora Presani, PhD.

    in arXiv.org blog on February 26, 2021 03:55 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    The guardians of Scopus

    Here’s how independent subject experts monitor the titles in Scopus to uncover predatory journals

    in Elsevier Connect on February 26, 2021 03:26 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Can a COVID-19 vaccine’s second dose be delayed? It’s complicated

    Within a couple weeks after a first vaccine dose, people are well protected against severe COVID-19, new data suggest. With demand for shots far outpacing supply, that’s sparked a debate among scientists and policy makers: Is it OK to hold off giving the second dose?

    Delaying the dose could make way for more people to get their first shots and stem the coronavirus’s spread, proponents say. Opponents say there’s not enough data to show if that one-shot protection is long-lasting enough. And they worry that changing timing now could confuse people, undermine trust and lead to more widespread hesitancy to get the vaccine. 

    Here’s a closer look at the issues involved.

    Data on dosing

    In clinical trials, the second dose of the Pfizer/BioNTech vaccine was given 21 days after the first. Moderna’s second shot followed the first jab after 28 days. Both vaccines were about 94 percent to 95 percent effective after two doses (SN: 12/18/20).

    AstraZeneca and the University of Oxford spaced doses of their vaccine four to 12 weeks apart in four separate trials. That vaccine’s efficacy ranged from 62 percent to about 90 percent depending on dosing schedules and amounts (SN: 11/23/20).

    The U.S. Food and Drug Administration gave emergency authorization for the Pfizer and Moderna vaccines to be given on the same schedule tested in the trials. (The AstraZeneca vaccine is not approved for use yet in the United States.)

    The United Kingdom took a different approach, deciding in late December to delay booster shots of coronavirus vaccines until 12 weeks after the initial dose. The goal: to stretch vaccine supplies to cover as many people as possible. The decision drew criticism. After all, scientists said, that timing had never been tested for efficacy against the coronavirus.

    But now, some new data seem to justify the decision to delay.

    A reanalysis of the Pfizer clinical trial data found that the mRNA vaccine has an efficacy of 92.6 percent starting two weeks after the first shot, two Canadian researchers write in a letter to the editor published February 17 in the New England Journal of Medicine. That’s similar to the 92.1 percent efficacy Moderna reported after one shot of its mRNA vaccine.

    Pfizer had initially calculated that the first shot’s efficacy was 52.4 percent, but that included cases that emerged in the first two weeks after vaccination when immunity was still ramping up. Those early cases aren’t a fair test of a vaccine’s efficacy, says Danuta Skowronski, epidemiology lead for Influenza & Emerging Respiratory Pathogens at the British Columbia Centre for Disease Control in Vancouver. It takes a couple of weeks to build antibodies and train immune cells to attack a virus. The new estimate is similar to Public Health England’s assessment of the data.

    Here’s how first-shot efficacy has played out both in the real world and when real-world problems have thrown wrinkles into trials:

    • Among health care workers at Sheba Medical Centre in Israel, rates of infection dropped by 75 percent 15 to 28 days after the first dose of the Pfizer vaccine compared with unvaccinated health care workers, researchers report February 18 in the Lancet. And rates of cases with symptoms were reduced by 85 percent.
    • Among nearly 600,000 people who got the Pfizer vaccine through Israel’s largest health care system, the vaccine was 46 percent effective at preventing infections, 62 percent effective at preventing severe disease and 72 percent effective at preventing death two or more weeks after the first dose, researchers report February 24 in the New England Journal of Medicine.
    • In Scotland, the Pfizer vaccine was 85 percent effective at preventing hospitalizations 28 to 34 days after the first shot, researchers report February 19 in a preprint in the Lancet. That study also found that the AstraZeneca vaccine was 94 percent effective at keeping people out of the hospital a month out from the first shot. Those preliminary data have not been thoroughly vetted by other scientists yet.
    • And when manufacturing delays postponed giving the second dose of the AstraZeneca vaccine in trials in the United Kingdom, Brazil and South Africa, efficacy went up. When people got the second shot less than six weeks from the first, the vaccine’s efficacy was about 55 percent, but waiting 12 weeks or more to give the booster shot produced about 81 percent efficacy, researchers reported February 19 in the Lancet. Antibody levels in the study participants’ blood did not drop in the three months after the first shot, the researchers also found, suggesting the first shot provides some lasting protection against the coronavirus.

    Arguing for delay

    Those numbers justify temporarily postponing second doses to ensure that more people get their first shots, says Robert Wachter, who heads the Department of Medicine at the University of California, San Francisco.

    “That’s not a hard math question,” he says. “You’ll save far, far more lives — on the order of tens of thousands more lives — giving those extra vaccine doses to people for their first shot, getting them from zero to 85 percent protected, than using that same capacity [for] giving people their second shot and getting them from 85 to 95 [percent efficacy].”
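    Wachter’s “math question” can be made concrete with a hypothetical, fixed supply of doses, using the round efficacy figures from his quote (85 percent protection after one shot, 95 percent after two):

```python
doses = 1000  # hypothetical fixed supply of available doses

# Percentage points of protection gained per dose under each policy:
gain_first_shot = 85 - 0    # unprotected -> one-dose protection
gain_second_shot = 95 - 85  # one-dose -> two-dose protection

# Total protection added across the supply, in person-percentage-points:
print(doses * gain_first_shot)   # → 85000
print(doses * gain_second_shot)  # → 10000
```

    On this crude accounting, first shots deliver 8.5 times more added protection per dose. The counterarguments turn on durability and public trust, not on this arithmetic.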

    The real driving force behind proposals to delay the second shots is that there just isn’t enough vaccine to go around. It’s all about getting jabs into as many arms as possible, Skowronski says.

    Postponing the second dose doesn’t mean cancelling it, she says. It’s just a delay that could allow for more widespread distribution of the vaccine, especially to people at high risk of hospitalization and death from COVID-19.

    Even though no one knows how long protection from a single shot will last, immunity doesn’t disappear overnight. That buys time, she says.

    “We should be ensuring as many people as possible, by whatever means possible, get the first dose before we double back and try to top up with a second dose,” Skowronski says. “Every second dose we administer is essentially depriving someone else of the substantial protection they could have got from that vaccine supply as a first dose.”

    Arguing against delay

    Yes, the data overall suggest the first doses work pretty well, but scientists don’t know how durable that protection is, says virologist Onyema Ogbuagu. That may not be as much of a problem in countries like Israel and the United Kingdom, which have been vaccinating people pretty quickly. But in the United States, the vaccine rollout has been creeping and crawling, Ogbuagu says. Because of that slow progress, “you could be six months or 10 months into vaccination and the first people you vaccinated could become vulnerable again.”

    The second shot should make immunity last longer. “The role of the second dose is, without question, an advantage,” he says. “It optimizes the efficacy and durability.” Ogbuagu, who oversees COVID-19 clinical trials at Yale School of Medicine, was involved in testing the Pfizer vaccine’s efficacy.

    Earlier Phase I and II safety trials also tested people’s immune responses to the mRNA vaccines. Those data showed that antibody levels after the first shot are respectable but often don’t get close to matching levels seen in people who have recovered from COVID-19, Ogbuagu says. “But the pattern after the second dose is just so striking, antibody levels just skyrocket,” often exceeding levels from recovered patients, he says.

    He also notes that the dosing data from the AstraZeneca trial came from a part of the study that wasn’t planned, and other unknown factors might be influencing the outcome. A new clinical trial mixing the Pfizer and AstraZeneca vaccines will test in which order the shots should be given, and whether a four- or 12-week interval between doses produces better efficacy. That trial will produce more reliable data on which to base a decision about shot schedules. For now, though, “we have to deal with the unknowns,” he says, “and I think the benefits of giving that second shot outweigh giving just the first one and hoping for the best.”

    There’s another big worry: Even under a best case scenario, some people are bound to get sick after getting vaccinated. The vaccines aren’t perfect and some new variants of the coronavirus can evade antibodies generated by the jabs. Some researchers are concerned that delaying a second dose could help produce new variants (SN: 1/14/21).

    And if infections happen while monkeying with untried dosing intervals, it could undermine public confidence in the shots, worries Nicole Lurie, a strategic adviser for the Coalition for Epidemic Preparedness Innovations, an organization that funds vaccine development.

    It may feed a narrative that health officials didn’t fully follow the science as promised, Lurie says. If public confidence erodes to the point that people turn down vaccines, “then in the long run, you’re doing the nation a disservice.”

    It’s fine to provide a little wiggle room for people to get the second shot when circumstances — such as the winter storms in Texas, or other problems — prevent getting it on time, she says. But sticking as closely to the schedule as possible should be the policy.

    She and Wachter laid out their counterarguments on delaying doses February 17 in the New England Journal of Medicine. And while they came to different conclusions, they don’t necessarily disagree on the challenges, including the concern that some people will interpret the data to mean they don’t need a second shot at all. Says Wachter: “We have to decide if the uncertainty is too great to do what mathematically makes, to me, a ton of sense.”

    in Science News on February 26, 2021 01:00 PM.

  •

    Know Your Brain: Red Nucleus

    The red nuclei are colored red in this cross-section of the midbrain.


    Where is the red nucleus?

    The red nucleus is found in a region of the brainstem called the midbrain. There are actually two red nuclei—one on each side of the brainstem.

    The red nucleus can be subdivided into two structures that are generally distinct in function: the parvocellular red nucleus, which mainly contains small and medium-sized neurons, and the magnocellular red nucleus, which contains larger neurons. The red nucleus is recognizable immediately after dissection because it maintains a reddish coloring. This coloring is thought to be due to iron pigments found in the cells of the nucleus.

    What is the red nucleus and what does it do?

    As mentioned above, the red nucleus can be subdivided into two structures with separate functions: the parvocellular red nucleus and the magnocellular red nucleus. In the human brain, most of the red nucleus is made up of the parvocellular red nucleus, or RNp; the magnocellular red nucleus (RNm) is not thought to play a major role in the adult human brain. In four-legged mammals (e.g., cats, mice), however, the RNm is a more prominent structure—in both size and importance.

    Neurons from the cerebellum project to the RNm, and RNm neurons leave the red nucleus and form the rubrospinal tract, which descends in the spinal cord. In animals that walk on four legs, this pathway is activated around the time of voluntary movements; it seems to play an important role in walking, avoiding obstacles, and making coordinated paw movements. RNm neurons, however, also respond to sensory stimulation, and may provide sensory feedback to the cerebellum to help guide movements and maintain postural stability.

    In primates that mainly walk on two legs (including humans), the RNm is not thought to play a large role in walking and maintaining postural stability, as other tracts (e.g., the corticospinal tract) take over such functions. The RNm, however, does appear to be involved in controlling hand movements in humans and other primates. Interestingly, the RNm is more prominent in the human fetus and newborn, but regresses as a child ages, which may have to do with the development of the corticospinal tract and the ability to walk on two legs.

    Despite its relatively greater import in the human brain, the RNp is poorly understood, as its diminished presence in other animals makes it more difficult to study using an animal model. Neurons from motor areas in the prefrontal cortex and premotor cortex, as well as neurons from nuclei in the cerebellum known as the deep cerebellar nuclei, extend to the RNp. There is also a collection of neurons that leave the RNp and travel to the inferior olivary nucleus, which communicates with the cerebellum and is thought to be involved in the control of movement. A number of proposed functions have been attributed to these connections between the RNp, cerebellum, and inferior olivary nucleus, such as movement learning, the acquisition of reflexes, and the detection of errors in movements. But the precise function of these pathways—and thus the RNp’s role in them—is still not clear.

    Several studies have found the red nucleus to play a role in pain sensation as well as analgesia. The latter might be due to connections between the red nucleus and regions like the periaqueductal gray and raphe nuclei, which are part of a natural pain-inhibiting system in the brain.

    In terms of pathology, dysfunction in the human red nucleus has been linked to the development of tremors, and is being investigated as playing a potential role in Parkinson’s disease. Damage to the red nucleus has also been associated with a number of other problems with movement and muscle tone.

    References (in addition to linked text above):

    Basile GA, Quartu M, Bertino S, Serra MP, Boi M, Bramanti A, Anastasi GP, Milardi D, Cacciola A. Red nucleus structure and function: from anatomy to clinical neurosciences. Brain Struct Funct. 2021 Jan;226(1):69-91. doi: 10.1007/s00429-020-02171-x. Epub 2020 Nov 12. PMID: 33180142; PMCID: PMC7817566.

    Vadhan J, M Das J. Neuroanatomy, Red Nucleus. 2020 Jul 31. In: StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2020 Jan–. PMID: 31869092.

    in Neuroscientifically Challenged on February 26, 2021 11:26 AM.

  •

    Solar storms can wreak havoc. We need better space weather forecasts

    Since December 2019, the sun has been moving into a busier part of its cycle, when increasingly intense pulses of energy can shoot out in all directions. Some of these large bursts of charged particles head right toward Earth. Without a good way to anticipate these solar storms, we’re vulnerable. A big one could take out a swath of our communication systems and power grids before we even knew what hit us.

    A recent near miss occurred in the summer of 2012. A giant solar storm hurled a radiation-packed blob in Earth’s direction at more than 9 million kilometers per hour. The potentially debilitating burst quickly traversed the nearly 150 million kilometers toward our planet, and would have hit Earth had it come just a week earlier. Scientists learned about it after the fact, only because it struck a NASA satellite designed to watch for this kind of space weather.

    That 2012 storm was the most intense researchers have measured since 1859. When a powerful storm hit the Northern Hemisphere in September 1859, people were not so lucky. Many telegraph systems throughout Europe and North America failed, and the electrified lines shocked some telegraph operators. It came to be known as the Carrington Event, named after British astronomer Richard Carrington, who witnessed intensely bright patches of light in the sky and recorded what he saw.

    The world has moved way beyond telegraph systems. A Carrington-level impact today would knock out satellites, disrupting GPS, mobile phone networks and internet connections. Banking systems, aviation, trains and traffic signals would take a hit as well. Damaged power grids would take months or more to repair.

    Especially now, during a pandemic that has many of us relying on Zoom and other video-communications programs to work and attend school, it’s hard to imagine the widespread upheaval such an event would create. In a worst-case scenario conceived before the pandemic, researchers estimated the economic toll in the United States could reach trillions of dollars, according to a 2017 review in Risk Analysis.

    To avoid such destruction, in October then-President Donald Trump signed a bill that will support research to produce better space weather forecasts and assess possible impacts, and enable better coordination among agencies like NASA and the National Oceanic and Atmospheric Administration.

    “We understand a little bit about how these solar storms form, but we can’t predict [them] well,” says atmospheric and space scientist Aaron Ridley of the University of Michigan in Ann Arbor. Just as scientists know how to map the likely path of tornadoes and hurricanes, Ridley hopes to see the same capabilities for predicting space weather.

    The ideal scenario is to get warnings well before a storm disables satellites or makes landfall, and possibly even before the sun sends charged particles in our direction. With advance warning, utilities and governments could power down the grids and move satellites out of harm’s way.

    Ridley is part of a U.S. collaboration creating simulations of solar storms to help scientists quickly and accurately forecast where the storms will go, how intense they will be and when they might affect important satellites and power grids on Earth. Considering the havoc an extreme solar storm could wreak, many scientists and governments want to develop better forecasts as soon as possible.

    Ebbs and flows

    When scientists talk about space weather, they’re usually referring to two things: the solar wind, a constant stream of charged particles flowing away from the sun, and coronal mass ejections, huge outbursts of charged particles, or plasma, blown out from the sun’s outer layers (SN Online: 3/7/19). Some other phenomena, like high-energy particles called cosmic rays, also count as space weather, but they don’t cause much concern.

    Coronal mass ejections, or CMEs, the most threatening kind of solar storms, aren’t always harmful — they generate dazzling auroras near the poles, after all. But considering the risks of a storm shutting down key military and commercial satellites or harming the health of astronauts in orbit, it’s understandable that scientists and governments are concerned.

    Astronomers have been peering at our solar companion for centuries. In the 17th century, Galileo was among the first to spy sunspots, slightly cooler areas on the sun’s surface with strong magnetic fields that are often a precursor to more intense solar activity. His successors later noticed that sunspots often produce bursts of radiation called solar flares. The complex, shifting magnetic field of the sun also sometimes makes filaments or loops of plasma thousands of kilometers across erupt from the sun’s outer layers. These kinds of solar eruptions can generate CMEs.

    “The sun’s magnetic field lines can get complicated and twisted up like taffy in certain regions,” says Mary Hudson, a physicist at Dartmouth College. Those lines can break like a rubber band and launch a big chunk of corona into interplanetary space.

    It was 19th century German astronomer Samuel Heinrich Schwabe who realized that such solar activity ebbs and flows during 11-year cycles. This happens because the sun’s magnetic field completely flips every 11 years. The most recent sun cycle ended in December 2019, and we’re emerging from the nadir of sun activity while heading toward the maximum of cycle 25 (astronomers started numbering solar cycles in the 19th century). Solar storms, particularly the dangerous CMEs, are now becoming more frequent and intense, and should peak between 2024 and 2026.

    Solar storms develop from the sun’s complex magnetic field. The sun rotates faster at its equator than at its poles, and since it’s not a solid sphere, its magnetic field constantly roils and swirls around. At the same time, heat from the sun’s interior rises to the surface, with charged particles bringing new magnetic fields with them. The most intense CMEs usually come from the most vigorous period in a particularly active solar cycle, but there’s a lot of variation. The 1859 CME originated from a fairly modest solar cycle, Hudson points out.

    A CME has multiple components. If the CME is on a trajectory toward Earth, the first thing to arrive — just eight minutes after it leaves the sun — is the electromagnetic radiation, which moves at the speed of light. CMEs often produce a shock wave that accelerates protons to extremely fast speeds, and those arrive within 20 minutes of the light. Such energetic particles can damage the electronics or solar cells of satellites in high orbits. Those particles could also harm any astronauts outside of Earth’s protective magnetic field, including any on the moon. A crew on board the International Space Station, inside Earth’s magnetic field, however, would most likely be safe.

    But a CME’s biggest threat — its giant cloud of plasma, which can be millions of kilometers wide — typically takes between one and three days to reach our planet, depending on how fast the sun propelled the shotgun blast of particles toward us. Earth’s magnetic field, our first defense against space weather and space radiation, can protect us from only so much. Satellites and ground-based observations have shown that a CME’s charged particles interact with and distort the magnetic field. Those interactions can have two important effects: producing more intense electric currents in the upper atmosphere and shifting these stronger currents away from the poles to places with more people and more infrastructure, Ridley says. With an extremely powerful storm, it’s these potentially massive currents that put satellites and power grids at risk.
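
    The arrival times above follow from simple distance-over-speed arithmetic. A minimal sketch, using the article’s round numbers (the roughly 150-million-kilometer Sun–Earth distance and the 9-million-km/h speed of the 2012 blob; both are approximations):

    ```python
    # Back-of-envelope arrival times for different parts of a CME.
    # Values are the article's round numbers, not precise orbital data.

    SUN_EARTH_KM = 150e6        # average Sun-Earth distance, ~150 million km
    LIGHT_SPEED_KM_S = 299_792  # speed of light in km/s

    # Electromagnetic radiation arrives at light speed.
    light_minutes = SUN_EARTH_KM / LIGHT_SPEED_KM_S / 60
    print(f"radiation: {light_minutes:.1f} min")   # ~8.3 minutes

    # The 2012 plasma blob traveled at more than 9 million km/h,
    # far faster than a typical CME cloud.
    blob_hours = SUN_EARTH_KM / 9e6
    print(f"2012 blob: {blob_hours:.1f} h")        # ~16.7 hours
    ```

    The 2012 storm’s roughly 17-hour transit is what made it exceptional; slower ejections stretch that trip out to the one-to-three-day range quoted above.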

    A bright cloud of particles blew out from the sun in 2013. Activity in the current solar cycle is expected to peak in 2025. (SDO/Goddard/NASA/Flickr)

    Anyone who depends on long-distance radio signals or telecommunications might have to do without them until the storm blows over and damaged satellites are repaired or replaced. A powerful storm can disturb airplanes in flight, too, as pilots lose contact with air traffic controllers. While these are temporary effects, typically lasting up to a day, impacts on the electrical grids could be worse.

    A massive CME could suddenly and unexpectedly drive currents of kiloamps rather than the usual amps through power grid wires on Earth, overwhelming transformers and making them melt or explode. The entire province of Quebec, with nearly 7 million people, suffered a power blackout that lasted more than nine hours on March 13, 1989, thanks to such a CME during a particularly active solar cycle. The CME affected New England and New York, too. Had electricity grid operators known what was coming, they could have reduced power flow on lines and interconnections in the power grid and set up backup generators where needed.

    Early warning

    But planners need more of a heads-up than they get today. Perhaps within the next decade, improved computer modeling and new space weather monitoring capabilities will enable scientists to predict solar storms and their likely impacts more accurately and earlier, says physicist Thomas Berger, executive director of the Space Weather Technology, Research and Education Center at the University of Colorado Boulder.

    Space meteorologists classify solar storms, based on disturbances to Earth’s magnetic field, on a five-level scale, much like hurricanes. But unlike those tropical storms, the likely arrival of a solar storm can’t be pinned down with any precision using available satellites. For storms brewing on Earth, the National Weather Service has access to constantly updated data; space weather data are too sparse to be very useful, with too few spacecraft positioned to monitor storms and report back.

    Two U.S. satellites that monitor space weather are NASA’s ACE spacecraft, which dates from the 1990s and should continue to collect data for a few more years, and NOAA’s DSCOVR, which was designed at a similar time but not launched until 2015. Both orbit about 1.5 million kilometers above Earth — which seems far but is barely upstream of our planet from a solar storm’s perspective. The two satellites can detect and measure a solar storm only when its impact is imminent: 15 to 45 minutes away. That’s more akin to “nowcasting” than forecasting, offering little more than a warning to brace for impact.

    “That’s one of the grand challenges of space weather: to predict the magnetic field of a CME long before it gets [here] so that you can prepare for the incoming storm,” Berger says. But aging satellites like SOHO, a satellite launched by NASA and the European Space Agency in 1995, plus ACE and DSCOVR monitor only a limited range of directions that don’t include the sun’s poles, leaving a big gap in observations, he says.

    Ideally, scientists want to be able to forecast a solar storm before it’s blown out into space. That would give enough lead time — more than a day — for power grid operators to protect transformers from power surges, and satellites and astronauts could move out of harm’s way if possible.

    That requires gathering more data, particularly from the sun’s outer layers, plus better estimating when a CME will burst forth and whether to expect it to arrive with a bang or a whimper. To aid such research, NOAA scientists will outfit their next space weather satellite, scheduled to launch in early 2025, with a coronagraph, an instrument used for studying the outermost part of the sun’s atmosphere, the corona, while blocking most of the sun’s light, which would otherwise blind its view.

    A second major improvement could come just two years later, in 2027, with the launch of ESA’s Lagrange mission. It will be the first space weather mission to launch one of its spacecraft to a unique spot: 60 degrees behind Earth in its orbit around the sun. Once in position, the spacecraft will be able to see the surface of the sun from the side before the face of the sun has rotated and pointed in Earth’s direction, says Juha-Pekka Luntama, head of ESA’s Space Weather Office.

    That way, Lagrange will be able to monitor an active, flaring area of the sun days earlier than other spacecraft, getting a fix on a new solar storm’s speed and direction sooner to allow scientists to make a more precise forecast. With these new satellites, there will be more spacecraft watching for incoming space weather from different spots, giving scientists more data to make forecasts.
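
    The extra lead time from the L5 vantage point can be estimated from the sun’s rotation. A rough sketch, assuming the sun’s approximately 27-day apparent rotation period (an approximation, since the sun rotates differentially, faster at its equator than at its poles):

    ```python
    # Rough estimate of how many days earlier an L5 spacecraft sees an active
    # solar region than Earth does. The 27-day apparent rotation period is an
    # approximation; the sun rotates differentially by latitude.

    SOLAR_ROTATION_DAYS = 27   # apparent (synodic) rotation period from Earth
    L5_OFFSET_DEG = 60         # the L5 spacecraft trails Earth by 60 degrees

    lead_days = L5_OFFSET_DEG / 360 * SOLAR_ROTATION_DAYS
    print(f"~{lead_days:.1f} days of extra warning")   # ~4.5 days
    ```

    That several-day head start is what would let forecasters watch an active, flaring region before it rotates around to face Earth.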

    The European Space Agency’s upcoming Lagrange mission will monitor the sun with spacecraft at “Lagrange points” L1 and L5, two locations in orbit where the combined gravitational pull of the Earth and sun helps objects in space stay in position. (WMAP Science Team/NASA)
    Lagrange will be the first mission with a satellite (illustrated) at L5, to monitor the sun from the side to try to spot Earth-bound coronal mass ejections much earlier. (WMAP Science Team/NASA)

    Meanwhile, Berger, Ridley and colleagues are focused on developing better computer simulations and models of the behavior of the sun’s corona and the ramifications of CMEs on Earth. Ridley and his team are creating a new software platform that allows researchers anywhere to quickly update models of the upper atmosphere affected by space weather. Ridley’s group is also modeling how a CME shakes our planet’s magnetic field and releases charged particles toward the land below.

    Berger also collaborates with other researchers on modeling and simulating Earth’s upper atmosphere to better predict how solar storms affect its density. When a storm hits, it compresses the magnetic field, which can change the density of the outer layers of Earth’s atmosphere and affect how much drag satellites have to battle to stay in orbit.

    Satellite safety

    There have been a few cases of satellites damaged by solar storms. The Japanese ADEOS-II satellite stopped functioning in 2003, following a period of intense outbursts of energy from the sun. And the Solar Maximum Mission satellite appeared to have been dragged into lower orbit — and eventually burned up in the atmosphere — following the same 1989 solar storm that left Quebec in the dark.

    Satellites affected by solar storms could be at risk of crashing into each other or space debris, too. With mega-constellations of satellites like SpaceX’s being launched by the hundreds (SN: 3/28/20, p. 24), and with tens of thousands of satellites and bits of space flotsam already in crowded orbits, the risks are real of something drifting into the path of something else. Any space crash will surely create more space junk, too, tossing out debris that also puts spacecraft at risk.

    These are all strong motivators for Ridley, Berger and colleagues to study how storm-driven drag works. The U.S. military tracks satellites and debris and predicts where they’ll likely be in the future, but all those calculations are worthless without knowing the effects of solar storms, says Boris Krämer, an aerospace engineer at the University of California, San Diego who collaborates with Ridley. “To put satellites on trajectories so that they avoid collisions, you have to know space weather,” Krämer says.

    It takes time to create simulations estimating the drag on a single satellite, and current models run on powerful supercomputers. If a satellite needs to use its onboard computer to make those computations on the fly, researchers must develop models that are sufficiently accurate yet run much faster and with far less energy.

    New data and new models probably won’t be online in time for the upcoming solar storm season, but they should be in place for solar cycle 26 in the 2030s. Perhaps by then, scientists will be able to give earlier red alerts to warn of an incoming storm, giving more time to move satellites, buttress transformers and stave off the worst.

    The goal of improving space weather forecasts has drawn broad federal government support and interest from industry, including Lockheed Martin, because of the threats to important satellites, including the 31 that constitute the U.S. GPS network.

    The growing interest in space weather led to the 2020 law, known as the Promoting Research and Observations of Space Weather to Improve the Forecasting of Tomorrow Act, or PROSWIFT. And the National Science Foundation and NASA have thrown support behind space weather research programs like Berger’s and Ridley’s. For instance, Ridley, Krämer and their collaborators recently received $3.1 million in NSF grants to develop new space weather computer simulations and software, among other things.

    Our reliance on technology in space comes with increasing vulnerabilities. Some space scientists speculate that we’ve failed to find alien civilizations because some of those civilizations were wiped out by the very active stars they orbit, which could strip a once-habitable world’s atmosphere and expose life on the surface to harmful stellar radiation and space weather. Our sun is not as dangerous as many other stars that have more frequent and intense magnetic activity, but it has the potential to be perilous to our way of life.

    “Globally, we have to take space weather seriously and prepare ourselves. We don’t want to wake up one day, and all our infrastructure is down,” ESA’s Luntama says. With key satellites and power grids suddenly wrecked, we wouldn’t even be able to use our phones to call for help.

    in Science News on February 26, 2021 11:00 AM.

  •

    Meet the postdoc who says he’s been trying to retract his own paper since 2016

    In August 2015, bioengineers gathered in Milan, Italy, for the 37th annual conference of the IEEE Engineering in Medicine and Biology Society. About 2,000 papers were accepted and published online for the conference. But an author of one of those articles says he’s been trying to retract it since 2016. As a PhD student at …

    in Retraction watch on February 26, 2021 11:00 AM.

  •

    Preventing and controlling water pipe smoking

    Water pipe smoking as a public health crisis

    Water pipe smoking (WPS) accounts for a significant and growing share of tobacco use globally. A culture-based method of tobacco use, WPS has experienced a worldwide re-emergence since 1990 and is regaining popularity across population groups, especially among school and university students. It is also prevalent among highly educated groups. Although WPS is most common in Asia, particularly the Middle East, and in Africa, it has become a rapidly emerging problem on other continents, including Europe and North and South America.

    WP business has remained largely unregulated and uncontrolled, which may result in the increasing prevalence of WPS.

    It has been shown that WPS can be more addictive than cigarette smoking, and it has a huge negative impact on population health, health costs and countries’ gross domestic product. Using deceptive advertising, many cafes and restaurants offer WP services alongside their conventional offerings to earn more profit and lure more customers. Flavored or psychotropic WP tobacco products, the proximity of WP cafes to public settings such as schools, residential areas and sports clubs, tempting decor, study spaces for students, live music, a variety of games and gambling, and the chance to watch movies and live sports matches are all factors that attract children and adolescents to WP cafes.

    The importance of our study

    Despite concerns about the consequences of WPS and nearly three decades of control measures, the prevalence of WPS has increased worldwide. Because of the unique, multi-component nature of the water pipe, little is known about preventing and controlling WPS, so tailored interventions may be required. Accordingly, our study published in BMC Public Health aimed to identify management interventions at the international and national levels for preventing and controlling water pipe smoking.

    Our study

    We conducted a systematic literature review. Studies evaluating at least one intervention for preventing or controlling WPS were included; two independent investigators then performed the quality assessment and data extraction for eligible studies.

    After removing duplicates, 2,228 of the 4,343 retrieved records remained, and 38 studies were selected as the main corpus of the present study. The selected studies covered 19 countries: the United States (13.15%), the United Kingdom (7.89%), Germany (5.26%), Iran (5.26%), Egypt (5.26%), and Malaysia, India, Denmark, Pakistan, Qatar, Jordan, Lebanon, Syria, Turkey, Bahrain, Israel, the United Arab Emirates, Saudi Arabia and Switzerland (2.63% each). Study designs included cross-sectional (31.57%), quasi-experimental (15.78%) and qualitative (23.68%) types.

    Interventions identified through content analysis were discussed and classified into relevant categories. We identified 27 interventions grouped into four main categories: preventive interventions (5; 18.51%), control interventions (8; 29.62%), and the enactment and implementation of legislation and policies for controlling WPS at the national (7; 25.92%) and international (7; 25.92%) levels. The interventions are shown in the following table.

    Table: Effective Interventions in Preventing and Controlling Water Pipe Smoking

    Study implications

    The current enforced legislations are old, unclear, incompatible with the needs of the adolescents and are not backed by rigorous evidence.

    In general, our findings indicate that the social and health crisis related to WPS has not received attention at high levels of policymaking. In addition, the WP industry is rapidly expanding without monitoring and control measures. Informing and empowering adolescents who have not yet tried smoking is a sensible intervention in this regard, and empowering and involving health students and professionals in WPS control programs can lead to promising results. There remains a paucity of evidence on strategies for controlling and preventing WPS, so further research is warranted.

    The post Preventing and controlling water pipe smoking appeared first on BMC Series blog.

    in BMC Series blog on February 26, 2021 07:33 AM.

  •

    COVID-19 vaccines may be ready for teens this summer

    Encouraging news about COVID-19 vaccines keeps coming. No unusual safety issues arose during the first month of vaccination, when 13.8 million doses of the Pfizer and Moderna vaccines were administered in the United States, the U.S. Centers for Disease Control and Prevention reported February 19. The vaccines also appear to slow the spread of the coronavirus (SN: 2/12/21).

    But the available data on COVID-19 vaccines — as well as access to them — centers almost entirely on adults. Most children aren’t yet authorized to receive the shots. An exception is 16- and 17-year-olds; last year, Pfizer expanded its adult trial to these older teens. They were included in Pfizer’s emergency use authorization in the United States, although few have actually been vaccinated, as the group isn’t prioritized to get the shots yet. The World Health Organization also recommended emergency use of this vaccine for 16- and 17-year-olds.

    The work to fill in the data gap on kids and COVID-19 vaccines is now gaining steam. Pfizer is testing their vaccine in adolescents as young as age 12. Moderna is currently recruiting for a clinical trial for 12- to 17-year-olds. And on February 12, AstraZeneca announced the start of a trial for their jab in children ages 6 to 17.

    As to when a COVID-19 vaccine might get the OK for use in adolescents in the United States, “I’d be optimistic for summer,” says infectious disease physician Emily Erbelding, who directs the Division of Microbiology and Infectious Diseases at the National Institute of Allergy and Infectious Diseases in Rockville, Md. It would most likely be Pfizer’s vaccine, as the company is the farthest along with testing in adolescents. Younger children will wait longer for a COVID-19 vaccine, with most trials not yet under way for those under 12 years old.

    Getting past the pandemic

    Proving the shots are safe and effective for children is a crucial first step to vaccinating this population and protecting kids’ health. Although severe illness from COVID-19 is much less common in children than adults, kids haven’t come out unscathed. More than 3 million cases of COVID-19 have been reported in children in the United States, according to the American Academy of Pediatrics. Black and Latino children have a disproportionate share of SARS-CoV-2 infections, researchers reported in Pediatrics in October 2020.

    There have also been more than 2,000 cases of multisystem inflammatory syndrome in children, or MIS-C (SN: 6/3/20), a rare but serious complication of a SARS-CoV-2 infection. The brunt of these cases has been borne by Black and Latino children. And indirect harms from the pandemic continue to mount, as disruptions to children’s education and social lives endanger their health.

    Furthermore, getting children immunized is part of how society gets past the pandemic. In the United States, there are around 73 million children. “In order to reach a level of herd immunity in our population where we can get rid of this virus, we’re going to need to vaccinate our kids,” says Kawsar Talaat, a vaccine researcher and infectious disease physician at Johns Hopkins University Bloomberg School of Public Health.
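
    The arithmetic behind that herd-immunity point can be sketched in a few lines. The reproduction number and the U.S. population figure below are illustrative assumptions, not numbers from the article; only the 73 million children figure is cited above.

```python
# Illustrative back-of-the-envelope numbers, not figures from the article:
# for a homogeneously mixing population, the classic herd-immunity
# threshold is 1 - 1/R0.
R0 = 3.0                      # assumed reproduction number
threshold = 1 - 1 / R0        # fraction of population that must be immune

US_POPULATION = 330e6         # approximate U.S. population (assumption)
CHILDREN = 73e6               # figure cited in the article

adult_fraction = 1 - CHILDREN / US_POPULATION
print(f"herd-immunity threshold: {threshold:.0%}")
print(f"share of population that is adult: {adult_fraction:.0%}")
```

    Under these assumed numbers, even vaccinating every adult covers only about 78 percent of the population; imperfect vaccine efficacy and more transmissible variants would push the effective threshold higher still, which is why children cannot be left out.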

    Along with establishing herd immunity, vaccinating children is necessary to curb the emergence of more variants (SN: 2/5/21). “If you fail to vaccinate a population … that means you’re still allowing the virus to go on to have new mutations” as it continues to spread, says pediatric infectious disease doctor Sharon Nachman at the Stony Brook University Renaissance School of Medicine in New York.

    Testing in children

    To pave the way for children to be vaccinated against COVID-19, the shots will be tested in this group to assess effectiveness, safety and dosing. The trials will proceed somewhat differently for younger individuals than for adults.

    To measure how well the shots worked for adults, clinical trials assessed whether the vaccines prevented symptomatic illness. That required tens of thousands of participants, so there would be enough cases of symptomatic COVID-19 to compare cases among those who had and had not gotten vaccinated and determine the vaccines’ efficacy (SN: 10/4/20).

    But to do this for children, trials “would have to even be larger,” says Erbelding, because symptoms are less likely to occur in children than in adults. Instead, researchers will look at how children’s immune systems respond to the vaccine, by measuring antibodies, for example. With data from adults’ immune responses as a guide, the trials can assess whether the vaccines work for children.

    The trials for children will also monitor vaccine safety. The fact that the shots have proven safe in adults — both in testing and post-vaccination monitoring — is a good sign, as there isn’t reason to expect wildly different reactions in children than in adults. Some researchers have wondered if MIS-C is a risk after vaccination, as children diagnosed with the syndrome have higher levels of antibodies than children with COVID-19. However, a similar form of this complication in adults, MIS-A, has not been reported in adults who have been immunized. “That should lend some reassurance,” says Erbelding.

    When figuring out dosing, the goal is “to see what gets you into the sweet spot of good immune response” and manageable side effects from the shot, says Nachman. Some of the expected side effects with COVID-19 vaccines are pain at the injection site, headache and fatigue. Adolescent trial participants are receiving the same dose as adults. But younger children are smaller, and their immune systems tend to respond really well to vaccines, so there may be a different “sweet spot” for them. Vaccine trials for children are designed to test smaller doses as needed as the trials move into younger age groups: 7 to 11, 2 to 6, and under 2 years old.

    It’s still a big question as to how many children might be able to be vaccinated before the start of the next school year. But it’s going to take getting everyone vaccinated to stop the virus so children “can go back to school in a normal fashion — without masks, without having to social distance, without all of the things that we’ve enacted in the last year,” says Talaat.

    in Science News on February 25, 2021 10:08 PM.


    A new laser-based random number generator is the fastest of its kind

    By normal standards, the design for a new laser is a total dud. Rather than producing a crisp, steady beam, the laser casts a fuzzy patch of light full of randomly flickering speckles of brightness. But to a team of physicists, the laser’s messy output is its greatest asset. The chaotic fluctuations in the laser’s light can be translated into 254 trillion random digits per second — more than 100 times faster than other laser-powered random number generators, researchers report in the Feb. 26 Science.

    “This is a marvelous step” toward more efficient random number generation, says Rajarshi Roy, a physicist at the University of Maryland in College Park who was not involved in the work.

    Random number generators are valuable tools in computing (SN: 5/27/16). They are used to create encryption keys that scramble private data, such as passwords and credit card numbers, so that information can travel securely over the internet. Computer simulations of complex systems, such as Earth’s climate or the stock market, also require many random numbers to properly capture chance occurrences that happen in real life.

    Lasers can generate random number sequences thanks to tiny, naturally occurring fluctuations in the light’s frequency over time. But using a laser beam to produce random numbers like that is sort of like repeatedly rolling a single die. To generate many strings of random digits from a single laser at once, physicist Hui Cao of Yale University and colleagues came up with a new design.

    In the team’s laser, light bounces between mirrors positioned at either end of an hourglass-shaped cavity before exiting the device. This irregular shape allows light waves of various frequencies to ricochet through the laser and overlap with each other. As a result, when the laser is shined on a surface, its light contains a constantly changing pattern of tiny pinpricks that brighten and dim randomly. The brightness at each spot in the pattern over time can be translated by a computer into a random series of ones and zeros.
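
    That last step can be illustrated with a minimal sketch of turning digitized intensity samples into bits. The extraction scheme here (keeping each sample’s least-significant bit) and the simulated samples are assumptions for illustration only; they stand in for camera data and may differ from the study’s actual encoding pipeline.

```python
import random

random.seed(0)

# Stand-in for digitized 8-bit intensity samples from one speckle spot;
# in the real device these come from a high-speed camera, not a PRNG.
intensities = [random.randrange(256) for _ in range(1000)]

# One simple extraction scheme (an illustrative choice, not the authors'
# actual pipeline): keep each sample's least-significant bit, where the
# fast physical fluctuations dominate.
bits = [x & 1 for x in intensities]

print(bits[:16])
print(sum(bits) / len(bits))  # hovers near 0.5 for unbiased output
```

    Real generators would also apply statistical tests and post-processing to remove any residual bias before the bits are used.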

    Cao and her colleagues pointed the laser at a high-speed camera, which measured light intensity at 254 spots across the beam about every trillionth of a second. But that camera tracked the laser light for only a couple of nanoseconds before its memory filled up, after which the data were uploaded to a computer to be encoded as 0s and 1s, says Daniel Gauthier, a physicist at Ohio State University who cowrote a commentary on the study in the same issue of Science. To work in the real world, this random number generator would need to be outfitted with light detectors that could send rapid-fire brightness measurements to computers in real time.

    in Science News on February 25, 2021 07:00 PM.


    A single male lyrebird can mimic the sound of an entire flock

    You might be able to do a mean celebrity impression or two, but can you imitate an entire film’s cast at the same time? A male superb lyrebird (Menura novaehollandiae) can. Well, almost. During courtship and even while mating, the birds pull off a similar feat, mimicking the calls and wingbeat noises of many bird species at once, a new study shows.

    The lyrebirds appear to be attempting to recreate the specific ecological soundscape associated with the arrival of a predator, researchers report February 25 in Current Biology. Why lyrebirds do this isn’t yet clear, but the finding is the first time that an individual bird has been observed mimicking the sounds of multiple bird species simultaneously. 

    The uncanny acoustic imitation of multispecies flocks adds a layer of complexity to the male lyrebird’s courtship song yet unseen in birds and raises questions about why its remarkable vocal mimicry skills, which include sounds like chainsaws and camera shutters, evolved in the first place.

    Superb lyrebirds — native to forested parts of southeastern Australia — have a flair for theatrics. The males have exceptionally long, showy tail feathers that are shaken extensively in elaborate mating dances (SN: 6/6/13). The musical accompaniment to the dance is predominantly a medley of greatest hits of the songs of other bird species, the function of which behavioral ecologist Anastasia Dalziell was studying via audio and video recordings of the rituals.

    “When you hear lyrebirds, you hear this very loud, very lyrical, dramatic delivery of mimicry of lots of different species of Australian birds,” says Dalziell, of the University of Wollongong in Australia. The strident calls of kookaburras and parrots are common targets. “But when I started to record [lyrebirds] in detail and for very long periods of time, I realized that every now and then they did something completely different.”

    The lyrebirds would transition into a shorter, quieter song made of fluttering noises and scattered chirping. Dalziell thought it sounded like the mixed species “mobbing flocks” she’d experienced in her fieldwork, where prey birds spot a predator and aggregate into a loud, aggressive contingent that attempts to drive away the threat. 

    When Dalziell and her colleagues analyzed the acoustic signatures of the lyrebirds’ strange songs and compared them to those of actual mobbing flocks, the similarities were striking. It was an accurate enough impression to fool other birds too. When the team played back the lyrebird’s fake flock noises in the wild, songbirds were attracted to the speakers to a similar degree as when the speakers played audio from a real mobbing flock. But the songbirds largely ignored the speakers when they played the lyrebird’s typical mimicked melodies.

    “Mimicking the calls and the wingbeats of a flock of small songbirds while they are mobbing predators is quite convincing to my human ears,” says Çağlar Akçay, a behavioral ecologist at Koç University in Istanbul not involved with this research. The findings, he says, are part of a “very cool study on a very cool animal.”

    While the lyrebirds could be mimicking a mobbing flock, they might not be doing so to mimic the mobbing intention itself, says Dominique Potvin, an ecologist at the University of the Sunshine Coast in Queensland, Australia, also not involved with this research. Replicating mobbing calls, she says, could just be a difficult vocal feat meant to impress a mate. 

    Some clues about why the males sing these mobbing songs might come from their timing. Video recordings reveal that the males make the calls right at the end of a courtship display and during mating. The flock mimicry may not be about wooing a female, but deceiving her into believing a predator is nearby, Dalziell says. Such a tactic by this “master illusionist” might enhance the chance of a successful mating by keeping the female close. 

    Akçay is skeptical of this explanation. “Intuitively, it seems that it wouldn’t be exactly adaptive for a female to return to an area — to copulate no less — if she is under the impression that there is a predator around,” he says. 

    The findings generate lots of new avenues for research, notes Dalziell. Determining if females react to the simulated mobbing flock similarly to the real version might be one way to test the deception idea.

    in Science News on February 25, 2021 04:15 PM.


    Using mathematics to solve practical problems? It’s elementary.

    Math sleuth Khongorzul Dorjgotov honed her problem-solving skills on Sherlock Holmes

    in Elsevier Connect on February 25, 2021 02:51 PM.


    How collaboration changes the world for people with rare diseases

    In recognition of #RareDiseaseDay, Elsevier is making select research articles and book chapters freely available for two months

    in Elsevier Connect on February 25, 2021 12:55 PM.


    Nanotech group that retracted Nature study pulls two more papers

    Nanotechnology researchers in Japan, who in November retracted a paper in Nature for lack of reproducibility, have retracted two more articles after what they said was a failure to replicate their findings. As we reported previously, the authors, led by Kenichiro Itami of Nagoya University, called for an investigation into the problems with their work, …

    in Retraction watch on February 25, 2021 12:00 PM.


    Our Brains “See” Beams Of Motion Emanating From People’s Faces Towards The Object Of Their Attention

    By guest blogger Sofia Deleniv

    Back in the 1970s, the developmental psychologist Jean Piaget discovered that, if you ask young children to explain the mechanics of vision as they understand them, their answers tend to reveal the exact same misconception: that the eyes emit some sort of immaterial substance into the environment and capture the sights of objects much like a projector.

    Although this belief declines with age, it is still surprisingly prevalent in adults. What’s more, so-called extramission theories of vision have a long-running history dating all the way back to antiquity. The Greek philosopher Empedocles was amongst the first to suggest in the 5th century BC that our ability to see must stem from an invisible fire beaming out of our eyes to interact with our surroundings. This view was subsequently endorsed by intellectual authorities like Ptolemy and Galen.

    Now, a duo of researchers behind a recent publication in PNAS think they might have found an explanation for the intuitive appeal of extramission theories. According to their paper, this worldview might just be a reflection of the mechanisms that play out within our brains when we follow other people’s gazes and track where they pay attention. This is because, to carry out this process, our brains actually conjure illusory beams of motion emanating from others’ faces — a quirk of evolution with interesting consequences.

    Scientists had already found signs that this was taking place. For instance, when people spend time looking at an image of someone gazing sideways at an object, they temporarily become slower at spotting subtle movement in that same direction. It is as if our brains treat the experience of seeing a pair of glancing eyes as an animated display of motion flowing towards the object of attention. This fatigues the brain region responsible for processing movement and renders it briefly “blind” to real motion along that same direction.

    Shortly thereafter, an fMRI brain scanning study further deepened scientists’ suspicions. It showed that watching someone gazing at an object activated the motion-sensitive regions of our brains in a pattern that was remarkably similar to that triggered by the experience of viewing actual motion.  

    Tantalising as this evidence was, it was virtually impossible to conclude that this motion signal played any causal role in how we track people’s attention. Sceptical researchers suggested that the signal might simply reflect study participants imagining people reaching for the objects of their interest.

    With this in mind, Arvid Guterstam and Michael Graziano — the Princeton University psychologists behind the recent PNAS publication — decided to set the causal record straight. They did this by testing whether meddling with the brain’s internal motion signal, using real (but subliminal) movement, could manipulate people’s perceptions of where someone was attending.

    In their study, participants looked at a screen where two faces, set against a background of randomly moving black dots, gazed in the general direction of an object located at the center. They were asked to indicate which face seemed to be paying more attention to that item. But unbeknownst to them, these faces were actually mirror images, so participants unsurprisingly selected one or the other face with equal probability.

    This changed when the researchers stealthily introduced a subtle signal in the background: a beam-shaped area emanating from one of the faces, in which 30% of dots were made to drift coherently in the direction of the object. The manipulation was subtle enough that only seven out of over 650 participants were actually aware of it. And yet, it had a significant impact on how they attributed attention. On the trials where the subliminal beam of motion was flowing away from the left face, participants were 6% more likely to judge that face as paying more attention to the object — a significant deviation from their baseline indecisiveness.
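
    A random-dot stimulus along these lines can be sketched as follows. Only the 30 percent coherence figure comes from the study; the dot count, the signal direction and the per-frame update rule are hypothetical choices for illustration.

```python
import math
import random

random.seed(1)

N_DOTS = 200          # number of dots in the beam region (assumed)
COHERENCE = 0.30      # fraction drifting toward the object, per the study
SIGNAL_ANGLE = 0.0    # direction toward the "object", in radians (assumed)

# Assign each dot a motion direction for this frame: a coherent dot
# drifts toward the object; a noise dot moves in a random direction.
angles = [
    SIGNAL_ANGLE if random.random() < COHERENCE
    else random.uniform(0, 2 * math.pi)
    for _ in range(N_DOTS)
]

coherent_share = sum(1 for a in angles if a == SIGNAL_ANGLE) / N_DOTS
print(coherent_share)  # close to 0.30 on average
```

    Because the coherent dots are a minority drowned in random motion, the drift stays below conscious detection while still biasing judgments.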

    Importantly, this motion-induced bias completely vanished when the dots moved in the opposite direction — i.e. from the object to the face. This suggests that our brains don’t fabricate motion signals for all interactions between objects in our surroundings. Rather, they reserve them for the social act of inferring the connection between someone’s gaze and the object of their attention. The byproduct of this, claim the study’s authors, is the inexplicable sensation that people’s eyes emit beams of immaterial substance. And even though this takes place outside the realm of our awareness, it might just be the driving force behind extramission theories’ historic and intuitive allure.

    One almost cannot help but wonder whether this quirk of the brain also shapes how we intuitively think about mythical creatures. Indeed, aliens and robots with so-called “optic weaponry” have a propensity to pop up in works of science fiction, while medieval and classical-era tales are packed with beasts with killer glares (look no further than the basilisk, or the Gorgon Medusa, whose lethal gaze spurred the Greek hero Perseus to bring a reflective shield to his assassination attempt on the monster).

    Of course, scientists will be hard pressed to prove that the historical and developmental appeal of extramission theories really is a reflection of the way our brains process gaze and attention. But these coincidences do raise interesting questions about the deep-seated role our fundamental neural mechanisms play in shaping our intuitions and imaginations — children and philosophers alike.

    Visual motion assists in social cognition

    Post written by Sofia Deleniv for the BPS Research Digest. Sofia is a scientific writer whose work has appeared in magazines such as New Scientist and Discover Magazine. She holds a BA in Experimental Psychology and a PhD in Neuroscience from the University of Oxford, where she investigated how the brain processes sensations using a mix of electrophysiology and computer modelling. Ever enthusiastic about anything from genes and brains to animal behaviour, Sofia’s Twitter feed features the occasional update on her written work and other exciting bits of science.

    At Research Digest we’re proud to showcase the expertise and writing talent of our community. Click here for more about our guest posts.

    in The British Psychological Society - Research Digest on February 25, 2021 11:21 AM.


    Having more friends may help female giraffes live longer

    Grown-up giraffes just aren’t huggy, cuddling, demonstrative animals. So it took identity-recognition software grinding through five years of data to reveal that female social life matters to survival.

    The more gregarious adult female giraffes in northern Tanzania’s Tarangire ecosystem tend to live longer, concludes wildlife biologist Monica Bond of the University of Zurich. Females that typically hung around at least three others of their kind were more likely to outlive those with fewer routine companions, Bond and colleagues report February 10 in Proceedings of the Royal Society B.

    In published science, the idea that giraffes even have social lives isn’t much more than a decade old, Bond says. (For the time being, Bond still treats giraffes as one species, Giraffa camelopardalis, until there’s more agreement on how many species there are.)  Adult males spend most of their time in solitary searches for females willing to mate, but females often hang around in groups.

    Compared with bats clustering under a bridge or baboons grooming pals’ fur, even the most sociable female giraffes often look as if they just happen to be milling around feeding in the same shrubbery. These “loose” groups, as Bond describes them, don’t snuggle or groom each other. A group mostly just browses in the same vicinity, then may fray apart and reconfigure with different members in the fission-fusion pattern seen in many animals, such as dolphins. Yet closer looks have found that females, in their low-drama way, prefer certain neighbors and seem to avoid certain others.

    Bond encountered giraffes in the wild in 2007 on her first trip to Africa. “I loved everything,” she says, but especially giraffes looking “as fanciful and weird as a unicorn.” To examine their lives, she and colleagues have now recorded sightings for nearly 3,000 individuals in the Tarangire region. Each giraffe’s spots are unique and remain identifiable throughout life, so photographs of the animals’ torsos make identification possible (SN: 10/2/18).

    Unlike Africa’s much-studied Serengeti National Park, the Tarangire region lets researchers watch animals across a wide range of human impacts. At the low-impact end, giraffes munch acacia trees in protected parkland or stroll under baobab trees that are “sticking up like a giant broccoli,” Bond says. Human influence becomes more common where the Maasai people tend their cattle, and the heaviest human footprints lie in the region’s bustling towns.

    Bond and her colleagues looked at how the kinds of plants eaten, soil types, closeness to humans and other factors affected females’ chances of surviving from one season to the next. The most important predictor of survival for 512 adult female wild giraffes was the number of other females typically found around them. She doesn’t think it’s just that loners or some straggly groups get more easily picked off by predators. In this region, lions don’t hunt in the big prides that can overwhelm adult prey and “a giraffe can kick a lion to death,” Bond says.

    Instead, Bond speculates that gregarious females might suffer less stress. Lions in the area stalk giraffe calves, for instance. In a bigger group, calves can cluster near each other in creches that a few females watch over, letting the other moms get a break. And when bigger female groups settle down at night, Bond sees some alert eyes among the drowsy ones that will get better rest.

    This analysis, however, comes from just the Tarangire region. “It would be great for the methods to be replicated in other ecosystems to see how it holds up,” says Arthur Muneza, the east Africa coordinator based in Nairobi, Kenya, for the Giraffe Conservation Foundation. A place where giraffes need to travel farther to find water or other vital resources, for instance, might make a difference in the results.

    in Science News on February 25, 2021 11:00 AM.


    Interactions between Brassica napus and Extracellular Fungal Pathogen Pyrenopeziza brassicae

    This week on the Journal Club session, Chinthani Karandeni Dewage will talk about the paper "Interactions between Brassica napus and Extracellular Fungal Pathogen Pyrenopeziza brassicae".


    Light leaf spot (Pyrenopeziza brassicae) is currently the most damaging foliar disease on winter oilseed rape (Brassica napus) in the UK. Deployment of cultivar resistance remains an important aspect of effective management of the disease. Nevertheless, the genetic basis of resistance remains poorly understood and no B. napus resistance (R) genes against P. brassicae have been cloned. In this talk, I will be presenting the findings from my research on host resistance against P. brassicae and specific interactions in this pathosystem. New possibilities offered by genomic approaches for rapid identification of R genes and pathogenicity determinants will also be discussed.


    Papers:

    • Chinthani Karandeni Dewage et al., "Interactions between Brassica napus and Extracellular Fungal Pathogen Pyrenopeziza brassicae", 2021, in preparation

    Date: 2021/02/25
    Time: 14:00
    Location: online

    in UH Biocomputation group on February 25, 2021 10:29 AM.


    Ardi may have been more chimplike than initially thought — or not

    One of the earliest known hominids, a 4.4-million-year-old partial skeleton of a female dubbed Ardi, had hands suited for climbing trees and swinging from branches, a new investigation suggests.

    These results, based on statistical comparisons of hand bones from fossil hominids and present-day primates, stoke an ongoing debate not only about how Ardi moved (SN: 2/22/19) but also what the last common ancestor of humans and chimps looked like (SN: 12/31/09).

    “The last common ancestor of humans and chimpanzees was more similar to chimps than to any other living primate,” says paleoanthropologist Thomas Prang of Texas A&M University in College Station. That ancestor, who lived roughly 7 million years ago, had hands designed much like those of tree-adept, knuckle-walking chimps and bonobos, he and his colleagues say. That hand design was retained by early hominids such as Ardi’s East African species, Ardipithecus ramidus, the team reports February 24 in Science Advances.

    Hand fossils showing a more humanlike design and grip first appeared in a later hominid, Australopithecus afarensis, Prang’s group reports. That fossil species, best known for Lucy’s partial skeleton, inhabited East Africa from around 3.9 million to 3 million years ago.

    Not until after Lucy’s kind had died out did bonobos diverge into a species apart from chimps, between 1.6 million and 2 million years ago (SN: 10/27/16). That makes the older chimp lineage a closer relative of early hominids. Still, Prang cautions, chimps have evolved over the past several million years and don’t represent “living fossils” that can be used as stand-ins for the ancient ancestor of humans and chimps.

    To assess which species possessed especially similar hands, Prang’s team analyzed the sizes and dimensions of four fossils from Ardi’s hands. The researchers then compared those measurements with comparable ones from other fossil hominids and from living primates.

    Using the same statistical approach, Prang has previously argued that Ar. ramidus had a foot that most closely resembles those of present-day chimps and gorillas. If so, then Ardi and her compatriots, who were close in size to chimps, most likely split their time between walking on all fours and moving through trees, he argued in April 2019 in eLife.

    In stark contrast to Prang’s conclusions, paleoanthropologists who discovered and studied Ardi’s remains contend that Ar. ramidus was built neither like chimps nor humans (SN: 9/9/15).

    Ardi’s finger bones look like those of chimps in some ways, says Morgan Chaney of Kent State University in Ohio. Chaney works with Kent State’s Owen Lovejoy, one of the scientists who originally studied Ardi’s remains. But the fossil female’s palm and forearm were much shorter than those of chimps, Chaney says. Combined with her distinctive wrists, her arms would have allowed only for grasping branches while moving slowly in trees.

    Ardi’s forearm structure was not that of a knuckle-walker, Chaney contends.

    Prang’s earlier analysis of Ardi’s feet also falls short of demonstrating a chimplike design, Chaney and colleagues argue January 10 in the Journal of Human Evolution. Ardi’s relatively long mid-foot, which is ill-suited to climbing, was not accounted for in Prang’s statistical analysis, the scientists say. Similarities in body mass between Ardi and chimps, rather than a close evolutionary relationship, at least partly explain the chimplike foot measurements that Prang cites.

    Based on her overall body design, Ardi walked upright, Chaney and colleagues argue. She combined a long lower pelvis that stabilized a straight-legged stance with an apelike, opposable big toe. Ardi climbed trees cautiously and rarely hung or swung from branches, those researchers hold.

    in Science News on February 24, 2021 07:00 PM.


    Darwin’s theory of agency: back to the future in evolutionary science?

    Was Darwin a one-trick pony? The scientists who most praise him typically cite just one of his ideas: natural selection. Do any know that his theory of evolution—like his take on psychology—presumed all creatures were agents? This fact has long been eclipsed by the “gene’s-eye view” of adaptation which gained a stranglehold over biology during the twentieth century—and hence over sociobiology and today’s evolutionary psychology. Are current efforts to revise this view—emphasising “new” topics like the flexibility of phenotypes (an organism’s living characteristics) and the importance of development in adaptation—simply rediscovering Darwin’s approach?

    How do members of a species come to differ from each other, thus furnishing the raw material for the struggle whose results Darwin subsumed under what he called the law of natural selection? In two ways, wrote Darwin in 1859. Creatures’ parents transmit different characteristics to them as starting-points for their lives. And, over their life-course, creatures develop their starting-characteristics in different ways, depending on how they respond to the various challenges they meet. In 1942, Julian Huxley and his colleagues recast natural selection as a mechanism, not a law. Imagine twinned roulette-wheels: each individual’s evolutionary fate resulted from a random genetic provision of phenotypic traits being pitted against a lottery of environmental events. This new language of genes, genomes, and (from 1953) DNA made Darwin look ignorant about how individual differences arose. Dying before genes or DNA were discovered, he never knew that mutations and chromosome-changes caused all variations (according to Huxley).

    To retain Darwin as the figurehead of this gene-first take on nature, he was retro-fitted with twentieth-century beliefs. If DNA “programmes” what creatures are and do, genetic processes drive evolution forward, not creaturely acts. Hence, Huxley asserted, the “great merit” of Darwin was his proof that living organisms never acted purposively: everything they did could be accounted for “on good mechanistic principles.” Likewise, even when Harvard biologist Richard Lewontin spoke out against the way the cult of genes blinds us to the active role organisms play in adaptation, his first target was Darwin, who, Lewontin said, portrayed organisms “as passive objects moulded by the external force of natural selection.”

    Lewontin’s claim that organisms help shape their own fates is now gaining traction. Few genes behave as they should according to Mendel. So, talk about genes “for” a phenotypic character is rarely appropriate. Even when we know everything we can about genes and environment, we still cannot predict what characteristics will emerge in an organism—proving phenotypes are independent sources of plasticity in the genesis of adaptations. Organisms help cause their own development and destiny, which means phenotypes themselves have evolutionary effects. Never mind why a fly hatches from its pupa to be small: its unusually big surface to volume ratio cannot help but shape its remaining life-history. This point underlines findings from biologists who study animal behaviour. Beavers build dams, chimps and crows make tools, wolves hunt better in packs. All such feats alter the evolutionary prospects of phenotypes.

    Darwin’s books herald all these emphases: the distinction between transmission and development when discussing inheritance; the “plasticity of organisation” in all creatures; and, importantly here, the tie between action and structure. Darwin saw nature as a theatre of agency. The roots of cabbage seedlings successfully improvised, after Darwin experimentally blocked them from plunging straight down into the earth. Earthworms “intelligently” grasped how best to tug his artificially-shaped “leaves” to plug their holes against the cold. And when newly-arrived finches were competing for food on the Galapagos Islands, it must have been the birds who first found the best new diets—not random genetic changes—who gained reproductive supremacy and consequently, over millennia, such new bodily adaptations as the skin-piercing beak of blood-sucking Vampire Finches.

    Actions produce reactions, The Origin of Species repeatedly reminds us. Which means an organism’s actions inevitably render it interdependent with its habitat, animate and inanimate. Such ties may be competitive or cooperative—“mutual aid” being the hallmark of evolution in “social animals” like us. Hence, when Darwin published his views on human agency in The Descent of Man—first published 150 years ago this month—and its sequel, The Expression of the Emotions (1872), social interdependency took pride of place. Darwin argued non-verbal expressions to be purposeless by-products of functional habits—we weep because we protectively close our eyes when screaming, incidentally squeezing our tear-glands. Such unintended side-effects only come to signify emotion because others “recognize” their meaning as linked to suffering. When I blush, my inbuilt capacity for reading expressions has rebounded, leading me to read in you how I imagine you to be reading me. Such “self-attention” underpins sexual display, plus such quintessentially human traits as language, culture, and conscience.

    Go back to what Darwin wrote about evolution, and you will hear him speaking from a place that the latest biology now renders prescient. Interdependencies of agency not only forge individual differences, and winnow the kernels of inter-generational success from the chaff of failure. They also compose Darwin’s unsung creation of the first naturalistic psychology.

    Featured image by Pat Josse

    The post Darwin’s theory of agency: back to the future in evolutionary science? appeared first on OUPblog.

    in OUPblog - Psychology and Neuroscience on February 24, 2021 01:30 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Dolphins’ Personality Traits Are Surprisingly Similar To Our Own

    By Emma Young

    We’re all familiar with the “Big Five” model of personality, which measures the traits of conscientiousness, agreeableness, extraversion, neuroticism, and openness. But what drove the evolution of these personality domains? And how do animal personalities compare with ours? Answers to the second question can help to answer the first. And now a major new study of personality in bottlenose dolphins, published in the Journal of Comparative Psychology, has found that in some key ways, dolphin personality is like ours; in others, though, it is not.

    The 134 dolphins studied by Blake Morton at the University of Hull and colleagues were all kept in captivity in 15 centres in eight different countries. Staff who had known the dolphins for at least a year rated each animal on a 49-item Dolphin Personality Questionnaire, adapted from a similar questionnaire for primates. Staff indicated the extent to which dolphins showed various traits, such as “exhibitionistic, flamboyant” (for dolphins who regularly try to attract visitors’ attention, say), “aggressive”, “sociable”, “erratic”, “playful”, “easygoing”, “suspicious”, and “stubborn”. Many of the dolphins were rated by two or more people, and the reasonable consistency between different people’s ratings for a single animal gave the team some confidence that these were accurate.

    The team then analysed the data, and found that the traits clustered into four main groups, or domains: openness (a tendency to be active and explore the environment), disagreeableness (a tendency to be aggressive, jealous, despotic and obstinate), sociability (being friendly towards other dolphins and people), and “directedness”, characterised by consistency in behaviour, boldness, and low emotional arousal (this was like a blend of high conscientiousness and low neuroticism). In contrast to findings for chimpanzees or gorillas, a domain of “dominance” did not emerge. This domain is notably absent from human personality models, too, perhaps because neither dolphin nor human social groups feature very strong hierarchies, while chimp groups do, the team suggests.

    As with orcas, California sea lions, mountain gorillas and bonobos, bottlenose dolphins don’t seem to have a domain of “neuroticism”, either. It’s been suggested that neuroticism is more likely to emerge in species that live in unpredictable environments (the theory is that neurotic individuals may be more prone to anxiety but also more vigilant when it comes to spotting dangers in their environment — and an unpredictable environment requires greater vigilance). But the data on dolphins, which, as the researchers note, evolved in relatively unpredictable environments, runs counter to this idea. The questionnaire didn’t include many neuroticism-related items, however. More work is now needed to explore the origins of neuroticism in other animals as well as us, the team observes.

    It has been suggested that conscientiousness evolves in species that need to pay close attention to other individuals — to carefully watch another using a tool, and so learn how to use it, for example. Though dolphins do learn to use tools from other dolphins, the team did not find evidence that conscientiousness, in and of itself, is a clear domain of their personality. Something like human conscientiousness has been observed in Asian elephants, however. These elephants have highly manipulatable and useful trunks, which are similar in some ways to our hands. Perhaps, then, the need to pay close attention to the use of hands (or trunks) in manipulating objects and caring for infants is important for the evolution of conscientiousness, the team suggests.

    What about the similarities between the personalities of dolphins and people, beyond the lack of a trait of dominance? Openness has been found in other intelligent species that also live in groups, such as chimpanzees, as well as humans (but not orangutans, which don’t live in stable social groups). The finding for dolphins fits with this pattern, and more work is now needed to explore the extent to which one or both of these two factors might have contributed to the evolution of this trait, the team writes. Agreeableness (or disagreeableness) is also shared, but as the traits that relate to sociability in dolphins are different to those seen in humans and other primates, more work is needed to explore this, too.

    It will be fascinating simply to know more about other animals’ personalities, of course, but the team certainly also hopes that it will provide some broader insights into how our own human personality evolved: “Further work on cetaceans, other aquatic mammals and other vertebrates will lead to a better understanding of the evolutionary forces that unite and divide species that inhabit the surface and depths of our planet,” the team concludes.

    Personality structure in bottlenose dolphins (Tursiops truncatus).

    Emma Young (@EmmaELYoung) is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on February 24, 2021 12:51 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    JAMA journal retracts, replaces paper linking nonionizing radiation to ADHD

    A JAMA journal is retracting and replacing a 2020 paper which linked exposure to nonionizing radiation — think cellphones, Bluetooth devices and microwave ovens — during pregnancy to the risk for attention deficit disorder later in childhood after a reader pointed out a critical error in the study.  The paper, “Association Between Maternal Exposure to … Continue reading JAMA journal retracts, replaces paper linking nonionizing radiation to ADHD

    in Retraction watch on February 24, 2021 11:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    PLOS and Yale University Announce Publishing Agreement

    Yale University posted the following announcement on its website on February 23, 2021

    Yale University Library has signed two innovative agreements that will allow Yale-affiliated authors to publish in any PLOS open access journal without paying article processing charges (APCs).

    PLOS is a non-profit, open access publisher of seven highly respected scientific and medical journals. Last year Yale authors published more than 100 articles in PLOS journals, with APCs of up to $3,000 per article. Effective Jan. 1, 2021, these author-paid APCs will be eliminated and replaced with annual fees paid by the library. The authors will maintain copyright ownership of their research.

    “Our goal is to make open access publishing a more viable option for more Yale researchers in science and medicine, and to support a publication model that will also encourage open access publishing beyond Yale,” said Barbara Rockenbach, the Stephen Gates ’68 University Librarian.

    Open access publishing has grown in popularity since the 1990s when peer-reviewed journals began publishing online with a traditional business model based on limited access and high subscription fees. Open access developed as an alternative to make new research quickly and widely available with financial support from those producing the research. However, financial support for APCs from academic departments, government, and other research funders has varied widely, with some authors having to pay from personal funds.

    The library agreements will eliminate APCs for Yale authors publishing in PLOS Biology, PLOS Medicine, PLOS One, PLOS Computational Biology, PLOS Pathogens, PLOS Genetics, and PLOS Neglected Tropical Diseases, as well as in any new PLOS publications launched during the contract term. The initial agreements are for three years and will be funded through Yale Library’s Collection Development department with support from the Cushing/Whitney Medical Library.

    “We are pleased that Yale Library can support this emerging, more sustainable model of open-access publishing,” Rockenbach said. “We are committed to facilitating equitable access to research in science and medicine–and the progress research fuels.”

    The post PLOS and Yale University Announce Publishing Agreement appeared first on The Official PLOS Blog.

    in The Official PLOS Blog on February 23, 2021 04:10 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Belief In Conspiracy Theories Is Associated With Lower Levels Of Critical Thinking

    By Emily Reynolds

    Over the last few years, conspiracy thinking seems to have mushroomed — most visibly perhaps in the US, where QAnon supporters stormed the Capitol. Elsewhere, across the world, coronavirus-related conspiracies have abounded; one large-scale survey conducted last year found that as many as one in five Britons believed the COVID-19 fatality rate may have been exaggerated.

    We already know that certain factors make individuals particularly prone to conspiratorial thinking — their level of education, for example, or a desire to feel special. And a new study, published in Applied Cognitive Psychology, has identified another facet of cognition linked to conspiratorial beliefs: critical thinking. Anthony Lantian from Université Paris Nanterre and colleagues find that the higher the level of critical thinking, the lower the belief in conspiracy theories, potentially offering a path out of conspiratorial thinking for those particularly susceptible.

    In the first study, 86 participants were asked to complete a conspiracy belief scale, indicating how much they agreed with statements such as “certain significant events have been the result of the activity of a small group who secretly manipulate world events”.

    They then took part in a critical thinking activity, reading a letter to the editor of a newspaper arguing that overnight parking should be banned in a particular area. Participants were asked to respond to each paragraph of the letter, assessing the relevance of the argument and evaluating the letter as whole, then wrote their responses in the form of a letter to the editor. These letters were assessed by judges on various measures of critical thinking, such as identifying good arguments, seeing other explanations, and avoiding over-generalisation. The team found that the higher participants scored on the critical thinking task, the less they believed in conspiracy theories.

    However, the relationship in the first study didn’t quite reach significance. So a second study replicated the first, this time with more participants; overall, 252 took part. As well as completing the conspiracy thinking measure and the letter evaluation task, participants also reported on their own critical thinking skills.

    Again, those with high levels of critical thinking ability were less likely to believe in conspiracy theories. Interestingly enough, however, conspiracy-minded participants didn’t seem aware of their critical thinking skills — both those with high and low levels of conspiracy thinking rated themselves highly on critical thinking. This makes sense when you consider the narratives of many conspiracy theory movements, which often frame themselves as true critical or free thinkers, seeing the light where others cannot.

    The results may be useful when designing interventions to combat conspiratorial thinking — but being careful about how these are framed would be crucial. If someone truly believes in a specific conspiracy theory, telling them they lack critical thinking skills is unlikely to help and may instead further entrench them in their beliefs, as researchers have highlighted in coverage of QAnon. The study is also correlational — we can’t say, based on these results, that lack of critical thinking is the reason people believe conspiracy theories.

    Further research could look at why critical thinking might protect against conspiratorial thinking, as well as explore degrees of conspiratorial thinking. When does somebody tip from “healthy scepticism”, as the team puts it, into full-on conspiracy? Where is the line between critically engaging with what the media or politicians tell us, for example, and labelling everything as “fake news”? Though media coverage may focus on the “true believers” of particular conspiracy theories, the journey to such a staunch position often begins somewhere far more reasonable; tracking this journey could provide valuable insight.

    Maybe a free thinker but not a critical one: High conspiracy belief is associated with low critical thinking ability

    Emily Reynolds is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on February 23, 2021 01:38 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    With a successful medical career, this researcher pursues his dream job

    Dr Seng Cheong Loke is designing an augmented reality app to help older people communicate with their families

    in Elsevier Connect on February 23, 2021 12:43 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    The Cardiff Coronatest

    "A consumable component, magnetic beads, used by the laboratory was supplied by Magnacell Ltd, a company that has Dr T Jurkowski as a director. Cardiff University did not tender for these consumables at the time".

    in For Better Science on February 23, 2021 12:02 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Former Cleveland Clinic researcher’s papers “more likely than not” included falsified images, says investigation

    A former researcher at the Cleveland Clinic who studied cardiac genetics has lost three papers for what an institutional investigation concluded was “more likely than not” a case of image falsification. As we reported last year, the work of Subha Sen, once a highly funded scientist at Cleveland Clinic but who left the institution in 2011, … Continue reading Former Cleveland Clinic researcher’s papers “more likely than not” included falsified images, says investigation

    in Retraction watch on February 23, 2021 11:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Temporal dispersion of spike rates from deconvolved calcium imaging data

    On Twitter, Richie Hakim asked whether the toolbox Cascade for spike inference (preprint, Github) induces temporal dispersion of the predicted spiking activity compared to ground truth. This kind of temporal dispersion had been observed in a study from last year (Wei et al., PLoS Comp Biol, 2020; also discussed in a previous blog post), suggesting that analyses based on raw or deconvolved calcium imaging data might falsely suggest continuous sequences of neuronal activations, while the true activity patterns are coming in discrete bouts.

    To approach this question, I used one of our 27 ground truth datasets (the one recorded for the original GCaMP6f paper). From all recordings in this dataset, I detected events that exceeded a certain ground truth spike rate. Next, I assigned these extracted events to three groups and systematically shifted the detected events of groups 1 and 3 back and forth by 0.5 seconds. Note that this is a short shift compared to the timescale investigated by the Wei et al. paper. This is what the ground truth looks like. It is clearly not a continuous sequence of activations:

    To evaluate whether the three-bout pattern would result in a continuous sequence after spike inference, I took the dF/F recordings associated with the above ground truth recordings and inferred the spike rates with Cascade’s global model for excitatory neurons (a pretrained network that ships with the toolbox). There is indeed some dispersion, owing to the difficulty of inferring spike rates from noisy data. But the three bouts are very clearly visible.

    This is even more apparent when plotting the average spike rate across neurons:

    Therefore, it can be concluded that there are conditions and existing datasets where discrete activity bouts can be clearly distinguished from sequential activations based on spike rates inferred with Cascade.

    This analysis was performed on neurons at a standardized noise level of 2% Hz^(-1/2) (see the preprint for a proper definition of the standardized noise level). This is a typical and very decent noise level for population calcium imaging. However, if we perform the same analysis on the same dataset but at a relatively high noise level of 8% Hz^(-1/2), the resulting predictions are indeed much more dispersed, since the dF/F patterns are too noisy to allow more precise predictions. The average spike rate still shows three peaks, but they now only ride on top of a more broadly distributed, seemingly persistent increase of the spike rate.

    If you want to play around with this analysis, with different noise levels or different datasets, you do not need to install anything: you can run this Colaboratory Notebook in your browser in less than 5 minutes and reproduce the above results.
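    The gist of the dispersion question can also be sketched with a small toy example (this is not the notebook code, and it does not call Cascade; it simply smears three discrete activity bouts with Gaussian kernels of two different widths, standing in for spike inference at a decent and at a high noise level):

```python
import numpy as np

np.random.seed(0)

def population_rate(event_times, t, sigma):
    """Average rate across events: each event is smeared with a Gaussian
    kernel of width sigma (seconds), mimicking the temporal dispersion
    that spike inference introduces when the dF/F data are noisy."""
    rate = np.zeros_like(t)
    for ev in event_times:
        rate += np.exp(-0.5 * ((t - ev) / sigma) ** 2)
    return rate / len(event_times)

def n_peaks(rate):
    """Count strict local maxima of the population rate."""
    return int(np.sum((rate[1:-1] > rate[:-2]) & (rate[1:-1] > rate[2:])))

# Three discrete bouts, groups 1 and 3 shifted by +/- 0.5 s as above
t = np.linspace(-2, 2, 2001)
events = [b + 0.01 * np.random.randn() for b in (-0.5, 0.0, 0.5)
          for _ in range(100)]

low_noise = population_rate(events, t, sigma=0.1)   # bouts stay separable
high_noise = population_rate(events, t, sigma=0.5)  # bouts merge
```

    With the narrow kernel the average rate keeps three distinct peaks; with the wide kernel they merge into one broad, seemingly persistent elevation, which is exactly the qualitative effect described above.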

    in Peter Rupprecht on February 23, 2021 01:08 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Epidemic dynamics and control: a few simple concepts

    In this text, I try to explain a few simple concepts about the dynamics and control of an epidemic. I am writing it, of course, with the Covid-19 epidemic in mind, but most of the concepts are general. As a preamble, I should make clear that I am neither a physician nor an epidemiologist, so I will say almost nothing about purely medical aspects or epidemiological subtleties, only about a few general notions. My specialty is the modelling of dynamical phenomena in biology, in my case in neuroscience. I therefore thank knowledgeable colleagues in advance for any clarifications, corrections, or relevant references.

    Some preliminary remarks on statistics

    Before starting the explanations, I would first like to urge the reader to be cautious when interpreting statistics, in particular mortality statistics. As I write these lines, an estimated 15% of the French population has been infected. In other words, the epidemic is in its early stages. Mortality statistics are therefore not a final tally of the epidemic, but statistics on an ongoing epidemic. Comparing them with the toll of past epidemics, or with other causes of mortality, therefore makes little sense (if anything, one could multiply these statistics by 5 to get an order of magnitude).

    Second, the mortality of a disease does not depend only on the virus. It also depends on the person who is ill. A major factor is age, which must be taken into account when comparing countries with very different demographics. To a first approximation, the risk of dying of Covid-19 increases with age in the same way as the risk of dying of other causes. One can read this as saying that the young are at low risk, or that every age group sees its risk of dying within the year increase by the same factor. Either way, with this kind of mortality profile, the mean or median age at death is not very informative, since it is the same with and without infection.

    Third, the mortality of a disease also depends on the care provided. Covid-19, in particular, is characterized by high rates of hospitalization and intensive care. The mortality observed in France so far is that of a healthcare system that is not saturated. It would naturally be much higher if this care could not be provided, that is, if the epidemic were not controlled, and mortality would also shift towards a younger population.

    Finally, it goes without saying that the severity of a disease is not reducible to its mortality. A hospital stay is generally not benign, and less severe cases can leave long-term sequelae.

    The reproduction number

    A virus is an entity that can replicate within a host and be transmitted to other hosts. Unlike a bacterium, which is a cell, a virus is not strictly speaking an organism: it depends entirely on its host for survival and reproduction. Consequently, to understand the dynamics of a viral epidemic, one must look at the number of infected hosts and at transmission between hosts.

    An important parameter is the reproduction number (R): the average number of people that one infected person will go on to infect. The epidemic grows if R > 1 and dies out if R < 1. At each transmission, the number of cases is multiplied by R. The reproduction number does not say how fast the epidemic develops, because that also depends on the incubation time and the period of contagiousness. It is in fact a parameter that is mainly useful for understanding how to control the epidemic. For example, if R = 2, the epidemic can be controlled by everyone halving their number of contacts.

    Since the number of cases is multiplied by a certain factor at each round of contamination, an epidemic typically has exponential dynamics: it is the number of digits that grows steadily. It takes as long to go from 10 to 100 cases as from 100 to 1,000 cases, or from 1,000 to 10,000 cases. The dynamics are thus explosive in nature. This is why the quantity to watch closely is not so much the number of cases as the reproduction number: as soon as R > 1, the number of cases can explode rapidly, and one must act fast.

    Naturally, this reasoning assumes that the population has not already been infected. If a proportion p of the population is immune (previously infected or vaccinated), then each infected person will on average infect R x (1-p) people. The epidemic therefore stops when this number falls below 1, that is, when p > 1 - 1/R. For example, with R = 3, the epidemic stops once two thirds of the population are immune.

    This also tells us the impact of the ongoing vaccination campaign on epidemic control. For example, as I write these lines (February 22, 2021), about 2% of the population has been vaccinated (4% have received a first dose). This reduces R by about 2% (for example, from 1.1 to 1.08). It is therefore clear that vaccination will not have a major effect on the overall dynamics for several months.

    It is important to understand that the reproduction number is not an intrinsic characteristic of the virus. It depends on the virus, but also on the host, who may be more or less contagious (think of the much-discussed « superspreaders »), on behaviour, and on protective measures (masks, for example). It is therefore not necessarily homogeneous across a population. For instance, R is probably higher among young working people than among the elderly.
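    The arithmetic of this section can be written down in a few lines (a minimal sketch of the multiplicative reasoning above, not an epidemiological model):

```python
def cases_after(n0, R, generations):
    """Case count after a number of transmission rounds:
    each round multiplies the number of cases by R."""
    return n0 * R ** generations

def herd_immunity_threshold(R):
    """Immune fraction p needed so that R * (1 - p) drops below 1,
    i.e. p > 1 - 1/R."""
    return 1 - 1 / R

def effective_R(R, p):
    """Reproduction number when a fraction p of the population is immune."""
    return R * (1 - p)
```

    For instance, herd_immunity_threshold(3) gives 2/3, and effective_R(1.1, 0.02) gives about 1.08, the two numbers quoted above.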

    Can part of the population be isolated?

    Is it possible to protect the most vulnerable part of the population by isolating it from the rest, without controlling the epidemic? This hypothesis has been put forward several times, although it has been heavily criticized in the scientific literature.

    It is fairly easy to see why this is a perilous idea. Isolating part of the population has an almost negligible impact on the reproduction number R, so the dynamics of the epidemic are unchanged. Keep in mind that controlling an epidemic so that it dies out simply requires making R < 1, so that the number of cases decreases exponentially. During the strict lockdown of March 2020, for instance, the reproduction number was about R = 0.7. That is enough for the epidemic to die out, but an infected person nevertheless continues to infect others. Consequently, unless the vulnerable could be isolated far more strictly than during the first lockdown (which seems doubtful, given that many of them depend on others for care), the epidemic in that population will track the epidemic in the general population, with the same dynamics in a somewhat attenuated form. In other words, it seems implausible that this strategy would work.

    Variants

    A virus can mutate: when it replicates within a host, errors are introduced, so that the properties of the virus change. This can affect the symptoms, or the contagiousness. Naturally, the more infected hosts there are, the more variants arise; this is therefore a phenomenon that emerges in uncontrolled epidemics.

    Suppose that R = 2 and that a variant has R = 4. Then at each transmission, the number of variant cases relative to the original virus is multiplied by 2. After 10 transmissions, the variant accounts for 99.9% of cases. This remains true if restrictive measures reduce transmission (for example, R = 2/3 and R = 4/3). After those 10 transmissions, the overall R is that of the variant. Consequently, it is the case count and the R of the more contagious variant that determine the number of cases and the dynamics in the medium term (that is, over a few weeks). The case count of the original virus, and even the overall case count, are essentially irrelevant.

    This means that the dynamics can be explosive even while the number of cases is falling. To know whether the epidemic is under control, one must look at the R of the most contagious variant. As I write, we are precisely in the situation where the original virus is still dominant with R < 1 while the variants have R > 1, which means that despite an overall decline in cases, we are in an explosive dynamic that will become apparent in the overall case count once the variants are dominant.
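    The takeover by a more contagious variant can be checked numerically (a toy calculation with the numbers used above, starting from equal case counts for the two viruses):

```python
def variant_fraction(n_orig, n_var, R_orig, R_var, generations):
    """Fraction of cases due to the variant after a number of transmission
    rounds, each round multiplying each strain's count by its own R."""
    for _ in range(generations):
        n_orig *= R_orig
        n_var *= R_var
    return n_var / (n_var + n_orig)

unrestricted = variant_fraction(1, 1, R_orig=2, R_var=4, generations=10)
restricted = variant_fraction(1, 1, R_orig=2/3, R_var=4/3, generations=10)
```

    Both calls return 1024/1025, about 99.9%: restrictions that scale both Rs by the same factor change the overall case count but not the variant's takeover.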

    Epidemic control

    Controlling the epidemic means making R < 1. In that situation, the number of cases decreases exponentially towards 0. The point is not necessarily to suppress all transmission, but to ensure, through a combination of measures, that R is smaller than 1. Going from R = 1.1 to R = 0.9 is enough to turn an explosive epidemic into one that is dying out.

    Naturally, the surest way to extinguish the epidemic is to prevent all social contact (a « lockdown »). But there are potentially many other measures, and ideally one combines several that are both effective and minimally restrictive, keeping lockdown as a last resort for when those measures have failed. The difficulty is that the impact of any given measure is not precisely known for a new virus.

    After a year of the Covid-19 epidemic, however, this knowledge is far from negligible. We know, for example, that mask-wearing is very effective (as was already suspected, given that this is a respiratory infection). We know that the virus spreads through droplets and aerosols. We also know that schools and communal dining facilities are important sites of contamination. That observation can lead to closing such places, but they could alternatively be made safer by installing ventilation and filters (an investment that could, incidentally, be synergistic with an energy-renovation programme).

    There are two broad types of measures. Global measures apply to the healthy and to carriers of the virus alike: mask-wearing, closing certain venues, working from home. The cost of these measures (in the broad sense, i.e. the economic cost and the constraints) is fixed. Targeted measures are triggered whenever there is a case: contact tracing, closing a school, a local lockdown. Their cost is proportional to the number of cases. The overall cost is therefore a combination of a fixed cost and a cost proportional to the number of cases. Consequently, it is always more expensive to control an epidemic when the number of cases is high (yet that seems to be the choice made in France after the second lockdown).

    The « plateau »

    An important remark: measures act on the progression of the epidemic (R), not directly on the number of cases. This means that if we can hold the case count steady at a high level (R = 1), then with the very same measures we can hold it steady at a low level. With a small extra effort (R = 0.9), we can suppress the epidemic.

    Aiming to stay just below hospital saturation is therefore not particularly useful, and is in fact a more costly choice than suppression. There is one justification for that objective: the strategy of « flattening the curve », which was suggested at the start of the epidemic. The idea is to maximize the number of people infected so as to immunize the whole population quickly. Now that a vaccine exists, this strategy no longer makes much sense. Even without a vaccine, infecting the entire population without saturating hospitals would take several years, to say nothing of the mortality.

    Suppressing the epidemic

    As noted above, a weak epidemic is easier to control than a strong one, so a control strategy should aim not for an "acceptable" number of cases but for a reproduction number R < 1. In that regime, the number of cases decays exponentially. Once case numbers are very low, imported cases must be taken into account: over one contamination period, the number of cases no longer goes from n to R × n but from n to R × n + I, where I is the number of imported cases. The case count therefore stabilises at I/(1 − R) (for example, 3 times the number of imported cases if R = 2/3). To push case numbers down further, it then becomes important to prevent the importation of new cases (tests, quarantine, etc.).
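
    The stabilisation at I/(1 − R) is easy to check numerically; a minimal sketch, with hypothetical values R = 2/3 and I = 30 imported cases per contamination period:

```python
# Numerical check of the I/(1 - R) fixed point described above.

def iterate_cases(n0, R, I, steps):
    """Apply n -> R*n + I for `steps` contamination periods."""
    n = n0
    for _ in range(steps):
        n = R * n + I
    return n

R, I = 2 / 3, 30
fixed_point = I / (1 - R)                        # analytic limit: 90
final = iterate_cases(n0=10_000, R=R, I=I, steps=60)
print(round(fixed_point), round(final))          # both converge on 90 = 3 x I
```

    Starting from 10,000 cases or from 10, the iteration ends up at the same level: with R < 1, the long-run case count is set by importations, not by the starting point.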

    When the number of cases is very low, it becomes feasible to apply very thorough specific measures, i.e. for every single case. For each case, the person is isolated, and everyone who might also be a carrier is tested and isolated. Not only are the people potentially contaminated by the positive individual identified, but the source of the contamination is also sought. Indeed, if the epidemic is driven by superspreading events ("clusters"), it is more effective to trace back to the source of contamination and then follow the contact cases from there.

    At low circulation, since these additional tools for reducing transmission are available, it becomes possible to lift certain non-specific measures (for example general lockdown or other social restrictions, venue closures, even mask wearing). For the specific measures to have a large impact, the key point is that the majority of cases must be detected. That requires massive systematic testing, for instance using saliva tests, drive-through testing, pooled tests and temperature checks. It requires that positive individuals not be discouraged from getting tested and isolating (in particular, by maintaining their income). It also requires systematic isolation of suspected cases while they await results. In other words, to stand a chance of working, this strategy must be applied as systematically as possible: applying it to 10% of cases is of practically no use. That is why it only makes sense when virus circulation is low.

    It is important to observe that in this strategy, most of the cost and constraints fall on the testing apparatus, since tracing and isolation only occur when a case is detected, which ideally happens very rarely. While it demands some logistics, it is an economical strategy that places few constraints on the population.

    When to act?

    I have explained that maintaining a high level of cases is more costly and more restrictive than maintaining a low level. Maintaining a very low level of cases is even less costly and less restrictive, although it requires more organisation.

    Of course, to move from a high plateau to a low one, the epidemic must shrink, which means temporarily applying strong measures. If the epidemic is not under control (and, I repeat, that is the case whenever a variant is growing, R > 1, even if the overall number of cases is falling), these measures will have to be applied at some point. When should they be applied? Is it better to wait as long as possible before doing so?

    Clearly it never is, because the longer we wait, the higher the case count climbs, and so the longer the restrictive measures will have to remain in place before reaching the low-circulation target at which finer measures (tracing) can take over. This may seem counter-intuitive when the number of cases is falling, but it is true nonetheless, because the medium-term case count depends only on the number of cases of the most contagious variant, not on the overall count. So if the most contagious variant is expanding, waiting only lengthens the duration of restrictive measures.

    By how much? Suppose that the case count of the virus (the most contagious variant) doubles every week, and that restrictive measures halve the case count in one week. Then waiting one week before applying them lengthens the measures by one week (I insist: lengthens, not merely postpones). Under the more realistic assumption that the measures are somewhat less effective, each week of waiting adds a little more than a week to their duration.
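
    The arithmetic can be sketched as follows, with hypothetical numbers: unchecked, the dominant variant doubles weekly (×2); under restrictions, cases halve weekly (×0.5):

```python
import math

# Weeks of restrictions needed to bring cases below `target`,
# after first waiting `wait_weeks` weeks of unchecked growth.

def weeks_of_restrictions(n0, target, wait_weeks, growth=2.0, reduction=0.5):
    n = n0 * growth ** wait_weeks                    # cases grow while we wait
    return math.ceil(math.log(target / n) / math.log(reduction))

act_now = weeks_of_restrictions(n0=20_000, target=1_000, wait_weeks=0)
wait_3 = weeks_of_restrictions(n0=20_000, target=1_000, wait_weeks=3)
print(act_now, wait_3)  # 5 vs 8: each week of waiting adds a week of measures
```

    With a less effective reduction factor (say 0.6 instead of 0.5), each week of waiting adds more than a week, as stated above.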

    It is therefore always preferable to act as soon as R > 1, so as to act for as short a time as possible, rather than waiting for the case count to grow considerably. The only possible justification for waiting would be mass vaccination fast enough to promise a decline of the epidemic through immunisation, which is clearly not the case in the immediate term.

    Some relevant links:

    in Romain Brette on February 22, 2021 03:45 PM.

  •

    How can we increase adoption of open research practices?

    This blog was written by Iain Hrynaszkiewicz, Director of Open Research Solutions for PLOS.

    Researchers are satisfied with their ability to share their own research data but may struggle with accessing other researchers’ data – according to PLOS research released as a preprint this week. Therefore, to increase data sharing in a findable and accessible way, PLOS will focus on better integrating existing data repositories and promoting their benefits rather than creating new solutions. We also call on the scholarly publishing industry to improve journal data sharing policies to better support researchers’ needs.

    PLOS has long supported Open Science with our data sharing policy. Our authors are far more likely to provide information about publicly available data compared to journals with less stringent policies. But best practice for data sharing – use of data repositories – is observed in less than 30% of PLOS publications. To help us understand if there are opportunities for new solutions to help improve adoption of best practice, we built on previous research into the frequency of problems associated with sharing research data. We investigated the importance researchers attach to different tasks associated with data sharing, and researchers’ satisfaction with their ability to complete these tasks.

    Through a survey conducted in 2020, which received 728 completed responses, we found that tasks relating to research impact, funder policy compliance, and credit had the highest importance scores. Tasks associated with funder, journal, and institutional policy compliance – including preparation of Data Management Plans (DMPs) – received high satisfaction scores from researchers, on average.

    52% of respondents reuse research data but the average satisfaction score for obtaining data for reuse – such as accessing data from journal articles or making requests for data from other individuals – was relatively low. Tasks associated with sharing data were rated somewhat important and respondents were reasonably well satisfied with their ability to accomplish them.

    Figure: When we plot mean importance and satisfaction score, respondents were on average satisfied with their ability to complete the majority of tasks associated with Data Preparation, Data Publishing and Reuse of their own data but dissatisfied with their ability to complete tasks associated with Reuse of other researchers’ data. Tasks associated with meeting policy requirements are both important and well satisfied.

    What are the implications?

    We presume that researchers are unlikely to seek new solutions to a problem or task that they are satisfied with their ability to accomplish. This implies there are few opportunities for new solutions to meet researcher needs for data sharing – at least in our cohort, which consisted mostly of PLOS authors. PLOS – and other publishers – can likely meet these needs for data sharing by working to seamlessly integrate existing solutions that reduce the effort involved in some tasks, and focusing on advocacy and education around the benefits of sharing data in a Findable, Accessible, Interoperable and Reusable (FAIR) manner.

    The challenges that researchers have reusing data could be addressed in part by strengthening journal data sharing policies – such as only permitting “data available on request” when there are legal or ethical restrictions on sharing, and improving the links between articles and supporting datasets. Generic “data available on request” statements in publications, which are not permitted under PLOS’s policies, usually mean data will not be available.

    While our research revealed a “negative result” with respect to new solution opportunities, the results are informative for how PLOS can best meet known researcher needs. This includes more closely partnering with established data repositories and improving the linking of research data and publications. These are important parts of our plans to support adoption of Open Science in 2021 and beyond.

    Read the preprint here and access the survey dataset and survey instrument here.

    The post How can we increase adoption of open research practices? appeared first on The Official PLOS Blog.

    in The Official PLOS Blog on February 22, 2021 03:11 PM.

  •

    School Kids’ Memory Is Better For Material Delivered With Enthusiasm, Because It Grabs Their Attention

    By Emma Young

    Like countless other parents across the UK, I’m finding it pretty hard to maintain enthusiasm for my kids’ home-schooling lessons. Or muster it, for that matter. Yet we all know that when an instructor is enthusiastic, those sessions are more enjoyable — and we remember more. While this might be common knowledge, however, “the underlying mechanisms for the favourable effects of teacher enthusiasm are still largely unknown,” write Angelica Moè at the University of Padova, Italy, and her colleagues, in their new paper in the British Journal of Educational Psychology. The team therefore set out to better understand its power. And in a series of studies, they explored the idea that attention is key — that a more enthusiastic delivery grabs pupils’ attention more, which improves their memory for the material.

    In an initial study, German children aged 8 to 12 listened to a trained instructor read two brief texts, one of which was a description of the characteristics of dragonflies, the other a story about a farmer. The instructor read with either low enthusiasm (a monotone voice, few or no body movements, fixed facial expression, eyes fixed on the text) or high enthusiasm (exuberant movements, excited and varied vocal delivery, shining eyes oriented towards the children, varied facial expressions). The team found that kids in the high enthusiasm condition not only reported enjoying both texts more — and smiled more — but also spent more time looking at the instructor, suggesting they were more attentive.

    Next, a total of 54 Italian pupils aged 9 to 11 underwent a very similar procedure, except that some had to complete a task that required their attention at the same time. (They had to spot and circle pictures of bells, which were among other small pictures on an A4 sheet). For the kids who were not given this extra task — i.e. those whose attention wasn’t already tied up — those in the high enthusiasm group showed better recall of the texts when questioned afterwards than those in the low enthusiasm group. However, the experimenter’s enthusiasm level had no impact on the bell-circlers’ recall, “thus showing that attention is among the underlying mechanisms explaining the positive effect of enthusiasm on recall,” the researchers write.

    A final study using the same texts found that when a distracting task doesn’t require sustained attention — in this case, some kids were asked simply to touch corners of their desk in a clockwise direction, instead of circling bells — greater instructor enthusiasm did make for better later recall for the narrative text. This provides further evidence that an enthusiastic delivery style improves recall “only when a secondary task is not competing for the students’ attention”, the team writes.

    So what are the implications of these findings? Some people — and some teachers — are naturally more enthusiastic than others. But it is possible to learn to be more demonstrative and engaging, both physically and verbally. So perhaps there’s a case for arguing that more naturally reserved teachers might be encouraged to consciously try to be more enthusiastic — but not necessarily all the time.

    The results suggest that high enthusiasm is only important when the listeners aren’t simultaneously paying attention to something else. So while extra enthusiasm during a lecture could help students to enjoy the lecture more, pay more attention to the content and learn better, it would not help students engaged in a practical challenge, for example. “This is an important message for those who believe they should be enthusiastic at any cost, but who may suffer from a constant, effortful up-regulation of positive emotions,” the team notes. “Our results demonstrate that they may ‘economize’ their efforts of up-regulating their enthusiasm and do so only in situations where students are not engaged in tasks competing for their attention.”

    Economise the efforts of up-regulation… Well, perhaps that’s something I can try at home.

    Displayed enthusiasm attracts attention and improves recall

    Emma Young (@EmmaELYoung) is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on February 22, 2021 02:26 PM.

  •

    Neanderthals abused to support bully Alan Cooper

    The journal Science and the anthropology community manage an amazing feat of celebrating bullying, harassment and bad science while urinating upon Douglas Adams' grave.

    in For Better Science on February 22, 2021 01:40 PM.

  •

    “Serious non-compliance” prompts retraction of book on social justice in Hawai’i

    A publisher retracted a book last year after the home institution of one of the editors, the University of Hawai’i, “identified research protocol violations by two of the editors, which constitute Serious Non-Compliance.” The 2019 book, Voices of Social Justice and Diversity in a Hawai‘i Context, was edited by Amarjit Singh and Mike Devine, of …

    in Retraction watch on February 22, 2021 11:00 AM.

  •

    Weekend reads: An editorial board resigns over interference; what a manuscript rejection means; the scientific 1%

    Before we present this week’s Weekend Reads, a question: Do you enjoy our weekly roundup? If so, we could really use your help. Would you consider a tax-deductible donation to support Weekend Reads, and our daily work? Thanks in advance. The week at Retraction Watch featured: Exclusive: Ohio State researcher kept six-figure job for more than …

    in Retraction watch on February 20, 2021 02:21 PM.

  •

    Mind-Reading And Lucid Dreaming: The Week’s Best Psychology Links

    Our weekly round-up of the best psychology coverage from elsewhere on the web

    In a fascinating sleep study, researchers have managed to “talk” to people in lucid dreams. Dreamers were given simple yes/no questions or asked to do basic arithmetic, and had to respond by moving their eyes and facial muscles. Several participants were able to correctly answer the questions in their sleep. The researchers hope that this new method will ultimately improve our understanding of sleep and consciousness, writes Claire Cameron at Inverse.


    Psychologists have found problems in a number of papers linking violence in movies and video games to real-world aggression. Some of the papers have been retracted, reports Cathleen O’Grady at Science, but others live on, leaving researchers concerned about how they have influenced the field.   


    Public shaming has been around in one form or another for much of human history — but the internet has allowed it to occur at a scale never seen before. At Discover Magazine, Timothy Meinch explores the implications of this new era of shame and social media outrage.


    We reported earlier this week on how to deal with feeling bored — and the surprising benefits that boredom can sometimes have. Over at BBC Worklife, Sara Harrison examines more findings about the “unique emotional state” that is boredom.


    Every day we read other people’s minds, trying to understand what they are thinking. But this process is different from empathy, which involves understanding another’s emotions. Now a group of researchers have created a new scale to distinguish between the two, reporting their preliminary results at The Conversation.


    Pausing before answering a question can make you seem like you are lying, reports Natalie Grover at The Guardian. Participants read about, listened to, or watched people responding in a range of scenarios, from everyday conversations to police interrogations. Slower responses were seen as less credible and sincere.


    What’s it like to have no “mind’s eye”? At Psyche, Neesa Sunar describes her experience with aphantasia, which leaves her unable to mentally visualise her thoughts. Also check out our podcast on the condition from 2019.

    Compiled by Matthew Warren (@MattBWarren), Editor of BPS Research Digest

    in The British Psychological Society - Research Digest on February 19, 2021 03:42 PM.

  •

    Widely shared vitamin D-COVID-19 preprint removed from Lancet server

    A preprint promoted by a member of the UK Parliament for claiming to show that vitamin D led to an “80% reduction in need for ICU and a 60% reduction in deaths” has been removed from a server used by The Lancet family of journals. The preprint, “Calcifediol Treatment and COVID-19-Related Outcomes,” was posted to …

    in Retraction watch on February 19, 2021 01:43 PM.

  •

    Seeking the secrets of the universe in the particle

    An award-winning particle physicist in Guatemala hopes to convince society that basic science matters too

    in Elsevier Connect on February 19, 2021 11:24 AM.

  •

    Exclusive: Ohio State researcher kept six-figure job for more than a year after a misconduct finding

    In 2016, Mingjun Zhang, a biomedical engineering researcher at The Ohio State University, along with collaborators, published a paper that explored the mechanism behind ivy’s impressive adhesive strength. In it, the authors claimed to report the genetic sequences of the proteins making up the adhesive. The paper, entitled “Nanospherical arabinogalactan proteins are a key component …

    in Retraction watch on February 19, 2021 11:00 AM.

  •

    Kate Brown’s “Plutopia”: book review

    A book review about the history of two nuclear communities, one capitalist, one socialist, and their toxic legacies.

    in For Better Science on February 18, 2021 12:20 PM.

  •

    How To Deal With Boredom, Digested

    By Emily Reynolds

    One year into lockdown, and it’s safe to say a lot of us are very, very bored. We’ve watched all the boxsets we can stomach, developed (and subsequently ditched) a long list of increasingly esoteric hobbies, and have quite probably exhausted every possible walking route within several miles of our home. Yet the boredom persists.

    Lockdown is, for most of us, an unusually boredom-inducing situation to be in, unable as we are to engage in many of the outside activities we would usually pass the time with. But boredom itself is common: as Camus rather pessimistically put it, “the truth is that everyone is bored”.

    So how do you deal with boredom? And does being bored even come with some benefits? Here’s the research on boredom, digested.

    Don’t look at your phone

    The first thing many of us do when we feel even a twinge of boredom is reach for our phones, ready to endlessly scroll until we’re bored of that and switch to something else.

    But one study suggests that, at least during working hours, smartphone use doesn’t actually do very much to relieve boredom. While phone use increased as workers became more bored, it also worked the other way around: participants were more bored after using their smartphone than they were when they started.

    It could be that the act of switching tasks from work to using a phone depletes our mental resources, and the reward of a sneaky look at your phone isn’t able to counteract the additional cognitive load. Or perhaps looking at your phone can underline the tedium of the task you’re trying to escape from. Either way, the results suggest that clinging to our phones might not be the way to relieve our boredom.

    Reframe the way you think about boredom

    Nobody really likes being bored. But thinking about boredom not as a chore but as an opportunity for introspection might make it easier to bear.

    In 2016, Tim Lomas, from the University of East London, purposefully made himself bored while on a long haul flight, making minute-by-minute notes about what he was thinking and feeling over the course of an hour. Rather poetically, he described his thoughts “emerging unbidden like fish appearing in an ocean”, and was “intrigued by how slippery, elusive and strange the mind was, a fleeting dance of vague ephemera”. He concluded that if people “were to regard boredom as a meditative experience, it may no longer be appraised as negative; indeed it may no longer even be boring.” York University’s Professor John Eastwood has similarly argued that boredom offers a chance to “discover the possibility and content of one’s desires”.

    So with a bit of introspection, you could turn your boredom into something more meaningful.

    Do something creative

    In a 2014 study, Sandi Mann and Rebekah Cadman, both from the University of Central Lancashire, found that boredom actually increased creativity.

    Participants were asked to either write something novel or do something undeniably tedious: copy numbers out of the telephone directory. All participants then completed a creative task, coming up with as many uses for two polystyrene cups as they possibly could. And those in the boring condition came up with far more uses than those in the non-boring condition.

    So not only could indulging your creativity be a way out of boredom, boredom itself might also give your creativity a boost.

    Get nostalgic

    At the moment, a lot of us are probably feeling pretty nostalgic for a life before the pandemic. Could focusing on that nostalgia help our boredom, too?

    In one 2013 paper published in Emotion, participants were first induced into high or low states of boredom by being asked to copy down either two or ten pieces of text about concrete mixtures. They then retrieved either a neutral or nostalgic memory. Those who were in high states of boredom before retrieving a nostalgic memory recorded feeling more nostalgic overall than those who were in low states of boredom. A follow-up experiment also found that nostalgia can actually counteract the effects of boredom, creating a sense of meaning in people’s lives.

    Boredom often comes with a sense of existential emptiness, so reestablishing yourself as a person with meaning and purpose could help — and the way to do that could be through meditating on meaningful past times.

    Let your mind wander

    Mind wandering isn’t always a positive activity, particularly if you’re trying (and failing) to get on with an important task: mind wandering has been linked to poorer reading comprehension and worse memory, to use just two examples.

    But, if you’re bored, it could offer some relief. According to one literature review, mind wandering can make boring tasks feel shorter, help us disengage from boring surroundings, and improve our moods while we’re doing something tedious. And as mind wandering has also been linked with increased creativity and problem-solving skills, there are other potential benefits too.

    Tackle it head on

    Schoolwork can be a serious cause of boredom — who doesn’t remember watching the clock tick slowly by while sitting in a class we hated? Luckily, this also makes school a good place to study boredom, and in 2011 one team explored a range of ways of dealing with boredom, focusing on avoidance (thinking or doing something unrelated to the boring situation) and “approach coping” (thinking or doing something that actively changes the boring situation itself).

    The team grouped students’ responses to a boring maths lesson into three categories: “reappraisers” dealt with boredom by meditating on the value of mathematics and therefore changing their view of the situation; “criticizers” tried to act to improve the situation by suggesting changes to the teacher; and “evaders” tried to avoid boredom by occupying themselves with something else.

    The reappraising group were the least bored overall, and also experienced the most positive outcomes when it came to emotions and motivation: they enjoyed maths more and experienced the lowest levels of anxiety. So, as Tim Lomas’ research also suggests, rethinking what boredom actually represents, rather than trying to avoid it altogether, might be the best way of ameliorating it long-term.

    Emily Reynolds is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on February 18, 2021 11:00 AM.

  •

    Dev session: Caglar Cakan: neurolib

    neurolib

    Caglar Cakan will introduce neurolib and discuss its development in this developer session.

    The abstract for the talk is below:

    neurolib is a computational framework for whole-brain modelling written in Python. It provides a set of neural mass models that represent the average activity of a brain region on a mesoscopic scale. In a whole-brain network model, brain regions are connected with each other based on structural connectivity data, i.e. the connectome of the brain. neurolib can load structural and functional data sets, set up a whole-brain model, manage its parameters, simulate it, and organize its outputs for later analysis. The activity of each brain region can be converted into a simulated BOLD signal in order to calibrate the model to empirical data from functional magnetic resonance imaging (fMRI). Extensive model analysis is possible using a parameter exploration module, which makes it possible to characterize the model’s behaviour given a set of changing parameters. An optimization module allows for fitting a model to multimodal empirical data using an evolutionary algorithm. Besides its included functionality, neurolib is designed to be extendable such that custom neural mass models can be implemented easily. neurolib offers a versatile platform for computational neuroscientists for prototyping models, managing large numerical experiments, studying the structure-function relationship of brain networks, and for in-silico optimization of whole-brain models.

    in INCF/OCNS Software Working Group on February 18, 2021 09:19 AM.

  •

    ‘No malicious intent’: Authors retract week-old Science Advances paper based on embargoed data

    The authors of a paper in Science Advances on methanogens — archaea that produce methane — have retracted the work a week after its publication because they included genetic data that violated an embargo.  The article, published on February 10, was titled “A methylotrophic origin of methanogenesis and early divergence of anaerobic multicarbon alkane metabolism,” …

    in Retraction watch on February 17, 2021 08:43 PM.

  •

    The dynamism of clinical knowledge

    How can we meet clinicians’ knowledge needs in a rapidly evolving medical world?

    in Elsevier Connect on February 17, 2021 02:42 PM.

  •

    Publisher retracting five papers because of “clear evidence” that they were “computer generated”

    A publisher is retracting five papers from one of its conference series after discovering what it says was “clear evidence” that the articles were generated by a computer. The five papers were published from 2018 to 2020 in IOP Publishing’s “Conference Series: Earth and Environmental Science.” According to an IOP spokesperson, the retraction notices will …

    in Retraction watch on February 17, 2021 01:24 PM.

  •

    What Makes For A “Meaningful” Death In Fiction?

    By Emily Reynolds

    Death can be a powerful narrative tool. We sob over the demise of a beloved character, cheer at the comeuppance of our favourite villain, or sit at the edge of our seats, shocked at deaths we didn’t see coming. Red Wedding, anyone?

    All deaths are not created equal, however, and in a new study Kaitlin Fitzgerald from the State University of New York and team look at what makes certain fictional deaths so memorable. The team reports that although we find some deaths pleasurable — the long-awaited downfall of an antagonist, for example — it’s those we find meaningful that truly stick with us in the long-term.

    Participants were first asked to think of a death scene from a narrative — film, TV, or other media. Those in the control condition then wrote about why the death they’d thought of was particularly memorable, while the other two groups were asked specifically to recall a death that was particularly “meaningful” or “pleasurable” and write about why they found it to be that way.

    After writing about the scene, participants categorised the genre of the narrative they’d chosen, and indicated which emotions they felt in response to the scene. They also rated the extent to which they appreciated the narrative (how meaningful, moving or thought provoking they found it) and enjoyed it (how fun or entertaining it was, and whether they’d had a good time engaging with it). Finally, participants categorised the character as a hero, villain, anti-hero or anti-villain, rated the morality of the character and how much they deserved their death, and indicated how much they liked the character.

    The results showed that “meaningful” and “pleasurable” deaths in fiction differ in key ways. Participants who had recalled a meaningful death were more likely to appreciate the narratives than those in the pleasurable condition, who were more likely to enjoy them. Those in the meaningful condition were also more likely to pick narratives from dramas or tear-jerkers, while participants in the pleasurable condition were more likely to pick deaths from the action genre, or from horrors or thrillers. The death of characters seen as moral, as heroes, or as less deserving of death were also more likely to be picked by those in the meaningful condition.

    The relationship between morality and appreciation deserves a closer look, however. While the results suggest that the deaths of immoral characters are generally considered less meaningful, what about those whose morality is somewhat more blurred? Co-author Matthew Grizzard noted in an interview that, though both could very reasonably be considered villains, Blade Runner’s Roy Batty and Star Wars’ Darth Vader had come up several times as examples of meaningful deaths due to their redemption arcs. These moral grey areas could be explored further in future work.

    Overall, though, the study indicates that even when we’re watching a film or reading a book we can experience death as a meaningful and reflective experience. In particular, the team suggests that media deaths can help people process “disenfranchised” grief — grief for someone they don’t feel “allowed” to grieve for — with fictional characters acting as a conduit for repressed feelings. So while it might be fun when a schlocky Bond villain falls from the top of the Golden Gate Bridge or Samuel L. Jackson gets eaten by a shark, there are scores of other examples that speak to people on a level that goes far beyond entertainment, and that may even help them understand their own grief.

    Memorable, Meaningful, Pleasurable: An Exploratory Examination of Narrative Character Deaths

    Emily Reynolds is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on February 17, 2021 12:58 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Third journal scammed by rogue editors

    Burned by the offer of a special issue, a journal has retracted four papers after determining that the guest editors of the supplement were not legit.  Neuroscience Letters, an Elsevier title, published the special issue — “Special Issue on Clinical and Imaging Assessment of Cognitive Dysfunction in Neurological and Psychiatric Disorders” — last summer, but … Continue reading Third journal scammed by rogue editors

    in Retraction watch on February 16, 2021 03:31 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    We Prefer To Experience Good — And Bad — Events On The Same Day As A Friend

    By Emma Young

    You rub off the panels on a scratch card and find that you’re the lucky winner of £100. If you could choose when the same thing should happen to a good friend, would you rather it was the same day as your win — or a different day? And what if we’re talking negative, rather than positive, experiences — when you’ve both been issued with parking tickets, say, or both suffered a bereavement?

    Earlier work shows that we tend to prefer to get through a series of negative experiences as quickly as possible, while we like to space out multiple personal positive experiences, so as to receive the most pleasure from each joy. A new study in Social Psychological and Personality Science finds that when we’re thinking about shared experiences, though, this doesn’t hold. The participants in this study preferred to experience both negative and positive events on the same day as a friend, rather than on different days — as long as those events weren’t powerfully emotional. Franklin Shaddy at the University of California, Los Angeles and his colleagues think we have this “preference for integration” because it increases our feelings of connection with others. This could have implications for how we arrange our lives during lockdown.

    In the first of five studies, the team found that the vast majority of a group of student participants wanted to receive a surprise friendly message (in this case from the head coach of UCLA’s basketball team) on the same day as a friend, rather than a different day. In a second study, the majority of 304 online participants said that they’d feel happier about winning a moderate amount of money on a lottery — or being issued with a tax demand for a similar amount — if a similar financial win or loss happened to a friend on the same day than if it happened on another day shortly before or afterwards.

    In further studies, the researchers found that people preferred to “integrate” — or “coordinate” — an event such as receiving a first-class flight upgrade or missing a flight with someone whom they liked. But when it came to someone they didn’t like, they had no strong feelings about the timing of the event. The same was true for another group of participants who were asked to think of imaginary people who they were told had similar political beliefs to their own, or opposite beliefs. The team thinks that people like the idea of coordinating positive or negative experiences with friends (or people they think they would get on with) because they feel that this will boost their feelings of social connection with the other person.

    However, a final study found that there are limits to this preference for integration: we prefer more extreme wins — and especially losses (such as getting divorced or totalling a car) — not to happen to us at the same time as to a friend. The team interprets this as reflecting a limited human capacity to savour gains and buffer losses, with shared extreme losses being harder to handle.

    There are other potential interpretations of the results, though. If you won money, might you worry that telling a friend could make them a little jealous, or even resentful or self-pitying — or might you even feel guilty about your own good luck? How much better it would be, then, for you both to win money on the same day. Also, we could be averse to experiencing strongly upsetting events on the same day as a friend simply because a friend who’s also suffering is less likely to be able to help us to cope with our own distress.

    Still, there are potential practical implications of the finding that we prefer to share experience of pleasant events. “Past research has shown that people underestimate their enjoyment of, and thus hesitate to engage in, hedonic activities alone,” the team writes. That resistance might be overcome if we were to share a positive experience with someone else if not in space, at least in time. During lockdown, we can’t meet a group of friends to go for a walk in a pleasant spot near our home, for example — but a group of friends could arrange to go individually for a walk in different uplifting places at the same time, and hopefully benefit from the feeling of enhanced social connection that this could bring.

    Social Hedonic Editing: People Prefer to Experience Events at the Same Time as Others

    Emma Young (@EmmaELYoung) is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on February 16, 2021 12:27 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    ‘Conference organizers have ignored this:’ How common is plagiarism and duplication in abstracts?

    Harold “Skip” Garner has worn many hats over the course of his career, including plasma physicist, biologist, and administrator. One of his interests is plagiarism and duplication in the scientific literature, and he and colleagues developed a tool called eTBLAST that compares text passages to what has already been published to flag potential overlap. A new … Continue reading ‘Conference organizers have ignored this:’ How common is plagiarism and duplication in abstracts?

    in Retraction watch on February 16, 2021 11:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    How good science communication can cut through the COVID “madness”

    Vaccine Editor-in-Chief talks about why people are complacent about COVID – and how we can help them take it seriously

    in Elsevier Connect on February 16, 2021 10:26 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Macchiarini partner Anthony Hollander chairs mass-sacking committee in Liverpool

    "I felt I had a lot to give the world. Getting my first at university and doing so well in research was an antidote. Underneath, though, there is part of me that feels maybe one day someone will discover that I am stupid." - Tony "Blue Peter" Hollander

    in For Better Science on February 16, 2021 06:28 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    PLOS and Uppsala University Announce Publishing Deal

    Uppsala University and the Public Library of Science (PLOS) today announced two 3-year publishing agreements that allow researchers to publish in PLOS journals without incurring article processing charges (APC). The Community Action Publishing (CAP) agreement enables Uppsala University researchers to publish fee-free in PLOS Medicine and PLOS Biology. The Flat fee agreement also allows them to publish in PLOS ONE and PLOS’ community journals[1]. These models shift publishing costs from authors to research institutions based on prior publishing history and anticipated future growth with PLOS.

    “We are thrilled to be collaborating with Uppsala University, our first European flat fee customer, on two new models that allow Uppsala researchers to publish without APCs,” said Sara Rouhi, Director of Strategic Partnerships for PLOS. “As one of the most prestigious universities in the world, Uppsala is well-positioned to further the cause of equitable and barrier-free open reading and publishing, and we are delighted to join with them on this effort.”

    PLOS’ Community Action Publishing (CAP) is PLOS’ effort to sustain highly selective journal publishing without Article Processing Charges for authors. More details about the model can be found here and here. PLOS’ Flat Fee model enables APC-free publishing with PLOS’ five other journals, creating efficiency and reducing administrative overhead for managing gold APC funds. Uppsala University and PLOS will also collaborate on future data, metrics, and tools for institutions to evaluate Open Access publishing agreements.

    The Uppsala University publishing deal continues the momentum for PLOS, following other agreements with the University of California system, Big Ten Academic Alliance, Jisc (including University College London, Imperial College London, University of Manchester) and the Canadian Research Knowledge Network among others.


    [1] PLOS Computational Biology, PLOS Genetics, PLOS Neglected Tropical Diseases, and PLOS Pathogens.

    The post PLOS and Uppsala University Announce Publishing Deal appeared first on The Official PLOS Blog.

    in The Official PLOS Blog on February 15, 2021 03:15 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Next Open NeuroFedora meeting: 15 February 1300 UTC

    Photo by William White on Unsplash



    Please join us at the next regular Open NeuroFedora team meeting on Monday 15 February at 1300UTC in #fedora-neuro on IRC (Freenode). The meeting is a public meeting, and open for everyone to attend. You can join us over:

    You can use this link to convert the meeting time to your local time. Or, you can also use this command in the terminal:

    $ date --date='TZ="UTC" 1300 today'
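    The same GNU date invocation can also print the converted time with an explicit format string, which makes the local zone visible in the output. This is just an illustrative variant of the command above, not part of the original announcement (note that BSD/macOS date does not support the --date option):

    ```shell
    # Show 13:00 UTC as local wall-clock time, including the zone name.
    # Requires GNU coreutils date.
    date --date='TZ="UTC" 1300 today' '+%Y-%m-%d %H:%M %Z'
    ```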
    

    The meeting will be chaired by @ankursinha. The agenda for the meeting is:

    We hope to see you there!

    in NeuroFedora blog on February 15, 2021 12:11 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Next Open NeuroFedora meeting: 1 February 1300 UTC

    Photo by William White on Unsplash



    Please join us at the next regular Open NeuroFedora team meeting on Monday 1 February at 1300UTC in #fedora-neuro on IRC (Freenode). The meeting is a public meeting, and open for everyone to attend. You can join us over:

    You can use this link to convert the meeting time to your local time. Or, you can also use this command in the terminal:

    $ date --date='TZ="UTC" 1300 today'
    

    The meeting will be chaired by @bt0dotninja. The agenda for the meeting is:

    We hope to see you there!

    in NeuroFedora blog on February 15, 2021 12:11 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Next Open NeuroFedora meeting: 18 January 1300 UTC

    Photo by William White on Unsplash



    Please join us at the next regular Open NeuroFedora team meeting on Monday 18 January at 1300UTC in #fedora-neuro on IRC (Freenode). The meeting is a public meeting, and open for everyone to attend. You can join us over:

    You can use this link to convert the meeting time to your local time. Or, you can also use this command in the terminal:

    $ date --date='TZ="UTC" 1300 next Monday'
    

    The meeting will be chaired by @ankursinha (me). The agenda for the meeting is:

    We hope to see you there!

    in NeuroFedora blog on February 15, 2021 12:11 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Saying That Girls Are “Just As Good” As Boys At Maths Can Inadvertently Perpetuate Gender Stereotypes

    By Emma Young

    Though girls and boys do equally well on maths tests, the stereotype that girls aren’t as naturally able at maths — or as likely to be extremely smart — is adopted early; even 6-year-olds in the US endorse it. Of course, these stereotypes harm women in an educational setting and in their professional lives, point out the authors of a new study in Developmental Psychology. So it’s important to understand what gives rise to them. Eleanor Chestnut at Stanford University and her colleagues now report that one common and well-intentioned way of attempting to convey girls’ equality with boys actually backfires: saying that girls are “just as good” as boys at something leads the listener to conclude that boys are naturally better, and girls must work harder to equal them.

    Earlier work has shown that we use the syntax of a sentence to make inferences about the relative status of objects and social groups. Typically, we view the thing or person that is being compared to as the more typical or superior reference example. So, if you were to read “Molly’s cake is as good as Jessica’s”, you’d be likely to infer that Jessica’s is the exemplar that Molly is striving to equal. 

    “Girls are just as good at boys at maths” is something that family members, caregivers, teachers and public figures all say to try to promote gender equality, note the researchers. And in 2018, Chestnut and her colleague Ellen M Markman reported that adults who hear that statement infer that boys are more skilled at maths and have more natural ability. Unfortunately, then, it seems to perpetuate the very stereotype that it seeks to counteract. In the new study, the team set out to discover whether stereotypes can not only be perpetuated but learned, based on syntax alone.

    First, the team studied 288 adults, who were recruited online. These participants read sentences that incorporated nonsense words for abilities or traits. The team found that people who read that girls are “as good as boys” at “thrupping”, say, or “trewting”, attributed more natural skill to the boys, and inferred that girls had to work harder to be “trewtic”, and so on. When the boys were reported to be “as good as girls”, though, the girls were perceived to be superior — showing the influence of syntax on the participants’ judgements. When the sentence read, “Suppose someone tells you that boys and girls [or girls and boys] are equally good at thrupping”, and so on, neither gender was perceived to be naturally superior.

    The team then ran a similar study with 337 children, aged 7 to 11, who were recruited from museums in the San Francisco Bay area. This time, the researchers used puppets (who said they wanted to tell the children about boys and girls on their planet) as well as written statements. And instead of using nonsense words, the researchers referred to activities that the children would understand but which the team didn’t think they’d have a strong pre-existing gender bias about — whistling, hopping on one foot, doing handstands and snapping.

    When these children were told that boys [or girls] are “as good as” girls [or boys] at one of these things, the children were more likely to report that the gender in the reference position, at the end of the sentence, was naturally better, and would have to work less hard to be skilled at it. Also in line with the findings from adults, when the boys and girls on the alien planet were presented as being “equally as good” at an activity, the children didn’t view either gender as being naturally superior. The children’s age made no difference; they all made similar inferences.

    “We conclude that it is critically important to consider how we frame equality when talking to children,” the researchers write. The work suggests that if a child does not already hold the stereotype that boys are better than girls at maths, or more likely to be extremely smart, hearing that girls are “as good as boys” in these spheres could actually teach it. Public figures, websites, statements on Twitter, and even psychological research articles readily equate girls’ abilities with boys’ or women’s with men’s, the team notes.

    Their work suggests that presenting girls and boys, and men and women, as being “equally as good” at something is a far better strategy. “Until women and men are on equal syntactic footing, they will not be on equal social footing,” they conclude. 

    “Just as good”: Learning gender stereotypes from attempts to counteract them.

    Emma Young (@EmmaELYoung) is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on February 15, 2021 12:10 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    The one that got away: Researchers retract fish genome paper after species mix-up

    A group of researchers in Canada has retracted their 2018 paper on the gene sequence of the Arctic charr — a particularly hearty member of the Salmonidae family that includes salmon and trout — after discovering that the sample they’d used for their analysis was from a different kind of fish. The paper, “The Arctic … Continue reading The one that got away: Researchers retract fish genome paper after species mix-up

    in Retraction watch on February 15, 2021 11:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    What role does culture play in shaping children’s school experiences?

    One way to think of culture is as a context in which we learn and develop. We share, live, perform, and experience culture through our participation in daily activities, customs, and routines with social others. Culture helps us make sense of our social worlds and shapes our actions, thoughts, and feelings. For example, culture plays a role in the way we experience emotions, construct our self-concepts, and learn and problem-solve.

    With increasing migration and the movement of people in the twenty-first century, many children in the US and worldwide are attending school in formal settings where cultural norms and practices at home may conflict with those children encounter at school. This experience places children in the position of having to navigate two different social worlds—home and school. One broad question we can explore is: “what role does culture play in shaping children’s school experiences and academic success?” Let’s visit three specific areas: parental beliefs and socialization practices, teacher perceptions, and school curricula and children’s learning.

    Parental beliefs and socialization practices

    Parental expectations, beliefs, and attitudes about education shape children’s academic experiences. Many parents in diverse cultural communities view education as a path to future success. For example, as a group many Asian and Asian American children attain academic success. What role does culture play in these outcomes? Chang notes that as a group, Chinese and Taiwanese parents place a high value on education. Kim and Park note the same is true for Korean parents. Parenting approaches in these communities highlight training and disciplining children, parent self-sacrifice, and devotion to children. Parents believe perseverance and hard work are the key to success, and socialization practices reinforce these values and traits. These cultural practices help children internalize the values their parents place upon education and behaving according to social norms. Children acquire these values and are loyal, appreciative, and dedicated to their parents for their support and encouragement. In part, they attain academic success to honor their parents and the broader social groups to which they belong.

    Teacher perceptions

    Teachers play an important role in children’s academic success too. What practices work best to motivate children to do well at school? The answer depends upon numerous factors. For example, many teachers will be entrusted with educating children who may not share their cultural heritage. How might this cultural mismatch shape children’s school experiences and potential for academic success?

    Most American school practices reflect dominant, mainstream American values, norms, and behavioral scripts. For most European American children who value independence and uniqueness, teacher praise and rewards can be highly motivating. However, for children who come from families that value humility and modesty, receiving praise in front of classmates might be an uncomfortable interaction.

    Student engagement norms are another example. Many American teachers, using a mainstream cultural lens, connect active student engagement with student attentiveness. Yamamoto and Li noted that for many Asian and Asian American students, knowing when to be quiet is a desirable skill which caregivers socialize their children to acquire. Teachers using mainstream American cultural values and norms may perceive quiet students as disengaged and inattentive. These perceptions impact children’s motivation to learn and academic success.

    School curricula and children’s learning

    Dominant, mainstream American values tend to permeate curricular and teaching practices. Many American schools promote individual learning and problem-solving approaches rather than group or collaborative problem-solving strategies. The focus upon individualized learning connects to the cultural ideology of individualism and the independent self. These approaches promote the self as separate and unique from social others. Many European American children participate in cultural practices and routines that reinforce this worldview and values.

    However, many Latinx and indigenous children participate in cultural practices and activities at home that value group and collaborative problem solving. These practices connect to the cultural ideology of collectivism and the interdependent self. At home children participate in practices and routines that emphasize the importance of the group, especially family. Thus, there is a disconnect between the approaches at home and the practices and routines the child encounters at school. Consequently, many of these children often have difficulty reaching their maximum potential in classrooms that promote individual problem-solving skills. Why does this happen, and does this necessarily have to be the outcome?

    For many children from cultural heritages that promote collectivist values and an interdependent cultural model of the self, the cultural practices in which the child participates at home may conflict with those the child encounters at school. Often, teachers are unaware of these sources of conflict. However, this conflict between home and school does not need to impede students’ success or invalidate the child’s cultural heritage. One solution is for schools to meet students halfway and bridge the gap between the two contexts.

    Two fine examples are The Bridging Cultures Project designed to assist immigrant and indigenous children in the US and the Kamehameha Early Education Program (KEEP) designed to assist native Hawaiian children. Both programs highlight how helping teachers become more aware, respectful, accepting, and inclusive of all their students’ cultural values and goals shapes children’s motivation and academic success at school.

    Final thoughts

    Numerous cultural forces connect to children’s school experiences and academic achievement. These include parental beliefs, socialization practices, and cultural worldviews. Cultural values, practices, and ways of learning at home both shape and connect to children’s formal school experiences. Educational initiatives such as The Bridging Cultures Project and KEEP highlight the importance of cultural compatibility and connectedness in fostering children’s active engagement in school. Acknowledging and incorporating cultural knowledge, patterns, and ways of learning from home when they disconnect from those at school is one important way to ensure all children’s academic success.

    Featured image: School children in India, by Richard Veit

    The post What role does culture play in shaping children’s school experiences? appeared first on OUPblog.

    in OUPblog - Psychology and Neuroscience on February 15, 2021 10:30 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Thank You for your hard work and support!

    Thank you to our volunteers. Your contributions throughout the past year have helped accelerate discovery and develop science during these unpredictable times.

    The team at PLOS and the wider Open Science community are incredibly grateful for the thoughtfulness and expertise that you, our Editorial Board Members, Guest Editors and Reviewers, bring to your roles in the peer review process. Your rigorous assessment of submitted manuscripts is an invaluable contribution, and your ongoing commitment to Open Science inspires us.

    Thank You

    Watch PLOS’ Chief Scientific Officer, Veronique Kiermer’s message of thanks.

    What did we achieve?

    In the past year our journal teams have worked to introduce new Open Access opportunities to support inclusivity, reproducibility and improving trust in science by involving more people in the scientific process. Our efforts aim to ensure a diverse and sustainable publishing ecosystem providing researchers of every career stage and discipline the chance to publish their findings. 

    • As of 2020 all PLOS journals now offer institutional partnership deals to equitably distribute the costs of publication and ensure that Open Science is really open for all. 
    • We’ve empowered researchers to share more of their science with new registered report submission options at PLOS ONE and PLOS Biology, introducing peer review at the experiment design stage.
    • The quality and breadth of PLOS research has enabled us to create new curated editorial collections to advance a range of important fields.

    None of this would be possible without you, our dedicated Editorial Board Members, Guest Editors and Reviewers. 


    Coming soon…

    As we step into 2021 we will all undoubtedly come up against a unique set of challenges but rest assured, PLOS will continue to serve the Open Science community. Together we can ensure that researchers from all regions have every opportunity to share their work with experienced scientists from a host of disciplines. 

    Once again, THANK YOU to ALL of our dedicated volunteers.


    Find out how you can take part in our ongoing initiatives to advance science. Learn More!

    The post Thank You for your hard work and support! appeared first on The Official PLOS Blog.

    in The Official PLOS Blog on February 15, 2021 10:01 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Highlights of the BMC Series – January 2021

    BMC Public Health: Predicting epidemics using search engine data: a comparative study on measles in the largest countries of Europe

    from Samaras, et al. (2021)

    Tracking disease outbreaks is key to combating them, and predicting epidemics can help direct prevention measures most effectively. Google Trends has been used to track and/or predict the spread of infectious diseases including respiratory syncytial virus (RSV) and AIDS.

    This study extended these analyses to measles, a highly infectious disease causing outbreaks around the world, by comparing official data on measles in Europe to data from Google Trends.

    The authors found that the prediction model generated from the Google Trends data overall closely matched the data on actual cases. The prediction was more accurate for countries that had higher measles activity. One advantage of using data from Google Trends to track and predict disease outbreaks is that the data are available almost in real time, in contrast to official data that often take months to be released. This could help with faster responses to contain the spread of the illness.

    BMC Cardiovascular Disorders: Ultrasound-targeted microbubble destruction promotes myocardial angiogenesis and functional improvements in rat model of diabetic cardiomyopathy

    Diabetic cardiomyopathy (DCM), a heart condition common in people with diabetes, is caused in part by a lack of blood vessel growth and repair in heart tissue. Growth factor therapy can promote angiogenesis to treat DCM, but delivery of the therapy has proven difficult.

    The authors of this study tested ultrasound-targeted microbubble destruction (UTMD) in rats with DCM as a potential safe and effective way to stimulate angiogenesis in the heart.

    They found that UTMD used with Sonovue microbubbles restored heart function in the rats who received the treatment. Histology and microscopy showed that much of the damage to the heart tissue seen in the DCM rats had been reversed by the treatment. This new technique shows promise as a noninvasive therapy for early intervention of DCM in diabetic patients.
    BMC Pregnancy and Childbirth: Which growth standards should be used to identify large- and small-for-gestational age infants of mothers with type 1 diabetes? A pre-specified analysis of the CONCEPTT trial

    Babies born at either end of the birthweight bell curve are at higher risk of neonatal complications such as respiratory distress or NICU admission, but determining the cut-off percentile for considering a newborn at risk depends on a range of factors. Different standards account for these factors differently, leading to discrepancies in identifying babies at risk. Maternal diabetes is one of the factors that affect both a newborn’s birthweight and their risk of complications.

    A new study has applied three widely-used growth standards to a large, international cohort of babies born to mothers with diabetes. Birthweight percentiles were calculated using each of the standards and then predicted risk of complications based on those percentiles was compared with actual outcomes.

    The authors concluded that one of the growth standards, which did not account for gestational age at birth, was not suitable for risk assessment of babies born to mothers with diabetes, while the other two showed high associations between cut-offs for large- and small-for-gestational-age and rates of complications.

    BMC Geriatrics: Life after falls prevention exercise – experiences of older people taking part in a clinical trial: a phenomenological study

    Falling is a serious risk for older adults, and falls-prevention exercise interventions have shown good results in preventing falls in the short-term. However, little is known about the long-term effects of these interventions and whether they produce ongoing changes in the exercise behaviors of the participants.

    To fill in these gaps, the authors of this study interviewed older adults who had participated in a fall-prevention exercise trial about their experiences in the trial and their exercise activities and falls after the conclusion of the trial.

    The participants described maintaining physical activity since the conclusion of the trial, but very few continued with the exercises they had learned in the intervention. Many of the participants minimized the seriousness of falls they had experienced. These interviews highlighted that the challenges in encouraging older adults to continue with falls-prevention exercises after the conclusion of an intervention are similar to the challenges in convincing them to participate in the interventions in the first place. Those working with older adults need to understand individual needs and habits and to contextualize the importance of fall prevention within each individual’s own priorities for maintaining health and independence.

    BMC Health Services Research: Meta-ethnography in healthcare research: a guide to using a meta-ethnographic approach for literature synthesis

    These clear guidelines will allow authors … to produce rigorous and comprehensive syntheses.

    Meta-ethnography, the synthesis of multiple qualitative studies, can provide insights into the experiences of patients and health-care professionals and generate evidence that can be used to inform healthcare practice and policy. However, there are not yet clear guidelines on the process of meta-ethnography.

    To address this, the authors of a new paper have designed a step-by-step method for conducting a meta-ethnography, including examples. They especially focused on the middle and later stages of the process, “determining how the studies are related, translating the studies into one another, synthesising the translations,” due to the difficulty of these steps and the dearth of existing guidance for conducting them.

    These clear guidelines will allow authors of future meta-ethnographies, even those new to the methodology, to produce rigorous and comprehensive syntheses of evidence that contribute to improvements in patient care and healthcare processes.

    The post Highlights of the BMC Series – January 2021 appeared first on BMC Series blog.

    in BMC Series blog on February 15, 2021 08:00 AM.

  •

    Alysson Muotri, a minibrain

    Autistic Neanderthal minibrains operating crab robots via brain waves of newborn babies are to be launched into outer space for the purpose of interstellar colonization. No, I am not insane. Science Has Spoken.

    in For Better Science on February 15, 2021 06:30 AM.

  •

    The Man Who Thought AIDS Was All In The Mind

    I look at one of the most remarkable articles in the history of psychology

    in Discovery magazine - Neuroskeptic on February 14, 2021 12:00 AM.

  •

    Weekend reads: “Hot-crazy matrix” paper; “comfort women” controversy; COVID-19 vaccine misinformation

    Before we present this week’s Weekend Reads, a question: Do you enjoy our weekly roundup? If so, we could really use your help. Would you consider a tax-deductible donation to support Weekend Reads, and our daily work? Thanks in advance. The week at Retraction Watch featured: Eleven papers corrected after nutrition prof fails to disclose patent, … Continue reading Weekend reads: “Hot-crazy matrix” paper; “comfort women” controversy; COVID-19 vaccine misinformation

    in Retraction watch on February 13, 2021 01:48 PM.

  •

    Restoration of the Daily Email Announcement

    Subscribers to arXiv’s daily email announcement recently experienced a disruption in service. Maintenance performed by arXiv’s email provider was found to be the cause of the disruption. The issue has been resolved, and we sincerely apologize for any inconvenience.

    During this time, functionality on arXiv.org remained intact. Readers and authors could continue to browse and download new papers on arXiv.org, as well as submit their own articles.

    The issue

    Daily emails announcing new papers were received late, or not at all. Other emails, such as account registration confirmations, were also affected.

    The response

    We first identified and fixed a problem related to the email disruption, and began sending the email backlog to subscribers. Soon after that process began, a second issue was identified: the distribution lists for several subject areas had been inadvertently truncated. Finally, the distribution lists for all subject areas were restored and the remaining announcements were sent.

    Moving forward, if any subscribers find that the daily announcement email is not received, we advise them to resubscribe.

    Remediation 

    We know that researchers rely on the daily announcement for their work, and we aim to avoid such disruptions in the future by:

    • Automating daily backups of the distribution lists
    • Applying fixes to prevent the root cause
    • Improving the monitoring of the daily email announcement systems
    • Reviewing our overall backup and recovery plans

    We appreciate our subscribers’ patience as we resolved the issue and, again, we apologize for any inconvenience.

     

    in arXiv.org blog on February 12, 2021 10:11 PM.

  •

    Enhancing and maintaining a culture of inclusive excellence: The NIH Faculty Institutional Recruitment for Sustainable Transformation (FIRST) Program

    A message to the community from the Directors of the Institutes, Centers, and Offices involved in the NIH BRAIN Initiative and NIH Blueprint for Neuroscience Research.

    NINDS Director Walter J. Koroshetz, NIMH Director Joshua A. Gordon, NIA Director Richard Hodes, NIDA Director Nora D. Volkow, NCATS Director Christopher P. Austin, NICHD Director Diana W. Bianchi, NEI Director Michael F. Chiang, NIDCR Director Rena D’Souza, NIAAA Director George F. Koob, NCCIH Director Helene Langevin, NIH BRAIN Initiative Director John J. Ngai, OBSSR Director William T. Riley, NIBIB Director Bruce J. Tromberg, NIDCD Director Debara L. Tucci, NIEHS Director Rick Woychik, NINR Director Shannon N. Zenk

    Inherent in the mission of NIH is that biomedical research and its application can and should benefit all people. Significant events across our nation over the past year and frank discussions in the research community have led to deep reflection at NIH about biases and disparities faced by underrepresented groups in the research enterprise. As we strive to recognize our own role in these challenges, we affirm our commitment to diversity and to positive change to eliminate racism in our community and in our organization.

    As a step towards that change, the NIH Institutes, Centers, and Offices that are part of the NIH Blueprint for Neuroscience Research and the NIH BRAIN Initiative, strongly encourage the neuroscience community to take advantage of the new NIH-wide Faculty Institutional Recruitment for Sustainable Transformation (FIRST) Program, supported by the NIH Common Fund. Although progress has been made to increase participation of historically underrepresented groups in biomedical training stages, members of these groups are still less likely to be hired into positions as independently funded faculty researchers. These populations include underrepresented racial and ethnic groups, individuals with disabilities, individuals from disadvantaged backgrounds, and women.

    To help address this disparity, the FIRST program aims to enhance cultures of inclusive excellence through institutional support for recruitment of diverse “cohorts” of early-stage research faculty. Here, “inclusive excellence” describes the cultivation of scientific environments that can engage and benefit from a full range of talent. Neuroscience continues to be one of the fastest growing areas of biomedical research. We want the FIRST program to enable researchers to thrive, and we believe the broader neuroscience community has much to gain. Indeed, the growing field of the Science of Diversity shows the positive impacts that result when heterogeneous teams apply diverse perspectives and expertise to research challenges. We are hopeful that the cohort hiring model in FIRST will succeed in turning the culture in neuroscience departments and their institutions toward greater inclusion and diversity. We fully recognize that there are structural barriers perpetuated by gaps and that critical improvements must be made, because many groups are severely underrepresented in neuroscience. To our neuroscientists: we encourage you to take advantage of this opportunity as a path to meaningful change. We are committed to fostering a more inclusive, equitable, and diverse neuroscience community, and the FIRST program is a step in the right direction, with many, many more steps to come.

    The objectives of the FIRST program are twofold: to support institutions in hiring diverse cohorts of early stage research faculty; and to transform culture at NIH-funded extramural institutions by building a community of scientists who are committed to diversity and inclusive excellence. In addition to funds for hiring, the program will support new and strengthened institution-wide approaches to facilitating the success of cohort members and future faculty from diverse backgrounds. For cohort members, this is likely to include mentoring, sponsorship, and networking opportunities. For institutions, this may include training faculty in approaches known to foster inclusive excellence and changing the rubric for interviewing processes. The FIRST program will also fund a coordination and evaluation center, which will develop and guide the collection of common data metrics to rigorously assess the effects of FIRST faculty cohorts and institutional activities on the research culture at funded institutions. Lessons learned by these institutions will be shared with the broader biomedical research community.

    The FIRST program is expected to fund 12 awards over the next three years, plus the coordination and evaluation center, with an estimated budget of $241M over nine years, contingent upon the availability of funds. The first receipt date for the program’s funding opportunities is March 1, 2021. For more information, please also view the recent technical assistance webinar.

    Related Resources:

    in BRAIN Update on February 12, 2021 08:35 PM.

  •

    Postcodes And Pigs: The Week’s Best Psychology Links

    Our weekly round-up of the best psychology coverage from elsewhere on the web

    Plenty of work suggests that we have a “reminiscence bump” for music, tending to preferentially recall songs from our teenage years and young adulthood. Now a new study has found that while music from these years is indeed more familiar, it’s not always the case that we like it more. Younger participants in particular didn’t show a strong preference for music from their youth. “This suggests that songs from our adolescence can become closely entangled with memories from our past even if we don’t personally value the music,” writes researcher Kelly Jakubowski at The Conversation.


    Researchers have created “mini-brains” containing a genetic variant from our Neanderthal and Denisovan cousins. The brain organoids are different from regular human ones: they are smaller, have a rougher texture, and show differences at the neuronal level, reports Ariana Remmel at Nature. But some researchers are sceptical about how much these organoids can really tell us about the brains of our extinct relatives.


    Pigs have been trained to play a rudimentary computer game, the BBC reports. The animals learned to control a joystick with their snouts to move a cursor to a target, receiving a reward of food pellets when successful. We already know pigs are intelligent animals, but the work shows that they can even understand the association between moving a joystick and the movement of a cursor on screen.


    Did you know that cognitive psychologists were behind the design of the UK postcode system? Researchers from the University of Cambridge drew on memory research in order to create “one of the most memorable postcode systems in the world”, explains Marc Smith in his blog.


    Psychology can also teach writers about how to create complex characters, writes Kira-Anne Pelican at Psyche. In particular, Pelican recommends thinking about your character’s “Big Five” personality traits and how these might influence their behaviour.


    Despite being in the middle of a pandemic, many of us will have experienced moments when we realise how connected we are to each other. At The Guardian, psychologist Lisa Feldman Barrett discusses the way in which we can have a profound effect on the body and minds of others, even at a distance.  


    Tom Hanks’ announcement that he had COVID-19 in March 2020 may have made people take coronavirus more seriously. That’s according to a study conducted immediately after the news broke last year. Participants said that seeing a high profile and well-liked public figure contract the disease made COVID-19 seem more like a threat, reports Jeremy Blum at HuffPost. The authors suggest that celebrities could make good spokespeople for public health campaigns.

    Compiled by Matthew Warren (@MattbWarren), Editor of BPS Research Digest

    in The British Psychological Society - Research Digest on February 12, 2021 12:58 PM.

  •

    2-Minute Neuroscience: Phantom Limb

    Phantom limb is a condition in which someone who has lost a part of their body (e.g., due to amputation) continues to experience phantom sensations coming from the missing body part. In this video, I discuss some of the hypotheses that have been proposed to explain this strange phenomenon.

    in Neuroscientifically Challenged on February 12, 2021 10:42 AM.

  •

    Launch of the new Horizon Europe (10/2)

    It took a whole day, with a few short breaks, to present the pillars and main themes of the new EU research framework programme, Horizon Europe (#HorizonEurope). The presentation was jointly organised by six major Swedish funders: Energimyndigheten, FORMAS, Forte, Rymdstyrelsen, Vetenskapsrådet, and Vinnova. Many good presentations and panel discussions! I have summarised it all in tweets here, but there are some points worth highlighting that don't fit in short sentences.

    Open science was mentioned ONCE (by John Tumpane, from FORMAS). Data sharing was not mentioned at all. Collaboration, on the other hand, was mentioned repeatedly, above all with a focus on industry. Standards were mentioned in passing, mainly as something important for industry to monitor and take part in developing.

    Much of the focus was on "partnerships": organised and funded strategic multi-party collaborations meant to deliver guaranteed, predictable benefit. A full 50% of the budget goes there. Partnerships are not new to the EU, but the increased focus on benefit is a slight change of direction.

    A novelty in this framework programme is "missions": overarching goals that are meant to be anchored in societal needs and deliver societal benefit, and to make it easier to reach the goals of the European Green Deal and Europe's Beating Cancer Plan as well as the sustainability goals.

    Research infrastructures were mentioned quite a bit at the start of the discussion (they are one of the pillars) and receive a fair share of the budget, but were not particularly present when the conversation turned to collaboration (where I think they fit in very well).

    Regions were mentioned as possible innovation hubs and coordinators, which was a pleasant new angle; previously I have mostly heard regions mentioned as barriers and sources of fragmentation, and as a problem for the Swedish Life Science ecosystem.

    Universities were mentioned at the start, when the importance of the right education and of mobility in education was discussed, but not explicitly in the later discussions on collaboration; there the focus was mostly on individual researchers and on industry. Universities should reasonably also be key members of collaborative projects.

    Two important gaps ("valleys of death") in knowledge transfer and utilisation were mentioned: the winding road required to take research from the lab to industry, and the difficulty of scaling up innovative start-ups in a sustainable way.

    The government is working out a national strategy for Sweden's participation in Horizon Europe, though it is still unclear exactly when it will be ready. It will be interesting to see what it contains; EU applications are generally considered heavy, labour-intensive and complicated. Support grants to free up time for putting together an application might be useful, as would expanded support from the universities' administrators and grant offices. (When I worked on my first EU project we had an EU-savvy administrator to help, and what a difference that made.)

    Apparently Norway held its corresponding presentation already last autumn.

    in Malin Sandström's blog on February 11, 2021 05:55 PM.

  •

    Packaging Life: The Origin of Ion-Selective Channels

    This week on Journal Club session Reinoud Maex will talk about a paper "Packaging Life: The Origin of Ion-Selective Channels".


    Most articles dealing with early life focus on its chemical basis and the evolution of proteins, RNA, DNA, and other metabolic products. This essay, however, is concerned primarily with the energy required to produce and maintain the essential life chemicals, and the necessity to confine them in a cell where they can function cooperatively. It seems likely that life evolved in proximity to the undersea vents discovered by the submersible craft Alvin of the Woods Hole Oceanographic Institute (Woods Hole, MA). Bacteria were probably the original life form, growing in mats near the vents. Some advantages of the vents as starting points for life are:

    1. Plentiful water and essential ions. We are ~60% salt water.

    2. Plenty of chemical elements, many out of equilibrium and ready to combine. From one point of view, chemistry is simply the search of electrons, e.g., the two around a hydrogen molecule, for vacancies as close as possible to a nucleus with many protons, e.g., oxygen, or less avid electron gatherers, e.g., nitrogen, carbon, sulfur, or phosphorus. Eighteen of the 20 AAs contain only hydrogen, carbon, nitrogen, and oxygen; and the remaining two require sulfur in addition. DNA and RNA require, in addition, phosphorus. In short, it is not necessary to dip far into the periodic table to make most of the life chemicals.

    3. Energy can be derived from combining chemicals issuing from the vent, as in a fuel cell, which derives energy by passing electrons from hydrogen to oxygen.

    4. The water near a vent is warm, speeding experiments in the evolution of life chemicals.

    Given these essentials, it is easy to imagine that life developed over a period of time, say a billion years. J. D. Sutherland, who first succeeded in synthesizing RNA, was quoted as saying: "My assumption is that we are here on this planet as a fundamental consequence of organic chemistry. So it must be chemistry that wants to work" (1). This essay starts at the point where early chemistry has done its work. It deals with the packaging of the life chemicals, the problems that arise from packaging, and energy production.


    Papers:

    Date: 2021-02-12
    Time: 14:00
    Location: online

    in UH Biocomputation group on February 11, 2021 03:56 PM.

  •

    Exploring code notebooks through community focused collaboration

    Written by Lauren Cadwallader, PLOS’ Open Research Manager

    The lack of reproducibility of research findings is a continuing concern in modern science.  Code reproducibility is a central part of the problem and we have been exploring code notebooks as one potential solution. We need to understand if these are of value to our community, both readers and authors of computational research, and are asking the computational research community to share their views on the most important aspects of sharing code.

    Collaborating on code 

    We have been exploring opportunities for collaboration with community-focused tools, such as code notebooks, that can improve open, reproducible research practices. PLOS Computational Biology is investigating how we can improve the reproducibility of computational articles in collaboration with NeuroLibre, a Canadian-based open science group supported by the Canadian Open Neuroscience Platform. Code notebooks are documents that contain code, the dependencies needed to run the code, and reader-friendly text elements, such as paragraphs explaining the purpose of a code block and figure captions. These notebooks can be made interactive through the ability to change parameters in figures or edit the code itself. Notebooks improve open science practices by removing barriers faced by researchers who try to access others' code, such as setting up the correct environment to run it; by presenting the code, the resulting analyses and the accompanying text together in a browser-based notebook, they make it simpler to interrogate the code underlying the figures in a paper. The computational biology community is a logical place to explore how valuable these are, since the community already shows high engagement with making code accessible, although primarily via repositories1.

    NeuroLibre has created prototype versions of code notebooks for two published PLOS Computational Biology articles. They took the code and data that were shared with the articles and turned them into browser-based notebooks that display the figures, with the ability to switch to view the code or to view a Jupyter notebook (hosted on Binder). While a base level of interactivity can be created without input from the papers’ authors, the NeuroLibre team worked with one of the papers’ authors to create additional interactive figures. These extra features  allow readers to manipulate the underlying data in a different way thanks to author input, such as choosing intermediate sample sizes as opposed to the three sample sizes detailed in the published paper.

    Figure 1. Screenshot of Figure 3 from the Larremore (2019) notebook showing the interactive plot (left) and the ‘toggle code’ view (right). The sample size in the interactive plot can be changed using the dropdown box to the top left of the plot.
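    The interactive-figure pattern described above can be sketched in a few lines of plain Python. Everything here is illustrative (the function names and the Monte Carlo stand-in analysis are invented; the actual NeuroLibre notebooks wrap the papers' own analysis code): the key idea is a figure-generating function parameterised by a reader-chosen value, which a notebook widget re-runs on each selection.

    ```python
    import random

    def estimate_overlap(sample_size, seed=0):
        """Toy stand-in for an analysis step whose output depends on a
        reader-chosen sample size (here: a Monte Carlo mean)."""
        rng = random.Random(seed)
        draws = [rng.random() for _ in range(sample_size)]
        return sum(draws) / sample_size

    def render_figure(sample_size):
        """Stand-in for a plotting routine; returns the text a figure would show."""
        value = estimate_overlap(sample_size)
        return f"sample size {sample_size}: estimate {value:.3f}"

    # In a Jupyter notebook, a dropdown would call render_figure on change, e.g.:
    #   import ipywidgets
    #   ipywidgets.interact(render_figure, sample_size=[50, 100, 500])
    # Here we simply emulate the three dropdown choices:
    for n in (50, 100, 500):
        print(render_figure(n))
    ```

    Allowing intermediate sample sizes, as in the author-enhanced prototype, amounts to exposing a wider range of values to the same function.
    
    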

    What are we up to?

    Our collaboration with NeuroLibre to date has focused on gathering qualitative user feedback on the prototypes, to understand  the value of these notebooks to computational biology researchers. A series of seven initial interviews helped us to better understand how researchers currently manage and share their code, how and why they access other peoples’ code, and if we are asking the right questions about notebooks. Amongst other things, we are interested in hearing what aspects of publishing reproducible research researchers value the most, what challenges they encounter, and what considerations there are for creating an interactive notebook alongside a published paper. Informed by the outcomes of this initial research, we are now looking for more researchers to share their views on these prototypes via an online questionnaire. This will serve as useful feedback for NeuroLibre to continue improving their tools, and support PLOS in working towards improving reproducibility in research publishing, in collaboration with the scientific community. 
    If you work in computational biology related research, please share your views! Anonymous results of the survey will be made available in the future.

    Publications used for prototype notebooks

    D. B. Larremore (2019) Bayes-optimal estimation of overlap between populations of fixed size PLOS Computational Biology 

    A. Tampuu, T. Matiisen, H. F. Ólafsdóttir, C. Barry & R. Vicente (2019) Efficient neural decoding of self-location with a deep recurrent network PLOS Computational Biology

    Related editorial in PLOS Computational Biology:

    1Boudreau, M.,  Poline, J.B., Bellec, P., & N. Stikov (2021) On the open-source landscape of PLOS Computational Biology PLOS Computational Biology DOI: 10.1371/journal.pcbi.1008725

    The post Exploring code notebooks through community focused collaboration appeared first on The Official PLOS Blog.

    in The Official PLOS Blog on February 11, 2021 02:37 PM.

  •

    Upcoming Workshops on Non-Invasive Imaging Technologies

    This next workshop series of 2021 will bring together non-invasive neuroimaging tool developers, neuroscientists, engineers, and industry partners to discuss emerging tools and pathways for disseminating current imaging technologies. Please join us on Feb. 18 and 19, and March 9-11!

    Recent advancements in electromagnetic recording, molecular imaging, and other non-invasive brain imaging modalities have allowed researchers to examine the human brain with incredible sensitivity and specificity. Now, it is critical to evaluate how these tools will benefit the future of human neuroscience research. Starting next week, the NIH BRAIN Initiative is hosting a two-part virtual workshop series on non-invasive neuroimaging technologies. As mentioned in the BRAIN 2.0 reports, these workshops will convene experts in neuroimaging from academia and industry to discuss the broader dissemination and transformative potential of new imaging tools in neuroscience.

    The workshops are open to anyone and will be livestreamed on NIH Videocast. You can view them live here on the day of each event.

    BRAIN non-invasive imaging workshop dissemination 2021

    Workshop 1: Dissemination of Non-Invasive Imaging Technologies

    Session 1: Electromagnetic Recording Techniques
    Thursday, February 18, 2021 from 10:00 AM – 11:45 AM (EST)

    This session will explore advancements in electromagnetic recording technologies, including breakthroughs in wearable functional brain imaging systems and acoustoelectric imaging. Confirmed speakers include Drs. Peter Schwindt and Russell Witte. Dr. Miikka Putaala will discuss the dissemination of magnetoencephalography (MEG) technologies. Dr. Andrea Wijtenburg will moderate the session.

    Session 2: Optical and Acoustic Imaging
    Thursday, February 18, 2021 from 12:15 PM – 3:00 PM (EST)

    Participants in this session will explore novel optical and acoustic imaging technologies, such as wireless high-density optical tomography and functional photoacoustic CT. Confirmed speakers include Drs. Joseph Culver, Maria Angela Franceschini, David Boas, and Lihong Wang. Drs. Patrick Britz and Claudia Errico will discuss the dissemination of optical imaging and transcranial ultrasound technologies. This session will be moderated by Dr. Cheri Wiggs.

    Session 3: Molecular Imaging Technologies
    Thursday, February 18 from 3:30 PM – 5:20 PM (EST)

    The third session will explore new molecular imaging technologies, such as high-resolution in vivo PET imaging and neurochemical connectome scanners. Confirmed speakers include Drs. Georges El Fakhri, Richard Carson, and Ciprian Catana. Dr. James Williams will talk about disseminating molecular imaging tools. Dr. Yuan Luo will moderate this session.

    Session 4: Hemodynamic Imaging Technologies
    Friday, February 19, 2021 from 10:00 AM – 3:40 PM (EST)

    Participants will discuss the latest advances in hemodynamic imaging, including next-generation MRI capabilities and magnetic particle imaging (MPI). Confirmed speakers include Drs. Michael Garwood, Kamil Ugurbil, Thomas Foo, Susie Huang, David Feinberg, Wei Chen, Douglas Noll, and Larry Wald. Drs. Bryan Mock, Patrick Ledden, and Patrick Goodwill will discuss strategies to disseminate MRI and MPI technologies. This session will be moderated by Dr. Shumin Wang.

    BRAIN non-invasive imaging workshop transformational tech 2021

    Workshop 2: Transformative Non-Invasive Imaging Technologies

    Session 1: Electromagnetic Recording Technologies
    Tuesday, March 9, 2021 from 10:00 AM – 5:15 PM (EST)

    The first session will explore emerging electromagnetic recording technologies and methods, neuroscience research opportunities, and pathways for dissemination. Speakers will highlight new approaches to MEG/EEG signal processing, neural signal decoding, wireless EEG systems, and many other topics. Confirmed speakers include Drs. Julia Stephen, Dimitrios Pantazis, Bin He, Stephanie Jones, Svenja Knappe, Matti Hamalainen, Scott Makeig, Shane Cybart, Miikka Putaala, Vishal Shah, and Tim Mullen. This session will be co-chaired by Drs. Yoshio Okada, Julia Stephen, Kari Ashmont, and Andrea Wijtenburg.

    Session 2: Molecular Imaging Technologies
    Wednesday, March 10, 2021 from 10:00 AM – 5:15 PM (EST)

    This session will explore new molecular imaging technologies and methods, neuroscience research opportunities, and tool dissemination. Topics include large-scale multi-site studies, radioligand development, breakthroughs in MPI and PET imaging, and others. Confirmed speakers include Drs. Tarun Singhal, Jorge Sepulcre, Karmen Yoder, Tammie Benzinger, Georges El Fakhri, Bob Mach, Robert Innis, Robin De Graaf, Piotr Maniawski, and James Williams. This session will be co-chaired by Drs. Georges El Fakhri, Diana Martinez, Yuan Luo, and George Zubal.

    Session 3: Hemodynamic Imaging Technologies
    Thursday, March 11, 2021 from 10:00 AM – 6:20 PM (EST)

    The final session will explore advances in hemodynamic imaging technologies and methods, neuroscience research opportunities, and pathways for dissemination. Speakers will discuss multi-modal integration, MRI hardware, physiologic noise and artifact reduction, and many other topics. Confirmed speakers include Drs. Renzo Huber, Catie Chang, Emily Finn, Mark Woolrich, Laura Lewis, Tom Foo, Kawin Setsompop, Damien Fair, Steve Connolly, Maria Angela Franceschini, Bryan Mock, Patrick Britz, Claudia Errico, and Patrick Goodwill. This session will be co-chaired by Drs. Deanna Barch, Peter Bandettini, Vani Pariyadath, and Cheri Wiggs.

    Workshops will be livestreamed for attendees on NIH Videocast. For more information, including agendas, please visit the dissemination and transformative non-invasive imaging workshop webpages.

    in BRAIN Update on February 11, 2021 02:30 PM.

  •

    Psychological Impact Of A Relationship Ending Is Reflected In Language Of Reddit Users Going Through Break-Ups

    By Emily Reynolds

    While some relationships are ended in the heat of the moment, for many the decision to break up with a partner involves several long, agonising weeks of weighing up various options. During that time, your attitudes and behaviours towards your partner may change — you might become colder or more distant, for example.

    But what about your language? According to a new study, published in PNAS, the language we use on social media just prior to a break-up can offer a key insight into the emotional and cognitive impacts of a relationship ending. Looking at over a million posts from 6,803 Reddit users who had posted on r/BreakUps, the University of Texas at Austin team found changes in language so consistent that they could be found even in posts completely unrelated to relationships.

    In the forum, some users open up about why they’re thinking of ending their relationship, or ask advice from fellow posters about particular situations. Other users have been broken up with, using the space to vent and express their sorrow or rage.

    The team looked at the language of forum users for evidence for two types of thinking: cognitive processing and analytic thinking. Cognitive processing is the kind of thinking people do while trying to understand particular problems that they know very little about. Language related to cognitive processing can be grouped by insight words like “understand”, causal words like “because” or “result”, and words related to self-discrepancy like “should”.

    Analytic thinking, on the other hand, involves logical (sometimes even dispassionate) reasoning about a certain situation, reflected in factual rather than emotional language. The researchers also considered the focus of people’s posts: whether they were oriented around the self, with frequent use of “I”, or around a collective “we”.

    The team used a text analysis system that calculated the percentage of words corresponding to these different dimensions of language, both before and after they had posted about their break-up, and both within r/BreakUps and elsewhere on Reddit. First, a pre-break-up baseline period was established, from a year to four months before the break-up; during this period, there were no significant changes in language. Language was then analysed in two week intervals until a year after the break-up.
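    The core computation described above — the percentage of a post's words falling into each dictionary category — can be sketched as follows. The word lists here are tiny hypothetical stand-ins; the actual system uses large, validated dictionaries.

    ```python
    import re

    # Hypothetical, highly simplified word lists standing in for the
    # dictionary categories described in the study.
    CATEGORIES = {
        "insight": {"understand", "realise", "know"},
        "causal": {"because", "result", "hence"},
        "i_words": {"i", "me", "my"},
        "we_words": {"we", "us", "our"},
    }

    def category_percentages(text):
        """Return the percentage of words in `text` falling in each category."""
        words = re.findall(r"[a-z']+", text.lower())
        total = len(words)
        if total == 0:
            return {name: 0.0 for name in CATEGORIES}
        return {
            name: 100.0 * sum(w in wordlist for w in words) / total
            for name, wordlist in CATEGORIES.items()
        }

    scores = category_percentages(
        "I know we broke up because I wanted to understand myself"
    )
    ```

    Tracking these percentages in two-week windows relative to the break-up date, as the study did, would then reveal the rises and falls described above.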

    The team found significant decreases in analytic thinking a month before the break-up, not returning to baseline levels until six months after the break-up. This suggests a serious disruption in people’s “normal” thinking patterns, starting up to three months before a break-up. The sharpest drop in analytic thinking came at the disclosure of the break-up, with people’s language becoming the most personal and informal in the immediate aftermath of their relationship ending. This remained true even when they were not talking about their relationship.

    And while analytic thinking decreased, language related to cognitive processing increased. Use of “I” and “we” words also increased, peaking at the time of the break-up. The effect was largest, unsurprisingly, for “I” words — and again, held even when users were posting about topics completely unrelated to their break-up.

    Though “we” words returned to their baseline levels within a month, cognitive processing and “I” words remained heightened for up to 14 weeks post-break-up. The team suggests that this shows people “continue to be self immersed” even after they have worked through their break-up and divorced themselves from their identity as part of a couple. When, through cognitive processing, people’s stories become more “developed and organised”, as the authors put it, analytic language starts to increase once again; overall, language patterns remained significantly different from baseline until around six months after the break-up.

    So could writing about your break-up online help you deal with it better? Well, it’s complicated. While expressively writing about your emotions has been associated with higher levels of wellbeing, those who were long-term posters on r/BreakUps were less likely to return to baseline levels of language than those who had only posted once or twice. However, it was unclear whether these users had something else in common that affected their thinking and language, such as a cognitive style or personality trait.

    It would also be interesting to look more deeply at the norms and jargon of particular forums and how these affect the way people write. Take another relationship-based subreddit, r/Relationships. Browsing the subreddit, you’ll often see the same kinds of language, structure and tone across posts, a standard way for people to write about their problems and challenges (often to fairly incongruous and amusing effect). Looking at the culture of individual forums could also give some insight into why and when people use the language they do, and how that language use relates to their own conceptualisation of their thoughts, feelings and lives.

    Language left behind on social media exposes the emotional and cognitive costs of a romantic breakup

    Emily Reynolds is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on February 11, 2021 01:48 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Elsevier CTO on collaborating with healthcare to improve patients’ lives

    In podcast, Elsevier’s CTO Health & Commercial Markets talks about expediting drug development, bridging the knowledge gap, and his vision for the future of healthcare

    in Elsevier Connect on February 11, 2021 01:00 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Harnessing nanotechnology for cancer drug delivery and ecofriendly fertilizers

    Deep family loyalty led this Sri Lankan researcher to invent innovative treatments for cancer

    in Elsevier Connect on February 11, 2021 11:20 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    NeuroFedora at the INCF/OCNS Software WG dev sessions

    This was originally posted on the INCF / OCNS Software Working Group (WG)'s blog here. It is a great opportunity to learn how NeuroFedora is developed.

    Ankur Sinha will introduce the Free/Open Source Software NeuroFedora project and discuss its development in this developer session.

    The abstract for the talk is below:

    NeuroFedora is an initiative to provide a ready to use Fedora based Free/Open source software platform for neuroscience. We believe that similar to Free software, science should be free for all to use, share, modify, and study. The use of Free software also aids reproducibility, data sharing, and collaboration in the research community. By making the tools used in the scientific process easier to use, NeuroFedora aims to take a step to enable this ideal. In this session, I will talk about the deliverables of the NeuroFedora project and then go over the complete pipeline that we use to produce, test, and disseminate them.

    in NeuroFedora blog on February 11, 2021 09:56 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Does obesity cause more deaths than smoking?

    The UK has introduced several anti-smoking policies in recent decades – the bans on indoor/workplace smoking, tobacco adverts around sports, and tobacco sales in vending machines, to name a few. Indeed, smoking has been the major target of public health interventions, and rightfully so given that it made the largest contribution to avoidable deaths. These interventions have been quite successful and, as a result, the number of smokers has been decreasing over the past 20 years.


    However, while smoking became less popular, obesity became more common. In 2003, 23% of people in England had obesity (BMI ≥30 kg/m2), and that figure rose to 29% by 2017. Similar trends can be observed in the other nations of the UK. With more people with obesity and fewer who smoke, it is now in question whether smoking remains the top contributor to deaths in the UK.

    We answered this question in our research by synthesizing two sets of numbers. The first set is the yearly proportions of people with different levels of obesity, and those of current, former, and never smokers. We extracted these numbers from the Health Surveys for England and Scottish Health Surveys. These surveys recruited random samples of people in England and Scotland, making their findings more representative.

    The second set of numbers is what we called the risk ratios of obesity and smoking. Risk ratios quantify the extent to which people with obesity and smokers are at higher risk of adverse health events, such as death. We extracted risk ratios from the latest meta-analysis studies, which should provide more robust estimates.

    Frederick K Ho et al., BMC Public Health

    Combining these two sets of numbers, we found that smoking contributed 23% of all deaths in 2003, and that number decreased to 19% in 2017 (Figure). Meanwhile obesity contributed 18% of deaths in 2003, and that number increased to 23% in 2017. The contribution of obesity exceeded that of smoking in 2013. The general trends were consistent across age, sex, and education level. But for younger people under 45 years of age, smoking remained the larger contributor to deaths.
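    The way prevalence and risk ratios combine into a share of deaths is the standard population attributable fraction. A minimal sketch, using purely illustrative numbers rather than the study's estimates:

    ```python
    def attributable_fraction(groups):
        """Population attributable fraction from (prevalence, risk ratio) pairs.

        PAF = sum(p * (RR - 1)) / (1 + sum(p * (RR - 1))),
        where p is the prevalence of each exposed group and RR its risk
        ratio relative to the unexposed reference group.
        """
        excess = sum(p * (rr - 1) for p, rr in groups)
        return excess / (1 + excess)

    # Illustrative numbers only: 29% prevalence of obesity at an
    # assumed all-cause mortality risk ratio of 1.8.
    paf = attributable_fraction([(0.29, 1.8)])
    ```

    In the study, the exposure groups would be finer-grained (levels of obesity; current and former smokers), each with its own survey-derived prevalence and meta-analytic risk ratio.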

    It should, however, be noted that the analysis assumed the risk ratios could accurately reflect the extent to which obesity and smoking cause death over the 15 years. In addition, our study only considered deaths, but the impact of obesity and smoking extends to chronic illnesses and disabilities. Related exposures, such as vaping, electronic cigarettes, and passive smoking, have also not been accounted for. These can be further investigated when better and more recent data are available.


    Historical efforts to protect the public from the harms of smoking have been successful. Our analysis estimated obesity is now contributing to more deaths than smoking, which highlights the need to focus our efforts to address it. This could include government policies and legislation to create an environment that can facilitate weight management, as well as interventions to support individuals.

    The post Does obesity cause more deaths than smoking? appeared first on BMC Series blog.

    in BMC Series blog on February 11, 2021 07:31 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Why open access can offer different possibilities for societies

    A Poultry Science Association leader explains why the organization's journals were flipped to open access and the benefits that brings

    in Elsevier Connect on February 10, 2021 02:05 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Despite Anxiety About COVID-19, Climate Change Remains A Key Concern In The UK

    By Emily Reynolds

    Climate change is a cause of serious concern for many — but that doesn’t mean anxiety about the planet is always at the top of people’s agendas. As we reported last year, the effectiveness of climate change appeals can vary considerably. And other research suggests that worries about the environment can be displaced by other issues (fewer Americans reported concern about climate change after the 2008 financial crisis, for example). This latter phenomenon is known as the “finite pool of worry” hypothesis: that as some concerns creep up our radar, others become neglected.

    So with COVID-19 taking up space both in the media and in our minds, are people thinking less about climate change? According to a new study in PNAS, the answer is no: climate change is now such a major concern that even serious threats of another nature don’t diminish fears at all.

    Research conducted in April 2020 in the US and between May 2019 and 2020 in the UK first indicated that concerns about COVID-19 might be pushing out worries about the climate. But these studies suffered from small sample sizes, so in the new work Darrick Evensen from the University of Edinburgh and team looked at data from 1,858 UK participants taking part in a longitudinal study over the course of fourteen months.

    Participants indicated how much they believed climate change was real and caused by humans by responding to five statements such as “I am convinced that climate change is really happening” and “the media is often too alarmist about issues like climate change”. They also indicated how serious a threat they felt climate change was to themselves and their own family, to the UK as a whole, to people in developing countries, and to wildlife and ecosystems.

    Rather than belief in or concern about climate change diminishing over the fourteen month period, participants’ belief only seemed to strengthen: there was slightly more agreement in June 2020 that climate change was both real and caused by humans than there was at the start of the study (although the authors point out that the sizes of these effects were very small).

    When asked whether COVID-19 was a bigger threat than climate change to the UK, the virus was seen to be very slightly more of a threat (43% compared to 42%). On a wider scale, however, this gap increased: 45% of participants saw climate change as a bigger threat to Europe (compared to 40% who saw COVID-19 as a bigger threat) and 55% saw climate change as a bigger threat to the world (versus 33% for COVID-19).

    Despite such widespread concern, media coverage of climate change may not be making the impact one might hope. Participants were asked whether they had heard of several environment-related stories covered extensively in the news: youth climate strikes, Extinction Rebellion protests, the UK Climate Assembly, wildfires in Australia, storms in the UK, and melting glaciers in the Alps and Greenland. Increased knowledge of such stories did not have a significant correlation with how participants’ perception of climate change severity changed over the 14 months — suggesting that media coverage itself is not increasing people’s concern about (or willingness to do something to combat) climate change.

    Finally, the team looked at around 124 million tweets sent from the UK to identify whether climate change or COVID-19 was receiving more attention from users. In this case, COVID-19 did seem to have an impact, with tweets about climate change decreasing between March 2019 and August 2020. This may be related to how newsworthy the virus was — being concerned about something doesn’t necessarily mean tweeting about it.

    Overall, the results suggest that the finite pool of worry hypothesis does not stand up when it comes to climate change. The team suggests that since prior research on the topic, the environment may have become a “permanent member of more people’s pool of worry” — something that just doesn’t budge, even when other concerns dominate. This may be good news: if we’re thinking about climate change we’re more likely to want to do something about it, whether on personal or structural levels.

    The lack of correlation between media coverage and perceived climate change severity, however, may need further research. If news stories about climate change aren’t making a difference, then what is? If not the media, what has altered people’s “pools of worry” so significantly since previous research? Exploring the answers to these questions could provide insights to campaigners, politicians and journalists alike.

    – Effect of “finite pool of worry” and COVID-19 on UK climate change perceptions

    Emily Reynolds is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on February 10, 2021 12:28 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    The English science supremacy

    England leads the world in science, any fule kno. Meet some more of the star jesters: Nick Lemoine, Peter St George-Hyslop and Xin Lu. They are curing cancer and Alzheimer’s with Photoshop.

    in For Better Science on February 10, 2021 10:30 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Models of the Mind: How Physics, Engineering and Mathematics Have Shaped Our Understanding of the Brain

    I wrote a book!

    As some of you may know, in summer of 2018 I signed a contract with Bloomsbury Sigma to write a book about my area of research: computational neuroscience.

    Though the term has many definitions, computational neuroscience is mainly about applying mathematics to the study of the brain. The brain—a jumble of all different kinds of neurons interconnected in countless ways that somehow produce consciousness—has been described as “the most complex object in the known universe”. Physicists for centuries have turned to mathematics to properly explain some of the most seemingly simple processes in the universe—how objects fall, how water flows, how the planets move. Equations have proved crucial in these endeavors because they capture relationships and make precise predictions possible. How could we expect to understand the most complex object in the universe without turning to mathematics?

    The answer is we can’t, and that is why I wrote this book. While I’ve been studying and working in the field for over a decade, most people I encounter have no idea what “computational neuroscience” is or that it even exists. Yet a desire to understand how the brain works is a common and very human interest. I wrote this book to let people in on the ways in which the brain will ultimately be understood: through mathematical and computational theories.

    At the same time, I know that both mathematics and brain science are on their own intimidating topics to the average reader and may seem downright prohibitory when put together. That is why I’ve avoided (many) equations in the book and focused instead on the driving reasons why scientists have turned to mathematical modeling, what these models have taught us about the brain, and how some surprising interactions between biologists, physicists, mathematicians, and engineers over centuries have laid the groundwork for the future of neuroscience.

    Each chapter of Models of the Mind covers a separate topic in neuroscience, starting from individual neurons themselves and building up to the different populations of neurons and brain regions that support memory, vision, movement and more. These chapters document the history of how mathematics has woven its way into biology and the exciting advances this collaboration has in store.

    Interested yet? Here is how you can get your hands on a copy:

    UK & ebook publication date: March 4th, 2021. Bloomsbury | Hive | Waterstones | Amazon

    India publication date: March 18th, 2021. Bloomsbury | Amazon

    USA publication date: May 4th, 2021. Bloomsbury | Powell’s | Barnes & Noble | Amazon | Bookshop

    AU/NZ publication date: May 4th, 2021. Bloomsbury | Boomerang | Amazon

    Outside those countries? Bloomsbury UK ships globally so you can use that link, or Amazon, or just check with your normal bookseller. And in addition to the e-book version available through Bloomsbury and elsewhere, an audiobook is coming on April 5th!

    In addition to the standard websites, of course you can always check with your local independent book store or library. If it’s not there, ask if they’ll stock it!

    in Neurdiness: thinking about brains on February 10, 2021 02:06 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Submit your Lab and Study Protocols to PLOS ONE!

    PLOS ONE’s array of publication options that push the boundaries of Open Science continues to expand. We’re happy to announce two new article types that improve reproducibility and transparency, and allow researchers to receive credit for their contributions to study design: Lab Protocols and Study Protocols

    These new article types complement other Open Science developments at PLOS ONE, such as Registered Reports, and support PLOS’ mission to accelerate progress in science and medicine. By adding reliable, accessible methods that others can build upon to the scientific record, Lab and Study Protocols support robust, reproducible science. As reproducible science helps accelerate discoveries, such contributions deserve notice.

    Study Protocols

    Consisting of a single article published in PLOS ONE, Study Protocols describe detailed plans and proposals for research projects that have not yet generated results. Sharing a study’s design and analysis plan before the research is conducted improves research quality by reducing the potential for bias, and credits researchers for the work that occurs prior to data collection. Additionally, studies that were peer reviewed as part of the funding process can be eligible for expedited publication. 

    The format is already well established in health-related fields; we’re now opening it to the natural sciences, medicine, and engineering, as well as the related social sciences and humanities. In fact, both Lab and Study Protocols are open to submissions within PLOS ONE’s inclusive scope, and are subject to our normal publication criteria.

    Lab and Study Protocols from PLOS ONE provide new opportunities for researchers to gain more recognition for the work that goes into contributing detailed methods.


    Lab Protocols

    Developed with researchers and in partnership with the protocols.io team, Lab Protocols consist of two interlinked components that together describe verified, reusable methods:

    1. A step-by-step protocol on protocols.io, with access to specialized tools for communicating methodological details and facilitating use of the protocol, including reagents, measurements, formulae, video clips and dynamic flow charts.
    2. A peer-reviewed PLOS ONE article contextualizing the protocol, with sample datasets and sections discussing applications, limitations and expected results.

    This two-part setup provides the best of both worlds for authors: the detailed, step-by-step guidance is held on protocols.io’s flexible open access platform, and the article in PLOS ONE helps increase the visibility and discoverability of the protocol by making it an official part of the peer-reviewed publication record. The partnership with protocols.io supports authors by clearly communicating technical details in a user-friendly format to support reproducibility.

    Access from the PLOS ONE article to protocols.io is seamless, via uniquely-created and clearly marked features of both platforms. The protocol and accompanying article are peer reviewed by subject experts. The method described in a Lab Protocol must have been shown to work in at least one peer-reviewed publication, so readers can be confident that the Lab Protocol will work as described.

    Lab Protocols are now open to submissions reporting verified methodologies and computational techniques.

    What does it look like?

    The protocol on protocols.io and the article on PLOS ONE work together to communicate details required to reproduce a study.  

    On the left is the protocol on protocols.io. With the details—steps—of the protocol portrayed through interactive features and special tools, authors can effectively communicate complicated lab instructions. On the right is the PLOS ONE article, providing context, assurance of peer review, and increased visibility for the protocol. Both elements of a Lab Protocol will be interlinked and available simultaneously, with peer review of both organised by PLOS ONE and its community of Academic Editors.

    Continuing to improve methods papers

    Thanks to our partnership with protocols.io, we’re able to offer authors enhanced functionality to share their protocols alongside the features from a peer-reviewed publication. We believe that researchers will really benefit from the dynamic functionality that protocols.io provides, helping pave the way for improved methods papers and increased reproducibility across disciplines. 

    To help those who are trying protocols.io for the first time, authors of Lab Protocols at PLOS ONE can receive free support from the protocols.io editorial team to upload and format their protocols as part of the publication process in PLOS ONE.

    Ready to submit?

    To submit a Lab or Study Protocol, view PLOS ONE‘s publication criteria and submit your article through the journal website.

    The post Submit your Lab and Study Protocols to PLOS ONE! appeared first on The Official PLOS Blog.

    in The Official PLOS Blog on February 09, 2021 05:19 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    PLOS Adopts Copyright Clearance Center’s RightsLink® for Scientific Communications to Manage New Community Action Publishing Model

    Note: The Copyright Clearance Center issued the following press release on their site earlier today

    The Public Library of Science (PLOS) announced today it is using RightsLink® for Scientific Communications (RLSC) to manage its Community Action Publishing (CAP) model, which aims to eliminate author APCs in order to make  its OA journals truly Open to Read and Open to Publish.

    PLOS Biology and PLOS Medicine are the first PLOS journals live on RLSC, the comprehensive scholarly communications workflow solution from Copyright Clearance Center, Inc. (CCC), a leader in advancing copyright, accelerating knowledge, and powering innovation.

    RLSC is used by PLOS to deliver a flexible, sophisticated workflow that enables authors to easily publish OA, whether or not they are funded under a formal PLOS publishing agreement. PLOS can also leverage the agreement management capabilities of RLSC to support emerging OA models such as the University of California multi-payer model, where the university and author share responsibility for funding APCs.

    “We want to ensure all authors have the same freedom in choosing the best venue for their research,” said Sara Rouhi, Director of Strategic Partnerships for PLOS. “PLOS’ Community Action Publishing changes the way we think about selectivity and authorship for a more equitable OA future. Key to this is a friction-free author experience that reduces the lift for authors around managing payments. RLSC has been critical to facilitating this.”

    “We’re so pleased PLOS chose RLSC to support its new CAP model by automating the management, collection, and funding of manuscripts,” said Emily Sheahan, Vice President and Managing Director, Information and Content Services, CCC.  “In partnering with PLOS on this important initiative, CCC helps provide the scale and automation required to support the needs of a growing community of stakeholders.”

    PLOS recently entered into agreements with the Big Ten Academic Alliance and the Canadian Research Knowledge Network,  premier higher education consortiums of top-tier research institutions in North America, for its members to participate in the CAP program.

    CCC just announced new functionality for RLSC, including a capability to inform authors of available OA publication funding throughout the manuscript lifecycle, starting with submission.

    RLSC enables publishers, funders and institutions to support a variety of OA agreements. It accelerates the implementation of OA deals, including Read and Publish Agreements, Membership Deals, and more. Over 30 publishers rely on CCC’s sophisticated agreement management solution to serve over 800 global academic institutions, consortia, and funders – including Jisc, Max Planck and California Digital Library. CCC’s Open Access Workflow Services, a comprehensive consulting practice providing strategic OA institutional agreement workflow support, helps organizations deliver on each agreement’s unique needs.

    CCC is an active partner in the information industry’s evolution of hybrid and pure Open Access publishing models. For years CCC has brought together key Open Access stakeholders from the author, publisher, institution, funding, and vendor communities through roundtables, panel events, webinars, podcasts, and published pieces. CCC is a member of OASPA (Open Access Scholarly Publishers Association), ALPSP (Association of Learned and Professional Society Publishers), STM (International Association of STM Publishers) and SSP (Society for Scholarly Publishing).

    The post PLOS Adopts Copyright Clearance Center’s RightsLink® for Scientific Communications to Manage New Community Action Publishing Model appeared first on The Official PLOS Blog.

    in The Official PLOS Blog on February 09, 2021 03:47 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Presenting a new view of arXiv member usage data

    Member institutions now have a new way to view their arXiv usage data. On arXiv.org, the number of downloads by institution is provided in searchable tables and graphs.

    Universities, libraries, and research institutes want to support the platforms, tools, and resources that are most valuable to their constituents. By providing various types of usage statistics, arXiv aims to help members and other stakeholders understand arXiv’s value, as shown below.

    Screenshot of institutional downloads by subject area. Institutions can now view their usage statistics this way. To search for a specific domain, hover over the area designated with the purple arrow to reveal the search icon.

     

    To download your institution’s data, you can select the institution or domain of interest, then click download, as indicated by the purple arrows in the screenshot below.

    Screenshot of member downloads with arrows to the download function. To download the data, select the url or institution of interest, then click “download,” as indicated by the purple arrows in this screenshot.

     

    As our community knows, arXiv is open, above all. Readers can access articles without logging in. Authors need only meet minimal requirements to submit articles. This central tenet of openness makes measuring usage by specific communities challenging. The usage presented in the new format is compiled from data based on institutional domains and IP address ranges, and it does have limitations. For example, most people access arXiv from locations other than their research libraries. Providing new ways to view usage data is an ongoing process and feedback is always welcome at membership@arXiv.org.

    arXiv’s 243 members and affiliates worldwide support the organization by contributing about 25% of the operating budget, fulfilling the Simons Foundation matching funds requirement, and contributing to governance through the Member Advisory Board. If your institution is a member, thank you! For organizations that are not yet members, we encourage you to learn more about membership and consider supporting arXiv’s mission to provide an open platform where researchers can share and discover new, relevant, emerging science, and establish their contribution to advancing research.

     

    (This post was updated Feb 16, 2021 to reflect the most current downloading instructions)

    in arXiv.org blog on February 09, 2021 03:06 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    In Times Of Anxiety and Low Mood, Focusing On Past Successes Could Improve Decision-Making

    By Emily Reynolds

    When you’re going through a period of anxiety or depression it can be difficult to make decisions, whether those are significant life changes or more mundane, everyday choices about prioritising tasks or time management. And those with generalised anxiety disorder or mood disorders often report feeling uncomfortable with or distressed by feelings of uncertainty — which doesn’t help when you need to make a decision, big or small.

    Now in a new study in the journal eLife, Christopher Gagne from UC Berkeley and colleagues find that people with higher levels of anxiety and depression are less able to adapt to fast-changing situations. But the authors suggest that with the right intervention there may be ways to not only mitigate this distress, but to help those with anxiety or depression make better decisions in the moment.

    Participants were aged between 18 and 55; some had diagnoses of generalised anxiety disorder or major depressive disorder, some had symptoms of both or either disorder but no formal diagnosis, while others had no history of mental illness at all. Those taking medication were excluded from the study, as were those with other diagnoses including OCD, PTSD and bipolar disorder.

    In the first study, after filling in measures related to anxiety, depression, worry and personality, 86 participants took part in a video game. In each round, they were asked to choose between two shapes: picking one shape resulted in a small monetary reward, while the other delivered an electric shock ranging from mild to moderate.

    The task took place in two blocks — one stable, in which one shape was associated with a reward 75% of the time and the other 25% of the time, and one volatile, in which the shape with the higher probability of reward switched every twenty trials. In the volatile blocks, therefore, participants had to keep adjusting their responses as the probabilities changed.
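    For concreteness, a minimal sketch of how such a volatile reward schedule might be generated. The parameters are assumptions drawn from the description above, not the study's actual task code:

    ```python
    import random

    def volatile_schedule(n_trials=80, switch_every=20, p_high=0.75, seed=0):
        """Simulate which shape (0 or 1) is rewarded on each trial.

        The shape carrying the higher reward probability swaps every
        `switch_every` trials, mimicking the volatile blocks described
        in the study; `n_trials` and `seed` are arbitrary choices here.
        """
        rng = random.Random(seed)
        outcomes = []
        for t in range(n_trials):
            better = (t // switch_every) % 2  # which shape is currently better
            p0 = p_high if better == 0 else 1 - p_high
            rewarded = 0 if rng.random() < p0 else 1
            outcomes.append(rewarded)
        return outcomes

    outcomes = volatile_schedule()
    ```

    A learner facing this schedule has to raise its learning rate to track the switches, which is exactly the adjustment the anxious and depressed participants were slower to make.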

    Those participants who either had a diagnosis of depression or anxiety, or who exhibited higher levels of associated symptoms, were slower to adjust their responses to the changes in probabilities. This suggests that mood disorders are associated with difficulties making decisions in changing circumstances.

    A second experiment replicated the first — only this time, instead of being shocked when they chose the wrong shape, participants lost money. And, again, results showed that those with symptoms of anxiety or depression were slower to adapt to changing rules in the face of unpredictability.

    So what does this mean for those with anxiety and depression when faced with a big decision? Senior author Sonia Bishop argues that those participants who adapted quickly did so because of their emotional resilience. “Emotionally resilient people tend to focus on what gave them a good outcome, and in many real-world situations that might be key to learning to make good decisions,” she says. Bishop’s previous research has found similar results: in one 2015 study, those with anxiety were more likely to make mistakes in decision-making in rapidly changing circumstances, while those with low levels of anxiety were able to quickly adapt to the task.

    The team suggests that cognitive-based therapies which encourage people to focus on past successes rather than failures could therefore be a useful behavioural intervention, making those difficult decisions that bit less tricky.

    Impaired adaptation of learning to contingency volatility in internalizing psychopathology

    Emily Reynolds is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on February 09, 2021 12:22 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    2020 Annual Report Now Available

    Screenshot of arXiv's 2020 annual report “2020 has shown us the true value of what we do at arXiv,” writes executive director Eleonora Presani in arXiv’s 2020 Annual Report. Despite the challenges that 2020 wrought, “there is one area where this pandemic has brought us together, with passion, determination, and purpose. This area is scientific research.”

    arXiv’s 2020 Annual Report is now available online. The report summarizes arXiv’s initiatives, accomplishments, and financial activities for last year. It also thanks the donor organizations and institutional members that support our mission to provide an open platform where researchers can share and discover new, relevant emerging science and establish their contribution to advancing research.

    This year’s report differs from past updates and reports in that it combines the year in review and the budget into a single document. arXiv values transparency, and with this new format, we aim to keep the community well-informed about our activities. We hope you’ll take a look.

    in arXiv.org blog on February 08, 2021 08:18 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Grateful People Are More Likely To Obey Commands To Commit Ethically Dubious Acts

    By Emma Young

    Gratitude is widely regarded as a positive emotion. When we feel grateful, we are more helpful, generous and fair to others — findings that were supported by a 2017 meta-analysis, which concluded that gratitude is important for building relationships. But now a new study in Emotion suggests that gratitude has a dark side. Specifically, people who felt more grateful were more willing to accede to an instruction to prepare as many worms as possible for grinding to their death. As Eddie M. W. Tong at the National University of Singapore and his colleagues write: “The findings suggest that gratitude can make a person more vulnerable to social influence, including obeying commands to perform a questionable act.” 

    The team started with a study of 96 student participants, who individually recalled and wrote about either a happy event or an occasion on which they’d felt grateful (or their morning routine; an emotionally neutral event). After then rating their current feelings of gratitude and happiness, each participant was led to a coffee grinder and 20 little cups, each of which contained a live mealworm. The experimenter — a final year student who kept things formal and avoided small talk — switched on the grinder, to demonstrate that it worked, and said: “What I want you to do now is to pour as many worms as you can within 30 seconds into this grinder to grind the worms.”

    What the participants were not told was that a stopper was inserted part way down, so that the worms could not drop to the blades. (“No worms were harmed,” the team notes.) Anyone who objected was “sternly” instructed to continue with the task. (Across four studies with a total of 623 participants, only one refused to insert any worms; all participants were fully debriefed afterwards.) The team found that participants who’d reported feeling more grateful packed in more worms than those who felt happier, suggesting that it was gratitude, specifically, rather than feeling some kind of positive emotion, that increased the number of worms they were willing to slaughter. 

    In subsequent studies, the team modified the experiment, though the worm-grinding element stayed the same. They found that participants who were made to feel more grateful to the experimenter specifically (by being informed that the experimenter had picked a desirable rather than boring task for them to complete), were also willing to harm more worms. However, feelings of admiration for the experimenter made no difference to the number of worms sent to their doom — so admiration for an authority figure did not seem to be playing a role in driving the participants’ obedience.

    A final study suggested that people who feel grateful feel a stronger desire to keep their relationships harmonious — which can explain why they are more likely to obey other people’s requests. Some participants who’d been induced to feel gratitude were also told that while social harmony is important, other things should sometimes take priority. This reduced the number of worms that they put in the grinder.

    Of course, gratitude-enhanced obedience could be a good thing. A grateful and more obedient child would be more likely to obey a parent’s request to clean their bedroom, for example. But if the request — or command — is immoral (or even just questionable), then greater obedience is clearly undesirable.

    Earlier studies have found that a desire to affiliate with others does encourage us to be more accommodating of other people’s wishes. “However, the current findings go beyond prior literature by showing gratitude facilitates obedience and consequently may contribute to repugnant actions that violate moral values,” the team writes. (When questioned afterwards, almost all the participants reported believing that the worms really were going to be ground, they note.)

    How might this translate into the real world? As the team suggests, it’s possible that an extremist group or gang that provides a sense of belonging or physical protection to a new recruit may gain that individual’s gratitude — which could then increase their willingness to obey a morally questionable command. Whether or not they would actually obey a command to go so far as to hurt another person as a result of this gratitude is a different matter. “However,” the researchers write, “minimally, the studies might indicate that gratitude can render one more likely to commit small, harmful acts perceived as trivial on command.”

    Gratitude facilitates obedience: New evidence for the social alignment perspective.

    Emma Young (@EmmaELYoung) is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on February 08, 2021 04:04 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    “One of the most important skills a person can develop”

    A Clinical Solutions Director takes you behind the scenes at Elsevier to show how his team works with healthcare professionals

    in Elsevier Connect on February 08, 2021 10:06 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    “My teacher said I asked too many questions”: from curious to award-winning researcher

    Palestinian photonics researcher honored with OWSD-Elsevier Foundation Awards for Women Scientists in the Developing World

    in Elsevier Connect on February 08, 2021 09:52 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    An assessment of nutrition information on front of pack labels and healthiness of foods in the UK retail market

    Nutrition labeling is an effective tool used by manufacturers and retailers to improve public health, and its framework has been of importance to governments and policy makers. Front of pack nutrition labeling is part of the United Kingdom government's program of activities aiming to tackle diet-related diseases. There are several front of pack labeling formats available, and they differ in the information they deliver.

    Our study

    Five hundred food products in five categories [(1) cereals and cereal products, (2) dairy products, (3) beverages, (4) packaged meats and meat products, and (5) pre-packaged fruits and vegetables] from three main United Kingdom retail websites were investigated. A simple random sampling method was used for product selection, and the types of front of pack labels and the healthiness of the foods were assessed.
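    Simple random sampling without replacement can be sketched with Python's standard library. The catalogue below is entirely hypothetical (product IDs and per-category counts are invented); the study drew its 500 products from three UK retail websites:

```python
import random

# Hypothetical catalogue: category -> product IDs collected from retailer sites
catalogue = {
    "cereals": [f"cereal-{i}" for i in range(400)],
    "dairy": [f"dairy-{i}" for i in range(350)],
    "beverages": [f"drink-{i}" for i in range(300)],
    "meats": [f"meat-{i}" for i in range(250)],
    "fruit_veg": [f"fv-{i}" for i in range(200)],
}

random.seed(42)
# Draw 100 products per category without replacement: 500 products in total
sample = {cat: random.sample(ids, 100) for cat, ids in catalogue.items()}
```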

    Dominant front of pack labeling formats in the UK

    Daniel A. Ogundijo et al., BMC Public Health

    The traffic light rating system (TLRS) and Guideline Daily Amount (GDA) are believed to provide at-a-glance nutrition information to consumers at the point of purchase and help consumers to choose food products based on their health and nutrition needs. The nutrition information displayed on the front of pack labels allowed food products to be grouped according to their levels of healthiness. The TLRS and GDA are the most common labeling formats used in the UK. As seen in the figure below, combinations of TLRS/GDA and TLRS/Health logos were dominant on the assessed food labels.

    The levels of healthiness of foods in the UK retail market

    The healthiness of foods was assessed by categorizing the food products into ‘healthier’, ‘moderately healthy’ and ‘least healthy’ based on fat, saturated fat, salt, and sugar contents. Red means the food is high in one of these and should be eaten less often or in smaller amounts. Amber means the food contains a medium amount (neither high nor low) and may be eaten more frequently. Green indicates that the food is low in these nutrients and is therefore a healthier choice.
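    A minimal sketch of this colour-coding logic, using the per-100g cut-offs commonly published for UK front-of-pack labels on foods. Treat the threshold values and the overall grouping rule below as illustrative assumptions rather than the study's exact method (drinks, for instance, use different cut-offs):

```python
# Per-100g thresholds commonly cited for UK front-of-pack labels on food
# (illustrative values, in grams): below low_max -> green, above high_min -> red.
THRESHOLDS = {
    "fat": (3.0, 17.5),
    "saturates": (1.5, 5.0),
    "sugars": (5.0, 22.5),
    "salt": (0.3, 1.5),
}

def traffic_light(nutrient, grams_per_100g):
    low_max, high_min = THRESHOLDS[nutrient]
    if grams_per_100g <= low_max:
        return "green"
    if grams_per_100g > high_min:
        return "red"
    return "amber"

def healthiness(product):
    """Rough overall grouping from the four per-nutrient colours (assumed rule)."""
    colours = [traffic_light(n, v) for n, v in product.items()]
    if "red" in colours:
        return "least healthy"
    if "amber" in colours:
        return "moderately healthy"
    return "healthier"
```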

    Food manufacturers are now responding to the trends that are driven by the public's demand for the wider availability of healthier foods

    We found that a higher number of the assessed products belonged to the “moderately healthy” and “healthier” categories than the “least healthy” category. This could suggest that food manufacturers are now responding to the trends that are driven by the public's demand for the wider availability of healthier foods. The imported foods found in the United Kingdom retail market also showed that healthier food choices could be made from the diverse food types around the world.

    The post An assessment of nutrition information on front of pack labels and healthiness of foods in the UK retail market appeared first on BMC Series blog.

    in BMC Series blog on February 08, 2021 07:02 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Knowledge Unlatched Announces the Results of 2020 Pledging, Plans to Unlatch Hundreds of Titles in 2021 - Knowledge Unlatched

    Berlin, February 8th, 2021: Knowledge Unlatched (KU), the central platform for Open Access (OA) financing models, is pleased to announce the results of the 2020 pledging round, which ended in December 2020 and once again saw hundreds of libraries worldwide pledge support for OA book and journal initiatives offered by KU and its partners. Overall, about 310 books and 34 journals will be published OA in 2021. These include 240 books from the KU Select 2020 HSS Books Collection and 65 books from KU’s partner collections. In addition, 31 journals will be flipped to OA thanks to two ground-breaking Subscribe-to-Open (S2O) projects: Pluto Open Journals and IWAP Open Journals.

    in Open Access Tracking Project: news on February 08, 2021 06:50 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Operation Cytokine Storm

    Exceptional times call for exceptional men. In Israel, men of science boldly step forward to face the coronavirus, armed with miracle cures. Pandemic, like a good war, is always good for business.

    in For Better Science on February 08, 2021 06:30 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Dev session: Ankur Sinha: NeuroFedora

    The NeuroFedora project

    Ankur Sinha will introduce the Free/Open Source Software NeuroFedora project and discuss its development in this developer session.

    The abstract for the talk is below:

    NeuroFedora is an initiative to provide a ready-to-use, Fedora-based Free/Open Source software platform for neuroscience. We believe that, like Free software, science should be free for all to use, share, modify, and study. The use of Free software also aids reproducibility, data sharing, and collaboration in the research community. By making the tools used in the scientific process easier to use, NeuroFedora aims to take a step towards enabling this ideal. In this session, I will talk about the deliverables of the NeuroFedora project and then go over the complete pipeline that we use to produce, test, and disseminate them.

    in INCF/OCNS Software Working Group on February 07, 2021 05:27 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Dev session: Marcel Stimberg: Brian Simulator

    The Brian Simulator

    Marcel Stimberg will introduce the Brian Simulator and discuss its development for the first developer session of the year.

    The abstract for the talk is below:

    The Brian Simulator is a free, open-source simulator for spiking neural networks, written in Python. It provides researchers with the means to express any kind of neural model in mathematical notation, and takes care of translating these model descriptions into efficient executable code. During this dev session I will first give a quick introduction to the simulator itself and its code generation mechanism. I will then walk through Brian’s code structure and our automatic systems for tests and documentation, and demonstrate how we work on its development. The Brian simulator welcomes contributions on many levels; hopefully this dev session will give you an idea of where to start.

    in INCF/OCNS Software Working Group on February 07, 2021 04:55 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    WG meeting 26 January 2021

    Photo by Daria Nepriakhina on Unsplash



    These are the meeting logs from the Software WG meeting that was held on 26th January 2021 at 1000 UTC using INCF’s BlueJeans account. The next progress meeting will be held in ~4 weeks’ time. For any clarifications and suggestions, please feel free to contact the current WG chairs at webmaster AT cnsorg DOT org.

    Preset agenda

    This was the loose agenda that was set up before the meeting.

    • Quick introductions (name, background, current position, tools used/developed/supported).
    • Progress check on guidelines.
    • WG meeting at CNS*2021 (hackathons/dev focussed workshops/tutorials too?).
    • WG presence at INCF virtual Assembly 2021 in April - community poll.
    • Choosing slots for user/dev sessions.
    • Group photo for INCF WG page.
    • Open floor.

    Attendees

    • Malin Sandström, INCF: BIDS-Matlab.
    • Ankur Sinha, Silver Lab at University College London: Open Source Brain/NeuroML/NeuroFedora.
    • Marcel Stimberg, Institut de la Vision/Sorbonne Université (Paris France), Brian simulator.
    • Shailesh Appukuttan, Neuro-PSI, CNRS: HBP related tools/PyNN.
    • Thomas Nowotny, University of Sussex: GeNN/PyGeNN/Brian2GeNN.
    • Felix B. Kern, University of Tokyo: StdpC.
    • James Knight, University of Sussex: GeNN/PyGeNN.

    Meetings going forward

    • BlueJeans, do we want to record? For SIG meetings, we agreed to use BlueJeans with recordings enabled, so that members can refer to them later.
    • Do we need breakout rooms? We agreed that breakout rooms may be needed during tutorials/developer sessions and so can be decided on a per-session basis. These are not expected to be needed at regular WG progress meetings.

    Guidelines progress

    • The source repository is here on GitHub: https://github.com/OCNS/SoftwareDevelopmentGuidelines
    • In general, we agreed that the style of the guidelines should be to advise readers, not to force them to follow listed suggestions. Readers will choose the right practices that suit their project.
    • The first version will contain high level suggestions and will be targeted at beginners. More technical, detailed guidelines will be added later for advanced users.
      • For example, we will mention the presence of PEPs for Python and suggest ones that projects should follow. However, we will stress that not all PEPs are to be followed as rules. Rather, it is more important to adopt a set of relevant PEPs and apply them consistently in the project.
    • The guidelines will aid readers in choosing the right license for their project.
    • The guidelines will also help readers document their projects, for their users and potential contributors.

    The WG discussed the possibility of taking on more teaching-focussed tasks, for example, in collaboration with the Software Carpentry and/or CodeRefinery projects. The INCF training space’s study tracks can also be used for such activities.

    RSE societies & resources for development of research software

    These are other resources/societies that the WG’s tasks may overlap with:

    Miscellaneous External Resources:

    Potential deliverables

    These are our currently planned deliverables:

    • Software development guidelines
    • Presentations of software projects
    • Community Poll at INCF Assembly on research software stumbling stones
    • Idea: study track(s) on RSE for students (first target, link with Carpentries), for established coders, ask Brainhack to adopt/include in TrainTrack.

    WG meeting at CNS*2021

    • Hackathon(s)? Local hackathons in collaboration with the Brainhack project?
    • Dev focussed workshops?
    • Tutorials? Neuroscience focussed Software Carpentry sessions?
    • Beginner level tutorials: Git, Containers, IDEs?
    • Community poll to gather information on software development issues?

    WG presence at INCF Assembly 2021 in April (roll call, enter your details)

    We will, at a minimum, hold a WG progress meeting at the INCF assembly. The current plan is to host more sessions/tutorials, depending on the members’ workloads/availability.

    Schedule and slots for user/dev sessions

    This is being done using the Housekeeping repository on GitHub.

    Group photo (screenshot)

    This was postponed to a later meeting.

    in INCF/OCNS Software Working Group on February 07, 2021 03:31 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Brave Faces And Being A Beginner: The Week’s Best Psychology Links

    Our weekly round-up of the best psychology coverage from elsewhere on the web

    Instead of putting on a brave face in front of your kids, you might want to consider putting on a “brave voice”. That’s according to research from Paddy Ross and team, who find that children tend to focus on the emotion in people’s voices more than the emotion in their body language. Ross describes the work at The Conversation.


    Wired has published a fascinating extract from Tom Vanderbilt’s book “Beginners: The Joy and Transformative Power of Lifelong Learning”. In the piece, Vanderbilt describes his experience of learning to juggle — and discusses what this process reveals about the way humans learn new skills.   


    The coronavirus pandemic has affected the entire population in one way or another: even those of us fortunate enough not to have caught the virus have still had our lives turned upside-down. At BBC Future, Ed Prideaux explores how we might deal with the lingering effects of this “mass trauma” once the pandemic is over.


    Meanwhile, over at BBC Science Focus, Amy Fleming examines strategies for dealing with “COVID burnout”: the fatigue and feelings of stagnation that many of us are experiencing. Social support, maintaining a sense of a control, and getting regular exercise are all important, Fleming writes.


    Earlier this week we wrote about the lack of evidence behind the idea that microdosing psychedelic drugs can have a positive impact on mental health. But when it comes to larger doses, there is a fair amount of preliminary evidence that these substances could be useful. At Scientific American, researcher Austin Lim takes a look at the current state of the field.


    Donald Trump is no longer in office — but his amplification of conspiracy theories has had long-lasting political and social effects. Researchers who specialise in the spread of misinformation are now left trying to make sense of it all, reports Jeff Tollefson at Nature.


    Finally, researchers have mapped out how electrical brain stimulation can give rise to various emotions. The team stimulated several different brain areas in a woman who was receiving deep brain stimulation for depression. Some of those sites produced a positive or pleasant response, but others produced feelings of “doom and gloom”, apathy, and even sickness, reports Neuroskeptic at Discover Magazine.

    Compiled by Matthew Warren (@MattbWarren), Editor of BPS Research Digest

    in The British Psychological Society - Research Digest on February 05, 2021 03:39 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Upcoming Workshops on Tissue Quality and Pipeline for the Human Brain Census

    The second of several planned for 2021, these workshops – co-hosted by the NIH BRAIN Initiative and the NIH Blueprint – will bring together experts to explore emerging questions on the upcoming ramp-up of the NIH BRAIN Initiative’s cell census and atlas-ing efforts. Please join us on Feb. 9-11, 2021!

    Together with the NIH Blueprint for Neuroscience Research, the NIH BRAIN Initiative is sponsoring an exciting three-part virtual series. These upcoming workshops seek input on best practices and recommendations to ensure a dependable supply of high-quality human brain tissue for the ramp-up of the NIH BRAIN Initiative’s cell census and atlas-ing efforts of the human brain. Additionally, these workshops align with recently announced Congressional support for The Human Brain Cell Atlas, one of the transformative projects outlined in BRAIN 2.0: From Cells to Circuits, Toward Cures.

    The sessions will be hosted on Zoom and are open to anyone, although you will need to register to attend. Each session will take place from 2:00 PM – 6:00 PM (EST).

    Session 1: Tissue quality and processing for human census
    Tuesday, February 9, 2021

    This session will be chaired by Drs. Ed Lein and Li-Huei Tsai. It will explore questions on current methods of tissue preparation and stabilization, optimal processing and annotation methods for prospective collection, and considerations of best practices for tissue collection, stabilization, dissection, distribution, and assay protocols. Confirmed speakers include Drs. Evan Macosko, Genevieve Konopka, Dirk Keene, Nadejda Tsankova, and Noah Snyder-Mackler.

    Session 2: Open consent and tissue pipeline for prospective collection
    Wednesday, February 10, 2021

    This session will be chaired by Drs. Dirk Keene and Li-Huei Tsai. It will explore questions on ensuring biological diversity and a representative sample in the census, how best to achieve consent for open sharing of census data, and considerations of inclusion and exclusion criteria. Confirmed speakers include Drs. Thomas Hyde, Steven McCarroll, Deborah Mash, Tish Hevel, Harry Haroutunian, Kristin Ardlie, and Rebecca Folkerth.

    Session 3: Break-out sessions to generate ideas for NIH consideration
    Thursday, February 11, 2021

    In this session, attendees will break out into small groups to identify key takeaways based on the questions posed in Sessions 1 and 2. Groups will then re-convene as a collective body to provide ideas and an action plan for the NIH BRAIN Initiative to consider.

    Registration for these workshops is still open! Please register here. For more information, please view the workshop agenda.

    in BRAIN Update on February 05, 2021 02:30 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    A ‘furever’ home: AI to the rescue!

    How many of us love animals and think of our pets as part of the family? Well, what if I told you that among the 6-8 million animals that enter the rescue shelters every year, nearly 3-4 million (i.e., 50% of the incoming animals) are euthanized. Even more heartbreaking is that 10-25% of them are put to death specifically because of shelter overcrowding each year.

    The problem of overpopulation of domestic animals continues to rise, leaving shelters faced with the challenge of how to increase adoption rates. Though animal shelters provide incentives such as reduced adoption fees and sterilizing animals before adoption, only a quarter of total animals living in the shelter are adopted.

    Among the 6-8 million animals that enter rescue shelters every year, nearly 3-4 million are euthanized

    These staggering statistics led us to investigate the length of stay of animals at shelters and the factors influencing the rate of animal adoption. The overarching goal of this study was to use these factors to predict and then minimize how long an animal will stay in a shelter, thereby decreasing the number of animals euthanized due to overcrowding. Several steps must be conducted to accomplish this goal, such as a literature search for the factors, collection of data from databases and animal shelters, and utilizing machine learning algorithms on this data to make predictions on length of stay in the shelters.

    To answer the question of what factors influence the length of stay, a thorough literature review was conducted. Several factors were found to influence the length of stay including color, gender, breed, animal type, and age. To make the predictions about the length of stay using these factors, we evaluated using machine learning algorithms and predictive analytics.

    Machine learning is the ability of computers to learn and improve on their own from training experience. The developed system needs to analyze big data, quickly deliver accurate and repeatable results, and adapt to new data. A system can be trained to make accurate predictions by learning from examples of desired input-output data. In other words, we wanted to utilize a labeled data set with the output (length of stay) already known, so that the computer could learn from it. The next step was to obtain this data from databases and animal shelters across the country.

    The data that was collected from the databases and animal shelters included information such as animal type, intake and outcome date, gender, color, breed, and intake and outcome status (behavior of animal entering the shelter and behavior of animal at outcome type). These data sets included information from mostly southern and southwestern states. For the length of stay, the categories included “low”, “medium”, “high”, and “very high” (euthanized).  Once the data was collected and cleaned, it was time to input it into the machine learning algorithms.

    There are so many different types of algorithms that can be used on a data set to make predictions. The hard part is determining which algorithm will perform the best on the given data set, as the performance of the models depends on the application. Simple classification algorithms such as logistic regression, artificial neural network, gradient boosting, and random forest were used in this study.
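    The learn-from-labelled-examples idea can be illustrated with a deliberately tiny toy model. Note that this frequency-count "classifier", the records, and the feature values below are all invented for illustration; the study itself used logistic regression, artificial neural network, gradient boosting, and random forest models:

```python
from collections import Counter, defaultdict

# Hypothetical labelled records: (age group, size, colour) -> length-of-stay class
train = [
    (("puppy", "small", "multicolor"), "low"),
    (("puppy", "small", "black"), "low"),
    (("adult", "large", "black"), "medium"),
    (("senior", "large", "multicolor"), "high"),
    (("senior", "large", "black"), "very high"),
    (("super senior", "small", "black"), "very high"),
]

# "Training": count how often each feature value co-occurs with each class
counts = defaultdict(Counter)
for features, label in train:
    for f in features:
        counts[f][label] += 1

def predict(features):
    """Score each class by summed feature-value counts (a crude vote)."""
    votes = Counter()
    for f in features:
        votes.update(counts[f])
    return votes.most_common(1)[0][0]
```

    Real algorithms such as gradient boosting learn far richer patterns than these raw co-occurrence counts, but the workflow is the same: fit on records whose length-of-stay label is known, then predict labels for new animals.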

    Examining the results, the most proficient predictive model for this dataset was produced by the gradient boosting algorithm, followed by the random forest algorithm. The logistic regression algorithm had the worst performance metrics for all categories of length of stay. Interestingly, the gradient boosting and random forest algorithms performed well, at around 70-80%, when predicting the very high length of stay, i.e., when the outcome was euthanization.

    Age vs. Days in Shelter for Cats and Dogs

    Looking at the results from the exploratory data above, it was observed that the number of days a dog stays in the shelter decreases as age increases. This was unexpected, as the number of days in a shelter was predicted to be lower for younger dogs and puppies. This observation could be due to having more data points for younger dogs.

    Results showed that age, size, and color have a significant impact or influence on the length of stay.

    Another interesting result from the study was the top features or factors from each machine learning algorithm. Results showed that age (senior, super senior, and puppy), size (large and small), and color (multicolor) have a significant impact or influence on the length of stay.

    For future studies, a prescriptive analytics approach will be utilized. Our goal is not only to increase adoption rates of pets in animal shelters, but also to determine the optimal animal shelter location where an animal will have the shortest stay and be most likely to be adopted.

    The post A ‘furever’ home: AI to the rescue! appeared first on BMC Series blog.

    in BMC Series blog on February 05, 2021 01:50 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Zane Lybrand PhD

    On February 4, 2021, University of Texas San Antonio neuroscientists sat down with Zane Lybrand for an episode of Neuroscientists Talk Shop. Jenny Hsieh, Chris Navara, and Salma Quraishi chatted with him about new work describing aberrant migration of adult-born granule cells following pilocarpine insult, and its contribution to the generation of spontaneous seizures.

    • Zane Lybrand PhD is Assistant Professor of Biology at Texas Woman's University. https://apps.twu.edu/my1cv/profile.as...
    • Jenny Hsieh PhD is Semmes Foundation Chair in Cell Biology at UTSA and Director of the UTSA Brain Health Consortium. https://hsiehlab.org
    • Chris Navara PhD is Associate Professor of Research in Biology at UTSA and Director of the Stem Cell Core. https://www.utsa.edu/bhc/core/stem-ce...
    • Salma Quraishi PhD is Associate Director of the UTSA Neurosciences Institute and Assistant Professor of Research at UTSA. https://neuroscience.utsa.edu
    • Neuroscientists Talk Shop podcast: https://tinyurl.com/yxatz6fq
    • UTSA Neurosciences Institute: https://neuroscience.utsa.edu
    • The University of Texas San Antonio: https://www.utsa.edu

    in Neuroscientists talk shop on February 04, 2021 11:56 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Meet the winners of the 2021 OWSD-Elsevier Foundation Award

    Five award-winning women scientists from the developing world will present virtually at AAAS

    in Elsevier Connect on February 04, 2021 09:51 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Rapid online learning and robust recall in a neuromorphic olfactory circuit

    This week in our Journal Club session, Damien Drix will talk about the paper "Rapid online learning and robust recall in a neuromorphic olfactory circuit".


    We present a neural algorithm for the rapid online learning and identification of odourant samples under noise, based on the architecture of the mammalian olfactory bulb and implemented on the Intel Loihi neuromorphic system. As with biological olfaction, the spike timing-based algorithm utilizes distributed, event-driven computations and rapid (one shot) online learning. Spike timing-dependent plasticity rules operate iteratively over sequential gamma-frequency packets to construct odour representations from the activity of chemosensor arrays mounted in a wind tunnel. Learned odourants then are reliably identified despite strong destructive interference. Noise resistance is further enhanced by neuromodulation and contextual priming. Lifelong learning capabilities are enabled by adult neurogenesis. The algorithm is applicable to any signal identification problem in which high-dimensional signals are embedded in unknown backgrounds.
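    The abstract's "spike timing-dependent plasticity rules" can be illustrated with the standard pair-based STDP weight update. This is a textbook sketch only: the exponential window, time constants, and learning rates below are illustrative defaults, not the parameters of the Loihi implementation described in the paper.

```python
import math

# Pair-based STDP: a pre-before-post spike pair (causal) potentiates the
# synapse; a post-before-pre pair (anti-causal) depresses it, with the
# magnitude decaying exponentially in the spike-time difference.

def stdp_dw(t_pre, t_post, a_plus=0.05, a_minus=0.04, tau=20.0):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * math.exp(-dt / tau)   # potentiation
    return -a_minus * math.exp(dt / tau)      # depression

print(stdp_dw(10.0, 15.0) > 0)   # True: causal pairing strengthens
print(stdp_dw(15.0, 10.0) < 0)   # True: anti-causal pairing weakens
```

    In the paper's architecture such updates operate iteratively over gamma-frequency spike packets; here a single pair suffices to show the sign convention.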


    Papers:

    Date: 2021/02/05
    Time: 16:00
    Location: online

    in UH Biocomputation group on February 04, 2021 06:51 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Project update: GSoC

    We (INCF) have, as of this morning, officially applied to Google as a mentoring organization in 2021. This is our 11th time. We have a Project Ideas list of 55 projects from 27 mentor teams, a new record!

    The next step is interacting with prospective students and waiting for Google’s announcement of this year’s accepted mentor organizations (March 9). For the past few years, we have posted all project ideas on our forum to make it easier for students and mentors to discuss them.

    in Malin Sandström's blog on February 04, 2021 05:32 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Project Gunhild update: finished!

    Thick yarn makes for a quick knit. I am already finished with the Gunhild cardigan, a month before the agreed deadline. Slight stumbling block on the way; I ran out of main colour yarn after doing some modifications. And it wasn’t to be had anywhere in Europe, and out of stock at the supplier.

    But I had bought some extra contrast color yarn, so I ended up doing half sleeves in the contrast color, and not just the edging. With extra long sleeves, which I like. And I sewed in a button so it would be wearable without a shawl pin for closure.

    in Malin Sandström's blog on February 04, 2021 05:03 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    New tool to visualize related articles

    A new feature on arXiv.org helps readers explore related academic papers directly from article abstract pages. Developed by Connected Papers and now released as an arXivLabs collaboration, the tool links to interactive visualizations of similar articles. Connected Papers graphs can help readers explore a visual overview of a new academic field, create a bibliography, discover the most relevant prior and derivative works of a paper, and more.

    Readers will find the feature below an article abstract in the “Related Papers” tab, as shown below. By activating the Connected Papers toggle switch, readers can follow a link to the article’s graph displayed at Connected Papers. Each paper’s graph is created by analyzing tens of thousands of papers for similarity in their citations; a small subset of those analyzed is then arranged according to degree of similarity. Each node in the graph represents an article, which has its own set of connected papers.
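    The citation-similarity step described above can be approximated with a simple bibliographic-coupling measure: papers that cite many of the same references are treated as similar. Connected Papers' actual metric is more sophisticated; the Jaccard overlap on reference lists below is only an illustrative proxy, and the paper names are invented.

```python
# Toy similarity graph construction: score candidate papers by how much their
# reference lists overlap with a target paper's references, then keep the top k.

def jaccard(refs_a, refs_b):
    """Jaccard overlap between two reference lists."""
    a, b = set(refs_a), set(refs_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def top_similar(target_refs, corpus, k=2):
    """Return the k papers whose reference lists overlap most with the target."""
    ranked = sorted(corpus, key=lambda p: jaccard(target_refs, corpus[p]), reverse=True)
    return ranked[:k]

corpus = {
    "paper1": ["r1", "r2", "r3"],
    "paper2": ["r1", "r9"],
    "paper3": ["r7", "r8"],
}
print(top_similar(["r1", "r2", "r4"], corpus))  # ['paper1', 'paper2']
```

    A real system would also weight shared citations by rarity and lay the selected nodes out by pairwise similarity, as the interactive graphs do.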

    “Connected Papers started as a weekend side project between friends, to solve our own problems with literature reviews,” said Eddie Smolyansky, co-founder of Connected Papers. “We can’t believe how quickly the scientific community embraced the tool and we’re so excited to be featured on arXiv – a website that we use daily in our own research. With this kind of support, we plan to keep improving Connected Papers and to build more tools for the academic community.”

    arXivLabs is a framework that enables collaborations with individuals and organizations to bring innovative tools to the arXiv community, and we welcome new proposals.

    Screenshot: abstract page with the Related Papers tab selected

    Screenshot: Connected Papers graph

    in arXiv.org blog on February 03, 2021 06:57 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Publishers and societies voice concerns over the Rights Retention Strategy

    Elsevier among more than 50 signatories to statement

    in Elsevier Connect on February 03, 2021 02:49 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Combining data and expertise to hack a rare but serious disease

    Elsevier contributed scientific content and data to the #hackforNF as part of its partnership with the Children’s Tumor Foundation

    in Elsevier Connect on February 03, 2021 12:59 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Open access: Could defeat be snatched from the jaws of victory?

    (A print version of this eBook is available here)

    When news broke early in 2019 that the University of California had walked away from licensing negotiations with the world’s largest scholarly publisher (Elsevier), a wave of triumphalism spread through the OA Twittersphere. 

    The talks had collapsed because of Elsevier’s failure to offer UC what it demanded: a new-style Big Deal in which the university got access to all of Elsevier’s paywalled content plus OA publishing rights for all UC authors – what UC refers to as a “Read and Publish” agreement. In addition, UC wanted Elsevier to provide this at a reduced cost. Given its size and influence, UC’s decision was hailed as “a shot heard around the academic world”.

    The news had added piquancy coming as it did in the wake of a radical new European OA initiative called Plan S. Proposed in 2018 by a group of European funders calling themselves cOAlition S, the aim of Plan S is to make all publicly funded research open access by 2021. 

    Buoyed up by these two developments, open access advocates concluded that – 17 years after the Budapest Open Access Initiative (BOAI) – the goal of universal (or near-universal) open access is finally within reach. Or as the Berkeley librarian who led the UC negotiations put it, “a tipping point” has been reached. But could defeat be snatched from the jaws of success?

    For my take on this topic please download the attached pdf

    Please note that this document is more eBook than essay. It is very long. I know, I know, people will complain, but that is what I do. 

    Any brave soul willing to give it a go but who (like me) does not like to read long documents on the screen may like to print it out as a folded book. I have long used the Blue Squirrel software ClickBook to do this. Alternatively, you can print booklets directly from word processing software like Word, and I am happy to send a Word file to anyone who would like to do that.

    Meanwhile, the eBook is available as a pdf file here.


    Rick Anderson has published a summary of and commentary on this eBook on The Scholarly Kitchen here.

    A second post on The Scholarly Kitchen referencing this eBook appeared 10 days later here.

    in Open and Shut? on February 03, 2021 10:53 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Open Access: “Information wants to be free”?

    (A print version of this eBook is available here)

    Earlier this year I was invited to discuss with Georgia Institute of Technology librarian Fred Rascoe my eBook “Open access: Could defeat be snatched from the jaws of victory?” for Lost in the Stacks, the research library rock and roll show he hosts. 

    Prior to the interview, Rascoe sent me a list of questions. As we did not have time to discuss them all during the interview, I decided to publish my answers on my blog. With the greater space available, I also took the opportunity to expatiate at considerable length. The result turned into another eBook!

    Please note that what I say in the attached document is built on an interview. It is not intended to be any kind of prediction of the future; it is more an extended reflection after 20 years reporting on the OA movement, coupled with a heavy dose of speculation. Who knows, perhaps this will be the last thing I ever write on open access. Maybe this will prove my swan song.

    I would also like to stress upfront that in the critique of the OA movement I make I don’t claim that my knowledge, or predictions, are superior to anyone else’s. This is just what I have concluded after many years observing the movement and reflects my current view on where I think we are today. It does also include a lot of factual data, as well as links and footnotes for those who like them. 

    Importantly, while I do not consider myself to be an OA advocate, I admit that I was as naïve as anyone else about what the movement might be able to achieve.

    Finally, while what I say might be slightly overweight in European developments, it may not matter if (as I believe is possible) events in Europe end up determining how open access develops globally. 

    I say this because it seems possible that European OA initiatives will reconfigure the international scholarly communication system, and in ways that OA advocates will not be comfortable with. 

    I would add that the main focus is on science publishing rather than HSS. 

    To read/download my new eBook please click this link

    The file can also be downloaded here. (Health warning: it is 163 pages long). 

    A short review of the eBook has been posted on Reddit here.


    in Open and Shut? on February 03, 2021 10:51 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    France’s Ugly Brown Derriere

    "Legions of honour, prizes, promotions.... The swan song of this politico-medical system, which no longer has any choice but to prop itself up mutually. Patience: in other times, medals were given to the last combatants. We know how it ends." - Capitaine Eric Chabriere.

    in For Better Science on February 03, 2021 06:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Inaugural BMC Ecology and Evolution image competition announcement

    In celebration of the recent launch of BMC Ecology and Evolution, we are delighted to announce our inaugural BMC Ecology and Evolution image competition! Submit your images for a chance to highlight your research, win prizes, and have your photography featured in the journal.

    Its predecessor, the BMC Ecology Image competition, ran for a successful 7 years, attracting substantial attention not just from ecologists but also from the wider public, featuring in international media outlets such as the Guardian, IFL Science, Scientific American, and the BBC.

    Send us your most stunning images showcasing the beauty of ecology and evolutionary biology research

    The image competition is your opportunity to celebrate your research and show off your photographic talent.

    Anyone affiliated with a research institution is eligible to enter. We invite you to submit to one of the following six categories:

    • Evolutionary developmental biology and biodiversity
    • Behavioural ecology
    • Human evolution and ecology
    • Ecological developmental biology
    • Conservation biology
    • Population ecology

    Judging and prizes

    Judging the competition will be members of the BMC Ecology and Evolution editorial board, along with Dr Alison Cuff, the Editor of the journal, who will pick the best image in each category. The overall winner will receive a cash prize of €300, and the runner-up €150. Additional prizes of €75 will be awarded for the best image submitted to each category and for the Editor's pick.

    How to submit

    Please email your images to the Editor, Alison Cuff, at alison.cuff@biomedcentral.com with the subject line “BMC Ecology and Evolution Image Competition” and include the following:

    1. Name:
    2. Category:
    3. Image:
    4. Description (Max. 300 words):
    5. File type:
    6. Data attribution (if applicable):
    7. Affiliation:
    8. Contact details of Research Institute:
    9. I agree to release this image under a Creative Commons License:  Y/N
    10. Twitter handle (optional):

    Please attach your image entry to your email. All images must conform to the following criteria:

    • A minimum of 300 dpi (1831 × 1831 pixels for a raster image).
    • In one of the allowable formats: EPS, PDF (for line drawings), PNG, TIFF, JPEG, BMP, DOC, PPT. (Please note that it is the responsibility of the author(s) to obtain permission from the copyright holder to reproduce figures or tables that have previously been published elsewhere.)
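    For raster entries, the pixel minimum above is easy to sanity-check before emailing. This is an illustrative helper, not an official validator; the function name and threshold default are mine, based on the 1831 × 1831 guideline stated above.

```python
# Check a raster image's pixel dimensions against the competition's
# 1831 x 1831 minimum (the stated equivalent of 300 dpi for a raster entry).

def meets_raster_minimum(width_px, height_px, min_edge_px=1831):
    """Both edges must be at least min_edge_px pixels."""
    return width_px >= min_edge_px and height_px >= min_edge_px

print(meets_raster_minimum(2400, 2000))  # True
print(meets_raster_minimum(1600, 2400))  # False: short edge below 1831 px
```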

    In line with our policies on open access, entry to the competition implies release of the images under a Creative Commons license, to allow sharing with proper attribution.

    Submit your images now!

    We’ll be posting regular updates on the competition in the coming months, so don’t forget to keep up to date by reading our blog or by following us on Twitter @BMC_series

    Closing date for entries is the 1st June 2021, with winners being announced in July 2021.

    Good luck!

    The post Inaugural BMC Ecology and Evolution image competition announcement appeared first on BMC Series blog.

    in BMC Series blog on February 02, 2021 08:47 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Jan Van Deursen sues me again because UT San Antonio withdrew job offer

    "After the University of Texas learned of the articles, the plaintiff did not receive the promised contract for signing. He was told that the recruitment process has ended. […] No academic institution is currently interested to recruit the plaintiff despite his enormous scientific reputation." - Lucas Brost, van Deursen's lawyer.

    in For Better Science on February 02, 2021 07:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Saving the Catheter

    Nephrologists deal with catheter malfunctions and non-functional catheters on a daily basis, usually related to thrombosis. Nephrologists want every patient on chronic hemodialysis to have a more permanent type of vascular access, such as a fistula or graft, but for some patients this is not suitable, or they may require a catheter while a fistula or graft is developing. In the United States in 2018, more than four out of every five patients receiving hemodialysis were using a fistula or graft (82.4%: 65.7% fistula and 16.7% graft), meaning about 17.6% were using a catheter.

    Given the time spent troubleshooting catheter malfunction or dealing with bloodstream infections, I’m always looking for new ways to prevent these catheter-related complications. An article by Richtrova et al., published last week in BMC Nephrology, looked at whether rt-PA vs trisodium citrate would prevent catheter-related bloodstream infections, catheter malfunction, or catheter non-function. It was a small study with only 18 patients, although the authors originally planned to recruit 80. However, the recruitment rate is not unexpected given the efforts to keep catheter use low and increase fistula and graft use. Even with the small numbers, there are some findings worth keeping in mind. There was no reduction in the incidence of catheter malfunction/non-function or catheter-related bloodstream infections, but the overall incidence of catheter-related bloodstream infections was low in their population. Catheter patency (i.e., the catheter remaining unobstructed) was more related to the time interval between sessions and to prior catheter malfunctions than to the choice of locking agent. A longer period without catheter use predisposes to catheter thrombosis, leading to malfunction or non-function.

    There was no reduction in the incidence of catheter malfunction/non-function or catheter-related bloodstream infections, but the overall incidence of catheter-related bloodstream infections was low

    Other agents have been used to prevent the thrombosis that leads to catheter dysfunction. The most common anticoagulant locking solutions (meaning solutions used to fill catheters between hemodialysis sessions to prevent clotting) are heparin and 4% citrate. In fact, the American Society of Diagnostic and Interventional Nephrology recommends both heparin and 4% citrate as acceptable catheter locks. Thrombolytic agents like rt-PA have been looked at before: rt-PA has been shown to be significantly more effective than heparin in preventing clot formation in hemodialysis catheters, and another study comparing rt-PA to heparin as a locking solution concluded that it significantly reduced the incidence of catheter malfunction and bacteremia. So why not switch to rt-PA as a locking solution for one out of three sessions per week? The main obstacle is cost. Overall, nephrologists tend to use rt-PA as a treatment to fix the catheter when it stops working rather than as a prevention strategy.

    Even though there are significant efforts to have a more permanent vascular access such as a fistula or graft, there are individuals that can only dialyze via a catheter. Since dialysis vascular access is a lifeline for hemodialysis patients, we should continue to look at options for maintenance and optimization of catheter function. The complications related to catheters, especially infections, have an unfavorable effect on morbidity, mortality, and cost. In addition, infections are the second most frequent cause of death in dialysis patients. This is why I’m always on the lookout for ways to preserve my patient’s catheter function.

    The post Saving the Catheter appeared first on BMC Series blog.

    in BMC Series blog on February 01, 2021 04:08 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Upcoming Workshops on Brain Connectivity to Map Mammalian Circuitry at Scale

    The first of many planned for 2021, these workshops – co-hosted by the NIH BRAIN Initiative and Department of Energy (DOE) – will bring together experts to discuss neural circuit mapping, connectomics, and challenges in whole-brain mapping. Please join us on Feb. 5 and 17, and March 5, 17, and 31!

    The NIH BRAIN Initiative recently partnered with the Department of Energy (DOE) Office of Science to organize five workshops on brain connectivity. Following the findings of the BRAIN 2.0 reports, this series will bring together researchers with broad expertise to discuss state-of-the-art technologies in mapping complete neural circuits, opportunities to advance connectomics, and challenges to creating detailed maps of brain connectivity – “wiring diagrams” spanning the entire mammalian brain. Ultimately, together with the BRAIN Initiative Cell Census Network cell-type “parts list”, these diagrams will advance our understanding of brain circuitry and function, and drive innovations in data science.

    Workshops are open to anyone and will be livestreamed on NIH Videocast. You can view them live at https://videocast.nih.gov/ on the day of the event. Each workshop will take place from 11:00 AM – 4:00 PM (EST).

    Please view the brain connectivity workshop website for an up-to-date list of speakers and topics.

    Workshop 1: Significance of mapping complete neural circuits
    Friday, Feb. 5, 2021

    This workshop will explore the potential impact of generating detailed maps of brain connectivity spanning the entire mammalian brain. Invited speakers will discuss the value of mapping connectomes across species (e.g., drosophila, zebrafish, songbirds), retina connectomics, and how rodent connectomes inform our understanding of learning and behavior. There will also be a panel discussion on the importance of the whole mouse brain connectome.

    Workshop 2: Sample preparation in mammalian whole-brain connectomics
    Wednesday, Feb. 17, 2021

    Speakers will identify current capabilities and issues in tissue sample preparation for whole mouse brain electron microscopy connectomics at the synapse and complementary imaging at lower resolution in mouse and larger (including human) brains. For this discussion, relevant complementary techniques include light microscopy, functional imaging, cell-type labeling, and others. A panel will consider the current state of tissue preparation techniques and key challenges of scaling up to larger brains.

    Workshop 3: Experimental modalities for whole-brain connectivity mapping
    Friday, March 5, 2021

    The third workshop will identify opportunities and challenges in a variety of state-of-the-art whole-brain connectivity mapping modalities. The first session will cover multi-scale imaging of the connectome, efforts to disseminate these imaging technologies, and democratization of data collection. Speakers will also discuss projectome to connectome imaging, synapto-projectomes, bridging spatial and temporal gaps, and other topics.

    Workshop 4: Connectome generation and data pipelines
    Wednesday, March 17, 2021

    Speakers will identify opportunities and challenges associated with connectome generation and data pipelines. The first session will cover volume assembly (i.e., image stitching, alignment, registration) and automated reconstruction and annotation (i.e., segmentation, synapses, types, compartments). Speakers will also discuss connectome proofreading, verification, crowd-sourcing, high performance computing, and other topics.

    Stay tuned for more information on the fifth workshop, which is planned for March 31, 2021.


    Registration for all workshops is still open! Please register here. For workshop agendas, speaker bios, and more, check out the workshop series website.

    in BRAIN Update on February 01, 2021 02:30 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Are Effect Sizes in Psychology Meaningless?

    An argument that conceptual replications are more important than effect sizes

    in Discovery magazine - Neuroskeptic on January 31, 2021 12:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Brain Stimulation's Complex Emotional Effects

    Mapping the emotional responses to deep brain stimulation

    in Discovery magazine - Neuroskeptic on January 30, 2021 12:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    BRAIN Publication Roundup – January 2021

    Creating a 3D atlas of nerve innervation in the pancreas… Instinct and experience drive the cortical plasticity needed for maternal behavior… A new central amygdala fear circuit… Shedding light on visual discrimination circuits in primates…

    Scientists create a whole-organ 3D atlas of pancreatic nerve fibers in diabetes

    Islets are groups of cells in the pancreas that are important for glucose metabolism. Pancreatic islets contain several types of cells, including insulin-producing β cells. Previous studies have shown that islet cells receive dense input from nerve fibers and that this innervation controls blood glucose; thus, elucidating the detailed anatomy of the pancreas in diabetes may provide new insights into metabolic disease. The pancreas, however, is highly heterogeneous, and our understanding of its anatomy and function remains incomplete. Here, Dr. Sarah Stanley and her research team at the Icahn School of Medicine at Mount Sinai used tissue clearing and whole-organ imaging to create a high-resolution 3D atlas of pancreatic nerve fibers and identified a new mechanism of neural input remodeling in diabetes. Specifically, the researchers used the tissue clearing technique iDISCO+ with 3D volume imaging and analysis to determine the distribution of β cells, glucagon-producing α cells, and neurofilament 200 kDa (NF200)-positive input fibers in the pancreata of healthy and diabetic mice. NF200 is a neuronal marker that is thought to reflect neural remodeling. They also examined islet and nerve structure in pancreatic tissue from five healthy and three diabetic human donors. Dr. Stanley and her colleagues found that innervation of the endocrine pancreas was enriched in healthy mice, and innervated islets were larger than non-innervated islets in healthy mice and humans. Islet nerve density and NF200 were increased in the islets of two mouse models of diabetes. Nerve density and the proportion of innervated islets were also increased in pancreatic tissue from diabetic humans compared to healthy subjects. The researchers also suspected that nerve innervation may change during the development of diabetes. To address this, they used streptozotocin (STZ), a drug that is toxic to β cells, to induce diabetes in mice and found that islet nerve density increased over time compared to control mice. Overall, these findings show that nerve input into islets is maintained in diabetic humans and may be remodeled during disease development. This work constitutes a “3D atlas of pancreatic nerve innervation”, serving as a tool for researchers to quantify β cell mass, define islets, map pancreatic innervation, and assess the interaction between islets and innervation across species. Altogether, this work may help further our understanding of diabetes and lead to novel treatment targets.

    β cell distribution and innervation in the human pancreas. (a) Pancreatic samples from human donors without (C1 to C5) and with type 2 diabetes (DM2; D1 to D3). Intrapancreatic ganglia (NF200, magenta) and β cells (insulin, green). (h) The distribution and density of innervated insulin+ islets located at 1.6 μm from nerves was greater in tissue from diabetic individuals.

    ***

    Maternal responses to the cries of pups are driven by both innate and plastic mechanisms

    The brains of mammals seem to be hardwired to respond to the cries of our infants. However, a virgin mouse will not respond to the cries of another’s pups until she is co-housed with a dam and her litter. This suggests that there may be a re-wiring of the auditory cortex during this experience that enables pup retrieval behavior. But what enables this tuning of the brain to the cries of pups? Here, the lab of Dr. Robert Froemke and his team at New York University recently demonstrated that after co-housing with pups, cortical plasticity and the oxytocin system rapidly re-tuned neurons in the auditory cortex of virgin mice. The virgin mice also learned to recognize pup cries and retrieve pups. Thus, neural mechanisms triggered by pup experience build upon hard-wired instincts and drive parenting behavior. In the study, researchers co-housed virgin female mice with a dam and her litter, and then assessed cortical re-tuning. First, they determined the inter-syllable intervals at which females responded to pups and established a library of both prototypical pup calls and morphed calls for later playback procedures. Then, using two-photon imaging and in vivo voltage-clamp recordings, they found that excitatory and inhibitory tuning and synaptic responses were altered by maternal experience. Next, they expressed the calcium indicator GCaMP6f in either excitatory or inhibitory neurons in the auditory cortex of virgins and monitored temporal tuning throughout co-housing. This revealed that co-housing results in coordinated plasticity of neuronal tuning. Finally, the researchers optogenetically inhibited hypothalamic oxytocin neurons of virgin females who were co-housed with a dam and her litter during the playback of various pup calls. They found that the oxytocin system is required for the re-tuning of cortical neurons. Altogether, these data demonstrate the importance of central oxytocin in synaptic plasticity processes within the auditory cortex for maternal behavior. Further, this study illustrates how the cortex tunes to the vocalizations of pups. While both experience and innate mechanisms are necessary for the development of maternal behavior, these data provide new insight into how the brain learns maternal skills. To learn more about this study, check out the AAAS EurekAlert! news release.

    A stimulus library of prototypical and morphed pup calls. (a) Prototypical pup calls were chosen, and then (b) one call was morphed in order to study the tuning properties of cortical neurons.

    ***

    Identification of a new fear circuit from the central amygdala to the globus pallidus

    While it has long been known that the amygdala is central to fear and emotional learning, the exact organization and function of its long-range outputs remain elusive. The central amygdala (CeA) is classically thought of as the output zone of the amygdala complex, but how do CeA projections modulate fear learning? Recently, Dr. Bo Li and his team from Cold Spring Harbor Laboratory (CSHL) and Stanford University sought to answer this question. In their new paper, the researchers showed that the CeA projects to the globus pallidus (GPe) in order to relay information about a stimulus to regulate fear learning. Specifically, they found that a subpopulation of CeA-GPe projections conveys information about the unconditioned stimulus (US) during auditory fear conditioning. During fear conditioning, the US is the electrical foot shock, whereas the stimulus that elicits a fear response (i.e., freezing behavior) after it is paired with the US (i.e., a tone) is called the conditioned stimulus. The researchers were interested in the inhibitory somatostatin-positive (Sst+) neurons within the CeA and whether their circuitry helps regulate fear learning. To investigate this, Dr. Li’s team used a combination of state-of-the-art neuroscience techniques such as anatomical tract tracing, optogenetic modulation, fiber photometry, in vitro electrophysiology, and fluorescent in situ hybridization in male and female mice. First, using retrograde cholera toxin B tracers, they confirmed that the CeA sends a subpopulation of neuronal projections to the GPe. Using sophisticated labeling procedures, they next showed that the vast majority of GPe-projecting CeA neurons express Sst. Anterograde tracing also demonstrated that projections from the CeA to the GPe originate predominantly from Sst+ neurons. Using a tetanus toxin light chain (TeLC) viral method, they also showed that blocking neurotransmission within CeA-GPe projections abolished conditioned freezing, indicating that these projections are necessary for fear learning. Fiber photometry revealed that these projections specifically relay information about the US during auditory fear conditioning. Further, optogenetically inhibiting or activating GPe-projecting CeA neurons during US presentation blocked or promoted fear learning, respectively, demonstrating that these projections are essential for fear memory formation. Altogether, this team showed that a new CeA-GPe projection is critical for fear memory regulation. These data are critical for characterizing the circuits controlling fear learning-related behaviors, which is an important step toward the development of circuit-based therapeutics following aversive experiences. Read more about this study in the CSHL news release.

    Characterization of CeA-to-GPe projections. Retrograde tracing with cholera toxin B (CTB; red) combined with fluorescent in situ labeling procedures shows that most CeA-to-GPe neurons (93 ± 3%) express somatostatin (Sst; green) and not PKC-delta (Prkcd; white). A nuclear counterstain (DAPI; blue) was also used to visualize cell bodies.

    ***

    The macaque ventrolateral prefrontal cortex boosts inferior temporal cortex population coding to allow rapid object recognition

    While the prefrontal cortex has long been known for its role in cognition and emotional processing, it is also critical for visual perception. There is also an emerging role for the inferior temporal cortex (IT) in object discrimination. But how do these two regions interact to enable the recognition of objects? Studying the computational functions within cortical circuits is critical for developing the next generation of models of visual intelligence, which will enable us to understand fundamental behaviors such as object recognition and discrimination. At the Massachusetts Institute of Technology, Dr. James DiCarlo and Dr. Kohitij Kar are investigating these topics. Their new study illustrates the role of connections between the ventrolateral prefrontal cortex (vlPFC) and the primate ventral visual cortex in robust core visual object recognition. Specifically, they tested whether the vlPFC is an important node in the visual object processing network by pharmacologically inactivating this region while simultaneously recording IT activity with Utah electrode arrays in macaques. They found that reversible pharmacological inactivation of the vlPFC with muscimol reduced the quality of the IT population code and deteriorated object discrimination performance. During the visual discrimination task, this effect of silencing the vlPFC was significantly larger for late-solved images than for early-solved images. They also found that inactivation of the vlPFC reduced IT late-phase neuronal population activity. These results suggest that the vlPFC is part of a recurrent neural circuit that boosts the performance of the ventral visual processing stream, in contrast to shallow feedforward systems. Studies such as this will one day help to elucidate a complete, mechanistic understanding of visual object recognition, from images to behavior, at the level of the neuron. 
Read more about these findings in the MIT News press release.

    Investigating object discrimination in macaque monkeys. a) Behavioral performance was tested on ten object categories. b) Two example trials of the object discrimination task showing the timeline of events. Performance was compared within subjects on sessions with and without muscimol injections in the vlPFC.

    in BRAIN Update on January 29, 2021 03:00 PM.

  •

    Thoughts of Blue Brains and GABA Interneurons


    An unsuccessful plan to create a computer simulation of a human brain within 10 years. An exhaustive catalog of cell types comprising a specific class of inhibitory neurons within mouse visual cortex. What do these massive research programs have in common? Both efforts were conducted by large multidisciplinary teams at non-traditional research institutions: the Blue Brain Project based in Lausanne, Switzerland, and the Allen Institute for Brain Science in Seattle, Washington.

    BIG SCIENCE is the wave of the future, and the future is now. Actually, that future started 15-20 years ago. The question should be, is there a future for any other kind of neuroscience?
     

    Despite a superficial “BIG SCIENCE” similarity, the differences between funding sources, business models, leadership, operation, and goals of Blue Brain and the Allen Institute are substantial. Henry Markram, the “charismatic but divisive” visionary behind Blue Brain (and the €1 billion Human Brain Project) has been criticized for his “autocratic” leadership, “crap” ideas, and “ill-conceived, ... idiosyncratic approach to brain simulation” in countless articles. His ambition is undeniable, however:

    “I realized I could be doing this [e.g., standard research on spike-timing-dependent plasticity] for the next 25, 30 years of my career, and it was still not going to help me understand how the brain works.”

     

    I'm certainly not a brilliant neuroscientist in Markram's league, but I commented previously on how a quest to discover “how the brain works” might be futile:

    ...the search for the Holy Grail of [spike trains, network generative models, manipulated neural circuit function, My Own Private Connectome, predictive coding, the free energy principle, or a computer simulation of the human brain promised by the Blue Brain Project] that will “explain” how “The Brain” works is a quixotic quest. It's a misguided effort when the goal is framed so simplistically (or monolithically).


    In his infamous 2009 TED talk, Markram stated that a computer simulation of the human brain was possible in 10 years:
    “I hope that you are at least partly convinced that it is not impossible to build a brain. We can do it within 10 years, and if we do succeed, we will send to TED, in 10 years, a hologram to talk to you.”


    This claim would come back to haunt him in 2019, because (of course) he was nowhere close to simulating a human brain. In his defense, Markram said that his critics misunderstood and misinterpreted his grandiose proclamations.1

    Blue Brain is now aimed at “biologically detailed digital reconstructions and simulations of the mouse brain.”

    In Silico

    Documentary filmmaker Noah Hutton2 undertook his own 10 year project that followed Markram and colleagues as they worked towards the goals of Blue Brain. He was motivated by that TED talk and its enthralling prediction of a brain in a supercomputer (hence in silico). Originally entitled Bluebrain and focused on Markram, the documentary evolved over time to include more realistic viewpoints and interviews with skeptical scientists, including Anne Churchland, Terry Sejnowski, Stanislas Dehaene, and Cori Bargmann. Ironically, Sebastian Seung was one of the loudest critics (ironic because Seung has a grandiose TED talk of his own, I Am My Connectome).


     

    In Silico was available for streaming during the DOC NYC Festival in November (in the US only), and I had the opportunity to watch it. I was impressed by the motivation and dedication required to complete such a lengthy project.  Hutton had gathered so much footage that he could have made multiple movies from different perspectives.

    Over the course of the film, Blue Brain/Human Brain blew up, with ample critiques and a signed petition from hundreds of neuroscientists (see archived Open Letter).

    And Hutton grew up. He reflects on the process (and how he changed) at the end of the film. He was only 22 at the start, and 10 years is a long time at any age.

    Some of the Big Questions in In Silico:

    • How do you make sure all this lovely simulated activity would be relevant for an animal's behavior?
    • How do you build in biological imperfections (noise) or introduce chaos into your perfect pristine computational model? “Tiny mistakes” are critical for adaptable biological systems.
    • “You cannot play the same soccer game again,” said one of the critics (Terry Sejnowski, I think).
    • “What is a generic brain?”
    • What is the vision?

    The timeline kept drifting further and further into the future. It was 10 years in 2009, 10 years in 2012, 10 years in 2013, etc. 

    Geneva 2019, and it's Year 10: only two Principals left, 150 papers published, and a model of 10 million neurons in mouse cortex. Stunning visuals, but still disconnected from behavior.

    In the end, “What have we learned about the brain? Not much. The model is incomprehensible,” to paraphrase Sejnowski.


    GABA Interneurons

    Another brilliant and charismatic neuroscientist, Christof Koch, was interviewed by Hutton. “Henry has two personalities. One is a fantastic, sober scientist … the other is a PR-minded messiah.”

    Koch is Chief Scientist of the MindScope Program at the Allen Institute for Brain Science, which focuses on how neural circuits produce vision. Another major unit is the Cell Types Program, which (as advertised) focuses on brain cell types and connectivity.3

    The Allen Institute core principles are team science, Big Science, and open science. An impressive recent paper by Gouwens and 97 colleagues (2020) is a prime example of all three. Meticulous analyses of structural, physiological, and genetic properties identified 28 “met-types” of GABAergic interneurons that have congruent morphological, electrophysiological, and transcriptomic properties. This was winnowed down from more than 500 morphologies in 4,200 GABA-containing interneurons in mouse visual cortex. With this mind-boggling level of neuronal complexity in one specific class of cells in mouse cortex — along with the impossibility of “mind uploading” — my inclination is to say that we will never (never say never) be able to build a realistic computer simulation of the human brain.


    Footnotes 

    1 Here's another gem: “There literally are only a handful of equations that you need to simulate the activity of the neocortex.”

    2 Most of Hutton's work has been as writer and director of documentary films, but I was excited to see that his first narrative feature, Lapsis, will be available for streaming next month. To accompany his film, he's created an immersive online world of interlinked websites that advertise non-existent employment opportunities, entertainment ventures, diseases, and treatments. It very much reminds me of the realistic yet spoof websites associated with the films Eternal Sunshine of the Spotless Mind (LACUNA, Inc.) and Ex Machina (BlueBook). In fact, I'm so enamored with them that they've appeared in several of my own blog posts.

    3 Investigation of cell types is big in the NIH BRAIN Initiative® as well.



    References

    Abbott A. (2020). Documentary follows implosion of billion-euro brain project. Nature 588:215-6.

    [Alison Abbott covered the Blue Brain/Human Brain sturm und drang for years]

    Gouwens NW, Sorensen SA, Baftizadeh F, Budzillo A, Lee BR, Jarsky T, Alfiler L, Baker K, Barkan E, Berry K, Bertagnolli D ... Zeng H et al. (2020). Integrated morphoelectric and transcriptomic classification of cortical GABAergic cells. Cell 183(4):935-53.

    Waldrop M. (2012). Computer modelling: Brain in a box. Nature News 482(7386):456.


    Further Reading

    The Blue Brain Project (01 February 2006), by Dr. Henry Markram

    “Alan Turing (1912–1954) started off by wanting to 'build the brain' and ended up with a computer. ... As calculation speeds approach and go beyond the petaFLOPS range, it is becoming feasible to make the next series of quantum leaps to simulating networks of neurons, brain regions and, eventually, the whole brain.”

    A brain in a supercomputer (July 2009), Henry Markram's TED talk
    “Our mission is to build a detailed, realistic computer model of the human brain. And we've done, in the past four years, a proof of concept on a small part of the rodent brain, and with this proof of concept we are now scaling the project up to reach the human brain.”


    Blue Brain Founder Responds to Critics, Clarifies His Goals (11 Feb 2011), Science news

    Bluebrain: Noah Hutton's 10-Year Documentary about the Mission to Reverse Engineer the Human Brain (9 Nov 2012), an indispensable interview with Ferris Jabr in Scientific American

    European neuroscientists revolt against the E.U.'s Human Brain Project (11 July 2014), Science news

    Row hits flagship brain plan (7 July 2014), Nature news

    Brain Fog (7 July 2014), Nature editorial

    Human Brain Project votes for leadership change (4 March 2015), Nature news

    'In Silico:' Director Noah Hutton reveals how one neuroscientist's pursuit of perfection went awry (10 Nov 2020), another indispensable interview, this time with Nadja Sayej in Inverse

    “They still haven’t even simulated a whole mouse brain. I realized halfway through the 10-year point that the human brain probably wasn’t going to happen.” ...

    “In the first few years, I followed only the team. Then, I started talking to critics.”

     

     

    in The Neurocritic on January 29, 2021 11:25 AM.