• Wallabag.it! - Save to Instapaper - Save to Pocket -

    The U.S.’s first open-air genetically modified mosquitoes have taken flight

    The first genetically modified mosquitoes that will be allowed to fly free outdoors in the United States have started reaching the age for mating in the Florida Keys.

    In a test of the biotech company Oxitec’s GM male mosquitoes for pest control, these Aedes aegypti started growing from tiny eggs set out in toaster-sized, hexagonal boxes on suburban private properties in late April. On May 12, experiment monitors confirmed that males had matured enough to start flying off on their own to court American female mosquitoes.

    This short-term Florida experiment marks the first outdoor test in the United States of a strain of GM male mosquitoes as a highly targeted pest control strategy. This strain is engineered to shrink local populations of Ae. aegypti, a mosquito species that spreads dengue and Zika (SN: 7/29/16). That could start happening now that the GM mosquitoes have reached mating age because their genetics makes them such terrible choices as dads.

    The mosquitoes now waving distinctively masculine (extra fluffy) antennae in Florida carry genetic add-ons that block development in females. No female larvae should survive to adulthood in the wild, says molecular biologist Nathan Rose, Oxitec’s chief of regulatory affairs. Half the released males’ sons, however, will carry dad’s daughter-killing trait. The sons of the bad dads can go on to trick a new generation of females into unwise mating decisions and doomed daughters (SN: 1/8/09).

    The trait is not designed to last in an area’s mosquitoes, though. The genetics just follow the same old rules of natural inheritance that mosquitoes and people follow: Traits pass to some offspring and not others. Only half a bad dad’s sons will carry the daughter-killing trait. The others will sire normal mosquito families.

    Imagined versions of live-mosquito pest control in Florida have been both glorified and savaged in spirited community meetings for some time (SN: 8/22/20). But now it’s real. “I’m sure you can understand why we’re so excited,” said Andrea Leal, executive director of the Florida Keys Mosquito Control District, at the mosquito test (virtual) kickoff April 29.

    The debate over these transgenic Ae. aegypti mosquitoes has gone on so long that Oxitec has upgraded its original more coddled version with one that is essentially plug and play. The newer strain, dubbed OX5034, no longer needs a breeding colony with its (biting) females and antibiotics in easy reach of the release area to produce fresh males.

    Instead, Oxitec can just ship eggs in a phase of suspended development from its home base in Abingdon, England, to whatever location around the world, high-tech or not, wants to deploy them. Brazil has already tested this OX5034 strain and gone through the regulatory process to permit Oxitec to sell it there.

    The targets for these potential living pest controls will be just their own kind. They represent only about 4 percent of the combined populations of the 45 or so mosquito species whining around the Keys. Other species get annoying, and a more recent invader, Ae. albopictus, can also spread dengue and Zika to some extent. Yet Leal blames just about all the current human disease spread by mosquitoes in the Keys, including last year’s dengue outbreak, on Ae. aegypti.

    It’s one of the top three mosquitoes in the world in the number of diseases it can spread, says Don Yee, an aquatic ecologist at the University of Southern Mississippi in Hattiesburg, who studies mosquitoes (SN: 3/31/21). His lab has linked at least three dozen human pathogens, including some viruses and worms, to Ae. aegypti. Although most mosquitoes lurk outdoors in vegetation, this one loves humankind. In the tropics, “the adults are literally resting on the walls or the ceiling,” he says. “They’re hanging around the bathroom.” The species bites humans for more than half of its blood meals.

    In a long-running battle with this beast, staff in Florida in late April added water to boxes of shipped eggs and set them out at selected suburban private properties on Vaca, Cudjoe and Ramrod Keys. Other spots, with no added mosquitoes, will be watched as controls. All locations were chosen in part because American-hatched females of the same species were already there to be wooed, Rose says.

    Toaster-sized hexagonal boxes (one pictured) that contain eggs of genetically modified Aedes aegypti were set out on selected private property in the Keys in late April. There the males develop normally — and then fly away to mate. Image: Oxitec

    Males typically don’t billow out of their boxes in a gray cloud but emerge sporadically, a few at a time. If all goes well in this preliminary test, up to 12,000 GM mosquitoes in total across the release sites will take to the air each week for 12 weeks.

    Neighboring households will host mosquito traps to monitor how far from the nursery boxes the Oxitec GM males tend to fly. That’s data that the U.S. Environmental Protection Agency wants to see. Based on distance tests elsewhere, 50 meters might be the median, Rose estimates. 

    The distance matters because pest controllers want to keep the free-flying GM mosquitoes away from outdoor sources of the antibiotic tetracycline. That’s the substance the genetic engineers use as an off switch for the self-destruct mechanism in female larvae. Rearing facilities supply the antibiotic to larvae, turning off the lethal genetics and letting females survive in a lab to lay eggs for the next generation.

    If GM males loosed in Florida happened to breed with a female that lays eggs in some puddle of water laced with the right concentration of tetracycline, daughters that inherited the switch could survive to adulthood as biters and breeders. The main possible sources in the Keys would be sewage treatment plants, Rose says. The test designers say they have selected sites well away from them.

    After the distance tests, bigger releases will start looking at how well the males fare and whether pest numbers shrink. Up to 20 million Oxitec mosquitoes in total could be released in tests running into the fall.

    Despite some high-profile protests, finding people to host the boxes was not hard, Rose says. “We were oversubscribed.” At public hearings, the critics of the project typically outshout the fans. Yet there’s also support. In a 2016 nonbinding referendum on using GM mosquitoes, 31 of 33 precincts in Monroe County, which comprises the Keys, voted yes for the test release. Twenty of those victories were competitive, though, with less than 60 percent voting in favor.

    The males being released rely on a live-sons/dead-daughters strategy. That’s a change from the earlier strain of Oxitec mosquitoes. Those males sabotaged all offspring regardless of sex. The change came during the genetic redesign that permits an egg-shipping strategy. Surviving sons, however, mean the nonengineered genes in the new Oxitec strain can mix into the Florida population more than in the original version.

    Those mixed-in genes from the test are “unlikely” to strengthen Floridian mosquitoes’ powers to spread disease, researchers from the EPA and the U.S. Centers for Disease Control and Prevention wrote in a May 1, 2020 memorandum. Many factors besides mosquito genetics affect how a disease spreads, the reviewers noted. Oxitec will be monitoring for mixing.

    There may be at least one upside to mixing, Rose says. The lab colonies have little resistance to some common pesticides such as permethrin that the Floridian mosquitoes barely seem to notice.

    Pesticide resistance in the Keys is what drives a lot of the interest in GM techniques, says chemist Phil Goodman, who chairs the local mosquito control district’s board of commissioners. During the dengue outbreak in 2009 and 2010, the first one in decades, the district discovered that its spray program had just about zero effect on Ae. aegypti. With some rethinking of the program’s chemicals, the control district can now wipe out up to 50 percent of mosquitoes of this species in a treated area. That’s not great control, at best. Then when bad weather intervenes for days in a row, the mosquitoes rebound, Goodman says.

    The invasive mosquito species Aedes aegypti (shown), which can spread Zika, dengue and yellow fever, is now under attack in the Florida Keys by GM males genetically tweaked to sabotage the American mosquito populations. Image: Joao Paulo Burini/Moment/Getty Images Plus

    Since that 2009–2010 outbreak, catching dengue in Florida instead of just through foreign travel has become more common. In 2020, an unusually bad year for dengue, Florida reported 70 cases caught locally, according to the CDC’s provisional tally.

    Traditional pesticides can mess with creatures besides their pest targets, and some critics of the GMO mosquitoes also worry about unexpected ecological effects. Yet success of the Oxitec mosquitoes in slamming the current pests should not cause some disastrous shortage of food or pollination for natives, Yee says. Ae. aegypti invaded North America within the past four centuries, probably too short a time to become absolutely necessary for some native North American predator or plant.

    For more details on pretrial tests and data, the Mosquito Control District has now posted a swarm of documents about the GM mosquitoes. The EPA’s summary of Oxitec’s tests, for instance, reports no effects noticed for feeding the aquatic mosquito larvae to crawfish.

    Yee doesn’t worry much about either crustaceans or fish eating the larvae. “That’s somewhat analogous to saying, well, we’re concerned about releasing buffalo back into the prairies of the Midwest because they might get eaten by lions,” he says. Crawfish and fish, he notes, don’t naturally inhabit the small containers of still water where Ae. aegypti mosquitoes breed.

    Still, new mosquito-fighting options are springing up: Radiation techniques might become precise enough to sterilize males but leave them attractive enough to fool females into pointless mating. And researchers are developing other genetic ways to weaponize mosquitoes against their own kind.

    One technique that uses no GM wizardry just infects mosquitoes with Wolbachia bacteria that make biting unlikely to spread dengue. The latest data from Mexico and Colombia suggest this infection “could be effective in the southern U.S. and across the Caribbean,” says biologist Scott O’Neill, founder of the World Mosquito Program, who is based in Ho Chi Minh City, Vietnam.

    He has no plans for working in the United States but is instead focusing on places with much worse dengue problems. His version of the Wolbachia strategy just makes bites less dangerous (SN: 6/29/12). The mosquito population doesn’t shrink or grow less bloodthirsty, so this approach might not appeal to Floridians anyway.

    in Science News on May 14, 2021 02:53 PM.


    Drug company withdraws court motion requesting retraction of papers critical of its painkiller

    A drug maker has blinked in a lawsuit against the leading anesthesiology society in the United States, along with several anesthesiology researchers, who it claims libeled the company in a series of articles and other materials critical of its main product.  As we reported last month, Pacira Biosciences, which makes the local anesthetic agent Exparel, … Continue reading Drug company withdraws court motion requesting retraction of papers critical of its painkiller

    in Retraction watch on May 14, 2021 12:52 PM.


    Moral Panics And Poor Sleep: The Week’s Best Psychology Links

    Our weekly round-up of the best psychology coverage from elsewhere on the web

    A neural implant has allowed a paralysed individual to type by imagining writing letters. The implant of 200 electrodes in the premotor cortex picks up on the person’s intentions to perform the movements associated with writing a given letter, translating these into a character on a screen. The individual was able to type 90 characters per minute with minimal errors, reports John Timmer at Ars Technica.

    Robin Dunbar famously estimated that humans are limited to having around 150 friends, due to cognitive restrictions imposed by the size of our brains. Now a new study challenges the accuracy of Dunbar’s number, as Jenny Gross reports at The New York Times (though Dunbar in turn disputes the findings).

    Poor sleep can make us less able to focus on a task at hand and ignore distractions. That’s according to a study comparing the performance on an attention test of 23 people with insomnia and 23 good sleepers. The results might not sound very surprising, but as researchers David James Robertson and Christopher B Miller point out at The Conversation, it hammers home how important sleep is for tasks like driving, where distractions can be deadly.

    An analysis of bacteria in the teeth of Neanderthals and ancient Homo sapiens has found that both groups ate lots of starchy foods. This in turn suggests that their common ancestor was already eating these kinds of foods more than 600,000 years ago, reports Ann Gibbons at Science. This was a time when hominin brains became a lot bigger, so the researchers think that the starch gave ancient humans the energy needed to fuel this growth.

    Moral panics about technology are common — but they rarely consider how those technologies evolve. After all, movies and video games from the 1970s are very different from their modern day counterparts. So have the supposed links between these technologies and mental health outcomes changed over time? Broadly, no, reports Tom Chivers at Unherd.

    Researchers have explored how individual neurons across different parts of the brain behave when someone (well, a monkey) undergoes anaesthesia. At Wired, Max G Levy looks at the results of the studies, and ponders their implications for understanding consciousness.

    Can we control our experience of disgust? At Scientific American, Charlie Kurth explores what the research says about combatting disgust in non-moral scenarios (think a medical student disgusted by the sight of blood) — and how this might extend to moral situations (when someone is disgusted by members of a certain social group, for instance).

    Compiled by Matthew Warren (@MattBWarren), Editor of BPS Research Digest

    in The British Psychological Society - Research Digest on May 14, 2021 12:27 PM.


    “Yep, pretty slow”: Nutrition researchers lose six papers

    Six months after we reported that journals had slapped expressions of concern on more than three dozen papers by a group of nutrition researchers in Iran, the retractions have started to trickle in.  But the clock started nearly two years ago, after data sleuths presented journals with questions about the findings in roughly 170 papers by … Continue reading “Yep, pretty slow”: Nutrition researchers lose six papers

    in Retraction watch on May 14, 2021 11:08 AM.


    Elephants are dying in droves in Botswana. Scientists don’t know why

    Die-offs of African elephants have once again erupted in Botswana. In just the first three months of 2021, 39 have succumbed.

    The mysterious deaths occurred in the Moremi Game Reserve, in the northern part of the country, nearly 100 kilometers from a region of the Okavango Delta, where about 350 African elephants died during May and June in 2020. Puzzled scientists have been calling for thorough investigations as the government sends mixed messages on the cause of death.

    Anthrax and bacterial infections had been ruled out in the new deaths and “further laboratory analysis is ongoing,” Botswana’s Department of Wildlife and National Parks reported in a March 24 news release.

    However, the 39 recent deaths were linked, based on preliminary results, to the same cyanobacteria toxins blamed for last year’s mass die-off, said Philda Kereng, Botswana’s Minister of Environment, Natural Resources Conservation and Tourism, in a March 30 state television address.

    Remote sensing of areas of last year’s mass die-off supports the cyanobacteria theory. From March through July 2020, cyanobacteria abundance increased continuously as water sources were shrinking, researchers report online May 28 in The Innovation. With climate change, bodies of water get warmer and toxic cyanobacteria thrive.

    Other evidence points to a pathogen as well. “The 2021 elephant mortalities are again specific to elephants, as was the case in 2020,” says Shahan Azeem, a veterinary scientist at the University of Veterinary and Animal Sciences in Lahore, Pakistan.

    If anthrax were to blame, other animals would have been affected, but they were not. And there would have been the telltale signs of bleeding on the carcasses, which was not the case. Poaching was also ruled out, because the elephants’ bodies were intact with their tusks. An investigation of the larger 2020 die-off suggests that a pathogen may have been the cause, Azeem and colleagues reported online August 5, 2020, in the African Journal of Wildlife Research.

    Botswana and neighboring countries in southern Africa have a transboundary conservation agreement under which elephants can roam across borders during migration. As Botswana, home to about 130,000 African elephants, has struggled to explain the recent deaths, Zimbabwe on its eastern border reported the deaths of 37 elephants in 2020. Sudden deaths in one area concern the others. Scientists had first blamed the Zimbabwe deaths on hemorrhagic septicemia, a disease caused by the bacterium Pasteurella multocida.

    But more recent genetic studies point to a related bacterium, Bisgaard Taxon 45, as the culprit, says Jessica Dawson, CEO of Victoria Falls Wildlife Trust in Zimbabwe, which has been doing lab analyses for that country’s deaths. 

    In March, the International Union for Conservation of Nature called African forest elephants “critically endangered” and African savannah elephants “endangered.” The IUCN lists poaching as the principal threat along with a rapid increase in land use by humans, which has decreased and fragmented the elephants’ living areas.

    Shrinking habitat and climate change may play a role in keeping the elephants exposed to the deadly pathogen — whatever it is, researchers say. The area is a hot spot for human-elephant conflict. Fencing to keep the animals away from crops and the deep Okavango River “imprison” the elephants, biologist Stuart Pimm of Duke University and colleagues wrote January 11 in PeerJ. The researchers tracked elephants in the area and showed very limited movement.

    “What’s clear is that in Botswana, and indeed in other places, fences restrict those movements,” Pimm says. “Elephants can’t escape what may be a dangerous situation for them.”

    in Science News on May 14, 2021 10:00 AM.


    Schneider Shorts 14.05.2021: Conclusions not affected

    This week's Schneider Shorts are about unaffected conclusions and destroyed raw data, the war on virus, vaccines and antivaxxers, and the virtues of having a long nose.

    in For Better Science on May 14, 2021 06:00 AM.


    Online spike rate inference with Cascade

    To infer spike rates from calcium imaging data for a time point t, knowledge about the calcium signal both before and after time t is required. Our algorithm Cascade (Github) uses by default a window that is symmetric in time and feeds this window into a small deep network to use the data points in the window for spike inference (schematic below taken from Fig. 2A of the preprint; CC-BY-NC 4.0):
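    To make the windowing concrete, here is a minimal sketch (not Cascade's actual implementation; the function name and window sizes are hypothetical) of how a symmetric window around each time point t can be cut out of a calcium trace before being fed to a network:

```python
import numpy as np

def extract_windows(trace, win_before=32, win_after=32):
    """Cut a symmetric window around every time point of a calcium trace.

    Hypothetical helper for illustration only; Cascade's real windowing
    lives in its Github repository.
    """
    n = len(trace)
    # Pad the trace at both ends so that every time point gets a full window
    padded = np.pad(trace, (win_before, win_after), mode="edge")
    # One row per time point t: samples t-win_before .. t+win_after
    return np.stack([padded[t:t + win_before + win_after + 1] for t in range(n)])

trace = np.random.randn(1000)   # stand-in for a dF/F calcium trace
windows = extract_windows(trace)
print(windows.shape)            # (1000, 65): one 65-sample window per time point
```

    For the online setting discussed next, the interesting knob is `win_after`: shrinking the number of samples after t shortens the inference delay.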

    However, if one wants to perform spike inference not as a post-processing step but rather during the experiment (“online spike inference”), it would be ideal to keep the inference delay as short as possible. This would allow, for example, using the result of spike inference for a closed-loop interaction with the animal.

    Dario Ringach recently came up with this interesting problem. With the Cascade algorithm already set up, I was curious to check very specifically: How many time points (i.e., imaging frames) are required after time point t to perform reliable spike inference?

    Using GCaMP/mouse datasets from the large ground truth database (the database is again described in the preprint), I addressed this question directly by training separate models. For each model, the time window was shifted such that a variable number of data points (between minimally 1 and maximally 32) were used for spike inference. Everything was evaluated at a typical frame rate of 30 Hz, and also at different noise levels of the recordings (color-coded below); a noise level of “2” is pretty decent, while a noise level of “8” is quite noisy – explained with examples (Fig. S3) and equations (Methods) again in the preprint.

    The results are quite clear: For low noise levels (black curve, SEM across datasets as a corridor), spike inference performance (correlation with ground truth spike rates) seems to saturate at around 8 frames. This would result in a delay of roughly 8 × 33 ms ≈ 260 ms after a spiking event (dashed line).
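    The delay figure is just the number of frames after t multiplied by the frame period; a quick back-of-envelope check (frame rate from the text, frame counts chosen for illustration):

```python
frame_rate_hz = 30.0
frame_period_ms = 1000.0 / frame_rate_hz  # one imaging frame at 30 Hz ≈ 33.3 ms

# Delay before spike inference becomes available, for different window sizes
delays = {frames_after: frames_after * frame_period_ms
          for frames_after in (4, 8, 16)}
print(delays)  # 8 frames after t gives ~267 ms, close to the ~260 ms quoted above
```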

    But let’s have a closer look. The above curve was averaged across 8 datasets, mixing different indicators (GCaMP6f and GCaMP6s) and induction methods (transgenic mouse lines and AAV-based induction). Below, I looked into the curve for each single dataset (for the noise level of 2).

    It is immediately clear that for some datasets fewer frames after t are sufficient for almost optimal spike inference, for others not.

    For the best datasets, optimal performance is already reached with 4 frames (left panel; delay of ca. 120 ms). These are datasets #10 and #11, which use the fast indicator GCaMP6f, which in addition is here transgenically expressed. The corresponding spike-triggered linear kernels (right side; copied from Fig. S1 of the preprint) are indeed faster than for other datasets.

    Two datasets with GCaMP6s (datasets #15 and #16) stand out as non-ideal, requiring almost 16 frames after t before optimal performance is reached. Probably, expression levels in these experiments, which used AAV-based approaches, were very high, resulting in calcium buffering and therefore slower transients. The corresponding spike-triggered linear kernels are indeed much slower than for the other GCaMP6s- or GCaMP6f-based datasets.

    The script used to perform the above evaluations can be found on Cascade’s Github repository. Since each data point requires retraining the model from scratch, it cannot be run on a CPU in reasonable time. On a RTX 2080 Ti, the script took 2-3 days to complete.


    1. Only a few frames (down to 4 frames) after time t are sufficient to perform almost ideal spike inference. This is probably a consequence of the fact that the sharp step increase of a spike-triggered event is more informative than its slow decay.
    2. To optimize an experiment for online spike inference, it is helpful to use a fast indicator (e.g., GCaMP6f). It also seems that transgenic expression might be an advantage, since indicator expression and calcium buffering are typically lower for transgenic expression than for viral induction, preventing a slow-down of the indicator by overexpression.

    in Peter Rupprecht on May 13, 2021 06:19 PM.


    Rivers might not be as resilient to drought as once thought

    Rivers ravaged by a lengthy drought may not be able to recover, even after the rains return. Seven years after the Millennium drought baked southeastern Australia, a large fraction of the region’s rivers still show no signs of returning to their predrought water flow, researchers report in the May 14 Science.

    There’s “an implicit assumption that no matter how big a disturbance is, the water will always come back — it’s just a matter of how long it takes,” says Tim Peterson, a hydrologist at Monash University in Melbourne, Australia. “I’ve never been satisfied with that.”

    The years-long drought in southeastern Australia, which began sometime between 1997 and 2001 and lasted until 2010, offered a natural experiment to test this assumption, he says. “It wasn’t the most severe drought” the region has ever experienced, but it was the longest period of low rainfall in the region since about 1900.

    Peterson and colleagues analyzed annual and seasonal streamflow rates in 161 river basins in the region from before, during and after the drought. By 2017, they found, 37 percent of those river basins still weren’t seeing the amount of water flow that they had predrought. Furthermore, of those low-flow rivers, the vast majority — 80 percent — also show no signs that they might recover in the future, the team found.

    Many of southeastern Australia’s rivers had bounced back from previous droughts, including a severe but brief episode in 1983. But even heavy rains in 2010, marking the end of the Millennium drought, weren’t enough to return these basins to their earlier state. That suggests that there is, after all, a limit to rivers’ resilience.

    What’s changed in these river basins isn’t yet clear, Peterson says. Postdrought precipitation was similar to predrought precipitation, but the water isn’t ending up in the streamflow, so it must be going somewhere else. The team examined various possibilities: The water infiltrated into the ground and was stored as groundwater, or it never made it to the ground at all — possibly intercepted by leaves, and then evaporating back to the air.

    But none of these explanations were borne out by studies of these sites, the researchers report. The remaining, and most probable, possibility is that the environment has changed: Water is evaporating from soils and transpiring from plants more quickly than it did predrought.

    Peterson has long suggested that under certain conditions rivers might not, in fact, recover — and this study confirms that theoretical work, says Peter Troch, a hydrologist at the University of Arizona in Tucson. Enhanced soil evaporation and plant transpiration are examples of positive feedbacks, processes that can amplify the impacts of a drought. “Until his work, this lack of resilience was not anticipated, and all hydrological models did not account for such possibility,” Troch says.

    “This study will definitely inspire other researchers to undertake such work,” he notes. “Hopefully we can gain more insight into the functioning of [river basins’] response to climate change.”

    Indeed, the finding that rivers have “finite resilience” to drought is of particular concern as the planet warms and lengthier droughts become more likely, writes hydrologist Flavia Tauro in a commentary in the same issue of Science.

    in Science News on May 13, 2021 06:18 PM.


    A study of Earth’s crust hints that supernovas aren’t gold mines

    A smattering of plutonium atoms embedded in Earth’s crust are helping to resolve the origins of nature’s heaviest elements.

    Scientists had long suspected that elements such as gold, silver and plutonium are born during supernovas, when stars explode. But typical supernovas can’t explain the quantity of heavy elements in our cosmic neighborhood, a new study suggests. That means other cataclysmic events must have been major contributors, physicist Anton Wallner and colleagues report in the May 14 Science.

    The result bolsters a recent change of heart among astrophysicists. Standard supernovas have fallen out of favor. Instead, researchers think that heavy elements are more likely forged in collisions of two dense, dead stars called neutron stars, or in certain rare types of supernovas, such as those that form from fast-spinning stars (SN: 5/8/19).

    Heavy elements can be produced via a series of reactions in which atomic nuclei swell larger and larger as they rapidly gobble up neutrons. This series of reactions is known as the r-process, where “r” stands for rapid. But, says Wallner, of Australian National University in Canberra, “we do not know for sure where the site for the r-process is.” It’s like having the invite list for a gathering, but not its location, so you know who’s there without knowing where the party’s at.

    Scientists thought they had their answer after a neutron star collision was caught producing heavy elements in 2017 (SN: 10/16/17). But heavy elements show up in very old stars, which formed too early for neutron stars to have had time to collide. “We know that there has to be something else,” says theoretical astrophysicist Almudena Arcones of the Technical University of Darmstadt, Germany, who was not involved with the new study.

    If an r-process event had recently happened nearby, some of the elements created could have landed on Earth, leaving fingerprints in Earth’s crust. Starting with a 410-gram sample of Pacific Ocean crust, Wallner and colleagues used a particle accelerator to separate and count atoms. Within one piece of the sample, the scientists searched for a variety of plutonium called plutonium-244, which is produced by the r-process. Since heavy elements are always produced together in particular proportions in the r-process, plutonium-244 can serve as a proxy for other heavy elements. The team found about 180 plutonium-244 atoms, deposited into the crust within the last 9 million years.

    Scientists analyzed a sample of Earth’s deep-sea crust (shown) to search for atoms of plutonium and iron with cosmic origins. Image: Norikazu Kinoshita

    Researchers compared the plutonium count to atoms that had a known source. Iron-60 is released by supernovas, but it is formed by fusion reactions in the star, not as part of the r-process. In another, smaller piece of the sample, the team detected about 415 atoms of iron-60.

    Plutonium-244 is radioactive, decaying with a half-life of 80.6 million years. And iron-60 has an even shorter half-life of 2.6 million years. So the elements could not have been present when the Earth formed, 4.5 billion years ago. That suggests their source is a relatively recent event. When the iron-60 atoms were counted up according to their depth in the crust, and therefore how long ago they’d been deposited, the scientists saw two peaks at about 2.5 million years ago and at about 6.5 million years ago, suggesting two or more supernovas had occurred in the recent past.
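    The “too recent to be primordial” argument follows directly from exponential decay. A hedged back-of-envelope sketch, using only the half-lives given above (ages in millions of years; the function name is ours for illustration):

```python
def surviving_fraction(elapsed_myr, half_life_myr):
    # Exponential radioactive decay: N/N0 = (1/2) ** (t / t_half)
    return 0.5 ** (elapsed_myr / half_life_myr)

earth_age_myr = 4500.0  # ~4.5 billion years, as stated in the article

print(surviving_fraction(earth_age_myr, 80.6))  # Pu-244: ~1e-17 of any primordial stock
print(surviving_fraction(earth_age_myr, 2.6))   # Fe-60: effectively zero
print(surviving_fraction(9.0, 80.6))            # Pu-244 deposited 9 Myr ago: ~0.93 remains
```

    So any primordial plutonium-244 or iron-60 would be undetectable today, while atoms deposited within the last few million years are still largely intact.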

    The scientists can’t say if the plutonium they detected also came from those supernovas. But if it did, the amount of plutonium produced in those supernovas would be too small to explain the abundance of heavy elements in our cosmic vicinity, the researchers calculated. That suggests regular supernovas can’t be the main source of heavy elements, at least nearby.  

    That means other sources for the r-process are still needed, says astrophysicist Anna Frebel of MIT, who was not involved with the research. “The supernovae are just not cutting it.”

    The measurement gives a snapshot of the r-process in our corner of the universe, says astrophysicist Alexander Ji of Carnegie Observatories in Pasadena, Calif. “It’s actually the first detection of something like this, so that’s really, really neat.”

    in Science News on May 13, 2021 06:00 PM.


    Linking to Datasets on arXiv

    We’re excited to announce our collaboration with Papers With Code to support links to datasets on arXiv!

    Machine learning articles on arXiv now have a Code & Data tab to link to datasets that are used or introduced in a paper. Readers can activate Links to Code & Data from the paper abstract page to see links to official code, community implementations of the code, and the datasets used.

    From the “Code & Data” tab on the arXiv article abstract page, readers can find links to datasets used in the paper.

    Authors can add datasets to their arXiv papers by going to arxiv.org/user and clicking on the “Link to code & data” Papers with Code icon (see below). From there they will be directed to Papers with Code where they can add their datasets. Once added, these will show on the arXiv article’s abstract page.

    This makes it easier to track dataset usage across the community. From Papers with Code, readers can discover other papers using the same dataset, track its usage over time, compare models, and find similar datasets.

    All data on Papers with Code is freely available and is licensed under CC-BY-SA (same as Wikipedia).


    An arXivLabs Collaboration

    arXiv’s mission is to provide an open platform where researchers can share and discover new, relevant, emerging science and establish their contribution to advancing research. Datasets are a critical component of this.

    This is the second stage of our arXivLabs collaboration with Papers with Code, following the introduction of code on arXiv last October.

    “Members of our community want to contribute tools that enhance the arXiv experience, and we value that kind of community engagement,” said Eleonora Presani, arXiv Executive Director.

    From the arXiv user account page, authors can click on the Papers with Code icon to link their work to relevant code and data.


    A version of this blog post also appears here.


    in arXiv.org blog on May 13, 2021 03:26 PM.

  •

    Stressful Days At Work Leave Us Less Likely To Exercise

    By Emily Reynolds

    After an incredibly stressful day of work, which are you more likely to do: walk several miles home, or get on a bus straight to your door? While the first option certainly comes with increased health benefits — including, potentially, decreased stress — many of us would choose the second anyway.

    A new study, published in the Journal of Experimental Psychology: Applied, seeks to understand why, even when we know how positive exercise can be, we often fail to be active after work. It could come down to how high-pressure your job is, according to Sascha Abdel Hadi from Justus-Liebig-University Giessen and team — and how much control you have over your work.

    In the first study, 100 participants took part in a workplace simulation. Adopting the role of a call centre worker, participants answered emails from customers, solved maths problems related to product pricing and promotions, and answered a live customer call.

    In the low demand condition, emails and phone calls were from friendly customers, while in the high demand condition, customers were disgruntled; maths problems also ranged in difficulty based on condition. High demand participants were also explicitly instructed to “serve with a smile”, while the low demand condition only required “acting authentically”.

    Following these tasks, participants were invited to ride a static bike in the break room for as long as they wanted (up to a maximum of fifteen minutes), after which they could read magazines while seated. As expected, those in the high demand condition spent significantly less time on the bike than those in the low demand condition.

    A second study, conducted with 144 participants, sought to expand on these findings — only this time, there was an additional focus on the control people had over the choices they made during the job. In addition to the high and low demand conditions from the first study, participants were assigned to a high or low control condition. Participants in the high control condition could select which emails they responded to and in which order, which arithmetic problems they wanted to solve, and which customer requests they picked up via call. Low control participants could do none of these things.

    Again, participants with a more demanding job spent less time on the bike. And while there was no direct relationship between levels of control and time spent on the bike, there was an indirect effect of job control on cycling time through its impact on participants’ feelings of self-determination — their beliefs that they are autonomous and able to freely make choices. People in the low control condition rated their self-determination as lower, which in turn led to reduced time cycling. This suggests that high levels of control can improve self-determination — and, indirectly, increase self-motivated behaviour, even outside of the workplace.

    This last point is key. What we do at work (or perhaps more accurately, what is done to or imposed on us) doesn’t just affect the hours we spend in the office, in the factory, or on the shop floor. Rather, these things seep into our personal lives, making us more or less likely to use leisure time the way we might want. The team suggests that other mechanisms could also play a role in the link between job demands and exercise: for instance, if people are unable to “switch off” from work demands, this may hamper motivation to engage in activities like working out.

    Either way, it is clear that jobs with high levels of demand can impact not only our overall wellbeing but also our motivation outside of work. The team suggests that employers should place greater importance on self-determination and participation, which, as other research suggests, could also make workplaces more democratic, just and participatory.

    Experimental evidence for the effects of job demands and job control on physical activity after work.

    Emily Reynolds is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on May 13, 2021 11:22 AM.

  •

    “We didn’t want to hurt them. We are polite”: When a retraction notice pulls punches

    A group of anesthesiology researchers in China have lost their 2020 paper on nerve blocks during lung surgery after finding that the work contained “too many” errors to stand. But after hearing from the top editor of the journal, it’s pretty clear “too many errors” was a euphemism for even worse problems. The article, “Opioid-sparing …”

    in Retraction watch on May 13, 2021 10:00 AM.

  •

    Brain implants turn imagined handwriting into text on a screen

    Electrodes in a paralyzed man’s brain turned his imagined handwriting into words typed on a screen. The translation from brain to text may ultimately point to ways to help people with disabilities like paralysis communicate using just their thoughts.

    A 65-year-old man had two grids of tiny electrodes implanted on the surface of his brain. The electrodes read electrical activity in the part of the brain that controls hand and finger movements. Although the man was paralyzed from the neck down, he imagined writing letters softly with his hand. With an algorithm, researchers then figured out the neural patterns that went with each imagined letter and transformed those patterns into text on a screen.

    From his brain activity alone, the participant produced 90 characters, or 15 words, per minute, Krishna Shenoy, a Howard Hughes Medical Institute investigator at Stanford University, and colleagues report May 12 in Nature. That’s about as fast as the average typing rate of people around the participant’s age on smartphones.

    The thought-to-text system worked even long after the injury. “The big surprise is that even years and years after spinal cord injury, where you haven’t been able to use your hands or fingers, we can still listen in on that electrical activity. It’s still very active,” Shenoy says.

    Thought-powered communication is still in its early stages (SN: 4/24/19). Research with more volunteers is needed, but “there’s little doubt that this will work again in other people,” says Shenoy. The researchers plan to test the system with a person who has lost both the ability to move and speak.

    in Science News on May 12, 2021 03:00 PM.

  •

    Partnering with TCC Africa

    Author: Roheena Anand, Director, Global Publishing Development, PLOS

    Two weeks ago we announced five new journals and their role in our plans to spread our roots deeper and absorb researchers and local practices more fully into our business. However, this is only one part of our work in this area. In the transition to an Open future we need to keep asking ourselves “Open for Whom?” Openness in itself, while valuable, does not tackle inequality in the scholarly communications ecosystem, or increase inclusion.  

    Therefore, we need to be intentional in addressing power imbalances and the legacy of devaluing knowledge from particular groups or regions, e.g. from communities often marginalized in North American/Western European publications, including researchers from Low-to-Middle-Income countries. 

    Over the course of this year we will be expanding our presence into different continents, embedding ourselves into local communities to work alongside them, listening and learning, so that we can understand and reflect their needs and values.  

    Today, as our first major step, we’re sharing the wonderful news that we are formally partnering with the Training Centre in Communication, based at the University of Nairobi, Kenya, commonly known as TCC Africa. TCC Africa is a nonprofit trust that has been doing valuable work across the continent since 2006. They’re committed to improving African researchers’ visibility (and therefore impact) through training in scholarly communication. Like us, they are heavily invested in an Open future and work with stakeholders across the scholarly communication ecosystem to promote and increase uptake of open access and open science more broadly.

    Working with TCC Africa will help us to ensure that the interests and values of African research communities are represented in PLOS publications, policies, and services and to ensure that Open Science practices work for local stakeholders. We believe that all Open Science and Open Research activities should be informed and co-created by local communities at a global scale, helping us all to rebuild the system better.

    We’ve started to build our strategic plan to achieve our joint goals: it’s the first step on our path to a more inclusive Open Science future.

    Please follow PLOS and TCC Africa on social media, and/or follow our blogs, to keep up to date on the progress of this partnership!

    TCC Africa Blog | TCC Africa Twitter | TCC Africa LinkedIn | TCC Africa Facebook
    PLOS Blog | PLOS Twitter | PLOS LinkedIn | PLOS Facebook

    The post Partnering with TCC Africa appeared first on The Official PLOS Blog.

    in The Official PLOS Blog on May 12, 2021 02:59 PM.

  •

    Event-Driven Sensing for Efficient Perception: Vision and Audition Algorithms

    This week on Journal Club session Nik Dennler will talk about a paper "Event-Driven Sensing for Efficient Perception: Vision and Audition Algorithms".

    Event sensors implement circuits that capture partial functionality of biological sensors, such as the retina and cochlea. As with their biological counterparts, event sensors are drivers of their own output. That is, they produce dynamically sampled binary events in response to dynamically changing stimuli. Algorithms and networks that process this form of output representation are still in their infancy, but they show strong promise. This article illustrates the unique form of the data produced by the sensors and demonstrates how the properties of these sensor outputs make them useful for power-efficient, low-latency systems working in real time.
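
    The “dynamically sampled binary events” idea can be made concrete with a toy model of a single event-sensor pixel: output is produced only when the (log) signal has changed by more than a fixed threshold since the last event, so a static input generates no data at all. This sketch is illustrative only; the threshold value and encoding are our assumptions, not taken from the paper.

```python
import math

def to_events(samples, threshold=0.1):
    """Convert a sampled positive intensity signal into sparse ON/OFF events.

    An event (index, +1/-1) is emitted whenever the log-intensity has changed
    by more than `threshold` since the last event, mimicking the change-driven
    output of a silicon-retina pixel.
    """
    events = []
    ref = math.log(samples[0])  # reference level at the last emitted event
    for i, s in enumerate(samples[1:], start=1):
        delta = math.log(s) - ref
        while abs(delta) >= threshold:
            polarity = 1 if delta > 0 else -1
            events.append((i, polarity))
            ref += polarity * threshold
            delta = math.log(s) - ref
    return events

# A mostly constant signal with one step change produces only a burst of
# ON events at the change -- and no output at all while nothing happens.
signal = [1.0] * 5 + [2.0] * 5
print(to_events(signal))
```

    The sparsity is the point: downstream algorithms only do work when something in the scene changes, which is what makes these sensors attractive for power-efficient, low-latency systems.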


    Date: 2021/05/14
    Time: 14:00
    Location: online

    in UH Biocomputation group on May 12, 2021 12:13 PM.

  •

    Want To Know Whether A Movie Or Book Will Be A Hit? Look At How Emotional The Reviews Are

    By Emma Young

    You want to choose a new vacuum cleaner, or book, or hotel, or kids’ toy, or movie to watch — so what do you do? No doubt, you go online and check the star ratings for various options on sites such as Amazon or TripAdvisor, and so benefit from the wisdom of crowds.

    However, there are problems with this star-based system, as a new paper in Nature Human Behaviour makes clear. Firstly, most ratings are positive — so how do you choose between two, or potentially many more, products with high ratings, or even the same top rating? Secondly, star ratings aren’t a great predictor of the success (and so actual general appeal and approval) of a movie, book, and so on, note Matthew D. Rocklage at the University of Massachusetts and his colleagues. The team presents an alternative method for picking the best product and also predicting success, which focuses on the emotional responses of the reviewers.

    In all four studies reported in the paper, the team used a text analysis tool called the Evaluative Lexicon. This provided measures of the average emotionality and valence (positivity) of a review. Emotionality relates to how much an attitude is rooted in emotion, rather than how positive or negative it is (so reviews that included lots of terms like “awe-inspiring” or “enchanting” got higher emotionality scores than reviews with terms like “impeccable”.)
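
    A toy version of lexicon-based scoring makes the valence/emotionality distinction concrete. The word scores below are invented for illustration; the real Evaluative Lexicon has its own empirically derived norms and a more sophisticated pipeline.

```python
# Hypothetical word norms: (valence 0-9, emotionality 0-9).
# Values invented for illustration, not the actual Evaluative Lexicon scores.
LEXICON = {
    "awe-inspiring": (8.5, 8.0),
    "enchanting":    (8.2, 7.5),
    "impeccable":    (8.4, 2.5),
    "flawless":      (8.3, 2.8),
    "dreadful":      (1.2, 7.0),
}

def score_review(text):
    """Average valence and emotionality over lexicon words found in a review."""
    hits = [LEXICON[w] for w in text.lower().split() if w in LEXICON]
    if not hits:
        return None
    valence = sum(v for v, _ in hits) / len(hits)
    emotionality = sum(e for _, e in hits) / len(hits)
    return valence, emotionality

# Two equally positive reviews can differ sharply in emotionality.
emotional = score_review("an enchanting awe-inspiring film")
cool = score_review("an impeccable flawless film")
print(emotional, cool)
```

    Both reviews here score the same on valence (both glowing) yet separate cleanly on emotionality, which is exactly the kind of signal the authors found predictive of marketplace success.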

    First, the team looked at the earliest 30 reviews for all movies included on the website metacritic.com from 2005 to 2018. For each movie, they gathered star ratings (from 0 to 10), valence scores and measures of text emotionality.

    Overall, 81% of these movies got above-average star ratings. This highlights “the challenge of discerning success and how people will behave in this sea of positive ratings”, which the team calls the “positivity problem”. They also found that star ratings weren’t a good predictor of box office revenue, and text valence wasn’t a helpful predictor, either. Higher emotionality was a positive predictor of future box office takings, however. (This result held when they controlled for a variety of factors, including the genre of the movie, the year it was released, its budget, and so on.)

    Next, Rocklage and colleagues used the same approach to try to predict the sales of all books listed on Amazon.com from 1995 to 2015. This time, for some genres, star ratings did predict sales, while for others they didn’t. However, greater emotionality emerged as a predictor of sales across 93 different genres. It was, then, consistently useful.

    The researchers then turned to 187,206 real-time tweets posted in response to TV ads for 84 different businesses played during the 2016 and 2017 US Super Bowls. The team found that the greater the emotionality of the tweets about an advert, the more Facebook followers the company gathered over the next two weeks. The equivalent of star ratings for these ads had been gathered by the newspaper USA Today, and these ratings were not predictive of followers.

    Finally, the team considered Chicago restaurant reviews on yelp.com and 1.3 million table reservations made on a popular booking website. In contrast to earlier results, high star ratings did predict more table reservations. However, higher emotionality still emerged as a unique predictor of numbers of bookings. As the team writes, “restaurants that elicited more emotion were associated with more table reservations”.

    Overall, then, movies, books and restaurants that appeared to evoke more emotion in consumers ended up being more successful. Why might that be the case? Emotions flag memories as being important, and are relatively readily recalled, and attitudes based on emotion tend to be more stable. Clearly this could influence a person’s own behaviour. “Additional work could explore whether attitudes based more on emotion also affect success by increasing individuals’ propensity to spread information via word of mouth,” the team notes.

    Overall, the new work does call into question the validity and helpfulness of star ratings. This in itself is not new, but of course the researchers also describe what seems to be a more useful system, which in theory could be broadly adopted. “One possibility is that organizations could consider aggregating reviewers’ language and providing an ‘emotional star rating’ to provide more meaningful assessments to individuals,” they write.

    Mass-scale emotionality reveals human behaviour and marketplace success

    Emma Young (@EmmaELYoung) is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on May 12, 2021 11:41 AM.

  •

    Small bribes may help people build healthy handwashing habits

    Good habits are hard to adopt. But a little bribery can go a long way.

    That’s the finding from an experiment in India that used rewards to get villagers hooked on routine handwashing. While tying rewards to desired behaviors has long been a staple of habit formation, handwashing habits have proven difficult to make stick.

    The rewards worked. “If you bribe kids, handwashing rates shoot up,” says developmental economist Reshmaan Hussam of Harvard Business School. And even just making handwashing a pleasant, easy activity improved health: Children in households with thoughtfully designed soap dispensers experienced fewer illnesses than children in households without those tools, Hussam and colleagues report in a paper to appear in American Economic Journal: Applied Economics.

    Significantly, good habits lingered even after researchers stopped giving out rewards. “The fact that they found persistence suggests to me that participants did form habits,” says Jen Labrecque, a social psychologist at Oklahoma State University in Stillwater who was not involved with the research.

    The study involved 2,943 households in 105 villages in the state of West Bengal between August 2015 and March 2017. All participants had access to soap and water. Nearly 80 percent said they knew soap killed germs, but initially only 14 percent reported using soap before eating.

    To objectively assess habits, Hussam’s team devised a way to monitor handwashing in the absence of observers — whose presence typically makes people behave better. In collaboration with the MIT Media Lab, the team built a soap dispenser with a hidden sensor that recorded whenever somebody used it.

    A young boy in West Bengal, India, uses a soap dispenser with a built-in sensor. The dispenser, designed by the MIT Media Lab, can track when and how often members of a household use soap, providing useful feedback for users and researchers alike. Image: Reshmaan Hussam

    They then educated families on how to build good handwashing habits, such as establishing a trigger (dinner time) and a routine (handwashing right before meals). They also made the handwashing experience as simple and enjoyable as possible, such as by using scented soap and mounting the sensors where children could easily reach them. Researchers visited households every two weeks to collect data on children’s health and refill the dispensers.

    Hussam’s team divided households into multiple groups. Some households received only a dispenser. Others received automated reports on their daily handwashing performance, a social incentive to gently prod routine activity. Still others got tickets each time somebody pressed the dispenser around dinner time — these tickets could be traded for toothbrushes, backpacks and other useful items. A control group received no dispensers.

    In households that got no incentives, the team found that people used soap at dinnertime 36 percent of the time, one to four months after receiving a dispenser. Those who got automated reports used soap 45 percent of the time. And those earning tickets used soap 62 percent of the time.

    Once rewards and feedback ceased, soap use abruptly plummeted. With little to lose, the researchers kept the sensors on. As months progressed, handwashing rates among households that had received incentives ticked slightly upward. Nine months after incentives ceased, households that had received tickets washed their hands 16 percentage points more than households that received dispensers only.

    The team suspects that the return of cold and flu season reminded parents to use soap. Perhaps “when parents see that kids are sniffly or sneezing, that’s when they’re triggered to use the device,” Hussam speculates. Often, “habits are tied to specific cues.”  

    This study shows the value of spending a limited pool of money up front versus spreading it more evenly across time, as is common in public health campaigns, says medical epidemiologist Stephen Luby of Stanford University. “I do see the value of front-loading habit adoption.”

    Even children living in households with just a dispenser and no rewards had better health than children in households without a dispenser. Eight months after incentives ceased, children with soap dispensers in their households experienced 38 percent fewer days with diarrhea and 16 percent fewer days with respiratory infections than children without dispensers. Access to a well-designed dispenser also tracked to healthier height and weight for children.

    For product designers hoping to steer people toward good habits, a valuable lesson emerges: “Think carefully about human-centered design,” Hussam says.  

    in Science News on May 12, 2021 11:00 AM.

  •

    Remember Charles-Henri Lecellier?

    Charles-Henri Lecellier is about to get promoted to CNRS research director 2nd class. Time to dig up old stories and let the ghosts rise to wash their dirty laundry.

    in For Better Science on May 12, 2021 10:21 AM.

  •

    Who owns your thesis data? We do, says one university, prompting retraction

    Here’s a story that’s likely to strike a sour chord with graduate students. A researcher in Italy has lost his 2020 paper, based on work he conducted for his doctoral thesis, after the university claimed that he didn’t have the right to publish the data. The paper, “Musical practice and BDNF plasma levels as a …”

    in Retraction watch on May 12, 2021 10:00 AM.

  •

    10+ years of Brainhack: an open, inclusive culture for neuro tool developers at all levels

    Brainhacks and similar formats are increasingly recognized as a new way of providing academic training and conducting research that extends beyond traditional settings. A new paper in Neuron, by 200+ authors, describes the format and what makes it valuable to the community. This post highlights some of the paper’s core themes.

    Brainhacks have been held since 2012, organized by local communities, sometimes in sync with other hackathons taking place elsewhere. In 2016 the format developed into the Brainhack Global – a synchronous swarm of hybrid meetings arranged by local communities with local and virtual participants. In 2020, during the pandemic, the Brainhack Global went fully virtual.

    Figure 1D from https://doi.org/10.1016/j.neuron.2021.04.001

    A similar growth of hackathons has occurred in communities adjacent to Brainhack. INCF started funding hackathons early because our community asked for it; and when we saw the value and inspiration hackathons brought to the community, it became a regular line item in the INCF budget. Since funding our first hackathon in 2012, we have funded or partially funded at least one hackathon each year (see our entry in Acknowledgements).

    The Brainhack format is inspired by the hackathon model and centers on ad-hoc, informal collaborations for building, updating and extending community software tools developed by the participants’ peers, with the goal to have functioning software by the end of the event. Unlike many hackathons, Brainhacks welcome participants from all disciplines and with any level of experience—from those who have never written a line of code to software developers and expert neuroscientists, and also feature informal dissemination of ongoing research through unconferences. Also unlike some traditional hackathons, Brainhacks do not have competitions. Brainhacks value education; recent major Brainhack events even have a TrainTrack, a set of entirely education-focused sessions that run in parallel with regular hacking on projects.

    The five defining Brainhack features: 

    1) a project-oriented approach that fosters active participation and community-driven problem-solving

    2) learning by doing, which enables participants to gain more intensive training, particularly in computational methods

    3) training in open science and collaborative coding, which helps participants become more effective collaborators

    4) focus on reproducibility, which leads to more robust scientific research; and

    5) accelerated building and bridging of communities, which encourages inclusivity and seamless collaboration between researchers at different career stages

    Brainhacks have increased insight into the value of tool usability and reusability and the need for long-term maintenance, shifting community culture from individuals creating tools for their own needs to a community actively contributing to existing resources. They also help to disseminate good practices for writing code and documentation, ensuring code readability, using version control and licensing.

    Brainhacks promote awareness of reproducible practices that integrate easily into research workflows, and show the value of data sharing and open data. They introduce participants to data standards, such as BIDS, allowing them to experience the benefits of a unified data organization and providing them with the skill set to use these formats in their own research.

    Brainhacks create a scientific culture around open and standardized data, metadata, and methods, as well as detailed documentation and reporting.

    The Brainhack community is also currently working to collate Brainhack-related insights and expertise into a Jupyter Book that will serve as a centralized set of resources for the community.


    Brainhack: Developing a culture of open, inclusive, community-driven neuroscience


    This post was written for and first published (12 May 2021) on INCF’s blog

    in Malin Sandström's blog on May 12, 2021 09:53 AM.

  •

    Infographic: How to answer 5 difficult questions about Covid vaccines

    Here’s how you can help friends and family understand the benefits of vaccination.

    in Elsevier Connect on May 12, 2021 12:00 AM.

  •

    As the COVID-19 pandemic evolves, we answer 7 lingering vaccine questions

    It’s now open season for COVID-19 vaccines across the United States.

    After months of having to scramble to find a shot, the tables have turned and most people who want one can get one. Everyone 16 years and older is eligible for a vaccine, and the U.S. Food and Drug Administration on May 10 extended emergency use authorization for Pfizer’s jab to those aged 12 to 15 (SN: 5/10/21).

    So far, nearly 60 percent of adults 18 years and older — or around 150 million people — have gotten at least one dose as of May 10. President Joe Biden has set a goal of 70 percent of adults, or around 180 million, getting at least one dose by July 4, and 160 million adults being fully vaccinated — at least two weeks beyond their last shot — by that date.

    But with supply beginning to outstrip demand in many parts of the country, that goal could be difficult to reach. Local officials already are launching innovative ways to reach people who are hesitant to get the shot, from going door-to-door to address people’s concerns to promising a free beer or baseball game ticket with each jab.

    How many people get the shots will influence when life in the United States might approach something resembling a pre-pandemic normal. Computer simulations showed that if up to 75 percent of eligible people are on track to get vaccinated by September, there could be a sharp drop in cases of COVID-19 even earlier, by July, researchers report May 5 in Morbidity and Mortality Weekly Report. That decline may happen even as health officials loosen some public health guidelines, the simulations showed.

    The U.S. Centers for Disease Control and Prevention has already revised mask-wearing recommendations for people who are fully vaccinated. And on May 9, Anthony Fauci, Biden’s top medical adviser for the pandemic, suggested during an interview on ABC’s “This Week” that as vaccinations rise and daily new cases drop, requirements for wearing masks indoors could ease. 

    “We are not out of the woods yet,” CDC Director Rochelle Walensky said in a news conference on May 5. “But we could be very close.”

    As we enter this new phase of the pandemic in the United States — amid a push to get doses to as many willing (or willing-to-be-convinced) people as possible — here are some of the big outstanding questions about vaccines.

    How long does immunity last?

    The short answer is that researchers don’t know yet. But studies suggest that for most people, antibodies that recognize the coronavirus can last at least a year after an infection — perhaps longer (SN: 11/24/20). And evidence is building that vaccines provide better protection than natural infection, so it’s not unreasonable to expect that immunity might be longer-lasting for vaccinated people.

    One small study, for instance, found that of 19 people tested for antibodies a year after getting sick with COVID-19, 17 people still had detectable levels, researchers report in a preliminary study posted May 2 at medRxiv.org. Those who had more severe COVID-19 symptoms were more likely to have higher antibody levels, the researchers found. So it’s possible that people who had mild infections may become susceptible to getting infected again sooner than severely ill individuals.

    Data for how long the immune response sparked by a vaccine lasts is trickling in. People who received Moderna’s mRNA shot still have high levels of antibodies six months after getting the second dose, suggesting that they remain protected against COVID-19, researchers reported April 6 in the New England Journal of Medicine. And Pfizer’s jab, which uses a similar technology, has an efficacy of 91.3 percent against COVID-19 symptoms after six months, the pharmaceutical company announced in a news release on April 1.  

    Also, the immune system has more in its arsenal than just antibodies. Immune cells called T cells are also important for fighting off infections. Studies hint that T cells also stick around for at least six months after recovery from a natural infection, and potentially for years to come.

    If I didn’t have side effects after getting the vaccine, is it working? 

    This is the most common question people ask Juliet Morrison, a virologist at the University of California, Riverside. “Everyone keeps saying, ‘I didn’t feel anything. Am I protected?’”

    Morrison reassures her questioners with data. In Moderna’s 30,000-person trial, about 79 percent of people who got the vaccine had whole-body, or systemic, side effects, most commonly headache, fatigue and muscle aches. Some had chills or fever.  That left more than 20 percent of people who didn’t have bad side effects beyond an achy arm, or sometimes no side effects at all. But the vaccine’s efficacy was 94 percent. “That’s pretty compelling evidence that you do not need to have the adverse effects to develop immunity against SARS-CoV-2,” Morrison says.

    About 37 percent of people in the placebo group in Moderna’s trial also reported systemic side effects. “That might suggest some people have adverse reactions just as a result of the process of receiving an injection, or they might have psyched themselves up about receiving the vaccine,” she says.

    Many of the side effects are produced by immune responses that aren’t responsible for building lasting immunity, says Brianne Barker, an immunologist at Drew University in Madison, N.J. “Just because you’re not inducing the particular response that leads to fever, doesn’t mean you aren’t inducing the part that we’re hoping to induce with the vaccine.”

    Should I get an antibody test to tell if the vaccine worked?

    No. That’s not recommended, because many antibody tests on the market don’t detect the kind of antibodies made after vaccination. Most look for antibodies against the virus’s nucleocapsid, or N protein; some also detect antibodies against the coronavirus’s spike protein. Such tests are used to determine whether people have had SARS-CoV-2 infections in the past.

    Since the vaccines contain only the spike protein, people who have been vaccinated but never had COVID-19 would not have antibodies directed against the N protein. They would get a negative result or indeterminate result from tests that detect N protein antibodies.

    “You just need to trust that the efficacy of these vaccines is very high,” Morrison says.

    If I have had COVID-19, do I need to be vaccinated?  

    “All of the evidence says yes,” Barker says.  “The immune response you make when you’re infected with SARS-CoV-2 is not ideal.”

    That’s because at least four of the coronavirus’s proteins inhibit immune responses and may damage the ability to make lasting immune memories. Studies also indicate that people who have gotten two doses of an mRNA vaccine make more neutralizing antibodies — the kind that help prevent the virus from entering cells — than people who have recovered from COVID-19.

    “The immunity the vaccines confer is much more robust than the immunity from an infection,” Morrison says. “The vaccines that we have do a much better job than natural infection does.”

    Scientists are still debating whether people who had previous infections need both doses of the mRNA vaccines or if they can get away with just one dose (SN: 3/3/21). For logistical reasons, health officials are currently advising that everyone get the recommended number of doses for the vaccine they’re given (two doses for the mRNA vaccines, one for Johnson & Johnson).

    People who got sick and were treated with monoclonal antibodies or with convalescent plasma should wait 90 days before getting a COVID-19 vaccine, as these therapies can otherwise interfere with the immune response, says Matthew Laurens, a pediatric infectious diseases physician and vaccine researcher at the University of Maryland School of Medicine in Baltimore.

    Can the vaccine help people recover from long COVID?

    Some anecdotal and preliminary evidence suggests it might. About 30 percent to 40 percent of people who have persistent symptoms, known as post-acute sequelae of COVID (PASC), or long COVID, say they feel better after vaccination.

    Exactly why isn’t known. One hypothesis is that people with long COVID never quite cleared the infection. Vaccination may help give any lingering virus the boot. Or it may give the immune system a reset.

    Researchers are launching clinical trials to test whether vaccination really can help with the long-term symptoms. 

    Can the current vaccines protect me from variants?

    For the variants that have emerged so far, antibodies sparked by the COVID-19 vaccines used in the United States still seem to do their job and protect people from the worst of the disease. And the shots seem to provide better protection against variants than previous infections do, Fauci said in a news conference on May 5.

    Studies of Pfizer’s vaccine in Israel suggest it is highly effective against a variant first identified in the United Kingdom, called B.1.1.7 (SN: 4/19/21). In Qatar, Pfizer’s shot was 89.5 percent effective against COVID-19 symptoms for infections caused by that variant, researchers report May 5 in the New England Journal of Medicine. For a variant that was first identified in South Africa — called B.1.351 — the vaccine was 75 percent effective against symptomatic COVID-19, the team found. That’s heartening news because the variant has a mutation that helps the virus evade antibodies to infect lab-grown cells (SN: 1/27/21). The shot’s effectiveness at preventing severe disease or death caused by either variant was even higher, coming in at 97.4 percent.

    Other vaccines, including one developed by Novavax, are also showing some promise against variants (SN: 1/28/21). In South Africa, where B.1.351 is prevalent, Novavax’s shot had an efficacy of 60 percent in participants without HIV, researchers report May 5 in the New England Journal of Medicine. Johnson & Johnson’s jab had an efficacy of 64 percent against moderate to severe COVID-19 in a South African trial. AstraZeneca’s vaccine, on the other hand, was only 10 percent effective against B.1.351 (SN: 3/22/21).

    Some vaccine developers are making moves to update their shots. Moderna, for example, announced May 5 that giving people a third dose boosted the immune response against variants first identified in South Africa and Brazil. Participants in the trial either received a third dose of the original vaccine or an adapted one based on the variant identified in South Africa. Those who got the adapted version had antibodies that were better at stopping the variant viruses from infecting cells compared with the antibodies from people who got a third dose of the original formulation.

    Moderna is also testing a version that includes an equal mix of the original strain and the variant from South Africa.     

    Is it possible to reach herd immunity?

    In short, we still don’t know. But achieving herd immunity in the United States looks much harder as the pace of vaccinations slows and more contagious variants loom.

    Long held up as the ultimate end of the pandemic, herd immunity is reached when enough of a population is immune to prevent the virus from spreading. At that point, the average infected person spreads the virus to less than one other person, and small outbreaks can’t balloon out of control.

    Early on, estimates of the threshold needed to reach herd immunity ranged from 60 percent to 70 percent of a population. That number stemmed from initial estimates of the contagiousness of the virus. But viruses can change, and estimates have ticked above 80 percent as more worrisome variants, like B.1.1.7, which is up to 70 percent more transmissible, gain steam (SN: 4/19/21). That variant is now the dominant one causing coronavirus infections in the United States.

    It will take exceeding the theoretical threshold to reach herd immunity in the real world. That’s because vaccines aren’t 100 percent effective. And scientists still aren’t sure how well, or durably, they prevent someone from transmitting the virus, although there are tantalizing hints that vaccinated people who do get infected carry less virus and so are less infectious (SN: 2/12/21) . Even with a maximally effective vaccine, there may not be enough people willing to take it to reach herd immunity. According to recent polls, about 25 to 30 percent of Americans express reluctance to get the vaccine.
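    The back-of-the-envelope arithmetic behind these thresholds can be sketched in a few lines. This is a minimal illustration using the classic 1 - 1/R0 formula, with an adjustment for imperfect vaccine efficacy; the R0 and efficacy values below are illustrative assumptions, not figures reported in this article.

```python
def herd_immunity_threshold(r0: float) -> float:
    """Fraction of a population that must be immune so that the
    average infected person passes the virus to less than one other."""
    return 1 - 1 / r0

def coverage_needed(r0: float, vaccine_efficacy: float) -> float:
    """Vaccination coverage required when the vaccine is imperfect:
    coverage times efficacy must reach the immunity threshold."""
    return herd_immunity_threshold(r0) / vaccine_efficacy

# Illustrative numbers: an R0 of about 3 yields the oft-cited
# 60 to 70 percent range; a variant roughly 70 percent more
# transmissible (R0 of about 5) pushes the threshold toward 80 percent.
print(round(herd_immunity_threshold(3.0), 2))  # 0.67
print(round(herd_immunity_threshold(5.0), 2))  # 0.8

# With a 90-percent-effective vaccine, the coverage needed rises further.
print(round(coverage_needed(5.0, 0.9), 2))     # 0.89
```

    As the code suggests, even modest increases in transmissibility push the required coverage up sharply, which is why vaccine hesitancy matters so much for the threshold.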

    Biden’s goal of vaccinating 70 percent of adults with at least one shot by July 4 amounts to about 55 percent of the total population. That likely wouldn’t push the country over the herd immunity threshold, but it would still help curb the pandemic. In Israel, for instance, about 60 percent of the population is now vaccinated; cases have dropped significantly, and daily deaths have fallen to near zero in recent weeks.

    “You vaccinate enough people, the infections are going to go down,” Fauci told the New York Times.  

    in Science News on May 11, 2021 04:46 PM.


    Morphing noodles start flat but bend into curly pasta shapes as they’re cooked

    This pasta is no limp noodle.

    When imprinted with carefully designed arrangements of grooves, flat pasta morphs as it cooks, forming tubes, spirals and other shapes traditional for the starchy sustenance. The technique could allow for pasta that takes up less space, Lining Yao and colleagues report May 5 in Science Advances.

    Pasta aficionados “are very picky about the shapes of pasta and how they pair with different sauces,” says Yao, who studies the design of smart materials at Carnegie Mellon University in Pittsburgh. But those shapes come at a cost of excess packaging and inefficient shipping: For some varieties of curly pasta, more than 60 percent of the packaging space is used to hold air, the researchers calculated.

    Yao and colleagues stamped a series of grooves onto one side of each noodle. As the pasta absorbed water during cooking, the liquid couldn’t penetrate as fully on the grooved side, causing it to swell less than the smooth side of the pasta. That asymmetric swelling bent the previously flat noodle into a curve. By changing the arrangement of the grooves, the researchers controlled the final shape. Computer simulations of swelling pasta replicated the shapes seen in the experiments.

    Flat pasta (top) with the right pattern of grooves imprinted on it curls into traditional pasta shapes when boiled. Computer simulations of the pasta (bottom) show the same behavior.

    The technique isn’t limited to pasta: Another series of experiments, performed with silicone rubber in a solvent, produced similar results. But whereas the pasta held its curved shape, the silicone rubber eventually absorbed enough solvent to flatten out again. The gluey nature of cooked pasta helps lock in the twists by fusing neighboring grooves together, the researchers determined. Removing the silicone from the solvent caused the silicone to bend in the opposite direction. This reversible bending process could be harnessed for other purposes, such as a grabber for robot hands, Yao says.

    The pasta makes particularly good camping food, Yao says. A member of her team brought it along on a recent hiking trip. The pasta slips easily into a cramped pack but cooks into a satisfying shape.

    in Science News on May 11, 2021 01:00 PM.


    Vaccine hesitancy is nothing new. Here’s the damage it’s done over centuries

    As vaccines to protect people from COVID-19 started becoming available in late 2020, the rhetoric of anti-vaccine groups intensified. Efforts to keep vaccines out of arms reinforce misinformation about the safety and effectiveness of the vaccines and spread disinformation — deliberately misleading people for political, ideological or other reasons.

    Vaccines have been met with suspicion and hostility for as long as they have existed. Current opposition to COVID-19 vaccines is just the latest chapter in this long story. The primary driver of vaccine hesitancy throughout history has not been money, selfishness or ignorance.

    “Vaccine hesitancy has less to do with misunderstanding the science and more to do with general mistrust of scientific institutions and government,” says Maya Goldenberg, a philosophy expert at the University of Guelph, Ontario, who studies the phenomenon. Historically, people harmed or oppressed by such institutions are the ones most likely to resist vaccines, adds Agnes Arnold-Forster, a medical historian at the University of Bristol in England.

    A range of recurring and intersecting themes have fueled hesitancy globally and historically. These include anxiety about unnatural substances in the body, vaccines as government surveillance or weapons, and personal liberty violations. Other concerns relate to parental autonomy, faith-based objections, and worries about infertility, disability or disease. For example, some people oppose vaccines that were grown in cell culture lines that began from aborted fetal cells, or they mistakenly believe vaccines contain fetal cells. One of today’s false beliefs — that COVID-19 vaccines contain a microchip — represents anxiety about both vaccine ingredients and vaccines as a surveillance tool.

    “The reasons people have hesitated reflect the cultural anxieties of their time and place,” Goldenberg says. Worries about toxins, for instance, surfaced amid the environmental movement of the 1970s, and people in countries steeped in civil war have perceived vaccines as government weapons.

    Historical attempts to curb vaccine hesitancy often failed because they relied on authoritarian and coercive methods. “They were very blunt, very punitive and very ineffective,” Arnold-Forster says. “They had very little impact on actual vaccine intake.”

    The most effective remedies center on building trust and open communication, with family doctors having the greatest influence on people’s decision to vaccinate. Increased use of “trusted messengers” to share accurate and reassuring vaccine information with their communities builds on this.

    18th Century
    Smallpox vaccine sets the stage around the globe

    In a way, anti-vaccination attitudes predate vaccination itself. Public vaccination began after English physician Edward Jenner learned that milkmaids were protected from smallpox after exposure to cowpox, a related virus in cows. In 1796, Jenner scientifically legitimized the procedure of injecting people with cowpox, which he termed variolae vaccinae, to prevent smallpox. However, variolation — which staved off serious smallpox infections by triggering mild infection through exposure to material from an infected person — dates back to at least the 1000s in Asia, Africa and other parts of the world. In some cases people inhaled the dried scabs of smallpox lesions or rubbed or injected pus from smallpox lesions into a healthy person’s scratched skin.

    About 1 to 2 percent of people — including a son of Britain’s King George III in 1783 — died from the procedure, far fewer than the up to 30 percent who died from smallpox. Benjamin Franklin rejected variolation, but later regretted it when smallpox killed his youngest son. Onesimus, an enslaved man in Boston, taught the procedure to Puritan minister Cotton Mather, who in turn urged doctors to inoculate the public during a 1721 smallpox outbreak. Many refused, and Mather faced hostility: A small bomb was thrown through his window. Reasons given for avoiding variolation — particularly that it was unnatural to interfere with a person’s relationship with God — were the seeds of later anti-vaccination attitudes.

    19th Century
    The first vaccination laws kindle resistance

    In 1809, Massachusetts passed the world’s first known mandatory vaccination law, requiring the general population to receive the smallpox vaccine. Resistance began to grow as other states passed similar laws. Then the U.K. Vaccination Act of 1853 required parents to get infants vaccinated by 3 months old, or face fines or imprisonment. The law sparked violent riots and the formation of the Anti-Vaccination League of London. Vaccine resisters were often poor people suspicious of a forced medical intervention since, under normal circumstances, they rarely received any health care. Anti-vaccination groups argued that compulsory vaccination violated personal liberty, writing that the acts “trample upon the right of parents to protect their children from disease” and “invaded liberty by rendering good health a crime.”

    This 1838 illustration seems to take a negative view of a vaccination method that used cowpox to immunize people against a similar, and deadly, human disease, smallpox. (National Library of Medicine)

    Anti-vaccination sentiment grew and spread across Europe until an 1885 demonstration of about 100,000 people in Leicester, England, prompted the British monarchy to appoint a commission to study the issue. The resulting 1896 report led to an 1898 act that removed penalties for parents who didn’t believe vaccination was safe or effective. The act introduced the term “conscientious objectors,” which later became more commonly associated with those who refuse military service on religious or moral grounds.

    Across the Atlantic, most U.S. residents had embraced Jenner’s cowpox protective, leading to a precipitous drop in smallpox outbreaks. But with fewer outbreaks, complacency set in and vaccination rates dropped. As smallpox outbreaks resurfaced in the 1870s, states began enforcing existing vaccination laws or passing new ones. British anti-vaccinationist William Tebb visited New York in 1879, which led to the founding of the Anti-Vaccination Society of America. The group’s tactics will sound familiar: pamphlets, court battles and arguments in state legislatures that led to the repeal of mandatory vaccination laws in seven states. The 1905 Supreme Court decision Jacobson v. Massachusetts upheld a state’s right to mandate vaccines; it remains precedent today.

    20th Century
    A menu of vaccines draws praise and ire

    1982: Documentary hypes vaccine injuries

    The U.S. entered a golden age of vaccine development from the 1920s through the 1970s with the arrival of vaccines for diphtheria, pertussis, polio, measles, mumps and rubella. Opposition diminished as infection rates, particularly for polio, fell. Rosalynn Carter and Betty Bumpers, the wives of the governors of Georgia and Arkansas, respectively, began a vaccination campaign that grew into a national effort in the 1970s. The goal was to encourage every state to require children attending public school to receive most vaccines recommended by the U.S. Centers for Disease Control and Prevention.

    A nationally aired 1982 news documentary called “DPT: Vaccine Roulette” changed everything. Lea Thompson, a reporter with WRC-TV in Washington, D.C., shared emotional stories of parents claiming their children had suffered seizures and brain damage from the diphtheria-pertussis-tetanus, or DPT, shot. Interviews with doctors lent the stories credence. Fever-caused seizures were a known side effect of DPT, and a 1974 study had reported neurological complications developing in 36 children within 24 hours of DPT vaccination. But the study did not follow the children long-term. Later research revealed neither the seizures nor the vaccine caused long-term brain damage.

    But the damage to public trust was done. Coopting the DPT acronym, one parent, Barbara Loe Fisher, cofounded Dissatisfied Parents Together, which became the National Vaccine Information Center, the most influential anti-vaccine organization in the United States.

    1998: Fraudulent study links vaccines to autism

    The National Vaccine Information Center maintained a steady hum of anti-vaccination sentiment and activity through the 1980s and ’90s. Then British gastroenterologist Andrew Wakefield published a report in the Lancet alleging that the measles-mumps-rubella, or MMR, vaccine caused autism spectrum disorder in 12 children. Wakefield falsified data, violated informed consent and secretly invested in development of a solo measles vaccine, but it took years to uncover his deceit (SN Online: 2/3/10). Fears about autism and vaccines had already exploded by the time the study was retracted 12 years after publication.

    Almost immediately after publication of the study, U.K. vaccination rates began falling. But news of Wakefield’s work didn’t reach the United States until 2000, just as U.S. medical authorities were embroiled in a debate about the use of thimerosal, a mercury-containing preservative, in vaccines. In 1999, the U.S. Public Health Service recommended removing thimerosal from childhood vaccines as a precautionary measure to reduce infants’ mercury exposure. Later research showed no safety concerns about its use.

    The MMR vaccine never contained thimerosal, but fears about mercury-related brain damage merged with those about MMR and autism, creating a storm of anger and fear surrounding claims of vaccine harm.

    Protesters at a February 2021 event in Sydney, Australia, came out against the idea of mandatory COVID-19 vaccinations, just days before vaccines became available to frontline health care workers. (Brook Mitchell/Getty Images)

    21st Century
    Social media and slick documentaries

    Despite the 2010 retraction of his study and the revocation of his license to practice medicine in the United Kingdom, Wakefield remains a leader in today’s anti-vaccination movement. Joining him is Robert F. Kennedy, Jr., who gained prominence promoting unfounded allegations about thimerosal. Both men rode the wave of anti-vaccination networking on social media and the promotion of disinformation through slick documentaries like 2016’s Vaxxed: From Cover-Up to Catastrophe (SN Online: 4/1/16).

    In 2014, the United States saw its highest number of measles cases since the disease was eliminated from the country in 2000, culminating in a large outbreak that began at Disneyland that December. In response, California passed a law removing parents’ ability to opt out of vaccinating their children based on personal beliefs and required that all children receive CDC-recommended vaccines to attend school (SN Online: 7/2/19). Extreme opposition to that law and subsequent ones helped fuel a resurgence in anti-vaccine advocacy along with an alarming measles outbreak in 2019 (SN: 12/21/19 & 1/4/20, p. 24).

    The vast majority of people accept recommended vaccines and their role in stemming the spread of infectious diseases. Recent surveys suggest that 69 percent of U.S. adults say they have or will get a COVID-19 vaccine, an improvement over the 60 percent willing to do so in November. But responses to surveys don’t necessarily predict behavior, Goldenberg says.

    in Science News on May 11, 2021 10:00 AM.


    Clinical trial paper that made anemia drug look safer than it is will be retracted

    A study that a pharmaceutical company admitted last month included manipulated data will be retracted, Retraction Watch has learned. The paper, “Pooled Analysis of Roxadustat for Anemia in Patients With Kidney Failure Incident to Dialysis,” was published in Kidney International Reports in December 2020. The study analyzed data from a clinical trial for roxadustat, a …

    in Retraction watch on May 11, 2021 10:00 AM.


    Self-Reflection Can Make You A Better Leader At Work

    By Emily Reynolds

    What does being a good leader mean to you? Having tonnes of charisma? Being intelligent? Encouraging fairness and participation in the workplace? Whatever combination of qualities you value, it’s likely that your vision of good leadership is different from your colleague’s or your manager’s, who themselves will have a highly personal vision of who they want to be at work.

    A new study from Remy E. Jennings at the University of Florida and colleagues, published in Personnel Psychology, looks closely at this individualised idea of leadership — our “best possible leader self”. If we focus and reflect on this best possible self every morning, they find, it could help us behave more like a leader in the here and now.

    Participants were 54 students (mainly White men) enrolled on a weekend MBA course in the United States, chosen because they were likely to aspire to leadership. All participants were in the intervention condition for five days and the control condition for five days.

    In the intervention condition, participants were asked to write expressively every morning in response to the prompt “think about your best possible self in a leadership role sometime in the future. Imagine everything has gone as well as it possibly could for you. Think of this as the realisation of the best possible leader you could ever hope to be”. During this period, participants were also asked to reflect on positive traits, useful skills and achievements they felt they had and that could help them become this best possible leader.

    In the control condition participants wrote instead about three neutral objects, describing their car, objects in their office, or landmarks they saw on their way to work. Positive and negative affect was measured in both conditions.

    The rest of the day was the same in both conditions. In the afternoon, a follow-up survey looked at “helping” behaviours, with participants asked whether they had mentored or helped someone at work or given encouragement or appreciation to colleagues, and “visioning” behaviours: whether participants had spoken about future opportunities or strategic goals during their day. 

    Finally, in the evening, participants responded to two measures. One, looking at leader identity, required participants to indicate how much they agreed with statements such as “today, I displayed the characteristics of a leader”; the other invited them to describe their day in their own words and was later analysed for feelings of high expertise and confidence.

    The results showed that reflecting on the best possible leader self increased participants’ positive affect, which in turn was associated with more helping behaviours and with vision-related behaviours during the work day. Those engaging in both helping and vision-related behaviours also felt more like they had behaved like a leader that day, and showed more “clout” — the feelings of high expertise and confidence as measured in the evening survey.

    The study suggests that a period of envisioning and reflecting on the best possible leader self could make people more helpful and more confident in their leadership abilities. There are, however, limitations. Firstly, the small sample size was overwhelmingly made up of White men: would such an intervention have the same impact for other groups? Similarly, it’s important not to take the results as a straightforward ticket to success in the workplace. Self-reflection may be beneficial for many, but there may also be systemic factors working against promotion or increased clout for women, people of colour, or others who can be marginalised at work.

    It would also be interesting to explore the actual content of participants’ best possible leader selves. Results from the study suggest that self-reflection of any kind can increase prosocial helping behaviours — but what if somebody’s best possible leader self is authoritarian and harsh with employees, or focused merely on power rather than collectivity or helpfulness? Looking at the range of “leader selves” and how they interact with workplace behaviours could give further insight.

    Reflecting on one’s best possible self as a leader: Implications for professional employees at work

    Emily Reynolds is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on May 11, 2021 07:15 AM.


    Meaningless and pseudoscientific potatoes

    How to cook potato data. A recipe from Poland.

    in For Better Science on May 11, 2021 04:58 AM.


    Helping you know – and show – the ROI of the research you fund

    When funding depends on it, how do you show the impact of your org’s research? This team tackled that challenge through analytics and collaboration

    in Elsevier Connect on May 11, 2021 12:00 AM.


    Pfizer’s COVID-19 vaccine recommended for adolescents by CDC committee

    Adolescents as young as 12 can now receive the Pfizer COVID-19 vaccine in the United States. On May 12, the U.S. Centers for Disease Control and Prevention’s Advisory Committee on Immunization Practices recommended the Pfizer vaccine for adolescents aged 12 to 15. The vote came two days after the U.S. Food and Drug Administration granted emergency use authorization of this vaccine for adolescents.

    “The benefits far outweigh the risks” for this age group, ACIP member Henry Bernstein said after the vote. Vaccinating adolescents against COVID-19 will protect them, decrease transmission in their families, help control community spread and allow adolescents to more safely go back to in-person school, said Bernstein, a pediatrician at Cohen Children’s Medical Center in New Hyde Park, N.Y.

    Most states have been waiting for the committee’s go-ahead, although a handful, including Georgia and Delaware, have already begun giving the shots to this age group.

    On May 5, Canada became the first country to authorize Pfizer’s COVID-19 vaccine for that age group. Meanwhile, Moderna announced in a news release May 6 that early data from its trial in adolescents ages 12 and up indicate that the vaccine has 96 percent efficacy in that age group. The company says it is working with regulators to extend use of its vaccine to teens and adolescents, perhaps by the end of May.

    Previously, Pfizer’s vaccine was authorized for emergency use in the United States for people 16 and older. Along with other vaccine makers, Pfizer is also testing its shot in even younger children. It expects to have results for those ages 2 to 11 by September, and for those down to 6 months old by the end of the year.  

    “My hope is that, if everything goes as planned, by early next year, 2022, we may have an [emergency use authorization] for younger [and] younger children,” says Inci Yildirim, a pediatric infectious diseases physician and vaccinologist at Yale School of Medicine. She is leading Yale’s portion of Moderna’s KidCOVE trial testing the vaccine in children from 6 months to 11 years old. Moderna’s vaccine is currently OK’d for those 18 and older.

    The timeline means elementary school–age children and some middle schoolers will probably remain unvaccinated in the fall, though many middle school and high school students will be eligible.

    So far, kids seem to react to the vaccines at least as well as adults do, Yildirim says. Younger adolescents in Pfizer’s trial had even higher antibody levels than 16- to 18-year-olds did, the company reported in a March 31 news release. In that trial, 18 of 1,129 kids who got a placebo shot got COVID-19. None of the 1,131 kids who got the vaccine developed the disease.
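    The efficacy figure implied by those trial counts follows from the standard formula: efficacy is one minus the ratio of the attack rate in the vaccinated arm to the attack rate in the placebo arm. A minimal sketch using the counts quoted above (the helper function name is illustrative, not from the trial protocol):

```python
def vaccine_efficacy(cases_vax: int, n_vax: int,
                     cases_placebo: int, n_placebo: int) -> float:
    """Efficacy = 1 - (attack rate, vaccinated) / (attack rate, placebo)."""
    return 1 - (cases_vax / n_vax) / (cases_placebo / n_placebo)

# Counts from Pfizer's adolescent trial as quoted above:
# 18 of 1,129 placebo recipients got COVID-19; 0 of 1,131 vaccinated did.
print(vaccine_efficacy(0, 1131, 18, 1129))  # 1.0, i.e. 100 percent efficacy
```

    With zero cases in the vaccinated arm, the point estimate is 100 percent, though the true efficacy carries a confidence interval that depends on the trial’s size.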

    It remains to be seen if the youngest children can muster up a strong immune response to the vaccine. Babies and toddlers up to 2 years old still have immune systems in training. It may take a higher dose of vaccine to get their immature immune systems to react, Yildirim says. “We’re trying to find a dose for those age groups that will be safe, but at the same time effective and immunogenic.”

    Vaccinating children is important for “protecting the child in front of you,” Yildirim says. Though most children develop mild illness, 0.1 percent to 1.9 percent are hospitalized with the disease, and an estimated 378 children have died, according to the American Academy of Pediatrics and the Children’s Hospital Association. Even kids who get such mild disease that they barely notice they’re sick may develop lingering symptoms often called long COVID.

    “We have patients coming to the doctor’s office saying, ‘I cannot run. I cannot swim. I cannot concentrate at school as much as I used to,’” she says. Testing antibody levels for those children usually reveals they had COVID-19 previously.

    Another post-COVID malady called multisystem inflammatory syndrome in children, or MIS-C, has struck more than 3,000 children in the United States, killing 36, according to the U.S. Centers for Disease Control and Prevention. That out-of-control inflammatory syndrome can land kids in intensive care with organ failure, Yildirim says (SN: 6/3/20). Vaccines may help prevent those serious complications.

    Vaccinating children is necessary to reach herd immunity, when enough people are protected from the virus that its spread is thwarted. Right now, children account for about 22 percent of new COVID-19 cases. About 70 to 80 percent of people will need to become immune to the virus to reach population-level protection, Yildirim says. “You cannot get there without vaccinating children.”
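    Yildirim’s 70 to 80 percent figure is consistent with the standard herd-immunity threshold formula, 1 − 1/R0, where R0 is the virus’s basic reproduction number. A minimal sketch of the arithmetic (the R0 values below are illustrative assumptions, not figures from the article):

```python
def herd_immunity_threshold(r0: float) -> float:
    """Fraction of a population that must be immune to halt sustained spread."""
    return 1 - 1 / r0

# Illustrative reproduction numbers spanning the 70-80 percent range
for r0 in (3.3, 5.0):
    print(f"R0={r0}: threshold = {herd_immunity_threshold(r0):.0%}")
```

    An R0 around 3.3 implies roughly 70 percent immunity is needed; an R0 of 5 implies 80 percent.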

    To get kids vaccinated, “we will need parent buy-in,” says Donna Hallas, a pediatric nurse practitioner at NYU Rory Meyers College of Nursing in New York City. Tackling parental hesitancy is a hard, but necessary job, she says. In a recent poll, a quarter of parents of 12- to 15-year-olds said they would not vaccinate their children against COVID-19. Another quarter said they would wait to see how well the vaccines work. About a third said they would vaccinate their kids as soon as possible, and 18 percent said they would get their children vaccinated if their schools require it.

    Pfizer applied May 7 for full approval of its vaccine, and Moderna has announced plans to also seek full approval soon. The move may have important implications for vaccinating children. “With emergency use authorization, you can’t really say everybody should have that vaccine,” Hallas says. But schools can mandate use of fully approved vaccines.

    For many parents, including Yildirim, COVID-19 vaccines for kids can’t come soon enough. She began testing the Moderna vaccine in March 2020. Nine months later, she got that shot in her arm when health care workers became eligible. Her 18-year-old son has gotten the Pfizer vaccine. But “my 5-year-old daughter has no vaccine available to her,” Yildirim says, “so I’m looking forward to a pediatric vaccine.”

    Biomedical writer Aimee Cunningham contributed to this story.

    in Science News on May 10, 2021 10:15 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Scientists remotely controlled the social behavior of mice with light

    With the help of headsets and backpacks on mice, scientists are using light to switch nerve cells on and off in the rodents’ brains to probe the animals’ social behavior, a new study shows.

    These remote control experiments are revealing new insights into the neural circuitry underlying social interactions, supporting previous work suggesting minds in sync are more cooperative, researchers report online May 10 in Nature Neuroscience.

    The new devices rely on optogenetics, a technique in which researchers use bursts of light to activate or suppress nerve cells, or neurons, in the brain, often using tailored viruses to genetically modify cells so they respond to illumination (SN: 1/15/10). Scientists have used optogenetics to probe neural circuits in mice and other lab animals to yield insights on how they might work in humans (SN: 10/22/19).

    Optogenetic devices often feed light to neurons via fiber-optic cables, but such tethers can interfere with natural behaviors and social interactions. While scientists recently developed implantable wireless optogenetic devices, these depend on relatively simple remote controls or limited sets of preprogrammed instructions.

    These new fully implantable optogenetic arrays for mice and rats can enable more sophisticated research. Specifically, the researchers can adjust each device’s programming during the course of experiments, “so you can target what an animal does in a much more complex way,” says Genia Kozorovitskiy, a neurobiologist at Northwestern University in Evanston, Ill.

    These head-mounted and back-mounted devices are battery-free, wirelessly powered by the same high-frequency radio waves used to remotely control the intensity, duration and timing of the light pulses. The prototypes also allow scientists to simultaneously control four different neural circuits in an animal, thanks to LEDs that emit four hues — blue, green, yellow and red — instead of just one.

    In experiments with mice, Kozorovitskiy and colleagues used the devices to target the prefrontal cortex, a part of the brain linked with decision making and other complex behaviors. When the team delivered similar patterns of neural stimulation in this area to pairs or trios of mice, the rodents groomed and sniffed companions with whom their neurons were in sync more often than ones with whom they were out of sync. The findings support previous research suggesting this kind of synchrony between minds can boost social behavior, “particularly cooperative interactions,” Kozorovitskiy says.

    The widely available wireless technology used in this work, the same now used in contactless payment with credit cards, could allow broad adoption across the neuroscience community “without extensive specialized hardware,” says neurotechnologist Philipp Gutruf at the University of Arizona in Tucson, who did not take part in this research. That “means that we might see these devices in many labs in the near future, enabling new discoveries.” The insights gained on the nervous system from such research, he says, may in turn “inform better diagnostics and therapeutics in humans.”

    in Science News on May 10, 2021 05:31 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Planet-forming disks around stars may come preloaded with ingredients for life

    The chemistry leading to life may start before stars are even born.

    In the planet-forming disk of gas and dust around a young star, astronomers have detected methanol. The disk is too warm for the methanol to have formed there, so this complex organic molecule probably originated in the interstellar cloud that collapsed to form the star and its disk, researchers report online May 10 in Nature Astronomy. This finding offers evidence that at least some organic matter from interstellar space can seed the disks around newborn stars to provide potential ingredients for life on new planets.

    “That’s pretty exciting, because it means that, in principle, all planets forming around any kind of star could have this material,” says Viviana Guzmán, an astrochemist at the Pontifical Catholic University of Chile in Santiago not involved in the work.

    Complex organic molecules have been observed in interstellar clouds of gas and dust (SN: 3/22/21), as well as in planet-forming disks around young stars (SN: 2/18/08). But astronomers didn’t know whether organic material from interstellar space could survive the formation of a protoplanetary disk, or whether organic chemistry had to start from scratch around new stars.

    “When you form a star and its disk, it’s not a very easy, breezy process,” says Alice Booth, an astronomer at Leiden University in the Netherlands. Radiation from the new star and shock waves in the imploding material, she says, “could destroy a lot of the molecules that were originally in your initial cloud.”

    Using the ALMA radio telescope array in Chile, Booth and colleagues observed the disk around a bright, young star named HD 100546, about 360 light-years away. There, the team spotted methanol, which is thought to be a building block for life’s molecules, such as amino acids and proteins.

    Methanol could not have originated in the disk, because this molecule forms when hydrogen interacts with carbon monoxide ice, which freezes only at temperatures below about –253° Celsius. The disk around HD 100546 is much warmer than that, heated by a star whose surface is roughly 9,700° C — some 4,000 degrees hotter than the sun. So the disk must have inherited its methanol from the interstellar cloud that forged its central star, the researchers conclude.

    “This is the first evidence that the really interesting chemistry we see early on [in star formation] actually survives incorporation into the planet-forming disk,” says Karin Öberg, an astrochemist at Harvard University who was not involved in the work. Astronomers should next search the disks around other young stars for methanol or other organic molecules, she says, to “explore whether this is a one-time, get lucky kind of thing, or whether we can safely assume that planet-forming disks always inherit these kinds of molecules.”

    in Science News on May 10, 2021 03:00 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    A common antibiotic slows a mysterious coral disease

    Slathering corals in a common antibiotic seems to temporarily soothe a mysterious tissue-eating disease, new research suggests.

    Just off Florida, a type of coral infected with stony coral tissue loss disease, or SCTLD, showed widespread improvement several months after being treated with amoxicillin, researchers report April 21 in Scientific Reports. While the deadly disease eventually reappeared, the results provide a spot of good news while scientists continue the search for what causes it.

    “The antibiotic treatments give the corals a break,” says Erin Shilling, a coral researcher at Florida Atlantic University’s Harbor Branch Oceanographic Institute in Fort Pierce. “It’s very good at halting the lesions it’s applied to.”

    Divers discovered SCTLD on reefs near Miami in 2014. Characterized by white lesions that rapidly eat away at coral tissue, the disease plagues nearly all of the Great Florida Reef, which spans 580 kilometers from St. Lucie Inlet in Martin County to Dry Tortugas National Park beyond the Florida Keys. In recent years, SCTLD has spread to reefs in the Caribbean (SN: 7/9/19).

    As scientists search for the cause, they are left to treat the lesions through trial and error. Two treatments that show promise involve divers applying a chlorinated epoxy or an amoxicillin paste to infected patches. “We wanted to experimentally assess these techniques to see if they’re as effective as people have been reporting anecdotally,” Shilling says.

    In April 2019, Shilling and colleagues identified 95 lesions on 32 colonies of great star coral (Montastraea cavernosa) off Florida’s east coast. The scientists dug trenches into the corals around the lesions to separate diseased tissue from healthy tissue, then filled the moats and covered the diseased patches with the antibiotic paste or chlorinated epoxy and monitored the corals over 11 months.

    Treatment with an amoxicillin paste (white bands, left) stopped a tissue-eating lesion from spreading over a great star coral colony up to 11 months later (right). E.N. Shilling, I.R. Combs and J.D. Voss/Scientific Reports 2021

    Within about three months of the treatment, some 95 percent of infected coral tissues treated with amoxicillin had healed. Meanwhile, only about 20 percent of infected tissue treated with chlorinated epoxy had healed in that time — no better than untreated lesions. 

    But a one-and-done treatment doesn’t stop new lesions from popping up over time, the team found. And some key questions remain unanswered, the scientists note, including how the treatment works on larger scales and what, if any, longer-term side effects the antibiotic could have on the corals and their surrounding environment.

    “Erin’s work is fabulous,” says Karen Neely, a marine biologist at Nova Southeastern University in Fort Lauderdale, Fla. Neely and her colleagues see similar results in their two-year experiment at the Florida Keys National Marine Sanctuary. The researchers used the same amoxicillin paste and chlorinated epoxy treatments on more than 2,300 lesions on upwards of 1,600 coral colonies representing eight species, including great star coral.

    Those antibiotic treatments were more than 95 percent effective across all species, Neely says. And spot-treating new lesions that popped up after the initial treatment appeared to stop corals from becoming reinfected over time. That study is currently undergoing peer review in Frontiers in Marine Science.

    “Overall, putting these corals in this treatment program saves them,” Neely says. “We don’t get happy endings very often, so that’s a nice one.”

    in Science News on May 10, 2021 11:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Next WG meeting: 12 May at 1200 UTC

    Photo by Daria Nepriakhina on Unsplash

    The next open community meeting for the working group will be at 1200 UTC on 12th May, 2021. The primary agenda for this meeting is to plan what tutorials the working group intends to undertake at CNS*2021 in July. Please join us via Zoom.

    You can see the local time for the meeting using this link. On Linux style systems, you can also use this command in the terminal:

    date --date='TZ="UTC" 1200 this wednesday'
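    If GNU date is not available, a small Python equivalent (a sketch using the standard-library zoneinfo module, Python 3.9+; the meeting date and time are taken from the announcement above) performs the same conversion:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Meeting time from the announcement: 1200 UTC on 12 May 2021
meeting_utc = datetime(2021, 5, 12, 12, 0, tzinfo=ZoneInfo("UTC"))

# astimezone() with no argument converts to the system's local timezone
print(meeting_utc.astimezone().strftime("%Y-%m-%d %H:%M %Z"))
```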

    We hope to see you there.

    in INCF/OCNS Software Working Group on May 10, 2021 10:27 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Next Open NeuroFedora meeting: 10 May 1300 UTC

    Photo by William White on Unsplash

    Please join us at the next regular Open NeuroFedora team meeting on Monday 10 May at 1300 UTC in #fedora-neuro on IRC (Freenode). The meeting is public and open for everyone to attend. You can join us over:

    You can use this link to convert the meeting time to your local time. Or, you can also use this command in the terminal:

    $ date --date='TZ="UTC" 1300 today'

    The meeting will be chaired by @gicmo. The agenda for the meeting is:

    We hope to see you there!

    in NeuroFedora blog on May 10, 2021 10:21 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Authors yank ketamine study, hoping it will go away without attention, and journal obliges

    The authors of a paper on the antidepressant effects of ketamine have retracted their article for a lack of reproducibility — but readers have no way of knowing that because the journal declined to say as much in the retraction notice. If that sounds like a tale from the pages of the Journal of Neuroscience, … Continue reading Authors yank ketamine study, hoping it will go away without attention, and journal obliges

    in Retraction watch on May 10, 2021 10:01 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    In English, Round And Spiky Objects Tend To Have “Round” And “Spiky” Sounds

    By Emma Young

    Many of us are familiar with the “bouba/kiki”, or “maluma/takete” effect — that we tend to pair round, blobby shapes with the words “bouba” or “maluma” and spiky shapes with “kiki” or “takete”. These findings hold for speakers of many different languages and ages, and various explanations for the effect have been proposed.

    But these studies have almost exclusively used made-up words (like these four), note the authors of a new paper in Psychonomic Bulletin & Review, who have found that the effect is also at play in the English language. That is, the components of made-up words that we commonly pair with a round shape tend also to be found in nouns that refer to actual round objects, and the same for spiky sounds and objects.

    David M Sidhu, now at University College London, and colleagues recruited 171 student participants in Canada. Each person was presented with 100 sets of six concrete nouns, and for each set, had to choose the two nouns that they felt referred to the “most round” and “most spiky” objects — so, for example, they had to pick the most-round and most-spiky objects from “unicycle”, “moon”, “balcony”, “pyramid”, “jet” and “driller”. From these results, the researchers generated a spiky/round rating for each word. (The top five “roundest” in order were: softball, ball, olive, pea and globe, while spike, fork, porcupine and scalpel led the “spikiest” object list.)
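    The paper’s exact scoring procedure isn’t described here, but one hypothetical way to turn such forced-choice picks into a rating is to tally, for each word, how often it was chosen as roundest versus spikiest relative to how often it appeared (the trial data and the scoring formula below are illustrative assumptions, not the authors’ method):

```python
from collections import defaultdict

# Hypothetical trials: (word set shown, word picked roundest, word picked spikiest)
trials = [
    (["unicycle", "moon", "balcony", "pyramid", "jet", "driller"], "moon", "pyramid"),
    (["unicycle", "moon", "balcony", "pyramid", "jet", "driller"], "moon", "driller"),
]

tally = defaultdict(lambda: {"round": 0, "spiky": 0, "shown": 0})
for words, roundest, spikiest in trials:
    for w in words:
        tally[w]["shown"] += 1
    tally[roundest]["round"] += 1
    tally[spikiest]["spiky"] += 1

# Score in [-1, 1]: positive leans round, negative leans spiky, 0 is neutral
scores = {w: (c["round"] - c["spiky"]) / c["shown"] for w, c in tally.items()}
for word, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{word}: {score:+.2f}")
```

    With this toy data, “moon” scores +1.0 (always picked roundest) while “pyramid” and “driller” each score −0.5.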

    The team then looked at the phonemes, or distinct “units” of sound, which made up these nouns. They compared these results with those from earlier studies that have investigated which phonemes in made-up words are generally associated with roundness or spikiness.

    Almost all of the phonemes previously associated with roundness were indeed present in the real words that participants identified as referring to round objects, and the same for spikiness. So the sounds u as in “up”, m, oo as in “boot” and b were more common in nouns that referred to round objects, while the sounds k, t and i as in “ship” were more common in nouns that referred to spiky objects. “Our main finding was that many of the associations between phonemes and shapes found in laboratory tasks are attested in the pairing between sound and meaning in English,” the team writes.

    There were a few unexpected findings. For example, in this study, in contrast to some earlier ones involving made-up words, s and sh were more common in words for spiky objects. However, this result does fit with work linking certain phonemes to greater or lesser emotional arousal. We tend to perceive words with hissing sounds as being higher in emotional arousal — which is also linked to spikiness. The researchers also note that these phonemes, along with other consonants common in words for spiky objects, require the involvement of the tongue, whereas m, b, etc are made just with the lips.

    This work contributes to the argument that (in addition to cases of onomatopoeia) there is at least some relationship between the sound of words in English and what they refer to. This isn’t a powerful relationship; as the researchers themselves stress: “Many other factors play larger roles in the form of language.” Still, the work does reveal some consistent round/spiky patterns in real English words, which seems to be a first in almost 100 years of research into this effect.

    Sound symbolism shapes the English language: The maluma/takete effect in English nouns

    Emma Young (@EmmaELYoung) is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on May 10, 2021 07:36 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    The unwanted COVID-19 preprint of Rudolph Jaenisch

    Famous MIT lab discovered the coronavirus integrates into human genome and is still transcribed! Between preprint and contributed PNAS paper, three authors and a mechanism were dropped.

    in For Better Science on May 10, 2021 05:39 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    How India’s COVID-19 crisis became the worst in the world

    PUNE, India — Mohanish Ellitam watched helplessly as his 49-year-old mother’s oxygen levels dipped dangerously and she gasped for air. “I could see her stomach rising and falling,” Ellitam said. “I was so scared.”

    Watching his mother’s health deteriorate, Ellitam knew he couldn’t wait any longer. But in Shevgaon, a small town in the state of Maharashtra, health care facilities were limited and already overwhelmed with people suffering from COVID-19. He frantically called friends, family and almost everyone on his contact list with connections to the region’s hospitals. After nearly 100 calls, on April 12 Ellitam finally found a spot at Surabhi Hospital in Ahmednagar, nearly 60 kilometers from his hometown.

    But there was no room for relief just yet. His father, 53, also started growing tired and breathless. While his father stayed isolated in a hotel room opposite the hospital, Ellitam lived out of his car parked nearby, and the frustrating search for another hospital bed began.

    “I was in a helpless state,” he said. “I felt alone. I broke into tears many times.”

    This is what it’s like to be in the hardest-hit state in the country now hit hardest by the coronavirus pandemic. Although Ellitam’s father secured a bed in Surabhi Hospital a day later, scenes like this — and far worse — are playing out hundreds of thousands of times every day across India. As its second wave of COVID-19 sweeps through, India recorded more than 400,000 daily new cases on May 6 — the largest single-day spike in the world — and, a day later, its highest daily death toll of 4,187. Those numbers are predicted to soar even higher in the coming days.

    Dire SOS pleas from doctors, patients and their loved ones in need of hospital beds, oxygen and medication have flooded social media platforms. In Pune, one of the worst-hit cities in India, the wailing sirens of ambulances have become a macabre feature of the city’s soundscape. In many parts of the country, family members are shedding tears of despair outside of hospitals as they beg for medical attention for their dying kin.

    “We don’t have enough ward beds, we don’t have enough ICU beds, and we’re running out of ventilators,” said Sumit Ray, a critical care specialist at Holy Family Hospital in India’s capital city of New Delhi. “People are coming into the ER requiring huge amounts of oxygen support, and we were on the edge of running out.”

    Like many others in India, Ray is somewhat baffled by the seemingly sudden COVID-19 surge. In an unprecedented move, hundreds of scientists sent a plea on April 30 to Prime Minister Narendra Modi asking to ramp up data collection and allow access to already collected COVID-19 data. These scientists say more data are needed to understand how the coronavirus is spreading, manage the outbreak and predict what’s to come.

    “It is essential now, more than ever before, that dynamic public health plans be implemented on the basis of scientific data to arrest the spread of infections and save the lives of our citizens,” they wrote. As of May 6, more than 800 scientists had signed on to that appeal.

    How did we get here?

    During the first wave of the pandemic in 2020, India reported over 90,000 daily new COVID-19 cases at its peak, with the highest single-day record at 97,894 on September 16. Daily case numbers then gradually declined to nearly 10,000 in early February. 

    The falling numbers ignited conversations about whether many Indians, especially those living in densely populated urban centers, might have already been exposed to the virus, thus conferring some immune defenses to prevent reinfection.

    In Mumbai — home to more than 20 million people, more than 40 percent of whom live in overcrowded slums where disease can spread like wildfire — blood antibody tests of nearly 7,000 individuals from three municipal wards suggested 57 percent of the sample’s nearly 4,000 slum dwellers had a past infection with COVID-19, researchers reported in the Lancet Global Health in November 2020. In Delhi, similar tests showed that by January 2021, more than half of 28,000 people sampled in 272 municipal wards had developed antibodies against COVID-19 compared with 23 percent of 21,387 people sampled in early 2020.

    A national serological survey of over 28,000 participants suggested that 1 in 4 Indians may have been exposed to COVID-19 by December 2020, researchers reported online March 4 on the preprint server SSRN.

    “We thought we may not see a big second wave,” said Shahid Jameel, a virologist at Ashoka University in Sonipat, India. “Obviously we were wrong because we didn’t account for both the introductions and indigenous development of new variants.”

    In December, India recorded its first six cases of the highly infectious B.1.1.7 variant, which was first identified in the United Kingdom. Between February and March, genetic testing showed that the variant became dominant in India’s northern state of Punjab, appearing in 326 of 401 sequenced viral samples. In New Delhi, B.1.1.7 was present in half the samples sequenced toward the end of March compared with 28 percent two weeks earlier. 

    India’s own B.1.617 variant first identified in October in Maharashtra is now present in up to 60 percent of samples from some parts of this hard-hit state, according to Jameel. This variant is also spreading in Delhi, he said, in addition to other parts of India and the world. 

    While B.1.1.7 is thought to be highly transmissible and potentially more lethal than other known variants (SN: 4/19/21), it’s still unclear how contagious B.1.617 is and if it induces severe disease. This makes it challenging to assess its role in India’s increasingly grim situation. One glimmer of hope is that Covaxin, a COVID-19 vaccine administered in India, appears to be effective against the variant, according to a recent paper posted online April 23 at the preprint server bioRxiv.org. 

    But just how much variants are driving the current surge remains poorly understood because scientists have sequenced viral genetic material from a mere 1 percent of all COVID-19 cases recorded from January to March 2021. “We cannot tell if variants are responsible because we’re not sequencing enough,” said Satyajit Rath, an immunologist associated with the Indian Institute of Science Education and Research in Pune and a signatory on the scientists’ appeal for access to data. “It’s not just inadequate but pathetic.”

    A lax attitude toward mask wearing and social distancing in the aftermath of the stringent and prolonged national lockdown from March to June 2020 may also be a big factor in the surge. A misplaced sense of triumph over COVID-19 encouraged gatherings at weddings, political rallies and religious ceremonies. “All those became superspreader events,” Jameel said.

    As people mingled and traveled, the virus likely spread and overwhelmed India’s unprepared health care system.

    Masking and social distancing took a back seat during celebrations of Holi, the festival of colors, in Hyderabad and across India on March 29, 2021, even as COVID-19 cases surged. Mahesh Kumar A./AP Images

    Struggles getting treatments 

    Many hospitals in the worst-hit parts of India house only severely ill COVID-19 patients. Some states have set up triage centers or “COVID-19 war rooms” to help prioritize patient care and hospitalization amid a grave shortage of resources.

    At Mumbai’s P.D. Hinduja Hospital, pulmonologist Lancelot Pinto treats COVID-19 patients but also remotely manages moderately infected individuals, often entire families, who are quarantining at home. He’s seeing fevers that may last longer than a week (compared with just two or three days in the first wave), after which patients either recover or sometimes end up in the hospital due to complicating risk factors such as hypertension and diabetes.

    In some cases, doctors are starting stay-at-home patients on steroids like dexamethasone and prednisone right away, in an effort to stave off more serious infections. But that can backfire. Although those drugs have been shown to reduce the risk of death of critically ill patients, they can actually dampen the immune response if given too early in an infection (SN: 9/2/20). That can make it harder for a patient to fight off the virus.

    Some patients are also receiving a combination of as many as five to 10 other drugs, which can interact with each other and pose side effects. “We’ve been flabbergasted by the prescriptions we’ve seen throughout the last eight weeks,” Pinto said. “I’ve seen patients who’ve received such a cocktail of drugs deteriorate in their first week of getting admitted.”

    Anxious and desperate patients are sometimes requesting — and doctors are sometimes prescribing — unproven treatments. Convalescent plasma therapy is one of them. Early in the pandemic, scientists thought blood plasma from recovered COVID-19 patients could help those newly infected get a jump-start on building up antibodies (SN: 8/25/20). But there’s little evidence the therapy can arrest progression to severe disease. And in India, some doctors are prescribing it as a last-resort measure, often under pressure from patient families who want to ensure they’ve tried everything they could. But several studies have failed to show that convalescent plasma reduces COVID-19 deaths at this late stage of infection. 

    Some doctors are also prescribing the antimalarial drug hydroxychloroquine. Despite scant evidence for the drug’s effectiveness (SN: 8/2/20), the Indian Council of Medical Research’s latest guidelines for managing COVID-19 still list hydroxychloroquine as a “may use” drug.

    Even when a therapy shows some promise, it’s often not easy to get. In April, chaos erupted when the antiviral drug remdesivir, which can potentially shorten the COVID-19 recovery time by a few days but isn’t life-saving, became nearly unavailable (SN: 10/16/20). Some patients and their families resorted to purchasing the drug at two to five times the market price as a black market emerged amid the shortage. The hospital at which Ellitam’s parents were admitted, too, ran out of remdesivir. With help from friends in two different cities, each more than 100 kilometers away, he managed to procure four doses at market price.

    In early April, acute shortages of remdesivir in Pune hospitals resulted in long queues outside the Indian city’s pharmacies. Health officials blamed indiscriminate use of the antiviral drug for shortages in Pune and elsewhere. AP Images

    Looking forward

    An array of mathematical models predict that India’s surge will peak sometime between early and mid-May. Daily case numbers could rise to anywhere between 800,000 and 1 million, and single-day deaths may hit around 5,500 toward the end of the month, said Bhramar Mukherjee, a biostatistician at the University of Michigan in Ann Arbor who has been modeling India’s COVID-19 outbreak since March 2020. “That’s really troubling,” she said. 

    But these may be overestimates; Mukherjee’s model doesn’t account for the current lockdowns and restrictions that are in place in some states, cities and villages.

    To quell case numbers, some public health experts in India say it’s time for a nationwide lockdown, but one that’s more coordinated and humane than the last lockdown. But the unfolding COVID-19 crisis is not just India’s problem; it’s the world’s problem. Rising numbers of infections can provide the virus with greater opportunities to mutate and evolve and thus form new variants (SN: 2/5/21). In a globally connected world, short of draconian lockdowns, it’s hard to contain the spread of infections and new strains. India’s outbreak has already spilled over into neighboring Nepal; other countries, including the United States, are now limiting travelers from India, but it may be too late. B.1.617 has already shown up in the United States and at least 20 other countries. 

    The crisis could also result in widespread vaccine shortages. India, the world’s largest producer of vaccines, has stopped exports to prioritize domestic needs. Even so, less than 2 percent of Indians are fully vaccinated and less than 9 percent have received their first shot, thanks to a major COVID-19 vaccine shortage. Ramping up vaccination efforts will be key to combating COVID-19, but it’s unlikely to pull India out of the current crisis.

    Back in Shevgaon, Ellitam’s parents have recovered and returned home. But he is now battling the virus himself, lying in the same hospital where his parents spent nearly 10 days. Although he has moderate symptoms, a cough and fatigue, he’s spending several hours every day making phone calls to help others find ventilator- and oxygen-supported hospital beds for their loved ones.

     “The situation here is very bad,” he said. “I pray that no one ever goes through times like these.”

    in Science News on May 09, 2021 11:00 AM.


    Weekend reads: Allegations about exploitative research; COVID-19 retractions; how to get cited more often

    Before we present this week’s Weekend Reads, a question: Do you enjoy our weekly roundup? If so, we could really use your help. Would you consider a tax-deductible donation to support Weekend Reads, and our daily work? Thanks in advance. The week at Retraction Watch featured: Ecologist who lost thesis awards earns expressions of concern after …

    in Retraction watch on May 08, 2021 12:01 PM.


    Mild zaps to the brain can boost a pain-relieving placebo effect

    Placebos can make us feel better. Mild electric zaps to the brain can make that effect even stronger, scientists report online May 3 in Proceedings of the National Academy of Sciences. The finding raises the possibility of enhancing the power of expectations to improve treatments. 

    This is the first study to boost placebo and blunt pain-inducing nocebo effects by altering brain activity, says Jian Kong, a pain researcher at Massachusetts General Hospital in Charlestown.

    The placebo effect arises when someone feels better after taking an inactive substance, like a sugar pill, because they expect the substance to help. The nocebo effect is the placebo’s evil twin: A person feels worse after taking an inactive substance that they expect to have unpleasant effects.

    To play with people’s expectations, Kong’s team primed 81 participants for painful heat. The heat was delivered by a thermal stimulator to the forearm while participants lay in a functional MRI scanner. Each person received three creams, each to a different spot on their arms. One cream, participants were told, was a numbing lidocaine cream, one was a regular cream and one was a pain-increasing capsaicin cream. But in fact, all the creams were the same inert lotion, dyed different colors.

    Participants reported lower pain intensity from the heat on the “lidocaine” patch of skin, an expected placebo effect. People also reported higher pain intensity on the “capsaicin” skin, an expected nocebo effect.

    Before testing the placebo and nocebo effects, researchers had delivered electric currents to some participants’ brains with a method called transcranial direct current stimulation, or tDCS. During these tDCS sessions, two electrodes attached to the scalp delivered weak electric current to the brain to change the behavior of brain cells. 

    Some participants received tDCS targeted at a brain area thought to be important in placebo and nocebo effects, the right dorsolateral prefrontal cortex. Researchers used two types of current: positive anodal tDCS, which typically makes nerve cells more likely to fire off signals, and negative cathodal tDCS, which usually makes cells quieter.

    Compared with people who didn’t receive tDCS, people who received cathodal tDCS reported stronger placebo effects when heat was applied to the skin with “lidocaine” cream. For people who received anodal tDCS, the stimulation dampened the nocebo effect of the “capsaicin” cream. 

    Brain stimulation affected neural pathways that were already thought to be involved in the placebo and nocebo effects. Cathodal tDCS, for instance, boosted connections between the targeted brain area and a nearby area involved in emotion and cognition. This strengthened pattern correlated with participants reporting a stronger placebo effect, Kong and his colleagues found.

    “This is a very elegant study and I’m very excited and enthusiastic about it,” says Luana Colloca, a neuroscientist at the University of Maryland Baltimore. Colloca, who wasn’t involved in the study, sees the potential to help chronic pain patients by ramping up the placebo effect (SN: 9/13/18). “We’re not there yet,” she cautions. “We need to see if these same results can be replicated in patients with chronic pain.” 

    Kong agrees. His study was small, and people experience pain and placebos differently. “But I have to say, this is also encouraging,” he says.

    in Science News on May 07, 2021 03:00 PM.


    T. rex’s incredible biting force came from its stiff lower jaw

    The fearsome Tyrannosaurus rex could generate tremendous bone-crushing bite forces thanks to a stiff lower jaw. That stiffness stemmed from a boomerang-shaped bit of bone that braced what would have been an otherwise flexible jawbone, a new analysis suggests.

    Unlike mammals, reptiles and their close kin have a joint dubbed the intramandibular joint, or IMJ, within their lower jawbone, or mandible. New computer simulations show that with a bone spanning the IMJ, T. rex could have generated bite forces of more than 6 metric tons, or about the weight of a large male African elephant, researchers reported April 27 at the virtual annual meeting of the American Association for Anatomy.

    In today’s lizards, snakes and birds, the IMJ is bound by ligaments, making it relatively flexible, says study author John Fortner, a vertebrate paleontologist at the University of Missouri in Columbia. That flexibility helps the animals maintain a better grip on struggling prey and also allows the mandible to flex wider to accommodate larger morsels, he notes. But in turtles and crocodiles, for example, evolution has driven the IMJ to be rather tight and inflexible, enabling strong bite forces.

    Until now, most researchers have presumed that dinosaurs had lower jaws with a flexible IMJ, but there’s a big flaw with that premise, Fortner notes. A flexible jaw wouldn’t have enabled bone-crushing bite forces, but fossil evidence — including coprolites, or fossil poop, filled with partially digested bone shards — strongly suggests that T. rex could indeed chomp down with such forces (SN: 10/22/18).

    “There’s every reason to believe that T. rex could bite really hard, kinda off the charts,” says Lawrence Witmer, a vertebrate paleontologist at Ohio University in Athens who wasn’t involved in the study. “It’d be nice to know how they could carry off these bite forces.”

    Using a 3-D scan of a fossil T. rex skull, Fortner and his colleagues created a computer simulation of the mandible that could be used to analyze stresses and strains, akin to the way engineers analyze bridges and aircraft parts. Then they created two versions of the virtual jawbone. In both of them, they cut in half a boomerang-shaped bone, called the prearticular, that is adjacent to but spans the IMJ. Then, in one simulation, they joined the two sides of the IMJ with virtual ligaments that rendered the jawbone flexible. In a second version of the simulation, the team virtually rejoined the two pieces of the prearticular with bone rather than ligaments.

    The team’s simulations showed that when the severed prearticular was virtually rejoined with ligaments, stresses couldn’t be effectively transferred from one side of the IMJ to another, says Fortner. In that scenario, the mandible became too flexible to generate large bite forces. But when the pieces of the prearticular were rejoined with bone — similar to having the bone remain intact — stresses could be smoothly and efficiently transferred from one side of the joint to another.
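    The intuition behind these results, that the most compliant element in a chain governs the stiffness of the whole jaw, can be illustrated with a toy model of springs in series. This is only a back-of-the-envelope sketch, not the team’s finite-element simulation, and every stiffness value below is an arbitrary assumption:

```python
# Toy spring model of jaw stiffness (an illustration of the principle, NOT
# the study's finite-element analysis; all stiffness values are arbitrary).
# Elements loaded in series combine like springs: 1/k_total = sum(1/k_i),
# so the most compliant element dominates the whole chain.
def series_stiffness(*ks):
    return 1.0 / sum(1.0 / k for k in ks)

k_bone = 1000.0      # stiffness of each bony jaw segment (arbitrary units)
k_ligament = 10.0    # a compliant, ligament-bound IMJ
k_braced = 1000.0    # an IMJ braced by bone (the prearticular)

flexible_jaw = series_stiffness(k_bone, k_ligament, k_bone)
braced_jaw = series_stiffness(k_bone, k_braced, k_bone)
print(round(flexible_jaw, 1))  # 9.8: the soft joint dominates, the jaw flexes
print(round(braced_jaw, 1))    # 333.3: bracing the joint stiffens the whole jaw
```

Stiffening the one compliant link raises the stiffness of the entire chain by more than thirtyfold in this sketch, which mirrors why a bone-braced IMJ lets stresses transfer across the joint instead of being lost to flexing.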

    [Image: stress simulation of T. rex jaw] Two simulated T. rex jawbones reveal how a small bone (not visible) that spans a joint (white arrow) provides for a strong bite. In a version where that bone is not intact (top), the jawbone flexes, which prevents stress induced by a bite at one tooth (black arrow) from transferring effectively across the joint. But in a jawbone in which that bone is intact (bottom), the more rigid joint transfers stresses effectively, enabling greater bite forces. Credit: John Fortner

    The team’s findings “are potentially interesting,” says Witmer. “The prearticular is not a particularly big bone, but it could be involved in the bite,” he notes.

    The T. rex mandible is a complicated arrangement of various bones, but “the prearticular seems to lock the system together,” says Thomas Holtz, Jr., a vertebrate paleontologist at the University of Maryland in College Park who wasn’t involved in the study. These simulations show “it provides a demonstrable benefit.”

    In the future, Fortner and his colleagues will conduct similar analyses for the mandibles of other dinosaurs in the T. rex lineage to see how the arrangements of constituent bones, and particularly the IMJ, might have evolved over time.

    The results of such studies could be quite interesting, says Holtz. Dinosaurs near the base of the T. rex family tree had jawbones that were shaped differently, and they didn’t have bones to brace the IMJ, he notes. These theropods, or bipedal meat-eating dinosaurs, also had bladelike teeth rather than the banana-shaped teeth of T. rex, so they probably had a vastly different feeding style. In those ancestors, Holtz notes, a flexible IMJ may have served as a “shock absorber” when chomping down or during attacks on prey.

    in Science News on May 07, 2021 12:00 PM.


    Mangrove forests on the Yucatan Peninsula store record amounts of carbon

    Coastal mangrove forests are carbon storage powerhouses, tucking away vast amounts of organic matter among their submerged, tangled root webs.

    But even for mangroves, there is a “remarkable” amount of carbon stored in small pockets of forest growing around sinkholes on Mexico’s Yucatan Peninsula, researchers report May 5 in Biology Letters. These forests can store more than five times as much carbon per hectare as most other terrestrial forests.

    There are dozens of mangrove-lined sinkholes, or cenotes, on the peninsula. Such carbon storage hot spots could help nations or companies achieve carbon neutrality — in which the volume of greenhouse gas emissions released into the atmosphere is balanced by the amount of carbon sequestered away (SN: 1/31/20).

    At three cenotes, researchers led by Fernanda Adame, a wetland scientist at Griffith University in Brisbane, Australia, collected samples of soil at depths down to 6 meters, and used carbon-14 dating to estimate how fast the soil had accumulated at each site. The three cenotes each had “massive” amounts of soil organic carbon, the researchers report, averaging about 1,500 metric tons per hectare. One site, Casa Cenote, stored as much as 2,792 metric tons per hectare.
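    The dating step works because carbon-14 decays at a known rate: the fraction remaining in a soil layer gives that layer’s age, and depth divided by age gives an accumulation rate. A minimal sketch, using a wholly hypothetical fraction (the paper’s actual measurements are not reproduced here):

```python
import math

# Sketch of the carbon-14 dating step. The 50% fraction below is invented
# purely for illustration; the study reports per-site measurements.
T_HALF = 5730.0  # half-life of carbon-14, in years

def c14_age(fraction_remaining):
    """Years elapsed, given the fraction of original carbon-14 remaining."""
    return T_HALF * math.log2(1.0 / fraction_remaining)

# Suppose soil at 6 m depth retained half of its original carbon-14:
age_years = c14_age(0.5)                  # one half-life = 5730 years
accretion_mm_per_year = 6000 / age_years  # 6 m of soil over that span
print(age_years, round(accretion_mm_per_year, 2))  # 5730.0 1.05
```

Dividing the measured carbon stock by the dated timespan is what lets the researchers compare how quickly different cenotes sequester carbon.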

    Mangrove roots make ideal traps for organic material. The submerged soils also help preserve carbon. As sea levels have slowly risen over the last 8,000 years, mangroves have kept pace, climbing atop sediment carried in from rivers or migrating inland. In the cave-riddled limestone terrain of the Yucatan Peninsula, there are no rivers to supply sediment. Instead, “the mangroves produce more roots to avoid drowning,” which also helps the trees climb upward more quickly, offering more space for organic matter to accumulate, Adame says.

    As global temperatures increase, sea levels may eventually rise too quickly for mangroves to keep up (SN: 6/4/20). Other, more immediate threats to the peninsula’s carbon-rich cenotes include groundwater pollution, expanding infrastructure, urbanization and tourism.

    in Science News on May 07, 2021 10:00 AM.


    Years after faked peer review concerns surfaced, journals are still falling for it

    A group of authors has lost a pair of papers in a computing journal for monkeying with the peer review process. The first author on both articles was Mohamed Abdel-Basset of the Department of Operations Research in the Faculty of Computers and Informatics at Zagazig University, in Sharqiya. Mai Mohamad, also of Zagazig, is the …

    in Retraction watch on May 07, 2021 10:00 AM.


    Flags And Phrenology: The Week’s Best Psychology Links

    Our weekly round-up of the best psychology coverage from elsewhere on the web

    “Grumpy” dogs may be better learners than their more agreeable counterparts, reports James Gorman at The New York Times. Researchers found that grumpier canines were better at learning how to reach an object placed behind a fence by observing a stranger. But other scientists suggest that something more specific than “grumpiness” is responsible for the animals’ superior performance, such as increased aggression, reduced inhibition, or hyperactivity.

    Adults are more compassionate when children are around. That’s according to a series of studies whose results include the finding that people are more likely to donate to charity when more kids are nearby, and even that adults are more prosocial after merely thinking about children. Lukas Wolf and colleagues explain the work at The Conversation.

    Psychologists are working on various strategies to help people deal with the “infodemic” of fake news and social media manipulation. At Undark, Teresa Carr explores some of these attempts, ranging from online games to lessons on how to behave like fact-checkers.

    The country’s recent obsession with the Union flag could end up damaging social cohesion, warns Amit Katwala at Wired. While flags can act as a symbol of unity, research has found that they can also make outsiders feel less welcome. Other work has shown that, in some cases, the presence of a flag can increase feelings of nationalism and prejudice towards immigrants. 

    In the not-too-distant future, our lives may return to something resembling the pre-2020 days. But after more than a year of living under lockdowns and social distancing rules, many of us are likely to find it hard to resume “normal” life. At Scientific American, Melba Newsome examines what some people are calling “cave syndrome”.

    Two hundred years ago, Franz Joseph Gall popularised phrenology, the idea that patterns of bumps on our skulls predict our character. In a BBC video hosted at Aeon, Claudia Hammond explores the history of an idea that was clearly pseudoscience, but which nevertheless contributed in some ways to modern psychology and neuroscience.

    Patients can experience positive effects from taking a placebo, even when they know they’re being given an inert pill. That’s the case for conditions ranging from depression and anxiety to arthritis and irritable-bowel syndrome. So how exactly do these “open-label” placebos work? Brian Resnick takes a look at Vox.

    Compiled by Matthew Warren (@MattBWarren), Editor of BPS Research Digest

    in The British Psychological Society - Research Digest on May 07, 2021 08:17 AM.


    Schneider Shorts 7.05.2021

    Schneider Shorts for 7 May 2021: retractions and resignations, geniuses saving the world from COVID-19, anti-aging scams and Didier Raoult's legal attack on Elisabeth Bik.

    in For Better Science on May 07, 2021 05:34 AM.


    Nobel Summit academics: the greatest challenges facing humanity may also be the greatest opportunities

    “Let’s use this new age of humility to grasp an opportunity to re-describe who we are, what we want and how we will achieve it.” — Richard Horton, EiC, The Lancet

    in Elsevier Connect on May 07, 2021 12:00 AM.


    These climate-friendly microbes recycle carbon without producing methane

    Earth’s hot springs and hydrothermal vents are home to a previously unidentified group of archaea. And, unlike similar tiny, single-celled organisms that live deep in sediments and munch on decaying plant matter, these archaea don’t produce the climate-warming gas methane, researchers report April 23 in Nature Communications.

    “Microorganisms are the most diverse and abundant form of life on Earth, and we just know 1 percent of them,” says Valerie De Anda, an environmental microbiologist at the University of Texas at Austin. “Our information is biased toward the organisms that affect humans. But there are a lot of organisms that drive the main chemical cycles on Earth that we just don’t know.”

    Archaea are a particularly mysterious group (SN: 2/14/20). It wasn’t until the late 1970s that they were recognized as a third domain of life, distinct from bacteria and eukaryotes (which include everything else, from fungi to animals to plants).

    For many years, archaea were thought to exist only in the most extreme environments on Earth, such as hot springs. But archaea are actually everywhere, and these microbes can play a big role in how carbon and nitrogen cycle between Earth’s land, oceans and atmosphere. One group of archaea, Thaumarchaeota, are the most abundant microbes in the ocean, De Anda says (SN: 11/28/17). And methane-producing archaea in cows’ stomachs cause the animals to burp large amounts of the gas into the atmosphere (SN: 11/18/15).

    Now, De Anda and her colleagues have identified an entirely new phylum — a large branch of related organisms on the tree of life — of archaea. The first evidence of these new organisms was found in sediments from seven hot springs in China, as well as from deep-sea hydrothermal vents in the Guaymas Basin in the Gulf of California. Within these sediments, the team found bits of DNA that it meticulously assembled into the genetic blueprints, or genomes, of 15 different archaea.

    The researchers then compared the new genomes with thousands of previously identified microbial genomes in publicly available databases. But “these sequences were completely different from anything that we know,” De Anda says.

    She and her colleagues gave the new group the name Brockarchaeota, for Thomas Brock, a microbiologist who was the first to grow archaea in the laboratory and who died in April. Brock’s discovery paved the way for polymerase chain reaction, or PCR, a Nobel Prize–winning technique used to copy small bits of DNA, and currently used in tests for COVID-19 (SN: 3/6/20).

    Brockarchaeota, it turns out, actually live all over the world — but until now, they were overlooked, undescribed and unnamed. Once De Anda and her team had pieced together the new genomes and then hunted for them in public databases, they discovered that bits of these previously unknown organisms had been found in hot springs, geothermal and hydrothermal vent sediments from South Africa to Indonesia to Rwanda.

    Within the new genomes, the team also hunted for genes related to the microbes’ metabolism — what nutrients they consume and what kind of waste they produce. Initially, the team expected that — like other archaea previously found in such environments — these archaea would be methane producers. They do munch on the same materials that methane-producing archaea do: one-carbon compounds like methanol or methylsulfide. “But we couldn’t identify the genes that produce methane,” De Anda says. “They are not present in Brockarchaeota.”

    That means that these archaea must have a previously undescribed metabolism, through which they can recycle carbon — for example in sediments on the seafloor — without producing methane. And, given how widespread they are, De Anda says, these organisms could be playing a previously hidden but significant role in Earth’s carbon cycle.

    “It’s twofold interesting — it’s a new phylum and a new metabolism,” says Luke McKay, a microbial ecologist who studies extreme environments at Montana State University in Bozeman. The fact that this entire group could have remained under the radar for so long, he adds, “is an indication of where we are in the state of microbiology.”

    But, McKay adds, the discovery is also a testimonial to the power of metagenomics, the technique by which researchers can painstakingly tease individual genomes out of a large hodgepodge of microbes in a given sample of water or sediments. Thanks to this technique, researchers are identifying more and more parts of the previously mysterious microbial world.

    “There’s so much out there,” De Anda says. And “every time you sequence more DNA, you start to realize that there’s more out there that you weren’t able to see the first time.”

    in Science News on May 06, 2021 03:00 PM.


    Humans Aren’t The Only Animals To Experience Jealousy — Dogs Do, Too

    By Emily Reynolds

    Jealousy is a fairly common human emotion — and for a long time, it was presumed to be uniquely human. Some have argued that jealousy, with its focus on social threat, requires a concept of “self” and a theory of mind — being jealous of someone flirting with your partner, for example, involves a perceived threat (real or imagined) to your relationship. This element of jealousy has been used to argue that animals, lacking such a sense of self, are unable to experience it.

    However, a new study, published in Psychological Science, suggests this might not be the case. Amalia P. M. Bastos and her team at the University of Auckland found evidence that dogs may, in fact, be able to mentally represent the threatening social interactions that give rise to jealousy.

    Some previous research has already suggested that dogs get jealous — one 2018 study, for example, found that dogs would move in between or push owners away from interactions with fake dogs. But that research didn’t provide conclusive evidence that the dogs were actually experiencing jealousy.

    In the new study, the team recruited 18 dogs and their owners: all dogs had been in the household for at least six months, were non-aggressive and showed no signs of discomfort within the experimental setting, removing the possibility that they would move towards a fake dog out of aggression or fear.

    The owner sat behind a large barrier, wearing a blindfold and noise-cancelling headphones. The dogs were placed around five metres away, tied to a door frame by a leash attached to a force gauge, which measured how hard they pulled.

    In the fake dog condition, a realistic-looking fake dog was placed next to the owner behind the barrier. The barrier was then moved across the room for five seconds in order to reveal the fake dog to the real dog. When the barrier was moved back, blocking the view of the fake dog, the owner was instructed to pet and talk to it as if it were real.

    In the cylinder condition, on the other hand, owners were shown petting and talking to a fleece cylinder behind the barrier. The realistic-looking fake dog was still in the room, however, placed behind a separate, smaller barrier and revealed to the real dog.

    Dogs in the fake dog condition pulled significantly harder on their leashes than those in the fleece cylinder condition, suggesting that the dogs were attempting to break up the interaction between their owner and a perceived rival. The fact that the fake dog was present during the cylinder trials is important, showing that the mere presence of a rival wasn’t enough to provoke jealous behaviours: it was the actual interaction with the dog’s owner that led to increased jealous behaviour. When the dogs were later allowed to reach the fake dog, the team also found that the dogs engaged in genital and face sniffing — suggesting that they believed the fake dogs were real throughout the study.

    Overall, the dogs seemed to show “signatures” of jealous behaviour similar to those in humans: they reacted to a social partner engaging with a social rival (the fake dog) but not another object (the fleece cylinder), and didn’t react when the rival was in the room but not engaging with their owner. And most strikingly, they reacted strongly even though they couldn’t directly see the interaction between the owner and rival. Together, this suggests that dogs do indeed experience some form of jealousy, and can even mentally represent the social interactions that give rise to jealous feelings.

    This insight is important for several reasons: it not only confirms previous research on jealousy in dogs but also suggests that dogs may have more complex cognitive abilities than we might assume. This may indicate the ability to conjure other mental representations in different situations — and a much richer inner life than we currently understand.

    Dogs Mentally Represent Jealousy-Inducing Social Interactions

    Emily Reynolds is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on May 06, 2021 01:32 PM.


    Saturn has a fuzzy core, spread over more than half the planet’s diameter

    One of Saturn’s rings has revealed properties of its core, hidden deep beneath the planet’s golden atmosphere.

    That core isn’t the lump of rock and ice that many scientists had envisioned, the new study finds. Instead, the core is diffuse, pervaded by huge amounts of hydrogen and helium and so spread out that it spans 70,000 kilometers, or about 60 percent of the planet’s diameter, researchers report April 28 at arXiv.org.

    The new intel should help planetary scientists better understand not only how giant planets formed in our solar system but also the nature of such worlds orbiting other stars.  

    To ascertain the structure of Saturn’s core, astronomer Christopher Mankovich and astrophysicist Jim Fuller, both at Caltech, examined the giant planet’s rings. Just as earthquakes help seismologists probe Earth’s interior, oscillations inside Saturn can reveal its internal composition. These oscillations alter Saturn’s gravitational forces, inducing waves in the rings — especially the C ring, which is the nearest of the three main rings to the planet (SN: 1/22/19).

    By analyzing a wave in that ring, along with data on Saturn’s gravity field from the now-defunct Cassini spacecraft (SN: 9/15/17), Mankovich and Fuller found that the core has about 17 Earth masses of rock and ice. But there’s so much hydrogen and helium mixed in, the core encompasses 55 Earth masses altogether — more than half of Saturn’s total, which is equivalent to the mass of 95 Earths. This “ring seismology” work will appear in an upcoming issue of Nature Astronomy.
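    The quoted fractions are easy to check. The masses below are Earth masses from the article; Saturn’s equatorial diameter of roughly 120,500 kilometers is an outside figure, not from the article:

```python
# Checking the fractions quoted above. Masses are in Earth masses (from the
# article); Saturn's equatorial diameter (~120,536 km) is an assumed outside
# figure, not from the article.
saturn_total = 95   # Saturn's total mass
core_total = 55     # the diffuse core, hydrogen and helium included
rock_and_ice = 17   # rock and ice within that core

print(round(core_total / saturn_total, 2))  # 0.58: "more than half" the mass
print(round(70_000 / 120_536, 2))           # 0.58: "about 60 percent" of the diameter
```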

    “It’s a new way to look at gas giant planets in the solar system,” says Ravit Helled, a planetary scientist at the University of Zurich who was not involved with the work. “This knowledge is important because it reflects on our understanding of giant exoplanets,” and indicates that giant planets in other solar systems probably have more complex structures than many researchers had thought.

    The discovery also illuminates how Saturn formed, says Nadine Nettelmann, a planetary scientist at the German Aerospace Center in Berlin.

    Older theories posited that a gas giant such as Saturn arises when rock and ice orbiting the sun start to conglomerate. Tenuous gaseous envelopes let additional solid materials sink to the center, forming a compact core. Only later, according to this theory, does the core attract lots of hydrogen and helium — the ingredients that make up most of the planet. Although these elements are gases on Earth, Saturn’s great gravity squeezes most of them into a fluid.

    But newer theories say instead that plenty of gas got incorporated into the core of rock and ice when it was taking shape 4.6 billion years ago. As the planet accreted additional mass, the proportion of gas rose. The structure Mankovich and Fuller deduce for Saturn’s core preserves this formation history, Nettelmann says, because the planet’s very center, representing the oldest part of Saturn, has the greatest proportion of rock and ice. The fraction of rock and ice decreases gradually rather than abruptly from the core’s center to its edge, reflecting the core’s development over time.

    “I find the conclusions very important and very exciting and the line of reasoning very convincing,” Nettelmann says. Still, she cautions that additional waves in the rings should be analyzed for confirmation.

    The type of oscillation that Mankovich and Fuller detect inside Saturn also implies that the core is stable rather than bubbling like a pot of water on a hot stove, which is one way a planet can carry heat from its hot interior outward. The core’s stability may help explain a long-standing puzzle: why Saturn emits more energy than it gets from the sun.

    After the planet formed, it was warm with the heat of its birth, but then it cooled off. The core’s stability could have put a lid on some of this cooling, however, which helped the planet retain heat that it still radiates to this day. In contrast, if the core had instead transported heat via the upwelling and downwelling of material, the planet would have cooled off faster and would no longer give off so much heat.

    in Science News on May 06, 2021 12:00 PM.


    Rejection overruled, retraction ensues when annoyed reviewer does deep dive into data

    As a prominent criminologist, Kim Rossmo often gets asked to review manuscripts. So it was that he found himself reviewing a meta-analysis by a pair of Dutch researchers — Wim Bernasco and Remco van Dijke, of the Netherlands Institute for the Study of Crime and Law Enforcement, in Amsterdam — looking at a phenomenon called …

    in Retraction watch on May 06, 2021 10:30 AM.


    Fast Odour Dynamics Are Encoded in the Olfactory System and Guide Behaviour

    In this week’s Journal Club session, Maria Psarrou will talk about the paper "Fast Odour Dynamics Are Encoded in the Olfactory System and Guide Behaviour".

    Odours are transported in turbulent plumes, which result in rapid concentration fluctuations that contain rich information about the olfactory scenery, such as the composition and location of an odour source. However, it is unclear whether the mammalian olfactory system can use the underlying temporal structure to extract information about the environment. Here we show that ten-millisecond odour pulse patterns produce distinct responses in olfactory receptor neurons. In operant conditioning experiments, mice discriminated temporal correlations of rapidly fluctuating odours at frequencies of up to 40 Hz. In imaging and electrophysiological recordings, such correlation information could be readily extracted from the activity of mitral and tufted cells, the output neurons of the olfactory bulb. Furthermore, temporal correlation of odour concentrations reliably predicted whether odorants emerged from the same or different sources in naturalistic environments with complex airflow. Experiments in which mice were trained on such tasks and probed using synthetic correlated stimuli at different frequencies suggest that mice can use the temporal structure of odours to extract information about space. Thus, the mammalian olfactory system has access to unexpectedly fast temporal features in odour stimuli. This endows animals with the capacity to overcome key behavioural challenges such as odour source separation, figure-ground segregation and odour localization by extracting information about space from temporal odour dynamics.
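    The source-separation idea, that two odorants from the same turbulent plume fluctuate together while odorants from different sources do not, can be sketched with a toy correlation test. This sketch is purely illustrative and is not the paper’s analysis code:

```python
import math
import random

# Toy illustration of odour source separation by temporal correlation:
# odorants riding the same turbulent plume share concentration fluctuations,
# so their time series are highly correlated; independent sources are not.
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

random.seed(1)
plume = [random.gauss(0, 1) for _ in range(2000)]       # shared fluctuations
odor_a = [p + 0.2 * random.gauss(0, 1) for p in plume]  # same source as...
odor_b = [p + 0.2 * random.gauss(0, 1) for p in plume]  # ...this odorant
odor_c = [random.gauss(0, 1) for _ in range(2000)]      # an independent source

print(round(pearson(odor_a, odor_b), 2))  # high, near 1: likely same source
print(round(pearson(odor_a, odor_c), 2))  # near 0: likely different sources
```

A thresholded correlation like this is only a caricature; the study shows that mice and their olfactory bulb neurons can exploit such correlations at fluctuation frequencies up to 40 Hz.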


    Date: 2021/05/06
    Time: 14:00
    Location: online

    in UH Biocomputation group on May 06, 2021 10:21 AM.

  •

    How to detect, resist and counter the flood of fake news

    From lies about election fraud to QAnon conspiracy theories and anti-vaccine falsehoods, misinformation is racing through our democracy. And it is dangerous.

    Awash in bad information, people have swallowed hydroxychloroquine hoping the drug will protect them against COVID-19 — even with no evidence that it helps (SN Online: 8/2/20). Others refuse to wear masks, contrary to the best public health advice available. In January, protestors disrupted a mass vaccination site in Los Angeles, blocking life-saving shots for hundreds of people. “COVID has opened everyone’s eyes to the dangers of health misinformation,” says cognitive scientist Briony Swire-Thompson of Northeastern University in Boston.

    The pandemic has made clear that bad information can kill. And scientists are struggling to stem the tide of misinformation that threatens to drown society. The sheer volume of fake news, flooding across social media with little fact-checking to dam it, is taking an enormous toll on trust in basic institutions. In a December poll of 1,115 U.S. adults, by NPR and the research firm Ipsos, 83 percent said they were concerned about the spread of false information. Yet fewer than half were able to identify as false a QAnon conspiracy theory about pedophilic Satan worshippers trying to control politics and the media.

    Scientists have been learning more about why and how people fall for bad information — and what we can do about it. Certain characteristics of social media posts help misinformation spread, new findings show. Other research suggests bad claims can be counteracted by giving accurate information to consumers at just the right time, or by subtly but effectively nudging people to pay attention to the accuracy of what they’re looking at. Such techniques involve small behavior changes that could add up to a significant bulwark against the onslaught of fake news.

    In January, protests closed down a mass vaccination site at Dodger Stadium in Los Angeles. Irfan Khan/Los Angeles Times via Getty Images

    Wow factor

    Misinformation is tough to fight, in part because it spreads for all sorts of reasons. Sometimes it’s bad actors churning out fake-news content in a quest for internet clicks and advertising revenue, as with “troll farms” in Macedonia that generated hoax political stories during the 2016 U.S. presidential election. Other times, the recipients of misinformation are driving its spread.

    Some people unwittingly share misinformation on social media and elsewhere simply because they find it surprising or interesting. Another factor is the method through which the misinformation is presented — whether through text, audio or video. Of these, video can be seen as the most credible, according to research by S. Shyam Sundar, an expert on the psychology of messaging at Penn State. He and colleagues decided to study this after a series of murders in India that began in 2017, as people circulated via WhatsApp a video purported to show child abductions. (It was, in reality, a distorted clip of a public awareness campaign video from Pakistan.)

    Sundar recently showed 180 participants in India audio, text and video versions of three fake-news stories as WhatsApp messages, with research funding from WhatsApp. The video stories were assessed as the most credible and most likely to be shared by respondents with lower levels of knowledge on the topic of the story. “Seeing is believing,” Sundar says.

    The findings, in press at the Journal of Computer-Mediated Communication, suggest several ways to fight fake news, he says. For instance, social media companies could prioritize responding to user complaints when the misinformation being spread includes video, above those that are text-only. And media-literacy efforts might focus on educating people that videos can be highly deceptive. “People should know they are more gullible to misinformation when they see something in video form,” Sundar says. That’s especially important with the rise of deepfake technologies that feature false but visually convincing videos (SN: 9/15/18, p. 12).

    One of the most insidious problems with fake news is how easily it lodges itself in our brains and how hard it is to dislodge once it’s there. We’re constantly deluged with information, and our minds use cognitive shortcuts to figure out what to retain and what to let go, says Sara Yeo, a science-communication expert at the University of Utah in Salt Lake City. “Sometimes that information is aligned with the values that we hold, which makes us more likely to accept it,” she says. That means people continually accept information that aligns with what they already believe, further insulating them in self-reinforcing bubbles.

    Compounding the problem is that people can process the facts of a message properly while misunderstanding its gist because of the influence of their emotions and values, psychologist Valerie Reyna of Cornell University wrote in 2020 in Proceedings of the National Academy of Sciences.

    Thanks to new insights like these, psychologists and cognitive scientists are developing tools that people can use to battle misinformation before it arrives, or that prompt them to think more deeply about the information they are consuming.

    One such approach is to “prebunk” beforehand rather than debunk after the fact. In 2017, Sander van der Linden, a social psychologist at the University of Cambridge, and colleagues found that presenting information about a petition that denied the reality of climate science following true information about climate change canceled any benefit of receiving the true information. Simply mentioning the misinformation undermined people’s understanding of what was true.

    That got van der Linden thinking: Would giving people other relevant information before giving them the misinformation be helpful? In the climate change example, this meant telling people ahead of time that “Charles Darwin” and “members of the Spice Girls” were among the false signatories to the petition. This advance knowledge helped people resist the bad information they were then exposed to and retain the message of the scientific consensus on climate change.

    Here’s a very 2021 metaphor: Think of misinformation as a virus, and prebunking as a weakened dose of that virus. Prebunking becomes a vaccine that allows people to build up antibodies to bad information. To broaden this beyond climate change, and to give people tools to recognize and battle misinformation more broadly, van der Linden and colleagues came up with a game, Bad News, to test the effectiveness of prebunking (see Page 36). The results were so promising that the team developed a COVID-19 version of the game, called GO VIRAL! Early results suggest that playing it helps people better recognize pandemic-related misinformation.

    Take a breath

    Sometimes it doesn’t take very much of an intervention to make a difference. Sometimes it’s just a matter of getting people to stop and think for a moment about what they’re doing, says Gordon Pennycook, a social psychologist at the University of Regina in Canada.

    In one 2019 study, Pennycook and David Rand, a cognitive scientist now at MIT, tested real news headlines and partisan fake headlines, such as “Pennsylvania federal court grants legal authority to REMOVE TRUMP after Russian meddling,” with nearly 3,500 participants. The researchers also tested participants’ analytical reasoning skills. People who scored higher on the analytical tests were less likely to identify fake news headlines as accurate, no matter their political affiliation. In other words, lazy thinking rather than political bias may drive people’s susceptibility to fake news, Pennycook and Rand reported in Cognition.

    When it comes to COVID-19, however, political polarization does spill over into people’s behavior. In a working paper first posted online April 14, 2020, at PsyArXiv.org, Pennycook and colleagues describe findings that political polarization, especially in the United States with its contrasting media ecosystems, can overwhelm people’s reasoning skills when it comes to taking protective actions, such as wearing masks.

    Inattention plays a major role in the spread of misinformation, Pennycook argues. Fortunately, that suggests some simple ways to intervene, to “nudge” the concept of accuracy into people’s minds, helping them resist misinformation. “It’s basically critical thinking training, but in a very light form,” he says. “We have to stop shutting off our brains so much.”

    With nearly 5,400 people who previously tweeted links to articles from two sites known for posting misinformation — Breitbart and InfoWars — Pennycook, Rand and colleagues used innocuous-sounding Twitter accounts to send direct messages with a seemingly random question about the accuracy of a nonpolitical news headline. Then the scientists tracked how often the people shared links from sites of high-quality information versus those known for low-quality information, as rated by professional fact-checkers, for the next 24 hours.

    On average, people shared higher-quality information after the intervention than before. It’s a simple nudge with simple results, Pennycook acknowledges — but the work, reported online March 17 in Nature, suggests that very basic reminders about accuracy can have a subtle but noticeable effect.

    For debunking, timing can be everything. Tagging headlines as “true” or “false” after presenting them helped people remember whether the information was accurate a week later, compared with tagging before or at the moment the information was presented, Nadia Brashier, a cognitive psychologist at Harvard University, reported with Pennycook, Rand and political scientist Adam Berinsky of MIT in February in Proceedings of the National Academy of Sciences.

    Prebunking still has value, they note. But providing a quick and simple fact-check after someone reads a headline can be helpful, particularly on social media platforms where people often mindlessly scroll through posts.

    Social media companies have taken some steps to fight misinformation spread on their platforms, with mixed results. Twitter’s crowdsourced fact-checking program, Birdwatch, launched as a beta test in January, has already run into trouble with the poor quality of user-flagging. And Facebook has struggled to effectively combat misinformation about COVID-19 vaccines on its platform.

    Misinformation researchers have recently called for social media companies to share more of their data so that scientists can better track the spread of online misinformation. Such research can be done without violating users’ privacy, for instance by aggregating information or asking users to actively consent to research studies.

    Much of the work to date on misinformation’s spread has used public data from Twitter because it is easily searchable, but platforms such as Facebook have many more users and much more data. Some social media companies do collaborate with outside researchers to study the dynamics of fake news, but much more remains to be done to inoculate the public against false information.

    “Ultimately,” van der Linden says, “we’re trying to answer the question: What percentage of the population needs to be vaccinated in order to have herd immunity against misinformation?”

    in Science News on May 06, 2021 10:00 AM.

  •

    Using graphical abstracts to enrich and expand the reach of your research

    Graphical or visual abstracts increase utility and readership in a bite-sized, visual format

    in Elsevier Connect on May 06, 2021 12:00 AM.

  •

    How social media can help societies and journals extend their reach

    A medical journal editor shares his tips and strategies for social media success

    in Elsevier Connect on May 06, 2021 12:00 AM.

  •

    Northeastern University professor with 69 papers on PubPeer has resigned

    A chemistry professor at Northeastern University in Boston, MA, who has almost 70 papers flagged on PubPeer, resigned yesterday, May 4, 2021.

    On his blog For Better Science (Update May 5, at the bottom), Leonid Schneider shared an email from the Chair of the Department of Engineering, which states that Thomas J Webster has resigned from the university.

    Webster has 69 papers flagged on PubPeer, mostly for concerns about image irregularities. I reported 59 to the journals and institution in March 2020.

    Some of these papers, which appeared to have duplications of features within the same photo, were quietly corrected. Perhaps coincidentally, these had been published in the Elsevier journal Nanomedicine: Nanotechnology, Biology, and Medicine, where Webster is an Associate Editor. See e.g. here and here.

    The apparent duplications in the colorful image on the right below this text were explained by the first author on PubPeer as “the voltage of the instrument is not insufficient in that time, so that the carbon membrane (which was bought homemade) on the copper screen may affect the background and the resolution of the picture, which leads to this fuzzy image, bringing you some identification troubles” — sort of blaming me for seeing these duplications. You may remember that I awarded the journal a This Image Is Fine Award in November 2020.

    The image got replaced with something less obnoxious, without the journal even blinking when the authors wrote “there was an inadvertent mistake for Figure 3, B which appeared as a replicated image”. In my professional opinion, both images contain duplicated parts — and both papers should have been retracted. But the journal had a severe conflict of interest in both cases. One of the senior authors is an Associate Editor at the journal, so the papers were not handled according to COPE guidelines.

    Two examples of papers with severe image concerns, published in a journal where one of the senior authors serves on the Editorial Board. In both cases, the authors were allowed to replace these figures with a clean panel, and the paper got a correction. From: https://pubpeer.com/publications/582FCA40E662923EAC611828E80CBE and https://pubpeer.com/publications/8994973E1DC6C384321CBAC47F273C

    Several other papers with image problems were published in Dove’s International Journal of Nanomedicine, where Webster is the founding Editor-in-Chief, and not surprisingly these have not been acted upon. It is always complicated to investigate such cases if one of the authors is the founding father of the journal.

    Despite the muddy corrections at the journals, it has to be said that Northeastern University appears to have handled this case appropriately and swiftly.

    As reported by For Better Science, the Webster lab was suspended in November 2020, and now in May 2021 Webster has left the university.

    Still, only 8 of these 69 papers have been corrected so far (and as we have seen above, that was not always a good decision), while as of today zero papers have been retracted. I hope the university will contact the journals with the findings of their investigation, advising them which of these cases involved research misconduct.

    in Science Integrity Digest on May 05, 2021 10:44 PM.

  •

    Mindfulness Can Make Independent-Minded People Less Likely To Help Others

    By Emily Reynolds

    Mindfulness — in basic terms, the practice of being “present” in the moment and paying attention to one’s own thoughts and feelings — has seen something of a boom over the last few years. In the United States, the mindfulness business is set to reach a value of $2 billion by next year, while in the United Kingdom, lockdown saw a spike in downloads for digital meditation offerings such as Headspace and Calm. 

    But is mindfulness all it’s cracked up to be? While it certainly has its benefits, some argue that it encourages blind acceptance of the status quo, taking us so far into ourselves that we forget the rest of the world. In a new preprint on PsyArXiv, Michael Poulin and colleagues from New York’s University at Buffalo also find that mindfulness can decrease prosocial behaviours — at least for those who see themselves as independent from others.

    The first study was designed to look at the impact of mindfulness on prosocial activity, and in particular whether this depends on a person’s “self-construal”. In short, if someone has an independent self-construal they see the self as separate from others, rather than thinking more collectively and conceptualising themselves as part of a wider group.

    Participants were randomly assigned to one of two conditions, one oriented around mindfulness meditation, and the other focusing on a control meditation in the form of mind wandering. Those in the mindfulness condition listened to a tape designed to induce mindfulness through mindful breathing, while those in the mind wandering condition were instructed to “let your mind wander and think freely without needing to focus hard on anything in particular”.

    After listening to the tapes, participants read about a local poverty and homelessness charity, before being asked whether or not they wanted to stuff envelopes in support of the organisation. Participants who decided to take part were left to do so for as long as they wanted. The team also measured participants’ self-construal by asking them to indicate how much they identified with friends, family, and wider groups compared to how much they thought of themselves as independent.

    Most participants (84%) stuffed at least some envelopes after the task, though the number varied significantly — some stuffed up to 158, while others did just one. People who participated in the mindfulness meditation stuffed 15% more envelopes than those who did the control meditation — if they had an interdependent self-construal. But for those with independent self-construals, mindfulness decreased the number of envelopes stuffed by 15%.

    The second study replicated the first: this time, however, the team attempted to manipulate participants’ self-construal. Participants were asked to go through a paragraph selecting all of the pronouns. In the independent condition, participants selected singular pronouns (“I went to the city”) and in the interdependent condition, they selected plural pronouns (“we went to the city”).

    As the second study took place online, participants were not asked to stuff envelopes, but instead to sign up (or not) for time slots to chat online with alumni donors to request financial support for the same charity. And similar to the results of the first study, those primed with independent self-construal were less likely to volunteer after listening to the mindfulness exercise, while those in the interdependent condition saw an increased likelihood of volunteering after the mindfulness task.

    Other research has found similar results; one 2017 study, for example, found that mindfulness did not have the empathy-boosting benefits that many of its adherents had claimed, at least in those high in narcissistic traits. This latter finding seems key: as in this study, it may not be that mindfulness decreases empathy or prosocial behaviours across the board, but only in combination with particular personality types. After all, interdependent participants saw an increase in prosocial behaviours, not a decrease.

    Developing a more nuanced view of the benefits of mindfulness, then, might be one way of dealing with this issue. Following its meteoric rise, mindfulness has often been positioned as a panacea, not only for anxiety or other mental health conditions but in other areas, too: productivity, creativity, personal relationships, and particular traits or habits. Rather than treating it as a wholesale good, however, it may be better to understand when mindfulness might be truly beneficial — and, importantly, for whom.

    Minding your own business? Mindfulness decreases prosocial behavior for those with independent self-construals [this paper is a preprint, meaning that it has not yet been subjected to peer review and the final published version may differ from the version this report was based on]

    Emily Reynolds is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on May 05, 2021 01:00 PM.

  •

    An update on Lab and Study Protocols at PLOS ONE

    Author: Emily Chenette, Editor-in-Chief, PLOS ONE

    PLOS empowers researchers to transform science by offering more options for credit, transparency and choice. The launch of Lab Protocols and Study Protocols in PLOS ONE earlier this year supported this crucial goal by bringing reproducibility and transparency to research, and enabling those who contributed to study design to receive credit for their contributions.  

    Today, we’re delighted to share the news that two Study Protocols have now been published in PLOS ONE.

    The first, by Satoru Joshita and colleagues, describes a protocol for studying the prevalence and etiology of portopulmonary hypertension in a cohort of Japanese people with chronic liver disease. 

    The second, by John Cole and colleagues, provides the protocol for the Copy Number Variation and Stroke (CaNVAS) Risk and Outcome study. This study aims to identify copy-number variations that are associated with a risk of ischemic stroke in diverse populations. 

    Cole writes, “Very little is known about the genetics of stroke outcome. In CaNVAS, copy number variation (CNV) as associated with both stroke risk and outcome will be explored on a large-scale basis. Providing the scientific community with the CaNVAS protocol early in the study will help identify other researchers interested in these efforts, with the goal to increase collaboration and scientific discovery regarding CNV throughout the project.”

    Furthermore, by having their Study Protocols reviewed and published, these authors have had the opportunity to ensure that their study designs are robust and reproducible before the research is completed. They’re also contributing to reducing publication bias by sharing the study aims before the results are available. 

    If you’re interested in submitting your own Study Protocol for consideration, our Submission Guidelines have more information about the submission and review process. One author-friendly feature of Study Protocols is that they are eligible for expedited review if the study has already received funding from an external source following that funder’s peer review.

    We also encourage researchers to share their detailed, verified research methodology by publishing a Lab Protocol in PLOS ONE. This unique article type was developed in partnership with protocols.io, and consists of two interlinked components: 1) a step-by-step protocol on protocols.io, with access to specialized tools for communicating methodological details and facilitating use of the protocol; and 2) a peer-reviewed PLOS ONE article contextualizing the protocol, with sections discussing applications, limitations and expected results. Several Lab Protocols are under review right now, and we look forward to publishing the first article soon! 

    Thank you to our authors, reviewers, editors and readers for contributing to these article types and supporting Open Science at PLOS ONE.

    The post An update on Lab and Study Protocols at PLOS ONE appeared first on The Official PLOS Blog.

    in The Official PLOS Blog on May 05, 2021 12:00 PM.

  •

    Paper on ‘energy medicine’ retracted after reader complaints

    An integrative health journal has retracted a 2019 paper two months after issuing an expression of concern about the article, distancing itself from the work.  The paper, which appeared in Global Advances in Health and Medicine, was a review of “energy medicine” by Christina Ross, of Wake Forest University in Winston-Salem, N.C.  As we reported … Continue reading Paper on ‘energy medicine’ retracted after reader complaints

    in Retraction watch on May 05, 2021 10:00 AM.

  •

    Stefano Mancuso’s Brilliant Green and Plant Revolution: review of two books

    Here I review two more books by Stefano Mancuso, a somewhat unorthodox plant scientist from Florence

    in For Better Science on May 05, 2021 06:37 AM.

  •

    How nurses can act as a touchpoint in underrepresented communities

    Dr Jasmiry Bennett, Nurse Practitioner Specialist at Yale, explains why nurses play a key role in combating disparities in healthcare access

    in Elsevier Connect on May 05, 2021 12:00 AM.

  •

    Here’s The Best Way To Forgive And Forget

    By Emma Young

    If somebody else has treated you badly, what are the best strategies for overcoming this, and moving on?

    There has been, of course, an enormous amount of research in this field, in relation to everything from getting over a romantic break-up to coping with the after-effects of civil war. Now a new study in the Journal of Experimental Psychology: Learning, Memory and Cognition, led by Saima Noreen at De Montfort University, specifically investigates how different types of forgiveness towards an offender can help people who are intentionally trying to forget an unpleasant incident.  

    As the name implies, “intentional forgetting” involves actively trying to suppress memories of an unpleasant experience. Recent studies have suggested that this lessens the associated negative emotions. Forgiveness has been more extensively investigated, and there is work finding that forgiving the perpetrator helps (though of course not all victims feel able or willing to forgive, and forgiveness is not an essential component of recovery).

    Noreen and her colleagues set out to explore possible interactions between intentional forgetting and “decisional” vs “emotional” forgiveness. Decisional forgiveness is making the decision to forgive the perpetrator, and not to seek revenge — indeed, even to make efforts to maintain a relationship — but while still bearing a grudge. In contrast, emotional forgiveness involves getting rid of negative emotions towards the perpetrator and replacing them with positive ones.

    In initial studies, the team presented online participants with this scenario: just as they are about to move in with their partner, they discover that their partner is having an affair. Participants were then encouraged to forget details associated with this hypothetical unpleasant experience (e.g. a list of adjectives that described the offender). Some were also instructed to forgive the offender either through emotional forgiveness (to “wish that the offender experiences something positive or healing and to focus their thoughts and feelings on empathy”) or decisional forgiveness (“think of the offender as someone who has behaved badly and that you have resolved not to pay her/him back and to treat him/her in a positive, rather than a negative way”). Others had no forgiveness intervention.

    The team found that participants in the emotional forgiveness group showed greater forgetting of the detail, though not the gist, of the offence than the other groups. These participants also reported feeling more psychological distance from the offence.

    These studies involved hypothetical offences, however. For the final online, two-stage study, the team recruited 360 fresh participants. In the first of two sessions, these people were guided to write in detail about a time in their life when someone close to them had done something that deeply offended, harmed or hurt them and that still left them feeling angry or resentful. Next, they rated various aspects of this experience, including the extent to which they had forgiven the perpetrator.

    Between 7 and 11 days later, these participants then completed the second session. This included the same three-group forgiveness intervention as in the earlier studies. The team’s analysis revealed that for these participants, emotional, but not decisional, forgiveness was associated with greater forgetting of the detail of the original transgression (though again not the gist of it). It was also associated with a shift to reporting feeling more forgiveness for the perpetrator.

    “Collectively, our findings suggest that the act of emotional forgiveness leads to a transgression becoming more psychologically distant, such that victims will construe the event at a higher and more abstract level,” the team writes. (In other words, retaining the gist, but not all the detail). “This high-level construal, in turn, promotes larger intentional forgetting effects, which, in turn, promote increased emotional forgiveness,” they go on.

    This research does support some earlier suggestions that to function as a coping strategy, forgiveness should take an emotional form. And there are clear potential implications for strategies designed to help people recover psychologically from an offence. However, more work is clearly needed to explore this apparent “virtuous circle”, and to get a better handle on potential effects in the real world.

    Moving on or deciding to let go? A pathway exploring the relationship between emotional and decisional forgiveness and intentional forgetting.

    Emma Young (@EmmaELYoung) is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on May 04, 2021 10:31 AM.

  •

    Prominent Chinese scientist failed to disclose company ties in COVID-19 clinical trial paper

    One of China’s leading scientists in the fight against COVID-19 failed to disclose ties to a pharmaceutical company in a paper stemming from a clinical trial, Retraction Watch has learned. A co-author on the paper is married to the daughter of that pharmaceutical company’s founder, who herself sits on the firm’s board of directors.  Nanshan … Continue reading Prominent Chinese scientist failed to disclose company ties in COVID-19 clinical trial paper

    in Retraction watch on May 04, 2021 10:01 AM.

  •

    Ecologist who lost thesis awards earns expressions of concern after laptop stolen

    Readers may roll their eyes at the various excuses authors use — including flooded labs and “my laptop was stolen” — when their data are unavailable for further scrutiny following questions. But here’s a case in which a stolen laptop is a real story. On April 5, Daniel Bolnick, the editor-in-chief of The American Naturalist, … Continue reading Ecologist who lost thesis awards earns expressions of concern after laptop stolen

    in Retraction watch on May 03, 2021 10:00 AM.

  •

    Kristian Helin gets the perfect job

    ICR London has a new director. The Great Dane Kristian Helin is the perfect successor to continue the ideological line of fictional cancer research.

    in For Better Science on May 03, 2021 06:00 AM.

  •

    In nursing, self-assessment is key to ensuring inclusive healthcare

    Dr Anna Valdez (she/her), Sonoma State University professor and Editor-in-Chief of Teaching and Learning in Nursing, explains how nurses can take action on inclusivity

    in Elsevier Connect on May 03, 2021 12:00 AM.


    Weekend reads: COVID-19 issue pulled; an author announces a retraction; FDA sanctions a company for not publishing results

    Before we present this week’s Weekend Reads, a question: Do you enjoy our weekly roundup? If so, we could really use your help. Would you consider a tax-deductible donation to support Weekend Reads, and our daily work? Thanks in advance. The week at Retraction Watch featured: Editor declines to correct paper with duplicated image after earlier …

    in Retraction watch on May 01, 2021 12:01 PM.


    Heating up the objective for two-photon imaging

    To image neurons in vivo with a large field of view, a large objective is necessary. This big piece of metal and glass is in indirect contact with the brain surface, with only water and maybe a cover slip in between. Through heat conduction, the objective effectively cools the brain surface locally (Roche et al., eLife, 2019; see also Kalmbach and Waters, J Neurophysiology, 2012). Is this a problem?

    Maybe it is: cooling by only a few degrees can reduce capillary blood flow and cause other side effects (Roche et al., eLife, 2019). It has also been shown (in slice work) that minor temperature changes can affect the activity of astrocytic microdomains (Schmidt and Oheim, Biophysical J, 2020), which might in turn affect neuronal plasticity or even neuronal activity.

    For a specific experiment, I wanted to briefly test how such a temperature drop affects my results. Roche et al. used a commercial objective heating device with temperature controller, and a brief email exchange with senior author Serge Charpak was quite helpful to get started. However, the tools used by Roche et al. are relatively expensive. In addition, they used a fancy thermocouple element together with a specialized amplifier from National Instruments to probe the temperature below the objective.

    Since this was only a brief test experiment, I was hesitant to buy expensive equipment that might never be used again. As a first attempt, I wrapped a heating pad, normally used to keep mice at physiological body temperature during anesthesia, around the objective; however, the immersion medium below the objective could only be heated to something like 28°C, quite a bit below the desired 37°C.

    Heating pad, wrapped around a 16x water immersion objective. Not hot enough.

    Therefore, I got in touch with Martin Wieckhorst, a very skilled technician at my institute, who suggested a simple but more effective way to heat the objective. After applying a layer of insulation tape (Kapton tape, see picture below), we wrapped a constantan wire, which he had available from another project, in spirals around the objective body, followed again by a layer of insulation tape. Then, using a lab power supply, we simply passed a current (ca. 1 A at 10 V) through the wire. The wire acts as a resistor, which is why adjacent spirals must not touch each other, and dissipates heat that is taken up by the objective body.
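    As a sanity check on these numbers, the dissipated power and the wire length required for a given supply are easy to estimate. A minimal sketch (the wire diameter here is my assumption for illustration, not a value from the build):

```python
import math

# Back-of-the-envelope check of the resistive-wire heater described above.
# Voltage and current are from the text; the wire diameter is an assumption.
RHO_CONSTANTAN = 4.9e-7   # resistivity of constantan, ohm*m (textbook value)
V = 10.0                  # volts, from the text
I = 1.0                   # amperes, from the text

power = V * I             # heat dissipated in the wire, in watts
resistance = V / I        # total wire resistance implied by V and I, in ohms

d = 0.2e-3                # ASSUMED wire diameter in meters (not given in the text)
area = math.pi * (d / 2) ** 2
length = resistance * area / RHO_CONSTANTAN  # wire length giving that resistance

print(f"{power:.0f} W dissipated, ~{length:.2f} m of wire needed")
```

    With these assumptions, about 10 W of heat and well under a meter of wire suffice, which is consistent with a few spirals around an objective body.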

    Constantan wire wrapped in spirals around the objective body. Semi-transparent Kapton tape used for insulation makes the wires barely visible on this picture.

    To measure the temperature below the objective, we needed a sensor as small as possible; a typical thermometer head would simply not fit into the space between objective and brain surface. We decided to use a thermistor or RTD (resistance temperature detector). But how to read out the resistance and convert it into temperature? Fortunately, Martin found an old heating block which contained a temperature controller (this one). Such controllers can typically read standardized thermistors of various kinds, or thermocouples.

    Next, we bought the sensor itself, a PT100 sensor (I think it was this one) with a very small spatial footprint. Connecting the PT100 to the temperature controller is pretty straightforward once you understand the three-wire connection scheme (explained here), which serves to eliminate the effect of the cables’ electrical resistance on the measurement. We then dipped the head of the PT100 into non-corrosive hot glue to prevent a short circuit of the PT100 element once it dips into the immersion medium; the medium is at least partially conductive and would otherwise distort the measured resistance and therefore the measured temperature. Once everything was set up, we checked the functionality of the sensor in a water bath, using a standard thermometer for calibration. Another way to perform this calibration would be an ice bath, which sits stably at 0°C.
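    For reference, the resistance-to-temperature conversion that such a controller performs for a PT100 can be sketched with the standard Callendar–Van Dusen coefficients (IEC 60751). The example reading below is illustrative, not a value from the experiment:

```python
import math

# Converting a PT100 resistance reading to temperature, as a controller does
# internally. Standard Callendar-Van Dusen coefficients (IEC 60751), valid
# for temperatures at or above 0 degC; R0 = 100 ohm for a PT100.
R0 = 100.0
A = 3.9083e-3
B = -5.775e-7

def pt100_to_celsius(resistance_ohm: float) -> float:
    """Invert R(T) = R0 * (1 + A*T + B*T**2) for T >= 0 degC."""
    return (-A + math.sqrt(A * A - 4 * B * (1 - resistance_ohm / R0))) / (2 * B)

# A reading of about 114.4 ohm corresponds to roughly 37 degC:
print(round(pt100_to_celsius(114.4), 1))
```

    The three-wire scheme only compensates for lead resistance; the conversion itself is this quadratic inversion, which the repurposed controller handles for standardized sensors.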

    A repurposed heating block to read out a thermistor. We first looked up the data sheet of the built-in controller (bottom right) and then connected a PT100 thermosensor to its inputs. The PT100 sensor is located at the tiny end of the blue cable (inset), covered by a thin film of non-corrosive hot glue.

    The contact surface of my objective with the immersion medium is mostly glass and a bit of plastic, so it took roughly 30-60 min until the temperature below the objective stabilized at around 37°C. To prevent the heat from spreading throughout the whole microscope, we used a plastic objective holder that does not conduct heat.

    Altogether, I found this small project very instructive. First, I was surprised to learn how reliable and fast an objective heater based on simple resistive wire can be: heating the metal part of the objective to >60°C within minutes was no problem. However, it took much longer until the non-metal parts of the objective also reached the desired temperature. I was also glad to see that the objective (16x Nikon) was not damaged and that its resolution during imaging was not affected by the increased temperature!

    Designing a very small temperature sensor was the more complicated part, also because of the standard three-wire scheme for thermistor measurements. However, all components we used were relatively cheap, and I think such temperature measurement devices are interesting tools that could also be used for other experiments, e.g., to monitor body temperature or to build custom temperature controllers for water baths in slice experiments.

    in Peter Rupprecht on May 01, 2021 09:34 AM.


    Free Will And Facial Expressions: The Week’s Best Psychology Links

    Our weekly round-up of the best psychology coverage from elsewhere on the web

    It’s not possible to reliably predict the emotions someone is experiencing based just on their facial expressions. And yet tech companies are trying to do just that. At The Atlantic, Kate Crawford explores some of these attempts — and the contested research on which they are based.

    At Science, Kelly Servick takes a look at attempts to understand and treat the “brain fog” experienced by some COVID-19 survivors.

    Short sessions of unconscious bias training are unlikely to produce any long-term changes in the workplace. But many researchers believe that we’ve been too quick to simply dismiss these courses, writes David Robson at The Observer: instead, we should understand what they get wrong and what they get right, and use this knowledge to develop better solutions for combating bias.

    Is free will an illusion? Does it even matter? Oliver Burkeman explores what philosophy, psychology and neuroscience have to say in a long read at The Guardian.

    Researchers have developed a method to predict whether a psychedelic compound will produce hallucinations, reports Ariana Remmel at Nature. The technique involves using a fluorescent sensor to determine exactly how the molecule will bind to a particular serotonin receptor in the brain. The method could be useful for finding non-hallucinogenic psychedelics for the treatment of mental health disorders.

    How has the pandemic affected those living with obsessive compulsive disorder? Although early on some researchers were worried that public health measures like hand washing could make certain symptoms worse, the data suggests that that’s not the case, write Carey Wilson and Thibault Renoid at The Conversation. However, the pandemic may have increased general feelings of anxiety and stress among some people with OCD, similar to effects on people with other mental health conditions.

    Finally, the Association of British Science is holding an online event with psychologist Lisa DeBruine on 4th May. DeBruine will be talking about the replication crisis and the work of the Psychological Science Accelerator. For a primer, check out Brian Resnick’s recent piece at Vox, and Jon Brock’s story in The Psychologist from last year.

    Compiled by Matthew Warren (@MattBWarren), Editor of BPS Research Digest

    in The British Psychological Society - Research Digest on April 30, 2021 01:53 PM.


    arXiv’s Giving Week is May 2 – 8, 2021

    arXiv is free to read and submit research, so why are we asking for donations?

    arXiv is not free to operate, and, as a nonprofit, we depend on the generosity of foundations, members, donors, volunteers, and individuals like you to survive and thrive. If arXiv matters to you and you have the means to contribute, we humbly ask you to join arXiv’s global community of supporters with a donation during arXiv’s Giving Week, May 2 – 8, 2021.

    Less than one percent of the five million visitors to arXiv this month will donate. If everyone contributed just $1 each, we would be able to meet our annual operating budget and save for future financial stability.

    Would you like to know more about our operations and how arXiv’s funds are spent? Check out our annual report for more information.

    Thank you for your support!


    in arXiv.org blog on April 30, 2021 01:00 PM.


    Announcing the first OA book funded entirely by library membership programme ‘Opening the Future’

    Roll out of first open access books fully funded by Opening the Future

    We’re thrilled to announce that our Opening the Future library membership programme has reached the threshold needed to begin funding the first titles in open access. The Opening the Future platform is a CEU Press and COPIM initiative, launched earlier this year to facilitate transitioning the entire monograph programme of CEU Press into open access together with its partners Project MUSE, LYRASIS and Jisc. Within the model, the first of its kind, CEU Press provides access to portions of its highly regarded backlist, to which members subscribe. The revenue from these subscriptions is allocated entirely to making the frontlist OA from the date of publication.

    Words in Space and Time: Historical Atlas of Language Politics in Modern Central Europe by Dr Tomasz Kamusella will be the first title published OA through the programme. The atlas, available this autumn, offers novel insights into the history and mechanics of Central Europe’s languages as products of human history and a part of culture. It includes forty-two annotated and interactive maps. Further titles will be announced soon, and advance notice will be given to avoid any double-dipping.

    Opening the Future is a simple and cost-effective way for libraries to support OA books, especially in HSS subjects. With 250 libraries each participating at appropriate pricing tiers, CEU Press can publish 25 OA books at a cost of 11 EUR / 13 USD / 10 GBP per monograph for each library (if we reach our targets).

    “With the Opening the Future model, CEU Press and COPIM developed a fair pricing system in the true sense of the word “fair” that allows libraries of all sizes and budgets to support OA monograph publishing in a sustainable way,” said Curtis Brundy, Associate University Librarian for Scholarly Communication and Collections at Iowa State University, a member of the OtF programme.

    “During the past five years, the move toward open access for all publicly-funded research publications has become a new norm and goal. CEU Press with its own OtF project trailblazes this new ground as a leader in this regard,” said Dr Tomasz Kamusella, Reader at the University of St. Andrews, UK.

    More Information

    For libraries or other institutions that want to support the move to immediate OA, without author-facing charges, more information can be found at https://ceup.openingthefuture.net/. For further details on the pricing and structure of the model, see the FAQs and resources web pages.

    in Open Access Tracking Project: news on April 30, 2021 07:27 AM.


    Schneider Shorts 30.04.2021

    Schneider Shorts 30 April 2021: a stupid Neanderthals study, Sputnik V meltdown, German anti-maskers in MDPI, and probably the most unethical COVID-19 clinical trial.

    in For Better Science on April 30, 2021 06:00 AM.


    Pandemic raises new alarms on mental health

    Primary care providers play a key role in screening for mental health issues; we’re offering free resources for Mental Health Awareness Month and beyond

    in Elsevier Connect on April 30, 2021 12:00 AM.


    The importance of early career researchers for promoting open research

    Author: Iain Hrynaszkiewicz, PLOS’ Director of Open Research Solutions

    Early career researchers again appear to be at the vanguard of open research, reporting more positive attitudes towards sharing code than more experienced researchers, as found in PLOS research released as a preprint this week.

    At the end of March 2021, PLOS Computational Biology introduced a more stringent policy on sharing code associated with articles published in the journal. This was in response to a desire among members of the journal’s community to go further in promoting open science, a desire that appears to be shared by the community at large, as determined through collaborative research between PLOS and the journal’s community of editors and researchers.

    While more than 40% of papers published in PLOS Computational Biology already shared their code voluntarily, requiring more authors to share more of their research outputs as a condition of publication is not a decision to be taken lightly by any journal. Therefore, to better understand the computational biology community’s attitudes and experiences around code sharing, we designed a survey to help us understand:

    • Is a mandatory code sharing policy suitable for researchers in the community?
    • What proportion of researchers’ papers generate code?
    • What concerns do they have about code sharing?
    • How common are these concerns?
    • How much would submissions to the journal be affected?
    • Are there differences in different segments of researchers (regions, disciplines, career stages)?

    Complementing the editorial announcing the policy and the survey dataset released in March, a more in-depth analysis of our survey of more than 200 researchers has now been released as a preprint. As well as supporting the journal’s plans, we hope this work will be a resource for other stakeholders considering adopting new policies on code sharing. Along with research data and protocols, sharing of code helps ensure that research is reusable and reproducible. But, as we discovered when developing the policy, there is limited evidence on the prevalence of code sharing, relative to other open research practices such as data sharing, and on researchers’ experiences with sharing code and software.

    The authors surveyed report that, on average, 71% of their research articles have associated code and that, for the average author, code has not been shared for 32% of these papers. Many researchers had not shared their code in the past due to practical or technical issues, such as insufficient time, skills, or system dependencies, which, at least in principle, would not prevent compliance with a mandatory code sharing policy. Twenty-two percent of respondents who had not shared their code, however, cited intellectual property (IP) concerns, a legitimate issue that might prevent public sharing of code under a mandatory policy. Combining these survey results with tests of draft versions of the policy with researchers in the field, we concluded that an inclusive policy would need to permit exemptions in certain cases. However, the results also implied that more of the respondents’ previous publications could have shared code.
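    Taking the two reported averages at face value, the implied share of all papers (for the average author) whose code exists but was never shared is simple arithmetic; a quick sketch:

```python
# Rough estimate from the two averages reported in the survey. This treats
# the averages as if they applied uniformly, which is a simplification.
has_code = 0.71     # fraction of papers with associated code
not_shared = 0.32   # fraction of those papers whose code was not shared

unshared_overall = has_code * not_shared
print(f"~{unshared_overall:.0%} of all papers have unshared code")
```

    That is, roughly a quarter of the average author’s papers have code that could, in principle, have been shared.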

    Figure 1. Reasons given by respondents for not sharing code publicly in the past.

    Another key finding was that acceptance of a mandatory code sharing policy differed between research fields and career stages (as measured by respondents’ number of previous publications). Medical researchers reported being less likely to submit to the journal if it had a mandatory code sharing policy, as did researchers with more than 100 publications, whereas researchers with fewer than 20 published papers responded more positively towards submitting to a journal with such a policy. Other studies have found greater affinity for open research amongst early career researchers, including a 2021 peer-reviewed survey of Early Career Researchers (ECRs) within the Max Planck Society, which concluded that ECRs seem to hold a generally positive view of open research practices.

    Figure 2. Respondents were asked “If PLOS Computational Biology required you to publicly share any computer code you created to interpret your results, how would this affect your likelihood to submit to the journal?”

    Also, similar to what we discovered about researchers’ needs and priorities for data sharing in 2020, respondents were satisfied with their ability to share their own code but less satisfied with their ability to access other researchers’ code. From this we infer that offering researchers new products or services to share code, at least in this community, in the absence of a stronger policy, would be unlikely to achieve the goal of measurably increasing the availability of code with the journal’s publications. However, as with research data, we see opportunities for journals and publishers to increase findability and accessibility, and ultimately reuse, of code generated by researchers, which in turn can help realise more benefits of open research.

    Read the preprint here and the survey dataset here. Please note that our results have not yet been peer reviewed, but they will be submitted to a peer-reviewed journal soon.

    The post The importance of early career researchers for promoting open research appeared first on The Official PLOS Blog.

    in The Official PLOS Blog on April 29, 2021 02:40 PM.


    People Hold Negative Views About Those Who Believe Life Is Meaningless

    By Emily Reynolds

    “The only absolute knowledge attainable by man is that life is meaningless,” wrote Leo Tolstoy in A Confession, a succinct summing up of the nihilist worldview. Depressing as it may be, nihilism seems to be on the rise, with the importance of finding a meaningful worldview steadily decreasing over the last decade or so.

    But how do other people view nihilists? This is the question posed by Matthew J. Scott and Adam B. Cohen of Arizona State University in a new paper published in The Journal of Social Psychology. They find that stereotypes of nihilists are overwhelmingly negative — and unlike stereotypes about atheists, people don’t seem to have any positive views about nihilists at all.

    In the first study, 464 participants viewed a short profile of either a fictitious man or woman, containing a picture and some information including occupation, favourite foods, hobbies, and preference for cats or dogs. The profile also outlined the person’s “pet philosophy”: they either had a nihilistic outlook (“We are here because of random events. Our lives have no purpose”) or a meaningful outlook (“We are all here for a reason. All of our lives serve a purpose”).

    Participants then rated how much the person embodied certain traits: the Big Five personality traits, “folk social value traits” which included “fun”, “energetic”, “educated”, and “trustworthy”, and the ability to fulfil social tasks including self-protection, being a good friend, attracting romantic partners, caring for family, and being interested in having children.

    Profiles that indicated life had a purpose were rated as more agreeable, conscientious, outgoing, and open-minded than nihilist profiles, which were in turn considered more neurotic. Similarly, the folk social value traits were all more closely associated with meaningful profiles than they were with nihilist profiles. Nihilists were also considered less competent in all social tasks except for self-protection.

    The team also found that participants viewed nihilists as less religious, more depressed, and less likely to plan for the future — and that these perceptions could go some way to explaining why they had more negative views of nihilists than of those with a more meaningful outlook. 

    A second study, involving 312 undergraduate participants, replicated the first, and came to many of the same conclusions. This time, the team also found that people had negative views of nihilists no matter what their own beliefs were on the meaning of life.

    In three final studies, instead of using profiles of people who were or weren’t nihilists, the team showed participants profiles of people who were described as religious or non-religious, depressed or happy, or good or bad at future planning. Each of these factors was associated with negative judgements: among other findings, non-religious profiles were considered less socially competent and less conscientious; depressed profiles were seen as less extraverted and agreeable, and less able to avoid disease, attract romantic partners and look after children; and those with lower levels of future-planning were seen as less conscientious, less virtuous and less intelligent, and less likely to be able to care for themselves and others. Together, the results of the final studies suggest that these three factors are indeed the key reason nihilists are negatively stereotyped.

    Overall, the results don’t paint a rosy picture when it comes to views of nihilists. Unlike previous research on atheism, which found both positive and negative stereotypes of the non-religious, there also seems to be very little upside to the results — compared to people who think life has meaning, nihilists were viewed more negatively across the board.

    Looking more closely at different kinds of nihilism might produce different results, however — the team uses the example of someone who finds the accumulation of wealth meaningless, which may come with more positive stereotypes. It’s also perfectly possible that someone might believe life itself has no inherent purpose but has nonetheless developed their own sense of meaning. Looking more closely at people’s personal webs of meaning may reveal more nuanced attitudes, challenging broader stereotypes of how a “nihilist” thinks, feels, and behaves.

    Stereotypes of nihilists are overwhelmingly negative

    Emily Reynolds is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on April 29, 2021 12:13 PM.


    Delivering non-communicable disease care in a humanitarian crisis

    Non-communicable diseases (NCDs), like cardiovascular diseases and diabetes, represent the primary cause of mortality worldwide. An increasing volume of NCDs in low- and middle-income countries (LMICs) now accounts for the majority of recent surges in the global NCD burden.

    Simultaneously, increasing humanitarian crises in LMICs have resulted in the global forced displacement of people reaching record highs, with the average length of displacement now greater than 20 years. Until recently, the problem of NCDs in conflict-affected populations was largely neglected. However, with growing awareness of the challenges NCDs represent in humanitarian settings, clinical guidance has been developed for chronic NCD care in LMICs.

    Since 2014, the non-governmental organization (NGO) Médecins Sans Frontières (MSF) has supported the Jordanian health system by providing multidisciplinary, primary level NCD care to Syrian refugees and vulnerable Jordanians living in Irbid, Jordan. This week in BMC Health Services Research, Ansbro et al. describe their experiences with MSF, adapting the RE-AIM framework to evaluate MSF’s program of NCD care.

    RE-AIM is a framework consisting of five elements: Reach, Effectiveness, Adoption, Implementation, and Maintenance. It was originally developed to encourage program planners, evaluators, funders, and policy-makers to pay closer attention to essential program elements, to improve sustainable adoption, and to promote the implementation of effective, generalizable, evidence-based interventions. In this study, using mixed methods, Ansbro et al. examined the five key elements of RE-AIM in the MSF Syrian refugee NCD program.

    Image Source: Ory et al. (2015)

    Reach, Effectiveness, and Adoption

    Most areas of the program were viewed as acceptable by patients, staff, and stakeholders. Although the care itself was free, patients were burdened with indirect costs, like paying for transport to attend clinics, which can potentially outweigh the benefits of ‘free’ care.

    The program achieved good clinical outcomes for patients treated for hypertension and diabetes. However, future research with longer follow-up periods is needed to understand the prevalence and outcomes of major NCD complications.


    Implementation

    A key challenge for implementation was the impact of Syrian patients’ experiences of war, loss, and social suffering on their engagement with NCD care. Staff also shared concerns that they could not manage medical problems in isolation from the psychosocial issues patients faced, and they felt ill-equipped to handle patients’ war-related trauma.

    Patients and staff also reported that the referral pathways for specialist services were troublesome and confusing. Funding for specialist referral pathways was limited, even though MSF managed to secure additional funding. This highlights the need for future programs to securely implement specialist referral pathways, ensuring they are designed to be affordable and accessible from the start.


    Maintenance

    A key challenge for maintaining the program was cost. High costs were partly responsible for the program’s limited coverage and scope. However, there is room for adaptations. For example, MSF introduced nurse task sharing, which could lead to cost savings.

    Another challenge is the availability of highly qualified family medicine specialists to manage patients with complex needs. While MSF had access to many highly qualified Jordanian staff on this occasion, that is often not the case in humanitarian crisis settings. This issue is difficult to address, but the authors propose potential workarounds, like telemedicine, as possible solutions for future programs.

    Research in humanitarian crisis settings is extremely difficult. Not only do governments and humanitarian organizations face significant challenges in effectively tackling NCDs in LMICs, but evaluating interventions in humanitarian settings is equally challenging. Still, this study demonstrated that RE-AIM is a valuable tool for guiding complex interventions in humanitarian crisis settings. Ansbro et al. bridge a knowledge gap in the delivery of care in humanitarian settings; however, more research is urgently needed to strengthen responses to NCDs in these settings. Future programs need to focus on simplifying care models, reducing costs, and harnessing telehealth resources.

    The post Delivering non-communicable disease care in a humanitarian crisis appeared first on BMC Series blog.

    in BMC Series blog on April 29, 2021 07:00 AM.


    Time for art – at work and beyond

    I am privileged to work for an employer that values work-life balance and wellness.

    in Elsevier Connect on April 29, 2021 12:00 AM.


    Learning Compositional Sequences with Multiple Time Scales through a Hierarchical Network of Spiking Neurons

    In this week’s Journal Club session, Muhammad Yaqoob will talk about the paper "Learning Compositional Sequences with Multiple Time Scales through a Hierarchical Network of Spiking Neurons".

    Sequential behaviour is often compositional and organised across multiple time scales: a set of individual elements developing on short time scales (motifs) are combined to form longer functional sequences (syntax). Such organisation leads to a natural hierarchy that can be used advantageously for learning, since the motifs and the syntax can be acquired independently. Despite mounting experimental evidence for hierarchical structures in neuroscience, models for temporal learning based on neuronal networks have mostly focused on serial methods. Here, we introduce a network model of spiking neurons with a hierarchical organisation aimed at sequence learning on multiple time scales. Using biophysically motivated neuron dynamics and local plasticity rules, the model can learn motifs and syntax independently. Furthermore, the model can relearn sequences efficiently and store multiple sequences. Compared to serial learning, the hierarchical model displays faster learning, more flexible relearning, increased capacity, and higher robustness to perturbations. The hierarchical model redistributes the variability: it achieves high motif fidelity at the cost of higher variability in the between-motif timings.


    Date: 2021/04/30
    Time: 14:00
    Location: online

    in UH Biocomputation group on April 28, 2021 03:53 PM.


    Passion Is Linked To Greater Academic Achievement — But In Some Cultures More Than Others

    By Emily Reynolds

    “Passion” is a word that often crops up on job descriptions and in interviews; articles proliferate online explaining how to adequately express your passion to potential employers. On the whole, passionate people — those who have a strong interest in a particular topic, who are confident in themselves and who dedicate themselves to what they’re doing — are thought of in a positive light, and considered likely to achieve their goals.

    But when it comes to predicting achievement, how important is passion really? According to Xingyu Li from Stanford University and colleagues, writing in PNAS, passion may be less important in certain cultures — and the fact that passion is often seen as the key to achievement may reflect a “distinctly Western model of motivation”.

    The chief factor explored in the study was how individualistic a society was: in individualistic cultures, the team argues, people are more motivated by pursuing paths related to their passions, while those in collectivistic cultures are more likely to see themselves as part of an interdependent network, often fulfilling obligations to others rather than focusing on their own interests.

    Data was gathered from a large international survey focusing on education, which included over 1.2 million participants from nearly sixty different countries. The survey measured academic achievement in maths, science and reading during several closed-book tests, and the team also examined the individualism or collectivism of each culture.

    Passion itself is hard to measure across cultures — some participant languages, including Mandarin, have no direct translation for the word as understood in the Western world. Instead, passion was measured through self-report answers on various different factors: passionate students were those who were strongly and independently motivated and who showed high levels of enjoyment, interest and efficacy.

    In science subjects, passion was positively correlated with academic achievement: those with higher levels of enjoyment, interest and efficacy also had higher test scores. But the strength of this correlation wasn’t the same across cultures: those in individualistic societies such as the US, Australia, and the UK showed a stronger link between passion and achievement than did those in more collectivistic societies such as Colombia or Thailand.

    These results were also mirrored in both mathematics and reading — again, high-achieving participants from more individualistic societies were also more likely to have high levels of passion, whilst high-achieving participants from collectivistic societies were not.

    In order to check whether any other cultural differences could predict the link between passion and achievement, the team also looked at eight further factors that vary between cultures, including how likely people are to avoid uncertainty, whether they value simple survival over self-expression, and how indulgent they are. However, only differences in individualism vs. collectivism could explain the association between passion and achievement across cultures.

    It’s clear from the results that passion isn’t the only thing that motivates people to achieve their goals — indeed, the team also found that parental support was a crucial factor in achievement in collectivistic societies. And these findings could help with the development of educational support programmes that cater to a wider range of individuals. Rather than developing support that focuses solely on self-regulation and passion, for example, institutions could take a wider view of what motivates students to do well.

    It’s also important to note that one type of motivation is no better than another. Although motivation from others could be seen as more extrinsic, the team writes, it “need not feel like coercive pressure from the outside”: rather than feeling overbearing, such motivation can be a source of “empowerment, persistence and resilience”. While for many students passion is key, it’s certainly worth considering the other factors that might make people tick.

    Passion matters but not equally everywhere: Predicting achievement from interest, enjoyment, and efficacy in 59 societies

    Emily Reynolds is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on April 28, 2021 10:57 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Know Your Brain: Olivary Nuclei

    Where are the olivary nuclei?

    The olivary nuclei, which consist of the inferior olivary nucleus and superior olivary nucleus, are found in the brainstem. The olivary nuclei are paired structures, with one inferior and one superior olivary nucleus on each side of the brainstem. The inferior olivary nuclei are located in the medulla oblongata, and the superior olivary nuclei are found in the pons. Both nuclei are typically subdivided into collections of smaller nuclei.

    What are the olivary nuclei and what do they do?



    The inferior and superior olivary nuclei are distinct in function. The inferior olivary nucleus is typically subdivided into the principal olive, medial accessory olive, and dorsal accessory olive, and is thought to play an important role in movement, coordination, and movement-related learning. The superior olivary nucleus consists of the lateral superior olive and medial superior olive, as well as a number of surrounding nuclei known as the periolivary nuclei. The superior olivary nuclei are thought to be involved in hearing, and specifically with identifying the location of sounds.

    The inferior olivary nuclei receive movement-related information from several sources, including the spinal cord and motor cortex. This includes information about current movement, body position, muscle tension, and intention. The inferior olivary nuclei use this information to communicate with the cerebellum to fine-tune movements and aid in movement-related learning.

    Superior olivary nuclei indicated in a cross-section of the brainstem at the level of the pons


    The superior olivary nuclei receive projections from the cochlear nuclei that carry information about hearing. Neurons leave the superior olivary nuclei to extend to the inferior colliculus, which is an important part of the auditory system. The superior olivary nuclei receive information from both ears, and that information is compared to detect differences in qualities like intensity and to determine the location of a sound in the environment. The information is then sent to the inferior colliculus and processed further before being sent on to other regions like the thalamus and cerebral cortex. Additionally, neurons in the superior olivary nuclei project back to the cochlear nuclei. These projections are thought to be involved in negative feedback mechanisms that help to inhibit auditory stimuli that are deemed less important, such as background conversations.


    Paul MS, M Das J. Neuroanatomy, Superior and Inferior Olivary Nucleus (Superior and Inferior Olivary Complex) [Updated 2020 Jul 31]. In: StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2021 Jan-. Available from: https://www.ncbi.nlm.nih.gov/books/NBK542242/

    Schweighofer N, Lang EJ, Kawato M. Role of the olivo-cerebellar complex in motor learning and control. Front Neural Circuits. 2013 May 28;7:94. doi: 10.3389/fncir.2013.00094. PMID: 23754983; PMCID: PMC3664774.

    in Neuroscientifically Challenged on April 28, 2021 10:24 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Corona up your NOse

    "Prior going to the grocery store, after the grocery store, you'd spray it in your nose, for instance, or you go to day care or someone coughs on you," - Dr Chris Miller, co-founder of SaNOtize.

    in For Better Science on April 28, 2021 06:48 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Accelerating progress in brain recording tech

    In Stevenson and Kording (2011), the authors estimated that the number of neurons we can record from simultaneously doubles every 7.4 years. Think of it as Moore’s law for brain recordings. Since then, Stevenson has updated the estimate, which now stands at 6 years. Could it be that progress itself is accelerating?
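    Compounded over decades, the gap between a 7.4-year and a 6-year doubling time is substantial. A minimal sketch of the arithmetic (the starting count of 1,000 neurons is hypothetical, chosen only for illustration):

```python
def project_neurons(n0: float, doubling_time: float, years: float) -> float:
    """Neurons recordable after `years`, assuming steady exponential growth."""
    return n0 * 2 ** (years / doubling_time)

# With a hypothetical starting point of 1,000 simultaneously recorded
# neurons: five doublings in 30 years at a 6-year doubling time give
# 32x growth, versus roughly 16x at 7.4 years.
print(round(project_neurons(1000, 6.0, 30)))   # → 32000
print(round(project_neurons(1000, 7.4, 30)))
```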

    Matteo Carandini raised a question: why should progress be log-linear anyway? Technological phenomena have been argued to follow a double-exponential curve: the pace of progress itself accelerates over time. This is only noticeable when we look over a very long time horizon, for instance, when we look at computation per dollars over more than a century:

    Doubling times (image CC BY Steve Jurvetson)

    But we have 60 years to look at, so we can make these inferences! I took the data in Urai et al. (2021) – generously released under a CC-BY license – and fit a Bayesian Poisson regression model over time (code here). I fit only the electrophysiology data. It’s clear here that early times are underfit by the line. The doubling time estimated here is shorter than what has been noted in the literature – 4.5 years.

    A log-linear model of progress in electrophysiology
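    To see how a doubling time falls out of a log-linear fit, here is a minimal sketch using ordinary least squares on synthetic data rather than the post's Bayesian Poisson regression: the slope of log2(count) versus year is doublings per year, and its reciprocal is the doubling time.

```python
import math

# Synthetic (year, count) data built with a known 4.5-year doubling time,
# standing in for the Urai et al. (2021) electrophysiology dataset.
years = list(range(1960, 2021, 10))
counts = [2 ** ((y - 1960) / 4.5) for y in years]

# Ordinary least squares on log2(counts): the slope is doublings per year.
logs = [math.log2(c) for c in counts]
n = len(years)
mean_y, mean_l = sum(years) / n, sum(logs) / n
slope = sum((y - mean_y) * (lg - mean_l) for y, lg in zip(years, logs)) / \
        sum((y - mean_y) ** 2 for y in years)
doubling_time = 1 / slope
print(round(doubling_time, 2))  # → 4.5, recovering the known doubling time
```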

    On a technical note, a Poisson regression model will tend to give larger weight to higher numbers – hence, it focuses on fitting the right-hand side of the graph, while the linear regression model that’s conventionally used gives equal weight everywhere. With an accelerating trend, that means the Poisson regression model gives a shorter doubling time.

    We can do one better – fit a double-exponential model. This is only a few lines of code in PyMC3 – a miracle of automatic differentiation and Hamiltonian Monte Carlo. Here’s what that looks like:

    You can see visually this is a much better fit, and it implies something pretty dramatic: progress itself is accelerating. That means that doubling time itself has changed over time – and it currently stands at 3.6 years under this model [95% CI 3.5-3.7].
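    The double-exponential claim can be made concrete with a toy model (the parameters a and g are hypothetical, not fitted to any real data): if log2 of the neuron count itself grows exponentially, the instantaneous doubling time is the reciprocal of its derivative and shrinks over time.

```python
import math

# Toy double-exponential model: log2 of the neuron count grows
# exponentially, so the instantaneous doubling time shrinks over time.
# Parameters a and g are hypothetical, chosen only for illustration.
a, g = 0.5, 0.03

def log2_neurons(t: float) -> float:
    """log2 of recordable neurons, t years after an arbitrary start."""
    return a * math.exp(g * t)

def instantaneous_doubling_time(t: float) -> float:
    """Reciprocal of d(log2 N)/dt = a * g * exp(g * t)."""
    return 1.0 / (a * g * math.exp(g * t))

print(round(instantaneous_doubling_time(0), 1))   # doubling time early on
print(round(instantaneous_doubling_time(60), 1))  # 60 years later: much shorter
```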

    These results project a 1M neuron average recording capability by 2045 – of course, this discounts ceiling effects and potential paradigm shifts, which could adjust these bounds far upward or downward. What about optical methods? It turns out that the Poisson model works poorly because of overdispersion. I used a negative binomial to model the noise in the curve. I tried to let the overdispersion parameter be free, but I was getting convergence problems. Hence I fixed it to 2.0.

    The implied doubling time is a little less than 2 years. These numbers could swing wildly as we add more data, but we see that imaging is currently doubling at least twice as fast as electrophysiology. This is due to the market pressures in cellphone sensors and telecommunications (fiber optics and LiDAR), making good sensors very cheap. Many in neurotech have taken note, including Facebook, which is building light-based BCIs, and Paradromics, which is adapting some of the fabrication methods from imaging sensors to electrophysiology.

    Thus, this generalized Moore’s law of recordings is likely to continue decreasing in doubling time over the foreseeable future. Does this mean recording from every cell in the brain ($10^{11}$ cells) in the next 25 years? Probably not with electrodes – but if progress with light-based sensing continues at the same pace, perhaps. There is the vexing issue of scatter – and some of the people in this thread have some ideas on how to solve this.

    Regardless of the exact course of progress, I think that 7 years is far too long a doubling time – perhaps 3.5 years for ephys, 2 years for imaging. The future ain’t what it used to be, and it’s coming far faster than we’ve perhaps imagined. What will we do with all this data? There are some great hints in the Urai paper. An interesting research question is how holographic the brain is – perhaps we will get most of the understanding with far less than 100% coverage. Regardless, I think Adam Calhoun put it best:

    Update: Ian Stevenson re-did the analysis with slightly different models, and found some slightly different results (longer doubling times than those reported here), from 4.5 to 5.6 years, depending on the assumptions. These doubling times are nevertheless shorter than the ones previously reported in the literature, so the larger point still stands: the future is happening faster than we thought just a few years ago. Read the thread here.

    Further reading

    in xcorr.net on April 27, 2021 05:48 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    To boldly grow: five new journals shaped by Open Science

    PLOS announces new journals

    We are extremely excited to announce the imminent launch of five new journals, our first new launches in fourteen years. These new journals are unified in addressing global health and environmental challenges and are rooted in the full values of Open Science:

    But, before we go on, let’s address this: “Yet more journals?” (Yes, we heard you…!)

    Yes. We set out, with our original seven journals, to transform research communication by making research content openly accessible. Over the eighteen years since our first journal launch we’ve helped prove the viability of Open Access which, despite the occasional disagreement about how best to achieve it, is now a mainstream notion. We changed the publishing landscape via PLOS ONE, and this new type of “mega-journal” now features at almost every publisher and in every research field. We helped focus the conversation about peer review on rigor and transparency, and more people now understand how impact-seeking should never be at the expense of these notions. And, via open data policies, open peer review options, protocols, preregistration, preprint facilitation, etc., we’ve worked towards Open Science at a scale and in ways that increase the transparency and rigor of the entire research communication process, not just our journals. 

    All this is to say that we do not take lightly the responsibility of introducing new journals into the world. While we don’t believe the long-term future of research communication is always going to be the journal as we know it today, we do appreciate the impacts that journals can still have, and the new communities of practice they can empower. 

    Therefore, we are launching these five journals in the spirit of them being additional change-making vehicles.

    We’ve invited key members of the PLOS Leadership Team along with our CEO, Alison Mudditt, to field specific questions, below: 

    “Why launch new journals at this point in PLOS’ journey?” 

    Niamh O’Connor, Chief Publishing Officer, PLOS

    “We’re a nonprofit, driven by our mission, and we need to adapt to continue to deliver on it. Even though Open Access is now widely adopted, and Open Science is advancing, there are still key voices missing. We are expanding our global footprint in locally responsible ways to get closer to researchers. Researchers have always driven our mission forward, and in order for us to have a meaningful impact we need to include the broadest range of their voices, globally. This way, we ensure the co-creation of paths to Open Science that work for diverse communities and do not simply extend existing power structures. These new journals create new and diverse communities of practice, and ensure that they are at the forefront of shaping how we address the most pressing health and environmental issues facing our society. 

    Additionally, all these journals will be underpinned by our existing, and new, institutional business models that move beyond the APC to ensure more equitable and regionally appropriate ways to support Open Access publishing. Our existing institutional models are our Community Action Publishing (CAP) and Flat Fee agreement models. We’ll talk more about our brand new models when the journals open for submissions, suffice to say that they are also not based on APCs. With all this said, author fee-based models will still be available for those authors who prefer or need them.”

    “What are the special characteristics of these new journals?”

    Dan Shanahan, Publishing Director, PLOS

    “As Niamh says, our next phase of work is not just about Open Access. These new journals not only complement and naturally extend the existing PLOS suite of journals, but will hardwire a lens of social responsibility into sharing research. Via these new journals we can work together to address the most pressing ‘Openness’ issue specific to each field, and enable the researchers addressing these challenges, everywhere, to have the broadest impact. 

    In full alignment with the proposed UNESCO Recommendation on Open Science, these titles will ensure diversity and equity of representation at all levels – editors, editorial boards, reviewers, authors – and will actively seek out research from under-represented communities. The journals will play a part in broader efforts to create a more equitable system of knowledge-sharing, accelerating and increasing the benefits of scientific endeavour for global society as a whole. 

    The new journals all focus on some of the world’s most globally-relevant issues, to which locally-relevant research can, and must, become more visible. This approach will directly challenge the unfortunate norm that most ‘global’ forums remain dominated by research from Western Europe and North America.”

    “How will these journals contribute to the increased adoption of Open Science practices?”

    Veronique Kiermer, Chief Scientific Officer, PLOS

    “PLOS has always led on key Open Science matters, we are experimenters, but we’re also striving to be better listeners. We know a rigid approach to Open Science won’t foster equitable participation from all communities. As a publisher, we’ve never been driven by tradition, but by a willingness to question the status quo and an eagerness to explore how we can understand and improve the system. We’ll continue to investigate and test new ways of sharing, assessing, and recognizing research. We’ll be partnering with leaders across research communities, and the Open Science communities, to enact change. Not every solution will be journal-shaped, but all our journals will be shaped by Open Science, enabling those who publish with us to join communities of practice. We can use our newly expanded journal portfolio to influence norms and advance Open Science practices in considered and appropriate ways. We stand firm on any policies like open data that promote rigor and advance trust in research, but we want to understand the specific challenges that such policies represent for new communities, and work with them to find solutions that empower them, in their contexts, to practice Open Science. Overall, we want to further empower new communities to join us and inform us, working more tightly together towards ever more trust in science.”

    “How does this expansion of the portfolio fit into PLOS’ wider aims for the future?”

    Alison Mudditt, Chief Executive Officer, PLOS

    “PLOS has grown, times have changed, and how we deliver our mission has to evolve. Science is a global, collaborative enterprise. Challenging times help us see where we are and where we need to be. Global collaboration, transparency, and trust in science (and policies…and interventions…)  have all become recurring themes. We’re also coming to terms with how we, as a society, need to do a lot more to address systemic barriers to inclusion. How we think of ourselves as an organization, especially a research communication organization, isn’t separate from everything else going on in the world today. We have always worked to raise the bar for Openness. To continue this work we need to continue to grow – but not just in the traditional business sense, and not just in counting the number of journals we publish: we also need to, and have concrete plans to, spread our roots deeper, create more global hubs for PLOS, absorb researchers’ local practices more fully into our business, and, as others commented before me, ensure that any journal we publish is informed by local communities at a global scale, challenging problems, and rebuilding the system better.”

    Watch these spaces!

    We have Editors in place, we have dedicated staff, and the journals will open for submissions a little later this year. All updated information will appear on the respective journal websites (linked from the list above) as the journals take shape. Visit them often for submission guidelines, how our editorial boards are developing, how to apply to be a member of the board, where to follow them on social media, etc.

    Please share this news on your preferred social media via the buttons above. If you would like to join the Facebook and LinkedIn groups “PLOS Open Science Champions” for early announcements of this type, please visit the groups and request to be added: LinkedIn; Facebook

    Thank you for reading, and thank you to all of you who have supported us since the launch of PLOS Biology eighteen years ago in 2003. PLOS, and Open Access itself, would not be this far along without you!



    We would like to thank everyone who supported us and provided input and insight for these new journals, including Jamie Bartram, University of Leeds, UK; Clarissa Brocklehurst, Water and Sanitation Specialist, Canada; Alexandros Gasparatos, University of Tokyo, Japan; Alex Godoy-Faúndez, Universidad del Desarrollo, Chile; Ashantha Goonetilleke, Queensland University of Technology, Australia; Suzanne Hulscher, University of Twente, the Netherlands; Lawrence E. Hunter, University of Colorado School of Medicine, USA; Christopher Jackson, Imperial College London, UK; Malte Meinshausen, University of Melbourne, Australia; Angus Morrison Saunders, Edith Cowan University, Australia; Lucila Ohno-Machado, University of California, San Diego, USA; Farhana Sultana, Syracuse University, USA;  among others, as well as the PLOS Scientific Advisory Council.

    The post To boldly grow: five new journals shaped by Open Science appeared first on The Official PLOS Blog.

    in The Official PLOS Blog on April 27, 2021 02:00 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    The MDAR Framework – a new tool for life sciences reporting

    PLOS note: We are delighted to share this announcement by the MDAR Working Group (Materials, Design, Analysis, Reporting) of a new framework for transparent reporting in the life sciences. PLOS has always put emphasis—through editorial policies and submission guidelines—on complete and transparent reporting to facilitate the analysis, trust, reproduction and reuse of the research we publish. We’ve supported and participated in the MDAR Working Group by sharing our experience and helping to develop and test the new reporting framework. We hope that the MDAR Framework will help bring consistency to journals reporting guidelines and make it easier for authors to adopt transparency norms. The MDAR Framework is consistent with what is practiced at PLOS and we will work with our editorial boards to explore the most effective ways to implement it.

    Incomplete or imprecise reporting of life sciences research contributes to challenges with reproducibility, replicability, and biomedical applications. For the last three years we – a group of journal editors and researchers – have been working together to develop a new framework for transparent reporting of life sciences research. This framework has just been published in PNAS.

    The MDAR Framework establishes the four domains – research Materials, Design, Analysis, and Reporting – in which we define both a set of basic minimum requirements, and best practice recommendations.

    We were motivated to develop the MDAR Framework as part of our own and others’ attempts to improve reporting to drive research improvement and ultimately greater trust in science. Existing tools, such as the ARRIVE guidelines, guidance from FAIRSharing, and the EQUATOR Network, speak to important sub-elements of biomedical research. This new MDAR Framework aims to be  more general and less deep, and therefore complements these important specialist guidelines.  

    Previous approaches have led to improved reporting, but often at considerable cost to both authors’ and editors’ time. A recent period of experimentation has resulted in a thorough but fragmented landscape of reporting guidelines for life science journals. A drive for efficiency  inspired us to learn from each other’s experiences and to harmonize the most effective practices. 

    The MDAR Framework provides flexibility along with broad applicability. The standard articulation of expectations across different journals will make it easier for: (i) authors to better understand what is expected of them, and (ii) for more journals to adopt an established approach rather than develop it from scratch. Journals can choose a level of implementation appropriate to their needs, enabling greater adoption potential. 

    We also hope that the MDAR Framework will be helpful for other organizations such as funders, who can signal reporting expectations early and therefore have an effect at the time the studies are designed, and tool/software developers, who can devise means of facilitating compliance for authors and journals. 

    Alongside the framework, the project provides a checklist (for authors, journals or reviewers) as an optional implementation tool, and an explanation and elaboration document. The checklist was piloted on over 289 manuscript submissions across 13 journals, seeking feedback from authors and editors actually using the checklist. Our team analysed agreement between observers, sought feedback from outside experts, and revised the framework in the light of this experience. 

    The full set of MDAR resources will be maintained and updated as a community resource, in a Collection on the Open Science Framework. 

    We are sharing this update on the MDAR Framework through coordinated posts on working group member platforms. Working group members have been free to add any additional context as appropriate.

    On behalf of the MDAR working group:

    • Andy Collings (eLife)
    • Chris Graf (Wiley)
    • Veronique Kiermer (PLOS; vkiermer@plos.org)
    • David Mellor (Center for Open Science)
    • Malcolm Macleod (University of Edinburgh)
    • Sowmya Swaminathan (Nature Portfolio/Springer Nature; s.swaminathan@us.nature.com)
    • Deborah Sweet (Cell Press/Elsevier)
    • Valda Vinson (Science/AAAS)

    The post The MDAR Framework – a new tool for life sciences reporting appeared first on The Official PLOS Blog.

    in The Official PLOS Blog on April 26, 2021 02:41 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Spandidos and the Paper Mill

    Papermills run by Chinese universities and funding a notorious Greek publisher. Smut Clyde tells it all!

    in For Better Science on April 26, 2021 09:39 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Next Open NeuroFedora meeting: 26 April 1300 UTC

    Photo by William White on Unsplash

    Photo by William White on Unsplash.

    Please join us at the next regular Open NeuroFedora team meeting on Monday 26 April at 1300 UTC in #fedora-neuro on IRC (Freenode). The meeting is a public meeting, and open for everyone to attend. You can join us over:

    You can use this link to convert the meeting time to your local time. Or, you can also use this command in the terminal:

    $ date --date='TZ="UTC" 1300 today'

    The meeting will be chaired by @ankursinha. The agenda for the meeting is:

    We hope to see you there!

    in NeuroFedora blog on April 26, 2021 08:37 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Hoarders and Collectors

    Andy Warhol's collection of dental models

    Pop artist Andy Warhol excelled in turning the everyday and the mundane into art. During the last 13 years of his life, Warhol put thousands of collected objects into 610 cardboard boxes. These Time Capsules were never sold as art, but they were meticulously cataloged by museum archivists and displayed in a major exhibition at the Andy Warhol Museum. “Warhol was a packrat. But that desire to collect helped inform his artistic point of view.” Yet Warhol was aware of his compulsion, and it disturbed him: “I'm so sick of the way I live, of all this junk, and always dragging home more.”

    Where does the hobby of collection cross over into hoarding, and who makes this determination? 

    Artists get an automatic pass into the realm of collectionism, no matter their level of compulsion. The Vancouver Art Gallery held a major exhibition of the works of Canadian writer and artist Douglas Coupland in 2014. One of the sections consisted of a room filled with 5,000 objects collected over 20 years and carefully arranged in a masterwork called The Brain. Here's what the collection looked like prior to assembly.

    Materials used in the The Brain, 2000–2014, mixed-media installation with readymade objects. Courtesy of the Artist and Daniel Faria Gallery. Photo: Trevor Mills, Vancouver Art Gallery.

    Hoarding, on the other hand, lacks the artistic intent or deliberate organization of collection. Collectors may be passionate, but their obsessions/compulsions do not hinder their everyday function (or personal safety). According to Halperin and Glick (2003):
    “Characteristically, collectors organize their collections, which while extensive, do not make their homes dysfunctional or otherwise unlivable. They see their collections as adding a new dimension to their lives in terms of providing an area of beauty or historical continuity that might otherwise be lacking.”
    The differential diagnosis for the DSM-5 classification of Hoarding Disorder vs. non-pathological Collecting considers order and value of primary importance.

    Fig. 2 (Nakao & Kanba, 2019).
    If possessions are well organized and have a specific value, the owner is defined as a ‘collector.’ Medical conditions that cause secondary hoarding are excluded from Hoarding Disorder. The existence of comorbidities such as obsessive-compulsive disorder (OCD), autism spectrum disorder (ASD), and attention deficit hyperactivity disorder (ADHD) must be excluded as well.

    I've held onto the wish of writing about this topic for the last eight months...

    ...because of the time I spent sorting through my mother's possessions between July 2020 and November 2020 after she died on July 4th. This process entailed flying across the country five times in a total of 20 different planes in the midst of a pandemic.
    Although my mother showed some elements of  hoarding, she didn't meet clinical criteria. She had various collections of objects (e.g., glass shoes, decorator plates, snuff bottles, and ceremonial masks), but what really stood out were her accumulations — organized but excessive stockpiles of useful items such as flashlights, slippers, sweatshirts, kitchen towels, and watches (although most of the latter were no longer useful).

    Ten pairs of unworn gardening gloves

    During the year+ of COVID sheltering-in-place, some people wrote books, published papers, started nonprofits, engaged in fundraising, held Zoom benefit events, demonstrated for BLM, home-schooled their kids, taught classes, cared for sick household members, mourned the loss of their elder relatives, or endured COVID-19 themselves.
    I dealt with the loss of a parent, along with the solo task of emptying 51 years of accumulated belongings from her home. To cope with this sad and lonely and emotionally grueling task, I took photos of my mother's accumulations and collections. It became a mini-obsession unto itself. I tried to make sense of my mother's motivations, but the trauma of her suffering and the specter of an unresolved childhood were too overwhelming. Besides, there's no computational model to explain the differences between Collectors, Accumulators and Hoarders.

    Additional Reading

    Compulsive Collecting of Toy Bullets

    Compulsive Collecting of Televisions

    The Neural Correlates of Compulsive Hoarding

    Welcome to Douglas Coupland's Brain


    Halperin DA, Glick J. (2003). Collectors, accumulators, hoarders, and hoarding perspectives. Addictive Disorders & Their Treatment 2(2):47-51.

    Nakao T, Kanba S. (2019). Pathophysiology and treatment of hoarding disorder. Psychiatry Clin Neurosci. 73(7):370-375. doi:10.1111/pcn.12853

    in The Neurocritic on April 26, 2021 06:08 AM.


    How the Brain Works

    Every now and then, it's refreshing to remember how little we know about “how the brain works.” I put that phrase in quotes because the search for the Holy Grail of [spike trains, network generative models, manipulated neural circuit function, My Own Private Connectome, predictive coding, the free energy principle (PDF), or a computer simulation of the human brain promised by the Blue Brain Project] that will “explain” how “The Brain” works is a quixotic quest. It's a misguided effort when the goal is framed so simplistically (or monolithically).

    First of all, whose brain are we trying to explain? Yours? Mine? The brain of a monkey, mouse, marsupial, monotreme, mosquito, or mollusk? Or C. elegans with its 302 neurons? “Yeah yeah, we get the point,” you say, “stop being so sarcastic and cynical. We're searching for core principles, first principles.”

    In response to that tweet, definitions of “core principle” included:

    • Basically: a formal account of why brains encode information and control behaviour in the way that they do.
    • Fundamental theories on the underlying mechanisms of behavior. 
      • [Maybe “first principles” would be better?]
    • Set of rules by which neurons work?


    Let's return to the problem of explanation. What are we trying to explain? Behavior, of course [a very specific behavior most of the time]: X behavior in your model organism. But we also want to explain thought, memory, perception, emotion, neurological disorders, mental illnesses, etc. Seems daunting now, eh? Can the same core principles account for all these phenomena across species? I'll step out on a limb here and say NO, then snort at myself for asking such an unfair question. Best that your research program is broken down into tiny reductionistic chunks. More manageable that way.

    But what counts as an “explanation”? We haven't answered that yet. It depends on your goal and your preferred level of analysis (à la three levels of David Marr):

    computation – algorithm – implementation



    Again, what counts as “explanation”? A concise answer was given by Lila Davachi during a talk in 2019, when we all still met in person for conferences:

    “Explanations describe (causal) relationships between phenomena at different levels.”

    from Dr. Lila Davachi (CNS meeting, 2019)
    The Relation Between Psychology and Neuroscience
    (see video, also embedded below)

    UPDATE April 25, 2021: EXPLANATION IS IMPOSSIBLE, according to Rich, de Haan, Wareham, and van Rooij (2021), because "the inference problem is intractable, or even uncomputable":
    "... even if all uncertainty is removed from scientific inference problems, there are further principled barriers to deriving explanations, resulting from the computational complexity of the inference problems."

    Did I say this was a “refreshing” exercise? I meant depressing... but I'm usually a pessimist. (This has grown worse as I've gotten older and been in the field longer.)  
    Are there reasons for optimism?

    You can follow the replies here, and additional replies to this question in another thread starting here.

    I'd say the Neuromatch movement (instigated by computational neuroscientists Konrad Kording and Dan Goodman) is definitely a reason for optimism!

    Further Reading

    The Big Ideas in Cognitive Neuroscience, Explained (2017)

    ... The end goal of a Marr-ian research program is to find explanations, to reach an understanding of brain-behavior relations. This requires a detailed specification of the computational problem (i.e., behavior) to uncover the algorithms. The correlational approach of cognitive neuroscience and even the causal-mechanistic circuit manipulations of optogenetic neuroscience just don't cut it anymore.

    An epidemic of "Necessary and Sufficient" neurons (2018)

    A miniaturized holy grail of neuroscience is discovering that activation or inhibition of a specific population of neurons (e.g., prefrontal parvalbumin interneurons) or neural circuit (e.g., basolateral amygdala → nucleus accumbens) is “necessary and sufficient” (N&S) to produce a given behavior.

    Big Theory, Big Data, and Big Worries in Cognitive Neuroscience (from CNS meeting, 2018)
    Dr. Eve Marder ... posed the greatest challenges to the field of cognitive neuroscience, objections that went mostly unaddressed by the other speakers.  [paraphrased below]:
    • How much ambiguity can you live with in your attempt to understand the brain? For me, I get uncomfortable with anything more than 100 neurons
    • If you're looking for optimization (in [biological] neural networks), YOU ARE DELUSIONAL!
    • Degenerate mechanisms produce the same changes in behavior, even in a 5 neuron network...
    • ...so Cognitive Neuroscientists should be VERY WORRIED



    The Neuromatch Revolution (2020)

    “A conference made for the whole neuroscience community”


    An Amicable Discussion About Psychology and Neuroscience (from CNS meeting, 2019)

    • the conceptual basis of cognitive neuroscience shouldn't be correlation
    • but what if the psychological and the biological are categorically dissimilar??

    ...and more!

    The video below is set to begin with Dr. Davachi, but the entire symposium is included.

    in The Neurocritic on April 25, 2021 07:38 PM.


    Characterization of an open access medical news platform readership during the COVID-19 pandemic

    Abstract: Background: There now exist many alternatives to direct journal access, such as podcasts, blogs, and news sites, for physicians and the general public to stay up to date with the medical literature. Currently, however, there is a scarcity of literature investigating the readership characteristics of open access medical news sites and how they may have shifted with coronavirus disease 2019 (COVID-19). Objective: The current study aimed to employ readership and survey data to characterize open access medical news readership trends in relation to COVID-19, in addition to overall readership trends regarding pandemic-related information delivery. Methods: Anonymous aggregate readership data were obtained from 2 Minute Medicine® (www.2minutemedicine.com), an open-access, physician-run medical news organization that has published over 8000 original physician-written text and visual summaries of new medical research since 2013. In this retrospective observational study, the average article views, actions (defined as the sum of views, shares, and outbound link clicks), read times, and bounce rate (probability of leaving a page in <30 s) were compared between COVID-19 articles published between January 1 and May 31, 2020 (N = 40) and non-COVID-19 articles (N = 145) published in the same period. A voluntary survey was also sent to subscribed 2 Minute Medicine readers to further characterize readership demographics and preferences, scored by Likert scale. Results: COVID-19 articles had significantly more median views than non-COVID-19 articles (296 vs. 110, U = 748.5, P < 0.001). There were no differences in average read times or bounce rate. Non-COVID-19 articles had more median actions than COVID-19 articles (2.9 vs. 2.5, U = 2070.5, P < 0.05). On a Likert scale of 1 (Strongly Disagree) to 5 (Strongly Agree), survey data revealed that 66% (78/119) of readers Agreed or Strongly Agreed that they preferred staying up to date with emerging literature surrounding COVID-19 using sources such as 2 Minute Medicine versus direct journal access. A greater proportion of survey takers also indicated open access news sources to be one of their primary means of staying informed (71.7%) than direct journal article access (50.8%). A smaller proportion of readers indicated reading one or fewer full-length medical studies after their introduction to 2 Minute Medicine compared to before (16.9% vs. 31.8%, P < 0.05). Conclusions: There was significantly increased readership in one open-access medical literature platform during the pandemic, reinforcing that open-access physician-written sources of medical news represent an important alternative to direct journal access for readers staying up to date with the medical literature.
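    The view-count comparisons above use the Mann-Whitney U test. As a minimal illustration of what that statistic measures, here is a pure-Python sketch with invented view counts (not the study's data):

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for sample a relative to sample b:
    the number of pairs (x, y) with x > y, counting ties as 1/2."""
    return sum((x > y) + 0.5 * (x == y) for x in a for y in b)

# Invented daily view counts for two small sets of articles
covid_views = [296, 310, 280]
other_views = [110, 150, 90]

print(mann_whitney_u(covid_views, other_views))  # 9.0: complete separation
```

    In practice one would use a library routine (e.g. scipy.stats.mannwhitneyu), which also supplies the P value.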

    in Open Access Tracking Project: news on April 25, 2021 09:40 AM.


    Highlights of the BMC Series – March 2021

    BMC Chemistry: Why do those bagels smell so good?

    Have you ever walked past a baker’s shop and immediately felt hungry thanks to the lovely aroma from freshly baked bread?  Smell plays an integral part in how we taste our food; if it smells good, then chances are, it’s going to taste good too. Here, Lasekan et al. looked into how the aroma of the popular bagel changed due to differences in cold fermentation time.

    • Dough A (control) was allowed to develop for an hour before cooking.
    • Dough B was also allowed to develop for an hour initially but was then kneaded and placed in a chiller for 48 hours before cooking.
    • Dough C was treated similarly to Dough B, but the chilling time was reduced to 24 hours.

    Key aroma-active compounds were collected via solvent assisted flavor evaporation and then characterized by a process called aroma extract dilution analysis.

    Results showed all bagels had similar roasty, malty, buttery, baked potato-like, smoky and biscuit-like notes. However, the odor notes in the long, cold fermented bagels were more intense than those produced by the control bagels.  These findings provide a basis for more research into the effect of cold fermentation on bakery products in the future.


    BMC Pediatrics: Vaccinated mother gives birth to baby with SARS-CoV-2 antibodies

    It is well established that newborn babies can be protected from a number of potentially fatal viruses, such as tetanus or diphtheria, through the vaccination of their mother during pregnancy; the antibodies are passed from mother to fetus through the placenta.

    With vaccine rollouts now underway in many countries, it would be hoped that a similar protection against coronavirus disease 2019 (COVID-19) would occur for babies born to mothers vaccinated against the virus responsible for the disease (severe acute respiratory syndrome coronavirus 2, or SARS-CoV-2).

    This has indeed been shown to be the case, as demonstrated by an exciting new case study presented by pediatricians at Florida Atlantic University. The baby, a healthy girl, is the first infant known to have been born to a vaccinated mother with SARS-CoV-2 IgG antibodies detected in her cord blood.


    BMC Health Services Research:  The influence of politics on healthcare systems

    In an age where our healthcare systems are under intense pressure, it is important to look at the services offered and implement change when needed.  Clarke and colleagues from the University of Birmingham recently performed a systematic review in order to discover how the use of “political skill” can contribute to changes in health services from both within and across organizations.

    The review involved the analysis of 62 papers published over the last 40 years and from diverse areas of research.   The findings pointed towards political skill certainly having an impact, with changes implemented via five “thematic dimensions” which include performance, awareness, influence, stakeholder engagement and influence on policy processes.


    BMC Psychology: Does walking through a doorway always make you forget?

    Hands up everyone who has walked into a room and then forgotten why you were there. Interestingly, this is a well-documented phenomenon called the doorway effect, which literally refers to how memory can be affected by passing through a doorway or other boundary.  Research into this effect has demonstrated that people can forget items of recent significance when they pass through a physical boundary (such as walking from one room to another), imagining that they have done so (i.e. metaphysical boundary) or even when moving from one desktop window to another on a computer.

    In a study recently published in BMC Psychology, a team of researchers from the Queensland Brain Institute (University of Queensland) attempted to replicate the doorway effect with the use of both virtual and physical environments. The team ran four experiments that measured participants’ hit and false alarm rates to memory probes for items placed in either the same or the previous room. Two experiments used highly immersive virtual reality, one used passive video watching, and the fourth involved physically moving from one room to another.
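    Hit and false-alarm rates like these are commonly collapsed into a single sensitivity index such as d′; a small stdlib-only sketch (the rates below are invented, not the study's numbers):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Invented probe rates for items in the same vs. the previous room;
# a null doorway effect predicts similar sensitivity in both conditions
print(round(d_prime(0.85, 0.20), 2))  # same room
print(round(d_prime(0.82, 0.22), 2))  # previous room
```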

    The results of these experiments showed that there was, in fact, no significant effect of doorways on a person’s memory. These findings cast doubt on how common the doorway effect actually is, and also on the findings of previous studies looking into the phenomenon.


    BMC Surgery – Call for papers: Collection on Robotic surgery

    In order to recognize the significant growth and advancement of robotic surgery over the past few decades, BMC Surgery is welcoming original submissions on robotic surgery. The collection will be edited by an international team of Guest Editors. The collection is open for submissions until 1st April 2022.

    The post Highlights of the BMC Series – March 2021 appeared first on BMC Series blog.

    in BMC Series blog on April 23, 2021 11:03 AM.


    Schneider Shorts 23.04.2021

    Schneider Shorts 23 April 2021: exciting COVID-19 clinical trials, blood tests for depression, photoshopped plant science, more tea with Professor Seeberger, chocolate diets, amazing cancer cures, and an Italian mystery troll obsessed with me.

    in For Better Science on April 23, 2021 05:30 AM.


    WikiLala, 'Google' of Ottoman-Turkish documents, launches in full | Daily Sabah

    The online digital library project, “WikiLala,” which brings together and aims to digitize all the printed texts from the Ottoman Empire since the introduction of the printing press, has recently launched a full version of its website which had been in beta for a while. Since its launch, the website has attracted more than 200,000 visitors from 107 countries.

    in Open Access Tracking Project: news on April 22, 2021 04:54 PM.


    Bursting Neurons Signal Input Slope

    In this week's Journal Club session, Volker Steuber will talk about the paper "Bursting Neurons Signal Input Slope".

    Brief bursts of high-frequency action potentials represent a common firing mode of pyramidal neurons, and there are indications that they represent a special neural code. It is therefore of interest to determine whether there are particular spatial and temporal features of neuronal inputs that trigger bursts. Recent work on pyramidal cells indicates that bursts can be initiated by a specific spatial arrangement of inputs in which there is coincident proximal and distal dendritic excitation (Larkum et al., 1999). Here we have used a computational model of an important class of bursting neurons to investigate whether there are special temporal features of inputs that trigger bursts. We find that when a model pyramidal neuron receives sinusoidally or randomly varying inputs, bursts occur preferentially on the positive slope of the input signal. We further find that the number of spikes per burst can signal the magnitude of the slope in a graded manner. We show how these computations can be understood in terms of the biophysical mechanism of burst generation. There are several examples in the literature suggesting that bursts indeed occur preferentially on positive slopes (Guido et al., 1992; Gabbiani et al., 1996). Our results suggest that this selectivity could be a simple consequence of the biophysics of burst generation. Our observations also raise the possibility that neurons use a burst duration code useful for rapid information transmission. This possibility could be further examined experimentally by looking for correlations between burst duration and stimulus variables.
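    The paper's central claim – bursts occur on positive input slopes, with spike count grading the slope – can be cartooned in a few lines (the gain and threshold here are arbitrary illustrations, not model parameters from the paper):

```python
import math

def spikes_per_burst(slope, gain=4.0):
    """Cartoon slope coder: no burst on flat or falling input,
    graded spike counts for steeper rising input."""
    return math.ceil(gain * slope) if slope > 0 else 0

# Sinusoidal input: its slope is cos(t), so bursts should
# appear only on the rising phase of the signal
for t in [0.0, 1.0, 2.0, 3.0, 4.0]:
    print(t, spikes_per_burst(math.cos(t)))
```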


    Date: 2021/04/23
    Time: 14:00
    Location: online

    in UH Biocomputation group on April 21, 2021 10:37 AM.


    CRL and East View Release Open Access Imperial Russian Newspapers | CRL

    "CRL and East View Information Services have opened the first release of content for Imperial Russian Newspapers, the fourth Open Access collection of titles digitized under the Global Press Archive (GPA) CRL Charter Alliance. This collection adds to the growing body of Open Access material available in the Global Press Archive by virtue of support from CRL members and other participating institutions. The Imperial Russian Newspapers collection, with a preliminary release of 230,000 pages, spans the eighteenth through early twentieth centuries and will include core titles from Moscow and St. Petersburg as well as regional newspapers across the vast Russian Empire. Central and regional “gubernskie vedomosti” will be complemented by a selection of private newspapers emerging after the Crimean War in 1855, a number of which grew to be influential...."

    in Open Access Tracking Project: news on April 21, 2021 10:21 AM.


    Editor’s tips for passing journal checks

    You’ve painstakingly mapped out your research goal: to answer that unanswered question. You’ve conducted your experiments, analyzed the results and written your paper. Now it’s off to a journal. And the process begins. PLOS editors have seen it all and want to help get your paper published as quickly as possible.

    What does the journal office look for, and what are the potential pitfalls? More importantly, how can you ensure that your manuscript passes journal checks and moves on to peer review quickly? Here, PLOS staff discuss a few of the most common reasons why a manuscript is rejected during the initial technical check, and how to avoid them.

    For a bit of background, after a manuscript is submitted to a scientific journal it undergoes a series of technical and ethical checks. Submissions that pass this initial screening go on to editorial assessment and peer review. Submissions that don’t meet requirements, or don’t provide enough information, may be returned to the authors for clarification. This can extend review times and even lead to a manuscript being rejected without review. Below are 5 checks and tips on how to smoothly get past them.

    Check #1: Sense check

    Quite simply, does the manuscript make sense as a submission? Is it a scientific article? Are all the typical parts of an article (abstract, introduction, methods, results/discussion, figures, citations) present? Is the language clear and understandable?

    How to pass it: Make sure that your manuscript is complete, and that the writing is clear and unambiguous. Note that it doesn’t have to be perfect at this stage, just precise enough for fellow researchers in your field to understand and evaluate your work.

    Read PLOS’ guide to editing your work

    Check #2: Journal fit and scope

    Journals tend to specialize in particular subjects and types of studies. “The biggest reason we reject without review is scope,” explains Kendall McKenzie, Managing Editor of PLOS Neglected Tropical Diseases. “Our scope page breaks down the diseases and categories of research we’re interested in, and even specifically states the kinds of things we don’t consider.”

    How to pass it: It comes down to submitting the right manuscript to the right publication. Carefully investigate the journal’s scope before submitting to ensure that your manuscript has a good chance of publication. If your particular article is on the edge of the journal’s expressed scope, or if you’re just not sure, search the journal for similar articles; if there are no comparable publications, your study is likely out of scope.

    Read PLOS’ guide to choosing a journal

    Check #3: Acceptance criteria

    Laura Simmons, Managing Editor of PLOS Genetics, agrees: “In addition to scope, our Editors in Chief and Section Editors may reject without review if a submission is lacking in biological or mechanistic insight (i.e. if it is too descriptive), or if the research doesn’t represent a significant advance in the field.”

    How to pass it: This one is all about doing your research. Different journals have different criteria for publication. Consult the journal website and consider whether your study fulfills the requirements and mission of the journal. Does the journal publish the type of research your study describes? Will your article appeal to the readers the journal serves? If not, consider a more specialized publication that focuses specifically on the type of research you are conducting, or, alternatively, a journal with a broader, more inclusive scope.

    Check #4: Plagiarism

    Most journals run an automated check that looks for similarities between your manuscript and previously published works. If the manuscript scores above a certain threshold, members of the journal staff will take a closer look at your manuscript to ensure that any direct quotes are framed within quotation marks and properly cited. “Overall the most common issue we see is authors reusing their own methods section, introduction, or conclusion from previous or related studies,” explains PLOS ONE Publishing Editor Emma Stillings. Authors don’t always realize that “you have to cite everyone, even yourself, to avoid any delay in the peer review process.”

    How to pass it: Any direct quotes must be framed within quotation marks and properly attributed. That includes your own prior works. Try to avoid reusing text, and especially copy-pasting from your other papers. Check to make sure that any summaries or allusions are properly cited as well.
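    For intuition, similarity checkers of this kind roughly compare overlapping word n-grams between a submission and prior text; a toy illustration (not any journal's actual algorithm):

```python
def shingles(text, n=3):
    """All overlapping n-word sequences in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a, b, n=3):
    """Jaccard overlap of word n-grams between two texts."""
    sa, sb = shingles(a, n), shingles(b, n)
    return len(sa & sb) / len(sa | sb) if (sa | sb) else 0.0

prior = "cells were cultured at 37 degrees in standard medium"
reused = "cells were cultured at 37 degrees in modified medium"
print(round(similarity(prior, reused), 2))  # 0.56: flagged for review
```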

    Check #5: Complete and consistent ethical, funding, data, and other statements

    If the statements in the submission form are unclear, lacking detail, or otherwise incomplete, the process will pause while the journal office contacts the authors for more information. Similarly, if the statements within the manuscript are different from those in the submission system, the journal office will work with the authors to reconcile them before the manuscript can advance.

    How to pass it: Label and save the paperwork from the early part of your research process: funding information, committee approval documents, permits, permission forms, patient disclosure statements, study designs, and any other materials. You may need them to complete your submission form. When you are ready to submit, proofread carefully to ensure that everything in your manuscript is up-to-date and clear. Double check to make sure that any placeholder text has been replaced with the final version.

    Read PLOS’ guide to scientific ethics & preparing data

    Final words of wisdom

    “It’s so important to familiarize yourself with a journal before submitting. What’s the scope of the journal? What article types do they publish? Are you adhering to the guidelines for that particular article type? Making sure you’re informed about what type of work the journal publishes, and how, can go a long way in deciding where to submit and speeding your manuscript through the initial submission stages.” – Eileen Clancy, Managing Editor of PLOS Pathogens

    The post Editor’s tips for passing journal checks appeared first on The Official PLOS Blog.

    in The Official PLOS Blog on April 20, 2021 02:53 PM.


    Center for Research Libraries (CRL) and East View Release Open Access Imperial Russian Newspapers | LJ infoDOCKET

    CRL and East View Information Services have opened the first release of content for Imperial Russian Newspapers, the fourth Open Access collection of titles digitized under the Global Press Archive (GPA) CRL Charter Alliance. This collection adds to the growing body of Open Access material available in the Global Press Archive by virtue of support from CRL members and other participating institutions.

    in Open Access Tracking Project: news on April 20, 2021 12:15 PM.


    Champions of Discovery

    Researchers tell their stories

    in Elsevier Connect on April 20, 2021 12:00 AM.


    2-Minute Neuroscience: Chronic Traumatic Encephalopathy (CTE)

    Chronic traumatic encephalopathy, or CTE, is a neurological condition linked primarily to repetitive head trauma. In this video, I discuss what happens in the brain during CTE.

    in Neuroscientifically Challenged on April 18, 2021 12:16 PM.


    Is early vision like a convolutional neural net?

    Early convolutional neural net (CNN) architectures like the Neocognitron, LeNet and HMAX were inspired by the brain. But how much like the brain are modern CNNs? I made a pretty strong claim on Twitter a few weeks ago that early visual processing is nothing like a CNN:

    In typical Twitter fashion, my statement was a little over-the-top, and it deserved a takedown. What followed was a long thread of replies from Blake Richards, Grace Lindsay, Dileep George, Simon Kornblith, members of Jim DiCarlo’s lab, and even Yann LeCun. The different perspectives are interesting, and I’m curating them here.

    My position, in brief, is that weight sharing and feature maps – the defining characteristics of modern CNNs – are crude approximations of the topography of the early visual cortex and its localized connection structure. Whether this distinction matters in practice depends on what you want your models to accomplish – I argue that for naturalistic vision, it can matter a lot.


    There are several defining characteristics of modern CNNs:

    • Hierarchical processing
    • Selectivity operations (i.e. the ReLU)
    • Pooling operations
    • Localized receptive fields
    • Feature maps
    • Weight sharing
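    Most of these ingredients fit in a few lines of numpy; a minimal sketch of one convolution → ReLU → pooling stage (illustrative only, not any particular architecture):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D correlation-style convolution: the same kernel is
    applied at every position (weight sharing, localized RFs)."""
    kh, kw = kernel.shape
    h, w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Selectivity operation."""
    return np.maximum(x, 0)

def max_pool(x, size=2):
    """Pooling operation: local tolerance to position."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

image = np.random.rand(8, 8)
edge = np.array([[1.0, -1.0]])        # one tiny feature map's kernel
fmap = max_pool(relu(conv2d(image, edge)))
print(fmap.shape)  # (4, 3)
```

    Stacking such stages gives the hierarchy; running several kernels in parallel gives the feature maps.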

    Where did these ideas come from?

    Hubel and Wiesel (1962) described simple and complex cells in the cat primary visual cortex. Simple cells respond best to a correctly oriented bar or edge. Complex cells, on the other hand, are not sensitive to the sign of the contrast of the bar or edge, or its exact location. They hypothesized that simple cells are generated by aggregating the correct LGN afferents selective for light and dark spots, followed by the threshold nonlinearity of the cells. Complex cells were hypothesized to be generated by pooling from similarly tuned simple cells. They later expanded these ideas: perhaps first-order and second-order hypercomplex cells are created by the repetition of the same pattern. Thus, hierarchy, selectivity, pooling and localized receptive fields were all hypothesized to be taking place.
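    That two-stage scheme maps directly onto a threshold stage and a pooling stage; a self-contained 1-D toy version (not a fit to any data):

```python
def simple_cell(signal, filt, pos):
    """Simple cell: thresholded match to an edge-like filter at one
    position; sensitive to contrast sign and exact location."""
    resp = sum(s * f for s, f in zip(signal[pos:pos + len(filt)], filt))
    return max(resp, 0.0)  # threshold nonlinearity

def complex_cell(signal, filt, positions):
    """Complex cell: pools similarly tuned simple cells over nearby
    positions, keeping selectivity but tolerating small shifts."""
    return max(simple_cell(signal, filt, p) for p in positions)

edge = [1.0, -1.0]                    # prefers a light-to-dark step
stimulus = [0.0, 0.0, 1.0, 0.0, 0.0, 0.0]
print([simple_cell(stimulus, edge, p) for p in range(5)])  # peak at p = 2
print(complex_cell(stimulus, edge, range(5)))              # 1.0
```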

    Annotated figures from Hubel & Wiesel (1962)

    Fukushima (1980) adapted many of these ideas in a self-organized hierarchical neural network called the Neocognitron. In the S-layers, neurons weight the input from previous layers via localized receptive fields, followed by a ReLU nonlinearity. In C-layers, inputs from the previous layer are pooled linearly, followed by a sigmoid.

    From Fukushima (1980)

    The Neocognitron also introduces what may be the defining features of modern CNNs: parallel feature maps (here called planes) and weight sharing. This is most clearly seen in the selectivity operation for S-layers, which is a convolution over the set of local coordinates S_l, followed by a ReLU:

    From Fukushima (1980)

    The reason behind the introduction of parallel feature maps and weight sharing is not very clear in the original paper, and in fact in section 3 the paper casts doubt on how realistic these assumptions are as a model of vision:

    One of the basic hypotheses employed in the neocognitron is the assumption that all the S-cells in the same S-plane have input synapses of the same spatial distribution, and that only the positions of the presynaptic cells shift in parallel in accordance with the shift in position of individual S-cell’s receptive fields. It is not known whether modifiable synapses in the real nervous system are actually self-organized always keeping such conditions. Even if it is assumed to be true, neither do we know by what mechanism such a self-organization goes on.

    So why introduce these features at all?

    Topography conquers all

    Fukushima, however, links localized receptive fields with the idea of localized topography:

    […] orderly synaptic connections are formed between retina and optic tectum not only in the development in the embryo but also in regeneration in the adult amphibian or fish

    This is one of the enduring facts of visual neuroscience: inputs from the retina are preserved topologically as they make their way to the LGN, V1, and onwards to extrastriate cortex. Visual angle and log radius are mapped from the retina to V1 in development, guided by chemical gradients. The progression of the maps reverses as we go up the visual hierarchy (in fact, areas are defined by this very reversal in direction). What’s remarkable is how precise the topography is: Hubel and Wiesel estimated that in primary visual cortex, neighbouring cells varied in receptive field center by less than half a receptive field.
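    The eccentricity part of this map is commonly approximated as logarithmic; a toy sketch of such a log-polar mapping (the constant `a` is an arbitrary placeholder, not a measured cortical parameter):

```python
import math

def retina_to_v1(x, y, a=0.5):
    """Log-polar approximation of retinotopy: polar angle is
    preserved, eccentricity is compressed logarithmically."""
    r = math.hypot(x, y)
    theta = math.atan2(y, x)
    return math.log(r + a), theta

# Each doubling of eccentricity adds at most a constant cortical
# distance (cortical magnification favors the fovea)
for ecc in (1.0, 2.0, 4.0, 8.0):
    u, _ = retina_to_v1(ecc, 0.0)
    print(round(u, 2))
```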

    From Ringach (2004)

    There’s another fact of life in cortex, which is that horizontal wiring is expensive. If a neuron integrates from localized inputs, and those inputs are retinotopically organized, that neuron will have a localized receptive field. If you assume that the synaptic pattern is localized and random, then for every cell there must be other similarly tuned cells somewhere else in the visual field. My postdoc advisor Dario Ringach introduced a model (2004) for how simple cells in primary visual cortex can arise from such sparse localized connections.
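    A cartoon of that argument: draw afferents at random under a localized spatial envelope, and the resulting receptive field is automatically localized, with its tuning set by the random draw (a caricature of the idea, not Ringach's actual model):

```python
import numpy as np

rng = np.random.default_rng(0)
size = 21
yy, xx = np.mgrid[:size, :size]
c = size // 2

# Localized wiring envelope: connection probability falls off
# with retinotopic distance from the cell's position
envelope = np.exp(-((xx - c) ** 2 + (yy - c) ** 2) / (2 * 3.0 ** 2))

# Sparse random draw of ON (+1) and OFF (-1) afferents
connected = rng.random((size, size)) < 0.2 * envelope
sign = rng.choice([-1.0, 1.0], size=(size, size))
rf = connected * sign * envelope      # the resulting receptive field

print(int(connected.sum()), "afferents; RF confined near the center")
```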

    That covers feature maps. However, if the input statistics to primary visual cortex are stationary, then a self-supervised criterion could refine a randomly initialized V1 feature map, effectively letting the input do the weight tying – a subtle point brought up by Blake Richards.

    Non-stationarities at the large scale

    I hope to have convinced you that the link between CNNs, first introduced by Fukushima, and early visual processing in the brain is quite subtle. Whether a CNN is a good or a bad model of early visual processing depends on what phenomenon we care to model. If we’re talking about core visual recognition in the parafovea – which is what the majority of the quantification comes from – the match is quite good. At larger spatial scales, however, I would argue that the match is compromised by two facts.

    First, spatial resolution is not constant as a function of space. A CNN on a Cartesian grid doesn’t scale with eccentricity. This issue has received some attention lately, and foveating CNNs are starting to be considered – see this recent paper from Arturo Deza and Talia Konkle for example.

    Spatial frequency tuning correlates with spatial tuning. From Arcaro and Livingstone (2017)

    Secondly, the eyes foveate towards interesting targets – e.g. faces – which means that image statistics are highly nonstationary as a function of space. The ground looks very different from the sky. Hands tend to be in the lower half of the visual field, and numerous visual areas have only partial maps of the visual field. There’s an overrepresentation of radial orientations, more curvature selectivity in the fovea, no blue cones in the fovea, etc.

    Here’s a picture to illustrate this. If VGG19 is a good model of primary visual cortex, then it follows that maximizing predicted activity in primary visual cortex could be done by maximizing an unweighted sum of VGG19 subunits, a la DeepDream. If you do that for layer 7 of VGG19, you get the picture on the left. However, we can use fMRI to estimate a matrix, via ridge regression, that maps VGG19 to an fMRI subspace, and then optimize a weighted sum of VGG19 subunits, giving us the picture on the right. This image highlights some of the known spatial biases in primary visual cortex – shifts in preferred spatial frequency as a function of eccentricity, radial bias, curvature bias in the fovea – which are not apparent in an unweighted VGG19. So at the very least, the brain’s image manifold is rotated and rescaled compared to the VGG19 image manifold.
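    The weighted-vs-unweighted distinction can be sketched with a toy linear stand-in for VGG19 (the matrices below are random placeholders, not fitted fMRI weights): DeepDream-style gradient ascent on an unweighted sum of subunits and on a readout-weighted sum of the same subunits prefers different images.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(50, 64))   # toy "layer": 50 subunits over a 64-pixel image
A = rng.normal(size=(10, 50))   # toy stand-in for the fMRI readout matrix

def ascend(objective_grad, steps=200, lr=0.1):
    """DeepDream-style gradient ascent on the image, norm-constrained."""
    x = rng.normal(size=64)
    for _ in range(steps):
        x += lr * objective_grad(x)
        x /= np.linalg.norm(x)      # keep the image on the unit sphere
    return x

# Unweighted: maximize the plain sum of subunit activations W @ x.
x_unw = ascend(lambda x: W.sum(axis=0))
# Weighted: maximize the summed *readout* (A @ W) @ x of the same subunits.
x_wt = ascend(lambda x: (A @ W).sum(axis=0))

# The two preferred "images" differ: the readout rotates and rescales the
# objective, just as the brain's image manifold differs from VGG19's.
print(float(x_unw @ x_wt))
```

    With a linear objective the ascent just converges to the (normalized) gradient direction; the point is that the readout matrix changes that direction.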

    Margaret Livingstone has a very interesting line of research (talk here) highlighting the close link between spatial biases and the development of object and face recognition modules in high-level visual cortex. More generally, if we’re interested in how vision is organized for action – ecological vision in the tradition of Gibson – then it matters whether a network has captured, e.g., the fact that the bottom half of the visual field has very different affordances than the top half. If you applied an unsupervised criterion to learn features from natural images without weight sharing, those features would vary quite a bit across space – unlike in a CNN. This point was also raised by Dileep George.

    Weight sharing is not strictly necessary

    LeCun (1989) demonstrates the use of backpropagation to train a convolutional image recognition network. There’s a beautiful figure that highlights the effect of adding more and more constraints, in particular in going from a fully-connected 2-layer network (net2) to a high-capacity convolutional neural net (net5):

    From LeCun (1989)

    LeCun (1989) clarifies that weight sharing is a clever trick that decreases the number of weights to be learned, which in the small-data limit is practically important to obtain good test set accuracy. We are a long way from 1989 in terms of dataset size, and unsupervised learning could learn untied feature maps, as pointed out by Yann LeCun in the Twitter thread. But would currently existing benchmarks favor ANNs-with-local-receptive-fields-but-untied-weights?
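    The parameter savings are easy to quantify. A back-of-the-envelope count for a single 32×32 feature map with 5×5 receptive fields (biases and edge effects ignored):

```python
# Parameter counts for one 32x32 feature map with 5x5 receptive fields,
# illustrating why LeCun (1989) tied the weights in the small-data regime.
h = w = 32
k = 5

fully_connected   = (h * w) * (h * w)   # every unit sees every input pixel
locally_connected = (h * w) * (k * k)   # local RFs, untied weights
convolutional     = k * k               # local RFs, one shared kernel

print(fully_connected, locally_connected, convolutional)
# 1048576 25600 25
```

    The locally-connected-but-untied variant sits three orders of magnitude below fully connected, but still a thousandfold above the shared kernel – a gap that mattered in 1989 and matters much less with today’s datasets.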

    The leaderboard is the message

    Let’s turn back to the claim that early visual cortex is like a particular CNN. As Wu, David and Gallant (2006) put it:

    an important long-term goal of sensory neuroscience [is] developing models of sensory systems that accurately predict neuronal responses under completely natural conditions.

    Brain-Score is an embodiment of this idea – benchmark, across many different datasets and ventral stream areas, the ability of many different CNNs to explain brain activity. The leaderboard is pretty clear that the VGG19 architecture outperforms alternatives in explaining average firing rates – even when those alternatives are better at classifying images in ImageNet. That’s an interesting finding – VGG19 is shallower and has larger receptive fields than more modern CNNs, so perhaps those properties explain its performance on this benchmark.
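    For readers unfamiliar with this style of benchmarking, here is a rough sketch of neural predictivity via ridge regression on synthetic data (an illustration of the general recipe, not Brain-Score’s actual pipeline or metric):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: model features (images x units) and recorded
# firing rates (images x neurons) that are a noisy linear map of them.
X = rng.normal(size=(200, 40))
B = rng.normal(size=(40, 10))
Y = X @ B + 0.5 * rng.normal(size=(200, 10))

Xtr, Xte, Ytr, Yte = X[:150], X[150:], Y[:150], Y[150:]

# Ridge regression from model features to neurons (closed form).
lam = 1.0
W = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(40), Xtr.T @ Ytr)
pred = Xte @ W

# Score: mean per-neuron correlation on held-out images.
r = [np.corrcoef(pred[:, j], Yte[:, j])[0, 1] for j in range(10)]
print(round(float(np.mean(r)), 2))
```

    A model whose features span the neural response space scores high under this recipe regardless of how those features are rotated – a point that matters below.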

    Grace Lindsay pointed out that:

    […] when the results were first coming out that trained convolutional neural networks are good predictors of activity in the visual system some people had the attitude of “that’s not interesting because obviously anything that does vision well will look like the brain”

    What’s interesting is that we now have architectures that solve object recognition and are very different from CNNs, namely vision transformers (ViT). They are architecturally different from brains, and while they perform well on ImageNet, they underperform in predicting single-neuron responses. So now we have a clear example of a disconnect between ImageNet performance and similarity to the brain, and that strengthens the claim that core object recognition is like a CNN.

    So on the one hand, the data from Brain-Score says that CNNs are the best models of core object recognition, yet there are pretty stark ways in which early visual processing is unlike a CNN, some of which I’ve mentioned already, others which have been highlighted in Grace Lindsay’s excellent review.

    Would Brain-Score, in theory, rate better a model which respects the distinction between excitation and inhibition (Dale’s law)? That seems unlikely, since there are no cell type annotations in the datasets. Would it rate better a foveated model, or one without weight sharing? Again, unlikely, since, as Tiago Marques points out, for technical reasons most datasets are recorded in the parafovea. In any case, the metric used to score the models is rotation invariant, so it wouldn’t be able to tell these cases apart. As Simon Kornblith points out, choices of metrics are not falsifiable. The right approach for choosing metrics is the axiomatic one, as demonstrated in Simon’s CKA paper: the modeller or the community decides what the right metric is based on design criteria that represent what they think is interesting about the data.
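    The rotation-invariance point is easy to verify. A minimal sketch using linear CKA (one standard choice of such a metric; the data here are random): a representation and an arbitrarily rotated copy of it score identically, so the metric cannot distinguish them.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two representations (samples x features)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))

# A random rotation of feature space (orthogonal matrix via QR).
Q, _ = np.linalg.qr(rng.normal(size=(20, 20)))

# Rotation-invariant: X and its rotated copy score identically.
print(round(linear_cka(X, X), 3), round(linear_cka(X, X @ Q), 3))
```

    Whether that invariance is a feature or a bug is exactly the kind of axiomatic design decision the community has to make explicitly.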

    Does that mean we should ignore Brain-Score? No! I really like Brain-Score – I wish there were many more Brain-Score-like leaderboards! Comparing on lots of datasets prevents cherry picking – there is robustness in the meta-analytic approach. What I’m excited about is the possibility of Brain-Scores – a constellation of leaderboards that benchmark different models according to rules that match modelers’ interests. I’ve been involved in proto-community-benchmark efforts like the neural prediction challenge and spikefinder before, and the technological barriers to running such a leaderboard have been lowered by commoditized cloud infrastructure. The value proposition is also becoming clearer, and I am excited to see more of these efforts pop up.


    In brief:

    • Convolutional neural nets assume shared weights
    • This assumption is not valid at large spatial scales
    • Large spatial scales matter if you care about naturalistic foveated vision
    • It’s a useful assumption in the low-data regime, which we’re not in
    • The current benchmarks are not sensitive to this issue
    • It’s possible and desirable to create new benchmarks which are sensitive to this, and I hope to do that in the future!

    in xcorr.net on April 16, 2021 06:52 PM.


    International representation in Neurology journals: no improvement in over a decade

    Click here to read the full study published in BMC Medical Research Methodology.

    Biomedical and public health research should reflect the diversity of global health and the impact that social, cultural, and geographical factors can have on local health practice and policy. Until now, however, no studies have examined the representation of developing countries among the authors, editors, and research published in high-ranking neurology journals.

    The need for research in neurology cannot be overstated. Globally, neurological disorders are the leading cause of disability, yet low and middle-income countries bear almost 80% of the burden of neurological disorders. As the population continues to grow, the degree of burden will continue to rise.

    Authorship and editorial board representation from developing countries in neurology journals is exceedingly rare.

    To assess representation in neurology journals, we conducted a cross-sectional study of all research articles published in 2010 and 2019 in the five highest ranked peer-reviewed neurology journals: The Lancet Neurology, Acta Neuropathologica, Nature Reviews Neurology, Brain, and Annals of Neurology. Using this data, we determined the extent of contributions of authors, editors, and research from developing countries, as well as the degree of international research collaboration between developed and developing countries.

    We found that authorship and editorial board representation from developing countries in neurology journals is exceedingly rare, and this has not changed in the past decade. First authorship was attributed to authors from developing countries in only 2% of research articles in 2010 and 3% in 2019. The lack of representation in research extends to the editorial boards of the selected journals, none of which had a board member from a developing country. Unsurprisingly, the primary data of these publications originated largely from developed countries with advanced research facilities, namely the United States, United Kingdom, and Germany.

    National and international research bodies are well placed to reduce this disparity.

    Tackling underrepresentation in research is no simple feat. Nevertheless, our results highlight that there is an urgent need for strategies to support high-quality locally-driven biomedical research in developing countries. Local researchers in developing countries benefit from exposure to greater research opportunities, education, and training. This is beneficial to developing countries as they are able to direct socially and culturally relevant research that is readily applicable to local healthcare systems.

    National and international research bodies are well placed to reduce this disparity through greater representation via international collaborations which strengthen the quality of research in developing countries. By fostering high-quality and culturally relevant research, local healthcare systems are able to readily apply these findings to meet the neurological needs of their population.

    The post International representation in Neurology journals: no improvement in over a decade appeared first on BMC Series blog.

    in BMC Series blog on April 16, 2021 01:42 PM.


    Impersonators and fake email addresses

    This morning a professor at a US university warned me that they had gotten an email from a person pretending to be me. The email came from elisabeth_bik@yeah.net, and was signed with my name. But that is not my email address!

    The email sent to the professor flagged a scientific paper as “The following article is fake. The impact on society is very bad”.

    Maybe I should feel flattered, but it is quite disturbing that people pretend to be me.

    So here is a quick warning that there are Elisabeth Bik impersonators using fake email addresses. My correct email address is eliesbik at gmail dot com and if an email comes from any other address that is not me!

    in Science Integrity Digest on April 15, 2021 07:30 PM.


    Organization of Cell Assemblies in the Hippocampus

    This week on the Journal Club session Emil Dmitruk will talk about the paper "Organization of Cell Assemblies in the Hippocampus" and will briefly present how this subject can be approached with topological data analysis.

    Neurons can produce action potentials with high temporal precision. A fundamental issue is whether, and how, this capability is used in information processing. According to the "cell assembly" hypothesis, transient synchrony of anatomically distributed groups of neurons underlies processing of both external sensory input and internal cognitive mechanisms. Accordingly, neuron populations should be arranged into groups whose synchrony exceeds that predicted by common modulation by sensory input. Here we find that the spike times of hippocampal pyramidal cells can be predicted more accurately by using the spike times of simultaneously recorded neurons in addition to the animal's location in space. This improvement remained when the spatial prediction was refined with a spatially dependent theta phase modulation. The time window in which spike times are best predicted from simultaneous peer activity is 10–30 ms, suggesting that cell assemblies are synchronized at this timescale. Because this temporal window matches the membrane time constant of pyramidal neurons, the period of the hippocampal gamma oscillation and the time window for synaptic plasticity, we propose that cooperative activity at this timescale is optimal for information transmission and storage in cortical circuits.


    Date: 2021/04/16
    Time: 14:00
    Location: online

    in UH Biocomputation group on April 14, 2021 02:00 PM.


    Operational Response to US Measles Outbreaks, 2017-19

    Despite widespread coverage with a highly efficacious vaccine, pockets of under-vaccinated populations and imported cases can result in sizable measles outbreaks. And even though measles was declared eliminated in the United States more than 20 years ago, the country has faced a series of large measles outbreaks in the years since, particularly over the past decade.

    In our study just published in BMC Public Health, we conducted interviews with individuals from state and local health departments and health systems from across the country who responded to measles outbreaks in 2017-19, representing outbreaks that account for more than 75% of all US measles cases over that period. Our principal aim was to capture firsthand operational experiences in order to generate evidence-based lessons that can inform preparedness activities for future outbreaks of measles and other highly transmissible diseases.

    This study took place under the umbrella of a larger initiative, Outbreak Observatory, which seeks to address the operational challenges and barriers faced during outbreak responses and disseminate lessons to facilitate future preparedness and response efforts. Outbreak responses tend to be busy times that do not lend themselves to documenting operational knowledge acquired during the response. Outbreak Observatory aims to provide a forum to aggregate those lessons so that other jurisdictions can put them into practice without having to learn them firsthand. Previous Outbreak Observatory studies—including on the 2017-18 US hepatitis A epidemic and the impact of Candida auris and the 2017-18 influenza season on US health systems—identified lessons that reach beyond those individual responses to inform broader outbreak and epidemic preparedness. As highlighted by the complexities of the COVID-19 pandemic response, it is critical to share the wealth of operational experience held by frontline responders to improve outbreak response readiness and capacity in advance of future communicable disease events.

    Even for the smaller outbreaks, health departments faced considerable challenges conducting routine outbreak response activities, such as contact tracing

    The participants in our study on US measles outbreaks called attention to a number of resource and operational gaps during their response operations that apply directly to the COVID-19 pandemic. While some of these outbreaks were quite large, none even remotely compared to the scale of the US COVID-19 epidemic. Even for the smaller outbreaks, health departments faced considerable challenges conducting routine outbreak response activities, such as contact tracing. These gaps became front-page news with COVID-19, as health departments around the country struggled to implement effective testing, contact tracing, and other surveillance operations during the pandemic. Additionally, most health departments interviewed in this study indicated that they did not have the resources available to conduct mass vaccination operations in response to the outbreak. Rather, they relied heavily on healthcare providers and pharmacists in the community to administer the vaccinations.

    Public health preparedness funding has decreased steadily over the past decade or longer, and health departments are unable to maintain the programs and personnel required to respond to even minor outbreaks, let alone major epidemics or pandemics.

    As we have observed during the COVID-19 response—first with mass testing clinics and currently with vaccinations—health departments need considerable external resources to stand up large-scale responses for major epidemics. Public health preparedness funding has decreased steadily over the past decade or longer, and health departments are unable to maintain the programs and personnel required to respond to even minor outbreaks, let alone major epidemics or pandemics.

    As with the outbreaks included in this study, many of which occurred largely in insular communities—e.g., racial/ethnic minorities, immigrants, orthodox religious communities—COVID-19 has disproportionately impacted vulnerable racial and ethnic minority communities. Public health and healthcare organizations faced significant barriers to encouraging vaccination among the affected communities, and we are currently observing similar challenges as the availability of SARS-CoV-2 vaccines scales up and eligibility groups expand. During the measles outbreaks, health officials often relied on community healthcare providers, particularly those that serve vulnerable and insular communities, to engage with the affected community. Additionally, trusted community leaders—including religious leaders, business owners, and community-based organizations—played a critical role as trusted voices to disseminate accurate information regarding protective measures, including vaccination. It requires substantial time and resources to develop these relationships, however, and it can be very difficult to do so in the midst of an outbreak or epidemic response. Health departments that had previously established relationships (e.g., from previous outbreaks) found it easier to leverage these community leaders to make a positive impact.

    Historically, most outbreak research focuses on the disease epidemiology or clinical care or is limited to after-action reports that focus on organizational challenges and may never be published publicly

    Historically, most outbreak research focuses on the disease epidemiology or clinical care or is limited to after-action reports that focus on organizational challenges and may never be published publicly. It is critical to understand the operational experiences of frontline responders, including public health and healthcare organizations, and to translate those experiences into evidence-based lessons that can inform other preparedness efforts. Without these lessons, the jurisdictions repeat the same mistakes and must learn those lessons firsthand. And the challenges, barriers, and shortcomings identified during smaller outbreaks will only be exacerbated during larger events.

    Efforts to document and disseminate these operational experiences, like Outbreak Observatory and others, support the development of programs and policies that can enable sustainable public health preparedness capacity that is needed for a range of communicable disease events, from smaller outbreaks to larger health emergencies like COVID-19.

    The post Operational Response to US Measles Outbreaks, 2017-19 appeared first on BMC Series blog.

    in BMC Series blog on April 13, 2021 01:37 PM.


    The amoral nonsense of Orchid’s embryo selection

    If you haven’t heard about Clubhouse yet… well, it’s the latest Silicon Valley unicorn, and the popular new chat hole for thought leaders. I heard about it for the first time a few months ago, and was kindly offered an invitation (Clubhouse is invitation only!) so I could explore what it is all about. Clubhouse is an app for audio-based social networking, and the content is, as far as I can tell, a mixed bag. I’ve listened to a handful of conversations hosted on the app; topics include everything from bitcoin to Miami. It was interesting, at times, to hear the thoughts and opinions of some of the discussants. On the other hand, there is a lot of superficial rambling on Clubhouse as well. During a conversation about genetics I heard someone posit that biology has a lot to learn from the fashion industry. This was delivered in a “you are hearing something profound” manner, by someone who clearly knew nothing about either biology or the fashion industry, which is really too bad, because the fashion industry is quite interesting and I wouldn’t be surprised at all if biology has something to learn from it. Unfortunately, I never learned what that is.

    One of the regulars on Clubhouse is Noor Siddiqui. You may not have heard of her; in fact she is officially “not notable”. That is to say, she used to have a Wikipedia page but it was deleted on the grounds that there is nothing about her that indicates notability, which is of course notable in and of itself… a paradox that says more about Wikipedia’s gatekeeping than Siddiqui (Russell 1903, Litt 2021). In any case, Siddiqui was recently part of a Clubhouse conversation on “convergence of genomics and reproductive technology” together with Carlos Bustamante (advisor to cryptocurrency based Luna DNA and soon to be professor of business technology at the University of Miami) and Balaji Srinivasan (bitcoin angel investor and entrepreneur). As it happens, Siddiqui is the CEO of a startup called “Orchid Health“, in the genomics and reproductive technology “space”. The company promises to harness “population genetics, statistical modeling, reproductive technologies, and the latest advances in genomic science” to “give parents the option to lower a future child’s genetic risk by creating embryos through IVF and implanting embryos in the order that can reduce disease risk.” This “product” will be available later this year. Bustamante and Srinivasan are early “operators and investors” in the venture.

    Orchid is not Siddiqui’s first startup. While she doesn’t have a Wikipedia page, she does have a website where she boasts of having (briefly) been a Thiel fellow and, together with her sister, starting a company as a teenager. The idea of the (briefly in existence) startup was apparently to help the now commercially defunct Google Glass gain acceptance by bringing the device to the medical industry. According to Siddiqui, Orchid is also not her first dive into statistical modeling or genomics. She notes on her website that she did “AI and genomics research”, specifically on “deep learning for genomics”. Such training and experience could have been put to good use but…

    Polygenic risk scores and polygenic embryo selection

    Orchid Health claims that it will “safely and naturally, protect your baby from diseases that run in your family” (the slogan “have healthy babies” is prominently displayed on the company’s website). The way it will do this is to utilize “advances in machine learning and artificial intelligence” to screen embryos created through in-vitro fertilization (IVF) for “breast cancer, prostate cancer, heart disease, atrial fibrillation, stroke, type 2 diabetes, type 1 diabetes, inflammatory bowel disease, schizophrenia and Alzheimer’s“. What this means in (a statistical geneticist’s) layman’s terms is that Orchid is planning to use polygenic risk scores derived from genome-wide association studies to perform polygenic embryo selection for complex diseases. This can be easily unpacked because it’s quite a simple proposition, although it’s far from a trivial one – the statistical genetics involved is deep and complicated.

    First, a single-gene disorder is a health problem that is caused by a single mutation in the genome. Examples of such disorders include Tay-Sachs disease, sickle cell anaemia, Huntington’s disease, Duchenne muscular dystrophy, and many other diseases. A “complex disease”, also called a multifactorial disease, is a disease that has a genetic component, but one that involves multiple genes, i.e. it is not a single-gene disorder. Crucially, complex diseases may involve effects of environmental factors, whose role in causing disease may depend on the genetic composition of an individual. The list of diseases on Orchid’s website, including breast cancer, prostate cancer, heart disease, atrial fibrillation, stroke, type 2 diabetes, type 1 diabetes, inflammatory bowel disease, schizophrenia and Alzheimer’s disease are all examples of complex (multifactorial) diseases.

    To identify genes that associate with a complex disease, researchers perform genome-wide association studies (GWAS). In such studies, researchers typically analyze several million genomic sites in large numbers of individuals with and without a disease (it used to be thousands of individuals; nowadays it is hundreds of thousands or millions) and perform regressions to assess the marginal effect at each locus. I italicized the word associate above, because genome-wide association studies do not, in and of themselves, point to genomic loci that cause disease. Rather, they produce, as output, lists of genomic loci that have varying degrees of association with the disease or trait of interest.
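    A toy version of such a scan, with synthetic genotypes and phenotype and no linkage disequilibrium (all parameters are illustrative), makes the marginal-regression idea concrete:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 2000, 100                        # individuals, genomic sites

# Genotypes coded 0/1/2 (copies of the minor allele); only the first
# 5 loci truly affect the synthetic phenotype.
G = rng.binomial(2, 0.3, size=(n, m)).astype(float)
beta_true = np.zeros(m)
beta_true[:5] = 0.5
y = G @ beta_true + rng.normal(size=n)

# GWAS-style scan: one simple regression per locus, estimating the
# *marginal* effect of that site alone -- not a joint, causal model.
Gc = G - G.mean(axis=0)
yc = y - y.mean()
beta_hat = (Gc * yc[:, None]).sum(axis=0) / (Gc ** 2).sum(axis=0)

# The loci with the largest estimated marginal effects:
print(np.argsort(-np.abs(beta_hat))[:5])
```

    With real data the loci are correlated (linkage disequilibrium), effects are tiny, and confounding matters, which is exactly why turning these marginal estimates into a usable score is nontrivial.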

    Polygenic risk scores (PRS), which the Broad Institute claims to have discovered (narrator: they were not discovered at the Broad Institute), are a way to combine the multiple genetic loci associated with a complex disease from a GWAS. Specifically, a PRS \hat{S} for a complex disease is given by

    \hat{S} = \sum_{j=1}^m X_j \hat{\beta}_j,

    where the sum is over m different genetic loci, the X_j are coded genetic markers for an individual at the m loci, and the \hat{\beta}_j are weights based on the marginal effects derived from a GWAS. The concept of a PRS is straightforward, but the details are complicated, in some cases subtle, and generally non-trivial. There is debate over how many genomic loci should be used in computing a polygenic risk score given that the vast majority of marginal effects are very close to zero (Janssens 2019), lots of ongoing research about how to set the weights to account for issues such as bias caused by linkage disequilibrium (Vilhjálmsson et al. 2015, Shin et al. 2017, Newcombe et al. 2019, Ge et al. 2019, Lloyd-Jones et al. 2019, Pattee and Pan 2020, Song et al. 2020), and continuing discussions about the ethics of using polygenic risk scores in the clinic (Lewis and Green 2021).
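    In code, the score itself is a one-liner; the weights and genotypes below are random placeholders standing in for GWAS output and one individual’s genome:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 1000                                 # loci included in the score

# Placeholder inputs: GWAS-derived weights (small marginal effects) and
# one individual's genotypes coded 0/1/2 at the same m loci.
beta_hat = rng.normal(0, 0.01, size=m)
x = rng.binomial(2, 0.3, size=m).astype(float)

# The PRS is the weighted sum over loci of genotype times marginal effect.
prs = float(x @ beta_hat)
print(round(prs, 4))
```

    All of the difficulty hides in choosing m and setting the weights, as the references above make clear; the sum itself is trivial.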

    While much of the discussion around PRS applications centers on applications such as determining diagnostic testing frequency (Wald and Old 2019), polygenic embryo selection (PES) posits that polygenic risk scores should be taken a step further and evaluated for embryos to be used as a basis for discarding, or selecting, specific embryos for in vitro fertilization implantation. The idea has been widely criticized and critiqued (Karavani et al. 2019). It has been described as unethical, morally repugnant, and concerns about its use for eugenics have been voiced by many. Underlying these criticisms is the fact that the technical issues with PES using PRS are manifold.

    Poor penetrance

    The term “penetrance” for a disease refers to the proportion of individuals with a particular genetic variant who have the disease. Many single-gene disorders have very high penetrance. For example, the F508del mutation in the CFTR gene is 100% penetrant for cystic fibrosis. That is, 100% of people who are homozygous for this variant, meaning that both copies of their DNA have a deletion of the phenylalanine amino acid in position 508 of their CFTR gene, will have cystic fibrosis. The vast majority of variants associated with complex diseases have very low penetrance. For example, in schizophrenia, the penetrance of “high risk” de novo copy number variants (in which there are variable numbers of copies of DNA at a genomic locus) was found to be between 2% and 7.4% (Vassos et al. 2010). The low penetrance at large numbers of variants for complex diseases was precisely the rationale for developing polygenic risk scores in the first place, the idea being that while individual variants yield small effects, perhaps in (linear) combination they have more predictive power. While it is true that combining variants does yield more predictive power for complex diseases, unfortunately the accuracy is, in absolute terms, very low.

    The reason for low predictive power of PRS is explained well in (Wald and Old 2020) and is illustrated for coronary artery disease (CAD) in (Rotter and Lin 2020):

    The issue is that while the polygenic risk score distribution may indeed be shifted for individuals with a disease, and while this shift may be statistically significant, resulting in large odds ratios, i.e. much higher relative risk for individuals with higher PRS, the proportion of individuals in the tails of the distributions who will or won’t develop the disease will greatly affect the predictive power of the PRS. For example, Wald and Old note that the PRS for CAD from (Khera et al. 2018) confers a detection rate of only 15% at a false positive rate of 5%. At a 3% false positive rate the detection rate would be only 10%. This is visible in the figure above, where it is clear that controlling the false positive rate (i.e. thresholding at the extreme right-hand side, at high PRS scores) will filter out many (most) affected individuals. The same issue is raised in the excellent review on PES of (Lázaro-Muñoz et al. 2020). The authors explain that “even if a PRS in the top decile for schizophrenia conferred a nearly fivefold increased risk for a given embryo, this would still yield a >95% chance of not developing the disorder.” It is worth noting in this context that diseases like schizophrenia are not even well defined phenotypically (Mølstrøm et al. 2020), which is another complex matter that is too involved to go into in detail here.
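    The detection-rate arithmetic can be reproduced with a toy simulation (the 0.6 SD shift between cases and controls is an assumed, illustrative value, not a fitted one for CAD): even a shift large enough to produce impressive tail odds ratios catches only a small fraction of future cases at a 5% false positive rate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assume case PRSs are shifted up by ~0.6 SD relative to controls
# (illustrative shift; real diseases differ).
controls = rng.normal(0.0, 1.0, size=200_000)
cases    = rng.normal(0.6, 1.0, size=200_000)

# Threshold chosen for a 5% false positive rate among controls...
thr = np.quantile(controls, 0.95)
# ...gives the detection rate: the fraction of cases above threshold.
detection_rate = float((cases > thr).mean())

# Most people who will actually develop the disease fall below threshold.
print(round(detection_rate, 3))
```

    The simulated detection rate lands in the mid-teens, in line with the Wald and Old figures quoted above: high relative risk in the tail, low absolute sensitivity.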

    In a recent tweet, Siddiqui describes natural conception as a genetic lottery, and suggests that Orchid Health, by performing PES, can tilt the odds in customers’ favor. To do so the false positive rate must be low, or else too many embryos will be discarded. But a 15% sensitivity is highly problematic considering the risks inherent in IVF in the first place (Kamphuis et al. 2014):

    To be concrete, an odds ratio of 2.8 for cerebral palsy needs to be balanced against the fact that in the Khera et al. study, only 8% of individuals had an odds ratio >3.0 for CAD. Other diseases are even worse, in this sense, than CAD. In atrial fibrillation (one of the diseases on Orchid Health’s list), only 9.3% of the individuals in the top 0.44% of the atrial fibrillation PRS actually had atrial fibrillation (Choi et al. 2019).

    As one starts to think carefully about the practical aspects and tradeoffs in performing PES, other issues, resulting from the low penetrance of complex disease variants, come into play as well. (Lencz et al. 2020) examine these tradeoffs in detail, and conclude that “the differential performance of PES across selection strategies and risk reduction metrics may be difficult to communicate to couples seeking assisted reproductive technologies… These difficulties are expected to exacerbate the already profound ethical issues raised by PES… which include stigmatization, autonomy (including “choice overload”), and equity. In addition, the ever-present specter of eugenics may be especially salient in the context of the LRP (lowest-risk prioritization) strategy.” They go on to “call for urgent deliberations amongst key stakeholders (including researchers, clinicians, and patients) to address governance of PES and for the development of policy statements by professional societies.”

    Pleiotropy predicaments

    I remember a conversation I had with Nicolas Bray several years ago shortly after the exciting discovery of CRISPR/Cas9 for genome editing, on the implications of the technology for improving human health. Nick pointed out that the development of genomics had been curiously “backwards”. Thirty years ago, when human genome sequencing was beginning in earnest, the hope was that with the sequence at hand we would be able to start figuring out the function of genes, and even individual base pairs in the genome. At the time, the human genome project was billed as being able to “help scientists search for genes associated with human disease” and it was imagined that “greater understanding of the genetic errors that cause disease should pave the way for new strategies in diagnosis, therapy, and disease prevention.” Instead, what happened is that genome editing technology has arrived well before we have any idea of what the vast majority of the genome does, let alone the implications of edits to it. Similarly, while the coupling of IVF and genome sequencing makes it possible to select embryos based on genetic variants today, the reality is that we have no idea how the genome functions, or what the vast majority of genes or variants actually do.

    One thing that is known about the genome is that it is chock full of pleiotropy. This is statistical genetics jargon for the fact that variation at a single locus in the genome can affect many traits simultaneously. Whereas one might think naïvely that there are distinct genes affecting individual traits, in reality the genome is a complex web of interactions among its constituent parts, leading to extensive pleiotropy. In some cases pleiotropy can be antagonistic, which means that a genomic variant may simultaneously be harmful and beneficial. A famous example of this is the mutation to the beta globin gene that confers malaria resistance to heterozygotes (individuals with just one of their DNA copies carrying the mutation) and sickle cell anemia to homozygotes (individuals with both copies of their DNA carrying the mutation).

    In the case of complex diseases we don’t really know enough, or anything, about the genome to be able to truly assess pleiotropy risks (or benefits). But there are some worries already. For example, HLA Class II genes are associated with Type 1 and non-insulin-treated Type 2 diabetes (Jacobi et al. 2020), Parkinson’s disease (e.g. James and Georgopoulos 2020, which also describes an association with dementia) and Alzheimer’s (Wang and Xing 2020). PES that results in selection against the variants associated with these diseases could very well lead to population susceptibility to infectious disease. Having said that, it is worth repeating that we don’t really know whether the danger is serious, because we don’t have any idea what the vast majority of the genome does, nor the nature of the antagonistic pleiotropy present in it. Almost certainly, by selecting for one trait according to PRS, embryos will also be selected for a host of other unknown traits.

    Thus, what can be said is that while Orchid Health is trying to convince potential customers not to “roll the dice”, by ignoring the complexities of pleiotropy and its implications for embryo selection, what the company is actually doing is rolling the dice for its customers (for a fee).

    Population problems

    One of Orchid Health’s selling points is that unlike other tests that “look at 2% of only one partner’s genome…Orchid sequences 100% of both partner’s genomes”, resulting in “6 billion data points”. This refers to the “couples report”, a companion product of sorts to the polygenic embryo screening. The couples report is assembled by using the sequenced genomes of the parents to simulate the genomes of potential babies, each of which is evaluated with PRS to provide a range of (PRS-based) disease predictions for the couple’s potential children. Sequencing a whole genome is a lot more expensive than just assaying single nucleotide polymorphisms (SNPs) on a panel. That may be one reason that most direct-to-consumer genetics is based on polymorphism panels rather than sequencing. There is another: the vast majority of variation in the genome occurs at known polymorphic sites (there are a few million of these out of the approximately 3 billion base pairs in the genome), and to the extent that a variant might associate with a disease, it is likely that a neighboring common variant, which will be inherited together with the causal one, can serve as a proxy. There are rare variants that have been shown to associate with disease, but whether they can explain a large fraction of (genetic) disease burden is still an open question (Young 2019). So what has Siddiqui, who touts the benefits of whole-genome sequencing in a recent interview, discovered that others such as 23andme have missed?
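    Orchid has not published how the couples report is computed, but the general idea of simulating potential children from parental genomes can be sketched as follows. The genotypes and per-allele weights below are invented for illustration; real PRS combine thousands to millions of GWAS-derived weights.

```python
import random

random.seed(0)  # reproducible illustration

# Each parent's genotype at each site: number of risk alleles (0, 1, or 2).
mother = [0, 1, 2, 1]
father = [1, 1, 0, 2]
weights = [0.8, 0.3, 0.5, 0.2]  # made-up per-allele effect sizes

def transmit(genotype):
    """Sample the allele a parent passes on at each site (Mendelian)."""
    return [1 if random.random() < g / 2 else 0 for g in genotype]

def simulate_child_prs(mother, father, weights):
    """One simulated child genome, scored with a toy additive PRS."""
    child = [m + f for m, f in zip(transmit(mother), transmit(father))]
    return sum(w * g for w, g in zip(weights, child))

scores = [simulate_child_prs(mother, father, weights) for _ in range(10_000)]
print(f"simulated child PRS range: {min(scores):.2f} to {max(scores):.2f}")
```

The spread of these simulated scores is what a report of "disease predictions for potential children" would summarize; note that everything downstream inherits whatever biases and inaccuracies are in the PRS weights themselves.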

    It turns out there is value to whole-genome sequencing for polygenic risk score analysis, but it arises when performing the genome-wide association studies on which the PRS are based. The reason is a bit subtle, and has to do with differences in genetics between populations. Specifically, as explained in (De La Vega and Bustamante, 2018), variants that associate with a disease in one population may be different from the variants that associate with the disease in another population, and whole-genome sequencing across populations can help to mitigate biases that result from restricting to SNP panels. Unfortunately, as De La Vega and Bustamante note, whole-genome sequencing for GWAS “would increase costs by orders of magnitude”. In any case, the value of whole-genome sequencing for PRS lies mainly in identifying relevant variants, not in assessing risk in individuals.

    The issue of population structure affecting PRS unfortunately transcends considerations about whole-genome sequencing. (Curtis 2018) shows that PRS for schizophrenia is more strongly associated with ancestry than with the disease. Specifically, he shows that “The PRS for schizophrenia varied significantly between ancestral groups and was much higher in African than European HapMap subjects. The mean difference between these groups was 10 times as high as the mean difference between European schizophrenia cases and controls. The distributions of scores for African and European subjects hardly overlapped.” The figure from Curtis’ paper showing the distribution of PRS for schizophrenia across populations is displayed below (the three letter codes at the bottom are abbreviations for different population groups; CEU stands for Northern Europeans from Utah and is the lowest).

    The dependence of PRS on population is compounded by a general problem with GWAS, namely that Europeans and individuals of European descent have been significantly oversampled in GWAS. Furthermore, even within a single ancestry group, the prediction accuracy of PRS can depend on confounding factors such as socio-economic status (Mostafavi et al. 2020). Practically speaking, the implications for PES are beyond troubling. The PRS in the reports Orchid Health delivers to customers may be inaccurate or meaningless due not only to the genetic background or admixture of the parents involved, but also to other unaccounted-for factors. Embryo selection on the basis of such data becomes worse than just throwing dice; it can potentially lead to unintended consequences in the genomes of the selected embryos. (Martin et al. 2019) show unequivocally that clinical use of polygenic risk scores may exacerbate health disparities.

    People pathos

    The fact that Silicon Valley entrepreneurs are jumping aboard a technically incoherent venture and are willing to set aside serious ethical and moral concerns is not very surprising. See, e.g. Theranos, which was supported by its investors despite concerns being raised about the technical foundations of the company. After a critical story appeared in the Wall Street Journal, the company put out a statement that

    “[Bad stories]…come along when you threaten to change things, seeded by entrenched interests that will do anything to prevent change, but in the end nothing will deter us from making our tests the best and of the highest integrity for the people we serve, and continuing to fight for transformative change in health care.”

    While this did bother a few investors at the time, many stayed the course for a while longer. Siddiqui uses similar language, brushing off criticism by complaining about paternalism in the health care industry and gatekeeping, while stating that

    “We’re in an age of seismic change in biotech – the ability to sequence genomes, the ability to edit genomes, and now the unprecedented ability to impact the health of a future child.”

    Her investors, many of whom got rich from cryptocurrency trading or bitcoin, cheer her on. One of her investors is Brian Armstrong, CEO of Coinbase, who believes “[Orchid is] a step towards where we need to go in medicine.” I think I can understand some of the ego and money incentives of Silicon Valley that drive such sentiment. But one thing that disappoints me is that scientists I personally held in high regard, such as Jan Liphardt (associate professor of Bioengineering at Stanford) who is on the scientific advisory board and Carlos Bustamante (co-author of the paper about population structure associated biases in PRS mentioned above) who is an investor in Orchid Health, have associated themselves with the company. It’s also very disturbing that Anne Wojcicki, the CEO of 23andme whose team of statistical geneticists understand the subtleties of PRS, still went ahead and invested in the company.


    Orchid Health’s polygenic embryo selection, which it will be offering later this year, is unethical and morally repugnant. My suggestion is to think twice before sending them three years of tax returns to try to get a discount on their product.

    The Bulbophyllum echinolabium orchid. The smell of its flowers has been described as that of decomposing rot.

    in Bits of DNA on April 12, 2021 08:46 PM.


    Next Open NeuroFedora meeting: 12 April 1300 UTC

    Photo by William White on Unsplash.

    Please join us at the next regular Open NeuroFedora team meeting on Monday 12 April at 1300 UTC in #fedora-neuro on IRC (Freenode). The meeting is a public meeting, and open for everyone to attend. You can join us over:

    You can use this link to convert the meeting time to your local time. Or, you can also use this command in the terminal:

    $ date --date='TZ="UTC" 1300 today'

    The meeting will be chaired by @bt0dotninja. The agenda for the meeting is:

    We hope to see you there!

    in NeuroFedora blog on April 12, 2021 08:41 AM.


    Reader/Principal Lecturer in Computer Science (Artificial Intelligence)

    School of Physics, Engineering and Computer Science / Department of Computer Science

    University of Hertfordshire, Hatfield, UK

    FTE: Full time position working 37 hours per week (1.0 FTE)

    Duration of Contract: Permanent

    Salary: UH9 £51,034 - £60,905 pa dependent on relevant skills and experience

    Closing date: 9 May 2021

    Applications are invited for an academic position as Reader/Principal Lecturer in the Department of Computer Science, University of Hertfordshire. The Department has an international reputation for teaching and research, with 64 academic staff, 40 adjunct lecturer staff, and 65 research students and postdoctoral research staff. With a history going back to 1958, the Department teaches one of the largest cohorts of undergraduate students in the UK, and also delivers a thriving online computer science degree programme.

    Main duties and responsibilities

    The person appointed will be expected to make a significant contribution to the leadership of research in the department, including gaining research awards as Principal Investigator, developing the research environment in the department and across the University, and publishing peer-reviewed journal articles and other internationally excellent or world-leading publications. They will also contribute to the development of, and supervise and teach on, doctoral programmes in the UK and internationally, in relation to a wide spectrum of AI, especially emerging topics in AI. Possible fields include, but are not limited to:

    • Machine learning: reinforcement learning, Deep Methods, statistical methods, large scale data modelling/intelligent processing and high-performance learning algorithms
    • Robotics: embodied and/or cognitive robotics, HRI, robot safety, emotional/social robots, smart homes and sensors, sensor fusion, assistive robotics, soft robotics, adaptive or evolutionary robotic design
    • Biological and biophysical computation paradigms, systems biology, neural computation
    • Complex Systems: collective intelligence, adaptive, autonomous and multi-agent/robot systems, collective and swarm intelligence, social and market modelling, adaptive, evolutionary and unconventional computation
    • Mathematical Modelling: statistical modelling, information-theoretic methods, compressive sensing, intelligent data visualization, multiscale models, optimization; causality
    • Emerging Topics in AI: computer algebra and AI, topological methods (e.g. persistent homology), algebraic and category-theoretical methods in AI; modern topics in games and AI; quantum algorithms for AI
    • AI and applications: financial modelling, AI and biology/physics/cognitive sciences
    • Foundations: fundamental questions of intelligence and computation, emergence of life/intelligence, Artificial Life

    Preference will be given to candidates who can deliver teaching to Level 7 in a selection of relevant subjects.

    The appointee will also be expected to lead and develop taught modules in a range of computer science areas. For appointees with the appropriate experience, there will be the possibility of taking up the role of Head of Subject Group within the Department of Computer Science.

    Skills and experience

    The appointee will strengthen the research culture in the Department by pursuing research as part of a larger research team, seeking external funding, publishing papers, supervising research students, and participating in commercial activity as appropriate. Therefore it is essential that candidates have a track record (e.g. in published, grant-funded research) in Computer Science. Additionally, experience of different types of assessment and higher education quality assurance is an essential requirement of this role.

    Prior experience of developing modules and/or programmes of study in Computer Science is essential, in addition to significant experience of operating in a UK HE environment, or equivalent professional experience. Readers/Principal Lecturers are expected to take on duties in the capacity of leader, and hence experience of academic leadership, programme leadership and line management is desirable. Good interpersonal and presentation skills with proficiency in the English language are essential, along with the ability to manage conflicting demands and work to deadlines.

    Qualifications required

    Reader/Principal Lecturer applicants must hold a First Degree and a PhD in an appropriate area of Computer Science, or equivalent, relevant postgraduate professional qualifications.

    In addition, the Reader/Principal Lecturer will be expected to contribute to the leadership and management of academic programmes, as well as to participate proactively in enterprise, knowledge transfer and/or research and scholarship in the School. There are expectations of leadership and potentially supervisory oversight of groups of staff. Readers/Principal Lecturers are also expected to contribute to the richness of the academic environment through scholarly activity, supporting events, projects and activities, including open days, outreach and extra-curricular initiatives, and potentially acting as a representative of the School or University at national or international fora.

    The School of Physics, Engineering and Computer Science is an Athena Swan Bronze award holder, and we are committed to providing a supportive environment and flexible working arrangements. The university also provides an onsite childcare facility and child-centred holiday clubs. Staff work with the university values, which are: Friendly, Ambitious, Collegial, Enterprising, and Student focused.

    Contact Details/Informal Enquiries: Informal enquiries may be addressed to Dr Simon Trainis, Head, Department of Computer Science by email: S.A.Trainis [at] herts.ac.uk Please note that applications sent directly to this email address will not be accepted.

    Closing Date: 9 May 2021

    Interview Dates: TBC but candidates are advised to be available on 16 and 17 June 2021

    Apply through https://www.herts.ac.uk/staff/careers-at-herts, Reference Number: 032595

    Date Advert Placed: 8 April 2021

    in UH Biocomputation group on April 09, 2021 09:56 PM.


    Dealing with measles outbreaks in areas of high anti-vaccination sentiment

    Managing the spread of infectious diseases has taken on heightened significance over the last year, as COVID-19 swept the world. Terms like ‘contact tracing’ and ‘vaccine trials’ have become hot discussion topics for the general population, not just those who work in public health or medical research. But the research I want to share here was research conducted before coronavirus was even ‘a thing’.

    Back in 2019, we set out to uncover what the top priorities are when dealing with a measles outbreak, particularly in a region where there might be lower than average vaccination rates. We conducted a multi-round survey (an interactive method known as a Delphi survey) with a range of Australian health professionals who are responsible for managing and responding to a disease outbreak: public health officials, infectious disease experts, immunization program staff, and others involved in delivering vaccinations.

    Our goal was to find out what they see as the priority issues when trying to contain an outbreak. We asked our participants to imagine a measles outbreak in a hypothetical region known for its low levels of childhood vaccination coverage. We asked them what practical issues and challenges they would face, and what they would need from non-vaccinating members of the community to effectively manage the outbreak.

    The study we undertook is part of a broader project called UPCaV for short – ‘Understanding, Parenting, Communities and Vaccination’. This project has been examining vaccine refusal and hesitancy in Australia from a range of different perspectives.

    Not all parents who refuse vaccines for their children are “anti-vaxxers”

    An earlier phase of the project involved in-depth interviews with Australian parents who refuse some or all vaccines for their children. Some of the findings from that phase have been published by Wiley et al (2020). Parents reported feeling stigmatized, bullied, disrespected and misunderstood. Notably, not all parents who refuse vaccines for their children are “anti-vaxxers”, a label that has become increasingly used in media reporting and one that tends to close down conversation rather than open it up.

    Trying to combat misinformation, while considered important by the people we surveyed, was deemed a longer-term strategy

    One of the things that stood out most for me from our Delphi survey was that trying to combat misinformation, while considered important by the people we surveyed, was deemed a longer-term strategy. Trying to change the minds of vaccine-refusers was not considered a priority during an outbreak, nor was offering vaccination. Participants’ first priority was to contact potentially infected people and isolate vulnerable members of the community in order to halt the spread of the disease.

    Another thing I found compelling in our findings was the set of responses to a question about which strategies are most successful for countering mistrust among non-vaccinating parents during an outbreak. Recurring responses to this question were the need for patience and calm education. Panellists suggested it was important to highlight to parents the serious complications that can arise from measles. And they suggested that it is important not to get into arguments with non-vaccinating members of the community, but instead to offer reassurance, provide accurate information and acknowledge different beliefs. These comments were exciting to see because they align so well with previous research I’ve been involved with, called Sharing Knowledge About Immunisation (SKAI).

    The following two quotes from participants struck a chord with me:

    “Reassurance, patience, education may help with those people that are still ‘sitting on the fence’ about vaccination.”

    “Recognizing and acknowledging people’s beliefs, however conveying facts and not getting drawn into a discussion or arguments.”

    Although these participants were talking about an imagined measles outbreak, their perspectives have significance for how we respond to other disease threats and how we communicate effectively with diverse populations. As I write this (April 2021), COVID-19 vaccines have begun to be rolled out in Australia. While the questions in our survey were about measles, the implications for how best to manage the threat of disease, while also communicating in an open, thoughtful and compassionate way, especially with people who might be feeling nervous about vaccinations, are timely.

    Further reading:

    Wiley KE, Leask J, Attwell K, Helps C, Degeling C, Ward P et al. Parenting and the vaccine refusal process: A new explanation of the relationship between lifestyle and vaccination trajectories. Social Science & Medicine. 2020;263. doi: https://doi.org/10.1016/j.socscimed.2020.113259.

    SKAI (Sharing Knowledge About Immunisation) website, National Centre of Immunisation Research and Surveillance, https://www.talkingaboutimmunisation.org.au, Australia.

    The post Dealing with measles outbreaks in areas of high anti-vaccination sentiment appeared first on BMC Series blog.

    in BMC Series blog on April 09, 2021 05:02 AM.


    Sci-Hub Case: Academics Urge Court To Rule Against ‘Extortionate Practices’

    A news item about academics who signed a statement urging the Delhi High Court to rule against three publishers that have petitioned the court to have access to Sci-Hub and Libgen blocked in India.

    in Open Access Tracking Project: news on April 08, 2021 12:16 AM.


    Overinterpreting Computational Models of Decision-Making

    Bell (1985)

    Can a set of equations predict and quantify complex emotions resulting from financial decisions made in an uncertain environment? An influential paper by David E. Bell considered the implications of disappointment, a psychological reaction caused by comparing an actual outcome to a more optimistic expected outcome, as in playing the lottery. Equations for regret, disappointment, elation, and satisfaction have been incorporated into economic models of financial decision-making (e.g., variants of prospect theory).
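    As a rough illustration (the linear form and the coefficients here are assumptions for exposition, not Bell's calibrated model), a disappointment-adjusted value function might look like:

```python
# Sketch of a Bell-style elation/disappointment adjustment (assumed linear
# form): the felt value of an outcome is its monetary payoff adjusted by how
# far it lands above (elation) or below (disappointment) the lottery's
# expected value. Coefficients are illustrative only.

def felt_value(outcome, expectation, elation=0.3, disappointment=0.7):
    gap = outcome - expectation
    weight = elation if gap >= 0 else disappointment
    return outcome + weight * gap

# A 10% chance of winning $100, so the expected value is $10:
expected = 0.10 * 100
print(felt_value(100, expected))  # winning feels better than the $100 alone
print(felt_value(0, expected))    # losing feels worse than plain $0
```

Making the disappointment coefficient larger than the elation coefficient, as above, encodes the common empirical finding that shortfalls sting more than equal-sized windfalls please.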

    Financial choices comprise one critical aspect of decision-making in our daily lives. There are so many choices we make every day, from the proverbial option paralysis in the cereal aisle...

    ...to decisions about who to date, where to go on vacation, whether one should take a new job, change fields, start a business, move to a new city, get married, get divorced, have children (or not).

    And who to trust. Futuristic scenario below...

    Decision to Trust

    I just met someone at a pivotal meeting of the Dryden Commission. We chatted beforehand and discovered we had some common ground. Plus he's brilliant, charming and witty.

    “Are you looking for an ally?” he asked. 

    Neil, Laura and Stanley in Season 3 of Humans


    Should I trust this person and go out to dinner with him? Time to ask my assistant Stanley, the orange-eyed (servile) Synthetic, an anthropomorphic robot with superior strength and computational abilities.

    Laura: “Stanley, was Dr. Sommer lying to me just then, about Basswood?”

    Stanley, the orange-eyed Synth: “Based on initial analysis of 16 distinct physiological factors, I would rate the likelihood of deceit or misrepresentation in Dr. Sommer's response to your inquiry at... EIGHTY-FIVE PERCENT.”

    The world would be easier to navigate if we could base our decisions on an abundance of data and well-tuned weighting functions accessible to the human brain. Right? Like a computational model of trust and reputation or a model of how people choose to allocate effort in social interactions. Right?

    I'm out of my element here, so this will limit my understanding of these models. Which brings me to a more familiar topic: meta-commentary on interpretation (and extrapolation).

    Computational Decision-Making

    My motivation for writing this post was annoyance. And despair. A study on probabilistic decision-making under uncertain and volatile conditions came to the conclusion that people with anxiety and depression will benefit from focusing on past successes, instead of failures. Which kinda goes without saying. The paper in eLife was far more measured and sophisticated, but the press release said:

    The more chaotic things get, the harder it is for people with clinical anxiety and/or depression to make sound decisions and to learn from their mistakes. On a positive note, overly anxious and depressed people’s judgment can improve if they focus on what they get right, instead of what they get wrong...

    ...researchers tested the probabilistic decision-making skills of more than 300 adults, including people with major depressive disorder and generalized anxiety disorder. In probabilistic decision making, people, often without being aware of it, use the positive or negative results of their previous actions to inform their current decisions.

    The unaware shall become aware. Further advice:

    “When everything keeps changing rapidly, and you get a bad outcome from a decision you make, you might fixate on what you did wrong, which is often the case with clinically anxious or depressed people...”

    ...individualized treatments, such as cognitive behavior therapy, could improve both decision-making skills and confidence by focusing on past successes, instead of failures...


    The final statement on individualized CBT could very well be true, but it has nothing to do with the outcome of the study (Gagne et al., 2020), wherein participants chose between two shapes associated with differential probabilities of receiving electric shock (Exp. 1), or financial gain or loss (Exp. 2).

    With that out of the way, I will say the experiments and the computational modeling approach are impressive. The theme is probabilistic decision-making under uncertainty, with the added bonus of volatility in the underlying causal structure (e.g., the square is suddenly associated with a higher probability of shocks). People with anxiety disorders and depression are generally intolerant of uncertainty. Learning the stimulus-outcome contingencies and then rapidly adapting to change was predictably impaired.

    Does this general finding differ for learning under reward vs. punishment? For anxiety vs. depression? In the past, depression was associated with altered learning under reward, while anxiety was associated with altered learning under punishment (including in the authors' own work). For reasons that were not entirely clear to me, the authors chose to classify symptoms using a bifactor model designed to capture “internalizing psychopathology” common to both anxiety and depression vs. symptoms that are unique to each disorder [but see Fried (2020)].1

    Overall, high scores on the common internalizing factor were associated with impaired adjustments to learning rate during the volatile condition, and this held whether the outcomes were shocks, financial gains, or financial losses. Meanwhile, high scores on anxiety-unique or depression-unique symptoms did not show this relationship. This was determined by computational modeling of task performance, using a hierarchical Bayesian framework to identify the model that best described the participants' behavior:

    We fitted participants’ choice behavior using alternate versions of simple reinforcement learning models. We focused on models that were parameterized in a sufficiently flexible manner to capture differences in behavior between experimental conditions (block type: volatile versus stable; task version: reward gain versus aversive) and differences in learning from better or worse than expected outcomes. We used a hierarchical Bayesian approach to estimate distributions over model parameters at an individual- and population-level with the latter capturing variation as a function of general, anxiety-specific, and depression-specific internalizing symptoms. 
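    This is not the authors' code, but the family of models being described can be sketched with a bare-bones Rescorla-Wagner learner whose learning rate is allowed to differ between stable and volatile blocks; impaired "adjustment of learning to volatility" would correspond to the volatile-block learning rate failing to rise above the stable-block one.

```python
# Minimal sketch (not Gagne et al.'s model) of condition-specific learning
# rates in a Rescorla-Wagner update.

def run_learner(outcomes, blocks, alpha_stable=0.1, alpha_volatile=0.4):
    """outcomes: 1 if shape A was reinforced on a trial, else 0.
    blocks: 'stable' or 'volatile' label per trial.
    Returns trial-by-trial estimated probability that A is reinforced."""
    p = 0.5  # prior belief
    estimates = []
    for outcome, block in zip(outcomes, blocks):
        alpha = alpha_volatile if block == "volatile" else alpha_stable
        p += alpha * (outcome - p)  # prediction-error update
        estimates.append(p)
    return estimates

# The contingency flips mid-way; a higher learning rate in the volatile
# block lets the learner track the reversal faster.
outcomes = [1] * 20 + [0] * 20
blocks = ["stable"] * 20 + ["volatile"] * 20
est = run_learner(outcomes, blocks)
print(f"belief after the reversal: {est[-1]:.2f}")  # has adapted toward 0
```

The full study fits parameters like these per participant in a hierarchical Bayesian framework, letting the population-level distribution vary with the internalizing-symptom factors.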

    We've been living in a very uncertain world for more than a year now, often in a state of loneliness and isolation. Some of us have experienced loss after loss, deteriorating mental health, lack of motivation, lack of purpose, and difficulty making decisions. My snappish response to the press release concerns whether we can prescribe individualized therapies based on the differences between the yellow arrows on the left (“resilient people”) compared to the right (“internalizing people” — i.e., the anxious and depressed), given that the participants may not even realize they're learning anything.


    1 I will leave it to Dr. Eiko Fried (2020) to explain whether we should accept (or reject) this bifactor model of “shared symptoms” vs. “unshared symptoms”.


    Bell DE. (1985) Disappointment in decision making under uncertainty. Operations research 33(1):1-27.

    Gagne C, Zika O, Dayan P, Bishop SJ. (2020). Impaired adaptation of learning to contingency volatility in internalizing psychopathology. Elife 9:e61387.

    Further Reading

    Fried EI. (2020). Lack of theory building and testing impedes progress in the factor and network literature. Psychological Inquiry 31(4):271-88.

    Guest O, Martin AE. (2021) How computational modeling can force theory building in psychological science. Perspect Psychol Sci. Jan 22:1745691620970585.

    van Rooij I, Baggio G. (2021). Theory before the test: How to build high-verisimilitude explanatory theories in psychological science. Perspect Psychol Sci. Jan 6:1745691620970604.

    in The Neurocritic on April 01, 2021 04:25 AM.


    Goodbye Discover

    The end of this Neuroskeptic era

    in Discovery magazine - Neuroskeptic on March 31, 2021 08:30 PM.


    The relationship between self-harm and bullying among adolescents

    Self-harm is often the result of a complex interplay of genetic, biological, psychological and environmental factors. Bullying is one factor that can increase the risk of self-harm. However, not all adolescents who experience bullying harm themselves.

    Bullies are also at risk

    It is not only being the victim of bullying that is related to self-harm, but also being the aggressor. In fact, in our study we found that adolescents who both bully and are bullied by their peers (the bully-victims) were the group most vulnerable to self-harm. They were six times more likely to self-harm than adolescents who were neither bullies nor bullied.

    It is possible that the “bully-victims” have the broadest range of adjustment problems, presenting difficulties common to both bullies and victims. An earlier study found that many of the bully-victims first have a history of being bullied, and then begin to bully peers later in adolescence. Thus, they suffer from both emotional and behavioral problems.

    Why is there a relationship between self-harm and bullying behavior?

    Both bullying victimization and self-harm are associated with emotional problems such as anxiety and depression. In our study we found that emotional problems and parental conflicts were important factors for the association between the bullied adolescents and self-harm, and between the bully-victims and self-harm.

    Our study showed that school behavioral problems also accounted for some of the relationship between self-harm and the bullies, and between self-harm and the bully-victims. This supports the idea that the act of bullying is part of a broader pattern of conduct problems involving aggressive and delinquent behaviors, school failure, and dropout.

    Parental support and school well-being

    School well-being (including support from teachers) was protective against self-harm for the bullied and the bully-victims

    While we see that those who are bullied and/or bullies may be more likely to engage in self-harm, we know that not everyone who is bullied or bullies self-harms. We therefore investigated what might protect against self-harm amongst the bullies, bullied and bully-victims. We found that although parental support had a protective effect on self-harm among boys and girls in general, it was especially important for those who are bullied.

    This shows that the willingness to talk to and seek help from parents during a difficult time like bullying, may protect against self-harm among adolescents.

    We were surprised to also find that school well-being (including support from teachers) was protective against self-harm for the bullied and the bully-victims. Ours appears to be the first study to investigate the buffering effect of school well-being on the relationship between bullying behavior and self-harm among adolescents. This is an important result for the prevention of self-harm. Schools, parents, and health care professionals should be aware of the importance of school well-being for adolescents who are being bullied when identifying those at risk of self-harm.

    How frequent is self-harm?

    Our study showed that fifteen percent of participating adolescents reported engaging in self-harm during the last year. This is consistent with earlier studies finding the 12-month prevalence rate to be between 10 and 19% around the world.

    Who participated in the study?

    The data we used in this study came from “Ungdata”, a large, cross-sectional national survey designed for adolescents. A total of 14,093 adolescents aged 12 to 19 years, from different parts of Norway, participated in the study; this was 87% of those invited to participate.

    Our data was collected at one point in time, so we do not know whether the bullying occurred prior to the self-harm. However, previous longitudinal studies have shown that bullying increases the risk of self-harm, and not the other way around.

    How we measured self-harm and bullying

    We measured self-harm by asking whether the adolescents had tried to harm themselves in the past 12 months. Being bullied was measured by asking adolescents whether they had been teased, threatened, or frozen out by other young people at school, in their free time, online, or on their mobile phones. Bullying other peers was measured by asking whether they had taken part in teasing, threatening, or freezing out other young people at school, in their free time, online, or by mobile phone. We created a new variable to identify those who are both bullied and who bully others (the bully-victims) by combining the variables “Bullied” and “Bullying other peers”.
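    As a minimal sketch of how such a combined variable can be derived (the survey's actual item names and response coding are not given in the post, so the names below are hypothetical):

    ```python
    # Hypothetical item coding: 1 = yes, 0 = no. The real Ungdata items
    # and response scales differ; this only illustrates the combination.
    responses = [
        {"bullied": 1, "bullies_others": 1},
        {"bullied": 1, "bullies_others": 0},
        {"bullied": 0, "bullies_others": 1},
        {"bullied": 0, "bullies_others": 0},
    ]

    def classify(r):
        """Derive the combined group from the two bullying items."""
        if r["bullied"] and r["bullies_others"]:
            return "bully-victim"
        if r["bullied"]:
            return "bullied"
        if r["bullies_others"]:
            return "bully"
        return "neither"

    groups = [classify(r) for r in responses]
    # groups == ["bully-victim", "bullied", "bully", "neither"]
    ```

    The point is simply that “bully-victim” is not a third survey question but a derived category from the intersection of the two measured ones.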


    High levels of parental support and school well-being may buffer the harmful relationship between bullying behavior and self-harm

    There is a strong link between bullying and self-harm. Interventions to address bullying may reduce self-harm. Our findings also suggest that high levels of parental support and school well-being may buffer the harmful relationship between bullying behavior and self-harm. Addressing these factors may be important in reducing the risk of self-harm among those experiencing bullying.

    The post The relationship between self-harm and bullying among adolescents appeared first on BMC Series blog.

    in BMC Series blog on March 31, 2021 05:53 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Supporting community needs with an enhanced code policy on PLOS Computational Biology

    PLOS Computational Biology has adopted an enhanced code sharing policy for all papers submitted from 30 March 2021. The change, announced in this editorial, comes in response to community needs and support for initiatives which drive Open Science practices. 

    Community drivers

    Code sharing is not new to many of our authors. Research shows that 41% of papers in PLOS Computational Biology already share code voluntarily (Boudreau et al. 2021), demonstrating the community’s willingness to make their work both rapidly available and methodologically transparent. This creates a strong foundation for implementing a stronger policy and also works towards establishing code sharing as a normal research behaviour. 

    PLOS Computational Biology defines “open” as more than the availability of a research article. Our authors, editors, and readers see data and code sharing as tools to boost reproducibility and transparency. Sharing code alongside data will allow others to check and reproduce work and ultimately drive new discoveries. Requiring authors to share their code (unless there are good reasons not to) works towards making the research published in PLOS Computational Biology as robust as possible for the benefit of the whole research community.

    In developing the policy we sought views from researchers in the computational community on the barriers they face when sharing code (Harney et al. 2021). We then tested our policy text with researchers to help inform its design and content. This has allowed us to respond to researcher needs, for example by allowing exemptions to the policy for those with legitimate legal or ethical constraints on sharing. In this way, PLOS Computational Biology continues to be shaped by policies the community has helped define, and Open Science values that reflect the way they want to communicate research. 

    Open Science at PLOS

    Open Science is one of our founding principles at PLOS. Enhancing the code sharing policy at one of our journals in response to community needs is just one step we are making to increase the adoption of Open Science practices. We aim to empower researchers to share their good practices because we believe communities are best placed to define their own needs. The computational biology community is shaping the future of research communication by leading the way in practices that make science more open and equitable for all. However, we do offer guidance and help to those who need it, for example, by detailing best practice for code sharing alongside our policy text. No matter how researchers choose to make their work Open, PLOS Computational Biology provides options to support researcher needs and the platform to influence broader change.

    Collaboration with the community and responding to their needs is part of our approach to Open Science, regardless of how that community is defined. We will be closely monitoring the reaction to the code sharing policy at PLOS Computational Biology and exploring what an enhanced policy could mean for other communities that we serve. 

    Written by Lauren Cadwallader, Open Research Manager


    Boudreau M, Poline J-B, Bellec P, Stikov N (2021) On the open-source landscape of PLOS Computational Biology. PLOS Comput Biol 17(2): e1008725. https://doi.org/10.1371/journal.pcbi.1008725

    Harney J, Hrynaszkiewicz I, Cadwallader L (2021) Code Sharing Survey 2020 – PLOS Computational Biology. figshare. Dataset. https://doi.org/10.6084/m9.figshare.13366025

    The post Supporting community needs with an enhanced code policy on PLOS Computational Biology appeared first on The Official PLOS Blog.

    in The Official PLOS Blog on March 31, 2021 04:59 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    The Erection of a Placebo

    When yesterday's placebo is tomorrow's treatment

    in Discover magazine - Neuroskeptic on March 30, 2021 12:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Next Open NeuroFedora meeting: 29 March 1400 UTC

    Photo by William White on Unsplash

    Please note the change in time for today's meeting: 1400 UTC instead of 1300 UTC

    Please join us at the next regular Open NeuroFedora team meeting on Monday 29 March at 1400 UTC in #fedora-neuro on IRC (Freenode). The meeting is public and open for everyone to attend. You can join us over:

    You can use this link to convert the meeting time to your local time. Or, you can also use this command in the terminal:

    $ date --date='TZ="UTC" 1400 today'

    The meeting will be chaired by @ankursinha. The agenda for the meeting is:

    We hope to see you there!

    in NeuroFedora blog on March 29, 2021 09:26 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Building Neuromatch Academy

    Neuromatch Academy 2020 is a three-week online summer school in computational neuroscience that took place in July of 2020. We created interactive notebooks covering all aspects of computational neuroscience – from signals and models of spikes to machine learning and behaviour. We hired close to 200 TAs to teach this material to 1,700 students from all over the world in an inverted classroom setting. Over 87% of students who started the class finished it, a number unheard of in the MOOC space.

    By now, we have a preprint out, and NMA has been covered in Nature and the Lancet. So how did we do it? Building something that goes from 0 to several hundred volunteers working tirelessly is a huge endeavor, and I’m sure a lot of the readers of this blog coming from an academic background have trouble even imagining how you can get something like that off the ground. Richard Gao wrote a great post on how it felt to be flying in this anarchist hacker spaceship as it was being built. I wanted to share some thoughts with you that I hope will be helpful if you ever want to create an experience at this scale.

    A little background: my stint into teaching

    After my time at Facebook, I wanted to focus on the academic route. I became interested in teaching and taught an introductory CS class at a local college, building my materials from scratch. I thought, rather naively, that since I knew CS, this would be fairly straightforward. But preparing course materials for beginners is anything but easy. One issue is the curse of expertise: if you know a subject pretty well, it becomes harder to explain it to a novice. For example, here’s my explanation of globals in Python:

    Python has two scopes: module and function. You have read access to module variables inside of a function. However, if you want to write to a module variable, you need to use the global keyword. Unless the variable is a reference type… If a variable is a dict or a list, for example, you can change its contents inside a function even if it’s a module variable. That’s because Python has pass-by-ref semantics for complex types. You see, default pass-by-ref semantics for complex types were originally introduced in fourth generation languages to get rid of the difficulty of granular pass-by-value or pass-by-ref semantics in languages like C. […many rambling minutes later…]. In any case, don’t use globals if you can avoid them.
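    For the record, the behavior that rambling explanation dances around fits in a few lines (a minimal sketch, with made-up names):

    ```python
    counter = 0   # module-level name bound to an immutable int
    items = []    # module-level name bound to a mutable list

    def bump():
        global counter   # needed to rebind a module-level name
        counter += 1

    def add(x):
        items.append(x)  # mutating the list's contents needs no `global`

    bump()
    add("a")
    # counter is now 1, items is now ["a"]
    ```

    The distinction is rebinding a name (requires `global`) versus mutating the object a name already points to (does not) – but, as the post argues, leading a novice through all of that up front helps no one.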

    Although this explanation is technically correct, it doesn’t fit with novice students’ current development, and giving that explanation is more likely to make them feel inadequate than enlightened. It takes a tremendous amount of attention to detail, radical empathy and craftsmanship to understand the adult learner’s motivations – both intrinsic and extrinsic – and to create materials that engage them at the right level. And creating this stuff takes a huge amount of time! By the time I finished the class, I was ready to throw away the material I created. At the same time, it felt quixotic to take half a year off to craft the perfect materials for a class that was going to be taken by ~15 students a semester. The lack of good materials meant that I had to spend much of my time in class on adjusting materials dynamically – at the end of the day, I was completely exhausted. Surely, there must be a better way!

    In response to the global pandemic, Chris Piech and Mehran Sahami decided to offer the first half of the introductory Stanford CS class to students at large, an experiment called Code in Place. Over 800 section leaders signed up to give 6 weeks of instruction on core computational concepts in Python. Here was a completely different approach to teaching a subject: very high quality materials crafted over years at Stanford – including a robot-control environment in Python called Karel – given in a hybrid setting, with the section leader (aka TA) holding Q&A sessions and solving test problems with students in an inverted classroom setting. And it worked! Students were engaged and stuck around, and they worked on self-directed projects to continue their learning. What I found is that by not having to focus so much on materials prep and the fallout from having so-so materials, I had more time to connect with the students on an emotional, social level – understanding what they’re stuck on, where they’re coming from, what their motivations are, etc.

    Karel is an environment where students learn the basics of programming – variables, if statements, for loops, and decomposition – by controlling a cute robot called Karel on a grid

    A few weeks into Code in Place, I came across Konrad Kording’s Twitter post looking for NMA volunteers. By that time, they had a core team – Megan in the Exec role, Sean as finance guru, Brad as ops, Konrad as dreamer, Gunnar and Paul on curriculum. They had just gotten started and needed people with technical skills. I took on the technical chair, things escalated, and within a few weeks I was CTO and, along with many others, was doing NMA full time. My time in industry proved to be an asset – working with short deadlines, focusing on shipping, and managing relationships with a large interdisciplinary team. Pretty soon, I was helping coordinate dozens of volunteers, a growth experience that I didn’t know I was ready for. So if you’re wondering whether industry is right for you: if you want to build big things and manage big collaborations in the future, I think it’s a great place to get that experience.

    Deciding what to build

    When you first get started on a large project like this, you get overwhelmed with ideas about what it is that you’re trying to build. One framework for orienting your thoughts is the set of Heilmeier questions:

    • What are you trying to do? Articulate your objectives using absolutely no jargon.
    • How is it done today, and what are the limits of current practice?
    • What is new in your approach and why do you think it will be successful?
    • Who cares? If you are successful, what difference will it make?
    • What are the risks?
    • How much will it cost?
    • How long will it take?
    • What are the mid-term and final “exams” to check for success?

    If you can answer these questions, you know what you’re building, and you can get a team to congregate around those ideas. This isn’t very different, conceptually, from writing a grant. For us, it was really the fundraising document that formalized a lot of our early brainstorming. The answers to these questions could be written simply as:

    • We’re building an online, 3-week intensive summer school in computational neuroscience. It will be cheap ($100 or less to attend) and accessible to all.
    • Traditional summer schools offer a great experience and the chance to create life-long scientific networks, but they are elitist and expensive. MOOCs are cheap and accessible to all but a vanishingly small percentage of people actually complete them because they don’t foster belonging.
    • We create high-quality videos and interactive notebooks and we deliver them in pods of students matched to TAs. We offer the close-knit experience of traditional summer schools with the accessibility of MOOCs in a new hybrid model. People are bored and alone because of COVID so we have a unique opportunity to show that this online inverted classroom model can work.
    • We’ll bring cheap high-quality education to thousands of people and will set the standard for open source educational materials for years to come. We can bring the model to many other areas of science.
    • Execution risk and legal risk
    • $1,500 for each TA, and we’re aiming for 200 TAs
    • We’re launching July 13th
    • We’ll track attendance, completion metrics, students surveys, website statistics, etc. to have a 360 view of our impact. Our core metric is completion of the class and we aim for 80% completion.

    Within each committee, you can take Heilmeier’s questions and consider their relevance to your committee’s duties. The curriculum committee’s goal might be to create high-quality materials for the class, while the technical committee might aim to deliver 99% uptime during the class. The point is to try to have clarity on your goals so that when crunch time happens you can prioritize.

    Running on enthusiasm

    I was really enthusiastic about taking what I learned from Code in Place and bringing that to computational neuroscience. The core team had been thinking about a new kind of summer school for years before NMA got started and they were ready to tap into their network to jump start the effort. In the early days, being enthusiastic about the project and having a great story to tell are instrumental in finding volunteers. We’re democratizing education! We’re bringing grad-level education to underserved communities! We’re doing something no one’s ever done!

    I volunteer at a soup kitchen, and one thing they emphasized during our training is that a volunteer’s time is a gift. You have to have a pipeline that translates a volunteer’s enthusiasm into action; otherwise you’re refusing to accept that gift, and that’s frustrating for the volunteer. At first, we would bring new people into our Slack workspace and expect them to find their way. We realized pretty soon that having people in the Slack is just the first step – you need to have a plan to retain volunteers. This became especially true as the Slack workspace shot up to hundreds, sometimes thousands of messages a day. Empathize with those poor people just joining in, bombarded with way too much information!

    For the technical committee, what worked well to engage new volunteers was a task triage. There’s a backlog of stuff that needs to be done – update the website, crank out some forms, clean up the GitHub page, prepare the forum, etc. In a synchronous meeting, you go through the backlog and people volunteer to take on different tasks. That means your committee head needs to have tasks partly groomed beforehand. They pull up the Kanban board during the meeting and fill it in – we used Quire, but Asana or even just a Google doc could work equally well. Everybody gets a chance to pitch in, and confusion about the scope of each task can be resolved during the meeting. It’s also a big team-building exercise: you get to see other people pitch in. We applied this method to build several efforts, including bootstrapping the technical team, the video editing team, the waxers team (see later), community managers, etc.

    If you prefer the asynchronous route, GitHub issues with a good-for-first-timers tag can be useful. Regardless of the software you use, you want to embody openness: you want the tasks to be radically transparent and for volunteers to feel maximal agency. This is a common model for open source communities that can be fruitfully adapted to education.

    Running like clockwork

    Assigning tasks of different sizes and priority as a backlog grooming session is great to engage new volunteers, but you will also need to do time-sensitive tasks. For time-sensitive things, you need a Gantt chart with a detailed day-by-day breakdown of tasks to be done with somebody accountable for each task (the single-threaded owner). I was taught the arcane art of Gantt charts by my colleagues Keith and Frances at Facebook, alums of Fitbit and the Apple hardware team. Let me tell you something, if you want to get stuff done on time, somebody that knows how to ship hardware will make it happen – when you build consumer products, you have to ship them on time, no matter what.

    The arcane art of the Gantt chart

    For an online summer school, that means having deadlines for student admissions, TA applications, first drafts of content, second drafts of content, reviews, editing, posting on github, etc. for every single day of content. The deadlines should be visible to everybody, and the people responsible for them should be accountable. Elizabeth DuPre pointed me to this talk by the creator of Elm – in open source projects, the right culture doesn’t just happen by accident, culture happens because norms are defined and reinforced. “Deadlines matter” is a norm that is different than what most people are used to in the context of giving instruction. If you’re giving your own class, you can keep editing your slides up to 2 minutes before the class. You can’t do that at NMA – your slides will be reviewed, your video recording will be edited, your video will be captioned, etc. – if you hand in your slides late, everybody down the pipeline suffers.

    If you want deadlines to matter you have to will them into existence by making them a norm inside of an organization and reinforcing that by making them visible to everybody. Keeping track of these deadlines eventually led to a daily cadence of standups, which I ran – ironic because I always hated standups in industry. Konrad once called me “a little German” for my insistence on deadlines, which is very funny since I’m almost absurdly disorganized in my personal life. Being inside of a big org is different. It’s an embodiment of a deeper principle of contractualism – we define and agree on what we owe to each other and bind ourselves to that, and that’s what defines morality within our community. The upside of that is the contract is multilateral. When we couldn’t get the ideal matching method between TAs and students to work on time, but we had a “good enough” version that was working, I could say to others that we have to ship the good enough version today, because we agreed to it; it gave me cover, even though it was an unpopular decision. That kind of clarity in planning and expectations avoids a lot of friction later on.

    You need good tools to work productively with each other

    In The Mythical Man-Month, Fred Brooks claims that “adding [more programmers] to a late software project makes it later”. When you have dozens of contributors to an open-source education curriculum, how do you avoid that? You need good tools. One of the first things the technical team worked on was a smoke test for notebooks. We were worried that one of the notebooks we use for teaching might break. Notebooks are designed for exploration and can be pretty brittle – a simple cell execution order inconsistency can break a notebook. If multiple people are editing and pushing notebooks, it’s almost guaranteed that a notebook will break at some point. If a notebook is broken, an individual TA might not have enough context to diagnose the problem, and we would have to broadcast a fix to our 200 TAs, which would be stressful for TAs and students alike.

    So we started with a really simple smoke test, written by Marco. When you push notebooks to GitHub, the continuous integration kicks in, runs your notebook from scratch, and checks whether it completes without errors. From these humble beginnings, Michael Waskom built an intricate continuous integration (CI) pipeline to check that the code runs, that the style is consistent, and to generate versions of the notebooks for students and TAs, etc. This is the thing that allowed the editors to push better versions of the notebooks on a regular basis, and it proved invaluable during a time crunch.
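    NMA's actual CI re-executed every notebook from a clean kernel; a much lighter static check, shown here as an illustrative sketch (not their script), can at least flag the cell-execution-order inconsistency mentioned above, since notebooks are just JSON with per-cell `execution_count` fields:

    ```python
    import json

    def executed_in_order(nb_source: str) -> bool:
        """True if every code cell was run exactly once, top to bottom (1, 2, 3, ...)."""
        nb = json.loads(nb_source)
        counts = [cell.get("execution_count")
                  for cell in nb["cells"]
                  if cell["cell_type"] == "code"]
        return counts == list(range(1, len(counts) + 1))

    # A tiny notebook-shaped document where the second cell ran before the first:
    nb = json.dumps({"cells": [
        {"cell_type": "code", "execution_count": 2, "source": "x = 1"},
        {"cell_type": "code", "execution_count": 1, "source": "print(x)"},
    ]})
    # executed_in_order(nb) → False
    ```

    A check like this catches the "it worked on my machine because I ran cells out of order" failure mode before it reaches a TA; actually executing the notebook from scratch, as the real pipeline did, catches far more.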

    Similarly, we used an online video editing tool (WeVideo) rather than an offline one because it allowed multiple people to contribute to video editing. That allowed Tara to oversee a dozen video editors and reassign tasks when necessary.

    You need good organization to work productively with each other

    The easiest way to get a lot of people to contribute lots of content is to have them work in silos. Each of our instruction days was more or less independent of the others, so it made sense to have different people work on each independently. The downside of this approach is that it makes for a jumbled experience for the student. Novice learners, who don’t yet have proficiency in a subject, can easily be thrown off by a change in notation, nomenclature, or tone. Day leaders created one-pagers far ahead of the content so we could diagnose missing prerequisites, incompatible learning objectives, and big bumps in difficulty across the days. But that wasn’t enough.

    The biggest contributors to a smooth experience for the students were pre-pods and waxers. When I was giving an introductory CS class, I could see the looks of confusion on students’ faces. I could adjust the content in real time (a high-wire experience, do not recommend). Ideally I would have had a test classroom run through the content ahead of time and give me detailed notes so I could improve the content before teaching it to the actual students. That’s exactly what we did with pre-pods: we hired a dozen TAs to test the content 3 weeks before the real class (pods before the real ones; hence pre-pods). They gave 360 feedback at the micro level (e.g. typos) and macro level (e.g. changes in difficulty from day to day), which could then be taken into account by the content creators. It’s not adversarial like peer review can sometimes feel – everybody is on the same side. It’s similar to UI/UX testing, where you put your product in front of naive users and watch them destroy it from the other side of the one-way mirror so you can make a better product. This kind of design thinking – ship early, ship often, iterate – can be applied to all aspects of open source education.

    The other thing we realized is that writing good notebooks is really hard: it requires the confluence of understanding the content (domain knowledge in comp neuro), programming, and radical empathy toward the learner (caring about andragogy). You also need an understanding of the house style, context across days, and the quirks of the GitHub CI. We needed a dedicated team to polish content: the waxers (we call them editors outside of NMA, but internally the name stuck). Waxers had the most stressful job of all, and I would often see messages exchanged on Slack at 3–4 in the morning about content that needed to be ready for the next day. But it worked! The notebooks were a highlight of the NMA experience, and they will keep giving value to the community for a long time.

    Ella, Michael, Tara, Madineh and I presented our pipeline for the content in this talk:

    Making decisions under uncertainty

    How do you make decisions swiftly and effectively in a big org? We had hundreds of volunteers; 134 authors on the curriculum paper alone. 134 very smart people cannot all be of the same opinion at the same time in the face of uncertainty. The first step to make decisions effectively is to separate decision making and execution. It’s very easy to leave a lot of decisions to be implicit until it’s crunch time. It happened a couple of times that I made a medium-scale decision or encouraged other people to make medium-scale decisions during a daily standup meeting. Big mistake. A lot of people affected by the decisions weren’t in the room; they felt unheard and now burdened to go to a daily meeting to stay in the loop. Don’t do that – take bigger decisions in separate planning meetings. Every software methodology has some notion of a planning meeting, whether waterfall or scrum or whatever – by the end of that meeting you should be clear about what to do for the next week or month.

    One thing that can sap a lot of energy at these meetings is having circular arguments: revisiting the same question in subsequent meetings even though you thought you had come to an agreement. Sometimes, one person thinks a decision was made, while another thinks things were still under discussion. Write down decisions in meeting notes, share them with everyone.

    Some orgs in the for-profit sector are hierarchical, so disagreements get resolved top-down. That doesn’t really work in the non-profit sector, where people are there on a volunteer basis. In non-profits and in open source communities, many orgs thus use consensus-based decision making. I think pure consensus-based decision making is very difficult to get right. I’m a member of a not-for-profit makerspace that’s built on anarchist principles. We use consensus-based decision making, and it gets rowdy: flame wars on our message board are not infrequent, whether about resource allocation, who gets the space when, or whose responsibility it is to take the trash out. Pure consensus-based decision making can have an insidious effect: people who disagree with the majority are left isolated as “difficult” and “not a team player”. That can breed resentment, which leads to personality clashes, which leads to dysfunction: decision making grinds to a halt.

    There’s an alternative to pure consensus-based decision making: disagree and commit. You replace the norm that consensus must be reached with an alternative norm:

    1. everybody’s opinion should be understood
    2. decisions are made based on majority or supermajority
    3. it’s OK to disagree with a decision, but we all bind ourselves to the decision

    To make sure that you understand somebody’s opinion – especially an opinion you don’t agree with – you can use the Steel Man technique. You restate the strongest version of their argument and engage with that. Oftentimes that will resolve whatever argument you had in the first place, or prepare you to build a hybrid solution. If you don’t come to a resolution, at least everyone will understand that they have been heard and understood. Decisions can take place in real time in a synchronous meeting, or through a polling app in Slack (if a poll, put a deadline on it so it doesn’t drag on; in either case, quorum should be reached). If somebody disagrees with the decision, the norm says that that’s good and healthy, as long as they commit to the decision like everyone else, so it doesn’t breed resentment in the long term.

    Be the change you wish to see

    My manager at Facebook often talked about positive risk: if you shoot for the moon, sometimes you can actually overshoot and accomplish more than you intended. That always sounded like nonsense to me – I’m a firm believer in Murphy’s law – but I had a chance to witness things going unexpectedly right at NMA. We planned many things, and delivered on what we planned, but perhaps our greatest success was fairly unexpected: pods worked. When we put students together with TAs and had them interact closely for hours at a time in peer programming, they created bonds. Those bonds tapped into the students’ intrinsic motivation to be part of a community, and they felt belonging. When the going got tough and they felt overwhelmed, they tapped into that support network to keep them going one more day. That’s how 87% of students managed to finish the class. One TA described the students tearing up on the last day and staying up until early morning to say their goodbyes. Yes, materials are important, but in andragogy you also need to answer students’ emotional needs. That’s the biggest differentiator between NMA and a MOOC.

    Building that experience required a ton of time from many dedicated and passionate people, people who wanted to make a difference. Sometimes emotions ran high and I butted heads – I can be a difficult person, and that’s something I need to work on. Sometimes we had legitimate disagreements, but oftentimes we were just stressed and sleep-deprived, and we were able to apologize and move on. We delivered a great experience, produced a new model for online learning, and created bonds with each other that will follow us for a long time. Should you decide to embark on an expansive adventure like that? In the words of Edwin Land (stolen from Jack Gallant’s email signature):

    Don’t undertake a project unless it is manifestly important and nearly impossible

    Edwin Land

    I’d like to thank the organizers, Megan, Brad, Gunnar, Konrad, Paul, Kate, Carsen, Xaq, Aina, Athena, Eric, John, Alex, Yiota, Emma and Beth; the waxers Michael, Madineh, Ella, Matt, Richard, Jesse, Byron, Saeed, Ameet, Spiros; everybody that contributed to technical, Titipat, Jeff, Marco, Arash, Adam, Natalie, Guga, and Tara; and everyone who contributed, whether a few hours or weeks at a time.

    Do you want to join a motley crew of education disruptors? Volunteer for NMA 2021 – calls are broadcast on Twitter.

    in xcorr.net on March 25, 2021 08:30 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    An interview with protocols.io: CEO Lenny Teytelman on partnering with PLOS

    Our ongoing partnership with protocols.io led to a new and exciting PLOS ONE article type, Lab Protocols, which offers a new avenue to share research in line with the principles of Open Science. This two-part article type gives authors the best of both platforms: protocols.io hosts the step-by-step methodology details while PLOS ONE publishes a companion article that contextualizes the study and orchestrates peer review of the material. The end result is transparent and reproducible research that can help accelerate scientific discovery.

    Read our interview with protocols.io CEO and co-founder Dr. Lenny Teytelman where he discusses what triggered the concept for the company, how researchers can benefit from this collaboration and how his team settled on such an adorable raccoon logo.

    For those who are unfamiliar, what is protocols.io? 

    It is a platform for developing and sharing reproducible methods. It serves both researchers needing secure and private collaboration and those aiming to share their knowledge publicly.

    What was the inspiration behind protocols.io? Did you have any particular experiences or issues as a researcher that led to you believing protocol sharing was an under-supported element of the scientific community? 

    It was a year and a half of my postdoc that went into correcting just a single step of the microscopy methods that I was using. Instead of a microliter of a chemical, the step needed five, and instead of a fifteen-minute incubation with it, it needed an hour. The problem is that this isn’t a new technique but is a correction of a previously published method. That means I didn’t get any credit for that year and a half, and everyone using this method was either getting misleading results or had to spend one or two years rediscovering what I know—rediscovering what I’d love to share with them, but didn’t have a good way of doing.

    This is the experience that led to my obsession with creating a central place to facilitate the sharing of such knowledge, within research groups and companies and broadly with the world.

    As authors of PLOS ONE began to share methods, we realized that all researchers and disciplines had a need for a dedicated tool for proper method sharing… We now warmly welcome methods from all fields and disciplines, including psychology, social sciences, clinical, chemistry, and so on.

    – Lenny Teytelman, CEO and co-founder of protocols.io

    Protocols.io launched in 2014 — how were the first few years? How were you able to make yourself known and show that your product is worthwhile and secure for researchers? 

    Oh, you just touched on a painful topic. I actually gave a talk at a conference in 2018, called The harsh lessons from 4 years of community building at protocols.io. The first years were surprisingly rough. As an academic with zero entrepreneurial experience, I naively expected that once we built the platform, I’d tell my scientist friends that it existed, and through word of mouth, it would go viral. It turns out that is not how new initiatives work. There is a lot of dedicated work needed to build trust and visibility and simply let researchers know that you exist.

    It took the support of publishers, societies, funders, and Open Science and reproducibility champions to climb out of obscurity. Speaking of that support, the 2017 partnership between PLOS and protocols.io was critical in helping the research community learn about protocols.io.

    How has the platform evolved since then? What elements have been consistent and what have been some major changes? 

    Our vision and mission have not really changed since 2014, but thanks to constant feedback from the research community – and to our brilliant CTO and co-founder Alexei Stoliartchouk listening to that input – the product has grown from a rudimentary website into a powerful tool with amazing functionality to support the sharing of method details and to help in the daily work of researchers. For a fun comparison with where we were at launch in February 2014, take a look at our Kickstarter video.

    One of the early changes was in scope, a year after we launched. We initially thought that this was a platform for experimental wetlab scientists, but soon after launch, requests started to come in to expand it to support computational/bioinformatics workflows. And then in 2017, there was another expansion in scope, catalyzed by that same 2017 partnership with PLOS. As authors of PLOS ONE began to share methods, we realized that all researchers and disciplines needed a dedicated tool for proper method sharing. This actually led us to change our welcome page from “Repository for Life Science Methods” to “Repository for Research Methods.” We now warmly welcome methods from all fields and disciplines, including psychology, social sciences, clinical, chemistry, and so on.

    How important is the new PLOS ONE Lab Protocol article type to the journey and mission of protocols.io? 

    Soon after our launch, when I realized that things don’t just “go viral”, I began to think a lot about incentives. It occurred to me some time in 2015 that it would be amazing if researchers had a way of turning their protocols into peer-reviewed papers. I was looking at the F1000Research model and wondering if we could add peer review to protocols.io. The problem was that our scope was so broad that we would need thousands of academic editors to support the peer review — essentially we’d have to build PLOS ONE. Then, in 2017, when we started working with PLOS, I realized, “We don’t have to build PLOS ONE! It already exists; we just need to partner!”

    And while this partnership is a big deal for protocols.io, I am particularly excited about it because of what it can do for reproducibility and Open Science. As I said when we announced the launch, “We’re thrilled to extend our partnership with PLOS by launching this new modular article type. This will provide authors with all of the benefits of rigorous peer review, plus a dynamic and interactive platform to present their protocols, with support for videos and precise details that are important for adopting and building upon the published methods.”

    What type of research does not need to be as fast as possible? Do malaria or pediatric cancer patients somehow have the luxury of time? Can our planet afford delays in climate research? Open and rapid sharing as we see today for COVID-19 must be the norm, not an exception.

    – Lenny Teytelman, on the importance of Open Science

    PLOS and protocols.io have similar mission statements and nearby offices: how did this collaboration with PLOS start? 

    Many people assume that our collaboration with PLOS is a consequence of me being co-advised as a graduate student by Professor Michael Eisen, co-founder of PLOS. That actually is not the case. It is true that as soon as I realized in 2012 that something like protocols.io needs to be built, I called Professor Eisen, described the idea, and said, “PLOS should build such a platform.” But he replied, “PLOS is a publisher, not a software developer. You need to build it.”

    So in fact, it is the Chief Science Officer of PLOS, Dr. Veronique Kiermer, who is the key early lead on the PLOS/protocols.io connection. Dr. Kiermer was the founding editor of Nature Methods and has a passion for reproducibility and a deep appreciation for the essential role that protocols play in the research cycle. We met at an Open Access mixer in San Francisco when she had just moved to PLOS. As I showed her what we had built, she asked countless questions of “can it do this and that” and I showed her the “yes” answers right on my phone. She got excited and said, “This is exactly what I always dreamed of. We should do something together!”

    What has the COVID-19 pandemic taught you about the importance of Open Science and supporting collaborative research opportunities? 

    I’ve been stunned by the extent of rapid sharing and collaborative spirit in the research world, united against the COVID-19 pandemic. It is how you ideally imagine science working. It is how science should work. We’ve seen a remarkable level of rapid method sharing in the SARS-CoV-2 group. But it’s not just the methods; researchers are sharing data, preprints, code in precisely the way that accelerates and amplifies everyone’s efforts around the globe.

    I am simultaneously inspired by what I see and frustrated that this isn’t yet the norm. In 2015, I watched the world and publishers declare that rapid sharing and immediate Open Access were essential for Ebola research. Then the same in 2016 for Zika. Then in 2019 for all research related to the opioid crisis. In 2020 and 2021, it’s COVID-19. It’s frustrating that we make exceptions for pandemics and crises and then go back to the traditional, stunted way of sharing.

    What type of research does not need to be as fast as possible? Do malaria or pediatric cancer patients somehow have the luxury of time? Can our planet afford delays in climate research? Open and rapid sharing as we see today for COVID-19 must be the norm, not an exception.

    What’s next for protocols.io? (How can you continue to help scientists adopt Open Science activities?) 

    The beautiful part of our growth (we have over 9,000 public protocols now and have been roughly doubling every year since our launch) is that with increased sharing and more researchers on protocols.io, we have also received more feedback and requests than ever. As more requests and suggestions come in, our appetite for improving and enhancing the platform only increases. It’s kind of like research itself – each answer leads to more questions and more thirst for experiments.

    We are also genuinely excited about the PLOS ONE Lab Protocols and we look forward to the first papers being published and to the future developments in this partnership. We’re just getting started.

    Bonus Question: How did you decide on the cute raccoon logo? 

    Too many reasons to list here! Read this Twitter thread to find out!

    Thank you to Lenny for his time and thoughtful answers. Be sure to visit protocols.io to start browsing through study methodologies or read more about Lab Protocols.

    The post An interview with protocols.io: CEO Lenny Teytelman on partnering with PLOS appeared first on The Official PLOS Blog.

    in The Official PLOS Blog on March 23, 2021 04:04 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    2-Minute Neuroscience: Motor Neurons

    Motor neurons are neurons that carry information from the brain or spinal cord to regulate activity in muscles or glands. In this video, I will discuss upper and lower motor neurons. I’ll cover their functions and discuss the syndromes that result from damage to motor neurons: upper motor neuron syndrome and lower motor neuron syndrome.

    in Neuroscientifically Challenged on March 23, 2021 10:23 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    How to Exploit Fitness Landscape Properties of Timetabling Problem: A New Operator for Quantum Evolutionary Algorithm

    This week in our Journal Club session, Mohammad Hassan Tayarani Najaran will talk about the paper "How to Exploit Fitness Landscape Properties of Timetabling Problem: A New Operator for Quantum Evolutionary Algorithm".

    The fitness landscape of the timetabling problems is analyzed in this paper to provide some insight into the properties of the problem. The analyses suggest that the good solutions are clustered in the search space and there is a correlation between the fitness of a local optimum and its distance to the best solution. Inspired by these findings, a new operator for Quantum Evolutionary Algorithms is proposed which, during the search process, collects information about the fitness landscape and tries to capture the backbone structure of the landscape. The knowledge it has collected is used to guide the search process towards a better region in the search space. The proposed algorithm consists of two phases. The first phase uses a tabu mechanism to collect information about the fitness landscape. In the second phase, the collected data are processed to guide the algorithm towards better regions in the search space. The algorithm clusters the good solutions it has found in its previous search process. Then, when the population has converged and become trapped in a local optimum, it is divided into sub-populations and each sub-population is assigned to a cluster. The information in the database is then used to reinitialize the q-individuals, so they represent better regions in the search space. This way the population maintains diversity and by capturing the fitness landscape structure, the algorithm is guided towards better regions in the search space. The algorithm is compared with some state-of-the-art algorithms from PATAT competition conferences and experimental results are presented.
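    The reinitialization idea described above – collect good solutions, estimate the per-bit "backbone" they share, and bias the q-individuals' probability vectors towards it – can be illustrated with a toy sketch. The function names, the 0.8 blending strength, and the data below are illustrative assumptions, not the paper's code:

```python
# Illustrative sketch only: bias a binary QEA's q-individuals towards the
# "backbone" shared by good solutions found earlier in the search.
import numpy as np

def backbone_probabilities(good_solutions, strength=0.8):
    """Blend the per-bit mean of good solutions with an unbiased 0.5 prior."""
    bits = np.asarray(good_solutions, dtype=float)
    return strength * bits.mean(axis=0) + (1.0 - strength) * 0.5

def sample_individual(probs, rng):
    """'Observe' a q-individual: draw a concrete binary candidate solution."""
    return (rng.random(len(probs)) < probs).astype(int)

good = [[1, 1, 0, 0], [1, 1, 0, 1], [1, 0, 0, 0]]
probs = backbone_probabilities(good)
# Bits fixed at 1 (or 0) across all good solutions get probability 0.9 (or 0.1),
# so sampled candidates stay near the backbone without losing all diversity.
candidate = sample_individual(probs, np.random.default_rng(0))
```

    Sampling from the blended probabilities yields new candidates close to the backbone, while the 0.5 prior keeps some diversity in the sub-population.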


    Date: 2021/03/17
    Time: 14:00
    Location: online

    in UH Biocomputation group on March 17, 2021 04:33 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Open Response to “Economic impact of UKRI open access policy” report

    On February 17, 2021, FTI Consulting released the report: “Economic assessment of the impact of the new Open Access policy developed by UK Research and Innovation” (full title) prepared for the Publishers Association (PA) of the UK. While we are not setting out to provide an extensive review and analysis of this report, we do want to generally refute the assertion that OA via the UKRI policy is economically damaging, and we’ll provide some references that support this position.

    Open Access and Open Research are an opportunity

    The FTI/PA report asserts that the impact of the new UKRI policy could lead to a loss in revenue to “UK-based journals” and have a broader negative impact on UK competitiveness, economic growth and position as a “global research hub.” This assessment appears to use impact on traditional publication models as the basis for this conclusion. A problem here, however, is that it considers the scholarly publishing industry as somehow separate from the broader economic impact of making research open. Previous reports from the PA itself have shown that the link to the economy is much broader than this depiction. The UK’s R&D roadmap clearly embraces open research and rapid open dissemination as not just good for science and health (especially during the current pandemic) but as a core part of “a world-leading system that unlocks innovation and growth throughout all parts of the economy.” Open Access enables this entire ecosystem to operate more efficiently and cost-effectively. When coupled with Open Research, it brings the additional benefits of increasing trust in science, ever more important during these difficult times.

    Open Research (of which Open Access is one part) offers even greater opportunity both in terms of discovery and economic growth. For example, the European Data Portal’s report on the economic impact of open data identifies the potential for 15.7% growth across a range of sectors – including scientific & technical, communication, transport, finance, health, retail and agriculture – and significant cost savings across the economy. We believe that a focus on Open Research, and all the efficiencies it brings, is worth considering for the UK economy especially in light of the combined impacts of the pandemic and Brexit (whatever one’s position on Brexit is).

    Furthermore, Open Access is the fastest growing publishing model. A recent report from the scholarly analytics firm, Dimensions, shows that “in 2020 […] more outputs were published through Open Access channels than traditional subscription channels globally.” The report further shows that the majority of the growth in OA is fueled by Gold OA, i.e. OA supported by some kind of business or sustainability model. It is ever more futile, in our opinion, to take such a critical and isolated position against the UKRI policy when one can argue that OA is an outcome that will simply continue to happen. It is far more beneficial, as the policy itself proposes, to establish the UK as an early adopter, leader, and beneficiary of OA and its opportunities.

    Publishers can and should lead

    The benefits of OA, the opportunities created by OA, and the desire for OA by the public and funders like UKRI, are clear. Contrary to Section 4.44 of the report, which mischaracterizes PLOS as publishing “at cost”, it is entirely possible to maintain value for authors, funders, and institutions while operating a sustainable, surplus-generating business model. PLOS believes that publishers globally should be leading the efforts to devise and develop the next generation of business models that are able to support their operations in an Open Access context. This will, of course, require deep, and sometimes difficult, work by transitioning publishers. But we strongly believe this work is not outside the acceptable effort level of conscientious members of the scholarly publishing industry that have been aware of Open Access, and its benefits, for at least the last 20 years.

    We would like to thank the UKRI for its efforts to hear from stakeholders in the scholarly publishing community, and beyond, regarding the proposed policy. We understand it is a discussion rife with passionate viewpoints and unseen complexities, and so we would like to ensure that one point is clear: to successfully set up the most efficient, frictionless Open Access ecosystem, we should leverage the existing budgets and infrastructures of scholarly publishing but with OA as the outcome. This way, Open Access is not viewed as a destructive force, or something external and different that traditional publishers are not part of, but simply as the new way to publish and communicate research that all publishers can facilitate.



    The post Open Response to “Economic impact of UKRI open access policy” report appeared first on The Official PLOS Blog.

    in The Official PLOS Blog on March 17, 2021 02:04 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Next Open NeuroFedora meeting: 15 March 1300 UTC

    Photo by William White on Unsplash.

    Please join us at the next regular Open NeuroFedora team meeting on Monday 15 March at 1300UTC in #fedora-neuro on IRC (Freenode). The meeting is a public meeting, and open for everyone to attend. You can join us over:

    You can use this link to convert the meeting time to your local time. Or, you can also use this command in the terminal:

    $ date --date='TZ="UTC" 1300 today'

    The meeting will be chaired by @nerdsville. The agenda for the meeting is:

    We hope to see you there!

    in NeuroFedora blog on March 15, 2021 10:18 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Update: Name change policy

    arXiv is updating its name change policy to provide registered authors with more control over their online identities. This approach, which reduces barriers to changing public records, fosters diversity and reflects our inclusive ethos.

    arXiv now offers the following name change options:

    1. In full text works: the author name can be changed in the PDF and/or LaTeX source where it appears in the author list, acknowledgments, and email address
    2. In metadata: the name and email address can be changed in the author list metadata and in the submission history metadata for all existing versions
    3. In user accounts: the name, username, and email address can all be changed

    We are not currently able to support name changes in references and citations of works. Also, arXiv cannot make changes to other services, including third party search and discovery tools that may display author lists for papers on arXiv. arXiv will continue to evaluate and adapt its policies as best practices evolve, and to allow users to directly manage their identity, while maintaining discoverability.

    arXiv began discussing this issue in 2018 at the request of arXiv community members. We then consulted with arXiv’s advisory boards, in addition to librarians, publishers, and diversity experts. This latest update is an outgrowth of this work and reflects arXiv’s support for name changes.

    arXiv’s new policy aligns with guiding principles recently provided by the Committee on Publication Ethics (COPE), a global organization that aims to integrate ethical practices as a normal part of the publishing culture worldwide. The group expects to release more complete guidance on the issue later this year.

    If you would like to request a name change, please contact us through our user support portal or at help@arxiv.org.

    in arXiv.org blog on March 11, 2021 02:00 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    World Kidney Day 2021: The Goal of Life Participation

    Today, 11 March 2021, BMC Nephrology is celebrating World Kidney Day. This awareness campaign started in 2006 as a joint effort of the International Society of Nephrology and the International Federation of Kidney Foundations with the goal of increasing awareness of kidney disease worldwide.

    Themes over the years have highlighted the impact of risk factors such as diabetes and obesity, and have focused on transplantation and the health of women and children. This year’s theme is “Living Well with Kidney Disease”, with the goal of patient empowerment and life participation. This is a striking reset in priorities. There has been a tremendous focus on outcomes as measured by lab values and hospitalization rates, with reimbursements affected positively or negatively when targets are not met. Protocol development by insurers and hospital systems, along with guidelines, has reinforced the emphasis on data. This year’s theme reframes the care of individuals with kidney disease as improving outcomes so that they can continue to participate in their lives. The theme also emphasizes that meeting laboratory targets and following protocols do not equate to fully taking care of the patient.

    Life participation is not something easily measured and cannot really be determined without the input of the patient. For some, it will be the ability to work, to participate in family activities, to vacation, or to control their symptoms. It puts patients and their caregivers at the center of their treatment plan and of setting the goals of their care. To deliver this, healthcare providers will need to provide the education and support that empower patients and caregivers to take a more active role and to have discussions about what is important to them.

    This year’s World Kidney Day theme is a good reminder to all health providers that we are taking care of the individual not just treating a disease process.

    This year’s World Kidney Day theme is a good reminder to all health providers that we are taking care of the individual, not just treating a disease process. Hearing our patients’ concerns, challenges, and what they value should be part of our routine. Kidney disease for many is a life-changing diagnosis. While we cannot change the diagnosis of kidney disease, addressing the anxiety and frustrations many patients have will help their overall care. The initiative of living well with kidney disease serves to remind nephrology providers that kidney disease is part of the patient’s life but should not be their entire life. I would take this further: this may be a reminder for all of us taking care of individuals with a variety of health conditions. We need to take care of the individual, not just the disease.

    The post World Kidney Day 2021: The Goal of Life Participation appeared first on BMC Series blog.

    in BMC Series blog on March 11, 2021 08:30 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    A Farewell to ALM, but not to article-level metrics!

    Author: Chris Haumesser, Director of Platform and Engineering at PLOS

    Understanding the impact of research is of vital importance to science. The altmetrics movement is defined by its belief that research impact is best measured by diverse real-time data about how a research artifact is being used and discussed. These metrics serve as “alternatives” to relying solely on traditional shorthand assessments like citation counts and Journal Impact Factor.

    PLOS was an early adopter and advocate of these metrics, with its internally-developed Article Level Metrics (ALM) platform among the earliest implementations to launch in 2009. By displaying data from multiple sources, PLOS ALM showed how an article was being read, downloaded, cited, discussed and shared, accessible from the article itself by anyone.

    In its decade of service to the PLOS community, ALM has helped blaze a trail for others to follow. Today altmetrics are so commonly taken for granted that the “alternative” moniker seems anachronistic – the ubiquity of these metrics a testament to the return on our investment in ALM.

    In fact, the altmetrics movement has been so successful that it has spawned a market of providers who specialize in collecting and curating metrics and metadata about how research outputs are used and discussed. 

    One of these services, in particular, has far outpaced the reach and capabilities of ALM, and PLOS is now excited to pass the baton of our altmetrics operations to the experts at Altmetric.

    Given the historical significance of ALM, it’s not an easy decision to say goodbye. But PLOS is as committed as ever to providing our community with the best possible data about how their research is changing the world. Altmetric’s singular focus on research metrics positions them to deliver on this promise with a breadth and depth that PLOS simply can’t match as an organization with competing priorities.

    Partnering with Altmetric will unlock data from many sources beyond ALM that were previously untracked. After an extensive analysis, PLOS is confident that Altmetric will provide increased coverage across the board for the vast majority of papers in our corpus. 

    Beginning today, the data displayed on the “Metrics” tab of our published articles will all come from Altmetric, and the “Media Coverage” tab of our published articles will link to Altmetric’s media coverage. Each article will also have a link to an Altmetric details page, which displays extensive detailed metrics for the article. 

    As part of this transition, authors may see their metrics change due to the change in data provider. We expect authors to see some metrics increase due to Altmetric’s increased coverage and new sources. However, we are retiring some areas of our metrics provision entirely. Unfortunately, our articles will no longer display PMC usage counts, as these were aggregated by ALM and will no longer be available. We will also be removing recent tweets from articles and retiring the ALM Reports service.

    Moving forward we will continue to evaluate the presentation of metrics on our articles and look for ways to integrate even more relevant data from Altmetric into our user experience.

    The post A Farewell to ALM, but not to article-level metrics! appeared first on The Official PLOS Blog.

    in The Official PLOS Blog on March 10, 2021 03:01 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Minkowski Metric, Feature Weighting and Anomalous Cluster Initializing in K-Means Clustering

    This week in our Journal Club session, Deepak Panday will talk about the paper "Minkowski Metric, Feature Weighting and Anomalous Cluster Initializing in K-Means Clustering".

    This paper represents another step in overcoming a drawback of K-Means, its lack of defense against noisy features, using feature weights in the criterion. The Weighted K-Means method by Huang et al. (2008, 2004, 2005) [5, 7] is extended to the corresponding Minkowski metric for measuring distances. Under Minkowski metric the feature weights become intuitively appealing feature rescaling factors in a conventional K-Means criterion. To see how this can be used in addressing another issue of K-Means, the initial setting, a method to initialize K-Means with anomalous clusters is adapted. The Minkowski metric based method is experimentally validated on datasets from the UCI Machine Learning Repository and generated sets of Gaussian clusters, both as they are and with additional uniform random noise features, and appears to be competitive in comparison with other K-Means based feature weighting algorithms.
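    The weighted distance at the heart of the method can be sketched in a few lines: under the Minkowski metric with exponent p, each feature's contribution is rescaled by its weight. A minimal illustration (the function names and toy data are mine, not the paper's):

```python
# Minimal sketch of cluster assignment under a feature-weighted Minkowski
# metric, as in Minkowski Weighted K-Means. Names and data are illustrative.
import numpy as np

def minkowski_weighted_distance(x, center, weights, p):
    """d(x, c) = sum_v (w_v * |x_v - c_v|) ** p."""
    return float(np.sum((weights * np.abs(x - center)) ** p))

def assign_clusters(X, centers, weights, p=2):
    """Label each row of X with its nearest center under the weighted metric."""
    return [int(np.argmin([minkowski_weighted_distance(x, c, w, p)
                           for c, w in zip(centers, weights)]))
            for x in X]

X = np.array([[0.0, 0.0], [0.1, 10.0], [5.0, 0.0]])
centers = np.array([[0.0, 0.0], [5.0, 0.0]])
# Down-weighting the noisy second feature lets the second point join
# cluster 0 despite its large value on that feature.
weights = np.array([[1.0, 0.01], [1.0, 0.01]])
print(assign_clusters(X, centers, weights))  # -> [0, 0, 1]
```

    The full algorithm also updates the weights themselves during the iterations; the sketch only shows why, under the Minkowski metric, they act as intuitive feature rescaling factors that suppress noisy features.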


    Date: 2021/03/10
    Time: 14:00
    Location: online

    in UH Biocomputation group on March 10, 2021 03:01 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Highlights of the BMC Series – February 2021

    BMC Public Health

    Associations between occupation and heavy alcohol consumption in UK adults aged 40–69 years: a cross-sectional study using the UK Biobank

    Alcohol consumption and its associated consequences, including cancers and heart disease, remain a major public health challenge. Investigating the factors that contribute to alcohol consumption can help determine where to target intervention resources.

    The authors found strong associations between occupations and heavy alcohol consumption

    Thompson and Pirmohamed, researchers from the University of Liverpool, investigated the association between occupation and heavy alcohol consumption in working individuals aged 40-69 years in the UK, recruiting participants through the UK Biobank.

    The authors found strong associations between occupations and heavy alcohol consumption, with jobs identified as skilled trades the most likely to be associated with heavy alcohol consumption. The largest ratios for heavy drinkers were observed for publicans and managers of licensed premises, industrial cleaning process occupations, and plasterers, whereas clergy, physicists, geologists and meteorologists, and medical practitioners were least likely to be heavy drinkers. The authors’ findings help to determine which employment sectors may benefit most from health promotion programs.


    BMC Anesthesiology

    A gap existed between physicians’ perceptions and performance of pain, agitation-sedation and delirium assessments in Chinese intensive care units

    This highlights the need for prompt quality improvement

    Pain, agitation-sedation and delirium (PAD) management are key elements in the care of critically ill patients. However, previous research has highlighted the gap between actual clinical practices and physicians’ attitudes to PAD management. Zhou et al. investigated the current practice of PAD assessments in Chinese ICUs by a one-day point prevalence study combined with an on-site questionnaire survey.

    Fig. 5 taken from Zhou et al.

    The authors concluded that the actual PAD assessment rate was suboptimal, especially with regard to delirium screening. There was a significant gap between actual practice and the physicians’ perception of that practice. Physicians reported assessing pain and agitation-sedation in only approximately 20 to 25% of patients, which is lower than previous reports. The study therefore highlights the need for prompt quality improvement and the optimization of PAD management practices in ICUs in China.


    BMC Gastroenterology

    Impact of improvement of sleep disturbance on symptoms and quality of life in patients with functional dyspepsia

    Many patients with functional gastrointestinal disorders have sleep disturbance and this impacts their quality of life. However, it is not yet fully understood how sleep disturbance affects the pathophysiology of functional dyspepsia (FD). Kuribayashi et al. carried out a prospective study on 20 patients to investigate the relationship between FD and sleep disturbance. Patients took sleep aids for 4 weeks and filled out questionnaires before and after taking sleep aids.

    Sleep disturbance was significantly improved by 4-week administration of sleep aids

    The authors found that sleep disturbance was significantly improved by the 4-week administration of sleep aids and that, as a result, GI symptoms, anxiety, and quality of life in patients with FD also improved. In addition, the authors concluded that the use of sleep-inducing drugs was associated with reduced pain as well as improvement of dyspeptic symptoms in FD patients. Overall, the study highlights the potential benefits of sleep aids for patients with FD and sleep disturbance, although multicenter studies involving larger numbers of cases are needed for further investigation.



    BMC Research Notes

    Trunk and lower limb muscularity in sprinters: what are the specific muscles for superior sprint performance?

    Previous research has reported that many muscles of the trunk and lower limb are larger in sprinters than in non-sprinters. However, the specific muscles that contribute to sprinters’ superior sprint performance have not been fully identified. Suga et al. examined the relationships between trunk and lower limb muscle cross-sectional areas and sprint performance in well-trained male sprinters.

    Their findings showed that larger absolute and relative cross-sectional areas of the psoas major and gluteus maximus correlated with better personal-best 100 m sprint times. The psoas major and gluteus maximus may therefore be the specific muscles underlying superior sprint performance in sprinters. The study also corroborates previous studies suggesting that the hamstrings may not be important muscles for achieving superior sprint performance.


    BMC Women’s Health

    Alleviating psychological distress associated with a positive cervical cancer screening result: a randomized controlled trial

    Cervical cancer is the fourth most common cancer among women globally and cytology-based (Pap smear) screening is important for early detection and treatment. Although cervical cancer screening is beneficial and can enable early detection, a positive screening result might cause psychological burden. As a result, this may influence the decision to undergo further examination and future screening for cervical cancer.

    Psychological distress appeared to be higher in the control group

    Isaka et al. carried out a randomized controlled trial in Japan, the intervention being the provision of cervical cancer information and cervical cancer screening information through a leaflet. The authors’ aim was to evaluate whether the leaflet would help reduce psychological distress. Women who were about to undergo cervical cancer screening were randomly assigned to receive hypothetical screening results either with or without the leaflet. Following the intervention, psychological distress appeared to be higher in the control group than in the intervention group among those who received a hypothetical positive screening result. The authors therefore concluded that information provision might help reduce psychological distress and recommend that cervical cancer screening programs provide participants with all relevant information.


    The post Highlights of the BMC Series – February 2021 appeared first on BMC Series blog.

    in BMC Series blog on March 09, 2021 11:59 AM.

  •

    Dev session: James Knight, Thomas Nowotny: GeNN

    The GeNN simulator

    James Knight and Thomas Nowotny will introduce the GeNN simulation environment and discuss its development in this dev session.

    The abstract for the talk is below:

    Large-scale numerical simulations of brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. Similarly, spiking neural networks are also gaining traction in machine learning with the promise that neuromorphic hardware will eventually make them much more energy efficient than classical ANNs. In this dev session, we will present the GeNN (GPU-enhanced Neuronal Networks) framework [1], which aims to facilitate the use of graphics accelerators for computational models of large-scale spiking neuronal networks to address the challenge of efficient simulations. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. GeNN was originally developed as a pure C++ and CUDA library but, subsequently, we have added a Python interface and OpenCL backend. The Python interface has enabled us to develop a PyNN [2] frontend and we are also working on a Keras-inspired frontend for spike-based machine learning [3].
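To make the code-generation idea concrete: GeNN model code snippets use a `$(variable)` substitution syntax that the library expands into GPU code. The sketch below is a hypothetical, heavily simplified illustration of that idea in pure Python — it is not GeNN's actual API or templates, and the model snippet and names are made up for this example.

```python
# A hypothetical sketch of the code-generation idea (loosely modeled on GeNN's
# "$(variable)" snippet syntax, but NOT GeNN's actual API or templates):
# a neuron model is plain data, and a CUDA-style kernel string is generated from it.

LIF_UPDATE = "$(V) += (dt / $(TauM)) * ($(Vrest) - $(V) + $(Rm) * $(Isyn));"

def generate_update_kernel(pop_name, n_neurons, update_code, params):
    """Expand a model's update snippet into a CUDA-like kernel string."""
    body = update_code
    for name, value in params.items():
        body = body.replace(f"$({name})", repr(value))     # bake constants in
    body = body.replace("$(V)", f"V_{pop_name}[i]")        # per-neuron state
    body = body.replace("$(Isyn)", f"Isyn_{pop_name}[i]")  # per-neuron input
    return (
        f"__global__ void update_{pop_name}(float *V_{pop_name}, "
        f"float *Isyn_{pop_name}, float dt) {{\n"
        f"    int i = blockIdx.x * blockDim.x + threadIdx.x;\n"
        f"    if (i < {n_neurons}) {{ {body} }}\n"
        f"}}\n"
    )

kernel = generate_update_kernel("exc", 1000, LIF_UPDATE,
                                {"TauM": 20.0, "Vrest": -65.0, "Rm": 10.0})
```

The real library does far more (memory layout, sparse connectivity, backend selection between CUDA and OpenCL), but the principle is the same: users write model descriptions, not GPU code.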

    In the session we will briefly cover the history and basic philosophy of GeNN and show some simple examples of how it is used and how it works inside. We will then talk in more depth about its development with a focus on testing for GPU dependent software and some of the further developments such as Brian2GeNN [4].

    in INCF/OCNS Software Working Group on March 04, 2021 12:08 PM.

  •

    Spatiotemporal Dynamics of the Brain at Rest: Exploring EEG Microstates as Electrophysiological Signatures of BOLD Resting State Networks

    This week on Journal Club session David Haydock will talk about a paper "Spatiotemporal Dynamics of the Brain at Rest: Exploring EEG Microstates as Electrophysiological Signatures of BOLD Resting State Networks".

    Neuroimaging research suggests that the resting cerebral physiology is characterized by complex patterns of neuronal activity in widely distributed functional networks. As studied using functional magnetic resonance imaging (fMRI) of the blood-oxygenation-level dependent (BOLD) signal, the resting brain activity is associated with slowly fluctuating hemodynamic signals (10 s). More recently, multimodal functional imaging studies involving simultaneous acquisition of BOLD-fMRI and electroencephalography (EEG) data have suggested that the relatively slow hemodynamic fluctuations of some resting state networks (RSNs) evinced in the BOLD data are related to much faster (100 ms) transient brain states reflected in EEG signals, that are referred to as "microstates".

    To further elucidate the relationship between microstates and RSNs, we developed a fully data-driven approach that combines information from simultaneously recorded, high-density EEG and BOLD-fMRI data. Using independent component analysis (ICA) of the combined EEG and fMRI data, we identified thirteen microstates and ten RSNs that are organized independently in their temporal and spatial characteristics, respectively. We hypothesized that the intrinsic brain networks that are active at rest would be reflected in both the EEG data and the fMRI data. To test this hypothesis, the rapid fluctuations associated with each microstate were correlated with the BOLD-fMRI signal associated with each RSN.

    We found that each RSN was characterized further by a specific electrophysiological signature involving from one to a combination of several microstates. Moreover, by comparing the time course of EEG microstates to that of the whole-brain BOLD signal, on a multi-subject group level, we unraveled for the first time a set of microstate-associated networks that correspond to a range of previously described RSNs, including visual, sensorimotor, auditory, attention, frontal, visceromotor and default mode networks. These results extend our understanding of the electrophysiological signature of BOLD RSNs and demonstrate the intrinsic connection between the fast neuronal activity and slow hemodynamic fluctuations.
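The key step in relating 100 ms microstate dynamics to ~10 s BOLD fluctuations is convolving the microstate time course with a hemodynamic response function (HRF) before correlating it with the fMRI signal. Here is a minimal sketch of that step on synthetic data, assuming a standard double-gamma HRF; the sampling rates, toy signals, and parameter values are illustrative choices, not the paper's actual pipeline.

```python
import numpy as np
from math import gamma

def canonical_hrf(dt, duration=32.0, a1=6.0, a2=16.0, ratio=1/6):
    """Double-gamma hemodynamic response function, sampled every dt seconds."""
    t = np.arange(0.0, duration, dt)
    h = t**(a1 - 1) * np.exp(-t) / gamma(a1) \
        - ratio * t**(a2 - 1) * np.exp(-t) / gamma(a2)
    return h / h.sum()

def microstate_regressor(occurrence, dt, tr):
    """Convolve a fast (dt-resolution) microstate occurrence series with the
    HRF, then resample it at the fMRI repetition time (TR)."""
    conv = np.convolve(occurrence, canonical_hrf(dt), mode="full")[:len(occurrence)]
    return conv[::int(round(tr / dt))]

rng = np.random.default_rng(0)
dt, tr, n_sec = 0.1, 2.0, 300                            # ~100 ms states, 2 s TR
occ = (rng.random(int(n_sec / dt)) < 0.2).astype(float)  # toy occurrence series
reg = microstate_regressor(occ, dt, tr)
bold = reg + 0.05 * rng.standard_normal(len(reg))        # toy RSN BOLD signal
r = np.corrcoef(reg, bold)[0, 1]                         # microstate-RSN correlation
```

Repeating this correlation for every (microstate, RSN) pair yields the kind of electrophysiological signature matrix the abstract describes.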


    Date: 2021/03/05
    Time: 14:00
    Location: online

    in UH Biocomputation group on March 03, 2021 06:05 PM.

  •

    Announcing the PLOS Scientific Advisory Council

    Author: Veronique Kiermer, Chief Scientific Officer

    We are delighted to announce the creation of the PLOS Scientific Advisory Council, a small group of active researchers with diverse perspectives to help us shape our efforts to promote Open Science, globally.

    PLOS, as a non-profit organization committed to empowering researchers to change research communication, cannot successfully pursue its mission without listening to the research community. We regularly survey and consult the research communities we work with, formally and informally, to guide our choices and developments. The organization’s governance, including our Board of Directors, has always involved active researchers. And we derive great insight and advice from our continuous exchange with the academic Editors of PLOS journals. 

    We’ve decided to take an additional formal step and create a forum where the researchers who contribute to PLOS through different channels can interact directly with each other, and to ensure that this forum includes voices from researchers around the globe. 

    We’ve created the PLOS Scientific Advisory Council, a small group of researchers who represent varied scientific and career perspectives, who will advise PLOS executive and editorial leadership on strategic questions of scientific interest. 

    At this point, the Scientific Advisory Council is deliberately small–about a dozen individuals–to ensure in-depth discussions and engagement, but we’ve strived to include different disciplinary interests, career stages and geographic representation. The group includes four PLOS Board members who are themselves active researchers, two of our journals’ academic Editors-in-Chief, alongside researchers who are not formally associated with PLOS. 

    We are delighted to welcome the following members to the PLOS Scientific Advisory Council. To see their photos and full bios, please visit their page on our website: 

    Sue Biggins
    Fred Hutchinson Cancer Research Center and University of Washington, Seattle, USA

    Yung En Chee
    University of Melbourne, Australia

    Gregory Copenhaver
    University of North Carolina, Chapel Hill, USA

    Abdoulaye A. Djimde
    University of Bamako, Mali

    Robin Lovell-Badge
    The Francis Crick Institute, London, UK

    Direk Limmathurotsakul
    Mahidol University, Bangkok, Thailand

    Meredith Niles
    University of Vermont, Burlington, USA

    Jason Papin
    University of Virginia, Charlottesville, USA

    Simine Vazire (Chair)
    University of Melbourne, Australia

    Keith Yamamoto
    University of California, San Francisco, USA

    Veronique Kiermer (Secretary, ex officio)

    The post Announcing the PLOS Scientific Advisory Council appeared first on The Official PLOS Blog.

    in The Official PLOS Blog on March 01, 2021 08:11 PM.

  •

    Next Open NeuroFedora meeting: 1 March 1300 UTC

    Photo by William White on Unsplash


    Please join us at the next regular Open NeuroFedora team meeting on Monday 1 March at 1300UTC in #fedora-neuro on IRC (Freenode). The meeting is a public meeting, and open for everyone to attend. You can join us over:

    You can use this link to convert the meeting time to your local time. Or, you can also use this command in the terminal:

    $ date --date='TZ="UTC" 1300 today'

    The meeting will be chaired by @major. The agenda for the meeting is:

    We hope to see you there!

    in NeuroFedora blog on March 01, 2021 09:48 AM.

  •

    Alcohol Policies, Firearm Policies, and Suicide in the United States


    Alcohol and firearms are a dangerous combination and are commonly involved in suicide in the United States. This has only increased in importance as a public health issue as alcohol drinking, firearm sales, and suicides in the United States have all increased since the start of the COVID-19 pandemic. State alcohol policies and state firearm policies might affect alcohol- and firearm-related suicide, but it is unknown how these policies specifically relate to these suicides, or how they might interact with one another.

    The study

    We conducted a cross-sectional study to assess relationships between alcohol policies, firearm policies, and U.S. suicides in 2015 involving alcohol, firearms, or both. We used the Alcohol Policy Scale, previously created and validated by our team, to assess alcohol policies and the Gun Law Scorecard from Giffords Law Center to quantify firearm policies. Suicide data came from the National Violent Death Reporting System. State- and individual-level GEE Poisson and logistic regression models assessed relationships between policies and firearm- and/or alcohol-involved suicides with a 1-year lag.
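The study's GEE Poisson machinery comes from standard statistical software, but the underlying log-linear rate model can be sketched directly. Below is a minimal iteratively reweighted least squares (IRLS) fit of a Poisson regression on synthetic data — an independence-case simplification with made-up coefficients, not the study's actual model, which clusters by state and includes covariates.

```python
import numpy as np

def poisson_irls(X, y, n_iter=50):
    """Fit log(E[y]) = X @ beta by iteratively reweighted least squares
    (Newton's method on the Poisson log-likelihood)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)                  # modeled rates
        XtWX = X.T @ (X * mu[:, None])         # Fisher information (W = mu)
        beta = beta + np.linalg.solve(XtWX, X.T @ (y - mu))
    return beta

rng = np.random.default_rng(1)
n = 500
policy = rng.uniform(0.0, 1.0, n)              # made-up normalized policy score
X = np.column_stack([np.ones(n), policy])
true_beta = np.array([1.0, -1.5])              # higher score -> lower rate
y = rng.poisson(np.exp(X @ true_beta))         # synthetic counts
beta_hat = poisson_irls(X, y)                  # should recover ~[1.0, -1.5]
```

A negative fitted coefficient on the policy score corresponds to a rate that falls as policies get stricter, which is how the associations below are expressed.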


    Higher alcohol and gun law scores were associated with reduced incidence rates and odds of suicides involving either alcohol or firearms.

    In the United States in 2015, alcohol and/or firearms were involved in 63.9% of suicides. Higher alcohol and gun law scores were associated with reduced incidence rates and odds of suicides involving either alcohol or firearms. For example, a 10% increase in alcohol policy score was associated with a 28% reduction in the rate of suicides involving alcohol or firearms. Similarly, a 10% increase in gun policy score was associated with a 14% decrease in the rate of suicides involving firearms.

    These relationships were similar for suicides that involved alcohol and firearms. For example, a 10% increase in alcohol policy score was associated with a 52% reduction in the rate of suicides involving alcohol and firearms. A 10% increase in gun policy score was associated with a 26% reduction in the rate of suicides involving alcohol and firearms.
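In a log-linear (Poisson) model like those used here, such percent reductions read as incidence-rate ratios (IRRs): a 28% reduction per 10% score increase corresponds to an IRR of 0.72, and effects compound multiplicatively rather than additively. A quick sketch of that arithmetic — the log-linear reading is an assumption consistent with the models described, not a reproduction of the study's estimates:

```python
from math import exp, log

def irr_for_increase(beta_per_unit, increase):
    """Rate ratio implied by a log-linear model for a given score increase."""
    return exp(beta_per_unit * increase)

# Reported association: a 10% (0.1) increase in alcohol policy score
# -> 28% reduction in suicides involving alcohol or firearms, i.e. IRR = 0.72.
beta = log(0.72) / 0.1                 # implied coefficient per unit of score

# Effects compound: a 20% increase implies 1 - 0.72**2 (about 48%), not 56%.
reduction_20 = 1 - irr_for_increase(beta, 0.2)
```

The same conversion applies to the gun-policy figures (IRR 0.86 per 10% increase for firearm-involved suicides, under the same log-linear assumption).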

    In addition, we found synergistic effects between alcohol and firearm policies, such that states with restrictive policies for both alcohol and firearms had the lowest odds of suicides involving alcohol and firearms.

    Conclusions and next steps

    Results of the study suggest that laws restricting firearms ownership among high-risk individuals, including those who drink excessively or have experienced alcohol-related criminal offenses, may reduce firearm suicides.

    We found restrictive alcohol and firearm policies to be associated with lower rates and odds of suicides involving alcohol, firearms, or both, and our research suggests that alcohol and firearm policies may be a promising means by which to reduce suicide. These protective relationships were particularly striking for suicides involving both alcohol and firearms, as was the strong protective interaction between alcohol and firearm policy variables, particularly for suicides involving alcohol. These findings, taken in the context of the broader literature, also suggest that laws restricting firearm ownership among high-risk individuals (so-called ‘may issue’ laws), including those who drink excessively or have experienced alcohol-related criminal offenses, may reduce firearm suicides.

    Because this was a cross-sectional analysis, this should be considered a hypothesis-generating study that cannot prove a causal association between alcohol or firearm policies and suicide. In future research, studies using multiple years of policy and suicide data would strengthen causal inference.

    Stronger alcohol and firearm policies are a promising means to prevent a leading and increasing cause of death in the U.S. The findings further suggest that strengthening both policy areas may have a synergistic impact on reducing suicides involving either alcohol, firearms, or both.

    The post Alcohol Policies, Firearm Policies, and Suicide in the United States appeared first on BMC Series blog.

    in BMC Series blog on March 01, 2021 07:11 AM.

  •

    Overview of 'The Spike': an epic journey through failure, darkness, meaning, and spontaneity

    from Princeton University Press (March 9, 2021)

    THE SPIKE is a marvelously unique popular neuroscience book by Professor Mark Humphries, Chair of Computational Neuroscience at the University of Nottingham and Proprietor of The Spike blog on Medium. Humphries' novel approach to brain exposition is built around — well — the spike, the electrical signal neurons use to communicate. In this magical rendition, the 2.1 second journey through the brain takes 174 pages (plus Acknowledgments and Endnotes).

    I haven't read the entire book, so this is not a proper book review. But here's an overview of what I might expect. The Introduction is filled with inventive prose like, “We will wander through the splendor of the richly stocked prefrontal cortex and stand in terror before the wall of noise emanating from the basal ganglia.” (p. 10).

    Did You Know That Your Life Can Be Reduced To Spikes?

    Then there's the splendor and terror of a life reduced to spikes (p. 3):

    “All told, your lifespan is about thirty-four billion billion cortical spikes.”

    Spike Drama

    But will I grow weary of overly dramatic interpretations of spikes? “Our spike's arrival rips open bags of molecules stored at the end of the axon, forcing their contents to be dumped into the gap, and diffuse to the other side.” (p. 29-30).

    Waiting for me on the other side of burst vesicles are intriguing chapters on Failure (dead end spikes) and Dark Neurons, the numerous weirdos who remain silent while their neighbors are “screaming at the top of [their] lungs.” (p. 83). I anticipate this story like a good mystery novel with wry throwaway observations (p. 82):

    “Neuroimaging—functional MRI—shows us Technicolor images of the cortex, its regions lit up in a swirling riot of poorly chosen colors that make the Pantone people cry into their tasteful coffee mugs.”

    Pantone colors of 2021 are gray and yellow


    Wherever it ends up – with a mind-blowing new vision of the brain based on spontaneous spikes, or with just another opinion on predictive coding theory – I predict THE SPIKE will be an epic and entertaining journey. 


    in The Neurocritic on February 28, 2021 09:45 PM.

  •

    The Invisibility of COVID-19

    Why is it so hard to picture COVID-19?

    in Discovery magazine - Neuroskeptic on February 28, 2021 12:00 AM.

  •

    Dialogue With Dreamers

    Researchers claim that they can ask questions and receive answers from dreaming participants.

    in Discovery magazine - Neuroskeptic on February 27, 2021 12:00 AM.

  •

    Why arXiv needs a brand

    Pop quiz: There are so many “-Xivs” online and on social media.  Which ones are operated by arXiv?

    Answer: only arXiv.org and @arXiv

    arXiv is a highly valued tool known primarily through its digital presence. However, the use of arXiv’s name by other services has led to confusion. And despite decades of reliable service, arXiv’s inconsistent visuals and voice have projected an air of neglect. This jeopardizes our ability to raise funds for critical improvements.

    “As the role of arXiv in open science becomes more evident, its value should be made obvious lest we end up losing the system we cherish and rely upon so much,” said Alberto Accomazzi, PhD, ADS Program Manager.

    In 2020, Accomazzi joined nine other diverse community members to become part of an advisory group formed to support arXiv’s communications and identity project. The goal? To ensure that the way we present ourselves to the world reflects arXiv’s true nature as a professional, innovative tool created by and for researchers.

    Throughout the identity project, we:

    • assessed user feedback collected since 2016,
    • surveyed board members and 7,000 additional users about their perceptions of arXiv,
    • gathered ten diverse community members to serve as advisors,
    • contracted with a professional designer to produce a logo, and
    • are working with an accessibility consultant to address the needs of all arXiv readers and authors.

    To guide our branding efforts we focused on arXiv as a place of connection, linking together people and ideas, and connecting them with the world of open science. After many rounds of revision and refinement, arXiv’s first brand guide was produced, in addition to our new logo and usage guidelines, and we’d like to share them with you now.

    The intertwining arms at the heart of the logo represent arXiv as a place of connection

    The arXiv logo looks to the future and nods to the past with a font that pays homage to arXiv’s birth in the ’90s while also being forward-looking. The arms of the ‘X’ retain stylistic elements of the ‘chi’ in our name, with a lengthened top left and lower right branch. Symbolically, the intertwining of the arms at the heart of the logo captures the spirit of arXiv’s core value: arXiv is a place of connection, linking together people and ideas, and connecting them with the world of open science.

    The brand guide and usage guidelines ensure that we express arXiv’s identity with consistent quality and continuity across communication channels. By strengthening our identity in this way, arXiv will be recognizable and distinct from other services. Staff will save time by having access to clear, consistent guidelines, visual assets, and style sheets, and collaborators will know the expectations regarding arXiv logo usage.

    The arXiv community will notice that the main arXiv.org site remains the same at this time. That’s because the identity rollout and implementation process will be gradual, starting with official documents before moving to core arXiv services.

    “arXiv must take control of its identity to maintain its place and grow within the scholarly communications ecosystem,” said arXiv’s executive director Eleonora Presani, PhD.

    in arXiv.org blog on February 26, 2021 03:55 PM.

  •

    Know Your Brain: Red Nucleus

    The red nuclei are colored red in this cross-section of the midbrain.


    Where is the red nucleus?

    The red nucleus is found in a region of the brainstem called the midbrain. There are actually two red nuclei—one on each side of the brainstem.

    The red nucleus can be subdivided into two structures that are generally distinct in function: the parvocellular red nucleus, which mainly contains small and medium-sized neurons, and the magnocellular red nucleus, which contains larger neurons. The red nucleus is recognizable immediately after dissection because it maintains a reddish coloring. This coloring is thought to be due to iron pigments found in the cells of the nucleus.

    What is the red nucleus and what does it do?

    As mentioned above, the red nucleus can be subdivided into two structures with separate functions: the parvocellular red nucleus and the magnocellular red nucleus. In the human brain, most of the red nucleus is made up of the parvocellular red nucleus, or RNp; the magnocellular red nucleus (RNm) is not thought to play a major role in the adult human brain. In four-legged mammals (e.g., cats, mice), however, the RNm is a more prominent structure—in both size and importance.

    Neurons from the cerebellum project to the RNm, and RNm neurons leave the red nucleus and form the rubrospinal tract, which descends in the spinal cord. In animals that walk on four legs, this pathway is activated around the time of voluntary movements; it seems to play an important role in walking, avoiding obstacles, and making coordinated paw movements. RNm neurons, however, also respond to sensory stimulation, and may provide sensory feedback to the cerebellum to help guide movements and maintain postural stability.

    In primates that mainly walk on two legs (including humans), the RNm is not thought to play a large role in walking and maintaining postural stability, as other tracts (e.g., the corticospinal tract) take over such functions. The RNm, however, does appear to be involved in controlling hand movements in humans and other primates. Interestingly, the RNm is more prominent in the human fetus and newborn but regresses as a child ages, which may have to do with the development of the corticospinal tract and the ability to walk on two legs.

    Despite its relatively greater import in the human brain, the RNp is poorly understood, as its diminished presence in other animals makes it more difficult to study using an animal model. Neurons from motor areas in the prefrontal cortex and premotor cortex, as well as neurons from nuclei in the cerebellum known as the deep cerebellar nuclei, extend to the RNp. There is also a collection of neurons that leave the RNp and travel to the inferior olivary nucleus, which communicates with the cerebellum and is thought to be involved in the control of movement. A number of proposed functions have been attributed to these connections between the RNp, cerebellum, and inferior olivary nucleus, such as movement learning, the acquisition of reflexes, and the detection of errors in movements. But the precise function of these pathways—and thus the RNp’s role in them—is still not clear.

    Several studies have found the red nucleus to play a role in pain sensation as well as analgesia. The latter might be due to connections between the red nucleus and regions like the periaqueductal gray and raphe nuclei, which are part of a natural pain-inhibiting system in the brain.

    In terms of pathology, dysfunction in the human red nucleus has been linked to the development of tremors, and is being investigated as playing a potential role in Parkinson’s disease. Damage to the red nucleus has also been associated with a number of other problems with movement and muscle tone.

    References (in addition to linked text above):

    Basile GA, Quartu M, Bertino S, Serra MP, Boi M, Bramanti A, Anastasi GP, Milardi D, Cacciola A. Red nucleus structure and function: from anatomy to clinical neurosciences. Brain Struct Funct. 2021 Jan;226(1):69-91. doi: 10.1007/s00429-020-02171-x. Epub 2020 Nov 12. PMID: 33180142; PMCID: PMC7817566.

    Vadhan J, M Das J. Neuroanatomy, Red Nucleus. 2020 Jul 31. In: StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2020 Jan–. PMID: 31869092.

    in Neuroscientifically Challenged on February 26, 2021 11:26 AM.

  •

    Preventing and controlling water pipe smoking

    Water pipe smoking as a public health crisis

    Water pipe smoking (WPS) accounts for a significant and growing share of tobacco use globally. WPS is a culture-based method of tobacco use that has experienced a worldwide re-emergence since 1990 and is regaining popularity among different population groups, especially school and university students; it is similarly prevalent among highly educated groups. Although WPS is most prevalent in Asia, specifically the Middle East region, and in Africa, it has now become a rapidly emerging problem on other continents, including Europe and North and South America.

    WP business has remained largely unregulated and uncontrolled, which may result in the increasing prevalence of WPS.

    WPS has been shown to be more addictive than cigarette smoking. It has a huge negative impact on populations’ health, health costs and the gross domestic product of countries. The WP business has remained largely unregulated and uncontrolled, which may contribute to the increasing prevalence of WPS. Using deceptive advertising, many cafes and restaurants offer WP services alongside their conventional services in order to earn more profit and lure more customers. The provision of flavored tobacco products or psychotropic WP, the proximity of WP cafes to public settings such as educational or residential settings and sports clubs, tempting decoration, the provision of study places for students, live music, a variety of games and gambling, and the possibility of watching live movies and sports matches are factors that contribute to attracting children and adolescents to WP cafes.

    The importance of our study

    Despite the concerns about WPS outcomes and nearly three decades of control measures, the prevalence of WPS has increased across the world. Due to the unique, multi-component nature of WP, little is known about the prevention and control of WPS, and special interventions might be required. Accordingly, our study published in BMC Public Health aimed to identify management interventions at the international and national levels for preventing and controlling water pipe smoking.

    Our study

    We conducted a systematic literature review. Studies evaluating at least one intervention for preventing and controlling WPS were included in this review; quality assessment and data extraction of eligible studies were then performed by two independent investigators.

    After removing duplicates, 2228 of the 4343 retrieved records remained, and 38 studies were selected as the main corpus of the present study. The selected studies covered 19 different countries: the United States (13.15%), the United Kingdom (7.89%), Germany (5.26%), Iran (5.26%), Egypt (5.26%), Malaysia (2.63%), India (2.63%), Denmark (2.63%), Pakistan (2.63%), Qatar (2.63%), Jordan (2.63%), Lebanon (2.63%), Syria (2.63%), Turkey (2.63%), Bahrain (2.63%), Israel (2.63%), the United Arab Emirates (2.63%), Saudi Arabia (2.63%), and Switzerland (2.63%). Study designs included cross-sectional (31.57%), quasi-experimental (15.78%) and qualitative (23.68%) types.

    Interventions identified through the content analysis process were discussed and classified into relevant categories. We identified 27 interventions grouped into four main categories: preventive (5, 18.51%) and control (8, 29.62%) interventions, as well as the enactment and implementation of legislation and policies for controlling WPS at the national (7, 25.92%) and international (7, 25.92%) levels. The interventions are shown in the following table.

    Table: Effective Interventions in Preventing and Controlling Water Pipe Smoking

    Study implications

    The current enforced legislations are old, unclear, incompatible with the needs of the adolescents and are not backed by rigorous evidence.

    In general, our findings indicate that the social and health crisis related to WPS has not received attention at high levels of policy making. The current enforced legislations are old, unclear, incompatible with the needs of adolescents, and not backed by rigorous evidence. In addition, the WP industry is rapidly expanding without monitoring and controlling measures. Informing and empowering adolescents who have not yet experienced smoking is a sensible intervention in this regard. Besides, empowering and involving health students and professionals in WPS control programs can lead to promising results in preventing and controlling WPS. There remains a paucity of evidence regarding strategies for controlling and preventing WPS, so further research is warranted in this respect.

    The post Preventing and controlling water pipe smoking appeared first on BMC Series blog.

    in BMC Series blog on February 26, 2021 07:33 AM.

  •

    Interactions between Brassica napus and Extracellular Fungal Pathogen Pyrenopeziza brassicae

    This week in the Journal Club session, Chinthani Karandeni Dewage will talk about a paper titled "Interactions between Brassica napus and Extracellular Fungal Pathogen Pyrenopeziza brassicae".

    Light leaf spot (Pyrenopeziza brassicae) is currently the most damaging foliar disease on winter oilseed rape (Brassica napus) in the UK. Deployment of cultivar resistance remains an important aspect of effective management of the disease. Nevertheless, the genetic basis of resistance remains poorly understood and no B. napus resistance (R) genes against P. brassicae have been cloned. In this talk, I will be presenting the findings from my research on host resistance against P. brassicae and specific interactions in this pathosystem. New possibilities offered by genomic approaches for rapid identification of R genes and pathogenicity determinants will also be discussed.


    • Chinthani Karandeni Dewage et al., "Interactions between Brassica napus and Extracellular Fungal Pathogen Pyrenopeziza brassicae", 2021, in preparation

    Date: 2021/02/25
    Time: 14:00
    Location: online

    in UH Biocomputation group on February 25, 2021 10:29 AM.

  •

    PLOS and Yale University Announce Publishing Agreement

    Yale University posted the following announcement on its website on February 23, 2021.

    Yale University Library has signed two innovative agreements that will allow Yale-affiliated authors to publish in any PLOS open access journal without paying article processing charges (APCs).

    PLOS is a non-profit, open access publisher of seven highly respected scientific and medical journals. Last year Yale authors published more than 100 articles in PLOS journals, with APCs of up to $3,000 per article. Effective Jan. 1, 2021, these author-paid APCs will be eliminated and replaced with annual fees paid by the library. The authors will maintain copyright ownership of their research.

    “Our goal is to make open access publishing a more viable option for more Yale researchers in science and medicine, and to support a publication model that will also encourage open access publishing beyond Yale,” said Barbara Rockenbach, the Stephen Gates ’68 University Librarian.

    Open access publishing has grown in popularity since the 1990s when peer-reviewed journals began publishing online with a traditional business model based on limited access and high subscription fees. Open access developed as an alternative to make new research quickly and widely available with financial support from those producing the research. However, financial support for APCs from academic departments, government, and other research funders has varied widely, with some authors having to pay from personal funds.

    The library agreements will eliminate APCs for Yale authors publishing in PLOS Biology, PLOS Medicine, PLOS One, PLOS Computational Biology, PLOS Pathogens, PLOS Genetics, and PLOS Neglected Tropical Diseases, as well as in any new PLOS publications launched during the contract term. The initial agreements are for three years and will be funded through Yale Library’s Collection Development department with support from the Cushing/Whitney Medical Library.

    “We are pleased that Yale Library can support this emerging, more sustainable model of open-access publishing,” Rockenbach said. “We are committed to facilitating equitable access to research in science and medicine, and the progress research fuels.”

    The post PLOS and Yale University Announce Publishing Agreement appeared first on The Official PLOS Blog.

    in The Official PLOS Blog on February 23, 2021 04:10 PM.

  •

    Temporal dispersion of spike rates from deconvolved calcium imaging data

    On Twitter, Richie Hakim asked whether the toolbox Cascade for spike inference (preprint, Github) induces temporal dispersion of the predicted spiking activity compared to ground truth. This kind of temporal dispersion had been observed in a study from last year (Wei et al., PLoS Comp Biol, 2020; also discussed in a previous blog post), suggesting that analyses based on raw or deconvolved calcium imaging data might falsely suggest continuous sequences of neuronal activations, while the true activity patterns are coming in discrete bouts.

    To approach this question, I used one of our 27 ground truth datasets (the one recorded for the original GCaMP6f paper). From all recordings in this dataset, I detected events that exceeded a certain ground truth spike rate. Next, I assigned these extracted events to three groups and systematically shifted the detected events of groups 1 and 3 backward and forward by 0.5 seconds. Note that this is a short shift compared to the timescale investigated by the Wei et al. paper. This is what the ground truth looks like; it is clearly not a continuous sequence of activations:
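    The grouping-and-shifting construction can be sketched in a few lines of Python. This is a toy reconstruction with stand-in random data, not the actual analysis code; all names are mine, and the real analysis operates on the ground truth recordings:

```python
import random

# Toy reconstruction of the event-shifting construction: take binary
# event trains (neurons x frames), split the neurons into three groups,
# and shift the outer groups by +/-0.5 s to create three discrete bouts.
random.seed(0)
frame_rate = 30                 # imaging frame rate (Hz); 0.5 s = 15 frames
shift = int(0.5 * frame_rate)

n_neurons, n_frames = 60, 3000
events = [[1 if random.random() < 0.01 else 0 for _ in range(n_frames)]
          for _ in range(n_neurons)]   # stand-in for thresholded ground truth

def roll(row, k):
    """Circularly shift a list by k frames (positive = later)."""
    k %= len(row)
    return row[-k:] + row[:-k]

third = n_neurons // 3
shifted = (
    [roll(r, -shift) for r in events[:third]]        # group 1: 0.5 s earlier
    + events[third:2 * third]                        # group 2: unchanged
    + [roll(r, +shift) for r in events[2 * third:]]  # group 3: 0.5 s later
)
```

    Averaging `shifted` across neurons then gives a population rate with three separated bouts, which is the pattern the spike inference is asked to recover.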

    To evaluate whether the three-bout pattern would turn into a continuous sequence after spike inference, I used the dF/F recordings associated with the above ground truth recordings and Cascade's global model for excitatory neurons (a pretrained network that is available with the toolbox) to infer the spike rates. There is indeed some dispersion, due to the difficulty of inferring spike rates from noisy data. But the three bouts are very clearly visible.

    This is even more apparent when plotting the average spike rate across neurons:

    Therefore, it can be concluded that there are conditions and existing datasets where discrete activity bouts can be clearly distinguished from sequential activations based on spike rates inferred with Cascade.

    This analysis was performed on neurons at a standardized noise level of 2% Hz-1 (see the preprint for a proper definition of the standardized noise level). This is a typical and very decent noise level for population calcium imaging. However, if we perform the same analysis on the same data set but with a relatively high noise level of 8% Hz-1, the resulting predictions are indeed much more dispersed, since the dF/F patterns are too noisy to make more precise predictions. The average spike rate still shows three peaks, but they are only riding on top of a more broadly distributed, seemingly persistent increase of the spike rate.

    If you want to play around with this analysis using different noise levels or different datasets, you do not need to install anything. In less than 5 minutes, you can run this Colaboratory Notebook in your browser and reproduce the results above.

    in Peter Rupprecht on February 23, 2021 01:08 AM.

  •

    Epidemic dynamics and control: a few simple concepts

    In this text, I try to explain a few simple concepts about the dynamics and control of an epidemic. I am of course writing it with the Covid-19 epidemic in mind, but most of the concepts are general. As a preamble, I want to make clear that I am neither a physician nor an epidemiologist, so I will say practically nothing about purely medical aspects or epidemiological subtleties, only about a few general notions. My specialty is the modelling of dynamical phenomena in biology, specifically in neuroscience. Competent colleagues are therefore welcome to contribute clarifications, corrections, or relevant references.

    A few preliminary remarks on statistics

    Before starting the explanations, I would first like to urge the reader to be cautious when interpreting statistics, in particular mortality statistics. As I write these lines, an estimated 15% of the French population has been infected. In other words, the epidemic is in its early stages. Mortality statistics are therefore not a final toll of the epidemic, but statistics on an epidemic in progress. Comparing them with the toll of past epidemics, or with other causes of mortality, therefore does not make much sense (at best, one could multiply these statistics by 5 to get an order of magnitude).

    Second, the mortality of a disease does not depend only on the virus. It also depends on the person who is ill. A major factor is age, which must be taken into account when comparing countries with very different demographics. To a first approximation, the risk of dying of Covid-19 increases with age in the same way as the risk of dying of causes other than Covid. One can see this as meaning that the young are at low risk, or that every age class sees its risk of dying within the year increase by the same factor. In any case, with this type of mortality profile, the mean or median age at death is not very informative, since it is the same with and without infection.

    Third, the mortality of a disease also depends on care. Covid-19 in particular is characterized by a high rate of hospitalization and intensive care. The mortality observed in France so far is that of a health care system that is not saturated. Naturally, mortality would be much higher if this care could not be provided, that is, if the epidemic were not controlled, and it would also shift toward a younger population.

    Finally, it goes without saying that the severity of a disease is not just its mortality. A hospital stay is generally not trivial, and less severe cases can have long-term sequelae.

    The reproduction number

    A virus is an entity that can replicate inside a host and be transmitted to other hosts. Unlike a bacterium, which is a cell, a virus is not strictly speaking an organism: it depends entirely on its host for survival and reproduction. Consequently, to understand the dynamics of a viral epidemic, one must look at the number of infected hosts and at transmission between hosts.

    An important parameter is the reproduction number (R): the average number of people that an infected person will go on to infect. The epidemic grows if R>1 and dies out if R<1. With each transmission cycle, the number of cases is multiplied by R. The reproduction number does not say how fast the epidemic grows, because that also depends on the incubation time and the period of contagiousness. It is in fact a parameter that is mostly useful for understanding epidemic control. For example, if R = 2, then the epidemic can be controlled by halving one's number of contacts.

    Since the number of cases is multiplied by a fixed factor at each transmission cycle, an epidemic typically has exponential dynamics: it is the number of digits that grows steadily. It takes as much time to go from 10 to 100 cases as from 100 to 1,000 cases, or from 1,000 to 10,000 cases. The dynamics are therefore explosive in nature. This is why the quantity to watch closely is not so much the number of cases as the reproduction number: as soon as R>1, the number of cases can explode quickly, and one must act fast.

    Naturally, this reasoning assumes that the population has not already been infected. If a proportion p of the population is immune (infected or vaccinated), then each infected person will infect on average R × (1−p) others. The epidemic therefore stops when this number falls below 1, that is, when p > 1 − 1/R. For example, with R = 3, the epidemic stops when two thirds of the population are immune.
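    The herd-immunity threshold p > 1 − 1/R can be computed directly. A minimal sketch in Python (the function name is mine, for illustration):

```python
def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that must be immune for the effective
    reproduction number r0 * (1 - p) to fall below 1."""
    if r0 <= 1:
        return 0.0  # the epidemic dies out on its own
    return 1 - 1 / r0

# With R = 3, the epidemic stops once two thirds of the population is immune.
print(herd_immunity_threshold(3))
print(herd_immunity_threshold(1.5))  # a milder virus needs only one third
```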

    This also tells us about the impact of the ongoing vaccination campaign on epidemic control. For example, as I write these lines (February 22, 2021), about 2% of the population has been vaccinated (4% has received the first dose). This therefore reduces R by about 2% (for example, from 1.1 to 1.08). It is thus clear that vaccination will not have a major effect on the overall dynamics for several months.

    It is important to understand that the reproduction number is not an intrinsic property of the virus. It depends on the virus, but also on the host, who may be more or less contagious (so-called "superspreaders", for example), on behavior, and on protective measures (masks, for example). This number is therefore not necessarily homogeneous across a population. For example, R is plausibly higher among young working people than among the elderly.

    Can part of the population be isolated?

    Is it possible to protect the most vulnerable part of the population by isolating it from the rest, without controlling the epidemic? This hypothesis has been put forward several times, although it has been heavily criticized in the scientific literature.

    It is fairly easy to see why this is a perilous idea. Isolating part of the population has a near-zero impact on the reproduction number R, so the dynamics of the epidemic are unchanged. Keep in mind that controlling an epidemic so that it dies out simply means making R<1, so that the number of cases decreases exponentially. During the strict lockdown of March 2020, for instance, the rate was about R = 0.7. That is enough for the epidemic to die out, but an infected person nevertheless continues to infect others. Consequently, unless these vulnerable people can be isolated much more strictly than during the first lockdown (which seems doubtful, given that some of them depend on others for care), the epidemic in that population will follow the epidemic in the general population, with the same dynamics in a somewhat attenuated version. In other words, it seems implausible that this strategy would work.

    Variants

    A virus can mutate: as it replicates inside a host, errors are introduced, so that the properties of the virus change. This can affect symptoms, or contagiousness. Naturally, the more infected hosts there are, the more variants there are, so this is a phenomenon that arises in uncontrolled epidemics.

    Suppose R = 2 and a variant has R = 4. Then at each transmission cycle, the number of variant cases relative to the original virus is multiplied by 2. After 10 cycles, the variant accounts for 99.9% of cases. This remains true if restrictive measures reduce transmission (for example, R = 2/3 and R = 4/3). After those 10 cycles, the overall R is that of the variant. Consequently, it is the case count and the R of the more contagious variant that determine the case count and the dynamics in the medium term (that is, a few weeks). The case count of the original virus, and even the overall case count, are essentially irrelevant.
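    The arithmetic of variant takeover is easy to check in a few lines of Python (an illustrative sketch; the function name is mine):

```python
def variant_share(n_orig, n_var, r_orig, r_var, cycles):
    """Fraction of cases due to the variant after `cycles` transmission
    cycles, starting from n_orig and n_var cases with reproduction
    numbers r_orig and r_var."""
    n_orig *= r_orig ** cycles
    n_var *= r_var ** cycles
    return n_var / (n_orig + n_var)

# Starting from equal case counts, a variant with twice the R accounts
# for 1024/1025 (about 99.9%) of cases after 10 transmission cycles:
print(variant_share(1, 1, 2, 4, 10))
# The same holds under restrictions that scale both R values down:
print(variant_share(1, 1, 2/3, 4/3, 10))
```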

    This means that the dynamics can be explosive even while the number of cases is falling. To know whether the epidemic is under control, one must look at the R of the most contagious variant. As I write, we are precisely in the situation where the original virus is still dominant with R<1 while the variants have R>1, which means that despite an overall decline in cases, we are in an explosive dynamic that will become apparent in the overall case count once the variants are dominant.

    Epidemic control

    Controlling the epidemic means making R<1. In that situation, the number of cases decreases exponentially and approaches 0. The point is not necessarily to suppress all transmission, but to ensure through a combination of measures that R is below 1. Going from R = 1.1 to 0.9 is enough to move from an explosive epidemic to one that dies out.

    Naturally, the surest measure to extinguish the epidemic is to prevent all social contact ("lockdown"). But there are potentially many other measures, and ideally one combines several measures that are both effective and minimally constraining, keeping lockdown as a last resort when those measures have failed. The difficulty is that the impact of a given measure is not precisely known for a new virus.

    That knowledge is nonetheless far from negligible after a year of the Covid-19 epidemic. We know, for example, that wearing masks is very effective (as was already suspected, given that this is a respiratory infection). We know that the virus spreads through droplets and aerosols. We also know that schools and shared dining facilities are major sites of contamination. That observation can lead to closing such places, but one could alternatively secure them by installing ventilation and filters (an investment that could, incidentally, be synergistic with an energy renovation program).

    There are two broad types of measures. There are global measures that apply to healthy people and virus carriers alike, such as wearing masks, closing certain venues, or instituting remote work. The cost of these measures (in the broad sense, that is, the economic cost and the constraints) is fixed. There are also targeted measures, triggered when a case occurs, such as contact tracing, closing a school, or a local lockdown. These targeted measures have a cost proportional to the number of cases. The overall cost is therefore a combination of a fixed cost and a cost proportional to the number of cases. Consequently, it is always more expensive to control an epidemic when the number of cases is higher (a choice that nevertheless seems to have been made in France after the second lockdown).
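    The fixed-plus-proportional cost structure can be written down directly. A toy model with illustrative numbers (not from the post):

```python
def epidemic_cost(cases, fixed_cost, cost_per_case):
    """Total cost of control measures: a fixed cost for global measures
    (masks, venue closures, remote work) plus a cost proportional to the
    case count for targeted measures (tracing, local closures, isolation)."""
    return fixed_cost + cost_per_case * cases

# Illustrative numbers: the same set of measures is strictly cheaper to
# run at 1,000 cases per day than at 20,000.
low = epidemic_cost(1_000, fixed_cost=100.0, cost_per_case=0.05)
high = epidemic_cost(20_000, fixed_cost=100.0, cost_per_case=0.05)
assert low < high
```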

    The "plateau"

    An important remark: measures act on the progression of the epidemic (R), not directly on the number of cases. This means that if we know how to maintain a high case count (R=1), then we know equally well (with the same measures) how to maintain a low case count. With a small additional effort (R=0.9), the epidemic can be suppressed.

    Aiming for hospital saturation therefore has no particular merit, and is in fact a more costly choice than suppression. There is one justification for this objective: the strategy of "flattening the curve", suggested at the start of the epidemic, which consists of maximizing the number of people infected so as to quickly immunize the entire population. Now that a vaccine exists, this strategy no longer makes much sense. Even without a vaccine, infecting the entire population without saturating hospital services would take several years, to say nothing of the mortality.

    Suppressing the epidemic

    As noted above, it is easier to control a weak epidemic than a strong one, so a control strategy should aim not for an "acceptable" number of cases, but for a reproduction number R<1. In that situation, the case count decays exponentially. When the number of cases is very low, imported cases must be taken into account: over one transmission cycle, the case count no longer goes from n to R × n but from n to R × n + I, where I is the number of imported cases. The case count therefore stabilizes at I/(1−R) (for example, 3 times the number of imported cases if R = 2/3). To reduce the case count further, it then becomes important to prevent the importation of new cases (tests, quarantine, etc.).
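    The fixed point I/(1 − R) can be verified by iterating the update n → R·n + I (a minimal sketch; names are mine):

```python
def cases_with_imports(n0, r, imports, cycles):
    """Iterate n -> r*n + imports; with r < 1 the case count converges
    to the fixed point imports / (1 - r)."""
    n = n0
    for _ in range(cycles):
        n = r * n + imports
    return n

# With R = 2/3 and 100 imported cases per cycle, the case count settles
# near 100 / (1 - 2/3) = 300, regardless of the starting point:
print(cases_with_imports(10_000, 2/3, 100, 50))
print(cases_with_imports(0, 2/3, 100, 50))
```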

    When the number of cases is very low, it becomes feasible to apply very thorough targeted measures, that is, for every single case. For example, for each case, the person is isolated, and everyone likely to also be a carrier is tested and isolated. Not only are the people potentially infected by the positive person identified, but the source of the contamination is traced as well. Indeed, if the epidemic is driven by superspreading events ("clusters"), then it becomes more effective to trace back to the source of the contamination and then follow the contact cases.

    When circulation is low, since these additional means of reducing transmission are available, it becomes possible to lift some non-targeted measures (for example, general lockdowns or other social restrictions, venue closures, and even mask wearing). For the targeted measures to have a major impact, a key point is that the majority of cases must be detected. That requires massive systematic testing, for example with saliva tests, drive-throughs, pooled tests, and temperature checks. It requires that positive individuals not be discouraged from getting tested and isolating (in particular, by maintaining their income). It also requires systematic isolation of suspected cases while awaiting results. In other words, to have a chance of working, this strategy must be applied as systematically as possible. Applying it to 10% of cases is practically pointless. That is why it only makes sense when circulation of the virus is low.

    It is important to note that in this strategy, most of the cost and constraints lie in the testing apparatus, since tracing and isolation only occur when a case is detected, which ideally happens very rarely. While it requires some logistics, it is an economical strategy that places few constraints on the population.

    When to act?

    I have explained that maintaining a high level of cases is more costly and more constraining than maintaining a low level. Maintaining a very low level of cases is even less costly and constraining, although it requires more organization.

    Of course, to go from a high plateau to a low plateau, the epidemic must shrink, and therefore significant measures must be applied temporarily. If the epidemic is not controlled, and I recall that this is the case as soon as a variant is growing (R>1) even if the overall case count is falling, these measures will have to be applied at some point. When should they be applied? Is it more advantageous to wait as long as possible?

    It clearly never is, because the longer one waits, the more cases there are, and so the longer restrictive measures must be applied before reaching the goal of low circulation, where finer measures (tracing) can take over. This may seem counter-intuitive when the case count is falling, but it is true nonetheless, because the medium-term case count depends only on the case count of the most contagious variant, not on the overall case count. So if the most contagious variant is spreading, waiting only lengthens the duration of restrictive measures.

    By how much? Suppose the case count of the virus (the most contagious variant) doubles every week, and that restrictive measures halve the case count in a week. Then waiting a week before applying them lengthens those measures by one week (I insist: lengthens, not merely postpones). Under the (more realistic) assumption that the measures are somewhat less effective, each week of waiting increases the duration of the measures by a bit more than a week.
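    Under the stated assumptions (cases double per week while waiting, shrink by a fixed factor per week under measures), the extra duration can be computed explicitly. A toy calculation (the function is mine, for illustration):

```python
import math

def weeks_of_measures(weeks_waited, growth=2.0, decay=0.5):
    """Weeks of restrictive measures needed to bring cases back to their
    initial level after `weeks_waited` weeks of exponential growth.
    Cases grow by `growth` per week while waiting, then shrink by
    `decay` per week under the measures.
    Solves growth**weeks_waited * decay**d = 1 for d."""
    return weeks_waited * math.log(growth) / -math.log(decay)

# Doubling each week, halving under measures: each week waited costs
# one extra week of measures.
print(weeks_of_measures(1))
# If measures only shrink cases to 2/3 per week, each week waited costs
# more than one extra week.
print(weeks_of_measures(1, decay=2/3))
```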

    It is therefore always preferable to act as soon as R>1, so as to act for the shortest possible time, rather than waiting for the number of cases to rise considerably. The only possible justification for waiting would be a massive vaccination campaign that promised a decline of the epidemic through immunization, which is clearly not the case in the immediate term.


    Some relevant links:

    in Romain Brette on February 22, 2021 03:45 PM.

  •

    Dev session: Caglar Cakan: neurolib


    Caglar Cakan will introduce neurolib and discuss its development in this developer session.

    The abstract for the talk is below:

    neurolib is a computational framework for whole-brain modelling written in Python. It provides a set of neural mass models that represent the average activity of a brain region on a mesoscopic scale. In a whole-brain network model, brain regions are connected with each other based on structural connectivity data, i.e. the connectome of the brain. neurolib can load structural and functional data sets, set up a whole-brain model, manage its parameters, simulate it, and organize its outputs for later analysis. The activity of each brain region can be converted into a simulated BOLD signal in order to calibrate the model to empirical data from functional magnetic resonance imaging (fMRI). Extensive model analysis is possible using a parameter exploration module, which makes it possible to characterize the model's behaviour given a set of changing parameters. An optimization module allows for fitting a model to multimodal empirical data using an evolutionary algorithm. Beyond its included functionality, neurolib is designed to be extendable, so that custom neural mass models can be implemented easily. neurolib offers computational neuroscientists a versatile platform for prototyping models, managing large numerical experiments, studying the structure-function relationship of brain networks, and performing in-silico optimization of whole-brain models.

    in INCF/OCNS Software Working Group on February 18, 2021 09:19 AM.

  •

    Next Open NeuroFedora meeting: 15 February 1300 UTC

    Photo by William White on Unsplash


    Please join us at the next regular Open NeuroFedora team meeting on Monday 15 February at 1300UTC in #fedora-neuro on IRC (Freenode). The meeting is a public meeting, and open for everyone to attend. You can join us over:

    You can use this link to convert the meeting time to your local time. Or, you can also use this command in the terminal:

    $ date --date='TZ="UTC" 1300 today'

    The meeting will be chaired by @ankursinha. The agenda for the meeting is:

    We hope to see you there!

    in NeuroFedora blog on February 15, 2021 12:11 PM.

  •

    Next Open NeuroFedora meeting: 1 February 1300 UTC

    Photo by William White on Unsplash


    Please join us at the next regular Open NeuroFedora team meeting on Monday 1 February at 1300UTC in #fedora-neuro on IRC (Freenode). The meeting is a public meeting, and open for everyone to attend. You can join us over:

    You can use this link to convert the meeting time to your local time. Or, you can also use this command in the terminal:

    $ date --date='TZ="UTC" 1300 today'

    The meeting will be chaired by @bt0dotninja. The agenda for the meeting is:

    We hope to see you there!

    in NeuroFedora blog on February 15, 2021 12:11 PM.

  •

    Next Open NeuroFedora meeting: 18 January 1300 UTC

    Photo by William White on Unsplash


    Please join us at the next regular Open NeuroFedora team meeting on Monday 18 January at 1300UTC in #fedora-neuro on IRC (Freenode). The meeting is a public meeting, and open for everyone to attend. You can join us over:

    You can use this link to convert the meeting time to your local time. Or, you can also use this command in the terminal:

    $ date --date='TZ="UTC" 1300 next Monday'

    The meeting will be chaired by @ankursinha (me). The agenda for the meeting is:

    We hope to see you there!

    in NeuroFedora blog on February 15, 2021 12:11 PM.

  •

    The Man Who Thought AIDS Was All In The Mind

    I look at one of the most remarkable articles in the history of psychology

    in Discovery magazine - Neuroskeptic on February 14, 2021 12:00 AM.

  •

    The Hot-Crazy Matrix paper

    Lots of buzz yesterday on Twitter about a paper published online a year ago but assigned to the February 2021 issue of Personality and Individual Differences, an Elsevier/ScienceDirect journal. The paper builds on a popular, but not scientific, YouTube video in which men are advised to date only women who are “hot and not too crazy”, while women are believed to want only to marry rich guys.

    Figures 1 and 2 of the paper, taken from this video without credit, are presented as scientific data. Of course, I have concerns.

    The paper was initially called Testing the hot-crazy matrix: Borderline personality traits in attractive women and wealthy low attractive men are relatively favoured by the opposite sex, but the first part of the title has now been removed. DOI: 10.1016/j.paid.2020.109964

    The Hot-Crazy Matrix

    The Hot-Crazy Matrix (HCM) comes from a popular YouTube video (close to 3 million views; not going to link to it but easy to find) in which a man draws a plot on a whiteboard that scores women according to two scales, “Hot” and “Crazy”. The Crazy scale starts at 4, the guy explains, “because of course there is no such thing as a woman who is not at least a 4 crazy”. The matrix is then divided into subsections of women that guys should avoid, called the No Go Zone (“we do not hang around and date and marry women who are not at least a 5 hot”), the Date Zone, and the Wife Zone (at least an 8 hot but not too crazy).

    The video also features a Cute-Money Matrix (CMM) in which men are rated according to how cute they are and how much money they make. Women are advised to date only the rich guys.

    I can see that this video is meant to be funny, and I am sure there is a place in the Interwebs for it, but in my personal view it’s far too simplistic and sexist to be featured in a scientific paper. And yet it was published in PAID.

    The PAID journal – not without controversies

    The Personality and Individual Differences journal in which the HCM paper was published focuses on “the Structure and Development of Personality, and the Causation of Individual Differences.”

    The PAID journal was founded by Hans Eysenck, a German-British psychologist and one of the most highly cited scientists in his field. His career was highly controversial, and since his death many of his papers have been investigated for data falsification and fabrication. His work currently stands at 15 retractions and more than 70 expressions of concern (EoCs), including several papers he published in his own journal, PAID. PAID apparently did not want to retract these papers despite requests from academic institutions, and only slapped EoCs on them. The journal has been accused of being too protective of its founder.

    The HCM paper was published in Issue 169 of PAID, which celebrates the journal's 40th anniversary. The issue features several references to the journal's founder, including a reprint of his original introductory editorial.

    The paper: Hot or Not?

    It is quite unexpected to see a journal specializing in the Causes Of Individual Differences publishing a paper driven by a sexist YouTube video that rates women for hotness and craziness, and judges men by their cuteness and the amount of money they make.

    The Hot-Crazy Matrix and the Cute-Money Matrix are described in the introduction as “universal” and “popular”. The introduction of a scientific paper is usually the place where previous research is described and cited, so a naive reader might interpret the description of the HCM and CMM as genuine scientific knowledge.

    Figures 1 and 2 in the paper feature both matrices without any clarification, as if they were real scientific data, with actual measurements and actual scales. There is no citation to the YouTube video or other sources, no disclaimer that this is not actual data, and not even an explanation of the meaning and differences between the two graphs.

    The paper raises all kinds of issues, including categorizing all men as only wanting to date women and vice versa; categorizing women only on hotness and craziness and men only for their money; assuming all women are somewhat crazy, etc. On top of this it labels a person with a mental illness — borderline personality disorder — “crazy”, which is both demeaning and unscientific.

    My PubPeer comments

    Of course, I had some thoughts about the paper, and I voiced my comments on PubPeer. Here is a copy of my post.

    In this paper, two groups of male and female participants were recruited through online crowd-sourcing platforms. They were then presented with a combination of a photo of a face and a personality profile.

    • The photo was either a high- or a low-attractive Caucasian face taken from an online library.
    • The personality profile was presented in the form of a short scenario about how the participant met the person in the photo, and the events that followed. In study A, the scenario included data on the “psychopathy” traits of the person in the photo, while in study B the scenario described whether the person was rich or poor. These scenarios were written in neutral (not gendered) language.

    Study participants were then asked about the “extent you would want to be romantically involved with this person” on a short-term and long-term dating basis.

    I have several questions and concerns.

    Let’s start with some important details that appear to be missing from the Methods section.

    • Did the authors obtain IRB approval for this research? Were the participants asked for consent for this study? I cannot seem to find any statement on this.
    • Did the authors know the sexual orientation of the participants? Did the male subjects only see images of females, and vice versa? Or did the participants get to see a random photo? This seems relevant to the question asked of participants about whether they would want to be romantically involved. However, I cannot seem to find these important details.
    • Which faces from the “Beautycheck” library were used for this research?

    Now let’s move on to another big concern I have with this paper, Figures 1 and 2.

    • Figure 1 represents men’s dating options based on rating women on two dimensions, “hot” (attractiveness) and “crazy” (emotionality), in reference to a third criterion: the “hot-crazy line”.
    • Figure 2 shows “the cute money matrix (CMM) in which a man’s desirability depends on how attractive and wealthy they are.”

    At first I thought these figures represented the data as measured in this paper, since they are, well, presented as data. However, they appear to be based on what the authors describe thus: “The universal hot crazy matrix (HCM) (otherwise known as the “single guy’s guide to dating women”) is a popular cultural phenomenon, and has featured in American sitcoms and viral YouTube videos”.

    Well, I can think of a lot of popular memes that lack scientific truth, but I would not expect them to be presented as the truth in a scientific paper.

    Why is a sexist image like this presented as scientific data/truth? Why do the authors use fallacies such as “universal” and “popular”, and not present this as a hypothesis using unbiased language? Why can women only be “Hot” and “Crazy”, and why can men only be “Cute” and “Rich”? It seems completely unnecessary to include sexist language like this in a scientific paper, especially if it is presented as data.

    Also, calling people with personality disorders “crazy” is without question entirely unscientific and unacceptable.

    And finally, why was the sexist “Testing the hot-crazy matrix” part removed from the title? I would of course never use Sci-hub since it is not legal, but the copy deposited there shows that some version of this paper had a different title than it now has. Archived here: https://web.archive.org/web/20210212220559/https://sci-hub.st/10.1016/j.paid.2020.109964

    in Science Integrity Digest on February 13, 2021 01:39 AM.


    Restoration of the Daily Email Announcement

    Subscribers to arXiv’s daily email announcement recently experienced a disruption in service. Maintenance performed by arXiv’s email provider was found to be the cause of the disruption. The issue has been resolved, and we sincerely apologize for any inconvenience.

    During this time, functionality on arXiv.org remained intact. Readers and authors could continue to browse and download new papers on arXiv.org, as well as submit their own articles.

    The issue

    Daily emails announcing new papers were received late, or not at all. Other emails, such as account registration confirmations, were also affected.

    The response

    We first identified and fixed a problem related to the email disruption, and began sending the backlog of emails to subscribers. Soon after that process began, a second issue was identified that had inadvertently truncated the subscriber lists for several subject areas. Finally, the distribution lists for all subject areas were restored and the remaining announcements were sent.

    Moving forward, if subscribers do not receive the daily announcement email, we advise them to resubscribe.


    We know that researchers rely on the daily announcement for their work, and we aim to avoid such disruptions in the future by:

    • Automating daily backups of the distribution lists
    • Applying fixes that address the root cause
    • Improving the monitoring of the daily email announcement systems
    • Reviewing our overall backup and recovery plans

    We appreciate our subscribers’ patience as we resolved the issue and, again, we apologize for any inconvenience.


    in arXiv.org blog on February 12, 2021 10:11 PM.


    2-Minute Neuroscience: Phantom Limb

    Phantom limb is a condition in which someone who has lost a part of their body (e.g., due to amputation) continues to experience phantom sensations coming from the missing body part. In this video, I discuss some of the hypotheses that have been proposed to explain this strange phenomenon.

    in Neuroscientifically Challenged on February 12, 2021 10:42 AM.


    Launch of the new Horisont Europa programme (10 February)

    It took a whole day, with a few short breaks, to present the pillars and main themes of the new EU research framework programme, Horisont Europa (#HorizonEurope). The presentation was jointly arranged by six major Swedish funders: Energimyndigheten, FORMAS, Forte, Rymdstyrelsen, Vetenskapsrådet, and Vinnova. Many good presentations and panel discussions! I have summarized it all in tweets here, but there are some points worth highlighting that do not fit into short sentences.

    Open science was mentioned ONCE (by John Tumpane, from FORMAS). Data sharing was not mentioned at all. Collaboration, on the other hand, was mentioned repeatedly, above all with a focus on industry. Standards were mentioned in passing, mainly as something important for industry to monitor and help develop.

    Much of the focus was on “partnerships”: arranged and funded strategic multi-party collaborations that are meant to deliver guaranteed, predictable benefit. A full 50% of the budget goes there. Partnerships are not new for the EU, but the increased focus on benefit is a slight change of direction.

    A novelty in this framework programme is “missions”: overarching goals that are to be anchored in societal needs, deliver societal benefit, and make it easier to reach the targets of the European Green Deal and Europe’s Beating Cancer Plan as well as the sustainability goals.

    Research infrastructures were mentioned quite a bit at the beginning of the discussion (they are one of the pillars), and receive a decent share of the budget, but were not particularly present when it came to the collaboration discussion (where I think they absolutely fit in).

    Regions were mentioned as possible innovation hubs and coordinators, which was a nice new angle; previously I have mostly heard regions mentioned as hurdles and fragmentation, and as a problem for the Swedish life-science ecosystem.

    Universities were mentioned at the start, when the importance of the right education and of mobility in education was discussed, but not explicitly in the later discussions on collaboration; there the focus was mostly on individual researchers and on industry. Universities should reasonably also be key members of collaborative projects.

    Two important gaps (“valleys of death”) in knowledge transfer and utilization were mentioned: the winding road required to take research from the lab to industry, and the difficulty of scaling up innovative start-ups in a sustainable way.

    The government is currently working out a national strategy for Sweden’s participation in Horisont Europa, though it is still unclear exactly when it will be ready. It will be interesting to see what it contains; EU applications are generally considered heavy, labour-intensive, and complicated. Support grants to free up time for putting together an application might be useful, as would expanded support from university administrators and Grant Offices. (When I worked on my first EU project we had an EU-savvy administrator helping us, and what a difference that made.)

    Apparently Norway held its corresponding presentation already last autumn.

    in Malin Sandström's blog on February 11, 2021 05:55 PM.


    Packaging Life: The Origin of Ion-Selective Channels

    This week in the Journal Club session, Reinoud Maex will talk about the paper “Packaging Life: The Origin of Ion-Selective Channels”.

    Most articles dealing with early life focus on its chemical basis and the evolution of proteins, RNA, DNA, and other metabolic products. This essay, however, is concerned primarily with the energy required to produce and maintain the essential life chemicals, and the necessity to confine them in a cell where they can function cooperatively. It seems likely that life evolved in proximity to the undersea vents discovered by the submersible craft Alvin of the Woods Hole Oceanographic Institute (Woods Hole, MA). Bacteria were probably the original life form, growing in mats near the vents. Some advantages of the vents as starting points for life are:

    1. Plentiful water and essential ions. We are ~60% salt water.

    2. Plenty of chemical elements, many out of equilibrium and ready to combine. From one point of view, chemistry is simply the search of electrons, e.g., the two around a hydrogen molecule, for vacancies as close as possible to a nucleus with many protons, e.g., oxygen, or less avid electron gatherers, e.g., nitrogen, carbon, sulfur, or phosphorus. Eighteen of the 20 amino acids contain only hydrogen, carbon, nitrogen, and oxygen; the remaining two require sulfur in addition. DNA and RNA additionally require phosphorus. In short, it is not necessary to dip far into the periodic table to make most of the life chemicals.

    3. Energy can be derived from combining chemicals issuing from the vent, as in a fuel cell, which derives energy by passing electrons from hydrogen to oxygen.

    4. The water near a vent is warm, speeding experiments in the evolution of life chemicals.

    Given these essentials, it is easy to imagine that life developed over a period of time, say a billion years. J. D. Sutherland, who first succeeded in synthesizing RNA, was quoted as saying: “My assumption is that we are here on this planet as a fundamental consequence of organic chemistry. So it must be chemistry that wants to work” (1). This essay starts at the point where early chemistry has done its work. It deals with the packaging of the life chemicals, the problems that arise from packaging, and energy production.


    Date: 2021-02-12
    Time: 14:00
    Location: online

    in UH Biocomputation group on February 11, 2021 03:56 PM.


    NeuroFedora at the INCF/OCNS Software WG dev sessions

    This was originally posted on the INCF / OCNS Software Working Group (WG)'s blog here. It is a great opportunity to learn how NeuroFedora is developed.

    Ankur Sinha will introduce the Free/Open Source Software NeuroFedora project and discuss its development in this developer session.

    The abstract for the talk is below:

    NeuroFedora is an initiative to provide a ready-to-use, Fedora-based Free/Open Source software platform for neuroscience. We believe that, like Free software, science should be free for all to use, share, modify, and study. The use of Free software also aids reproducibility, data sharing, and collaboration in the research community. By making the tools used in the scientific process easier to use, NeuroFedora aims to take a step toward this ideal. In this session, I will talk about the deliverables of the NeuroFedora project and then go over the complete pipeline that we use to produce, test, and disseminate them.

    in NeuroFedora blog on February 11, 2021 09:56 AM.