• Wallabag.it! - Save to Instapaper - Save to Pocket -

    This butterfly is the first U.S. insect known to go extinct because of people

    It’s been roughly 80 years since the Xerces blue butterfly was last spotted flitting about on pastel wings across coastal California sand dunes. But scientists are still learning about the insect.

    New research on DNA from a nearly century-old museum specimen shows that the butterfly was a distinct species. What’s more, that finding means that the Xerces blue butterfly (Glaucopsyche xerces) is the first U.S. insect species known to go extinct because of humans, researchers report July 21 in Biology Letters. 

    The butterfly used to live only on the San Francisco Peninsula. But by the early 1940s, less than a century after its formal scientific description in the 1850s, the gossamer-winged butterfly had vanished. Its rapid disappearance is attributed to the loss of habitat and native plant food as a result of urban development and, possibly, an influx of invasive ants likely spread through the shipment of goods.

    But it’s long been unclear if the Xerces blue butterfly was its own species, or simply an isolated population of another, more widespread species of blue butterfly, says Corrie Moreau, an entomologist at Cornell University.

    To find out, Moreau and colleagues turned to a 93-year-old Xerces specimen housed at Chicago’s Field Museum, extracting DNA from a tiny bit of the insect’s tissue. Despite the DNA being degraded from age, the team could compare selected Xerces genes with those of other closely related blue butterflies. The researchers also compared the genomes, or genetic instruction books, of the insects’ mitochondria — cellular structures involved in energy production that have their own set of DNA. 

    Scientists analyzed DNA from a specimen in the collection of Xerces blue butterflies (shown) at Chicago’s Field Museum to reveal that the extinct insect was a distinct species. Field Museum

    Using the genes and the “mitogenomes,” the researchers crafted an evolutionary tree, showing how all of the butterfly species are related to each other. The extinct Xerces blue butterfly was genetically distinct, thus warranting classification as a species, the team found. 

    “We sort of lost a piece of the biodiversity puzzle that made up the tapestry of the San Francisco Bay area when this species was driven to extinction,” Moreau says.

    Akito Kawahara, a lepidopterist at the Florida Museum of Natural History in Gainesville not involved with the study, thinks the results are “fairly convincing” that the Xerces blue butterfly was its own species.

    The butterfly is considered a candidate for resurrection, Moreau says, in which extinct species are brought back via cloning or other genetic manipulations (SN: 10/20/17). But she cautions against it. “Maybe we should spend that time and energy and money on ensuring that we protect the blues that are already endangered that we know about,” she says.

    One of these insects is the endangered El Segundo blue (Euphilotes battoides allyni), native to the Los Angeles area. It and other butterfly populations are increasingly imperiled by numerous threats, such as climate change, land-use changes and pesticide use (SN: 8/17/16).

    For Felix Grewe, an evolutionary biologist at the Field Museum, the new finding illustrates why long-term museum collections are so important: Specimens’ true utility may not be clear for many years. After all, the genetic techniques used in the study to illuminate the Xerces blue butterfly’s true identity didn’t exist when the insect went extinct.

    “You don’t know what technology there [will be] 100 years from now,” Grewe says.

    in Science News on July 20, 2021 11:01 PM.


    People Tend To Believe That Psychology Is A “Feminine” Discipline

    By Emily Reynolds

    Despite the fact that psychology students are more likely to be women than men, and that women outnumber men in the clinical psychology workforce, women in psychology publish less, receive fewer citations, and are underrepresented in senior positions within university departments. This juxtaposition of over- and underrepresentation poses an interesting question about how people perceive gender roles within the field.

    It’s this question Guy A. Boysen and team explore in a new study, published in the Journal of Social Psychology. They find that people associate psychology more strongly with femininity than masculinity — and that this may affect how men and women feel about working or studying within the field.

    In the first study, participants filling in surveys online were asked what percentage of students studying psychology at university level were women, and what percentage of psychologists were women. The results showed that people see psychology as a more women-heavy profession and discipline, with participants estimating that 62% of psychology students and 59% of psychologists are women.

    A second study looked more directly at people’s perceptions of how “feminine” or “masculine” psychology is as a field. Both online participants and undergraduate students rated various university degrees and careers on a scale from “extremely feminine” to “extremely masculine”. The majors and careers listed were chosen for their association with particular gender stereotypes: engineering, for example, is stereotypically assumed to be a masculine job while nursing is thought to be feminine.

    A psychology major was considered by both groups to be slightly more feminine than masculine, and was rated as significantly more feminine than typically “masculine” subjects like engineering, business, and maths (though majors like nursing and education were considered more feminine than psychology). Similarly, while a career in psychology was seen as less feminine than teaching or nursing, it was considered significantly more feminine than being a neuroscientist, historian, doctor, or business person.

    In the next study, participants were asked to imagine a stereotypical person in one of three subjects at university: engineering, nursing, or psychology. After being shown stereotypically masculine and feminine traits — for example “gentle” for feminine or “egotistical” for masculine — participants rated how well each word described a person studying the subject they had been assigned.

    As expected, participants tended to believe that both positive and negative masculine traits better described engineering students than psychology students, while positive feminine traits better described psychology students than engineers. The only difference between nursing and psychology was in positive masculine traits: these were believed to better suit nursing majors than psychology students. This suggests that psychology, like nursing, is largely considered to be a “feminine” subject.

    In a fourth study, participants indicated how satisfied they believed men and women would be with a career in psychology. Participants who read that psychology students were 75% women and 25% men rated men’s satisfaction as significantly lower than women’s. But those who read there was an equal percentage of women and men didn’t show any difference in ratings of men’s and women’s satisfaction. A similar follow-up within the study also found that men were seen as less likely than women to have their needs met by a career in psychology.

    Overall, the results suggested that psychology is considered to be significantly more feminine than it is masculine, and that people assume men’s needs may therefore not be met by it as a subject of study or a career.      

    But is this actually true? Future research could look at how men and women in psychology themselves feel — just because a field is perceived to be “feminine” doesn’t mean that men will necessarily be less satisfied or fulfilled when working within it. Whether men are actually put off careers in psychology is a different question that could be explored in more depth.

    It’s also important to go back to the fact of the overrepresentation of men in certain positions. If psychology is seen as a “feminine” occupation, and if women outnumber men, why do men dominate positions of power? Looking at ways for everyone to succeed and feel comfortable in psychology is crucial.

    Evidence for a gender stereotype about psychology and its effect on perceptions of men’s and women’s fit in the field

    Emily Reynolds is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on July 20, 2021 12:21 PM.


    Award-winning nursing researcher’s paper retracted for ‘failure to acknowledge the contribution of other researchers and the funding source’

    A nursing journal has retracted a 2019 paper by a researcher in Scotland after learning that she’d taken a wee bit more credit for the article than she deserved. The paper was titled “Co-designing technology with people with dementia and their carers: Exploring user perspectives when co-creating a mobile health application” and was written by …

    in Retraction watch on July 20, 2021 11:18 AM.


    Shri Pukka Science of Hindu Nationalism

    Together with "Paul Jones" I celebrate here a great Hindu nationalist: Govardhan Das. The immunology professor previously cured tuberculosis with Photoshop, and now he has cured COVID-19 with the BCG vaccine. All while exposing alleged research fraudsters who dare to criticise the sacred and perfect Modi government!

    in For Better Science on July 20, 2021 10:24 AM.


    Missing Antarctic microbes raise thorny questions about the search for aliens

    Even in the harshest environments, microbes always seem to get by. They thrive everywhere from boiling-hot seafloor hydrothermal vents to high on Mt. Everest. Clumps of microbial cells have even been found clinging to the hull of the International Space Station (SN: 08/26/20). 

    There was no reason for microbial ecologist Noah Fierer to expect that the 204 soil samples he and colleagues had collected near Antarctica’s Shackleton Glacier would be any different. A spoonful of typical soil could easily contain billions of microbes, and Antarctic soils from other regions host at least a few thousand per gram. So he assumed that all of his samples would host at least some life, even though the air around Shackleton Glacier is so cold and so arid that Fierer often left his damp laundry outside to freeze-dry.

    Surprisingly, some of the coldest, driest soils didn’t seem to be inhabited by microbes at all, he and colleagues report in the June Journal of Geophysical Research: Biogeosciences. To Fierer’s knowledge, this is the first time that scientists have found soils that don’t seem to support any kind of microbial life.

    The findings suggest that exceedingly cold and arid conditions might place a hard limit on microbial habitability. The results also raise questions about how negative scientific results should be interpreted, especially in the search for life on other planets. “The challenge comes back to this sort of philosophical [question], how do you prove a negative?” Fierer says.

    Noah Fierer and colleagues found soil samples from the Shackleton Glacier region of Antarctica that did not have traces of life, an unexpected observation since samples from the continent typically contain thousands of microbes. Courtesy of N. Fierer

    Proving a negative result is notoriously difficult. No measurement is perfectly sensitive, which means there’s always a possibility that a well-executed experiment will fail to detect something that is actually there. It took years of experiments based on multiple, independent methods before Fierer, of the University of Colorado Boulder, and his collaborator Nick Dragone finally felt confident enough to announce that they’d found seemingly microbe-free soils. And the scientists intentionally stated only that they were unable to detect life in their samples, not that the soils were naturally sterile. “We can’t say the soils are sterile. Nobody can say that,” Fierer says. “That’s a never-ending quest. There’s always another method or a variant of a method that you could try.”

    Polar microbiologist Jeff Bowman interprets the team’s findings as a false negative. “Certainly, there were things there,” says Bowman of the Scripps Institution of Oceanography in La Jolla, Calif. “This is Earth. This is an environment that is massively contaminated with life.”

    Even if there were a few undetected microbes in the soil, says Dragone, that wouldn’t undermine his team’s evidence that cold and aridity pose a serious challenge to life. “It’s the combination of multiple very challenging environmental conditions that restricts life more than just one acting by itself,” says Dragone. “It’s a very different sort of restriction than, say, just high temperature.”

    As scientists search for evidence of life beyond Earth (SN: 7/28/20), they will inevitably be forced to walk the line between evidence of absence and absence of evidence. “What we’re trying to do on Mars is kind of the reverse of what we’ve tried to do on Earth,” says polar microbiologist Lyle Whyte of McGill University in Montreal. On Earth, claiming that an environment is lifeless is a tough scientific sell. On Mars, it will be the other way around.

    in Science News on July 20, 2021 10:00 AM.


    Pikas survive winter using a slower metabolism and, at times, yak poop

    Winter on the Qinghai-Tibetan Plateau is unfriendly to pikas. Temperatures across the barren, windy highlands routinely dip below –30° Celsius, and the grass that typically sustains the rabbitlike mammals becomes dry and brittle. It would seem the perfect time for these critters to hibernate, or subsist on stores of grass in burrows to stay warm, like the North American pika. 

    Instead, plateau pikas (Ochotona curzoniae) continue foraging in winter, but reduce their metabolism by about 30 percent to conserve energy, researchers report July 19 in the Proceedings of the National Academy of Sciences. Some pikas also resort to unusual rations: yak poop.

    Camera data from four sites confirmed that pikas regularly brave the cold to forage. “Clearly they’re doing something fancy with their metabolism that’s not hibernation,” says John Speakman, an ecophysiologist at the University of Aberdeen in Scotland.

    Speakman and colleagues measured daily energy expenditure of 156 plateau pikas in summer and winter, and implanted 27 animals with temperature sensors. While many nonhibernating animals keep warm in winter by using more energy, these pikas did the opposite (SN: 1/22/14). On average, pikas reduced their metabolism by 29.7 percent, in part by cooling their bodies a couple degrees overnight. The animals were also less active, relative to summertime levels.

    At sites with yaks, pikas were more abundant but even less active. That puzzled the researchers “until we found a sort of half-eaten yak turd in one of the burrows,” Speakman says. Eating excrement can cause sickness. But with few options, yak poop could be an abundant, easily digestible meal that “massively reduces the amount of time [pikas] need to spend on the surface,” he says.

    The researchers caught pikas scarfing scat on video, and DNA evidence from stomach contents solidified that this behavior is common. Whether dining on dung has downsides remains to be seen, but clearly, not being too picky pays off for pikas.

    Where pika and yak overlap, the rabbitlike mammals (shown) find an abundant and easily digestible source of food in yak feces. Eating the excrement helps pikas, which don’t hibernate, survive during hard winter months on the Qinghai-Tibetan Plateau.

    in Science News on July 19, 2021 07:00 PM.


    Engaging with article metrics: own your impact narrative

    As we wrote back in March, PLOS was an early adopter and advocate of metrics at the article level. The move to article-level metrics (often known as altmetrics) is “defined by its belief that research impact is best measured by diverse real time data about how a research artifact is being used and discussed.” 

    Article-level metrics/altmetrics have always been presented as alternatives to solely relying on traditional shorthand assessments like total citation counts and the reductive Journal Impact Factor. Hopefully, a detailed critique of the perils and misuse of journal-level impact metrics is not needed in this day and age; suffice it to repeat that it is unscientific to convey more credit and reward to one researcher’s article over another’s based solely upon the Journal Impact Factor of the venue of publication.

    We’re hopeful that culture is changing, thanks to the efforts of DORA and other organizations advocating, and now providing concrete guidance, for the responsible use of metrics in research assessment. Also, some funders, like the Howard Hughes Medical Institute (HHMI), are making explicit how they review their HHMI Investigators, clearly encouraging them to tell the full story of their impact beyond a simple publication record.  

    Researchers like you can help. The purpose of this post is to empower researchers to engage with a variety of accessible altmetrics, and signals of rigorous research practice, to define your research article’s unique “impact narrative” and more effectively showcase its strengths. 

    What metrics and signals can articles offer?

    Knowing and highlighting your article-level metrics helps facilitate and normalize more appropriate assessment. When relying on metrics to describe your work, use detailed metrics that are specific to your article such as:

    • Citations over time
    • Usage over time
    • Attention (beyond citations) over time (e.g. from news outlets, policies, social media) often measured as “altmetrics”

    The data displayed on PLOS articles’ “Metrics” tab showcases how many times the article has been viewed, cited, saved, and discussed, and clicking on any of the sections opens up the details (e.g. which articles have cited this article).

    Metrics can also be enhanced by other knowable signals that convey specific qualities about your article. Some of the most general of these are listed below:

    • Peer-review status
    • Open availability of peer review report/decision letters
    • Availability of Open data/code/protocols/materials
    • Adherence to relevant ethical standards for research
    • Adherence to standards for rigorous research reporting (e.g. those of EQUATOR network, ARRIVE, or MDAR)

    At PLOS, to showcase peer-review status, all articles show the Editor who handled the submission, the peer review timeline, and, if the authors chose the option, the published peer review report. 

    When publishing your research, you can make choices that help you demonstrate your commitment to rigor and credibility. When publishing with PLOS you commit to share your underlying data. You may also choose to share your protocols and code, to pre-register your study, and to publish the peer review history of your papers. These are important choices that affirm your commitment to transparency and to your results being available to the scientific community to build upon. You can highlight these choices too, in addition to metrics!

    Which metrics and signals should I use to tell my Impact Narrative?

    To establish an effective Impact Narrative, showcase the altmetrics that are most relevant to the intentions of your research. Below are some ways this could be done. The examples are not mutually exclusive; researchers could elect to promote all of these qualities:

    Impact narratives: examples

    • If citations are still the most useful measure of impact for your intentions, ensure you list your article with the most up-to-date citation counts (including the details) rather than stating the Impact Factor of the journal you published in (especially if that score is calculated based on past articles that are not your own!)
    • If you intended your research to get the attention of the public in a geographic region, or of a particular community, showcase the media mentions and social media coverage you received in those target areas/communities. Each article has a great summary page, or you can highlight specific data points within that. Explore!
    • If having your article openly peer reviewed was of particular importance to you, ensure you link directly to the published reviewer reports.
    • If having your protocol readily available to be built upon is of particular importance, highlight this.
    • If ensuring your data is open and reusable was an impact you were striving for, ensure you always highlight that your data is openly available (your PLOS article will always show this in the Data Availability Statement), and if your data is published on a data publication platform (example datasets connected to a PLOS Genetics article), highlight that unique usage too! In general, remember to demonstrate your impact in ways beyond the article itself, by showcasing any reuse or extension of your data, code, protocols, etc.


    We encourage showcasing the signals of rigorous research practices. We encourage engagement, but not obsession, with metrics. Article-level metrics and altmetrics are intended to be useful, and metrics themselves are neutral, but any metric has the potential to be misused, over-interpreted, over-engineered, and gamed by actors invested in a system. Therefore, here at PLOS, we believe the key is to always:

    • Be transparent (e.g. show the maximum or most accurate detail available, even if an average or total is displayed or used)
    • Engage with metrics and signals so that you own the way they relate to you (e.g. create your own impact narrative)

    Further reading

    Reimagining academic assessment: stories of innovation and change
    Case studies of universities and national consortia highlight key elements of institutional change to improve academic career assessment.

    Research Culture: Changing how we evaluate research is difficult, but not impossible
    eLife 2020;9:e58654.
    This article outlines a framework for driving institutional change that was developed at a meeting convened by DORA and the Howard Hughes Medical Institute. The framework has four broad goals: understanding the obstacles to changes in the way research is assessed; experimenting with different approaches; creating a shared vision when revising existing policies and practices; and communicating that vision on campus and beyond.

    Résumé for Researchers
    Royal Society: Résumé for Researchers is intended to be a flexible tool that can be adapted to a range of different processes that require a summative evaluation of a researcher, recognising that their relative importance will be context-specific.

    Measuring Up: Impact Factors Do Not Reflect Article Citation Rates
    Official PLOS Blog


    in The Official PLOS Blog on July 19, 2021 06:16 PM.


    Software WG tutorials at CNS*2021 Online: Bash, Git, and Python

    The Software Working Group is holding three beginner/intermediate level tutorials at the upcoming CNS*2021 Online conference. These will cover using the command line (Bash), using Git and GitHub, and development in the Python programming language.

    To attend these, and other tutorials at CNS*2021, please register for the conference here.

    Effective use of Bash


    The purpose of this tutorial is to introduce participants to the tools they need in order to comfortably and confidently work with a Unix/Linux command line terminal. Unlike graphical user interfaces, which are often self-explanatory or have obvious built-in help options, the purely text-based nature of a command line terminal can be intimidating and confusing to novice users. Yet, once mastered, the command line offers more flexibility and smoother workflows for many tasks, while being entirely irreplaceable for things such as cluster access.

    In this tutorial, we aim to introduce participants to the concepts and tools they need to confidently operate within a Unix/Linux command line environment. In particular, the tutorial is developed for Bash (as per the title), which should cover most Linux and MacOS* use cases. We hope to provide participants with a firm understanding of the basics of using a shell, as well as an understanding of the advantages of working from a command line.

    The tutorial is aimed not only at novices who have rarely or never used a command line, but also at occasional or even regular users of bash who seek to expand or refresh their repertoire of everyday commands and the kinds of quality-of-life tricks and shortcuts that are rarely covered in StackExchange answers.

    * While MacOS has switched from bash to zsh as its default shell, zsh’s operation is sufficiently similar for the purposes of this tutorial.


    A working copy of bash; participants on Linux and MacOS are all set.

    Participants on Windows have several options to get hold of a bash environment without leaving familiar territory:

    • Install Git for Windows, which includes a Git Bash emulation with most of the standard tools you might expect in a Linux/Unix environment, plus of course Git.
    • Alternatively, enable WSL2 and install Ubuntu as a virtual machine hosted by Windows. Somewhat ironically, this requires at least one use of a command line terminal (though not bash); on the upside, the Linux-on-Windows experience can be a smooth and safe first step into Linux territory.


    • Basics to refer back to: Operating your bash shell (with key bindings and patience)
    • The grammar of a shell command line
    • Getting around: navigating within and beyond your computer: ~, pwd, cd, pushd/popd, ssh
    • Seeing what’s there: ls, globbing, and strategies for naming your files
    • File system manipulations: mv, cp/scp, rm, mkdir, rmdir, ln -s, touch
    • Looking into files: cat, head & tail, more or less, grep, diff
    • Text manipulation: sed, sort, uniq, cut, column
    • Putting things together: piping and redirection
    • What to do when stuck: man, I need some help here apropos of this command…
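The topics above can be combined into short pipelines. A minimal sketch (file and contents are illustrative, not from the tutorial itself):

```shell
# Redirection: > writes command output into a file.
printf 'pear\napple\napple\nbanana\n' > fruits.txt

# Piping: sort the lines, then collapse adjacent duplicates.
sort fruits.txt | uniq

# Chain further: count occurrences, most frequent first.
sort fruits.txt | uniq -c | sort -rn

# Redirect a pipeline's result into a new file.
sort fruits.txt | uniq > unique.txt

# grep -c counts matching lines.
grep -c 'apple' fruits.txt
```

The power of the shell comes less from any single tool than from composing small tools like these with pipes and redirection.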

    Effective use of Git


    Version control is a necessary skill that users writing any amount of code should possess. Git is a popular version control tool that is used ubiquitously in software development.

    This hands-on session is aimed at beginners who have little or no experience with version control systems and Git. It will introduce the basics of version control and walk through a common daily Git workflow before moving on to show how Git is used for collaborative development on popular Git forges such as GitHub. Finally, it will show some advanced features of Git that aid in debugging code errors.


    The session is intended to be a hands-on session, so all attendees will be expected to run Git commands. A working installation of Git is, therefore, required for this session. We will use GitHub as our Git remote for forking and pull/merge requests, so a GitHub account will also be required.

    • Linux users can generally install Git from their default package manager:
      • Fedora: sudo dnf install git
      • Ubuntu: sudo apt-get install git
    • Windows users should use Git for Windows.
    • MacOS users should use brew to install git: brew install git.

    More information on installing Git can be found on the project website: https://git-scm.com/


    • a brief introduction to Git
      • references, options
      • where to get help
    • using Git on a daily basis:
      • creating a new repository: init
      • adding files and staging files: add, add -i
      • ignoring files: .gitignore
      • stashing: stash
      • viewing changes: diff, log
      • committing files: commit
      • using branches to organise the development workflow: branch, checkout
      • tagging: tag
      • creating an archive: archive
    • using Git for collaborative development
      • remotes, forks: remote
      • pushing and pulling: push, pull
      • pull requests and merging: merge
      • merge conflicts and resolving them
    • slightly advanced git
      • Git worktrees: worktree
      • interactive rebasing: rebase -i
      • cherry-picking: cherry-pick
      • debugging with git-bisect: bisect
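The daily-workflow part of this outline can be sketched in a few commands (the repository name, file, and branch name are just examples):

```shell
# Create a repository and set an identity (fresh environments need one to commit).
git init demo-repo
cd demo-repo
git config user.email "you@example.com"
git config user.name "Your Name"

# Stage and commit a file.
echo "# Demo" > README.md
git add README.md
git commit -m "Initial commit"

# Organise work on a branch, then merge it back.
git checkout -b feature/docs      # create and switch to a branch
echo "More docs" >> README.md
git commit -am "Extend README"    # stage tracked changes and commit
git checkout -                    # return to the original branch
git merge feature/docs            # fast-forward merge
git log --oneline                 # both commits are now on this branch
```

Collaborative steps (remotes, forks, pull requests) build on exactly this loop, with `git push` and `git pull` moving commits between your repository and a forge such as GitHub.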

    Python for beginners


    Python is amongst the most widely used programming languages today, and is increasingly popular in the scientific domain. A large number of tools and simulators in use currently are either implemented in Python, or offer interfaces for their use via Python. Python programming is therefore a highly sought-after skill in the scientific community.

    This tutorial is targeted towards people who have some experience with programming languages (e.g. MATLAB, C, C++, etc), but are relatively new to Python. It is structured to have you quickly up and running, giving you a feel of how things work in Python. We shall begin by demonstrating how to set up and manage virtual environments on your system, to help you keep multiple projects isolated. We’ll show you how to install Python packages in virtual environments and how to manage them. This will be followed by a quick overview of very basic Python constructs, leading finally to a neuroscience-themed project that will give you the opportunity to bring together various programming concepts with some hands-on practice.
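The virtual-environment workflow described above boils down to a few shell commands. A minimal sketch ("cns-tutorial" is just an example environment name):

```shell
# Create an isolated environment in the directory cns-tutorial.
python3 -m venv cns-tutorial

# Activate it (on Windows: cns-tutorial\Scripts\activate).
. cns-tutorial/bin/activate

# The active interpreter now lives inside the venv;
# pip installs (e.g. "pip install numpy") stay inside it too.
python -c 'import sys; print(sys.prefix)'

# Leave the environment when done.
deactivate
```

Because each project gets its own environment, packages and versions installed for one project cannot clash with another's.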


    • shell (participants on Linux and MacOS are all set; see below for Windows users)
    • Python 3.6.9 or higher (see below for info on installation)

    Participants on Windows have several options to get hold of a shell environment without leaving familiar territory:

    • Install Git for Windows, which includes a Git Bash emulation with most of the standard tools you might expect in a Linux/Unix environment, plus of course Git.
    • Alternatively, enable WSL2 and install Ubuntu as a virtual machine hosted by Windows. This Linux-on-Windows experience can be a smooth and safe first step into Linux territory.

    You will find several resources online with info on installing Python, e.g. https://realpython.com/installing-python/


    • Setting up and managing virtual environments
    • Installing packages using PyPI (pip) and from Git repositories (e.g. GitHub)
    • Quick Python 101 - lists, dicts, if…else, loops, functions, error handling, import, help, numpy, matplotlib
    • Short neuroscience-themed project - modularizing the code
    • Good practices - lint (Flake8)

    in INCF/OCNS Software Working Group on July 19, 2021 03:59 PM.


    The latest picture of a black hole captures Centaurus A’s massive jets

    The Event Horizon Telescope is expanding its portfolio of black hole images.

    In 2019, the telescope unveiled the first image of a black hole, revealing the supermassive beast 55 million light-years from Earth at the center of galaxy M87 (SN: 4/10/19). That lopsided orange ring showed the shadow of the black hole on its glowing accretion disk of infalling material. Since then, observations from the Event Horizon Telescope, or EHT, have yielded more detailed views of M87’s black hole (SN: 9/23/20). Now, EHT data have revealed new details of the supermassive black hole at the heart of a galaxy near our own, called Centaurus A.

    Rather than zooming in close enough to see the black hole’s shadow, the new picture offers the clearest view yet of the powerful plasma jets erupting from the black hole. This perspective gives insight into how supermassive black holes blast such plasma jets into space, researchers report online July 19 in Nature Astronomy.

    “It’s a fairly impressive feat,” says radio astronomer Craig Walker of capturing the new high-resolution image. “These [jets] are some of the most powerful things in the universe,” says Walker, of the National Radio Astronomy Observatory in Socorro, N.M., who was not involved in the work. Because such superfast plasma streams are thought to influence how galaxies grow and evolve, astronomers are keen to understand how the jets form (SN: 3/29/19).

    Researchers pointed the global network of radio dishes that make up the EHT at Centaurus A for six hours in April 2017, during the same observing run that delivered the first picture of a black hole (SN: 4/10/19). About 12 million light-years from Earth, Centaurus A is one of the brightest galaxies in the sky and is known for the huge jets expelled by its central black hole.

    “They extend to pretty much the entire scale of the galaxy,” says Michael Janssen, a radio astronomer at the Max Planck Institute for Radio Astronomy in Bonn, Germany. “If we were to see radio light [with our eyes], and we were to look at the night sky, then we would see these jets of Centaurus A as a structure that is 16 times bigger than the full moon.”

    Using the EHT, Janssen and colleagues homed in on the base of those jets, which gush out from either side of the black hole’s accretion disk. The new image is 16 times as sharp as previous observations of the jets, probing details less than one light-day across — about four times the distance from the sun to Pluto. One of the most striking features that the image reveals is that only the outer edges of the jets seem to glow.

    The supermassive black hole in the galaxy Centaurus A launches two jets of plasma in opposite directions (zoomed-out view of the jets at left). In a new close-up view taken by the Event Horizon Telescope (at right; estimated location of the black hole indicated with an arrow), the jet moving toward Earth points toward the image’s top left, with two bright edges and a dark center. The jet moving away from Earth, also bright only at the edges, points toward the bottom right. M. Janssen et al/Nature Astronomy 2021

    “That’s still a puzzle,” Janssen says. One possibility is that the jets are rotating, which might cause material in some regions of the jets to emit light toward Earth, while others don’t. Or the jets could be hollow, Janssen says.

    Recent observations of a few other galaxies have hinted that the jets of supermassive black holes are brighter around the edges, says Denise Gabuzda, an astrophysicist at University College Cork in Ireland, who wasn’t involved in the work. “But it’s been hard to know whether it was a common feature, or whether it was something quirky about the few that had been observed.”

    The new view of Centaurus A’s black hole provides evidence that this edge-brightening is common, Gabuzda says. “It’s fairly rare to be able to detect the jets coming out in both directions, but in the images of Centaurus A … you can clearly see that both of them are brighter at the edges.”

    The next step will be to compare the EHT image of Centaurus A with computer simulations based on Einstein’s general theory of relativity, to test how well relativity holds up in this extreme environment, Janssen says. Examining the polarization, or orientation, of the light waves emanating from Centaurus A’s jets could also reveal the structure of their magnetic fields — just as polarization revealed the magnetism around M87’s black hole (SN: 3/24/21).

    in Science News on July 19, 2021 03:00 PM.


    Climate change may be leading to overcounts of endangered bonobos

    Climate change is interfering with how researchers count bonobos, possibly leading to gross overestimates of the endangered apes, a new study suggests.

    Like other great apes, bonobos build elevated nests out of tree branches and foliage to sleep in. Counts of these nests can be used to estimate numbers of bonobos — as long as researchers have a good idea of how long a nest sticks around before it’s broken down by the environment, what’s known as the nest decay time.

    New data on rainfall and bonobo nests show that the nests are persisting longer in the forests in Congo, from roughly 87 days, on average, in 2003–2007 to about 107 days in 2016–2018, largely as a result of declining precipitation. This increase in nests’ decay time could be dramatically skewing population counts of the endangered apes and imperiling conservation efforts, researchers report June 30 in PLOS ONE.

    “Imagine going in that forest … you count nests, but each single nest is around longer than it used to be 15 years ago, which means that you think that there are more bonobos than there really are,” says Barbara Fruth, a behavioral ecologist at the Max Planck Institute of Animal Behavior in Konstanz, Germany.

    Lowland tropical forests south of the Congo River in Africa are the only place in the world where bonobos (Pan paniscus) still live in the wild (SN: 3/18/21). Estimates suggest that there are at least 15,000 to 20,000 bonobos there. But there could be as many as 50,000 individuals. “The area of potential distribution is rather big, but there have been very few surveys,” Fruth says.

    From 2003 to 2007, and then again from 2016 to 2018, Fruth and colleagues followed wild bonobos in Congo’s LuiKotale rain forest, monitoring 1,511 nests. “The idea is that you follow [the bonobos] always,” says Mattia Bessone, a wildlife researcher at the Liverpool John Moores University in England. “You need to be up early in the morning so that you can be at the spot where the bonobos have nested, in time for them to wake up, and then you follow them till they nest again.”

    In doing so, day after day, Fruth, Bessone and colleagues were first able to understand how many nests a bonobo builds in a day, what’s known as the nest construction rate. “It’s not necessarily one because sometimes bonobos build day nests,” Bessone says. On average, each bonobo builds 1.3 nests per day, the team found.

    Tracking how long these nests stuck around revealed that the structures were lasting an average of 19 days longer in 2016–2018 than in 2003–2007. The researchers also compiled 15 years of climate data for LuiKotale, which showed a decrease in average rainfall from 2003 to 2018. That change in rain is linked to climate change, the researchers say, and helps explain why nests have become more resilient.

    These images show bonobo nests at different stages of decay. Knowing the time it takes for a nest to decay is crucial for estimating accurate bonobo numbers. © B. Fruth/MPI of Animal Behavior

    By counting the numbers of nests and then dividing that number by the product of the average nest decay time and nest construction rate, scientists can get an estimate of the number of bonobos in a region. But if researchers are using outdated, shorter nest decay times, those estimates could be severely off, overestimating bonobo counts by up to 50 percent, Bessone says.
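    The arithmetic described above can be made concrete with a back-of-the-envelope sketch (the nest count here is hypothetical; the decay times and the construction rate of 1.3 nests per bonobo per day are the study’s figures):

```python
# Estimated bonobos = nests counted / (nest decay time x construction rate)
nest_count = 1000          # hypothetical number of nests counted in a survey
construction_rate = 1.3    # nests built per bonobo per day (study average)
decay_2003_2007 = 87       # mean nest decay time in days, 2003-2007
decay_2016_2018 = 107      # mean nest decay time in days, 2016-2018

def bonobo_estimate(nests, decay_days, rate):
    """Convert a nest count into an estimated number of bonobos."""
    return nests / (decay_days * rate)

outdated = bonobo_estimate(nest_count, decay_2003_2007, construction_rate)
updated = bonobo_estimate(nest_count, decay_2016_2018, construction_rate)

# Using the outdated decay time inflates the estimate by 107/87 - 1,
# roughly 23 percent, regardless of the actual nest count.
overestimate = (outdated - updated) / updated
```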

    “The results are not surprising but also highlight how indirect (and therefore prone to errors) our methods of density estimates of many species are,” Martin Surbeck, a behavioral ecologist at Harvard University, wrote in an e-mail.

    Technologies such as camera traps can be used to directly count animals instead of using proxies like nests and are the way forward for animal population studies, researchers say. But until those methods become more common, nest counts remain vital for scientists’ understanding of bonobo numbers.

    This phenomenon is probably not limited to bonobos. All great apes build nests, and nest counts are used to estimate those animals’ numbers too. So, the researchers say, the new results could have implications for the conservation of primates far beyond bonobos.

    in Science News on July 19, 2021 11:00 AM.


    ‘Tortured phrases’, lost in translation: Sleuths find even more problems at journal that just flagged 400 papers

    What do subterranean insect provinces and motion to clamor have to do with microprocessors and microsystems? That’s an excellent question. Read on, dear reader. Earlier this month, as we reported, Elsevier announced that it had concerns about some 400 papers published in special issues in one of its journals. The publisher said that “the integrity … Continue reading ‘Tortured phrases’, lost in translation: Sleuths find even more problems at journal that just flagged 400 papers

    in Retraction watch on July 19, 2021 10:00 AM.


    Next Open NeuroFedora meeting: 19 July 1300 UTC

    Photo by William White on Unsplash.

    Please join us at the next regular Open NeuroFedora team meeting on Monday 19 July at 1300UTC in #fedora-neuro on IRC (Libera.chat). The meeting is a public meeting, and open for everyone to attend. You can join us over:

    You can use this link to convert the meeting time to your local time. Or, you can also use this command in the terminal:

    $ date --date='TZ="UTC" 1300 today'

    The meeting will be chaired by @ankursinha. The agenda for the meeting is:

    We hope to see you there!

    in NeuroFedora blog on July 19, 2021 08:11 AM.


    Summer vacations

    Just a quick note that the members of the SoftwareWG will be on vacation until the end of September. Our meetings, events, and initiatives will resume when everyone has returned.

    in INCF/OCNS Software Working Group on July 19, 2021 08:02 AM.


    Weekend reads: Ivermectin study retracted; Sci-Hub and citations; animal welfare violations at chinchilla lab supplier

    Before we present this week’s Weekend Reads, a question: Do you enjoy our weekly roundup? If so, we could really use your help. Would you consider a tax-deductible donation to support Weekend Reads, and our daily work? Thanks in advance. The week at Retraction Watch featured: Elsevier says “integrity and rigor” of peer review for 400 … Continue reading Weekend reads: Ivermectin study retracted; Sci-Hub and citations; animal welfare violations at chinchilla lab supplier

    in Retraction watch on July 17, 2021 12:27 PM.


    Only a tiny fraction of our DNA is uniquely human

    The genetic tweaks that make humans uniquely human may come in small parcels interspersed with DNA inherited from extinct ancestors and cousins.

    Only 1.5 percent to 7 percent of the collective human genetic instruction book, or genome, contains uniquely human DNA, researchers report July 16 in Science Advances.

    That humans-only DNA, scattered throughout the genome, tends to contain genes involved in brain development and function, hinting that brain evolution was important in making humans human. But the researchers don’t yet know exactly what the genes do and how the exclusively human tweaks to DNA near those genes may have affected brain evolution.

    “I don’t know if we’ll ever be able to say what makes us uniquely human,” says Emilia Huerta-Sanchez, a population geneticist at Brown University in Providence, R.I., who was not involved in the study. “We don’t know whether that makes us think in a specific way or have specific behaviors.” And Neandertals and Denisovans, both extinct human cousins, may have thought much like humans do (SN: 2/22/18).

    The results don’t mean that individual people are mostly Neandertal or Denisovan, or some other mix of ancient hominid. On average, people in sub-Saharan Africa inherited 0.096 percent to 0.46 percent of their DNA from ancient interbreeding between their human ancestors and Neandertals, the researchers found (SN: 4/7/21). Non-Africans inherited more DNA from Neandertals: about 0.73 percent to 1.3 percent. And some people inherited a fraction of their DNA from Denisovans as well.

    Using a new computational method, researchers at the University of California, Santa Cruz examined every spot of DNA in the genomes of 279 people. The team compiled results from those individual genomes into a collective picture of the human genome. For each spot, the team determined whether the DNA came from Denisovans, Neandertals or was inherited from a common ancestor of humans and those long-lost relatives.

    Although each person may carry about 1 percent Neandertal DNA, “if you look at a couple hundred people, they mostly won’t have their bit of Neandertal DNA in the same place,” says Kelley Harris, a population geneticist at the University of Washington in Seattle who wasn’t involved in the work. “So if you add up all the regions where someone has a bit of Neandertal DNA, that pretty soon covers most of the genome.” 

    In this case, about 50 percent of the collective genome contains regions where one or more people inherited DNA from Neandertals or Denisovans, the researchers discovered. Most of the rest of the genome has been passed down from the most recent common ancestor of humans and their extinct cousins. After whittling away the ancient heirloom DNA, the team looked for regions where all people have human-specific tweaks to DNA that no other species have. That got the estimate of uniquely human DNA down to anywhere between 1.5 percent and 7 percent of the genome.

    The finding underscores just how much interbreeding with other hominid species affected the human genome, says coauthor Nathan Schaefer, a computational biologist now at the University of California, San Francisco. The researchers confirmed previous findings from other groups that humans bred with Neandertals and Denisovans, but also with other extinct, unknown hominids (SN: 2/12/20). It’s not known whether those mysterious ancestors are the groups that included “Dragon Man” or Nesher Ramla Homo, which may be closer relatives to humans than Neandertals (SN: 6/25/21; SN: 6/24/21). And the mixing and mingling probably happened multiple times between different groups of humans and hominids, Schaefer and colleagues found.

    The tweaks that make the uniquely human DNA distinctive arose in a couple of evolutionary bursts, probably around 600,000 years ago and again about 200,000 years ago, the team found. Around 600,000 years ago is about the time that humans and Neandertals were forming their own branches of the hominid family tree.

    The estimate of the amount of uniquely human DNA doesn’t take into account places where humans have gained DNA through duplication or other means, or lost it, says James Sikela, a genome scientist at the University of Colorado Anschutz Medical Campus in Aurora who wasn’t involved in the study (SN: 8/6/15). Such extra or missing DNA may have allowed humans to evolve new traits, including some involved in brain evolution (SN: 3/9/11; SN: 2/26/15).

    Ancient DNA usually has been degraded into tiny fragments, and researchers have pieced together only portions of the genomes from extinct hominids. The fragmented genomes make it difficult to tell where big chunks of DNA may have been lost or gained. For that reason, the researchers studied only small tweaks to DNA involving one or more DNA bases — the information-carrying parts of the molecule. Given that humans and Neandertals went their separate evolutionary ways relatively recently, it’s not surprising that only 7 percent or less of the genome has evolved the uniquely human tweaks, Sikela says. “I’m not shocked by that number.” Considering DNA that humans alone have added to their genomes might produce a higher estimate of exclusively human DNA, he says.

    Or it could go the other way. As more genomes are deciphered from Neandertals, Denisovans and other extinct hominids, researchers may discover that some of what now seems like uniquely human DNA was also carried by those extinct relatives, Harris says. “This estimate of the amount of uniquely human regions is only going to go down.”

    in Science News on July 16, 2021 06:07 PM.


    JAMA journal retracts paper on masks for children

    JAMA Pediatrics has retracted a paper claiming that children’s masks trap too-high concentrations of carbon dioxide a little more than two weeks after publishing it. The paper, by Harald Walach and colleagues, came under fire immediately after it was published on June 30, and quickly earned an editor’s note. Walach had another paper — which … Continue reading JAMA journal retracts paper on masks for children

    in Retraction watch on July 16, 2021 04:26 PM.


    Protecting an investment

    With the number of potential kidney transplant recipients exceeding available organs, there is a tremendous focus on reducing discard rates of allografts and increasing the number of live donors. In the US, there are 107,000 individuals on the wait list for kidney transplant, with about 30,000 transplants performed annually. Worldwide, about 95,000 kidney transplants were performed, with the majority occurring in North America and Europe. There has been a parallel initiative to promote the option of transplantation through education and awareness campaigns. These programs are needed to support the overall goal of making transplants available to medically eligible recipients. Since transplant is such a limited resource, work on preserving and prolonging allograft survival is just as important as closing the gap between people wait-listed for transplant and available allografts.

    Polyoma virus BK Nephropathy (PVBKN) is a challenging issue in transplant. While PVBKN occurs in less than 10% of transplant recipients, it can lead to significant graft loss through rejection episodes and difficulty in clearing the virus (https://doi.org/10.1111/ajt.14314). Early detection of the virus can provide a window for reducing immunosuppression and possibly using antiviral agents, allowing for allograft preservation.

    Nili et al., in a recently published study, looked at the use of immunohistochemistry on biopsies to enhance the diagnosis of PVBKN. Histopathologic evaluation, the gold standard, detects nuclear viral inclusion bodies. The addition of immunohistochemistry staining for SV40 picked up additional cases of PVBKN that did not show the classic pathologic changes. These cases were hypothesized to have been caught earlier in the evolution of PVBKN.

    Noninvasive testing has not provided a consistently reliable means of diagnosis, as the degree of viremia and viruria may not correlate with disease. Given that most cases of allograft dysfunction will be evaluated by renal biopsy, and that immunohistochemistry staining for SV40 is widely available and low cost, this seems an approach that transplant centers should adopt as routine. It may also be more feasible than years of surveillance screening for viruria or viremia, since PVBKN can arise many years post transplantation. Another key factor to keep in mind is that many transplant recipients may not be followed by transplant nephrologists or transplant centers for routine care, and may not be getting surveillance screening of blood and urine due to cost or availability of resources. The ability to diagnose early and intervene to help preserve allograft function, even in a few cases, would be valuable for long-term allograft preservation. This study highlights the need to continue efforts toward improving allograft survival and to identify cost-effective interventions that many centers can readily incorporate. Ongoing dissemination of best practices is another key part of improving care.

    The post Protecting an investment appeared first on BMC Series blog.

    in BMC Series blog on July 16, 2021 02:00 PM.


    Lakes of liquid water at Mars’ southern ice cap may just be mirages

    Maybe hold off on that Martian ice fishing trip. Two new studies splash cold water on the idea that potentially habitable lakes of liquid water exist deep under the Red Planet’s southern polar ice cap.

    The possibility of a lake roughly 20 kilometers across was first raised in 2018, when the European Space Agency’s Mars Express spacecraft probed the planet’s southern polar cap with its Mars Advanced Radar for Subsurface and Ionosphere Sounding, or MARSIS, instrument. The orbiter detected bright spots on radar measurements, hinting at a large body of liquid water beneath 1.5 kilometers of solid ice that could be an abode to living organisms (SN: 7/25/18). Subsequent work found hints of additional pools surrounding the main lake basin (SN: 9/28/20).

    But the planetary science community has always held some skepticism about the lakes’ existence, which would require some kind of continuous geothermal heating to keep water liquid beneath the ice (SN: 2/19/19). Below the ice, temperatures average –68° Celsius, far below the freezing point of water, even if the lakes are a brine containing a healthy amount of salt, which lowers water’s freezing point. An underground magma pool would be needed to keep the area liquid — an unlikely scenario given Mars’ lack of present-day volcanism.

    “If it’s not liquid water, is there something else that could explain the bright radar reflections we’re seeing?” asks planetary scientist Carver Bierson of Arizona State University in Tempe.

    In a study published in the July 16 Geophysical Research Letters, Bierson and colleagues describe a couple of other substances that could explain the reflections. Radar’s reflectivity depends on the electrical conductivity of the material the radar signal moves through. Liquid water has a fairly distinctive radar signature, but examining the electrical properties of both clay minerals and frozen brine revealed that those materials could mimic this signal.

    Adding weight to the non-lake explanation is a study from an independent team, published in the same issue of Geophysical Research Letters. The initial 2018 watery findings were based on MARSIS data focused on a small section of the southern ice cap, but the instrument has now built up three-dimensional maps of the entire south pole, where hundreds to thousands of additional bright spots appear.

    “We find them literally all over the region,” says planetary scientist Aditya Khuller, also of Arizona State University. “These signatures aren’t unique. We see them in places where we expect it to be really cold.”

    Creating plausible scenarios to maintain liquid water in all of these locations would be a tough exercise. Both Khuller and Bierson think it is far more likely that MARSIS is pointing to some kind of widespread geophysical process that created minerals or frozen brines.

    While previous work had already raised doubts about the lake interpretation, these additional data points might represent the pools’ death knell. “Putting these two papers together with the other existing literature, I would say this puts us at 85 percent confidence that this is not a lake,” says Edgard Rivera-Valentín, a planetary scientist at the Lunar and Planetary Institute in Houston who was not involved in either study.

    The lakes, if they do exist, would likely be extremely cold and contain as much as 50 percent salt — conditions in which no known organisms on Earth can survive. Given that, the pools wouldn’t make particularly strong astrobiological targets anyway, Rivera-Valentín says (SN: 5/11/20).

    Lab work exploring how substances react to conditions at Mars’ southern polar ice cap could help further constrain what generates the bright radar spots, Bierson says.

    In the meantime, Khuller already has his eye on other areas of potential habitability on the Red Planet, such as warmer midlatitude regions where satellites have seen evidence of ice melting in the sun. “I think there are places where liquid water could be on Mars today,” he says. “But I don’t think it’s at the south pole.” 

    in Science News on July 16, 2021 10:00 AM.


    ‘Please don’t be afraid to talk about your errors and to correct them.’

    A “systematic error” in a mental health database has led to the retraction of a 2017 paper on how people with psychosis process facial expressions. Joana Grave, a PhD student at the University of Aveiro, in Portugal, and her colleagues published their article, “The effects of perceptual load in processing emotional facial expression in psychotic … Continue reading ‘Please don’t be afraid to talk about your errors and to correct them.’

    in Retraction watch on July 16, 2021 10:00 AM.


    Schneider Shorts 16.07.2021 – Pinche Estúpido

    Schneider Shorts 16.07.2021: TCM robots and Russian lasers in space, Swedish whistleblowers at the EU Court of Justice, fraudsters and bullies retiring, an ivermectin setback, German stinginess, how Vitamin D prevents colon cancer, and how paper-mill narrative gets purged of Smut.

    in For Better Science on July 16, 2021 05:27 AM.


    Human cells make a soaplike substance that busts up bacteria

    When faced with bacterial invaders, some human cells dispense a surprising substance: soap.

    These cells, which aren’t part of the immune system, unleash a detergent-like protein that dissolves chunks of the inner membranes of bacteria, killing the infiltrators, researchers report in the July 16 Science.

    The “professional” players of the immune system, like antibodies or white blood cells, get lots of attention, but “all cells are endowed with some ability to combat infection,” says immunologist John MacMicking, a Howard Hughes Medical Institute investigator at Yale University.

    In humans, these run-of-the-mill cellular defenses have often been overlooked, MacMicking says, even though they are part of “an ancient and primordial defense system” and could inform the development of treatments for new infections. 

    Often, nonimmune cells rely on a warning from their professional counterparts to combat infections. Upon detecting outsiders, specialized immune cells release an alarm signal called interferon gamma. That signal rouses other cells, including epithelial cells that line the throat and intestines and are often targeted by pathogens, to action.

    MacMicking and colleagues looked for the molecular basis of that action by infecting laboratory versions of human epithelial cells with Salmonella bacteria, which can exploit cells’ nutrient-rich interior. Then, the team screened over 19,000 human genes, looking for those that conferred some protection from infection. 

    One gene, which contains instructions for a protein called APOL3, stood out. When this gene was disabled, the epithelial cells succumbed to a Salmonella infection, even when warned by interferon gamma. Zooming in on APOL3 molecules in action inside host cells with high-powered microscopy, the researchers found that the protein swarms invading bacteria and somehow kills them.

    Human epithelial cells can respond to a Salmonella infiltration by releasing a molecule called APOL3 (black dots in this microscope image), which acts like a detergent to dissolve parts of the bacteria’s internal membrane. R.G. Gaudet et al/Science 2021

    Salmonella are hardy microbes, protected by an outer and inner membrane, a feature shared by many different forms of bacteria. This double layer renders these bacteria hard to kill, but further investigation revealed how APOL3 and another molecule, GBP1, work together to do it. GBP1 somehow loosens the bacteria’s outer membrane, opening doors for APOL3 to deliver its death-by-dissolution to the inner lipid membrane. APOL3 has both water-loving and lipid-loving parts, letting it bind to the inner membrane and dissolve it into the intracellular fluid, like soap washing away grease.

    “We were a bit surprised to find detergent-like activity inside human cells,” MacMicking says, given such a molecule could dissolve host membranes too. But the researchers found that APOL3 specifically targets lipids found in bacteria, and its activity is blocked by cholesterol, a common component of mammalian cell membranes, leaving human tissues unaffected.

    “Everything about these findings is supercool,” says Jessica Brinkworth, an evolutionary immunologist at the University of Illinois at Urbana-Champaign who was not involved in the study. Many infections start in these epithelial cells, and understanding how they fight back is crucial to developing future treatments, she says.

    “The really interesting finding is how the APOL3 is able to distinguish between bacterial membranes and host membranes,” she says. That evolution found such an elegant way to control this powerful tool “is a beautiful thing.”

    in Science News on July 15, 2021 06:00 PM.


    What 20th century science fiction got right and wrong about the future of babies

    Science fiction writers have imagined just about every aspect of life in some far-off future — including how humans will reproduce. And usually, their visions have included a backlash against those who tamper with Mother Nature.

    In his 1923 stab at speculative fiction, for instance, British biologist JBS Haldane said that while those who push the envelope in the physical sciences are generally likened to Prometheus, who incurred the wrath of the gods, those who mess around with biology risk stirring something far more pointed: the wrath of their fellow man. “If every physical or chemical invention is a blasphemy,” he wrote in Daedalus, or Science and the Future, “every biological invention is a perversion.”

    Some of Haldane’s projections were remarkably specific. He wrote, for instance, that the world’s first “ectogenic babies” would be born in 1951. These lab-grown babies would come about when two fictitious scientists, “Dupont and Schwarz,” acquire a fresh ovary from a woman who dies in a plane crash. Over the next five years, the ovary produces viable eggs, which the team extracts and fertilizes on a regular basis.

    Eventually, Haldane writes, Dupont and Schwarz solve the problem of “the nutrition and support of the embryo.” Soon lab-grown babies become routine, as scientists learn to remove an ovary from any living woman, maintain it in the lab for up to 20 years, extract a new egg every month, grab some sperm (from where, he never says), and successfully fertilize 90 percent of the eggs. Then — and here the details get murky — the embryos are “grown successfully for nine months, and then brought out into the air.”

    JBS Haldane, a British biologist, wrote in the 1920s about the possibility of babies being conceived and gestated in laboratories outside the womb. Mirrorpix/Getty Images

    In Haldane’s imaginary future, 60,000 babies a year are “brought out into the air” in France, the first country to adopt the new technology, by the year 1968. At some later date, he wrote, ectogenic babies go international, and become more common than natural births, with only 30 percent of children “born of woman.”

    Haldane was wrong to leave the human uterus entirely out of these reproductive machinations. But he wasn’t wrong about the eventual ability of scientists to remove a living woman’s ovary and keep it in the lab as a source of eggs for a very long time. This was first achieved in 2001, when fertility doctor Kutluk Oktay, then at Weill Medical College of Cornell University, reported freezing strips of ovarian tissue taken from women who needed or wanted to delay childbearing. When a woman is ready for pregnancy, she returns to the lab to have her ovarian tissue thawed and returned to the ovary. If all goes well, the implant will, within a few months, resume secreting hormones normally, and the revived ovary will go back to maturing and releasing eggs on a regular cycle. Today, babies born after ovarian tissue cryopreservation number in the hundreds. (And babies born through all forms of assisted reproductive technology number in the millions.)

    British writer Aldous Huxley, too, was preoccupied with laboratory-made babies as the gateway to the future — in his case, to a totalitarian dystopia. (Haldane devoted relatively little time to the social implications of ectogenesis.) Artificial reproduction was at the heart of his 1932 novel Brave New World. Carefully selected eggs and sperm were mixed in glass dishes and grown in an artificial uterus, where they could either be cultured with nutrients to breed an intelligent and healthy upper crust, or spiked with poisons to create an underclass of not-quite-human servants.

    Huxley himself was curious about how accurate his prophecies were. So, in 1958, he took another look in Brave New World Revisited. It was still two decades before the birth of the world’s first “test tube baby” in his native England, which might explain why Huxley, by that time living in California, seemed to think he had missed the mark in his original projection of endless rows of fake wombs in the baby-making lab. “Babies in bottles and the centralized control of reproduction are perhaps not impossible,” he conceded, but they certainly were not around the corner. He added that “it is quite clear that for a long time to come we shall remain a viviparous species breeding at random.”

    British writer Aldous Huxley, author of Brave New World, imagined a dystopian future in which people’s traits and social status are determined before they’re born. Edward Gooch Collection/Getty Images

    Yes, even 60-plus years after Huxley wrote those words, humans do indeed still breed mostly viviparously — meaning in live birth from a mother’s body — and mostly “at random.” Yet assisted reproductive technology has become almost mainstream, in a way that neither Huxley nor Haldane could quite have predicted. Nor did they foresee the emergence, within this same startling century, of a technique like CRISPR, with the potential to change an embryo’s genetic code as easily as making changes in a Word document.

    In this regard, writers from a much more recent era, such as those who wrote the screenplay for the 1997 movie Gattaca, were in a better position to get the science basically right, envisioning a grim future in which, as film critic Roger Ebert wrote in his review, genetic engineering of embryos becomes as humdrum as a kind of “preemptive plastic surgery.”

    Even as long ago as 1923, though, Haldane was able to offer one unusually provocative prediction: “We can already alter animal species to an enormous extent, and it seems only a question of time before we shall be able to apply the same principles to our own.”

    in Science News on July 15, 2021 02:00 PM.


    Insects had flashy, noise-making wings as early as 310 million years ago

    Modern insects are versatile wing conversationalists. Crickets can scrape a leg against a wing or rub two wings together. Some grasshoppers beat their wings like castanets; others crackle and snap the thin membranes. Many butterfly wings play with light, manipulating it to hide in plain sight or reflecting it in flashes along iridescent or multifaceted surfaces (SN: 6/21/21).

    Now, the discovery of the fossilized wing of a grasshopper-like insect suggests this conversation got started as far back as 310 million years ago. The wing structures resemble those of living insects that use light or sound to communicate, researchers report July 8 in Communications Biology.

    A fossil (top) preserving wing structures (illustrated, bottom) from the ancient insect Theiatitan azari suggests it used its wings to communicate, much like many modern insects. By comparing the arrangement of the structures with modern insect wings, researchers suggest T. azari may have made crackling noises by swiftly snapping together the thin membranes of the wing. It may also have reflected flashes of light along different surfaces in the wing. T. Schubnel et al/Communications Biology 2021

    Named Theiatitan azari — after Theia, the Titan goddess of light in Greek mythology — the insect was a member of Titanoptera, a group of giant predatory insects. Large-winged insects thrived in the Carboniferous Period, which spanned 359 million to 299 million years ago. Some grew to astounding sizes in the oxygen-rich atmosphere (SN: 12/13/05). (The terrifying dragonfly-like Meganeura was roughly the size of a small dog.)

    T. azari predates other Titanoptera by about 50 million years. But like other insects in the group, the thin membranes of its forewings are divided by networks of veins into a mosaic of smaller sections. Based on the patterns of those mosaics, Titanoptera, including T. azari, may have had a range of communication tools at their wingtips, including crackles, flashes of light, or both, say Thomas Schubnel, an evolutionary biologist at the Institute of Systematics, Evolution, Biodiversity in Paris, and colleagues.

    Scientists don’t yet know whether the ancient insects used those abilities to call to potential mates or warn off predators. But this discovery suggests there’s plenty more these ancient wings can tell them.

    in Science News on July 15, 2021 11:00 AM.


    Paper from company claiming phototherapy could treat COVID-19 is retracted

    A study that touted phototherapy as a way to combat the COVID-19 pandemic has been retracted after Elisabeth Bik noted a litany of concerns about the article, from duplications in the figures to the authors’ failure to disclose conflicts of interest. The article, “Methylene blue photochemical treatment as a reliable SARS-CoV-2 plasma virus inactivation method …

    in Retraction watch on July 15, 2021 10:45 AM.


    Supporting PhD researchers with Fulbright scholarships for US study

    Fulbright scholarships sponsored by Elsevier enable Dutch PhD students to study at top US universities

    in Elsevier Connect on July 15, 2021 12:00 AM.


    Millions of kids have missed routine vaccines thanks to COVID-19

    The COVID-19 pandemic has forced millions of children around the world to miss out on important childhood vaccinations, increasing the risk of dangerous outbreaks of other infectious diseases, new research suggests.

    Amid the spread of the coronavirus, an estimated 9 million more children than expected didn’t get a first dose of the measles vaccine in 2020, researchers report July 14 in a modeling study in the Lancet. Another 8.5 million children are projected to have missed a third dose of the DTP shot for diphtheria, tetanus and pertussis, or whooping cough.

    The World Health Organization and other public health agencies warned last year that the COVID-19 pandemic would disrupt routine childhood vaccinations. Those missing vaccinations could put vulnerable children at risk during outbreaks of highly contagious diseases, like the 2014 measles outbreak at Disneyland in California (SN: 11/13/20). The news also comes as health officials in Tennessee plan to halt all outreach to get adolescents vaccinated to prevent not only COVID-19 but also other infectious diseases.  

    “We’ve lost over 4 million people to COVID,” says Suzette Oyeku, a pediatrician at the Children’s Hospital at Montefiore and Albert Einstein College of Medicine, both located in New York City. “How many additional lives do we want to lose for not protecting people against stuff that we know we can protect?”

    There is some uncertainty in the new Lancet estimates because vaccine data weren’t available for all countries, says global health researcher Kate Causey of the University of Washington’s Institute for Health Metrics and Evaluation in Seattle. The actual numbers for some regions could be lower or higher.

    A separate analysis from WHO and UNICEF, described in a July 15 press release, does find lower numbers, though millions of children are still missing crucial childhood vaccines.

    Based on health care data, the WHO reports that in 2020, at least 3.5 million more children missed their first DTP dose than in 2019. Another 3 million more children missed their first measles vaccine in 2020 than in 2019. Many children in Southeast Asia, for example, missed their shots. India had the largest increase in missed vaccines, according to the WHO study. There, more than an estimated 3 million children didn’t get a first dose of the DTP vaccine in 2020 compared with around 1.4 million in 2019.

    In the Lancet study, Causey and colleagues estimated global measles and diphtheria, tetanus and pertussis vaccine coverage in 2020 by analyzing public health data as well as mobility patterns. Had the pandemic not happened, an estimated 83.3 percent of children would have been vaccinated against diphtheria, tetanus and pertussis and 85.9 percent against measles in 2020, the researchers’ models suggest. Instead, an estimated 76.7 percent of children received the DTP vaccine, the lowest rate since 2008, meaning 30 million children — 8.5 million more than expected — missed the shot, the team found. Only an estimated 78.9 percent were vaccinated against measles, meaning 27.2 million children, or 8.9 million more than expected, missed doses. Experts haven’t seen a level of measles vaccination in kids that low since 2006 (SN: 4/24/19).
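    The 8.5 million figure follows from the gap between the projected and actual coverage rates applied to the year's birth cohort. As a rough back-of-the-envelope check (a sketch only: the roughly 129-million-child cohort size is inferred from the article's own numbers, not stated in the study):

```python
# Rough arithmetic behind the Lancet DTP estimates quoted above.
# Cohort size is inferred: 8.5 million extra missed doses across a
# 6.6-percentage-point coverage gap implies a cohort of ~129 million.
expected_dtp = 0.833   # projected DTP coverage had the pandemic not happened
actual_dtp = 0.767     # estimated actual 2020 DTP coverage
extra_missed = 8.5e6   # children who missed the shot beyond expectations

cohort = extra_missed / (expected_dtp - actual_dtp)
total_missed = cohort * (1 - actual_dtp)

print(f"implied cohort: {cohort / 1e6:.0f} million children")
print(f"total missing a DTP dose: {total_missed / 1e6:.0f} million")
```

    The second line reproduces the article's "30 million children" figure, so the quoted percentages and counts are internally consistent.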

    That decline is troubling, particularly given the ongoing coronavirus pandemic, Oyeku says. There is “the concern that we’re going to start to see clusters of outbreaks of vaccine-preventable disease,” as well as outbreaks of COVID-19 in children, which could cause problems.

    As seen in the WHO analysis, regions like South Asia had the largest decline, with administered DTP doses dropping nearly 60 percent below what was expected, the Lancet study suggests. Measles doses declined by 40 percent in that region. Sub-Saharan Africa had the smallest decline — around 4 percent for both shots.

    High-income countries including the United States had an estimated 6 percent drop in DTP vaccinations and an 8 percent drop for measles, the team found. A separate study released in the June 11 Morbidity and Mortality Weekly Report focused on the United States showed that vaccination rates for these vaccines dropped across 10 states, including Idaho, Iowa and Washington, from March to September 2020 compared with the same period in 2018 and 2019.

    in Science News on July 14, 2021 11:00 PM.


    Would dogs return the favor if you gave them treats? It’s complicated

    Dogs may not be inclined to return favors to people, at least when it involves food. 

    The result, published July 14 in PLOS ONE, is somewhat surprising since a previous study showed dogs will return favors in the form of food to other dogs. In other studies, dogs helped their owners when the people appeared to be trapped, and canines were able to distinguish between helpful and unhelpful people. So it seems reasonable to think dogs might reciprocate good deeds by humans.

    To find out, comparative psychologist Jim McGetrick and colleagues at the University of Veterinary Medicine, Vienna trained pet dogs to use a button to get food from a nearby dispenser. Each dog was then paired with a human, visible in an adjacent enclosure, who pressed the button to dispense food in the dog’s enclosure. On separate occasions, the dog was also paired with another human who didn’t press the button. When it was the dogs’ turn to offer food to their human partners, the canines were no more likely to press the button to provide food for the helpful human than for the stingy one.

    Why didn’t dogs return the humans’ food favors? It may be that they aren’t willing to, or perhaps aren’t able to form this sort of complicated tit-for-tat social contract with humans. Or, there’s another possibility, the study authors note: The dogs simply may not have understood what was being asked of them, which could come down to how the experiment was designed. 

    Science News talked to McGetrick about the challenges of testing whether animals like dogs are capable of complex social behaviors. His answers have been edited for clarity and length:

    SN: What aspects of the experiment may have influenced why a dog didn’t return the favor for a human?

    McGetrick: One possible explanation is the fact that dogs don’t provide humans with food. We feed them all the time, but it’s not something natural that they do. At the same time, dogs have been shown to reciprocate the receipt of food with other dogs [even though] adult dogs also don’t normally provide food to other adult dogs. So, if one applies the argument that this is an unusual setup because dogs don’t provide food to humans, I think one also needs to explain why it would be normal for a dog to provide food to another dog. 

    SN: If trading food wasn’t the problem, what else could have been at play?

    McGetrick: Another possible explanation for why they didn’t reciprocate is that the setup is very abstract. In a lot of previous reciprocity studies, there were very clear physical mechanisms: You pull a rope which pulls a tray, or a box opens if you press a lever. The dog’s physical connection with the mechanism is very obviously connected to the outcome, so that could be way easier for dogs to understand. In our case, we used the food dispenser where the connection was not that obvious. Having said that, the dogs all learned to press the button and get the food. What they understand about it is another question.

    Jim McGetrick demonstrates how dogs were trained to push a button to get food from a separate dispenser for the experiment. REBECCA FRÄNZLE

    SN: Are there other elements of the experiment that the dogs might not have understood?

    McGetrick: I’m not sure that the dogs understood that another individual was helping them. It seemed they certainly saw the human. But even if the dogs look, they might see the human’s face, they might see the human’s hand pressing the button, but they might never register that, “Oh, that’s how I’m getting the food,” or “Oh, the human is doing something for me.” It’s very difficult to know what they understand about the situation.

    SN: Do you plan to follow up on any of these possible explanations?

    McGetrick: At the moment, we’re running basically the same study but using dogs as the partners [rather than humans]. You can boil our result down to two possibilities. One is that there were methodological issues. Or this is just the answer to the question: Will dogs reciprocate help received from humans? And one way to really answer that is to test them with other dogs with this setup. With the same setup, we should see reciprocity with other dogs. And if we don’t see reciprocity with other dogs as partners, then it would point more towards methodological issues.

    SN: How difficult is it to settle on a design for an experiment? 

    McGetrick: These are very artificial setups where you’re just trying to get at something real, something that reveals something about nature and reality. And there are maybe 100 of these tiny decisions you make along the way, and so many of them are just intuition. And those minor decisions you make could be the difference between a positive result or a negative result. 

    SN: Publishing negative results is somewhat uncommon. Why do you think it’s important?

    McGetrick: My feeling is that it’s becoming more common, particularly in the field that I work in. If a study is designed well, structured well and addresses a question, there’s no reason for it not to be published regardless of the result. And it is a big problem if results aren’t published because they’re negative; it hides a lot of important information. The result is the result. You can explain the reasons why you might have gotten that result, but it shouldn’t really matter either way.

    in Science News on July 14, 2021 06:00 PM.


    Taking care of the patient- not just the disease

    With guidelines and best practice recommendations, treatment and management plans are focused on hitting laboratory targets and physical parameters with the goal of improved outcomes. While targeting hemoglobin A1c and blood pressure levels is important, as these are potential determinants of the risk of morbidity and mortality, optimizing these parameters addresses only some of the factors relevant to outcomes.

    In a previous issue, Ozieh et al. looked at the individual and cumulative impact of social determinants of health (depression, food insecurity, poverty level) on all-cause mortality among 1,376 individuals with diabetes and chronic kidney disease. Social determinants of health were significantly associated with mortality even after adjusting for demographics, lifestyle variables, glycemic control, and comorbidities. Depression was independently associated with mortality, and individuals affected by all three factors had a 41% higher risk of death. Health care providers have long recognized that socioeconomic factors affect health in terms of access to care, health literacy, and adherence to treatment, but the extent of the impact has been underestimated.

    With packed office and clinic schedules and numerous competing health conditions needing evaluation, providers often lack the time to understand the economic circumstances of a patient’s life. Without that information, physicians may be limited in their ability to treat patients; gaining it would also strengthen the provider-patient relationship. Perhaps the initial assessment for diabetes, chronic kidney disease or hypertension care should include questions gauging food insecurity, the ability to follow a specific diet, or the ability to get to appointments. This information would allow creation of feasible plans of care and counter the usually inaccurate assumption that an individual’s failure to follow a prescribed treatment reflects a lack of concern on the patient’s part.

    In the last few years, newer agents, such as SGLT-2 inhibitors, have been introduced for management of diabetes with the promise of improved cardiac and renal outcomes. These agents are often expensive, and insurance coverage can be challenging to obtain. As research and pharmaceutical development brings us newer options for treatment, we need to remember that addressing biochemical parameters with therapeutics alone will not fully address the health impact of many of our chronic diseases. We need to advocate for investment in case management and social interventions at the same time.

    The post Taking care of the patient- not just the disease appeared first on BMC Series blog.

    in BMC Series blog on July 14, 2021 03:34 PM.


    Climate change may rob male dragonfly wings of their dark spots

    Many dragonflies zip through the air with their translucent wings painted in an array of dark spots and bands. But — for males at least — those dapper decorations could soon fall out of style as a result of climate change. 

    Males’ dark wing patches are smaller in dragonflies of a given species living in warmer climates than in cooler regions, researchers report in the July 13 Proceedings of the National Academy of Sciences. That finding suggests that as temperatures rise, dragonfly populations may see their spots shrink over time. The evolutionary change may dampen not only the male insects’ flair, but also their dating life.

    Understanding how organisms have adapted to warmer climates over the years is key to understanding how they may adapt to future climate conditions, says Michael Moore, an evolutionary ecologist at Washington University in St. Louis. 

    Heavy wing pigmentation can help dragonflies stay warm in chillier regions, but could be dangerous in hotter weather. The dark spots absorb sunlight and can heat wings by as much as 2 degrees Celsius, which may cause tissue damage and interfere with flight, Moore says. Tossing or shrinking the spots is a way to beat the heat, and could result in a color shift response to climate change among dragonflies akin to owls (SN: 7/11/14) and hares (SN: 1/26/16). But the adaptation could also garble communication with mates.

    Male dragonflies use wing spots to attract mates and intimidate rivals, and females rely on those spots to recognize potential mates of the same species. The markings differ greatly among species, ranging from small speckles near the wing’s base to extensive bands or panels spread across the entire wing. 

    Most organisms, Moore says, “don’t just need to survive in order to persist and perpetuate their species within the habitats they live in, they also have to be able to reproduce.”

    He and colleagues compiled a database of dragonfly wing patterns from field guides and many thousands of observations by citizen scientists across North America on the nature identification app iNaturalist. Male dragonflies from species in warmer regions were less likely to have evolved wing spots than their cool-climate counterparts, the researchers found.

    To explore how fast these color patterns could evolve, Moore and his team selected 10 species and compared wing spots on dragonflies from warmer and cooler parts of each species’s range. That way, the team could see whether spot patterns within a single species can adapt to local climate, a faster evolutionary response than differences arising between species. Where it was warmer, males in seven of the 10 species have evolved wings with fewer and smaller dark spots, the team found. The changes even appear to occur on the scale of decades: Male dragonflies in the warmest years from 2005 to 2019 had the smallest wing spots on average.

    That change could have alarming consequences for the insects. “It’s not hard to imagine that really rapid declines in wing coloration might cause females to not recognize males of their own species,” Moore says. 

    Most research to date on insect color and climate change has focused just on heat tolerance, says Lauren Buckley, an ecologist at the University of Washington in Seattle who was not involved with the study. “This research reveals the value in examining multiple, competing functions of traits.” Seeing how changes in the spots affect all of the jobs that they do is important, Buckley says.

    Dragonflies frequently move in and out of areas near water that may have very different temperatures, so future research could “better account for how dragonflies experience their environments,” she says. 

    Unlike males’ spots, wing spots on females don’t appear to respond to temperature, which was surprising, Moore says. It’s possible the females’ more regular use of shaded habitats blunts the effect of higher temperatures.

    That finding “indicates that we should not assume necessarily that males and females are going to adapt to climates in exactly the same way,” Moore says. “That has really big implications for how we think about modeling and forecasting responses to future climates.”

    For now, Moore says, he wants to get an estimate of just how much wing spot changes could disrupt the dragonflies’ dating game.

    in Science News on July 14, 2021 01:00 PM.


    Mixing trees and crops can help both farmers and the climate

    Maxwell Ochoo’s first attempt at farming was a dismal failure.

    In Ochieng Odiere, a village near the shores of Kenya’s Lake Victoria, “getting a job is a challenge,” the 34-year-old says. To earn some money and help feed his family, he turned to farming. In 2017, he planted watermelon seeds on his 0.7-hectare plot.

    Right when the melons were set to burst from their buds and balloon into juicy orbs, a two-month dry spell hit, and Ochoo’s fledgling watermelons withered. He lost around 70,000 Kenyan shillings, or about $650.

    Ochoo blamed the region’s loss of tree cover for the long dry spells that had become more common. Unshielded from the sun, the soil baked, he says.

    In 2018, Ochoo and some neighbors decided to plant trees on public lands and small farms. With the help of nonprofit groups, the community planted hundreds of trees, turning some of the barren hillsides green. On his own farm, Ochoo now practices alley cropping, in which he plants millet, onions, sweet potatoes and cassava between rows of fruit and other trees.

    The trees provide shade and shelter to the crops, and their deeper root systems help the soil retain moisture. A few times a week in the growing season, Ochoo takes papayas, some as big as his head, to market, bringing home the equivalent of about $25 each time.

    And the fallen leaves of the new Calliandra trees provide fodder for Ochoo’s five cows. He also discovered that he could grind up the fernlike leaves as a dietary supplement for the tilapia he grows in a small pond. He now spends less on fish food, and the tilapia grow much faster than his neighbors’ fish, he says.

    Today, nearly everything Ochoo’s family eats comes from the farm, with plenty left over to sell at market. “Whether during dry spell or rainy season, my land is not bare,” he says, “there’s something that can sustain the family.”

    Maxwell Ochoo eats a juicy papaya from his farm in Kenya. Papaya trees help keep moisture in the soil in drier times, benefiting the crops he grows between the trees. M. Ochoo

    Ochoo’s tree-filled farm represents what many scientists hope will be farming’s future. The present reality, where fields are often cleared of trees to raise livestock or plant row after row of single crops, called monocultures, is running out of room.

    About half of all habitable land on Earth is devoted to growing food. More than 30 percent of forests have been cleared worldwide, and another 20 percent degraded, largely to make room for raising livestock and growing crops. By 2050, to feed a growing population, croplands will have to increase by 26 percent, an area the size of India, researchers estimate.

    Humans’ collective hunger drives the twin ecological crises of climate change and biodiversity loss. Cutting down trees to make room for crops and livestock releases carbon into the atmosphere and erases the natural habitats that support so many species (SN: 1/30/21, p. 5).

    Humankind is in danger of crossing a planetary boundary with unpredictable consequences, says landscape ecologist Tobias Plieninger of Germany’s University of Kassel and University of Göttingen. As land continues to be cleared for agriculture, “there’s high pressure … to shift toward more sustainable land use practices.”

    Farmers like Ochoo, who intentionally blend crops, trees and livestock, a practice loosely called agroforestry, offer a more sustainable way forward. Agroforestry may not work in every circumstance, “but it has great potential,” Plieninger says, for working toward food production and conservation goals on the same land.

    In one agroforestry project, cows graze among apple trees in an orchard in Poland. AGFORWARD PROJECT/FLICKR (CC BY-NC-SA 2.0)

    Integrating trees onto farms may seem like a recipe for lower yields, as trees would replace some crops. But such mixing can actually squeeze more food from a given plot of land than when plants are grown separately, Plieninger says. In Europe, blended farms that grow wheat or sunflowers between rows of wild cherry and walnut trees, for example, can produce up to 40 percent more than monocultures of the same crops for a given area.
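    Agronomists commonly quantify this mixing advantage with the land equivalent ratio (LER): the relative land area of separate monocultures needed to match the yields of one mixed plot. An LER above 1 means the mixture uses land more efficiently; "up to 40 percent more" corresponds to an LER of about 1.4. A minimal sketch, with illustrative yields that are assumptions rather than figures from the article:

```python
def land_equivalent_ratio(intercrop_yields, monoculture_yields):
    """LER = sum over crops of (yield in the mixture / yield grown alone).

    Values above 1.0 mean the mixed plot outproduces the same land
    split into separate monocultures.
    """
    return sum(inter / mono
               for inter, mono in zip(intercrop_yields, monoculture_yields))

# Hypothetical example: wheat between walnut rows yields 70% of its
# monoculture level per hectare, and the walnuts also reach 70% of theirs.
ler = land_equivalent_ratio([0.7, 0.7], [1.0, 1.0])
print(f"LER = {ler:.1f}")  # 1.4: you'd need 40% more land to match with monocultures
```

    The per-crop yields each drop in the mixture, yet the combined plot beats monocultures because the two crops use light, water and soil at different depths and times.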

    Agroforestry was the norm until modern agricultural methods swept the globe, especially after the Industrial Revolution and the rise of chemical fertilizers in the mid-20th century. But small farms in the tropics are still big on trees. Worldwide, about 43 percent of land used for agriculture has at least 10 percent tree cover, according to a 2016 study in Scientific Reports.


    Increasing that percentage could have profound and wide-ranging benefits, if done right. “Trees have to be integrated [onto farms] to not create extra problems” for farmers, says Anja Gassner, a senior scientist at World Agroforestry in Bonn, Germany. And the approach looks very different depending on the region and the goals of the people who live there. What Spanish farmers need from their oak-dotted fields where pigs get fat on acorns will be different from what farmers in Ecuador want from their coffee plants growing under the cool shade of tropical inga trees.

    The way agroforestry is carried out in three very different parts of the world illustrates the promises and challenges of coupling trees and crops.

    Made in the shade

    If you’re enjoying a morning cup of coffee while reading this, there’s a chance the beans in that brew came from farms practicing agroforestry.

    Coffee plants evolved in the understory of Ethiopia’s highland forests; they are well-suited to shade, says Eduardo Somarriba, an agroecologist at the Tropical Agricultural Research and Higher Education Center in Cartago, Costa Rica.

    Rows of coffee plants are shaded by trees on this plantation in Ecuador. The trees help prevent the slopes from eroding and can be harvested to supply farmers with extra income. Morley Read/Alamy Stock Photo

    A diverse canopy of native trees can help coffee plants thrive. Certain trees pump nitrogen into the soil, removing the need for intensive fertilizer application, Somarriba says. Native vegetation suppresses weed growth, stabilizes soil and temperature, improves water retention and supports pollinating animals.

    But as global thirst for coffee has grown, planting practices have shifted toward shadeless plots filled only with coffee plants that require a steady stream of chemical fertilizers. From 1996 to 2010, the worldwide share of coffee grown under a canopy of diverse trees fell from 43 percent to 24 percent, researchers reported in 2014 in BioScience.

    Removing trees is seen as good for increasing yields, though the evidence is mixed. This focus on numbers misses the more diffuse benefits of diversifying farms, Somarriba says, especially small farms, which still produce most of the world’s coffee.

    From 1996 to 2010, the worldwide share of coffee grown under a canopy of diverse trees fell from 43 percent to 24 percent.

    “If coffee prices go down and stay low for five or six years, a small farmer will not be able to make it only from [selling] coffee,” Somarriba says. But adding a mix of trees can build in economic and climate resilience, he says.

    Valuable timber trees, like mahogany, can serve as savings accounts, harvested when coffee profits aren’t enough. Mango, Brazil nut or acai trees can supply income, too. But not all places have well-developed markets for these goods, Somarriba says, which presents a challenge to increasing the share of coffee grown under shade.

    Some conservationists are trying to boost consumer demand for shade-grown coffee by highlighting how it benefits biodiversity. The Smithsonian Migratory Bird Center, for example, grants a Bird Friendly certification to plantations with ample native tree cover and diversity, a boon for migratory birds. Certified farmers are able to charge a slightly higher price, on average 5 to 15 cents more per pound.

    Migratory birds flock to such plantations. “When you’re in a bird-friendly coffee farm, it kind of feels like you’re in the forest,” says Ruth Bennett, an ecologist at the Smithsonian Migratory Bird Center in Washington, D.C. “You hear a lot of bird calls, and it’s a huge diversity of birds, including really sexy tropical species like the turquoise-browed motmot,” she says.

    Bird Friendly coffee plantations also appear to be good for mammals. In Mexico, Bird Friendly coffee plantations had more native wildlife, including deer and mice, than other coffee plantations, according to a 2016 study in PLOS ONE.

    Ecosystems brimming with diverse species of plants, animals and more make the planet livable by filtering water, cycling nutrients through soils and pollinating crops. While undeveloped forest is clearly best for biodiversity, shade-grown plantations can outshine other land uses. After more than a decade, high-diversity coffee agroforestry systems in southeastern Brazil were ecologically healthier — as measured by tree canopy cover and species richness — than plots set aside for nonagricultural restoration, researchers reported in the September 2020 Restoration Ecology. About 90 percent of the canopy was intact on shaded coffee plots versus about 60 percent for restored forest areas, on average.

    Beyond the biodiversity benefits, Bennett says shade-grown coffee just tastes better. Under shade, coffee cherries take longer to develop, which can boost sugar content.

    Time to recover

    In the Shinyanga region of Tanzania, a return to traditional Indigenous practices, with a dose of modern agroforestry, helped transform what was once the “desert of Tanzania” back into productive savanna woodlands.

    The region, about a five-hour drive southeast from the Serengeti, is home to the Sukuma people, traditionally agropastoralists who raised livestock in the hilly grasslands of the region, dotted with acacia and oaklike miombo trees.

    But in the 1920s, the landscape began to change. The British colonial government cut back woodlands in a misguided effort to control the tsetse flies that were harming livestock and humans and to plant cash crops like cotton. In the 1960s, forest loss accelerated when the government took ownership of many homesteads. After they lost rights to harvest products from the forest, local Tanzanians had less incentive to conserve the trees.

    Within a few decades, the ecosystem had degraded into dry, dusty expanses largely devoid of trees. Food, firewood and water were scarce and local livelihoods suffered, says Lalisa Duguma, a sustainability scientist at World Agroforestry, an international research agency headquartered in Nairobi, Kenya.

    By the 1980s, the situation had become so dire that the Tanzanian government intervened. At first, it tried to convince local residents to plant seedlings of fast-growing exotic trees, like eucalyptus, Duguma says. But locals weren’t interested in planting or tending those seedlings. In the face of this setback, experts and officials did something not always done in development projects: They listened.

    “By just fencing in degraded land, the process of restoration starts.”

    Lalisa Duguma

    Listening to locals revealed that an age-old tradition of forming ngitilis could be the foundation for restoration. Roughly translated as “enclosure,” a ngitili cordons off a section of land for a year or two, allowing trees and grasses to recover; the land is then opened to provide fodder for grazing animals during the dry season. “By just fencing in degraded land, the process of restoration starts,” Duguma says.

    Native seeds and stumps long stunted by grazing or poor soil conditions can begin to grow again, and their numbers can be supplemented with planted trees. Local institutions largely planned and monitored ngitilis, in accordance with traditional practices, often in collaboration with government scientists.

    Year by year, the benefits of ngitilis slowly accrued, giving shade and fodder to livestock and wood for energy and building. Maturing trees provided fruits and supported beehives for honey production.

    At the start of the restoration in the mid-1980s, there were only 600 hectares of ngitilis in all of the Shinyanga region. After 16 years, more than 300,000 hectares had been restored. The return of trees in the region may have sequestered more than 20 million metric tons of carbon over 16 years (the equivalent of taking 16.7 million cars off the road for a year), according to a 2005 report by the Tanzanian government and the International Union for the Conservation of Nature. Deeper root systems bolstered soil health, and expanded tree cover cut down on wind and water erosion, halting desertification.
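    The report's car equivalence can be roughly reproduced: convert carbon to CO2 by the 44/12 molar mass ratio, then divide by an assumed per-car figure (the 4.4 tonnes of CO2 per car per year below is my assumption, close to commonly cited averages, and not a number from the report):

```python
# Rough check of the car equivalence. The 20 million tonnes of carbon is
# from the 2005 report; the per-car emissions figure is an assumption.
carbon_t = 20e6                 # tonnes of carbon sequestered
co2_t = carbon_t * 44 / 12      # convert C to CO2 by molar mass ratio
co2_per_car_year_t = 4.4        # assumed average car emissions, t CO2/yr

car_years = co2_t / co2_per_car_year_t
print(f"{car_years / 1e6:.1f} million car-years")
```

    With those assumed per-car emissions, the result comes out close to the 16.7 million cars cited in the report.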

    After decades of tree cutting, the landscape of Tanzania’s Shinyanga region dried up. Dr. Otsyina
    In the 1980s, a focus on creating reserves of plant life called ngitilis transformed the landscape. L.A. Duguma/World Agroforestry

    Ngitilis provided benefits equal to $14 per person per month, substantially more than the $8.50 an average person spends in a month in rural Tanzania, the same report noted. Money from communal ngitilis went toward improving housing, Duguma says.

    Biodiversity flourished, too. Ngitilis collectively housed over 150 species of trees, shrubs and other plants. With habitat restored, people in the region began to hear the cries of hyenas at night, a welcome return, Duguma says. At least 10 mammalian species came back, including antelope and rabbits, and 145 bird species were recorded within the ngitilis.

    There’s an enormous need to scale up this kind of community-driven success across Africa, where roughly 60 percent of agricultural lands are degraded, says Susan Chomba, who led the Regreening Africa initiative before becoming director of Vital Landscapes at the World Resources Institute in Nairobi. Regreening Africa, an ambitious 2017 initiative led by World Agroforestry, hopes to reverse land degradation across 1 million hectares of sub-Saharan Africa by 2022 to improve the lives of people in 500,000 households.

    There are many drivers of land degradation, “but the underlying issue is poverty,” Chomba says. If a woman can feed her children only by cutting down a tree to sell firewood, her choice is clear, Chomba says. To offer better options, Regreening Africa hopes to couple agroforestry and sustainable land use practices. The aim is to generate income for local residents while restoring the landscape.

    “If I’m planting a tree that will take years to grow, and I’m not guaranteed ownership of that tree or land, what’s my incentive for investing in it? Restoration efforts must be coupled with ensuring land rights.”

    Susan Chomba

    Central to that goal is close collaboration with local people. Some farmers may want to restore water to a region that used to have streams, or people may want shea trees for making profitable shea butter, Chomba says. Tree-planting schemes that come in with preformed ideas of what a region needs, without engaging and listening to the local community, won’t get far, she says.

    And land use policies are central to resident buy-in, Chomba says. In Africa, “we are coming from a history of colonialization,” she says. As a result, much of the land that’s forested, or could be restored by farmers, is state owned. Because trees are often state property, it is difficult for locals to profit from the sales of fruits and other tree products.

    “If I’m planting a tree that will take years to grow, and I’m not guaranteed ownership of that tree or land, what’s my incentive for investing in it?” Chomba asks. “Restoration efforts must be coupled with ensuring land rights.”

    The U.S. breadbasket

    In the United States, thoughts of agriculture likely conjure images of Iowa’s endless cornfields or massive hog farms. While industrialized monoculture is the norm among big players, small-scale farmers are more able to incorporate trees into their fields, or bring crops into the forests.

    According to the U.S. Department of Agriculture’s 2017 Census of Agriculture, of the approximately 2 million farms in the United States, only 1.5 percent report practicing some form of agroforestry. This percentage is likely an underestimate, but experts say it reveals how much room there is to grow.

    Agroforestry practices vary across the United States. In the Midwest, trees serve as windbreaks for crops and line creeks to minimize fertilizer runoff. In cattle country, ranchers plant honey locust trees in their pastures to provide shade during the summer and nutrient-rich pods that feed animals. Forest farming, where nontimber crops such as wild mushrooms or ginseng are grown within a managed or wild forest, is becoming more popular across the eastern states.

    Agroforestry is all about breaking down the wall between agricultural lands and woodlands and blending them together, says John Munsell, a forest management researcher at Virginia Tech in Blacksburg. “It’s a way of thinking creatively across a landscape,” he says. Often, small-scale farmers are more game for trying.

    Anna Plattner and Justin Wexler practice forest farming, growing shiitake mushrooms on logs in wooded areas and collecting wild golden oyster mushrooms (shown) to sell at farmers markets and to local restaurants. Courtesy of Wild Hudson Valley

    Anna Plattner and Justin Wexler have had to get creative to support their farm in New York’s Hudson Valley. The 38-hectare farm grows heirloom plants used by the Mohican and Munsee peoples indigenous to the region. The farm also incorporates traditional agroforestry methods, Wexler says. Rows of pawpaw and persimmon trees are staggered between native varieties of corn, beans and squash. The farm also grows more obscure foods, including hopniss, a legume that was a staple for some Native American tribes before Europeans arrived.

    Wexler says he hopes that focusing on foods of Indigenous peoples can help others learn about the history and culture of the area. Demand for these unfamiliar crops isn’t high, so in addition to selling to wholesalers and restaurants, this year, Plattner and Wexler debuted monthly “wild harvest boxes” — a sort of local Blue Apron for native produce. The boxes come stuffed with snippets of history about the foods and recipe ideas. “Every plant has its own story to tell,” Plattner says.

    Small farms may be more willing to embrace agroforestry, but to meet the looming challenges of climate change and biodiversity loss, large farms need to as well.

    In the United States, “there is huge potential to scale up agroforestry,” says agroecologist Sarah Lovell, director of the Center for Agroforestry at the University of Missouri in Columbia.

    For Lovell, step one involves identifying marginal areas on farms where trees could be planted with minimal disruption to the status quo, such as along creeks. Putting trees around waterways can reduce flooding and erosion, improve water quality and house wildlife, Lovell says. In the “true breadbasket of the Midwest,” she estimates, only 2 to 5 percent of such areas are currently making use of trees.

    Eventually, she says she would like to see a drastic scaling up of alley cropping, with lines of fruit or nut trees fully integrated into fields. The need to move fruit and nut production east, away from increasingly drought-stricken California, may provide an extra push for bringing more trees onto monoculture farms, Lovell says.

    But corn and soybean fields dominate much of U.S. agricultural land. These lucrative crops serve as raw materials for everything from biodiesel to high fructose corn syrup. To convince farmers to replace some of those crops with trees, the fruits of those trees will have to become more mainstream. The Savanna Institute, an agroforestry nonprofit in Madison, Wis., is focused on expanding the market for chestnuts and hazelnuts.

    “We call them corn and soybean on trees,” says Savanna Institute ecologist Kevin Wolz. Chestnuts are about 90 percent starch, like corn; hazelnuts are 75 percent oil and protein, like soybeans, Wolz says. Researchers at the institute are working out just how these tree products could replace corn and soy as raw materials in production pipelines, with rows of nut trees breaking up monoculture fields. “We think these could be the next commodity crops that the Midwest can produce,” Wolz says.

    Whether we’ll be drinking soda sweetened with chestnut syrup anytime soon remains to be seen. But to transform agriculture from a climate change problem to a solution, Wolz says such bold and imaginative thinking is essential.

    Agroforestry isn’t a silver bullet for addressing climate change, the biodiversity crisis or food insecurity, Wolz says. But when applied with place and people in mind, he says it can be a Swiss Army knife.

    in Science News on July 14, 2021 10:00 AM.


    Researcher committed misconduct while at NIH, say institutes — but is allowed to publish a revised version of a paper

    An investigation by the National Institutes of Health has led to the retraction of a 2016 paper in PLOS Biology for manipulation of the data in the article. But the journal has republished a revised version of the paper — minus the bad data — on which the researcher found to have committed the misconduct … Continue reading Researcher committed misconduct while at NIH, say institutes — but is allowed to publish a revised version of a paper

    in Retraction watch on July 14, 2021 10:00 AM.


    Clustering Functional Data into Groups by Using Projections

    This week on Journal Club session Yi Sun will talk about a paper "Clustering Functional Data into Groups by Using Projections".

    We show that, in the functional data context, by appropriately exploiting the functional nature of the data, it is possible to cluster the observations asymptotically perfectly. We demonstrate that this level of performance can sometimes be achieved by the k-means algorithm as long as the data are projected on a carefully chosen finite dimensional space. In general, the notion of an ideal cluster is not clearly defined. We derive our results in the setting where the data come from two populations whose distributions differ at least in terms of means, and where an ideal cluster corresponds to one of these two populations. We propose an iterative algorithm to choose the projection functions in a way that optimizes clustering performance, where, to avoid peculiar solutions, we use a weighted least squares criterion. We apply our iterative clustering procedure on simulated and real data, where we show that it works well.
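    The abstract's core idea, project the curves onto a well-chosen low-dimensional space and then run k-means on the coefficients, can be illustrated with a toy example (a minimal sketch, not the authors' weighted least squares procedure; the basis choice, noise level and initialization here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two populations of noisy curves whose mean functions differ.
t = np.linspace(0, 1, 100)
n_per_group = 50
group_a = np.sin(2 * np.pi * t) + rng.normal(0, 0.3, (n_per_group, t.size))
group_b = np.cos(2 * np.pi * t) + rng.normal(0, 0.3, (n_per_group, t.size))
curves = np.vstack([group_a, group_b])
labels = np.array([0] * n_per_group + [1] * n_per_group)

# Project each curve onto two basis functions: one coefficient pair
# per curve replaces the 100-point functional observation.
basis = np.stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
coeffs = curves @ basis.T / t.size          # shape: (100 curves, 2)

# Bare-bones k-means on the projected coefficients, initialized with
# one point from each end of the data set.
centers = np.stack([coeffs[0], coeffs[-1]])
for _ in range(20):
    assign = np.argmin(((coeffs[:, None] - centers) ** 2).sum(-1), axis=1)
    centers = np.array([coeffs[assign == k].mean(axis=0) for k in range(2)])

# Cluster labels are arbitrary, so score against both orientations.
accuracy = max((assign == labels).mean(), (assign != labels).mean())
print(f"clustering accuracy: {accuracy:.2f}")
```

    Because the two mean functions separate cleanly in the projected space while the projection averages away most of the pointwise noise, the toy clustering is essentially perfect, which is the intuition behind the paper's asymptotic result.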


    Date: 2021/07/16
    Time: 14:00
    Location: online

    in UH Biocomputation group on July 14, 2021 09:51 AM.


    2-Minute Neuroscience: Bell's Palsy

    Bell’s palsy is a disorder characterized by facial weakness or paralysis, typically on one side of the face. It results from the dysfunction of cranial nerve VII, the facial nerve, but the cause of the facial nerve dysfunction is unknown. In this video, I discuss the symptoms, possible causes, and prognosis for Bell’s palsy.

    in Neuroscientifically Challenged on July 14, 2021 09:33 AM.


    Student, Meet Bus

    What led to retraction of the Sensei RNA paper by Arati Ramesh in Bangalore: the "factually inaccurate, anonymous, and unverified" version, which "quite frankly, can be termed slander". And a guest post by "Paul Jones" at the end!

    in For Better Science on July 14, 2021 05:00 AM.


    Staying curious is key to thriving in the lab

    Iris de Jong, MD — a PhD candidate and Fulbright scholar — explains why experiencing a wide range of working environments is so beneficial for researchers

    in Elsevier Connect on July 14, 2021 12:00 AM.


    Froghoppers are the super-suckers of the animal world

    To tap an unlikely source of nutrition, insects small enough to sit on a pencil eraser have to suck harder than any known creature.

    Philaenus spumarius froghoppers pierce plants with their mouthparts to feed solely on xylem sap, a fluid made mostly of water that moves through plants’ internal plumbing. Not only is the substance largely bereft of nutrients, but it’s also under negative pressures, akin to a vacuum. Sucking the sap requires suction power equivalent to a person drinking water from a 100-meter-long straw.
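    The 100-meter-straw comparison maps onto a standard bit of physics: the suction needed to hold up a column of water is the hydrostatic pressure P = ρgh. A quick sketch (the 100-meter figure is from the article; density and gravity are standard constants):

```python
# Hydrostatic pressure of a 100 m water column: P = rho * g * h.
rho = 1000.0   # density of water, kg/m^3
g = 9.81       # gravitational acceleration, m/s^2
h = 100.0      # straw length from the article, m

pressure_pa = rho * g * h
pressure_mpa = pressure_pa / 1e6
print(f"Suction required: {pressure_mpa:.2f} MPa")
```

    The result is just under one megapascal, which is why the roughly one-megapascal xylem tensions discussed below correspond to a 100-meter straw.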

    Such a feat seemed so unlikely for the tiny insects that some scientists questioned whether xylem sap truly could be under such negative pressures. But both biomechanical and metabolic evidence suggests that froghoppers can produce negative pressures greater than one megapascal, researchers report July 14 in Proceedings of the Royal Society B.

    “It’s incredibly impressive. [The scientists] used a range of techniques to tackle a long-standing problem,” says Jake Socha, a biomechanist at Virginia Tech in Blacksburg who wasn’t involved in the work. “These insects are really well-adapted for generating” extreme negative pressures.

    The problem is long-standing because measuring negative pressures is tricky. Within xylem, sap is pulled like a string, caught in a tug-of-war between spongy soil and airy leaves. Piercing the plant with pressure probes can easily break that internal tension, so scientists typically use a more indirect method. By cutting off part of a plant and sticking the leafy end in a pressure chamber with the stem sticking out, researchers can turn up the pressure exerted on the outside of the plant until it just exceeds the plant’s internal pressure and xylem sap oozes from the stem. This strategy suggests that the negative pressures of xylem sap can exceed one megapascal.

    That tiny froghoppers and other insects feed on xylem sap has stoked skepticism about these measurements, says Philip Matthews, a comparative physiologist at the University of British Columbia in Vancouver. Elephants, for example, only generate 0.02 megapascals of negative pressure when they suck large quantities of water through their trunks (SN: 6/3/21), paltry compared with froghoppers.

    Some scientists think “it’s just too energetically expensive to extract this stuff, that [xylem pressures] can’t be that negative,” he says. “It has to be easy to extract if [froghoppers are] going to be surviving on something so dilute.”

    Skeptical of the skeptics, Matthews and colleagues sought to measure froghoppers’ sucking abilities through two approaches, one biomechanical and one metabolic. Froghoppers produce suction power with a pumplike structure in their heads, where muscles pull on a membrane to generate negative pressures, akin to a piston. Using micro-CT scans of four insects, the researchers measured the length and strength capacity of these structures, and then calculated the insects’ sucking potential using the simple physical formula of pressure equals force divided by area. In principle, the team found that froghoppers can produce negative pressures from 1.06 to 1.57 megapascals.
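    The article gives only the formula (pressure equals force divided by area) and the resulting range; the pump-muscle force and membrane area below are hypothetical stand-ins chosen to land inside the reported 1.06 to 1.57 megapascal range, not the study's measured values:

```python
# Sketch of the anatomy-based pressure estimate: P = F / A.
# Both inputs are hypothetical stand-ins, not measurements from the study.
muscle_force_n = 0.65          # hypothetical pump-muscle force, newtons
membrane_area_m2 = 0.5e-6      # hypothetical membrane area, m^2 (0.5 mm^2)

pressure_mpa = (muscle_force_n / membrane_area_m2) / 1e6
print(f"Estimated negative pressure: {pressure_mpa:.2f} MPa")
```

    The point of the calculation is that a modest force acting on a very small membrane yields a very large pressure, which is how a pencil-eraser-sized insect can outpull an elephant's trunk.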

    “Clearly they can generate these tensions, so they must be feeding at xylem tensions around this level,” Matthews says. “You wouldn’t evolve such a massive capacity unless you were using it.”

    The team validated this more abstract estimate by calculating how much energy froghoppers expend while sucking on bean, pea or alfalfa plants. That energy should be proportional to the pressures that the insects have to overcome in plants. By placing feeding froghoppers in chambers that measure expelled carbon dioxide, the researchers could calculate the insects’ metabolic rate. The team also used cameras to track how much liquid the bugs excreted.

    Once froghoppers started sucking, their metabolic rate spiked by 50 to 85 percent from resting rates, and the insects were excreting more than when at rest, the researchers found. The effort is “like running a marathon,” Matthews says. “They move a tremendous amount of fluid…. If a bug was human-sized, they’d be peeing 4 liters of liquid a minute.” 

    Even though xylem sap is mostly water, there are enough nutrients to power froghoppers’ outsize ability, the researchers estimate. “They’re getting a net-energy gain,” says study coauthor Elisabeth Bergman, a comparative physiologist also at the University of British Columbia.

    Bergman and colleagues suspect that the suction power of froghoppers and other xylem sap specialists may be unmatched among animals. There simply aren’t other contexts where food is locked away under such high negative pressures, Bergman says. “These little bugs are just awesome sucking machines.”

    in Science News on July 13, 2021 11:01 PM.


    Hurricanes may not be becoming more frequent, but they’re still more dangerous

    Climate change is helping Atlantic hurricanes pack more of a punch, making them rainier, intensifying them faster and helping the storms linger longer even after landfall. But a new statistical analysis of historical records and satellite data suggests that there aren’t actually more Atlantic hurricanes now than there were roughly 150 years ago, researchers report July 13 in Nature Communications.

    The record-breaking number of Atlantic hurricanes in 2020, a whopping 30 named storms, led to intense speculation over whether and how climate change was involved (SN: 12/21/20). It’s a question that scientists continue to grapple with, says Gabriel Vecchi, a climate scientist at Princeton University. “What is the impact of global warming — past impact and also our future impact — on the number and intensity of hurricanes and tropical storms?”

    Satellite records over the last 30 years allow us to say “with little ambiguity how many hurricanes, and how many major hurricanes [Category 3 and above] there were each year,” Vecchi says. Those data clearly show that the number, intensity and speed of intensification of hurricanes have increased over that time span.

    But “there are a lot of things that have happened over the last 30 years” that can influence that trend, he adds. “Global warming is one of them.” Decreasing aerosol pollution is another (SN: 11/21/19). The amount of soot and sulfate particles and dust over the Atlantic Ocean was much higher in the mid-20th century than now; by blocking and scattering sunlight, those particles temporarily cooled the planet enough to counteract greenhouse gas warming. That cooling is also thought to have helped temporarily suppress hurricane activity in the Atlantic.  

    To get a longer-term perspective on trends in Atlantic storms, Vecchi and colleagues examined a dataset of hurricane observations from the U.S. National Oceanic and Atmospheric Administration that stretches from 1851 to 2019. It includes old-school observations by unlucky souls who directly observed the tempests as well as remote sensing data from the modern satellite era.

    How to directly compare those different types of observations to get an accurate trend was a challenge. Satellites, for example, can see every storm, but earlier observations will count only the storms that people directly experienced. So the researchers took a probabilistic approach to fill in likely gaps in the older record, assuming, for example, that modern storm tracks are representative of pre-satellite storm tracks to account for storms that would have stayed out at sea, unseen. The team found no clear increase in the number of storms in the Atlantic over that 168-year time frame. One possible reason for this, the researchers say, is a rebound from the aerosol pollution–induced lull in storms that may be obscuring some of the greenhouse gas signal in the data.
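    A toy version of that gap-filling logic (an illustration only, not the paper's actual statistical model; both numbers below are made up): if only a fraction p of storms in the pre-satellite era crossed paths with observers, the expected true count is the observed count divided by p.

```python
# Toy detection-probability adjustment for an undercounted record.
# Both values are hypothetical, chosen only to illustrate the idea.
observed_storms = 8        # hypothetical storms logged in one early year
detection_prob = 0.8       # hypothetical chance a given storm was observed

estimated_true = observed_storms / detection_prob
print(f"estimated true count: {estimated_true:.0f}")
```

    Estimating that detection probability is the hard part; using modern satellite-era storm tracks to infer how many earlier storms would have stayed out of shipping lanes is one way to anchor it.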

    More surprisingly — even to Vecchi, he says — the data also seem to show no significant increase in hurricane intensity over that time. That’s despite “scientific consistency between theories and models indicating that the typical intensity of hurricanes is more likely to increase as the planet warms,” Vecchi says. But this conclusion is heavily caveated — and the study also doesn’t provide evidence against the hypothesis that global warming “has acted and will act to intensify hurricane activity,” he adds.

    Climate scientists were already familiar with the possibility that storm frequency might not have increased much in the last 150 or so years — or over much longer timescales. The link between number of storms and warming has long been uncertain, as the changing climate also produces complex shifts in atmospheric patterns that could take the hurricane trend in either direction. The Intergovernmental Panel on Climate Change noted in a 2012 report that there is “low confidence” that tropical cyclone activity has increased in the long term.

    Geologic evidence of Atlantic storm frequency, which can go back over 1,000 years, also suggests that hurricane frequency does tend to wax and wane every few decades, says Elizabeth Wallace, a paleotempestologist at Rice University in Houston (SN: 10/22/17).

    Wallace hunts for hurricane records in deep underwater caverns called blue holes: As a storm passes over an island beach or the barely submerged shallows, winds and waves pick up sand that then can get dumped into these caverns, forming telltale sediment deposits. Her data, she says, also suggest that “the past 150 years hasn’t been exceptional [in storm frequency], compared to the past.”

    But, Wallace notes, these deposits don’t reveal anything about whether climate change is producing more intense hurricanes. And modern observational data on changes in hurricane intensity is muddled by its own uncertainties, particularly the fact that the satellite record just isn’t that long. Still, “I liked that the study says it doesn’t necessarily provide evidence against the hypothesis” that higher sea-surface temperatures would increase hurricane intensity by adding more energy to the storm, she says.

    Kerry Emanuel, an atmospheric scientist at MIT, says the idea that storm numbers haven’t increased isn’t surprising, given the longstanding uncertainty over how global warming might alter that. But “one reservation I have about the new paper is the implication that no significant trends in Atlantic hurricane metrics [going back to 1851] implies no effect of global warming on these storms,” he says. Looking for such a long-term trend isn’t actually that meaningful, he says, as scientists wouldn’t expect to see any global warming-related hurricane trends become apparent until about the 1970s anyway, as warming has ramped up.

    Regardless of whether there are more of these storms, there’s no question that modern hurricanes have become more deadly in many ways, Vecchi says. There’s evidence that global warming has already been increasing the amount of rain from some storms, such as Hurricane Harvey in 2017, which led to widespread, devastating flooding (SN: 9/28/18). And, Vecchi says, “sea level will rise over the coming century … so [increasing] storm surge is one big hazard from hurricanes.”

    in Science News on July 13, 2021 03:02 PM.


    ‘The Joy of Sweat’ will help you make peace with perspiration


    The Joy of Sweat
    Sarah Everts
    W.W. Norton & Co., $26.95

    The telltale darkened patches under our arms before a presentation. The cold slide of a clammy handshake. Sweat reveals what we often want to hide: our nervousness, fears and exertions, all with the slight odor of what we last ate.

    But maybe it’s time to find “serenity instead of shame” in sweat, argues science journalist Sarah Everts. Through her delightful book, The Joy of Sweat, Everts delivers what she calls a “perspiration pep talk” that drips with science and history.

    Everts’ plunge into sweat is full of energy, and her open curiosity about our much-maligned bodily secretion leaks onto every page. Temperature regulation through sweat, she notes, is a trait few species can boast. Every drop tells the tale of our evolution — our ability to keep our cool has literally kept us alive and thriving.

    The book offers plenty of fascinating facts: Traces of drugs and diseases appear in our perspiration. Tiny drops of sweat create the fingerprint smudges used to identify us. Sweat may even hold clues about the nutritional content of what we eat.

    While sweat “keeps us honest,” Everts writes, it also raises questions. For instance, how long until companies start mining the potential data dripping off people’s foreheads? Forget the smell of stinky feet — we may soon have to worry about the privacy implications of sweating in public.

    But Everts is never too serious. She gamely gets her armpits professionally sniffed, and she joins naked, sweating audiences for sauna theater. She even goes smell-dating, working up a sweat in a crowd so potential mates could sniff for love — or at least, attraction.

    These stories amuse, but a more profound point lingers. People collectively spend billions of dollars each year deodorizing, wicking sweat away and pretending with all their might that it doesn’t exist. The Joy of Sweat shows how this demand was created by deodorant and antiperspirant makers who sold sweat as a problem in the first place. The clear advertising spin will make readers reflect on how much of our hygiene habits are the result of manufactured humiliation. By highlighting history, Everts shows that any perceived problems of sweat are most often cultural, not biological. Sweat simply is “a body trying its best to do its thing,” she writes. And if we let that message seep into our minds (and out our armpits), we too can revel in the joy of sweat.

    Buy The Joy of Sweat from Bookshop.org. Science News is a Bookshop.org affiliate and will earn a commission on purchases made from links in this article.

    in Science News on July 13, 2021 01:00 PM.


    ‘In hindsight the mistake was quite stupid’: Authors retract paper on stroke

    File this under “doing the right thing:” A group of stroke researchers in Germany have retracted a paper they published earlier this year after finding an error in their work shortly after publication that doomed the findings.  Julian Klingbeil, of the Department of Neurology at the University of Leipzig Medical Center, and his colleagues had … Continue reading ‘In hindsight the mistake was quite stupid’: Authors retract paper on stroke

    in Retraction watch on July 13, 2021 11:50 AM.


    The first step in using trees to slow climate change: Protect the trees we have

    Between a death and a burial was hardly the best time to show up in a remote village in Madagascar to make a pitch for forest protection. Bad timing, however, turned out to be the easy problem.

    This forest was the first one that botanist Armand Randrianasolo had tried to protect. He’s the first native of Madagascar to become a Ph.D. taxonomist at Missouri Botanical Garden, or MBG, in St. Louis. So he was picked to join a 2002 scouting trip to choose a conservation site.

    Other groups had already come into the country and protected swaths of green, focusing on “big forests; big, big, big!” Randrianasolo says. Preferably forests with lots of big-eyed, fluffy lemurs to tug heartstrings elsewhere in the world.

    The Missouri group, however, planned to go small and to focus on the island’s plants, legendary among botanists but less likely to be loved as a stuffed cuddly. The team zeroed in on fragments of humid forest that thrive on sand along the eastern coast. “Nobody was working on it,” he says.

    As the people of the Agnalazaha forest were mourning a member of their close-knit community, Randrianasolo decided to pay his respects: “I wanted to show that I’m still Malagasy,” he says. He had grown up in a seaside community to the north.

    The village was filling up with visiting relatives and acquaintances, a great chance to talk with many people in the region. The deputy mayor conceded that after a morning visit to the bereaved, Randrianasolo and MBG’s Chris Birkinshaw could speak in the afternoon with anyone wishing to gather at the roofed marketplace.

    1. A man holding a Treculia fruit
    2. Pink planthoppers clustered on a branch
    3. A mouse lemur grabbing a branch
    Courtesy of the staff of the Missouri Botanical Garden, St. Louis and Madagascar

    Conserving natural forests is a double win for trapping carbon and saving rich biodiversity. Forests matter to humans (shown here with a Treculia fruit), Phromnia planthoppers and mouse lemurs.

    The two scientists didn’t get the reception they’d hoped for. Their pitch to help the villagers conserve their forest while still serving people’s needs met protests from the crowd: “You’re lying!”

    The community was still upset about a different forest that outside conservationists had protected. The villagers had assumed they would still be able to take trees for lumber, harvest their medicinal plants or sell other bits from the forest during cash emergencies. They were wrong. That place was now off-limits. People caught doing any of the normal things a forest community does would be considered poachers. When MBG proposed conserving yet more land, residents weren’t about to get tricked again. “This is the only forest we have left,” they told the scientists.

    Finding some way out of such clashes to save existing forests has become crucial for fighting climate change. Between 2001 and 2019, the planet’s forests trapped an estimated 7.6 billion metric tons of carbon dioxide a year, an international team reported in Nature Climate Change in March. That rough accounting suggests trees may capture about one and a half times the annual emissions of the United States, one of the largest global emitters.
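    As a rough back-of-envelope check of that comparison (a sketch, not from the article: the U.S. emissions figure of roughly 5 billion metric tons of CO2 per year is an assumed round number):

    ```python
    # Back-of-envelope check of the forest carbon comparison.
    # Assumption (not from the article): annual U.S. CO2 emissions of
    # roughly 5 billion metric tons.
    forest_uptake_tons = 7.6e9   # CO2 trapped by forests per year (Nature Climate Change)
    us_emissions_tons = 5.0e9    # assumed U.S. annual CO2 emissions

    ratio = forest_uptake_tons / us_emissions_tons
    print(f"Forests trap about {ratio:.1f}x annual U.S. emissions")  # about 1.5x
    ```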

    Planting trees by the millions and trillions is basking in round-the-world enthusiasm right now. Yet saving the forests we already have ranks higher in priority and in payoff, say a variety of scientists.

    How to preserve forests may be a harder question than why. Success takes strong legal protections with full government support. It also takes a village, literally. A forest’s most intimate neighbors must wholeheartedly want it saved, one generation after another. That theme repeats in places as different as rural Madagascar and suburban New Jersey.

    Overlooked and underprotected

    First a word about trees themselves. Of course, trees capture carbon and fight climate change. But trees are much more than useful wooden objects that happen to be leafy, self-manufacturing and great shade for picnics.

    “Plant blindness,” as it has been called, reduces trees and other photosynthetic organisms to background, lamented botanist Sandra Knapp in a 2019 article in the journal Plants, People, Planet. For instance, show people a picture with a squirrel in a forest. They’ll likely say something like “cute squirrel.” Not “nice-size beech tree, and is that a young black oak with a cute squirrel on it?”

    This tunnel vision also excludes invertebrates, argues Knapp, of the Natural History Museum in London, complicating efforts to save nature. These half-seen forests, natural plus human-planted, now cover close to a third of the planet’s land, according to the 2020 version of The State of the World’s Forests report from the United Nations’ Food and Agriculture Organization. Yet a calculation based on the report’s numbers says that over the last 10 years, net tree cover vanished at an average rate of about 12,990 hectares — a bit more than the area of San Francisco — every day.

    This is an improvement over the previous decades, the report notes. In the 1990s, deforestation, on average, destroyed about 1.75 San Francisco equivalents of forest every day.
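    The per-day figures can be reproduced from annual totals (a sketch; the net-loss figures of roughly 4.7 million hectares per year for the 2010s and 7.8 million for the 1990s, and San Francisco’s land area of about 12,140 hectares, are assumptions consistent with the rates quoted above, not numbers given in the article):

    ```python
    # Converting annual net forest loss into "San Francisco equivalents" per day.
    # Assumptions (not from the article): FAO-style net-loss figures of roughly
    # 4.7 million ha/year for the 2010s and 7.8 million ha/year for the 1990s,
    # and a San Francisco land area of about 12,140 hectares.
    SF_AREA_HA = 12_140

    def sf_equivalents_per_day(annual_loss_ha: float) -> float:
        """Daily net forest loss expressed in San Francisco land areas."""
        return annual_loss_ha / 365 / SF_AREA_HA

    print(f"2010s: {sf_equivalents_per_day(4.7e6):.2f} SF per day")  # ~1.06, "a bit more" than one SF
    print(f"1990s: {sf_equivalents_per_day(7.8e6):.2f} SF per day")  # ~1.76, close to the quoted 1.75
    ```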

    Branches of a Dracaena cinnabari dragon’s blood tree from Yemen ooze red sap and repeatedly bifurcate in even Y-splits. BORIS KHVOSTICHENKO/WIKIMEDIA COMMONS (CC BY-SA 4.0)

    Trees were the planet’s skyscrapers, many rising to great heights, hundreds of millions of years before humans began piling stone upon stone to build their own. Trees reach their stature by growing and then killing their innermost core of tissue. The still-living outer rim of the tree uses its ever-increasing inner ghost architecture as plumbing pipes that can function for several human lifetimes. And tree sex lives, oh my. Plants invented “steamy but not touchy” long before the Victorian novel — much flowering, perfuming and maybe green yearning, all without direct contact of reproductive organs. Just a dusting of pollen wafted on a breeze or delivered by a bee.

    To achieve the all-important goal of cutting global emissions, saving the natural forests already in the ground must be a priority, 14 scientists from around the world wrote in the April Global Change Biology. “Protect existing forests first,” coauthor Kate Hardwick of Kew Gardens in London said during a virtual conference on reforestation in February. That priority also gives the planet’s magnificent biodiversity a better chance at surviving.

    Trees store a lot of carbon as they race toward the sky. And size and age matter because trees add carbon over so much of their architecture, says ecologist David Mildrexler with Eastern Oregon Legacy Lands at the Wallowology Natural History Discovery Center in Joseph. Trees don’t just start new growth at twigs tipped with unfurling baby leaves. Inside the branches, the trunk and big roots, an actively growing sheath surrounds the inner ghost plumbing. Each season, this whole sheath adds a layer of carbon-capturing tissue from root to crown.

    “Imagine you’re standing in front of a really big tree — one that’s so big you can’t even wrap your arms all the way around, and you look up the trunk,” Mildrexler says. Compare that sky-touching vision to the area covered in a year’s growth of some sapling, maybe three fingers thick and human height. “The difference is, of course, just huge,” he says.

    Big trees may not be common, but they make an outsize difference in trapping carbon, Mildrexler and colleagues have found. In six Pacific Northwest national forests, only about 3 percent of all the trees in the study, including ponderosa pines, western larches and three other major species, reached full-arm-hug size (at least 53.3 centimeters in diameter). Yet this 3 percent of trees stored 42 percent of the aboveground carbon there, the team reported in 2020 in Frontiers in Forests and Global Change. An earlier study, with 48 sites worldwide and more than 5 million tree trunks, found that the largest 1 percent of trees store about 50 percent of the aboveground carbon-filled biomass.
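    The disproportion in those Pacific Northwest numbers is easy to quantify (a sketch using only the percentages quoted above; the "per-tree" comparison assumes the shares can be averaged within each group):

    ```python
    # How lopsided is carbon storage between big and small trees?
    # Using only the percentages quoted above: 3% of trees hold 42% of the
    # aboveground carbon, so the other 97% of trees hold the remaining 58%.
    big_trees, big_carbon = 0.03, 0.42
    small_trees, small_carbon = 0.97, 0.58

    carbon_per_big = big_carbon / big_trees        # carbon "share" per big tree
    carbon_per_small = small_carbon / small_trees  # carbon "share" per smaller tree
    print(f"An average big tree stores ~{carbon_per_big / carbon_per_small:.0f}x "
          "as much carbon as an average smaller tree")  # ~23x
    ```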

    Plant paradise

    The island nation of Madagascar was an irresistible place for the Missouri Botanical Garden to start trying to conserve forests. Off the east coast of Africa, the island stretches more than the distance from Savannah, Ga., to Toronto, and holds more than 12,000 named species of trees, other flowering plants and ferns. Madagascar “is absolute nirvana,” says MBG botanist James S. Miller, who has spent decades exploring the island’s flora.

    The Ravenala traveler’s tree is widely grown, but native only to Madagascar. CEPHOTO, UWE ARANAS/WIKIMEDIA COMMONS (CC BY-SA 3.0)

    Just consider the rarities. Of the eight known species of baobab trees, which raise a fat trunk to a cartoonishly spindly tuft of little branches on top, six are native to Madagascar. Miller considers some 90 percent of the island’s plants to be natives unique to the country. “It wrecks you” for botanizing elsewhere, Miller says.

    He was rooting for his MBG colleagues Randrianasolo and Birkinshaw in their foray to Madagascar’s Agnalazaha forest. Several months after getting roasted as liars by residents, the two got word that the skeptics had decided to give protection a chance after all.

    The Agnalazaha residents wanted to make sure, however, that the Missouri group realized the solemnity of their promise. Randrianasolo had to return to the island for a ceremony of calling the ancestors as witnesses to the new partnership and marking the occasion with the sacrifice of a cow. A pact with generations of deceased residents may be an unusual form of legal involvement, but it carried weight. Randrianasolo bought the cow.

    Randrianasolo looked for ways to be helpful. MBG worked on improving the village’s rice yields, and supplied starter batches of vegetable seeds for expanding home gardens. The MBG staff helped the forest residents apply for conservation funds from the Malagasy government. A new tree nursery gave villagers an alternative to cutting timber in the forest. The nursery also meant some jobs for local people, which further improved relationships.

    Trying to build trust with people living near southeastern Madagascar’s coast was the first task the Missouri Botanical Garden faced in efforts to conserve the Agnalazaha forest. Courtesy of the staff of the Missouri Botanical Garden, St. Louis and Madagascar

    The MBG staff now works with Malagasy communities to preserve forests at 11 sites dotted in various ecosystems in Madagascar. Says Randrianasolo: “You have to be patient.”

    Today, 19 years after his first visit among the mourners, Agnalazaha still stands.

    Saving forests is not a simple matter of just meeting basic needs of people living nearby, says political scientist Nadia Rabesahala Horning of Middlebury College in Vermont, who published The Politics of Deforestation in Africa in 2018. Her Ph.D. work, starting in the late 1990s, took her to four remote forests in her native Madagascar. The villagers around each forest followed different rules for harvesting timber, finding places to graze livestock and collecting medicinal plants.

    Three of the forests shrank, two of them rapidly, over the decade. One, called Analavelona, however, barely showed any change in the aerial views Horning used to look for fraying forests.

    Near Madagascar’s Analavelona sacred forest, taxonomist Armand Randrianasolo (blue cap) joins (from left) Miandry Fagnarena, Rehary, and Tefy Andriamihajarivo to collect a surprising new species in the mango family (green leaves at front of image). The species, Spondias tefyi, named for Tefy and his efforts to protect the island’s biodiversity, is the first wild relative of the popular hog plum found outside of South America or Asia. Courtesy of the staff of the Missouri Botanical Garden, St. Louis and Madagascar

    The people living around Analavelona revered it as a sacred place where their ancestors dwelled. Living villagers made offerings before entering, and cut only one kind of tree, which they used for coffins.

    Since then, Horning’s research in Tanzania and Uganda has convinced her that forest conservation can happen only under very specific conditions, she says. The local community must be able to trust that the government won’t let some commercial interest or a political heavyweight slip through loopholes to exploit a forest that its everyday neighbors can’t touch. And local people must be able to meet their own needs too, including the spiritual ones.

    A different kind of essential

    Tied with yarn to nearly 3,000 trees in a Maryland forest, tags displayed the names of the people lost on 9/11. The memorial, organized by ecologist Joan Maloof, who runs the Old-Growth Forest Network, helped protect a patch of woods where people could go for solace and meditation. Friends of the Forest, Salisbury

    Another constellation of old forests, on the other side of the world, sports some less-than-obvious similarities. Ecologist Joan Maloof launched the Old-Growth Forest Network in 2011 to encourage people to save the remaining scraps of U.S. old-growth forests. Her bold idea: to permanently protect one patch of old forest in each of the more than 2,000 counties in the United States where forests can grow.

    She calls for strong legal measures, such as conservation easements that prevent logging, but also recognizes the need to convey the emotional power of communing with nature. One of the early green spots she and colleagues campaigned for was not old growth, but it had become one of the few left unlogged where she lived on Maryland’s Eastern Shore.

    She heard about Buddhist monks in Thailand who had ordained trees as monks; because loggers revered the monks, the trees were protected. A month after the 9/11 terrorist attacks, she was inspired to turn the Maryland forest into a place to remember the victims. By putting each victim’s name on a metal tag and tying it to a tree, she and other volunteers created a memorial with close to 3,000 trees. The local planning commission, she suspected, would feel awkward about approving timber cutting from that particular stand. She wasn’t party to their private deliberations, but the forest still stands.

    In 1973, high school freshman Doug Hefty wrote more than 80 pages about the value of Saddler’s Woods in Haddon Township, N.J. His typed report, with its handmade cover, played a dramatic role in saving the forest. Saddler’s Woods Conservation Association

    As of Earth Day 2021, the network had about 125 forests around the country that should stay forests in perpetuity. Their stories vary widely, but are full of local history and political maneuvering.

    In southern New Jersey, Joshua Saddler, an escaped enslaved man from Maryland, acquired part of a small forest in the mid-1880s and bequeathed it to his wife with the stipulation that it not be logged. His section was logged anyway, and the rest of the original old forest was about to meet the same fate. In 1973, high school student Doug Hefty wrote more than 80 pages on the forest’s value — and delivered it to the developer. In this case, life supplied a genuine Hollywood ending: the developer relented and scaled back the project, stopping across the street from the woods.

    In 1999, however, developers once again eyed the forest, says Janet Goehner-Jacobs, who heads the Saddler’s Woods Conservation Association. It took four years, but now, she and the forest’s other fans have a conservation easement forbidding commercial development or logging, giving the next generation better tools to protect the forest.

    Goehner-Jacobs had just moved to the area and fallen in love with that 10-hectare patch of green in the midst of apartment buildings and strip malls. When she first happened upon the forest and found the old-growth section, “I just instinctively knew I was seeing something very different.”

    Saddler’s Woods, with a scrap of old-growth forest, has survived the rush of development in suburban New Jersey thanks to generations of dedicated forest lovers. Saddler’s Woods Conservation Association

    in Science News on July 13, 2021 10:00 AM.


    Elsevier’s Kumsal Bayazit named one of Best CEOs for Women and Diversity

    Comparably names Elsevier CEO Kumsal Bayazit to Top 10 Best CEOs for Women, per employee surveys

    in Elsevier Connect on July 13, 2021 12:00 AM.


    Dogs tune into people in ways even human-raised wolves don’t

    Wiggles and wobbles and a powerful pull toward people — that’s what 8-week-old puppies are made of.

    From an early age, dogs outpace wolves at engaging with and interpreting cues from humans, even if the dogs have had less exposure to people, researchers report online July 12 in Current Biology. The result suggests that domestication has reworked dogs’ brains to make the pooches innately drawn to people — and perhaps to intuit human gestures.

    Compared with human-raised wolf pups, dog puppies that had limited exposure to people were still 30 times as likely to approach a strange human, and five times as likely to approach a familiar person. “I think that is by far the clearest result in the paper, and is powerful and meaningful,” says Clive Wynne, a canine behavioral scientist at Arizona State University in Tempe who was not involved in the study.

    Wolf pups are naturally less entranced by people than dogs are. “When I walked into the [wolf] pen for the first time, they would all just run into the corner and hide,” says Hannah Salomons, an evolutionary anthropologist studying dog cognition at Duke University. Over time, Salomons says, most came to ignore her, “acting like I was a piece of furniture.”

    But dogs can’t seem to resist humans’ allure (SN: 7/19/17). They respond much more readily to people, following where a person points, for example. That ability may seem simple, but it’s a skill even chimpanzees — humans’ close relatives — don’t show. Human babies don’t learn how to do it until near their first birthday. When wolves have been put to the task, the results have been mixed, suggesting that wolves need explicit training to learn the skill. Scientists haven’t been sure if dogs’ ability is learned or, after at least 14,000 years of domestication, has become innate (SN: 1/7/21).  

    To find out, Salomons and colleagues showered attention on wolf pups, while restricting dog puppies’ access to people. Days after birth, 37 wolves got round-the-clock human attention. Caregivers even slept amid a pile of wolf pups on outdoor mattresses. Meanwhile, 44 retriever puppies stayed with their mothers and littermates until they were 8 weeks old, with only brief visits from people.

    People raised wolf puppies (one shown) at the Wildlife Science Center in Stacy, Minn., for a new study on domestication. The pups received round-the-clock attention, including sleeping with human caregivers on outdoor mattresses. Roberta Ryan

    The researchers then exposed both types of puppies to familiar and unfamiliar people and objects. Puppies’ memories were tested by hiding treats in their view. A cylinder with food inside — solvable only by going around to an open end, but tempting to gnaw on the middle — challenged puppies’ self-control. To observe puppies’ response to human gestures, researchers pointed at hidden treats or placed a small wooden block next to a hiding spot to draw the eye.

    Wolves and dogs were evenly matched in memory and self-control, the researchers found. But in tasks involving human communication, dogs surpassed wolves. Dogs were twice as likely to follow a pointed finger or a wooden block as a clue. Dogs also made twice as much eye contact, meeting humans’ gaze in four-second bursts compared with wolf pups’ average of 1.47 seconds.

    Dogs intuit human gestures from a young age, Salomons and colleagues conclude, lending support for the idea that domestication has wired dogs’ brains for communicating with humans. Dogs “are born with this readiness to understand that a person would be trying to communicate with them,” Salomons says. “Wolves didn’t have that tendency. It wouldn’t really occur to them that a person would be trying to help them.”

    Domestication’s effects on dogs’ brains may be more emotional than cognitive, Wynne says. Though the researchers tested only wolves willing to approach people, “it doesn’t strike me as surprising” that dogs explore objects near humans more often, he says. “I think that is most likely to do with the fact that dogs are just generally happier getting close to a person.”

    One thing is clear: Domestication has molded dogs into people-seeking missiles, drawn to humans from the start. The dog pen is all licks, wiggles and eye contact, Salomons says, nothing at all like a cage full of uninterested wolf pups.

    in Science News on July 12, 2021 03:00 PM.


    Satellites show how a massive lake in Antarctica vanished in days

    On June 5, 2019, a massive, ice-covered lake sat atop East Antarctica’s Amery Ice Shelf. Within six days, all 600 million to 750 million cubic meters of lake water had vanished, leaving a deep sinkhole filled with fractured ice.

    “The amount of water that was in the lake was twice that of San Diego Bay. We’re talking about a lot of water,” says Helen Fricker, a glaciologist at Scripps Institution of Oceanography in La Jolla, Calif. Now, using satellite data to reconstruct the event, scientists have solved the mystery of the disappearing lake.

    Most likely, the weight of all that water fractured the ice shelf below. Channels in the ice then formed, and the water drained away all at once, in a Niagara Falls–like rush, glaciologist Roland Warner, Fricker and colleagues report June 23 in Geophysical Research Letters.

    Before they knew about the lake, the scientists first spotted the sinkhole. “It was serendipitous,” Fricker says. Warner, of the University of Tasmania in Hobart, Australia, had been browsing satellite images of Antarctica in January 2020 while tracking a path of smoke lofted into the stratosphere by Australian wildfires (SN: 3/4/20).

    Warner saw an icy depression, called a doline, spanning 11 square kilometers and about 80 meters deep. Going through the archives, he, Fricker and their colleagues pinned down when the depression formed. Older satellite images revealed that a lake had been at that spot since at least 1973. Using laser altimeter satellite data, the team gleaned estimates of surface elevation changes over time and from those estimated how much water the lake once held (SN: 12/18/05).
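    The doline’s dimensions are roughly consistent with the estimated water volume (a sketch; treating the depression as a basin with the quoted 11-square-kilometer area, and assuming an average depth somewhat less than the quoted ~80-meter maximum):

    ```python
    # Consistency check: can an 11 km^2 depression hold 600-750 million m^3?
    # Assumption (not from the article): the basin's average depth is somewhat
    # less than the quoted ~80 m maximum, here taken as 55-70 m.
    area_m2 = 11 * 1_000_000   # 11 square kilometers in square meters

    for avg_depth_m in (55, 70):
        volume_m3 = area_m2 * avg_depth_m
        print(f"average depth {avg_depth_m} m -> {volume_m3 / 1e6:.0f} million m^3")
    # The two ends roughly bracket the estimated 600-750 million cubic meters.
    ```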

    It’s unclear whether the lake’s disappearance is linked to climate change. Icy lakes and dolines occur regularly on this ice shelf, Fricker says. But this is the first time that scientists have had evidence to piece together how such an event happens.

    in Science News on July 12, 2021 02:00 PM.


    Elsevier says “integrity and rigor” of peer review for 400 papers fell “beneath the high standards expected”

    Elsevier says it is reassessing its procedures for special issues after one of its journals issued expressions of concern for six such publications, involving as many as 400 articles, over worries that the peer review process was compromised. The journal, Microprocessors & Microsystems, published the special issues using guest editors. The EoCs vary slightly, but …

    in Retraction watch on July 12, 2021 12:19 PM.


    Why Do Some People Without Mental Health Problems Experience Hallucinations? Replication Study Casts Doubt On Previous Theories

    By Emma Young

    Hallucinations are a common symptom of schizophrenia and related disorders, but mentally well people experience them, too. In fact, work suggests that 6-7% of the general population hear voices that don’t exist. However, exactly what predisposes well people to experience them has not been clear. Now a major new study of 1,394 people from 46 different countries, led by Peter Moseley at Northumbria University, provides support for two hypotheses from earlier, smaller studies — namely, that a history of childhood trauma and a propensity to hear non-existent speech among background noise are both associated with experiencing hallucinations — but does not support three others.

    “In terms of reproducibility, these results may be a cause for concern in hallucinations research (and cognitive and clinical psychology more broadly),” writes the team in their paper in Psychological Science. In firming up a few ideas, the work does, though, help to clarify what aspects of cognition as well as past experience are — and are not — linked to being more prone to hallucinations.

    The participants came into one of 11 data collection labs (in the UK, France, the Netherlands, the Czech Republic, Canada, Norway and Australia) or participated online. They completed two scales that measured hallucinatory experiences, such as hearing, seeing or smelling something when there was nothing to explain those perceptions. They also reported on incidences of childhood trauma (being regularly criticised by a parent, for example). And they completed a series of tasks that tapped into cognitive processes identified in earlier work as being linked to hallucinations in the general population, as well as in patient groups.

    The team found that experience of childhood trauma and performance on just the “auditory signal detection task” were linked to hallucinations. That task (which was completed by only the 594 participants who visited the labs) measured the participants’ ability to tell whether or not brief clips of speech had been played during longer bursts of noise. Those who scored higher on the hallucinatory scales had a higher false alarm rate — they were more likely to report hearing a voice when none was present. These data support the previously proposed idea that hallucinations are linked to the brain over-relying on expectations of what will be perceived, vs actual sensory input. A higher false alarm rate on this kind of task is also seen in patients with schizophrenia, as is a history of adverse experiences in childhood.

    However, in contrast to what has been found for patient groups and smaller, earlier studies of the general population, there was no association between hallucinatory experiences and results on tests of “dichotic listening” (which assessed the degree to which language processing was lateralised), “source memory” (remembering whether they’d actually heard or only imagined hearing various words) or verbal working memory. So, it seems that for well people who hallucinate, a failure to distinguish between what they have imagined vs what they have actually sensed, for example, doesn’t seem to be a cause. This adds to recent doubts about the idea of a “continuum model” for voice-hearing, which theorises that patients are simply further along the same voice-hearing spectrum as people in the general population who hear voices.

    However, given the previously small sample sizes, lack of standardisation of studies on patients, and sparsity of direct replications, it’s hard to be sure whether hallucinations in well vs mentally unwell people have fundamentally different causes. “Further preregistered studies with large samples in these groups are needed,” the team writes. That work would not only help to further address the reproducibility issues in this field of research, but hopefully also clarify the mechanisms underlying hallucinations in general.

    – Correlates of Hallucinatory Experiences in the General Population: An International Multisite Replication Study

    Emma Young (@EmmaELYoung) is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on July 12, 2021 09:15 AM.


    Call for Papers! Introducing BMC Oral Health’s New Collection titled “Teledentistry and Distance Learning: Improving Access to Oral Health Care and Education”

    The COVID-19 pandemic has thrust teledentistry and distance learning to the forefront of policy considerations. Teledentistry supports the delivery of oral health services through electronic communication means, connecting providers and patients without the usual time and space constraints. Teledentistry’s unique ability to connect disadvantaged, rural communities and the home-bound with dental providers makes this method particularly well-suited to address the lack of access during and after the pandemic.

    Teledentistry can be used for consultation and triage, allowing providers to advise patients whether their dental concerns constitute a need for urgent or emergency care, whether a condition could be temporarily alleviated at home, or whether treatment could be postponed. Teledentistry and distance learning can also be used to facilitate access to preventive services and oral health education.

    In support of Sustainable Development Goal (SDG) 3: Good Health and Wellbeing and SDG 10: Reduced Inequalities, BMC Oral Health has launched a new collection to bring together research using teledentistry or distance learning to:

    • Improve access to oral health care.
    • Support the delivery of oral health services.
    • Facilitate preventative care.
    • Facilitate professional and public oral health education.

    Articles under consideration for publication within the collection will be assessed according to the standard BMC Oral Health editorial criteria and will undergo the journal’s standard peer-review process overseen by Editorial Board Member Dr Raša Mladenović (University of Kragujevac), ‪Associate Professor Tuti Ningseh Mohd Dom (Universiti Kebangsaan Malaysia) and Dr Bharathi Purohit (People’s College of Dental Sciences & Research Centre). If accepted for publication, an article processing charge applies (with standard waiver policy).

    The collection is now open for submissions!

    To submit to the collection, please click here. Please state in your cover letter that your manuscript is for the “Teledentistry and Distance Learning: Improving Access to Oral Health Care and Education” collection. Before submitting your manuscript, please ensure you have carefully read the submission guidelines for BMC Oral Health. For pre-submission inquiries, please contact Jennifer Harman (jennifer.harman@springernature.com), the Editor of BMC Oral Health.

    The post Call for Papers! Introducing BMC Oral Health’s New Collection titled “Teledentistry and Distance Learning: Improving Access to Oral Health Care and Education” appeared first on BMC Series blog.

    in BMC Series blog on July 12, 2021 07:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Natural disasters and violence against children – what does one have to do with the other?

    We estimate that over 3% of the world’s population required humanitarian aid in 2020. As temperatures rise and weather patterns shift, one can only imagine the number of people and financial resources needed to provide services if we continue unabated on this current dangerous path. We lack accurate figures but hypothesize that the destruction of formal structures and social safety nets during natural disasters will likely lead to increases in violence against children.

    When disaster strikes, we often do not have time to ask how and why violence occurs. We create research that is responsive and immediate, focusing on enumerating the children at highest risk of violence or the top concerns of affected communities. However, if we do not fully understand which aspects of a disaster lead to violence, how can we prevent it?

    This question has echoed in my head many times over the years, as I spent the better part of a decade conducting research on child protection violations in humanitarian emergencies. Those who work in disaster response and humanitarian emergencies are guided by the premise that we do not cause harm, and therefore, our interventions are needed and justified. Corresponding research is process driven to improve upon existing service delivery. I often wondered if we could create a more public health approach that targets the upstream drivers of violence.

    Our systematic review published in BMC Public Health collates information on the potential pathways between natural disasters and violence against children to suggest points of intervention. We identified five pathways to violence:

    1. Environmentally induced changes in supervision, accompaniment, and child separation
    2. Transgression of social norms in post-disaster behavior
    3. Economic stress
    4. Negative coping with stress
    5. Insecure shelter and living conditions.

    The findings have three major implications for action. First, while more pathways likely exist given the scarcity of the evidence, we can say with assurance that intervening on each of these aspects of the post-disaster environment would help to prevent violence. Violence prevention programming should structure interventions that target these five pathways after natural disasters.

    We have a wealth of evidence-based interventions that can be utilized; for instance, SASA! (meaning “now” in Kiswahili) for social norms change, Parents Make the Difference for positive parenting, and Cure Violence for creating safe environments. Global standards in the Minimum Standards for Child Protection in Humanitarian Action (CPMS) and the World Health Organization’s (WHO) INSPIRE: Seven Strategies for Ending Violence against Children should be met in building interventions.

    Second, we should leverage the existing knowledge and resilience of communities that often experience natural disasters cyclically. We found that some families and communities were able to protect children from violence after natural disasters, particularly by creating new accompaniment and supervision structures. Natural disasters may differ from armed conflict in that community trust remains more intact.

    We should look to indigenous knowledge that bolsters local response. These strategies are likely to be effective, more easily promoted than external interventions, and supportive of efforts to localize response. We highly encourage greater documentation and evaluation of positive behaviors after natural disasters that work to prevent violence.

    Third, multi-pronged approaches are essential in preventing violence. The five outlined pathways individually lead to violence but do not act in isolation. Programming must be created to combat multiple pathways.

    Suppose a service provider creates a livelihoods program to address the economic stress pathway to violence. Unconditional cash transfers are provided to women heads of household in an effort to ensure that money is used for family needs. Violence against women and children may still occur in societies where it transgresses gender norms for women to be breadwinners of households. Humanitarians may cause more harm than good if they do not bundle livelihoods interventions with social norms change.

    The livelihoods intervention further does not alleviate other negative coping behaviors. Violence against children may still occur if women caregivers are not provided with parenting programs and mental health support to cope with acute and ongoing stress from the natural disaster.

    Without addressing all pathways, violence against children will likely continue to occur. Individual agencies should design comprehensive interventions that address multiple pathways to violence, and humanitarian coordination mechanisms can ensure that no gaps exist in government and agency services.

    The post Natural disasters and violence against children – what does one have to do with the other? appeared first on BMC Series blog.

    in BMC Series blog on July 12, 2021 06:38 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Finding the Mother Tree, by Suzanne Simard: book review

    "But man is a part of nature, and his war against nature is inevitably a war against himself" - Quote by Rachel Carson, used as opening of Simard's book.

    in For Better Science on July 12, 2021 06:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Weekend reads: How many scientists commit misconduct?; science ‘moved beyond peer review during the pandemic’; Juul pays for entire journal issue

    Before we present this week’s Weekend Reads, a question: Do you enjoy our weekly roundup? If so, we could really use your help. Would you consider a tax-deductible donation to support Weekend Reads, and our daily work? Thanks in advance. The week at Retraction Watch featured: ‘They seem to mean business’: Cardiology journal flags papers cited … Continue reading Weekend reads: How many scientists commit misconduct?; science ‘moved beyond peer review during the pandemic’; Juul pays for entire journal issue

    in Retraction watch on July 10, 2021 01:16 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    ‘They seem to mean business’: Cardiology journal flags papers cited hundreds of times

    A European cardiology journal has issued expressions of concern for seven widely-cited papers dating back to 2009 after a reader flagged suspicious images in the articles.  Although the cast of characters changes, the senior author on all seven papers is Chao-Ke Tang, of the First Affiliated Hospital of the University of South China, in Hengyang, … Continue reading ‘They seem to mean business’: Cardiology journal flags papers cited hundreds of times

    in Retraction watch on July 09, 2021 10:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Children Who Enjoy School Aged Six Tend To Get Better Grades Ten Years Later

    By Emily Reynolds

    Multiple factors influence how we perform educationally: the way we’re taught, our particular needs and how they’re met, our parents, and our socio-economic background to name a few. Gaps in attainment can start from very early on: some children have already fallen behind before the age of seven.

    But what about how much we enjoy school? A new study in npj Science of Learning, led by the University of Bristol’s Tim Morris, looks at this relatively under-explored factor. And the team finds that enjoyment at the age of six has a significant impact on achievement, which was visible even years later when participants took their GCSEs.

    Data was gathered from participants in the Avon Longitudinal Study of Parents and Children, which has been tracking parents and their children from 1991 onwards. At the age of six, participants were asked if they liked school, before answering further questions on their enjoyment six months later.

    Educational attainment was measured through exam results aged 16, and the team also looked at sex, month of birth and school year, ethnicity, cognitive ability aged eight, maternal education and the socioeconomic position of parents. Mothers who took part in the study also reported how much their children liked their teachers aged six, and children themselves self-reported their temperament by answering questions on how happy or angry they were.

    From the age of eight, children answered questions on their confidence in their work and intelligence, as well as how happy they were with their number of friends and the quality of their friendships. Finally, the team looked at the home learning environment through questions on how families taught their children colours, language, numbers, songs and shapes and sizes.

    There was no relationship between school enjoyment and parental socioeconomic status: children whose parents were in so-called “skilled” occupations were just as likely to enjoy school as those whose parents were in “unskilled” occupations. Children with higher cognitive ability were more likely to enjoy school than not, girls were twice as likely as boys to say they enjoyed school, and non-white children were almost twice as likely as their white counterparts to report enjoying school.

    Unsurprisingly, there was a strong relationship between children’s opinion of their teacher and how much they enjoyed school: those whose parents reported that they liked their teachers were more than nine times more likely to enjoy school than those who did not. Similarly, those who had confidence in their work also enjoyed school more.

    Enjoyment of school didn’t just have a short-term impact, however. Those who enjoyed school at age six scored on average 14.4 more points at GCSE — a difference of two grades — even when the researchers had controlled for other factors related to educational achievement like cognitive ability and family socioeconomic status. They were also 29% more likely to obtain five or more A*-C grades, including those Maths and English qualifications so crucial for employment. In fact, enjoyment of school aged six was almost as strong a predictor of educational achievement aged 16 as other factors such as sex and socioeconomic status.

    It may seem obvious that enjoyment of school has an impact on grades. But it’s striking that enjoyment at age six may impact grades aged 16, particularly when you consider its relative importance alongside factors like gender and cognitive ability.

    The results also appear promising for potential interventions. Enjoyment is potentially more modifiable than socioeconomic factors; designing interventions that target school enjoyment and promote positive feelings about school could therefore have a significant impact on attainment many years down the line.

    The team notes that the results should not be taken as a definitive way to address inequality in education, an issue which is clearly complex and multi-layered; future research could also explore why children do or do not enjoy school and how this interacts with external, social factors. However, thinking carefully about enjoyment could be one piece in the jigsaw of academic attainment.

    Associations between school enjoyment at age 6 and later educational achievement: evidence from a UK cohort study

    Emily Reynolds is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on July 09, 2021 09:48 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Schneider Shorts 9.07.2021: Eunuchs and Eugenics

    Schneider Shorts of 9.07.2021: Academic violence in China, teleportation in Elsevier, some confused glyphosate shills, a life-extension recipe (gentlemen only), English middle-class eugenics and other privileges, Berkeley uncovering a giant conspiracy, and finally, if only IHU Marseille went for COVID-19 stool transplants instead of that other brown s***.

    in For Better Science on July 09, 2021 06:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    How is food insecurity portrayed in UK newspapers?

    What is food insecurity?

    Food insecurity can be defined as “the inability to consume an adequate quality or sufficient quantity of food in socially acceptable ways, or the uncertainty that one will be able to do so”. As well as hunger, this definition captures the dimensions of food insecurity (or food poverty) that are sometimes overlooked: stability, agency and sustainability. Can people always access food? Can they do so in a way that’s free of shame and stigma? Will the current food system work for us all in the long-term?

    Portrayal of food insecurity in UK newspapers

    Food insecurity presents a large and growing problem in the UK. We estimated that 24% of UK adults were experiencing food insecurity in 2017 – resorting to food banks, among other coping mechanisms, to feed their families. Our study aimed to understand the ways in which newspapers covered the topic of food insecurity – how the problem and its drivers were framed. This provides insight into attitudes towards the problem, the potential solutions and their acceptability among different stakeholders.

    The picture our work painted was a nuanced one, where many societal, political and economic factors were portrayed as contributing to the growing problem of food insecurity. At the heart of it, people did not have enough money to meet their needs. The reasons often cited for this were: low wages, insecure jobs, unemployment and high cost of living.

    But further upstream, we saw more contention. Charities and advocacy groups highlighted the structural barriers to people meeting their living costs – austerity, funding cuts, policies that exacerbate social inequalities. The roll-out of the Universal Credit social security system (introduced to replace several means-tested benefits) was also considered a major contributor to the problem, especially the long wait (of at least 5 weeks) when switching from the old system to Universal Credit.

    The voices of charities, advocacy groups and the general public appeared united within the news articles – people were urging the government to take action. The government, however, was portrayed as reluctant to admit links between welfare policies and increasing food insecurity, with government representatives calling food insecurity a ‘cash flow’ issue and promoting charity to help those experiencing food insecurity.

    Our work highlights a disconnect between the problem and the solutions. The problem was presented as systemic, whilst the solutions were heavily reliant on charity – solutions that, at best, provide short-term relief for families experiencing hunger, unhealthy diets and the social consequences of food insecurity.

    Holiday hunger

    The issue of ‘holiday hunger’ (hunger experienced by children during the school holidays, often because they are no longer receiving free school meals) has brought more attention to the problem. This has resulted in some political action, e.g. the Children’s Future Food Inquiry and the House of Lords’ Select Committee Inquiry on Food, Poverty, Health, and Environment.

    A debate surrounding food vouchers for children who usually receive free school meals at the start of the COVID-19 pandemic was played out in the media. Campaigners pressured the government to provide food vouchers to low-income families throughout the summer. Their success in reversing the government’s decision to not provide support during the summer holidays shows news media to be a powerful tool for advocacy.

    Food insecurity in post-pandemic UK

    As we emerge from the COVID-19 pandemic, many of the economic factors contributing to food insecurity will have worsened. Urgent action must be taken to protect families from food insecurity, with a focus on solutions that address the upstream drivers rather than relying on food banks and other forms of charitable food aid.

    The post How is food insecurity portrayed in UK newspapers? appeared first on BMC Series blog.

    in BMC Series blog on July 09, 2021 05:25 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Strategy and Membership Webinar: Recording now available

    arXiv envisions its future as a central hub for accessing open research, aiming to make its vast scientific content highly accessible and interoperable for the benefit of the community — and to maximize the impact of the research produced. On June 30, 2021, more than 225 people joined arXiv’s Strategy and Membership Webinar to learn how we are working together with our diverse community to reinvent scientific communications. The recording is now available for viewing.



    in arXiv.org blog on July 08, 2021 02:53 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Kids As Young As Five Underestimate How Much Their Peers Like Them

    By Emma Young

    A striking paper in Psychological Science in 2018 revealed consistent evidence for the “liking gap” — that other people like us more than we think. Now, for the first time, researchers have looked at how this phenomenon arises during childhood. The study, led by Wouter Wolf at Duke University, US, on children aged 4 to 11, found that the liking gap emerged by around 5, and then grew wider with age. The findings have theoretical but also practical implications: parents and teachers can reassure kids that their judgements about what their peers think of them are likely to be overly negative, which could be of particular help to those who are worried about their relationships with classmates.

    Wolf and his colleagues recruited pairs of children who didn’t know each other from a local museum and other events. They first spent five minutes building a tower together. Then each child used a seven-item emoticon scale, which ranged from a crying face to a beaming face, to rate their feelings about the other child (how much they liked the other boy or girl, wanted to play with them again, and wanted them to be their friend) and to indicate what ratings they thought their partner would give them. The difference between a child’s perceptions of their partner’s ratings of them and their partner’s actual ratings gave a “liking gap” score for each participant. 
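    The gap score described above is a simple difference measure. A minimal sketch of the computation, with invented names and ratings on the study's seven-item (1–7) emoticon scale:

```python
# Hypothetical sketch of the "liking gap" score: the partner's actual
# rating of a child minus the child's prediction of that rating.
# A positive gap means the child underestimated how much they were liked.
# All names and values here are invented for illustration.

pair = {
    "A": {"rating_of_partner": 6, "predicted_rating": 4},
    "B": {"rating_of_partner": 7, "predicted_rating": 5},
}

def liking_gap(pair, child, partner):
    """Partner's actual rating of `child` minus `child`'s own prediction."""
    actual = pair[partner]["rating_of_partner"]
    predicted = pair[child]["predicted_rating"]
    return actual - predicted

gaps = {"A": liking_gap(pair, "A", "B"), "B": liking_gap(pair, "B", "A")}
print(gaps)  # positive values indicate underestimation
```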

    Data on a total of 261 children were included in the analysis. The gender make-up of the pairs made no difference to the results. However, age did: a liking gap appeared by around the age of five and grew more pronounced with age.

    The causes for the gap among the youngest and older children were different, though. When the gap first emerged, it was driven by more positive partner evaluations — five-year-olds liked other kids more than four-year-olds did. The team thinks this reflects greater exposure to children (kindergarten enrolment is mandatory at five in the area of the US where the study was conducted), which could reduce anxiety about strangers and make social interactions more enjoyable.

    The expansion of the liking gap after age five was driven by something different, however: as they grew older, the kids had increasingly less positive perceptions of their partner’s feelings about them. “This suggests that after the emergence of the liking gap between ages four and five, its subsequent development is primarily driven by increased social concern with other people’s evaluations of the self,” the team writes.

    This would fit with other work finding that children develop a more complex theory of mind around age six, when they also start to become more concerned about the impression they are making on others. They may start to realise, for example, that another child might act friendly because they want to come across as friendly and likeable, rather than because they are genuinely enjoying their company. The widening of the gap with age suggests that, up to 11 at least, children become less certain about what another child’s behaviour really signals.

    Individual differences might influence the scale of an individual’s liking gap. “It is not implausible that, in some cases, shyness, social anxiety or insecure attachment could be a manifestation or a consequence of a relatively high discrepancy between how much children like other people and how much they think other people generally like them back,” the team notes.

    Clearly, more work is needed to explore this. But perhaps simply explaining the existence of the liking gap to kids might be one way to improve how children feel about their interactions with their peers, especially with strangers — when joining a new school, say. That’s a study I’d really like to see.

    The Development of the Liking Gap: Children Older Than 5 Years Think That Partners Evaluate Them Less Positively Than They Evaluate Their Partners

    Emma Young (@EmmaELYoung) is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on July 07, 2021 11:32 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Feature Weighted Density-based Clustering

    This week on Journal Club session Na Helian will talk about her work done together with her PhD student Stiphen Chowdhury about "Feature Weighted Density-based Clustering".

    DBSCAN is arguably the most popular density-based clustering algorithm, and it is capable of recovering non-spherical clusters. One of its main weaknesses is that it treats all features equally. In this paper, we propose a density-based clustering algorithm capable of calculating feature weights representing the degree of relevance of each feature, which takes the density structure of the data into account. First, we improve DBSCAN and introduce a new algorithm called DBSCANR. DBSCANR reduces the number of parameters of DBSCAN to one. Then, a new step is introduced to the clustering process of DBSCANR to iteratively update feature weights based on the current partition of data. The feature weights produced by the weighted version of the new clustering algorithm, W-DBSCANR, measure the relevance of variables in a clustering and can be used in feature selection in data mining applications where large and complex real-world data are often involved. Experimental results on both artificial and real-world data have shown that the new algorithms outperformed DBSCAN-type algorithms in recovering clusters in data.
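    The actual W-DBSCANR weight-update rule is given in the paper; the following sketch only illustrates the underlying idea, running standard DBSCAN under a diagonally weighted Euclidean metric to show how down-weighting an irrelevant feature exposes density structure that equal weighting hides. All data and parameter values are invented.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Two clusters separated only along feature 0; feature 1 is pure noise.
a = rng.normal(loc=(0.0, 0.0), scale=(0.15, 20.0), size=(50, 2))
b = rng.normal(loc=(4.0, 0.0), scale=(0.15, 20.0), size=(50, 2))
X = np.vstack([a, b])

def weighted_dbscan(X, weights, eps=0.6, min_samples=5):
    """DBSCAN under d(x, y) = sqrt(sum_j w_j * (x_j - y_j)^2)."""
    w = np.asarray(weights, dtype=float)
    diff = X[:, None, :] - X[None, :, :]
    D = np.sqrt((w * diff**2).sum(axis=-1))  # pairwise weighted distances
    return DBSCAN(eps=eps, min_samples=min_samples,
                  metric="precomputed").fit_predict(D)

# With equal weights, distances are dominated by the noisy feature and
# most points fall below the density threshold, i.e. get labelled -1.
equal = weighted_dbscan(X, [0.5, 0.5])
# Down-weighting the irrelevant feature recovers the two clusters.
weighted = weighted_dbscan(X, [1.0, 0.01])
print(len(set(weighted) - {-1}))  # number of clusters recovered
```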

    Date: 2021/07/09
    Time: 14:00
    Location: online

    in UH Biocomputation group on July 07, 2021 11:09 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Nanotheranostics with a decisive action

    "We will look in each instance thoroughly and take a decisive action in consultation with journals and university in each instance as appropriate", Sasha Kabanov, winner of the Lenin Komsomol Prize 1988

    in For Better Science on July 07, 2021 07:15 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    How to create a successful global North-South collaboration

    Prof Cesar Pulgarin shares steps to setting up successful North-South collaborations – and pitfalls to avoid

    in Elsevier Connect on July 07, 2021 12:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Methods as a scientific asset

    A research article is an orderly summation of a complex and circuitous process. It is characterized by detailed planning, iterative trial and error, meticulous execution and thoughtful analysis. As a summary, articles are invaluable, but detailed insight into processes and procedures is required to truly understand and reproduce research. Detailed public methodological documentation can contextualize results, subvert bias, enhance reproducibility, and increase efficiency across the entire scientific ecosystem. In recent years, the research community has gained a deeper appreciation for the value of detailed methodological documentation. Here’s why:

    Methods are research

    Developing a methodology and then executing it is arguably the largest component of research. It’s the day-to-day business of science, the mechanism for capturing data that can later be used to investigate a research question. Depending on the field and approach, methods can take many forms, from a recipe-like protocol, to a script, to a database of behavioral stimuli. But whatever the specifics, a clear and complete method is key to running a consistent and reliable study, reproducing the work in future, and understanding the results as a reader.

    Just as with research articles, restricted access to methods creates inefficiencies and slows progress. Researchers expend time (and funding) developing similar approaches, repeating experiments and pursuing dead ends, unaware of related work. Out-dated procedures remain in use long after they’ve been superseded because practitioners cannot access the latest advances. And there is an opportunity cost. Researchers rely on other research for inspiration, direction and advancement. When the products of research remain hidden, promising new leads may be left unexplored.

    Methods are lasting

    Methods are highly transferable. More than any other research artifact, methods have the potential for adaptation or reuse in different contexts and across a broad range of research questions and disciplines. For evidence, look no further than researchers’ frequently expressed frustration with “citation trails”, in which an article’s methods section cites the methods section of an earlier work, and that work cites another work, and so on, often going back years or even decades. Similarly, stand-alone methods articles are often among the most highly cited publications, and they continue to receive citations over a longer period of time than standard research articles.

    Results are subjective—methods less so

    In order to generate conclusions, researchers make choices about how to analyze data and interpret the patterns they observe. These analytical and interpretive choices are valuable, but subjective. They benefit from the researchers’ intuition, inspiration and expertise—but those same factors can also introduce bias and lead to misinterpretation. The unencumbered, unprocessed artifacts of research—such as methods and raw data—are much less likely to be influenced either positively or negatively by personal interests, insights or opinions. They are closer to neutral—a clear accounting, rather than an argument with a personal perspective.

    Methods are insufficiently described

    A research article is not an instruction manual. Often, published research cannot be reproduced, not because the original work was flawed, but simply because the article format is not conducive to a really in-depth description of the process.


    Methods serve authors, readers and the research community

    Clear, complete, and open methods increase credibility and support lasting impact. Documenting and sharing methodologies has interrelated scientific and reputational benefits for individuals and the community. 

    • Making methods public creates a positive impression. Having the option to review detailed methods increases readers’ trust, whether or not they consult the documentation. 
    • Researchers can more easily reproduce results with detailed open methods. Authors who want to apply the method in their own research can do so more efficiently if the approach is described in detail and easy to find online.
    • Strong, easy-to-follow methods are more likely to be used in future research, and by extension more likely to be cited, bringing fresh eyes to the original and helping it to remain relevant over time.

    Under the best circumstances, these three factors can feed off one another in an expanding cycle of trust, reuse, and readership.

    The future of methods as permanent assets

    This combination of relevance, usefulness, potential for reuse, and demand suggests that methods deserve a more formalized, permanent place in the scientific record. Methods should be reviewed and validated, indexed and archived, searched for and cited. They should be treated as stand-alone research artifacts, rather than supplementary material of interest only within the context of a research article. 

    Alongside their enormous potential to positively impact the research community, sharing methods has a comparatively low barrier to entry for authors. From a practical perspective, process documentation often already exists in some form (for example, in instructions for internal use, or as part of a funding application), providing a convenient basis for a publication. More importantly, detailed, published methods are a natural extension of existing practice. Some discussion of methods has always been standard in the sciences. Increasing depth and detail, establishing norms, implementing peer review, and formalizing publication are all important and logically consistent extensions, analogous to the evolution of scientific discourse from published correspondence in the 18th century into the modern peer-reviewed research article. It will take time, community discussion and experimentation to refine and establish new norms for communicating methods—but the notion of communicating methods at all does not inspire the same heated debate as other aspects of Open Science practice. 

    How can we communicate methods more effectively?

    The scholarly community is developing new ways to document, preserve and communicate methods for future use. In a future post, we’ll explore some of them.

    The post Methods as a scientific asset appeared first on The Official PLOS Blog.

    in The Official PLOS Blog on July 06, 2021 05:57 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    How Should You Talk To A Loved One Who Believes In Conspiracy Theories?

    By Emily Reynolds

    Conspiracy theories have surged over the last few years, as we’ve frequently reported. One 2018 study, for example, found that 60% of British people believed in a conspiracy theory. Meanwhile, the rise of QAnon in America has been particularly alarming.

    It’s easy to dismiss conspiracy theorists — but this is not a productive way to tackle the issue. Instead, researchers are exploring why people get sucked into such belief systems, even at the expense of personal relationships. This work can help us understand why conspiracies spread, and provide some useful guidance for talking to loved ones who may have fallen for a conspiracy theory.

    Why do people believe in conspiracy theories?

    There are a number of reasons someone may be attracted to a conspiracy theory, often related to frustrated psychological needs. 

    “The first of these needs are epistemic, related to the need to know the truth and have clarity and certainty” explains Karen M. Douglas, Professor of Social Psychology at the University of Kent. “The other needs are existential, which are related to the need to feel safe and to have some control over things that are happening around us, and social, which are related to the need to maintain our self-esteem and feel positive about the groups that we belong to.”

    So, for instance, if someone is anxious about the pandemic and feels out of control, they may be drawn to theories that suggest it is false, satisfying their existential needs. If they are frustrated about a particular political situation, they may start exploring apparently clear-cut solutions to unanswerable questions, satisfying their epistemic needs.

    There are also numerous risk factors related to conspiratorial thinking: conspiracy theories can be fuelled by a desire to feel special, for example, or by political apathy. People with lower levels of critical thinking are also more likely to believe in conspiracy theories, as Stephan Lewandowsky, co-author of a recent Conspiracy Theory Handbook and Chair in Cognitive Psychology at the University of Bristol, explains.

    Those who endorse conspiracy theories are “usually people who believe that intuition is a better way to know the truth than data — people who think their gut feeling is telling them what to believe and who don’t need or want evidence to make a decision,” he says. “They don’t have a healthy level of scepticism.”

    Conspiracy theories, by their nature, are also “self-sealing”, meaning that evidence can’t be used to refute them — one of the reasons they are so hard to counter. “The absence of any evidence is taken to be evidence for the theory”, Lewandowsky explains. “To give you one example, there was someone claiming on YouTube last year that Anthony Fauci was personally directing money into a lab in Wuhan. When the interviewer said there was no evidence, her reply was ‘see, that’s how good the cover-up is. There’s no evidence because they cover it up so well’.”

    How to talk to somebody who believes in conspiracy theories

    In an ideal world, we would prevent conspiracy theories from taking root in the first place. As Douglas and her colleague Daniel Jolley note in their study on the anti-vaccination movement, “inoculation” can prevent the influence of conspiracy theories to begin with.

    They found that anti-conspiracy arguments increased intention to vaccinate a child when presented before conspiracy theories. But once these conspiracies were established, they were much more difficult to correct, even with arguments that were factual and seemed logical.

    So talking to somebody before they become immersed in the world of conspiracy theories could be a way of preventing it altogether — something Lewandowsky and other authors refer to as “prebunking”. As David Robson wrote for The Psychologist last year, this isn’t just a case of presenting new information — rather, it’s about encouraging people to think critically, arming them with techniques to protect against misinformation. (The “Bad News” game, developed by University of Cambridge researchers, is one example of an intervention oriented around critical thinking.)

    Dispelling a conspiracy theory once it’s entrenched, however, is not an easy task. “When people believe something so strongly, it’s difficult to change their minds,” Douglas says. “People are very good at selecting and interpreting information that seems to confirm what they already believe, and to reject or misinterpret information that goes against those beliefs.”

    But as academic Jovan Byford writes in The Conversation, “underpinning conspiracy theories are feelings of resentment, indignation and disenchantment about the world”. So it’s important to understand the emotions that might be behind someone’s false beliefs, and to try and empathise with them.

    One study published in Personality and Individual Differences earlier this year found that those espousing COVID-19 conspiracy theories were more likely to experience anxiety, while another found that many conspiracy theorists also felt that they had little control over their lives or the political situations they found themselves in.

    Douglas points out that people believing in conspiracies may feel “confused, worried and alienated”. It would be counterproductive, therefore, to behave in a hostile or ridiculing way towards them. “This just dismisses their views and might alienate them even further and push them further towards conspiracy theories,” she says. “It’s important to keep calm and listen.” “The whole thing is about empathy,” agrees Lewandowsky. “Ridiculing people doesn’t help — and there is evidence to suggest that you shouldn’t do that.”

    As anyone who has had a strained family conversation about politics will attest, it can be hard not to respond in a combative manner if you fundamentally disagree with the way somebody sees the world. But research from Harvard Business School, published in Organizational Behavior and Human Decision Processes, suggests that being receptive might be the way forward instead.

    The team, led by Mike Yeomans, argues that “conversational receptiveness” is key to de-escalating conflict: if you talk to someone in a way that indicates you’re receptive to their views and beliefs, they’re more likely to be persuaded by yours.

    Simple phrases like “I understand that…” or “What you’re saying is…” could therefore bridge the gap between you and somebody with entirely different views — and even if this doesn’t mean they disavow a conspiracist belief, it could help a relationship remain friendly and non-antagonistic.

    Power and purpose

    As Lewandowsky points out, empowering people may also help to combat conspiratorial thinking. As we’ve seen, belief in conspiracy theories is closely linked to feelings of powerlessness — so it follows that instilling a sense of control could help ward off conspiracism.

    On a personal level, people can be empowered through interventions that encourage analytic thinking and that remind them of times they were in control. In one study, for example, participants who were asked to recall a situation in which they were in control were less likely to believe in a conspiracy theory than those asked to recall a situation in which they were out of control. Such approaches may help you get through to someone you care about.

    “One thing that can be done is to restore people’s sense of control,” Lewandowsky says. “One of the reasons people become conspiracy theorists is because they feel they’ve lost control of their lives and they’re afraid — that’s one of the reasons why a pandemic will trigger more of this thinking, because people have lost control of their lives.”

    “So that’s one indirect way of getting at it — don’t try to talk someone out of it, but make them feel good about being in charge of their lives. Then they may gradually give up, because they don’t need it anymore.”

    This isn’t to say that it will always be possible to disabuse someone of their beliefs. “The hardcore believers who are really down the rabbit hole… they are extremely difficult to reach,” Lewandowsky says. It’s also important to protect your own wellbeing when having conversations that may be frustrating or upsetting. But treating people who believe in conspiracy theories with empathy and calmness may be the first step towards a productive conversation.

    Emily Reynolds is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on July 06, 2021 02:15 PM.

    Half day hands-on NeuroML tutorial at CNS*2021 Online

    A half day hands-on NeuroML tutorial was held at the 30th annual meeting of the Organization for Computational Neurosciences (OCNS): CNS*2021 Online on 2nd July. The goal of the tutorial was to teach users to: build, visualise, analyse and simulate models using NeuroML.

    More information on the tutorial can be found here: https://docs.neuroml.org/Events/202107-CNS2021.html

    in The Silver Lab on July 06, 2021 10:25 AM.

    New paper in Nature Neuroscience: Cerebellar granule cell axons support high-dimensional representations (Frederic Lanore, N. Alex Cayco-Gajic, Harsha Gurnani, Diccon Coyle & R. Angus Silver)

    In classical theories of cerebellar cortex, high-dimensional sensorimotor representations are used to separate neuronal activity patterns, improving associative learning and motor performance. Recent experimental studies suggest that cerebellar granule cell (GrC) population activity is low-dimensional. To examine sensorimotor representations from the point of view of downstream Purkinje cell ‘decoders’, we used three-dimensional acousto-optic lens two-photon microscopy to record from hundreds of GrC axons. Here we show that GrC axon population activity is high dimensional and distributed with little fine-scale spatial structure during spontaneous behaviors. Moreover, distinct behavioral states are represented along orthogonal dimensions in neuronal activity space. These results suggest that the cerebellar cortex supports high-dimensional representations and segregates behavioral state-dependent computations into orthogonal subspaces, as reported in the neocortex. Our findings match the predictions of cerebellar pattern separation theories and suggest that the cerebellum and neocortex use population codes with common features, despite their vastly different circuit structures.


    in The Silver Lab on July 06, 2021 10:18 AM.

    Is your company ready for Industry 4.0?

    To stay competitive, chemical companies must prioritize the transformation to digital enterprise

    in Elsevier Connect on July 06, 2021 12:00 AM.

    The Pain Of Social Rejection Is Similar Whether We Are Being Excluded By A Partner Or A Stranger

    By Emma Young

    Imagine that you’re with your partner at a party and you both get chatting to a stranger. Your partner and the stranger get on really well. Before long, they’re laughing away and ignoring you. Which would hurt most: rejection by the stranger, or by your partner?

    The answer, according to new research in Social Psychology, is that — in the moment, at least — they would hurt the same. It seems that we have such a deep need for affiliation that any form of ostracism triggers similar levels of immediate pain.

    Anne Böckler at Leibniz University in Germany and colleagues asked participants to come to the lab with either a friend of the same sex or a romantic partner (of the opposite sex, in this case). While in a separate room from their friend/partner, they played a screen-based ball-tossing game with what they thought was their friend/partner and a third online player. In fact, the passes from this other “player” as well as the passes that the participant believed their friend/partner to be making were controlled by the researchers.

    Photographs of the participant, their friend/partner and the third player (who was of the same gender as the friend/partner) were displayed on the screen next to individual icons. By clicking different buttons on the keyboard, the participant could choose who to pass the ball to. But how many times they were themselves passed the ball depended on their experimental condition. “Included” participants received 20 of the overall 60 passes (as would happen if the ball tosses were equally shared). Those who were “excluded by their friend/partner” received 10 passes overall, all from the stranger. Those who were “excluded by the stranger” also received a total of 10 passes, but all from their friend/partner. People in the fourth group were more completely excluded: after receiving two passes at the beginning of the game, they were then ignored.

    Immediately after the game, participants completed questionnaires about their mood during and after playing. These revealed that exclusion dampened people’s mood during (though not after) the game, whether they had been excluded by a partner, friend or stranger. Total exclusion had a bigger effect on mood.

    Fully-included participants also scored higher than all the others on measures of “belongingness”, self-esteem, meaningful existence (having a central reason for being), and overall basic need satisfaction. Those who had been fully excluded showed even lower basic need satisfaction than the other groups, though full exclusion did not worsen effects on self-esteem. “Crucially, when comparing exclusions by close other with exclusion by a stranger we found no significant differences for any individual or overall basic needs,” the researchers note. Scores for relationship satisfaction were also lower when the participants were excluded by a stranger, friend, or partner, and lowest among those who were totally excluded. So, while the degree of ostracism clearly mattered, being excluded by a stranger, friend, or partner had the same impacts.

    This is a small study, however. And it focused on impacts during and immediately after the game. Rejection by a romantic partner or close friend would surely have longer-lasting effects — even if indirectly, by altering perceptions of the quality of the relationship itself. (I should note that the participants were all debriefed afterwards.) A bigger study might also find impacts on the participants’ behaviour. In this work, exclusion didn’t affect who the participants chose to pass the ball to — there was no evidence that excluded participants tried either to “tend and befriend” their rejector, or stopped passing them the ball.

    Though this new study is one of the first to simultaneously compare the effects of ostracism by friends/partners vs strangers, earlier studies have found that even ostracism by a computer, or someone belonging to a detested group, such as the Ku Klux Klan, has negative effects.

    All of these findings speak to the fundamental nature of the effects of ostracism on our wellbeing. Our ancestors’ lives depended on their having strong social networks — and the drive to have and preserve one, is, it seems, still with us.

    Stranger, Lover, Friend?: The Pain of Rejection Does Not Depend

    Emma Young (@EmmaELYoung) is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on July 05, 2021 01:18 PM.

    The Anti-Cancer Effect of Vitamin D Is More Noticeable in Old Age

    Countries With More UVB Sunlight Have Lower Colon Cancer Rates

    If you live in a country with more UVB sunlight, you might have an advantage when it comes to cancer risk.

    Ultraviolet B (UVB) sunlight is used by your skin to make vitamin D, which is anti-carcinogenic. Those of us farther from the equator get only a little UVB, usually in the middle of the day (around 10am–2pm, depending on where you live) and during the summer months. By comparison, people living closer to the equator are bathed in UVB year-round for most of the day.

    UVB sunlight, with redder shades indicating greater UVB. Data from NASA

    How Does Vitamin D Prevent Cancer?

    Vitamin D has tons of effects on cells. In cancer research, vitamin D is DINOMIT! This acronym describes the sequence of events at the cellular level when vitamin D is low. DINOMIT spells disjunction, initiation, natural selection, overgrowth, metastasis, involution, and transition (see accompanying illustration from Annals of Epidemiology). With good vitamin D levels, cells can communicate better with their neighboring cells due to higher production of cadherin molecules. With low vitamin D levels, cells don’t know that they are surrounded by other cells, so they divide uncontrollably, causing cancer.

    You Might Not Notice Until You’re Older

    Garland CF et al. (2009). Vitamin D for Cancer Prevention: Global Perspective. Annals of Epidemiology 19(7):468-483.

    Our research published in BMC Public Health suggests that the anti-cancer effect of vitamin D is hard to notice in people under age 45. In this study, we analyzed the global differences in colorectal cancer, one of the most common forms of cancer. We found that younger people in low-UVB countries had nearly the same rates of cancer as younger people in high-UVB countries. However, when looking at people older than 45, those in low-UVB countries had dramatically higher rates of cancer than those in high-UVB countries.

    Does this mean that you don’t need vitamin D until you’re older? Not quite. The cancer-causing effects of low vitamin D take a long time to develop. (In fact, this is a key reason that researchers didn’t see high rates of cancer in low-UVB countries until they looked at older age groups.) Young or old, vitamin D could help you prevent cancer.

    What Should I Do If I Don’t Live Near The Equator?

    If you don’t live near the equator, you may need to take additional steps to ensure you have good vitamin D levels. This can include eating high vitamin D foods (such as fatty fish and dairy) or taking a vitamin D supplement. You can also get a few extra rays of mid-day sunshine, as long as you don’t get sunburned.

    If you do live near the equator, you probably don’t need to worry quite as much. However, if you work indoors and rarely go outside, you may want to consider alternate ways to get your vitamin D.

    Regardless of where you live, you can ask your doctor to measure your vitamin D levels, or you can order an at-home vitamin D test. Previous research suggests that 40 ng/mL is a good minimum target for cancer prevention.

    The post The Anti-Cancer Effect of Vitamin D Is More Noticeable in Old Age appeared first on BMC Series blog.

    in BMC Series blog on July 05, 2021 08:18 AM.

    Faking Raw Data with an Iron Fist

    "you can rest your concerns. As you can see, we have not manipulated any images." - Dr Arati Ramesh

    in For Better Science on July 05, 2021 06:00 AM.

    Athletes And Art: The Week’s Best Psychology Links

    Our weekly round-up of the best psychology coverage from elsewhere on the web

    The Olympic Games begin in Tokyo later this month — but things are going to be very different from normal. How will the rules and restrictions surrounding the games affect athletes’ wellbeing? Jo Batey takes a look at The Conversation.

    Undark has a fascinating podcast this week about psychologists’ attempts to understand the minds of extremists.

    As pandemic restrictions continue to ease up, you will probably be having more face-to-face conversations than you’ve had in the past year-and-a-half. But it’s only natural to find things a bit awkward at first, Tara Well tells Alex Abad-Santos at Vox.

    We’re still in the early stages of understanding “long Covid”, the persistent symptoms some people experience after a coronavirus infection and which can include psychological problems such as anxiety and depression. Careful research is needed to help people with long Covid, writes Stuart Ritchie at Unherd, who warns that that these patients are in danger of being “pawns in our debates over pandemic policy”.

    There’s a cool video at The Verge about neuroscientist Simón(e) Sun, who is studying homeostatic plasticity and making music out of the recordings she takes from neurons.

    Researchers have developed an algorithm which can predict what kind of art people will like.  The programme analysed how the “low-level” features of art like colours and edges related to people’s judgements of the artwork. It could then determine to a high degree of accuracy whether they would like a new painting, reports Sarah Wells at Inverse.  

    Finally, all your questions about learning languages are answered in this post by linguist Natalie Braber at BBC Science Focus.

    Compiled by Matthew Warren (@MattBWarren), Editor of BPS Research Digest

    in The British Psychological Society - Research Digest on July 02, 2021 02:55 PM.

    Schneider Shorts 2.07.2021- Rethink Evolution!

    Schneider Shorts 2.07.2021: fraudulent COVID-19 quackery gets learned society approval, China colonizes Mars, CNRS explains the difference between plagiarism and "unacknowledged borrowings", with Botox against depression from Germany, Nasal Telomere Extension from Harvard and Eugenics Superhero Vaccines from Stanford, and Smut Clyde slandering Ukrainians behind my back.

    in For Better Science on July 02, 2021 05:00 AM.

    Call for Papers! Introducing BMC Ecology and Evolution’s New Collection: Environmental DNA and RNA in Ecology

    In the face of global change, greater knowledge of the planet’s past and present ecosystems is more important than ever to manage biodiversity loss and set conservation priorities. A deeper understanding of interactions among living things and their environment is also required to manage ecosystem services and prevent the spread of harmful pathogens worldwide. Fueled by recent technological advances, DNA and RNA from environmental samples (eDNA/eRNA) are increasingly being used to help meet this need for knowledge. The application of eDNA/eRNA methods helps us evaluate community composition and answer various questions regarding biotic interactions within and between ecosystems. Furthermore, eDNA/eRNA approaches can provide insights into direct and indirect biotic interactions and various evolutionary phenomena.

    Given the actual and potential impacts of eDNA and eRNA technology on the field of ecology, BMC Ecology and Evolution has launched a new Collection to bring together research using eDNA/eRNA approaches.

    Despite the many and growing strengths of eDNA/eRNA technology, the tool is often reported to face methodological and financial constraints. Such challenges include sample collection, sample degradation, inference of spatiotemporal dynamics and taxonomic assignment. Therefore, we also encourage submissions that address technological and economic challenges in applying eDNA and eRNA technology in ecology and conservation biology.

    Articles under consideration for publication within the collection will be assessed according to the standard BMC Ecology and Evolution editorial criteria and will undergo the journal’s standard peer-review process overseen by Editorial Board Member Associate Professor Luke Jacobus (Indiana University, USA), Associate Professor Cyprian Katongo (University of Zambia) and Associate Professor Luisa Orsini (University of Birmingham, UK). If accepted for publication, an article processing charge applies (with standard waiver policy).

    The collection is now open for submissions!

    To submit to the collection, please click here. Please state in your cover letter that your manuscript is for the “Applications of Environmental DNA and RNA in Ecology” collection. Before submitting your manuscript, please ensure you have carefully read the submission guidelines for BMC Ecology and Evolution. For pre-submission inquiries, please contact Jennifer Harman (jennifer.harman@springernature.com), the Editor of BMC Ecology and Evolution.

    To view the articles already published in this Collection, please visit our website: https://www.biomedcentral.com/collections/aedre

    The post Call for Papers! Introducing BMC Ecology and Evolution’s New Collection: Environmental DNA and RNA in Ecology appeared first on BMC Series blog.

    in BMC Series blog on July 01, 2021 04:07 PM.

    Can’t Buy Happiness? Research On Money, Digested

    By Emma Young

    Poverty can have long-lasting psychological effects. But for people who live above the poverty line, expectations about how much money we should have or need, as well as decisions about what to spend our money on and what to save for the future, can all affect psychological wellbeing, too. However, some well-worn ideas about this are being challenged, as we explore here.

    Is it true that money can’t buy you happiness?

    Received wisdom is that it can’t — at least, so long as your income already covers your basic needs plus a few conveniences, such as a car, perhaps. But according to a recent paper in PNAS, this is not correct. Matthew A. Killingsworth at the University of Pennsylvania analysed data from more than 33,000 employed adults in the US, who had been asked to report on their own wellbeing at random timepoints via a smartphone app. Contrary to the findings of some highly influential earlier work, the analysis of over 1.7 million reports found no evidence for a “wellbeing plateau” above an income level of US$75,000 a year. Instead, Killingsworth found that wellbeing rose with income, with incomes in this study ranging from US$15k a year to over US$480,000. “This suggests that higher incomes may still have potential to improve people’s day-to-day wellbeing,” even in wealthy countries, he writes.

    However, it’s worth stressing that the data shows that wellbeing increases by a similar amount every time income is doubled — so an increase of $30,000 to $60,000, for example, is associated with a much bigger rise in happiness than an increase of $120,000 to $150,000.
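    This “similar gain per doubling” pattern is the signature of a logarithmic relationship. As a rough illustration only — the wellbeing_gain function and the base-2 logarithm here are our own sketch, not Killingsworth’s actual model — gains can be compared in units of income doublings:

    ```python
    import math

    def wellbeing_gain(old_income, new_income):
        """Illustrative log model: equal wellbeing gains for equal
        proportional (not absolute) increases in income."""
        return math.log2(new_income / old_income)

    # The same $30,000 raise buys very different amounts of "doubling":
    print(wellbeing_gain(30_000, 60_000))    # a full doubling -> 1.0
    print(wellbeing_gain(120_000, 150_000))  # only a 1.25x rise -> about 0.32
    ```

    Under this sketch, each doubling of income adds the same fixed increment to wellbeing, which is why the $30k-to-$60k raise matters so much more than the $120k-to-$150k one.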

    What should you buy to maximise happiness?

    Not more things, according to most research — but there’s a caveat to this, which we’ll get to shortly.

    Certainly, there’s plenty of evidence that buying experiences rather than possessions makes for greater wellbeing. For example, in 2020, a team that included Killingsworth but which was led by Amit Kumar at the University of Texas, Austin reported a study of 2,635 US-based adults, who received regular texts during the day asking about their current emotions and any purchases. The researchers found that people were happier when spending on experiences, such as attending a sporting event or eating at a restaurant, than when buying goods that cost the same amount, such as jewellery or clothing.

    Another study, published in Social Psychological and Personality Science, reported that although most people say they would choose to have more money over more time, participants who chose time reported being happier (the participants’ household income and free time were taken into account in this analysis). The team also found that happier people are more likely to choose more time vs more money. But their analysis suggests that the effect works in both directions, with a prioritization of time over money and greater happiness boosting each other. (The participants in this study were thousands of Americans representing a range of ages, income levels and occupations.)

    However, there is also evidence that buying experiences and time really only makes you happier than buying objects if you’re already reasonably well-off, compared with those around you. As we reported in 2018, research (yet again in the US) has shown that less well-off people get just the same — if not more — happiness from buying objects.

    How does income disparity affect happiness?

    People living in areas where incomes are more similar tend to report greater wellbeing — and this holds not just for overall high-income regions, such as Scandinavian countries, but also for regions where money isn’t used much at all.

    There’s plenty of research finding that it’s not so much how much we earn (above a basic level) but how much we earn compared with those around us that affects wellbeing. In one recent study of this, a pair of researchers analysed decades’ worth of data from the US and also several western European countries, including the UK. They found that, in Europe especially, rising levels of income inequality were associated with higher levels of happiness — up to a critical point. Beyond that point, happiness dropped.

    The researchers think that limited inequality is encouraging — people see that some social mobility is possible and expect that they might achieve it themselves. However, when income inequality becomes too high, “more aspiring individuals may replace their upward mobility dream with despair and feel jealous of the rich”.

    “Too high” was notably higher for the US than for Europe. The researchers think this could be because even though there is lower social mobility and also greater income inequality in the US compared with western Europe, Americans are greater believers in the possibility of social mobility.

    One last note on income inequality: highlighting it can of course be important. Certainly, there’s work finding that visible reminders of inequality can make disadvantaged people more likely to want to do something about it.

    What about giving money away….

    Throughout human history and across cultures, humans have helped one another in times of need — that, at least, is the message from the influential Human Generosity Project. Anthropological studies of a wide range of communities suggest that we are generous by nature. Though this research has focused on generosity within communities, we are of course also motivated to give anonymously, in the form of charitable donations. Studies in this field have found that giving boosts happiness, and also that happier people give more, creating a virtuous spiral of increasing benefits.

    Other studies have investigated the factors that influence our decisions to give to charities. A 2019 paper in Nature Communications, which analysed millions of dollars of donations given via the GoFundMe Platform, found that donors gave significantly more to people who shared their surname. Also, men and women donated more at times when donors of the opposite sex were visible on the screen.

    That same year, we reported on a study finding that simple “moral nudges” encourage people to donate much more to charity. Nudging people to reflect on what was the morally “right thing” to do increased actual donations by close to half.

    ….And keeping hold of it?

    You really want to save for a deposit on a flat, or for your retirement — but that ridiculously expensive dress, or shirt, or holiday is just so appealing. Most of us have experienced feelings like this. It is much harder to put money away for the future than it is to spend it now. Finding ways to close the gap that we feel between our present and future selves should help, in theory. And a questionnaire that got participants in Portugal to think more about their own future ageing did prompt them to invest more in retirement funds, reports a 2018 study in the Journal of Applied Social Psychology.

    Other groups have looked at different practical ways to encourage people to save. In 2020, a team led by Hal Hershfield at UCLA reported a study of thousands of new users of a financial technology app. They found that suggesting smaller, more regular deposits vs larger, less regular ones encouraged less well-off people to save. In this US study, three times as many people in the highest, compared with the lowest, income bracket signed up to make a $150 deposit each month. When this was framed as $5 per day instead, the difference in participation was eliminated (even though the total savings for each individual were, of course, the same).
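    The equivalence behind that reframing is plain arithmetic; assuming a 30-day month (the study’s exact framing details aren’t reproduced here), the two offers commit the saver to the same total:

    ```python
    # A $5-a-day framing over a 30-day month equals one $150 monthly deposit.
    daily_framing_total = 5 * 30
    monthly_framing_total = 150
    print(daily_framing_total == monthly_framing_total)  # True
    ```

    The totals are identical; only the description changes, which is what makes the difference in sign-ups a framing effect rather than a financial one.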

    There’s also evidence that some personality traits put you at greater risk of financial hardship and even bankruptcy. Perhaps surprisingly, one of these traits is agreeableness. The reason, according to the team behind this 2018 report, is that agreeable people value money less, and so are more likely to mismanage their own. “The relationship was much stronger for lower-income individuals, who don’t have the financial means to compensate for the detrimental impact of their agreeable personality,” commented co-author Joe Gladstone at UCL.

    This article also appears in the summer issue of The Psychologist magazine.

    Emma Young (@EmmaELYoung) is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on July 01, 2021 09:53 AM.

    How societies and journals can engage Chinese authors through webinars

    4 lessons we learned in adapting our outreach to the new normal

    in Elsevier Connect on July 01, 2021 12:00 AM.

  •

    We Find It Hard To Identify The Emotions Of Intense Screams And Moans

    By Emily Reynolds

    Facial expressions can be hard to read — and not just when someone is experiencing a mild emotion or feels ambivalent. Research has suggested that when we witness someone in the throes of a particularly acute emotional state, like intense joy or pain, we find it hard to pinpoint exactly what they’re feeling.

    A new study looks at a similar phenomenon, this time focusing on vocalisations such as laughter, cries, screams and moans. Writing in Scientific Reports, Natalie Holz and colleagues from the Max Planck Institute for Empirical Aesthetics find that our ability to identify emotions increases as vocalisations become more intense — but only to a point. When these sounds reach peak intensity, we find it surprisingly hard to classify them.

    Participants were split into three groups and assigned different tasks; each involved listening to non-verbal vocalisations representing different emotions and intensities. The first group focused on categorising emotions, assigning one of seven possible affective states to the sounds: anger, fear, pain, achievement, positive surprise, sexual pleasure, or none of the above. Next, participants indicated how intense they felt the emotion was, and how authentic.

    The second group took part in an emotion rating task, indicating how clearly they could perceive a specified emotion in the vocalisations. And the third group rated the sounds according to their valence (how positive or negative the sound was) and arousal (how calm or intense it was). This group also indicated how authentic they felt each sound was.

    In the emotion categorisation task, participants were largely able to correctly classify different emotions. Participants were also able to successfully identify how intense an emotion was.

    However, there appeared to be an intensity “sweet spot” at which participants were most accurate in their classifications. Participants were able to judge emotions more accurately when they were of moderate or strong intensity compared to low intensity — but after this, when the vocalisations reached “peak” intensity, accuracy decreased again.

    Participants also showed some confusion when it came to categorising sounds as negative or positive. Although they heard an equal number of sounds of each valence, participants often miscategorised positive expressions as negative, particularly at high intensities.

    Overall, then, the results show that while participants were able to correctly identify intensity and arousal across the sounds, working out the exact nature and emotion expressed became more difficult as intensity reached its peak. This might be the case because of the “relevance” of certain vocalisations. If someone is expressing something at peak intensity, lead author Holz suggests, “the most vital job might be to detect big events and to assess relevance. A more fine-grained evaluation of affective meaning may be secondary”.

    In other words, extremely emotionally intense sounds indicate something is happening that requires our attention: what it is exactly, in that moment, may be less important and can come later on.

    Previous studies have pinpointed clear, objective markers in intense facial expressions that in theory should allow us to determine whether they indicate positive or negative valence. In the moment, however, we frequently fail to do so. Further research could explore the cues that exist in intense emotional sounds, giving an idea of why — and how — we mix up their meaning.

    The paradoxical role of emotional intensity in the perception of vocal affect

    Emily Reynolds is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on June 30, 2021 02:07 PM.

  •

    Requiem for Celixir

    How the Nobel Prize winner Sir Martin Evans and the lying crook Ajan Reginald almost succeeded, and would have, were it not for Patricia Murray.

    in For Better Science on June 30, 2021 01:16 PM.

  •

    The rs-FC fMRI Law of Attraction (i.e., Resting-State Functional Connectivity of Speed Dating Choice)

    Feeling starved for affection after 15 months of pandemic-mandated social distancing? Ready to look for a suitable romantic partner by attending an in-person speed dating event? Just recline inside this noisy tube for 10 minutes, think about anything you like, and our algorithm will Predict [the] Compatibility of a Female-Male Relationship!

    This new study by Kajimura and colleagues garnered a lot of attention on Twitter, where it was publicized by @INM7_ISN (Simon Eickhoff) and @Neuro_Skeptic. The prevailing sentiment was not favorable (check the replies)...



    Full disclosure: I was immediately biased against the claims made in this study...

    This research emphasizes the utility of neural information to predict complex phenomena in a social environment that behavioral measures alone cannot predict.

    ...and have covered earlier attempts at linking speed dating choice to a proxy of neural activity. But I wanted to be fair and see what the authors did, since their results reflect an enormous amount of work.

    Here I will argue that a 10 minute brain scan cannot predict who you will choose at a speed dating event. The resultant measures are even further away from identifying a compatible mate for you, since only 5% of speed dating interactions result in a relationship of any sort (6% for sexual relationships and 4% for romantic relationships, according to one study).

    I was flabbergasted that anyone would think a “resting” state MRI scan (looking at “+” for 10 min) and its resulting pattern of correlated BOLD signal fluctuations would reflect a level of superficial desirability that can be detected by a potential mate at greater than chance level. Another disclosure: this is far from my field of expertise. So I searched the literature. Apparently, “patterns of functional brain activity during rest encode latent similarities (e.g., in terms of how people think and behave) that are associated with friendship” (Hyon et al., 2020). However, that study was conducted in a small town in South Korea (total population 860), allowing a detailed social network analysis. Plus, people knew each other well and experienced many of the same day to day events, which could shape their functional connectomes. Not exactly relevant for predicting strangers' speed dating choices, eh?

    Another paper identified a “global personality network” based on data from 984 participants in the Human Connectome Project (Liu et al., 2019). The sample was large enough to support a training set of n=801 and a “hold-out” dataset (n=183) for validation purposes. The results supported the authors' “similar brain, similar personality” hypothesis. But in the dating world, how much do “similars” attract (compared to the popular saying, “opposites attract”)? Well why not construct (dis)similarity profiles between potential pairs by taking the absolute value of differences in functional connectivity (FC), and combine those with values of similarities in FC? Does that make sense?? And in order to arrive at this metric, there's a whole lot of machine learning (but with much smaller training sets)...
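    To make that construction concrete, here is a minimal sketch in which the dissimilarity features are element-wise absolute differences between two functional connectivity vectors, and similarity is summarized by a Pearson correlation. This is my own plausible reading of the approach, not the authors' code; the function name and exact feature definitions are illustrative.

```python
import numpy as np

def pair_features(fc_a, fc_b):
    """Illustrative (dis)similarity profile for one candidate pair.

    fc_a, fc_b: 1-D functional connectivity vectors, one entry per
    region pair. A plausible reading of the paper, not the authors' code.
    """
    fc_a, fc_b = np.asarray(fc_a, float), np.asarray(fc_b, float)
    dissimilarity = np.abs(fc_a - fc_b)          # one feature per connection
    similarity = np.corrcoef(fc_a, fc_b)[0, 1]   # overall profile similarity
    return dissimilarity, similarity
```

    Identical profiles give zero dissimilarity everywhere and a similarity of 1; the open question is whether any of this tracks mate choice.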

    Identity Classification 

    A separate sample of 44 individuals from the Human Connectome Project was used to construct the Similarity of Connectivity Pattern between pairs (Kajimura et al., 2021). These 44 participants had each been scanned twice, allowing 44 self-self pairs (Jessica at time 1 vs. Jessica at time 2), which were compared to 44 self-other pairs (Jessica at time 1 vs. Jennifer at time 2). Self-self “feature values” always show a positive correlation, and these were used to define “individual-specific information.”

    26,680 feature values?

    To start, 116 regions of interest (ROIs) were defined by Automated Anatomical Labeling (AAL). Pairwise comparisons of these for Self scan #1 vs. Self scan #2 (or vs. Other scan #2) resulted in a vector of 6,670 functional connectivities for each data point [(116 × 115)/2]. Then multiply this by four (!!) and you get 26,680 values fed into a machine learning classifier. Why four? Because the slow fluctuating BOLD signals were decomposed into four frequency bands for the classification procedure. Was this necessary? Does it add robustness, or merely more opportunities for false positive results?
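    The arithmetic behind that feature count, spelled out with the ROI and band numbers as reported:

```python
# Feature-count check using the numbers reported in the paper
n_rois = 116                                  # AAL atlas regions
n_bands = 4                                   # BOLD frequency bands
n_connections = n_rois * (n_rois - 1) // 2    # unique ROI pairs
print(n_connections)                          # 6670
print(n_connections * n_bands)                # 26680
```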

    Fig. 3 (Kajimura et al., 2021). Top 100 feature values, i.e. absolute values of differences between functional connectivity that contributed to identity classification for three frequency bands [the fourth was eliminated because the classifier could not distinguish between self-self and self-other pairs].

    The machine learning algorithm was sparse logistic regression with elastic-net regularization (SLR-EN), which usually prevents overfitting, but I don't know if the algorithm can overcome 26,680 feature values with only 44 subjects. Maybe I'm misunderstanding (and others can correct me if I'm wrong), but the number of participants is rather low for SLR-EN given the number of input parameters? Then...
    The classification accuracy was evaluated using a stratified k-fold cross-validation procedure. ... The ratio of the number of correctly classified labels was then obtained as the classification accuracy.
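    For readers who want to poke at the p >> n concern themselves, here is a rough stand-in using scikit-learn's elastic-net-penalized logistic regression with stratified k-fold cross-validation. This is not the authors' SLR-EN implementation; the sample sizes mimic the 44 + 44 pairs, and every hyperparameter is illustrative. On pure noise like this, cross-validated accuracy should hover around chance.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(88, 6670))   # 44 self-self + 44 self-other "pairs"
y = np.repeat([1, 0], 44)         # 1 = self-self, 0 = self-other

# The L1 part of the elastic-net penalty drives most of the 6,670
# coefficients to zero, the usual defence against overfitting when
# features vastly outnumber samples.
clf = LogisticRegression(penalty="elasticnet", solver="saga",
                         l1_ratio=0.5, C=0.1, max_iter=1000)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(scores.mean())
```

    With real self-self structure in X, accuracy climbs above chance; the question is how far, and how stably, with only 44 subjects.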

    The regional results are below, showing a 7 x 7 brain network matrix with similarity in red (positive coefficients) and dissimilarity in blue (negative coefficients). We're still in the realm of correctly classifying self-self, so dissimilarities were considered artifacts of overfitting [but similarities were not?]. If the contribution from similar > dissimilar with binomial tests, this was considered an indicator of self. This was true of F1 (53 out of 67, p<.001) and F2 (52 out of 67, p<.001), but not F3, which was at chance (33 out of 67).

    Fig. 4 (modified from Kajimura et al., 2021). Ratio of self-self classification connectivity in terms of brain networks. Red and blue matrices display the results of similarity- and dissimilarity-based contributions [at three frequency bands]. ... Vis, visual network; Som, somatosensory-motor network; Sal, salience network; Lim, limbic system; Con, executive control network; Def, default mode network; Cer, cerebellum.
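    The binomial tests above are easy to replicate with scipy's binomtest; a two-sided test against a 50/50 split (my assumption about the exact test settings) reproduces the reported pattern:

```python
from scipy.stats import binomtest

# Similar-contribution counts out of 67 significant connections per band
# (counts from the paper; two-sided test vs p = 0.5 is my assumption)
for band, k in [("F1", 53), ("F2", 52), ("F3", 33)]:
    result = binomtest(k, n=67, p=0.5)
    print(band, f"{result.pvalue:.2g}")
```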

    Separate Statistical Analysis — a bevy of Pearsons 

    Before we turn to speed dating, two more analyses are shown below for the identity classification study. The first involved a boatload of FDR-corrected Pearson’s correlations of the functional connectivity vectors for self–self pairs vs. self–other pairs (Fig 2A). The next shows the effectiveness of the machine learning (ML) algorithm in classifying these pairs (Fig 2B).


    Fig. 2 (modified from Kajimura et al., 2021). Identity classification. (A) Similarities in overall functional connectivity profile was significantly higher for the self–self pair (dark-colored distribution) than the self–other pair (light-colored distribution) for all frequency bands. [I've included arrows to point out where they start to diverge] (B) Distribution of differences between ML classification accuracy.
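    To make "FDR-corrected Pearson's correlations" concrete, here is a hand-rolled Benjamini-Hochberg pass over toy correlation p-values. The vector sizes and data are illustrative stand-ins, not the study's.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
# Toy stand-ins: 44 pairwise comparisons of 200-entry FC vectors
# (real vectors have 6,670 entries; small sizes keep the demo fast)
pvals = np.array([pearsonr(rng.normal(size=200), rng.normal(size=200))[1]
                  for _ in range(44)])

# Benjamini-Hochberg: reject the k smallest p-values, where k is the
# largest i such that p_(i) <= (i / m) * alpha
alpha, m = 0.05, len(pvals)
p_sorted = np.sort(pvals)
passed = p_sorted <= alpha * np.arange(1, m + 1) / m
n_reject = int(passed.nonzero()[0].max() + 1) if passed.any() else 0
print(n_reject)   # typically 0 on random noise
```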

    As the authors predicted, self-self comparisons yielded more similar connectivities than self-other pairs. The ML algorithm identified self for three of the frequency bands (F1-F3) at greater than chance levels (12.4%, 14.8%, and 16.3% better than chance, respectively). However, the algorithm is still wrong a lot of the time. This is especially important for the matchmaking study...

    Speed Dating

    The authors provided a nice self-explanatory graphic presenting an overview of the Speed Dating study (click on image for a larger view). Data collection and analysis followed the flow of the Identity experiment.

    Participants and Social Event

    The participants were 42 heterosexual young adults (20-23 yrs), with 20 females and 22 males. Why these numbers were not perfectly matched, I do not know. The resting-state fMRI scan took place several days before the first speed dating session. [I'm assuming it was the first, because the Methods say there were three speed dating events. There was also a post-dating scan, which was described in another paper]. The three-hour event was held in a large room where pairs of participants had 3-minute conversations with every member of the opposite sex. After each conversation, all the men moved to the next table. When all the speed dates were over, each person was asked to identify at least half of the opposite sex individuals they'd like to chat with again.

    Well, there's a problem here — a requirement to select at least half the dates could result in less-than-optimal choices for some individuals. This requirement was necessary for sampling purposes, but it makes you wonder about the quality of the matches. Also, there was a strong possibility of unilateral matches — one individual thinks they've found their dream partner, but the feeling is not reciprocated. When both members of a pair said "yes", they were considered compatible. Out of a total of 440 possible pairs, 158 were compatible and 282 incompatible.
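    The pair arithmetic is easy to verify: every one of the 20 women met every one of the 22 men.

```python
# Round-robin pairing: each woman converses with each man once
n_females, n_males = 20, 22
total_pairs = n_females * n_males
print(total_pairs)        # 440
print(158 + 282)          # compatible + incompatible pairs, also 440
```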

    The Compatible vs. Incompatible comparisons are the key findings of the study (Fig. 5, with A and B panels as above). Unlike the Identity comparison, compatible male-female pairs did not show more similar functional connectivity patterns than incompatible pairs (Fig. 5A).

    Well then...

    “This indicates that the compatibility of female–male relationships is not necessarily represented by the similarity of functional connectivity patterns.”


    “Unlike identity classification, compatibility classification was supported by the considerable negative coefficients of the features” (shown in Fig. 6 of the paper). We shall not interpret this as opposites attract


    Fig. 5 (Kajimura et al., 2021). Compatibility classification. (A) Similarity of overall functional connectivity profile. There was no significant difference between compatible (dark-colored distribution) and incompatible (light-colored distribution) pairs. (B) Distribution of differences between the classification accuracy with true labels of pairs and that with a randomized label for each frequency band. Vertical lines indicate chance levels.

    Fig. 5B shows classification accuracy for compatible pairs, which was above chance for F1 and F2. Before investing in a commercial venture, however, you should know that the benefit beyond guessing is only 5.47% and 4.95%, respectively. Thus, I disagree with the claim that...
    ...the current results indicate that resting-state functional connectivity has information about behavioral tendencies that two individuals actually exhibit during a dyadic interaction, which cannot be measured by self-report methods and thus may remain hidden unless we use neuroimaging methods concurrently.
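    The chance levels (the vertical lines in Fig. 5B) come from re-running classification with randomized labels; stripped of the classifier, that permutation logic reduces to a sketch like this one. The "predictions" here are a dummy stand-in that simply mirrors the 158/282 class split, not the paper's classifier output.

```python
import numpy as np

def permutation_null(y_true, y_pred, n_perm=2000, seed=0):
    """Accuracy under shuffled labels: the 'guessing' benchmark."""
    rng = np.random.default_rng(seed)
    y_true = np.asarray(y_true)
    return np.array([np.mean(rng.permutation(y_true) == y_pred)
                     for _ in range(n_perm)])

# 158 compatible vs 282 incompatible pairs; dummy predictions that
# mirror the class split give a null centred near p1^2 + p0^2 ~ 0.54,
# the baseline the reported accuracies beat by only ~5%.
y = np.repeat([1, 0], [158, 282])
null = permutation_null(y, y)
print(round(null.mean(), 2))
```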

    To review the potential limitations of the study: we can't assess the quality of matches (meh vs. enthusiastic), and we don't know what the participants were thinking about, or their mental state, during their rsfMRI scan (see Gonzalez-Castillo et al., 2021). Although rs-FC fMRI is often considered a “stable trait”, state factors and motion artifacts can affect the results on a given day (Geerligs et al., 2015). Indeed, ~35% of the time, the present paper was unsuccessful in classifying the same person run on two different days (and that's excluding one of four frequencies that was not above chance).

    Is there something intrinsic encoded in BOLD signal fluctuations that can predict who we will find appealing (and a potential “match”) after a three-minute interaction? Decisions at speed dating events are mostly based on physical attractiveness, so it seems very implausible to me.

    Further Reading (the Speed Dating Collection)

    The Neuroscience of Speed Dating Choice

    The Electroencephalogram Cocktail Party

    EEG Speed Dating

    The Journal of Speed Dating Studies

    Winner of Best Title

    How I Meditated with Your Mother: Speed Dating at Temples and Shrines in Contemporary Japan



    Geerligs L, Rubinov M, Henson RN. (2015). State and trait components of functional connectivity: individual differences vary with mental state. Journal of Neuroscience 35(41):13949-61.

    Gonzalez-Castillo J, Kam JW, Hoy CW, Bandettini PA. (2021). How to Interpret Resting-State fMRI: Ask Your Participants. Journal of Neuroscience 41(6):1130-41.

    Hyon R, Youm Y, Kim J, Chey J, Kwak S, Parkinson C. (2020). Similarity in functional brain connectivity at rest predicts interpersonal closeness in the social network of an entire village. Proceedings of the National Academy of Sciences 117(52):33149-60.


    in The Neurocritic on June 30, 2021 04:49 AM.

  •

    Research collaborations – how can we balance national and global interests?

    That was the question posed to 3 academic thought leaders at the Pan-European conference. Here’s what they had to say

    in Elsevier Connect on June 30, 2021 12:00 AM.

  •

    We Feel Happier When Behaving More Extraverted Than Normal

    By Emily Reynolds

    Research has shown many benefits to extraversion. One 2019 study on personality traits in the workplace found that extraverts are more motivated, experience more positive emotions, work harder and have fewer adverse experiences at work, while another found that extraversion was associated with more creative thinking.

    If you’re not naturally extraverted, however, these wellbeing benefits are not necessarily out of reach. One intervention suggested that acting like an extravert could bring the benefits of natural extraversion, while another generated similar findings a year later. However, some of this work also suggests that for people who are particularly introverted, acting like an extravert could be exhausting and actually produce negative emotions.

    A new study, published in Personality and Social Psychology Bulletin, looks in more detail at what happens when we deviate from our “baseline” levels of extraversion. The team finds that higher-than-normal levels of extraversion-related behaviours are associated with more positive feelings — even for those who aren’t extraverted to begin with.

    At the start of the first study, 92 participants completed a measure of trait extraversion (i.e. baseline levels of extraversion). Over the next four weeks, they then completed survey questions five times a day related to state extraversion (i.e. in-the-moment extraversion-related feelings and behaviours such as being talkative or energetic) and positive feelings, which was measured with a single question, “How are you feeling right now?”.

    The results showed that during weeks in which participants had behaved in a more introverted way than they usually do — i.e. when their state extraversion that week was lower than the average state extraversion across the entire time period — they experienced lower levels of positive feeling. But when they behaved more extraverted than usual, even when their average level of extraversion was not high, they had higher levels of positive feeling. This suggests that behaving in an extraverted way may increase feelings of wellbeing.

    The second study replicated the first — only this time, positive affect was explored in more depth: participants indicated how much they agreed with numerous statements about how they felt in the moment (for example, “at this moment, I feel inspired”) rather than simply answering one question. (This study also took place over a shorter time period, with the researchers comparing responses during two three-day periods rather than across several weeks). Again, participants reported lower levels of positive affect during periods in which they had behaved in a more introverted fashion, while in more extraverted periods they experienced higher levels — although these effects were only trends which did not reach statistical significance.

    In these studies, behaving in a more extraverted manner than normal did not seem to have negative impacts, even for the more introverted participants, in the longer term. However, other work has found less clear-cut benefits: even if introverts experienced momentary gains in positive affect these didn’t last, and there were other troublesome impacts including fatigue and negative emotion. This may be because those previous studies had participants “act like an extravert” or behave in a way that felt unnatural to them, while in the new work the researchers simply looked at the changes in people’s behaviour on a day-to-day basis.

    As the team acknowledges, the study can’t determine the direction of causality between extraversion and positive feelings. Rather than extraverted behaviour increasing positive feelings, could it be that positive affect induces more extraverted behaviour? Future work could look more closely at the direction of this relationship.

    If the results have made you want to boost your trait extraversion, it may not be so simple — changing your personality altogether appears to be a bit more tricky than just behaving in an extraverted way a few times a week. If you succeed in tasks designed to make you behave consistently with a particular trait, one study found, change can indeed occur. But trying and failing — which may well happen to introverts attempting to become more extraverted — can have the opposite effect, making people even less likely to embody certain traits.

    Do You Feel Better When You Behave More Extraverted Than You Are? The Relationship Between Cumulative Counterdispositional Extraversion and Positive Feelings

    Emily Reynolds is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on June 29, 2021 03:33 PM.

  •

    Bedtime Music Can Disrupt Your Sleep By Triggering Earworms

    By Emma Young

    Do you listen to quiet music to help you wind down before sleep? If you do, you’re following the advice of all kinds of organisations, including the US National Institutes of Health and the National Sleep Foundation. However, this advice could be counter-productive, according to a new study by Michael K. Scullin and colleagues at Baylor University. The work, published in Psychological Medicine, found that bedtime music was associated with more sleep disruptions — and that instrumental music is even worse than music with lyrics.

    In the first study, 199 online participants living in the US reported on their sleep quality and music listening frequency and timing, as well as their beliefs about how this affected their sleep. Almost all — 87% — believed that music improves sleep, or at least does not disrupt it. However, the team found that more overall time spent listening to music was associated with poorer sleep and daytime sleepiness. Just over three quarters of the participants also reported experiencing frequent “earworms” — having a song or tune “stuck” and replaying in their minds. A quarter reported experiencing these during the night at least once per week, and these people were (unsurprisingly) six times as likely to report poor sleep quality. The team’s analysis suggested that listening specifically to instrumental music near bedtime was linked to more sleep-related earworms and poorer sleep quality.

    The team then ran an experimental study on 48 young adults. After arriving at the sleep lab at 8.45pm, participants went to a quiet, dimly lit bedroom, where they completed a host of questionnaires that included measures of stress, sleep quality and daytime sleepiness. They also had electrodes applied, ready for the night-time polysomnography (which recorded their brain wave activity, as well as heart rate and breathing), and reported on how relaxed, nervous, energetic, sleepy and stressed they felt.

    At 10pm, they were given some “downtime”, with quiet music playing. Half were randomised to hear three songs: “Don’t Stop Believin’’’ by Journey, “Call Me Maybe” by Carly Rae Jepsen and “Shake It Off” by Taylor Swift, while the other half heard instrumental-only versions of these same songs. (The team chose these songs because they are known to cause earworms and were likely to be very familiar to the participants.)

    Participants reported decreases in stress and nervousness and increased relaxation after listening to either set of songs, and also showed decreases in blood pressure. So — as earlier studies have also suggested — quiet music at bedtime was indeed relaxing at the time. However, a quarter of the participants woke from sleep with an earworm, and the polysomnography data showed that instrumental versions of the songs were more likely to trigger these awakenings as well as to cause other sleep disruptions, such as shifts from deeper sleep to lighter sleep. Taken together, the findings represent “causal evidence for bedtime instrumental music affecting sleep quality via inducing earworms,” the team writes.

    The EEG data showed that the participants who woke up with an earworm had significantly greater “frontal slow oscillations” — a classic signature of memory consolidation during sleep. Earworm-associated slow oscillations were also seen in the auditory cortex, which processes sounds. Earworm awakenings seem, then, to result from the reactivation during sleep of melodies heard during the day, as part of the memory consolidation process. The team’s overall data suggests that this is more likely to happen when the melodies are heard around bedtime, and are instrumental.

    Why instrumental-only songs should have a bigger impact than music with lyrics isn’t clear. The three songs used in this study were chosen because they were likely to be familiar. Hearing them without the lyrics might have prompted the participants’ brains to try to add the words, which might have made earworms more likely. If this is the case, not all instrumental music may have the same effect. However, the data from the first study is consistent with the idea that instrumental music generally is more of a problem.

    This work has practical implications, of course. “Just because music listening is enjoyable does not mean that more music is always better for health outcomes,” the team writes.

    And for anyone who finds themselves struggling with earworms at night, the researchers have a few recommendations: limit how much music you listen to during the day, and avoid listening to music before bed. Instead, perhaps spend 5-10 minutes writing out a to-do list for the next day, as earlier work involving Scullin has found that this helps people to get to sleep. If you do struggle with sleep, you might also want to look into digital interventions, or even a rocking bed.

    Bedtime Music, Involuntary Musical Imagery, and Sleep

    Emma Young (@EmmaELYoung) is a staff writer at BPS Research Digest

    in The British Psychological Society - Research Digest on June 28, 2021 02:00 PM.

  •

    Nobody Expects the Spanish Inquisition!

    "Recently we realized that some images were used wrongly in the paper, so I want to retract this article. The key message of the paper is very solid and results have been reproduced independently in many laboratories, but I find unacceptable the wrong use of some images during figure preparation" - Pedro L Rodriguez

    in For Better Science on June 28, 2021 06:00 AM.

  •

    Tackling bias by rethinking human anatomy education

    Gender and racial bias is pervasive in human anatomy education, but how should we address it? 3D4Medical’s Chief Design Officer asks participants at Women in Tech event

    in Elsevier Connect on June 28, 2021 12:00 AM.

  •

    Highlights of the BMC Series – May 2021

    BMC Genomic Data: De novo genome assembly and analysis unveil biosynthetic and metabolic potentials of Pseudomonas fragi A13BB

    Akansha Jain, Sampa Das, CC BY 4.0 via Wikimedia Commons

    Pseudomonas fragi A13BB, recovered during a study to identify novel antibiotic-producing bacterial strains in soil samples collected from the rhizosphere, has been sequenced for antibiotic-producing gene clusters; the genome is published as the first Data Note in BMC Genomic Data. Plant growth-promoting rhizobacteria (PGPR) are one component of the rhizosphere, where they promote plant growth by enhancing uptake of nutrients and inorganic elements, or by increasing resistance to heavy metals, high salt concentrations, phytopathogens and more.

    Two β-lactone secondary metabolite biosynthetic gene clusters (smBGCs), a class known for antibiotic, anti-obesity and anticancer properties, were identified, and both demonstrated low homology (20%) to known smBGCs. A siderophore (a small compound secreted usually to transfer iron across cell membranes) smBGC was also identified, even though P. fragi is considered a non-siderophore-producing member of the genus Pseudomonas.


    BMC Nutrition: Did imports of sweetened beverages to Pacific Island countries increase between 2000 and 2015?

    דוד שי, CC BY-SA 3.0 via Wikimedia Commons

    This study set out to identify trends in the variety and amount of sweetened beverages (SB) imported into Pacific Island countries, and the impact of SB taxes on imports in Fiji and Tonga. Nutrition-related chronic diseases are a major cause of death and sickness in these countries.

    The authors found an increase in SB imports into Pacific Island countries between these dates, with detrimental effects on diet and self-sufficiency whose health implications need to be considered by both exporting and importing countries; diets high in processed foods are well known to affect health. The increase in SB imports to Tonga was statistically significant, but there were too few data to assess the effect of the taxes.


    BMC Psychiatry: A qualitative study of experiences of NHS mental healthcare workers during the Covid-19 pandemic

    Murat Karabulut via Wikimedia Commons

    Thirty-five members of NHS secondary mental health services in England took part in an interview-based study, which found that they faced difficulties that impacted significantly on their well-being during the pandemic.

    The lockdown in England in response to the pandemic caused by SARS-CoV-2 affected all healthcare workers, but the impact on mental healthcare workers has been under-reported. Respondents' quality of life was affected by the challenge of providing care in constrained and potentially dangerous situations, and by losing patients to the disease. Taking on additional tasks to provide a better service for their patients led to additional strain. Feelings of grief, helplessness, isolation, distress, and burnout were all reported. The response to the pandemic meant that services had to rapidly adapt to major incident mode at the same time as the needs of service users increased. Some respondents were left with moral injury after exposure to these events.

    This study has pointed to areas where additional support could be targeted. One implication, not explicitly stated, is the potential loss of staff.


    BMC Medical Ethics: Cultures and cures: neurodiversity and brain organoids

    Cells from donors with autism spectrum disorder (ASD) can now be grown into brain organoids, and the resulting research is opening up our understanding of the aetiology of ASD. Brain organoids allow exploration of the genetic, developmental and other factors involved, which makes this approach unique among forms of research. Researchers often incorporate the medical model of disability into ASD research.

    The neurodiversity movement is likely to disagree with the approaches and aims of cerebral organoid research in ASD, as the developmental disability movement and its paradigm understand autism as a form of human diversity and advocate a model based on social, attitudinal, and environmental barriers rather than on deficits.

    NIH Image Gallery from Bethesda, Maryland, USA, via Wikimedia Commons

    The authors give three recommendations that should minimise any conflict and achieve a more holistic, inclusive approach to cerebral organoid research on ASD.

    1 – neurodiverse individuals should be included as co-creators in both the scientific process and research communication.

    2 – clinicians and neurodiverse communities should have open and respectful communication.

    3 – a continual reconceptualization of illness, impairment, disability, behavior, and person should be achieved.

    Researchers are advised to take care not to conflate disease with impairment or disability, and to consider how best to communicate their ideas with respect and care.


    BMC Endocrine Disorders: The effects of vitamin and mineral supplementation on women with gestational diabetes mellitus

    Gestational diabetes mellitus (GDM) is defined as impaired glucose tolerance with onset, or first recognition, during pregnancy. It is fairly prevalent, with figures of 15-20% reported globally. Hyperglycaemia is associated with adverse outcomes for mother and baby.

    Towle Neu, CC BY 2.5 via Wikimedia Commons

    Vitamin and mineral supplements such as vitamin D, vitamin E, magnesium, and selenium may regulate glucose metabolism and have beneficial anti-inflammatory and anti-oxidative roles. Do they help in GDM? The effects of vitamin and mineral supplementation on women with GDM have not been well established, but this meta-analysis aimed to rectify that.

    The authors found that vitamin and mineral supplementation (magnesium, zinc, selenium, calcium, and vitamins D and E, alone or in combination) significantly improved glycemic control and lessened inflammation and oxidative stress in women with GDM, decreasing fasting plasma glucose, insulin, and related markers while increasing total antioxidant capacity.

    The post Highlights of the BMC Series – May 2021 appeared first on BMC Series blog.

    in BMC Series blog on June 25, 2021 08:10 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    5 crucial lessons for gender equality and the SDGs

    At Gender Summit, experts use data analysis and research mapping to reveal the best ways to support gender equality — and why current approaches are falling short

    in Elsevier Connect on June 25, 2021 12:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    An easier way to submit research at PLOS

    We’re always evolving our services to make sharing discoveries as easy as possible for researchers. To help authors save time and effort in the article submission process, all PLOS journals offer format-free initial submission for authors submitting research articles. Learn more about this service before you get started on your next submission.

    What is Format-Free Submission?

    Format-free submission is part of our 3-step submission process which includes:

    1. Initial submission. Participating journals waive all formatting requirements and will request formatting changes only after initial editorial evaluation or initial peer review, depending on the journal. Simply upload your manuscript in a single PDF file (which can include text and figures) and submit to the journal’s online submission system. 
    2. Editorial assessment. Our editors and reviewers review and provide initial feedback on your submission as quickly as possible.
    3. Full submission. Make any additional formatting changes only after initial feedback is received. 

    Together, these 3 steps are designed to help ease initial submission so you spend as little time in the article submission process as possible. 

    Getting started

    To ensure your format-free submission is complete and gets handled as quickly as possible, be sure to review the submission checklists and guidelines provided by our journal editorial teams. Click a journal name below to learn more:


    PLOS Biology

    PLOS Medicine

    PLOS Neglected Tropical Diseases

    PLOS Pathogens

    PLOS Genetics

    PLOS Computational Biology

    PLOS Digital Health

    PLOS Sustainability and Transformation

    PLOS Water

    PLOS Climate

    PLOS Global Public Health

    The post An easier way to submit research at PLOS appeared first on The Official PLOS Blog.

    in The Official PLOS Blog on June 24, 2021 08:11 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Whodunit in Bulgaria

    A microbiology institute in Sofia is investigating a string of problematic papers on arthritis. Lead author Nina Ivanovska: "I consider myself the major culprit". She might be right.

    in For Better Science on June 24, 2021 06:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Lessons learned in developing pharma’s go-to drug safety platform

    As PharmaPendium turns 15, we reflect on our collaborations with pharma companies, regulatory agencies – and the researchers who use it

    in Elsevier Connect on June 24, 2021 12:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    A Spiking Neural Program for Sensorimotor Control during Foraging in Flying Insects

    This week on Journal Club session Shavika Rastogi will talk about a paper "A Spiking Neural Program for Sensorimotor Control during Foraging in Flying Insects".

    Foraging is a vital behavioral task for living organisms. Behavioral strategies and abstract mathematical models thereof have been described in detail for various species. To explore the link between underlying neural circuits and computational principles, we present how a biologically detailed neural circuit model of the insect mushroom body implements sensory processing, learning, and motor control. We focus on cast and surge strategies employed by flying insects when foraging within turbulent odor plumes. Using a spike-based plasticity rule, the model rapidly learns to associate individual olfactory sensory cues paired with food in a classical conditioning paradigm. We show that, without retraining, the system dynamically recalls memories to detect relevant cues in complex sensory scenes. Accumulation of this sensory evidence on short time scales generates cast-and-surge motor commands. Our generic systems approach predicts that population sparseness facilitates learning, while temporal sparseness is required for dynamic memory recall and precise behavioral control. Our work successfully combines biological computational principles with spike-based machine learning. It shows how knowledge transfer from static to arbitrarily complex dynamic conditions can be achieved by foraging insects and may serve as inspiration for agent-based machine learning.
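
    Stripped to its essentials, the cast-and-surge logic amounts to evidence accumulation over odor encounters. A toy sketch of that control loop (illustrative parameters, not the paper's spiking mushroom-body model):

    ```python
    # Toy cast-and-surge controller: a leaky accumulator of odor-cue
    # evidence drives the choice between surging upwind (cue present)
    # and casting crosswind (cue lost).

    def cast_and_surge(odor_hits, gain=1.0, leak=0.5, threshold=0.8):
        """Return one motor command per time step from binary odor detections."""
        evidence = 0.0
        commands = []
        for hit in odor_hits:
            evidence = leak * evidence + gain * (1.0 if hit else 0.0)
            commands.append("surge" if evidence >= threshold else "cast")
        return commands

    commands = cast_and_surge([1, 1, 0, 0, 0, 1])
    ```

    With the leak set below the threshold, a single missed detection is tolerated, but a longer gap in the plume decays the evidence and switches the insect back to casting.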


    Date: 2021/06/25
    Time: 14:00
    Location: online

    in UH Biocomputation group on June 23, 2021 10:28 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Accomplishing career transitions in STEM with the help of scientific societies

    Greater inclusion of individuals from underrepresented demographics in STEM fields has long been a goal for many disciplines. As the culture of STEM evolves towards greater inclusivity, providing equitable access to talented individuals of all backgrounds has become a key priority. Diversity not only in academic expertise but in demographics is increasingly recognized as a source of collective strength that correlates with success and innovation.

    Scientific societies have historically sought to cultivate diverse membership through an educational model that supports scientist participation in activities that prepare them for upcoming career transitions. While this methodology is broadly applied across many different disciplines, societies are now coordinating their efforts and using data to evaluate outcomes and identify opportunities to improve impact.

    In the process, scientific societies have found themselves well-suited to offer field-specific career development opportunities that complement the traditional instruction and training provided at academic institutions. Trainees in the process of developing into independent academic scientists must cultivate a combination of disciplinary knowledge and complementary soft skills in order to navigate demands such as grant-writing, collaboration, mentoring, and publishing—skills that are fundamental to success in the professoriate.

    Scientists from backgrounds that are underrepresented in STEM are less likely to have access to mentoring, making it more challenging to develop the soft skills necessary to facilitate success as tenure-track faculty. Scientific society professional development programming geared towards these scientists can have an equalizer effect, enabling them to successfully build their academic niche and professorial career in field areas of interest to BioMed Central (BMC) such as science, technology, engineering and medicine. The Minorities Affairs Committee (MAC) of the American Society for Cell Biology (ASCB) has a strong track record of creating professional development programs to help relieve these disparities in access to mentoring.

    One example of ASCB MAC programming is its Accomplishing Career Transitions (ACT) program. ACT engages cell biologists who are starting out their independent careers to individualize their professional development and training through a longitudinal mentoring framework. ASCB aims to prepare ACT Fellows for a successful transition into the academic STEM workforce.

    Our new BMC Proceedings Supplement highlights topics including effective mentorship, obtaining a faculty position, starting a lab, preparing for tenure and promotion, and professional development through experiential learning.

    The content of this BMC Proceedings Supplement is more important now than ever, as rising faculty adapt their career trajectories to global challenges. Our short term goal with this BMC Proceedings Supplement is to make the ACT program available to a wider range of scientists to help them prepare for success in academia. Their success will contribute to a diverse professoriate that is inclusive and welcoming of the next generation of scientists.

    The post Accomplishing career transitions in STEM with the help of scientific societies appeared first on BMC Series blog.

    in BMC Series blog on June 22, 2021 10:00 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Next Open NeuroFedora meeting: 21 June 1300 UTC

    Photo by William White on Unsplash.

    Please join us at the next regular Open NeuroFedora team meeting on Monday 21 June at 1300 UTC in #fedora-neuro on IRC (Libera.chat). The meeting is public and open for everyone to attend. You can join us over:

    You can use this link to convert the meeting time to your local time. Or, you can also use this command in the terminal:

    $ date --date='TZ="UTC" 1300 today'
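
    For readers who prefer Python to the shell, the same conversion can be done with the standard library's zoneinfo module (the target zone below is just an example; substitute your own):

    ```python
    from datetime import datetime
    from zoneinfo import ZoneInfo

    # 13:00 UTC on the meeting day, rendered in an example local zone.
    meeting_utc = datetime(2021, 6, 21, 13, 0, tzinfo=ZoneInfo("UTC"))
    local = meeting_utc.astimezone(ZoneInfo("America/New_York"))
    stamp = local.strftime("%Y-%m-%d %H:%M %Z")
    ```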

    The meeting will be chaired by @ankursinha. The agenda for the meeting is:

    We hope to see you there!

    in NeuroFedora blog on June 21, 2021 10:35 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Predicting Kidney Failure

    The Kidney Failure Risk Equation (KFRE) was first assessed by Tangri et al. in 2011.  It was then multi-nationally evaluated in 2016.  I frequently use their online calculator to help counsel my patients in clinic, more to relieve patient concerns than to drive clinical practice.  The KFRE predicts the 2- and 5-year risk of end-stage kidney disease (ESKD) in patients with chronic kidney disease (CKD) stage 3a-5.  There are two forms of the equation: 4-variable and 8-variable.  The 4-variable includes age, sex, estimated glomerular filtration rate (eGFR) and albuminuria.  The 8-variable has these components but also incorporates serum calcium, phosphate, albumin, and bicarbonate.
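
    The 4-variable equation has the usual Cox-model shape: a baseline survival raised to the exponential of a weighted sum of the predictors. A sketch of that shape, with deliberately made-up placeholder coefficients rather than the published Tangri et al. values (use their online calculator for anything real):

    ```python
    import math

    # risk = 1 - S0 ** exp(sum of beta_i * (x_i - mean_i))
    # All coefficients and centering means below are TOY values.
    def two_year_risk(age, male, egfr, ln_acr,
                      s0=0.98, betas=(-0.02, 0.25, -0.10, 0.45),
                      means=(70.0, 0.56, 36.0, 5.1)):
        """ESKD risk from a 4-variable linear predictor (toy coefficients)."""
        x = (age, 1.0 if male else 0.0, egfr, ln_acr)
        lp = sum(b * (v - m) for b, v, m in zip(betas, x, means))
        return 1.0 - s0 ** math.exp(lp)

    risk = two_year_risk(age=65, male=True, egfr=25, ln_acr=6.0)
    ```

    The shape alone explains the behavior described below: lower eGFR and higher albuminuria push the linear predictor up, so the predicted risk rises smoothly rather than jumping at a GFR cutoff.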

    Ali et al. set out to validate the KFRE in advanced CKD, taking into account the cause of CKD (diabetic nephropathy, hypertensive nephropathy, glomerulonephritis, autosomal dominant polycystic kidney disease (ADPKD), and other causes).  The KFRE performed well across all diseases and was found to have adequate discrimination and calibration.  What I found very interesting was that the KFRE provided better clinical utility for decision-making, like pre-ESKD planning, than basing it on GFR cutoffs alone.  Typically, we start planning for transplant evaluations, dialysis education, dialysis access placement, etc. when the eGFR is < 30 mL/min/1.73m2 (in this study they used < 20 and < 15 mL/min/1.73m2).  Using KFRE thresholds of > 40% for 2-year risk and > 50% for 5-year risk of ESKD, Ali et al. identified more patients who were likely to progress to ESKD, and deferred planning in others.

    Identifying patients who are progressing to ESKD is key in planning for dialysis modality, access placement, and transplant referral.  If we can catch individuals early and start the process, then there are fewer hospitalizations and fewer poor outcomes.  Other studies have addressed using the KFRE to judge the necessity of nephrology referrals and as a risk-based approach to guide CKD care.

    Based on my review of the literature and Ali et al.’s article, I definitely will start using the KFRE in my clinical assessment and planning for patients with advanced CKD.  Maybe it will help identify those patients who will benefit from early referral to transplant and access placement or at the least early education endeavors for those patients.  I can only see positives from using the KFRE as another clinical tool.

    What are other additional uses of the KFRE in your practice?

    The post Predicting Kidney Failure appeared first on BMC Series blog.

    in BMC Series blog on June 18, 2021 11:13 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Adding interactive citation maps to arXiv

    We’re pleased to announce a new arXivLabs collaboration with Litmaps. The new arXivLabs feature allows arXiv users to quickly generate a citation map of the top connected articles, and then explore the citation network using the Litmaps research platform.

    A citation network is a visualization of the literature cited by a research paper. The network shows how papers are related to each other in terms of concepts, subject areas, and history — and they’re valuable for analyzing the development of research areas, making decisions on research directions, and assessing the impacts of research, researchers, institutes, countries, and individual papers.
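
    Conceptually, the citation network described above is just a directed graph from each paper to the papers it cites. A minimal sketch with made-up paper IDs (Litmaps itself works from real bibliographic metadata):

    ```python
    from collections import Counter

    # A toy citation graph: each key cites the papers in its list.
    cites = {
        "paperA": ["paperB", "paperC"],
        "paperB": ["paperC"],
        "paperD": ["paperA", "paperC"],
    }

    # In-degree (times cited) gives a crude "top connected articles"
    # ranking, the kind of signal a citation map visualizes.
    in_degree = Counter(ref for refs in cites.values() for ref in refs)
    ranked = [paper for paper, _ in in_degree.most_common()]
    ```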

    Readers can now view a Litmap citation network for a specific paper, directly from the arXiv abstract page by clicking on the “Bibliographic Tools” tab at the bottom of an abstract page and activating “Litmaps.” Using this tool, arXiv readers can now easily jump from articles they are interested in and use Litmaps’ custom visualization and automated search tools to find other critical articles they may have missed.

    From an arXiv abstract page, readers can activate the Litmaps tool to view citation networks.


    An arXivLabs Collaboration

    “We are excited to be able to offer arXiv readers new visual ways to explore scientific literature. Litmaps will enrich the arXiv reading experience and help readers find what they need,” said Eleonora Presani, executive director of arXiv.

    This tool supports arXiv’s mission to provide an open platform where researchers can share and discover new, relevant, emerging science and establish their contribution to advancing research.

    “We are arXiv users ourselves at Litmaps, so it’s great to be able to collaborate and integrate with arXiv, which is a key resource for the research community,” said Axton Pitt, cofounder of Litmaps. The company aims to make the process of literature discovery easy and fun, and welcomes feedback via Twitter, or email.


    A version of this blog post appears here.

    in arXiv.org blog on June 17, 2021 08:45 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Toward a New Theory of Motor Synergies

    This week on Journal Club session Bente Riegler will talk about a paper "Toward a New Theory of Motor Synergies".

    Driven by recent empirical studies, we offer a new understanding of the degrees of freedom problem, and propose a refined concept of synergy as a neural organization that ensures a one-to-many mapping of variables providing for both stability of important performance variables and flexibility of motor patterns to deal with possible perturbations and/or secondary tasks. Empirical evidence is reviewed, including a discussion of the operationalization of stability/flexibility through the method of the uncontrolled manifold. We show how this concept establishes links between the various accounts for how movement is organized in redundant effector systems.
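
    The uncontrolled-manifold method mentioned in the abstract partitions trial-to-trial variance into a component within the null space of the task Jacobian (which leaves the performance variable unchanged) and an orthogonal component that does affect it. A toy numpy sketch for a hypothetical two-joint task whose performance variable is the sum of the joint angles:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    J = np.array([[1.0, 1.0]])          # Jacobian: performance = sum of joint angles

    # Simulate trials where the joints co-vary negatively (a synergy):
    # large joint-space scatter that barely moves the task variable.
    common = rng.normal(size=(200, 1))
    trials = np.hstack([common, -common]) + 0.05 * rng.normal(size=(200, 2))

    dev = trials - trials.mean(axis=0)  # per-trial deviations
    null = np.linalg.svd(J)[2][1:].T    # orthonormal basis of J's null space
    ucm_part = dev @ null @ null.T      # deviations within the UCM
    ort_part = dev - ucm_part           # deviations that change the task variable

    n_ucm, n_ort = null.shape[1], J.shape[1] - null.shape[1]
    v_ucm = (ucm_part ** 2).sum() / (len(trials) * n_ucm)
    v_ort = (ort_part ** 2).sum() / (len(trials) * n_ort)
    ```

    A synergy in this operationalization is simply v_ucm exceeding v_ort: the motor system tolerates variability that does not hurt the task while stabilizing the direction that does.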


    Date: 2021/06/18
    Time: 14:00
    Location: online

    in UH Biocomputation group on June 16, 2021 10:23 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    The Computer Vision Foundation becomes first Gold Affiliate

    With a generous donation of $125,000, The Computer Vision Foundation becomes arXiv’s first 2021 Gold Affiliate. The foundation fosters and supports research on all aspects of computer vision, a field of study that develops techniques to see and interpret the visual world, with applications ranging from diagnosing disease to evaluating agricultural crop quality.

    “We didn’t donate because we were asked,” said Ramin Zabih, president and founder of CVF, Cornell Tech professor, and Google research scientist. “We donated because we love arXiv.”

    Open access is a central tenet of the foundation, which (along with IEEE) cosponsors the two main computer vision conferences, CVPR and ICCV. Before the pandemic, CVPR drew more than 10,000 attendees. CVF also provides open access to the proceedings for these conferences.

    “Without arXiv, the computer vision field would not advance at anything close to its current speed,” said Zabih. Conferences and journals are both useful venues for presenting and peer reviewing results. But the pace of research is fast, and arXiv provides an invaluable vehicle for rapid research sharing between conference and journal cycles. Plus, while the high bar for acceptance to conferences and journals serves an important purpose, arXiv provides a platform for lesser-known researchers to make their mark. “High profile researchers will always have ways to share their work, say, by being invited to give talks. arXiv levels the playing field,” said Zabih.

    arXiv recently launched an updated sustainability model, which distributes arXiv’s operational costs fairly across our dedicated community. This donation was the first Gold Affiliate through this sustainability model and is a continuation of The Computer Vision Foundation’s support, which began in 2019.

    Like earlier donations from the Allen Institute for Artificial Intelligence, Heising-Simons Foundation, and Google, Inc, this contribution supports arXiv’s critical infrastructure improvements and integration with the scholarly communications landscape.

    “arXiv has been operating in maintenance mode for quite some time,” said Eleonora Presani, arXiv’s executive director. “With generous support from organizations like The Computer Vision Foundation, arXiv can innovate for the future.”

    in arXiv.org blog on June 15, 2021 02:00 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Dev session: Rick Gerkin: SciUnit


    Rick Gerkin will introduce the SciUnit framework and discuss its development in this dev session.

    The abstract for the talk is below:

    SciUnit is a discipline-agnostic framework for model validation, handling all of the testing workflow by using an implementation-independent interface to models. SciUnit also contains code for visualization of model results, and command line tools for incorporating testing into continuous integration workflows.

    SciUnit is used in model validation in neuroscience via NeuronUnit, which implements an interface to several simulators and model description languages, handles test calculations according to domain standards, and enables automated construction of tests based on data from several major public data repositories.
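
    The capability-based pattern that SciUnit formalizes can be sketched in a few lines of plain Python. The class names below are hypothetical, not SciUnit's actual API; the point is that a test talks to a model only through a declared capability interface:

    ```python
    class ProducesFiringRate:
        """Capability: any model that can report a firing rate in Hz."""
        def firing_rate(self):
            raise NotImplementedError

    class MyNeuronModel(ProducesFiringRate):
        def firing_rate(self):
            return 12.0  # stand-in for real simulation output, in Hz

    class FiringRateTest:
        def __init__(self, observed, tolerance):
            self.observed, self.tolerance = observed, tolerance

        def judge(self, model):
            # The test only requires the capability, not a specific simulator.
            assert isinstance(model, ProducesFiringRate)
            return abs(model.firing_rate() - self.observed) <= self.tolerance

    score = FiringRateTest(observed=10.0, tolerance=3.0).judge(MyNeuronModel())
    ```

    Because the test never touches simulator internals, the same observation-driven test can judge models written against any backend that implements the capability.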

    in INCF/OCNS Software Working Group on June 14, 2021 03:21 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    All the Research That’s Fit to Print: Open Access and the News Media

    Abstract:  The goal of the open access (OA) movement is to help everyone access scholarly research, not just those who can afford to. However, most studies looking at whether OA has met this goal have focused on whether other scholars are making use of OA research. Few have considered how the broader public, including the news media, uses OA research. This study sought to answer whether the news media mentions OA articles more or less than paywalled articles by looking at articles published from 2010 through 2018 in journals across all four quartiles of the Journal Impact Factor using data obtained through Altmetric.com and the Web of Science. Gold, green and hybrid OA articles all had a positive correlation with the number of news mentions received. News mentions for OA articles did see a dip in 2018, although they remained higher than those for paywalled articles.

    in Open Access Tracking Project: news on June 13, 2021 08:38 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Research and Intuition

    So far I have been very fortunate with my long-term scientific mentors and supervisors: both of them are kind, open, creative and stunningly intelligent. I could not wish for more. However, when asked about a role model, I would mention a person who influenced my take on research, probably more than anyone else, during a time when I was still studying physics: Pina Bausch.

    Pina Bausch was a dancer and choreographer who mostly worked in the small town of Wuppertal, Germany, where she developed her own way of modern dance. Her works are creative and inventive in very unexpected ways, and the way she explored body movements as a dancer struck me as surprisingly similar to what I think is “research”.

    Research in its purest form is the exploration of the unknown, the discovery of what is not yet discovered, without a clear path ahead. The question that I’m working on in the broadest sense, “How does the brain work?”, enters the unknown very quickly as soon as you take the question seriously. How, in general, can we see what cannot be seen yet, how can we find ideas that do not yet exist?

    Pina Bausch was a master of this art. Her craft was not science or biology but dancing, yet I think one can learn some lessons from her. It was typical of her to explore her own movements and to “invent” new movements, like wrist movements or coordinated movements of elbows and the head, or simply a slowed-down or delayed movement of the fingers. In regular life we use a rather limited and predefined combination of motor actions, and it takes some creativity to come up with movements that are unexpected and new but still interesting. One way to find new ways to move would be to consciously become aware of one’s own patterns and limitations and then try to systematically break those rules. However, Pina Bausch performed this discovery process in a different way. Her research was guided not by intellectual deduction or conclusion, but by her intuition. In 1992, she said:

    “Ich weiß nämlich immer, wonach ich suche, aber ich weiß es eher mit meinem Gefühl als mit meinem Kopf.”

    “Because I always know what I’m searching for. But I know it with my feeling rather than with my head.”

    This might come across as a bit naive at first glance. Sure, an artist uses her heart, a scientist uses his brain; that sounds more or less normal, doesn’t it? However, when I saw Pina Bausch do this kind of searching, that is, when she danced, I was very impressed.

    She seemed to rely on her intuition in every single moment of her explorations; and when I heard her talk about it (unfortunately, I’m only aware of interviews in German without translation), it was also clear that she did not have, and did not need, an explanation of what was going on. Most impressively for me, her way of exploring the unknown really struck me as similar to what goes on in a researcher, no matter the subject. What made her such an excellent researcher?

    To me, it seems that the prerequisites of her impressive ability are the following: First of all, of course, a deeply engrained knowledge of and skill with her art, together with honest care about the details. There’s no intuition without experience and knowledge. Second, an openness to whatever random things might happen, coming from the outside or from within, and a willingness to accept them. Third, an acceptance of the fact that she doesn’t really know what she’s doing. Or, to put this differently, a certain humility in the face of what is going to happen and what is going on in her own subconscious. I believe that these are qualities that also make for a good researcher in science.

    It also reflects my own experience of doing research (at least partially). Even when I was working with mathematical tools, for example when modeling diffusion processes in inhomogeneous media during my diploma thesis, I had the impression that my intuition was always a couple of steps ahead of me. Often I could see the shape of the mathematical goal ahead of my derivations, and it would take me several days to get it down on paper.

    Of course there are other ways to develop new ideas, and for some problems intuition also fails systematically (maybe complex systems?). And of course there are other kinds of research, for example the gradual optimization of methods, or the development of devices to solve a specific problem, or the broad and systematic screening of candidate genes or materials for a defined purpose.

    These systematic and step-wise procedures are more predictable than “pure” research, and the grant-based scientific research reinforces this kind of research. In a grant proposal, there are typically a defined number of “aims”. The more clearly defined these aims are, the better the chances of the grant proposal to be accepted. This makes sense. It would be ridiculous to fund a project with loosely defined aims, especially if other, competing proposals have a clear and realistic goal.

    However, this necessary side-effect of grant-based research narrows our perspective to kinds of research that can be more or less clearly described before they are done. It also narrows the way we talk about research and about results. We do not explicitly encourage young researchers to use and develop their intuition, as if this had nothing to do with the scientific process. In grants and progress reports and talks and papers, we use very concise, precise language, sharp and clean as steel (often completed by pieces of superficial math that are supposed to demonstrate precision), not only when describing our methods but also when describing and interpreting results. This is not bad in itself, but it also shapes the way we think about research, and it can lead us to internally reject ideas or results that do not immediately satisfy the desired clarity and cleanliness.

    I think that researchers in “hard” sciences like neuroscience could also benefit from techniques that use intuitive thinking, and at least I have learnt a lot from the way Pina Bausch approached her subject of study using these techniques. Ultimately, understanding in neuroscience should always aim for descriptions in terms of words or math. But the way towards this goal does not need to be guided by these clear ways of thinking alone. From my experience, the power of intuition is only unleashed if we accept that we cannot really understand the process itself. Therefore, I see the humility that Pina Bausch showed towards her own intuitive thought process not simply as a virtue of a human being, but as a tool and a way of thinking that enables creativity.

    in Peter Rupprecht on June 12, 2021 11:31 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    2-Minute Neuroscience: Tinnitus

    Tinnitus, sometimes called “ringing in the ears,” involves hearing a sound that cannot be linked to an external stimulus. In this video, I discuss nervous system mechanisms that might underlie tinnitus.

    in Neuroscientifically Challenged on June 12, 2021 10:24 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Feed-Forward and Noise-Tolerant Detection of Feature Homogeneity in Spiking Networks with a Latency Code

    This week on Journal Club session Michael Schmuker will talk about a paper "Feed-Forward and Noise-Tolerant Detection of Feature Homogeneity in Spiking Networks with a Latency Code".

    In studies of the visual system as well as in computer vision, the focus is often on contrast edges. However, the primate visual system contains a large number of cells that are insensitive to spatial contrast and, instead, respond to uniform homogeneous illumination of their visual field. The purpose of this information remains unclear. Here, we propose a mechanism that detects feature homogeneity in visual areas, based on latency coding and spike time coincidence, in a purely feedforward and therefore rapid manner. We demonstrate how homogeneity information can interact with information on contrast edges to potentially support rapid image segmentation. Furthermore, we analyze how neuronal crosstalk (noise) affects the mechanism's performance. We show that the detrimental effects of crosstalk can be partly mitigated through delayed feed-forward inhibition that shapes bi-phasic postsynaptic events. The delay of the feed-forward inhibition allows effectively controlling the size of the temporal integration window and, thereby, the coincidence threshold. The proposed model is based on single-spike latency codes in a purely feed-forward architecture that supports low-latency processing, making it an attractive scheme of computation in spiking neuronal networks where rapid responses and low spike counts are desired.
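
    The core idea, stripped of biological detail, can be sketched as follows: each input unit fires a single spike whose latency decreases with feature intensity, and a downstream coincidence detector flags a patch as homogeneous when all first spikes arrive within a short window. Parameters here are illustrative, not the paper's model:

    ```python
    def to_latency(intensity, t_max=50.0):
        """Latency code: stronger features fire earlier (latency in ms)."""
        return t_max * (1.0 - intensity)

    def is_homogeneous(intensities, window=5.0):
        """Coincidence detection: all first spikes within one temporal window."""
        latencies = [to_latency(i) for i in intensities]
        return max(latencies) - min(latencies) <= window

    homog = is_homogeneous([0.80, 0.82, 0.79])  # similar features -> coincident spikes
    edge = is_homogeneous([0.9, 0.3, 0.8])      # a contrast edge -> spread-out spikes
    ```

    The window parameter plays the role that delayed feed-forward inhibition plays in the model: shrinking it raises the coincidence threshold and makes the detector stricter.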


    Date: 2021/06/11
    Time: 14:00
    Location: online

    in UH Biocomputation group on June 09, 2021 06:36 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    My stack for research ML projects

    For the past few months, I’ve been working on a machine learning research project, which I just submitted to NeurIPS. The scale is, all things considered, fairly small: the output is limited to one paper and a handful of figures. Yet, I still needed to distribute the work across a mini-cluster in the cloud, keep track of thousands of different experiments, track key performance metrics over time, and integrate a half-dozen datasets and a dozen models.

    At that scale, it’s pretty easy to shoot yourself in the foot: introduce a data loading error that invalidates your results, forget to keep track of which parameters you used for which experiment you ran months ago, have a figure that’s not easily reproducible because the steps to reproduce the figure have been lost through the sands of time, etc. Only about 25% of papers in machine learning actually publish their code, and I have a feeling that for a big chunk of the other 75%, it’s because people hesitate to share their messy creations, or because the code is so brittle that it self-destructs as soon as anyone looks at it and they don’t want to support it.

    Here’s the stack that I used for this project, the reasoning behind some of my decisions, and some lessons I learned. I think that the organization I show here is appropriate for small research teams like academic labs and early-stage startups. Some of this is a little pedestrian, but I hope you learn a thing or two. Let me know about your experiences in the comments.

    Decision points

    Local vs. cloud

    The Full Stack Deep Learning class from Berkeley is a good place to learn how to organize deep learning projects. I had originally intended to go all-in on cloud infra to run models and analyze data. The class makes a solid argument that it ends up being cheaper to build your own machine for local testing, then scale on the cloud as needed. I had a several-years-old deep learning rig with a 1080 Titan lying around, so I ended up wiping the drive and installing Ubuntu 20 on it for local bleeding-edge development. I would then push the code to GitHub when it was ready to be scaled and run on cloud infra. Having a local dev machine doesn’t mean you’re locked to that place: you can still set up remote ssh access, but given that it was winter and we were on lockdown, I never had a reason to do it.

    I had spent far too many afternoons figuring out (and yelling at and crying about) kubernetes on Google Cloud last year, so for this project I decided to avoid managing cloud infra by using a managed on-demand service. I wanted a central server where I could submit a batch job and it would provision the right machine to run the job and store the results somewhere in the cloud. Importantly, I wanted it set up in a way that I wouldn’t have to pay for a central server, only for the workers. I ended up using paperspace gradient for this purpose, which, at the time, had cheap prices for GPU instances and offered a straightforward product called experiments, which was meant for exactly the purpose I wanted. This setup worked great for the first 3 months, but then Gradient changed their product to focus on a more turnkey solution, and from that point on the original product stopped working properly (multiple jobs would hang, you couldn’t find jobs from the interface, customer support was abysmal). Next time, I will try AWS Batch, and perhaps I will dip my toe into SageMaker to get access to a cloud jupyter notebook for analysis.

    Code organization

    One unexpected advantage of using this hybrid local/cloud setup is that it immediately imposed a certain discipline on the project: gradient runs your code inside a docker container, so you have to have a docker container ready, hence you are documenting your dependencies as you go along. I used the Pytorch docker container base and kept adding dependencies as needed, and used dockerhub to build the containers in the cloud.
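    The container can stay small; a sketch of the kind of Dockerfile this workflow implies (the base image tag and the extra dependencies are illustrative assumptions, not the project's actual ones):

```dockerfile
# Illustrative Dockerfile: start from a PyTorch base image and layer on
# project dependencies as they appear (tag and packages are assumptions).
FROM pytorch/pytorch:1.8.1-cuda11.1-cudnn8-runtime

# Dependencies accumulate here as the project grows
RUN pip install --no-cache-dir wandb pytest

# Bake the code into the image so a batch job is fully self-contained
COPY . /workspace
WORKDIR /workspace
```

    Building this on every push (e.g. via Docker Hub's automated builds, as described above) documents the environment as a side effect.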

    This setup also imposes a workflow that focuses on creating batch jobs as Python scripts as opposed to jupyter notebooks. Jupyter notebooks are great for analysis, but building models in jupyter notebooks is a bad idea – you pollute your global namespace, you risk running cells out of order, and the editor has poor introspection, so it won’t tell you if you reference a variable that doesn’t exist further down a long script, etc. I much prefer writing models and batch scripts in VSCode, using Black as an autoformatter.

    If you’re used to a jupyter-notebook-based workflow, you’re probably used to writing small snippets of code, testing them in the cell on a handful of inputs, then lifting the code to a function which you can then integrate into the larger notebook narrative. With a batch-based workflow you lose this fast feedback process. The key to getting back that productivity – and then some – is to switch to test-driven development. Rather than informally testing snippets of your code in jupyter notebook, you create a separate test suite that calls the functions in your Python script and checks that their output is as expected. This forces you to write reasonably clean code with small functions that don’t have a lot of side effects. In practice, I didn’t aim for 100% code coverage, but instead focused on tests for data loaders and models. Testable code is also much easier to refactor, because you can easily see if you broke your code by re-running the test suite. Each test should only take a few milliseconds to run.
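    As an illustration of the workflow, here is what such a test might look like for a small preprocessing helper (the `normalize` function below is invented for the example, not taken from the project):

```python
import math

def normalize(xs):
    """Scale a list of floats to zero mean and unit variance (hypothetical helper)."""
    mu = sum(xs) / len(xs)
    var = sum((x - mu) ** 2 for x in xs) / len(xs)
    sd = math.sqrt(var) or 1.0  # fall back to 1.0 for constant input, avoiding division by zero
    return [(x - mu) / sd for x in xs]

def test_normalize_stats():
    # output should have zero mean and unit variance
    ys = normalize([1.0, 2.0, 3.0, 4.0])
    assert abs(sum(ys)) < 1e-9
    assert abs(math.sqrt(sum(y * y for y in ys) / len(ys)) - 1.0) < 1e-9

def test_normalize_constant_input():
    # a corner case an informal notebook check would likely miss
    assert normalize([5.0, 5.0, 5.0]) == [0.0, 0.0, 0.0]

test_normalize_stats()
test_normalize_constant_input()
```

    pytest discovers the `test_*` functions automatically, so the whole suite stays a one-command check you can re-run after every refactor.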

    I enabled continuous integration (CI) in the Github repo to run my pytest suite via nose2 every time I pushed code. Unfortunately, a lot of that code depended on having GPU access on Github, which would have become expensive, so I didn’t end up using it. It does look like it’s possible to run test suites on a GPU-enabled machine via nvidia-docker.

    I used an ad-hoc project folder structure inspired by PyRSE. Elizabeth DuPre told me about cookiecutter, which allows one to create projects from templates. I ended up using the datascience cookiecutter for a subset of the project, and generally I liked it (although it is a bit too nested for my taste).

    TF vs. PyTorch

    I’ve used TF in the past, but I decided to use PyTorch this round, since a lot of the models I wanted to test were released in PyTorch. The PyTorch documentation is excellent. A surprising chunk of my codebase ended up being training loops, so for next time I will consider PyTorch Lightning.

    Storing results

    One thing I never really spent time thinking about in grad school is how to store results. I have folders and folders in an old DropBox that are filled to the brim with mat files with long names from gosh knows what experiment. The problem with such ad hoc results storage is that you don’t know how the results were generated, because the results and the code are not linked.

    Ideally, you’d like to store results in a centralized location with the following features:

    • each result comes with metadata about how the result was generated (command line parameters, git hash, etc.)
    • you can safely delete stale results
    • you can query results to find the subset you’re interested in
    • you can visualize metadata about the results in graphical form (dashboard)

    You could implement these functionalities yourself, for example with a local mongo database instance to store the results and a dashboard of your own creation, but I ended up instead using Weights and Biases (wandb). This turned out to be a great idea, highly recommended. In your code, you push your results along with metadata to their server using a simple library that you can pip install. The library takes care of adding the git hash to the payload so you know what code generated the result. You can also use the library to send and store weights, images, checkpoints, etc.
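    To make the idea concrete, here is a stdlib-only sketch of the metadata one would attach to each stored result; wandb automates exactly this bookkeeping (the field names here are illustrative):

```python
import subprocess
import time

def result_record(params: dict, metrics: dict) -> dict:
    """Bundle metrics with the metadata needed to reproduce them."""
    try:
        # record which code produced this result
        git_hash = subprocess.run(
            ["git", "rev-parse", "HEAD"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
    except (OSError, subprocess.CalledProcessError):
        git_hash = "unknown"  # e.g. not running inside a git repository
    return {
        "timestamp": time.time(),
        "git_hash": git_hash,
        "params": params,
        "metrics": metrics,
    }

record = result_record({"lr": 1e-3, "epochs": 10}, {"val_loss": 0.42})
print(sorted(record))
```

    With the code version, parameters, and timestamp stored next to every metric, stale results can be deleted safely and queried later.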

    Wandb dashboard

    You get a fully customizable dashboard built on vega-lite to keep track of your results. To retrieve your results, you use mongodb queries, so you can filter results by whatever property you desire. To have meaningful properties to query, I specify command line arguments to my training scripts via argparse, for instance number of filters, learning rate, epochs, etc. Then I send the argparse arguments to wandb when I save my results. That way I can easily filter by learning rate, for example.
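    A sketch of that pattern, with invented flag names (wandb's `wandb.init` accepts such a dict via its `config` argument):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # every hyperparameter is a command-line flag, so the full config
    # can be logged alongside each result and queried later
    parser = argparse.ArgumentParser(description="train a model (illustrative flags)")
    parser.add_argument("--learning-rate", type=float, default=1e-3)
    parser.add_argument("--num-filters", type=int, default=32)
    parser.add_argument("--epochs", type=int, default=10)
    return parser

args = build_parser().parse_args(["--learning-rate", "0.01", "--epochs", "5"])
config = vars(args)  # plain dict, ready to send along with the results
print(config)
```

    Because the logged properties mirror the flags exactly, filtering runs by learning rate (or any other flag) needs no extra bookkeeping.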

    Locally, I do still use tensorboard to babysit my networks, especially with a new network when it’s not clear whether the training will converge or not. I find that tensorboard is good to diagnose one model, but it’s not great to keep track of multiple models (say, more than 5) – pretty soon all you see is a mess of training curves.

    Analyzing results

    I use jupyterlab locally to analyze my results. One thing that I don’t like about jupyter notebooks is that they don’t play well with git – the diffs are meaningless. I will try out jupytext, which allows you to write jupyter notebooks as plain text.

    Mistakes I made

    I accumulated a lot of technical debt around a seemingly innocuous decision about data storage. I downloaded brain data in ad hoc formats from multiple public sources (crcns), and I originally wrote a PyTorch data loader for each data source. That meant that for every new brain dataset I wanted to integrate, I would write a new data loader, which would take care of filtering out bad records, splitting the data into train and test set, and transforming it into the format that PyTorch expected.

    That worked fine for the first 2 or 3 datasets, but pretty soon it became a liability. The ad-hoc formats were not always designed for rapid loading – some datasets were really slow to load because they used small block sizes in hdf5. Other datasets needed some manual curation. What I needed was an intermediate format – I would preprocess the datasets into the intermediate format once, and then load the intermediate format in the data loaders.

    What I should have done is settle on one true intermediate format – e.g. NWB – and decide the train/test splits during preprocessing. The data needed to be in a FAIR format at the intermediate stage – in particular, it needed to be interoperable. What I ended up doing instead was creating a bunch of ad hoc intermediate formats – I would minimally preprocess the data and save it in a format parallel to the original one. Ultimately, that meant making a lot of micro decisions (e.g. how do I specify the train/test splits – two datasets, or one dataset and two sets of indices?) that I made inconsistently across datasets, and it became a mess that I never bothered fixing. My test suite for loaders ballooned as I had to cover corner cases for every loader. Doing it over again, I would settle on one true intermediate format from the start.
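    One way to settle those micro decisions once is a single record type that stores the full dataset together with split indices. A hypothetical sketch of that convention (this is not the NWB schema, just an illustration of the "one dataset plus index sets" choice):

```python
from dataclasses import dataclass, field

@dataclass
class IntermediateDataset:
    """One preprocessed dataset with named splits stored as index lists."""
    name: str
    samples: list                               # preprocessed records, one per trial
    splits: dict = field(default_factory=dict)  # split name -> list of sample indices

    def subset(self, split: str) -> list:
        # every loader selects samples the same way, for every dataset
        return [self.samples[i] for i in self.splits[split]]

ds = IntermediateDataset(
    name="toy",
    samples=["s0", "s1", "s2", "s3"],
    splits={"train": [0, 1, 2], "test": [3]},
)
print(ds.subset("test"))
```

    Storing splits as indices keeps the data in one place, so each PyTorch data loader only has to wrap `subset(...)` rather than re-deciding the convention.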

    I experimented a bit with using make to create the intermediate format, and I think it’s a great solution for this use case. make will create intermediate results if they don’t exist, but won’t try to recreate them if they’re already there – an ideal solution for managing complex pipelines.
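    A minimal sketch of such a Makefile, with illustrative paths (not the project's actual layout): raw files are preprocessed into the intermediate format only when missing or out of date.

```make
# Pattern rules: each intermediate file is built from its raw counterpart.
# make rebuilds a target only when a prerequisite is newer than it.
data/intermediate/%.h5: data/raw/%.dat scripts/preprocess.py
	python scripts/preprocess.py $< $@

# Training results depend on the intermediate data, never on the raw files.
results/%.json: data/intermediate/%.h5 scripts/train.py
	python scripts/train.py $< $@
```

    Here `$<` is the first prerequisite and `$@` is the target, so `make results/dataset1.json` builds the intermediate file first if needed, and a second invocation is a no-op.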

    Releasing code and models

    I didn’t really think much about making the code easy to run for others, but when it came time to release it (for reviewers), it only took me about half a day to clean up the codebase and write docs. The code was already logically organized, the dependencies were documented, and most everything ran automatically with bash scripts, so all in all it was much less trouble to release than a typical project I would have undertaken in grad school. That being said, I would like to release the model in an easy-to-use way. It’s unfortunate that huggingface is restricted to transformer models; that would have been ideal otherwise. I was recently made aware of cog, which aims to make model releases easy, complete with a docker environment and downloadable weights. The weights themselves can be stored on figshare (free bandwidth!).


    Keep                          | Change                             | Try (more)
    Pytorch                       | Paperspace -> AWS Batch            | Jupytext
    wandb                         | Ad hoc intermediate format -> NWB  | Make
    Mixed local/cloud development | Google Cloud Storage -> S3         | Data science cookiecutter
    vscode                        |                                    | PyTorch Lightning
    Test-driven development       |                                    | cog

    What’s your stack? What tool has made your life easier?

    in xcorr.net on June 09, 2021 05:04 PM.


    The Ocean Womxn program – supporting a new generation of black oceanographers

    The University of Cape Town (UCT) in South Africa is in a prime location to conduct oceanographic research and features a wealth of scientific expertise. However, as in many institutions, the pool of oceanographic researchers can be quite homogenous. UCT researchers Katye Altieri, Sarah Fawcett and Isabelle Ansorge realized that while the academic staff in the Department of Oceanography at UCT is comprised of ~50% womxn, it is 100% white. Furthermore, in 2019, amongst postgraduates in the department, there were only 12 Black African and Coloured South African womxn out of 73 students. The barriers that prevent higher representation of Black womxn in oceanography and climate science are numerous and include financial restrictions and lack of resources or training opportunities. The feeling of ‘otherness’ can also prevent Black womxn from feeling like a part of the research community. Altieri says that she and her colleagues saw an opportunity to create “a dedicated programme that offers field work preparation, mentorship and support for black womxn so they can become future leaders of oceanography in South Africa and the global south”. The three womxn joined forces with Juliet Hermes from the South African Environmental Observation Network (SAEON), and they decided to apply for a grant from the Advancing Womxn initiative, created by UCT Vice-Chancellor Professor Mamokgethi Phakeng. 

    The result is Ocean Womxn, a program funded for an initial 5-year period that aims to remove barriers and provide support for Black womxn in oceanography. In the first year, 5 womxn joined the program, with 2 more following in 2021. The Fellows receive comprehensive support – this includes financial cover for the duration of their degree, relocation costs, a field gear budget and a laptop. They also have sea-going opportunities, such as SEAmester cruises, where they can develop lab skills and receive hands-on experience. Additionally, they are offered the option of swimming, boating and scuba lessons. 

    PLOS ONE recently spoke with some of the Ocean Womxn Fellows to learn more about the program and how it has impacted their journey as early career oceanographers. 

    Faith February is a Ph.D. candidate in Ocean and Atmosphere Science at the University of Cape Town. Her research focuses on characterising atmospheric aerosols in False Bay to improve the understanding of, and inputs to, prediction models for climate change. At the same time, she is also addressing the scarcity of observational aerosol data in the Southern Hemisphere. Faith loves the exposure and support that she and her research are getting through the Advancing Womxn Fellowship. She is now reaching out to and mentoring other young women in STEM fields. Her goal is to advance the representation of women in the scientific research arena.

    Black woman oceanographer standing next to a helicopter

    Kolisa Sinyanya is a Ph.D. candidate and her research is part of a growing body of work that critically examines biogeochemical cycling in the ocean, particularly in regions that are currently under-sampled. Her research, which is documented in The Conversation Africa, involves exploring phytoplankton community dynamics and microbe-nutrient interactions in the Indian Ocean, including subtropical and Southern Ocean waters. She’s a Black Women In Science South Africa 2019 Fellow, an Inspiring Fifty Women in STEM South Africa nominee, a FameLab Cape Town runner-up, and a Pint of Science South Africa and TEDxUCT speaker. Kolisa is passionate about learning and sharing her science! 

    PLOS ONE: Doctoral training programs and fellowships often provide valuable resources and opportunities for PhD candidates. What opportunities have you had as an Ocean Womxn Fellow that you might not have otherwise had as a general PhD student? 

    Faith: I would definitely say the emotional and moral support by being in a cohort. Also, the mentor sessions with other women and role models in STEM fields. An added opportunity is definitely the exposure that we get on social media and all over for changing the landscape for black womxn in Oceanography. 

    Kolisa: My most memorable and unique opportunities that I am receiving through Ocean Womxn are networking with influential women in the world of STEM. Every month we have visiting mentors ranging from women Professors to women PhD candidates who are making waves in their industries and fields of expertise. This is unique to Ocean Womxn as no other fellowship does this in our department. From these face-to-face interactions we get to ask whatever questions we want answered by the invited mentor. This has largely enriched not just my thinking, but I have formed relationships with some of our guests and have been recommended for opportunities by some.   

    PLOS ONE: Researchers from under-represented groups/communities often experience a “burden of service”, whereby they are asked to do more outreach and engagement activities than their colleagues. This can take time away from doing actual research. Have you ever previously experienced this, and how has the Ocean Womxn program helped you to avoid being in this situation? 

    Faith: For me the outreach and engagement activities are not a “burden of service”, but rather an act of love and kindness to reach out to other under-represented groups. The support that I experienced through the Ocean Womxn program inspired me to support other young womxn in STEM research fields. I got involved in the Project Kuongoza Mentorship program as a mentor to mentees across Africa and am now “serving” my continent outside the borders of the country! I believe that you need to have a balance in life and sharing about your experiences in research should be seen as part of enriching your life. 

    Kolisa: Yes, I have experienced this. More so because I am a well-known science communicator and so in addition to having the responsibility of “burden of service” I have the responsibility to “teach” science to the masses. Ocean Womxn has enhanced this and for me it really is not a burden but a privilege. I see it as a privilege because I get to be an underrepresented group representative doing the service so well, encouraging others to follow in my footsteps and see that it is possible. It is key for me to be this type of scientist because I did not have this as I grew up in a world where I had no black women ocean scientists to look up to. Ocean Womxn has magnified my role as a role model in not just ocean sciences but in science at large. 

    PLOS ONE: What is the most memorable fieldwork or laboratory experience you have had so far? 

    Faith: I was involved in the First European-South African Transmission Experiment which was an international collaboration between Germany, Netherlands, Norway, and South Africa. It was a huge challenge and honour for me to be responsible for the data acquisition, analysis, storage, and maintenance of the aerosol equipment. It also led to me doing my PhD! 

    Kolisa: My most memorable fieldwork to date is being part of the Prince Edward Island sampling expedition for my PhD. On the expedition we sailed to the subantarctic Southern Ocean on the SA Agulhas II which is South Africa’s research vessel. I spent about 2 and half months in the middle of the ocean living though sea storms and calm waters. It was my first experience out that far into the open ocean. I returned to land as a new woman. I was transformed mentally and had this urge to change the world with my science and I believe that is exactly what I am currently doing. I wanted to make academia and science inclusive, especially for black women! 

    PLOS ONE: Your research focuses on understanding the impacts of climate change. How did you become interested in this field?  

    Faith: My research on atmospheric aerosols, and more specifically sea spray aerosols, seeks to address the uncertainty about the drivers of climate change. As sea spray aerosols occur naturally in the atmosphere, they can be used as a proxy for pre-industrial conditions and to determine how anthropogenic activities contributed to climate change. I am a Physicist and was involved in characterizing atmospheric effects over the ocean on visual and infrared cameras, when it became apparent that aerosols (microscopic particles) play a huge role in the atmosphere. I then decided to do my PhD on atmospheric aerosols to improve the understanding and inputs of aerosols to prediction models of climate change. 

    Kolisa: Our planet is changing as we know it and our role as scientists is key in uncovering the shifts. As science majors we always heard, read about, and were constantly reminded of global warming which leads to climate change. This sparks interest in many of us to want to uncover and dissect the intricacies of how this happens, how it affects our planet and what we need to do to mitigate it. Therefore, my interests I can say were sparked long before I even knew I would take my research focuses this far into academia. I wanted to be one of the first and few black women scientists from Africa who are part of this global research to understand our planet. 

    PLOS ONE: More and more researchers are committing to the mission of Open Science: to making the entire research process openly accessible, transparent, and reproducible. Examples of this include publishing in open access journals, preregistering research plans, publishing protocols, as well as sharing data and code. What are your thoughts on Open Science and how do you feel this kind of improved transparency, accessibility and equity impacts scientific research? 

    Faith: I am all for Open Science! It was such a relief when articles that were previously behind paywalls became available due to the pandemic. I think that scientific research will now reach greater heights and swifter turnarounds with more diverse collaborations. The Covid-19 vaccine rollouts are a sterling example of how the “openness” of science can contribute to quicker implementations and finding solutions for all. 

    Kolisa: I personally have chosen the path of science communication to merge with my research because I support open science. Open science starts with transparency, inclusivity and being accommodative. When our science is openly accessible and well understood we achieve more with our discoveries. The aim of us doing science is to understand how systems work and report on those. These reports are in forms of difficult journal papers that take a lot to understand. I believe that we need to evolve with the times and make our science open and easily understandable by anyone who reads our findings and recommendations. How will the world improve, and people be educated if we do not articulate ourselves strategically to be open and transparent about our data, our research plans and our publishing and communication? 

    Here are some other Ocean Womxn Fellows:

    Thando Mazomba

    Wanjiru Thoithi

    Sizwekazi Yapi

    Fortunate Mogane

    Philile Mvula


    Lerato Mpheshea

    The post The Ocean Womxn program – supporting a new generation of black oceanographers appeared first on The Official PLOS Blog.

    in The Official PLOS Blog on June 08, 2021 01:47 PM.


    New paper in Biomedical Optics Express: Precompensation of 3D field distortions in remote focus two-photon microscopy (Antoine M. Valera, Fiona C. Neufeldt, Paul A. Kirkby, John E. Mitchell, and R. Angus Silver)

    Remote focusing is widely used in 3D two-photon microscopy and 3D photostimulation because it enables fast axial scanning without moving the objective lens or specimen. However, due to the design constraints of microscope optics, remote focus units are often located in non-telecentric positions in the optical path, leading to significant depth-dependent 3D field distortions in the imaging volume. To address this limitation, we characterized 3D field distortions arising from non-telecentric remote focusing and present a method for distortion precompensation. We demonstrate its applicability for a 3D two-photon microscope that uses an acousto-optic lens (AOL) for remote focusing and scanning. We show that the distortion precompensation method improves the pointing precision of the AOL microscope to < 0.5 µm throughout the 400 × 400 × 400 µm imaging volume.


    in The Silver Lab on June 07, 2021 06:18 PM.


    New paper in Neuron: Multidimensional population activity in an electrically coupled inhibitory circuit in the cerebellar cortex (Gurnani, Silver)

    Inhibitory neurons orchestrate the activity of excitatory neurons and play key roles in circuit function. Although individual interneurons have been studied extensively, little is known about their properties at the population level. Using random-access 3D two-photon microscopy, we imaged local populations of cerebellar Golgi cells (GoCs), which deliver inhibition to granule cells. We show that population activity is organized into multiple modes during spontaneous behaviors. A slow, network-wide common modulation of GoC activity correlates with the level of whisking and locomotion, while faster (<1 s) differential population activity, arising from spatially mixed heterogeneous GoC responses, encodes more precise information. A biologically detailed GoC circuit model reproduced the common population mode and the dimensionality observed experimentally, but these properties disappeared when electrical coupling was removed. Our results establish that local GoC circuits exhibit multidimensional activity patterns that could be used for inhibition-mediated adaptive gain control and spatiotemporal patterning of downstream granule cells.



    in The Silver Lab on June 07, 2021 06:16 PM.


    Next Open NeuroFedora meeting: 07 June 1300 UTC

    Photo by William White on Unsplash


    Please join us at the next regular Open NeuroFedora team meeting on Monday 07 June at 1300UTC in #fedora-neuro on IRC (Libera.chat). The meeting is a public meeting, and open for everyone to attend. You can join us over:

    You can use this link to convert the meeting time to your local time. Or, you can also use this command in the terminal:

    $ date --date='TZ="UTC" 1300 today'

    The meeting will be chaired by @ankursinha. The agenda for the meeting is:

    We hope to see you there!

    in NeuroFedora blog on June 07, 2021 09:25 AM.


    Tips for making open access books - Digital Scholarship Leiden

    Tips for making open access books We look at six different ways that academic authors at Leiden University have found to make their books open access, and pass on their tips. More and more academic books are published as Open Access (OA), making them freely available to read without any payment being necessary. The Directory of Open Access Books (DOAB) already includes more than 42,000 peer-reviewed OA books from around 500 publishers worldwide. These books are integrated into most library catalogues. Research funders increasingly demand that output produced as a result of their funding, and which is published in books, should be published as OA. It is expected that in the future, more and more academics will publish their books OA, as a lot of publishers are willing to publish books immediately as OA. In this blog post, the Open Access Team from the Centre for Digital Scholarship, Leiden University Libraries, interviewed some Leiden academics about how they went about publishing their books OA. These interviews show what you can do to publish a book OA, and also how you can make an already-published book openly accessible. The examples are in no particular order.

    in Open Access Tracking Project: news on June 04, 2021 08:50 AM.


    Dev session: James Knight, Thomas Nowotny: GeNN

    The GeNN simulator

    James Knight and Thomas Nowotny will introduce the GeNN simulation environment and discuss its development in this dev session.

    • Date: March 9, 2021, 1700 UTC (Click here to see your local time).
    • Location (Zoom): (link no longer valid)

    The abstract for the talk is below:

    Large-scale numerical simulations of brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. Similarly, spiking neural networks are also gaining traction in machine learning with the promise that neuromorphic hardware will eventually make them much more energy efficient than classical ANNs. In this dev session, we will present the GeNN (GPU-enhanced Neuronal Networks) framework [1], which aims to facilitate the use of graphics accelerators for computational models of large-scale spiking neuronal networks to address the challenge of efficient simulations. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. GeNN was originally developed as a pure C++ and CUDA library but, subsequently, we have added a Python interface and OpenCL backend. The Python interface has enabled us to develop a PyNN [2] frontend and we are also working on a Keras-inspired frontend for spike-based machine learning [3].

    In the session we will briefly cover the history and basic philosophy of GeNN and show some simple examples of how it is used and how it works inside. We will then talk in more depth about its development with a focus on testing for GPU dependent software and some of the further developments such as Brian2GeNN [4].

    in INCF/OCNS Software Working Group on June 03, 2021 10:35 AM.


    Dev session: Ankur Sinha: NeuroFedora

    The NeuroFedora project

    Ankur Sinha will introduce the Free/Open Source Software NeuroFedora project and discuss its development in this developer session.

    • Date: Feb 16, 2021 1700 UTC (Click here to see your local time).
    • Location (Zoom): (link no longer valid)

    The abstract for the talk is below:

    NeuroFedora is an initiative to provide a ready to use Fedora based Free/Open source software platform for neuroscience. We believe that similar to Free software, science should be free for all to use, share, modify, and study. The use of Free software also aids reproducibility, data sharing, and collaboration in the research community. By making the tools used in the scientific process easier to use, NeuroFedora aims to take a step to enable this ideal. In this session, I will talk about the deliverables of the NeuroFedora project and then go over the complete pipeline that we use to produce, test, and disseminate them.

    in INCF/OCNS Software Working Group on June 03, 2021 10:33 AM.


    Dev session: Marcel Stimberg: Brian Simulator

    The Brian Simulator

    Marcel Stimberg will introduce the Brian Simulator and discuss its development for the first developer session of the year.

    • Date: Feb 11, 2021 1700 UTC (Click here to see your local time).
    • Location (Zoom): (link no longer valid)

    The abstract for the talk is below:

    The Brian Simulator is a free, open-source simulator for spiking neural networks, written in Python. It provides researchers with the means to express any kind of neural model in mathematical notation and takes care of translating these model descriptions into efficient executable code. During this dev session I will first give a quick introduction to the simulator itself and its code generation mechanism. I will then walk through Brian’s code structure and our automatic systems for tests and documentation, and demonstrate how we work on its development. The Brian simulator welcomes contributions on many levels; hopefully this dev session will give you an idea of where to start.

    in INCF/OCNS Software Working Group on June 03, 2021 10:31 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Dev session: Caglar Cakan: neurolib


    Caglar Cakan will introduce neurolib and discuss its development in this developer session.

    • Date: February 23, 2021. 1700 UTC/ 1800 Berlin time (Click here to see your local time).
    • Location (Zoom): (link no longer valid)

    The abstract for the talk is below:

    neurolib is a computational framework for whole-brain modelling written in Python. It provides a set of neural mass models that represent the average activity of a brain region on a mesoscopic scale. In a whole-brain network model, brain regions are connected with each other based on structural connectivity data, i.e. the connectome of the brain. neurolib can load structural and functional data sets, set up a whole-brain model, manage its parameters, simulate it, and organize its outputs for later analysis. The activity of each brain region can be converted into a simulated BOLD signal in order to calibrate the model to empirical data from functional magnetic resonance imaging (fMRI). Extensive model analysis is possible using a parameter exploration module, which allows users to characterize the model’s behaviour given a set of changing parameters. An optimization module allows for fitting a model to multimodal empirical data using an evolutionary algorithm. Besides its included functionality, neurolib is designed to be extendable such that custom neural mass models can be implemented easily. neurolib offers a versatile platform for computational neuroscientists for prototyping models, managing large numerical experiments, studying the structure-function relationship of brain networks, and for in-silico optimization of whole-brain models.

    in INCF/OCNS Software Working Group on June 03, 2021 10:12 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Showcase at CNS*2021

    Photo by Greg Rosenke on Unsplash.

    Join us for a NeuroFedora showcase at the 30th Annual meeting of the Organization for Computational Neuroscience (OCNS) on July 3, 2021. You can register for CNS*2021 here. The showcase will also be recorded for later viewing. The description of the showcase is below:

    Open Neuroscience is heavily dependent on the availability of Free/Open Source Software (FOSS) tools that support the modern scientific process. While more and more tools are now being developed using FOSS driven methods to ensure free (free to use, study, modify, and share---and so also free of cost) access to all, the complexity of these domain specific tools makes their uptake by the multi-disciplinary neuroscience target audience non-trivial.

    The NeuroFedora community initiative aims to make it easier for all to use neuroscience software tools. Using the resources of the FOSS Fedora community, NeuroFedora volunteers identify, package, test, document, and disseminate neuroscience software for easy usage on the general purpose FOSS Fedora Linux Operating System (OS). As a result, users can easily install a myriad of software tools in only two steps: install any flavour of the Fedora OS; install the required tools using the in-built package manager.

    To make common computational neuroscience tools even more accessible, NeuroFedora now provides an OS image that is ready to download and use. Users can obtain the CompNeuroFedora OS image from the community website at https://labs.fedoraproject.org/. They can either install it, or run it “live” from the installation image. The software showcase will introduce the audience to the NeuroFedora community initiative. It will demonstrate the CompNeuroFedora installation image and the plethora of software tools for computational neuroscience that it includes. It will also give the audience a quick overview of how the NeuroFedora community functions and how they may contribute.

    User documentation for NeuroFedora can be found at https://neuro.fedoraproject.org

    in NeuroFedora blog on June 03, 2021 07:40 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Concerns about Marseille’s IHUMI/AMU papers – Part 1

    Last March, I shared my concerns about a paper from the IHU-Méditerranée Infection (IHUMI)/ Aix-Marseille University (AMU) claiming that Hydroxychloroquine in combination with Azithromycin could reduce coronavirus viral loads faster than no treatment.

    Other papers by this group of researchers led by Professor Didier Raoult and/or his right-hand man Professor Eric Chabrière, also appear to contain problems, ranging from potentially duplicated images to ethical concerns.

    In this blog post I have gathered the papers by the Raoult and Chabrière group that have image concerns. This post is not an accusation of misconduct, but a compilation of the potential problems found in 22 different papers by this group. I welcome the authors removing any concerns by providing the original figures.

    Concerns about duplicated images

    A set of at least 22 papers from the AMU/IHUMI group appear to have problematic images, including unexpected similarities across or even within figure panels. All of these have been posted on Pubpeer, including nineteen that I found and posted there. As of today the authors have not replied to most of these PubPeer posts, nor have they corrected the errors.

    Category I duplications

    Some papers, such as DOI: 10.1093/ajcp/101.3.318 [PubPeer] or 10.1016/j.ijid.2006.10.005 [PubPeer] appear to contain ‘simple’ duplications in figure panels. As explained in a previous post, I call these Category I duplications, where the exact same photo is used to represent two different experiments. These kinds of simple duplications can often be honest errors, where something went wrong during the paper submission or manuscript handling. A cheerful “Oops, yes you are right, we made an error” would have been enough, and these duplicated panels could be easily addressed with a Corrigendum. Unfortunately, the authors have not replied to most of the PubPeer posts.

    Source: DOI: 10.1093/ajcp/101.3.318 [PubPeer]
    Figures 1 and 2 appear to show the same photo
    Source: 10.1016/j.ijid.2006.10.005 [PubPeer]
    Panels b and c look more similar to each other than expected
    Source: DOI 10.1371/journal.pntd.0001540 [PubPeer]
    Figure 2 appears to be identical to Figure 1 in https://doi.org/10.4269/ajtmh.12-0212 [PubPeer]
    as noted by François-Xavier Coudert

    In DOI 10.1002/jcb.22135 [PubPeer], the two 32h flow cytometry panels look unexpectedly similar, while the gated percentages are different. One of the authors initially replied on Pubpeer and Twitter that the two images did not look identical, but later admitted they looked ‘intriguingly similar’.

    Source: DOI 10.1002/jcb.22135 [PubPeer],
    where two panels look very similar but have different gated percentages

    The same photo appears to have been used in two papers, DOI: 10.1016/j.nmni.2017.12.006 [PubPeer] where it represents Bacillus salis bacteria, and DOI: 10.1002/mbo3.638 [PubPeer] where it represents Gracilibacillus timonensis bacteria.

    Source: DOI 10.1016/j.nmni.2017.12.006 [PubPeer] and DOI: 10.1002/mbo3.638 [PubPeer].

    Category II duplications

    Two papers in this set contain Category II duplications, where images overlap or might have been rotated, mirrored, etc.

    DOI 10.4269/ajtmh.15-0436 [PubPeer] was corrected after being posted on PubPeer. In this paper, three of the four panels shown in Figure 1 showed the same specimen. The aspect ratio of these overlapping areas, marked below with blue and magenta boxes, was not always the same. It appears the images might have been stretched differently, which would be unexpected if the errors were unintentional. Yet, the journal accepted a new set of panels in the July 2020 correction. The authors wrote: “These resulted from errors by a researcher that were missed by other authors. Specifically, a single image was inappropriately inserted to represent three different experiments.”

    Source: DOI 10.4269/ajtmh.15-0436 [PubPeer]
    Three panels appear to overlap with each other. This concern was addressed with a correction.

    Paper DOI 10.1371/journal.pone.0010041 [PubPeer] contains a figure in which two panels might be showing the same leaf.

    Source: DOI 10.1371/journal.pone.0010041 [PubPeer]
    Leaf A looks very similar to leaf B

    In DOI 10.1128/JCM.01714-06 [PubPeer], published by the American Society for Microbiology (ASM), Figures 3A and 4A appear to show the same blot, albeit cropped differently. Of note, the name of senior author Didier Raoult was removed some time between the acceptance of the paper [JCM Accepts version; archived] and the publication of the PDF on the journal’s website. In 2006, the year this paper was published, Raoult was banned from publishing in ASM journals for 1 year, which might have something to do with this author name removal (source: https://science.sciencemag.org/content/335/6072/1033).

    Source: DOI 10.1128/JCM.01714-06 [PubPeer]
    Panel A in Figure 3 looks very similar to panel A in Figure 4

    In DOI 10.1089/vbz.2012.1083 [PubPeer], two Western blot strips look unexpectedly similar, albeit at different exposures and cropping. Found by PubPeer user Trichoderma Viridescens.

    Source: DOI 10.1089/vbz.2012.1083 [PubPeer]
    Two Western blot strips representing different sera look more similar than expected

    In DOI: 10.1016/j.cimid.2018.06.004 [PubPeer], two blot panels incubated with different sera look remarkably similar, albeit shown at different exposures and with different labels. The authors did not reply on PubPeer, but admitted to the journal that two panels were indeed duplicated. The paper was corrected a couple of months later.

    Source: DOI: 10.1016/j.cimid.2018.06.004 [PubPeer] – now corrected
    two blot panels look remarkably similar

    Category III duplications

    At least 10 AMU/IHUMI papers with image problems contain Category III duplications, where figures appear to have been altered or to contain duplicated elements. These types of duplications are the most likely to be the result of an intention to mislead.

    Category III duplications were found in these papers:

    Source: DOI 10.1128/jcm.35.7.1715-1721.1997 [PubPeer]
    Two pairs of lanes in Western blots look more similar than expected
    Source: DOI 10.1128/iai.68.10.5673-5678.2000 [PubPeer]
    Some Western blot lanes appear to look more similar than expected
    Source: DOI 10.1128/IAI.69.4.2520–2526.2001 [PubPeer]
    Cells in these microscopy panels appear to be surrounded by sharp horizontal and vertical background transitions.
    Source: DOI 10.1128/JCM.39.2.430–437.2001 [PubPeer]
    Boxes of the same color mark areas (some including bands) that look more similar to each other than expected in these DNA gels
    Source: DOI 10.1086/379080 [PubPeer]
    Certain areas in this Southern blot look unexpectedly similar
    Source: DOI 10.1128/JCM.43.2.945–947.2005 [PubPeer]
    Some parts of these Western blots look more similar than expected, as marked with boxes of the same color
    Source: DOI 10.1128/AEM.03075-05 [PubPeer]
    Some areas in this Transmission Electron Microscopy photo look unexpectedly similar to each other
    Source: DOI 10.3389/fmicb.2010.00151 [PubPeer]
    The bottom photo appears to have been created with parts of the top photo, which in turn was taken from Wikipedia without attribution. The original by Philippinjl can be found here: https://commons.m.wikimedia.org/wiki/File:Api20e.jpg
    Source: DOI 10.1371/journal.ppat.1003827 (now retracted) [PubPeer]
    Three lanes in this DNA gel appear to look very similar to each other
    Source: DOI 10.1016/B978-0-323-55512-8.00069-7 [PubPeer]
    Two areas in Western blots A and C appear to look very similar to each other, while the marker lane looks different

    Part 2 of this series — still to be written — will explore papers by the IHUMI/AMU group with potential ethical concerns.

    in Science Integrity Digest on June 03, 2021 05:46 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Dynamic scientific visualizations in the browser for Python users

    As a scientist, interacting with data allows you to gain new insight into the phenomena you’re studying. If you read the New York Times, the D3 docs or you browse distill, you’ll see impressive browser-based visualizations – interactive storytelling that not only accurately represents data but brings your attention to surprising aspects of it. Making good visualizations of your work can increase the accessibility of your research to a wide audience. My favorite example here (already several years old) is the semantic map of the brain from James Gao, Alex Huth, Jack Gallant and crew – at the time, visualizing brain map data in the browser was unheard of, and here was a visualization of how semantics map to different brain areas, in the browser, no downloads necessary.

    Snapshot of brain data visualization from James Gao, Jack Gallant, Alex Huth and crew. Built on three.js and tornado. You can make these kind of visualizations yourself with the pycortex library.

    Coming from the Python world, you’ll find that most visualizations you make on a day-to-day basis are very different from these: static plots, because that’s what’s convenient to make in matplotlib. How can you go from the Python world of static plots to the wizardry of Javascript?

    A big wall you’ll run into is that modern web development is vast. Getting to proficiency is hard: it’s very easy to get discouraged and fall off the wagon before you get to build something interesting. What I’ve assembled here is a kind of roadmap so that you can start building interesting dynamic visualizations with as low a barrier to entry as possible, while building a self-reinforcing skillset.

    • Python-based generators. Application areas: dashboards, lightweight interactions. Pros: easy distribution, low barrier to entry. Cons: limited interactivity. What you learn: the grammar of interactive graphics.
    • Jupyter notebook. Application areas: daily data exploration. Pros: you’re already using it! Cons: distribution not straightforward. What you learn: snippets of HTML, JS, SVG.
    • Walled garden. Application areas: all-in-one visualizations (widgets). Pros: limited things to learn. Cons: what you learn is non-transferable. What you learn: the walled garden.
    • JS notebooks. Application areas: literate programs. Pros: REPL environment, good docs. Cons: learning curve a bit steep. What you learn: core JS plus visualization libraries.
    • The web. Application areas: web dataviz. Pros: you control everything. Cons: floodgates of the tech stack, learning curve quite steep. What you learn: packaging, components, deployment.

    Finally, at the end of this article, I cover JS libraries you might want to learn for plotting or numeric work. I’ve built this from experience over the last year experimenting with about a half-dozen interactive visualization methods; of course, if you know better curricula, books or tutorials, please let me know in the comments.

    Step 0: Python-based libraries

    Your first option is to not code for the web at all by leveraging pure Python-based libraries. Many libraries aim to fill a niche for everyday interactive graphics, which include minimal interactions – hover interactions, tooltips, interactive legends, etc. This includes Plotly, Bokeh w/Holoviews and Altair w/vegalite. On the mapping side, ipyleaflet is worth mentioning.

    Dashboard tools

    There has been an explosive growth in the past few years. These solutions fill the same niche as the ubiquitous Shiny fills in the R ecosystem: a way for data science teams to create dashboards. You can nevertheless use them for interactive scientific visualizations. Getting familiar with this kind of visualization will help you build a grammar of interactive visualizations and rapidly prototype more fully interactive visualizations.

    Here’s what to expect: a Python script defines a visualization (plots, videos, images, maps, etc.) along with widgets, such as sliders and combo boxes. The whole thing gets served via a server which caches requests intelligently – changes in sliders trigger remote fetches which refresh the visualization. Some custom visualizations may be written in JS. Oftentimes, dashboard solutions are associated with a specific plotting library, so all the docs will be written in terms of that library – but generally you have some flexibility to use other frameworks.
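    The widget-change/recompute loop described above can be sketched in a few lines of plain Python. Here the caching decorator stands in for the framework’s server-side request cache, and all names and values are purely illustrative, not any framework’s real API:

```python
# A toy sketch of the dashboard pattern: widget changes trigger a
# (cached) recompute of the visualization. Real frameworks (Streamlit,
# Dash, Panel) wrap this loop in a server; names here are illustrative.
from functools import lru_cache

@lru_cache(maxsize=32)
def render(freq, amplitude):
    # Stand-in for an expensive plot; cached so revisited slider
    # positions are not recomputed.
    points = [amplitude * ((i * freq) % 10) for i in range(5)]
    return f"plot(freq={freq}, amp={amplitude}) -> {points}"

# Simulate a user dragging a slider back and forth
for freq in [1, 2, 1]:
    print(render(freq, amplitude=3))

print(render.cache_info().hits)  # the revisited position came from cache
```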

    One clarification: if you want to share your visualizations, you’ll need a server. All solutions have a basic open source offering allowing you to DIY deploy such a server yourself via e.g. a cloud provider. If that sounds messy, you can instead opt for a managed cloud solution – there’s often a free tier and paid tiers with more capabilities available. Here they are:

    • Panel. Deployment: DIY. Peculiarities: works inside of jupyter and standalone. Associated plotting library: Bokeh, with support for other charting libraries.
    • Streamlit. Deployment: DIY & commercial. Peculiarities: very easy deploys. Associated plotting library: agnostic.
    • Dash. Deployment: DIY & commercial. Peculiarities: supports lots of customization, aimed at enterprise users. Associated plotting library: Plotly, with support for other libraries.

    Of these, Streamlit seems to have the most momentum according to GitHub stars.

    See this article for a detailed breakdown.

    Step 1: build for jupyter notebook

    Perhaps the lowest friction way of doing dynamic HTML-based visualization in Python is in the jupyter notebook environment (or jupyter lab or colab). After all, you’re writing Python inside the browser right now, why not use the same environment to whip up a quick interactive visualization? A big advantage of this type of solution is that it minimizes the contortions you have to go through to translate Python data types to javascript. And because you’re writing HTML, JS and Python side-by-side, your iteration cycles are very fast. The biggest problem is that most widgets you will create here will not display correctly when you publish them to the web, e.g. on GitHub: people will have to download the notebook to get the interactive effect. They will work inside of Binder, however. They are a great way to dip your toe into this world – at this point, you can probably get by by copying and pasting snippets of HTML, CSS, SVG and Javascript rather than formally learning them.

    _repr_html_ and IPython.core.display

    Basic demo of _repr_html_

    When you print a variable at the end of a Jupyter cell, how does Jupyter know how to display it? The answer is that it looks for a _repr_html_ method for the associated class and calls it. This function can return any arbitrary snippet of HTML, including an iframe which does interactive things, or a snippet which embeds javascript. See this colab for examples on how to do this. You can do fairly sophisticated things with this simple method, for instance building an interactive widget for pandas.
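    As a minimal illustration of this hook, here is a tiny class whose instances render as a colored box in a notebook (the class name, color, and markup are just illustrative):

```python
class ColorSwatch:
    """A tiny object that Jupyter renders as a colored box via _repr_html_."""
    def __init__(self, color):
        self.color = color

    def _repr_html_(self):
        # Jupyter calls this method (if present) to get rich HTML output
        # when the object is the last expression in a cell.
        return (f'<div style="width:60px;height:20px;'
                f'background:{self.color}"></div>')

swatch = ColorSwatch("rebeccapurple")
print(swatch._repr_html_())
```

In a notebook, ending a cell with `swatch` would display the box itself rather than the HTML string.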

    You may also trigger a rich display from anywhere inside a cell, not just at the end, using the ipython API. Again, you can include any number of web technologies, including embedding scripts inside the embedded HTML.

    from IPython.core.display import display, HTML
    display(HTML('<h1>Hello, world!</h1>'))

    ipywidgets and interact

    ipywidgets, also known as jupyter-widgets or simply widgets, are interactive HTML widgets for Jupyter notebooks and the IPython kernel. Notebooks come alive when interactive widgets are used. Users gain control of their data and can visualize changes in the data. Learning becomes an immersive, fun experience. Researchers can easily see how changing inputs to a model impact the results.

    The ipywidget manual

    Next in line in complexity are ipywidgets. Like IPython.core.display, ipywidgets can display rich HTML; however, widgets can also have events which cause them to call Python. See this notebook for an example. You can use them, for instance, to navigate through a plot with widgets. In fact, this pattern is so common that there’s a convenience function called interact which allows you to do just slider-based interactions.
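    Under the hood, interact inspects a function’s default argument values to decide which widget to build for each parameter (a float becomes a slider, a bool a checkbox, and so on). A rough sketch of that inference idea in plain Python follows; this is not the real ipywidgets implementation, and the function and names are illustrative:

```python
# A rough sketch of how ipywidgets.interact picks a widget from a
# function's default argument values (not the real implementation).
import inspect

def pick_widget(default):
    # bool must be checked before int, since bool is a subclass of int
    if isinstance(default, bool):
        return "Checkbox"
    if isinstance(default, int):
        return "IntSlider"
    if isinstance(default, float):
        return "FloatSlider"
    if isinstance(default, str):
        return "Text"
    if isinstance(default, (list, tuple)):
        return "Dropdown"
    return "unsupported"

def infer_widgets(func):
    sig = inspect.signature(func)
    return {name: pick_widget(p.default)
            for name, p in sig.parameters.items()}

def plot_wave(freq=1.0, cycles=3, label="sine", show_grid=True):
    pass  # imagine a matplotlib plot here

print(infer_widgets(plot_wave))
```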


    Voila example, from their docs.

    Building on jupyter widgets, you can release fully fledged dashboards based in jupyter notebooks. This is called voila. The idea is that you write a regular jupyter notebook, and then you can serve it through a tornado server on the web. The end user sees a dashboard which is the concatenation of the notebook’s output cells, without the code. You can deploy this on Heroku or Google App Engine to make your application world accessible.

    Step 2: Walled gardens

    At this point, you might feel stifled by the limited interactivity afforded by the solutions offered so far. And you might feel intimidated by full-fledged modern web development. Many products exist to fill the gap. However, I argue here that it’s better to bite the bullet and skip straight ahead to writing real javascript in a notebook environment, because you will continue building on core skill sets around web technology. Nevertheless, I mention these options here for completeness and because they may satisfy your particular use case.


    Unity

    Unity is a game development engine and IDE. It’s a great environment to build offline games and VR visualizations. It transpiles exported scenes and C#-based scripting to WebAssembly (WASM), so it also runs in the browser. C# is a modern language with operator overloading. 2d and 3d interaction are easily accommodated. Cons: widgets (sliders, input boxes, etc.) must be created in the Unity interface and are non-native. The Unity applet acts as a walled garden, making it difficult to interact with the native DOM. The C# code cannot be compiled with a command line tool – it must be combined with the project and compiled in the Unity editor, and the project is several GBs, hence can’t be tracked in GitHub. Furthermore, the math libraries in C# are pretty limited. Overall, a pretty limited option for the web, though it has considerable appeal if you’re doing 3d viz.

    p5.js and openprocessing

    OpenProcessing sketch, Self-organized critical model

    p5.js and openprocessing bring processing – a creative coding platform – to javascript. You write sketches, with an init function and a loop function, which define how the sketch works. You draw into the sketch with functions that allow you to draw lines, rectangles, sprites, polygons, etc. The appeal of processing is that you’re learning creative coding and there are very few barriers to entry. The biggest con is that while you learn a little bit of javascript in these environments, you really don’t learn modern web technologies. At the end of the day, you’ll know how to draw into your sketch, but you won’t know about canvas, svg, css, etc. – and you’ll be limited in the libraries that you can import into that environment. That being said, they can be part of your toolkit, especially p5.js which can interop with plain JS.

    Step 3: Javascript notebooks

    The next step in complexity is Javascript-based notebooks. Just as there are Python-based notebook environments that use remote kernels (jupyter, colab), there are local notebook-based environments that use Javascript that runs locally in the browser. That means you get the full power of the web – html, canvas, css, svg, javascript with remote libraries, etc. – but you’re doing it in an environment on wheels which supports the familiar REPL workflow of jupyter notebooks. These are environments which are ideal for literate programming which mixes text, math, code and visualization.

    For many, I suspect this will be the right level of complexity. In practice, using notebooks means you will need to export your data out of Python in order to visualize it – for many use cases, json will do the trick.
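    For example, dumping a result dictionary with the standard library’s json module produces a file that a JS notebook can load directly; the file name, keys, and values below are purely illustrative:

```python
# Exporting analysis results from Python as JSON, ready to be loaded
# by a JS notebook. Names and values here are illustrative.
import json

results = {
    "t": [0.0, 0.1, 0.2, 0.3],          # time axis
    "v": [-65.0, -64.2, -50.1, 30.0],   # e.g. a membrane potential trace
    "meta": {"units": "mV", "dt": 0.1},
}

with open("results.json", "w") as f:
    json.dump(results, f)

# Round-trip check: JSON preserves the structure
with open("results.json") as f:
    print(json.load(f)["meta"]["units"])
```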


    Text, code, equations, dynamic visualizations can all be mixed in observable. Code here.

    Observable is a notebook environment that runs in the browser, created by Mike Bostock, creator of d3. Observable is a little like jupyter notebooks, but it uses a kind of functional style where each cell may only have one output, and it resolves the DAG of dependencies to figure out which cells to recompute at what time, thus avoiding the pitfalls of ipython out-of-order execution issues. It has an excellent set of starter tutorials. The observable runtime is open-source, but the notebook environment is closed-source. It continues to be feverishly developed – they just added multi-user collaboration. Strictly speaking, it does not use Javascript, and it doesn’t make it very easy to export strict JS out of it – however, you can embed single cells on a webpage. Overall, a great place to get your REPL workflow going.
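    The dependency-resolution idea can be sketched in a few lines of Python: treat cells as nodes in a DAG and recompute only the downstream dependents of a changed cell, in topological order. This is a toy model of the concept, nothing like the real Observable runtime:

```python
# Toy sketch of Observable-style reactivity: cells form a DAG, and
# changing one cell recomputes only its (transitive) dependents.
from graphlib import TopologicalSorter

# cell -> cells it depends on
deps = {"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b", "c"}}

def downstream(changed):
    """All cells that (transitively) depend on `changed`, in eval order."""
    # Topological order guarantees predecessors are visited first,
    # so one pass propagates "dirtiness" through the whole DAG.
    order = list(TopologicalSorter(deps).static_order())
    dirty = {changed}
    for cell in order:
        if deps[cell] & dirty:
            dirty.add(cell)
    return [c for c in order if c in dirty and c != changed]

print(downstream("b"))  # only "d" depends (transitively) on "b"
```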


    An interactive visualization of coupled oscillators in Starboard, mixed Python and JS. Source.

    Starboard is an open-source interactive notebook environment. It uses a different metaphor than Observable, sticking much closer to the Jupyter metaphor with its advantages and disadvantages. The biggest thing it has going for it is that it supports both javascript and python via Pyodide. What’s that you say? Python in the browser? Why yes, in fact:

    “Pyodide brings the Python 3.8 runtime to the browser via WebAssembly, along with the Python scientific stack including NumPy, Pandas, Matplotlib, SciPy, and scikit-learn. The packages directory lists over 75 packages which are currently available. In addition it’s possible to install pure Python wheels from PyPi. Pyodide provides transparent conversion of objects between Javascript and Python. When used inside a browser, Python has full access to the Web APIs.”

    Pyodide docs

    That means you can keep some of the numerical computation in Python and mix and match JS and Python as necessary – though you’ll probably still want to use Javascript to manipulate the DOM. Pyodide was originally spun out by Mozilla from another (now defunct) notebook project called iodide, so starboard, along with a few others like jupyterlite, are carrying the torch. I really like starboard – the editor feels great, it’s snappy, it has surprisingly little confusing magic going on. However, the documentation and examples are minimal, and it would benefit from a series of tutorials. I think observable is the more viable and mature option in this space for now, but starboard is one to watch.

    Step 4: the web

    Now you’ve had a chance to learn javascript, html, svg, enough css to do the job in the context of dynamic visualizations, and you’re ready to tackle the world – the world wide web! It’s tempting at this point to learn modern web development, with its complex toolchains (npm + react + webpack + …). I had an interesting conversation with Chris Olah recently, who mentioned that when it comes to doing visualizations, the core skillsets – html, the dom, css, svg, etc. – are more important than learning any particular framework, and it helped organize this article. If you do need a component framework to work within a larger page, a more lightweight framework, like Svelte or vue.js, might satisfy your needs.

    One very annoying gotcha you’ll run into is packaging differences between node and JS built for the web. A lot of packages you’ll use for data analysis in JS – like mathjs – were originally built for the node server environment. You need to build them to use them, which means learning about build tools, and pretty soon you’ve wasted a couple of days figuring out your transpiler. There are now cloud transpilers, in particular skypack and JSPM, that will compile node JS libraries for you so you can use them immediately without using a complex toolchain. That means you can import node packages in JS much in the same way you would import pip-installable packages in Python:

    import * as mathjs from 'https://cdn.skypack.dev/mathjs@8.0.1';

    Thanks to Guido Zuidhof for patiently explaining this to me.

    Some templates exist to facilitate the creation of interactive blog posts or articles. Most notably, the distill template and the Idyll language facilitate the creation of explanatory articles (think those cool articles on the nytimes website which highlight different parts of graphs when you scroll down). Some of the best examples of this type of article can be seen on the VISxAI website.

    For deployment, provided you don’t rely on external services, you could use github pages, or use a CDN service like netlify or Vercel. If you need to interact with data, then you may need to both create and deploy a data source via a REST API. Something lightweight like flask deployed on heroku would do the trick.
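    As a sketch of such a data source, here is a minimal JSON endpoint built with only the Python standard library (Flask would be more ergonomic in practice; the payload and port handling here are illustrative):

```python
# A minimal JSON data endpoint using only the standard library.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

DATA = {"series": [1, 2, 3], "label": "demo"}  # illustrative payload

class DataHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(DATA).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        # CORS header so a visualization served elsewhere can fetch this
        self.send_header("Access-Control-Allow-Origin", "*")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), DataHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Fetch the data back, as a browser-side fetch() would
url = f"http://127.0.0.1:{server.server_port}/data"
with urllib.request.urlopen(url) as resp:
    payload = json.loads(resp.read())
print(payload["label"])
server.shutdown()
```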

    JS libraries

    As you’re going through this process, it will often feel like the garden of forking paths: seemingly infinite decisions to make. However, there’s a small core set of JS libraries you might interact with:

    I think this is enough of a curriculum to carry you from 0 to 1 over a six-month period. Practice, learn, and in no time you’ll be ready to make solid visualizations!

    Further resources

    in xcorr.net on June 03, 2021 02:39 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    arXiv’s membership program is now based on submissions

    arXiv’s members have provided approximately 25% of our operating budget for the past ten years, supporting arXiv’s mission to provide a reliable open platform for sharing research. By becoming arXiv members, more than 230 institutions around the world have made a strong statement in favor of open access, open science, and sustainable academic publishing. Thank you, members!

    We are happy to announce our updated membership program, which was developed in collaboration with the Membership Advisory Board. This program is part of our sustainability model, complements arXiv’s diverse funding sources, including societies and other organizations, and ensures that arXiv will have the funding required to continue meeting researchers’ evolving needs.

    arXiv membership is inclusive, flexible, and offers your institution a high value, low-risk, budget-conscious option to serve your scholarly community. Members receive public recognition, institutional usage statistics, eligibility to serve in arXiv’s governance, and more.

    How it works

    Universities, libraries, research institutes, and laboratories are invited to join or renew. For standard memberships, annual fees are based on submissions by institution, averaged over three years.

    First, find your institution’s overall submission rank here, as shown in the screenshot.

    Screenshot of the interface to find submissions by institution. Hover your cursor over the area indicated by the purple arrow to reveal the search icon.


    Then, match the overall rank to one of the contribution levels here:

    The top 100 institutions submitting articles to arXiv are invited to become members at the $5,000 level. The fee for institutions ranked between 101 and 500 is $2,500, and the fee for institutions ranked above 500 is $1,000.

    Institutions that join or renew now will lock in their membership fee for 2022, 2023, and 2024. Institutions joining as part of a consortium will receive a 10% discount. Country-level consortia will receive a higher discount, depending on the level of participation.


    In the example below, the university’s overall submission rank is 182, which corresponds to the $2,500 standard membership fee.

    Example submission data. The overall rank, which is the three-year submission average, is indicated by the purple arrow.
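    The schedule above can be captured in a small lookup function (amounts in USD; the 10% consortium discount is the one described in this post, and the function name is illustrative):

```python
# Sketch of the standard-membership fee schedule described above.
def standard_fee(overall_rank, in_consortium=False):
    if overall_rank <= 100:
        fee = 5000
    elif overall_rank <= 500:
        fee = 2500
    else:
        fee = 1000
    if in_consortium:
        fee *= 0.9  # 10% consortium discount
    return fee

print(standard_fee(182))  # the example rank from the post -> 2500
```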


    Two flexible alternatives

    If your institution is a champion of open access and has the budget to support a higher contribution, we offer the Champion membership at $10,000 and above. Other institutions use arXiv primarily for reading or face significant budget limitations. For these institutions, Community membership is our most inclusive option, at an amount between $250 and $1000. All members receive the same benefits and are eligible to serve in arXiv’s governance.

    Join or renew today

    Ready to commit to open access, open science, and sustainable academic publishing? Join, renew, or confirm your membership here.

    Do you represent a nonprofit or a company? Confirm your support here.

    Questions? We’re happy to help.

    Again, we are grateful to our strong community of supporters and everyone who contributed to developing our updated sustainability model and membership program.

    Thank you!

    in arXiv.org blog on June 02, 2021 03:44 PM.


    Announcing arXiv’s updated sustainability model

    In collaboration with the Membership Advisory Board and arXiv’s leadership team, we’re pleased to announce arXiv’s updated sustainability model. This model, which is an expansion of the existing membership program, complements funding received from Cornell University, the Simons Foundation, and individual donors — and ensures that arXiv thrives for years to come.

    For nearly ten years, members have supported arXiv with funding and expertise — and for that we are grateful! As arXiv has grown, the number of organizations we interact with has expanded greatly, reflecting arXiv’s interdisciplinary nature. To welcome this wonderful diversity and encourage the entire arXiv community to contribute to arXiv’s financial future, we’ve expanded our sustainability model to include three programs for specific types of supporters:


    Our membership program will remain central to arXiv’s sustainability. Universities, libraries, laboratories, and research institutes across the globe are invited to become members (or renew their existing memberships) and represent the scholars who use arXiv to read and share research. Members support arXiv with essential funding and scholarly communications expertise, and members are also eligible to serve in arXiv’s governance. Learn more about updates to the membership program here.


    Professional societies, government agencies, and other nonprofits collaborate with arXiv in a variety of ways, ranging from facilitating conference proceeding submissions to promoting open access compliance. Suggested contribution levels range from $5,000 to $100,000, or the in-kind equivalent. Become an affiliate or renew your support.


    Companies that benefit from arXiv’s services are encouraged to support the infrastructure with funding and in-kind donations such as developer time and cloud services. Suggested contribution levels range from $10,000 to $200,000, or the in-kind equivalent. Become a sponsor or renew your support.


    We are excited to grow our base of supporters, remain transparent about our funding sources, ensure that the cost to operate arXiv is distributed fairly across the community, and continue to meet researchers’ evolving needs. Specifically, our priority is to provide:

    • a continuous level of service to support exponential growth in submissions and downloads
    • seamless interoperability with other platforms
    • increased completeness and quality of metadata records, including support for affiliation and funding information
    • modernization of backend services and technological infrastructure for improved stability, performance and innovation

    If you already support arXiv, thank you! If not, we encourage you to consider how you or your organization can get involved. You’re invited to contact us to start the conversation.

    in arXiv.org blog on June 02, 2021 03:43 PM.


    Class-Balanced Loss Based on Effective Number of Samples

    This week on Journal Club session Minghua Zheng will talk about a paper "Class-Balanced Loss Based on Effective Number of Samples".

    With the rapid increase of large-scale, real-world datasets, it becomes critical to address the problem of long-tailed data distribution (i.e., a few classes account for most of the data, while most classes are under-represented). Existing solutions typically adopt class re-balancing strategies such as re-sampling and re-weighting based on the number of observations for each class. In this work, we argue that as the number of samples increases, the additional benefit of a newly added data point will diminish. We introduce a novel theoretical framework to measure data overlap by associating with each sample a small neighboring region rather than a single point. The effective number of samples is defined as the volume of samples and can be calculated by a simple formula (1-β^n)/(1-β), where n is the number of samples and β ∈ [0, 1) is a hyperparameter. We design a re-weighting scheme that uses the effective number of samples for each class to re-balance the loss, thereby yielding a class-balanced loss. Comprehensive experiments are conducted on artificially induced long-tailed CIFAR datasets and large-scale datasets including ImageNet and iNaturalist. Our results show that when trained with the proposed class-balanced loss, the network is able to achieve significant performance gains on long-tailed datasets.
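    The effective-number formula and the resulting re-weighting can be sketched in a few lines of Python (a minimal illustration of the idea from the abstract, not the authors' implementation; function names are mine):

```python
def effective_number(n, beta):
    # E_n = (1 - beta**n) / (1 - beta), with beta in [0, 1).
    # E_n grows sublinearly in n: each new sample adds less.
    return (1.0 - beta ** n) / (1.0 - beta)

def class_balanced_weights(counts, beta=0.999):
    """Weight each class by the inverse of its effective number of
    samples, normalized so the weights sum to the number of classes."""
    inv = [1.0 / effective_number(n, beta) for n in counts]
    scale = len(counts) / sum(inv)
    return [w * scale for w in inv]
```

Rare classes receive larger weights, so the loss is "re-balanced" toward the tail of the distribution.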


    Date: 2021/06/04
    Time: 14:00
    Location: online

    in UH Biocomputation group on June 02, 2021 10:10 AM.


    Did dreams evolve to transcend overfitting?

    A fascinating new paper proposes that dreams evolved to help the brain generalize, which improves its performance on day-to-day tasks. Incorporating a concept from deep learning, Erik Hoel (2021):

    “...outlines the idea that the brains of animals are constantly in danger of overfitting, which is the lack of generalizability that occurs in a deep neural network when its learning is based too much on one particular dataset, and that dreams help mitigate this ubiquitous issue. This is the overfitted brian [sic] hypothesis.”


    The Overfitted Brain Hypothesis (OBH) proposes that the bizarre phenomenology of dreams is critical to their functional role. This view differs from most other neuroscientific theories, which treat dream content as epiphenomenal, a byproduct of brain activity involved in memory consolidation, replay, forgetting, synaptic pruning, etc.

    In contrast, Hoel suggests that “it is the very strangeness of dreams in their divergence from waking experience that gives them their biological function.”

    The hallucinogenic, category-breaking, and fabulist quality of dreams means they are extremely different from the “training set” of the animal (i.e., their daily experiences).
    . . .

    To sum up: the OBH conceptualizes dreams as a form of purposefully corrupted input, likely derived from noise injected into the hierarchical structure of the brain, causing feedback to generate warped or “corrupted” sensory input. The overall evolved purpose of this stochastic activity is to prevent overfitting. This overfitting may occur within a particular module or task, such as a specific brain region or network, and may also involve generalization to out-of-distribution (unseen) novel stimuli.

    Speaking of overfitting, I was reminded of Google's foray into artificial neural networks for image classification, which was all the rage in July 2015. The DeepDream program is a visualization tool that shows what the layers of the neural network have learned:

    One way to visualize what goes on is to turn the network upside down and ask it to enhance an input image in such a way as to elicit a particular interpretation.

    The image above is characteristic of the hallucinogenic output from the DeepDream web interface, and it illustrates that the original training set was filled with dogs, birds, and pagodas.  DeepDream images inspired blog posts with titles like, Do neural nets dream of electric sheep? and Do Androids Dream of Electric Bananas? and my favorite, Scary Brains and the Garden of Earthly Deep Dreams.


    Hoel E. (2021). The overfitted brain: Dreams evolved to assist generalization. Patterns 2(5):100244.


    in The Neurocritic on June 01, 2021 06:50 AM.


    Know Your Brain: Spinothalamic Tract

    Where is the spinothalamic tract?

    The spinothalamic tract is a collection of neurons that carries information to the brain about pain, temperature, itch, and general or light touch sensations. The pathway starts with sensory neurons that synapse in the dorsal horn of the spinal cord. Next, neurons extend from the dorsal horn and decussate, or cross over to the other side of the spinal cord, before traveling up the spinal cord, through the brainstem, and to the thalamus. These neurons synapse with neurons in the thalamus, which then carry the information to the somatosensory cortex. See below for more details on the pathway of the spinothalamic tract.

    What is the spinothalamic tract and what does it do?

    The spinothalamic tract actually consists of two pathways that are distinct in function: the anterior spinothalamic tract and the lateral spinothalamic tract.

    Lateral spinothalamic tract

    The pathway of the lateral spinothalamic tract.

    The lateral spinothalamic tract is the main pathway for carrying information about pain and temperature from the body to the brain. It is also thought to carry information about itch.

    Sensations that are carried by the lateral spinothalamic tract begin with receptors such as nociceptors, which detect painful sensations, or thermoreceptors, which detect changes in temperature. These receptors pass a signal to the initial neurons of the spinothalamic tract, which transmit the signal to the spinal cord. Here, the neurons briefly either ascend or descend as part of a tract called Lissauer’s tract before synapsing on neurons in the dorsal horn of the spinal cord that belong to cell groups like the nucleus proprius or substantia gelatinosa; the latter is an important area for the modulation of pain signals.

    The secondary neurons in the lateral spinothalamic tract cross over to the other side of the spinal cord and then ascend in the spinal cord, through the brainstem, and to the ventral posterolateral (VPL) nucleus in the thalamus. Because this pathway travels in the anterolateral portion of the spinal cord and brainstem, it is often referred to as the anterolateral system. In the thalamus, spinothalamic neurons synapse on cells that will carry the sensory information to the primary somatosensory cortex, which is the main processing area for sensations from the body.

    Anterior spinothalamic tract

    The pathway of the anterior spinothalamic tract.

    The anterior spinothalamic tract (aka ventral spinothalamic tract) carries general touch or light touch sensations from the body. This includes touch sensations that don’t involve pressure, such as the stroking of hair or air lightly blowing on the skin.

    These sensations begin with sensory receptors in the skin, which pass a signal onto neurons that travel to the spinal cord. In the spinal cord, these neurons give rise to ascending and descending branches that synapse on neurons in the dorsal horn of the spinal cord. Secondary neurons arise from the nucleus proprius in the dorsal horn, cross over to the other side of the spinal cord, and ascend to the thalamus near the lateral spinothalamic tract. From there, the information is carried on to the somatosensory cortex.

    Damage to the spinothalamic tract

    A spinal cord injury that involves the spinothalamic tract can lead to distinctive sensory deficits. Because neurons in the tract cross over to the other side of the spinal cord before traveling up to the brain, they are carrying information from the opposite side of the body. Thus, if there is damage to one side of the spinal cord, it can cause a loss of pain, temperature, and light touch sensations on the side of the body opposite from where the damage occurred.


    Augustine JR. Human Neuroanatomy. 2nd edition. Hoboken, New Jersey: Wiley & Sons, Inc.; 2017.

    Learn more:

    2-Minute Neuroscience: Pain and the Anterolateral System

    in Neuroscientifically Challenged on May 27, 2021 10:07 AM.


    Study of weak / strong multiplication in gain modulation

    This week on Journal Club session Aamir Khan will talk about a paper "Study of weak / strong multiplication in gain modulation".

    This is an extension of my previous work on the multiplicative response of spiking neural networks. In our artificial life platform, I evolve a gain-modulated network with two inputs: d (the driving input) and m (the modulatory input). The network aims for a multiplicative sigmoidal/Gaussian response modulated by the modulatory input m. The response of the network exhibits three properties: 1. saturation, 2. nonlinearity, and 3. multiplicity. The multiplicative capacity of the network varies from strong to weak: for some modulatory inputs there is strong multiplication and for others weak multiplication, and I have classified the network's responses as weak or strong accordingly. Alongside this classification, further constraints are introduced: the evolved networks tend to be large (five nodes), so to reduce the size I reward the response of one interneuron and treat it as the output node.

    Previous work


    Date: 2021/05/28
    Time: 14:00
    Location: online

    in UH Biocomputation group on May 26, 2021 10:11 AM.


    Open for submissions (Part 1)

    It has already been a month since we announced we’re launching five new journals. For those of you patiently waiting, we are excited to announce that the journals are now OPEN for submissions! 

    The responses we’ve seen so far have been overwhelmingly positive. We’d like to say a HUGE THANK YOU to those of you who have expressed your support on social media, via email, and the many hundreds of you who have applied to join the editorial boards of the journals–we have been delighted to see how our reasons for launching them have resonated with you.

    The journals’ websites now contain much more information on each journal’s mission, scope, personnel, and submission instructions, and hopefully everything you need to consider submitting your work:

    If you want to explore the journal most relevant to you and consider it for your future work, please consider this blog post an appropriate “jumping-off point”. In summary: we have affordable, APC-free business models for all our journals, available through institutional partnerships which allow unlimited publishing by all authors at the institution. We also have publishing fees/APCs for those who still need or prefer them, and an established APC-waiver program for those who cannot afford APCs but whose institutions are not yet under an institutional agreement. Please presume there is a way, appropriate to you and your context, for you to publish in these new journals!

    For those of you who want to know more details about our institutional models, in particular librarians and others managing Open Access budgets, please read Part 2 of this announcement.

    Photo by Virginia Johnson on Unsplash

    The post Open for submissions (Part 1) appeared first on The Official PLOS Blog.

    in The Official PLOS Blog on May 25, 2021 02:00 PM.


    Open for submissions (Part 2 – equitable OA models)

    Our five new journals are now Open for Submissions! This post (Part 2) is specifically geared towards librarians and/or those of you who manage budgets that support Open Access publishing. Or, simply those of you who would like more details about how we are supporting our activities. 

    If you have been keeping up to date with our business model innovations, you already know we have been working hard to move beyond the APC to develop more equitable and regionally appropriate ways to support Open Access. In the past eighteen months we’ve developed Flat Fee agreements and Community Action Publishing. (Please see those links for the full details or recap of those models.) 

    We’re developing new institutional models to experiment and get things right, and we thank all of you in the library community who have given us feedback. We’re learning that, right now, different types of journals can be better supported by different types of institutional models. Also, we’ve learned that some authors, funders, and countries still prefer APCs due to how they are currently funding or administering their OA publishing. We will continue to support such publication fee models for those who prefer or need them, while continuing to experiment with and develop these new institutional models to make OA work for all authors. 

    Institutional models for the five new journals

    Global Equity

    Our new Global Equity model empowers institutions in every region of the world to provide unlimited Open Access publication support for their authors through a single, annual fee that is affordable and equitably reflects regional economies. The annual fee is based on:

    • an institution’s research output in the relevant subject area, helping us understand the likely potential to publish in the journal, and 
    • the relevant country’s World Bank lending tier (i.e., Low/Low-Middle Income Country (L/LMIC), Upper Middle Income Country (UMIC), or High Income Country (HIC)). 

    The Global Equity model’s annual price will therefore allow a large institution in an LMIC to pay less than one of comparable size in an HIC, but more than a small institution in an LMIC. 

    Authors at institutions in Research4Life (R4L) countries (both group A and group B) can publish for free, regardless of the size of the institution.

    The following table shows the annual prices for institutions, per journal. Tier 1 is a high-output institution in the subject area; Tier 6 is a low-output institution in the subject area. For example, a low-output institution in an LMIC interested in PLOS Climate would pay $350 per year to cover unlimited publishing by its authors in that journal.

    Output tier | HIC    | UMIC   | L/LMIC
    Tier 1      | $6,000 | $5,400 | $2,100
    Tier 2      | $4,500 | $4,050 | $1,575
    Tier 3      | $3,750 | $3,375 | $1,313
    Tier 4      | $3,000 | $2,700 | $1,050
    Tier 5      | $2,000 | $1,800 | $700
    Tier 6      | $1,000 | $900   | $350
    Global Equity model prices by research output tier and country category
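    Read as a lookup, the pricing scheme is a simple two-key table. A sketch in Python; note that assigning the three price columns to HIC/UMIC/L-LMIC is my reading of the table, anchored by the $350 Tier 6 L/LMIC figure in the PLOS Climate example above:

```python
# Annual Global Equity prices per journal (rows: output tiers 1-6).
# Column-to-category assignment is inferred from the surrounding text.
PRICES = {
    "HIC":    [6000, 4500, 3750, 3000, 2000, 1000],
    "UMIC":   [5400, 4050, 3375, 2700, 1800, 900],
    "L/LMIC": [2100, 1575, 1313, 1050, 700, 350],
}

def annual_price(country_category, output_tier):
    """Annual per-journal price for an institution, by World Bank
    lending category and research-output tier (1 = highest output)."""
    return PRICES[country_category][output_tier - 1]

# A low-output (Tier 6) institution in an L/LMIC pays $350 per journal.
print(annual_price("L/LMIC", 6))  # → 350
```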

    Summary Information and Comparison Table

    Many of you reading this post are likely librarians or other people administering OA budgets. However, if you are an author who is simply interested in submitting to a PLOS journal, please presume there is a way, appropriate to you and your context, for you to publish in these new journals:

    • If APCs do not work for you and you would like your institution to join any of these institutional agreements, please send your librarian this link, or refer them via this form.
    • If you are a librarian, please contact us via this link.
    • If APCs are preferred, or are necessary for you, they are listed in the table below. 
    • Our waiver policies and fee assistance programs for APCs are unchanged.

    Below is a table that summarizes the information for our entire portfolio of twelve journals, with the five new journals indicated.

    Journal | Institutional model | Action for authors | Publishing fee (if preferred/necessary)
    PLOS Climate NEW | Global Equity | Ask your librarian to explore a Global Equity model with PLOS here | APC:
    PLOS Water NEW | Global Equity | Ask your librarian to explore a Global Equity model with PLOS here | APC:
    PLOS Global Public Health NEW | Global Equity | Ask your librarian to explore a Global Equity model with PLOS here | APC:
    PLOS ONE | Flat Fee agreement (covers all six journals listed here) | Ask your librarian to explore a Flat Fee agreement with PLOS here | APC: $1,749 Regular article; $1,339 Registered Report Protocol; $775 Registered Report Article; $1,100 Lab Protocols & Study Protocols
    PLOS Computational Biology | (see above) | (see above) | APC:
    PLOS Digital Health NEW | (see above) | (see above) | APC:
    PLOS Genetics | (see above) | (see above) | APC:
    PLOS Neglected Tropical Diseases | (see above) | (see above) | APC:
    PLOS Pathogens | (see above) | (see above) | APC:
    PLOS Sustainability and Transformation NEW | Community Action Publishing | Ask your librarian to explore Community Action Publishing with PLOS here | Non-Member Fee for authors not covered by CAP:
    PLOS Biology | Community Action Publishing | Ask your librarian to explore Community Action Publishing with PLOS here | Non-Member Fee for authors not covered by CAP: $4,000 Research Article; $3,350 Discovery Report; $2,250 Update Articles
    PLOS Medicine | Community Action Publishing | Ask your librarian to explore Community Action Publishing with PLOS here | Non-Member Fee for authors not covered by CAP:
    Available institutional models and publishing fees for all 12 PLOS journals

    The post Open for submissions (Part 2 – equitable OA models) appeared first on The Official PLOS Blog.

    in The Official PLOS Blog on May 25, 2021 01:59 PM.


    Next Open NeuroFedora meeting: 24 May 1300 UTC

    Photo by William White on Unsplash.

    Please join us at the next regular Open NeuroFedora team meeting on Monday 24 May at 1300UTC in #fedora-neuro on IRC (Freenode). The meeting is a public meeting, and open for everyone to attend. You can join us over:

    You can use this link to convert the meeting time to your local time. Or, you can also use this command in the terminal:

    $ date --date='TZ="UTC" 1300 today'
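    Alternatively, a short Python snippet (standard library only; my illustration, not from the post) does the same conversion to your local timezone:

```python
from datetime import datetime, timezone

# 24 May 2021, 13:00 UTC, rendered in the local timezone
meeting_utc = datetime(2021, 5, 24, 13, 0, tzinfo=timezone.utc)
print(meeting_utc.astimezone().strftime("%Y-%m-%d %H:%M %Z"))
```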

    The meeting will be chaired by @ankursinha. The agenda for the meeting is:

    We hope to see you there!

    in NeuroFedora blog on May 24, 2021 07:52 AM.


    Feasibility of Topological Data Analysis for Event-Related fMRI

    This week on Journal Club session Shabnam Kadir will talk about a paper "Feasibility of Topological Data Analysis for Event-Related fMRI".

    Recent fMRI research shows that perceptual and cognitive representations are instantiated in high-dimensional multivoxel patterns in the brain. However, the methods for detecting these representations are limited. Topological data analysis (TDA) is a new approach, based on the mathematical field of topology, that can detect unique types of geometric features in patterns of data. Several recent studies have successfully applied TDA to study various forms of neural data; however, to our knowledge, TDA has not been successfully applied to data from event-related fMRI designs. Event-related fMRI is very common but limited in terms of the number of events that can be run within a practical time frame and the effect size that can be expected. Here, we investigate whether persistent homology (a popular TDA tool that identifies topological features in data and quantifies their robustness) can identify known signals given these constraints. We use fmrisim, a Python-based simulator of realistic fMRI data, to assess the plausibility of recovering a simple topological representation under a variety of conditions. Our results suggest that persistent homology can be used under certain circumstances to recover topological structure embedded in realistic fMRI data simulations.

    How do we represent the world? In cognitive neuroscience it is typical to think representations are points in high-dimensional space. In order to study these kinds of spaces it is necessary to have tools that capture the organization of high-dimensional data. Topological data analysis (TDA) holds promise for detecting unique types of geometric features in patterns of data. Although potentially useful, TDA has not been applied to event-related fMRI data. Here we utilized a popular tool from TDA, persistent homology, to recover topological signals from event-related fMRI data. We simulated realistic fMRI data and explored the parameters under which persistent homology can successfully extract signal. We also provided extensive code and recommendations for how to make the most out of TDA for fMRI analysis.
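    To make "persistent homology" slightly more concrete: its simplest piece, H0 (connected components), can be computed for a point cloud with nothing more than sorted pairwise distances and a union-find, since the finite H0 death times of a Vietoris-Rips filtration are exactly the minimum-spanning-tree edge lengths. A toy sketch (my illustration, not the paper's pipeline; real analyses use dedicated TDA libraries):

```python
from itertools import combinations

def h0_persistence(points, dist):
    """Death times of the finite H0 bars (all born at scale 0) of a
    Vietoris-Rips filtration: the scales at which connected components
    merge, i.e. the MST edge weights (Kruskal with union-find)."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    edges = sorted(
        (dist(points[i], points[j]), i, j)
        for i, j in combinations(range(len(points)), 2)
    )
    deaths = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:            # two components merge at scale w
            parent[ri] = rj
            deaths.append(w)
    return deaths               # one component persists to infinity

# Points on a line: components merge at scales 1 and 2.
print(h0_persistence([0.0, 1.0, 3.0], lambda a, b: abs(a - b)))  # → [1.0, 2.0]
```

Long-lived bars (large death times) are the "robust" topological features; short bars are treated as noise.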


    Date: 2021/05/21
    Time: 14:00
    Location: online

    in UH Biocomputation group on May 20, 2021 08:12 AM.


    2-Minute Neuroscience: Ketamine

    Ketamine is an anesthetic, analgesic, antidepressant, and recreationally used drug. In this video, I discuss hypotheses about how ketamine produces its effects.

    in Neuroscientifically Challenged on May 15, 2021 11:57 AM.


    Online spike rate inference with Cascade

    To infer spike rates from calcium imaging data for a time point t, knowledge about the calcium signal both before and after time t is required. Our algorithm Cascade (Github) uses, by default, a window that is symmetric in time around t and feeds the data points in this window into a small deep network for spike inference (schematic below taken from Fig. 2A of the preprint; CC-BY-NC 4.0):

    However, if one wants to perform spike inference not as a post-processing step but rather during the experiment (“online spike inference”), it would be ideal to perform spike inference with a delay as short as possible. This would allow for example to use the result of spike inference for a closed-loop interaction with the animal.

    Dario Ringach recently came up with this interesting problem. With the Cascade algorithm already set up, I was curious to check very specifically: How many time points (i.e., imaging frames) are required after time point t to perform reliable spike inference?

    Using GCaMP/mouse datasets from the large ground truth database (the database is again described in the preprint), I addressed this question directly by training separate models. For each model, the time window was shifted such that a variable number of data points (between minimally 1 and maximally 32) were used for spike inference. Everything was evaluated at a typical frame rate of 30 Hz, and also at different noise levels of the recordings (color-coded below); a noise level of “2” is pretty decent, while a noise level of “8” is quite noisy – explained with examples (Fig. S3) and equations (Methods) again in the preprint.
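    The window-shifting idea can be sketched as follows (an illustration of the setup, not Cascade's actual code; names and numbers are mine):

```python
def extract_window(trace, t, frames_before, frames_after):
    """Slice the calcium trace around time point t. For online spike
    inference, frames_after is kept small (here, between 1 and 32) so
    that the spike-rate estimate is available with minimal delay."""
    start, stop = t - frames_before, t + frames_after + 1
    if start < 0 or stop > len(trace):
        return None  # too close to the edge of the recording
    return trace[start:stop]

trace = list(range(100))                   # stand-in for a dF/F trace at 30 Hz
sym = extract_window(trace, 50, 16, 16)    # symmetric window around t
online = extract_window(trace, 50, 28, 4)  # only 4 future frames after t
```

The same window length is kept in both cases; only the split between past and future frames changes, which is what each retrained model in the evaluation sees.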

    The results are quite clear: for low noise levels (black curve, SEM across datasets as corridor), spike inference seems to reach saturating performance (correlation with ground truth spike rates) at almost 8 frames. This would result in a delay of almost 8 × 33 ms ≈ 260 ms after a spiking event (dashed line).

    But let’s have a closer look. The above curve was averaged across 8 datasets, mixing different indicators (GCaMP6f and GCaMP6s) and induction methods (transgenic mouse lines and AAV-based induction). Below, I looked into the curve for each single dataset (for the noise level of 2).

    It is immediately clear that for some datasets fewer frames after t are sufficient for almost optimal spike inference, for others not.

    For the best datasets, optimal performance is already reached with 4 frames (left panel; delay of ca. 120 ms). These are datasets #10 and #11, which use the fast indicator GCaMP6f, here transgenically expressed. The corresponding spike-triggered linear kernels (right side; copied from Fig. S1 of the preprint) are indeed faster than for other datasets.

    Two datasets with GCaMP6s (datasets #15 and #16) stand out as non-ideal, requiring almost 16 frames after t before optimal performance is reached. Probably, expression levels in these experiments, which used AAV-based approaches, were very high, resulting in calcium buffering and therefore slower transients. The corresponding spike-triggered linear kernels are indeed much slower than for the other GCaMP6s- or GCaMP6f-based datasets.

    The script used to perform the above evaluations can be found on Cascade’s Github repository. Since each data point requires retraining the model from scratch, it cannot be run on a CPU in reasonable time. On a RTX 2080 Ti, the script took 2-3 days to complete.


    1. Only a few frames (down to 4) after time t are sufficient to perform almost ideal spike inference. This is probably a consequence of the fact that the sharp step increase is more informative than the slow decay of a spike-triggered event.
    2. To optimize an experiment for online spike inference, it is helpful to use a fast indicator (e.g., GCaMP6f). Transgenic expression might also be an advantage, since indicator expression and calcium buffering are typically lower for transgenic expression than for viral induction, preventing a slow-down of the indicator by overexpression.

    in Peter Rupprecht on May 13, 2021 06:19 PM.


    Linking to Datasets on arXiv

    We’re excited to announce our collaboration with Papers With Code to support links to datasets on arXiv!

    Machine learning articles on arXiv now have a Code & Data tab to link to datasets that are used or introduced in a paper. Readers can activate Links to Code & Data from the paper abstract page to see links to official code, community implementations of the code, and the datasets used.

    Screenshot of links to datasets from arXiv abstract pages. From the “Code & Data” tab on the arXiv article abstract page, readers can find links to datasets used in the paper.

    Authors can add datasets to their arXiv papers by going to arxiv.org/user and clicking on the “Link to code & data” Papers with Code icon (see below). From there they will be directed to Papers with Code where they can add their datasets. Once added, these will show on the arXiv article’s abstract page.

    This makes it easier to track dataset usage across the community and to quickly find other papers that use the same dataset. From Papers with Code, readers can discover other papers using the same dataset, track usage over time, compare models, and find similar datasets.

    All data on Papers with Code is freely available and is licensed under CC-BY-SA (same as Wikipedia).


    An arXivLabs Collaboration

    arXiv’s mission is to provide an open platform where researchers can share and discover new, relevant, emerging science and establish their contribution to advancing research. Datasets are a critical component of this.

    This is the second stage of our arXivLabs collaboration with Papers with Code, following the introduction of code on arXiv last October.

    “Members of our community want to contribute tools that enhance the arXiv experience, and we value that kind of community engagement,” said Eleonora Presani, arXiv Executive Director.

    Screenshot of the arXiv user page showing the icon to link to datasets on Papers with Code. From the arXiv user account page, authors can click on the Papers with Code icon to link their work to relevant code and data.


    A version of this blog post also appears here.


    in arXiv.org blog on May 13, 2021 03:26 PM.


    Partnering with TCC Africa

    Author: Roheena Anand, Director, Global Publishing Development, PLOS

    Two weeks ago we announced five new journals and their role in our plans to spread our roots deeper and absorb researchers and local practices more fully into our business. However, this is only one part of our work in this area. In the transition to an Open future we need to keep asking ourselves “Open for Whom?” Openness in itself, while valuable, does not tackle inequality in the scholarly communications ecosystem, or increase inclusion.  

    Therefore, we need to be intentional in addressing power imbalances and the legacy of devaluing knowledge from particular groups or regions, e.g. from communities often marginalized in North American/Western European publications, including researchers from Low-to-Middle-Income countries. 

    Over the course of this year we will be expanding our presence into different continents, embedding ourselves into local communities to work alongside them, listening and learning, so that we can understand and reflect their needs and values.  

    Today, as our first major step, we’re sharing the wonderful news that we are formally partnering with the Training Centre in Communication, based in the University of Nairobi, Kenya, commonly known as TCC Africa. TCC Africa is a nonprofit trust that has been doing valuable work across the continent since 2006. They’re committed to improving African researchers’ visibility (and therefore impact) through training in scholarly communication. Like us, they are heavily invested in an Open future and work with stakeholders across the scholarly communication ecosystem to promote and increase uptake of open access and open science more broadly.

    Working with TCC Africa will help us to ensure that the interests and values of African research communities are represented in PLOS publications, policies, and services and to ensure that Open Science practices work for local stakeholders. We believe that all Open Science and Open Research activities should be informed and co-created by local communities at a global scale, helping us all to rebuild the system better.

    We’ve started to build our strategic plan to achieve our joint goals: it’s the first step on our path to a more inclusive Open Science future.

    Please follow PLOS and TCC Africa on social media, and/or follow our blogs, to keep up to date on the progress of this partnership!

    TCC Africa Blog | TCC Africa Twitter | TCC Africa LinkedIn | TCC Africa Facebook
    PLOS Blog | PLOS Twitter | PLOS LinkedIn | PLOS Facebook

    The post Partnering with TCC Africa appeared first on The Official PLOS Blog.

    in The Official PLOS Blog on May 12, 2021 02:59 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Event-Driven Sensing for Efficient Perception: Vision and Audition Algorithms

    This week on Journal Club session Nik Dennler will talk about a paper "Event-Driven Sensing for Efficient Perception: Vision and Audition Algorithms".

    Event sensors implement circuits that capture partial functionality of biological sensors, such as the retina and cochlea. As with their biological counterparts, event sensors are drivers of their own output. That is, they produce dynamically sampled binary events in response to dynamically changing stimuli. Algorithms and networks that process this form of output representation are still in their infancy, but they show strong promise. This article illustrates the unique form of the data produced by the sensors and demonstrates how the properties of these sensor outputs make them useful for power-efficient, low-latency systems working in real time.
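    The sampling principle described here, emitting a signed binary event only when the input has changed by more than a fixed threshold since the last event, can be sketched in a few lines. This is a toy model for illustration only, not the circuits discussed in the paper; the signal and threshold values are invented:

    ```python
    import numpy as np

    def to_events(signal, threshold=0.2):
        """Toy event-driven encoder: emit (index, polarity) events whenever
        the signal has moved by more than `threshold` since the last event.
        Constant inputs produce no output, so bandwidth tracks change."""
        events = []
        reference = signal[0]
        for i, value in enumerate(signal[1:], start=1):
            while value - reference > threshold:   # input increased
                reference += threshold
                events.append((i, +1))
            while reference - value > threshold:   # input decreased
                reference -= threshold
                events.append((i, -1))
        return events

    # A step change produces a burst of events; the flat stretches produce none.
    sig = np.concatenate([np.zeros(5), np.ones(5)])
    print(to_events(sig))  # events only around the step at index 5
    ```

    The flat parts of the signal generate no output at all, which is the property the article highlights for power-efficient, low-latency processing.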


    Date: 2021/05/14
    Time: 14:00
    Location: online

    in UH Biocomputation group on May 12, 2021 12:13 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    10+ years of Brainhack: an open, inclusive culture for neuro tool developers at all levels

    Brainhacks and similar formats are increasingly recognized as a new way of providing academic training and conducting research that extends beyond traditional settings. There is a new paper out in Neuron, by 200+ authors, describing the format and what makes it valuable to the community. This post aims to highlight some of the core themes of the paper.

    Brainhacks have been held since 2012, organized by local communities, sometimes in sync with other hackathons taking place elsewhere. In 2016 the format developed into the Brainhack Global – a synchronous swarm of hybrid meetings arranged by local communities with local and virtual participants. In 2020, during the pandemic, the Brainhack Global went fully virtual.

    Figure 1D from https://doi.org/10.1016/j.neuron.2021.04.001

    A similar growth of hackathons has occurred in communities adjacent to Brainhack. INCF started funding hackathons early because our community asked for it; and when we saw the value and inspiration hackathons brought to the community, it became a regular line item in the INCF budget. Since funding our first hackathon in 2012, we have funded or partially funded at least one hackathon each year (see our entry in Acknowledgements).

    The Brainhack format is inspired by the hackathon model and centers on ad-hoc, informal collaborations for building, updating and extending community software tools developed by the participants’ peers, with the goal to have functioning software by the end of the event. Unlike many hackathons, Brainhacks welcome participants from all disciplines and with any level of experience—from those who have never written a line of code to software developers and expert neuroscientists, and also feature informal dissemination of ongoing research through unconferences. Also unlike some traditional hackathons, Brainhacks do not have competitions. Brainhacks value education; recent major Brainhack events even have a TrainTrack, a set of entirely education-focused sessions that run in parallel with regular hacking on projects.

    The five defining Brainhack features: 

    1) a project-oriented approach that fosters active participation and community-driven problem-solving;

    2) learning by doing, which enables participants to gain more intensive training, particularly in computational methods;

    3) training in open science and collaborative coding, which helps participants become more effective collaborators;

    4) a focus on reproducibility, which leads to more robust scientific research; and

    5) accelerated building and bridging of communities, which encourages inclusivity and seamless collaboration between researchers at different career stages.

    Brainhacks have increased insight into the value of tool usability and reusability and the need for long-term maintenance, shifting community culture from individuals creating tools for their own needs to a community actively contributing to an existing resource. They also help to disseminate good practices for writing code and documentation, ensuring code readability, and using version control and licensing. 

    Brainhacks promote awareness of reproducible practices that integrate easily into research workflows, and show the value of data sharing and open data. They introduce participants to data standards, such as BIDS, allowing them to experience the benefits of a unified data organization and providing them with the skill set to use these formats in their own research. 

    Brainhacks create a scientific culture around open and standardized data, metadata, and methods, as well as detailed documentation and reporting.

    The Brainhack community is currently also working to collate Brainhack-related insights and expertise into a Jupyter Book that will serve as a centralized set of resources for the community.


    Brainhack: Developing a culture of open, inclusive, community-driven neuroscience


    This post was written for and first published (12 May 2021) on the INCF blog

    in Malin Sandström's blog on May 12, 2021 09:53 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Next WG meeting: 12 May at 1200 UTC

    Photo by Daria Nepriakhina on Unsplash


    The next open community meeting for the working group will be at 1200 UTC on 12th May, 2021. The primary agenda for this meeting is to plan what tutorials the working group intends to undertake at CNS*2021 in July. Please join us via Zoom.

    You can see the local time for the meeting using this link. On Linux-style systems, you can also use this command in the terminal:

    date --date='TZ="UTC" 1200 this wednesday'

    We hope to see you there.

    in INCF/OCNS Software Working Group on May 10, 2021 10:27 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Next Open NeuroFedora meeting: 10 May 1300 UTC

    Photo by William White on Unsplash


    Please join us at the next regular Open NeuroFedora team meeting on Monday 10 May at 1300 UTC in #fedora-neuro on IRC (Freenode). The meeting is public and open for everyone to attend. You can join us over:

    You can use this link to convert the meeting time to your local time. Or, you can also use this command in the terminal:

    $ date --date='TZ="UTC" 1300 today'

    The meeting will be chaired by @gicmo. The agenda for the meeting is:

    We hope to see you there!

    in NeuroFedora blog on May 10, 2021 10:21 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Fast Odour Dynamics Are Encoded in the Olfactory System and Guide Behaviour

    This week on Journal Club session Maria Psarrou will talk about a paper "Fast Odour Dynamics Are Encoded in the Olfactory System and Guide Behaviour".

    Odours are transported in turbulent plumes, which result in rapid concentration fluctuations that contain rich information about the olfactory scenery, such as the composition and location of an odour source. However, it is unclear whether the mammalian olfactory system can use the underlying temporal structure to extract information about the environment. Here we show that ten-millisecond odour pulse patterns produce distinct responses in olfactory receptor neurons. In operant conditioning experiments, mice discriminated temporal correlations of rapidly fluctuating odours at frequencies of up to 40 Hz. In imaging and electrophysiological recordings, such correlation information could be readily extracted from the activity of mitral and tufted cells, the output neurons of the olfactory bulb. Furthermore, temporal correlation of odour concentrations reliably predicted whether odorants emerged from the same or different sources in naturalistic environments with complex airflow. Experiments in which mice were trained on such tasks and probed using synthetic correlated stimuli at different frequencies suggest that mice can use the temporal structure of odours to extract information about space. Thus, the mammalian olfactory system has access to unexpectedly fast temporal features in odour stimuli. This endows animals with the capacity to overcome key behavioural challenges such as odour source separation, figure-ground segregation and odour localization by extracting information about space from temporal odour dynamics.


    Date: 2021/05/06
    Time: 14:00
    Location: online

    in UH Biocomputation group on May 06, 2021 10:21 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Northeastern University professor with 69 papers on PubPeer has resigned

    A chemistry professor at Northeastern University in Boston, MA, who has almost 70 papers flagged on PubPeer, resigned yesterday, May 4, 2021.

    On his blog For Better Science (Update May 5, at the bottom), Leonid Schneider shared an email from the Chair of the Department of Engineering, which states that Thomas J Webster has resigned from the university.

    Webster has 69 papers flagged on PubPeer, mostly for concerns about image irregularities. I reported 59 to the journals and institution in March 2020.

    Some of these papers, which appeared to have duplications of features within the same photo, were quietly corrected. Perhaps coincidentally, these had been published in the Elsevier journal Nanomedicine: Nanotechnology, Biology, and Medicine, where Webster is an Associate Editor. See e.g. here and here.

    The apparent duplications in the colorful image on the right below this text were explained by the first author on PubPeer as “the voltage of the instrument is not insufficient in that time, so that the carbon membrane (which was bought homemade) on the copper screen may affect the background and the resolution of the picture, which leads to this fuzzy image, bringing you some identification troubles” — sort of blaming me for seeing these duplications. You may remember that I awarded the journal a This Image Is Fine Award in November 2020.

    The image got replaced with something less obnoxious, without the journal even blinking when the authors wrote “there was an inadvertent mistake for Figure 3, B which appeared as a replicated image”. In my professional opinion, both images contain duplicated parts — and both papers should have been retracted. But the journal had a severe conflict of interest in both cases. One of the senior authors is an Associate Editor at the journal, so the papers were not handled according to COPE guidelines.

    Two examples of papers with severe image concerns, published in a journal where one of the senior authors serves on the Editorial Board. In both cases, the authors were allowed to replace these figures with a clean panel, and the papers received corrections. From: https://pubpeer.com/publications/582FCA40E662923EAC611828E80CBE and https://pubpeer.com/publications/8994973E1DC6C384321CBAC47F273C

    Several other papers with image problems were published in Dove’s International Journal of Nanomedicine, where Webster is the founding Editor-in-Chief, and not surprisingly these have not been acted upon. It is always complicated to investigate such cases if one of the authors is the founding father of the journal.

    Despite the muddy corrections at the journals, it has to be said that Northeastern University appears to have handled this case appropriately and swiftly.

    As reported by For Better Science, the Webster lab was suspended in November 2020, and now in May 2021 Webster has left the university.

    Still, only 8 of these 69 papers have been corrected so far (and as we have seen above, that was not always a good decision), while as of today zero papers have been retracted. I hope the university will contact the journals with the findings of their investigation, advising them which of these cases involved research misconduct.

    in Science Integrity Digest on May 05, 2021 10:44 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    An update on Lab and Study Protocols at PLOS ONE

    Author: Emily Chenette, Editor-in-Chief, PLOS ONE

    PLOS empowers researchers to transform science by offering more options for credit, transparency and choice. The launch of Lab Protocols and Study Protocols in PLOS ONE earlier this year supported this crucial goal by bringing reproducibility and transparency to research, and enabling those who contributed to study design to receive credit for their contributions.  

    Today, we’re delighted to share the news that two Study Protocols have now been published in PLOS ONE.

    The first, by Satoru Joshita and colleagues, describes a protocol for studying the prevalence and etiology of portopulmonary hypertension in a cohort of Japanese people with chronic liver disease. 

    The second, by John Cole and colleagues, provides the protocol for the Copy Number Variation and Stroke (CaNVAS) Risk and Outcome study. This study aims to identify copy-number variations that are associated with a risk of ischemic stroke in diverse populations. 

    Cole writes, “Very little is known about the genetics of stroke outcome. In CaNVAS, copy number variation (CNV) as associated with both stroke risk and outcome will be explored on a large-scale basis. Providing the scientific community with the CaNVAS protocol early in the study will help identify other researchers interested in these efforts, with the goal to increase collaboration and scientific discovery regarding CNV throughout the project.”

    Furthermore, by having their Study Protocols reviewed and published, these authors have had the opportunity to ensure that their study designs are robust and reproducible before the research is completed. They’re also contributing to reducing publication bias by sharing the study aims before the results are available. 

    If you’re interested in submitting your own Study Protocol for consideration, our Submission Guidelines have more information about the submission and review process. One author-friendly feature of Study Protocols is that they are eligible for expedited review if the study has received funding after peer review from an external funding source. 

    We also encourage researchers to share their detailed, verified research methodology by publishing a Lab Protocol in PLOS ONE. This unique article type was developed in partnership with protocols.io, and consists of two interlinked components: 1) a step-by-step protocol on protocols.io, with access to specialized tools for communicating methodological details and facilitating use of the protocol; and 2) a peer-reviewed PLOS ONE article contextualizing the protocol, with sections discussing applications, limitations and expected results. Several Lab Protocols are under review right now, and we look forward to publishing the first article soon! 

    Thank you to our authors, reviewers, editors and readers for contributing to these article types and supporting Open Science at PLOS ONE.

    The post An update on Lab and Study Protocols at PLOS ONE appeared first on The Official PLOS Blog.

    in The Official PLOS Blog on May 05, 2021 12:00 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    OSA Biophotonics Congress: Optics in the Life Sciences 2021

    Angus was invited to speak at the OSA Biophotonics Congress: Optics in the Life Sciences, 2021:

    In this OSA Congress, the latest advances in molecular probe development, life science imaging, novel and more powerful optical instrumentation and its application to study fundamental biological processes and clinical investigations will be presented. This progress in instrumentation development and its rapid application represent important enablers that permit studies not possible a few years ago.

    Cumulatively, the meetings in this congress bring together leaders in the field whose contributions are significantly advancing the state of the art in biological and medical research through the use of optical technologies.

    Their presentation was titled “Imaging Neurons and Circuits Using Acousto-Optic Lens 3D Two-Photon Microscopy”.

    More information on the conference can be found on the event website.

    in The Silver Lab on May 01, 2021 11:19 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Heating up the objective for two-photon imaging

    To image neurons in vivo with a large field of view, a large objective is necessary. This big piece of metal and glass is in indirect contact with the brain surface, with only water and maybe a cover slip in between. The objective touching the brain effectively results in local cooling of the brain surface through heat conduction (Roche et al., eLife, 2019; see also Kalmbach and Waters, J Neurophysiology, 2012). Is this a problem?

    Maybe it is: cooling by only a few degrees can result in a drop of capillary blood flow and other side effects (Roche et al., eLife, 2019). And it has also been shown (in slice work) that minor temperature changes can affect the activity of astrocytic microdomains (Schmidt and Oheim, Biophysical J, 2020), which might in turn affect neuronal plasticity or even neuronal activity.

    For a specific experiment, I wanted to briefly test how such a temperature drop affects my results. Roche et al. used a commercial objective heating device with temperature controller, and a brief email exchange with senior author Serge Charpak was quite helpful to get started. However, the tools used by Roche et al. are relatively expensive. In addition, they used a fancy thermocouple element together with a specialized amplifier from National Instruments to probe the temperature below the objective.

    Since this was only a brief test experiment, I was hesitant to buy expensive equipment that would maybe never be used again. As a first attempt, I wrapped a heating pad, which is normally used to keep mice at physiological body temperature during anesthesia, around the objective; however, the immersion medium below the objective could only be heated up to something like 28°C, which is quite a bit below the desired 37°C.

    Heating pad, wrapped around a 16x water immersion objective. Not hot enough.

    Therefore, I got in touch with Martin Wieckhorst, a very skilled technician from my institute. He suggested a more effective way to heat the objective, using a very simple solution. After a layer of insulation tape (Kapton tape, see picture below), we wrapped a constantan wire, which he had available from another project, in spirals around the objective body, followed again by a layer of insulation tape. Then, using a lab power supply, we simply sent some current (ca. 1 A at 10 V) through the wire. The wire acts as a resistor – therefore it is important that adjacent spirals do not touch each other – and produces heat that is taken up by the objective body.

    Constantan wire wrapped in spirals around the objective body. Semi-transparent Kapton tape used for insulation makes the wires barely visible in this picture.
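    As a rough sanity check on the figures above (about 1 A at 10 V, i.e. roughly 10 W dissipated in a ~10 Ω coil), the wire resistance follows from its geometry via Ohm's law. The wire length and diameter below are illustrative guesses, not measurements from this build:

    ```python
    import math

    # Resistivity of constantan, nearly constant over temperature (hence the name).
    RHO_CONSTANTAN = 4.9e-7  # ohm * metre

    def wire_resistance(length_m, diameter_m, resistivity=RHO_CONSTANTAN):
        """Resistance of a round wire: R = rho * L / A, with A the cross-section."""
        area = math.pi * (diameter_m / 2.0) ** 2
        return resistivity * length_m / area

    # The post reports ca. 1 A at 10 V through the coil:
    voltage, current = 10.0, 1.0
    power = voltage * current     # P = V * I = 10 W of heating
    target_r = voltage / current  # R = V / I = 10 ohm

    # Illustrative guess: about 1 m of 0.25 mm constantan wire lands near 10 ohm.
    print(round(wire_resistance(1.0, 0.25e-3), 1))
    ```

    Constantan is a convenient choice here precisely because its resistivity barely changes as the coil warms up, so the heating power stays stable without feedback.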

    To measure the temperature below the objective, we needed a sensor as small as possible. A typical thermometer head would simply not fit into the space between objective and brain surface. We decided to use a thermistor or RTD (resistance temperature detector). How can we read out the resistance and convert it into temperature? Fortunately, Martin found an old heating block which contained a temperature controller (this one). These controllers are typically capable of using information from standardized thermistors of different kinds or from thermocouples.

    Next, we bought the sensor itself, a PT100 (I think it was this one) with a very small spatial footprint. The connection from the PT100 to the temperature controller is pretty straightforward once you understand the three-wire connection scheme (explained here). This three-wire scheme serves to eliminate the effect of the cables’ electrical resistance on the measurement. Then, we dipped the head of the PT100 into non-corrosive hot glue to prevent a short circuit of the PT100 element once it dips into the immersion medium; the immersion medium is at least partially conductive and would therefore affect the measured resistance and thus the measured temperature. Once we had everything set up, we checked the functionality of the sensor in a water bath, using a standard thermometer for calibration. Another way to perform this calibration would be an ice bath, which sits stably at 0°C.

    A repurposed heating block to read out a thermistor. We first looked up the data sheet of the built-in controller (bottom right) and then connected a PT100 thermosensor to its inputs. The PT100 sensor is located at the tiny end of the blue cable (inset), covered by a thin film of non-corrosive hot glue.
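    For the curious, the conversion the controller performs internally is standardized: platinum RTDs follow the Callendar–Van Dusen relation, which for temperatures above 0°C is a quadratic that can be inverted in closed form. A sketch using the standard IEC 60751 coefficients (nothing here is specific to the particular controller or sensor used in this build):

    ```python
    import math

    # IEC 60751 coefficients for a PT100 (R0 = 100 ohm at 0 degC)
    R0 = 100.0
    A = 3.9083e-3
    B = -5.775e-7

    def pt100_resistance(temp_c):
        """Callendar-Van Dusen relation for t >= 0 degC: R = R0*(1 + A*t + B*t^2)."""
        return R0 * (1 + A * temp_c + B * temp_c ** 2)

    def pt100_temperature(resistance):
        """Invert the quadratic to recover temperature from resistance (t >= 0 degC)."""
        return (-A + math.sqrt(A ** 2 - 4 * B * (1 - resistance / R0))) / (2 * B)

    # At 37 degC the sensor should read roughly 114 ohm.
    r = pt100_resistance(37.0)
    print(r, pt100_temperature(r))
    ```

    The shallow slope (about 0.39 Ω per °C) is exactly why the lead resistance matters and why the three-wire scheme is worth the extra cable.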

    The contact surface of my objective with the immersion medium is mostly glass and a bit of plastic; it therefore took roughly 30-60 min until the temperature below the objective reached a stable value of around 37°C. To prevent the heat from spreading throughout the whole microscope, we used a plastic objective holder that does not conduct heat.

    All in all, I found this small project very instructive. First, I was surprised to learn how reliable and fast an objective heater based on simple resistive wire can be. Heating the metal part of the objective to >60°C within minutes was no problem. It took much longer, however, until the non-metal parts of the objective also reached the desired temperature. I was also glad to see that the objective (16x Nikon) was not damaged and that its resolution during imaging was not affected by the increased temperature!

    Designing a very small temperature sensor was the more complicated problem, partly due to the standard three-wire measurement scheme for thermistors. However, all components we used were relatively cheap, and I think these temperature measurement devices are interesting tools that could also be used for other experiments, e.g., to monitor body temperature or to build custom temperature controllers for the bath in slice experiments.

    in Peter Rupprecht on May 01, 2021 09:34 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    arXiv’s Giving Week is May 2 – 8, 2021

    arXiv is free to read and submit research, so why are we asking for donations?

    arXiv is not free to operate, and, as a nonprofit, we depend on the generosity of foundations, members, donors, volunteers, and individuals like you to survive and thrive. If arXiv matters to you and you have the means to contribute, we humbly ask you to join arXiv’s global community of supporters with a donation during arXiv’s Giving Week, May 2 – 8, 2021.

    Less than one percent of the five million visitors to arXiv this month will donate. If everyone contributed just $1 each, we would be able to meet our annual operating budget and save for future financial stability.

    Would you like to know more about our operations and how arXiv’s funds are spent? Check out our annual report for more information.

    Thank you for your support!

    Donate Button

    in arXiv.org blog on April 30, 2021 01:00 PM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    Announcing the first OA book funded entirely by library membership programme ‘Opening the Future’

    Roll out of first open access books fully funded by Opening the Future

    We’re thrilled to announce that our Opening the Future library membership programme has reached the threshold needed to begin funding its first titles in open access. The Opening the Future platform is a CEU Press and COPIM initiative, launched earlier this year to transition the entire monograph programme of CEU Press to open access together with its partners Project MUSE, LYRASIS and Jisc. Within the model, the first of its kind, CEU Press provides access to portions of its highly regarded backlist, to which members subscribe. The revenue from these subscriptions is allocated entirely to making the frontlist OA from the date of publication.

    Words in Space and Time: Historical Atlas of Language Politics in Modern Central Europe by Dr Tomasz Kamusella will be the first title to be published OA through the programme. The atlas, available this autumn, offers novel insights into the history and mechanics of Central Europe’s languages as products of human history and a part of culture. It includes forty-two annotated and interactive maps. Further titles will be announced soon, and advance notice will be given to avoid any double-dipping.

    Opening the Future is a simple and cost-effective way for libraries to support OA books, especially in HSS subjects. With 250 libraries each participating at appropriate pricing tiers, CEU Press can publish 25 OA books at a cost of 11 EUR / 13 USD / 10 GBP per monograph for each library (if we reach our targets).

    “With the Opening the Future model, CEU Press and COPIM developed a fair pricing system in the true sense of the word “fair” that allows libraries of all sizes and budgets to support OA monograph publishing in a sustainable way,” said Curtis Brundy, Associate University Librarian for Scholarly Communication and Collections at Iowa State University, a member of the OtF programme.

    “During the past five years, the move toward open access for all publicly-funded research publications has become a new norm and goal. CEU Press with its own OtF project trailblazes this new ground as a leader in this regard,” said Dr Tomasz Kamusella, Reader at the University of St. Andrews, UK.

    More Information

    For libraries or other institutions that want to support the move to immediate OA, without author-facing charges, more information can be found at https://ceup.openingthefuture.net/. For further details on the pricing and structure of the model, see the FAQs and resources web pages.

    in Open Access Tracking Project: news on April 30, 2021 07:27 AM.

  • Wallabag.it! - Save to Instapaper - Save to Pocket -

    The importance of early career researchers for promoting open research

    Author: Iain Hrynaszkiewicz, PLOS’ Director of Open Research Solutions

    Early career researchers again appear to be at the vanguard of open research, reporting more positive attitudes towards sharing code than more experienced researchers – as found in PLOS research released as a preprint this week.

    At the end of March 2021, PLOS Computational Biology introduced a more stringent policy on sharing code associated with articles published in the journal. This was in response to a desire among members of the journal’s community to go further in promoting open science – a desire that appears to be shared by the community at large, as determined through collaborative research between PLOS and the journal’s community of editors and researchers.

    While more than 40% of papers published in PLOS Computational Biology already shared their code voluntarily, requiring more authors to share more of their research outputs as a condition of publication is not a decision any journal should take lightly. Therefore, to better understand the attitudes and experiences of the computational biology community in relation to code sharing, we designed a survey to help us understand:

    • Is a mandatory code sharing policy suitable for researchers in the community?
    • What proportion of researchers’ papers generate code?
    • What concerns do they have about code sharing?
    • How common are these concerns?
    • How much would submissions to the journal be affected?
    • Are there differences in different segments of researchers (regions, disciplines, career stages)?

    Supporting the editorial announcing the policy and survey dataset that were released in March, a more in-depth analysis of our survey of more than 200 researchers has been released as a preprint. As well as supporting the journal’s plans, we hope this work will be a resource for other stakeholders considering adoption of new policies on sharing of code. Along with research data and protocols, sharing of code is important to help ensure research is reusable and reproducible but, as we discovered when developing the policy, there is limited evidence on the prevalence of code sharing – relative to other open research practices such as data sharing – and researchers’ experiences with sharing code and software.

    The authors surveyed report that, on average, 71% of their research articles have associated code, and that for the average author, code has not been shared for 32% of these papers. Many researchers had not shared their code in the past due to practical or technical issues such as insufficient time, skills, or system dependencies, which – at least in principle – would not prevent compliance with a mandatory code sharing policy. Twenty-two percent of respondents who had not shared their code in the past, however, cited intellectual property (IP) concerns – a legitimate issue that might prevent public sharing of code under a mandatory policy. Based on these survey results and on testing draft versions of the policy with researchers in the field, we concluded that an inclusive policy would need to permit exemptions in certain cases. However, the results also implied that more of the respondents’ previous publications could have shared code.
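    Combining the two reported averages gives a rough sense of the gap. Treating them as if they applied uniformly across authors (a simplification the preprint itself does not make), the share of all papers with unshared code works out as follows:

    ```python
    # Back-of-the-envelope combination of the two survey averages.
    # Assumes the averages apply uniformly across authors (a simplification).
    papers_with_code = 0.71      # average share of a respondent's articles with code
    unshared_given_code = 0.32   # average share of those where code was not shared

    unshared_overall = papers_with_code * unshared_given_code
    print(f"~{unshared_overall:.0%} of all papers had code that went unshared")
    ```

    That is, on this rough estimate, code went unshared for somewhere around a fifth to a quarter of all the respondents’ papers.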

    Figure 1. Reasons given by respondents for not sharing code publicly in the past.

    Another key finding was differences in levels of acceptance of a mandatory code sharing policy between research fields and career stages (as determined by respondents’ number of previous publications). Medical researchers reported being less likely to submit to the journal if it had a mandatory code sharing policy, as did researchers with more than 100 publications. In contrast, researchers with fewer than 20 published papers responded more positively towards submitting to the journal if it implemented a code sharing policy. Other studies have found greater affinity for open research amongst early career researchers, including a 2021 peer-reviewed survey of Early Career Researchers (ECRs) within the Max Planck Society, which concluded that ECRs seem to hold a generally positive view toward open research practices.

    Figure 2. Respondents were asked “If PLOS Computational Biology required you to publicly share any computer code you created to interpret your results, how would this affect your likelihood to submit to the journal?”

    Also, similar to what we discovered about researchers’ needs and priorities for data sharing in 2020, respondents were satisfied with their ability to share their own code but were less satisfied with their ability to access other researchers’ code. From this we infer that offering researchers new products or services to share code, at least in this community and in the absence of a stronger policy, would be unlikely to measurably increase the availability of code with the journal’s publications. However, as with research data, we see opportunities for journals and publishers to increase the findability and accessibility, and ultimately reuse, of code generated by researchers, which in turn can help realise more of the benefits of open research.

    Read the preprint here and survey dataset here. Please note our results have not yet been peer reviewed, but will be submitted to a peer-reviewed journal soon.

    The post The importance of early career researchers for promoting open research appeared first on The Official PLOS Blog.

    in The Official PLOS Blog on April 29, 2021 02:40 PM.


    Learning Compositional Sequences with Multiple Time Scales through a Hierarchical Network of Spiking Neurons

    This week on Journal Club session Muhammad Yaqoob will talk about a paper "Learning Compositional Sequences with Multiple Time Scales through a Hierarchical Network of Spiking Neurons".

    Sequential behaviour is often compositional and organised across multiple time scales: a set of individual elements developing on short time scales (motifs) are combined to form longer functional sequences (syntax). Such organisation leads to a natural hierarchy that can be used advantageously for learning, since the motifs and the syntax can be acquired independently. Despite mounting experimental evidence for hierarchical structures in neuroscience, models for temporal learning based on neuronal networks have mostly focused on serial methods. Here, we introduce a network model of spiking neurons with a hierarchical organisation aimed at sequence learning on multiple time scales. Using biophysically motivated neuron dynamics and local plasticity rules, the model can learn motifs and syntax independently. Furthermore, the model can relearn sequences efficiently and store multiple sequences. Compared to serial learning, the hierarchical model displays faster learning, more flexible relearning, increased capacity, and higher robustness to perturbations. The hierarchical model redistributes the variability: it achieves high motif fidelity at the cost of higher variability in the between-motif timings.


    Date: 2021/04/30
    Time: 14:00
    Location: online

    in UH Biocomputation group on April 28, 2021 03:53 PM.


    Know Your Brain: Olivary Nuclei

    Where are the olivary nuclei?

    The olivary nuclei, which consist of the inferior olivary nucleus and superior olivary nucleus, are found in the brainstem. The olivary nuclei are paired structures, with one inferior and one superior olivary nucleus on each side of the brainstem. The inferior olivary nuclei are located in the medulla oblongata, and the superior olivary nuclei are found in the pons. Both nuclei are typically subdivided into collections of smaller nuclei.

    What are the olivary nuclei and what do they do?



    The inferior and superior olivary nuclei are distinct in function. The inferior olivary nucleus is typically subdivided into the principal olive, medial accessory olive, and dorsal accessory olive, and is thought to play an important role in movement, coordination, and movement-related learning. The superior olivary nucleus consists of the lateral superior olive and medial superior olive, as well as a number of surrounding nuclei known as the periolivary nuclei. The superior olivary nuclei are thought to be involved in hearing, and specifically with identifying the location of sounds.

    The inferior olivary nuclei receive movement-related information from several sources, including the spinal cord and motor cortex. This includes information about current movement, body position, muscle tension, and intention. The inferior olivary nuclei use this information to communicate with the cerebellum to fine-tune movements and aid in movement-related learning.

    Superior olivary nuclei indicated in a cross-section of the brainstem at the level of the pons

    The superior olivary nuclei receive projections from the cochlear nuclei that carry information about hearing. Neurons leave the superior olivary nuclei to extend to the inferior colliculus, which is an important part of the auditory system. The superior olivary nuclei receive information from both ears, and that information is compared to detect differences in qualities like intensity and to determine the location of a sound in the environment. The information is then sent to the inferior colliculus and processed further before being sent on to other regions like the thalamus and cerebral cortex. Additionally, neurons in the superior olivary nuclei project back to the cochlear nuclei. These projections are thought to be involved in negative feedback mechanisms that help to inhibit auditory stimuli that are deemed less important, such as background conversations.


    Paul MS, M Das J. Neuroanatomy, Superior and Inferior Olivary Nucleus (Superior and Inferior Olivary Complex) [Updated 2020 Jul 31]. In: StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2021 Jan-. Available from: https://www.ncbi.nlm.nih.gov/books/NBK542242/

    Schweighofer N, Lang EJ, Kawato M. Role of the olivo-cerebellar complex in motor learning and control. Front Neural Circuits. 2013 May 28;7:94. doi: 10.3389/fncir.2013.00094. PMID: 23754983; PMCID: PMC3664774.

    in Neuroscientifically Challenged on April 28, 2021 10:24 AM.


    Accelerating progress in brain recording tech

    In Stevenson and Kording (2011), the authors estimated that the number of neurons we can record from simultaneously doubles every 7.4 years. Think of it as Moore’s law for brain recordings. Since then, Stevenson has updated the estimate, which now stands at 6 years. Could it be that progress itself is accelerating?

    Matteo Carandini raised a question: why should progress be log-linear anyway? Technological phenomena have been argued to follow a double-exponential curve: the pace of progress itself accelerates over time. This is only noticeable over a very long time horizon, for instance when we look at computations per dollar over more than a century:

    Doubling times (image CC BY Steve Jurvetson)

    But we have 60 years of data to look at, so we can make these inferences! I took the data in Urai et al. (2021), generously released under a CC-BY license, and fit a Bayesian Poisson regression model over time (code here). I fit only the electrophysiology data. It’s clear that early times are underfit by the line. The doubling time estimated here, 4.5 years, is shorter than what has been noted in the literature.
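The author’s actual analysis is a Bayesian Poisson regression in PyMC3 (linked above); as a simpler, hypothetical sketch of the underlying idea, an ordinary least-squares fit of log2(neuron count) against year recovers a doubling time from the slope. The data below are synthetic, constructed to double every 5 years, not the Urai et al. data:

```python
import math

def doubling_time_years(years, neuron_counts):
    """Fit log2(count) = a + t / T by ordinary least squares and return T,
    the doubling time in years (the inverse slope of the log-linear trend)."""
    t = [y - years[0] for y in years]
    z = [math.log2(n) for n in neuron_counts]
    tbar = sum(t) / len(t)
    zbar = sum(z) / len(z)
    slope = (sum((ti - tbar) * (zi - zbar) for ti, zi in zip(t, z))
             / sum((ti - tbar) ** 2 for ti in t))
    return 1.0 / slope

# Synthetic counts that double every 5 years, so the fit should recover ~5:
years = [1960, 1970, 1980, 1990, 2000, 2010, 2020]
counts = [2 ** ((y - 1960) / 5) for y in years]
print(doubling_time_years(years, counts))  # → 5.0
```

Unlike Poisson regression, this least-squares version weights every point equally, which is exactly the technical difference discussed next.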

    A log-linear model of progress in electrophysiology

    On a technical note, a Poisson regression model tends to give larger weight to higher counts; hence, it focuses on fitting the right-hand side of the graph, while the linear regression model that’s conventionally used gives equal weight everywhere. With an accelerating trend, that means the Poisson regression model gives a shorter doubling time.

    We can do one better – fit a double-exponential model. This is only a few lines of code in PyMC3 – a miracle of automatic differentiation and Hamiltonian Monte Carlo. Here’s what that looks like:

    You can see visually that this is a much better fit, and it implies something pretty dramatic: progress itself is accelerating. That means the doubling time itself has changed over time, and under this model it currently stands at 3.6 years [95% CI 3.5-3.7].
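The post doesn’t spell out the parameterization of its double-exponential model; one common choice, assumed here purely for illustration, makes log2 of the count quadratic in time, log2 N(t) = a + b·t + c·t², so that the local doubling time 1/(b + 2ct) shrinks as t grows. The coefficients below are invented to show the trend, not fitted values from the post:

```python
def instantaneous_doubling_time(t, b, c):
    """Under log2 N(t) = a + b*t + c*t**2, the local growth rate is
    d(log2 N)/dt = b + 2*c*t, so the local doubling time is its inverse."""
    return 1.0 / (b + 2.0 * c * t)

# Hypothetical coefficients chosen only to illustrate a shrinking doubling time:
b, c = 0.10, 0.002
print(instantaneous_doubling_time(0, b, c))   # → 10.0 years at the start
print(instantaneous_doubling_time(50, b, c))  # → ~3.3 years 50 years later
```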

    These results project an average recording capability of one million neurons by 2045. Of course, this discounts ceiling effects and potential paradigm shifts, which could adjust these bounds far upward or downward. What about optical methods? It turns out that the Poisson model works poorly there because of overdispersion, so I used a negative binomial to model the noise in the curve. I tried letting the overdispersion parameter be free, but I was getting convergence problems, so I fixed it at 2.0.
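The overdispersion issue is easy to quantify: a Poisson model forces the variance to equal the mean, whereas a negative binomial (in the mean/dispersion parameterization that PyMC3 uses) has variance mu + mu²/alpha, recovering Poisson in the limit alpha → ∞. A small sketch with the fixed alpha = 2.0 mentioned above and a made-up mean:

```python
def negbin_variance(mu, alpha):
    """Variance of a negative binomial with mean mu and dispersion alpha
    (PyMC3-style parameterization): Var = mu + mu**2 / alpha."""
    return mu + mu ** 2 / alpha

mu = 100.0  # hypothetical mean count, for illustration only
print(negbin_variance(mu, 2.0))  # → 5100.0, vs. 100.0 under a Poisson model
```

The lower alpha is, the more extra variance the model tolerates, which is why fixing it at 2.0 accommodates the noisy imaging counts.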

    The implied doubling time is a little less than 2 years. These numbers could swing wildly as we add more data, but imaging currently doubles roughly twice as fast as electrophysiology. This is due to market pressures in cellphone sensors and telecommunications (fiber optics and LiDAR), which make good sensors very cheap. Many in neurotech have taken note, including Facebook, which is building light-based BCIs, and Paradromics, which is adapting some of the fabrication methods from imaging sensors to electrophysiology.

    Thus, this generalized Moore’s law of recordings is likely to continue decreasing in doubling time over the foreseeable future. Does this mean recording from every cell in the brain ($10^{11}$ cells) in the next 25 years? Probably not with electrodes – but if progress with light-based sensing continues at the same pace, perhaps. There is the vexing issue of scatter – and some of the people in this thread have some ideas on how to solve this.

    Regardless of the exact course of progress, I think that 7 years is far too long a doubling time: perhaps 3.5 years for ephys and 2 years for imaging. The future ain’t what it used to be, and it’s coming far faster than we’ve perhaps imagined. What will we do with all this data? There are some great hints in the Urai paper. An interesting research question is how holographic the brain is; perhaps we will get most of the understanding with far less than 100% coverage. Regardless, I think Adam Calhoun put it best:

    Update: Ian Stevenson re-did the analysis with slightly different models and found somewhat different results: longer doubling times than those reported here, from 4.5 to 5.6 years, depending on the assumptions. These doubling times are nevertheless shorter than the ones previously reported in the literature, so the larger point still stands: the future is happening faster than we thought just a few years ago. Read the thread here.

    Further reading

    in xcorr.net on April 27, 2021 05:48 PM.


    To boldly grow: five new journals shaped by Open Science

    PLOS announces new journals

    We are extremely excited to announce the imminent launch of five new journals, our first new launches in fourteen years. These new journals are unified in addressing global health and environmental challenges and are rooted in the full values of Open Science:

    But, before we go on, let’s address this: “Yet more journals?” (Yes, we heard you…!)

    Yes. We set out, with our original seven journals, to transform research communication by making research content openly accessible. Over the eighteen years since our first journal launch we’ve helped prove the viability of Open Access which, despite the occasional disagreement about how best to achieve it, is now a mainstream notion. We changed the publishing landscape via PLOS ONE, and this new type of “mega-journal” now features at almost every publisher and in every research field. We helped focus the conversation about peer review on rigor and transparency, and more people now understand how impact-seeking should never be at the expense of these notions. And, via open data policies, open peer review options, protocols, preregistration, preprint facilitation, etc., we’ve worked towards Open Science at a scale and in ways that increase the transparency and rigor of the entire research communication process, not just our journals. 

    All this is to say that we do not take lightly the responsibility of introducing new journals into the world. While we don’t believe the long-term future of research communication is always going to be the journal as we know it today, we do appreciate the impacts that journals can still have, and the new communities of practice they can empower. 

    Therefore, we are launching these five journals in the spirit of them being additional change-making vehicles.

    We’ve invited key members of the PLOS Leadership Team along with our CEO, Alison Mudditt, to field specific questions, below: 

    “Why launch new journals at this point in PLOS’ journey?” 

    Niamh O’Connor, Chief Publishing Officer, PLOS

    “We’re a nonprofit, driven by our mission, and we need to adapt to continue to deliver on it. Even though Open Access is now widely adopted, and Open Science is advancing, there are still key voices missing. We are expanding our global footprint in locally responsible ways to get closer to researchers. Researchers have always driven our mission forward, and in order for us to have a meaningful impact we need to include the broadest range of their voices, globally. This way, we ensure the co-creation of paths to Open Science that work for diverse communities and do not simply extend existing power structures. These new journals create new and diverse communities of practice, and ensure that they are at the forefront of shaping how we address the most pressing health and environmental issues facing our society. 

    Additionally, all these journals will be underpinned by our existing, and new, institutional business models that move beyond the APC to ensure more equitable and regionally appropriate ways to support Open Access publishing. Our existing institutional models are our Community Action Publishing (CAP) and Flat Fee agreement models. We’ll talk more about our brand new models when the journals open for submissions, suffice to say that they are also not based on APCs. With all this said, author fee-based models will still be available for those authors who prefer or need them.”

    “What are the special characteristics of these new journals?”

    Dan Shanahan, Publishing Director, PLOS

    “As Niamh says, our next phase of work is not just about Open Access. These new journals not only complement and naturally extend the existing PLOS suite of journals, but will hardwire a lens of social responsibility into sharing research. Via these new journals we can work together to address the most pressing ‘Openness’ issue specific to each field, and enable the researchers addressing these challenges, everywhere, to have the broadest impact. 

    In full alignment with the proposed UNESCO Recommendation on Open Science, these titles will ensure diversity and equity of representation at all levels – editors, editorial boards, reviewers, authors – and will actively seek out research from under-represented communities. The journals will play a part in broader efforts to create a more equitable system of knowledge-sharing, accelerating and increasing the benefits of scientific endeavour for global society as a whole. 

    The new journals all focus on some of the world’s most globally-relevant issues, to which locally-relevant research can, and must, become more visible. This approach will directly challenge the unfortunate norm that most ‘global’ forums remain dominated by research from Western Europe and North America.”

    “How will these journals contribute to the increased adoption of Open Science practices?”

    Veronique Kiermer, Chief Scientific Officer, PLOS

    “PLOS has always led on key Open Science matters, we are experimenters, but we’re also striving to be better listeners. We know a rigid approach to Open Science won’t foster equitable participation from all communities. As a publisher, we’ve never been driven by tradition, but by a willingness to question the status quo and an eagerness to explore how we can understand and improve the system. We’ll continue to investigate and test new ways of sharing, assessing, and recognizing research. We’ll be partnering with leaders across research communities, and the Open Science communities, to enact change. Not every solution will be journal-shaped, but all our journals will be shaped by Open Science, enabling those who publish with us to join communities of practice. We can use our newly expanded journal portfolio to influence norms and advance Open Science practices in considered and appropriate ways. We stand firm on any policies like open data that promote rigor and advance trust in research, but we want to understand the specific challenges that such policies represent for new communities, and work with them to find solutions that empower them, in their contexts, to practice Open Science. Overall, we want to further empower new communities to join us and inform us, working more tightly together towards ever more trust in science.”

    “How does this expansion of the portfolio fit into PLOS’ wider aims for the future?”

    Alison Mudditt, Chief Executive Officer, PLOS

    “PLOS has grown, times have changed, and how we deliver our mission has to evolve. Science is a global, collaborative enterprise. Challenging times help us see where we are and where we need to be. Global collaboration, transparency, and trust in science (and policies…and interventions…)  have all become recurring themes. We’re also coming to terms with how we, as a society, need to do a lot more to address systemic barriers to inclusion. How we think of ourselves as an organization, especially a research communication organization, isn’t separate from everything else going on in the world today. We have always worked to raise the bar for Openness. To continue this work we need to continue to grow – but not just in the traditional business sense, and not just in counting the number of journals we publish: we also need to, and have concrete plans to, spread our roots deeper, create more global hubs for PLOS, absorb researchers’ local practices more fully into our business, and, as others commented before me, ensure that any journal we publish is informed by local communities at a global scale, challenging problems, and rebuilding the system better.”

    Watch these spaces!

    We have Editors in place, we have dedicated staff, and the journals will open for submissions a little later this year. All updated information will appear on the respective journal websites (linked from the list above) as the journals take shape. Visit them often for submission guidelines, how our editorial boards are developing, how to apply to be a member of the board, where to follow them on social media, etc.

    Please share this news on your preferred social media via the buttons above. If you would like to join the Facebook and LinkedIn groups “PLOS Open Science Champions” for early announcements of this type, please visit the groups and request to be added: LinkedIn; Facebook

    Thank you for reading, and thank you to all of you who have supported us since the launch of PLOS Biology eighteen years ago in 2003. PLOS, and Open Access itself, would not be this far along without you!



    We would like to thank everyone who supported us and provided input and insight for these new journals, including Jamie Bartram, University of Leeds, UK; Clarissa Brocklehurst, Water and Sanitation Specialist, Canada; Alexandros Gasparatos, University of Tokyo, Japan; Alex Godoy-Faúndez, Universidad del Desarrollo, Chile; Ashantha Goonetilleke, Queensland University of Technology, Australia; Suzanne Hulscher, University of Twente, the Netherlands; Lawrence E. Hunter, University of Colorado School of Medicine, USA; Christopher Jackson, Imperial College London, UK; Malte Meinshausen, University of Melbourne, Australia; Angus Morrison Saunders, Edith Cowan University, Australia; Lucila Ohno-Machado, University of California, San Diego, USA; Farhana Sultana, Syracuse University, USA;  among others, as well as the PLOS Scientific Advisory Council.

    The post To boldly grow: five new journals shaped by Open Science appeared first on The Official PLOS Blog.

    in The Official PLOS Blog on April 27, 2021 02:00 PM.


    The MDAR Framework – a new tool for life sciences reporting

    PLOS note: We are delighted to share this announcement by the MDAR Working Group (Materials, Design, Analysis, Reporting) of a new framework for transparent reporting in the life sciences. PLOS has always put emphasis—through editorial policies and submission guidelines—on complete and transparent reporting to facilitate the analysis, trust, reproduction and reuse of the research we publish. We’ve supported and participated in the MDAR Working Group by sharing our experience and helping to develop and test the new reporting framework. We hope that the MDAR Framework will help bring consistency to journals’ reporting guidelines and make it easier for authors to adopt transparency norms. The MDAR Framework is consistent with what is practiced at PLOS and we will work with our editorial boards to explore the most effective ways to implement it.

    Incomplete or imprecise reporting of life sciences research contributes to challenges with reproducibility, replicability, and biomedical applications. For the last three years we – a group of journal editors and researchers – have been working together to develop a new framework for transparent reporting of life sciences research. This framework has just been published in PNAS.

    The MDAR Framework establishes the four domains – research Materials, Design, Analysis, and Reporting – in which we define both a set of basic minimum requirements, and best practice recommendations.

    We were motivated to develop the MDAR Framework as part of our own and others’ attempts to improve reporting to drive research improvement and ultimately greater trust in science. Existing tools, such as the ARRIVE guidelines, guidance from FAIRSharing, and the EQUATOR Network, speak to important sub-elements of biomedical research. This new MDAR Framework aims to be  more general and less deep, and therefore complements these important specialist guidelines.  

    Previous approaches have led to improved reporting, but often at considerable cost to both authors’ and editors’ time. A recent period of experimentation has resulted in a thorough but fragmented landscape of reporting guidelines for life science journals. A drive for efficiency  inspired us to learn from each other’s experiences and to harmonize the most effective practices. 

    The MDAR Framework provides flexibility along with broad applicability. The standard articulation of expectations across different journals will make it easier for: (i) authors to better understand what is expected of them, and (ii) for more journals to adopt an established approach rather than develop it from scratch. Journals can choose a level of implementation appropriate to their needs, enabling greater adoption potential. 

    We also hope that the MDAR Framework will be helpful for other organizations such as funders, who can signal reporting expectations early and therefore have an effect at the time the studies are designed, and tool/software developers, who can devise means of facilitating compliance for authors and journals. 

    Alongside the framework, the project provides a checklist (for authors, journals or reviewers) as an optional implementation tool, and an explanation and elaboration document. The checklist was piloted on over 289 manuscript submissions across 13 journals, seeking feedback from authors and editors actually using the checklist. Our team analysed agreement between observers, sought feedback from outside experts, and revised the framework in the light of this experience. 

    The full set of MDAR resources will be maintained and updated as a community resource, in a Collection on the Open Science Framework. 

    We are sharing this update on the MDAR Framework through coordinated posts on working group member platforms. Working group members have been free to add any additional context as appropriate.

    On behalf of the MDAR working group:

    • Andy Collings (eLife)
    • Chris Graf (Wiley)
    • Veronique Kiermer (PLOS; vkiermer@plos.org)
    • David Mellor (Center for Open Science)
    • Malcolm Macleod (University of Edinburgh)
    • Sowmya Swaminathan (Nature Portfolio/Springer Nature; s.swaminathan@us.nature.com)
    • Deborah Sweet (Cell Press/Elsevier)
    • Valda Vinson (Science/AAAS)

    The post The MDAR Framework – a new tool for life sciences reporting appeared first on The Official PLOS Blog.

    in The Official PLOS Blog on April 26, 2021 02:41 PM.


    Next Open NeuroFedora meeting: 26 April 1300 UTC

    Photo by William White on Unsplash


    Please join us at the next regular Open NeuroFedora team meeting on Monday 26 April at 1300 UTC in #fedora-neuro on IRC (Freenode). The meeting is a public meeting, and open for everyone to attend. You can join us over:

    You can use this link to convert the meeting time to your local time. Or, you can also use this command in the terminal:

    $ date --date='TZ="UTC" 1300 today'

    The meeting will be chaired by @ankursinha. The agenda for the meeting is:

    We hope to see you there!

    in NeuroFedora blog on April 26, 2021 08:37 AM.


    Hoarders and Collectors

    Andy Warhol's collection of dental models

    Pop artist Andy Warhol excelled in turning the everyday and the mundane into art. During the last 13 years of his life, Warhol put thousands of collected objects into 610 cardboard boxes. These Time Capsules were never sold as art, but they were meticulously cataloged by museum archivists and displayed in a major exhibition at the Andy Warhol Museum. “Warhol was a packrat. But that desire to collect helped inform his artistic point of view.” Yet Warhol was aware of his compulsion, and it disturbed him: “I'm so sick of the way I live, of all this junk, and always dragging home more.”

    Where does the hobby of collection cross over into hoarding, and who makes this determination? 

    Artists get an automatic pass into the realm of collectionism, no matter their level of compulsion. The Vancouver Art Gallery held a major exhibition of the works of Canadian writer and artist Douglas Coupland in 2014. One of the sections consisted of a room filled with 5,000 objects collected over 20 years and carefully arranged in a masterwork called The Brain. Here's what the collection looked like prior to assembly.

    Materials used in the The Brain, 2000–2014, mixed-media installation with readymade objects. Courtesy of the Artist and Daniel Faria Gallery. Photo: Trevor Mills, Vancouver Art Gallery.

    Hoarding, on the other hand, lacks the artistic intent or deliberate organization of collecting. Collectors may be passionate, but their obsessions/compulsions do not hinder their everyday function (or personal safety). According to Halperin and Glick (2003):
    “Characteristically, collectors organize their collections, which while extensive, do not make their homes dysfunctional or otherwise unlivable. They see their collections as adding a new dimension to their lives in terms of providing an area of beauty or historical continuity that might otherwise be lacking.”
    The differential diagnosis for the DSM-5 classification of Hoarding Disorder vs. non-pathological Collecting considers order and value of primary importance.

    Fig. 2 (Nakao & Kanba, 2019).
    If possessions are well organized and have a specific value, the owner is defined as a ‘collector.’ Medical conditions that cause secondary hoarding are excluded from Hoarding Disorder. The existence of comorbidities such as obsessive-compulsive disorder (OCD), autism spectrum disorder (ASD), and attention deficit hyperactivity disorder (ADHD) must be excluded as well.

    I've held onto the wish of writing about this topic for the last eight months...

    ...because of the time I spent sorting through my mother's possessions between July 2020 and November 2020 after she died on July 4th. This process entailed flying across the country five times in a total of 20 different planes in the midst of a pandemic.
    Although my mother showed some elements of  hoarding, she didn't meet clinical criteria. She had various collections of objects (e.g., glass shoes, decorator plates, snuff bottles, and ceremonial masks), but what really stood out were her accumulations — organized but excessive stockpiles of useful items such as flashlights, slippers, sweatshirts, kitchen towels, and watches (although most of the latter were no longer useful).

    Ten pairs of unworn gardening gloves

    During the year+ of COVID sheltering-in-place, some people wrote books, published papers, started nonprofits, engaged in fundraising, held Zoom benefit events, demonstrated for BLM, home-schooled their kids, taught classes, cared for sick household members, mourned the loss of their elder relatives, or endured COVID-19 themselves.
    I dealt with the loss of a parent, along with the solo task of emptying 51 years of accumulated belongings from her home. To cope with this sad and lonely and emotionally grueling task, I took photos of my mother's accumulations and collections. It became a mini-obsession unto itself. I tried to make sense of my mother's motivations, but the trauma of her suffering and the specter of an unresolved childhood were too overwhelming. Besides, there's no computational model to explain the differences between Collectors, Accumulators and Hoarders.

    Additional Reading

    Compulsive Collecting of Toy Bullets

    Compulsive Collecting of Televisions

    The Neural Correlates of Compulsive Hoarding

    Welcome to Douglas Coupland's Brain


    Halperin DA, Glick J. (2003). Collectors, accumulators, hoarders, and hoarding perspectives. Addictive Disorders & Their Treatment 2(2):47-51.

    Nakao T, Kanba S. (2019). Pathophysiology and treatment of hoarding disorder. Psychiatry Clin Neurosci. 73(7):370-375. doi:10.1111/pcn.12853

    in The Neurocritic on April 26, 2021 06:08 AM.


    How the Brain Works

    Every now and then, it's refreshing to remember how little we know about “how the brain works.” I put that phrase in quotes because the search for the Holy Grail of [spike trains, network generative models, manipulated neural circuit function, My Own Private Connectome, predictive coding, the free energy principle (PDF), or a computer simulation of the human brain promised by the Blue Brain Project] that will “explain” how “The Brain” works is a quixotic quest. It's a misguided effort when the goal is framed so simplistically (or monolithically).

    First of all, whose brain are we trying to explain? Yours? Mine? The brain of a monkey, mouse, marsupial, monotreme, mosquito, or mollusk? Or C. elegans with its 302 neurons? “Yeah yeah, we get the point,” you say, “stop being so sarcastic and cynical. We're searching for core principles, first principles.”

    In response to that tweet, definitions of “core principle” included:

    • Basically: a formal account of why brains encode information and control behaviour in the way that they do.
    • Fundamental theories on the underlying mechanisms of behavior. 
      • [Maybe “first principles” would be better?]
    • Set of rules by which neurons work?


    Let's return to the problem of explanation. What are we trying to explain? Behavior, of course [a very specific behavior most of the time]: X behavior in your model organism. But we also want to explain thought, memory, perception, emotion, neurological disorders, mental illnesses, etc. Seems daunting now, eh? Can the same core principles account for all these phenomena across species? I'll step out on a limb here and say NO, then snort at myself for asking such an unfair question. Best that your research program is broken down into tiny reductionistic chunks. More manageable that way.

    But what counts as an “explanation”? We haven't answered that yet. It depends on your goal and your preferred level of analysis (à la three levels of David Marr):

    computation – algorithm – implementation



    Again, what counts as “explanation”? A concise answer was given by Lila Davachi during a talk in 2019, when we all still met in person for conferences:

    “Explanations describe (causal) relationships between phenomena at different levels.”

    from Dr. Lila Davachi (CNS meeting, 2019)
    The Relation Between Psychology and Neuroscience
    (see video, also embedded below)

    UPDATE April 25, 2021: EXPLANATION IS IMPOSSIBLE, according to Rich, de Haan, Wareham, and van Rooij (2021), because "the inference problem is intractable, or even uncomputable":
    "... even if all uncertainty is removed from scientific inference problems, there are further principled barriers to deriving explanations, resulting from the computational complexity of the inference problems."

    Did I say this was a “refreshing” exercise? I meant depressing... but I'm usually a pessimist. (This has grown worse as I've gotten older and been in the field longer.)  
    Are there reasons for optimism?

    You can follow the replies here, and additional replies to this question in another thread starting here.

    I'd say the Neuromatch movement (instigated by computational neuroscientists Konrad Kording and Dan Goodman) is definitely a reason for optimism!

    Further Reading

    The Big Ideas in Cognitive Neuroscience, Explained (2017)

    ... The end goal of a Marr-ian research program is to find explanations, to reach an understanding of brain-behavior relations. This requires a detailed specification of the computational problem (i.e., behavior) to uncover the algorithms. The correlational approach of cognitive neuroscience and even the causal-mechanistic circuit manipulations of optogenetic neuroscience just don't cut it anymore.

    An epidemic of "Necessary and Sufficient" neurons (2018)

    A miniaturized holy grail of neuroscience is discovering that activation or inhibition of a specific population of neurons (e.g., prefrontal parvalbumin interneurons) or neural circuit (e.g., basolateral amygdala → nucleus accumbens) is “necessary and sufficient” (N&S) to produce a given behavior.

    Big Theory, Big Data, and Big Worries in Cognitive Neuroscience (from CNS meeting, 2018)
    Dr. Eve Marder ... posed the greatest challenges to the field of cognitive neuroscience, objections that went mostly unaddressed by the other speakers.  [paraphrased below]:
    • How much ambiguity can you live with in your attempt to understand the brain? For me I get uncomfortable with anything more than 100 neurons
    • If you're looking for optimization (in [biological] neural networks), YOU ARE DELUSIONAL!
    • Degenerate mechanisms produce the same changes in behavior, even in a 5 neuron network...
    • ...so Cognitive Neuroscientists should be VERY WORRIED



    The Neuromatch Revolution (2020)

    “A conference made for the whole neuroscience community”


    An Amicable Discussion About Psychology and Neuroscience (from CNS meeting, 2019)

    • the conceptual basis of cognitive neuroscience shouldn't be correlation
    • but what if the psychological and the biological are categorically dissimilar??

    ...and more!

    The video below is set to begin with Dr. Davachi, but the entire symposium is included.

    in The Neurocritic on April 25, 2021 07:38 PM.


    Characterization of an open access medical news platform readership during the COVID-19 pandemic

    Abstract: Background: There now exist many alternatives to direct journal access, such as podcasts, blogs, and news sites, for physicians and the general public to stay up to date with the medical literature. Currently, however, there is a scarcity of literature investigating the readership characteristics of open access medical news sites and how they may have shifted with coronavirus disease 2019 (COVID-19). Objective: The current study aimed to use readership and survey data to characterize open access medical news readership trends in relation to COVID-19, in addition to overall readership trends regarding pandemic-related information delivery. Methods: Anonymous aggregate readership data were obtained from 2 Minute Medicine® (www.2minutemedicine.com), an open-access, physician-run medical news organization that has published over 8000 original physician-written text and visual summaries of new medical research since 2013. In this retrospective observational study, the average article views, actions (defined as the sum of views, shares, and outbound link clicks), read times, and bounce rate (probability of leaving a page in <30 s) were compared between COVID-19 articles published between January 1 and May 31, 2020 (N = 40) and non-COVID-19 articles (N = 145) published in the same period. A voluntary survey was also sent to subscribed 2 Minute Medicine readers to further characterize readership demographics and preferences, scored on a Likert scale. Results: COVID-19 articles had significantly more median views than non-COVID-19 articles (296 vs. 110, U = 748.5, P < 0.001). There were no differences in average read times or bounce rate. Non-COVID-19 articles had more median actions than COVID-19 articles (2.9 vs. 2.5, U = 2070.5, P < 0.05). On a Likert scale of 1 (Strongly Disagree) to 5 (Strongly Agree), survey data revealed that 66% (78/119) of readers Agreed or Strongly Agreed that they preferred staying up to date with emerging literature surrounding COVID-19 using sources such as 2 Minute Medicine over direct journal access. A greater proportion of survey takers also indicated open access news sources to be one of their primary means of staying informed (71.7%) than direct journal article access (50.8%). A smaller proportion of readers reported reading one or fewer full-length medical studies after their introduction to 2 Minute Medicine compared with before (16.9% vs. 31.8%, P < 0.05). Conclusions: There was significantly increased readership of one open-access medical literature platform during the pandemic, reinforcing that open-access physician-written sources of medical news represent an important alternative to direct journal access for readers staying up to date with the medical literature.
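    The medians above are compared with Mann-Whitney U tests (the reported U = 748.5 and U = 2070.5). As a sketch of how that statistic is computed from pooled ranks (this is not the study's code, and the sample values below are made up for illustration):

```python
def mann_whitney_u(xs, ys):
    """Mann-Whitney U statistics for two independent samples.

    Pools the samples, assigns midranks to tied values, sums the ranks of
    the first sample (R1), then converts:
    U1 = R1 - n1*(n1+1)/2  and  U2 = n1*n2 - U1.
    """
    pooled = sorted([(v, 0) for v in xs] + [(v, 1) for v in ys])
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j][0] == pooled[i][0]:
            j += 1                            # j is one past the tie group
        midrank = (i + 1 + j) / 2.0           # average of 1-based ranks i+1 .. j
        for k in range(i, j):
            ranks[k] = midrank
        i = j
    n1, n2 = len(xs), len(ys)
    r1 = sum(r for r, (_, grp) in zip(ranks, pooled) if grp == 0)
    u1 = r1 - n1 * (n1 + 1) / 2.0
    return u1, n1 * n2 - u1

# Every value in the second sample exceeds the first: U1 = 0, U2 = n1*n2 = 9.
print(mann_whitney_u([1, 2, 3], [4, 5, 6]))   # -> (0.0, 9.0)
```

    The significance test then compares U against its null distribution (or a normal approximation with a tie correction), which is what library routines such as SciPy's `mannwhitneyu` do.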

    in Open Access Tracking Project: news on April 25, 2021 09:40 AM.


    WikiLala, 'Google' of Ottoman-Turkish documents, launches in full | Daily Sabah

    The online digital library project “WikiLala,” which brings together and aims to digitize all the printed texts produced in the Ottoman Empire since the introduction of the printing press, has recently launched the full version of its website, which had been in beta for some time. Since its launch, the website has attracted more than 200,000 visitors from 107 countries.

    in Open Access Tracking Project: news on April 22, 2021 04:54 PM.


    Bursting Neurons Signal Input Slope

    This week in our Journal Club session, Volker Steuber will talk about the paper "Bursting Neurons Signal Input Slope".

    Brief bursts of high-frequency action potentials represent a common firing mode of pyramidal neurons, and there are indications that they represent a special neural code. It is therefore of interest to determine whether there are particular spatial and temporal features of neuronal inputs that trigger bursts. Recent work on pyramidal cells indicates that bursts can be initiated by a specific spatial arrangement of inputs in which there is coincident proximal and distal dendritic excitation (Larkum et al., 1999). Here we have used a computational model of an important class of bursting neurons to investigate whether there are special temporal features of inputs that trigger bursts. We find that when a model pyramidal neuron receives sinusoidally or randomly varying inputs, bursts occur preferentially on the positive slope of the input signal. We further find that the number of spikes per burst can signal the magnitude of the slope in a graded manner. We show how these computations can be understood in terms of the biophysical mechanism of burst generation. There are several examples in the literature suggesting that bursts indeed occur preferentially on positive slopes (Guido et al., 1992; Gabbiani et al., 1996). Our results suggest that this selectivity could be a simple consequence of the biophysics of burst generation. Our observations also raise the possibility that neurons use a burst duration code useful for rapid information transmission. This possibility could be further examined experimentally by looking for correlations between burst duration and stimulus variables.
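    The claim at the heart of the abstract — that bursts occur preferentially on positive input slopes, with spikes per burst grading the slope magnitude — can be caricatured without any biophysics. The sketch below is not the authors' model (the function name and `gain` constant are invented for illustration); it simply marks each rising segment of a sampled signal as a "burst" whose spike count is proportional to the segment's mean slope:

```python
import math

def bursts_from_slope(signal, dt, gain=2.0):
    """Toy burst-duration code: one 'burst' per contiguous rising segment
    of the input; spikes per burst scale with the segment's mean slope."""
    bursts = []                 # list of (start_index, n_spikes)
    start = None
    for i in range(1, len(signal)):
        rising = signal[i] > signal[i - 1]
        if rising and start is None:
            start = i - 1       # burst begins as the input turns upward
        elif not rising and start is not None:
            mean_slope = (signal[i - 1] - signal[start]) / ((i - 1 - start) * dt)
            bursts.append((start, max(1, round(gain * mean_slope))))
            start = None
    if start is not None:       # close a burst still open at the end
        mean_slope = (signal[-1] - signal[start]) / ((len(signal) - 1 - start) * dt)
        bursts.append((start, max(1, round(gain * mean_slope))))
    return bursts

# A slow and a fast sinusoid: the faster one has steeper rising phases,
# so each of its bursts carries more spikes -- a graded slope code.
dt = 0.01
slow = [math.sin(2 * math.pi * 1.0 * k * dt) for k in range(300)]
fast = [math.sin(2 * math.pi * 3.0 * k * dt) for k in range(300)]
```

    In the paper this selectivity falls out of the biophysics of burst generation rather than being imposed by construction, which is the interesting part.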


    Date: 2021/04/23
    Time: 14:00
    Location: online

    in UH Biocomputation group on April 21, 2021 10:37 AM.


    CRL and East View Release Open Access Imperial Russian Newspapers | CRL

    "CRL and East View Information Services have opened the first release of content for Imperial Russian Newspapers, the fourth Open Access collection of titles digitized under the Global Press Archive (GPA) CRL Charter Alliance. This collection adds to the growing body of Open Access material available in the Global Press Archive by virtue of support from CRL members and other participating institutions. The Imperial Russian Newspapers collection, with a preliminary release of 230,000 pages, spans the eighteenth through early twentieth centuries and will include core titles from Moscow and St. Petersburg as well as regional newspapers across the vast Russian Empire. Central and regional “gubernskie vedomosti” will be complemented by a selection of private newspapers emerging after the Crimean War in 1855, a number of which grew to be influential...."

    in Open Access Tracking Project: news on April 21, 2021 10:21 AM.


    Editor’s tips for passing journal checks

    You’ve painstakingly mapped out your research goal: to answer that unanswered question. You’ve conducted your experiments, analyzed the results and written your paper. Now it’s off to a journal. And the process begins. PLOS editors have seen it all and want to help get your paper published as quickly as possible.

    What does the journal office look for, and what are the potential pitfalls? More importantly, how can you ensure that your manuscript passes journal checks and moves on to peer review quickly? Here, PLOS staff discuss a few of the most common reasons why a manuscript is rejected during the initial technical check, and how to avoid them.

    For a bit of background, after a manuscript is submitted to a scientific journal it undergoes a series of technical and ethical checks. Submissions that pass this initial screening go on to editorial assessment and peer review. Submissions that don’t meet requirements, or don’t provide enough information, may be returned to the authors for clarification. This can extend review times and even lead to a manuscript being rejected without review. Below are 5 checks and tips on how to smoothly get past them.

    Check #1: Sense check

    Quite simply, does the manuscript make sense as a submission? Is it a scientific article? Are all the typical parts of an article (abstract, introduction, methods, results/discussion, figures, citations) present? Is the language clear and understandable?

    How to pass it: Make sure that your manuscript is complete, and that the writing is clear and unambiguous. Note that it doesn’t have to be perfect at this stage, just precise enough for fellow researchers in your field to understand and evaluate your work.

    Read PLOS’ guide to editing your work

    Check #2: Journal fit and scope

    Journals tend to specialize in particular subjects and types of studies. “The biggest reason we reject without review is scope,” explains Kendall McKenzie, Managing Editor of PLOS Neglected Tropical Diseases. “Our scope page breaks down the diseases and categories of research we’re interested in, and even specifically states the kinds of things we don’t consider.”

    How to pass it: It comes down to submitting the right manuscript to the right publication. Carefully investigate the journal’s scope before submitting to ensure that your manuscript has a good chance of publication. If your particular article is on the edge of the journal’s expressed scope, or if you’re just not sure, search the journal for similar articles; if there are no comparable publications, your study is likely out of scope.

    Read PLOS’ guide to choosing a journal

    Check #3: Acceptance criteria

    Laura Simmons, Managing Editor of PLOS Genetics, agrees. “In addition to scope, our Editors in Chief and Section Editors may reject without review if a submission is lacking in biological or mechanistic insight (i.e. if it is too descriptive), or if the research doesn’t represent a significant advance in the field.”

    How to pass it: This one is all about doing your research. Different journals have different criteria for publication. Consult the journal website and consider whether your study fulfills the requirements and mission of the journal. Does the journal publish the type of research your study describes? Will your article appeal to the readers the journal serves? If not, consider a more specialized publication that focuses specifically on the type of research you are conducting, or, alternatively, a journal with a broader, more inclusive scope.

    Check #4: Plagiarism

    Most journals run an automated check that looks for similarities between your manuscript and previously published works. If the manuscript scores above a certain threshold, members of the journal staff will take a closer look at your manuscript to ensure that any direct quotes are framed within quotation marks and properly cited. “Overall the most common issue we see is authors reusing their own methods section, introduction, or conclusion from previous or related studies,” explains PLOS ONE Publishing Editor Emma Stillings. Authors don’t always realize that “you have to cite everyone, even yourself, to avoid any delay in the peer review process.”

    How to pass it: Any direct quotes must be framed within quotation marks and properly attributed. That includes your own prior works. Try to avoid reusing text, and especially copy-pasting from your other papers. Check to make sure that any summaries or allusions are properly cited as well.
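    As a rough illustration of what "scoring similarity" can mean — commercial screening tools are far more sophisticated, and this is not any journal's actual algorithm — text overlap is often measured over shared word k-shingles (runs of k consecutive words):

```python
def jaccard_similarity(text_a, text_b, k=5):
    """Jaccard similarity over word k-shingles: the fraction of distinct
    k-word runs that the two texts have in common."""
    def shingles(text):
        words = text.lower().split()
        return {tuple(words[i:i + k]) for i in range(max(0, len(words) - k + 1))}
    a, b = shingles(text_a), shingles(text_b)
    if not a and not b:
        return 0.0                      # both texts shorter than k words
    return len(a & b) / len(a | b)
```

    Reused passages of a few sentences produce many shared shingles and push the score toward the flagging threshold, which is why copy-pasting your own earlier methods section can trip the check.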

    Check #5: Complete and consistent ethical, funding, data, and other statements

    If the statements in the submission form are unclear, lacking detail, or otherwise incomplete, the process will pause while the journal office contacts the authors for more information. Similarly, if the statements within the manuscript are different from those in the submission system, the journal office will work with the authors to reconcile them before the manuscript can advance.

    How to pass it: Label and save the paperwork from the early part of your research process: funding information, committee approval documents, permits, permission forms, patient disclosure statements, study designs, and any other materials. You may need them to complete your submission form. When you are ready to submit, proofread carefully to ensure that everything in your manuscript is up-to-date and clear. Double check to make sure that any placeholder text has been replaced with the final version.

    Read PLOS’ guide to scientific ethics & preparing data

    Final words of wisdom

    “It’s so important to familiarize yourself with a journal before submitting. What’s the scope of the journal? What article types do they publish? Are you adhering to the guidelines for that particular article type? Making sure you’re informed about what type of work the journal publishes, and how, can go a long way in deciding where to submit and speeding your manuscript through the initial submission stages,” says Eileen Clancy, Managing Editor of PLOS Pathogens.

    The post Editor’s tips for passing journal checks appeared first on The Official PLOS Blog.

    in The Official PLOS Blog on April 20, 2021 02:53 PM.


    Center for Research Libraries (CRL) and East View Release Open Access Imperial Russian Newspapers | LJ infoDOCKET

    CRL and East View Information Services have opened the first release of content for Imperial Russian Newspapers, the fourth Open Access collection of titles digitized under the Global Press Archive (GPA) CRL Charter Alliance. This collection adds to the growing body of Open Access material available in the Global Press Archive by virtue of support from CRL members and other participating institutions.

    in Open Access Tracking Project: news on April 20, 2021 12:15 PM.


    2-Minute Neuroscience: Chronic Traumatic Encephalopathy (CTE)

    Chronic traumatic encephalopathy, or CTE, is a neurological condition linked primarily to repetitive head trauma. In this video, I discuss what happens in the brain during CTE.

    in Neuroscientifically Challenged on April 18, 2021 12:16 PM.