
Researchers Say They Know How the Universe Began


A team of researchers has analyzed more than one million galaxies to explore the origin of the present-day cosmic structures, reports a recent study published in Physical Review D.
To date, precise observations and analyses of the cosmic microwave background (CMB) and large-scale structure (LSS) have established the standard framework of the universe, the so-called ΛCDM model, in which cold dark matter (CDM) and dark energy (the cosmological constant, Λ) are the key ingredients.
This model suggests that primordial fluctuations were generated in the very early universe and acted as seeds, eventually giving rise to everything in the universe, including stars, galaxies, galaxy clusters, and their spatial distribution throughout space. Although very small when generated, these fluctuations grow over time through gravitational attraction, eventually forming dense regions of dark matter known as halos. Halos then repeatedly collide and merge with one another, leading to the formation of celestial objects such as galaxies (https://journals.aps.org/prd/abstract/10.1103/PhysRevD.108.083533).
The researchers simultaneously analyzed the spatial distribution and shape pattern of approximately one million galaxies from the Sloan Digital Sky Survey (SDSS), the world’s largest galaxy survey to date.
As a result, they successfully constrained the statistical properties of the primordial fluctuations that seeded the formation of structure throughout the universe. They detected a statistically significant alignment between the orientations of the shapes of galaxies separated by more than 100 million light-years. This result shows that correlations exist between distant galaxies whose formation processes are apparently independent and causally unrelated.
The methods and results of this study will allow researchers in the future to further test inflation theory. Details of this study were published on October 31 in Physical Review D as an Editors’ Suggestion.

AR #75

Taking Aim at the Big Bang


Can A.I. Predict Events in the Lives of Real People?

By Peter Aagaard Brixen


In a new scientific article, ‘Using sequences of life-events to predict human lives’, published in Nature Computational Science, researchers have analyzed health and labor-market data for 6 million Danes in a model dubbed life2vec. After an initial training phase, in which the model learned the patterns in the data, it was shown to outperform other advanced neural networks and to predict outcomes such as personality and time of death with high accuracy (https://www.nature.com/articles/s43588-023-00573-5).
The predictions from life2vec are answers to general questions such as ‘death within four years?’ When the researchers analyze the model’s responses, the results are consistent with existing findings in the social sciences; for example, all else being equal, individuals in a leadership position or with a high income are more likely to survive, while being male, skilled, or having a mental diagnosis is associated with a higher risk of dying. Life2vec encodes the data in a large system of vectors, a mathematical structure that organizes the different data points. The model decides where to place data on time of birth, schooling, education, salary, housing, and health.
The researchers behind the article point out that ethical questions surround the life2vec model, such as protecting sensitive data, privacy, and the role of bias in data. These challenges must be understood more deeply before the model can be used, for example, to assess an individual’s risk of contracting a disease or other preventable life events.
According to the researchers, the next step would be to incorporate other types of information, such as text and images or information about our social connections. This use of data opens up a whole new interaction between social and health sciences.
A transformer model is a type of AI deep-learning architecture used for language and other tasks. Such models can be trained to understand and generate language. The transformer architecture is designed to be faster and more efficient than earlier models and is often used to train large language models on large datasets.
A neural network is a computer model inspired by the brain and nervous system of humans and animals. There are many different types of neural networks (e.g. transformer models).
Like the brain, a neural network is made up of (artificial) neurons. These neurons are connected and can send signals to each other. Each neuron receives input from other neurons and then calculates an output that is passed on to other neurons.
A neural network can learn to solve tasks by training on large amounts of data. 
Neural networks rely on training data to learn and improve their accuracy over time. But once these learning algorithms are fine-tuned for accuracy, they are powerful tools in computer science and artificial intelligence that, according to researchers, allow us to classify and group data at high speed. One of the most well-known neural networks is Google’s search algorithm.
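To make the ideas above concrete, here is a minimal, hypothetical PyTorch sketch, not the authors’ life2vec code, of a transformer-style network that embeds a sequence of discrete life-event codes as vectors and predicts an outcome; the vocabulary, layer sizes, and outcome classes are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class TinyEventTransformer(nn.Module):
    """Toy model: embed a sequence of discrete event codes and predict an outcome."""
    def __init__(self, vocab_size=1000, dim=64, heads=4, layers=2, num_outcomes=2):
        super().__init__()
        # Each event code (e.g., a diagnosis or a job change) gets a learned vector.
        self.embed = nn.Embedding(vocab_size, dim)
        encoder_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=layers)
        self.head = nn.Linear(dim, num_outcomes)  # e.g., two classes for a yes/no question

    def forward(self, event_ids):
        # event_ids: (batch, sequence_length) integer codes for life events
        x = self.embed(event_ids)        # map codes to vectors
        x = self.encoder(x)              # let events attend to one another
        return self.head(x.mean(dim=1))  # pool over the sequence and classify

# Hypothetical usage: 8 people, each represented by a sequence of 32 event codes.
model = TinyEventTransformer()
events = torch.randint(0, 1000, (8, 32))
print(model(events).shape)  # torch.Size([8, 2])
```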

AR #112

The Artificial Intelligence Threat

by Stephen Robbins, Ph.D.


Brains and the Daydream Factor?

By Catherine Caruso


You are sitting quietly, and suddenly your brain tunes out the world and wanders to something else entirely — perhaps a recent experience, or an old memory. You just had a daydream.

Yet despite the ubiquity of this experience, what is happening in the brain while daydreaming is a question that has largely eluded neuroscientists.
Now, a study in mice, published Dec. 13 in Nature, has possibly brought a team led by researchers at Harvard Medical School one step closer to figuring it out (https://www.nature.com/articles/s41586-023-06810-1).
The researchers tracked the activity of neurons in the visual cortex of the brains of mice while the animals remained in a quiet waking state. They found that occasionally these neurons fired in a pattern similar to one that occurred when a mouse looked at an actual image, suggesting that the mouse was thinking — or daydreaming — about the image. Moreover, the patterns of activity during a mouse’s first few daydreams of the day predicted how the brain’s response to the image would change over time.
The research provides tantalizing, if preliminary, evidence that daydreams can shape the brain’s future response to what it sees. This causal relationship needs to be confirmed in further research, the team cautioned, but the results offer an intriguing clue that daydreams during quiet waking may play a role in brain plasticity — the brain’s ability to remodel itself in response to new experiences.
“We wanted to know how this daydreaming process occurred on a neurobiological level, and whether these moments of quiet reflection could be important for learning and memory,” said lead author Nghia Nguyen, a PhD student in neurobiology in the Blavatnik Institute at HMS.
Scientists have spent considerable time studying how neurons replay past events to form memories and map the physical environment in the hippocampus, a seahorse-shaped brain region that plays a key role in memory and spatial navigation.
By contrast, there has been little research on the replay of neurons in other brain regions, including the visual cortex. Such efforts would provide valuable insights about how visual memories are formed.
In the new study, the researchers repeatedly showed mice one of two images, each consisting of a different checkerboard pattern of gray and dappled black and white squares. Between images, the mice spent a minute looking at a gray screen. The team simultaneously recorded activity from around 7,000 neurons in the visual cortex.
The researchers found that when a mouse looked at an image, the neurons fired in a specific pattern, and the patterns were different enough to distinguish image one from image two. More important, when a mouse looked at the gray screen between images, the neurons sometimes fired in a similar, but not identical, pattern as when the mouse looked at the image, a sign that it was daydreaming about the image. These daydreams occurred only when mice were relaxed, characterized by calm behavior and small pupils.
Unsurprisingly, mice daydreamed more about the most recent image — and they had more daydreams at the beginning of the day than at the end, when they had already seen each image dozens of times.
Throughout the day, and across days, the activity patterns seen when the mice looked at the images changed — what neuroscientists call “representational drift.” Yet this drift wasn’t random. Over time, the patterns associated with the images became even more different from each other, until each involved an almost entirely separate set of neurons. Notably, the pattern seen during a mouse’s first few daydreams about an image predicted what the pattern would become when the mouse looked at the image later.

AR #76

Deathbed Visitations

by Michael Tymn


Cognitive strategies to augment the body with a robotic arm

(Image: Alain Herzog, CC BY-SA)


EPFL scientists show that breathing may be used to control a wearable extra robotic arm in healthy individuals, without hindering control of other parts of the body.

Neuroengineer Silvestro Micera develops advanced technological solutions to help people regain sensory and motor functions that have been lost due to traumatic events or neurological disorders. Until now, however, he had never worked on enhancing the human body and cognition with the help of technology.
Now in a study published in Science Robotics, Micera and his team report on how diaphragm movement can be monitored for successful control of an extra arm, essentially augmenting a healthy individual with a third – robotic – arm.
“This study opens up new and exciting opportunities, showing that extra arms can be extensively controlled and that simultaneous control with both natural arms is possible,” says Micera, Bertarelli Foundation Chair in Translational Neuroengineering at EPFL, and professor of Bioelectronics at Scuola Superiore Sant’Anna.
The study is part of the Third-Arm project, previously funded by the Swiss National Science Foundation (NCCR Robotics), that aims to provide a wearable robotic arm to assist in daily tasks or to help in search and rescue. Micera believes that exploring the cognitive limitations of third-arm control may actually provide gateways towards better understanding of the human brain.
Micera continues, “The main motivation of this third arm control is to understand the nervous system. If you challenge the brain to do something that is completely new, you can learn if the brain has the capacity to do it and if it’s possible to facilitate this learning. We can then transfer this knowledge to develop, for example, assistive devices for people with disabilities, or rehabilitation protocols after stroke.”

“We want to understand if our brains are hardwired to control what nature has given us, and we’ve shown that the human brain can adapt to coordinate new limbs in tandem with our biological ones,” explains Solaiman Shokur, co-PI of the study and EPFL Senior Scientist at the Neuro-X Institute. “It’s about acquiring new motor functions, enhancement beyond the existing functions of a given user, be it a healthy individual or a disabled one. From a nervous system perspective, it’s a continuum between rehabilitation and augmentation.”
To explore the cognitive constraints of augmentation, the researchers first built a virtual environment to test a healthy user’s capacity to control a virtual arm using movement of his or her diaphragm. They found that diaphragm control does not interfere with actions like controlling one’s physiological arms, one’s speech or gaze.
In this virtual reality setup, the user is equipped with a belt that measures diaphragm movement. Wearing a virtual reality headset, the user sees three arms: the right arm and hand, the left arm and hand, and a third arm between the two with a symmetric, six-fingered hand.
“We made this hand symmetric to avoid any bias towards either the left or the right hand,” explains Giulia Dominijanni, PhD student at EPFL’s Neuro-X Institute.
In the virtual environment, the user is then prompted to reach out with either the left hand, the right hand, or in the middle with the symmetric hand. In the real environment, the user holds onto an exoskeleton with both arms, which allows for control of the virtual left and right arms. Movement detected by the belt around the diaphragm is used for controlling the virtual middle, symmetric arm. The setup was tested on 61 healthy subjects in over 150 sessions.
“Diaphragm control of the third arm is actually very intuitive, with participants learning to control the extra limb very quickly,” explains Dominijanni. “Moreover, our control strategy is inherently independent from the biological limbs and we show that diaphragm control does not impact a user’s ability to speak coherently.”
The researchers also successfully tested diaphragm control with an actual robotic arm, a simplified one consisting of a rod that can be extended out and retracted. When the user contracts the diaphragm, the rod extends. In an experiment similar to the VR environment, the user is asked to reach and hover over target circles with her left or right hand, or with the robotic rod.
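As a rough illustration of that kind of mapping, here is a hypothetical sketch only; the study’s actual controller, calibration, and filtering are not described here. The idea is to normalize the belt reading between a relaxed and a fully contracted baseline and scale it to a rod extension.

```python
def calibrate(baseline_readings):
    """Estimate the relaxed/contracted range of the belt signal (arbitrary sensor units)."""
    return min(baseline_readings), max(baseline_readings)

def rod_extension(belt_value, lo, hi, max_extension_cm=30.0):
    """Map a raw belt reading to a rod extension, clamped to [0, max_extension_cm]."""
    if hi <= lo:
        return 0.0
    fraction = (belt_value - lo) / (hi - lo)         # 0 = relaxed, 1 = fully contracted
    return max(0.0, min(1.0, fraction)) * max_extension_cm

# Hypothetical usage: calibrate on a few quiet breaths, then stream readings.
lo, hi = calibrate([12.1, 12.4, 14.8, 15.0, 13.2])
print(rod_extension(14.0, lo, hi))  # partial extension, in centimeters
```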
Beyond the diaphragm, and not reported in the study, vestigial ear muscles have also been tested for their feasibility in performing new tasks. In this approach, a user is equipped with ear sensors and trained to use fine ear-muscle movements to control the displacement of a computer mouse.
“Users could potentially use these ear muscles to control an extra limb,” says Shokur, emphasizing that these alternative control strategies may help one day for the development of rehabilitation protocols for people with motor deficiencies.
As part of the Third-Arm project, previous studies on the control of robotic arms focused on helping amputees. The latest Science Robotics study is a step beyond repairing the human body, toward augmentation.
“Our next step is to explore the use of more complex robotic devices using our various control strategies, to perform real-life tasks, both inside and outside of the laboratory. Only then will we be able to grasp the real potential of this approach,” concludes Micera.

AR #84

Molecular Machines that Defy Darwin

by Casey Luskin


Finnish team determines the composition of asteroid Phaethon


The asteroid that causes the Geminid meteor shower has also puzzled researchers with its comet-like tail. The infrared spectra of rare meteorites helped determine the asteroid’s composition.

Postdoctoral Researcher Eric MacLennan holds in his hands a very rare type of meteorite, the so-called CY carbonaceous chondrite. Only six specimens of the same type are known. The sample is on loan from the Natural History Museum in London. (Image: Susan Heikkinen)
Asteroid Phaethon, which is five kilometers in diameter, has been puzzling researchers for a long time. A comet-like tail is visible for a few days when the asteroid passes closest to the Sun during its orbit.
However, the tails of comets are usually formed by vaporizing ice and carbon dioxide, which cannot explain this tail: if ices were responsible, the tail should already be visible at Jupiter’s distance from the Sun.
When the surface layer of an asteroid breaks up, the detached gravel and dust continue to travel in the same orbit and produce a meteor shower when they encounter the Earth. Phaethon causes the Geminid meteor shower, which appears in the skies of Finland every year around mid-December, when the Earth crosses the asteroid’s path; at least, that is the prevailing hypothesis.
Until now, theories about what happens on Phaethon’s surface near the Sun have remained purely hypothetical. What comes off the asteroid? How? The answer to the riddle was found by understanding the composition of Phaethon.
A rare meteorite group consisting of six known meteorites
In a recent study published in the journal Nature Astronomy by researchers from the University of Helsinki, the infrared spectrum of Phaethon previously measured by NASA’s Spitzer space telescope is re-analyzed and compared to infrared spectra of meteorites measured in laboratories.
The researchers found that Phaethon’s spectrum corresponds exactly to a certain type of meteorite, the so-called CY carbonaceous chondrite. It is a very rare type of meteorite, of which only six specimens are known.
Asteroids can also be studied by retrieving samples from space, but meteorites can be studied without expensive space missions. Asteroids Ryugu and Bennu, the targets of recent JAXA and NASA sample-return missions, are linked to CI and CM meteorites.
All three types of meteorites originate from the birth of the Solar System, and partially resemble each other, but only the CY group shows signs of drying and thermal decomposition due to recent heating.
All three groups show signs of a change that occurred during the early evolution of the Solar System, in which water combined with other molecules to form phyllosilicate and carbonate minerals. However, CY-type meteorites differ from the others in their high iron sulfide content, which suggests a separate origin.
Phaethon’s spectrum matches the spectra of CY carbonaceous chondrites
Analysis of Phaethon’s infrared spectrum showed that the asteroid was composed of at least olivine, carbonates, iron sulfides, and oxide minerals. All of these minerals supported the connection to the CY meteorites, especially iron sulfide. The carbonates suggested changes in water content that fit the primitive composition, while the olivine is a product of thermal decomposition of phyllosilicates at extreme temperatures.
Thermal modeling made it possible to show what temperatures prevail on the surface of the asteroid and when certain minerals break down and release gases. When Phaethon passes close to the Sun, its surface temperature rises to about 800°C. The CY meteorite group fits this well: at such temperatures, carbonates produce carbon dioxide, phyllosilicates release water vapor, and sulfides release sulfur gas.
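As a rough plausibility check on that figure, a back-of-the-envelope estimate of my own, not part of the study’s thermal model, and assuming a perihelion distance of about 0.14 AU, an albedo of about 0.1, and an emissivity of about 0.9, the subsolar equilibrium temperature of a body at heliocentric distance $d$ (in AU) works out close to the quoted value:

$$ T_{\mathrm{ss}} \approx \left( \frac{S_{\odot}\,(1-A)}{\varepsilon\,\sigma\,d^{2}} \right)^{1/4} = \left( \frac{1361\ \mathrm{W\,m^{-2}} \times 0.9}{0.9 \times 5.67\times 10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}} \times (0.14)^{2}} \right)^{1/4} \approx 1050\ \mathrm{K} \approx 780\ ^{\circ}\mathrm{C}. $$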
According to the study, all the minerals identified on Phaethon appear to correspond to the minerals of CY-type meteorites. The only exceptions were portlandite and brucite, which were not detected in the meteorites. However, these minerals can form when carbonates are heated and break down in the presence of water vapor.
The tail and the meteor shower get an explanation
Asteroid composition and temperature explained the formation of gas near the Sun, but do they also explain the dust and gravel forming the Geminid meteors? Was the gas pressure high enough to lift dust and rock from the asteroid’s surface?
The researchers used experimental data from other studies in conjunction with their thermal models and estimated that, when the asteroid passes closest to the Sun, gas released from its mineral structure can cause the rock to break apart. In addition, the pressure produced by carbon dioxide and water vapor is high enough to lift small dust particles from the surface of the asteroid.
“Sodium emission can explain the weak tail we observe near the Sun, and thermal decomposition can explain how dust and gravel are released from Phaethon,” says the study’s lead author, postdoctoral researcher Eric MacLennan from the University of Helsinki.
“It was great to see how each one of the discovered minerals seemed to fall into place and also explain the behavior of the asteroid,” sums up associate professor Mikael Granvik from the University of Helsinki.

AR #87

Russians Warn of Asteroid Hit in 2036


2,000-year-old coin cache discovered at ancient site in Pakistan


Archaeologists in Pakistan have discovered an extremely rare treasure trove of copper coins, believed to be more than 2000 years old. The discovery was made in the ruins of a Buddhist shrine built on the even more ancient site of Mohenjo Daro.

The coins and the shrine, known as a stupa, are believed to date from the Kushan Empire. It ruled the region from about the second century BC to the third century AD, Live Science reports.
The temple is located among the vast ruins of Mohenjo Daro in what is now southeastern Pakistan; the ruins date back to around 2600 BC and belong to the ancient Indus Valley, or Harappan, civilization, one of the oldest civilizations in the world.


The coins will now be thoroughly cleaned in an archaeological laboratory. The newly found coins are green because the copper has corroded from exposure to the air.

AR #82

Ancient Cities in the Forest

by William B. Stoecker


Did Killing off the Dinosaurs Take More than Rocks from Space?


What wiped out the dinosaurs? A meteorite plummeting to Earth is only part of the story, a new study suggests. Climate change triggered by massive volcanic eruptions may have ultimately set the stage for the dinosaur extinction, challenging the traditional narrative that a meteorite alone delivered the final blow to the ancient giants. That’s according to a study published in Science Advances, co-authored by Don Baker, a professor in McGill University’s Department of Earth and Planetary Sciences (https://www.science.org/doi/10.1126/sciadv.adg8284).

The research team delved into volcanic eruptions of the Deccan Traps—a vast and rugged plateau in Western India formed by molten lava. Erupting a staggering one million cubic kilometers of rock, it may have played a key role in cooling the global climate around 65 million years ago.
The work took researchers around the world, from hammering out rocks in the Deccan Traps to analyzing the samples in England and Sweden.
In the lab, the scientists estimated how much sulfur and fluorine was injected into the atmosphere by massive volcanic eruptions in the 200,000 years before the dinosaur extinction.
Remarkably, they found the sulfur release could have triggered a global drop in temperature, a phenomenon known as a volcanic winter.
“Our research demonstrates that climatic conditions were almost certainly unstable, with repeated volcanic winters that could have lasted decades, prior to the extinction of the dinosaurs. This instability would have made life difficult for all plants and animals and set the stage for the dinosaur extinction event. Thus our work helps explain this significant extinction event that led to the rise of mammals and the evolution of our species,” said Prof. Don Baker.

AR #101

Facing the Extinction Threat

by William B. Stoecker


Neanderthal Genes from ‘Cousins’ of Modern Humans Found

This image has an empty alt attribute; its file name is ai-free.png

Modern humans migrated to Eurasia 75,000 years ago, where they encountered and interbred with Neanderthals. A new study published in the journal Current Biology shows that by this time Neanderthals were already carrying human DNA from a much older encounter with modern humans: an ancient lineage of modern humans had migrated to Eurasia over 250,000 years ago and interbred with Neanderthals there. Over time, these early humans died out, leaving a population with predominantly Neanderthal ancestry (https://www.cell.com/current-biology/fulltext/S0960-9822(23)01315-5).

“We found this reflection of ancient interbreeding where genes flowed from ancient modern humans into Neanderthals,” says Alexander Platt, a senior research scientist in the Perelman School of Medicine and one of the study’s first authors. “This group of individuals left Africa between 250,000 and 270,000 years ago. They were sort of the cousins to all humans alive today, and they were much more like us than Neanderthals.”
Because most Neanderthal-human interbreeding is thought to have occurred in Eurasia, not in Africa, Neanderthal ancestry is expected to be limited in sub-Saharan Africa; however, a recent study made the puzzling observation that several sub-Saharan populations contain chunks of DNA that resemble Neanderthal DNA. The study was unable to determine how this Neanderthal-like DNA entered these populations, whether it originated from modern humans who had migrated from Africa, interbred with Neanderthals in Eurasia, and then returned, or whether it was the result of an earlier encounter between Neanderthals and humans. Because the study relied on a limited number of genomes from the 1,000 Genomes Project, all of which share a relatively recent common ancestry in Central and Western Africa, it was also unclear whether Neanderthal-like DNA is widespread among sub-Saharan populations.
To better understand how widespread these Neanderthal-like DNA regions are across sub-Saharan Africa and to elucidate their origins, the team, led by geneticist Sarah Tishkoff, leveraged a genetically diverse set of genomes from 180 individuals belonging to 12 different populations in Cameroon, Botswana, Tanzania, and Ethiopia. For each genome, the researchers identified regions of Neanderthal-like DNA and looked for evidence of Neanderthal ancestry.
Then, they compared the modern human genomes to a genome belonging to a Neanderthal who lived approximately 120,000 years ago. For this comparison, the team developed a novel statistical method that allowed them to determine the origins of the Neanderthal-like DNA in these modern sub-Saharan populations, whether they were regions that Neanderthals inherited from modern humans or regions that modern humans inherited from Neanderthals and then brought back to Africa.
They found that all of the sub-Saharan populations contained Neanderthal-like DNA, indicating that this phenomenon is widespread. In most cases, this Neanderthal-like DNA originated from an ancient lineage of modern humans that passed their DNA on to Neanderthals when they migrated from Africa to Eurasia around 250,000 years ago. As a result of this modern human-Neanderthal interbreeding, approximately 6% of the Neanderthal genome was inherited from modern humans.
In some specific sub-Saharan populations, the researchers also found evidence of Neanderthal ancestry that was introduced to these populations when humans bearing Neanderthal genes migrated back into Africa. Neanderthal ancestry in these sub-Saharan populations ranged from 0 to 1.5%, and the highest levels were observed in the Amhara from Ethiopia and Fulani from Cameroon.
To try to understand whether carrying modern human DNA was helpful or harmful when introduced into the Neanderthal genome, the researchers also investigated where these chunks of modern human DNA were located. They found that most of the modern human DNA was in noncoding regions of the Neanderthal genome, indicating that modern human gene variants were being preferentially lost from coding sections of the genome, which suggests that having modern human genes in a Neanderthal background is detrimental to fitness.

AR #81

The Fate of the Watchers

by Andrew Collins


Astronomers’ Search for Vanishing Stars Fails

Universe Today


On July 19, 1952, Palomar Observatory was undertaking a photographic survey of the night sky. Part of the project was to take multiple images of the same region of sky, to help identify things such as asteroids. At around 8:52 that evening a photographic plate captured the light of three stars clustered together. At a magnitude of 15, they were reasonably bright in the image. At 9:45 pm the same region of sky was captured again, but this time the three stars were nowhere to be seen. In less than an hour they had completely vanished.

Stars don’t just vanish. They can explode, or experience a brief period of brightness, but they don’t vanish. And yet, the photographic proof was there. The three stars are clearly in the first image, and clearly not in the second. The assumption then is that they must have suddenly dimmed, but even that is hard to accept. Later observations found no evidence of the stars down to magnitude 24, meaning they likely dimmed by a factor of 10,000 or more. What could possibly cause stars to dim by such an astounding amount so quickly?
One idea is that they are not three stars, but one. Perhaps a star happened to brighten for a short time, such as a fast radio burst from a magnetar. While this happened, perhaps a stellar-mass black hole passed between it and us, causing the flare to gravitationally lens as three images for a brief time. The problem with this idea is that such an event would be exceedingly rare, but other photographic images taken during the 1950s show similar rapid disappearances of multiple stars. In some cases, the stars are separated by minutes of arc, which would be difficult to produce by gravitational lensing.
Another idea is that they weren’t stars at all. The three bright points are within 10 arcseconds of each other. If they were three individual objects, then something must have triggered their brightening. Given the timespan of about 50 minutes, causality and the speed of light would require they were no more than 6 AU apart. This means they would have to be no more than 2 light-years away. They could have been Oort Cloud objects where some event caused them to brighten around the same time. Later observations couldn’t find them because they had since drifted on along their orbits.
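Spelling out that estimate, which is my own arithmetic rather than a calculation taken from the preprint: light covers about 6 AU in 50 minutes, and the roughly 10-arcsecond angular separation then caps the distance at which three causally connected objects could sit:

$$ \Delta x \lesssim c \times 50\ \mathrm{min} \approx 6\ \mathrm{AU}, \qquad D \lesssim \frac{\Delta x}{\theta} \approx \frac{6\ \mathrm{AU}}{10 \times 4.85\times 10^{-6}\ \mathrm{rad}} \approx 1.2\times 10^{5}\ \mathrm{AU} \approx 2\ \mathrm{light\ years}. $$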
A third idea is that they weren’t objects at all. Palomar Observatory isn’t too far from the New Mexico deserts where nuclear weapons testing occurred. Radioactive dust from the tests could have contaminated the photographic plates, creating bright spots on some images and not others. Given similar vanishings seen on other photographic plates of the 1950s, this seems quite possible.
At this point, we can’t be sure. What we really need is to capture a few of these events in modern sky surveys, where we can quickly go back and make additional observations. For now, it’s a mystery waiting to be solved (https://arxiv.org/pdf/2310.09035.pdf).

AR #117

Secrets of the Stars


Brains Can Now Communicate with Thought Alone


A speech prosthetic developed by a collaborative team of Duke neuroscientists, neurosurgeons, and engineers can translate a person’s brain signals into what they’re trying to say.

Appearing in the journal Nature Communications, the new technology might one day help people unable to talk due to neurological disorders regain the ability to communicate through a brain-computer interface.
“There are many patients who suffer from debilitating motor disorders, like ALS (amyotrophic lateral sclerosis) or locked-in syndrome, that can impair their ability to speak,” said Gregory Cogan, PhD, a professor of neurology and neurosurgery at Duke and one of the lead researchers involved in the project. “But the current tools available to allow them to communicate are generally very slow and cumbersome.”
Imagine listening to an audiobook at half-speed. That’s the best speech decoding rate currently available, which clocks in at about 78 words per minute. People, however, speak around 150 words per minute.
The lag between spoken and decoded speech rates is partially due to the relatively few brain activity sensors that can be fused onto a paper-thin piece of material that lies atop the surface of the brain. Fewer sensors provide less decipherable information to decode.
To improve on past limitations, Cogan teamed up with Jonathan Viventi, PhD, associate professor of biomedical engineering, whose lab specializes in making high-density, ultra-thin, and flexible brain sensors.
Viventi and his team packed a record-breaking 256 microscopic brain sensors onto a postage stamp-sized piece of flexible, medical-grade plastic. Neurons just a grain of sand apart can have wildly different activity patterns when coordinating speech, so it’s necessary to distinguish signals from neighboring brain cells to help make accurate predictions about intended speech.
After fabricating the new implant, Cogan and Viventi teamed up with several Duke University Hospital neurosurgeons, including Derek Southwell, MD, PhD, Nandan Lad, MD, PhD, and Allan Friedman, MD, who helped recruit four patients to test the implants. The experiment required the researchers to place the device temporarily in patients who were undergoing brain surgery for some other condition, such as  treating Parkinson’s disease or having a tumor removed. Time was limited for Cogan and his team to test drive their device in the OR.
Read the full story from Duke Institute of Brain Sciences on Duke Today and in Newsweek.

AR #79

Brain to Brain Communication