NASA AI Gives 30 Min. Warning

by Vanessa Thomas

Like a tornado siren for life-threatening storms in America’s heartland, a new computer model that combines artificial intelligence (AI) and NASA satellite data could sound the alarm for dangerous space weather.

The model uses AI to analyze spacecraft measurements of the solar wind (an unrelenting stream of material from the Sun) and predict where an impending solar storm will strike, anywhere on Earth, with 30 minutes of advance warning. This could provide just enough time to prepare for these storms and prevent severe impacts on power grids and other critical infrastructure.

The Sun constantly sheds solar material into space – both in a steady flow known as the “solar wind,” and in shorter, more energetic bursts from solar eruptions. When this solar material strikes Earth’s magnetic environment (its “magnetosphere”), it sometimes creates so-called geomagnetic storms. The impacts of these magnetic storms can range from mild to extreme, but in a world increasingly dependent on technology, their effects are growing ever more disruptive.

For example, a destructive solar storm in 1989 caused electrical blackouts across Quebec for 12 hours, plunging millions of Canadians into the dark and closing schools and businesses. The most intense solar storm on record, the Carrington Event in 1859, sparked fires at telegraph stations and prevented messages from being sent. If the Carrington Event happened today, it would have even more severe impacts, such as widespread electrical disruptions, persistent blackouts, and interruptions to global communications. Such technological chaos could cripple economies and endanger the safety and livelihoods of people worldwide.

In addition, the risk of geomagnetic storms, and of their devastating effects on our society, is increasing as we approach the next “solar maximum” – a peak in the Sun’s 11-year activity cycle – expected to arrive sometime in 2025.

To help prepare, an international team of researchers at the Frontier Development Lab – a public-private partnership that includes NASA, the U.S. Geological Survey, and the U.S. Department of Energy – has been using AI to look for connections between the solar wind and the geomagnetic disruptions, or perturbations, that wreak havoc on our technology. The researchers applied an AI method called “deep learning,” which trains computers to recognize patterns based on previous examples. They used this type of AI to identify relationships between solar wind measurements from heliophysics missions (including ACE, Wind, IMP-8, and Geotail) and geomagnetic perturbations observed at ground stations across the planet.

From this, they developed a computer model called DAGGER (formally, Deep Learning Geomagnetic Perturbation) that can quickly and accurately predict geomagnetic disturbances worldwide, 30 minutes before they occur. According to the team, the model can produce predictions in less than a second, and the predictions update every minute.
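
To make this concrete, here is a minimal, hypothetical sketch of such a forecaster in PyTorch: a network ingests a recent window of solar wind measurements (speed, density, magnetic field components, and so on) and outputs one predicted perturbation value per ground station, 30 minutes ahead. The architecture, feature count, and station count below are illustrative assumptions, not the actual DAGGER model.

```python
# Hypothetical sketch of a DAGGER-style forecaster: map a recent window of
# solar wind measurements to geomagnetic-perturbation forecasts 30 minutes
# ahead. All names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class PerturbationForecaster(nn.Module):
    def __init__(self, n_features=7, n_stations=8, hidden=64):
        super().__init__()
        # n_features: e.g. solar wind speed, density, temperature,
        # and magnetic field components measured upstream of Earth.
        self.encoder = nn.GRU(n_features, hidden, batch_first=True)
        # One perturbation estimate per ground station, 30 minutes ahead.
        self.head = nn.Linear(hidden, n_stations)

    def forward(self, x):  # x: (batch, time_steps, n_features)
        _, h = self.encoder(x)          # summarize the recent window
        return self.head(h.squeeze(0))  # (batch, n_stations)

model = PerturbationForecaster()
window = torch.randn(1, 60, 7)  # stand-in for 60 min of 1-min solar wind data
print(model(window).shape)      # torch.Size([1, 8])
```

In practice, such a model would be trained on historical solar wind measurements paired with the perturbations later recorded at ground stations; the sketch only shows the shape of the input-to-output mapping.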

AR #92

Bracing for a Carrington Event

by Frank Joseph

Device Can Read Text from Human Minds

A new artificial intelligence system called a semantic decoder can translate a person’s brain activity – while listening to a story or silently imagining telling a story – into a continuous stream of text. The system, developed by researchers at The University of Texas at Austin, might help people who are mentally conscious yet unable to physically speak, such as those debilitated by strokes, communicate intelligibly again.

The study, published in the journal Nature Neuroscience, was led by Jerry Tang, a doctoral student in computer science, and Alex Huth, an assistant professor of neuroscience and computer science at UT Austin. The work relies in part on a transformer model, similar to the ones that power OpenAI’s ChatGPT and Google’s Bard.

Unlike other language decoding systems in development, this system does not require subjects to have surgical implants, making the process noninvasive. Participants also do not need to use only words from a prescribed list. Brain activity is measured using an fMRI scanner after extensive training of the decoder, during which the individual listens to hours of podcasts in the scanner. Later, provided that the participant is willing to have their thoughts decoded, the machine can generate corresponding text from brain activity alone while they listen to a new story or imagine telling one.

“For a noninvasive method, this is a real leap forward compared to what’s been done before, which is typically single words or short sentences,” Huth said. “We’re getting the model to decode continuous language for extended periods of time with complicated ideas.”

The result is not a word-for-word transcript. Instead, researchers designed the decoder to capture the gist of what is being said or thought, albeit imperfectly. Once trained on a participant’s brain activity, the machine produces text that closely (and sometimes precisely) matches the intended meaning of the original words about half the time.
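
At a high level, the published approach proposes candidate word sequences with a language model and keeps the ones whose predicted brain response best matches the measured fMRI signal. The toy sketch below illustrates only that scoring-and-selection step; the encoding model, the feature function, and all data are random stand-ins for illustration, not the authors’ code.

```python
# Toy illustration of gist decoding: keep the candidate phrase whose
# predicted brain response best matches a measured fMRI signal.
# The encoding model, features, and data are random stand-ins.
import numpy as np

n_voxels, n_semantic_dims = 1000, 16
W = np.random.default_rng(0).normal(size=(n_semantic_dims, n_voxels))

def semantic_features(phrase):
    """Stand-in for a language-model embedding of a phrase."""
    rng = np.random.default_rng(abs(hash(phrase)) % 2**32)
    return rng.normal(size=n_semantic_dims)

def score(phrase, measured_response):
    """Correlation between the predicted and measured brain response."""
    predicted = semantic_features(phrase) @ W  # linear encoding model
    return np.corrcoef(predicted, measured_response)[0, 1]

measured = np.random.default_rng(1).normal(size=n_voxels)  # fake fMRI data
candidates = ["she opened the door", "he drove to work", "the dog barked"]
print(max(candidates, key=lambda p: score(p, measured)))
```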

Could this technology be used on someone without them knowing, say by an authoritarian regime interrogating political prisoners or an employer spying on employees?

No. The system has to be extensively trained on a willing subject in a facility with large, expensive equipment. “A person needs to spend up to 15 hours lying in an MRI scanner, being perfectly still, and paying good attention to stories that they’re listening to before this really works well on them,” said Huth.

Could training be skipped altogether?

No. The researchers tested the system on people it had not been trained on and found the results unintelligible.

Are there ways someone can defend against having their thoughts decoded?

Yes. The researchers tested whether a person who had previously participated in training could actively resist subsequent attempts at brain decoding. Tactics like thinking of animals or quietly imagining telling their own story let participants easily and completely prevent the system from recovering the speech they had been exposed to.

What if technology and related research evolved to one day overcome these obstacles or defenses?

“I think right now, while the technology is in such an early state, it’s important to be proactive by enacting policies that protect people and their privacy,” Tang said. “Regulating what these devices can be used for is also very important.”

AR #61

Telephone Telepathy

by John Kettler

Conscious Artificial Brains: Research Ethics

One way in which scientists are studying how the human body grows and ages is by creating artificial organs in the laboratory. The most popular of these organs is currently the organoid, a miniaturized organ made from stem cells. Organoids have been used to model a variety of organs, but brain organoids are the most clouded by controversy.

Current brain organoids differ in size and maturity from normal brains. More importantly, they do not produce any behavioral output, showing that they are still a primitive model of a real brain. However, as research generates brain organoids of greater complexity, they may eventually gain the ability to feel and think. In anticipation of this, Associate Professor Takuya Niikawa (Kobe University) and Assistant Professor Tsutomu Sawai (Kyoto University’s Institute for the Advanced Study of Human Biology (WPI-ASHBi)), in collaboration with other philosophers in Japan and Canada, have written a paper on the ethics of research using conscious brain organoids. The paper can be read in the academic journal Neuroethics (https://link.springer.com/article/10.1007/s12152-022-09483-1).

Working regularly with both bioethicists and neuroscientists who have created brain organoids, the team has been writing extensively about the need to construct guidelines on ethical research. In the new paper, Niikawa, Sawai and their coauthors lay out an ethical framework that assumes brain organoids already have consciousness rather than waiting for the day when we can fully confirm that they do.

“We believe a precautionary principle should be taken,” Sawai said. “Neither science nor philosophy can agree on whether something has consciousness. Instead of arguing about whether brain organoids have consciousness, we decided they do as a precaution and for the consideration of moral implications.”

To justify this assumption, the paper explains what brain organoids are and examines what different theories of consciousness suggest about them, concluding that some popular theories of consciousness allow for the possibility that brain organoids possess it.

Ultimately, the framework proposed by the study recommends that research on human brain organoids follow ethical principles similar to those governing animal experiments. Its recommendations therefore include using the minimum number of organoids possible and doing the utmost to prevent pain and suffering, while considering the interests of the public and patients.

“Our framework was designed to be simple and is based on valence experiences and the sophistication of those experiences,” said Niikawa.

AR #115

“High IQ and a Big Brain: Is There a Connection?”