AI creates images of what people see by analyzing brain scans

A modified version of a popular text-to-image artificial intelligence can convert brain signals directly into images. However, the system requires extensive training using bulky and expensive imaging equipment, so everyday mind reading is far from reality.

Several research groups have previously generated images from brain signals using energy-intensive artificial intelligence models that must be fine-tuned across millions to billions of parameters.

Now, Shinji Nishimoto and Yu Takagi at Osaka University in Japan have developed a much simpler approach using Stable Diffusion, a text-to-image generator released by Stability AI in August 2022. Their method requires training only thousands of parameters, rather than millions.

In normal use, Stable Diffusion turns a text prompt into an image by starting with random visual noise and gradually refining it until it resembles images in its training data that carry similar text captions.
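
To make that concrete, the sketch below generates an image from a text prompt with the open-source diffusers library, a common wrapper around Stable Diffusion. The checkpoint name and prompt are illustrative assumptions, not details from the study.

```python
# A minimal text-to-image sketch using the open-source diffusers library.
# The checkpoint name and prompt are illustrative, not taken from the study.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # generation is impractically slow without a GPU

# Internally, the pipeline starts from random latent noise and iteratively
# denoises it toward an image that matches the text prompt.
image = pipe("a mountain landscape at sunset").images[0]
image.save("generated.png")
```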

Nishimoto and Takagi built two additional models to make the AI work with brain signals. The pair used data from four people who had taken part in a previous study that used functional magnetic resonance imaging (fMRI) to scan their brains as they viewed 10,000 different images of landscapes, objects and people.

Using about 90 percent of the brain imaging data, the pair trained the first model to link fMRI data from an area of the brain that processes visual signals, called the early visual cortex, to the images people viewed.
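
The article does not name the regression technique, but per-subject linear models such as ridge regression are a standard choice for this kind of fMRI decoding, so the following sketch assumes one; every shape and array here is a placeholder.

```python
# A hedged sketch of the first decoding model: ridge regression from
# early-visual-cortex voxel responses to Stable Diffusion image latents.
# The regression type, shapes and data are assumptions, not from the paper.
import numpy as np
from sklearn.linear_model import Ridge

n_scans, n_voxels = 10_000, 5_000          # placeholder dataset sizes
latent_dim = 4 * 64 * 64                   # size of a flattened SD image latent

X = np.random.randn(n_scans, n_voxels)     # fMRI responses (placeholder data)
Y = np.random.randn(n_scans, latent_dim)   # latents of viewed images (placeholder)

split = int(0.9 * n_scans)                 # roughly 90% for training, as in the study
img_model = Ridge(alpha=1.0)
img_model.fit(X[:split], Y[:split])

# Given a held-out brain scan, predict the image latent it corresponds to.
pred_latent = img_model.predict(X[split:split + 1])
```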

They used the same dataset to train a second model to link textual descriptions of the images, written by five annotators in a previous study, with fMRI data from an area of the brain that processes the meaning of images, called the ventral visual cortex.
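
The second model can be sketched the same way, except the regression target is Stable Diffusion's text-conditioning space rather than an image latent; again, the embedding choice and all shapes are assumptions, not details from the paper.

```python
# A hedged sketch of the second decoding model: regression from
# ventral-visual-cortex voxels to text-conditioning embeddings
# (e.g. embeddings of the annotators' captions). Shapes are assumptions.
import numpy as np
from sklearn.linear_model import Ridge

n_scans, n_voxels = 10_000, 5_000               # placeholder dataset sizes
text_dim = 77 * 768                             # flattened SD text-embedding size

X_ventral = np.random.randn(n_scans, n_voxels)  # fMRI responses (placeholder)
C = np.random.randn(n_scans, text_dim)          # caption embeddings (placeholder)

split = int(0.9 * n_scans)
txt_model = Ridge(alpha=1.0)
txt_model.fit(X_ventral[:split], C[:split])
pred_text_embedding = txt_model.predict(X_ventral[split:split + 1])
```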

Once trained, these two models, which had to be customized for each individual, could convert brain imaging data into forms that were fed directly into Stable Diffusion. The system could then reconstruct about 1,000 of the images people had viewed with roughly 80 percent accuracy, without ever being trained on the original images. This level of accuracy is similar to that achieved in an earlier study that analyzed the same data using a far more laborious approach.
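
One simplified way to wire the two predictions into the generator is shown below: the predicted image latent seeds the denoising process, and the predicted caption embedding takes the place of a typed prompt. The exact hand-off used in the study, including how much noise is re-applied to the latent, is not described in this article, so this is purely an assumed sketch.

```python
# A hedged sketch of the final reconstruction step. The predicted image
# latent seeds denoising and the predicted text embedding replaces a typed
# prompt. The exact hand-off used in the study is an assumption here.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Placeholders standing in for the regression outputs for one test scan:
z_img = torch.randn(1, 4, 64, 64)    # predicted image latent
c_txt = torch.randn(1, 77, 768)      # predicted text-conditioning embedding

# `latents` sets the starting point of denoising; `prompt_embeds` conditions it.
image = pipe(prompt_embeds=c_txt, latents=z_img).images[0]
image.save("reconstruction.png")
```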

“I couldn’t believe my eyes, I went to the toilet and looked in the mirror, then went back to my desk to take another look,” Takagi says.

However, the study tested the approach on only four people, and the mind-reading AI works better for some people than for others, Nishimoto says.

What’s more, because the models must be tailored to each individual’s brain, this approach requires long brain scans and huge fMRI machines, says Xikun Lin of the University of California. “It’s completely impractical for everyday use,” she says.

In the future, more practical versions of this approach could let people make art or alter images with their imagination, or add new elements to video games, Lin says.
