Within a decade, computers will be reading images from the brain.

Eidetic memory is the ability to recall complex images, sounds, and other impressions with very high precision and accuracy. People with this ability can easily picture the interior of a room they spent time in or retrace a road they drove down some time ago. Eidetic imagery is far more common in children: statistics suggest that about 8% of 7-12 year olds, but only 0.1% of adults, have it. Researchers from Kyoto University, using generative networks, have built an advanced artificial intelligence system that can read the images a person sees with his or her eyes and convert them into digital pictures that judges matched to the originals with over 99% accuracy.

Current status and direction of development

The artificial intelligence already exists, but it is not yet very accurate. At present the algorithm can read images seen with the mind's eye, but the result of the analysis is still low resolution, and the person has to lie in an MRI machine. As the technology and brain-imaging equipment improve, computers should be able to convert our thoughts into images that we can save and share. Researchers at Kyoto University ran their experiments back in 2018. The first full publication on the system appeared in the prestigious scientific journal PLOS Computational Biology in 2019, and even before the paper was published, the Japanese system had been covered in Science magazine in 2018, where the workings of the artificial intelligence were described in detail.

The researchers placed subjects in an fMRI scanner, which performs a special type of magnetic scan. The scanner's cylindrical bore contains a very powerful electromagnet with a field of 3 tesla (T), roughly 50,000 times stronger than the Earth's magnetic field. The magnetic field inside the machine acts on the magnetic nuclei of atoms in the body. Under normal conditions these nuclei are oriented randomly, but under the applied field they align with its direction, and the stronger the field, the greater the degree of alignment. The key to such a scanner is that the signal from hydrogen nuclei varies in strength depending on their surroundings, which makes it possible to distinguish gray matter, white matter and cerebrospinal fluid in structural images of the brain. Using this process, the researchers recorded the brain activity of the subjects in the experiment.
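As a quick sanity check on the numbers above, here is a minimal Python calculation comparing a 3 T scanner field with an assumed typical value for the Earth's surface field (which in reality varies between roughly 25 and 65 microtesla depending on location):

```python
# Rough check of the field-strength comparison; the Earth's field value
# below is an assumed typical figure, not a measurement from the study.
scanner_field_t = 3.0      # 3 T fMRI magnet
earth_field_t = 60e-6      # ~60 microtesla, expressed in tesla

ratio = scanner_field_t / earth_field_t
print(f"The scanner field is roughly {ratio:,.0f} times Earth's field")
# -> The scanner field is roughly 50,000 times Earth's field
```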

Unlike a traditional MRI scanner, an fMRI scanner monitors blood flow in the brain, which lets researchers determine which areas are most active during a task. While recording signals from the subjects' visual systems, the scientists showed them thousands of images, displaying each one several times. This produced a huge database of brain signals, with each set of signals corresponding to a specific image. The next step was to feed this information into a deep neural network (DNN) that was trained to turn it into images. Neural networks are very good pattern detectors, and for each image shown to the subjects the researchers had the network generate a picture matching the observed pattern of brain activity, iteratively refining its output over roughly 200 rounds. The end result was a system that could take fMRI data of a subject's brain activity and paint a picture of what it thinks the subject is seeing. The researchers then went a step further and fed the DNN's output into an already trained generative network. Generative networks are relatively new and represent one of the most impressive advances in artificial intelligence of the past decade: they take sparse input information and produce images and videos that look little different from those recorded with a professional camera.
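The overall pipeline can be pictured as two learned mappings chained together: brain activity to image features, then features to pixels. The sketch below is only illustrative, with made-up layer sizes and module names; it is not the authors' code, which decodes hierarchical DNN features from fMRI and then refines the image with a generative prior.

```python
# A minimal decode-then-generate sketch (PyTorch), assuming invented sizes.
import torch
import torch.nn as nn

N_VOXELS = 5000        # assumed number of fMRI voxels per scan
N_FEATURES = 4096      # assumed size of the decoded DNN feature vector

# 1) Decoder: maps a pattern of fMRI activity to DNN image features.
decoder = nn.Sequential(
    nn.Linear(N_VOXELS, 2048),
    nn.ReLU(),
    nn.Linear(2048, N_FEATURES),
)

# 2) Generator: maps decoded features to a rough RGB image (64x64 here).
generator = nn.Sequential(
    nn.Linear(N_FEATURES, 3 * 64 * 64),
    nn.Tanh(),
)

def reconstruct(fmri_pattern: torch.Tensor) -> torch.Tensor:
    """Turn one fMRI activity pattern into a crude image tensor."""
    features = decoder(fmri_pattern)   # brain activity -> DNN features
    image = generator(features)        # DNN features -> pixels
    return image.view(3, 64, 64)

# Example with random data standing in for a real scan:
fake_scan = torch.randn(N_VOXELS)
print(reconstruct(fake_scan).shape)    # torch.Size([3, 64, 64])
```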

Generative networks are the technology behind deepfakes; they make it possible to create artificial people and power the filters used, among other places, on Snapchat. In this case, the Japanese researchers used such a network to clean up the images read from the subjects' brains and make them more realistic. In the end, the artificial intelligence took brain-activity data, transformed it into primitive images with the DNN, and then used the generative network to refine them. To test the final images, a group of "judges" was recruited: they were shown a set of candidate images and asked to match each image read from a subject's brain to the most similar one. In more than 99% of cases, people matched the images generated from the brain to those previously shown to the subjects.
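A simple way to picture this evaluation is an identification test: for each reconstruction, pick the candidate image it most resembles and check whether that is the image the subject actually saw. The toy sketch below automates the idea with pixel correlation on synthetic data; the actual study relied on human raters, so the similarity measure and the numbers here are purely illustrative.

```python
# Toy stand-in for the human "judge" test, using pixel correlation.
import numpy as np

rng = np.random.default_rng(0)

def identify(reconstruction: np.ndarray, candidates: list[np.ndarray]) -> int:
    """Return the index of the candidate most correlated with the reconstruction."""
    scores = [np.corrcoef(reconstruction.ravel(), c.ravel())[0, 1]
              for c in candidates]
    return int(np.argmax(scores))

# Fake data: 50 "shown" images and noisy "reconstructions" of them.
shown = [rng.normal(size=(64, 64)) for _ in range(50)]
recons = [img + 0.5 * rng.normal(size=img.shape) for img in shown]

correct = sum(identify(r, shown) == i for i, r in enumerate(recons))
print(f"Identification accuracy: {correct / len(shown):.0%}")  # high on this easy synthetic data
```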

Incredibly, using only brain signals and artificial intelligence, the Kyoto University researchers were able to reconstruct the images in subjects' brains so well that neutral judges could match them to the "real world" originals almost 100 percent of the time. After this first success, the researchers raised the bar: they wanted to recover images that the subjects merely imagined in their heads. Data from the first part of the study informed this attempt, since imagining a picture engages brain areas similar to those used when actually viewing one. Images reconstructed purely from imagination were not as good as those produced while subjects looked at an actual photograph. When subjects were asked to imagine simple, high-contrast shapes (a plus sign, a circle on a blank background) and their brains were scanned, the resulting images matched the real-world examples 83.2% of the time. For now, pictures created from neural signals remain relatively crude, but the Kyoto University research has shown that it is possible to read images from people's brains.

Higher resolution and better quality would give an accurate look into the mind of the person being studied and make it easier for the computer to reconstruct the images. If that happens, the invention could revolutionize art, and design more broadly would be transformed; cameras might even become redundant, because each of us could take pictures with our minds. Yet as artificial intelligence advances, the moral dilemmas and human-rights questions deepen. Such tools could undoubtedly fall into the wrong hands and be used for purposes we cannot foresee. Privacy and personal security as we know them could cease to exist, and extracting information from someone would never have been easier.
