Japanese scientists used AI to decipher mental imagery. Now Meta, Facebook's parent company, is doing the same.
The initial breakthrough was announced in December of last year, with striking images of what the technology has accomplished so far. The noninvasive setup recorded participants' brain activity while they looked at 1,200 images of various objects; the recordings were fed to the AI along with the original images. The AI was then asked to match the brain activity to elements of each image and recreate it. And while the resulting images aren't recognizable on their own, placed next to the originals the resemblance is easy to see.
Earlier this month, Meta produced similar results with a different methodology: text descriptions of the objects were included alongside the images and brainwaves, and the sample was larger, at 22,448 unique images. When asked to match brainwaves to their corresponding images, Meta's DINOv2-based system reached 70% accuracy at its best. Most of the time, though, the AI's matches leave something to be desired.
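To make the matching task concrete, here is a minimal toy sketch of that kind of retrieval test, not Meta's actual pipeline: it assumes a decoder has already mapped each brain recording into the same embedding space as the images (the dimensions, noise level, and dataset here are invented for illustration), and then scores how often nearest-neighbor search by cosine similarity picks the correct image.

```python
import numpy as np

# Illustrative only -- not Meta's method. We pretend each brain recording
# has been decoded into the same vector space as the image embeddings,
# modeled here as the true image embedding plus noise.
rng = np.random.default_rng(0)
n_images, dim = 1000, 64

image_emb = rng.normal(size=(n_images, dim))                # one embedding per image
brain_emb = image_emb + rng.normal(size=(n_images, dim))    # noisy "decoded" embeddings

# Normalize rows so dot products become cosine similarities.
image_emb /= np.linalg.norm(image_emb, axis=1, keepdims=True)
brain_emb /= np.linalg.norm(brain_emb, axis=1, keepdims=True)

sims = brain_emb @ image_emb.T        # similarity of every recording to every image
predicted = sims.argmax(axis=1)       # best-matching image for each recording
top1_accuracy = (predicted == np.arange(n_images)).mean()
print(f"top-1 retrieval accuracy: {top1_accuracy:.1%}")
```

The reported 70% figure corresponds to this kind of top-1 accuracy: as the decoding noise grows relative to the signal, the correct image is outranked by look-alike distractors more often and the score drops.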
But the technology is promising enough to raise ethical concerns, such as whether reading minds is a goal scientists should be working toward at all. The fact that Meta is working to improve this technology worries many, since the company has already been shown to engage in a variety of practices to collect consumer data.
However, AI can barely interpret the images someone sees, much less intangible thoughts. Besides, the machinery required is bulky and difficult to hide. So, for now, there's no reason to fear mind reading at the grocery store. Still, the technology will continue to advance, and society will be forced to reckon with the questions of privacy that come with the ability to peek into someone's brain.