Chinese team teaches a large model to "read minds", turning brain activity directly into text | NeurIPS 2023


Cressy from QbitAI

A new study accepted at NeurIPS has taught a large model to "read minds"!

By learning from EEG data, the model successfully translated subjects' EEG signals into text.

And the whole process requires no bulky equipment; a special EEG "cap" is enough.

The work, called DeWave, can interpret brain waves and translate them into text without invasive devices or MRI.

Because it uses a large model to read the brain, IFLScience, which reported on DeWave, also dubbed it "BrainGPT".

Although DeWave is not the first technology to decode brain waves, it is the first to achieve brainwave-to-text conversion that is both non-invasive and MRI-free.

If deployed at scale, DeWave could provide communication assistance for people with cerebral palsy.

So, how does DeWave perform?

Evaluation scores beat SOTA

Because DeWave uses a non-invasive approach, the signal is noisier and harder to analyze, yet DeWave's test scores still improve on the previous SOTA method.

The research team used the publicly available ZuCo dataset, which contains more than 10,000 non-repeating sentences. While subjects read naturally, their brain signals and the text they were reading were recorded; the EEG signal is sampled at 500 Hz across 128 channels.

If the input EEG has been segmented according to eye-tracking features, DeWave can accurately interpret roughly one third of a sentence; even without segmentation, it can still capture a subset of the keywords.
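As a rough illustration of what this kind of data looks like, here is a minimal Python sketch (the shapes, fixation values, and the slice_by_fixations helper are hypothetical, not the ZuCo loading code): a 128-channel recording sampled at 500 Hz is cut into word-level segments using eye-tracking fixation windows.

```python
import numpy as np

# Hypothetical illustration: one sentence's EEG recording, 128 channels
# sampled at 500 Hz, plus eye-tracking fixations marking which time span
# the reader spent on each word.
SAMPLING_RATE = 500          # Hz, as described in the article
N_CHANNELS = 128

eeg = np.random.randn(N_CHANNELS, 3 * SAMPLING_RATE)   # 3 s of fake signal

# Fixation windows in seconds: (word, start, end). Values are made up.
fixations = [("The", 0.00, 0.25), ("cat", 0.30, 0.62), ("sleeps", 0.70, 1.10)]

def slice_by_fixations(eeg, fixations, fs=SAMPLING_RATE):
    """Cut the continuous recording into word-level EEG segments."""
    segments = []
    for word, start, end in fixations:
        window = eeg[:, int(start * fs):int(end * fs)]   # (channels, time) slice
        segments.append((word, window))
    return segments

for word, seg in slice_by_fixations(eeg, fixations):
    print(word, seg.shape)   # e.g. "cat (128, 160)"
```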

The results also show that DeWave decodes individual words more accurately than whole sentences, and verbs more accurately than nouns.

In terms of data, the research team had DeWave analyze the EEG data of a total of 29 subjects.

The results show that DeWave scores 3-18% higher than traditional methods on the BLEU-N metric, with a maximum improvement of 635%.

Without such segmentation, DeWave still delivers up to 120% better performance than conventional methods under the same conditions.
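For reference, scores of this kind are typically computed with BLEU-N over the decoded sentences. Below is a minimal sketch using NLTK (the toy sentences and the relative_improvement helper are illustrative assumptions, not the paper's evaluation script).

```python
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# Toy reference/hypothesis pair standing in for ground-truth text and the
# EEG-to-text output; the real evaluation uses the ZuCo test split.
references = [[["the", "cat", "sat", "on", "the", "mat"]]]
hypotheses = [["the", "cat", "sat", "on", "a", "mat"]]

smooth = SmoothingFunction().method1
for n in (1, 2, 3, 4):                      # BLEU-1 ... BLEU-4 ("BLEU-N")
    weights = tuple(1.0 / n for _ in range(n))
    score = corpus_bleu(references, hypotheses, weights=weights,
                        smoothing_function=smooth)
    print(f"BLEU-{n}: {score:.3f}")

def relative_improvement(new, old):
    """Relative gain in percent, the way the article reports score increases."""
    return 100.0 * (new - old) / old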

To assess DeWave's robustness, the team also conducted a cross-subject test.

This round of testing involved 18 subjects, and the brain waves of only one of them were used for training.

The team then looked at how the model performed on the other 17 people: the smaller the gap from its performance on the training subject, the more robust the model.

The results show that DeWave's score drop is smaller than that of traditional models, demonstrating stronger robustness and generalization.
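A minimal sketch of this robustness check, with made-up numbers: the "score drop" is simply the gap between the score on the training subject and the average score on the held-out subjects.

```python
# Hypothetical scores, for illustration only.
train_subject_score = 0.42                     # score on the subject used for training
held_out_scores = [0.40, 0.38, 0.41, 0.37]     # scores on unseen subjects

avg_held_out = sum(held_out_scores) / len(held_out_scores)
score_drop = train_subject_score - avg_held_out   # smaller drop = better generalization
print(f"average held-out score: {avg_held_out:.3f}, drop: {score_drop:.3f}")
```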

So, how does DeWave achieve brainwave decoding?

Decoding brain waves with a large model

At the heart of DeWave is the introduction of a concept called a "discrete codebook".

Using a vector-quantized encoder, the continuous EEG signal is converted into discrete codes, which are individually aligned with the vocabulary.
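Conceptually, this is vector quantization: each continuous feature vector is replaced by its nearest entry in a learned codebook. A minimal PyTorch sketch follows (the codebook size, dimensions, and the quantize helper are assumptions for illustration, not DeWave's implementation).

```python
import torch

# Minimal vector-quantization sketch: each continuous EEG feature vector is
# replaced by the index of its nearest codebook entry, turning the signal
# into a sequence of discrete tokens.
CODEBOOK_SIZE, DIM = 1024, 512
codebook = torch.randn(CODEBOOK_SIZE, DIM)          # learned jointly in practice

def quantize(features):                             # features: (seq_len, DIM)
    # Euclidean distance from every feature vector to every codeword.
    dists = torch.cdist(features, codebook)         # (seq_len, CODEBOOK_SIZE)
    indices = dists.argmin(dim=-1)                  # discrete code per time step
    quantized = codebook[indices]                   # embeddings fed downstream
    return indices, quantized

eeg_features = torch.randn(20, DIM)                 # fake encoder output
codes, q = quantize(eeg_features)
print(codes[:5], q.shape)
```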

The research team then fed the discrete codes into a Transformer encoder to obtain vector representations that fuse contextual semantics.

Using the vectorized text as supervision, the BART large model is then trained on the resulting signal vectors, yielding DeWave.

Parsing a new signal follows the same process: discretize and vectorize the EEG, then interpret it with BART to obtain the text.
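A hedged sketch of this pipeline with Hugging Face's BART (the embedding shapes and target sentence are made up, and the authors' actual code will differ): EEG-derived embeddings go into the encoder via inputs_embeds, the target sentence supplies supervision through labels, and at inference time the decoder generates text from the encoded EEG.

```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")

# Fake EEG-derived token embeddings: (batch=1, 20 "EEG tokens", hidden size).
eeg_embeds = torch.randn(1, 20, model.config.d_model)
# Target sentence used as supervision.
labels = tokenizer("the cat sat on the mat", return_tensors="pt").input_ids

# Training step: feed EEG embeddings to the encoder, text tokens as labels.
outputs = model(inputs_embeds=eeg_embeds, labels=labels)
outputs.loss.backward()                 # gradients for one optimization step
print("training loss:", float(outputs.loss))

# Inference: encode the EEG embeddings once, then let the decoder generate text.
with torch.no_grad():
    encoder_outputs = model.get_encoder()(inputs_embeds=eeg_embeds)
    generated = model.generate(encoder_outputs=encoder_outputs, max_length=20)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```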

To further enhance decodability, the research team also refined the encoding with positive and negative samples, so that the semantics DeWave parses are closer to the target text's word vectors.
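One common way to realize this is an InfoNCE-style contrastive loss; the sketch below illustrates the idea (the loss form and temperature are assumptions, not necessarily the exact objective used in the paper): each EEG representation is pulled toward its matching text vector and pushed away from the other texts in the batch.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(eeg_vecs, text_vecs, temperature=0.07):
    """InfoNCE-style loss: the i-th EEG vector should match the i-th text vector."""
    eeg_vecs = F.normalize(eeg_vecs, dim=-1)
    text_vecs = F.normalize(text_vecs, dim=-1)
    logits = eeg_vecs @ text_vecs.t() / temperature   # (batch, batch) similarities
    targets = torch.arange(eeg_vecs.size(0))          # positives on the diagonal
    return F.cross_entropy(logits, targets)

loss = contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
print(float(loss))
```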

About the authors

The DeWave team has five members, all of whom are Chinese.

The first author is Yiqun Duan of the University of Technology Sydney, from its Human-centric AI (HAI) Centre, whose research interests are machine intelligence and brain-computer interfaces.

In addition to DeWave, Duan also has a "reverse" result based on diffusion models: BrainDiffusion, a tool that converts words into brain waves.

Professor Chin-Teng Lin, director of the research center, is the corresponding author of this paper.

Jinzhao Zhou and Yu-kai Wang from the same lab, as well as Zhen Wang from the University of Sydney, also participated in the project.

