This app provides a BCI speller based on code-modulated visual evoked potentials (c-VEP) under the circular-shifting paradigm. The use of c-VEPs as control signals is a recent but promising alternative for achieving reliable, high-speed BCIs for communication and control. Here, the commands are encoded using shifted versions of a pseudorandom sequence with perfect autocorrelation properties (i.e., a maximal-length sequence, or m-sequence). The "reference method" for the circular-shifting c-VEP paradigm is implemented to decode, in real time, the command the user is looking at. In practice, this paradigm generally achieves accuracies greater than 90% with a very short calibration of only 30 seconds. More information on the paradigm and signal processing can be found in: Martínez-Cagigal, Víctor, et al. "Brain–computer interfaces based on code-modulated visual evoked potentials (c-VEP): a literature review." Journal of Neural Engineering (2021).
This app provides a BCI speller based on code-modulated visual evoked potentials (c-VEP) under the circular-shifting paradigm. It allows you to develop high-speed, reliable BCIs for communication and control by encoding the application commands with shifted versions of pseudorandom sequences. Read the description below to learn more.
c-VEPs are visual evoked potentials generated by looking at a flickering source that follows a pseudorandom sequence. Usually, this sequence is binary (i.e., it only takes values 0 or 1), so the flickering is rendered with black and white flashes. Note, however, that c-VEP-based BCIs rarely employ purely random sequences; they normally use pseudorandom sequences with special autocorrelation characteristics.
Although each command could be modulated with a different code, finding a family of codes with suitable cross-correlation properties is not trivial. Thus, the classical approach relies on finding a single pseudorandom binary sequence that presents low autocorrelation values for non-zero circular shifts, and then encoding each command with time-delayed versions of that original sequence. This is known as the “circular-shifting” paradigm.
Maximal-length sequences (i.e., m-sequences), easily generated by linear-feedback shift registers (LFSR), are often employed in c-VEP-based BCIs due to their excellent autocorrelation properties: the normalized circular autocorrelation is 1 for a null shift and −1/N otherwise, where N is the length of the m-sequence. Although the stimuli of different commands are thus uncorrelated, it cannot be claimed that the EEG responses will be uncorrelated as well: correlated responses can already appear when the brain is modeled as a linear system, and even more so when a nonlinear dynamic system is assumed. In practice, even though the EEG responses to time-shifted stimuli are not as perfectly uncorrelated as the underlying bit sequences, there is usually enough distinction to identify the time shift of the response. This is achieved by creating a template for each command, circularly shifting the main template according to the command's lag. In online sessions, whenever an EEG response to several test cycles arrives, it is pre-processed and compared with all the templates; the selected command is the one whose template reaches the maximal correlation with the processed EEG response.
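As an illustration, the following Python sketch (not the app's actual code) generates a 63-bit m-sequence with a Fibonacci LFSR, verifies the autocorrelation property described above, and decodes a response by correlating it against circularly shifted templates. The taps [6, 5] (a primitive polynomial of degree 6), the all-ones seed and the helper names are illustrative assumptions:

```python
import numpy as np

def lfsr_msequence(taps, seed):
    """Generate one period (2**len(seed) - 1 bits) of an m-sequence with a
    Fibonacci LFSR. `taps` are 1-based register positions of a primitive
    feedback polynomial, e.g. taps=[6, 5] (illustrative choice)."""
    reg = list(seed)
    bits = []
    for _ in range(2 ** len(seed) - 1):
        bits.append(reg[-1])                      # output the last stage
        feedback = 0
        for t in taps:                            # XOR of tapped stages
            feedback ^= reg[t - 1]
        reg = [feedback] + reg[:-1]               # shift the register
    return np.array(bits)

m_seq = lfsr_msequence(taps=[6, 5], seed=[1] * 6)  # 63-bit sequence

# Check the autocorrelation property: 1 at zero shift, -1/N elsewhere
x = 1.0 - 2.0 * m_seq                              # map {0, 1} -> {+1, -1}
rho = np.array([np.dot(x, np.roll(x, k)) for k in range(len(x))]) / len(x)
assert np.isclose(rho[0], 1.0) and np.allclose(rho[1:], -1.0 / len(x))

def decode_command(response, main_template, lags):
    """Return the index of the command whose circularly shifted template
    correlates best with the (pre-processed, cycle-averaged) response."""
    scores = [np.corrcoef(response, np.roll(main_template, lag))[0, 1]
              for lag in lags]
    return int(np.argmax(scores))
```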
Run Settings:
Encoding and matrix:
Colors:
Background:
Model training:
c-VEPs are exogenous signals generated naturally by our brains in response to stimuli. For that reason, c-VEP-based BCIs do not require users to be trained, just a short calibration. In the calibration stage, the user is asked to pay attention to a flickering command encoded with the original m-sequence. We recommend recording at least 100 complete cycles (i.e., full stimulations of the m-sequence) to train the model; that is, two runs of 5 trials each, where each trial is composed of 10 cycles. It is important to avoid blinking while trials are being displayed; users can blink freely during the inter-trial time window.
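For reference, a minimal sketch of how such a calibration could be turned into the main template; the shapes and function names are assumptions, not the app's internal API:

```python
import numpy as np

N_RUNS, N_TRIALS, N_CYCLES = 2, 5, 10               # recommended calibration
assert N_RUNS * N_TRIALS * N_CYCLES == 100          # 100 cycles in total

def epoch_cycles(trial_eeg, n_cycles, samples_per_cycle):
    """Cut one trial (samples x channels) into its stimulation cycles."""
    cut = trial_eeg[: n_cycles * samples_per_cycle]
    return cut.reshape(n_cycles, samples_per_cycle, -1)

def build_main_template(trials, samples_per_cycle):
    """Average every recorded cycle into the response to the original,
    unshifted m-sequence; shifted copies of it serve as command templates."""
    cycles = np.concatenate(
        [epoch_cycles(t, N_CYCLES, samples_per_cycle) for t in trials], axis=0)
    return cycles.mean(axis=0)                      # (samples, channels)
```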
If your monitor is capable of refreshing at 120 Hz, we recommend setting "Target FPS (Hz)" to match the monitor refresh rate. Imagine that you are using a 63-bit m-sequence: at a 60 Hz presentation rate, each cycle lasts 1.05 s (i.e., 63/60), whereas at 120 Hz that duration is halved to 0.525 s (i.e., 63/120).
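The arithmetic, as a quick sanity check:

```python
SEQ_LEN = 63                       # bits in the m-sequence
for fps in (60, 120):
    print(f"{fps:>3} Hz -> one cycle lasts {SEQ_LEN / fps:.3f} s")
# 60 Hz -> one cycle lasts 1.050 s
# 120 Hz -> one cycle lasts 0.525 s
```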
If you are using a 120 Hz presentation rate, we recommend using more than a single filter. For instance, a filter bank composed of 3 IIR band-pass filters with cutoffs (1, 60), (12, 60) and (30, 60) Hz usually gives good results.
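As an illustration, here is a sketch of such a filter bank with SciPy; the Butterworth design, the filter order and the 256 Hz sampling rate are assumptions, not the app's fixed settings:

```python
from scipy import signal

FS = 256                                   # sampling rate in Hz (assumed)
BANDS = [(1, 60), (12, 60), (30, 60)]      # pass-bands in Hz, as suggested

# One IIR (Butterworth) band-pass filter per band, in stable SOS form
sos_bank = [signal.butter(7, band, btype="bandpass", fs=FS, output="sos")
            for band in BANDS]

def apply_filter_bank(eeg):
    """Return one zero-phase filtered copy of the EEG (samples x channels)
    per filter in the bank."""
    return [signal.sosfiltfilt(sos, eeg, axis=0) for sos in sos_bank]
```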
If you want to know more about the paradigm, the signal processing pipeline or the state-of-the-art methods used in c-VEP-based BCIs, we recommend reading the following paper: Martínez-Cagigal, Víctor, et al. "Brain–computer interfaces based on code-modulated visual evoked potentials (c-VEP): a literature review." Journal of Neural Engineering (2021).
Introduced parameters for adjusting color opacity and setting an image as the background.
Minor fix to work with configurations built on other computers.
Improved exception handling. Users can now also choose whether artifact rejection must be applied during calibration.
Improved the method to assign lags to commands.
Minor fix
Adaptation to v2024 (KRONOS):
- Changed from PyQt5 to PySide6
- The app can now save all recorded signals (not just the EEG)
- The app detects several monitors and warns the user if the monitor refresh rate is not the same
Improved EEG stream detection for streams with invalid lsl_type.
Updated the encoding visualization.
Fixed a bug caused by a call to a now-obsolete PyQt5 function.
Updated TCPClient in Unity
Fixed a bug where an additional trial was displayed in training.
Initial "cvep_speller" app for MEDUSA Platform v2022.0. This app implements a c-VEP-based BCI speller that uses the circular-shifting paradigm. Currently, only binary m-sequences are supported. Signal processing was implemented following the common "reference method" for circular shifting (i.e., CCA + correlation analysis).
Definitely give this one a try if you're looking for practical BCI communication! It's not hard to get 100% accuracy with less than 1 minute of calibration =)