This app provides a BCI speller based on code-modulated visual evoked potentials (c-VEPs) under the circular-shifting paradigm that supports non-binary m-sequences. Despite their usefulness in achieving reliable, high-speed BCIs for communication and control, c-VEP paradigms using binary m-sequences often cause eyestrain for some users. This visual fatigue is likely caused by the high-contrast flickering (i.e., black and white) of the binary m-sequence. One current approach to overcome this limitation is to use non-binary m-sequences, i.e., sequences with a higher base whose additional values are encoded with different shades of gray. However, the creation of p-ary m-sequences is not trivial, which limits their application in research studies. In this application we offer the user the possibility of using p-ary sequences of bases 2, 3, 5, 7, 11 and 13. More information can be found in: Martínez-Cagigal, V. et al. (2023). Non-binary m-sequences for more comfortable brain–computer interfaces based on c-VEPs. Expert Systems with Applications, 232, 120815.
This app provides a BCI speller based on code-modulated visual evoked potentials (c-VEPs) under the circular-shifting paradigm that can encode commands using non-binary codes. It enables high-speed, reliable BCIs for communication and control by encoding the application commands with shifted versions of p-ary m-sequences. Read the description below to learn more.
C-VEPs are visual evoked potentials generated by looking at a flickering source that follows a pseudorandom sequence. Usually, this sequence is binary (i.e., it only takes values 0 or 1), and thus the flickering is encoded with black and white flashes. These high-contrast stimuli may cause visual eyestrain for some users. However, it is also possible to encode commands using non-binary codes, rendered with different shades of gray or different colors, with the aim of being more comfortable for the end user. If you want to learn more about binary-based c-VEP spellers and/or the circular-shifting paradigm, please check the "c-VEP speller" app: https://medusabci.com/market/cvep_speller/
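The circular-shifting idea can be sketched in a few lines: every command flickers with a lagged copy of the same m-sequence. The toy sequence, number of commands and lag below are illustrative values, not the app's defaults.

```python
import numpy as np

# Toy illustration of the circular-shifting paradigm (hypothetical values):
# each command is assigned a lagged copy of one shared m-sequence.
mseq = np.array([0, 0, 1, 0, 1, 1, 1])  # 7-bit binary m-sequence
n_commands, lag = 3, 2                   # shift of 2 samples per command

codes = [np.roll(mseq, -cmd * lag) for cmd in range(n_commands)]
for cmd, code in enumerate(codes):
    print(f"command {cmd}: {code}")
```

Decoding then reduces to finding which lag best correlates with the recorded response, which is why sequences with sharp circular autocorrelation are needed.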
Finding non-binary codes with good autocorrelation functions is not trivial, so this app does it for you. We use primitive polynomials over Galois fields GF(p^r), where p is the base (i.e., the number of levels or colors) and r is the order of the polynomial, to generate p-ary m-sequences. Currently, bases 2, 3, 5, 7, 11 and 13 are implemented for different orders, which vary the code length. Please refer to the basic "c-VEP speller" app if you are interested in the signal processing pipeline.
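As a rough illustration of how such sequences arise, the snippet below generates a ternary (p = 3) m-sequence of length p^r − 1 = 8 with a linear-feedback shift register driven by the primitive polynomial x² + x + 2 over GF(3). The function name and seed are our own; the app uses its own polynomial tables, so treat this only as a sketch of the idea.

```python
# Sketch of p-ary m-sequence generation with an LFSR over GF(p).
# x^2 + x + 2 is a primitive polynomial over GF(3); the function name,
# seed and output handling are illustrative, not the app's code.
def lfsr_mseq(p, coeffs, seed, length):
    """coeffs = [a_{r-1}, ..., a_0] of x^r + a_{r-1} x^{r-1} + ... + a_0."""
    s = list(seed)
    while len(s) < length:
        recent = s[::-1][:len(coeffs)]  # [s[n-1], ..., s[n-r]]
        # linear recurrence: s[n] = -(a_{r-1} s[n-1] + ... + a_0 s[n-r]) mod p
        s.append((-sum(a * x for a, x in zip(coeffs, recent))) % p)
    return s[:length]

# Ternary m-sequence; each nonzero value appears p^{r-1} = 3 times,
# and zero appears p^{r-1} - 1 = 2 times (the balance property)
seq = lfsr_mseq(p=3, coeffs=[1, 2], seed=[0, 1], length=8)
print(seq)
```

In the speller, each level of the sequence would be mapped to a different shade of gray (or color) for stimulation.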
Run Settings:
Encoding and matrix:
Colors:
Model training:
C-VEPs are exogenous responses generated naturally by our brains in reaction to stimuli. For that reason, c-VEP-based BCIs do not require users to be trained, only a short calibration. In the calibration stage, the user is asked to pay attention to a flickering command encoded with the original m-sequence. We recommend recording at least 100 complete cycles (i.e., full stimulations of the m-sequence) to train the model; for instance, two runs of 5 trials each, where each trial is composed of 10 cycles. It is important to avoid blinking while trials are being displayed; users can blink freely in the inter-trial time window.
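The recommended calibration amount works out as a simple product of the numbers above:

```python
# Recommended calibration from the text: 2 runs x 5 trials x 10 cycles
runs, trials_per_run, cycles_per_trial = 2, 5, 10
total_cycles = runs * trials_per_run * cycles_per_trial
print(total_cycles)  # → 100 complete cycles of the m-sequence
```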
If your monitor is capable of refreshing at 120 Hz, we recommend setting the "Target FPS (Hz)" to match the monitor refresh rate. Imagine that you are using a 63-bit m-sequence. At a 60 Hz presentation rate, each cycle will last 1.05 s (i.e., 63/60). You can halve that duration using 120 Hz, with each cycle lasting 0.525 s (i.e., 63/120).
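The cycle durations above follow directly from dividing the code length by the presentation rate:

```python
# Cycle duration = code length / presentation rate (values from the text)
code_len = 63  # bits of the m-sequence
for fps in (60, 120):
    print(f"{fps} Hz -> {code_len / fps:.3f} s per cycle")
```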
If you are using a 120 Hz presentation rate, we recommend using more than a single filter. For instance, a filter bank composed of 3 IIR band-pass filters with cutoffs at (1, 60), (12, 60) and (30, 60) Hz usually gives good results.
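One possible way to build such a filter bank is sketched below with SciPy. The Butterworth design, filter order and sampling rate are assumptions for illustration; use your amplifier's actual sampling rate and the app's own settings.

```python
from scipy import signal

fs = 256.0  # assumed EEG sampling rate in Hz (depends on your amplifier)
bands = [(1, 60), (12, 60), (30, 60)]  # band-pass cutoffs from the text, Hz

# Butterworth of order 7 is an illustrative choice, not the app's setting;
# second-order sections (sos) are numerically safer than (b, a) form
filter_bank = [
    signal.butter(7, (lo, hi), btype="bandpass", fs=fs, output="sos")
    for lo, hi in bands
]

# Each filter would then be applied to the EEG before feature extraction:
# filtered = [signal.sosfiltfilt(sos, eeg, axis=0) for sos in filter_bank]
```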
If you want to know more about the paradigm, we recommend reading the following paper: Martínez-Cagigal, V., Santamaría-Vázquez, E., Pérez-Velasco, S., Marcos-Martínez, D., Moreno-Calderón, S., & Hornero, R. (2023). Non-binary m-sequences for more comfortable brain–computer interfaces based on c-VEPs. Expert Systems with Applications, 232, 120815.
The signal processing pipeline and the state-of-the-art methods used in this paradigm are detailed in: Martínez-Cagigal, V., et al. (2021). Brain–computer interfaces based on code-modulated visual evoked potentials (c-VEP): a literature review. Journal of Neural Engineering.
Minor fix to work with configurations built on other computers
Improved exception handling. Users can now also choose whether artifact rejection must be applied during calibration.
Minor fix
Minor fix in GetOpenFileNames from PySide6.QtWidgets
Adaptation to v2024 KRONOS: changed from PyQt5 to PySide6; the app can now save all recorded signals (not just the EEG)
Improved EEG stream detection for streams with invalid lsl_type.
Minor update in TCPClient
Initial version of the p-ary c-VEP speller, supporting bases 2, 3, 5, 7, 11 and 13, as well as customized color encoding.