Performance Evaluation of the Current State of Brain-Computer Interface (BCI) and its Trends

Abstract: The goal of this study was to survey the rapidly growing field of BCI research and to develop a deeper understanding of the neurophysiological processes that can be harnessed by a BCI system, focusing on the electroencephalogram (EEG) as the BCI input modality. The research was carried out in order to gain a better understanding of the neurophysiology of the human brain, and in particular to investigate electroencephalography as a method of detecting mental activity. We present a comprehensive overview of the EEG-based BCI systems that have been implemented so far, explore the future of BCI technology, and evaluate it by comparing the performance of various feature classification techniques.


I. INTRODUCTION
A brain-computer interface (BCI), also known as a brain-machine interface (BMI), is a hardware/software communication system that uses control signals derived from electroencephalographic (EEG) activity to allow people to interact with their environment without using peripheral muscles and nerves. A BCI is an intelligent system that can recognise a specific pattern in brain signals by going through five phases: signal acquisition, data pre-processing or signal enhancement, feature extraction, classification, and the control interface [1]. The brain signals are captured in the signal acquisition phase, which may also include noise removal and artefact processing. The pre-processing stage transforms the signals into a format that can be processed further. The feature extraction stage identifies discriminative details in the captured brain signals; the signal is then mapped onto a vector of characteristics with efficient and divergent validity extracted from the observed signals. Extracting this information is extremely difficult: brain signals are mixed with numerous other signals from a finite set of brainwaves that coexist in both time and space. Furthermore, the signal is rarely stationary and is susceptible to artefacts such as electromyography (EMG) or electrooculography (EOG).
Historically, BCI technology attracted little meaningful research attention. The concept of reliably decoding thoughts or behavioural intentions from brain activity was long dismissed as unusual and far-fetched. As a result, research on brain function was typically restricted to the clinical diagnosis of neurodevelopmental problems or the laboratory investigation of brain processes. Because of the limits on the amount of useful information observable in the brain, as well as its wide variation, the BCI endeavour was deemed too complicated.

Figure 1 Architectural layout of Brain Computer Interface
Healthcare, neuro-ergonomics and environmental sensing, neuromarketing and marketing, knowledge and self, games and entertainment, and identification and verification disciplines have all benefited from brain-computer interfaces.

II. LITERATURE REVIEW
Q. Zheng et al. [1] Electroencephalography (EEG) data vary from subject to subject in the context of motor imagery, so the performance of a classifier trained on data from various subjects in a particular domain usually deteriorates when it is applied to a different subject. While obtaining adequate samples from each subject might solve this problem, it is often too time-consuming and inconvenient. To address this issue, the researchers present a new end-to-end deep domain adaptation method that uses beneficial data from multiple subjects (source domain) to strengthen classifier performance on a single subject (target domain). The technique jointly optimises three components: a feature extractor, a classifier, and a domain discriminator. By mapping raw EEG signals into a deep representation space, the feature extractor tries to learn discriminative latent features. In addition, a centre loss is used to constrain the latent feature space and reduce intra-subject non-stationarity. Moreover, the domain discriminator uses an adversarial training strategy to align the feature distributions of the source and target domains.
J. S. Kirar et al. [2] To enhance the accuracy of motor imagery task classification, the researchers present a novel framework that chooses a subset of relevant frequency bands using a sequential forward feature selection algorithm from a composite filter bank comprising the prior-known EEG frequency bands and a set of variable-length overlapping frequency bands. The efficiency of the proposed method is validated by the experimental results achieved on public datasets. According to the Friedman statistical test, the suggested model considerably outperforms the existing methods.
S. Sakhavi et al. [3] A classification framework for MI data is introduced in this paper by using a convolutional neural network (CNN) architecture and introducing a new temporal representation of the data. The novel representation is created by modifying the filter-bank common spatial patterns (FBCSP) technique, and the CNN is built and optimised to fit the new representation. On the BCI Competition IV-2a 4-class MI dataset, the framework outperforms the best classification approach in the literature by 7% in mean subject accuracy. In addition, by examining the convolutional weights of the trained networks, the authors gain insight into the temporal characteristics of EEG.
Van Erp J et al. [4] The use of brain-computer interaction has progressed from assistive technology to applications including gaming. Developments in functionality, equipment, signal processing, and system integration should lead to applications in non-medical fields.
Bi L et al. [5] In this paper, the researchers provide a thorough overview of the system requirements, key methodologies, and evaluation concerns of brain-controlled mobile robots, along with insight into related studies and future research issues. From the standpoint of their operating conditions, the researchers first survey and classify the components and systems of brain-controlled mobile robots into two main categories. The brain-computer interface techniques and shared control mechanisms are among the key strategies used in these brain-controlled mobile robots. This is followed by an analysis of the problems involved in evaluating brain-controlled mobile robots, such as participants, tasks, and environments, as well as evaluation metrics. The article concludes with a discussion of current research obstacles and directions for future research.
L. F. Nicolas-Alonso et al. [6] A brain-computer interface (BCI) is a hardware and software communication system that enables computers or peripheral devices to be controlled solely by brain activity. The primary objective of BCI research is to provide communication channels to severely disabled people who are completely paralysed or 'locked in' by neurological neuromuscular diseases such as amyotrophic lateral sclerosis, brainstem stroke, or spinal cord injury. In this article, the researchers examine the various stages that make up a conventional BCI: signal acquisition, preprocessing or signal enhancement, feature extraction, classification, and the control interface.

III. METHODOLOGY
The research framework is broken down into four stages. The first stage entails gathering the EEG data. The second stage is signal preprocessing, which removes unnecessary information and noise. The third stage is to extract features from the denoised EEG signals. The fourth stage is to map the EEG signals to the movement patterns they correspond to, such as hand and leg motions. The data come from the BCI project, which focused on right-handed 21-year-old male subjects with no known health conditions. With the eyes shut, the EEG captures actual random motion of the left and right hands. Each electrode is represented by a row. The order of the electrodes is FP1 FP2 F3 F4 C3 C4 P3 P4 O1 O2 F7 F8 T3 T4 T5 T6 FZ CZ PZ. The recording was performed at 500 Hz using the Neurofax EEG system with a chain-link montage.
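The recording layout described above (one row per electrode, 19 channels sampled at 500 Hz) can be sketched as a NumPy array. This is an illustrative placeholder, not the actual dataset; the array contents here are zeros.

```python
import numpy as np

# Electrode order exactly as listed in the recording description (10-20 system)
CHANNELS = ["FP1", "FP2", "F3", "F4", "C3", "C4", "P3", "P4", "O1", "O2",
            "F7", "F8", "T3", "T4", "T5", "T6", "FZ", "CZ", "PZ"]
FS = 500  # sampling rate in Hz, as stated above

# One row per electrode: a placeholder for 10 seconds of recording
eeg = np.zeros((len(CHANNELS), FS * 10))
```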
Noise removal is required because EEG data are typically noisy. To eliminate unwanted artefacts, the EEG signals were cleaned using a Butterworth band-pass filter between 8 and 25 Hz.
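The 8-25 Hz Butterworth band-pass step can be sketched with SciPy. The paper does not specify the filter order or design details, so the fourth-order, zero-phase (filtfilt) design below is an assumption.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 500  # sampling rate (Hz) of the recording

def bandpass_8_25(signal, fs=FS, order=4):
    """Zero-phase Butterworth band-pass filter between 8 and 25 Hz."""
    nyq = 0.5 * fs
    b, a = butter(order, [8.0 / nyq, 25.0 / nyq], btype="band")
    return filtfilt(b, a, signal)

# Demo: a 12 Hz in-band component plus 50 Hz mains interference
t = np.arange(0, 2, 1 / FS)
raw = np.sin(2 * np.pi * 12 * t) + np.sin(2 * np.pi * 50 * t)
clean = bandpass_8_25(raw)  # the 50 Hz component is strongly attenuated
```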
Feature Classification: Once the features have been obtained, they are combined into a feature vector. The available dataset is divided into two groups: a training set and a testing set. After the data have been split, the classifiers are applied to the training data at each ratio to derive classification rules. The testing dataset is then classified using these rules. A stacked deep autoencoder can be used for classification.
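The split-then-classify procedure can be sketched with scikit-learn on synthetic feature vectors (the real EEG features are not reproduced here); the 70:30 ratio and the k-NN classifier below are illustrative choices, not the paper's exact setup.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 19))   # 200 synthetic feature vectors (one value per channel)
y = (X[:, 0] > 0).astype(int)    # synthetic labels: 0 = left hand, 1 = right hand

# Split into training and testing sets (here 70:30)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)  # learn classification rules
acc = accuracy_score(y_test, clf.predict(X_test))                # evaluate on the test set
```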

IV. STACKED DEEP AUTOENCODER
A stacked deep autoencoder is formed by combining a stacked autoencoder, which has the desired number of cascaded autoencoder layers, with a softmax classification layer. Because autoencoder networks do not use labelled data, the feature learning phase is unsupervised. The underlying structure of an unsupervised autoencoder is a feedforward network with an input layer, one or more hidden layers, and an output layer.

Figure 2 Structure of Stacked Autoencoder
When the architecture takes the form of a bottleneck, an autoencoder can be used for pre-training or dimensionality reduction. Consider an autoencoder with a single hidden layer; by stacking hidden layers, the autoencoder can learn multiple levels of representation. It is a feature extraction algorithm that aids in the discovery of a good data representation: the features learned by the autoencoder often describe the data better than the raw data points themselves. Each layer of the deep autoencoder contains an input layer, a hidden layer, and an output layer.

The input layer passes the input values to the network.
The value at the hidden layer is evaluated as

    h = f(W · X + B)

where W is the weight matrix, X is the vector of input values coming from the input layer, and B is the bias matrix.
The transfer (activation) function is the logistic sigmoid,

    f(z) = 1 / (1 + e^(-z))

where e is the base of the natural logarithm. In the field of machine learning (ML), the softmax function is frequently used to convert the score for each output into a probability:

    softmax(z_i) = e^(z_i) / Σ_j e^(z_j)

When there are more than two classes in a classification problem, a softmax output layer is commonly used. It is the final layer, and it produces a discrete probability distribution across the classes.
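The layer computations above (a sigmoid hidden activation followed by a softmax output) can be sketched in NumPy; the layer sizes and random weights below are illustrative only, not trained values.

```python
import numpy as np

def sigmoid(z):
    """Logistic transfer function f(z) = 1 / (1 + e^(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    """Turn a score vector into a probability distribution over classes."""
    e = np.exp(z - np.max(z))  # subtract the max for numerical stability
    return e / e.sum()

rng = np.random.default_rng(0)
x = rng.normal(size=4)            # toy input vector X
W = rng.normal(size=(3, 4))       # hidden-layer weight matrix W
b = np.zeros(3)                   # bias vector B

h = sigmoid(W @ x + b)            # hidden-layer values h = f(W.X + B)
W_out = rng.normal(size=(2, 3))   # softmax layer for two classes
p = softmax(W_out @ h)            # class probabilities
```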

V. RESULT AND DISCUSSION
The performance of various classification models, namely SVM, KNN, RF, Neural Network, Naïve Bayes, and the Stacked Deep Autoencoder, is analysed in the results. The EEG hand and leg motion dataset is cleaned first, before evaluating the effectiveness of these classification techniques.
The Butterworth filter is used to clean the dataset by eliminating excess noise. Features are then extracted from the cleaned dataset, and the dataset is divided into two parts: a training set and a testing set. The training and testing sets are split into 60:40, 70:30, and 80:20 ratios, respectively, for a more thorough evaluation.
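The ratio-wise comparison of the baseline classifiers can be sketched with scikit-learn. The data below are synthetic and the default hyperparameters are assumptions, so the accuracies will not match the figures reported in this paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 10))            # synthetic feature vectors
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # synthetic movement labels

results = {}
for test_frac in (0.4, 0.3, 0.2):  # the 60:40, 70:30 and 80:20 splits
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=test_frac, random_state=0)
    for name, clf in [("SVM", SVC()),
                      ("KNN", KNeighborsClassifier()),
                      ("RF", RandomForestClassifier(random_state=0)),
                      ("NB", GaussianNB())]:
        results[(name, test_frac)] = clf.fit(X_tr, y_tr).score(X_te, y_te)
```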

Figure 3 70:30 Training and Testing Ratio Mean Performance Measurement
A graph comparing the suggested classifier's performance with that of several existing classifiers is shown in Figure 3, where it can be seen that the stacked deep autoencoder outperforms them. The 70:30 ratio dataset was evaluated on five separate test samples, with the average value serving as the output. According to the results, the stacked deep autoencoder achieved approximately 90% accuracy and approximately 87% precision, the highest among all the classification models.

Figure 4 80:20 Training and Testing Ratio Mean Performance Measurement
A graph comparing the suggested classifier's performance with that of the other classification methods is shown in Figure 4, where it can be seen that the stacked deep autoencoder outperforms them. The 80:20 ratio dataset was evaluated on five separate test samples, with the average value serving as the output. According to the results, the stacked deep autoencoder attained approximately 91% accuracy and approximately 87% precision, the highest among all the classification methods.

VI. CONCLUSION
BCI is a gift for disabled persons, particularly those who cannot communicate through the normal pathways of speech and muscular control. Different BCI approaches require different methodologies for extracting the characteristics of pre-processed EEG signals and different monitoring equipment, depending on the specific application. This study examined the current state of BCI and its trends. Since no surgical implant is necessary, non-invasive modalities such as EEG, fMRI, and NIRS are more common and much easier to use.
Non-invasive BCI records brain signals with noise embedded in them, such as electromyographic signals generated by muscular activity, eye blinks, and so on. Filters can be used to remove these unwanted components. The proposed method is intended for BCI applications in which left-hand forward-backward motion, right-hand forward-backward motion, left-leg movement, and right-leg movement are categorised. These datasets are used to extract features for classification of movement types. The performance measures are classified and evaluated using the SVM, KNN, RF, Neural Network, Naïve Bayes, and Stacked Deep Autoencoder methodologies. After analysing the results, it was found that the stacked deep autoencoder outperforms all other classifiers in terms of accuracy.
BCI is a cutting-edge technology that is advancing daily with the help of artificial intelligence, and its development will require further research. Open topics include the identification of useful brain signals; signal recording methodologies; feature extraction, classification, and translation techniques; methods for managing short- and long-term variations between user and system to optimise performance; appropriate BCI applications; and clinical validation, dissemination, and support. Future work should therefore pursue more real-world deployments across all of the above areas.