Master's Thesis

Machine Learning Using a Brain-Computer Interface (BCI) System

Control systems engineers have recently been drawn to the development of hands-free, speech-free interfaces for controlling computers and robotic devices. The primary users of such a controller are people with progressive nervous system diseases or other forms of paralysis that severely restrict movement of the limbs. Despite these physical limitations, their cognitive and sensory functions remain intact. One way to restore mobility and autonomy to the paralyzed is therefore to build a controller driven by their brain signals. A brain-computer interface (BCI) uses brain signals as input to a controller that drives a robotic arm or transporter. By linking a specific mental task (e.g., imagining squeezing the right hand) to a robot command (e.g., making a right turn), users can navigate an electrically powered wheelchair or robotic aid on their own.

While there is potential to create a wide range of controller commands, brainwaves come with their own set of challenges. The signals are non-stationary and non-linear: they vary constantly and are extremely difficult to model. In addition, noise from involuntary activity (e.g., blinking and facial muscle activation) may bury the signals associated with the mental task. To overcome these obstacles, control systems engineers add a signal preprocessing step and a machine learning stage to these controllers. Selecting the right preprocessor, choosing a suitable machine learning algorithm, and training the user to perform clear mental tasks together produce an accurate and responsive BCI controller.

The main goal of this project is to design a six-class hybrid BCI controller for a semi-autonomous mobile robotic arm. The controller operates the robotic base and the arm separately. A set of EEG motor imagery signals for the hands and feet serves two primary functions: navigating the robot base through the environment and moving a cursor on the robot's camera screen to highlight which object to grab. In addition, a jaw clench, an electromyogram (EMG) signal, switches between commanding the base and commanding the arm.

Designing such a controller for multiple users requires hardware to record and stream brainwaves together with software to preprocess the signals and train a machine learning algorithm. A modified 14-channel commercial-grade non-invasive electroencephalogram (EEG) headset from Emotiv (the EPOC) streamed the brainwaves of three healthy males (ages 22-27) to the computer. Each subject recorded five sessions, each with four tests, of their responses to OpenViBE's stimulus presentation program. The recordings were then imported into EEGLAB, an open-source MATLAB plug-in, where the signals were preprocessed with filters and Independent Component Analysis (ICA). EEGLAB was also used to generate event-related potential (ERP) plots and topographical maps of each subject's brain activity. Across all plots, the subjects shared the same behavior in electrodes C1, C3, C5, C2, C4, and C6. For comparison, two machine learning algorithms, linear discriminant analysis (LDA) and the relevance vector machine (RVM), were chosen to process and classify the subjects' recordings. The performance of each classifier was recorded for 2-class, 3-class, 5-class, and 6-class controllers.
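The preprocessing stage above was carried out in EEGLAB; the sketch below shows an equivalent pipeline in Python using MNE-Python, purely as an illustration and not the thesis code. The Emotiv EPOC channel names are standard, but the 128 Hz sampling rate, the 8-30 Hz motor-imagery band, the 60 s duration, and the synthetic data standing in for a recorded session are all assumptions.

```python
# Illustrative sketch (not the thesis code): band-pass filtering and ICA-based
# artifact separation on synthetic 14-channel EEG, mirroring the EEGLAB steps
# described above. Sampling rate, filter band, and data are assumptions.
import numpy as np
import mne

sfreq = 128.0  # assumed Emotiv sampling rate (Hz)
ch_names = ["AF3", "F7", "F3", "FC5", "T7", "P7", "O1",
            "O2", "P8", "T8", "FC6", "F4", "F8", "AF4"]
info = mne.create_info(ch_names, sfreq, ch_types="eeg")

# Stand-in for one recorded session: 60 s of random data in volts.
data = np.random.randn(len(ch_names), int(60 * sfreq)) * 1e-5
raw = mne.io.RawArray(data, info)

# Band-pass filter to the mu/beta band commonly used for motor imagery (assumed 8-30 Hz).
raw.filter(l_freq=8.0, h_freq=30.0)

# Independent Component Analysis to separate artifacts (e.g., blinks, jaw EMG)
# from the motor-imagery-related sources.
ica = mne.preprocessing.ICA(n_components=14, random_state=0)
ica.fit(raw)
ica.exclude = []            # e.g., [0, 3] after visual inspection of components
raw_clean = ica.apply(raw.copy())
print(raw_clean)
```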
RVM outperformed LDA on the multi-class controllers. For the 5-class controller, the error rates were 45% for subject S01, 30.8% for subject S02, and 29.2% for subject S03. With the proper electrodes and machine learning algorithm identified, the final 6-class controller was built with a common spatial pattern (CSP) filter and an RVM classifier. The accuracy of the controller decreased as the number of classes increased. The 6-class BCI controller was integrated into a virtual model of the semi-autonomous robotic arm, where it successfully demonstrated the ability to separately move the base, move the cursor on the robot's camera screen, and trigger the action to pick up or drop off an object.
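As a rough illustration of the CSP-plus-classifier stage, the sketch below applies MNE-Python's CSP to synthetic epoched trials and cross-validates a classifier on the resulting log-variance features. The epoch dimensions, trial count, and two-class labels are assumptions, and LDA stands in for the RVM used in the thesis, since scikit-learn does not ship an RVM implementation.

```python
# Illustrative sketch (not the thesis code): CSP spatial filtering of epoched
# motor-imagery trials followed by classification. Synthetic epochs stand in for
# the recorded sessions; LDA is substituted for the thesis's RVM to show the
# pipeline shape only.
import numpy as np
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 80, 14, 256        # assumed epoch dimensions
X = rng.standard_normal((n_trials, n_channels, n_times))
y = rng.integers(0, 2, n_trials)                   # two motor-imagery classes (e.g., hands vs. feet)

# CSP learns spatial filters that maximize the variance difference between classes;
# the log-variance of the filtered signals serves as the feature vector.
clf = Pipeline([
    ("csp", CSP(n_components=4, log=True)),
    ("lda", LinearDiscriminantAnalysis()),
])

scores = cross_val_score(clf, X, y, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")
```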
