BrainBot was invited to lecture on our brain-computer interface technology at the AAAI Artificial Intelligence Spring Symposium at Stanford!

Thank you for all the amazing feedback from the research community.

Historical background

The first recording of the human brain's electrical activity was made in 1924, in the clinical offices of the German psychiatrist Hans Berger. Dr. Berger's first EEG (electroencephalogram) recording showed convincingly that even a simple EEG reading could provide a window into the human mind. Unfortunately, this visionary technology was confined to academic and medical research for decades. Several practical difficulties stood in the way of a simple, useful brainwave-reading system for individuals: insufficient PC processing power, the advanced technical knowledge required to set up equipment and analyze neural data, and the extremely high cost of traditional brain-research tools.

Brain-computer interface
Applied research into brain-computer interface (BCI) technology began in earnest at the University of California, Los Angeles in the 1970s,1 and has since burgeoned into a broad, multidisciplinary field of its own. Among the many prominent successes of BCI research, a few striking ones include enabling quadriplegics to communicate, move a wheelchair and surf the Internet2, along with the partial restoration of sight to the blind3. By the end of the so-called "Decade of the Brain" (1990-2000), there were over a half-dozen research technologies capable of recording and analyzing activity in the human brain, and this wealth of information has accelerated our understanding of the mind. The EEG and its cousin MEG (magnetoencephalography) offer the best temporal resolution for capturing a picture of the brain in action. The stunning brain images that splash onto newspaper pages were once mostly PET (positron emission tomography) scans, but are now increasingly fMRI (functional magnetic resonance imaging) or NIRS (near-infrared spectroscopy) images. There are even clinical devices designed to be implanted directly into the brain in severe medical cases, e.g. ECoG (electrocorticography) for epilepsy and deep-brain stimulators for advanced Parkinson's disease.
However, all of these technologies present both advantages and disadvantages. PET and fMRI, while offering a good 3-D view of the brain, suffer from very poor temporal resolution. MEG, PET, and fMRI machines are all extremely bulky (weighing up to several tons) and prohibitively expensive (costing up to millions of US dollars). And obviously, ECoG and other surgical devices cannot be considered a viable option for the general public. By contrast, the only BCI technology that remains portable, non-invasive and comparatively cheap is the original EEG. As a corollary of the EEG's accepted utility and efficacy in cognitive neuroscience and clinical neurology, there is today a vast wealth of understanding of the electroencephalogram. This has led to increasingly sophisticated analyses of the human EEG, along with highly advanced research proofs-of-concept. In fact, volunteers wearing research EEG caps can now spell words, navigate robots, and much more with minimal training time, using only their minds.4 Unfortunately, academic EEG concentrated for years on bulky, wire-laden systems that require significant technical knowledge and time to set up, especially for a home practitioner. Traditional EEG consists of dozens of electrodes affixed, painstakingly, by a trained technician to the scalp of a willing participant, and sticky electrolyte pastes and solutions often have to be applied at each sensor contact. Even with these drawbacks, hospital- and research-grade systems can still easily cost more than US $10,000 per device.
The commercial BCI industry
The birth of the commercial BCI industry in 2009 changed the EEG consumer dynamic. By virtue of large-scale manufacturing of EEG-specific hardware, there are currently at least three major companies able to market personal EEG devices for below $300, viz. NeuroSky, Emotiv and OCZ. This represents the initial foray into what is predicted to become a proliferation of consumer-oriented neurofeedback tools.5 This consumer orientation has reshaped the standard paradigm for EEG acquisition and analysis by focusing on user comfort and ease of use. The modern EEG system, in contrast to its academic forerunner, is simple to wear and uses exclusively dry, active sensor electrodes that need no sticky paste or electrolyte gel. Rather than covering the scalp with electrodes (as is common for research purposes), consumer EEG focuses on the minimum number of sensors needed for the user's specific purpose.
Meditation and neuroscience
Broadly, three gross mental states have been studied in clinical and research EEG settings: waking, dream (REM) sleep, and deep (NREM) sleep. These three states are also described in the modern and ancient meditative texts of experienced practitioners of the "contemplative sciences". Many traditions also posit the existence of a "fourth" state of consciousness, underlying waking, dream and deep sleep, called "turiya" in the Himalayan context.

"There is another Samadhi which is attained by the constant practice of cessation of all mental activity, in which the mind retains only the unmanifested impressions." -the Yoga Sutras of Patanjali, ~200 B.C.

Neuroscience today can distinguish the three states of waking, REM and NREM sleep using only a single EEG electrode.7 The question that drove the formation of BrainBot is whether state detection can learn to recognize the "fourth" state of mind as well: the state of meditation. The answer, found through our research in Asia and elsewhere, is that it is in fact possible to recognize various meditation types using EEG.
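To give a flavor of how single-electrode state detection works, the classical approach compares the relative power of the standard EEG frequency bands (delta, theta, alpha, beta). The Python sketch below is an illustrative toy, not BrainBot's actual algorithm: the thresholds, band edges and synthetic test signal are assumptions chosen purely for demonstration.

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Power of the signal within a frequency band, computed via the FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= f_lo) & (freqs < f_hi)
    return psd[mask].sum()

def classify_state(signal, fs):
    """Very crude state guess from classical EEG band ratios (toy thresholds):

    delta (0.5-4 Hz) dominant -> deep (NREM) sleep
    theta (4-8 Hz) dominant   -> REM-like / drowsy
    otherwise                 -> waking
    """
    delta = band_power(signal, fs, 0.5, 4)
    theta = band_power(signal, fs, 4, 8)
    alpha = band_power(signal, fs, 8, 13)
    beta = band_power(signal, fs, 13, 30)
    total = delta + theta + alpha + beta
    if delta / total > 0.5:
        return "deep sleep"
    if theta / total > 0.4:
        return "REM/drowsy"
    return "waking"

# Synthetic one-channel "EEG": a pure 2 Hz (delta-band) oscillation.
fs = 128                          # samples per second
t = np.arange(0, 4, 1.0 / fs)     # 4 seconds of signal
deep = np.sin(2 * np.pi * 2 * t)
print(classify_state(deep, fs))   # delta-dominant -> "deep sleep"
```

Real sleep staging uses far richer features and validated scoring rules; the point here is only that one channel of voltage, decomposed by frequency, already carries state information.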
The role of feedback
The vast range of meditation practices and styles can be visualized as a large "internal map", and without a teacher's guidance a student finds it difficult to know when he or she is progressing toward the "destination" on the map, or sometimes even whether he or she is in the right neighborhood! Of course, it is possible to teach oneself meditation, just as it is possible to teach oneself science or engineering; the role of previous generations' experience is simply to act as a foundation for faster and easier learning. Therefore, as has been advised for many thousands of years, the best way to learn meditation is still directly from an accomplished guru. Essentially, an accomplished teacher acts as a tour guide on this "internal map" of your consciousness. Because these men and women have spent a great deal of time familiarizing themselves with their own internal maps, it is easier for them to guide you to those same destinations with gentle feedback. However, if a master is very far away, perhaps even on a different continent, direct guidance along this consciousness map is very difficult or impossible. Before neurofeedback and mental-state detection technologies, the only option was to travel personally to seek out a master's teaching. Thanks to the extensive mental-state neurofeedback research conducted by BrainBot, in close collaboration with meditation practitioners around the world, it is now possible to meditate in the comfort of your own home with the guidance of authentic masters from India, Nepal, Tibet and elsewhere. Today, the stars are proverbially aligned for meditation neurotechnology: for the individual, and from the masters.
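As a rough illustration of how "gentle feedback" can be automated, here is a minimal sketch of one common neurofeedback design, in which a guidance tone grows quieter as the user's computed relaxation score rises, so that silence itself becomes the reward. The function name, the 0-to-1 score, and the linear mapping are hypothetical illustrations, not a description of BrainBot's product.

```python
def feedback_volume(relaxation, floor=0.1):
    """Map a relaxation score in [0, 1] to a guidance-tone volume.

    Higher relaxation -> quieter tone. A small volume floor keeps the
    tone audible so the user never loses the feedback channel entirely.
    """
    relaxation = min(max(relaxation, 0.0), 1.0)  # clamp score to [0, 1]
    return floor + (1.0 - floor) * (1.0 - relaxation)

print(feedback_volume(0.0))  # fully tense -> 1.0 (loud)
print(feedback_volume(1.0))  # fully relaxed -> 0.1 (near silence)
```

In a complete system this mapping would sit inside a loop that reads an EEG epoch, computes the state score, and updates the audio output several times per second.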

1- J. Vidal, "Toward Direct Brain-Computer Communication", Annual Review of Biophysics and Bioengineering, L.J. Mullins, Ed., Annual Reviews, Inc., Palo Alto, Vol. 2, 1973, pp. 157-180.
2- M. Bensch, A.A. Karim, J. Mellinger et al., "Nessi: an EEG-controlled web browser for severely paralyzed patients", Computational Intelligence and Neuroscience, 2007.
3- L.B. Merabet, J.F. Rizzo, A. Amedi et al., "What blindness can tell us about seeing again: merging neuroplasticity and neuroprostheses", Nature Reviews Neuroscience, 2005.
4- C. Guger, S. Daban, E. Sellers et al., "How many people are able to control a P300-based brain-computer interface (BCI)?", Neuroscience Letters, Vol. 462, Issue 1, September 2009, pp. 94-98.
5- NeuroInsights, Inc., "The Neurotechnology Industry 2009 Report", white paper.
7- Dr. Philip Low, Stanford School of Medicine and MIT Media Lab,