Submitted to: Prachi Parashar, Asst. Prof.
Submitted by: Rahul Sharma, Enrollment no.
Acknowledgement

Any accomplishment requires the effort of many people, and this work is no different. This seminar was difficult for numerous reasons, some of which were beyond my control. At times I was like a rudderless boat, not knowing what to do next. It was the timely guidance of my teachers that saw me through all these odds. I am very grateful to them for their inspiration, encouragement and guidance in all phases of the endeavor.
It is my great pleasure to thank Dr. Soni Changlani, HOD of Electronics and Communication, for her constant encouragement and valuable advice on this seminar. I also wish to express my gratitude towards all the other staff members for their kind help.
Finally, I would like to thank Prof. Prachi Parashar, who contributed tremendously to this seminar, both directly and indirectly; gratitude from the depths of my heart is due to her. Regardless of source, I also wish to express my gratitude to all who may have contributed to this work, even anonymously.

Introduction

The idea of interfacing directly with the human brain has captured the imagination of humankind in the form of ancient myths and modern science fiction stories.
However, it is only recently that advances in cognitive neuroscience and brain imaging technologies have started to provide us with the ability to interface directly with the human brain. In these systems, users explicitly manipulate their brain activity instead of using motor movements to produce signals that can be used to control computers or communication devices. The impact of this work is extremely high, especially to those who suffer from devastating neuromuscular injuries and neurodegenerative diseases such as amyotrophic lateral sclerosis, which eventually strips individuals of voluntary muscular activity while leaving cognitive function intact.
This ability is made possible through the use of sensors that can monitor some of the physical processes that occur within the brain that correspond with certain forms of thought.
In 1924, Hans Berger was the first to record human brain activity by means of electroencephalography (EEG). Berger was able to identify oscillatory activity in the brain by analyzing EEG traces.
One wave he identified was the alpha wave (8-13 Hz), also known as Berger's wave. Berger's first recording device was very rudimentary: he inserted silver wires under the scalps of his patients. These were later replaced by silver foils attached to the patients' heads by rubber bandages.
Berger connected these sensors to a Lippmann capillary electrometer, with disappointing results. More sophisticated measuring devices, such as the Siemens double-coil recording galvanometer, which displayed electric voltages as small as one ten thousandth of a volt, led to success.
Berger analyzed the interrelation of alterations in his EEG wave diagrams with brain diseases. Decades later, the papers published by Jacques Vidal on direct brain-computer communication marked the first appearance of the expression brain-computer interface in the scientific literature.
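Berger's alpha band is still the usual starting point for quantitative EEG analysis. As a rough illustration, the following Python sketch estimates alpha-band power in a synthetic signal using Welch's method; the sampling rate, the synthetic 10 Hz rhythm, and the band edges are illustrative assumptions, not Berger's apparatus.

import numpy as np
from scipy.signal import welch

fs = 256                      # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)  # ten seconds of synthetic signal

# Synthetic "EEG": a 10 Hz alpha rhythm buried in noise.
rng = np.random.default_rng(0)
eeg = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0.0, 1.0, t.size)

# Power spectral density via Welch's method.
freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)

# Sum the PSD over Berger's alpha band (8-13 Hz).
df = freqs[1] - freqs[0]
alpha = (freqs >= 8) & (freqs <= 13)
alpha_power = psd[alpha].sum() * df
total_power = psd.sum() * df

print(f"Alpha-band power: {alpha_power:.2f} "
      f"({100 * alpha_power / total_power:.1f}% of total)")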
The brain is not a single, undifferentiated organ. Rather, it is a complex assemblage of competing sub-systems, each highly specialized for particular tasks (Carey). By studying the effects of brain injuries and, more recently, by using new brain imaging technologies, neuroscientists have built detailed topographical maps associating different parts of the physical brain with distinct cognitive functions.
The brain can be roughly divided into two main parts: the cerebral cortex and the sub-cortical regions. The cerebral cortex is the largest and most complex part of the brain in humans, and it is usually the part of the brain people notice in pictures.
This is the region on which current BCI work has largely focused. Different functions are localized to different areas of the cortex: for instance, most language functions lie primarily in the left hemisphere, while the right hemisphere controls many abstract and spatial reasoning skills. Also, most motor and sensory signals to and from the brain cross hemispheres, meaning that the right brain senses and controls the left side of the body and vice versa.
Sixtus Okwuoha

Acknowledgement

The satisfaction that accompanies the successful completion of any task would be incomplete without mention of the persons whose ceaseless cooperation made it possible, and whose constant guidance and encouragement crown all efforts with success. I am much grateful to my project supervisor, Mrs.
Ijeoma Emeagi. My sincere gratitude goes to my parents, Mr. Maurice Okwuoha and Mrs. Benadetth Okwuoha, for their financial and moral support of my academic and social well-being in this institution.
Although the dot's gyrations are directed by a computer, the machine is only carrying out the orders of the test subject. Though computers can solve extraordinarily complex problems with incredible speed, the information they digest is fed to them by such slow, cumbersome tools as typewriter keyboards or punched tapes. The key to Pinneo's scheme is the electroencephalograph, a device used by medical researchers to pick up electrical currents from various parts of the brain.
If we could learn to identify brain waves generated by specific thoughts or commands, we might be able to teach the same skill to a computer. The machine might even be able to react to those commands by, say, moving a dot across a TV screen.
So far the SRI computer has been taught to recognize seven different commands: up, down, left, right, slow, fast and stop.

People constantly express their mental states through facial expressions, vocal nuances and gestures. This is true even when they are interacting with machines. The ability to attribute mental states to others from their behavior, and to use that knowledge to guide our own actions and predict those of others, is known as theory of mind, or mind-reading. Computers, by contrast, are essentially mind-blind: a computer may wait indefinitely for input from a user who is no longer there, or decide to do irrelevant tasks while a user is frantically working towards an imminent deadline.
As a result, existing computer technologies often frustrate the user, have little persuasive power, and cannot initiate interactions with the user. Even when they do take the initiative, like the now-retired Microsoft Paperclip, they are often misguided and irrelevant, and simply frustrate the user.
What is a mind-reading computer?

A computational model of mind-reading

Drawing inspiration from psychology, computer vision and machine learning, the team in the Computer Laboratory at the University of Cambridge has developed mind-reading machines: computers that implement a computational model of mind-reading to infer the mental states of people from their facial signals.
The goal is to enhance human-computer interaction through empathic responses, to improve the productivity of the user, and to enable applications to initiate interactions with and on behalf of the user, without waiting for explicit input from that user. The system determines the oxygen level and blood flow around the subject's brain, and infers what the user is thinking from his or her facial expressions.
In a complex marriage of medical and computer technology, Lawrence Pinneo, a neurophysiologist and electronics engineer at the Stanford Research Institute in Menlo Park, Calif., has developed a system that allows a computer to respond to human thought. Experiments have shown that a test subject can manipulate the position of a dot on a television screen simply by thinking, or willing, what the movement of the dot should be. According to Peter, a key element in Pinneo's scheme of thought translation is a device which can bridge the gap between the electrical signals generated by the human brain and the signal inputs that a computer needs for analysis.
For this function he chose the electroencephalograph, a device used by medical researchers to measure and record the electrical activity of the brain. Utilizing the signals provided by the electroencephalograph, the computer is programmed, by means of sophisticated software techniques, to recognize and identify brain-wave patterns generated by specific thoughts or commands.
So far the SRI computer is capable of recognizing seven different commands: up, down, left, right, slow, fast, and stop. Brain waves, however, like speech patterns, vary in some detail from person to person, often fooling the computer. To circumvent this problem, the computer's memory stores a library of command patterns against which the thoughts of a given test subject are compared.
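The library-matching idea can be sketched in a few lines of Python. The feature vectors below are hypothetical stand-ins for the stored brain-wave patterns (the article does not describe SRI's actual signal representation); each incoming sample is matched to the command template it correlates with best.

import numpy as np

# Hypothetical library of stored command templates: one feature
# vector per command (the real EEG features are not described).
rng = np.random.default_rng(42)
commands = ["up", "down", "left", "right", "slow", "fast", "stop"]
library = {cmd: rng.normal(size=16) for cmd in commands}

def classify(sample):
    """Return the command whose template correlates best with the sample."""
    def corr(a, b):
        return np.corrcoef(a, b)[0, 1]
    return max(library, key=lambda cmd: corr(library[cmd], sample))

# A noisy observation of the "left" template should still match "left".
observation = library["left"] + rng.normal(scale=0.3, size=16)
print(classify(observation))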
At this embryonic stage in its development, Pinneo's system is already capable of identifying the thought commands of 25 different people with an accuracy of 60 percent. Pinneo is convinced that with further research these results can be vastly improved. Speculating upon future developments, Pinneo suggests that eventually technology may well be sufficiently advanced to reverse the thought process and feed information from a computer into the human brain.
Returning to the Cambridge team's computational model of mind-reading, the model works as follows.
Prior knowledge of how particular mental states are expressed in the face is combined with analysis of facial expressions and head gestures occurring in real time. The model represents these at different granularities, starting with face and head movements and building them up in time and in space to form a clearer picture of what mental state is being represented. Software from Nevenvision identifies 24 feature points on the face and tracks them in real time. Movement, shape and colour are then analyzed to identify gestures such as a smile or raised eyebrows.
Combinations of these occurring over time indicate mental states. For example, a combination of a head nod, with a smile and eyebrows raised might mean interest. The relationship between observable head and facial displays and the corresponding hidden mental states over time is modeled using Dynamic Bayesian Networks.
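The seminar does not give the network structure, but the simplest dynamic Bayesian network over a single hidden variable is a hidden Markov model. The Python sketch below runs the standard forward algorithm to track a belief over two hypothetical mental states from a sequence of observed gestures; every probability in it is an illustrative assumption, not a parameter of the Cambridge model.

import numpy as np

# Hypothetical hidden mental states and observable facial displays.
states = ["interested", "bored"]
gestures = ["head_nod", "smile", "eyebrow_raise", "head_turn_away"]

# Illustrative parameters (assumptions, not the published model):
prior = np.array([0.5, 0.5])
transition = np.array([[0.8, 0.2],   # interested -> interested/bored
                       [0.3, 0.7]])  # bored -> interested/bored
# P(gesture | state): one row per state, one column per gesture.
emission = np.array([[0.35, 0.35, 0.25, 0.05],
                     [0.05, 0.10, 0.15, 0.70]])

def forward(observed):
    """Forward algorithm: belief over hidden states given the gestures so far."""
    belief = prior.copy()
    for g in observed:
        col = gestures.index(g)
        belief = (transition.T @ belief) * emission[:, col]  # predict, then weight
        belief /= belief.sum()                               # renormalize
    return belief

# A head nod, then a smile with raised eyebrows, suggests interest.
belief = forward(["head_nod", "smile", "eyebrow_raise"])
print(dict(zip(states, belief.round(3))))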
Why mind reading?

Monitoring a car driver

The mind-reading computer system presents information about your mental state as easily as a keyboard and mouse present text and commands. Imagine a future where we are surrounded by mobile phones, cars and online services that can read our minds and react to our moods. How would that change our use of technology and our lives?
We are working with a major car manufacturer to implement this system in cars to detect driver mental states such as drowsiness, distraction and anger. Current projects in Cambridge are considering further inputs such as body posture and gestures to improve the inference. We can then use the same models to control the animation of cartoon avatars.
We are also looking at the use of mind-reading to support on-line shopping and learning systems. The mind-reading computer system may also be used to monitor and suggest improvements in human-human interaction.

How does it work?

Futuristic headband

The mind reading actually involves measuring the volume and oxygen level of the blood around the subject's brain, using a technology called functional near-infrared spectroscopy (fNIRS).
The user wears a sort of futuristic headband that sends light in that spectrum into the tissues of the head where it is absorbed by active, blood-filled tissues. The headband then measures how much light was not absorbed, letting the computer gauge the metabolic demands that the brain is making. The results are often compared to an MRI, but can be gathered with lightweight, noninvasive equipment. Wearing the functional near-infrared spectroscopy sensor, experimental subjects were asked to count the number of squares on a rotating onscreen cube and to perform other tasks.
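The headband's measurement is usually modeled with the modified Beer-Lambert law: the change in optical attenuation at each wavelength is a linear function of the changes in oxy- and deoxy-hemoglobin concentration. The sketch below inverts that relation for two wavelengths; the extinction coefficients, path length, and attenuation readings are placeholder values for illustration, not the calibration of any real device.

import numpy as np

# Modified Beer-Lambert law: delta_OD = (eps @ delta_c) * d * DPF,
# solved here for the hemoglobin concentration changes delta_c.

# Placeholder extinction coefficients [HbO, HbR] at two wavelengths
# (values are illustrative assumptions, not published constants).
eps = np.array([[1.49, 3.84],   # ~690 nm: deoxy-Hb absorbs more
                [2.53, 1.80]])  # ~830 nm: oxy-Hb absorbs more
d = 3.0    # assumed source-detector separation, cm
dpf = 6.0  # assumed differential pathlength factor

def hemoglobin_change(delta_od):
    """Invert the modified Beer-Lambert law for [delta_HbO, delta_HbR]."""
    return np.linalg.solve(eps * d * dpf, delta_od)

# Hypothetical attenuation changes measured by the headband.
delta_od = np.array([0.010, 0.018])
d_hbo, d_hbr = hemoglobin_change(delta_od)
print(f"dHbO = {d_hbo:+.5f}, dHbR = {d_hbr:+.5f}")

A rise in oxygenated hemoglobin with a fall in deoxygenated hemoglobin is the typical signature of increased metabolic demand in the underlying tissue.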
A computer program which can read silently spoken words by analyzing nerve signals in our mouths and throats has been developed by NASA. Preliminary results show that using button-sized sensors, which attach under the chin and on the side of the Adam's apple, it is possible to pick up and recognize nerve signals and patterns from the tongue and vocal cords that correspond to specific words.
According to the researchers, just the slightest movement in the voice box and tongue is all it needs to work. Participants hooked up to the sensors silently said the words to themselves, and the software correctly picked up the signals 92 per cent of the time. The researchers then put the letters of the alphabet into a matrix, with each column and row labeled with a single-digit number, so that each letter was represented by a unique pair of digits.
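To make the coordinate scheme concrete, here is a small Python sketch of such a matrix, assuming a 5 x 6 layout with zero-based single-digit labels (the article does not specify the exact arrangement): each letter is addressed by silently "saying" its row digit and then its column digit.

import string

# Lay the 26 letters out in a 5 x 6 grid; rows and columns are
# labeled with single-digit numbers (the layout is an assumption).
COLS = 6
grid = {letter: (i // COLS, i % COLS)
        for i, letter in enumerate(string.ascii_uppercase)}

def encode(word):
    """Spell a word as a sequence of (row, column) digit pairs."""
    return [grid[ch] for ch in word.upper()]

# Silently spelling "NASA" becomes a sequence of digit pairs.
print(encode("NASA"))  # [(2, 1), (0, 0), (3, 0), (0, 0)]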