14 Sept 2020

The Future of Artificial Intelligence and Cybernetics

Coventry University, Coventry, UK

If you could improve yourself by having a chip implanted in your brain to extend your nervous system over the Internet, 'update yourself' and partially become a machine, would you? What Kevin Warwick, professor of cybernetics at the University of Reading, proposes may sound like science fiction, but it is not: he has had several chips implanted, making him a cyborg: half man, half machine. In this fascinating article, Warwick explains the various steps that have been taken to grow neurons in a laboratory that can then be used to control robots, and how chips implanted in our brains can also move muscles in our body at will. It won't be long before we also have robots with brains created from human neurons that have the same types of skills as human brains. Should they, then, have the same rights as us?
INTRODUCTION

Science fiction has, for many years, looked to a future in which robots are intelligent and cyborgs — human/machine amalgams — are commonplace: The Terminator, The Matrix, Blade Runner and I, Robot are all good examples of this. However, until the last decade any consideration of what this might actually mean in the future real world was not necessary because it was all science fiction and not scientific reality. Now, however, science has not only done a catching-up exercise but, in bringing about some of the ideas thrown up by science fiction, it has introduced practicalities that the original story lines did not appear to extend to (and in some cases have still not extended to).

What we consider here are several different experiments in linking biology and technology together in a cybernetic fashion, essentially ultimately combining humans and machines in a relatively permanent merger. Key to this is that it is the overall final system that is important. Where a brain is involved, which surely it is, it must not be seen as a stand-alone entity but rather as part of an overall system, adapting to the system’s needs: the overall combined cybernetic creature is the system of importance.

Each experiment is described in its own section. Whilst there is a distinct overlap between the sections, they each throw up individual considerations. Following a description of each investigation, some pertinent issues on the topic are therefore discussed. Points have been raised with a view to near-term future technical advances and what these might mean in a practical scenario. The intention here has not been to present a fully packaged, conclusive document; the aim has rather been to open up the range of research being carried out, to see what is actually involved and to look at some of its implications.
BIOLOGICAL BRAINS IN A ROBOT BODY

We start by taking a look at an area that might not immediately be at all familiar to the reader. Initially when one thinks of linking a brain with technology then it is probably in terms of a brain already functioning and settled within its own body — could there possibly be any other way? Well in fact there can be! Here we consider the possibility of a fresh merger where a brain is firstly grown and then given its own body in which to operate.

When one first thinks of a robot it may be a little wheeled device that springs to mind (Bekey 2005) or perhaps a metallic head that looks roughly human-like (Brooks 2002). Whatever the physical appearance, our thoughts tend to be that the robot might be operated remotely by a human, as in the case of a bomb disposal robot, or it may be controlled by a simple computer programme, or may even be able to learn with a microprocessor as its technological brain. In all these cases we regard the robot simply as a machine. But what if the robot has a biological brain made up of brain cells (neurons), possibly even human neurons?

Neurons cultured/grown under laboratory conditions on an array of non-invasive electrodes provide an attractive alternative with which to realise a new form of robot controller. An experimental control platform, essentially a robot body, can move around in a defined area purely under the control of such a network/brain and the effects of the brain, controlling the body, can be witnessed. Of course this is extremely interesting from a robotics perspective but it also opens up a new approach to the study of the development of the brain itself because of its sensory-motor embodiment. Investigations can in this way be carried out into memory formation and reward/punishment scenarios — the elements that underpin the basic functioning of a brain.

Growing networks of brain cells in vitro (around 100 000 to 150 000 at present) typically commences by separating neurons obtained from foetal rodent cortical tissue. They are then grown (cultured) in a specialised chamber, in which they can be provided with suitable environmental conditions (e.g. appropriate temperature) and nutrients. An array of electrodes embedded in the base of the chamber (a multielectrode array, MEA) acts as a bidirectional electrical interface to/from the culture. This enables electrical signals to be supplied to stimulate the culture and also for recordings to be taken as outputs from the culture. The neurons in such cultures spontaneously connect, communicate and develop within a few weeks, giving useful responses for typically three months at present. To all intents and purposes, it is rather like a brain in a jar!

In fact the brain is grown in a glass specimen chamber lined with a flat ‘8×8’ MEA which can be used for real-time recordings (see Figure 1). In this way, it is possible to separate the firings of small groups of neurons by monitoring the output signals on the electrodes. Thereby a picture of the global activity of the entire network can be formed. It is also possible to electrically stimulate the culture via any of the electrodes to induce neural activity. The MEA therefore forms a bidirectional interface with the cultured neurons (Chiappalone et al. 2007; DeMarse et al. 2001).
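
To make the recording side of this interface concrete, the sketch below (a minimal illustration in Python, not the laboratory software actually used) shows the standard idea of threshold-based spike detection on a single MEA channel: the noise level is estimated robustly and events are counted wherever the recorded voltage crosses a multiple of it. The sampling rate, threshold factor and simulated data are all assumptions for the example.

```python
import numpy as np

def detect_spikes(trace, fs=25_000, k=5.0):
    """Return spike times (seconds) where |trace| exceeds k times the estimated noise level."""
    noise = np.median(np.abs(trace)) / 0.6745        # robust estimate of the noise std
    threshold = k * noise
    above = np.abs(trace) > threshold
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1   # rising edges only, one count per event
    return onsets / fs

# Example with simulated data: baseline noise plus three injected spike peaks
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 5e-6, 25_000)            # 1 s of ~5 uV noise at 25 kHz
trace[[3_000, 12_500, 20_000]] += 60e-6          # three artificial spikes
print(detect_spikes(trace))                      # approximately [0.12, 0.5, 0.8]
```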

The brain can then be coupled to its physical robot body (Warwick et al. 2010). Sensory data fed back from the robot is subsequently delivered to the culture, thereby closing the robot–culture loop. Thus, the processing of signals can be broken down into two discrete sections: a) “culture to robot”, in which live neuronal activity is used as the decision-making mechanism for robot control; and b) “robot to culture”, which involves a mapping process from the robot’s sensor readings to stimulation of the culture.

The actual number of neurons in a brain depends on natural density variations in seeding the culture in the first place. The electrochemical activity of the culture is sampled and this is used as input to the robot’s wheels. Meanwhile the robot’s (ultrasonic) sensor readings are converted into stimulation signals received by the culture, thereby closing the loop.

Once the brain has grown for several days, which involves the formation of some elementary neural connections, an existing neuronal pathway through the culture is identified by searching for strong relationships between pairs of electrodes. Such pairs are defined as those electrode combinations in which neurons close to one (recording) electrode respond to stimulation applied at the other electrode more than 60 percent of the time, and respond no more than 20 percent of the time to stimulation on any other electrode.

A rough input–output response map of the culture can therefore be created by cycling through all the electrodes in turn. In this way, a suitable input/output electrode pair can be chosen in order to provide an initial decision-making pathway for the robot. This pathway is then employed to control the robot body: for example, when the ultrasonic sensor detects an object (possibly a wall), the desired response is for the robot to turn away from it in order to keep moving.
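
As a hedged illustration of how such a pair might be chosen from the response map, the sketch below assumes a matrix `resp` in which `resp[i, j]` holds the fraction of stimuli on electrode i that evoked activity near electrode j (built by cycling through the electrodes as described above), and applies the 60 percent / 20 percent criterion; the matrix name, layout and toy values are assumptions for the example.

```python
import numpy as np

def choose_pathway(resp, strong=0.60, weak=0.20):
    """Return a (stimulating electrode, recording electrode) pair meeting the criterion, or None."""
    n = resp.shape[0]
    for stim in range(n):
        for rec in range(n):
            if stim == rec:
                continue
            others = np.delete(resp[:, rec], [stim, rec])   # responses to stimulation on all other electrodes
            if resp[stim, rec] > strong and np.all(others <= weak):
                return stim, rec
    return None

# Toy 4-electrode map in which electrode 2 reliably drives electrode 0
resp = np.array([[0.0, 0.1, 0.1, 0.0],
                 [0.1, 0.0, 0.2, 0.1],
                 [0.9, 0.1, 0.0, 0.1],
                 [0.1, 0.0, 0.1, 0.0]])
print(choose_pathway(resp))   # -> (2, 0)
```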

For simple experimentation purposes at this time, the intention is for the robot (which can be seen in Figure 2) to follow a forward path until it reaches a wall, at which point the front sonar value decreases below a threshold, triggering a stimulating pulse. If the responding/output electrode registers activity, the robot turns to avoid the wall. In experiments, the robot turns spontaneously whenever activity is registered on the response electrode. The most relevant result is the occurrence of the chain of events: wall detection–stimulation–response. From a neurological perspective, it is of course also interesting to speculate on why there is activity on the response electrode when no stimulating pulse has been applied.
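
The chain of events just described can be summarised in a short, hypothetical control-loop sketch. Here `robot` and `mea` are placeholder objects rather than a real hardware API, and the threshold and wheel-speed values are illustrative only.

```python
SONAR_THRESHOLD_CM = 30   # illustrative wall-detection distance

def control_step(robot, mea, stim_electrode, resp_electrode):
    if robot.read_front_sonar() < SONAR_THRESHOLD_CM:   # wall detected
        mea.stimulate(stim_electrode)                    # deliver a stimulating pulse to the culture
    if mea.activity(resp_electrode):                     # culture output interpreted as "turn"
        robot.set_wheel_speeds(left=0.2, right=-0.2)     # turn away from the wall
    else:
        robot.set_wheel_speeds(left=0.3, right=0.3)      # keep moving forward
```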

As an overall control element for direction and wall avoidance, the cultured brain acts as the sole decision-making entity within the overall feedback loop. Clearly one important aspect then involves neural pathway changes in the culture, with respect to time, between the stimulating and recording electrodes.

In terms of research, learning and memory investigations are generally at an early stage. However, the robot can be clearly seen to improve its performance over time in terms of its wall avoidance ability, in the sense that neuronal pathways that bring about a satisfactory action tend to strengthen purely through the process of being habitually performed: learning due to habit.

However, the number of variables involved is considerable and the plasticity process, which occurs over quite a period of time, is (most likely) dependent on factors such as initial seeding and growth near electrodes, as well as environmental transients such as temperature and humidity. Learning by reinforcement — rewarding good actions and punishing bad — remains at the stage of investigative research at this time.

On many occasions, the culture responds as expected. On other occasions it does not, and in some cases it provides a motor signal when it is not expected to do so. But does it “intentionally” make a different decision to the one we would have expected? We cannot tell but merely guess.

In terms of robotics, it has been shown by this research that a robot can successfully have a biological brain with which to make its “decisions”. The 100 000–150 000 neuron size is merely due to the present-day limitations of the experimentation described. Indeed, three-dimensional structures are already being investigated. Increasing the complexity from two dimensions to three realises a figure of approximately 30 million neurons for the three-dimensional case, not yet reaching the 100 billion neurons of a typical human brain, but well in line with the brain size of many other animals.

This area of research is expanding rapidly. Not only is the number of cultured neurons increasing, but the range of sensory inputs is being expanded to include audio, infrared and even visual stimuli. Such richness of stimulation will no doubt have a dramatic effect on culture development. The potential of such systems, including the range of tasks they could deal with, also means that the physical body could take on different forms. For example, there is no reason why the body could not be a two-legged walking robot, with a rotating head and the ability to walk around in a building.

It is certainly the case that understanding neural activity becomes more difficult as the culture size increases. With a three-dimensional structure, monitoring activity deep within the central area, as with a human brain, becomes extremely complex, even with needle-like electrodes. In fact the current 100 000–150 000 neuron cultures are already far too complex at present for us to gain an overall insight. When they are grown to sizes such as 30 million neurons and beyond, clearly the problem is significantly magnified.

Looking a few years ahead, it seems quite realistic to assume that such cultures will become larger, potentially growing into sizes of billions of neurons. On top of this, the nature of the neurons may be diversified. At present, rat neurons are generally employed in studies. However, human neurons are also being cultured even now, thereby bringing about a robot with a human-neuron brain. If this brain then consists of billions of neurons, many social and ethical questions will need to be asked (Warwick 2010).



For example, if the robot brain has roughly the same number of human neurons as a typical human brain, then could/should it have similar rights to humans? Also, what if such creatures have far more human neurons than in a typical human brain — e.g. a million times more — would they make all future decisions rather than regular humans? Certainly it means that as we look to the near future, we will shortly witness thinking robots with brains not too dissimilar to those of humans.
GENERAL PURPOSE BRAIN IMPLANTS

Many human brain–computer interfaces are used for therapeutic purposes, in order to overcome a medical/neurological problem — an example being the deep brain stimulation electrodes employed to overcome the effects of Parkinson’s Disease (Pinter et al. 1999; Pan et al. 2007; Wu et al. 2010). However, even here it is possible to consider employing such technology in alternative ways to give individuals abilities not normally possessed by humans: human enhancement!

With more general brain–computer interfaces the therapy/enhancement situation is more complex. In some cases, those who have suffered an amputation or have suffered a spinal injury due to an accident may be able to regain control of devices via their (still functioning) neural signals (Donoghue et al. 2004). Meanwhile, stroke patients can be given limited control of their surroundings, as indeed can those who have motor neurone disease.

With these cases, the situation is not straightforward, as each individual is given abilities that no normal human has — for example, the ability to move a cursor around on a computer screen using nothing but neural signals (Kennedy et al. 2004). The same quandary exists for blind individuals who are allowed extrasensory input, such as sonar (a bat-like sense). This doesn’t repair their blindness but rather allows them to make use of an alternative sense.

Some of the most impressive human research to date has been carried out using the microelectrode array, as shown in Figure 3. The individual electrodes are 1.5 mm long and taper to a tip diameter of less than 90 microns. Although a number of trials using non-human test subjects have been carried out, human tests are at present limited to two groups of studies. In the second of these, the array has been employed in a recording-only role, most notably recently as part of what was called the “BrainGate” system.

Essentially, electrical activity from a few neurons monitored by the array electrodes was decoded into a signal to direct cursor movement. This enabled an individual to position a cursor on a computer screen, using neural signals for control combined with visual feedback. The same technique was later employed to allow the individual recipient, who was paralysed, to operate a robot arm (Hochberg et al. 2006). However, the first use of the microelectrode array (shown in Figure 3) has considerably broader implications which extend the capabilities of the human recipient.
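
As a hedged illustration of this kind of decoding (not the BrainGate algorithm itself), the sketch below fits a simple linear decoder by least squares during a simulated calibration phase, mapping a vector of firing rates from a handful of units to a two-dimensional cursor velocity. All the data, unit counts and noise levels here are simulated assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_units, n_samples = 8, 500
true_tuning = rng.normal(size=(n_units, 2))                        # simulated, unknown tuning
rates = rng.poisson(5.0, size=(n_samples, n_units)).astype(float)  # recorded firing rates
velocity = rates @ true_tuning + rng.normal(0.0, 0.5, size=(n_samples, 2))

# Calibration: fit decoder weights from recorded rates and intended cursor velocities
W, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# Online use: decode a cursor velocity from a new vector of firing rates
new_rates = rng.poisson(5.0, size=(1, n_units)).astype(float)
print(new_rates @ W)   # estimated (vx, vy)
```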

Deriving a reliable command signal from a collection of monitored neural signals is not necessarily a simple task, partly due to the complexity of signals recorded and partly due to the real-time constraints in dealing with the data. In some cases, however, it can be relatively easy to look for and obtain a system response to certain anticipated neural signals, especially when an individual has trained extensively with the system. In fact, neural signal shape, magnitude and waveform with respect to time are considerably different to other apparent signals (such as noise) and this makes the problem a little easier.
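
One generic way of exploiting those waveform differences, shown here purely as an assumption-laden sketch rather than the method used in the trials, is to compare each candidate snippet against a stored spike template using a normalised correlation and accept only close matches.

```python
import numpy as np

def matches_template(snippet, template, min_corr=0.8):
    """True if the snippet correlates strongly with a stored spike template."""
    s = (snippet - snippet.mean()) / (snippet.std() + 1e-12)
    t = (template - template.mean()) / (template.std() + 1e-12)
    return float(np.dot(s, t) / len(s)) >= min_corr

# Example: a noisy copy of the template matches, pure noise does not
rng = np.random.default_rng(3)
template = np.sin(np.linspace(0, np.pi, 32))                               # idealised spike shape
print(matches_template(template + 0.1 * rng.normal(size=32), template))    # True
print(matches_template(rng.normal(size=32), template))                     # False
```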

The interface through which a user interacts with technology provides a layer of separation between what the user wants the machine to do and what the machine actually does. This separation imposes a cognitive load on the individual concerned that is proportional to the difficulties experienced. The main issue is interfacing the human motor and sensory channels with the technology in a reliable, durable, effective and bidirectional way. One solution is to avoid this sensorimotor bottleneck altogether by interfacing directly with the human nervous system.

An individual human so connected can potentially benefit from some of the advantages of machine/artificial intelligence, for example rapid and highly accurate mathematical abilities in terms of “number crunching”, a high-speed, almost infinite, internet knowledge base, and accurate long-term memory. Additionally, humans have, as far as we know, only five senses, whereas machines offer a view of the world which includes infrared, ultraviolet and ultrasonic signals, to name but a few.

Humans are also limited in that they can only visualise and understand the world around them in terms of three-dimensional perception, whereas computers are quite capable of dealing with hundreds of dimensions. Perhaps most importantly, the human means of communication, essentially transferring a complex electrochemical signal from one brain to another via an intermediate, often mechanical, slow and error-prone medium (e.g. speech), is extremely poor, particularly in terms of speed, power and precision. It is clear that connecting a human brain, by means of an implant, to a computer network could in the long term open up the distinct advantages of machine intelligence, communication and sensing abilities to the implanted individual.

As a step towards a broader concept of brain–computer interaction, the microelectrode array (as shown in Figure 3) was implanted into the median nerve fibres of a healthy human individual (the author) during two hours of neurosurgery in order to test bidirectional functionality in a series of experiments. A stimulation current applied directly into the nervous system allowed information to be sent to the user, while control signals were decoded from neural activity in the region of the electrodes (Warwick et al. 2003). In this way, a number of trials were successfully concluded (Warwick et al. 2004), in particular:
Extrasensory (ultrasonic) input was successfully implemented (see Figure 4 for the experimentation).
Extended control of a robotic hand across the Internet was achieved, with feedback from the robotic fingertips being sent back as neural stimulation to give a sense of force being applied to an object (this was achieved between Columbia University, New York, USA and Reading University, UK).
A primitive form of telegraphic communication directly between the nervous systems of two humans (the author’s wife assisted) was performed (Warwick et al. 2004).
A wheelchair was successfully driven around by means of neural signals.
The colour of jewellery was changed as a result of neural signals, as was the behaviour of a collection of small robots.



In most, if not all, of the above cases, the trial could be considered useful for purely therapeutic reasons, e.g. the ultrasonic sense could be useful for an individual who is blind; telegraphic communication could be very useful for those with certain forms of motor neurone disease.

Each trial can, however, also be seen as a potential form of enhancement beyond the human norm for an individual. Indeed, the author did not need to have the implant for medical purposes to overcome a problem but rather the experimentation was performed purely for scientific exploration. Therefore, the question arises: how far should things be taken? Clearly enhancement by means of brain–computer interfaces opens up all sorts of new technological and intellectual opportunities; however, it also throws up a raft of different ethical considerations that need to be addressed directly.

When experiments of the type described above involve healthy individuals who have no reparative need for a brain–computer interface, but rather the main purpose of the implant is to enhance an individual’s abilities, it is difficult to regard the operation as being for therapeutic purposes. Indeed, the author, in carrying out such experimentation, specifically wished to investigate actual, practical enhancement possibilities (Warwick et al. 2003; Warwick et al. 2004).

From the trials it is clear that extrasensory input is one practical possibility that has been successfully trialled; however, improving memory, thinking in many dimensions and communicating by thought alone are other distinct potential — yet realistic — benefits, with the latter of these also having been investigated to an extent. To be clear, all these things appear to be possible (from a technical viewpoint at least) for humans in general.

As we presently stand, getting the go-ahead for an implantation in each case (in the UK at least) requires ethical approval from the local authority governing the hospital in which the procedure is carried out and, if it is a research procedure, also approval from the research and ethics committee of the establishment involved. This is quite apart from Devices Agency approval if a piece of equipment, such as an implant, is to be used on many individuals. Interestingly, no general ethical clearance is needed from any societal body, yet the issues are complex.

However, as we look to the future it is quite possible that commercial influences coupled with the societal wishes to communicate more effectively and perceive the world in a richer form will drive a market desire. Ultimately, direct brain-to-brain communication, possibly using implants of the type described, is a tremendously exciting proposition, ultimately resulting in thoughts, emotions, feelings, colours and basic ideas being transmitted directly from brain to brain. Whilst this raises many questions as to how it would work in practice, clearly we would be foolish not to push ahead to achieve it.

But then we come to the big questions. As communication is such an extremely important part of human intelligence, surely it follows that anyone who has an implant of this type will necessarily have a considerable boost to his or her intelligence. Clearly this will stretch intellectual performance in society with the implanted section outperforming those who have elected to stay as mere (unchipped) humans. Will this bring about the digital divide, an “us and them” situation, leaving regular humans far behind on the evolutionary ladder? Well, we’ll just have to see!
NON-INVASIVE BRAIN-COMPUTER INTERFACES

For some, brain–computer interfaces of the type described above are perhaps a step too far at present, particularly if it means tampering directly with the brain. As a result, by far the most studied brain–computer interface to date is that involving electroencephalography (EEG) and this is due to several factors. Firstly it is non-invasive; hence there is no need for surgery with its risks of infection and/or side effects. As a result, ethical approval requirements are significantly less and, because electrodes are easily available, the costs involved are significantly lower than for other methods.

EEG is also a portable procedure, involving electrodes which are merely stuck onto the outside of a person’s head; it can be set up in a lab with relatively little training, little background knowledge and little time — it can be done then and there, on the spot.

The number of electrodes actually employed for experimental purposes can vary from a small number (4–6) to the most commonly encountered (26–30) to well over 100 for those attempting to achieve better resolution. As a result, individual electrodes may be attached at specific locations or a cap can be worn in which the electrodes are pre-positioned. The care and management of the electrodes also varies considerably between experiments from those in which the electrodes are positioned dry and external to hair, to those in which hair is shaved off and gels are used to improve the contact made.

Some studies are employed more in the medical domain, for example, to study the onset of epileptic seizures in patients; however, the range of applications is widespread. A few of the most typical and/or interesting are included here to give an idea of possibilities and ongoing work rather than to provide a complete overview of the present state of play.

Typical are those in which subjects learn to operate a computer cursor in this fashion (Trejo et al. 2006). However, it must be pointed out that even after significant periods of training (many months), the process is slow and usually requires several attempts before success is achieved. Along much the same lines, numerous research groups have used EEG recordings to switch on lights, control a small robotic vehicle and control other analogue signals (Millan et al. 2004; Tanaka et al. 2005). A similar method was employed, using a 64-electrode skullcap, to enable a quadriplegic to carry out simple hand movement tasks by means of stimulation through embedded nerve controllers (Kumar 2008).
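
To give a flavour of what such EEG control typically involves, the hedged sketch below estimates band power in an assumed 8–12 Hz control band from a short window of one channel and issues a binary command when that power falls below a threshold that would, in practice, be calibrated per user. It is a generic illustration, not the method of any of the cited studies.

```python
import numpy as np

def band_power(window, fs, lo=8.0, hi=12.0):
    """Mean spectral power of one EEG channel in the lo-hi Hz band."""
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    power = np.abs(np.fft.rfft(window)) ** 2
    band = (freqs >= lo) & (freqs <= hi)
    return power[band].mean()

def command(window, fs, threshold):
    """Issue 'move' when the control-band power drops below the calibrated threshold."""
    return "move" if band_power(window, fs) < threshold else "rest"

# Example with 1 s of simulated 256 Hz EEG dominated by a 10 Hz rhythm
fs = 256
t = np.arange(fs) / fs
rest_eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.default_rng(2).normal(size=fs)
print(command(rest_eeg, fs, threshold=5.0))   # strong 10 Hz power -> "rest"
```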

It is also possible to consider the uniqueness of specific EEG signals, particularly in response to associated stimuli, potentially as an identification tool (Palaniappan 2008). Meanwhile, interesting results have been achieved using EEG for the identification of intended finger taps, whether the taps occurred or not, with high accuracy. This is useful as a fast interface method as well as a possible prosthetic method (Daly et al. 2011).

Whilst EEG experimentation is relatively cheap, portable and easy to set up, it is still difficult to see its widespread use in the future. It certainly has a role to play in externally assessing some aspects of brain functioning for medical purposes (e.g. assessing epileptic seizures and neural activity during obsessive compulsive disorder) and surely these applications will increase in due course. However, the possibility of regular people driving around whilst wearing a skullcap of electrodes, with no need for a steering wheel, is not thought to be at all realistic; completely autonomous vehicles on the roads are much more likely.
CONCLUSIONS

In this chapter, a look has been taken at several different cybernetic enhancements and resultant types of artificial intelligence. Experimental cases have been reported in order to indicate how humans, and/or animals for that matter, can merge with technology in this way, which throws up a plethora of social and ethical considerations as well as technical issues. In each case reports on actual practical experimentation have been given, rather than merely some theoretical concept.

In particular when considering robots with biological brains, this could ultimately mean perhaps human brains operating in a robot body. Therefore, should such robots be given rights of some kind? If one was switched off, would this be deemed cruelty to robots? More importantly at this time, should such research forge ahead regardless? Before too long we may well have robots with brains made up of human neurons that have the same sort of capabilities as those of the human brain.

In the section on a general-purpose invasive brain implant, as well as implant employment for therapy, a look was taken at the potential for human enhancement. Extrasensory input, extension of the nervous system over the Internet and a basic form of thought communication have all already been scientifically achieved. So it is likely that many humans will upgrade and become part machine themselves. This may mean that ordinary (non-implanted) humans are left behind as a result. If you could be enhanced, would you have any problem with it?

Then came a section on the more standard EEG electrodes, which are positioned externally and which are therefore encountered much more frequently. Unfortunately, the resolution of such electrodes is relatively poor and they are indeed only useful for monitoring and not for stimulation. Hence the issues surrounding them are somewhat limited. We may well be able to use them to learn a little more about how the brain operates, but it is difficult to see them ever being used for highly sensitive control operations when several million neurons feed into the information transmitted by each electrode.

As well as taking a look at the procedures involved, the aim of this article has been to consider some of the likely ethical and social issues. Some technological issues have also been pondered in order to open a window on the direction in which developments are heading. In each case, however, a firm footing has been planted on actual practical technology and on realistic future scenarios rather than on mere speculative ideas. In a sense, the overall idea is to encourage reflection, so that the further experimentation which we will now witness can be guided by the informed feedback that results.
Bibliography

Bekey, G. 2005. Autonomous Robots: from Biological Inspiration to Implementation and Control. Cambridge, MA: MIT Press.

Brooks, R. A. 2002. Robot: the Future of Flesh and Machines. London: Penguin.

Chiappalone, M., et al. 2007. “Network dynamics and synchronous activity in cultured cortical neurons.” International Journal of Neural Systems 17: 87–103.

Daly, I., S. Nasuto, and K. Warwick. 2011. “Single Tap Identification for Fast BCI Control.” Cognitive Neurodynamics 5 (1): 21–30.

DeMarse, T., et al. 2001. “The Neurally Controlled Animat: Biological Brains Acting with Simulated Bodies.” Autonomous Robots 11: 305–310.

Donoghue, J., et al. 2004. “Development of a Neuromotor Prosthesis for Humans.” Advances in Clinical Neurophysiology, Supplements to Clinical Neurophysiology 57: 588–602.

Hochberg, L., et al. 2006. “Neuronal Ensemble Control of Prosthetic Devices by a Human with Tetraplegia.” Nature 442: 164–171.

Kennedy, P., et al. 2004. “Using Human Extra-Cortical Local Field Potentials to Control a Switch.” Journal of Neural Engineering 1 (2): 72–77.

Kumar, N. 2008. “Brain Computer Interface.” Cochin University of Science & Technology Report, August.

Millan, J., et al. 2004. “Non-Invasive Brain-Actuated Control of a Mobile Robot by Human EEG.” IEEE Transactions on Biomedical Engineering 51 (6): 1026–1033.

Pan, S., et al. 2007. “Prediction of Parkinson’s Disease Tremor Onset with Artificial Neural Networks.” IASTED conference papers, Artificial Intelligence and Applications. Innsbruck, Austria: 341–345, 14–16 February.

Palaniappan, R. 2008. “Two-Stage Biometric Authentication Method using Thought Activity Brain Waves.” International Journal of Neural Systems 18 (1): 59–66.

Pinter, M., et al. 1999. “Does Deep Brain Stimulation of the Nucleus Ventralis Intermedius Affect Postural Control and Locomotion in Parkinson’s disease?” Movement Disorders 14 (6): 958–963.

Trejo, L., R. Rosipal, and B. Matthews. 2006. “Brain-computer interfaces for 1-D and 2-D cursor control: designs using volitional control of the EEG spectrum or steady-state visual evoked potentials.” IEEE Transactions on Neural Systems and Rehabilitation Engineering 14 (2): 225–229.

Tanaka, K., K. Matsunaga, and H. Wang. 2005. “Electroencephalogram-Based Control of an Electric Wheelchair.” IEEE Transactions on Robotics 21 (4): 762–766.

Warwick, K. 2010. “Implications and Consequences of Robots with Biological Brains.” Ethics and Information Technology 12 (3): 223–234.

Warwick, K., et al. 2003. “The Application of Implant Technology for Cybernetic Systems.” Archives of Neurology 60 (10): 1369–1373.

Warwick, K., et al. 2004. “Thought Communication and Control: A First Step Using Radiotelegraphy.” IEE Proceedings on Communications 151 (3): 185–189.

Warwick, K., et al. 2011. “Experiments with an In-Vitro Robot Brain.” In Yang Cai (ed.), Computing With Instinct: Rediscovering Artificial Intelligence. New York: Springer, 1–15.

Wu, D., et al. 2010. “Prediction of Parkinson’s Disease Tremor Onset using Radial Basis Function Neural Networks.” Expert Systems with Applications 37 (4): 2923–2928.
