Can humans learn like Artificial Intelligence?
I always find it funny when I talk to computer scientists about how humans learn, often through repeated rewards (or punishments) that shape their behavior over time, and they respond with, “hey, that’s just like a computer!”
It’s hilarious: the idea that nature somehow copies technology recently made by humans, and not the other way around! Indeed, many technological advancements are humans’ way of replicating what nature has already given us. Perhaps if psychology courses were mandatory for computer engineers, AI might have come even further than it has. Topics such as conditioning, learning, and memory are standard for undergraduate psychology majors, but these are likely not the students learning to develop the algorithms that will change our lives forever.
The story of “Little Albert” is a great example of how learning works. Albert, an infant, was experimented on by the prominent psychologist Dr. John Watson. On his first day with the researchers, Albert sat patiently on the floor with a look of relaxed confidence on his face. There was no fear in his eyes, no uncertainty or insecurity, just the natural curiosity of a baby, as they brought out a series of animate and inanimate objects to determine his response. As the objects were brought in (a dog, a rabbit, fire, a monkey, and so forth), little Albert sat calm, curious, and interested, but unafraid.
Then Dr. Watson brought in a white rat, and this is where everything changed. As Albert sat looking at the rat, an accomplice of Watson’s, hidden behind a curtain, rang a loud gong. This startled little Albert, who fell forward. The accomplice did this repeatedly: whenever the white rat was near Albert, the loud clanging noise would sound. As the researchers expected, Albert cried in fear.
Eventually, the sound was no longer necessary; just the sight of the white rat was upsetting to Albert. Every time he was shown the white rat afterwards, he would begin to cry. This is what we refer to as “classical conditioning”. Classical conditioning occurs when a neutral item (psychologists say stimulus), like a sight or sound, is closely associated with another stimulus that produces a strong natural response. When the two are repeatedly presented together, the person learns to associate one with the other and begins responding to the neutral item as though it were the original stimulus. People and animals both learn this way, and many things, even an aversion to certain foods, can be conditioned.
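For the computationally inclined, this pairing process has a standard formal description that the article does not mention: the Rescorla-Wagner model, in which the learned association grows in proportion to how surprising the outcome still is. The sketch below is an illustration of that model, not of Watson’s actual procedure; the parameter names and values are assumptions chosen for clarity.

```python
# A minimal sketch of the Rescorla-Wagner model of classical conditioning.
# Associative strength v grows toward the maximum (lam) each time the
# neutral stimulus (the rat) is paired with the startling one (the gong).

def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """Return the associative strength after each paired trial.

    alpha: learning rate; lam: maximum conditioning the pairing supports.
    """
    v = 0.0
    history = []
    for _ in range(trials):
        v += alpha * (lam - v)  # update is driven by prediction error (lam - v)
        history.append(v)
    return history

strengths = rescorla_wagner(10)
# Early pairings produce large jumps in association; later ones add little,
# matching the observation that conditioning eventually no longer needs the gong.
```

Note the design of the update rule: learning is fastest when the outcome is most unexpected, which is why the curve rises steeply at first and then flattens.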
Another type of conditioning, referred to as operant conditioning, is when someone learns what to do and what not to do because they are rewarded or punished after the fact. Here, learning happens through consequences instead of association. This is why we are much less likely to repeat a painful experience, and why praise from our parents works as motivation. The same principle underlies reinforcement learning, the branch of machine learning in which models learn from rewards: the likelihood of a behavior increases or decreases based on the outcome that follows it.
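The parallel between operant conditioning and reinforcement learning can be made concrete with a toy sketch. This is an illustration under assumed names and reward values (the behaviors and their payoffs are invented for the example), not a real agent: each behavior’s learned value is nudged toward the consequence that follows it, so rewarded behaviors end up valued above punished ones.

```python
import random

# A toy sketch of operant conditioning as reinforcement learning:
# behaviors followed by reward become preferred; punished ones are avoided.

def train(behaviors, rewards, episodes=1000, lr=0.1, seed=0):
    """Learn a preference value for each behavior from its consequences."""
    rng = random.Random(seed)
    values = {b: 0.0 for b in behaviors}
    for _ in range(episodes):
        b = rng.choice(behaviors)                 # try a behavior
        outcome = rewards[b]                      # a consequence follows it
        values[b] += lr * (outcome - values[b])   # shift value toward the outcome
    return values

values = train(["touch_stove", "ask_politely"],
               {"touch_stove": -1.0, "ask_politely": 1.0})
# After training, "ask_politely" carries a high value and "touch_stove" a low
# one, mirroring how praise and pain shape which behaviors we repeat.
```

The update rule here is deliberately the same shape as in classical-conditioning models; the difference is what drives it, namely the consequence of the agent’s own action rather than a paired stimulus.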
There are other types of learning that humans use, including observational learning, where people learn how to behave by watching what people they perceive as important do. This explains why people follow “influencers” on social media. There is also latent learning, when information learned long ago is not used until needed, like the dance you have seen at every wedding and suddenly know how to do, without practice, when it’s your turn. Finally, and most magically, there is insight learning, which explains those “aha” moments when we go from not knowing the answer to a clear solution in an instant.
I have just explained five ways in which humans learn naturally. But can machines learn the way humans do? As a cognitive scientist, I bridge the gap between cognition and computer science to try to understand the collaborative relationship between man and machine. There is no doubt that people would not be as far along without our technological helpers. But if we truly want machines to be capable of learning in the same way that humans do, then we need to provide a framework for learning that is top-down. In cognitive psychology, perceiving and understanding is largely top-down, meaning it comes from the experiences and context of the situation. Bottom-up perception and behavior occurs when there is no prior context or experience with a stimulus. This is why children are able to learn all of the rules and intricacies of language: even though not every sentence and word they know has been explicitly reinforced, they have a top-down framework for incorporating meaning, which lets them use language in new contexts. Machines, by contrast, must learn (nearly) everything in a bottom-up fashion, e.g., seeing 5,999,999 pictures of bridges before recognizing the six-millionth picture as a bridge. Machines are also unlikely to “wake up in the morning with the answer” like humans often do.
Machines can learn in the same way as humans, provided that the humans who program them can enable them to do two things: a) process emotion as part of learning and b) learn in a top-down manner. Little Albert only cried at the sight of the white rat because the loud, startling noise made him afraid. Humans have emotions, and these play an important part in what we are motivated to pursue and what we are motivated to avoid. Humans also learn by developing cognitive frameworks (often referred to as schemas) for different types of information and situations. For example, you might recognize the number 3 regardless of the font, color, or size in which it is depicted. This is helped by context: knowing the situation you’re in and what you expect to encounter can help guide your perception and learning. You recognize new instances of the number 3 not because you have witnessed every 3 in the world or completed every possible calculation with the number 3, but because you can conceive of a new version of the number based on your pre-existing experiences, not only with numbers but with letters and other information. A 3 presented horizontally, for example, is still easily recognized as a 3, even by a child, and without explicit training on horizontal 3s. Finally, when things are personally relevant (were you born on the 3rd?), they are easier to recognize. Machines don’t (yet) have these gifts of learning.
There is a way forward, and I recommend that all first- and second-year computer science university students take the following courses alongside their machine learning courses: Introduction to Psychology, Learning and Behavioral Psychology, Cognitive Psychology, and Memory and Cognition. These courses will give them an understanding of how nature created an automatic mechanism for learning and understanding. True intelligence is not neatly categorized but is a mishmash of short-term, long-term, and working memory, pattern recognition, neural connections, processing speed, vocabulary, and behavioral habits. We (humans) learn and remember in many different ways. I believe God designed the most perfect and amazing machine in the world, the human brain, and until we are God this will be hard to replicate. But we can start by teaching the people who teach machines how to learn the inner workings of what nature intended.
Notes:
Here’s a related 1987 paper by Rodney Brooks, “Intelligence Without Representation,” which takes the opposing view that AI can be developed without theories of inner representation (a challenge to top-down approaches): https://people.csail.mit.edu/brooks/papers/representation.pdf