Glossary
Acquisition
The initial stage in operant conditioning when a new response is established and gradually strengthened through reinforcement.
Example:
When a pigeon first learns to peck a key for food, it is in the acquisition phase of learning.
B.F. Skinner
A leading behaviorist who extensively studied operant conditioning and developed the Skinner box to systematically analyze learning through consequences.
Example:
B.F. Skinner's work demonstrated how precise control over consequences could shape complex behaviors in animals.
Biological Predispositions
Innate tendencies or genetic influences that make certain behaviors or associations easier to learn than others.
Example:
It's easier to teach a bird to peck for food than to teach it to sing for food, due to its biological predispositions.
Chaining
Linking together a series of behaviors, where each step serves as a cue for the next and the final step is reinforced.
Example:
Teaching a child to get ready for bed by reinforcing completion of the entire sequence (put on pajamas, brush teeth, read a book, get into bed) is an example of chaining.
Cognitive Interpretations
The role of thoughts, expectations, and beliefs in influencing learning and behavior, suggesting that learning is not just mechanical.
Example:
A student's belief that they can succeed on a test (cognitive interpretation) can influence their study habits and performance.
Cognitive Map
A mental representation of the layout of one's environment, often formed through latent learning.
Example:
After exploring a new school building, a student develops a cognitive map that allows them to navigate it efficiently.
Continuous Reinforcement
Reinforcing the desired response every time it occurs, which leads to rapid learning but also rapid extinction once reinforcement stops.
Example:
Giving a dog a treat every single time it sits on command is an example of continuous reinforcement.
Discrimination
The learned ability to distinguish the specific stimulus that signals reinforcement from other, similar stimuli, and to respond only to that stimulus.
Example:
A dog learns to only bark at the doorbell, not at other similar sounds like the TV, demonstrating discrimination.
E.L. Thorndike
A pioneer in operant conditioning who formulated the Law of Effect based on his experiments with cats in puzzle boxes.
Example:
E.L. Thorndike's research showed how behaviors leading to satisfying outcomes were more likely to be repeated.
Extinction
The diminishing of a conditioned response when reinforcement for the behavior ceases.
Example:
If a child stops getting praise for cleaning their room, their cleaning behavior may eventually undergo extinction.
Fixed (in reinforcement schedules)
Refers to a predictable and constant pattern of reinforcement, where the number of responses or time interval is set.
Example:
A factory worker who gets paid for every 10 items assembled is on a fixed schedule.
Fixed-Interval Schedule
A partial reinforcement schedule that reinforces a response only after a specified time has elapsed, producing a scalloped pattern of behavior.
Example:
Getting a paycheck every two weeks is an example of a fixed-interval schedule, as the reward comes after a set amount of time.
Fixed-Ratio Schedule
A partial reinforcement schedule that reinforces a response only after a specified number of responses, producing high rates of response.
Example:
A coffee shop offering a free drink after every 10 purchases is using a fixed-ratio schedule.
Generalization
The tendency for a behavior to occur in response to stimuli similar to the one that was originally reinforced.
Example:
A child who is rewarded for saying 'please' to their parents might also start saying 'please' to their teachers or friends, showing generalization.
Insight Learning
A form of problem-solving in which the organism suddenly grasps the solution to a problem, often described as an 'aha!' moment, without trial-and-error.
Example:
A monkey suddenly figures out how to stack boxes to reach a banana, rather than through gradual shaping, illustrating insight learning.
Instinctive Drift
The tendency for an animal to revert to its natural, species-specific behaviors, even after being trained using operant conditioning.
Example:
Despite extensive training, a raccoon might revert to washing coins in water instead of dropping them into a bank, illustrating instinctive drift.
Interval (in reinforcement schedules)
Refers to reinforcement being based on the passage of time since the last reinforcement.
Example:
Receiving a paycheck every two weeks is based on an interval of time.
Latent Learning
Learning that occurs but is not immediately apparent or demonstrated until there is an incentive to do so.
Example:
A student might learn the layout of their town by driving around, but only demonstrate this latent learning when asked for directions.
Law of Effect
Thorndike's principle stating that behaviors followed by favorable consequences become more likely, and behaviors followed by unfavorable consequences become less likely.
Example:
If a cat accidentally presses a lever and gets food, it's more likely to press the lever again due to the Law of Effect.
Learned Helplessness
The hopelessness and passive resignation an animal or human learns when unable to avoid repeated aversive events.
Example:
A student who repeatedly fails tests despite studying may eventually stop trying altogether, exhibiting learned helplessness.
Negative (in operant conditioning)
Refers to the *removal* of a stimulus from the environment, regardless of whether it is pleasant or unpleasant.
Example:
Taking away a toy from a misbehaving child is a negative procedure, because a stimulus is being removed from the environment.
Negative Punishment
Removing a desirable stimulus to decrease the frequency of a behavior.
Example:
Taking away a teenager's video game console after they break curfew is an example of negative punishment.
Negative Reinforcement
Removing an aversive stimulus to increase the frequency of a behavior.
Example:
Putting on your seatbelt to stop the annoying beeping sound in your car is an example of negative reinforcement.
Operant Conditioning
A type of learning where behaviors are strengthened or weakened based on the consequences that follow them.
Example:
A student learns to study more effectively because good grades (a positive consequence) follow their diligent efforts, demonstrating operant conditioning.
Partial Reinforcement
Reinforcing a response only part of the time, which results in slower acquisition but greater resistance to extinction.
Example:
A gambler playing a slot machine is on a partial reinforcement schedule, as they don't win every time they play.
Positive (in operant conditioning)
Refers to the *addition* of a stimulus to the environment, regardless of whether it is pleasant or unpleasant.
Example:
When you add a chore to a child's list, you are using a positive procedure, because a stimulus is being added to the environment.
Positive Punishment
Adding an aversive stimulus to decrease the frequency of a behavior.
Example:
A parent scolding a child for running into the street is using positive punishment to stop the dangerous behavior.
Positive Reinforcement
Adding a desirable stimulus to increase the frequency of a behavior.
Example:
A teacher gives a student a sticker for completing their homework, which is an example of positive reinforcement to encourage future homework completion.
Primary Reinforcers
Stimuli that are innately satisfying and do not require learning to be reinforcing, typically fulfilling biological needs.
Example:
Food, water, and warmth are all primary reinforcers because they are inherently rewarding.
Punishment
Any event that weakens or decreases the likelihood of a behavior occurring again.
Example:
A dog getting a squirt of water for barking excessively serves as punishment, aiming to reduce the barking behavior.
Ratio (in reinforcement schedules)
Refers to reinforcement being based on the number of responses or behaviors performed.
Example:
A salesperson earning a commission for every sale they make is on a ratio schedule.
Reinforcement
Any event that strengthens or increases the likelihood of a behavior occurring again.
Example:
Giving a child praise for sharing their toys acts as reinforcement, making them more likely to share in the future.
Reinforcement Discrimination
The ability to distinguish between stimuli and respond only to specific ones that signal reinforcement.
Example:
A dog learns to sit only when its owner says 'sit' and not when a stranger says it, demonstrating reinforcement discrimination.
Reinforcement Generalization
The tendency for a behavior to occur in situations similar to the one in which it was originally reinforced.
Example:
A child praised for cleaning their room might start cleaning other areas of the house, showing reinforcement generalization.
Reinforcer
Any event or stimulus that strengthens the behavior it follows, making that behavior more likely to occur again.
Example:
For a child, getting praise after cleaning their room acts as a reinforcer for future cleaning behavior.
Secondary Reinforcers
Stimuli that gain their reinforcing power through association with primary reinforcers; they are learned rewards.
Example:
Money is a secondary reinforcer because it can be exchanged for primary reinforcers like food or shelter.
Shaping
An operant conditioning procedure in which reinforcers guide behavior toward closer and closer approximations of the desired behavior.
Example:
Training a dog to fetch the newspaper by first rewarding it for picking up any object, then for picking up the paper, and finally for bringing it to you, is an example of shaping.
Skinner Box
A chamber containing a bar or key that an animal can manipulate to obtain a food or water reinforcer, with devices to record the animal's responses.
Example:
A rat in a Skinner box learns to press a lever to receive a food pellet.
Spontaneous Recovery
The reappearance, after a pause, of an extinguished conditioned response.
Example:
After a dog has stopped begging for treats (extinction), it might suddenly start begging again a few days later, demonstrating spontaneous recovery.
Superstitious Behavior
Behavior that is accidentally reinforced, leading an individual to believe there's a causal link between the behavior and a positive outcome, even if none exists.
Example:
A baseball player always wears the same 'lucky' socks because they once hit a home run while wearing them, which is an example of superstitious behavior.
Variable (in reinforcement schedules)
Refers to an unpredictable and changing pattern of reinforcement, where the number of responses or time interval varies.
Example:
Playing a slot machine involves a variable schedule, as payouts are unpredictable.
Variable-Interval Schedule
A partial reinforcement schedule that reinforces a response at unpredictable time intervals, producing a slow, steady rate of response.
Example:
Checking your email for a new message is on a variable-interval schedule, as you don't know when the next email will arrive.
Variable-Ratio Schedule
A partial reinforcement schedule that reinforces a response after an unpredictable number of responses, producing high and consistent rates of response and high resistance to extinction.
Example:
Playing a slot machine is the classic example of a variable-ratio schedule, as the number of plays needed for a win is random.
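The rule behind ratio schedules can be sketched as a small simulation (the function names and the uniform distribution for the variable schedule are illustrative choices, not part of the glossary): a fixed-ratio schedule delivers a reinforcer after exactly n responses, while a variable-ratio schedule delivers one after an unpredictable number of responses that averages n.

```python
import random

def simulate_fixed_ratio(responses, n):
    # Fixed ratio: a reinforcer is delivered after every n-th response.
    return sum(1 for r in range(1, responses + 1) if r % n == 0)

def simulate_variable_ratio(responses, mean_n, seed=0):
    # Variable ratio: a reinforcer is delivered after an unpredictable
    # number of responses averaging mean_n (here drawn uniformly from
    # 1..2*mean_n - 1, one hypothetical way to model the unpredictability).
    rng = random.Random(seed)
    reinforcers = 0
    count = 0
    needed = rng.randint(1, 2 * mean_n - 1)
    for _ in range(responses):
        count += 1
        if count >= needed:
            reinforcers += 1
            count = 0
            needed = rng.randint(1, 2 * mean_n - 1)
    return reinforcers

print(simulate_fixed_ratio(100, 10))     # → 10 (perfectly predictable)
print(simulate_variable_ratio(100, 10))  # roughly 10, but unpredictable
```

The predictable count from the fixed schedule versus the varying count from the variable schedule mirrors why fixed-ratio responding pauses after each reinforcer while variable-ratio responding stays high and resists extinction.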