Delay reduction hypothesis

In classical conditioning, the delay reduction hypothesis states that certain discriminative stimuli (DS) are more effective conditioned reinforcers (CR) if they signal a decrease in the time to a positive reinforcer or an increase in the time to an aversive stimulus or punishment. It is often applied to chained schedules of reinforcement, in which the final link ends with the aversive stimulus or the positive (unconditioned) reinforcer.[1]
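The hypothesis also has a quantitative form for choice experiments such as concurrent-chains schedules, in which the value of a terminal-link stimulus is taken to be the reduction in expected time to primary reinforcement that it signals. The following is a minimal sketch of that prediction, assuming Fantino's standard formulation; the function and parameter names are illustrative and not taken from the sources cited here.

```python
# A minimal sketch (not from the article) of the delay-reduction prediction
# for a two-alternative concurrent-chains procedure.  T is the average
# overall time to primary reinforcement measured from the onset of the
# initial links; t_left and t_right are the delays to reinforcement
# signalled by the two terminal-link stimuli.  Predicted preference for
# the left alternative is its share of the total delay reduction.

def delay_reduction_choice(T: float, t_left: float, t_right: float) -> float:
    """Predicted proportion of choices for the left alternative."""
    reduction_left = T - t_left    # delay reduction signalled by the left stimulus
    reduction_right = T - t_right  # delay reduction signalled by the right stimulus
    if reduction_left <= 0 and reduction_right <= 0:
        return 0.5   # neither stimulus signals a delay reduction
    if reduction_left <= 0:
        return 0.0   # exclusive preference for the right alternative
    if reduction_right <= 0:
        return 1.0   # exclusive preference for the left alternative
    return reduction_left / (reduction_left + reduction_right)

# Example: overall time to food averages 60 s; the left terminal link lasts
# 10 s and the right 30 s, so the left stimulus signals the larger delay
# reduction and should be preferred (about 0.625 of choices here).
print(delay_reduction_choice(T=60, t_left=10, t_right=30))
```

Under this sketch, a stimulus signalling no reduction in delay (or an increase) is predicted to support exclusive preference for the other alternative, which is the qualitative pattern the hypothesis describes.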

History

The delay reduction hypothesis was developed in 1969 by Edmund Fantino. It proposes that delays to reinforcement are aversive to organisms and that organisms make choices that reduce those delays.[2] An organism that is rewarded for an action tends to repeat that action in anticipation of the same outcome, and over time it becomes conditioned to act, or not act, on the specific stimulus.[3]


References

  1. W. David Pierce and Carl D. Cheney, Behavior Analysis and Learning, 3rd ed.
  2. O'Daly & Fantino (2003). "Delay Reduction Theory". The Behavior Analyst Today, 4(2), 141–155. BAO; accessed 26 September 2010. Archived 29 December 2010 at the Wayback Machine.
  3. Matthew O'Daly and Edmund Fantino. "Delay reduction theory: choice, value, and conditioned reinforcement".
