Behavior in the present depends critically on experience in similar environments in the past. Such past experience may be important in controlling behavior not because it determines the strength of a behavior, but because it allows the structure of the current environment to be detected and used. We explore a prospective-control approach to understanding simple behavior. Under this approach, order in the environment allows even simple organisms to use their personal past to respond according to the likely future. The predicted future controls behavior, and past experience forms the building blocks of the predicted future. We examine how generalization affects the use of past experience to predict and respond to the future. First, we consider how generalization across various dimensions of an event determines the degree to which the structure of the environment exerts control over behavior. Next, we turn to generalization from the past to the present as the means of deciding when, where, and what to do. This prospective-control approach is measurable and testable; it builds predictions from events that have already occurred, and assumes no agency. Under this approach, generalization is fundamental to understanding both adaptive and maladaptive behavior.
April, L. B., Bruce, K., & Galizio, M. (2013). The magic number 70 (plus or minus 20): Variables determining performance in the rodent odor span task. Learning & Motivation, 44, 143–158. https://doi.org/10.1016/j.lmot.2013.03.001.
Bai, J. Y., Cowie, S., & Podlesnik, C. A. (2017). Quantitative analysis of local-level resurgence. Learning & Behavior, 45(1), 78–88. https://doi.org/10.3758/s13420-016-0242-1.
Baum, W. M. (1974). On two types of deviation from the matching law: Bias and undermatching. Journal of the Experimental Analysis of Behavior, 22, 231–242. https://doi.org/10.1901/jeab.1974.22-231.
Baum, W. M. (2012). Rethinking reinforcement: Allocation, induction, and contingency. Journal of the Experimental Analysis of Behavior, 97, 101–124. https://doi.org/10.1901/jeab.2012.97-101.
Bizo, L. A., & White, K. G. (1994). The behavioral theory of timing: Reinforcer rate determines pacemaker rate. Journal of the Experimental Analysis of Behavior, 61, 19–33. https://doi.org/10.1901/jeab.1994.61-19.
Bizo, L. A., & White, K. G. (1995). Reinforcement context and pacemaker rate in the behavioral theory of timing. Animal Learning & Behavior, 23, 376–382. https://doi.org/10.3758/BF03198937.
Blough, D. S. (1972). Recognition by the pigeon of stimuli varying in two dimensions. Journal of the Experimental Analysis of Behavior, 18, 345–367. https://doi.org/10.1901/jeab.1972.18-345.
Blough, D. S. (1975). Steady state data and a quantitative model of operant generalization and discrimination. Journal of Experimental Psychology: Animal Behavior Processes, 1(1), 3. https://doi.org/10.1037/0097-7403.1.1.3.
Bouton, M. E., & Bolles, R. C. (1979). Contextual control of the extinction of conditioned fear. Learning & Motivation, 10, 445–466. https://doi.org/10.1016/0023-9690(79)90057-2.
Bouton, M. E., Todd, T. P., Vurbic, D., & Winterbauer, N. E. (2011). Renewal after the extinction of free operant behavior. Learning & Behavior, 39(1), 57–67. https://doi.org/10.3758/s13420-011-0018-6.
Branch, C. L., Galizio, M., & Bruce, K. (2014). What-where-when memory in the rodent odor span task. Learning & Motivation, 47, 18–29. https://doi.org/10.1016/j.lmot.2014.03.001.
Cowie, S. (2018). Behavioral time travel: Control by past, present, and potential events. Behavior Analysis: Research & Practice, 18, 174–183. https://doi.org/10.1037/bar0000122.
Cowie, S. (2019). Some weaknesses of a response-strength account of reinforcer effects. European Journal of Behavior Analysis, 1–16. https://doi.org/10.1080/15021149.2019.1685247.
Cowie, S., & Davison, M. (2016). Control by reinforcers across time and space: A review of recent choice research. Journal of the Experimental Analysis of Behavior, 105, 246–269. https://doi.org/10.1002/jeab.200.
Cowie, S., & Davison, M. (2020). Being there on time: Reinforcer effects on timing and locating. Journal of the Experimental Analysis of Behavior, 113, 340–362. https://doi.org/10.1002/jeab.581.
Cowie, S., Davison, M., & Elliffe, D. (2011). Reinforcement: Food signals the time and location of future food. Journal of the Experimental Analysis of Behavior, 96, 63–86. https://doi.org/10.1901/jeab.2011.96-63.
Cowie, S., Davison, M., & Elliffe, D. (2014). A model for food and stimulus changes that signal time-based contingency changes. Journal of the Experimental Analysis of Behavior, 102(3), 209–310. https://doi.org/10.1002/jeab.105.
Cowie, S., Davison, M., & Elliffe, D. (2016). A model for discriminating reinforcers in time and space. Behavioural Processes, 127, 62–73. https://doi.org/10.1016/j.beproc.2016.03.010.
Cowie, S., Davison, M., & Elliffe, D. (2017). Control by past and present stimuli depends on the discriminated reinforcer differential. Journal of the Experimental Analysis of Behavior, 108, 184–203. https://doi.org/10.1002/jeab.268.
Cowie, S., Elliffe, D., & Davison, M. (2013). Concurrent schedules: Discriminating reinforcer-ratio reversals at a fixed time after the previous reinforcer. Journal of the Experimental Analysis of Behavior, 100, 117–134. https://doi.org/10.1002/jeab.43.
Davison, M., & Baum, W. M. (2006). Do conditional reinforcers count? Journal of the Experimental Analysis of Behavior, 86(3), 269–283. https://doi.org/10.1901/jeab.2006.56-05.
Davison, M., & Baum, W. M. (2010). Stimulus effects on local preference: Stimulus–response contingencies, stimulus–food pairing, and stimulus–food correlation. Journal of the Experimental Analysis of Behavior, 93(1), 45–59. https://doi.org/10.1901/jeab.2010.93-45.
Davison, M., & Cowie, S. (2019). Timing or counting? Control by contingency reversals at fixed times or numbers of responses. Journal of Experimental Psychology: Animal Learning and Cognition, 45(2), 222. https://doi.org/10.1037/xan0000201.
Davison, M., & Jones, B. M. (1998). Performance on concurrent variable-interval extinction schedules. Journal of the Experimental Analysis of Behavior, 69, 49–57. https://doi.org/10.1901/jeab.1998.69-49.
Davison, M., & Nevin, J. A. (1999). Stimuli, reinforcers, and behavior: An integration. Journal of the Experimental Analysis of Behavior, 71, 439–482. https://doi.org/10.1901/jeab.1999.71-439.
Estes, W. K. (1944). An experimental study of punishment. Psychological Monographs, 57(3), 1–40. https://doi.org/10.1037/h0093550.
Gibbon, J. (1977). Scalar expectancy theory and Weber's law in animal timing. Psychological Review, 84, 279–325. https://doi.org/10.1037/0033-295X.84.3.279.
Gomes-Ng, S., Elliffe, D., & Cowie, S. (2018). Generalization of response patterns in a multiple peak procedure. Behavioural Processes, 157, 361–371. https://doi.org/10.1016/j.beproc.2018.07.012.
Herrnstein, R. J. (1961). Relative and absolute strength of response as a function of frequency of reinforcement. Journal of the Experimental Analysis of Behavior, 4, 267–272. https://doi.org/10.1901/jeab.1961.4-267.
Herrnstein, R. J. (1970). On the law of effect. Journal of the Experimental Analysis of Behavior, 13, 243–266. https://doi.org/10.1901/jeab.1970.13-243.
Hull, C. L. (1933). Differential habituation to internal stimuli in the albino rat. Journal of Comparative Psychology, 16, 255–273. https://doi.org/10.1037/h0071710.
Hunter, M., & Rosales-Ruiz, J. (2019). The power of one reinforcer: The effect of a single reinforcer in the context of shaping. Journal of the Experimental Analysis of Behavior, 111, 449–464. https://doi.org/10.1002/jeab.517.
Killeen, P. R., & Jacobs, K. W. (2017). Coal is not black, snow is not white, food is not a reinforcer: The roles of affordances and dispositions in the analysis of behavior. The Behavior Analyst, 40, 17–38. https://doi.org/10.1007/s40614-016-0080-7.
Krägeloh, C. U., & Davison, M. (2003). Concurrent-schedule performance in transition: Changeover delays and signaled reinforcer ratios. Journal of the Experimental Analysis of Behavior, 79, 87–109. https://doi.org/10.1901/jeab.2003.79-87.
Krägeloh, C. U., Davison, M., & Elliffe, D. M. (2005). Local preference in concurrent schedules: The effects of reinforcer sequences. Journal of the Experimental Analysis of Behavior, 84(1), 37–64.
Lazareva, O. F. (2012). Relational learning in a context of transposition: A review. Journal of the Experimental Analysis of Behavior, 97, 231–248. https://doi.org/10.1901/jeab.2012.97-231.
Lazareva, O. F., Young, M. E., & Wasserman, E. A. (2014). A three-component model of relational responding in the transposition paradigm. Journal of Experimental Psychology: Animal Learning & Cognition, 40, 63–80. https://doi.org/10.1037/xan0000004.
Leeper, R. (1935). The role of motivation in learning: A study of the phenomenon of differential motivational control of the utilization of habits. The Pedagogical Seminary & Journal of Genetic Psychology, 46, 3–40. https://doi.org/10.1080/08856559.1935.10533143.
Machado, A., & Rodrigues, P. (2007). The differentiation of response numerosities in the pigeon. Journal of the Experimental Analysis of Behavior, 88, 153–178. https://doi.org/10.1901/jeab.2007.41-06.
McCarthy, D., Corban, R., Legg, S., & Faris, J. (1995). Effects of mild hypoxia on perceptual-motor performance: A signal-detection approach. Ergonomics, 38, 1779–1792. https://doi.org/10.1080/00140139508925245.
Miranda-Dukoski, L., Bensemann, J., & Podlesnik, C. A. (2016). Training reinforcement rates, resistance to extinction, and the role of context in reinstatement. Learning & Behavior, 44, 29–48. https://doi.org/10.3758/s13420-015-0188-8.
Miranda-Dukoski, L., Davison, M., & Elliffe, D. (2014). Choice, time and food: Continuous cyclical changes in food probability between reinforcers. Journal of the Experimental Analysis of Behavior, 101(3), 406–421. https://doi.org/10.1002/jeab.79.
Nevin, J. A., & Grace, R. C. (2000). Behavioral momentum and the law of effect. Behavioral & Brain Sciences, 23, 73–90. https://doi.org/10.1017/S0140525X00002405.
Nevin, J. A., Mandell, C., & Atak, J. R. (1983). The analysis of behavioral momentum. Journal of the Experimental Analysis of Behavior, 39, 49–59. https://doi.org/10.1901/jeab.1983.39-49.
Pfeiffer, B. E., & Foster, D. J. (2013). Hippocampal place-cell sequences depict future paths to remembered goals. Nature, 497, 74–79. https://doi.org/10.1038/nature12112.
Podlesnik, C. A., & Miranda-Dukoski, L. (2015). Stimulus generalization and operant context renewal. Behavioural Processes, 119, 93–98. https://doi.org/10.1016/j.beproc.2015.07.015.
Rayburn-Reeves, R. M., Molet, M., & Zentall, T. R. (2011). Simultaneous discrimination reversal learning in pigeons and humans: Anticipatory and perseverative errors. Learning & Behavior, 39, 125–137. https://doi.org/10.3758/s13420-010-0011-5.
Reid, R. L. (1958). The role of the reinforcer as a stimulus. British Journal of Psychology, 49, 202–209. https://doi.org/10.1111/j.2044-8295.1958.tb00658.x.
Shahan, T. A. (2010). Conditioned reinforcement and response strength. Journal of the Experimental Analysis of Behavior, 93, 269–289. https://doi.org/10.1901/jeab.2010.93-269.
Shahan, T. A. (2017). Moving beyond reinforcement and response strength. The Behavior Analyst, 40, 107–121. https://doi.org/10.1007/s40614-017-0092-y.
Shahidi, N., Schrater, P., Wright, A., Pitkow, X., & Dragoi, V. (2019). Population coding of strategic variables during foraging in freely-moving macaques. bioRxiv, 811992. https://doi.org/10.1101/811992.
Sharp, R. A., Williams, E., Rörnes, R., Lau, C. Y., & Lamers, C. (2019). Lounge layout to facilitate communication and engagement in people with dementia. Behavior Analysis in Practice, 12, 637–642. https://doi.org/10.1007/s40617-018-00323-4.
Skinner, B. F. (1938). The behavior of organisms: An experimental analysis. New York, NY: Appleton-Century-Crofts.
Spence, K. W. (1937). The differential response in animals to stimuli varying within a single dimension. Psychological Review, 44, 430–444. https://doi.org/10.1037/h0062885.
Stubbs, D. A. (1980). Temporal discrimination and a free-operant psychophysical procedure. Journal of the Experimental Analysis of Behavior, 33, 167–185. https://doi.org/10.1901/jeab.1980.33-167.
Tan, L., Grace, R. C., Holland, S., & McLean, A. P. (2007). Numerical reproduction in pigeons. Journal of Experimental Psychology: Animal Behavior Processes, 33, 409–427. https://doi.org/10.1037/0097-7403.33.4.409.
Trask, S., Schepers, S. T., & Bouton, M. E. (2015). Context change explains resurgence after the extinction of operant behavior. Revista Mexicana de Análisis de la Conducta/Mexican Journal of Behavior Analysis, 41, 187–210.
Ward, R. D., & Odum, A. L. (2006). Effects of prefeeding, intercomponent-interval food, and extinction on temporal discrimination and pacemaker rate. Behavioural Processes, 71, 297–306. https://doi.org/10.1016/j.beproc.2005.11.016.
Wearden, J. H., & Lejeune, H. (2008). Scalar properties in human timing: Conformity and violations. Quarterly Journal of Experimental Psychology, 61, 569–587. https://doi.org/10.1080/17470210701282576.
Zentall, T. R., Singer, R. A., & Stagner, J. P. (2008). Episodic-like memory: Pigeons can report location pecked when unexpectedly asked. Behavioural Processes, 79, 93–98. https://doi.org/10.1016/j.beproc.2008.05.003.
The present article is intended to illustrate a conceptual approach to understanding why the environment exerts imperfect control over behavior. For these purposes, we adopt the equations used by Cowie and Davison (2020) to model the generalization of reinforcers across time and location, shown in Fig. 1. To model temporal generalization, reinforcers in each time bin were redistributed across surrounding time bins according to a Gaussian function whose standard deviation (st) depends on the time t since a marker event (Panels C and D in Fig. 1):
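The equation did not survive reproduction here; a sketch of the form implied by the parameter definitions that follow (our reconstruction of the logistic function, not a verbatim copy of Cowie and Davison's Equation 1) is:

```latex
\[
s_t = s_0 + \frac{a}{1 + e^{-\beta\,(t - X_0)}} \tag{1}
\]
```

Under this form, st rises from a floor of s0 toward an asymptote of s0 + a, reaching the halfway point at t = X0 with slope governed by β, consistent with the parameter descriptions in the text.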
In Equation 1, the parameter a is the size of the increase in generalization between its lowest level (s0) and its highest level (i.e., the asymptote). X0 is the time (x-value) at which st is halfway between its asymptotically low and high values, and β is the slope of the function around this point (i.e., the speed with which generalization increases).
Because the two response locations in the procedure were discrete, we modeled generalization across location by shifting a proportion (mt) of the reinforcers obtained at each time to the other alternative. The proportion of reinforcers generalized to the other location at time t (Panels E and F in Fig. 1) was calculated as:
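The equation is not reproduced here; because the text states that Equation 2 uses the same parameters as Equation 1 applied to location rather than time, a plausible sketch (our reconstruction, not Cowie and Davison's exact expression) is the same logistic form:

```latex
\[
m_t = m_0 + \frac{a}{1 + e^{-\beta\,(t - X_0)}} \tag{2}
\]
```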
The parameters in Equation 2 are the same as in Equation 1, but apply to generalization across location (m) rather than time (s). As Cowie and Davison (2020) did, we used the same X0 parameter for both temporal (s) and spatial (m) generalization.
The discriminated reinforcers (R’) in Panels E and F of Fig. 1 are thus derived from the obtained reinforcers using the equation:
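The expression itself is not reproduced here. One plausible sketch, assuming that each reinforcer obtained at location x and time τ is spread across time bins by a normalized Gaussian with standard deviation sτ and shifted to the other location (x̄) in proportion mτ (our reconstruction; the exact form appears in Cowie and Davison, 2020), is:

```latex
\[
R'_{x,t} = \sum_{\tau = 0}^{t_{\max}}
  \mathcal{N}(t;\, \tau,\, s_\tau)\,
  \bigl[(1 - m_\tau)\, R_{x,\tau} + m_\tau\, R_{\bar{x},\tau}\bigr] \tag{3}
\]
```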
In this instance, the parameters are the same as in Equations 1 and 2, and tmax is the maximum time since a marker event, dictated by the procedure itself. In the example in the present article, we displayed the effects of the two generalization processes sequentially to illustrate their separate effects on the discriminated structure of the environment. As Equation 3 shows, both processes are in fact applied simultaneously when fitting the quantitative model to the data.
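The combined process can be sketched in code. The logistic forms for st and mt, and all function names below, are our assumptions based on the parameter descriptions above; this is an illustrative sketch, not code from Cowie and Davison (2020):

```python
import numpy as np

def logistic(t, base, a, beta, x0):
    """Logistic growth from `base` toward `base + a`, halfway at t = x0."""
    return base + a / (1.0 + np.exp(-beta * (t - x0)))

def discriminated_reinforcers(R, s0, a_s, m0, a_m, beta, x0):
    """Derive discriminated reinforcers R' from obtained reinforcers R.

    R is a (2 locations x t_max time bins) array. Each source bin tau is
    spread across time bins by a normalized Gaussian with SD s_tau, and a
    proportion m_tau of its reinforcers is shifted to the other location.
    """
    n_loc, t_max = R.shape
    bins = np.arange(t_max)
    R_prime = np.zeros_like(R, dtype=float)
    for tau in bins:
        s_tau = logistic(tau, s0, a_s, beta, x0)   # temporal spread (Eq. 1)
        m_tau = logistic(tau, m0, a_m, beta, x0)   # spatial spread (Eq. 2)
        # Gaussian redistribution of bin tau over all bins, normalized to 1
        w = np.exp(-0.5 * ((bins - tau) / s_tau) ** 2)
        w /= w.sum()
        for loc in range(n_loc):
            other = 1 - loc
            stayed = (1.0 - m_tau) * R[loc, tau]   # kept at this location
            moved = m_tau * R[other, tau]          # arriving from the other
            R_prime[loc] += (stayed + moved) * w
    return R_prime
```

Because both the Gaussian weights and the location proportions redistribute rather than create reinforcers, the total reinforcer count is conserved; only its apparent structure in time and space is blurred, which is the sense in which the discriminated environment differs from the obtained one.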
Cowie, S., Davison, M. Generalizing from the Past, Choosing the Future. Perspect Behav Sci 43, 245–258 (2020). https://doi.org/10.1007/s40614-020-00257-9
- Stimulus control
- Prospective control