What Is Applied Behavior Analysis?

Applied behavior analysis (ABA) is one of the three branches of the science of behavior analysis, the other two being the experimental analysis of behavior and behaviorism, or the philosophy of behavior (Cooper, Heron, & Heward, 2007). As a science, ABA can be described as a systematic approach to understanding behavior of social interest. ABA is deeply rooted in the influential work of individuals such as Edward Thorndike, John Watson, Ivan Pavlov, and B.F. Skinner, to name a few. In 1968, Baer, Wolf, and Risley outlined some of the defining characteristics that research in ABA should exhibit in their seminal paper “Some Current Dimensions of Applied Behavior Analysis.” While there are many examples of applied behavior analytic research prior to Baer et al. (e.g., Allen, Hart, Buell, Harris, & Wolf, 1964; Ayllon, 1963; Ayllon & Azrin, 1965; Ayllon & Michael, 1959; Etzel & Gewirtz, 1967; Sherman, 1963; Wolf, Risley, & Mees, 1963), its publication, along with the establishment of the Journal of Applied Behavior Analysis, is commonly cited as marking the establishment of the field of ABA.

Baer, Wolf, and Risley (1968, 1987) urged that research in the field of ABA be applied, behavioral, analytic, technological, conceptually systematic, effective, and generalizable. Research is applied in the sense that the subject matter is selected because of its importance to the individual, community, and/or society. ABA research is behavioral in that the subject matter is observable, objectively defined, and measurable. Research demonstrates the analytic dimension when there has been a believable demonstration that the intervention, or independent variable, is solely responsible for changes in the behavior in question, or the dependent variable; this dimension is typically assessed through the research design used in the study. ABA research is technological when the procedures are described completely enough to allow replication. To be conceptually systematic, research in the field of ABA describes interventions and changes in behavior in terms of the relevant principles of behavior analysis. Baer et al. (1968, 1987) considered research effective when the demonstrated effects have practical value and are meaningful to the participants. Generality is demonstrated when the results are lasting and occur across different contexts (e.g., environments, people, times of day, and materials).

An additional important component of ABA, though not included in Baer et al.’s (1968, 1987) description of the dimensions of ABA, is social validity, the importance of which was discussed by Wolf (1978). Judgments of social validity often involve inquiry into three factors: (1) the significance of the goals selected, (2) the appropriateness of the procedures utilized, and (3) the importance of the effects demonstrated (Wolf, 1978). Unlike most measures within behavior analytic work, social validity measures are often subjective (e.g., questionnaires, rating scales, and interviews). Combining social validity measures with objective measures allows researchers and practitioners to evaluate both the effectiveness and the social acceptability of interventions.

As a practice, ABA refers to the application of behavior analytic principles to improve socially important behaviors, for example, the use of shaping to expand the food repertoire of an individual exhibiting food selectivity (e.g., Koegel et al., 2012). In this example, shaping, an empirically evaluated behavioral technique, is employed to improve a presumably socially significant difficulty. While the clinical application of ABA may not require the experimental rigor common to research in ABA, it should still align with the dimensions outlined at the field’s conception. In practice, the principles of ABA have been employed across a wide spectrum of challenges. Some examples include, but are not limited to, the treatment of developmental disabilities, such as autism spectrum disorder (ASD; e.g., Lovaas, 1987; Lovaas, Koegel, Simmons, & Long, 1973), as well as gerontology (e.g., Green, Linsk, & Pinkston, 1986), education (e.g., Hall, Lund, & Jackson, 1968), juvenile delinquency (e.g., Phillips, Phillips, Fixsen, & Wolf, 1971), nonhuman welfare (e.g., Dorey, Rosales-Ruiz, Smith, & Lovelace, 2009), healthcare (e.g., Lichtenstein, 1997), addiction (e.g., Silverman, Roll, & Higgins, 2008), relationships (e.g., Sanders, 1999), and sustainability (e.g., Bekker et al., 2010).

Basic Principles of ABA

As mentioned previously, ABA-based procedures are derived from the principles of the science of behavior analysis to produce socially significant behavior change. Behavior can be defined as:

That portion of an organism’s interaction with its environment that is characterized by detectable displacement in space through time of some part of the organism and that results in a measurable change in at least one aspect of the environment. (Johnston & Pennypacker, 1993, p. 23)

The principles of behavior analysis began their development from early work on respondent and operant conditioning. In respondent conditioning, behavior is elicited through a conditioned or unconditioned stimulus. For example, presenting food, an unconditioned stimulus, elicits salivation, an unconditioned response. If a light is paired with the onset of food, eventually the light alone will elicit salivation. While respondent conditioning has been utilized within ABA-based procedures and should be considered in some contexts, the principles of operant behavior are more common within practice.

Within the operant conditioning paradigm, behavior is changed through manipulating antecedents and consequences (i.e., what comes before and after the behavior in question). Antecedent manipulation involves changes to the stimulus conditions prior to the potential onset of the targeted behavior. Consequent manipulation involves reinforcement and punishment. Reinforcement occurs when a stimulus change, delivered contingent upon a behavior, increases the probability of similar behavior occurring in similar situations in the future. Punishment occurs when a stimulus change, delivered contingent upon a behavior, decreases the probability of similar behavior occurring in similar situations in the future.

What follows are brief descriptions and research examples of some procedures that utilize the principles of ABA to modify behavior. This list is not meant to be exhaustive, but rather a sample of some commonly used procedures within practice and research. Additionally, the research examples for each procedure were selected simply to illustrate the procedure’s use in the professional literature; they are not meant to represent a review of the body of literature as a whole for any given procedure.

ABA-Based Procedures

Discrete Trial Teaching

One of the most common approaches to teaching within a behavior analytic framework is discrete trial teaching (DTT; Lovaas, 1981, 1987). This systematic procedure is commonly used to teach a variety of skills. Each discrete trial consists of three primary components: (1) a discriminative stimulus (e.g., an instruction from the interventionist), (2) a response by the learner, and (3) a consequence (i.e., reinforcement or punishment) provided by the interventionist. An optional, but common, fourth component involves providing a prompt, prior to the learner’s response, that increases the likelihood of the learner responding correctly. Other important components that have been explored within experimental evaluations of DTT include inter-trial intervals, methods of data collection, and establishing operations (EO; Keller & Schoenfeld, 1950; Michael, 1988). Researchers have demonstrated that DTT is an effective approach for teaching a variety of skills such as receptive and expressive labels (e.g., Conallen & Reed, 2016; DiGennaro-Reed, Reed, Baez, & Maguire, 2011), conversation skills (e.g., Ingvarsson & Hollobaugh, 2010), and play and social skills (e.g., Nuzzolo-Gomez, Leonard, Ortiz, Rivera, & Greer, 2002; Shillingsburg, Bowen, & Shapiro, 2014).
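
For readers who find procedural logic easier to follow in code, the sketch below illustrates the discrete-trial structure just described: a discriminative stimulus, an optional prompt, a learner response, and a consequence. It is a minimal, hypothetical illustration; the function names, the example instruction, and the console output are assumptions for demonstration only, not part of any published DTT protocol.

```python
# A minimal, hypothetical sketch of one discrete trial as described above.
# The skill ("touch the ball"), prompt, and consequences are placeholders.

def deliver_reinforcer():
    print("Deliver reinforcer (e.g., praise plus a preferred item).")

def deliver_correction():
    print("Provide corrective feedback or withhold the reinforcer.")

def run_discrete_trial(present_sd, get_response, correct_response, prompt=None):
    """Run one discrete trial: discriminative stimulus -> (optional prompt) ->
    learner response -> consequence. Returns True if the response was correct."""
    present_sd()                      # 1. discriminative stimulus (e.g., an instruction)
    if prompt is not None:
        prompt()                      # optional prompt to increase correct responding
    response = get_response()         # 2. learner response
    if response == correct_response:  # 3. consequence
        deliver_reinforcer()
        return True
    deliver_correction()
    return False

# Hypothetical usage: teaching the receptive label "ball".
run_discrete_trial(
    present_sd=lambda: print('Instruction: "Touch the ball."'),
    get_response=lambda: "touch_ball",   # stands in for the observed learner response
    correct_response="touch_ball",
    prompt=lambda: print("Point prompt toward the ball."),
)
```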

In a recent specific example, Conallen and Reed (2016) used a DTT approach to teach several children (ages 6–9 years), diagnosed with autism, to label the emotions of others. Situational cards depicting scenarios likely to occasion a specific emotion (e.g., a boy at a birthday party) were placed in front of the participants. Each participant was then given a picture of a boy displaying a happy, sad, or angry facial expression and asked to match it to the situational card. Following the match-to-sample condition, the researchers presented each participant with a situational card and asked a question related to that card (e.g., “It is his birthday; how does he feel?”). The participants answered by selecting the picture of the boy displaying an emotion (i.e., happy, sad, or angry). Conallen and Reed found that the procedure was successful at teaching the participants to label emotions within this context. For a more in-depth description of DTT, we refer the reader to Ghezzi (2007), Leaf and McEachin (1999), Lerman, Valentino, and LeBlanc (2016), Smith (2001), and Leaf, Cihon, Leaf, McEachin, and Taubman (2016).

Prompting

To minimize errors, increase correct responding, and increase the rate of reinforcement, prompts are often provided to assist the learner. A prompt is any antecedent behavior the interventionist engages in that alters stimulus conditions to increase the likelihood of the desired response (Green, 2001; Grow & LeBlanc, 2013; MacDuff, Krantz, & McClannahan, 2001; Wolery, Ault, & Doyle, 1992). There are many ways an interventionist can provide a prompt, including, but not limited to, pointing to the correct response (e.g., Soluaga, Leaf, Taubman, McEachin, & Leaf, 2008), physically guiding the learner to the correct response (e.g., Leaf, Sheldon, & Sherman, 2010), reducing the number of choices in the field (e.g., Soluaga et al., 2008), verbally modeling the correct response (e.g., Leaf, Sheldon, & Sherman, 2010), or placing the target stimulus closer to the learner (e.g., Soluaga et al., 2008).

Although researchers have shown that prompting can be effective across multiple populations and behaviors, it may be difficult for clinicians to know when to prompt, when to fade prompts, and which prompts to provide. Thus, researchers have evaluated various prompting systems to help clinicians utilize prompts effectively. One way to provide and fade prompts is to develop a prompting hierarchy. One method, known as least-to-most prompting, starts with the interventionist providing the least amount of assistance and gradually increasing the assistance based on learner responding. A second hierarchical prompting system, known as most-to-least prompting, starts with the most assistive prompt (e.g., full physical guidance); over successive trials or sessions, the interventionist reduces the level of assistance. When using hierarchical prompting systems, professionals typically determine the number of steps in the prompting hierarchy, the types of prompts to be provided, the level of assistance, the criteria to fade or reintroduce prompts, and the types of reinforcers to be utilized for unprompted and prompted responses.
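
The two hierarchical systems can be summarized in a short sketch. The hierarchy levels, the three-trial mastery criterion, and the decision rules below are illustrative assumptions; actual hierarchies are individualized by the professional as described above.

```python
# Illustrative sketch of hierarchical prompting. The four levels and the
# three-trial mastery criterion are hypothetical examples.

PROMPT_HIERARCHY = [            # ordered from least to most assistance
    "independent (no prompt)",
    "gestural prompt (point)",
    "partial physical guidance",
    "full physical guidance",
]

def least_to_most(responded_correctly_at):
    """Within a trial, step up the hierarchy until the learner responds
    correctly. `responded_correctly_at` maps a level to True/False."""
    for level in PROMPT_HIERARCHY:
        if responded_correctly_at.get(level, False):
            return level        # least assistive level that produced a correct response
    return PROMPT_HIERARCHY[-1]

def most_to_least(current_level_index, consecutive_correct, criterion=3):
    """Across sessions, fade from the most assistive level toward independence
    once the learner meets the criterion at the current level."""
    if consecutive_correct >= criterion and current_level_index > 0:
        return current_level_index - 1   # one step toward less assistance
    return current_level_index

# Hypothetical usage
print(least_to_most({"independent (no prompt)": False,
                     "gestural prompt (point)": True}))
print(PROMPT_HIERARCHY[most_to_least(current_level_index=3, consecutive_correct=3)])
```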

A second way to provide and fade prompts is based on manipulating the time until a prompt is provided. One common approach is a prompting system referred to as progressive time delay. During initial teaching with progressive time delay prompting, the interventionist presents a set number of simultaneously prompted trials (i.e., 0 s delay). After the set number of simultaneously prompted trials, the interventionist implements time delay trials in which the delay systematically increases (e.g., from 1 to 2 s) until a terminal time criterion is met. A second time-based system is constant time delay prompting. During initial teaching with constant time delay, the interventionist provides immediately prompted trials (i.e., 0 s delay). After a set number of immediately prompted trials or sessions, the interventionist implements time delay trials (e.g., 5 s delay). In time delay trials, the interventionist provides an instruction to the learner (e.g., “Touch the ball”) followed by a brief delay, typically ranging from 3 to 5 s, for the learner to respond to the instruction.
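
The two time-based systems differ only in how the prompt delay changes across trials, as the short sketch below illustrates. The specific trial counts, the 1 s step, and the 5 s terminal delay are hypothetical values consistent with the ranges mentioned above.

```python
# Illustrative sketch of progressive versus constant time delay schedules.
# Trial counts and delay values are hypothetical examples.

def progressive_time_delay(trial, n_immediate=5, step_s=1.0, terminal_s=5.0):
    """Return the prompt delay (s) for a trial: 0 s for the first `n_immediate`
    trials, then increasing by `step_s` per trial (often done per block of
    trials in practice) up to a terminal delay."""
    if trial <= n_immediate:
        return 0.0
    return min((trial - n_immediate) * step_s, terminal_s)

def constant_time_delay(trial, n_immediate=5, delay_s=5.0):
    """Return 0 s during initial teaching, then a fixed delay thereafter."""
    return 0.0 if trial <= n_immediate else delay_s

# Hypothetical usage: prompt delays across the first ten trials of each system.
print([progressive_time_delay(t) for t in range(1, 11)])  # 0, 0, 0, 0, 0, 1, 2, 3, 4, 5
print([constant_time_delay(t) for t in range(1, 11)])     # 0, 0, 0, 0, 0, 5, 5, 5, 5, 5
```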

There are many other types of prompting systems, including graduated guidance (e.g., MacDuff, Krantz, & McClannahan, 1993), simultaneous prompting (e.g., Leaf et al., 2010), and no-no prompting (e.g., Leaf et al., 2010). The aforementioned prompting systems typically involve strict rules and protocols for interventionists to follow. In contrast, flexible prompt fading (FPF; Soluaga et al., 2008) is a prompting system that does not provide interventionists with strict protocols of when to prompt and when not to prompt but, instead, provides guidelines. The interventionist makes changes based upon in-the-moment assessment of several variables (e.g., current learner responding, affect, responses to previous prompts; Leaf, Cihon, Leaf, et al., 2016; Leaf, Leaf, McEachin, et al., 2016). Within FPF, the interventionist can use any and all prompt types with the goal of keeping the learner averaging 80% correct responding. In doing so, the interventionist should always implement the least assistive prompt whenever possible and fade prompts as quickly as possible. To determine what prompt to provide, the interventionist must factor in many variables, including the learner’s history, recent responding, any undesired behavior, the length of the teaching session, what prompts have typically been successful, and what reinforcers are currently motivating.
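
Because FPF relies on guidelines rather than strict protocols, it cannot be reduced to a fixed algorithm; still, the sketch below illustrates one way the 80% correct-responding target could inform an in-the-moment decision. The ten-trial window and the simple rule are assumptions for illustration, not part of the published FPF guidelines.

```python
# A minimal sketch of using the ~80% correct-responding target described for
# flexible prompt fading. The ten-trial window and the decision rule below are
# illustrative assumptions, not a published protocol.

def lean_toward_prompting(recent_outcomes, target_accuracy=0.80, window=10):
    """Return True if the interventionist should lean toward prompting on the
    next trial. `recent_outcomes` is a list of booleans (True = correct)."""
    recent = recent_outcomes[-window:]
    if not recent:
        return True                      # no history yet: err toward assistance
    accuracy = sum(recent) / len(recent)
    return accuracy < target_accuracy    # below ~80% correct: provide assistance

# Hypothetical usage: the learner was correct on 6 of the last 10 trials.
print(lean_toward_prompting([True, True, False, True, False,
                             True, False, True, True, False]))  # -> True
```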

Researchers have shown that FPF has been successful in teaching receptive and expressive labels (e.g., Soluaga et al., 2008). Soluaga et al. (2008) provided the first evaluation of FPF, in which the researchers compared a time delay prompt to FPF with five individuals diagnosed with ASD. Both time delay and FPF were effective, but FPF was more efficient. Additional studies have shown that FPF was more effective than most-to-least prompting (e.g., Leaf, Leaf, Alcalay, et al., 2016) and error correction (e.g., Leaf et al., 2014).

Incidental Teaching

Incidental teaching is a procedure commonly used to expand language utilizing the principles of behavior analysis. Incidental teaching has been used to teach conversation skills (e.g., Hart & Risley, 1975), play skills (e.g., Wong, Kasari, Freeman, & Paparella, 2007), complex language (e.g., Hart & Risley, 1978), social skills (e.g., McGee, Almeida, Sulzer-Azaroff, & Feldman, 1992), receptive labels (e.g., McGee, Krantz, Mason, & McClannahan, 1983), and early reading skills (e.g., McGee, Krantz, & McClannahan, 1986).

Hart and Risley developed incidental teaching procedures in 1968 while working with children from low-income families to increase the complexity of the children’s language. Hart and Risley (1975) defined incidental teaching as “the interaction between an adult and a single child, which arises naturally in an unstructured situation, which is used by the adult to transmit information or give the child practice in developing a skill” (p. 411). Hart and Risley (1968) found that the incidental teaching method expanded the children’s verbal communication skills and that these gains generalized to other settings.

Incidental teaching consists of four components: (1) environmental arrangement, (2) child initiation, (3) elaboration, and (4) reinforcement. Incidental teaching should take place in the learner’s natural environment, but the environment should be arranged so that the learner needs to initiate and request desired items, activities, and any other materials (McGee et al., 1983). Incidental teaching focuses on the learner’s interests and is dependent on the learner’s initiations. Once the environment has been arranged appropriately, the interventionist should wait for the learner to initiate. The nature of the initiation will vary for each learner; it could be a gesture toward an item or activity, a one-word request, a manual sign, a full sentence, etc. The interventionist may then target an elaboration of the learner’s request. This could be in the form of a question (e.g., “What color paint?”) or a vocal model (e.g., “I want the giraffe”). The form of the elaboration should also be individualized for the learner. The goal is for the learner to imitate the expanded model or provide the expanded response based on the prompt provided by the interventionist. After the learner provides the expanded response, the interventionist should immediately provide the requested item or activity. The requested item or activity should function as a reinforcer and increase the likelihood of the expanded vocal response occurring on future occasions.
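
The four components can be summarized in a brief sketch. The example initiation, elaboration, and requested item are hypothetical and would be individualized for each learner, as noted above.

```python
# Illustrative sketch of one incidental teaching episode, assuming the
# environment has already been arranged so that preferred items require an
# initiation to access. All specifics are hypothetical examples.

def incidental_teaching_episode(initiation, elaboration, learner_reply):
    """Walk through initiation -> elaboration -> expanded response -> reinforcement."""
    print(f"1. Learner initiates: {initiation!r}")
    print(f"2. Interventionist elaborates (question or model): {elaboration!r}")
    if learner_reply == elaboration:
        print("3. Expanded response produced -> 4. deliver the requested item immediately.")
        return True
    print("Expanded response not produced -> re-model or simplify the elaboration.")
    return False

# Hypothetical usage
incidental_teaching_episode(
    initiation="points to the giraffe",
    elaboration="I want the giraffe",
    learner_reply="I want the giraffe",
)
```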

Teaching language through incidental teaching has several potential benefits, including greater generalization compared to other procedures, less prompt dependence, and ease of implementation by a variety of interventionists, including parents, teachers, and caregivers (McGee, Krantz, & McClannahan, 1985, 1986; McGee, Morrier, & Daly, 1999).

Token Economies

A token economy is a type of reinforcement system in which the interventionist provides some form of token (e.g., check marks, points, stickers) contingent upon the learner engaging in a targeted response(s). Once the learner earns enough tokens, she/he exchanges the tokens for a preferred item or activity (e.g., toy, edible, game) which presumably functions as a reinforcer. Since the acquisition of tokens is paired with the delivery of a preferred item or activity, the tokens function as conditioned reinforcers. The token economy bridges the gap to reinforcement: the delivery of a token marks the occurrence of the desired behavior, even though no preferred item or activity is accessed until the learner has acquired a certain number of tokens. The application of token economies has a long history in research and clinical practice within the field of ABA.

Ayllon and Azrin (1965) conducted a seminal study in which they used a token economy to evaluate the effects of extrinsic reinforcement on behavior that was presumed to be intrinsically motivating. The study consisted of six experiments examining the effects of a token economy and other operant procedures on the behavior of adult patients, identified as psychotic, who resided in a state hospital. The researchers implemented a token economy throughout all six experiments in which tokens could be exchanged for privacy, leave from the ward, social interaction with staff, devotional opportunities, recreational opportunities, and commissary items. The dependent variables across the six experiments were selection of and engagement in various jobs inside and outside of the hospital. The contingent application of the token economy system affected both the patients’ choice of jobs and their performance on the job.

Since Ayllon and Azrin’s (1965) seminal study using a token economy, there have been several investigations across multiple populations (e.g., developmental disabilities, Harchik, Sherman, & Sheldon, 1992; juvenile delinquency, Phillips, 1968) and targeted responses (e.g., decreased symptoms of depression, Hersen, Eisler, Alford, & Agras, 1973; increased activity levels for chronic pain patients, Ritchie, 1976) on the implementation of token economies. In one study, Charlop-Christy and Haymes (1998) evaluated two variations of token economies for three individuals diagnosed with autism. One variation used the participants’ perseverations as tokens (e.g., if the perseveration was cars, then small toy cars were used as tokens). The second variation used stars as tokens. The percentage of correct responding during performance tasks was higher when perseverative objects were used as opposed to stars. In a more recent study, Dotson, Richman, Abby, Thompson, and Plotner (2013) evaluated a class-wide token economy paired with the teaching interaction procedure to teach job-related skills to eight adults with various developmental disabilities (e.g., intellectual disability, Down syndrome, and autism). The combination of the two procedures was successful in improving the work-related behavior for all participants.

The research on token economies has helped lead to the procedure’s widespread clinical use. There are some variables worth noting that clinicians should consider when implementing a token economy. First, as with any reinforcement system, the behavior that will be reinforced through the token economy must be determined. Second, the form the tokens will take must be selected (e.g., points, stickers, check marks). Third, the preferred items or activities available for exchange must be chosen (e.g., toys, breaks, social praise, edibles). Fourth, the number of tokens that must be earned before an exchange can occur must be set. Fifth, whether tokens can also be lost (i.e., response cost; described later) must be decided, along with how to fade the token system and, finally, how the token economy will be introduced. The final decision can often be the most important, as properly introducing the token system is essential to its success. Leaf, McEachin, and Taubman (2012) have provided training materials on how to introduce a token economy. Leaf and colleagues’ recommendation is to start by delivering tokens for a simple behavior (e.g., the learner placing his or her hands in the lap) and gradually expanding in complexity. Additionally, Leaf and colleagues recommended starting with the learner initially earning only one token and then expanding to more tokens before an exchange occurs. After these decisions have been made, the clinician can begin to implement the token economy.
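
The decisions listed above can be thought of as the parameters of the system, as in the sketch below. The class, default values, and backup reinforcers are hypothetical examples; the optional token-loss method anticipates the response cost procedure described in the next section.

```python
# A minimal sketch of a token economy parameterized by the clinical decisions
# listed above (exchange threshold, backup reinforcers, optional response
# cost). All specific values are hypothetical examples.

class TokenEconomy:
    def __init__(self, tokens_needed=1, backup_reinforcers=("toy", "break"),
                 allow_response_cost=False):
        self.tokens = 0
        self.tokens_needed = tokens_needed          # tokens required before an exchange
        self.backup_reinforcers = backup_reinforcers
        self.allow_response_cost = allow_response_cost

    def earn(self):
        """Deliver a token contingent on the targeted response."""
        self.tokens += 1

    def lose(self):
        """Remove a token (response cost), if that option was selected."""
        if self.allow_response_cost and self.tokens > 0:
            self.tokens -= 1

    def exchange(self, choice):
        """Trade tokens for a backup reinforcer chosen by the learner."""
        if self.tokens >= self.tokens_needed and choice in self.backup_reinforcers:
            self.tokens -= self.tokens_needed
            return choice
        return None

# Hypothetical usage: begin with a single-token requirement, as recommended above.
economy = TokenEconomy(tokens_needed=1)
economy.earn()
print(economy.exchange("toy"))   # -> 'toy'
```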

Response Cost

Another procedure that can be utilized to reduce the rate of undesired behavior is response cost. Response cost consists of the removal of a reinforcing event contingent upon demonstration of an undesired behavior. This procedure is commonly used within a token economy (described earlier), in which the interventionist removes tokens (e.g., points, stickers); however, response cost can occur in the absence of a token system (e.g., removing certain tangible reinforcers contingent upon the learner engaging in an undesired behavior). Phillips et al. (1971) conducted a seminal study in which the researchers evaluated the effectiveness of a response cost system. In their study, all participants were part of Achievement Place, a community-based treatment facility, and were considered predelinquent youths. All youths participated in a token economy in which participants could earn points for engaging in appropriate behavior and exchange points earned for various reinforcers (e.g., snacks, TV, allowances). Within this study, the researchers showed that a token economy with response cost could increase punctuality for meetings and answering questions correctly about an event that was just observed (e.g., watching the news). Since this study, there have been many evaluations of response cost, including evaluations with typically developing individuals (e.g., Tiano, Forston, McNeil, & Humphreys, 2005) and individuals diagnosed with attention deficit hyperactivity disorder (ADHD; e.g., McGoey & DuPaul, 2000), intellectual disabilities (e.g., Myers, 1975), ASD (e.g., Jowett, Dozier, & Payne, 2016), developmental disabilities (e.g., Piazza, Fisher, & Sherer, 1997), and emotional disturbance (e.g., Sprute & Williams, 1990).

Before a clinician uses response cost, several considerations must be taken into account. First, decide whether response cost will be paired with a systematized reinforcement system such as a token economy. Second, decide which behavior will result in a loss. Third, decide on the cost (e.g., loss of a specific duration of time, loss of three tokens versus one token). This is an important consideration, as the clinician needs to ensure that the cost is high enough to have an effect on the target behavior but not so great that it results in prolonged lapses in appropriate behavior. Finally, decide whether the contingencies will be discussed with the learner before implementation. If the learner has the prerequisite skills, discussing the system with the learner may result in faster behavior change. However, for some learners, discussing the contingencies may not be appropriate and should be avoided.

Differential Reinforcement

Differential reinforcement procedures are commonly used to develop new behavior and to decrease the probability of undesired behavior. Broad definitions of differential reinforcement vary from “reinforcing one response class and withholding reinforcement for another response class” (Cooper et al., 2007, p. 470) to “provide the strongest reinforcers for the best behaviors or performance” (Leaf & McEachin, 1999, p. 34). Differential reinforcement procedures have demonstrated effectiveness across a wide variety of populations and target behaviors. Four common differential reinforcement procedures are differential reinforcement of other behavior (DRO), differential reinforcement of low rates of behavior (DRL), differential reinforcement of incompatible behavior (DRI), and differential reinforcement of alternative behavior (DRA).

DRO

Within DRO, a reinforcing event is delivered contingent on the absence of a specific topography of response (Reynolds, 1960; Weiher & Harman, 1975). The reinforcing event is delivered either when the targeted response has been absent for a specified duration of time or when the targeted response is not occurring at a specified moment in time. There are several distinctions among DRO procedures, based upon how the delivery of the reinforcing event is determined, that are beyond the scope of this chapter (see Cooper et al., 2007, for a detailed description). The effectiveness of DRO procedures, and the variables affecting that effectiveness, have been well documented within the research literature.

For instance, in a recent study, Heffernan and Lyons (2016) examined the effectiveness of a DRO procedure to decrease the frequency of nail biting for a 4-year-old boy diagnosed with ASD. Prior to the onset of intervention, the researchers conducted a functional behavior assessment (FBA) and a preference assessment. Heffernan and Lyons identified several items that might provide sensory feedback similar to nail biting (e.g., containers of dry rice and pasta to run his fingers through) and that could potentially serve as reinforcers. Initially, the preferred items were available following 20 s without nail biting, and the interval was reset each time nail biting occurred. The intervention was successful at decreasing the frequency of nail biting, and, throughout the course of the intervention, the interval was increased to 60 min. For a detailed review of recent applied literature utilizing DRO procedures, we refer the reader to Jessel and Ingvarsson (2016).
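
The resetting-interval logic used in studies such as Heffernan and Lyons (2016) can be sketched briefly, as below. The second-by-second simulation, event times, and output are illustrative assumptions; only the 20 s resetting interval mirrors the description above.

```python
# A minimal sketch of a resetting interval DRO: deliver the reinforcing event
# after a specified period without the target response, and restart the
# interval whenever the target response occurs (or a delivery is made).
# The event times and coarse 1 s simulation are hypothetical.

def dro_delivery_times(target_behavior_times, interval_s, session_length_s):
    """Return the times (s) at which the reinforcing event would be delivered."""
    deliveries = []
    interval_start = 0.0
    behaviors = sorted(target_behavior_times)
    t = 0.0
    while t <= session_length_s:
        occurred = [b for b in behaviors if interval_start < b <= t]
        if occurred:
            interval_start = max(occurred)      # reset the interval on target behavior
        elif t - interval_start >= interval_s:
            deliveries.append(t)                # interval elapsed without the behavior
            interval_start = t                  # start a new interval after delivery
        t += 1.0                                # step through the session second by second
    return deliveries

# Hypothetical usage: 20 s DRO, nail biting at 15 s and 70 s, 120 s session.
print(dro_delivery_times([15, 70], interval_s=20, session_length_s=120))
# -> [35.0, 55.0, 90.0, 110.0]
```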

DRL

Ferster and Skinner (1957) first described DRL as delivering a reinforcing event contingent upon the lapse of a minimum amount of time without the occurrence of the target behavior and then progressively increasing the required time between responses to further reduce the target behavior. Another variation of DRL involves a predetermined criterion level of responding that must not be exceeded during a specified timeframe in order to receive access to a reinforcing event (e.g., no more than three occurrences of a target behavior in 10 min, regardless of the time between responses). Thus, DRL may not completely suppress the targeted response but rather works toward systematically decreasing the target behavior to more appropriate or acceptable levels. Since Ferster and Skinner’s first description, the DRL procedure has been utilized clinically and evaluated empirically within the literature.
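
The criterion-based variation lends itself to a short sketch, shown below. The session criterion, fading step, and session counts are hypothetical values.

```python
# A minimal sketch of the criterion-based DRL variation described above:
# reinforcement is earned only if the target behavior occurred at or below a
# predetermined criterion, and the criterion is gradually lowered. Values are
# hypothetical examples.

def drl_earned(response_count, criterion):
    """Return True if reinforcement is earned for the session."""
    return response_count <= criterion

def fade_criterion(criterion, earned, step=1, floor=0):
    """Lower the criterion after a successful session to reduce the behavior to
    more acceptable levels without requiring complete suppression."""
    return max(floor, criterion - step) if earned else criterion

# Hypothetical usage: start at no more than three occurrences per 10 min session.
criterion = 3
for count in [2, 3, 5, 1]:          # occurrences across four successive sessions
    earned = drl_earned(count, criterion)
    print(f"count={count}, criterion={criterion}, reinforced={earned}")
    criterion = fade_criterion(criterion, earned)
```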

In one example, Austin and Bevan (2011) used a DRL procedure to reduce the frequency of requests for interventionist attention with three young children in an elementary school classroom in South Wales. Baseline rates were measured for all three participants to determine individual target rates. To begin each session, boxes signifying the number of times the participant could request attention, plus one additional box, were outlined on an index card. For instance, if the targeted rate was three, that participant had four boxes on her index card. Each time the participant requested attention, a box was checked. At the end of each session, the interventionist delivered a reinforcer if the participant had requested interventionist attention less often than the targeted rate (i.e., if all the boxes were not checked). The results of a reversal design showed that the DRL was effective at decreasing the rate of requests for attention for all three participants.

DRI

DRI differs from DRO in that it specifies the response topography upon which the delivery of reinforcement is contingent. Within this procedure, reinforcement is contingent upon the occurrence of a predetermined response topography that is incompatible with, though not necessarily functionally equivalent to, the undesired behavior targeted for decrease. For example, if head hitting with one’s hand is the undesired behavior, keeping hands in one’s lap or pockets could be selected for reinforcement because these responses are incompatible with head hitting. Recent reviews of the empirical literature have shown that DRI procedures are less common among differential reinforcement procedures and that positive treatment effects are commonly observed only when the DRI is paired with other procedures (Chowdhury & Benson, 2011).

For example, Neufeld and Fantuzzo (1987) examined the effectiveness of a DRI procedure to decrease the frequency of self-injurious behavior (SIB) for three adults at a state hospital. The incompatible behavior selected during the intervention was placing rings onto a peg, which was related to the participants’ current habilitative programming and incompatible with the SIB. Reinforcement was delivered at 10 s intervals for engaging in the incompatible task. This DRI procedure was only partially effective, as SIB still occurred at variable rates across all three participants. However, when the DRI procedure was combined with contingent application of a helmet, SIB was reduced to near-zero levels for all three participants.

DRA

DRA is similar to DRI in that it specifies a response upon which reinforcement is contingent. However, unlike DRI, the response selected within DRA is not necessarily incompatible with the undesired behavior. Consider the head hitting example used in the description of DRI above. Alternative responses that are not necessarily incompatible with head hitting may be requesting squeezes to the head, resting a hand on the head, or asking for a break. DRA and DRO are the most commonly used differential reinforcement procedures in the literature (Chowdhury & Benson, 2011).

In one example within the literature, Rehfeldt and Chambers (2003) utilized a DRA procedure to decrease the frequency of perseverative verbal behavior and increase the frequency of appropriate verbal behavior for a 23-year-old male diagnosed with autism and mental retardation. There was no single appropriate verbal response selected for reinforcement; rather, all appropriate verbal responses were candidates for reinforcement. Attention and eye contact (presumed reinforcing events) were delivered contingent upon engaging in appropriate verbal behavior. The results indicated that the DRA procedure was effective at increasing the frequency of appropriate verbal behavior and decreasing the frequency of perseverative verbal behavior.

While differential reinforcement is commonly used for the reduction of the rates of undesired behavior, it is also used to strengthen response classes and is a key component of shaping (described later). For an in-depth description of the differential reinforcement procedures described here, we refer the reader to Cooper et al. (2007), Chowdhury and Benson (2011), and Sulzer-Azaroff and Mayer (1977).

Time-Out from Reinforcement

Time-out from reinforcement is a procedure which is used to decrease the rate of undesired behavior. When implementing time-out, the interventionist removes or delays reinforcement for a certain period of time contingent upon the learner engaging in undesired behavior. For example, if one wants to reduce screaming while playing a video game, one may pause or remove the video game for a brief period of time. It should be noted that time-out from reinforcement does not necessarily mean moving an individual from one area to another, as is commonly done in mainstream society. Instead, time-out refers to temporarily removing access to reinforcement, the specifics of which are dependent on the nature of the reinforcement.

In a seminal study, Bostow and Bailey (1969) evaluated the implementation of a brief time-out procedure to decrease undesired behavior for residents in a large state hospital. A 58-year-old woman who used a wheelchair and engaged in frequent loud vocalizations and swearing participated in the first experiment. The researchers implemented a 2 min time-out procedure plus a DRI (described previously). The time-out procedure consisted of moving the participant to the corner of the room and placing her on the floor. The results of the study showed that the time-out procedure resulted in an immediate change in the participant’s behavior, with loud vocalizations occurring at near-zero rates. The same procedure was used in the second experiment with a 7-year-old boy who engaged in frequent aggressive behavior. The results replicated those from the first experiment, with the rate of aggression decreasing immediately and remaining at near-zero rates. Since this study, there have been numerous investigations of time-out to decrease the severity of aberrant behavior across various populations, including typically developing children (e.g., Miller & Kratochwill, 1979), individuals diagnosed with ASD (e.g., Donaldson & Vollmer, 2011), individuals diagnosed with attention deficit disorder (ADD) and ADHD (e.g., Fabiano et al., 2004), individuals diagnosed with developmental disabilities (e.g., Mace & Heller, 1990), and individuals diagnosed with intellectual disabilities (e.g., Ritschl, Mongrella, & Presbie, 1972).

There are several variables for clinicians to consider before implementing a time-out procedure. First, define what behavior will result in time-out from reinforcement. In considering this, the function of the behavior is important. The clinician must ensure that the learner is not placed in a time-out when the function of the behavior is to escape their present environment, as this would have the opposite effect and would likely reinforce the behavior. Second, and perhaps most importantly, the clinician must ensure that the time-in environment is reinforcing. If the time-in environment is not reinforcing, then the cost for leaving that environment will not result in the desired behavior change. Third, decide the duration of time-out. Research on the amount of time a learner remains in time-out has been mixed with some studies showing that a shorter duration is more effective (e.g., Pendergrass, 1971) and some studies showing a longer duration is more effective (e.g., White, Nielsen, & Johnson, 1972). Fourth, decide the criteria for leaving time-out (e.g., waiting for the learner to refrain from engaging in undesired behavior). Fifth, decide if time-out is to be exclusionary (i.e., the individual removed from all elements of the environment) or non-exclusionary (i.e., only partial elements of the environment removed). It is very important to ensure that all state laws, federal laws, and ethical codes are being followed in making such decisions. Finally, decide what procedures (e.g., differential reinforcement, token economies, prompting) to implement in conjunction with time-out to ensure that the individual learns appropriate replacement behaviors.

Shaping

Shaping is usually described as differentially reinforcing (described previously) successive approximations toward a terminal response or goal (e.g., Cooper et al., 2007; Skinner, 1953). This leads to the common view that shaping is a linear process in which the reinforcement of one approximation leads to another and another until the terminal response is obtained. For instance, when using shaping to address selective eating, it is common to develop a set of steps leading to consumption of a food (e.g., touch, pick up, move toward mouth, touch to lips, hold between teeth, bite down, chew, swallow). However, others have described the shaping process as a method to expand general response classes, which, in turn, provides the shaper with more responses from which to select and the learner with more responses in which to engage (Bernal, 1972; Cihon, 2015). Take the aforementioned approach to food selectivity as an example: a nonlinear shaping approach, such as Bernal (1972), would focus on expanding critical classes of responding (e.g., tolerating, interacting, tasting). Shaping is frequently used within practice and evaluated empirically to develop or expand upon a number of response classes.
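
The linear view of shaping can be sketched as a simple criterion-advance rule over the food-selectivity steps listed above. The three-success advancement criterion is a hypothetical value, and a nonlinear approach such as Bernal (1972) would not reduce to a single ordered list in this way.

```python
# Illustrative sketch of linear shaping: reinforce the current approximation
# and advance one step once the learner meets a criterion. The advancement
# criterion of three consecutive successes is a hypothetical example.

FOOD_STEPS = ["touch", "pick up", "move toward mouth", "touch to lips",
              "hold between teeth", "bite down", "chew", "swallow"]

def next_approximation(current_step, consecutive_successes, criterion=3):
    """Return the index of the approximation currently being reinforced,
    advancing one step after `criterion` consecutive successes."""
    if consecutive_successes >= criterion and current_step < len(FOOD_STEPS) - 1:
        return current_step + 1
    return current_step

# Hypothetical usage: the learner has met the 'touch' criterion three times in a row.
step = next_approximation(current_step=0, consecutive_successes=3)
print(f"Now reinforcing: {FOOD_STEPS[step]!r}")   # -> 'pick up'
```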

In a classic demonstration, Wolf et al. (1963) used shaping to teach a 3-year-old boy to wear glasses. The researchers started by placing empty frames (i.e., without lenses) around the room; if the boy picked up, held, or carried the frames, a reinforcer was delivered. Reinforcement was then delivered for bringing the frames closer to his eyes. Once the boy was putting the frames on independently, the prescription lenses were introduced. Reinforcement was gradually faded, and the boy eventually wore his glasses for approximately 12 h each day. Ricciardi, Luiselli, and Camare (2006) provided a more recent demonstration in which shaping was used to increase the frequency of approach responses to electronic animated figures (e.g., a dancing Elmo® doll) with an 8-year-old boy diagnosed with autism. Preferred items were available for maintaining the targeted distance from the animated figures. The criterion distance started at 6 m and was gradually reduced in steps to 1 m from the figures. The criterion distance was decreased upon success with staying within the current criterion distance for 90% of intervals across two consecutive sessions. The results showed that the shaping procedure was successful at increasing approach responses to previously avoided electronic animated figures.

Teaching Interaction Procedure/Behavioral Skills Training

Two procedures that use instruction, modeling, practice, and feedback to teach a wide variety of skills are behavioral skills training (BST; Miltenberger, 2012) and the teaching interaction procedure (TIP; Phillips, Phillips, Fixsen, & Wolf, 1974); however, some components of the two procedures differ. BST begins with the interventionist outlining the components of the targeted skill. The interventionist provides a model during or after this instruction. Following the model, the learners are provided with an opportunity to practice. The interventionist provides feedback during or after the practice. A TIP begins with the interventionist labeling and identifying the skill. Next, the interventionist provides meaningful rationales, followed by breaking the skill down into smaller steps (i.e., a task analysis of the targeted skill). The interventionist then demonstrates the correct and incorrect ways to engage in the targeted skill. Following this demonstration, the learner is provided with opportunities to identify why the demonstration was correct or incorrect. Next, the learner practices the targeted skill while the interventionist provides feedback. This last step continues until the learner meets a specified criterion. The overlap of the components within BST and the TIP often leads to confusion (Leaf et al., 2015). The differences have been discussed at length elsewhere (e.g., Leaf et al., 2015) and will not be discussed here; however, the authors encourage interested readers to consult the corresponding literature.

BST and the TIP have been well documented to teach a wide variety of skills to a wide variety of learners. For instance, Gunby and Rapp (2014) used BST to teach three children (ages 5–6 years) diagnosed with autism to engage in behavior to prevent abduction from strangers. The intervention consisted of (1) a discussion of the safety response and potential lures, (2) video models of potential scenarios and safe responses, and (3) opportunities to practice the safety skills, followed by (4) feedback based on practice opportunities (corrective and reinforcing). The skills were also probed within a high probability instructional sequence for each participant. A multiple baseline across participants showed that BST was effective for teaching abduction prevention skills for all three participants.

In another recent evaluation, Ng, Schulze, Rudrud, and Leaf (2016) examined the effectiveness of a modified TIP for teaching various social skills to four individuals (9–15 years old) diagnosed with ASD. At the time of the study, each participant had an IQ score of less than 75. Targeted social skills included providing help, negotiating, giving a compliment, passing the phone, responding to offers of help, requesting without grabbing, and responding to comments. All teaching sessions occurred in a small group instructional format. The TIP was modified to include demonstrations of the rationales, picture prompts for identifying situations in which to engage in the skills, picture prompts to identify the steps of the skills, and demonstrations of only the correct way to engage in the targeted skill to avoid the potential of imitating undesirable examples. The modified TIP was effective in teaching the targeted skills for all four participants.

Functional Analysis

The analog functional analysis methodology developed by Iwata, Dorsey, Slifer, Bauman, and Richman (1982, 1994) has become the standard approach to assessing and treating aberrant behavior. Iwata et al.’s (1982, 1994) approach first experimentally manipulates, in an analog setting, the antecedents and consequences that may affect the occurrence of aberrant behavior; determines the function maintaining the aberrant behavior; and then proceeds to treatment based upon these results. Once the function of the aberrant behavior is determined, an intervention is developed to teach a replacement behavior for the aberrant behavior(s). Targeted replacement behaviors are commonly functional communicative responses (e.g., Carr & Durand, 1985; Hanley, Jin, Vanselow, & Hanratty, 2014), which are typically taught using differential reinforcement while the aberrant behavior is placed on extinction (Tiger, Hanley, & Bruzek, 2008).

To determine the likely function of aberrant behavior, Iwata et al. (1982, 1994) used four analog conditions that were systematically alternated. Each condition manipulates the antecedent events that precede aberrant behavior and the consequences that follow it. The attention condition assesses whether the aberrant behavior is maintained by social positive reinforcement. In the attention condition, the therapist ignores the individual while typically occupying themselves with another activity (e.g., reading a magazine, cleaning); once the individual exhibits aberrant behavior, the therapist provides attention. In the escape condition, the environment is arranged to assess whether negative reinforcement is the maintaining function. In this condition, task demands are continually presented; if the individual engages in aberrant behavior, the task demand is delayed for a certain period of time. The alone condition is used to determine whether the aberrant behavior is maintained by automatic reinforcement. In the alone condition, neither the therapist nor any other materials are present in the room, and no programmed consequences are provided contingent on aberrant behavior. The play condition serves as a control condition; attention is given noncontingently on a predetermined schedule, no task demands are placed, and free access to toys is available. Another condition commonly used in an analog functional analysis is the tangible condition. Similar to the attention condition, the tangible condition is used to determine whether social positive reinforcement is the controlling contingency. In the tangible condition, a preferred item and/or activity is present in the room with the therapist and is provided to the individual contingent on aberrant behavior (Rooker, Iwata, Harper, Fahmie, & Camp, 2011).
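
The conditions just described can be summarized as a simple lookup table pairing each antecedent arrangement with its programmed consequence, as in the sketch below. The wording paraphrases the text above, the summary statistic is a hypothetical way of inspecting results, and the sketch is not a clinical protocol.

```python
# An illustrative summary of the analog functional analysis conditions
# described above. The interpretation function assumes hypothetical mean
# response rates per condition; it is not a substitute for visual analysis.

ANALOG_CONDITIONS = {
    "attention": {"antecedent": "therapist ignores the individual (e.g., reads a magazine)",
                  "consequence": "attention delivered contingent on aberrant behavior",
                  "tests_for": "social positive reinforcement (attention)"},
    "escape":    {"antecedent": "task demands presented continually",
                  "consequence": "demand delayed contingent on aberrant behavior",
                  "tests_for": "negative reinforcement (escape)"},
    "alone":     {"antecedent": "no therapist or materials present",
                  "consequence": "no programmed consequence",
                  "tests_for": "automatic reinforcement"},
    "play":      {"antecedent": "noncontingent attention, free access to toys, no demands",
                  "consequence": "no programmed consequence",
                  "tests_for": "control condition"},
    "tangible":  {"antecedent": "preferred item or activity visible but withheld",
                  "consequence": "item or activity delivered contingent on aberrant behavior",
                  "tests_for": "social positive reinforcement (tangibles)"},
}

def highest_rate_condition(mean_rates):
    """Return the test condition with the highest mean rate of aberrant
    behavior, excluding the play (control) condition."""
    test_conditions = {k: v for k, v in mean_rates.items() if k != "play"}
    return max(test_conditions, key=test_conditions.get)

# Hypothetical usage
print(highest_rate_condition({"attention": 1.2, "escape": 4.8, "alone": 0.3,
                              "play": 0.2, "tangible": 0.9}))   # -> 'escape'
```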

Since the landmark Iwata et al. (1982, 1994) study, analog functional analyses have become a staple of behavior analytic research. Many different topics of research have stemmed from the initial research on the functional treatment of aberrant behavior, including descriptive assessments (Anderson & Long, 2002; Lerman & Iwata, 1993; Touchette, MacDonald, & Langer, 1985), anecdotal assessments (Smith, Smith, Dracolby, & Pace, 2012; Iwata, DeLeon, & Roscoe, 2013), brief functional assessments (Bloom, Lambert, Dayton, & Samaha, 2013), the interview-informed synthesized contingency analysis (IISCA; Hanley et al., 2014), and functional analyses conducted via telehealth (Wacker et al., 2013).

Functional Communication Training

Functional communication training (FCT) is an intervention in which appropriate communicative behavior is taught as a replacement for aberrant behavior (Cooper et al., 2007). For an FCT intervention to be successful, a functional assessment must first occur to determine the function of the aberrant behavior. After the function is determined, an appropriate communicative response can be taught that serves the same function as the aberrant behavior.

For example, in Carr and Durand’s (1985) hallmark study, four children with developmental disabilities were taught appropriate requests for escape from task demands (negative reinforcement) or for teacher attention (positive reinforcement). Carr and Durand developed several conditions to determine the social function of each participant’s aberrant behavior (i.e., attention or escape from a demand). Once the functions were determined, Carr and Durand identified a communicative response that would serve as a replacement behavior for each participant’s aberrant behavior. To assess the importance of functionally equivalent replacement behavior, the experimenters also taught each participant an irrelevant communicative response that did not result in consequences similar to those maintaining the aberrant behavior. Functionally equivalent communicative responses were taught through verbal prompts and differential reinforcement. The aberrant behavior for each participant decreased once the functional communicative response was taught, whereas the irrelevant communicative responses were not effective in reducing aberrant behavior.

Since Carr and Durand (1985), FCT has been used to reduce a wide variety of aberrant behaviors, including aggression, self-injurious behavior, vocal disruptions, property destruction, elopement, body rocking, pica, and inappropriate sexual behavior (Durand & Carr, 1991; Fisher et al., 1993; Fyffe, Kahng, Fittro, & Russell, 2004; Hagopian, Fisher, Sullivan, Acquisto, & LeBlanc, 1998; Wacker et al., 1990). FCT has also been shown to be effective across a wide range of populations, including adults (Wacker et al., 1990) and children diagnosed with developmental disabilities (Durand & Carr, 1991), children with cerebral palsy (Durand, 1999), children with traumatic brain injury (Fyffe et al., 2004), typically developing children (Hanley, Heal, Tiger, & Ingvarsson, 2007), and children diagnosed with autism (Sigafoos & Meikle, 1996), among others.

When implementing FCT, several variables should be considered. First, the function of the individual’s aberrant behavior should be identified. This could be done through anecdotal assessments, descriptive assessments, or experimental functional analyses. After the function, or hypothesized function, of the aberrant behavior is determined, a functionally equivalent communicative response should be selected. Interventionists should consider response effort, the speed of response acquisition, and whether the response taught will be recognized and reinforced in other environments (Tiger et al., 2008). When teaching the functional communicative response, the initial teaching location, the type of prompting system, how to fade prompts, and how to promote generalization should also be considered, depending on the learner’s skill level (Tiger et al., 2008). Finally, the interventionist should decide whether the aberrant behavior in question will be placed on extinction, whether the aberrant behavior will be reinforced during teaching, or whether punishment will be utilized (Tiger et al., 2008).

Chaining

Chaining is a procedure used to teach new responses by linking a sequence of discrete responses together to form a new behavior (Cooper et al., 2007). In a behavioral chain, each discrete response produces a stimulus change that serves both as a reinforcer for the response that produced it and as a discriminative stimulus for the next response in the chain (Cooper et al., 2007). Chaining procedures have been used to teach shoe tying to individuals with ASD (Rayner, 2011), a sequence of dance moves (Slocum & Tiger, 2011), janitorial skills to individuals with intellectual disabilities (Cuvo, Leaf, & Borakove, 1978), adding with a calculator and accessing a computer program (Werts, Caldwell, & Wolery, 1996), and swallowing liquids (Hagopian, Farrell, & Amari, 1996), among many others.

To teach a behavioral chain, a task analysis of the necessary steps in the chain must first be conducted. A task analysis involves breaking down a complex skill (e.g., shoe tying) into smaller units in sequential order (Cooper et al., 2007). To ensure the task analysis is correct and complete, the interventionist should validate it by observing individuals who are fluent with the task complete it, consulting experts, or performing the skill using the task analysis (Cooper et al., 2007; Sulzer-Azaroff & Mayer, 1977).

Teaching a behavioral chain is typically done through forward chaining or backward chaining. In forward chaining, each response in the behavioral chain is taught sequentially, beginning with the first. For example, if hand washing were taught through forward chaining, the first step taught would be turning the faucet on, then putting hands under the water stream, pumping soap onto the hands, rubbing the hands together, and so on until hand washing is completed. In backward chaining, the instructor completes all of the responses in the behavioral chain except the terminal response, and reinforcement is delivered contingent upon the learner completing this final response. For example, if backward chaining were used to teach shoe tying, the interventionist would complete all the responses in the chain except the last step (i.e., pulling the bow tight). If the learner pulls the bow tight, reinforcement is delivered. The interventionist would then teach the learner the second-to-last step in the behavioral chain (i.e., pulling the loop through), and the learner would then be responsible for completing the last two steps. This process is repeated until the learner completes all the responses in the behavioral chain independently.
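
The difference between the two chaining formats comes down to which unmastered step is taught next, as the sketch below illustrates. The hand-washing task analysis and mastery sets are hypothetical simplifications.

```python
# A minimal sketch contrasting forward and backward chaining over a
# hypothetical hand-washing task analysis.

HAND_WASHING = ["turn faucet on", "wet hands", "pump soap", "rub hands",
                "rinse", "turn faucet off", "dry hands"]

def forward_chaining_target(mastered, task_analysis):
    """Teach the earliest step not yet mastered; the interventionist completes
    the remaining later steps."""
    for step in task_analysis:
        if step not in mastered:
            return step
    return None                       # entire chain mastered

def backward_chaining_target(mastered, task_analysis):
    """Teach the latest step not yet mastered; the interventionist completes
    the earlier steps so the learner finishes the chain and contacts the
    terminal reinforcer."""
    for step in reversed(task_analysis):
        if step not in mastered:
            return step
    return None

# Hypothetical usage
print(forward_chaining_target({"turn faucet on"}, HAND_WASHING))   # -> 'wet hands'
print(backward_chaining_target({"dry hands"}, HAND_WASHING))       # -> 'turn faucet off'
```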

When using chaining procedures in a clinical setting, there are several variables to consider, for instance, the length of the behavioral chain and the complexity of the discrete responses. Depending on the learner’s skill level, longer chains with more complex individual responses may be too difficult for the learner to master (Sulzer-Azaroff & Mayer, 1977). Utilizing responses already in, or close to, the learner’s repertoire may lead to faster acquisition of a behavioral chain.

Conclusion

ABA has come a long way in the past 50 plus years. Our forefathers (e.g., B.F. Skinner, Donald Baer, Montrose Wolf, Todd Risley, James Sherman, Ivar Lovaas, Sid Bijou, Ted Ayllon, and Nate Azrin) and foremothers (e.g., Judith Leblanc, Barbara Etzel, Sandra Harris, Beth Sulzer-Azaroff, Rosalie Rayner, Mary Cover Jones) laid a strong foundation of methodology which can be used to develop desired behavior and decrease undesired behavior. Today the number of professionals going into the field of ABA continues to rise (Carr, Howard, & Martin, 2015), and the procedures based upon these principles are implemented in a wide variety of settings (e.g., home, school, clinic, university, residential, hospital, and community settings). Although many professionals in the field of ABA work with individuals diagnosed with ASD, ABA-based procedures are effective for a wide variety of populations. When the principles of ABA were first explored, they were being implemented with juvenile delinquents (Phillips et al., 1971), typically developing individuals (Hersen et al., 1973), and children with intellectual disabilities (Ayllon & Azrin, 1968).

There is no question that the field of ABA has made tremendous improvements in the lives of many individuals; however, there remain areas in which the field can improve. For instance, with the growing need for well-trained behavior analysts, it is imperative that education and training be thorough, ongoing, and comprehensive (Ellis & Glenn, 1995; Shook, Ala’i-Rosales, & Glenn, 2002). As one can determine from the content of this chapter, ABA and its applications are broad and require sophisticated repertoires. Depending upon the behavior analyst’s clientele, education and training should include the relevant procedures described throughout this chapter in addition to the principles of ABA, in-the-moment assessment, critical thinking, clinical judgment, and problem solving.

ABA is a broad field with broad applications. The procedures described in this chapter are simply an introduction to effective procedures in the field. These and many other procedures based upon the science of ABA continue to make socially significant gains in the lives of individuals around the world. There is no doubt that the field of ABA will continue to make meaningful contributions to society with a strong adherence to the core principles of the science and continued development of meaningful solutions to societal challenges.