Discrete trial training is one of the most researched and effective interventions for teaching new skills to individuals with developmental disabilities (National Autism Center, 2015). While the general paradigm of instruction is very effective, there are many variations in how this treatment is delivered. The different components of discrete trial instruction, such as instructions, prompts, consequences, and mastery criteria, make up the learning arrangement.
In recent years, there has been considerable research on different learning arrangements for children with autism and developmental disabilities. Researchers have investigated variables such as the distribution of trials across sessions (Haq & Kodak, 2015), inter-trial intervals (Majdalany, Wilder, Greif, Mathisen, & Saini, 2014), reinforcer magnitude (Paden & Kodak, 2015), the distributions of trials and reinforcers (Kocher, Howard, & Fienup, 2015), and delays to reinforcement (Majdalany, Wilder, Smeltz, & Lipshultz, 2016).
At ACE Lab, we have been investigating several aspects of learning arrangements. The first is how the distribution of trials and reinforcers within a teaching session affects learner performance. Second, we have begun examining mastery criteria and asking what levels of performance are appropriate to promote the maintenance and generalization of academic responding. Third, we have explored prompting procedures and how one could assess which prompts are likely to be effective.
Which do you prefer: small reinforcers after a few responses, or a large reinforcer after many responses? This is one of the questions that drives our research on response-reinforcer arrangements. Many children with developmental disabilities have skill acquisition programs that deliver reinforcers after a specific number of responses. Our research in this area has found that children prefer some arrangements of responses and reinforcers over others, and that different arrangements produce different effects on performance.
In our research, we discuss two types of arrangement. With continuous arrangements, an individual completes a session's worth of responses and then receives a large reinforcer. For instance, in Fienup, Ahlers, and Pace (2011), the participant completed six 20-problem math worksheets to gain access to 18 consecutive minutes with a preferred activity. With discontinuous arrangements, an individual completes a small portion of the total responses, receives a small portion of the reinforcer, and repeats this cycle several times. In Fienup et al. (2011), the participant completed one 20-problem math worksheet to gain access to 3 minutes with the preferred activity, and this cycle was repeated six times in a session.
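The following is a minimal sketch, not the authors' materials, that restates the two arrangements from the Fienup, Ahlers, and Pace (2011) example. It shows that both arrangements are equated on total work (120 problems) and total reinforcement (18 minutes); only the distribution across work-reinforcer cycles differs.

```python
from dataclasses import dataclass

@dataclass
class Arrangement:
    """One work-reinforcer arrangement, defined by its repeating cycle."""
    name: str
    problems_per_cycle: int   # responses required before each reinforcer
    reinforcer_minutes: int   # minutes of preferred activity per reinforcer
    cycles: int               # number of work-reinforcer cycles in a session

    def session_totals(self):
        # Total responses and total reinforcement for the whole session
        return (self.problems_per_cycle * self.cycles,
                self.reinforcer_minutes * self.cycles)

# Continuous: all the work, then one large reinforcer
continuous = Arrangement("continuous", problems_per_cycle=120, reinforcer_minutes=18, cycles=1)
# Discontinuous: small work-reinforcer cycles repeated six times
discontinuous = Arrangement("discontinuous", problems_per_cycle=20, reinforcer_minutes=3, cycles=6)

for arrangement in (continuous, discontinuous):
    problems, minutes = arrangement.session_totals()
    print(f"{arrangement.name}: {problems} problems, {minutes} min of reinforcement")
# Both lines report 120 problems and 18 minutes; the arrangements differ only in distribution.
```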
Our research has identified some advantages of continuous arrangements. For instance, when the work tasks have been previously mastered, children tend to prefer completing sessions under continuous arrangements (Fienup et al., 2011; Ward-Horner, Pittenger, Pace, & Fienup, 2014; DeLeon et al., 2014; Bukala, Hu, Lee, Ward-Horner, & Fienup, 2015). There are also performance advantages: children tend to respond more quickly or complete sessions in less time under continuous arrangements (DeLeon et al., 2014; Bukala et al., 2015; Kocher, Howard, & Fienup, 2015).
Our more recent work has examined the effects of response-reinforcer arrangements on the acquisition of new skills. Kocher et al. (2015) found that continuous arrangements produced similar or better acquisition in terms of sessions to criterion, and that continuous sessions took less time to complete.
We continue to conduct research in this area and recognize the limitations of our previous work. Most of it has been conducted with older children completing maintenance tasks. These are children with a long history of discrete trial instruction who have learned to tolerate delays to reinforcement, which is likely a necessary prerequisite to preferring the large, delayed reinforcer associated with continuous schedules. Future research will address these limitations and begin to identify the variables that affect preference for different response-reinforcer arrangements (Ward-Horner, Cengher, Ross, & Fienup, 2016).
Mastery Criterion - What does mastery mean?
Educators use a mastery criterion as a signal to terminate the current phase of teaching and move on to a new phase of treatment that may involve teaching new behavior or assessing the maintenance or generalization of behavior. But what is the right mastery criterion, and what does it mean to have mastered a behavior? If you comb through the literature, you will find that there are limited answers.
In the 1970s, some behavior analytic researchers investigated this topic in college classrooms and generally found that higher performance criteria produced higher academic performance (Johnston & O’Neill, 1973; Semb, 1974). In our own study with college students, we evaluated the effects of mastery criterion on deriving equivalence relations and found that higher criteria produced a higher likelihood of deriving novel relations (Fienup & Brodsky, in press).
We have begun researching the effects of mastery criteria with children with autism and developmental disabilities. In this context, many clinicians and researchers use criteria such as 80% correct across 3 sessions or 90% correct across 2 sessions. In her master's thesis (Fuller & Fienup, under review), Jessy Fuller conducted a parametric analysis of mastery criterion levels and evaluated the effects of criterion level on maintenance. She taught children academic skills (e.g., reading sight words) until the child responded with 50%, 80%, or 90% accuracy in one session, terminated training, and then measured maintenance of responding once a week for 1 month. The 90% criterion produced 1-month maintenance at 90% accuracy, whereas the 50% and 80% criteria produced mixed results, some favorable and some unfavorable, but always less favorable than the 90% criterion.
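As an illustration only, and not the procedures used in any of the studies above, a mastery criterion can be expressed as a simple decision rule for ending a teaching phase: stop once session accuracy reaches the criterion level for the required number of consecutive sessions (e.g., 90% correct across 2 sessions). The sketch below assumes accuracy is recorded as a proportion per session.

```python
def mastery_met(session_accuracies, criterion_level=0.9, consecutive_sessions=2):
    """Return True if the most recent sessions all meet the criterion level.

    session_accuracies: list of per-session accuracy proportions, oldest first.
    criterion_level: accuracy required in each session (e.g., 0.9 for 90%).
    consecutive_sessions: how many sessions in a row must meet the level.
    """
    if len(session_accuracies) < consecutive_sessions:
        return False
    recent = session_accuracies[-consecutive_sessions:]
    return all(accuracy >= criterion_level for accuracy in recent)

# Hypothetical accuracy across successive sight-word sessions
sessions = [0.55, 0.70, 0.85, 0.92, 0.95]
print(mastery_met(sessions, criterion_level=0.9, consecutive_sessions=2))  # True
print(mastery_met(sessions, criterion_level=0.9, consecutive_sessions=3))  # False (0.85 falls short)
```

Varying `criterion_level` and `consecutive_sessions` in a rule like this is essentially what a parametric analysis of mastery criteria manipulates.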
Much remains to be understood about mastery criteria, such as their effects on generalization and how the length of a mastery criterion (e.g., across 1, 2, or 3 observations) affects performance.
Fuller, J., & Fienup, D. M. (under review). A parametric analysis of mastery criterion level: Effects on maintenance of academic responding.
Cengher, M., Budd, A., Farrel, N., & Fienup, D. M. (under review). A review of transfer of stimulus control procedures: Implications for selecting effective and efficient prompting strategies.
Fienup, D. M., & Brodsky, J. (in press). Effects of mastery criterion on the emergence of derived equivalence relations. Journal of Applied Behavior Analysis.
Ward-Horner, J. C., Cengher, M., Ross, R. K., & Fienup, D. M. (2016). Arranging work requirements and the distribution of reinforcers: A brief review of preference and performance outcomes. Journal of Applied Behavior Analysis. Advance online publication. doi: 10.1002/jaba.350
Cengher, M., Shamoun, K., Moss, T., Roll, D., Feliciano, G., & Fienup, D. M. (2016). A comparison of the effects of two prompt-fading strategies on skill acquisition in children with autism spectrum disorders. Behavior Analysis in Practice, 9, 115-125. doi: 10.1007/s40617-015-0096-6
Kocher, C. P., Howard, M. R., & Fienup, D. M. (2015). The effects of work-reinforcer schedules on skill acquisition for children with autism. Behavior Modification, 39, 600-621. doi: 10.1177/0145445515583246
Bukala, M., Hu, M. Y., Lee, R., Ward-Horner, J. C., & Fienup, D. M. (2015). The effects of work schedules on performance and preference in students with autism. Journal of Applied Behavior Analysis, 48, 215-220. doi: 10.1002/jaba.188
Ward-Horner, J. C., Pittenger, A., Pace, G., & Fienup, D. M. (2014). Effects of reinforcer magnitude and distribution on preference for work schedules. Journal of Applied Behavior Analysis, 47, 623-627. doi: 10.1002/jaba.133
Fienup, D. M., Ahlers, A. A., & Pace, G. (2011). Preference for fluent v. disfluent work schedules. Journal of Applied Behavior Analysis, 44, 847-858. doi: 10.1901/jaba.2011.44-847