Many exciting things are happening in the field of applied behavior analysis. The number of Board Certified Behavior Analysts increases steadily each year, which, in turn, increases the overall capacity for clinical services for clients. This continued yearly influx of people into the applied workforce means that a growing proportion of practicing behavior analysts are within their first few years of accumulating clinical experience and mastering the published literature. Without a wealth of experience and knowledge of the evidence base, it can be difficult to make the nuanced and balanced clinical decisions that ensure high-quality client services.
One solution to this problem is the publication of systematic clinical decision-making models that can guide important choices in applied settings. As an example, Geiger, Carr, and LeBlanc (2012) provide a series of questions that can be asked and answered to identify function-based treatments for escape-maintained problem behavior. The answers to these questions help clinicians determine which of the many evidence-backed treatments (e.g., functional communication training [FCT], noncontingent reinforcement [NCR]) are best suited to particular clients and to the environmental resources available for their care. The selection model is accompanied by descriptions of each treatment and citations of the research supporting each intervention’s effectiveness.
In addition, a table lays out the strengths and limitations of each intervention. This type of synthesis of the experimental literature, along with important practical nuggets gleaned from decades of clinical experience (e.g., not every parent is excited to implement extinction!), can provide a useful resource to assist with clinical decision-making.
Perhaps the most important decision a behavior analyst can make is the selection of the right measurement system and data collection procedures. Because data-based decisions are a critical part of our practice, the quality of the data sets the upper limit on the quality of our data-based decisions! Recently, LeBlanc, Raetz, Sellers, and Carr (2016) created a decision model for selecting appropriate measures for problem behavior (shhh . . . don’t tell anyone, but a parallel version for selecting measures for skill acquisition programs could be headed out the door soon!). The answers to a series of questions about the topography of the problem behavior and the resources available for measurement help the clinician choose among various measures. Of course, these suggested models can and should be refined, enhanced, and empirically tested.
The promise of these decision-making models will be most fully realized as they are integrated into electronic tools such as DataFinch/Catalyst, so that a clinician can answer the questions and, with a click, immediately receive the appropriate measurement system for a particular client. This represents the ultimate technology partnership . . . our technology of behavior analysis with the technology of electronic gadgets. Exciting stuff awaits us in the future of applied behavior analysis.
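For readers curious how such a question-and-click tool might work under the hood, the workflow can be sketched as a tiny decision tree. The questions and measure names below are hypothetical placeholders chosen for illustration, not the published LeBlanc, Raetz, Sellers, and Carr (2016) model, and any real tool would be far richer.

```python
# Minimal sketch of a question-driven measurement selector encoded as a
# decision tree. All questions and measure names are hypothetical.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Node:
    """Either an internal yes/no question or a leaf recommendation."""
    question: Optional[str] = None
    yes: Optional["Node"] = None      # branch taken when the answer is True
    no: Optional["Node"] = None       # branch taken when the answer is False
    measure: Optional[str] = None     # set only on leaf nodes


# Hypothetical tree: real models ask about the topography of the behavior
# and the measurement resources available in the setting.
TREE = Node(
    question="Does each instance of the behavior have a clear start and end?",
    yes=Node(
        question="Can staff observe continuously during the session?",
        yes=Node(measure="frequency/event recording"),
        no=Node(measure="partial-interval recording"),
    ),
    no=Node(measure="duration recording"),
)


def select_measure(node: Node, answers: dict) -> str:
    """Walk the tree using the clinician's yes/no answers until a leaf."""
    while node.measure is None:
        node = node.yes if answers[node.question] else node.no
    return node.measure


answers = {
    "Does each instance of the behavior have a clear start and end?": True,
    "Can staff observe continuously during the session?": False,
}
print(select_measure(TREE, answers))  # partial-interval recording
```

Representing the model as data (a tree of questions) rather than hard-coded logic is one plausible design choice: the clinical content can then be refined, or empirically tested against alternatives, without rewriting the software that walks it.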