BST Insights

Quality Services Impact on Client

Written by Abigail Blackman | Jan 24, 2024 7:35:00 PM

Data from organizations that use our software show wide variability and inconsistency in the quality of services being provided to clients. Ultimately, we do this work because we care about those we serve, and we strive to provide the highest quality service possible. Without access to treatment fidelity data and regular analysis of those data, it is unknown whether the quality of services being provided is in fact optimal.

Of the several hundred clients whose data we have analyzed, many are receiving services that are less than optimal, which limits the progress they make. Specifically, for skill acquisition programming, 1 in 5 children are receiving treatment implemented below industry standards.

The above graph depicts data for 10 different skill acquisition procedures or areas of treatment: DTT, verbal behavior, pairing, play, social skills, NET, FCT, instructional control, other, and general.

Thankfully, most of these program types are implemented with high to optimal performance (i.e., dark and light green bars). However, there is some moderate adherence to non-adherence across categories (i.e., yellow, orange, and red bars), so there is still room for improvement.
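The color-coded categories in these graphs can be thought of as bands over a fidelity percentage. A minimal sketch in Python, assuming illustrative thresholds (the actual cut-offs behind the graphs are not stated in this post):

```python
def adherence_band(fidelity_pct):
    """Map a fidelity percentage (0-100) to an adherence band.

    The thresholds below are illustrative assumptions, not the
    actual cut-offs used in the graphs described in the post.
    """
    if fidelity_pct >= 90:
        return "optimal"        # dark green bar
    if fidelity_pct >= 80:
        return "high"           # light green bar
    if fidelity_pct >= 60:
        return "moderate"       # yellow bar
    if fidelity_pct >= 40:
        return "low"            # orange bar
    return "non-adherence"      # red bar
```

With bands like these, each procedure's bar in the graph is just the distribution of its evaluations across the five labels.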

For behavior reduction, 1 in 3 children are receiving treatment implemented below industry standards.

The above graph depicts data for 7 different behavior reduction procedures: a general behavior reduction evaluation, DRA, DRH, DRI, DRL, DRO, and NCR.

The frequency with which behavior reduction program implementation is evaluated is concerning. The data reveal that only one in every five skill acquisition program evaluations also contains an evaluation of behavior reduction program implementation. It is hard to know whether these data are representative of the programs in place, or whether behavior reduction evaluations should be occurring more frequently in practice.

These data reveal a different pattern than the previous graphs: only two evaluation tools (i.e., DRH, DRL) have been implemented with more than 80% optimal adherence. The other five tools show substantial low adherence and non-adherence to protocol implementation.

There are several questions we can ask ourselves as a field to determine how to improve these fidelity scores. First, we need to determine what providers are struggling with. Completing component analyses can help identify which items within the fidelity checklist individual providers, or providers as a whole, are struggling with. Our analysis of behavior reduction component integrity revealed no overarching area of concern; rather, individual providers struggled to implement a variety of steps in practice.
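A component analysis like the one described can be sketched as tallying pass/fail checklist results per step and per provider, so that a low step-level rate points to a field-wide problem while scattered provider-level rates point to individual struggles. The records, step names, and values below are hypothetical, not drawn from the dataset discussed here:

```python
from collections import defaultdict

def component_analysis(records):
    """records: iterable of (provider, step, passed) tuples taken from
    fidelity checklists. Returns pass rates per step and per provider."""
    step_counts = defaultdict(lambda: [0, 0])      # step -> [passes, total]
    provider_counts = defaultdict(lambda: [0, 0])  # provider -> [passes, total]
    for provider, step, passed in records:
        step_counts[step][0] += int(passed)
        step_counts[step][1] += 1
        provider_counts[provider][0] += int(passed)
        provider_counts[provider][1] += 1
    step_rates = {s: p / t for s, (p, t) in step_counts.items()}
    provider_rates = {pr: p / t for pr, (p, t) in provider_counts.items()}
    return step_rates, provider_rates

# Hypothetical checklist data: two providers observed on two steps each
records = [
    ("provider_a", "deliver_reinforcer", True),
    ("provider_a", "withhold_attention", False),
    ("provider_b", "deliver_reinforcer", True),
    ("provider_b", "withhold_attention", True),
]
step_rates, provider_rates = component_analysis(records)
```

Comparing the two views is the point: if every step's rate is similar but individual providers' rates vary, the struggle lies with individuals rather than with any one checklist item, which matches the pattern described above.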

An analysis of the data revealed that many evaluations contained data for only one occurrence of a provider responding to problem behavior. A single occurrence may not provide an accurate depiction of how people implement the program. We would be interested in seeing what these data look like if more were collected on provider implementation, to determine whether they are truly representative of what is happening in practice.

Regardless, we need to act on these concerning data! Providers need to know how to respond to problem behavior, and supervisors need to understand how their providers are responding to it. Thus, we recommend that more data be collected so individual supervisors and organizational leaders can determine how best to support providers in responding to problem behavior.

This is important not only for decreasing relevant client behaviors; responding inappropriately to problem behavior may also put providers and clients at risk of injury.

We can do better, and we are already starting to, because we have access to industry data that supports impactful decisions for organizations and the field.