The need to improve the quality of healthcare delivery by embracing efficient and cost-effective healthcare paradigms has grown rapidly over the past decade (Davies et al., 2008). PineBreeze Health Services, a 300-bed public health facility, has recognized this need and is undertaking sweeping changes aimed at improving the quality of its healthcare delivery. The facility, in my view, needs to focus on three critical areas of potential improvement: process improvement to reduce medical errors and inconsistency in the standard of care patients receive; a patient-centered approach to healthcare delivery; and innovative health information technologies to spur effectiveness and efficiency. Without the collection of quality data on these critical areas, however, any efforts directed towards facility-level and system-level improvement may not succeed, particularly with respect to prioritization and evaluation (Massoud et al., n.d.). This paper identifies and describes the data collection tools that can be used to gather performance information on the stated areas of improvement, and evaluates a number of tools that could be used to measure and display the quality improvement (QI) data gathered with them.
Data on financial results, patient satisfaction, internal organizational processes, alignment and commitment of the hospital's healthcare professionals, process improvement capability, and patient feedback is needed to monitor process and performance improvement (Statz, 2005). Subjective data on patients' feelings, perceptions, desires, preferences, beliefs, ideas, and values is needed to monitor progress towards a patient-centered approach to healthcare delivery (Davies et al., 2008). Lastly, data on effectiveness, productivity, efficiency, quality, employee participation, flexibility, planning and goal setting, and information management and accessibility is needed to monitor the performance of innovative health information technologies (Harley, 2003).
A number of tools can be used to collect data on performance improvement. In this context, the Consumer Assessment of Healthcare Providers and Systems (CAHPS) program can be used to develop standardized surveys that in turn collect quantitative and qualitative data on patients' experiences, assess the patient-centeredness of care, and formulate benchmarks for the improvement of care (Davies et al., 2008). According to Kass-Bartelmes and Rutherford (2002), "CAHPS is a family of rigorously tested and standardized questionnaires and reporting formats that can be used to collect and report meaningful and reliable information about the experiences of consumers with a variety of health services" (p. 2). Second, program and agency records can be used to collect important quantitative data related to process improvement. This data collection tool is a good source of information on clinical practices, outcomes, inputs, and workload characteristics (Berg & Franken, 2009). Third, interviews can be used to collect data about innovative health information technologies from stakeholders, including the facility's administrators and health personnel. In simple terms, an interview is a planned effort to collect data by posing structured or unstructured questions (Damberg et al., 2009).
Each of the above data collection tools has its own advantages and disadvantages. Because CAHPS employs a survey framework, it is relatively inexpensive to collect large amounts of data, and the program can effectively describe the characteristics of a large population of healthcare consumers (Davies et al., 2008). These benefits, coupled with the fact that surveys can be administered from remote locations by mail, make CAHPS a credible tool for collecting data on the experiences of the facility's healthcare consumers. However, it may be extremely difficult to deal with issues of 'context', which are substantially important in evaluating consumers' subjective feelings. This method of data collection is also largely inflexible in that it requires an initial study design, and surveys frequently face the challenge of low response rates (Davies et al., 2008).
Using program and agency records to collect data is advantageous in that the data are readily available at low cost, and the procedures for transforming them into viable indicators are familiar to most program personnel (Beschen et al., 2001). This implies that the technique will also reduce the costs of engaging external consultants to collect data. Among the disadvantages, program and agency records rarely contain enough service quality and outcome data to design an adequate set of performance indicators, and their use can raise administratively challenging issues of confidentiality. Moreover, "…modifications to existing record collection processes are often needed to generate useful performance indicators" (Beschen et al., 2001, p. 16). For their part, interviews are effective in gathering small or large amounts of data quickly, guarantee confidentiality, and are effective in collecting personal information and the subjective perceptions of individuals directly impacted by the program (Beschen et al., 2001). However, interviews are not only expensive and time-consuming to conduct, but they may also produce stronger reactive effects among interviewees (e.g., reporting only what is socially desirable) and stronger investigator effects (e.g., data distortion arising from personal biases or poor interviewing skills).
The stated data collection tools are similar in that they complement each other in the data collection process. They can also be employed simultaneously to collect the qualitative and quantitative data needed to evaluate performance improvement in the health facility. The CAHPS surveys and interviews can be administered remotely through media such as the internet or telephone. However, using agency records as a data collection tool differs from the rest in that it relies on in-house documentation and evidence-finding processes (Berg & Franken, 2009).
Bar and pie charts and run and control charts can be effectively used to measure and display the quality improvement data gathered using the CAHPS surveys, program and agency records, and interviews. According to Massoud et al. (n.d.), "bar and pie charts can be used in defining or choosing problems to work on, analyzing problems, verifying causes, or judging results" (p. 67). In the context of this paper, bar charts can be effective in measuring and displaying data on process improvement and patients' perceptions, and in comparing the outcomes of innovative health information technology with those of traditional techniques of healthcare delivery. A key advantage of bar and pie charts is that they are easily understandable because they display data as a picture, and they make it easier to highlight important changes or results (Massoud et al., n.d.). Bar and pie charts also make it easier to display results that compare different groups, and can be used with variable data that have already been grouped. This implies that bar and pie charts will assist the health facility in monitoring indicators related to patient-centeredness and to the clinical practices that lead to process and performance improvement. However, bar and pie charts can only be used with discrete data, and their categories can be reordered to emphasize effects that may not be critical to the projection of performance indicators.
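The grouping step that bar charts require can be sketched in a few lines of code. The sketch below uses hypothetical 0-10 global-rating responses (illustrative numbers only, not CAHPS or facility data) and buckets them into discrete categories before charting, since, as noted above, bar charts work only with grouped discrete data:

```python
from collections import Counter

# Hypothetical 0-10 survey ratings; illustrative only, not real CAHPS data.
ratings = [9, 10, 8, 7, 10, 9, 6, 8, 9, 10, 5, 8]

def group_ratings(ratings):
    """Bucket raw 0-10 ratings into discrete categories for a bar chart."""
    buckets = Counter()
    for r in ratings:
        if r >= 9:
            buckets["9-10 (top box)"] += 1
        elif r >= 7:
            buckets["7-8"] += 1
        else:
            buckets["0-6"] += 1
    return buckets

def text_bar_chart(buckets):
    """Render each category as a row of '#' marks, one per response."""
    return "\n".join(f"{label:<16}{'#' * n}" for label, n in buckets.items())

counts = group_ratings(ratings)
print(text_bar_chart(counts))
```

The category names and the "top box" cut-offs here are assumptions made for illustration; a facility would substitute the response scale of its own survey instrument.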
According to Massoud et al. (n.d.), "run charts give a picture of a variation in some process over time and help detect special (external) causes of that variation" (p. 68). In the context of this paper, therefore, run and control charts may be used to display the performance improvements of adopting health information technologies such as electronic health records (EHRs), computerized physician order entry (CPOE), and data exchange networks. When used in this context, run and control charts will have the obvious advantage of making it easier to understand the trends in adopting health information technologies, and of highlighting non-random variation in the process. The charts are also beneficial for comparing past and present trends and for predicting the future performance of the health information technologies (Massoud et al., n.d.). However, run and control charts are unable to identify a subgroup of variables that may be at greater risk, and they are incapable of identifying and displaying unexpected adverse events.
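The special-cause detection described above rests on a simple calculation: a center line at the series mean and control limits three estimated standard deviations above and below it. The sketch below, assuming hypothetical monthly CPOE adoption rates (illustrative figures, not facility data), uses the common individuals-chart convention of estimating sigma from the average moving range divided by 1.128:

```python
import statistics

# Hypothetical monthly CPOE adoption rates (% of orders entered
# electronically); illustrative figures only, not facility data.
adoption = [62, 64, 63, 65, 66, 64, 67, 66, 65, 80]

def i_chart_limits(series):
    """Individuals-chart limits: center line at the mean, sigma estimated
    from the average moving range (mean MR / 1.128)."""
    center = statistics.mean(series)
    moving_ranges = [abs(b - a) for a, b in zip(series, series[1:])]
    sigma = statistics.mean(moving_ranges) / 1.128
    return center, center + 3 * sigma, center - 3 * sigma

def special_cause_points(series):
    """Indices of points beyond the 3-sigma limits (special-cause signals)."""
    _, ucl, lcl = i_chart_limits(series)
    return [i for i, x in enumerate(series) if x > ucl or x < lcl]

center, ucl, lcl = i_chart_limits(adoption)
print(f"CL={center:.1f}  UCL={ucl:.1f}  LCL={lcl:.1f}")
print("special-cause points:", special_cause_points(adoption))
```

Here the final month's jump to 80% falls above the upper control limit, flagging it as a special-cause signal worth investigating rather than routine month-to-month variation.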
These two tools are similar in that they use diagrammatic representations to display results, implying that it is easier for stakeholders to understand the effects of various performance improvement initiatives. Both can also be used to plot and display improvement initiatives over time. A major difference, though, is that while bar and pie charts allow several categories of data to be plotted, thereby displaying the trends of multiple variables, run and control charts are mainly used to display the trend of a single variable over time (Massoud et al., n.d.). Both tools, however, are helpful for healthcare organizations because they present results to team members, administrators, and other interested parties in a simple and understandable way. More importantly, these tools assist healthcare organizations in developing benchmarks that can be used to improve the quality of healthcare (Berg & Franken, 2009). Lastly, they help healthcare facilities prioritize issues based on the resources available.
Berg, M., Franken, R., & Bal, R. (2009). Quantitative data management in quality improvement collaboratives. BMC Health Services Research, 9(2), 175-185. Retrieved from Academic Search Premier Database.
Beschen, D., Day, R., Jordan, G., & Rohm, H. (2001). The performance-based management handbook, volume 4: Collecting data to assess performance. Web.
Damberg, C.L., Ridgely, M.S., Shaw, R., Meili, R.C., Sorbero, M.E.S., Bradley, L.A., & Farley, D.O. (2009). Adopting information technology to drive improvements in patient safety: Lessons from the agency for healthcare research and quality health information technology grantees. Health Services Research, 44(2), 684-700. Retrieved from Academic Search Premier Database
Davies, E., Shaller, D., Edgman-Levitan, S., Safran, D.G., Oftedahl, G., Sakowski, J., & Cleary, P.D. (2008). Evaluating the use of modified CAHPS survey to support improvements in patient-centered care: Lessons from a quality improvement collaborative. Health Expectations, 11(2), 160-176. Retrieved from Academic Search Premier Database
Harley, D. (2003). Elements of effectiveness for health technology assessment programs. Web.
Kass-Bartelmes, B., & Rutherford, M.K. (2002). AHRQ tools and resources for better health care. Research in Action, Issue 10, No. 03-0008, 1-15. Web.
Massoud, R., Askov, K., Reinke, J., Franco, L.M., Bornstein, T., Knebel, E., & MacAulay, C. (n.d.). A modern paradigm for improving healthcare quality. Center for Human Services. Web.
Statz, J. (2005). Measurement for process improvement. Web.