Investigation of Binge Drinking Survey Tool

Introduction

This paper is dedicated to the development of an online four-item tool (see Appendix A) that is meant to evaluate binge drinking (BD) in patients visiting a particular mental health clinic. It includes the definition of the construct, a discussion of the methodology, and a consideration of the steps that need to be taken to ensure the tool’s validity, reliability, and ethical soundness. In addition, implementation issues will be considered. Finally, the tool’s current features will be discussed, and suggestions for improvement will be included. The paper demonstrates the complex questions that need to be answered when developing such a tool.

Definition of the Construct

For the development of an instrument, it is necessary to define the construct and determine the ways in which it is measured. BD refers to the consumption of large amounts of alcohol within a limited timeframe, and it has historically been difficult to define precisely (Hingson, Zha, & White, 2017; Kuntsche, Kuntsche, Thrul, & Gmel, 2017). This difficulty is partially associated with the fact that the reaction to alcohol, including the level and speed of intoxication, may differ between individuals. The more important issue, however, is that people rarely track their drinking in terms of pure ethanol.

Indeed, it would be most convenient to define BD in terms of ethanol, but for a survey instrument, this approach is not very feasible since most people cannot report how much ethanol their drinks contain. As a result, drinks are frequently used as a BD measure, even though this method is not very specific (Hingson et al., 2017; Kuntsche et al., 2017). It is usually assumed that consuming four drinks (for women) or five drinks (for men) within about two hours constitutes BD (Hingson et al., 2017; Kuntsche et al., 2017; Lovatt et al., 2015). This threshold is especially appropriate for the US, where standard drinks are relatively large when compared to, for example, those in the UK.

Indeed, according to Kuntsche et al. (2017), five US drinks may contain up to 70 grams of ethanol, which likely qualifies as binge drinking given the possibility of negative effects. Since this instrument focuses on US participants, this definition appears appropriate. As for the aspects of BD that are typically measured, they include the amount of alcohol (in drinks) and the frequency of binges (Hingson et al., 2017; Kuntsche et al., 2017). The significance of the proposed tool is associated with the prevalence and dangers of alcohol use and abuse, as well as alcohol use disorder (AUD).
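
To illustrate the arithmetic behind this threshold, a minimal sketch is shown below; it assumes the common US convention of roughly 14 grams of pure ethanol per standard drink, which is an approximation rather than a property of any particular beverage.

    # Rough sketch of the drinks-to-ethanol conversion. Assumes the common
    # approximation of about 14 g of pure ethanol per US standard drink; actual
    # drinks vary in size and strength.
    GRAMS_PER_US_STANDARD_DRINK = 14.0

    def ethanol_grams(drinks):
        """Approximate grams of pure ethanol in a given number of US standard drinks."""
        return drinks * GRAMS_PER_US_STANDARD_DRINK

    # Five US drinks: roughly 5 * 14 = 70 g, consistent with the figure cited above.
    print(ethanol_grams(5))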

Significance

Problematic alcohol use is shown to be associated with multiple negative outcomes. For BD, the consequences of intoxication are the most acute problem, but alcohol use has also been linked to important long-lasting effects, for instance, cirrhosis and stroke (Grant et al., 2017; Kuntsche et al., 2017). BD and alcohol use may also be a cause of risky behaviors and injuries. Alcohol-related accidents, especially those associated with driving, are likely to be connected to BD (Grant et al., 2015; Kuntsche et al., 2017). BD and other drinking patterns can also result in an individual’s alienation from their community, friends, and family (Grant et al., 2015).

For younger people, alcohol use, including BD, may lead to poorer academic achievement (Kuntsche et al., 2017). Some evidence indicates that BD also increases the likelihood of contracting a sexually transmitted disease (Kuntsche et al., 2017). The costs associated with such issues are also apparent. In general, high-risk drinking, including BD, endangers a person’s well-being and even life, and it can be a burden for the healthcare system.

Alcohol use has a very high prevalence, which makes this problem even more acute. Based on the US data for 2012-2013, which was gathered as a part of the National Epidemiologic Survey on Alcohol and Related Conditions III (NESARC-III), about 12.6% of Americans had exhibited high-risk drinking habits within the 12 months before the survey was conducted (Grant et al., 2015). The combination of the data from NESARC-III and NESARC-I (a similar survey performed between 2001 and 2002) demonstrates that alcohol use is becoming more widespread (Grant et al., 2017). This tendency is especially pronounced among marginalized populations (for instance, Indigenous people), as well as among populations that had previously been affected to a lesser extent, such as women.

Regarding BD, its prevalence varies rather significantly across geographic areas and individual populations (Kuntsche et al., 2017). The 12-month prevalence for the US was 23% in 2001-2002 and 33% in 2012-2013 (Hingson et al., 2017). This pattern of alcohol use is especially common among young people, but it is present in other demographics as well (Delker, Brown, & Hasin, 2016). BD is also increasing in both quantity and frequency, including among pregnant women (2.3% by 2013) (Delker et al., 2016). NESARC-III data indicate that alcohol abuse is rarely treated and is typically associated with stigma (Grant et al., 2015). Thus, it is clearly a very significant problem for modern-day healthcare professionals.

Finally, it should be pointed out that the study of alcohol use epidemiology reveals important disparities. With the exception of women, minority populations tend to be affected by AUD to a greater extent (Delker et al., 2016; Grant et al., 2015). Such populations include people with mental disorders; for example, AUD is currently evidenced to be comorbid with major depressive and bipolar disorders, as well as other substance use disorders. From this perspective, the investigation of the prevalence of BD, as well as other problematic alcohol use, among the patients of mental health facilities is a relevant topic for a quantitative, survey-based study.

Summary

The importance of investigating BD is defined by several factors. First, it is a drinking pattern that can have very significant immediate and delayed consequences. Second, its prevalence varies considerably across populations. Third, AUD and other types of problematic alcohol use remain undertreated, even though they are particularly likely to affect minorities, including people with mental health problems. Therefore, the investigation of BD within the population of a particular mental health clinic may help its professionals to learn more about its prevalence among their patients, which would be a step toward developing improved services for them. The lack of such an investigation, on the other hand, would mean that the clinic would have to apply generalized prevalence rates to its policy decisions, which may be a problem because of BD’s variable prevalence.

Method

Participants

The proposed tool is meant specifically for the patients of an individual clinic. The clinic (and, therefore, the studied population) is not very big, which is why the sample is not going to be very large either; most likely, about 50 patients will be surveyed. The clinic is located in the US; it serves a rather diverse population of people who have different mental disorders. The most frequent diagnoses include substance use disorders, different types of depressive disorders, and post-traumatic stress disorder. A notable portion of the served population consists of older people.

For the proposed tool, it is assumed that the researcher has a sufficient amount of resources to perform probability sampling. Since the clinic is small, and so is its population of patients, probability sampling can indeed be feasible. This approach is preferable since it is more likely to result in a representative sample (Polit & Beck, 2017). As a simple and cost-effective approach, which is well-suited for a survey, systematic random sampling is planned (Trochim, Donnelly, & Arora, 2016).

The researcher will get access to the patients’ database, determine the sampling interval (based on the total population and the desired sample size), and proceed to select every patient who corresponds to the interval. It is very important to obtain the necessary approval of the site and other review boards to ensure ethical oversight of the project. Patients with cognitive impairments will not be recruited to avoid ethical issues.
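
A minimal sketch of the systematic sampling step is shown below; the patient identifiers, list size, and helper function are hypothetical placeholders for the clinic’s actual (de-identified) database export.

    import random

    # Minimal sketch of systematic random sampling from a patient list; the list,
    # identifiers, and sample size below are hypothetical placeholders.
    def systematic_sample(patients, sample_size, seed=None):
        """Select every k-th patient after a random start, where k = N // n."""
        rng = random.Random(seed)
        interval = max(1, len(patients) // sample_size)   # sampling interval k
        start = rng.randrange(interval)                   # random start within the first interval
        return [patients[i] for i in range(start, len(patients), interval)][:sample_size]

    # Example: drawing roughly 50 patients from a de-identified list of 300 IDs.
    patient_ids = [f"P{i:03d}" for i in range(300)]
    sample = systematic_sample(patient_ids, sample_size=50, seed=1)
    print(len(sample), sample[:5])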

The selected patients will be contacted (via phone or e-mail) and provided with information about the project. Interested patients will be given detailed information, an informed consent form, and a link to the survey. A phone-based interview option will be available as well for reasons that are explained below. A waiver of consent documentation might be successfully applied for, which is possible due to the minimal risks and the lack of identifying information in the tool (Polit & Beck, 2017). If so, all the consent-related information will be included within the survey form, which means that consenting participants will not have to provide any documentation of their consent. The people who respond to the survey in full will become the final sample.

Instrument and the Procedure of Its Creation

The specifics of the provided instrument are associated with the considerations that were made while preparing it. Before the tool was created, the existing literature on BD and on tool development was studied to define the construct and determine the best ways of measuring it (Polit & Beck, 2017). The goal of this project was to create a short tool, which is why the presented instrument (see Appendix A) has only four items. It was designed to provide information about BD. As a result, it includes questions about the amount of alcohol consumed during binges and the frequency of binges; these are the aspects that are most commonly considered for BD (Hingson et al., 2017; Kuntsche et al., 2017).

In addition, one of the questions directly asks whether a person engages in BD or not. This question was included because it makes it possible to discontinue an online or phone-based survey upon receiving a negative response, as illustrated in the sketch below. This way, the participants who do not binge drink do not have to go through the rest of the questions.
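
The following sketch illustrates this skip logic in a simplified form; in practice, the branching would be configured in the chosen survey platform rather than implemented in custom code, and the item names used here are placeholders.

    # Simplified illustration of the survey's skip logic; in practice, the branching
    # would be configured in the online survey platform, and the item names here
    # are placeholders.
    ALL_ITEMS = ["diagnoses", "binge_screen", "usual_amount", "frequency"]

    def items_to_show(binge_screen_answer):
        """Return the items a respondent should see, given the screening answer."""
        if binge_screen_answer == "No":
            # Non-binge drinkers skip the amount and frequency items.
            return ["diagnoses", "binge_screen"]
        return ALL_ITEMS

    print(items_to_show("No"))    # ['diagnoses', 'binge_screen']
    print(items_to_show("Yes"))   # all four items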

The survey includes different types of questions, which are mostly meant for the participants’ convenience. Thus, the respondents can use multiple-choice and closed-ended questions to quickly select the option that fits them, but when they need to provide specific, individual information, they may use the open-ended question field (Polit & Beck, 2017). Due to its brevity and the use of multiple-choice and closed-ended questions, the survey should not take a lot of time (Mrug, 2010; Pew Research Center, 2019). This decision was made deliberately since retaining respondents may be difficult, especially when a survey discusses a stigmatized topic.

Finally, the tool’s language needs to be considered. It is common practice to use easy-to-read language, including simple sentences and vocabulary, especially when the level of education of participants is not known (Mrug, 2010; Pew Research Center, 2019; Trochim, 2006). Furthermore, it is unreasonable to assume that all the participants are familiar with specialized terms or that they define them in the same way as the researcher (Trochim, 2006).

BD is difficult to define, as is the concept of “one drink,” which may vary from person to person (Hingson et al., 2017; Kuntsche et al., 2017). As a result, the tool includes the definitions of BD and of a drink to ensure that responses are consistent. Thus, the content, format, and language of the tool were chosen to ensure participant retention, participants’ ability to provide correct answers, and the tool’s ability to produce the required information, which is the data on BD and mental health status.

Steps

Reliability and Validity

In this paper, the necessary validity and reliability analyses are presumed to have been performed. Certain validity and reliability methods involve administering the tool to a sufficiently large number of people, and others require the recruitment of experts (Polit & Beck, 2017). It is assumed that these procedures are carried out after the development of the relevant methodology and with all the necessary reviews completed.

Multiple options are available for testing validity and reliability, which is also connected to the fact that there are different types of both. Thus, content validity is commonly determined through the creation of an expert panel that provides feedback regarding the relevance and usefulness of each item, as well as the tool’s ability to measure the studied construct (Polit & Beck, 2017; Shirali, Shekari, & Angali, 2018).

It should be noted that the presented tool did receive some feedback, which assisted in improving its face validity. However, for content validity, the procedure would include a more detailed and structured analysis and more experts. It would use a questionnaire that would enable rating each item and the entire tool. The ratings would be used to calculate the validity index for each item, which is determined by dividing the number of high ratings by the number of raters (Polit & Beck, 2017). The average of the item indices would serve as the content validity index for the whole tool.
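
A minimal sketch of this calculation is provided below; it assumes, for illustration only, a five-member panel rating each item on a four-point relevance scale, with ratings of 3 or 4 counted as high. The ratings themselves are made up.

    # Sketch of the content validity index (CVI) calculation described above.
    # Five hypothetical experts rate each item on a 4-point relevance scale;
    # ratings of 3 or 4 are treated as "high". All ratings below are made up.
    ratings = {                                   # item -> one rating per expert
        "diagnoses": [4, 4, 3, 4, 3],
        "binge_screen": [4, 3, 4, 4, 4],
        "usual_amount": [3, 4, 4, 2, 4],
        "frequency": [4, 4, 4, 3, 4],
    }

    def item_cvi(item_ratings):
        """Item-level CVI: the share of experts giving a high (3 or 4) rating."""
        return sum(r >= 3 for r in item_ratings) / len(item_ratings)

    item_indices = {item: item_cvi(r) for item, r in ratings.items()}
    scale_cvi = sum(item_indices.values()) / len(item_indices)   # average across items
    print(item_indices)
    print(round(scale_cvi, 2))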

For construct validity, it would be helpful to determine convergent validity (the comparison of the tool to other tools that measure BD) and discriminant validity (the ability of the tool to measure BD rather than a similar drinking pattern). The multitrait-multimethod matrix (MTMM) is the common approach to determining these two features (Polit & Beck, 2017), and it could involve using existing BD scales, as well as those aimed at heavy drinking, which is a similar but more prolonged pattern than BD (Grant et al., 2017; Lovatt et al., 2015). Factor analysis would also be performed to determine the ability of the tool to measure the dimensions of BD (frequency and amount), although Polit and Beck (2017) point out that this method is best applied to large tools. From the perspective of reliability, it would be necessary to ensure that the tool is capable of yielding consistent results when re-administered to the same sample (test-retest reliability).
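
For the test-retest step, the following sketch assumes that the ordinal frequency responses from two administrations have been coded numerically and computes a rank correlation between them; the paired scores are hypothetical.

    # Sketch of a test-retest reliability check. The paired values are hypothetical
    # numeric codes of the frequency item from two administrations of the tool.
    from scipy.stats import spearmanr

    first_administration = [0, 2, 5, 1, 3, 4, 2, 0, 6, 3]
    second_administration = [0, 2, 4, 1, 3, 4, 2, 1, 6, 3]

    rho, p_value = spearmanr(first_administration, second_administration)
    print(f"test-retest rho = {rho:.2f}, p = {p_value:.3f}")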

Alternative validity and reliability tests could also be considered, but they might be less suitable. For example, it would be possible to apply the tool to a group of people with AUD and a group without AUD. It would be reasonable to suppose that alcohol use patterns would differ between the two groups, which could serve as evidence in support of construct validity (Hulland, Baumgartner, & Smith, 2018; Polit & Beck, 2017).

However, the MTMM approach, according to Polit and Beck (2017), may be more useful. Similarly, the coefficient alpha approach to measuring internal consistency (a reliability assessment approach) would be difficult to apply to a short tool in which no score summing is performed (Hulland et al., 2018; Polit & Beck, 2017). Thus, the proposed procedures should help in determining the tool’s validity and reliability, although another researcher might consider different steps instead.

Avoiding Bias

There are numerous bias-related threats to surveys that need to be taken into account. Common issues include non-response bias (Hulland et al., 2018; Karlsen et al., 2018; Mrug, 2010). To control it, non-response rates need to be calculated and reported; the removal of respondents from the final sample is not advisable. In addition, the use of the online (contact-less) survey method is likely to increase response rates (Hulland et al., 2018). However, it also excludes people who do not use computers, which is a form of bias (McInroy, 2016). Within the proposed project, people who do not have access to computers may be offered offline survey options if they request to participate in response to the initial contact.

Another issue is social desirability bias; participants might find BD-associated stigma a barrier to reporting it honestly. The use of nonjudgmental language and online anonymity is supposed to assist with this issue. Furthermore, there exists the possibility of participants failing to remember their drinking patterns correctly or having an incorrect impression of them (recall bias) (Polit & Beck, 2017). The inclusion of the measurements and definition of BD should help to mitigate the problem, but it must also be acknowledged in the report.

Several biases have been avoided while preparing the tool and planning for its study. Order effects are not likely to be a problem since there are only four items, which are ordered logically and have different response formats (Pew Research Center, 2019; Polit & Beck, 2017). Acquiescence bias is unlikely to be a problem since there are no assertion-based items or even items that encourage a particular response (Karlsen et al., 2018; Pew Research Center, 2019). Still, the named biases require reporting and consideration during the data interpretation.

Following Ethical Guidelines

The ethics of online research are not too different from those of offline studies. In general, the protection of the rights and confidentiality of participants is the primary goal. Consent needs to be established even if its documentation is waived; the consent forms (or notices) are supposed to be easy to read and understand (with translation options if necessary) (McInroy, 2016). For the described project, the responses will not be traceable to participants, which makes its administration simpler from the perspective of ethics. Still, it is important to check the security of the online platform chosen for the project since some of the existing options might not offer sufficient protections (Halim, Foozy, Rahmi, & Mustapha, 2018).

In the case of the studied tool, however, confidentiality is relatively easy to protect since complete anonymity is a major advantage of online tools (McInroy, 2016). Thus, the advantages of online surveys will be exploited to the benefit of participants, and potential weaknesses will be checked and controlled.

Issues

While administering the survey, a researcher might experience certain difficulties, which may differ for online and offline surveys. Here, the issues that are common for online surveys will be considered since they are most likely to be applicable. First, reaching participants can be an issue, especially for a project that focuses on a sensitive topic. Potential participants may be reluctant from the very beginning, or they might be unwilling to finish a survey they started because they are uncomfortable with the topic. Non-response also introduces bias, which is why addressing this concern is very important. To facilitate participant engagement and retention, the tool was designed with non-judgmental language and simple, short questions with multiple-choice answers.

The use of online tools may help to address non-response as well. Indeed, with online surveys, participants remain completely anonymous and do not have to spend any time or money on getting to a meeting location (McInroy, 2016). However, this approach also implies the exclusion of participants who do not use technology or are not entirely comfortable with it. To overcome this issue, a phone interview will be offered to participants. While this option is more personal than an online survey, it is still rather detached (Tourangeau & Yan, 2007), and participants may feel safer during it than during an in-person meeting. Therefore, the problem of participant engagement and retention is addressed by accommodating participants through different options that ensure their safety and comfort.

The instructions for participants are of the utmost importance. It should be clear to them what they are expected to do and how to use the survey. Preferably, a platform with an easy-to-use interface should be chosen, and such platforms are rather numerous nowadays (Halim et al., 2018). The instructions may be located in the link-containing e-mails, but it is also necessary to repeat them before the items of the survey. If the survey is programmed to show one item at a time, the instructions can be displayed separately before any other item, which will increase the likelihood of participants reading them. Instructions can also be used to encourage participants to be honest.

Technical issues that may arise need to be considered as well. The link to the survey should be functional; it is reasonable to check the survey regularly, monitor any problems with the hosting platform, and ensure the continuation of its subscription (if needed). Options for participants who experience technical issues should be offered as well; if any complaints are filed, the same phone interview option should be available. Monetary concerns may become acute if many participants prefer phone interviews; the budget of the project needs to account for this possibility. Finally, the ethical concerns described above remain important during the administration period; the protection of participants’ rights and confidentiality is an ongoing task. In summary, the issues of administration may be rather diverse, which highlights the significance of considering them before the project’s implementation.

Results

Type of Analysis

As an epidemiology project, the proposed survey could employ the methodologies used by other similar studies. Examples include the articles by Grant et al. (2015), which is based on NESARC-III, and Grant et al. (2017), which also considers NESARC-I. As is relatively common for epidemiology research, both of them employ descriptive statistics (including means and frequencies). They also use cross-tabulation to represent the data. Furthermore, the correlations between demographics and alcohol consumption are determined in these reports with the help of multiple logistic regression.

This approach would fit the described project well. While the primary aim of the tool is to determine prevalence, it includes a demographics question with the general intent to check for correlations. Therefore, it can use cross-tabulations with frequencies for the former goal and regression for the latter one, as sketched below. As shown by reliable literature that reviews survey and correlational designs (Polit & Beck, 2017; Trochim et al., 2016), these types of analyses fit the task.
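
A minimal sketch of such an analysis is given below; the small data frame, column names, and diagnostic labels are made-up placeholders for the de-identified survey responses, and pandas with statsmodels is only one possible toolset.

    # Sketch of the planned analysis: prevalence via cross-tabulation and a logistic
    # regression of binge drinking on the reported diagnosis. The data frame below is
    # a small made-up example standing in for the de-identified survey responses.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.DataFrame({
        "diagnosis": ["depression"] * 4 + ["PTSD"] * 4 + ["SUD"] * 4,
        "binge": [1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0],   # 1 = reports binge drinking
    })

    # Overall prevalence and prevalence within diagnostic groups.
    print(df["binge"].mean())
    print(pd.crosstab(df["diagnosis"], df["binge"], normalize="index"))

    # Logistic regression: does the reported diagnosis predict binge drinking?
    model = smf.logit("binge ~ C(diagnosis)", data=df).fit(disp=False)
    print(model.summary())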

Certain aspects of the example articles are not applicable to the current tool. Thus, NESARC is a US-based project that surveyed the non-institutionalized adult population while oversampling minorities, which led the authors to adjust their findings for this oversampling. For the proposed tool (in its current form), this step is not required. However, if the tool includes a more extensive demographics section in the future, it might be reasonable to perform adjustments based on over- and under-represented populations.

It is also noteworthy that, as a part of the steps aimed at improving the quality of the project, additional data will have to be included in the report. Thus, the description of the project’s sample will involve the summarization of its demographics. Furthermore, the response rate will need to be reported (Hulland et al., 2018; Karlsen et al., 2018). The number of fully completed surveys will constitute the final sample, and it will be compared to the number of links that will have been sent. These steps will be taken to convey the generalizability and applicability of the findings, as well as the potential sources of bias.
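
The response rate calculation itself is straightforward, as the following sketch (with made-up counts) shows.

    # Sketch of the response rate calculation to be reported; the counts are made up.
    links_sent = 50
    completed_surveys = 38

    response_rate = completed_surveys / links_sent
    print(f"response rate: {response_rate:.0%}; non-response: {1 - response_rate:.0%}")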

Expected Results

The existing literature and some additional data can help to predict some of the findings of the project. First, the sample and its possible features can be reviewed. It is likely that some patients will refuse to respond, but it is hoped that well over half of them will participate. To achieve that, the initial contact with the patients will involve stating that the survey is very short and completely anonymous. Hopefully, the rate of non-response will not exceed 30%. Furthermore, it is apparent that all the participants will have a disorder to name; based on the current knowledge of the investigator, most of them will probably report a substance use disorder or depression, although anxiety and post-traumatic stress disorder will also be named.

From the perspective of prevalence, up to 46% of the general population may report engaging in BD (Kuntsche et al., 2017), although more modest figures have also been reported, including 23% in the US in 2001 and 33% in 2013 (Hingson et al., 2017). However, the numbers are likely to be different for the studied population, that is, people with various mental disorders. Alcohol use, in general, is associated with mental health issues, which is especially true for depression, but it may also apply to various anxiety disorders and post-traumatic stress disorder (Grant et al., 2015). AUD is shown to be a factor in BD as well (Hingson et al., 2017). Therefore, it is reasonable to expect higher rates within the studied population, especially if people with AUD end up being overrepresented.

Finally, where correlations are concerned, it is likely that already demonstrated patterns will be found. Grant et al. (2015) report that correlations with alcohol use are strong for depression, as well as other substance use, but for other types of disorders, they may be moderate, weak, or non-existent. However, it should be noted that the project will not have a large sample and will not be able to adjust the findings based on ethnicity, gender, and age (at least, not in its current form). Therefore, it might not be able to replicate the NESARC studies fully. Since its goal is not to replicate those data but to assess BD in a particular population for tailored interventions, this fact is not a problem.

Discussion

Instrument’s Properties

The constructed survey tool consists of four items, including multiple-choice, open-ended, and closed-ended questions. It is dedicated to measuring one variable (BD), and it includes one demographic question (mental health diagnoses), which enables it to collect data for both prevalence and correlation investigation. In other words, the survey can gather the general prevalence data for BD among the patients of the selected clinic, but it can also show prevalence within more specific groups (people with specific disorders) and check for statistically significant correlations between these variables. The tool’s design choices have both negative and positive implications.

First, the limited features of the data provided by the tool need to be mentioned; one of the primary issues is that it includes few demographics questions, which makes it impossible to determine the prevalence of BD across demographic subgroups with this tool. However, these demographics were not a target of the tool, which was also meant to be short, which explains this limitation. Second, online distribution is associated with limited access for some populations.

However, the tool’s topic is relatively sensitive, which is why it is meant to be administered without any direct contact, that is, through online options. As an alternative for interested participants who cannot use computers, phone interviews will also be offered. The brevity of the survey is also a benefit; it makes completion simple and quick, which may improve response rates. In general, the tool’s features and properties reflect a series of trade-off decisions that were meant to make it suitable for the task it is supposed to accomplish.

Revision and Improvement

The tool has already received some feedback, which was mostly positive. The only comment, which was used to revise the tool, required the reconsideration of the terminology of BD and its reflection in the response options. Specifically, BD was initially defined as “over five drinks” in the second question, but the instrument also included the option of five drinks in its third question, which is dedicated to the amount of alcohol consumed by participants. This option was added to the third question because the studied literature includes five drinks in BD (Hingson et al., 2017; Kuntsche et al., 2017). Basically, in this case, the problem was that BD’s definition was phrased incorrectly, and the definition was revised to include five drinks.

In addition to that, the third question was revised as well. Instead of proposing options for the number of drinks, it was transformed into an open-ended question. Aside from avoiding the inconvenience of listing all the possible options, this solution prevents a potentially uncomfortable situation in which a respondent would have to report a number of drinks greater than the largest number in the list. Alcohol problems, especially when they can be related to “excessive” drinking, are stigmatized (Grant et al., 2015). Therefore, it is important to keep the instrument from appearing judgmental in any way.

Regarding future improvements, the current version is very short on purpose; it only includes four items. It is highly advisable to include a demographics section with items for sex, age, and race or ethnicity; as mentioned, BD prevalence depends on these factors (Kuntsche et al., 2017). In addition, the amount of alcohol that constitutes BD differs by sex. As a result, to be able to describe the extent to which respondents exceed the BD threshold, including a sex item would be necessary. For the present tool, however, the goal was limited to providing sufficient data on BD prevalence and its correlation with mental health conditions, which determined the type of demographic information collected.

Finally, while the instrument has already received some feedback, it did not involve the target demographic. As a result, it is possible that some flaws might be found in the readability of the instrument, the accessibility and neutrality of its language, and the convenience of its items (Polit & Beck, 2017). The correction of such flaws would further improve the tool, and from this perspective, future study recommendations should be considered.

Further Study Recommendations

As shown above, certain reliability and validity tests might not provide the most helpful data, but they might still be useful. For example, testing the tool with groups that are likely and not likely to exhibit BD (the known-group technique) could be a way of demonstrating validity. Depending on how the process of tool validation progresses, it may be a viable option. Furthermore, target population feedback is a common tool for instrument improvement (Polit & Beck, 2017). It might not provide data on validity and reliability, but it would be very helpful in determining the potential flaws of the tool, especially as related to the usability and accessibility of the tool. In general, additional research is the primary method of the tool’s improvement.

Conclusion

The presented tool and its analysis illustrate the various considerations that need to be studied while creating such an instrument. The paper was guided by the literature on tool development and examples of similar studies on BD. The topics of construct operationalization, validity, reliability, ethics, and practical concerns were the most prominent ones. The tool is a simple one, but it can be enhanced, especially with suitable validation research and pilot studies.

References

Delker, E., Brown, Q., & Hasin, D. S. (2016). Alcohol consumption in demographic subpopulations: an epidemiologic overview. Alcohol Research: Current Reviews, 38(1), 7-15.

Grant, B. F., Chou, S. P., Saha, T. D., Pickering, R. P., Kerridge, B. T., Ruan, W. J.,… Hasin, D. S. (2017). Prevalence of 12-month alcohol use, high-risk drinking, and DSM-IV Alcohol Use Disorder in the United States, 2001-2002 to 2012-2013. JAMA Psychiatry, 74(9), 911-923. Web.

Grant, B. F., Goldstein, R. B., Saha, T. D., Chou, S. P., Jung, J., Zhang, H.,… Hasin, D. S. (2015). Epidemiology of DSM-5 Alcohol Use Disorder. JAMA Psychiatry, 72(8), 757-766. Web.

Halim, M., Foozy, C., Rahmi, I., & Mustapha, A. (2018). A review of live survey application: SurveyMonkey and SurveyGizmo. JOIV: International Journal on Informatics Visualization, 2(4-2), 309-312. Web.

Hingson, R., Zha, W., & White, A. (2017). Drinking beyond the binge threshold: Predictors, consequences, and changes in the U.S. American Journal of Preventive Medicine, 52(6), 717-727. Web.

Hulland, J., Baumgartner, H., & Smith, K. (2018). Marketing survey research best practices: Evidence and recommendations from a review of JAMS articles. Journal of the Academy of Marketing Science, 46(1), 92-108. Web.

Karlsen, M. C., Lichtenstein, A. H., Economos, C. D., Folta, S. C., Rogers, G., Jacques, P. F.,… McKeown, N. M. (2018). Web-based recruitment and survey methodology to maximize response rates from followers of popular diets: The Adhering to Dietary Approaches for Personal Taste (ADAPT) feasibility survey. Current Developments in Nutrition, 2(5), 1-11. Web.

Kuntsche, E., Kuntsche, S., Thrul, J., & Gmel, G. (2017). Binge drinking: Health impact, prevalence, correlates and interventions. Psychology & Health, 32(8), 976-1017. Web.

Lovatt, M., Eadie, D., Meier, P., Li, J., Bauld, L., Hastings, G., & Holmes, J. (2015). Lay epidemiology and the interpretation of low-risk drinking guidelines by adults in the United Kingdom. Addiction, 110(12), 1912-1919. Web.

McInroy, L. (2016). Pitfalls, potentials, and ethics of online survey research: LGBTQ and other marginalized and hard-to-access youths. Social Work Research, 40(2), 83-94. Web.

Mrug, S. (2010). Survey. In N. J. Salkind (Ed.), Encyclopedia of research design (pp. 1473-1476). Thousand Oaks, CA: Sage Publications.

Pew Research Center. (2019). Questionnaire design. Web.

Polit, D. F., & Beck, C. T. (2017). Nursing research: Generating and assessing evidence for nursing practice (10th ed.). Philadelphia, PA: Lippincott, Williams & Wilkins.

Shirali, G., Shekari, M., & Angali, K. (2018). Assessing reliability and validity of an instrument for measuring resilience safety culture in sociotechnical systems. Safety and Health at Work, 9(3), 296-307. Web.

Tourangeau, R., & Yan, T. (2007). Sensitive questions in surveys. Psychological Bulletin, 133(5), 859-883. Web.

Trochim, W. (2006). Survey research. Web.

Trochim, W., Donnelly, J., & Arora, K. (2016). Research methods. Boston, MA: Cengage Learning.

Appendix A

The Tool

  • Do you have any of the following diagnoses? Check all that apply.
    • Alcohol use disorder.
    • Antisocial personality disorder.
    • An anxiety disorder (the specific diagnosis may be, for example, generalized anxiety disorder).
    • Bipolar disorder.
    • Borderline personality disorder.
    • A depressive disorder (the specific diagnosis may be, for example, major depressive disorder).
    • Obsessive-compulsive disorder.
    • Other substance use disorder(s).
    • Panic disorder.
    • Post-traumatic stress disorder.
    • Other (please specify).
  • Do you binge drink? Check the information below for definitions.
    • Yes.
    • No.
    Usually, binge drinking refers to:
      • If you are a woman: taking 4 or more drinks over a short period of time (usually, two hours).
      • If you are a man: taking 5 or more drinks over a short period of time (usually, two hours).
      • 1 drink = one standard beer glass OR one standard wine glass OR one standard shot of a stronger beverage.
  • If you binge drink, how much do you usually drink at once?
    • Please specify.
  • How often do you binge drink?
    • Never.
    • Once a year.
    • Several times a year.
    • Once a month.
    • A couple of times a month.
    • Once a week.
    • A couple of times a week.
    • Every day.
    • Other (please specify).

Thank you for your time!
