Applying the Randomized Response Technique in Business Ethics Research: The Misuse of Information Systems Resources in the Workplace

Journal of Business Ethics

Abstract

Mitigating response distortion in answers to sensitive questions is an important issue for business ethics researchers. Surveys related to business ethics may ask sensitive questions, and respondents may answer them dishonestly to avoid exposing sensitive aspects of their character, resulting in response distortion. Previous studies have provided evidence that a surveying procedure called the randomized response technique (RRT) is useful for mitigating such distortion. However, previous studies have mainly applied the RRT to individual dichotomous questions (e.g., yes/no questions) in face-to-face survey settings. In this study, we focus on behavioral research examining the relationships between latent variables, which are unobserved variables measured by multiple items on Likert or bipolar scales. To demonstrate how the RRT can be applied to obtain valid answers from respondents completing a self-administered online questionnaire with Likert and bipolar scales, we build a behavioral model of the effect of punishment severity on employees’ attitudes toward misuse of information systems resources in the workplace, which in turn influence misuse behavior. The survey findings meet our expectations: respondents are generally more willing to disclose sensitive data about their attitudes and actual behavior related to misuse when the RRT is implemented. The RRT’s implications for causal modeling and the advantages and challenges of its use in online environments are also discussed.



Acknowledgments

The authors would like to thank the editor-in-chief, Professor Michelle Greenwood, the section editor, Professor Samuel Michael Natale, and two anonymous reviewers for their valuable suggestions and comments.

Author information

Corresponding author

Correspondence to Mike K. P. So.

Appendices

Appendix 1: An Example of the First Version of the RRT (with a 0.25 Probability of Answering the Sensitive Question)

Appendix 2

See Table 3.

Table 3 Literature review of empirical studies on unethical behavior in IS in the workplace

Appendix 3: Sample Formats of the Three Questionnaire Versions in the Current Study

Appendix 4: Technical Notes on the Implementation and Computation Involved in UQD in Our Study

(i) Estimating the mean population response to a sensitive question under UQD

Suppose we are interested in the mean frequency with which employees install untrusted applications for personal purposes at work. This question is sensitive, and respondents may be unwilling to answer it honestly. The unrelated question design (UQD) is intended to reduce this type of bias. Under UQD, the respondent is presented with two questions:

Question X: Installing untrusted applications for personal purposes at work, and

Question Y: Having dinner at home,

so that Questions X and Y are unrelated. The respondent is then asked to generate an outcome from a randomization device to determine which question (X or Y) he/she will answer on a 7-point Likert scale (1 = never … 7 = very many times), without disclosing which question he/she actually answers. In this setting, the interviewer does not know which question the respondent has answered, and hence the response to the sensitive question remains hidden.

Define X and Y as the responses of a respondent to Questions X and Y, respectively, and Z as the observed response obtained from the randomization procedure. Let \( \mu_{X} \), \( \mu_{Y} \) and \( \mu_{Z} \) be the population mean values of X, Y and Z, respectively, and let p be the probability that the respondent will answer Question X. The randomization procedure is illustrated by the tree diagram in Fig. 4, from which the mean response obtained under randomization, \( \mu_{Z} \), can be derived as

$$ \mu_{Z} = p\,\mu_{X} + (1 - p)\,\mu_{Y}. \quad (1) $$

Fig. 4 Tree diagram of the randomization procedure for a single randomized response

The whole sample is divided into two subsamples: Samples 1 and 2. They are assigned different probabilities of answering Question X. The probabilities for Samples 1 and 2 are denoted by \( p_{\left( 1 \right)} \) and \( p_{\left( 2 \right)} \), respectively. Thus, once the sample mean of the randomized response Z within Sample k (k = 1, 2), denoted by \( \bar{Z}_{(k)} \), is available, we can replace p and \( \mu_{Z} \) in Eq. (1) with \( p_{\left( k \right)} \) and \( \bar{Z}_{(k)} \), respectively. Then, the following simultaneous equations can be obtained

$$ \begin{cases} \bar{Z}_{(1)} = p_{(1)}\,\hat{\mu}_{X} + (1 - p_{(1)})\,\hat{\mu}_{Y} \\ \bar{Z}_{(2)} = p_{(2)}\,\hat{\mu}_{X} + (1 - p_{(2)})\,\hat{\mu}_{Y} \end{cases} $$

where \( \hat{\mu}_{X} \) and \( \hat{\mu}_{Y} \), the estimates of \( \mu_{X} \) and \( \mu_{Y} \), are the unknowns in the two equations. Provided that \( p_{(1)} \ne p_{(2)} \), we can solve the two equations to obtain \( \hat{\mu}_{X} = \frac{(1 - p_{(2)})\bar{Z}_{(1)} - (1 - p_{(1)})\bar{Z}_{(2)}}{p_{(1)} - p_{(2)}} \) and \( \hat{\mu}_{Y} = \frac{p_{(1)}\bar{Z}_{(2)} - p_{(2)}\bar{Z}_{(1)}}{p_{(1)} - p_{(2)}} \). Note that the estimation involves only the sample means of the randomized responses, \( \bar{Z}_{(1)} \) and \( \bar{Z}_{(2)} \), so the responses of individual respondents to Question X remain confidential throughout. Take the result of our survey as an example: the mean randomized responses from Samples 1 and 2, \( \bar{Z}_{(1)} \) and \( \bar{Z}_{(2)} \), are 4.200 and 3.733, respectively, and \( p_{(1)} \) and \( p_{(2)} \) are set to 1/3 and 2/3. Using the formula above, the estimated population mean of X, \( \hat{\mu}_{X} \), is equal to 3.266.
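To make the computation concrete, the following minimal R sketch applies the two formulas above to the figures reported in our survey; the function name uqd_mean and its arguments are ours and purely illustrative.

# Estimate the population means of the sensitive question (X) and the
# unrelated question (Y) from the subsamples' mean randomized responses.
uqd_mean <- function(z_bar1, z_bar2, p1, p2) {
  stopifnot(p1 != p2)
  mu_x <- ((1 - p2) * z_bar1 - (1 - p1) * z_bar2) / (p1 - p2)
  mu_y <- (p1 * z_bar2 - p2 * z_bar1) / (p1 - p2)
  c(mu_x = mu_x, mu_y = mu_y)
}

# Survey figures: mean randomized responses 4.200 and 3.733 with
# p_(1) = 1/3 and p_(2) = 2/3; mu_x evaluates to 3.266.
uqd_mean(4.200, 3.733, 1/3, 2/3)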

(ii) Estimating the covariance matrix of responses to both sensitive and direct questions under UQD

In this section, we discuss three cases: how to estimate (a) the variance of a sensitive response, (b) the covariance of two sensitive responses, and (c) the covariance of a sensitive response and a direct response. The full covariance matrix of responses to both sensitive and direct questions can be estimated once estimates of these variance and covariance terms are available.

(a) Variance of a sensitive response

As in part (i), the respondent is presented with two questions:

Question X1: Installing untrusted applications for personal purposes at work, and

Question Y1: Having dinner at home.

Then, he/she is asked to respond to one of them on a 7-point Likert scale (1 = never … 7 = very many times) according to the outcome of a randomization device which, with probability p, assigns the respondent to answer Question X1. Let X1 and Y1 be the responses of the respondent to Questions X1 and Y1, respectively, and let Z1 be the response obtained from the randomization procedure. In addition, let \( \sigma_{X11} \), \( \sigma_{Y11} \) and \( \sigma_{Z11} \) be the variances of X1, Y1, and Z1, respectively. Without loss of generality, assume that the means of X1, Y1, and Z1 are all 0. As such, \( \sigma_{X11} \), \( \sigma_{Y11} \) and \( \sigma_{Z11} \) are equal to the means of the squares of X1, Y1, and Z1 or, in mathematical notation, \( E\left[ {X_{1}^{2} } \right] \), \( E\left[ {Y_{1}^{2} } \right] \) and \( E\left[ {Z_{1}^{2} } \right] \), respectively. Thus, the rationale behind Eq. (1) still applies, and we can simply replace \( \mu_{X} \), \( \mu_{Y} \) and \( \mu_{Z} \) with \( \sigma_{X11} \), \( \sigma_{Y11} \) and \( \sigma_{Z11} \), respectively, to obtain

$$ \sigma_{Z11} = p \sigma_{X11} + \left( {1 - p} \right)\sigma_{Y11} . $$

Following part (i), the whole sample is divided into two subsamples: Samples 1 and 2. They are then assigned different probabilities of responding to Question X1, denoted by \( p_{\left( 1 \right)} \) and \( p_{\left( 2 \right)} \), respectively. Provided that the sample means of \( Z_{1}^{2} \) in both Samples 1 and 2, denoted by \( \bar{Z}_{1\left( 1 \right)}^{2} \) and \( \bar{Z}_{1\left( 2 \right)}^{2} \), respectively, are available, we can replace \( \sigma_{Z11} \) with \( \bar{Z}_{1\left( k \right)}^{2} \) and \( p \) with \( p_{\left( k \right)} \), respectively (k = 1, 2) to obtain

$$ \begin{cases} \bar{Z}_{1(1)}^{2} = p_{(1)}\,\hat{\sigma}_{X11} + (1 - p_{(1)})\,\hat{\sigma}_{Y11} \\ \bar{Z}_{1(2)}^{2} = p_{(2)}\,\hat{\sigma}_{X11} + (1 - p_{(2)})\,\hat{\sigma}_{Y11} \end{cases} $$

where \( \hat{\sigma}_{X11} \) and \( \hat{\sigma}_{Y11} \) are the estimates of \( \sigma_{X11} \) and \( \sigma_{Y11} \), respectively. Solving the above two equations, we obtain \( \hat{\sigma}_{X11} = \frac{(1 - p_{(2)})\bar{Z}_{1(1)}^{2} - (1 - p_{(1)})\bar{Z}_{1(2)}^{2}}{p_{(1)} - p_{(2)}} \) and \( \hat{\sigma}_{Y11} = \frac{p_{(1)}\bar{Z}_{1(2)}^{2} - p_{(2)}\bar{Z}_{1(1)}^{2}}{p_{(1)} - p_{(2)}} \). Again, the whole estimation involves only the sample means of the squared randomized responses, \( \bar{Z}_{1(1)}^{2} \) and \( \bar{Z}_{1(2)}^{2} \). The responses of individual respondents to Question X1 are kept confidential and unknown to interviewers.
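As a minimal sketch, the same two-equation logic can be coded directly; uqd_var below is an illustrative helper of ours that expects the subsample means of the squared, mean-centered randomized responses as inputs.

# Estimate the variance of the sensitive response X1 from the subsample
# means of the squared (mean-centered) randomized responses Z1.
uqd_var <- function(z1sq_bar1, z1sq_bar2, p1, p2) {
  stopifnot(p1 != p2)
  sigma_x11 <- ((1 - p2) * z1sq_bar1 - (1 - p1) * z1sq_bar2) / (p1 - p2)
  sigma_y11 <- (p1 * z1sq_bar2 - p2 * z1sq_bar1) / (p1 - p2)
  c(sigma_x11 = sigma_x11, sigma_y11 = sigma_y11)
}

# With raw randomized responses z1_s1 and z1_s2 from Samples 1 and 2:
# uqd_var(mean((z1_s1 - mean(z1_s1))^2), mean((z1_s2 - mean(z1_s2))^2), 1/3, 2/3)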

(b) Covariance of two sensitive responses

In addition to Questions X1 and Y1 in case (a), we introduce another pair of unrelated sensitive and non-sensitive questions:

Question X2: Using untrusted network (e.g., Internet) for data transmission at work, and

Question Y2: Taking public transportation.

Under UQD, Questions X2 and Y2 are shown simultaneously to the respondent, and he/she is asked to respond to one of them on a 7-point Likert scale (1 = never … 7 = very many times) according to the outcome of a randomization device which, with probability p, assigns the respondent to answer Question X2. The responses to Questions X2 and Y2 are denoted by X2 and Y2, respectively, and the response collected under the randomization procedure is denoted by Z2. In this case, we would like to estimate the covariance of X1 and X2, i.e., the covariance of two sensitive responses.

Define \( \sigma_{X12} \), \( \sigma_{Y12} \), \( \sigma_{X1Y2} \), \( \sigma_{X2Y1} \) and \( \sigma_{Z12} \) as the covariance of X1 and X2, Y1 and Y2, X1 and Y2, X2 and Y1, and Z1 and Z2, respectively. Without loss of generality, assume that the means of X1, Y1, X2, Y2, Z1, and Z2 are all zero. Under this assumption, the covariance of any pair of those variables is equal to the mean of their product. In other words, \( \sigma_{X12} = E[X_{1} X_{2} ] \), \( \sigma_{Y12} = E[Y_{1} Y_{2} ] \), \( \sigma_{X1Y2} = E[X_{1} Y_{2} ] \), \( \sigma_{X2Y1} = E[X_{2} Y_{1} ] \) and \( \sigma_{Z12} = E[Z_{1} Z_{2} ] \). Moreover, as Questions X1 and X2 are unrelated to Questions Y1 and Y2, we can further impose an assumption that \( \sigma_{X1Y2} \) and \( \sigma_{X2Y1} \) are equal to 0. Thus, based on the tree diagram shown in Fig. 5a, we can derive the following equation

$$ \begin{aligned} \sigma_{Z12} &= p^{2}\sigma_{X12} + p(1 - p)\,\sigma_{X1Y2} + (1 - p)p\,\sigma_{X2Y1} + (1 - p)^{2}\sigma_{Y12} \\ &= p^{2}\sigma_{X12} + (1 - p)^{2}\sigma_{Y12}. \end{aligned} $$
Fig. 5 Tree diagram of the randomization procedure for multiple randomized responses

Similar to the process in part (i) and case (a), we divide the whole sample into two subsamples and assign them different probabilities, \( p_{(1)} \) and \( p_{(2)} \), of responding to the sensitive questions. Formulating the following two simultaneous equations, as in the estimation of the variance of a sensitive response, we can then estimate \( \sigma_{X12} \) and \( \sigma_{Y12} \) by \( \hat{\sigma}_{X12} = \frac{(1 - p_{(2)})^{2}\bar{Z}_{12(1)} - (1 - p_{(1)})^{2}\bar{Z}_{12(2)}}{(p_{(1)} + p_{(2)} - 2p_{(1)}p_{(2)})(p_{(1)} - p_{(2)})} \) and \( \hat{\sigma}_{Y12} = \frac{p_{(1)}^{2}\bar{Z}_{12(2)} - p_{(2)}^{2}\bar{Z}_{12(1)}}{(p_{(1)} + p_{(2)} - 2p_{(1)}p_{(2)})(p_{(1)} - p_{(2)})} \), respectively:

$$ \begin{cases} \bar{Z}_{12(1)} = p_{(1)}^{2}\,\hat{\sigma}_{X12} + (1 - p_{(1)})^{2}\,\hat{\sigma}_{Y12} \\ \bar{Z}_{12(2)} = p_{(2)}^{2}\,\hat{\sigma}_{X12} + (1 - p_{(2)})^{2}\,\hat{\sigma}_{Y12} \end{cases} $$

where \( \bar{Z}_{12(1)} \) and \( \bar{Z}_{12(2)} \) are the sample means of Z1Z2 in Samples 1 and 2, respectively. Take the result of our survey as an example. The sample mean products of the mean-centered randomized responses Z1Z2 in Samples 1 and 2, \( \bar{Z}_{12(1)} \) and \( \bar{Z}_{12(2)} \), are equal to 0.315 and 0.640, respectively. Hence, the formula above implies that the estimated covariance of X1 and X2, \( \hat{\sigma}_{X12} \), is equal to 1.346. The responses of individual respondents to Questions X1 and X2 are kept confidential throughout the estimation.
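The following minimal R sketch (our illustrative helper uqd_cov_ss) evaluates the two estimators above; applied to the survey figures, it returns roughly 1.35 for \( \hat{\sigma}_{X12} \), matching the reported 1.346 up to rounding of the inputs.

# Estimate the covariance of two sensitive responses X1 and X2 from the
# subsample means of the mean-centered products Z1 * Z2.
uqd_cov_ss <- function(z12_bar1, z12_bar2, p1, p2) {
  stopifnot(p1 != p2)
  det <- (p1 + p2 - 2 * p1 * p2) * (p1 - p2)
  sigma_x12 <- ((1 - p2)^2 * z12_bar1 - (1 - p1)^2 * z12_bar2) / det
  sigma_y12 <- (p1^2 * z12_bar2 - p2^2 * z12_bar1) / det
  c(sigma_x12 = sigma_x12, sigma_y12 = sigma_y12)
}

# Survey figures: 0.315 and 0.640 with p_(1) = 1/3 and p_(2) = 2/3.
uqd_cov_ss(0.315, 0.640, 1/3, 2/3)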

(c) Covariance of a sensitive response and a direct response

In addition to the pair of sensitive and non-sensitive questions, Questions X1 and Y1, in case (a), we consider an additional direct question

Question D: I would probably be caught eventually, after engaging in IS resource misuse.

The respondent is simply asked to answer the question directly on a 7-point Likert scale (1 = strongly disagree … 7 = strongly agree), without going through the randomization procedure. Let D be the response to Question D. We are interested in the covariance of X1 and D, i.e., the covariance of a sensitive response and a direct response. Recall that Z1 is the randomized response corresponding to Questions X1 and Y1. Let \( \sigma_{X1D} \), \( \sigma_{Y1D} \) and \( \sigma_{Z1D} \) be the covariances of X1 and D, Y1 and D, and Z1 and D, respectively. Without loss of generality, assume that the population means of X1, Y1, Z1, and D are 0. Again, under this assumption, \( \sigma_{X1D} = E[X_{1} D] \), \( \sigma_{Y1D} = E[Y_{1} D] \) and \( \sigma_{Z1D} = E[Z_{1} D] \). Moreover, as Question Y1 and Question D are unrelated, we can further assume that \( \sigma_{Y1D} = 0 \). Based on the tree diagram shown in Fig. 5b, we derive that

$$ \sigma_{Z1D} = p \sigma_{X1D} + \left( {1 - p} \right)\sigma_{Y1D} = p \sigma_{X1D} . $$

Using two samples under UQD and going through the same process as in cases (a) and (b), we can derive that the estimate of \( \sigma_{X1D} \) is given by \( \hat{\sigma}_{X1D} = \frac{\overline{Z_{1}D}_{(1)} + \overline{Z_{1}D}_{(2)}}{p_{(1)} + p_{(2)}} \), where \( \overline{Z_{1}D}_{(1)} \) and \( \overline{Z_{1}D}_{(2)} \) are the sample means of Z1D in Samples 1 and 2, respectively.
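A one-line R sketch of this estimator (our illustrative helper uqd_cov_sd), expecting the subsample means of the mean-centered products Z1D:

# Estimate the covariance of a sensitive response X1 and a direct response D.
uqd_cov_sd <- function(z1d_bar1, z1d_bar2, p1, p2) {
  (z1d_bar1 + z1d_bar2) / (p1 + p2)
}

# With mean-centered vectors z1 and d within each subsample, the inputs are
# mean(z1_s1 * d_s1) and mean(z1_s2 * d_s2).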

Remarks

If the means of the variables are not 0, we can subtract the corresponding sample mean from each entry of the data, so that the method introduced above remains applicable.

Moreover, the estimation method introduced above provides a way to estimate the variance of a sensitive response, the covariance of two sensitive responses, and the covariance of a sensitive response and a direct response. Thus, we can estimate the full covariance matrix of all of the sensitive and direct responses. The estimated covariance matrix can then be input into statistical software such as SPSS Amos to obtain the estimated paths of the designed SEM.
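The study used SPSS Amos; purely as an illustration of this step in R (not the software used in the paper), the lavaan package can fit an SEM directly from an estimated covariance matrix. The model syntax, indicator names, cov_hat, and n below are hypothetical placeholders, not the study's actual measurement model.

# Hypothetical sketch: fit an SEM from the UQD-estimated covariance matrix
# rather than from raw data. cov_hat is assumed to be the estimated covariance
# matrix with named rows/columns; n is the total sample size.
library(lavaan)

model <- '
  attitude =~ att1 + att2 + att3   # placeholder latent variable and indicators
  misuse   =~ mis1 + mis2
  misuse   ~  attitude
'

fit <- sem(model, sample.cov = cov_hat, sample.nobs = n)
summary(fit, standardized = TRUE)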

Appendix 5: R Code to Compute the Estimated Mean and Covariance of the Randomized Data Under the UQD Framework

Dat1: The data matrix of Sample 1, with each row representing an observation and each column representing an attribute (the columns of all attributes under direct questioning should be to the left of those under randomized questioning).

Dat2: The data matrix of Sample 2 (the order of attributes in Dat2 must be the same as that in Dat1).

p1: The probability that a respondent in Sample 1 answers the sensitive question.

p2: The probability that a respondent in Sample 2 answers the sensitive question.

ns: The number of direct questions.

nz: The number of randomized questions.
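The original script is not reproduced here; the following is a minimal sketch of an R function with the interface described above (the function name uqd_mean_cov and all internal names are ours). It combines the estimators from Appendix 4 and, as an assumption on our part, pools the direct-question moments across the two subsamples by simple averaging and centers each column by its subsample mean, following the Remarks.

# Minimal sketch: estimated mean vector and covariance matrix of direct and
# randomized items under UQD. Columns 1..ns are direct questions and columns
# (ns+1)..(ns+nz) are randomized questions, identically ordered in Dat1, Dat2.
uqd_mean_cov <- function(Dat1, Dat2, p1, p2, ns, nz) {
  stopifnot(p1 != p2, ncol(Dat1) == ns + nz, ncol(Dat2) == ns + nz)
  Dat1 <- as.matrix(Dat1)
  Dat2 <- as.matrix(Dat2)

  m1 <- colMeans(Dat1)                   # subsample column means
  m2 <- colMeans(Dat2)
  direct <- seq_len(ns)                  # indices of direct items
  rand   <- ns + seq_len(nz)             # indices of randomized items

  # Estimated means: simple average for direct items (an assumption here),
  # UQD formula of part (i) for randomized items.
  mu <- numeric(ns + nz)
  mu[direct] <- (m1[direct] + m2[direct]) / 2
  mu[rand]   <- ((1 - p2) * m1[rand] - (1 - p1) * m2[rand]) / (p1 - p2)

  # Mean-center within each subsample (Remarks of Appendix 4), then form the
  # subsample mean cross-product matrices.
  C1 <- sweep(Dat1, 2, m1)
  C2 <- sweep(Dat2, 2, m2)
  S1 <- crossprod(C1) / nrow(C1)
  S2 <- crossprod(C2) / nrow(C2)

  V <- matrix(0, ns + nz, ns + nz)

  # Direct-direct block: averaged sample covariances (an assumption here).
  V[direct, direct] <- (S1[direct, direct] + S2[direct, direct]) / 2

  # Randomized-direct block: case (c).
  V[rand, direct] <- (S1[rand, direct] + S2[rand, direct]) / (p1 + p2)
  V[direct, rand] <- t(V[rand, direct])

  # Randomized-randomized block: case (a) on the diagonal, case (b) off it.
  for (i in rand) {
    for (j in rand) {
      if (i == j) {
        V[i, j] <- ((1 - p2) * S1[i, j] - (1 - p1) * S2[i, j]) / (p1 - p2)
      } else {
        V[i, j] <- ((1 - p2)^2 * S1[i, j] - (1 - p1)^2 * S2[i, j]) /
          ((p1 + p2 - 2 * p1 * p2) * (p1 - p2))
      }
    }
  }

  nm <- colnames(Dat1)
  if (!is.null(nm)) {
    names(mu) <- nm
    dimnames(V) <- list(nm, nm)
  }
  list(mean = mu, cov = V)
}

A hypothetical call is est <- uqd_mean_cov(Dat1, Dat2, 1/3, 2/3, ns, nz), after which est$cov can be exported to the SEM software as discussed in Appendix 4.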


Cite this article

Chu, A.M.Y., So, M.K.P. & Chung, R.S.W. Applying the Randomized Response Technique in Business Ethics Research: The Misuse of Information Systems Resources in the Workplace. J Bus Ethics 151, 195–212 (2018). https://doi.org/10.1007/s10551-016-3240-5

