Kesan Identiti Pelaku (Manusia atau AI) dan Sikap terhadap AI dalam Psikologi Atribusi Moral dalam kalangan Pelajar Prasiswazah UKM (The Effect of Perpetrator Identity (Human or AI) and Attitude toward AI in Psychological Moral Attribution Among UKM Undergraduate Students)

Siti Norjanatul Abd Ghaffar@Mohd Zain, Rozainee Khairudin

Abstract


AI is an important asset in the transition to the digital era. However, AI raises various ethical concerns that have recently gained attention. Understanding society's perceptions of, and expectations for, the moral behavior of AI systems is important for developing ethical guidelines that align with society's values and norms. Therefore, a within-subjects experimental study on the effect of perpetrator identity and attitude toward AI on moral attribution was carried out with 50 UKM undergraduate students. Data on attitudes toward AI were collected using a structured questionnaire, while moral attribution data were collected using seven moral attribution questions answered after subjects read 10 scenarios of human misconduct and 10 scenarios of AI misconduct. One-way ANOVA, Pearson correlation, and discriminant analysis were performed using SPSS. The results show a significant difference in moral attribution between human and AI perpetrators: subjects attributed greater moral wrongness, responsibility, awareness, intentionality, and blame to human perpetrators, while AI misconduct was judged more justifiable and more permissible than human misconduct. The findings show no relationship between attitude toward AI and moral attribution, regardless of whether the perpetrator was an AI or a human. Attributions of awareness, responsibility, intentionality, and blame are the main factors driving the human-AI difference in moral attribution. These findings offer insight into how moral attribution is made to human and AI actors, informing legal and ethical guidelines for the use of AI.
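The abstract names three analyses run in SPSS: a one-way ANOVA comparing moral attribution across perpetrator identity, a Pearson correlation between attitude toward AI and moral attribution, and a discriminant analysis of the attribution dimensions. The sketch below is a minimal, hypothetical reconstruction of that pipeline in Python, offered only to make the analysis steps concrete; the CSV file, column names, and dimension labels are all assumptions for illustration, not the study's actual materials.

# Illustrative reconstruction of the reported SPSS analyses. The CSV file,
# column names, and dimension labels below are hypothetical.
import pandas as pd
from scipy.stats import f_oneway, pearsonr
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Assumed layout: one row per subject x perpetrator condition, holding that
# subject's mean rating on each of the seven moral attribution dimensions
# plus a single attitude-toward-AI score.
df = pd.read_csv("moral_attribution.csv")
dims = ["wrongness", "responsibility", "awareness", "intentionality",
        "blame", "justifiability", "permissibility"]

human = df[df["perpetrator"] == "human"]
ai = df[df["perpetrator"] == "AI"]

# One-way ANOVA per dimension: does perpetrator identity shift the rating?
# (A repeated-measures test would match the within-subjects design exactly;
# f_oneway mirrors the one-way ANOVA named in the abstract.)
for dim in dims:
    f, p = f_oneway(human[dim], ai[dim])
    print(f"{dim}: F = {f:.2f}, p = {p:.3f}")

# Pearson correlation of attitude toward AI with overall moral attribution,
# computed separately for each perpetrator condition.
for label, grp in df.groupby("perpetrator"):
    r, p = pearsonr(grp["attitude_toward_ai"], grp[dims].mean(axis=1))
    print(f"{label} perpetrator: r = {r:.2f}, p = {p:.3f}")

# Linear discriminant analysis: which dimensions best separate the two
# perpetrator conditions? A larger |coefficient| means a stronger contribution.
lda = LinearDiscriminantAnalysis().fit(df[dims], df["perpetrator"])
for dim, coef in zip(dims, lda.coef_[0]):
    print(f"{dim}: {coef:+.2f}")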






