Artificial Intelligence and Human Cognition: A Systematic Review of Thought Provocation through AI ChatGPT Prompts
Akram Shebani Ahmad Klella
Zawia University, College of Education, Abi-Isa, English department, Zawia, Libya
a.klella@zu.edu.ly
https://orcid.org/0009-0009-7064-8228
Zakria Mohammed Omar Mrghem
Zawia University, College of Education, Abi-Isa, Computer Science Department, Zawia, Libya
z.mrghem@zu.edu.ly
https://orcid.org/0009-0005-4986-6520
Abstract
This paper offers a conceptual investigation of the emergent processes and explanations that arise when a capacity for thought is attributed to, or provoked by, an AI system such as ChatGPT. These emergent behaviours open distinctive pathways in which human-like frameworks and processes of thought are instantiated, often incorporating personal qualifiers, motivations, and other personal and emotional idiosyncrasies. Investigating such processes as thought, while drawing on this human-like cognitive and behavioural articulation, not only provides a standpoint counter to standard Platonic approaches in cognitive science and AI, but can also be used to reappraise broader, socio-culturally grounded notions of optimization, embeddedness, intelligence, soft consciousness, and learning processes.
Keywords: Artificial Intelligence, ChatGPT prompts, human cognition, learning processes, thought provocation
DOI: https://doi.org/10.70091/atras/AI.27
How to Cite this Paper:
Klella, A., & Mrghem, Z. (2024). Artificial intelligence and human cognition: A systematic review of thought provocation through AI ChatGPT prompts. Atras Journal, 5 (Special Issue), 432-444.

Copyright for all articles published in ATRAS belongs to the authors. The authors also grant permission to the publisher to publish, reproduce, distribute, and transmit the articles. ATRAS publishes accepted papers under the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) License. Authors submitting papers for publication in ATRAS agree to apply the CC BY-NC 4.0 license to their work. For non-commercial purposes, anyone may copy, redistribute, remix, transform, and build upon the material in any medium or format, provided that the terms of the license are observed and the original source is properly cited.