{"id":6089,"date":"2025-04-19T09:31:16","date_gmt":"2025-04-19T07:31:16","guid":{"rendered":"https:\/\/site.nord.no\/didaktiskebetraktninger\/?page_id=6089"},"modified":"2025-06-29T12:20:51","modified_gmt":"2025-06-29T10:20:51","slug":"summary","status":"publish","type":"page","link":"https:\/\/site.nord.no\/didaktiskebetraktninger\/summary\/","title":{"rendered":"Summary"},"content":{"rendered":"\n<p>&laquo;<em>The shameless exaggerations surrounding large language models (LLMs) would be laughable if LLMs were not diverting so much energy, money, and talent into a seemingly bottomless pit<\/em>.\u00bb<br>&#8211; Professor Gary Smith, 2025 (<a href=\"https:\/\/mindmatters.ai\/2025\/02\/why-llms-chatbots-wont-lead-to-artificial-general-intelligence\/\" data-type=\"link\" data-id=\"https:\/\/mindmatters.ai\/2025\/02\/why-llms-chatbots-wont-lead-to-artificial-general-intelligence\/\" target=\"_blank\" rel=\"noreferrer noopener\">1<\/a>)<\/p>\n\n\n\n<p>&laquo;<em>LLMs have basically indexed the internet, and in processing this data have created links between datasets based on fixed rulesets. This means that similar information is grouped together, weighted and linked to other similar areas. A big mesh of interconnected data. This is not AI. This is data mining<\/em>.\u00bb<br>&#8211; Ian Venner, 2025 (<a href=\"https:\/\/hurricanecommerce.com\/how-ai-has-been-hijacked-the-agi-fallacy-and-leveraging-vertical-ai\/\" data-type=\"link\" data-id=\"https:\/\/hurricanecommerce.com\/how-ai-has-been-hijacked-the-agi-fallacy-and-leveraging-vertical-ai\/\" target=\"_blank\" rel=\"noreferrer noopener\">2<\/a>)<\/p>\n\n\n\n<p class=\"has-text-align-left\">&laquo;<em>GenAI models are trained to fill in the blanks. To claim that they do more than this would be an extraordinary claim that should require extraordinary evidence. Instead, we are mostly treated to the logical fallacy of affirming the consequent. 
The models perform as if they are reasoning (for example), so they must be reasoning. The alternative hypothesis is that they are copying (approximately) the language that was used by reasoning humans.<\/em>\u00bb<br>&#8211; Herbert Roitblat, 2024 (<a href=\"https:\/\/open.substack.com\/pub\/buildcognitiveresonance\/p\/who-and-what-comprises-ai-skepticism?utm_campaign=comment-list-share-cta&amp;utm_medium=web&amp;comments=true&amp;commentId=81138274\" data-type=\"link\" data-id=\"https:\/\/open.substack.com\/pub\/buildcognitiveresonance\/p\/who-and-what-comprises-ai-skepticism?utm_campaign=comment-list-share-cta&amp;utm_medium=web&amp;comments=true&amp;commentId=81138274\" target=\"_blank\" rel=\"noreferrer noopener\">3<\/a>)<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1152\" height=\"360\" src=\"https:\/\/site.nord.no\/didaktiskebetraktninger\/wp-content\/uploads\/sites\/6\/2023\/12\/linjer.jpg\" alt=\"\" class=\"wp-image-2799\" style=\"width:210px;height:auto\" \/><\/figure>\n<\/div>\n\n\n<p><strong>Below is a concise overview of this blog book\u2019s chapters generated by Microsoft Copilot, using its &laquo;Think Deeper&raquo; option<\/strong><em>.<\/em> <strong>Copilot\u2019s statistical analysis of the letters and sentences across the various chapters is by no means poor, offering a clear and accessible overview of this book and its chapters.<\/strong><\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"310\" height=\"56\" src=\"https:\/\/site.nord.no\/didaktiskebetraktninger\/wp-content\/uploads\/sites\/6\/2023\/01\/linje.jpg\" alt=\"\" class=\"wp-image-4931\" style=\"width:199px;height:auto\" \/><\/figure>\n<\/div>\n\n<div class=\"wp-block-image\">\n<figure class=\"alignleft size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"225\" height=\"225\" 
src=\"https:\/\/site.nord.no\/didaktiskebetraktninger\/wp-content\/uploads\/sites\/6\/2025\/04\/CoPilot.jpg\" alt=\"\" class=\"wp-image-6007\" style=\"width:55px;height:auto\" \/><\/figure>\n<\/div>\n\n\n<p>This blog book focuses on the capabilities, limitations, and implications of modern AI, particularly large language models like ChatGPT and Microsoft Copilot:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Prologue:<\/strong><br>It introduces the theme by linking AI criticism with academic skepticism. The author recounts extensive testing of various chatbots over several years and sets the stage for a detailed exploration of AI\u2019s overhyped reputation and its real limitations.<\/li>\n\n\n\n<li><strong>Chapter 1:<\/strong><br>This chapter argues that while AI chatbots such as ChatGPT can mimic human language, they fundamentally operate on statistical pattern recognition\u2014not genuine intelligence. It questions the appropriateness of the term \u201cartificial intelligence,\u201d suggesting that \u201cconversation robots\u201d is a more accurate label.<\/li>\n\n\n\n<li><strong>Chapter 2:<\/strong><br>The discussion here centers on common misconceptions. It stresses that despite advanced outputs, these systems do not \u201cunderstand\u201d input in any human sense; they simply utilize machine learning algorithms, and true general AI remains a distant prospect.<\/li>\n\n\n\n<li><strong>Chapter 3:<\/strong><br>Through various tests, the chapter reveals that ChatGPT and similar models often generate responses that are superficially convincing but lack depth and accuracy, especially when tackling complex academic tasks requiring higher-order thinking.<\/li>\n\n\n\n<li><strong>Chapter 4:<\/strong><br>This section delves into the mechanics of LLMs. It compares modern chatbots to earlier systems (like ELIZA), emphasizing that they merely pattern-match rather than engage in critical analysis or reflection. 
It also highlights issues such as limited memory and susceptibility to misinformation.<\/li>\n\n\n\n<li><strong>Chapter 5:<\/strong><br>Focusing on higher-order thinking, the chapter critiques AI-generated texts for their tendency to hallucinate or fabricate information. Even though these systems can produce structured and fluent responses, they falter when deeper analysis and logical consistency are required.<\/li>\n\n\n\n<li><strong>Chapter 6:<\/strong><br>Here, the evaluation of multiple AI tools on a challenging academic assignment demonstrates that all tested systems struggle with creative reasoning and problem solving. The chapter underscores the gap between the tools\u2019 evolving capabilities and the demands of academic rigor.<\/li>\n\n\n\n<li><strong>Chapter 7:<\/strong><br>The discussion shifts to education, arguing that banning AI tools like ChatGPT is impractical. Instead, it calls for a balanced use that emphasizes ethical integration, critical thinking, and the development of students\u2019 evaluative skills to make informed use of these technologies.<\/li>\n\n\n\n<li><strong>Chapter 8:<\/strong><br>This chapter raises multifaceted ethical, legal, and security concerns. It points out the risks\u2014ranging from data privacy and bias in training data to the environmental impact of massive computing resources\u2014and stresses the need for greater transparency and ethical practices in AI development.<\/li>\n\n\n\n<li><strong>Chapter 9:<\/strong><br>The focus here is on rethinking assignment design in higher education. To counteract the easy overreliance on AI-generated content, the chapter recommends assignments that demand deep reflection, critical analysis, and authentic personal insight, thereby reinforcing academic integrity.<\/li>\n\n\n\n<li><strong>Chapter 10:<\/strong><br>The concluding chapter reiterates that current LLMs function more as sophisticated predictive systems than as repositories of true knowledge. 
It highlights the gap between the media\u2019s portrayal of AI and its practical, often limited, application\u2014especially in rigorous academic environments.<\/li>\n\n\n\n<li><strong>Epilogue:<\/strong><br>Final reflections express persistent skepticism; although AI chatbots can organize and summarize content, they fall short in providing genuine comprehension and complex reasoning. Tests conducted between 2022 and 2025 on tasks from the field of Social Informatics demonstrate that while tools like ChatGPT can craft superficially coherent academic responses, they lack the critical analysis, deep reasoning, and synthesis essential for higher-level work in the Social Sciences and Humanities.<\/li>\n<\/ul>\n\n\n\n<p>Each chapter builds toward a critical view of today\u2019s AI tools, urging educators, researchers, and policymakers to temper expectations and focus on integrating AI responsibly while acknowledging its current limits.<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1152\" height=\"360\" src=\"https:\/\/site.nord.no\/didaktiskebetraktninger\/wp-content\/uploads\/sites\/6\/2023\/12\/linjer.jpg\" alt=\"\" class=\"wp-image-2799\" style=\"width:210px;height:auto\" \/><\/figure>\n<\/div>\n\n\n<h3 class=\"wp-block-heading has-text-align-center\">To the blog book<\/h3>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><a href=\"https:\/\/site.nord.no\/didaktiskebetraktninger\/chatgpt-et-talefort-eksempel-pa-kunstig-intelligens-eller\/\"><img loading=\"lazy\" decoding=\"async\" width=\"244\" height=\"174\" src=\"https:\/\/site.nord.no\/didaktiskebetraktninger\/wp-content\/uploads\/sites\/6\/2024\/09\/backto.jpg\" alt=\"\" class=\"wp-image-4925\" style=\"width:66px;height:auto\" \/><\/a><\/figure>\n<\/div>\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>&laquo;The shameless exaggerations surrounding large language 
models (LLMs) would be laughable if LLMs were not diverting so much energy, money, and talent into a seemingly bottomless pit.\u00bb&#8211; Professor Gary Smith, 2025 (1) &laquo;LLMs have basically indexed the internet, and in processing this data have created links between datasets based on fixed rulesets. This means that &hellip; <a href=\"https:\/\/site.nord.no\/didaktiskebetraktninger\/summary\/\" class=\"more-link\">Fortsett \u00e5 lese<span class=\"screen-reader-text\"> \u00abSummary\u00bb<\/span><\/a><\/p>\n","protected":false},"author":11,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"coauthors":[2],"class_list":["post-6089","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/site.nord.no\/didaktiskebetraktninger\/wp-json\/wp\/v2\/pages\/6089","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/site.nord.no\/didaktiskebetraktninger\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/site.nord.no\/didaktiskebetraktninger\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/site.nord.no\/didaktiskebetraktninger\/wp-json\/wp\/v2\/users\/11"}],"replies":[{"embeddable":true,"href":"https:\/\/site.nord.no\/didaktiskebetraktninger\/wp-json\/wp\/v2\/comments?post=6089"}],"version-history":[{"count":13,"href":"https:\/\/site.nord.no\/didaktiskebetraktninger\/wp-json\/wp\/v2\/pages\/6089\/revisions"}],"predecessor-version":[{"id":6444,"href":"https:\/\/site.nord.no\/didaktiskebetraktninger\/wp-json\/wp\/v2\/pages\/6089\/revisions\/6444"}],"wp:attachment":[{"href":"https:\/\/site.nord.no\/didaktiskebetraktninger\/wp-json\/wp\/v2\/media?parent=6089"}],"wp:term":[{"taxonomy":"author","embeddable":true,"href":"https:\/\/site.nord.no\/didaktiskebetraktninger\/wp-json\/wp\/v2\/coauthors?post=6089"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}