The breakthrough of Generative Artificial Intelligence platforms (Generative AI - GenAI) such as ChatGPT, Midjourney, and others is leading to far-reaching changes in the employment market, in learning, and in humanity in general. These platforms pose unprecedented challenges to academic teaching and learning methods as we have known them so far. Some see the current technological development as an expression of disruptive innovation that threatens the traditional ecosystem of academic education - to the point of its possible displacement. Moreover, some see this development as a real revolution, equal in importance to the invention of the Windows operating system, the Internet, and even the printing press. A historical examination reveals that the fear of new technologies, initially perceived as threatening, is nothing new1. On the other hand, this innovative development can represent a transformative catalyst allowing higher education systems to improve their relevance and sustainability.
Whatever the position, GenAI technologies are here to stay! As a society, we must embark on a journey to discover and explore the possibilities that these applications offer. In our opinion, it is necessary to focus on finding the human-social value in the use of GenAI and, alongside it, to address highly critical considerations such as ethics, reliability of information and data, prevention of biases, and the right to privacy. In the end, it must be remembered that GenAI is a mirror image of human knowledge, as it is based on content (texts, images, videos, etc.) and context created by human users.
A team of researchers from the Faculty of Instructional Technologies at Holon Institute of Technology (HIT) in Israel gathered to write a position paper examining the latest developments in GenAI and the possibilities of leveraging it to benefit academic teaching and learning in Israel. Between April and August 2023, we maintained continuous communication and mostly asynchronous dialogue. The main goal of this joint work process was to ensure that the entire team's voices were heard and reflected in the final product.
Artificial Intelligence (AI) is a branch of Computer Science that focuses on creating intelligent machines that can perform tasks that typically require human intelligence, such as visual perception, speech and text processing, decision-making support, and conversational dialogue analysis and generation. AI systems are designed to learn from data, adapt to new situations, and improve their performance over time using various kinds of machine learning algorithms.
In education and learning systems, AI has the potential to revolutionize the way we teach and learn. AI-powered systems can personalize learning experiences for individual students and provide real-time feedback and support. AI can also help teachers and administrators identify areas where students are struggling and provide targeted interventions to help them overcome difficulties and succeed.
Artificial Intelligence is usually classified into one of four types: General AI, Narrow AI, Edge AI, and Generative AI - our focus in this chapter.
Generative AI is a subfield of Artificial Intelligence (AI) that focuses on creating machines that can generate new content, such as images, music, and text. Unlike other AI systems, which are designed to recognize patterns and make predictions based on existing data, Generative AI systems are designed to create new data that has never been seen before, often approaching the level and quality of human creativity.
The history of GenAI systems can be traced back to the 1950s. However, it wasn't until the 1990s that GenAI began to gain status as a field of study. In 1997, a program called AARON, developed by artist Harold Cohen, could generate original artwork using a set of rules and algorithms. In the early 2000s, GenAI began to evolve rapidly, thanks to advancements in machine learning and deep learning. In 2014, a team of researchers at Google developed a GenAI system called DeepDream, which could generate images by analyzing existing images and identifying patterns and features. In 2016, a GenAI system called WaveNet was developed by researchers at Google, which could generate realistic-sounding speech by modeling the human voice. This system was a significant breakthrough in the field of GenAI and paved the way for the development of other systems.
In 2018, a Generative AI system called GPT-2 was developed by OpenAI, which could generate human-like text by analyzing large amounts of data. This system was capable of generating coherent and convincing text, which raised concerns about the potential misuse of GenAI. These capabilities have continued to evolve, with GPT-3, GPT-4, and new systems being developed by Google, Apple, and Amazon generating more complex and sophisticated content. GenAI has the potential to transform many industries and change the way we interact with machines. As the technology continues to advance, a whole ecosystem has evolved, and we can expect to see even more exciting developments in the field of GenAI in the years to come.
Natural Language Processing (NLP) is a subfield of Artificial Intelligence (AI) that deals with the interaction between computers and human language (text and speech). Over the past 30 years, NLP has undergone significant changes, and its history and timeline are important for understanding AI development in general and Generative AI in particular.
In the early 1990s, NLP was primarily focused on rule-based systems, where experts would manually create rules to process language. However, this approach was limited in its ability to handle the complexity and variability of natural language. In the late 1990s, statistical methods were introduced, which allowed computers to learn from large amounts of data. This approach led to significant improvements in NLP, particularly in tasks such as language translation and speech recognition. In the early 2000s, machine learning techniques such as deep learning and neural networks were introduced, which further improved the accuracy of NLP systems. These techniques allowed computers to learn from vast amounts of data and make predictions based on that data. As a result, NLP systems became more sophisticated and could handle more complex tasks such as sentiment analysis and natural language understanding. In recent years, there has been a significant shift towards the use of pre-trained language models such as BERT and GPT-3. These models are trained on massive amounts of data and can perform a wide range of NLP tasks with high accuracy. They have revolutionized the field of NLP and have made it possible to develop applications such as chatbots and virtual assistants that can understand and respond to users in a conversational dialogue.
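To illustrate how accessible pre-trained language models have become, the following minimal sketch uses the open-source Hugging Face transformers library to perform sentiment analysis, one of the NLP tasks mentioned above, in a few lines of code. The specific model that the pipeline downloads is a default chosen by the library, not one prescribed by this paper.

```python
# Minimal sketch (assumes the "transformers" package is installed):
# a pre-trained model performing sentiment analysis out of the box.
from transformers import pipeline

# Load a pre-trained sentiment-analysis model; weights are downloaded on first use.
classifier = pipeline("sentiment-analysis")

result = classifier("Generative AI is changing how students learn.")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99}]
```

The same pipeline interface exposes other tasks, such as text generation and translation, which is what makes pre-trained models attractive building blocks for chatbots and virtual assistants.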
NLP has evolved to become a sophisticated field that can handle complex tasks such as natural language understanding and sentiment analysis. One of the most challenging NLP tasks is ambiguity resolution (disambiguation), which results from the high level of ambiguity at all levels of language: lexical, semantic, pragmatic, and referential. Even highly pre-trained statistical models are challenged by cases that humans resolve easily, especially in pragmatic contexts.
Large Language Models (LLMs) are a type of machine learning model trained on large amounts of text data. The goal of an LLM is to learn the patterns and structures of language so that it can generate new text that is similar to the training data. LLMs are used in many GenAI systems, including text generators and chatbots. A key problem and challenge of LLMs is that they are based on training data harvested automatically from web sources. If these data contain fake or biased information, it is hard to detect, and the final results will be tainted. That is why many systems use a "human in the loop" methodology to verify the process and the quality of the data and results.
Transformers are a type of deep learning model introduced in 2017. They are designed to process sequential data, such as text or speech, and are particularly effective at generating natural language. In the Transformer architecture, the input sequence is first transformed into a set of vectors called embeddings. These embeddings are then processed by a series of layers that perform operations such as the attention mechanism and feedforward neural networks. The output of the final layer is then transformed back into the original sequence space. Transformers use a technique called self-attention. This mechanism works by computing a set of attention weights for each position in the input sequence. These attention weights indicate how much attention should be paid to each position when generating a more accurate and coherent output.
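The following simplified sketch, written in Python with NumPy, illustrates the scaled dot-product self-attention computation described above. It is an illustration only: real Transformers add learned projection matrices for the queries, keys, and values, multiple attention heads, and positional encodings, and the dimensions used here are arbitrary.

```python
# A simplified sketch of scaled dot-product self-attention.
import numpy as np

def self_attention(X: np.ndarray) -> np.ndarray:
    """X has shape (sequence_length, embedding_dim)."""
    d = X.shape[-1]
    # In a full Transformer, Q, K and V come from learned linear layers;
    # here the raw embeddings are reused to keep the sketch short.
    Q, K, V = X, X, X
    # Attention scores: how much each position should attend to every other.
    scores = Q @ K.T / np.sqrt(d)
    # Softmax over positions turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output vector is a weighted mix of all positions in the sequence.
    return weights @ V

embeddings = np.random.rand(5, 8)        # 5 tokens, 8-dimensional embeddings
print(self_attention(embeddings).shape)  # (5, 8)
```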
Two more mechanisms are widely used in Generative AI and are briefly mentioned here: Generative Adversarial Networks (GANs). GANs are a type of deep learning model that consists of two neural networks: a generator and a discriminator. Recurrent Neural Networks (RNNs) designed to process sequential data and are particularly effective at generating text and speech. They work by maintaining a hidden state that captures the context of the input sequence and using it to generate the output in case of ambiguity for example.
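To make the two-network GAN structure concrete, here is a minimal, hedged sketch in Python using PyTorch. The layer sizes and dimensions are arbitrary assumptions chosen for illustration; a real GAN also requires an adversarial training loop in which the discriminator learns to separate real from generated samples while the generator learns to fool it.

```python
# Illustrative skeleton of the two networks that make up a GAN.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 32

generator = nn.Sequential(           # maps random noise to a synthetic sample
    nn.Linear(latent_dim, 64), nn.ReLU(),
    nn.Linear(64, data_dim),
)
discriminator = nn.Sequential(        # scores how "real" a sample looks (0..1)
    nn.Linear(data_dim, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),
)

noise = torch.randn(8, latent_dim)            # a batch of random latent vectors
fake_samples = generator(noise)               # generator creates new content
realism_scores = discriminator(fake_samples)  # discriminator judges it
print(realism_scores.shape)                   # torch.Size([8, 1])
```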
Since the 1990s, AI was developed and used mainly in research laboratories and academic institutions, and very little by mainstream users. Since the end of 2022, we have witnessed a huge increase in awareness, familiarity, and use of AI tools, especially those considered GenAI - ChatGPT, Midjourney, DALL-E 2, D-ID, and more - among users without prior knowledge or training in AI.
In addition to research, higher education systems in the world and in Israel are responsible for teaching and learning, that is, for the transfer and acquisition of knowledge. Imparting knowledge is the responsibility of the faculty. On the part of the students, learning is a process of acquiring knowledge, the results of which will lead to a change in professional abilities in the present and/or in the future. The common teaching and learning method in the academy is based on a physical location (campus, lecture hall, classroom), a limited learning time frame (year, semester, day, hour), a pre-structured curriculum (degree, specialization, course, workshop, exercise), a known scope (semester hours, credit points), assessment of learning outcomes known in advance (exam, paper), and a distinctive organizational structure (faculties, departments).
Until the COVID-19 crisis at the beginning of 2020, most of the teaching and learning in the Israeli academy (except for the Open University of Israel, which specializes in distance learning) took place on the institutions' campuses. In the common systemic model, learning was seen as a mechanical process and the teaching approach as industrial - a mass, centralized, uniform, and standardized production line. In such a system, the emphasis is on the evaluation of the final product (test, paper) and its measurable dimensions (a grade), and less on the process and the personal learning experience of the students.
The digital revolution and the opening of the Internet to the public, starting in the late 1990s, did not essentially change the classical teaching and learning processes. Although vast information sources that can be reached with a click were added, the industrial concept remained the same.
Another development of the digital revolution is the opening of course websites that accompany classroom teaching and are supervised by the teaching staff. The purpose of such a learning site (usually within a learning management system such as Moodle) is to serve as an anchor for content (lesson presentations, remote access to databases), interaction (forum, chat), and learning management. That is, these sites enriched the educational process but did not essentially change the concept of the industrial teaching method.
The COVID-19 crisis forced an immediate recalculation of the route. The transition to emergency distance learning was like a hurricane shaking everything normal, familiar, and well-known in the academy. Higher education institutions all over the world had to adapt to a changing reality that no longer allowed them to continue operating in the previously accepted format. Not surprisingly, the lecturers, most of whom had no knowledge of and were not trained to teach in this format at all, expressed distress at the complexity involved in interacting with students in remote teaching and at the technological challenges2. A similar distress was conveyed by students in a survey conducted by the Edmund de Rothschild Foundation, in which they expressed their difficulties in continuing academic studies in a distance-learning format3.
Looking back, there is no doubting the success of the forced technical-technological-administrative transition to distance teaching and learning during the COVID-19 crisis. The beginning was indeed challenging, among other things due to technological infrastructure and end-user equipment that required adjustment. However, the adaptation to the new situation was fast. The initial surprise and fear of the sudden transition to distance teaching were later replaced by feelings of competence and an appreciation of the benefits of the new format. In addition, as time passed, the lecturers accepted the "verdict" and became more precise and relaxed about distance teaching. Similarly, as time passed, most students learned to appreciate the convenience of distance learning, which does not require physical arrival at the educational institution.
Compared to the technical success, the pedagogical success of the transition was partial. Teaching and learning remotely require planning, adaptations, and teaching and learning skills that consider the characteristics of the technological environment. The transition during the COVID-19 crisis was done all at once, in a somewhat improvised and intuitive manner. Due to the urgency, adequate evaluation of the suitability of the learning materials, the teaching methods, the assessment, and the imparting of learning skills adapted to the physical distance between lecturer and students was not carried out.
No doubt, the COVID-19 pandemic contributed to the realization that the accepted framework of interrelationships between the lecturer, the students, and academic knowledge had to change:
Above all, the COVID-19 period sharpened the understanding that human qualities and behavior are the added value of the academic staff. It is clear to everyone that personal, social, emotional, and supportive contact is just as important as acquiring knowledge.
Just over a year has passed since the end of the COVID-19 era, and the world has been shaken again by the appearance of Generative Artificial Intelligence. GenAI applications have the potential to fundamentally change almost every aspect of our lives, including how we work, learn, communicate, and even how we think. In the context of academic education, GenAI has the potential to improve, empower, and even fundamentally change teaching, learning, and research processes. However, it poses significant challenges and risks that require in-depth discussion. Some fear that humanity as a whole is on the brink of an abyss, on a fast track to the loss of human intelligence and, with it, academic education. In contrast, some believe that the introduction of GenAI applications will help democratize and make accessible the best of human knowledge, and may upgrade the processes of learning, teaching, and research.
These conflicting viewpoints require an in-depth discussion. This position paper is our humble contribution to the academic-human dialogue. We have gathered eight professionals with academic, research, education, and training backgrounds and experience in the field, in order to draw a current snapshot of the integration of GenAI in teaching and learning processes in academic education. We believe that the academy must take a proactive position to examine the technological and educational benefits of AI, without ignoring its ethical and social aspects.
In this section we present key implications of integrating GenAI into learning and teaching in higher education in Israel. As we are in the midst of a "tornado", this incubation phase allows us to present only a partial answer, one that is current and valid at the time of writing this position paper (October 2023). We have chosen to briefly present key characteristics that we believe will be dramatically affected by the development of GenAI. These characteristics are divided into three key categories:
4.2.1 "A buddy to study with" - AI can assist in the learning process by providing information, asking questions, cultivating reflective-critical thinking, solving problems, and deepening the understanding of concepts. It can offer new perspectives and help see things in different and diverse ways. Concrete examples are:
Expanding the ways of learning can empower students and encourage them to take responsibility for their personal learning process, as well as to develop proactive, self-directed learning skills instead of passively absorbing knowledge (as is customary in lecture rooms). However, over-reliance has the potential to harm intellectual development, as it may cause the student to transfer the responsibility for learning to GenAI without engaging in critical thinking (or thinking at all).
4.2.2 Personalization in learning - an educational approach according to which a learning program must be adapted personally (student-centered learning), in real time and continuously, to each student according to their needs - learning pace, interests, strengths, etc8. With the help of AI tools, the learning process can be adapted to individual needs and provide customized feedback and guidance. This can help promote an interactive learning experience that contributes to the student's interest and engagement and allows the student to understand the learning process qualitatively and according to their personal characteristics. However, over-reliance on technology may lead the student to feel detached from the human lecturer and, as a result, to develop feelings of alienation, of "getting lost", and a decrease in motivation to learn.
4.2.3 Shortening time in teaching and learning - GenAI tools can enable an immediate response to questions or requests for assistance from lecturers and students. This can help reduce the workload of teaching staff, for example by activating automatic evaluation and review mechanisms that provide immediate feedback to students. For students, it offers easier preparation of homework and papers, for example by using GenAI tools to find sources, draft initial versions of work, find typos, and more. These easier paths can help learners focus on more complex or challenging aspects of the teaching-learning process. However, over-reliance on shortcuts and on GenAI tools, in particular for writing assignments and exercises, may lead to superficial products that indicate a lack of understanding.
4.3.1 Critical thinking - the ability to think clearly and rationally, understanding the logical connection between ideas. It is about being active (not reactive) in the learning process, and it involves being open-minded, inquisitive, and able to think in a reasoned way. Critical thinking is also balanced and reflective thinking that focuses on deciding "what to believe and what to do"9. This skill is considered a solid learning skill, as it focuses on "how to think" instead of "what to think". Solid skills like critical thinking are more immune to GenAI advances because they represent many of the tasks that GenAI systems still struggle with today. This is in contrast to temporal skills, such as a programming language or a specific tool, which lose value quickly due to technological progress10.
In our opinion, the skill of critical thinking should be put at the top of the academic discourse. In an era where texts are created automatically, the human contribution will lie in examining the content, criticizing it, and amending it. The variety of sources and points of view allows for a critical and comparative examination of the content. Avoiding critical thinking and over-relying on AI tools can give the learner a false sense of security about the accuracy of the generated content. This may lead students to trust the results generated by AI without questioning their validity or accuracy.
4.3.2 Creativity - the process of producing ideas that are original and relevant to the given situation. Creative thinking consists of originality and flexibility11. GenAI can be an inspiration for new and creative ideas, suggest new practices, and provide new and original perspectives on an educational issue. However, over-reliance on GenAI tools may limit personal creativity and impair the independent human voice.
4.3.3 Originality - refers to creating or inventing new learning products. Original work is not work received from others, copied, or based on another's work, such as output produced by GenAI. The challenge in training learners to prepare learning materials and products that they themselves have created is finding the balance between using GenAI as "scaffolding" for the learning process and simply transferring the responsibility for preparing papers and exercises to GenAI tools.
4.4.1 Ethics - a branch of philosophy that examines moral principles and values to determine what is right and wrong in different situations. In the academic context, ethical behavior is characterized by integrity and respect for rights and intellectual property. The ethical complexity of integrating GenAI tools invites possibilities for developing critical thinking, establishing an ethical code, and creating a safe and open teaching-learning environment that can help develop an ethical compass among students and faculty alike. As a result of the development of GenAI technology, it may become difficult to evaluate works and content submitted by students and faculty members without knowing whether they were written by a human or by a machine.
Moreover, it will be difficult to trace the sources of the knowledge, and this lack of transparency may harm the privacy and the protection of the intellectual property of the content owners, and even harm the privacy and security of the users themselves. Relying on existing content also raises the concern of creating bias in the newly generated content, which may specifically exclude disadvantaged populations (because the written text is based on what the "majority" has written on the subject so far). Added to this is the fear of "hallucinations" - cases in which the machine invents content - which require careful human attention to identify. Since in the era of GenAI it is difficult to detect unethical conduct, and even more so to enforce it through disciplinary means, a difficulty may arise in balancing academic freedom and personal responsibility.
4.4.2 Algorithmic bias - a particular case of ethics: GenAI technologies are based on collecting data from users' activity according to an algorithmic basis unknown to the end users (a "black box"), which makes it difficult to understand the automatic decisions these systems produce. Autonomous systems may amplify or perpetuate existing biases and thus deepen discrimination against disadvantaged populations.
4.4.3 Data protection and privacy - another particular case of ethics: GenAI technologies are based on the collection and analysis of personal and group information about learners. There is a real risk of information leaking to unauthorized parties. The challenge is to prevent information leakage through technological means as well as to establish a clear protection strategy and privacy policy for every individual in society.
4.4.4 Accessibility to all (?) - The availability and accessibility of GenAI technologies to every person, regardless of socio-demographic status or any other irrelevant characteristic, invites the democratization of human knowledge, since GenAI tools such as ChatGPT encompass the human knowledge accumulated on the Internet (with certain reservations). It follows that GenAI technologies can make teaching and learning open to every student regardless of disabilities or limitations, thus promoting a culture of inclusion and diversity (DIEB - Diversity, Inclusion, Equality & Belonging) that acknowledges differences between learners and develops a sense of belonging. The richness of the content and the exposure to a variety of opinions and points of view can increase students' awareness of the complexity of acquiring knowledge and develop empathy and tolerance for different opinions. However, the lack of clarity regarding the costs of the technology and the accessibility it will provide raises a fear of harming equality, diversity, and inclusion, and may actually increase disparities instead of reducing them.
In May 2023, we conducted a large-scale survey among students studying in a variety of higher education institutions in Israel. About 700 male and female students answered an online questionnaire. The purpose of the survey was to examine how students use GenAI applications during their studies and what are the ethical issues they face related to the usage of GenAI applications in learning.
The findings of the survey show that the vast majority of students (70%) use one or more GenAI applications in their academic studies. ChatGPT is the winner! 95% of the students chose it as the main application that helps them in learning.
Table 1 shows that the main uses are getting ideas for inspiration and obtaining explanations of unclear educational content. A little behind is the preparation of exercises and/or papers. In the last places are expansion of educational content, drafting and/or editing of educational content, and summarizing learning content. It can be concluded from these findings that the most common and prominent use of GenAI is as a personal assistant (copilot) supporting the learning process. However, there are also those who use AI applications as a substitute for, or shortcut in, learning processes, to prepare exercises and/or papers.
Table 1. Students' uses of GenAI applications in their academic studies

Items | Mean | SD |
---|---|---|
Getting ideas for inspiration | 3.01 | 1.273 |
Explanation of unclear educational content | 3.00 | 1.383 |
Drafting and/or editing educational content | 2.65 | 1.350 |
Expansion of educational content | 2.66 | 1.343 |
Summary of learning content | 2.49 | 1.387 |
Examining the ethical issues students face during their studies (Figure 1 below) reveals that:
Most of the students have a realistic and critical view of ethical issues. Most agree that the use of GenAI applications can perpetuate mistakes; most think that the use of these applications raises ethical dilemmas of violation of privacy and perpetuation of social inequality. It also appears that most of the students attest to being strict about their academic integrity, since the proportion of students who agree that GenAI applications can be used where this is prohibited is the lowest.
The vast majority (more than 60%) claim that lecturers cannot expect them to behave ethically without providing appropriate training in the use of GenAI tools. Questions and issues that the students face include, for example: "Is it ethical to use GenAI tools' answers for academic work and assignments?"; "To what extent can artificial intelligence be incorporated without it being considered copying, stealing, or plagiarism?"; "Where is the line crossed in using GenAI - between using it to improve my work and letting it do the work or the task for me?"
More than that, the students demand that the academy prepare them for the AI-saturated employment world: "I'm not stealing from anyone, I'm just better preparing myself for the outside world. It's a shame that the academy doesn't understand this and is stuck behind."
The GenAI revolution is currently unfolding, and its implications are continuously evolving, making them challenging to fully grasp. On one side, there's a promising potential for enhancing the efficiency, comfort, and quality of education. This includes introducing innovative methods that tailor learning experiences to individual students based on their unique needs, knowledge gaps, and prior understanding. Additionally, it offers novel ways to extend academic education to underserved populations. Such advancements also equip students for the future job market.
However, on the flip side, the integration of GenAI in academia brings forth significant concerns. These encompass issues of transparency, fairness, integrity, inclusion, and equality. There's also a growing apprehension about over-relying on technology, potentially sidelining the essential human elements of teaching.
The main challenge is how to establish hybrid intelligence that seamlessly merges human and machine capabilities. We stand at a historic juncture where humanity is intertwining with technology. While some express fears of technology overshadowing human existence, it's crucial to ensure this doesn't transpire, especially within academic institutions. Higher education must aim for a harmonious blend of human and artificial intelligence. In this model, faculty and students collaboratively engage with AI, leveraging its strengths to further elevate the quality of academic education.
In recent decades, researchers and professionals in the field of Technology-Enhanced Learning (TEL) have tirelessly sought innovative ways to support educational processes. In the past year, we have witnessed significant advancements as these specialists concentrated their efforts on exploring and harnessing the potential of Artificial Intelligence (AI) tools to enhance their practices and outcomes. These tools draw upon various sub-domains of AI, including Machine Learning (ML), Deep Machine Learning (DML), and Natural Language Processing (NLP), providing capabilities that researchers and practitioners are leveraging to enhance learning and training processes.
The adoption of AI-related technologies can be attributed to the pressing and genuine needs that arise in diverse educational settings, ranging from K-12 education to universities and corporate training. Traditionally, researchers in the TEL field describe the elements involved in their efforts with reference to the framework of Technological Pedagogical Content Knowledge (TPACK). This framework is employed by professionals to delineate the organized relationship between content, the educational approach employed to teach it, and the technology utilized to support and enhance the learning and training process. Historically, technology has been seen as a separate tool capable of amplifying the educational message in learning and training processes - a distinct facet within the TPACK triangle that offers support for conveying educational messages12. However, with the increasing integration of AI into TEL, the relationship between content, educational approach, and technology is poised for a significant transformation.
Given AI's capabilities, technology can now serve as a recommender for the content to be presented. Furthermore, AI can suggest an educational approach that optimizes outcomes while considering specific educational requirements. Consequently, AI technologies have the potential to encroach upon domains that were traditionally the purview of human specialists. Such a paradigm shift within the TPACK framework necessitates that researchers and specialists reevaluate the nature of their efforts in their respective practices.
In light of this emerging reality in the realm of AI, I strongly encourage these practitioners to:
Two observations concerning the way in which GenAI will affect the academy, especially the people at the core of the process (faculty and students).
The term "lecturer" refers to the method of achieving academic learning goals and implies that the core of the role is the delivery or transfer of content in the faculty member's area of expertise to the students. Technology in general, and GenAI in particular, changes this core, leaving the "human" very relevant while contributing to the students' ability to apply the skills they have learned and to strengthen complementary skills beyond the field of specialization. Today (in 2023), we are in a reality where the technology is undergoing intensive development, and it is impossible to track the number of GenAI tools being launched and used in various fields, including research and academic teaching. My personal prediction is that the main application affecting learning processes in general, and academia in particular, will be a collection of tools that are today scattered over several fields, as detailed in this document. The application I am referring to will enable the personalization of learning processes with far-reaching accuracy relative to the tools known today. For each learner, a learning environment will be created in which the content, assessment, communication, experience, and practice will be, on the one hand, accessible and tailored to the learner's needs and, on the other hand, serve the syllabus and educational goals within the content area being studied, in accordance with the institution's requirements for obtaining a degree in the field.
Figuratively, we can say that we will succeed in "duplicating" the best and most relevant lecturers, so that every student (or "learner" in the education system) will have a personal "lecturer" who accompanies them and enables effective learning in an independent and adapted framework. In parallel to this process, the role of the human lecturer/teacher remains critical in the educational process, but as a person who enables integration and application to obtain practical skills. The "lecturer", responsible for the content, relevance, learning experience, and evaluation, will deal with the emotional, social, and human aspects of the learning process. Hence, naturally, our skills as teaching staff will require updating, and we will apparently be the pioneer generation of faculty members who embrace or reject the change, and hence will need to invest resources in change management.
For years, there have been various arguments around the world about the irrelevance of academia as the institution entrusted with training the future generation of workers. The academy, whose primary role is research, also undertakes the process of training professionals in various fields and is constantly required to update professional content and the way it is taught.
According to most estimates, GenAI will dramatically impact the global employment market. Like previous technological revolutions, this time, too, there will be trades that will disappear, new ones that will be created, and above all, there will be a profound impact on a wide range of existing professions, including in the fields of medicine, law, software development, marketing, and many others.
For the academy, the relevancy challenge becomes more substantial, as there is the potential for an even more significant gap between the nature of the students' training and the work environment they will encounter upon transitioning to the labor market. Accordingly, academia must already take steps today to understand the nature of the change in each content area and reflect it in the teaching and assessment processes of the study framework.
There is no doubt that GenAI tools and their availability, along with their ease of use, create a new ecosystem for educators and students of all ages. Much has been said about students' need for creativity and soft skills even before, but this need is now even more evident in light of technological developments. Questions about whether to use these tools as part of the curriculum and if so, in what way, are part of the discussion, but not only... Several other issues arise:
All experiments and tests show that the instructions used to run GenAI tools are critical to the quality and substance of the results. Thus, a new skill was created - Prompt Engineering (again, with a whole ecosystem around it). Should we as educators teach this skill as a leading subject in the curriculum? How skilled should our students be? Is it enough to discuss the user interface, or should we dive into the algorithms and mechanisms at the core of the technology? (A short illustrative sketch appears at the end of this subsection.)

Another issue concerns the quality of the results and the quality-assurance processes we deploy when using GenAI tools to create educational materials, or when we receive assignments that were done using those tools. As we know, the mechanisms behind Generative AI tools - Transformers, LLMs, and attention - are a "black box" for end users and even for the code writers. We have no way to explain or track the hidden layers of the procedures. Because of this lack of explainability and transparency, part of the learning process should include developing other means of examining the results, the data used, and some of their ingredients. AI explainability and transparency mechanisms are needed.

At this time, it seems that every country is busy defining its regulatory approach to AI, but this process is likely to be long and iterative. As educators, we must strive to start with small cycles of internal discussion in the higher education sector and identify the most important issues regarding ethical behavior and internal regulation - issues that will allow us to make the best use of generative tools and artificial intelligence while, no less important, maintaining internal criticism, discipline, and a high academic level.
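The following sketch, in Python with the official OpenAI client library, illustrates the point about prompt engineering: the same question phrased vaguely and phrased with an explicit role, audience, and output format typically yields very different answers. The model name, the example prompts, and the reliance on an OPENAI_API_KEY environment variable are assumptions for illustration only; any comparable GenAI API could be substituted.

```python
# Hedged sketch: how prompt wording changes GenAI output.
from openai import OpenAI

client = OpenAI()  # expects an API key in the OPENAI_API_KEY environment variable

vague_prompt = "Tell me about photosynthesis."
engineered_prompt = (
    "You are a biology tutor for first-year undergraduates. "
    "Explain photosynthesis in no more than 150 words, "
    "then list two common student misconceptions about it."
)

for prompt in (vague_prompt, engineered_prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute any available model
        messages=[{"role": "user", "content": prompt}],
    )
    # Print the first part of each answer to compare the two prompts.
    print(response.choices[0].message.content[:200], "\n---")
```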
As academics, we are responsible for training the next generation in the wise use of GenAI applications and for ensuring that the learning processes belong to the students and not to the AI. In light of this, we must first teach students how GenAI applications work, and at the same time instruct them to use GenAI applications only after acquiring in-depth knowledge in the content area of their research. Only in this way will they be able to examine the quality of the answers they receive to the queries they present. This examination will allow students to develop critical skills in a world in which GenAI is an integral part. These skills include critical thinking, higher-order thinking, creativity, and originality. Our way as academics to help students acquire and develop these skills, alongside learning the various content areas, in an era where GenAI applications are developing at a dazzling pace, is to move to project-based learning, that is, learning based on building authentic products. This learning approach may even reduce unethical behaviors associated with students' use of GenAI applications. Indeed, another issue that academia has to consider is related to ethics. We should not assume that students understand the complex ethical issues related to the uses of GenAI applications (issues some of which, in my opinion, we ourselves still do not understand). Therefore, we must teach them ethical thinking as an integral part of the curriculum, and even make sure that all students have an equal opportunity to use these applications wisely.
One of my interpretations of the student survey presented in this report regarding the use of AI is that, overall, students understand what artificial intelligence can and cannot do, and many of them know how to use it intelligently to receive "inputs" into their learning and doing processes. This is a healthy approach, because among the reactions that AI evokes there is quite a bit of mystification, fear, opposition, and fantasy, and these should be dispersed a little. If we could make sure that this is how most students understand artificial intelligence, I would be happy to see 100% of them report using it: without fear, with criticality and curiosity, as a tool that supports their learning and creation, and with an understanding of its limitations. Fortunately, we can actually take care of that. We can integrate working with AI into courses and research processes, and make sure that students do not experience a "split" between their AI-free academic experience and the AI-rich work experience "at home". The more we do this ourselves and in the classroom, the better we will know how to adjust the perceptions, expectations, and skills of the students, and we will even discover, in a daily and experiential way, what the presence of artificial intelligence actually does to our disciplines.
The world will no longer be the same, and our methods of learning, assessment, and teaching cannot remain the same. It is a time for change, and it is a great and exciting gift of renewal, creation, and providing greater value to our learners. So here are three recommendations to assist in adapting ourselves to the new era:
The most significant challenge in the introduction of artificial intelligence to academic learning processes lies in the opportunity to increase our value and significance as faculty who provide academic training. We are required to use AI to work smarter, to do more meaningful things faster, and to discover new opportunities. Otherwise, our graduates will not be well prepared for entering a labor market that is already adopting new technologies at a dizzying pace. Therefore, we are required to help students embrace the use of artificial intelligence. This adoption requires attention to how the student or faculty member uses AI in learning-teaching processes. I propose a scalable AI adoption model with four levels. Each level is a more advanced stage in the adoption of AI, and at each level I offer courses of action to help progress to the next stage:
When we embark on the challenging journey of adopting artificial intelligence, we must remember that the basic elements of academic learning are intellectual curiosity and commitment to growth. As faculty, it is very important that we do not adhere to "cat and mouse" teaching patterns, such as forbidding the use of artificial intelligence and testing carefully for it.
An understanding of the adoption model I proposed and the possibilities of progress from stage to stage will allow us to shape both ourselves and our students as relevant to the current world of research and industry.
As the cliché says: "Artificial intelligence will not replace humans; humans who use artificial intelligence will replace humans." We, as the leaders of the implementation of artificial intelligence, have the duty to help the faculty and students not to fall into the first part of that sentence.
Finally, we have put together key recommendations for decision-makers, including academic staff, preparing for the era of GenAI in academic learning and teaching: