Artificial intelligence (AI) is no longer a futuristic idea confined to science fiction. In recent years, AI has moved rapidly into classrooms, lecture halls, and online learning platforms (source: https://education.illinois.edu/about/news-events/news/article/2024/10/24/ai-in-schools–pros-and-cons).  From personalized tutoring systems and grading assistants to automated plagiarism detection and AI-driven chatbots, education is undergoing a profound transformation.  But with these opportunities come significant challenges. As schools, universities, and policymakers grapple with this technological shift, the debate intensifies: Is artificial intelligence good or bad for education?

 

Many of the priorities for improving teaching and learning remain unfulfilled today. Teachers look for safe, effective, and scalable technology-enhanced approaches to meet these needs, and they naturally wonder whether the rapid technological change they see in daily life could help. Like everyone else, educators use AI-powered services in their daily lives: voice assistants in their homes, tools that draft essays, correct grammar, and complete sentences, and automated travel planning on their phones. Because AI tools have only recently become available to the general public, many instructors are actively investigating them. By utilizing AI-powered features like speech recognition, teachers see potential to expand the support provided to students with disabilities, multilingual learners, and others who can benefit from greater adaptability and personalization in digital learning tools. They are investigating how AI may help them write or improve lessons, as well as how it might streamline the way they locate, choose, and adapt content for their classes.

 

Teachers are also conscious of emerging risks. Powerful, helpful features may come with new security and privacy hazards. Teachers are aware that AI can automatically generate incorrect or inappropriate output. They are concerned that the associations and automations AI produces may amplify unintended biases. They have observed new ways in which students could pass off other people’s work as their own. They know there are pedagogical techniques and “teachable moments” that a human teacher can seize but that AI models miss or misinterpret. They are concerned about the fairness of algorithmic recommendations. Educators’ worries are numerous, and it is the duty of everyone involved in education to harness the benefits of AI in edtech to advance educational goals while guarding against the risks.

 

Participants in the listening sessions made three arguments for addressing AI in education immediately:

 

First, AI could make it possible to achieve educational goals more effectively, at lower cost, and at larger scale (source: https://www.faulkner.edu/news/the-future-of-learning-positive-applications-of-ai-in-education/). AI may increase the adaptability of learning resources to students’ needs and strengths. Addressing students’ varied unfinished learning from the pandemic is a policy priority. Enhancing teaching is likewise a priority, and AI may extend the support teachers receive through automated assistants and other tools; when teachers run short of time, AI might also let them continue supporting individual students. Another priority is creating resources that are responsive to the experiences and knowledge students bring to their education, including their cultural and community assets, and AI may make it possible to better tailor curriculum materials to local needs. AI’s potential to improve educational services is suggested by familiar applications such as voice assistants, navigation tools, shopping recommendations, and essay-writing aids.

Second, urgency arises from worry over possible future threats and awareness of system-level risks. For instance, students might be surveilled more closely. Although the U.S. Department of Education firmly rejects the idea that AI could replace teachers, some educators fear being replaced. The public is already aware of discrimination arising from algorithmic bias: a voice recognition system that struggles with regional dialects, for example, or an exam monitoring system that unjustly flags certain student groups for disciplinary action. Some applications of AI may be opaque and infrastructural, raising questions about trust and transparency. AI often arrives in novel applications with a sense of enchantment, but educators and procurement regulations require edtech to demonstrate effectiveness. AI may produce information that seems plausible but is factually wrong or unfounded. Above all, AI poses new risks beyond the familiar ones of data security and privacy, such as the potential for pattern detectors and automations to scale up “algorithmic discrimination” (i.e., systematic unfairness in the resources or learning opportunities recommended to certain student populations).

 

Third, the magnitude of potential unforeseen or unintended consequences creates urgency. When AI automates educational decisions at scale, teachers may encounter unintended repercussions. As a basic illustration, achievement gaps may widen if AI accelerates the curriculum for some students and slows it for others based on insufficient information, flawed models of learning, or biased assumptions. The quality of currently available data can also produce surprising outcomes. An AI-powered teacher hiring system, for instance, might be assumed to be more impartial than one that scores resumes by hand; yet if it depends on low-quality historical data, it may deprioritize applicants who would bring both talent and diversity to a school’s teaching staff.
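To make that last point concrete, here is a minimal, hypothetical sketch (all data is synthetic and all variables are invented for illustration) of how a scoring model trained on biased hiring history reproduces that bias:

# Hypothetical illustration (synthetic data): a resume-scoring model
# trained on biased hiring history reproduces that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

qualification = rng.normal(0, 1, n)   # how qualified each applicant is
group = rng.integers(0, 2, n)         # demographic group, 0 or 1

# Biased history: past hiring favored group 0 at every qualification level.
p_hired = 1 / (1 + np.exp(-(qualification - 1.5 * group)))
hired = rng.random(n) < p_hired

model = LogisticRegression().fit(np.column_stack([qualification, group]), hired)

# Two applicants with identical qualifications, differing only in group.
scores = model.predict_proba([[1.0, 0], [1.0, 1]])[:, 1]
print(scores)  # the group-1 applicant receives a markedly lower score

Nothing in the model is malicious; it simply learns the pattern in the data it was given, which is exactly why the quality of historical data matters.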

 

In short, AI in education must be addressed immediately: to seize important opportunities, to prevent and mitigate emerging hazards, and to prepare for unforeseen consequences.

 

According to the Stanford Institute for Human-Centered AI’s 2025 AI Index Report (source: https://hai.stanford.edu/ai-index/2025-ai-index-report), there has been a noticeable uptick in both AI investment and ethical research, including studies on fairness and transparency.

 

Top Takeaways:

 

– New benchmarks introduced in 2023 (MMMU, GPQA, SWE-bench) saw large performance gains within one year (+18.8, +48.9, and +67.3 percentage points, respectively).

 

– AI systems are increasingly capable at video generation and programming under time constraints, occasionally outperforming humans in restricted settings.

 

– In healthcare, the FDA approved 223 AI-enabled medical devices in 2023 (versus just 6 in 2015).

 

– Self-driving and robotic mobility solutions are scaling: e.g. Waymo giving 150,000 autonomous rides weekly; Baidu’s Apollo Go robotaxi deployed across Chinese cities.

 

– In 2024, U.S. private AI investment reached $109.1 billion, far exceeding China’s $9.3 billion and the U.K.’s $4.5 billion.

 

– Generative AI alone drew $33.9 billion globally, an increase of 18.7% over 2023.

 

– 78% of organizations reported using AI in 2024, up from 55% in 2023.

 

– U.S. institutions produced 40 “notable” AI models in 2024, compared to 15 in China and 3 in Europe.

 

– U.S. federal agencies proposed 59 AI-related regulations in 2024, more than twice the number in 2023.

 

– In the U.S., computing bachelor’s degrees have grown 22% over the past decade.

 

– Among U.S. K–12 CS teachers: 81% believe AI should be included in foundational CS education, but less than half feel prepared to teach it.

 

– 90% of “notable” AI models in 2024 originated from industry (versus 60% in 2023).

 

– Academia still leads in highly cited research.

– Scale continues to grow: training compute doubles roughly every 5 months, dataset sizes every 8 months, and power use every year (see the quick arithmetic after this list).

 

– Two Nobel Prizes acknowledged deep learning foundations (physics) and applications (protein folding in chemistry).

 

– The Turing Award also honored advances in reinforcement learning.

 

– While AI models perform well on many tasks (e.g. Olympiad math problems), they struggle with logic and precise reasoning benchmarks (e.g. PlanBench).

 

– This limitation is especially relevant in high-stakes domains where error tolerance is low.
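Those doubling periods compound quickly. As a rough back-of-the-envelope check (assuming growth stays exponential at the stated rates), here is what they imply over a single year:

# Back-of-the-envelope check (assumes growth stays exponential at the
# stated doubling rates).
def annual_growth(doubling_months):
    """Multiplicative growth over 12 months given a doubling period."""
    return 2 ** (12 / doubling_months)

print(f"compute:  {annual_growth(5):.1f}x per year")   # ~5.3x
print(f"datasets: {annual_growth(8):.1f}x per year")   # ~2.8x
print(f"power:    {annual_growth(12):.1f}x per year")  # 2.0x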

 

AI developments are not just occurring in research labs; they are also garnering attention from the general public and publications devoted to education.

 

Researchers have developed a variety of ideas and frameworks for ethical AI, as well as for associated concepts like human-centered, equitable, and responsible AI. Listening session participants called for building on these ideas and frameworks while acknowledging the need to go further: given the speed at which AI is being incorporated into mainstream technologies, there is an urgent need for rules and regulations that ensure the safe use of AI advancements in education. Because policy development takes time, policymakers and education stakeholders must begin now to define the requirements, disclosures, rules, and other frameworks that can create a safe and beneficial future for all parties involved, particularly students and teachers.

 

We have highlighted how AI advancements improve adaptivity, but also how adaptivity is constrained by the intrinsic qualities of the underlying model. We noted that the term “personalized” was used differently in a previous wave of edtech, so it was frequently necessary to pin down what personalization meant for a given product or service. Our main suggestion, therefore, is to identify the strengths and weaknesses of the AI models in upcoming edtech products and to concentrate on models that closely match the desired vision of learning. Because artificial intelligence is developing quickly, we must distinguish products with basic AI-like capabilities from those with more sophisticated AI models.

Looking at what is occurring in research and development, there is a noticeable push to overcome these restrictions. We pointed out that since there is no such thing as a generic artificial intelligence, decision makers should exercise caution: a poorly chosen AI model could limit students’ capacity for learning.

 

Furthermore, because AI models will always be more limited than real-world experience, we must continue to apply systems thinking: keeping people in the loop and weighing the strengths and weaknesses of the particular educational system. We maintain that the learning system as a whole is more comprehensive than its AI component.

 

Potential Benefits of AI in Education:

 

– Personalized Learning: AI can tailor educational content to each student’s individual pace, style, and needs, leading to deeper engagement and understanding.

 

– Increased Efficiency: AI tools can automate tasks like grading and administrative duties, freeing up educators’ time to focus on teaching and student support.

 

– Enhanced Accessibility: AI can provide access to high-quality educational resources and virtual tutoring, potentially bridging gaps for diverse learners.

 

– Improved Feedback: Students can receive real-time, detailed feedback on their work, which helps them identify strengths and weaknesses and improves learning outcomes.

 

– Data-Driven Insights: AI can provide educators with valuable data on student performance, helping them to identify trends and areas for instructional improvement.

 

Potential Downsides of AI in Education:

 

– Privacy and Security Risks: AI systems collect and process sensitive student data, raising concerns about data privacy and the potential for misuse (source: https://www.eschoolnews.com/digital-learning/2024/02/05/what-is-the-impact-of-artificial-intelligence-on-students/).

 

– Algorithmic Bias: AI models can perpetuate and even amplify biases present in the data they are trained on, leading to unfair or inequitable outcomes in assessments.

 

– Over-Reliance on Technology: Students may become too dependent on AI tools, which could hinder the development of essential non-cognitive skills and creative problem-solving.

 

– Reduced Human Interaction: An overemphasis on technology might lead to less face-to-face interaction, impacting students’ social and emotional development.

 

– Implementation Costs and Skills: The initial cost of implementing AI systems can be high, and teachers may lack the necessary skills or resources to use these tools effectively.

 

Teaching is a famously difficult profession; teachers make thousands of decisions every day. They run classroom operations, interact with students outside the classroom, collaborate with other educators, and handle administrative duties. As members of their communities, they also engage with families and caregivers.

 

Consider how much simpler some daily tasks have become. We send and receive event alerts and notifications. Even in the era of digital music, choosing what to listen to used to require a number of steps; these days, we can simply say the name of a song and it starts playing. Mapping a route once meant laborious study of paper maps; now our phones let us choose among modes of transportation to reach our destination. Why can’t educators be given the tools they need to implement a technology-rich lesson plan and the assistance they need to recognize the evolving requirements of their students? Why is it so difficult for them to arrange their students’ learning paths? Since classroom dynamics are continually changing, why don’t the resources available to teachers help them quickly adjust to their students’ needs and skills?

 

The broadest loop in which teachers should participate is the one that decides which resources reach the classroom and what those resources do there. Teachers are already involved in the design and selection of technologies today: they comment on practicality and usability, examine efficacy data and report results to school leaders, and exchange ideas about how to use technology effectively.

 

These concerns will persist, and AI will give rise to new ones. They go beyond data security and privacy, drawing attention to the ways technology may unjustly restrict or steer some students’ educational opportunities. One important lesson is that teachers will need time and support to stay current on both emerging and well-known challenges, and to engage fully in design, selection, and evaluation processes that reduce risk.

 

Drawing on the teacher’s knowledge of each student’s needs and strengths, AI could assist educators in personalizing and tailoring resources for their students. Customizing curriculum materials takes substantial effort, and educators are already exploring how AI chatbots may help them create new materials: an elementary school teacher could get strong support for altering a storybook’s illustrations to engage their students, changing vocabulary that doesn’t fit local speech patterns, or even rewriting narratives to include additional educational components. AI might also be useful in determining a learner’s capabilities. For instance, a math teacher might not know how a student in another teacher’s physics class is making sense of graphs and tables about motion, and so might miss that similar motion graphs could support a lesson on linear functions. By developing or modifying educational materials, AI may help educators identify and build on students’ abilities. However, the broad equity concerns of preventing algorithmic bias while enhancing community and cultural responsiveness must be addressed through the four pillars we described earlier: humans in the loop; equity; safety and efficacy; and evaluation of AI models.

 

Based on the needs of teachers (as well as students and their families/caregivers), we now add another layer to our criteria for good AI models: explainability. Some AI models can identify patterns in the world and take an appropriate action, yet cannot explain their actions (e.g., how they connected the pattern to the action). That will not be enough for instruction: teachers will need to understand how an AI model evaluated a student’s work and why the model suggested a specific tutorial, resource, or next step.

 

A teacher’s capacity to evaluate an AI system’s conclusions therefore depends on how explainable those conclusions are. Explainable AI helps teachers calibrate appropriate levels of trust and distrust, especially by revealing where an AI model tends to make poor decisions. Explainability is also essential for teachers to spot instances in which an AI system may be acting unfairly on the basis of incorrect information.

 

The concept of explainability revolves around the requirement that educators be able to examine an AI model’s actions. For instance, which students are receiving which kinds of instructional recommendations? Which students are stuck in a never-ending cycle of remedial assignments, and which are advancing? Dashboards in existing products show some of this data, but with AI, educators may want to know which decisions are being made, for whom, and which student-specific factors an AI model had access to (and perhaps which factors influenced a given decision). Some of today’s adaptive classroom tools, for instance, use limited recommendation models that consider only a student’s performance on the last three math problems, ignoring other factors a teacher would know to take into account, such as whether a student has an Individualized Education Program (IEP) or other needs. The sketch below illustrates the pattern.
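Here is a minimal, hypothetical sketch of such a narrow recommender (the field names, threshold, and data are invented). It shows both the explainability idea, logging exactly which inputs drove a decision, and the blind spot, the inputs the model never saw:

# Hypothetical sketch of a narrow recommender: it sees only a student's
# last three scores. Field names and the 0.6 threshold are invented.
from dataclasses import dataclass

@dataclass
class Student:
    name: str
    last_three_scores: list   # fractions correct on the last three problems
    has_iep: bool             # known to the teacher, NOT to the model

def recommend(student):
    """Return (decision, explanation) using only recent scores."""
    avg = sum(student.last_three_scores) / 3
    decision = "remedial set" if avg < 0.6 else "advance to next unit"
    # The explanation exposes which inputs drove the decision and, just
    # as important, which inputs the model never saw.
    explanation = (f"used last-3 average = {avg:.2f}; "
                   "did not consider IEP status or other context")
    return decision, explanation

rec, why = recommend(Student("A. Lee", [0.5, 0.4, 0.6], has_iep=True))
print(rec, "|", why)

Even this toy example makes the audit question answerable: a teacher can see that the recommendation rested on three data points and nothing else.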

 

Supporting our call for equity to be considered when evaluating AI models requires information about how discriminatory bias can manifest in specific AI systems and what developers have done to counter it. That can only be accomplished through transparency about how the tools use datasets to produce results, and about which data a teacher may use in making decisions that the system cannot access (IEP status, in the example above).

 

Additionally, teachers will need to be able to observe and evaluate automated decisions for themselves, such as which set of arithmetic problems a student should work on next. When they disagree with the reasoning behind an instructional recommendation, they must be able to step in and override it, and they must be protected from unfavorable consequences when they exercise human judgment over an AI system’s choice.
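As a sketch of what that guarantee might look like in software (a hypothetical wrapper; all names are invented), teacher judgment always takes precedence, and overrides are logged for later review rather than penalized:

# Hypothetical human-in-the-loop wrapper: the AI proposes, the teacher
# decides, and overrides are logged for review rather than penalized.
from datetime import datetime, timezone

def final_assignment(ai_choice, teacher_choice, log):
    """Teacher judgment always takes precedence; overrides are recorded."""
    if teacher_choice is not None and teacher_choice != ai_choice:
        log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "ai_choice": ai_choice,
            "override": teacher_choice,
        })
        return teacher_choice
    return ai_choice

audit_log = []
print(final_assignment("remedial set B", "enrichment set A", audit_log))
print(audit_log)   # evidence that later informs evaluation of the AI model

The override log serves a second purpose: over time it shows exactly where teachers distrust the model, which is valuable input for evaluating and improving it.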

 

AI models and AI-enabled technologies may strengthen formative assessment. For instance, AI algorithms can assess question types that ask students to draw a graph or construct a model, then group comparable student responses for the teacher to interpret. With enhanced formative assessment, teachers may be able to respond more effectively to students’ comprehension of a topic like “rate of change” in a complicated, real-world scenario. AI may also give students feedback on difficult skills, such as speaking a foreign language or learning American Sign Language, and in other practice scenarios where no human is available to offer prompt input.
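A minimal sketch of that grouping idea, assuming each student response can be summarized as a few numeric features (here, the slope and intercept of a line each student drew; all data is synthetic):

# Cluster similar student responses so the teacher reviews a few
# clusters instead of every answer. Features are invented.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
responses = np.vstack([
    rng.normal([2.0, 0.0], 0.2, (10, 2)),   # roughly correct graphs
    rng.normal([2.0, 5.0], 0.2, (8, 2)),    # right slope, wrong intercept
    rng.normal([-2.0, 0.0], 0.2, (6, 2)),   # sign error on the slope
])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(responses)
for k in range(3):
    members = responses[labels == k]
    print(f"cluster {k}: {len(members)} students, "
          f"mean slope {members[:, 0].mean():.1f}, "
          f"mean intercept {members[:, 1].mean():.1f}")

Instead of grading 24 graphs one by one, the teacher sees three groups, each pointing to a distinct misconception worth addressing with the whole group.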

 

In general, teachers may find that an AI assistant lightens their workload by evaluating the simpler parts of student responses, freeing their specialized judgment for the crucial elements of a lengthy essay or intricate project. AI might also make feedback more accessible: an AI-enabled learning tool could, for instance, talk with a student about their response to an essay prompt and pose questions that help them clarify their position, without requiring the student to view a screen or type at a keyboard.

 

As the earlier examples demonstrate, AI can be integrated into the learning process to give students feedback while they are working on a problem, rather than only after they have arrived at an incorrect solution. Greater integration of formative assessment can enhance learning, and prompt feedback is essential.

 

Even though AI and formative assessment have many synergies, our listening sessions also showed that participants want to address formative assessment’s current drawbacks: tests, quizzes, and other assessments can be time-consuming and occasionally burdensome, and teachers and students often get too little value from the feedback loop.

 

A few AI-powered tools and systems aim to resolve this tension. One AI-enabled reading tutor, for instance, listens to students read aloud and offers immediate feedback to help them read better; students reported that reading aloud was enjoyable, and the method worked. Researchers have also incorporated formative assessment into games, letting students demonstrate their mastery of Newtonian physics as they progress through increasingly challenging levels. It can be a positive thing if students can more readily ask for and receive assistance when they feel confused or frustrated. To demonstrate their learning, students must feel secure and confident, and they must trust the feedback these AI-enabled tools and systems produce. It is best to concentrate on learning progress and gains, without unfavorable outcomes or a high-stakes setting.

AI-enhanced formative assessment could also free up teachers’ time (such as time spent grading) so they can devote more of it to student support. Teachers may gain further if enhanced assessments offer insights into student needs or strengths that aren’t otherwise apparent, and if they encourage instructional adjustment by offering a small number of research-based recommendations to help students grasp the material. Such assessments might also be useful outside the classroom, giving feedback when the teacher is not present, such as when students are doing homework or rehearsing a topic during study hall. As we discussed earlier in the context of teaching, putting teachers at the heart of system design is crucial to implementing AI-based formative assessment.

 

Adoption decisions are also heavily influenced by educators, students, and their families/caregivers. When educators challenge or overrule an AI model based on their professional judgment, parents and leaders must stand by them. Technology developers must be open about the models they employ, and legislators may need to establish transparency standards so that the market can operate on knowledge about AI models rather than mere assertions of their advantages.

 

Early AI systems incorporated many key ideas, such as how to sequence learning activities and provide students with feedback. However, the fundamental approach was frequently deficit-based: by concentrating on what was wrong with a student, the algorithm selected pre-existing learning materials to address that weakness. Going forward, we need to use AI’s capacity to identify and build on learner strengths. We also know that learning is strongly social and that people are inherently social, despite the individualistic techniques of recent years (source: https://ijisae.org/index.php/IJISAE/article/view/5928/4680). In the future, we must develop AI capabilities that align with social and collaborative learning and that value students’ entire human skill set, not just their cognitive abilities. We must also work to develop AI systems that are culturally sensitive and culturally sustaining, taking advantage of the growing body of documented methods for doing so. Additionally, most early AI systems offered limited assistance to English language learners and students with disabilities; in the future, AI-powered learning materials must be purposefully inclusive of these students. The field has not yet developed edtech that strengthens each student’s capacity for decision-making and self-regulation in progressively complex situations. We must create educational technology that enhances students’ capacity for creative learning as well as for discussion, writing, leadership, and presentation.

 

Additionally, we urge educators to reject AI applications that rely solely on machine learning from data, without incorporating knowledge from experience and learning theory. Creating equitable and successful educational systems takes more than processing “big data”; while we want data-driven insights, human interpretation of data remains crucial. We oppose technological determinism, the view that data patterns alone should dictate our course of action. AI applications in education must be grounded in well-established, contemporary learning theory, the wisdom of educational professionals, and the educational assessment community’s expertise in detecting bias and enhancing equity.

 

So, is AI good or bad for education? The answer is not simple.

 

AI offers enormous potential to personalize learning, improve access, and support teachers (source: https://www.nea.org/resource-library/artificial-intelligence-education). It can reduce administrative burdens and provide valuable insights into student performance. At the same time, it raises pressing concerns about privacy, ethics, inequality, and the erosion of critical thinking.  Ultimately, AI in education is neither inherently good nor bad—it is a tool. Like any tool, its impact depends on how we use it. If integrated thoughtfully, with safeguards for ethics and equity, AI could transform education for the better. If adopted recklessly, it risks undermining the very goals of learning.  The key lies in balance: embracing innovation while preserving the human heart of education. Teachers, students, and policymakers must work together to shape a future where AI empowers rather than replaces, complements rather than dominates.

 

Teachers have already risen to the task: developing broad standards, devising targeted applications for the AI-enabled tools and systems available today, and identifying problems. However, educators’ influence on future AI-enabled products is not guaranteed; stakeholders need policies that support it. Could we establish a national corps of top educators from every state and region to serve as leaders? Could we commit to building the professional development supports that are required? Can we figure out how to compensate teachers so they can play a key role in shaping education’s future? New policies should enable teachers to participate actively in the development of AI-enabled educational systems.

 

Jeff Palmer is a teacher, success coach, trainer, Certified Master of Web Copywriting and founder of https://EbookACE.com. Jeff is a prolific writer, Senior Research Associate and Infopreneur having written many eBooks, articles and special reports.