More than ChatGPT

ChatGPT Isn’t the Only Way to Use AI in Education

NABEEL GILLANI, WIRED Magazine & Online

AI can be a tool to create meaningful connections and learning experiences for children—and may help foster more equitable outcomes.

SOON AFTER ChatGPT broke the internet, it sparked an all-too-familiar question for new technologies: What can it do for education?  Many feared it would worsen plagiarism and further damage an already decaying humanism in the academy, while others lauded its potential to spark creativity and handle mundane educational tasks.  

Of course, ChatGPT is just one of many advances in artificial intelligence that have the capacity to alter pedagogical practices.  The allure of AI-powered tools to help individuals maximize their understanding of academic subjects (or more effectively prepare for exams) by offering them the right content, in the right way, at the right time for them has spurred new investments from governments and private philanthropies.  


There is reason to be excited about such tools, especially if they can mitigate barriers to a higher quality of life—like reading proficiency disparities by race, which the NAACP has highlighted as a civil rights issue.  Yet underlying this excitement is a narrow view of the goals of education. In this framework, learners are individual actors who might acquire new knowledge and skills with the help of technology.  The purpose of learning, then, is to master content—often measured through grades and performance on standardized tests.

But is content mastery really the purpose of learning?  Naming reading proficiency as a civil rights issue likely has less to do with the value of mastering reading itself, and more to do with the fact that mastery of reading (or math, or other subjects) can help lay a foundation for what learning can unlock: breaking the intergenerational cycle of poverty, promoting greater self-awareness and self-confidence, and cultivating a stronger sense of agency over one’s destiny and the destinies of one’s communities.  Content mastery is part of this equation, but making it the primary focus of education misses the fact that so much of a child’s future is shaped by factors beyond the classroom.  Critically, networks, or who children and their families are connected to, and how, matter for helping children prepare to live fulfilling lives.  This is especially true for networks that cut across socioeconomic, demographic, and other lines.  Indeed, a large recent study highlighted how social capital, defined as friendships across socioeconomic divides, can play a larger role in fostering intergenerational economic mobility than school quality (often measured by the test scores of students who go there).  

Networks that connect parents to coaches to help them navigate their children’s schooling can forge new support structures and trusting relationships between families and educators.  Networks that connect students to role models and mentors can change the course of their academic and professional lives.  A child’s broader social context, in addition to the knowledge and skills they gain through school, matters deeply for their future outcomes.  Left without intervention, however, real-world networks often form and evolve in inherently unequal ways. For example, patterns of preferential attachment can lead “the rich to get richer,” excluding many from accessing connections that might improve their lives in important ways.

In practice, each AI needs an objective function that represents what it is optimizing for.  Applications of AI for pedagogy and content mastery might optimize for “helping students get the highest possible score on a test.” Fostering more inclusive network connections, however, is a more deeply rooted and structural type of change than improving test scores.  Using AI to help cultivate these networks might do more for children’s life outcomes than focusing on pedagogy and content mastery alone.  

But some may argue that optimizing network connections is a more nebulous task than optimizing test scores.  What, precisely, should the objective function(s) be?

One framework for exploring this may involve focusing on how the networks that children and families are enmeshed in form and evolve in the first place.  In the context of schooling, this involves the wide range of policies that school districts design to determine which schools students can attend (“school assignment policies”), along with the practices families adopt when picking schools for their children under these policies.  Such policies and practices have historically perpetuated harmful features like school segregation by race and socioeconomic status—which, nearly 70 years after it was formally outlawed, continues to define public education in the US.  Many scholars argue that demographic integration has historically been one of the most effective methods not only for enhancing the academic preparation of historically disadvantaged groups, but also for fostering greater compassion and understanding—say, an ethic of pluralism—across people from different backgrounds.

AI can help support the design of more equitable school assignment policies that foster diverse and integrated schools, for example, by supporting district-level planning efforts to redraw “school attendance zones”—i.e., catchment areas that determine which neighborhoods feed to which schools—in ways that seek to mitigate underlying patterns of residential segregation without imposing large travel burdens and other inconveniences upon families.  

Existing researcher-practitioner partnerships—and some of my own research with collaborators Doug Beeferman, Christine Vega-Pourheydarian, Cassandra Overney, Pascal Van Hentenryck, Kumar Chandra, and Deb Roy—are leveraging tools from the operations research community and rule-based AI like constraint programming to explore alternative assignment policies that could optimize racial and socioeconomic integration in schools.

These algorithms can help simplify an otherwise cumbersome process of exploring a seemingly infinite number of possible boundary changes to identify potential pathways to more integrated schools that balance a number of competing objectives (like family travel times and school switching).  They can also be combined with machine-learning systems—for example, those that try to predict family choice in the face of boundary changes—to more realistically estimate how changing policies might affect school demographics.
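
As a rough illustration of what such a boundary-exploration algorithm does, the toy sketch below exhaustively searches assignments of four hypothetical neighborhoods to two schools, rejects plans that violate travel-time or capacity constraints, and scores the rest by how far each school's demographics drift from the district-wide average. All data, thresholds, and the imbalance measure are invented for illustration; real systems use constraint solvers over thousands of zones and richer objectives.

```python
from itertools import product

# Hypothetical toy data: students from two groups (A, B) per neighborhood,
# plus travel time in minutes from each neighborhood to each school.
neighborhoods = {
    "n1": {"A": 40, "B": 10, "travel": {"s1": 5,  "s2": 20}},
    "n2": {"A": 35, "B": 5,  "travel": {"s1": 10, "s2": 15}},
    "n3": {"A": 5,  "B": 45, "travel": {"s1": 20, "s2": 5}},
    "n4": {"A": 10, "B": 40, "travel": {"s1": 15, "s2": 10}},
}
schools = ["s1", "s2"]
MAX_TRAVEL = 20   # constraint: no neighborhood travels more than this
CAPACITY = 120    # constraint: per-school enrollment cap

def imbalance(assignment):
    """Sum over schools of |school's share of group A - district-wide share|."""
    total_a = sum(n["A"] for n in neighborhoods.values())
    total = sum(n["A"] + n["B"] for n in neighborhoods.values())
    district_share = total_a / total
    score = 0.0
    for s in schools:
        a = sum(neighborhoods[n]["A"] for n, sch in assignment.items() if sch == s)
        b = sum(neighborhoods[n]["B"] for n, sch in assignment.items() if sch == s)
        if a + b > 0:
            score += abs(a / (a + b) - district_share)
    return score

def feasible(assignment):
    """Check travel-time and capacity constraints."""
    if any(neighborhoods[n]["travel"][s] > MAX_TRAVEL for n, s in assignment.items()):
        return False
    for s in schools:
        enrolled = sum(neighborhoods[n]["A"] + neighborhoods[n]["B"]
                       for n, sch in assignment.items() if sch == s)
        if enrolled > CAPACITY:
            return False
    return True

# Brute-force search over all neighborhood-to-school assignments.
best = min(
    (dict(zip(neighborhoods, combo)) for combo in product(schools, repeat=len(neighborhoods))),
    key=lambda a: imbalance(a) if feasible(a) else float("inf"),
)
print(best)
```

With only four neighborhoods, brute force suffices; the point of constraint programming is to prune this combinatorial space efficiently when districts have hundreds of zones.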

Of course, none of these applications of AI come without risks. School switching can be disruptive for students, and even with school-level integration, segregation can persist at smaller scales like classrooms and cafeterias due to curricular tracking, a lack of culturally responsive teaching practices, and other factors. Furthermore, applications must be couched in an appropriate sociotechnical infrastructure that incorporates community voices into the policymaking process.  Still, using AI to help inform which students and families attend school with one another may spark deeper structural changes that alter the networks students connect to, and by extension, the life outcomes they ultimately achieve.

Changes in school assignment policies without changes in school selection behaviors amongst families, however, are unlikely to lead to sustainable transformations in the networks that students are tapped into. Here, too, AI may have a role to play.  For example, digital school-rating platforms like GreatSchools.org are increasingly shaping how families evaluate and select schools for their children—especially since their ratings are often embedded across housing sites like Redfin, which can influence where families choose to live.  

Some have argued that school-rating platforms, whose ratings largely reflect test scores (measures notoriously correlated with race and income, and less indicative of how much schools actually help students learn), might have historically led white and affluent families to self-segregate into neighborhoods zoned for highly rated schools, creating a vicious cycle in which residential segregation reinforces school segregation and the ensuing achievement gaps. A recent research project I did in collaboration with Eric Chu, Doug Beeferman, Rebecca Eynon, and Deb Roy fine-tuned large language models to explore how parents’ open-ended reviews on GreatSchools might contribute to such trends.  Our results showed that parents’ reviews are strongly associated with school-level test scores and demographics, but not with measures of student progress, suggesting that parents who consult reviews when choosing schools may be weighing demographics more heavily than actual school effectiveness.

GreatSchools continues to invest in new ratings schemes that seek to break these feedback loops and offer a more complete view of school quality—as Sisyphean a task as it may seem.  What if platforms like GreatSchools also trained and deployed school recommender systems that simultaneously try to expose families to schools that satisfy their desires for their children (for example, rigorous course offerings, language immersion programs, compassionate and nurturing teachers) while also exposing them to schools “outside of their bubbles”—that is, quality schools they might not otherwise consider, perhaps because they have lower test scores, are in neighborhoods they wrote off before ever exploring, or something else? This multi-objective AI would not come without challenges of transparency and agency that accompany recommender systems deployed in other settings, but it could help spark new network connections that may not form otherwise.  
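
To make the multi-objective idea concrete, here is a minimal, hypothetical sketch: each school carries a preference-match score and an "outside the bubble" novelty score (both invented for illustration), and the recommender ranks schools on a weighted blend of the two objectives rather than on preference match alone.

```python
# Hypothetical sketch of a multi-objective school recommender.
# All school names, scores, and the 0.6/0.4 weighting are invented.
schools = [
    # (name, preference-match score in [0, 1], outside-bubble score in [0, 1])
    ("Maple Elementary", 0.9, 0.1),   # familiar and highly matched
    ("Cedar Elementary", 0.8, 0.7),   # well matched AND outside the bubble
    ("Birch Elementary", 0.4, 0.9),   # unfamiliar but a weak preference match
]

def blended_score(match, novelty, w_match=0.6):
    """Weighted blend of the two objectives; w_match trades them off."""
    return w_match * match + (1 - w_match) * novelty

# Rank schools by the blended score instead of preference match alone.
ranked = sorted(schools, key=lambda s: blended_score(s[1], s[2]), reverse=True)
for name, match, novelty in ranked:
    print(f"{name}: {blended_score(match, novelty):.2f}")
```

Note how the blend surfaces Cedar Elementary first: a pure preference-match ranking would have put the familiar Maple Elementary on top and never exposed the family to options outside its bubble.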

These are just some examples, and they are not mutually exclusive with pedagogically focused applications.  For example, while we likely lack the data to do this today, looking ahead, AI might help determine which students would benefit the most from which tutors—those who can not only help bridge learning gaps but also serve as relevant sources of mentorship, guidance, and inspiration.  And expanding our focus in AI for education to include networks will not absolve us of the fairness concerns and other risks that existing deployments of AI continue to pose.  Designing new applications of AI calls for careful and thoughtful exploration, especially as we as a society continue to respond to our rapidly changing AI landscape with a dynamic blend of fear, hope, concern, awe, and wonder.  Of course, as in life itself, all of these emotions are important.  Harnessing them to foster more inclusive network connections for the next generation of learners may be our most meaningful response of all.

https://www.wired.com/story/chatgpt-artificial-intelligence-education-networks/

How AI Is Used In Education

Written by

Bernard Marr

Bernard Marr is a world-renowned futurist, influencer and thought leader in the fields of business and technology, with a passion for using technology for the good of humanity. He is a best-selling author of 20 books, writes a regular column for Forbes and advises and coaches many of the world’s best-known organisations. He has over 2 million social media followers, 1 million newsletter subscribers and was ranked by LinkedIn as one of the top 5 business influencers in the world and the No 1 influencer in the UK.

Bernard’s latest book is ‘Business Trends in Practice: The 25+ Trends That Are Redefining Organisations’

How Is AI Used In Education — Real World Examples Of Today And A Peek Into The Future

While the debate over how much screen time is appropriate for children rages on among educators, psychologists, and parents, another emerging technology, artificial intelligence and machine learning, is already beginning to alter education tools and institutions and to change what the future of education might look like. According to the Artificial Intelligence Market in the US Education Sector report, AI in US education is expected to grow by 47.5% from 2017 to 2021. Even though most experts believe the critical presence of teachers is irreplaceable, there will be many changes to a teacher’s job and to educational best practices.

Teacher and AI collaboration  

AI has already been applied to education, primarily in tools that help develop skills and in testing systems. As AI educational solutions mature, the hope is that AI can help fill gaps in learning and teaching and allow schools and teachers to do more than ever before. AI can drive efficiency and personalization and can streamline administrative tasks, freeing up teachers' time to provide the understanding and adaptability that remain uniquely human capabilities. By leveraging the best attributes of machines and teachers, the vision for AI in education is one where they work together for the best outcome for students. Since today's students will need to work in a future where AI is the reality, it's important that our educational institutions expose students to the technology and use it themselves.


Differentiated and individualized learning

Adjusting learning to an individual student’s particular needs has been a priority for educators for years, but AI will allow a level of differentiation that is impossible for a teacher managing 30 students in each class. Companies such as Content Technologies and Carnegie Learning are developing intelligent instruction design and digital platforms that use AI to provide learning, testing, and feedback to students from pre-K through college, giving them the challenges they are ready for, identifying gaps in knowledge, and redirecting them to new topics when appropriate. As AI grows more sophisticated, a machine might be able to read the expression on a student’s face that indicates they are struggling to grasp a subject and modify the lesson in response. Customizing curriculum to every student’s needs is not viable today, but it will be for AI-powered machines.
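
The adaptive behavior such platforms aim for can be sketched, in a deliberately simplified and hypothetical form, as a loop that maintains an ability estimate, serves the item whose difficulty best matches it, and nudges the estimate after each response. Real systems use far richer models (for example, item response theory); the scale, step size, and item bank here are invented.

```python
# Minimal sketch of an adaptive item-selection loop (all values illustrative).
def update_ability(ability, correct, step=0.3):
    """Nudge the ability estimate up after a correct answer, down after a miss."""
    return ability + step if correct else ability - step

def next_item(ability, item_bank):
    """Pick the item whose difficulty is closest to the current ability estimate."""
    return min(item_bank, key=lambda d: abs(d - ability))

ability = 0.0                          # starting ability estimate
bank = [-1.0, -0.5, 0.0, 0.5, 1.0]     # item difficulties on an invented scale
for correct in [True, True, False]:    # a short, simulated response history
    item = next_item(ability, bank)
    ability = update_ability(ability, correct)
print(round(ability, 1))
```

After two correct answers the loop serves harder items; after the miss it backs off, which is exactly the "challenges they are ready for" behavior described above.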

Universal access for all students     

Artificial intelligence tools can help make global classrooms available to all students, including those who speak different languages or who have visual or hearing impairments. Presentation Translator is a free plug-in for PowerPoint that creates subtitles in real time for what the teacher is saying. This also opens up possibilities for students who might not be able to attend school due to illness, or who need to learn at a different level or about a subject that isn’t taught in their own school. AI can help break down silos between schools and between traditional grade levels.

Automate admin tasks     

An educator spends a tremendous amount of time grading homework and tests. AI can step in and make quick work of these tasks while also offering recommendations for how to close gaps in learning. Although machines can already grade multiple-choice tests, they are very close to being able to assess written responses as well. As AI automates admin tasks, it frees up more time for teachers to spend with each student. There is also much potential for AI to create more efficient enrollment and admissions processes.

Tutoring and support outside the classroom       

Ask any parent who has struggled to help their teenager with algebra, and they will be very excited about the potential of AI to support their children when they are struggling at home with homework or test preparations. Tutoring and studying programs are becoming more advanced thanks to artificial intelligence, and soon they will be more available and able to respond to a range of learning styles.    

There are many more AI applications for education that are being developed including AI mentors for learners, further development of smart content and a new method of personal development for educators through virtual global conferences. Education might be a bit slower to the adoption of artificial intelligence and machine learning, but the changes are beginning and will continue.      

AI Mistakes You Must Avoid

The 12 Biggest AI Mistakes You Must Avoid

16 April 2023 Bernard Marr

The benefits of AI are undeniable — but so are the risks of getting it wrong.

In this post, you’ll learn the 12 biggest AI mistakes organizations make and get practical ways to avoid these common missteps so you can effectively harness the power of AI.


1. Not Going “All In” on AI

AI is the most powerful technology humans have ever had access to — and now, every organization can put it to good use and create value for customers.

To fully realize the potential of AI, though, organizations must commit to its implementation and integration. It’s crucial to invest in the right infrastructure, personnel, and training to ensure successful AI adoption and avoid half-hearted attempts that can lead to wasted resources and suboptimal results.

2. Lack of Clear Business Goals

One of the biggest mistakes companies make is trying to implement AI solutions without having clear business goals in mind. This can result in a lot of wasted time and resources, with little or no return on investment (ROI).

If you’re going to launch AI initiatives in your business, make sure to establish specific, measurable objectives before you begin. By aligning AI projects with clear business goals, you can evaluate their impact and ROI, ensuring your efforts drive meaningful value for your organization.

3. Insufficient Expertise

Having the right expertise is critical for navigating the complexities of AI — but many companies underestimate the level of expertise needed and end up with poorly designed or inefficient systems.

Invest in hiring skilled professionals with expertise in machine learning, data science, and engineering, or focus on upskilling existing employees through training and education. Partnering with experienced consultants or vendors can also help you bridge knowledge gaps.

4. Ignoring Change Management

The successful integration of AI often involves significant changes to organizational processes, workflows, and employee roles. Neglecting the human aspect of AI adoption can lead to internal resistance, confusion, and reduced productivity.

Develop a robust change management strategy that includes clear communication, employee training, and support systems to help workers adapt to the new technology.

By addressing the cultural and behavioral aspects of AI adoption, you can facilitate a smoother transition and ensure your workforce is well-equipped to leverage the potential of AI with minimal disruption.

5. Poor Data Quality

AI models are only as good as the data they’re trained on. If the data used to train an AI model is incomplete, inconsistent, or biased, the model’s predictions may be inaccurate or unreliable.

In your organization, prioritize data quality by collecting, cleaning, and maintaining accurate, up-to-date datasets. Invest in proper data management practices to help you avoid skewed or biased AI models.

6. Neglecting to Involve the Right Stakeholders

Successful AI implementation requires collaboration across different teams, including IT, data science, business strategy, and legal. If a company neglects to involve the right stakeholders, they risk siloed decision-making, suboptimal results, and missed opportunities.

Make sure you’re engaging with all relevant parties early in the process, so you can identify requirements, manage expectations, and encourage collaboration, ensuring smoother AI adoption.

7. Over-Reliance on Black Box Models

Many AI models are complex, and their inner workings can be difficult to understand.

Companies that rely too heavily on “black box” models — complex machine learning algorithms and systems that don’t offer clear explanations for how they produce results — can run into problems with accountability and transparency.

These models are often characterized by their opacity, making it difficult for users, developers, or stakeholders to interpret underlying logic or decision-making processes.

Prioritize transparency in your organization’s AI models. This reduces the risks of unforeseen biases and errors and fosters trust. Consider providing clear explanations of how your AI systems work.

8. Inadequate Testing and Validation

Thorough testing and validation are essential for ensuring the reliability and accuracy of AI models. Plan to invest time and resources into rigorous testing processes, and be prepared to iteratively refine your models so you’re not making decisions based on faulty data.

9. Lack of Long-Term Planning

AI adoption requires long-term planning for ongoing maintenance, updates, and scalability. Companies that don’t plan for the future are at risk of becoming stuck with outdated AI models that don’t deliver expected outcomes.

When planning your AI initiatives, establish a comprehensive roadmap and allocate resources for the future, so your projects remain effective and aligned with evolving business needs.

10. Ignoring Ethical and Legal Considerations

AI models can raise a host of ethical and legal considerations, from data privacy and bias to accountability and transparency. Companies that don’t take these considerations seriously risk damaging their reputation, alienating customers, and even facing legal action.

Be proactive in addressing these types of issues, so your organization can build trust and avoid potential legal and reputational risks.

11. Misaligned Expectations

One common mistake is having unrealistic expectations about what AI can achieve.

While AI has transformative potential, it’s not a magic bullet. When making plans for artificial intelligence adoption, be realistic about AI’s capabilities and limitations. Manage stakeholder expectations throughout the implementation process, so you can avoid disappointment and ensure realistic assessments of potential project outcomes.

12. Failing to Monitor and Maintain AI Models

AI models require ongoing monitoring and maintenance to remain effective. Organizations must be prepared to regularly assess the performance of their AI systems. This will include updating and retraining models as necessary to account for changes in data or shifting business needs.

Neglecting this aspect of AI management can lead to outdated models that produce inaccurate or biased results. Establishing a robust monitoring and maintenance plan is essential for ensuring the long-term success of your AI projects.


Data Quotes


  • “In God we trust, all others bring data.” — W. Edwards Deming
  • “Without big data, you are blind and deaf and in the middle of a freeway.” — Geoffrey Moore
  • “If we have data, let’s look at data. If all we have are opinions, let’s go with mine.” — Jim Barksdale
  • “Without data you’re just another person with an opinion.” — W. Edwards Deming, statistician
  • “Errors using inadequate data are much less than those using no data at all.” — Charles Babbage, inventor and mathematician
  • “If the statistics are boring, you’ve got the wrong numbers.” — Edward Tufte, professor emeritus of political science, statistics, and computer science at Yale University
  • “The ability to take data – to be able to understand it, to process it, to extract value from it, to visualise it, to communicate it – is going to be a hugely important skill in the next decades.” — Hal Varian, chief economist, Google
  • “I have no data yet. It is a capital mistake to theorise before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.” — Sir Arthur Conan Doyle, writer and physician

Wait Time

In 1972, Mary Budd Rowe of the University of Florida coined the phrase “wait time” to describe the period between a teacher’s question and a student’s response. Rowe found that teachers typically wait between 0.7 and 1.5 seconds before speaking after they have asked a question. However, when teachers used wait times of 3 seconds or more, Rowe found demonstrated increases in student creativity and learning. Robert Stahl expanded on Rowe’s concept in 1994 by coining the term “think time”: the period of uninterrupted silence in which both teachers and students reflect on and process their thoughts, feelings, and reactions. Stahl’s definition, although similar to “wait time,” more specifically labels what teachers and students do during that silence as thinking.

  • Wait Time 1 is the time a teacher waits after asking a question before speaking again.
  • Wait Time 2 is the time a teacher waits after a student’s response before speaking.

Use Wait Time 1 to support learning

Mary Budd Rowe defined wait time as the time between when the teacher asks a question and the student responds. This is also called Wait Time 1. 

She found that, traditionally, wait time has been less than 1.5 seconds. However, by doubling wait time to 3 seconds, several positive effects were observed:

  • the length of responses increased
  • the correctness of responses increased
  • more students volunteered answers
  • “I don’t know” responses decreased
  • student confidence increased

Even teacher behaviors benefit from extended wait time. The quality of teachers’ questions increases while the number of questions asked decreases. Quality over quantity!

Now add Wait Time 2 to support learning

Wait Time 2 is the time between a student’s response and the teacher’s reply.

Waiting an additional few seconds here can elicit an extended response from students. In some cases, the teacher may nod or give an “umm.” 

  • A nonverbal signal that the teacher is still considering the response gives students an opportunity to continue responding.
  • Students may not always be able to add any additional information, but Wait Time 2 is an opportunity for extra output and elaboration.

Helpful Links (Really!)

Classroom Jobs can help!

https://www.wholechildmodel.org/classroom-jobs 

Classroom Procedures

35 Classroom Procedures and Routines

Nonverbal Cues 

Setting Classroom Expectations to Improve Behavior 

5 Tips to Help Improve and Set Behavior Expectations

Here are 5 quick tips to establish consistent behavior expectations in your school or district:

1. Define your behavior expectations, along with rewards and consequences.

Invite key stakeholders from across your school to create your behavior expectations. Each desired behavior should be observable, measurable, objective and specific. Defining behaviors in this way also makes it much easier to model them for students, so they can see concrete examples of what they’re expected to do.

Next, establish a reward system for recognizing students who achieve these expectations, and establish consequences for expectations that are not met. Like the expectations, the rewards and consequences should be age-appropriate and consistent.

Finally, share these expectations and get buy-in from all teachers and staff members to ensure they’ll be implemented school-wide.

2. Clearly communicate your behavior expectations to students — and parents.  

One way to communicate consistent behavior expectations to students and parents is to put them in writing.

  • Create a handout, and distribute it to all students and parents.
  • Post the expectations on classroom walls or other prominent places so students can refer to them as often as needed. Even better, post the expectations in or near the area where the targeted behaviors are expected to take place (e.g. posting behavior expectations for the cafeteria on the cafeteria wall).
  • Post the expectations on the school website.
  • Include the expectations in the school handbook.

Then, read the expectations aloud to students. Explain what each expectation means, and why these are necessary and beneficial to everyone.

3. Show students what is meant by each expectation. Model and practice it.

To ensure students understand the behavior expectations, show them what they look like in action. Demonstrate what it looks like when a student is meeting the expectation.

4. Track student behaviors daily, and apply rewards and consequences consistently and equitably.

With Kickboard, you can easily collect, access, analyze, share and act on behavioral data in real time. With behavior management tools such as one-click behavior tracking, you can easily track the positive behaviors that make up your ideal school culture, as well as inappropriate or negative behaviors that need improvement. In addition, you can motivate positive behaviors with goal-based incentives or rewards — such as behavior points, scholar dollars, student paychecks, or school store rewards — which are automatically tracked in Kickboard.

Teachers can help each other too, with one-click tools for behavior-specific notes, teacher-to-teacher comments, sharable dashboards, and room for reflection on student reports.

5. Review and reinforce these expectations throughout the year.

This keeps the behavior expectations top-of-mind for students and staff — and emphasizes how important they are to the culture of the entire school.

Clear, consistent behavior expectations, combined with real-time data tracking, are key components to building a safe, happy school where students and staff thrive. When students feel confident, respected, cared for and supported, disruptions and discipline incidents decline, learning increases, and academic achievement rises.