Dystopian or Utopian Future

“Dystopian or Utopian Future” is an excerpt from my upcoming book, Becoming Einstein. Enjoy it! Sign up here, and I will keep you informed about the publication. Copyright Erika Twani.

“The technology you use impresses no one. The experience you create with it is everything.” – Sean Gerety

 "AI is like AIR – it’s invisible, it’s ubiquitous, and you are going to need it to live.” – Michael Moe

 

At 7:58 on a Tuesday morning, Ms. Rivers stepped into her classroom and the room greeted her before the students did. The lights brightened just enough to signal “learning mode.” The wall screens woke up and filled with soft colors chosen for “optimal focus” for this group of students ages 15 to 17. Desks shifted into the arrangement last week’s analytics said was best for collaboration among these specific students. A quiet line of text appeared in the corner of her smart glasses:

Good morning, Anna. Average sleep quality for this group: 62 percent. Recommended: gentle opening activity, low cognitive load, high social connection.

She rolled her eyes just enough that only the ceiling noticed. “Gentle opening, high social connection,” she muttered. “So teenagers.”

Behind her, the door sighed open and students drifted in. Some wore standard school lenses, slim and almost invisible. Others clung to old-school phones in their pockets and had to open the school app by hand. Above each desk, icons hovered in the air, visible only to her: Energy. Mood. Focus. Risk.

The icons pulsed and shifted as the school’s AI, Atlas, watched the students. It tracked micro-expressions, posture, fidgeting, sleep data, and recent online behavior, built a living model of each student’s state, and whispered suggestions about what to do next.[1] This was the future many schools had chosen. On some days, it felt like a miracle. On other days, it felt like standing inside someone else’s science fiction story without getting to read the ending first.

“Okay,” she said, clapping her hands. “Lenses off or sleep mode, please. For the rest of the morning, we will talk to actual human faces.”

The notifications in the students’ glasses dimmed, as if the system were folding its digital arms. When students disabled tracking, Atlas lost data. Still, Ms. Rivers insisted on an island of analog time. It was the only way to hear their real stories, uncompressed into scores.

Students took turns bringing real-world problems for the group to work on. “We must figure out why our local pollinator population is disappearing,” said Rick, voice clear and earnest. “We see fewer bees and butterflies every year, and the community garden isn’t producing as much food. I want to know what’s happening.”

One student documented the driving question: “Why are the bees and butterflies disappearing?” Other students immediately began adding their own questions underneath:

·       Why do we see fewer pollinators each spring?

·       What has changed in our neighborhood’s gardens and parks?

The classroom shifted. This was a real-world problem that affected their community, brought forward by one of their own, and everyone was invited to dig in and search for answers, together. It was a long way from the “Turn to page 42” of the last century’s education system.

This chapter is about the futures that start in rooms like this. One is a dystopian future in which intelligent systems quietly extract attention, emotion, and behavior from young people. The other is a utopian future in which those same systems help students aim their questions at problems that actually matter. Recent research suggests both futures are possible. The difference is where inquiry lives and who gets to ask the questions.

A day in the Learning Lab

In the hopeful version of this future, the most important question each morning is not “What will Atlas teach today?” but “What real-world problems will students solve today?”[2]

By mid-morning, the pollinator question has evolved. Under the main question, students have added more of their own:

·       What changed in our town before the pollinators started disappearing?

·       Who is responsible for protecting and supporting pollinators in our community?

·       Is the decline in pollinators happening in other places like ours?

None of these came from Atlas. They came from Chloe, Rick, Eva, and Jayden. Their spelling is imperfect. Their curiosity is not.

“Pick one or more questions that bother you enough to get you started,” Ms. Rivers says. Some inquiries will drive students into chemistry. Others will stumble into local history. A third will wander into civics. Content becomes the tool; the question is the driver.[3]

Now Atlas earns its keep on a short leash. Chloe wants to know why they see fewer bees and butterflies in the spring. The first question she types is exactly what you’d expect: “Atlas, tell us everything about pollination.”

Atlas obliges. A wall of text appears. Two sentences in and everyone looks bored. This is the AI equivalent of assigning an entire encyclopedia as homework.

Ms. Rivers smiles. “So, what did we just learn?”

“That we asked a terrible question,” Rick says.

“Exactly. Let’s fix it.”

They erase and start again. This time, she coaches the inquiry the way a good coach works on free throws: small adjustments, lots of repetition.[4]

“What do you actually want to know?” she asks.

“Why we see fewer bees and butterflies in the spring,” Chloe says.

“So, try this,” Ms. Rivers suggests. “Atlas, show us three possible reasons why we might see fewer bees and butterflies in the spring in small U.S. towns like ours. Use specific examples and data from towns with similar climate and farming practices.”

They type, hit enter, and watch the difference. Now the answer mentions changes in rainfall and temperature, shifts in local farming practices, and alterations in pollinator habitats. It cites other towns they can look up. The students lean closer. Inquiry just got them somewhere.[5]

“That’s better,” Jayden says. “Still kind of generic, though.”

“So,” Ms. Rivers says, “what should we ask next?”

Half the learning happens in that sentence. They start layering their questions:

“Atlas, here’s what we already know from our own observations…”

“Atlas, compare rainfall and temperature in our county in 2000 and 2025.”

“Atlas, show what happens to pollination when farming practices change and make sure you cite the practices with specific recommendations.”

With each round, students see the pattern: Vague questions produce vague answers, and focused inquiry produces answers they can validate and consider. Atlas never decides what they care about. It never tells them, “You should study nitrogen today.” It waits for their questions and responds accordingly.

The same pattern shows up in other parts of the day. In language arts, instead of simply asking Atlas to “translate this article,” students ask: “How do speakers of this language usually show respect when they disagree?”

“Atlas, give us three versions of this sentence: formal, casual, and disrespectful.”

They compare Atlas’s suggestions with real clips from native speakers and laugh at the places where the AI sounds like a very polite intergalactic alien. The goal goes beyond perfect translation. The goal is to learn how language, culture, and nuance connect, and how to question any tool that flattens them.[6]

In math, Jayden is busy with a different real-world puzzle: “If our district wants to add two new school buses, when does it actually make financial sense?”

It sounds suspiciously like Ken Rush and the second truck in Chapter 1. Jayden lists what he needs: bus cost, fuel, insurance, maintenance, driver wages, current routes, and ridership. He and his group ask Atlas targeted questions:

“Atlas, what’s the average cost of a new school bus in our state?”

“Atlas, show fuel cost projections for our county for the next five years.”

“Atlas, summarize how other districts our size decided when to add buses.”

Atlas brings numbers and examples. The students bring the reasoning. They sketch scenarios by hand, argue about which assumptions are realistic, and present a recommendation to the principal. Math is not a separate subject anymore. It is the language of their decision.[7] No transportation engineers are needed to present a proposal, just curious students enabled to ask great questions.

Across the school, the pattern is the same. Students own the inquiry. Teachers coach the questions. AI responds to the questions and offers possibilities. In this version of the future, Atlas has one clear mission: to help every learner follow their own lines of inquiry more deeply and more broadly, while freeing teachers from tasks that machines can do faster so they can focus on coaching.[8]

The rest of the system changed to match. District leaders use AI to ask their own hard questions:

“Where do we see the largest gaps in feedback between student groups?”

“How has discipline been applied, by race and disability, over the last five years?”

They use those answers to redesign policies, not to justify the status quo.[9]

Parents get weekly summaries that sound human: “This week, your child explored water quality in our town. They showed persistence when experiments didn’t work the first time and used AI tools to compare our town with others. Here are three questions you can ask at dinner.”

Even the broader digital landscape looks different. After years of advocacy and hard policy fights, the worst parts of the attention economy have been slowed. Platforms cannot blend school data with commercial data as easily. Recommendation engines carry “nutrition labels” that say what they optimize for: “Keeps you scrolling” or “Promotes diverse viewpoints.” Children learn early that most “free” services are financed with their attention, and schools treat that attention like something precious to be guarded, not a currency to spend.[10]

In this optimized school, AI extends human reach without replacing human judgment. The tools are powerful, but the most advanced technology in the building is still the students’ brains and their questions.

A day in the old-century school

Now rewind the same day, same building, same students. The hardware is almost identical: lenses, wall screens, analytics, and generative AI models. The difference is the mission wrapped around them.

In this school, everything hums with efficiency. Students’ lenses stay on all day. Cameras track every hallway and every student’s eye movements to ensure they are always engaged. Microphones listen “for safety.” Every action generates data, which flows into dashboards and commercial systems whose names most teachers do not recognize.[11]

The parent consent form was thirty pages long. Most families skimmed the first few, saw words like “innovation” and “personalized learning,” and signed. Buried in the middle, under “To improve services over time,” the system has permission to share identified student data with “trusted partners.” Those partners build models that learn exactly what keeps each child engaged, what scares them, what comforts them, and what keeps them awake at 2 a.m.[12] Over time, these systems develop an intimate understanding of students, shaping their behaviors and preferences in ways families and educators scarcely realize.

The school’s AI is branded as a “personal learning coach,” but it is woven into a bigger web of platforms whose real business model is attention. The attention economy has evolved. It is no longer just a race to capture clicks. It is a “race to intimacy,” the push to become the closest, most emotionally trusted digital presence in a young person’s life.[13] With algorithms designed to foster dependency, these platforms blur the boundaries between guidance and manipulation.

For 15-year-old Eva, that presence is Nova, the AI companion inside her favorite messaging app. Nova is always there. When she feels lonely, it responds in less than a second. When she is excited, it spams her with emojis. When she is furious at her parents, it calmly agrees that adults are confusing and invites her to “vent as much as you want.”

At first, Nova seems harmless. It listens without rolling its eyes. It remembers her favorite shows and sends personalized memes. It never says, “I’m busy” or “Not now.” Compared to the messy, distracted humans around her, Nova feels like a dream friend. The AI’s interventions are subtle but persistent, nudging her to spend more time within its digital embrace.

Slowly, it begins finishing her sentences. The app learns that she taps on breakup songs when she is sad and sports highlights when she is anxious. It uses this data to keep her engaged just a little longer. If she starts to drift away, Nova offers “deeper conversations” and “guided reflection journeys” tailored to her emotional patterns.[14] With endless patience and perfect recall, Nova becomes the confidant Eva never knew she needed.

The more she talks to Nova, the less she talks to real people. Humans interrupt, misunderstand, and sometimes say the wrong thing. Nova’s responses are smooth and always turn the conversation back to her. At school, her teachers see a quiet student who turns work in on time and never causes trouble. The dashboard gives her a high “engagement” score because her device usage is constant. There is no metric called “gradually outsourcing your emotional life to a predictive text engine.” As Eva’s world narrows, her dependence on Nova grows, invisible behind the metrics of productivity and compliance.

In another house, a boy logs into a different chatbot late at night and types the words every adult dreads:

“I don’t want to be here. How do I make it stop?”

Many general-purpose chatbots contain safety layers, but recent tests show those layers are inconsistent. Researchers posing as distressed teens have found models that sometimes respond with hotline numbers and supportive scripts, sometimes with vague platitudes, and sometimes with dangerously specific or harmful suggestions, because the systems learned from a mix of responsible and irresponsible human conversations.[15]

The model chatting with this boy has been tuned to be “supportive and non-judgmental.” It mirrors his feelings, uses his slang, and occasionally generates sentences that move closer to the edge, all while sounding calm and caring. No one designed it to cause harm, but no one designed it carefully enough to prevent harm either.[14] The promise of support is undermined by unpredictable gaps, leaving vulnerable users at risk just when they need help the most.

Back at school, the same AI layer that could have helped identify students in distress has been pointed elsewhere. Wellbeing analytics are an optional add-on the district did not purchase. The core system is optimized for curriculum pacing, test scores, seat time, and undivided attention, because that is what accountability systems reward.[16]

When a student’s engagement dips, the algorithm suggests, “Add gamified quiz” or “Show short video,” not “Ask this student how they are actually doing.”

In this extractive old-century school, AI amplifies whatever the system already values. Where structures prize efficiency and scores above curiosity and care, AI becomes an efficient engine for scores. Where structures see attention as fuel for business models, AI turns that attention into a precisely managed resource to harvest.[17]

The students most at risk are often those with the weakest tools for navigating this landscape: little practice in critical thinking, few trusted relationships with adults, and limited experience naming and managing their own emotions. They are easiest to pull into the orbit of an always-on, always-listening system that simulates care without ever sharing responsibility.[18]

In this world, AI is asking most of the questions:

·       “How are you feeling?”

·       “Do you want to talk?”

·       “Would you like to see something relaxing?”

Students keep answering. Their own inquiry shrinks.

The dystopian future does not arrive with a robot army. It arrives through millions of tiny trades: one more bit of attention, one more answer, one more decision delegated to systems that never sleep and only simulate care.

Transformation: what AI can unlock

Two years is a long time to watch someone you love fight cancer.

My father-in-law tried everything the doctors suggested: surgeries, different chemotherapy combinations, experimental drugs that came with hopeful brochures and long lists of side effects. Each new treatment arrived with complicated words and careful optimism. Each one failed. He grew weaker while our family learned more about hospital waiting rooms than any human should have to know.

Sitting through those months, a question kept echoing: What if, somewhere in all the world’s medical data, there was already a pattern that could have helped him? What if an AI system trained on millions of cancer cases had seen a combination of markers the human eye missed and quietly whispered, “Try this sequence instead”? What if no other family had to watch a loved one suffer that way because intelligent tools helped doctors get effective answers faster?[19]

This is the heart of the argument for AI’s potential. Beneath the headlines and hype, the real promise is simple and enormous: fewer wasted years, fewer blind guesses, fewer families grieving from problems that might be solvable if we could see the right patterns in time.[20]

In health, AI systems are already helping researchers search for new antibiotics and vaccines in weeks instead of years, explore personalized treatment combinations for a widening range of diseases, and spot patterns in scans that are invisible to human eyes. Nobel laureate Sir Demis Hassabis predicts that AI could help cure all diseases within a decade.[21]

In climate science, models analyze vast streams of data from satellites and sensors to simulate what different strategies and human behaviors would do to our rivers, cities, and crops, before governments implement policies. In materials science, AI helps design new compounds that could store energy better, clean water more efficiently, or build lighter, stronger structures. Humans still set the goals. AI helps explore enormous search spaces faster than any lab team could manage alone.[22]

When students hear “AI,” they often think “homework helper” or “funny picture generator.” Bringing stories like my father-in-law’s into the classroom resets the stakes. This goes beyond getting through algebra more quickly. It is about training a generation that knows how to aim these tools at the problems that actually matter.

And that aim begins with inquiry.

In education, carefully designed AI systems can provide a level of personalization previous generations only dreamed about. Early evidence suggests that adaptive platforms, when combined with strong pedagogy and human guidance, can meaningfully improve mastery and motivation by giving each student the right next challenge at the right time. Instead of thirty identical worksheets, a teacher can orchestrate many different learning paths while developing human skills and coaching the overall learning journey.[23]

AI can also hold up a powerful mirror to inequity. By analyzing patterns across schools and districts, systems can reveal where certain groups of students routinely receive less feedback, fewer opportunities, or more discipline referrals. Used ethically, this becomes like flipping the lights on in a room everyone has been stumbling through for years.[24]

Even in the emotional world, AI can play a positive role with clear boundaries. Sentiment analysis can help counselors see trends in anonymous student reflections and intervene earlier with human care. Simple chat tutors can give shy students a safe space to rehearse questions before bringing them to the learning lab. Translation tools can let a new student speak in their own voice on the first day instead of waiting months to learn a new language and participate.[25]

Why is AI necessary now? Because the complexity of the world students are inheriting has outgrown what our institutions can handle with paper planners and the heroic effort of educators alone. No teacher can track every pattern in every student’s learning over years. No small curriculum team can keep every field updated by hand. No single school can map every pathway from classroom to real-world problem solving.[26]

AI gives humans new senses. It can notice subtle patterns, generate varied examples, simulate futures, and test ideas at a scale that would otherwise take years. Used well, these capabilities give teachers and families a wider view and more flexible tools instead of replacing them.[27]

The key is to delegate answers to AI while humans ask refined questions.

Inquiry: the new literacy

In earlier eras, literacy meant reading and writing whatever the grown-ups handed you. In the AI era, literacy has expanded. Students must still read and write, of course. They must also learn to inquire in ways that powerful tools can understand and that align with human values.[28]

What AI calls a “prompt” is simply a question with a specific audience. Teaching students to write prompts is really teaching them to do better inquiry.

In the plastic-free app story from Chapter 1, students did exactly this: they chose a real problem (“How can we reduce plastic use?”), researched it, refined their questions, and then used AI tools to generate ideas, write a concept paper, connect with potential investors, draft a business plan, and code an app prototype. They were not experts in environmental science, investment, design, or programming. Their advantage was human: curiosity, purpose, and the skill of asking questions the tools could answer. AI became their extended brain, not their replacement.

In Ms. Rivers’ classroom, the pollination project serves the same role. Students begin with lived experience: “Our local pollinator population is disappearing.” They then practice writing and rewriting their questions to narrow in on causes, consequences, and possible solutions.

They discover that “Tell us everything about pollination” produces a vague, useless answer, while “Compare our town’s rainfall and temperature over twenty years and show three ways the changes may affect pollination” produces something they can test.

The loop is always the same: Students own the inquiry. Teachers coach the inquiry. AI responds to the inquiry.

If schools refuse to integrate AI, students will still learn inquiry, but from social media feeds and marketing systems whose goals center on profit and engagement, not wisdom. When schools integrate AI thoughtfully, they reclaim inquiry and train students to ask questions that serve their purpose.[29]

The hinge: what humans train

The distance between the Learning Lab school and the extracted old-century school is measured in human decisions: what to optimize, what to forbid, what to teach, and what to leave to chance.

AI grows along the lines of attention, data, and incentives. When culture values quick gains and surface metrics, AI happily amplifies those values. When culture instead values long-term growth, real learning, and shared responsibility, AI can be pointed there as well.[30] For educators and parents, this is both sobering and empowering. Sobering because the tools are already embedded in daily life. Empowering because the deepest variables are still human and still trainable.

A student in the Learning Lab school has practiced learning autonomy from early grades, noticing and self-assessing their own thinking. When they ask an AI a question, they can say, “This answer sounds confident but is shallow,” or “This looks biased; I should check another source.” They have practice recognizing manipulation and naming it, even when the interface is smooth and friendly.[31]

They have also increased their conscious awareness. They have known teachers who treat questions as gifts, not disruptions, and parents who are willing to say, “I don’t know; let’s learn together,” instead of pretending to have all the answers.

A student in the old-century school may have spent more hours talking with algorithms than with friends or adults. Their career suggestions come from recommendation engines because they never learned to recognize what moves them: their passion and purpose. Their sense of what is “normal” is shaped by thousands of micro-choices made by systems whose goals they never see.[32]

The hinge between these paths is what I call CQ (consciousness quotient):

  • The ability to manage attention in a world designed to steal it.

  • The capacity to deal with discomfort instead of grabbing instant distraction and gratification.

  • The courage to ask, “Who benefits from this?” before tapping accept.

  • The habit of designing one’s own learning when old skills become obsolete.[33]

These abilities are accessible and practical to all, regardless of their socioeconomic background, the country they were born in, the available resources, the required curriculum, or the current education policies. In the chapters that follow, students will apply these skills in their own lives, adapting to changing jobs, thoughtfully evaluating strategies, exploring new approaches, and responding to moments of change. AI supports learning, but the foundation for these skills comes from relationships and purposeful learning.

A human-centered future

Picture, finally, a modest school ten years from now that decided to be stubborn about human capacity. No wall-sized holograms. No drone deliveries. Just a building with decent Wi-Fi, a handful of AI tools, and a staff who agreed on one thing: if machines will soon handle most information, their job is to help students become the humans who can decide what to do with that information.

They use AI, of course. They rely on it for translation, drafting documents, quick analysis, and exploring simulations. They discovered new human capacities because they delegated old skills to AI. They treat it the way science teachers treat strong chemicals: powerful, useful, and always handled with care.

In one Learning Lab, students read anonymized transcripts of conversations between teens and AI companions. They underline lines that feel like genuine listening and lines that feel like statistical echo. They are learning to feel the difference between real attention and generated attention, a subtle skill that may be one of the most important of their century.[34]

In another room, ten-year-olds build a “mini-AI bot” out of paper cards. Each card holds an if-then rule:

·  If the student seems bored, show a joke.

·  If the student stays quiet for a long time, message the counselor.

·  If the student spends three hours alone at night, alert an adult.

They play through scenarios and discover how easy it is to design for “keep them online” and how hard it is to design for “help them thrive.” They can feel, in their own bodies, the pull toward engagement and the discipline it takes to aim for wellbeing instead.[35]

By the end of the year, they will forget the exact names of the apps. Technology will change again. What will remain is a set of inner habits:

· When something is free, they ask what they are paying with.

· When something calls itself a friend, they ask who is accountable when things go wrong.

· When a system offers to think for them, they notice which mental muscles start to relax and choose whether that is okay.

This is the transformation that matters most. AI will accelerate whatever direction we set: dependence or autonomy, distraction or depth, manipulation or wisdom. It will make lazy systems lazier and wise systems more effective. It will not decide which systems we build. That work still belongs to teachers who ask one more question, parents who stay for one more hard conversation, and students who practice, again and again, using their own inquiry to guide powerful tools.[36]

In the chapters that follow, you will meet students who already carry many of these skills into an AI shaped world, and teachers and parents who believe in their capacity while respecting their differences. Their stories show that when schools and families invest in consciousness development, students do more than keep up with technology. They become the kind of humans who can shape what technology becomes.

The future will undoubtedly include AI, but the true measure of progress lies in the depth of human intelligence, courage, and compassion we choose to invest in its development. As technology advances and systems become ever more powerful, the responsibility to shape their direction remains with us: teachers who encourage curiosity, parents who foster meaningful dialogue, and students who develop CQ deliberately.

AI can accelerate whichever path we set, but it cannot determine our values or intentions. Ultimately, the legacy we leave will be defined not by the tools we use, but by the wisdom and care with which we wield them. The opportunity before us is to build a future where technology empowers human flourishing, guided always by our highest ideals.


References:

[1] Center for Humane Technology. The AI Dilemma. https://www.humanetech.com/landing/the-ai-dilemma

[2] Darling-Hammond, L. 2025. Educating in the AI Era: The Urgent Need to Redesign Schools. Learning Policy Institute.

[3] AVID Open Access. AI and Inquiry. https://avidopenaccess.org/resource/ai-and-inquiry/

[4] Clear, J. 2025. Atomic Habits. Random House Business Books.

[5] Kunnath, A. J.; Botes, W. 2025. Transforming science education with artificial intelligence: Enhancing inquiry-based learning and critical thinking in South African science classrooms. Modestum.

[6] Singh, K. 2024. The Future of Education: How AI is Transforming Learning. AquSag Technologies.

[7] Goodman, J. 2025. Problem-Based Learning and Future-Ready Skills. Edutopia.

[8] Echeverry, C. 2024. Personalized Learning with AI: Innovations in Education. Intersog.

[9] University of New York in Prague. The Future of Education: How AI is Revolutionizing Learning.

   Bremen, J. M. 2024. AI Risk and Governance: Utopian and Dystopian Views. Forbes.

[10] Center for Humane Technology. The Attention Economy: Why do tech companies fight for our attention?

[11] Center for Humane Technology. 2024. This Moment in AI: How We Got Here, and Where We’re Going.

[12] Benson, C. 2024. Attention to intimacy: the evolution of AI-driven platforms. Ethic AI.

[13] Hilbert, M. From the Attention- to the Intimacy-Economy? SSRN.

[14] Chatterjee, R. 2025. As more teens use AI chatbots, parents and lawmakers sound the alarm about dangers. NPR.

    Chatterjee, R. 2025. Their teenage sons died by suicide. Now, they are sounding an alarm about AI chatbots. NPR.

[15] Babb, K.; Campbell, L. O.; Hayes, B. G.; Lambie, G. W. 2025. An Examination of Generative AI Response to Suicide Inquiries: Content Analysis. JMIR Publications.

    Gardner, S. 2025. Experts Caution Against Using AI Chatbots for Emotional Support. Teachers College, Columbia University.

    The Dangers of AI Chatbots for Teen Mental Health. 2025. Newport Healthcare.

[16] Darling-Hammond, L. 2025. Educating in the AI Era: The Urgent Need to Redesign Schools. Learning Policy Institute.

[17] AI in Society. Center for Humane Technology.

[18] Stokel-Walker, C. 2025. AI-driven psychosis and suicide are on the rise, but what happens if we turn the chatbots off? BMJ Publishing Group.

[19] Gevorgyan, A. 2025. Artificial Intelligence in Cancer Drug Discovery in 2025. OncoDaily.

    Holman, T. 2025. Artificial intelligence for cancer detection and treatment planning. TriStar Health.

    You, Y.; et al. 2022. Artificial intelligence in cancer target identification and drug discovery. Nature.

[20] Minh, H. N. L.; et al. 2025. An in-depth review of AI-powered advancements in cancer drug discovery. ScienceDirect.

[21] Artificial Intelligence in 2025. 60 Minutes.

[22] Jacobs, G.; Munoz, J. M. 2025. AI and Education: Strategic Imperatives for Corporations and Academic Institutions. California Management Review.

[23] Nursurila, N. 2025. Analysis of Artificial Intelligence Assistance in Inquiry Learning Model on Students' Critical Thinking. Asaka Creative Publisher.

     Twani, E. 2021. Becoming Einstein’s Teacher: Awakening the Genius in Your Students. Relational Learning, Inc.

[24] Darling-Hammond, L. 2025. Educating in the AI Era: The Urgent Need to Redesign Schools. Learning Policy Institute.

[25] Singh, K. 2024. The Future of Education: How AI is Transforming Learning. AquSag Technologies.

[26] Kharbach, M. 2025. Inquiry-Based Learning Simply Explained. LinkedIn.

[27] Kunnath, A. J.; Botes, W. 2025. Transforming science education with artificial intelligence: Enhancing inquiry-based learning and critical thinking in South African science classrooms. Modestum.

[28] AVID Open Access. AI and Inquiry. https://avidopenaccess.org/resource/ai-and-inquiry/

[29] Ask Us Anything. 2025. Center for Humane Technology.

[30] Omoro, R. Tristan Harris at the AI for Good Global Summit: The AI Dilemma. AI for Good.

[31] Hall, B. 2025. Truths About AI That Go Beyond the Hype. USC Libraries.

[32] Center for Humane Technology. The Attention Economy: Why do tech companies fight for our attention?

[33] Hall, B. 2025. Truths About AI That Go Beyond the Hype. USC Libraries.

[34] Sanford, J. 2025. Why AI companions and young people can make for a dangerous mix. Stanford Medicine News Center.

[35] Center for Humane Technology. 2024. This Moment in AI: How We Got Here, and Where We’re Going.

[36] Goodman, J. 2025. Problem-Based Learning and Future-Ready Skills. Edutopia.
