The ChatGPT Generation: How Students Are Actually Using AI (And What Educators Are Learning)

AI can write better essays than your teenager and solve complex equations in a matter of seconds (no wonder they’re all cheating). Here’s how to prepare students for a world where the only advantage left is being human.

Illustration by Neil Jamieson

This cover story is part of our annual September “Top Schools” package.

On the penultimate day of school just before summer break, I’m standing in front of some two dozen fifth graders in a Brookline elementary school classroom awash in color. Everyone, including the teacher, is wearing the T-shirts they tie-dyed a few days earlier. The vibe is cozy and analog, no screens anywhere. The kids sit close together—either sprawled on the carpet at my feet or perched on chairs around their desk clusters. They’re not quite adult size, not yet teens, just as likely to climb a tree as grapple with Big Questions. Which means they’re the perfect age to tell me what it’s like to be a kid at the dawn of AI.

Towering above these 10- and 11-year-olds, I feel like the proverbial drug dealer on the playground. I’ve spent the past few months talking with educators about all the ways we could screw up the next generation if we give kids access to AI tools too soon, but also what could happen if we wait too long. I don’t want to be that person who first suggested that AI could do all their work for them.

Gen Z and Gen Alpha students are already using AI to lighten their loads, crack every code, game every assignment, and solve every problem. After all, the temptation is everywhere. They’re uploading photos of their daily worksheets to Snapchat and getting instant answers. They’re recording lectures, transcribing them into notes, and spinning those into flashcards—before lunch. They’re feeding essay prompts into ChatGPT, copying the results, zhuzhing a few phrases so it passes the sniff test, and calling it a day. While scrolling TikTok, they’re acing math homework with the help of automated apps. Meanwhile, they’re spilling their private data all over the internet.

Who can blame students for feeling a little nihilistic about school? AI is challenging life as we know it, from schoolwork to actual work, sparking an existential debate about what they’ll need to know to succeed in a world where machines run the show. AI is seeping into every industry, threatening to deskill traditionally well-paid professions, including data science, financial analysis, medical imaging, statistics, legal research, marketing, and more. This trend will only accelerate as more companies harness AI to improve performance and cut costs.

Some Boston-area parents and tech entrepreneurs—including HubSpot’s Dharmesh Shah and Kayak founder and serial entrepreneur Paul English—argue that the best way to prepare kids for the coming AI-slathered future is to get them working with it early—in school. The $252 billion ed-tech industry is projected to double in the next five years, as developers introduce an astounding array of AI-boosted tools that promise to make teachers’ and kids’ lives easier. This includes “AI wrappers”—proprietary platforms built on top of the general-purpose large language models that power chatbots like ChatGPT—designed to be kid- and teacher-friendly while promising boundless customizable learning experiences for every kind of learner and classroom.

Yet a big part of going to school is learning that life is complex and developing all kinds of intellectual and emotional skills to cope with that fact. “Your brain develops pathways based on your experiences and based on how hard you work,” says Ted McCarthy, former principal of Sutton High School and current interim director of Massachusetts Academy of Math & Science. “If you keep shortcutting it, you’re never going to develop the ability to work hard on your own.” In other words, conflict and disagreement, confusing assignments, academic challenges, even getting out of bed at the crack of dawn theoretically prepare us for the world of work and relationships, regardless of what the future brings.

Nir Eisikovits, a philosophy professor and founding director of the Applied Ethics Center at UMass Boston, warns that offloading more of our thinking to algorithms “weakens our ability to think for ourselves.” Left unchecked, this reliance on tech threatens to hollow out both the cognitive grit and the emotional agency that underpin meaningful learning—and mental well-being—in K–12 education. He reminds me of Socrates’s famous dictum: The unexamined life is not worth living. If we fail to emphasize critical thinking early in a child’s life, he says, if schools allow AI to grease the skids, short-circuiting the hard work, it could “usher us into an unexamined life.”

Whether you believe the technology offers promise or peril in the world of education, the stakes couldn’t be higher: AI will undoubtedly trigger a wholesale redesign of school as a concept, in Massachusetts and beyond—and much sooner than we think. We urgently need to determine the outcomes we want for the next generation and then reverse-engineer an education system to match. And we have to tackle hard conversations right now, whether we’re ready or not. “We’ve all been put into this experiment that no one signed up for, and it’s moving too fast, and no one can keep up,” says Greg Kulowiec, a K–12 AI strategy consultant for the Massachusetts Department of Elementary and Secondary Education. Get things right, and we’ll find new ways to reach students of all backgrounds and abilities—helping them thrive in ways we never imagined. Get it wrong, and our kids could struggle to survive, let alone succeed.

Illustration by Dale Stephanos

In Ms. Baran’s Brookline classroom, the absence of easy answers is kind of the point. When students get stuck on a problem, they’re told to sit with it for a while before asking for help. Ruminating over challenges encourages kids to overcome the fact that “our brains automatically go to, ‘We can’t do this,’” explains one fifth grader. As they learn how to coach themselves out of tricky problems, they’re developing patience and perseverance. And if they’re still stuck, they’re told to reach out to their friends.

Sure, Ms. Baran wants her students to learn facts, spelling, and arithmetic. But far more important, she wants her fifth graders to see each other as collaborators in this project we call life.

“But AI prevents you from doing all that,” one fifth grader interjects on the day I visit, and heads swivel. If she could use a computer to get all the answers, she says, she’d naturally turn to her Chromebook first—instead of her brain and instead of her friends. That’s why, another student pipes up, “it’s important not to expose kids too much, just so they know…that AI isn’t the only thing that can help you learn.” Everyone nods.

Of course, fifth grade is low stakes; students get reports instead of letter grades, and none of their schoolwork will affect their chances of getting into college. So I ask the kids whether they’d use AI if the stakes were indeed higher, as in middle school and high school, when GPAs could begin to chart their life course. Maybe, they say. They’ve watched older siblings or babysitters rely on AI for homework. To this generation, using digital tools like that—even if it’s technically cheating—feels like a no-brainer. At the same time, they want their teachers tracking their work, because they feel that connection is important to their growth.

Getting a little “unsanctioned help” is nothing new—back in my day, using CliffsNotes in English class risked a suspension, but everyone did it anyway—yet AI takes it to the next level. To see how easy it is to cheat these days, I Google, “AI do my…” and it autofills “homework.” Instantly, I find StudyX, SmartSolve, and StudyMonkey (the tagline? “Learn Smarter, Not Harder, with AI,” followed by the claims: No more all-nighters. No more stress and anxiety. No more asking friends for help). Then there’s NoteGPT, whose teaser reads, “Solve your homework instantly for free. Whether it’s math, science, or any other subject, upload your picture or PDF and get accurate solutions effortlessly.”

I click on Writify.ai, a stripped-down interface that offers to solve all my problems, festooned with cheery emojis to express…friendliness? It first asks for my “homework challenge,” so I straight up ask it to write a paper on The Great Gatsby. It prompts me for a teensy bit more info: what grade I’m in and whether it should analyze anything specific. Then, it asks if I want a few topic ideas (so helpful!)—or simply an essay about the book’s “overall message.” I say I’m in the eighth grade, type “overall message,” and Writify instantly generates an outline. Like most AI tools, it doesn’t want to let me go that easily. The tech is hungry for human data to make it stronger, so AI interfaces practically beg us to offload all of our work and thinking. “Would you like me to help you write a rough draft based on this plan?” Writify asks. “Or do you want to try writing a part first?” Rough draft, I answer. “Great choice!” it responds (typical AI, so cloying) and begins cranking out my essay while I scroll Instagram.

Free and easy AI tools like these have created an absurd game of cat and mouse that, let’s admit, students are handily winning. If anything, AI has given them the freedom to decide what activities are worth their time. At the tony all-boys Belmont Hill School—a Harvard-esque neoclassical brick-and-Ionic-column campus with a school crest that features a sextant(!)—a rising junior describes AI as his turbocharged study buddy, perfect for whipping up flashcards, parsing dense biology concepts, or brainstorming AP Euro essay angles. He uses AI even though his parents forbid it—not because it might compromise learning at Belmont Hill (where tuition clocks in at $64,800 per year), but because they don’t want him getting kicked out for violating the school’s AI policy.

His willingness to break the rules for better grades reveals something darker: It’s never been harder to get into top schools. Case in point: Cutthroat Massachusetts has the dubious distinction of being where one of the first lawsuits in the country over AI in schools was filed. Last September, Dale and Jennifer Harris (a teacher and writer, respectively) sued Hingham Public Schools teachers and administrators after their son, whom they characterized as a straight-A student bound for an elite college, received a poor grade and was kept out of the National Honor Society that year for using AI as a research and drafting tool.

While the courts may focus on whether he violated Hingham’s AI policy, the case exposes something more telling: Students are wholly convinced they know better than their teachers how to use AI. Take the Belmont Hill student sitting across from me, an aspiring engineer. He may not be outsourcing entire papers to ChatGPT, but he’s spent the past year figuring out how to get AI to do what he calls “busy work,” confident he can distinguish between valuable intellectual exercises and meaningless time-wasters.

But can he? I turn to Belmont Hill history teacher George Sullivan, who has joined us for this conversation, and ask what he thinks. “A lot of students feel like they can judge for themselves what assignments are not worth doing fully,” Sullivan says diplomatically, “but I’m not sure that judgment is always accurate.” The student shifts uncomfortably in his seat.

Back to the cat-and-mouse game that dominates our schools in 2025, where teachers are turning to increasingly invasive surveillance tools to determine who, or what, actually did the work. Consider Draftback, a Chrome extension that lets teachers review how a Google Doc came together. Every keystroke, copy and paste, deletion, and addition is recorded so the teacher can determine whether the data points to plagiarism. Using Draftback doesn’t guarantee that a student won’t dump an AI-generated answer into their paper, but it might discourage them from trying.

Some teachers are even trying old-fashioned pen-and-paper exercises to wrest the thinking from data centers back into classrooms. Peter Weller, an English and journalism teacher at South Hadley High School, got so frustrated with his students’ AI-generated papers and the dull class discussions that ensued that he bought a stack of composition notebooks and Uni-Ball pens for his kids. These days, he kicks off each class by asking the students to respond in their journals to prompts—sometimes an artwork, sometimes a quote, sometimes a piece of poetry—and then share their thoughts with the group. Weller says that the exercise has been transformative. “We’re building and establishing a sense of community. We are communicating; we’re telling stories,” he says. Helping the students make connections with one another and the world, he says, “is one of the most rewarding things that I do.”

Analog tools may offer a temporary reprieve from AI in the classroom, but pretending AI doesn’t exist isn’t the answer, says Kulowiec, the state’s K–12 AI strategy consultant, who spends his days trying to understand the bewildering array of ed-tech offerings while running AI literacy workshops for teachers and students. “If I promote [AI], then I can teach how to effectively use it,” he says. “And if I ban or ignore it, then students are going to be left to their own understanding, which might be sophisticated, and it might not.”

Illustration by Dale Stephanos

One compelling effect of AI is that it’s raising soul-searching questions about education, equity, and ethics. Former Sutton High School principal Ted McCarthy thought he understood the AI problem—until one parent stopped him in his tracks. The school was developing its AI guidelines, and during one committee meeting, the parent asked what seemed like a straightforward question: “If a student uses AI to help them with homework, is it cheating?” Well, yes, of course. Then the parent asked whether it was cheating when he gave his son feedback on an essay. Hmm, no. “Well,” the parent continued, “what if the kid doesn’t have anyone at home who can help? Why is it okay for a student to use Mom or Dad, but it’s not okay to get feedback from another source? Doesn’t that approach privilege certain students?”

After spending the academic year grappling with questions like these, Sutton High’s committee released a “living” set of guidelines for AI use that teachers and students will tweak together as new tools emerge.

In the midst of all this new tech, McCarthy’s primary concern is whether the kids are still learning. “If things become so easy to do with AI,” he says, “then students don’t develop the ability to persevere, to think, to struggle, to fail, to pick themselves back up.”

In fact, there’s precious little research showing that AI tools actually boost learning. One recent MIT study investigating participants’ brain activity during an essay-writing project found that those who offloaded a significant amount of their work to AI consistently showed weaker brain connectivity than those who didn’t, and underperformed at both the neural and behavioral levels. The more participants leaned on AI to do their work, the less their brains engaged.

Some students have already figured that out—and they’re steering clear of AI. “I feel like if you start using ChatGPT to write your essays too much, you become over-reliant on it,” says a rising senior at Massachusetts Academy of Math & Science, a public magnet school in Worcester. “Then it will get to a point where you won’t be able to write without it.” The student dreams of becoming a surgeon and a writer, so she avoids AI when she’s composing prose, fearing she’ll lose her own voice and critical muscle. But I’m getting the sense that she’s the exception, not the rule.

One potential positive outcome is that the controversy around AI will push teachers to rethink how they’ve been doing things. Most teaching methods haven’t changed much in decades, Sullivan points out, even though the world certainly has. When information is everywhere and easy to find, why are we still asking kids to memorize facts? “We have to really have some hard conversations,” McCarthy agrees. “What do we really want kids to learn…and why?” It’s a crucial question given that AI could one day take over 70 percent of the skills used in most jobs.

Fearing the future depicted in WALL-E—in which tech isolates us from one another and we spend our days passively consuming mindless content—UMass Boston’s Eisikovits believes future curricula must double down on teaching critical thinking skills, especially through writing. Eisikovits also pushes back on a popular analogy—that resistance to AI in 2025 is similar to past resistance to calculators. “These are tools that circle back on the person using them and change how they see the world, how they see themselves, what their options are,” he says. “For young people in school, it’s reshaping their capacity to think critically.”

ChatGPT may seem harmless, but every AI model carries the biases of its creators, warns Jeff Karg, managing director of Boston Innovation. As governments and corporations learn to use these tools to influence what we buy, how we think, and whom we vote for, those biases get baked into every response. That’s why Karg believes future generations will need razor-sharp critical-thinking skills to spot hidden agendas. As a hedge, he says today’s classrooms should double down on morality, ethics, history, civics, and media literacy.

To protect our kids from a WALL-E future, we need to raise them to see AI for what it really is and have the skills to shape it for the better. The machines “are not thinking, and they don’t understand anything,” Kulowiec says. “Someone is going to have to verify what these machines are cranking out. Someone has to be in charge.” And if AI does deliver a more automated world, kids will need a strong sense of who they are in a society where their work no longer defines them. Future generations, Eisikovits says, will need “to figure out what’s interesting to them, to figure out hobbies, to figure out how they will live a meaningful life.”

So maybe it’s time educators had more thoughtful conversations with the people they’re trying to help—namely, the students. Michael Chang, assistant director of the Earl Center for Learning & Innovation at Boston University, has spent the past few years working with high schoolers in summer workshops around the country, finding new ways to leverage AI in the classroom. During these intense sessions, he says he was surprised to learn that students didn’t want AI to do their work and didn’t see it as a “tool” they needed to be taught to use. Instead, they wanted AI to promote opportunities to “seed, nurture, and extend human relations,” according to a research report.

In one workshop, students brainstormed ideas for a bot that would assist in classrooms—not like a creepy Alexa or Siri, but as an entirely student-scripted bot that reflected the values and goals of the group. A big part of the exercise at that point was getting students to agree on the bot’s guardrails. Setting the tone and deciding what kind of feedback would actually promote quality conversation took multiple rounds of debate.

This thoughtful AI use is very different from what tech companies are pitching to schools. Consider the AI tutor. Kulowiec built one for his workshops that helps students explore Karl Marx’s writings. He uploaded all the texts he wanted them to read, then programmed strict guardrails about how the bot should talk and what questions students could ask. But, as Kulowiec points out, it’s still “a generative AI tool. Even with the best intention, the guardrails with most of these platforms can be overcome just by plugging away at it long enough.”

The AI tutor could become another classroom resource, agrees Genesis Carela, a senior policy analyst for Massachusetts at EdTrust, which works to dismantle racial and economic barriers in education. She says it could be especially useful in reaching disengaged students who would benefit from more personalized curricula, as well as students who learn differently and might thrive in a one-on-one environment where they can set their own agenda and pace.

But there’s a catch. If AI tutors take over entirely, warns Marina Bers, an education professor at Boston College, they could strip away opportunities for social-emotional growth and human mentorship. Teachers are the most important part of this learning, she says. Bers worries that more tech could create a two-tiered system: Underfunded schools get more bots, while wealthy schools keep their human teachers. That would leave some children “sitting in front of a screen”—cut off from peers and missing out on eye contact, empathy, and motor-skill development.

Whatever changes are coming, schools (and, subsequently, caregivers) also need to think hard about what data they’re giving away; most tech gives teachers access to every student click and keystroke. At the very least, this is precious data that could be bought and sold. At the very worst, this is information that could be weaponized against teachers, school districts, or even the students themselves.

These are just a few of the issues the state and local teachers’ unions are monitoring closely. “AI is already part of our classrooms and workplaces,” the Boston Teachers Union (BTU) wrote in a statement to Boston. “While it can support teaching and learning, it must never replace the human relationships and professional judgment that make schools thrive. The BTU calls for strong safeguards, clear guidelines, and educator voices to ensure AI serves students—not corporations—and protects the dignity, privacy, and purpose of public education.”

Here’s a radical thought: Maybe AI will save education by forcing us to blow it up and start over. “All this focus on AI is hiding the real question,” Bers says, “which is, ‘What is human intelligence now?’” Instead of creating awkward human-machine partnerships in which we’re constantly policing boundaries, maybe this is the moment to let each do what they do best. Rather than cramming AI into existing frameworks, we could seize this opportunity to redesign education around what makes us uniquely human.

If we go that route, classrooms like Ms. Baran’s in Brookline—with hands-on projects and real-world learning—could be the answer. They’re the opposite of the busywork assignments that AI is practically begging kids to outsource.

I ask the fifth graders which classroom activities they wouldn’t want offloaded to a computer, and their faces light up as they talk about their Rube Goldberg machine project. Over weeks, they planned and built contraptions to pop a balloon using five precise “transfers of energy.” They relied on art, craft, communication, and collaboration. With each tweak and modification, success came closer—until finally the dominoes tumbled, the ball rolled, the blocks fell, the little car sprinted, and pop! The room erupted in cheers. That’s where real learning happens: in the struggle. Wrestling with a blank page, failing and trying again, refusing shortcuts. The pure joy that follows is what you get from perseverance—and it’s exactly the kind of learning AI can’t replicate.

So what if schools focused on developing the human skills that AI can’t touch? Former Boston venture capitalist turned education entrepreneur Ted Dintersmith has spent a decade exploring this question. He even road-tripped across the country to visit schools and talk with teachers and students. He’s seen innovative curricula everywhere. A New York City elementary school built grades 3 to 5 around hip-hop, using rap beats to teach math and percussion to explain physics. In Fort Wayne, Indiana, a first-grade teacher told his class on day one, “We’re going to learn how to design robots.” When they asked if he knew how, he said, “Nope. We’ll figure it out together.”

Dintersmith’s conclusion? Kids learn better when they apply learning to solve real-world problems. That’s why, like many educational experts I spoke to, he wants shop class and vocational training in every school. In one Winchester, West Virginia, school district, he explains, students learned carpentry and welding—skills that opened doors to careers or made for compelling college essays. For Dintersmith, vocational education isn’t just job training. It’s about reconnecting learning with real purpose, boosting local economies, and giving students skills that automation can’t replace.

Indeed, something seems to be pushing Gen Z and Gen Alpha back into the very real world of craft, observes Mike van der Sleesen, cofounder of Vanson Leathers, a company that makes high-end motorcycle racing gear in Fall River. On a hot Saturday in July, we meet in the 19th-century mill building that houses Vanson’s showroom and factory. The massive old mill lacks air conditioning, the windows are plastic, and the ceiling fans are spinning wildly. Yet the handwork involved in making gear is attracting a new generation to manufacturing, says van der Sleesen. In the 50-plus years that he’s had the factory, he says, he’s never seen such enthusiasm among younger generations to sew a badass leather jacket.

Van der Sleesen introduces me to a recent hire, 21-year-old Anthony Stewart, a sandy-haired competitive motorcycle racer. After one year of college, Stewart says, he realized he wanted to work with his hands instead of a computer, but he never thought about manufacturing until he stumbled on Vanson while driving around Fall River on a whim. When he entered the factory, he was hooked. He loved that it was “wicked old.” Working in manufacturing, Stewart says, has made him “really appreciate goods that are quality and take human effort to build.”

Whether they’re rebelling against digital overload or preparing for an AI-dominated future, young people in Massachusetts are indeed flocking to vocational training in record numbers. Between 6,000 and 11,000 students are currently on waitlists for voc-tech schools in Massachusetts. In response, Governor Maura Healey filed a supplemental budget in February that included $75 million to support career technical training.

Post-pandemic research has confirmed that young people learn more from doing things together, in the physical world, than anyone could have imagined. Remote learning left many students restless and struggling to focus. When they returned to the classroom, teachers noticed shorter attention spans, more behavioral problems, and kids who seemed overwhelmed by real-world interactions after so much time on Zoom. That’s why education experts like BU’s Chang worry about letting AI take over too much of teaching. “Grading isn’t just a robot checking answers—it’s a relational task,” he says. One fifth grader puts it even better: “What Ms. Baran does is not only educational, it’s also emotional. She feels what students are feeling. If a student is sad, she won’t try to push them into something.”

If I’ve learned anything from reporting on AI, it’s that teaching is wildly more complicated than it looks. Knowing each student’s weak spots, tracking growth over time, and offering encouragement through failure makes teaching “way, way more complex and harder than rocket science,” says Justin Reich, a former high school history teacher and director of the Teaching Systems Lab at MIT. “You’re trying to rearrange the neurons inside young people’s minds for prosocial purposes, but you can’t go in there and touch them. You have to put them in a room with a bunch of other people and have their eyeballs and their ears bang into sounds and words and reading and other kinds of things, and hope that the sequence of instructional activities for this diverse group of students is going to lead to the outcomes that you want.”

Put more poetically, Bers says that educators are architects of inquiry who “know how to think, how to ask questions, how to connect things that will not be connected otherwise.” Even Paul English, the founder of Kayak who argues that we can’t get AI into schools fast enough, insists that AI can be a teaching assistant, “but it can’t truly love someone and take care of someone the way a good teacher can.”

Generated by Benjamen Purvis using AI for our 2024 “Boston’s AI Revolution” package

In many ways, our current educational system is beautifully engineered to produce adaptable, resilient, and independent adults. It’s built on the belief that to become engaged members of society, students need exposure to a broad range of knowledge and ways of thinking. Mastery in reading and writing helps them play with the universe of ideas. Studying science helps them meaningfully work with the physical world. Through art and music, they develop fine-motor skills and experience life through different senses.

If these are the benchmarks for future success, Ms. Baran’s fifth graders have hit the jackpot. They’re learning in a well-funded public system that values exploration as much as facts. Massachusetts-based educators have spent decades figuring out what works, collaborating with students and one another to spark curiosity and deepen understanding. Schools here have broad freedom to innovate, guided by state standards that emphasize equity and critical thinking. The goal? To raise adults who know how to think, not just what to think.

Here’s something else worth considering: The minds behind AI came from schools just like this one. Sam Altman went to a progressive school in St. Louis before dropping out of Stanford—meaning his high school gave him what he needed to change the world. Same with Mark Zuckerberg and Bill Gates, both of whom dropped out of Harvard. These tech leaders developed their curiosity and problem-solving skills in traditional classrooms. Then they used that foundation to build machines that mimic human thinking. Perhaps, then, it’s no coincidence that AI is largely an American innovation, developed at places like MIT, Stanford, NASA, and IBM by people educated by humans who prioritized thinking and exploration. The biggest AI companies—OpenAI, NVIDIA, Microsoft—are American, too. If our schools were truly broken, would they have produced the minds that created this technology?

At the same time, it’s important to acknowledge that the system works for some, but not for all. Maybe the most pressing problem we’re facing isn’t AI at all, but the fact that access to good teaching and resources remains chronically uneven. Gregory Molinar, who grew up in Fall River and is a father of four children under 17, feels that schools have mostly failed both him and his kids. His oldest attends a vocational-technical school, but even so, Molinar says something is missing—and as a result, his children aren’t staying engaged. “We are forcing millions of children to learn one way, and if that way doesn’t align with them, then we label them failures and crush them starting at a young age,” he says. If reliable AI tutors were available, he’d let the computer homeschool all of his kids. Molinar’s willingness to embrace that option underscores the stakes: When the system lets families down, parents naturally will look elsewhere, even if it means handing over their kids’ education to a machine.

The truth is, we’re reaching a tipping point. The overwhelming, unavoidable presence of generative AI in our lives demands a dramatic rethink of everything—school, work, and life itself. But AI is also provoking fascinating conversations about the value of being human.

Remember that Great Gatsby essay I asked Writify.ai to pen for me? I never read it. Too many words. But I did scan it, and when I got to the last line, I stared in wonderment. After cranking through countless Great Gatsby papers from all over the world, artificial intelligence concluded with an age-old warning to lean into our humanity. “Fitzgerald’s story reminds us to be careful about what we wish for,” it read, “and to understand what truly matters in life.”

This article was first published in the print edition of the September 2025 issue with the headline: “The ChatGPT Generation.”

Illustration by Comrade (@comradehq)