January 28, 2025

What I learned at the TedAI conference

How AI Will Impact Work, Creativity, and the World
Briana Brownell

This fall, I escaped the Canadian snow (yes, in Canada, it snows in the fall) and headed to California for the TED AI conference in San Francisco. In 2023, the event was full of ebullience, but the 2024 version was tempered with trepidation—a reflection of where the AI field stands today.

There's no denying it: the pace of innovation in AI is staggering. Its implications are felt everywhere, from science to art to business. For those of us working closely with AI, the conference offered a wealth of inspiration and food for thought.

Here are my top ten takeaways from the event.

The future of AI

A more human-like language model?

Jessica Coon, a linguistics professor from McGill University who also worked as a linguistic consultant on the 2016 film "Arrival," spoke about the similarities among human languages and how they differ from current large language models.

Linguists have discovered that human languages share a reliance on intricate hierarchies of meaning and structure—words must be ordered in specific ways for the language to work. This universal trait sets human languages apart from the large language models we use today, which can learn from any kind of sequence, regardless of structure.

LLMs don't explicitly capture the structure of human language—they can learn languages that humans find impossible to pick up. This mismatch raises some provocative questions: Could building models that better mimic human linguistic structures lead to breakthroughs in AI?
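Coon's observation can be made concrete with a toy experiment. The bigram model below is a stand-in for a sequence learner (real LLMs are vastly more capable, but similarly structure-agnostic): it fits a rigidly ordered mini-language and a randomly shuffled "impossible" one equally happily; the shuffled version just carries more uncertainty. The three-word corpus is invented purely for illustration.

```python
import math
import random
from collections import Counter

def avg_log_likelihood(tokens):
    """Average per-token log-probability under a bigram model fit to the tokens themselves."""
    pairs = Counter(zip(tokens, tokens[1:]))
    prev = Counter(tokens[:-1])
    return sum(c * math.log(c / prev[a]) for (a, b), c in pairs.items()) / (len(tokens) - 1)

random.seed(0)
ordered = ["the", "cat", "sleeps"] * 200   # rigid word order, like a human language
shuffled = ordered[:]
random.shuffle(shuffled)                   # same words, no structure at all

print(avg_log_likelihood(ordered))    # 0.0: the fixed word order is learned perfectly
print(avg_log_likelihood(shuffled))   # negative: more uncertainty, yet still learnable
```

A human learner would never acquire the shuffled language, but the model has no such constraint; it simply soaks up whatever statistics are present.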

This theme was also reflected in Pratyusha Sharma's talk on Project CETI, which uses AI to decode whale communication. The project has already made promising strides, highlighting the potential of AI to bridge gaps between human, animal, and machine languages. It's clear there's untapped potential in exploring the different ways humans, animals, and machines interpret meaning through language.

Beyond bigger models

For most AI, the way to get better is to get bigger: scale. Noam Brown, a research scientist at OpenAI, pointed out that scale has been the main driving force behind advancements in large language models (LLMs) over the last seven years. The difference between the groundbreaking GPT-1, released in 2018, and its successors wasn't a fundamentally new architecture—they all use transformers—but rather more parameters and vastly more training data.

It's worked so far, but we can't continue in this way indefinitely. We're running out of high-quality data to train these systems, and the resources required for ever-larger models are becoming unsustainable.

Brown's solution? Smarter, not bigger. His work on OpenAI's o1 series incorporates "System 2 thinking"—a term borrowed from cognitive science that refers to deliberate, logical reasoning—into AI. Unlike traditional models that rely on brute force pattern recognition (akin to "System 1 thinking"), these new approaches focus on more thoughtful, step-by-step problem solving. The result? Improved performance without the need for ever-increasing size. As Brown put it: 20 seconds of thinking is worth 100,000 times more data.
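This is not how o1 actually works under the hood, but a toy "self-consistency" simulation illustrates why extra inference-time compute can substitute for scale: sampling a noisy solver several times and majority-voting sharply improves accuracy without changing the model at all. The solver, its 60% accuracy, and the answer 42 are all invented for illustration.

```python
import random
from collections import Counter

def solve_once(p_correct=0.6):
    """A noisy one-shot solver: right 60% of the time, otherwise a near-miss."""
    return 42 if random.random() < p_correct else random.choice([41, 43])

def solve_with_thinking(n_samples):
    """Spend more inference-time compute: sample repeatedly and majority-vote."""
    votes = Counter(solve_once() for _ in range(n_samples))
    return votes.most_common(1)[0][0]

random.seed(1)
for n in (1, 5, 25):
    accuracy = sum(solve_with_thinking(n) == 42 for _ in range(1000)) / 1000
    print(f"{n:>2} samples: {accuracy:.0%} correct")
```

The accuracy climbs with each extra sample of "thinking," even though the underlying solver never improves.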

I think this shift opens exciting possibilities. If logic-based enhancements can boost model capabilities, what other unexpected techniques might we find that could revolutionize AI in the coming years? Maybe ingenuity, not just scale, will be the next driver of innovation in AI.

Humans with AI: Augmentation, not replacement

There is widespread worry, both in Silicon Valley and elsewhere, that AI will be used to replace workers. Reid Hoffman, entrepreneur and AI advocate, offers a more optimistic vision: AI as a tool to enhance, not replace, human capabilities, a concept he calls "super agency." Hoffman compared AI to transformative innovations like the automobile—technologies that expanded human potential by amplifying our physical abilities. AI, by contrast, supercharges our cognitive abilities, amplifying what we can think, create, and achieve.

Hoffman also said that another important power of AI lies in democratizing access to expertise. For instance, he imagines a world where anyone can consult a general practitioner or get expert advice at any time, anywhere. This, he argued, could fundamentally level the playing field, giving everyone an unprecedented ability to learn and grow.

However, I found that Hoffman's declaration—"Humans not using AI will be replaced by humans using AI"—hints at a harsher reality: adapting to AI is no longer optional. Will it truly benefit everyone or leave some people further behind?

Workplaces: a slow but steady transformation 

On the second day of the conference, a lively panel explored AI's impact on the workplace.

AI is already here for many organizations. Companies currently use AI tools for tasks like summarization and information retrieval. These capabilities are practical but uninspiring; as one panelist noted, they're "already old news." The real challenge is what comes next. The panelists suggested that organizations need "leap projects"—ambitious initiatives that harness AI to create entirely new ways of working or delivering value. These projects could redefine industries if companies are bold enough to support them.

I was surprised at the disconnect the panelists revealed about the value of AI in workplaces. While management often claims that AI hasn't made much of an impact, workers beg to differ. Employees are already integrating AI tools into their workflows, often without formal acknowledgment or support. These quiet adopters, whom Ethan Mollick calls "secret cyborgs," are reshaping their workdays faster than many executives realize. The discussion opened up new questions about incentives: if AI enables an employee to double their productivity, how should their compensation reflect that?

Another key takeaway from the panel was the need for executives to lead by example. Leaders should use AI themselves rather than delegating its adoption to junior staff. Companies must also empower internal AI advocates—those already experimenting with the technology—to share their insights and drive innovation.

It was clear to me that there is a lot more to explore at the intersection of work and AI; we're only just scratching the surface of the ways work might change.

Resurrecting the dead: The ethical and emotional dimensions of AI

One of the most interesting discussions at the conference was around how AI is being used to "resurrect the dead," both emotionally and artistically.

Eugenia Kuyda, founder of Replika, shared a deeply personal story about the tragic loss of a close friend. Kuyda missed his texts, so she used AI to create a digital version of him, allowing her to converse with an echo of his personality. The experience brought her solace, and it led her to start Replika so that others could create AI companions too.

But it wasn't all positive: Kuyda raised concerns about engagement with these technologies. Could AI companions, designed to maximize engagement, further isolate people from real human relationships? She proposed that instead of optimizing for time spent or clicks, AI systems should aim to enhance human flourishing and happiness.

The artistic implications of AI "resurrections" were also explored through pianist AyseDeniz Gokcin's performance of an AI-generated composition based on the work of Chopin, her favorite composer. It reminded me of the dilemmas Pamela McCorduck discussed in "AARON's Code": extending an artist's work posthumously, without their permission, raises thorny moral questions. Is it honoring their legacy or exploiting it?

But there was also something else. While technically impressive, the piece—at least to me—didn't quite capture Chopin's essence. In fact, I thought it sounded more like Erik Satie. I don't know if that was intentional, but it made me wonder about the limits of AI's ability to authentically channel an artist's voice.

Creativity: The eroding pipeline of artists

There was a lot of discussion about how AI is impacting creative professions and what this impact might be in the future. Professor Ben Zhao from the University of Chicago, creator of the anti-AI tool Nightshade, raised concerns about AI's encroachment on creative fields. He noted that the rise of generative AI has led to a significant decline in art school enrollments, with aspiring artists abandoning the industry in record numbers.

This trend has forced some art colleges to shrink or even close departments, putting them in survival mode. But I felt his account didn't tell the whole story. While AI's impact is evident, it's not the sole factor in the decline; economic pressures and shifting cultural interests also play roles.

However, I do agree that the rapid advancement of AI-generated art has intensified these challenges. Will it also lead to a devaluation of human creativity? What does this mean for the future of human art? I personally hope that it will result in a stronger focus on deliberate development of creative skills, but that is by no means a given. AI systems depend on human-created art for training data, and if human artistic production dwindles, the quality and diversity of AI-generated art could suffer, too. Who knows? Maybe we'll see a future human art style that AI can't mimic.

Copyright and fair use

The legal landscape surrounding AI's use of copyrighted material is... well... let's just say it's tense. There are a variety of perspectives. Angela Dunning, a copyright attorney and litigation partner at Cleary Gottlieb, defended AI training as fair use. She invoked Mark Twain's assertion that "there is no such thing as a new idea"—creativity inherently builds upon existing works. Dunning drew parallels to historical disruptions like photography, which once faced resistance but ultimately spurred artistic innovations such as abstract art and pointillism.

Meanwhile, Ed Newton-Rex, founder of the nonprofit Fairly Trained, advocated for mandatory licensing of training data. He unveiled a statement signed by over 11,000 artists, including Björn Ulvaeus and Thom Yorke, declaring unlicensed AI training a severe threat to creative livelihoods.

We don't yet know where the lawsuits will land, and I expect the intricacies will play out over a multi-year or multi-decade timeline as courts make and appeal decisions.

The artistry of lived experience

Harvey Mason Jr., CEO of the Recording Academy, shared his perspective on the role of AI in the creative industries. While he expressed concerns about AI's potential impact on artists, he emphasized the importance of embracing the technology as it reshapes music creation, distribution, and discovery. However, he was clear about AI's limitations: it cannot replicate inspired art.

Mason Jr. used Stevie Wonder's 1976 masterpiece, "Songs in the Key of Life," as a perfect example of what AI lacks. On one level, AI cannot achieve the level of artistry found in such an iconic album. But on a deeper, more figurative level, it also cannot replicate songs in the key of life—the lived human experience that fuels works of such profound emotional resonance. Music—and all art—is not just technical skill but a reflection of life's joys, struggles, and our relationships with each other.

This idea resonates with a growing conversation among experts and artists: what separates human art from AI-generated work might be intentionality. Artists ask themselves, "Why would I create this detail?" This sense of purpose and meaning behind creative choices, born from personal experience and emotion, is something AI cannot authentically emulate.

In a world increasingly touched by AI, it's the artistry of lived experience—the uniquely human ability to translate life into art—that remains irreplaceable.

Search as a creative tool

I'm going to end with my favorite insight of the conference: search as a creative tool. Instead of matching and linking content by keyword, we can use LLMs to find far more sophisticated patterns and similarities.

This idea came from Steven Johnson, author and editorial director of NotebookLM, who shared some fascinating insights about his use of a digital commonplace book—an organized collection of notes, quotes, and ideas. This practice, inspired by the traditions of figures like Leonardo da Vinci and Virginia Woolf, has allowed him to deepen his creative process by capturing and connecting ideas over time.

Johnson said that tools like NotebookLM can sift through his notes and archives to uncover unexpected patterns, link disparate ideas, and draw new insights from his own work. For example, he used the tool to find instances where he had used suspense in his writing—something that would be nearly impossible with a standard search engine. After all, what keyword could you search to reliably find something like that?
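To show the mechanics of this kind of retrieval, here's a minimal sketch. It uses bag-of-words cosine similarity, which is far cruder than the LLM embeddings a tool like this presumably relies on, and the notes are hypothetical, but the shape of the idea is the same: represent each note as a vector and return the one nearest to the query.

```python
import math
from collections import Counter

# A handful of hypothetical notes; real tools use embeddings that capture
# meaning, but plain word-count vectors show the retrieval mechanics.
notes = [
    "The door creaked open, and for a long moment nobody spoke.",
    "Quarterly revenue grew eight percent year over year.",
    "She held her breath as the footsteps stopped outside the room.",
    "Reminder: renew the domain registration before March.",
]

def vectorize(text):
    """Bag-of-words vector; an embedding model would go here instead."""
    return Counter(text.lower().split())

def norm(v):
    return math.sqrt(sum(n * n for n in v.values()))

def cosine(a, b):
    dot = sum(a[word] * b[word] for word in a)
    return dot / (norm(a) * norm(b))

def search(query, notes):
    """Return the note most similar to the query."""
    q = vectorize(query)
    return max(notes, key=lambda note: cosine(q, vectorize(note)))

print(search("footsteps outside the door", notes))  # retrieves the 'footsteps' note
```

Swap the word-count vectors for semantic embeddings and the same loop starts retrieving by meaning ("moments of suspense") rather than by shared keywords.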

I think this approach opens up a ton of intriguing possibilities for creatives. Imagine searching your own work for emotional shifts, recurring themes, or overlooked connections. I've started experimenting with using my own notes, and I'd recommend you give it a try too.

What next?

There are so many things I didn't get a chance to cover: how AI is being used to find new medicines faster, the rise of machine teaching as the next phase of our interaction with AI, and how the label "Responsible AI" can hide a lot of bad practice. But one thing is clear: the breakthroughs in AI have only just begun.

Briana Brownell
Briana Brownell is a Canadian data scientist and multidisciplinary creator who writes about the intersection of technology and creativity.
