AI safety advocates tell founders to slow down


“Move cautiously and red-team things” is sadly not as catchy as “move fast and break things.” But three AI safety advocates made it clear to startup founders that going too fast can lead to ethical issues in the long run.

“We are at an inflection point where there are tons of resources being moved into this space,” said Sarah Myers West, co-executive director of the AI Now Institute, onstage at TechCrunch Disrupt 2024. “I’m really worried that right now there’s just such a rush to sort of push product out onto the world, without thinking about that legacy question of what is the world that we really want to live in, and in what ways is the technology that’s being produced acting in service of that world or actively harming it.”

The conversation comes at a moment when the issue of AI safety feels more pressing than ever. In October, the family of a child who died by suicide sued chatbot company Character.AI for its alleged role in the child’s death.

“This story really demonstrates the profound stakes of the very rapid rollout that we’ve seen of AI-based technologies,” Myers West said. “Some of these are longstanding, almost intractable problems of content moderation of online abuse.”

But beyond these life-or-death issues, the stakes of AI remain high, from misinformation to copyright infringement.

“We are building something that has a lot of power and the ability to really, really impact people’s lives,” said Jingna Zhang, founder of artist-forward social platform Cara. “When you talk about something like Character.AI, that emotionally really engages with somebody, it makes sense that I think there should be guardrails around how the product is built.”

Zhang’s platform Cara took off after Meta made it clear that it could use any user’s public posts to train its AI. For artists like Zhang herself, this policy is a slap in the face. Artists need to post their work online to build a following and secure potential clients, but by doing that, their work could be used to shape the very AI models that could one day put them out of work.

“Copyright is what protects us and allows us to make a living,” Zhang said. If artwork is available online, that doesn’t mean it’s free, per se — digital news publications, for example, have to license images from photographers in order to use them. “When generative AI started becoming much more mainstream, what we are seeing is that it does not work with what we are typically used to, that’s been established in law. And if they wanted to use our work, they should be licensing it.”

Aleksandra Pedraszewska, AI Safety, ElevenLabs; Sarah Myers West, Executive Director, AI Now Institute; and Jingna Zhang, Founder & CEO, Cara, at TechCrunch Disrupt 2024 on Wednesday, October 30, 2024. Image Credits: Katelyn Tucker / Slava Blazer Photography

Artists could also be impacted by products like ElevenLabs, an AI voice cloning company that’s worth over a billion dollars. As head of safety at ElevenLabs, it’s up to Aleksandra Pedraszewska to make sure that the company’s sophisticated technology isn’t co-opted for nonconsensual deepfakes, among other things.

“I think red-teaming models, understanding undesirable behaviors, and unintended consequences of any new launch that a generative AI company does is again becoming [a top priority],” she said. “ElevenLabs has 33 million users today. This is a massive community that gets impacted by any change that we make in our product.”

Pedraszewska said that one way people in her role can be more proactive about keeping platforms safe is to have a closer relationship with the community of users.

“We cannot just operate between two extremes, one being entirely anti-AI and anti-GenAI, and then another one, effectively trying to persuade zero regulation of the space. I think that we do need to meet in the middle when it comes to regulation,” she said.



