When Karandeep Anand’s 5-year-old daughter returns from school, she eagerly engages with her favorite AI-generated personas on Character.AI, including “Librarian Linda.” This personal experience with the platform may prove invaluable as Anand steps into his new role as the CEO of Character.AI, a transition announced last month.
Anand takes the helm at a pivotal moment for Character.AI, whose platform lets users converse with a wide variety of AI personas. The company is navigating a crowded competitive landscape while facing lawsuits from families who allege their children were exposed to inappropriate content. Lawmakers and advocacy groups have also raised concerns about the platform’s safety, particularly for users under 18, and experts have warned about the risks of forming unhealthy attachments to AI characters.
Anand’s Vision for Character.AI
Bringing a wealth of experience from tech giants like Microsoft and Meta, Anand leads Character.AI’s team of approximately 70 employees. His strategic vision is to transform the platform into a hub for interactive AI entertainment, moving away from passive social media consumption towards co-creating narratives with AI.
“AI can power a very, very powerful personal entertainment experience unlike anything we’ve seen in the last 10 years in social media, and definitely nothing like what TV used to be,” Anand stated in an interview.
Unlike general-purpose AI tools such as ChatGPT, Character.AI offers a diverse range of chatbots modeled after celebrities and fictional characters. Users can create their own personas for conversations or role play, with bots responding in human-like ways, including references to facial expressions or gestures.
Focus on Youth Safety
Character.AI has faced scrutiny following lawsuits from families claiming the platform exposed children to harmful content. In response, the company has implemented safety measures, including pop-ups that direct users who mention self-harm to the National Suicide Prevention Lifeline. It has also updated the AI model for users under 18 to reduce their exposure to sensitive content, and parents can opt to receive weekly reports of their child’s activity on the platform.
Despite these efforts, Anand acknowledges the ongoing challenge of maintaining safety, especially for younger users. The platform requires users to be over 13 but does not verify birthdates, leaving a potential gap in its safeguards.
“The tech and the industry and the user base is constantly evolving (so) that we can never let the guard off. We have to constantly stay ahead of the curve,” Anand emphasized.
Character.AI is also building safeguards into new features, such as the video generator it launched last month. The tool lets users animate their bots, with protections intended to prevent misuse such as deepfakes or bullying.
Enhancing the User Experience
Anand aims to refine the platform’s safety filter, which he describes as “overbearing.” He cited instances where harmless content, such as “vampire fan fiction role play,” was unnecessarily censored. His goal is to update the model so it better understands context while maintaining safety.
Another priority is fostering a creator ecosystem on Character.AI by encouraging more users to build new chatbot characters. Enhancements are also planned for the social feed where users share AI-generated content, a feature that mirrors a Meta app for publicly sharing AI creations, though that app has faced privacy criticism.
Competing in the AI Landscape
Character.AI faces stiff competition in the AI domain, with companies like Meta offering lucrative packages to attract top talent. The departure of co-founder Noam Shazeer to Google underscores the competitive nature of the industry.
“It is hard, I will not lie,” Anand admitted. “The good news for me as CEO is all the people we have here are very, very passionate and mission driven.”
As Anand leads Character.AI through these challenges, his focus remains on innovation and safety, striving to position the platform as a leader in AI entertainment while ensuring user trust and security.
