OpenAI Forms Council to Facilitate ‘Healthy Interactions With AI’

OpenAI has formed a council to help it define and monitor “what healthy interactions with AI [artificial intelligence] should look like.”

The Expert Council on Well-Being and AI is composed of eight researchers and experts focused on how technology affects mental health, the company said in a Tuesday (Oct. 14) blog post.

OpenAI has consulted many of these experts in the past, such as when the company was developing parental controls and notification language for parents whose teens may be in distress, according to the post.

At the council’s first formal meeting last week, the group discussed the company’s current work in these areas, the post said.

Moving forward, the council will monitor the company’s approach and will explore topics like how AI should behave in sensitive situations, what kinds of guardrails can support people using ChatGPT, and how ChatGPT can have a positive impact on people’s lives, per the post.

“Some of our initial discussions have focused around what constitutes well-being and the ways ChatGPT might empower people as they navigate all aspects of their life,” OpenAI said in the post. “We’ll keep listening, learning and sharing what comes out of this work.”

OpenAI said in a Sept. 2 blog post that it had formed a Global Physician Network of more than 250 physicians who will help the company address health issues related to AI. The network will continue to expand to include experts in areas like eating disorders, substance abuse and adolescent health.

The company said more than 90 of these physicians had already contributed to its research on how its models “should behave in mental health contexts.”

“Their input directly informs our safety research, model training and other interventions, helping us to quickly engage the right specialists when needed,” OpenAI said in the post.

On Sept. 17, OpenAI said that in addition to the parental controls, it planned to create an automated age-prediction system that can determine whether users of its chatbot are over 18 and then send younger users to an age-restricted version of ChatGPT.

PYMNTS reported at the time that the new measures came after a lawsuit from the parents of a teenager who died by suicide; the suit accused the chatbot of encouraging the boy’s actions.

Source: https://www.pymnts.com/