March 2, 2026
I’m on the Meta Oversight Board. We need AI protections now
AI is transforming our world. Accepting independent oversight is the least companies can do to protect our rights

TL;DR
- AI's rapid development is outpacing government regulation, creating potential dangers without the kind of safety testing required in other industries.
- AI companies such as OpenAI and Google, despite their stated focus on safety, face scrutiny for prioritizing profit and capabilities over understanding the risks of complex models.
- Independent oversight offers AI companies a way to build public trust by voluntarily submitting to external review.
- Corporate duties to shareholders can conflict with safety priorities, as evidenced by past tech industry issues like social media's impact on elections and mental health.
- Meta's Oversight Board, though imperfect, provides a case study for AI oversight, highlighting the need for diverse perspectives, transparency, and adherence to human rights law.
- Effective oversight requires accessibility, public consultation, transparency, and sufficient powers granted by the originating company.
- Adequate funding for oversight bodies, including expert staff and consultants, is essential for robust analysis and decision-making; the cost is negligible compared to overall AI investment.