Meta disbanded its Responsible AI team
Meta, the technology company whose artificial intelligence (AI) research has produced a number of breakthroughs in machine learning and computer vision, has disbanded its Responsible AI (RAI) team. Launched in 2019, the RAI team was created to address the ethical and technical challenges facing AI as the technology developed. As far back as 2016, Meta had already begun establishing itself as a leader in AI ethics and had released products based on that research.
In 2021, the team appeared to be making substantial progress: its work included developing new algorithms and models to detect bias in AI systems, as well as using what it called "reverse engineering" to identify socially problematic outputs generated by AI systems. The team also proposed a framework for building and evaluating AI systems that it believed would lead to better outcomes.
Despite these advances, however, the team was quietly disbanded late last month. In a statement, Meta said, “We have decided to focus our resources on areas more closely aligned with our company’s mission.” It is unclear what drove the decision, though some industry insiders suggest budget cuts or competing corporate priorities.
This decision is significant because it leaves a major gap in the development of responsible AI. The tech industry has increasingly recognized the importance of ethical AI, as evidenced by Google's AI Principles, Meta's own FAIR research lab, and cross-industry efforts such as the Partnership on AI. Yet Meta's RAI team was one of the few dedicated responsible-AI teams at a major AI company.
The disbanding of the team raises concerns about the future of AI oversight. With one fewer dedicated team wrestling with the ethical dilemmas posed by AI, companies are less likely to take up the mantle and address those dilemmas themselves. This could leave AI development effectively unregulated, with many of its potential negative consequences unchecked.
Ultimately, Meta's decision to disband the RAI team means there is now one less organization looking out for the public interest in AI development. That could have serious repercussions, as the tech industry has proven notoriously slow to regulate itself. Without concerted effort from all stakeholders, the development of responsible AI could be left to chance, leaving us unprepared for the risks this burgeoning technology poses.