OpenAI buffs safety team and gives board veto power on risky AI
OpenAI has taken additional steps to ensure the safety of its artificial intelligence systems by standing up a new safety team and giving its board of directors veto power over projects deemed too risky. The safety team draws on experts from a range of fields, including AI, computer science, cybersecurity, ethics, and law, and will review OpenAI's AI-related projects and activities to ensure they meet high safety standards.
The board's veto power comes in the wake of recent concerns about AI safety and the potential for the technology to be put to nefarious purposes. By allowing board members to block certain projects, OpenAI aims to avoid inadvertently creating a powerful tool that could be used for illegal or malicious ends.
In addition to creating the safety team and granting its board veto power over risky projects, OpenAI has announced plans to invest further in AI safety research. This includes backing safety-focused work such as DeepMind's AI verification platform and expanding its own safety engineering team. Through these investments, OpenAI hopes to keep developing more secure and reliable AI technologies and products.
OpenAI is also making some of its safety technology open source so that other organizations can benefit from it, and it is encouraging collaboration between researchers and developers to build a safer AI future. In taking these steps, OpenAI is setting an example for other AI companies and helping to ensure that AI technology is used responsibly and safely.