Friday, 21 July 2023

Voluntary Commitments to help make AI safer.

On July 21, 2023, a group of leading AI companies, including OpenAI, Google, and Meta, agreed to a set of voluntary commitments to help make AI safer.

These commitments include:

  • Oversight from independent security experts: The companies agreed to establish an independent oversight body to review their AI safety practices. This body will be composed of experts from academia, government, and industry.
  • Watermarking of AI-generated content: The companies agreed to create watermarking systems to flag when an image, text, or video is created by AI. This will help to prevent AI-generated content from being used for malicious purposes, such as spreading misinformation or creating deepfakes.
  • Thorough testing of AI systems: The companies agreed to test their AI systems rigorously before releasing them to the public, helping to ensure that the systems are safe and secure.
  • Sharing of information about AI risks: The companies agreed to share information about the risks associated with AI. This will help to raise awareness of these risks and to develop solutions to address them.
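The watermarking commitment above is stated only at a high level; real AI watermarking schemes (such as statistical, token-level marks in generated text) are considerably more sophisticated. As a toy illustration only, the sketch below hides a short marker in text using invisible zero-width Unicode characters and detects it later; all names here are hypothetical, not part of any company's actual system.

```python
# Toy watermarking sketch: hide a marker string in text as invisible
# zero-width characters, then recover it. Illustrative only; real
# AI-content watermarks are statistical and far more robust.

ZW0 = "\u200b"  # zero-width space      -> encodes bit 0
ZW1 = "\u200c"  # zero-width non-joiner -> encodes bit 1

def embed_watermark(text: str, marker: str = "AI") -> str:
    """Append the marker, encoded as invisible zero-width bits."""
    bits = "".join(format(ord(c), "08b") for c in marker)
    return text + "".join(ZW1 if b == "1" else ZW0 for b in bits)

def detect_watermark(text: str) -> str:
    """Recover an embedded marker; returns '' if none is present."""
    bits = "".join("1" if c == ZW1 else "0"
                   for c in text if c in (ZW0, ZW1))
    chars = [chr(int(bits[i:i + 8], 2))
             for i in range(0, len(bits) - 7, 8)]
    return "".join(chars)

stamped = embed_watermark("This paragraph was machine-generated.")
print(detect_watermark(stamped))  # -> AI
```

Note that a scheme like this is trivially stripped by re-typing or normalizing the text, which is one reason the companies' commitment targets more durable watermarking techniques.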

In addition to these commitments, the companies agreed to work with the Biden-Harris administration to develop a comprehensive framework for the responsible development of AI. This framework will address the ethical, legal, and societal implications of AI, and will provide guidance for businesses, governments, and individuals on how to use AI safely and responsibly.

The commitments made by these companies are a positive step towards ensuring that AI is used for good. However, it is important to remember that these commitments are voluntary; there is no guarantee that the companies will uphold them. It is important to continue monitoring the development of AI and advocating for strong safety measures.

SANJAY NANNAPARAJU

WA +91 98484 34615


