“The artificial intelligence (AI) shift is as big as anything can get,” said Sundar Pichai, head of Google and its parent company Alphabet, on Wednesday on the Google I/O developer conference stage. AI was the overriding theme of this I/O, seven years after Pichai proclaimed that Google was now an “AI first” company. But AI carries risks, both known and unknown. Even if that almost got lost in the hustle and bustle surrounding the new announcements, Google also gave the stage to a voice of caution: James Manyika.

Interspersed with promotion for Google, he warned against launching AI without regard for the consequences. Manyika heads Google’s new “Technology and Society” division. “While I believe it’s important to celebrate the incredible advancement of AI, and the immense potential it has for people everywhere, we also need to recognize that it’s an evolving technology that’s still being developed and that there is still a lot to do,” he said, opening his speech.

Google must deal with AI “boldly and responsibly,” his boss Pichai had announced shortly before on the same stage in Mountain View. This is what Manyika referred to: “There is a natural tension between these two (poles),” he admitted. “We believe that embracing this tension is not only possible but crucial. The only way to be bold in the long term is to be responsible from the start.”

AI advances many sciences, but it also risks exacerbating existing social challenges, such as unfair biases. As AI develops further and is applied in new areas, new dangers will also emerge.

Misinformation is of particular concern to Google. The question of trust arises especially with generative AI. As a striking example, Manyika showed a short video from Google’s universal translator (Universal Translator, a name borrowed from Star Trek), in which a woman first speaks English (the original) and then repeats the same thing in Spanish (the AI product). After translating the English content into Spanish, Google’s systems also captured the speaker’s style and intonation, which were then carried over into the Spanish speech output. In addition, an AI generated lip movements that matched the Spanish text. Finally, everything was stitched together into a video in which the woman speaks Spanish with matching lip movements.

Early indications suggest that such high-quality translation of educational videos can reduce student dropout rates, Manyika reported. The technique could thus be “incredibly beneficial, but could be used by bad people for deep fakes.”

However, Manyika’s task at Google I/O 2023 was not only to highlight the dangers of AI and to explain why Google may not always be the first with a new AI service, but also to present the company as a responsible provider. He recalled the seven ethical principles for the development of artificial intelligence that Google established in 2018. Accordingly, AI must:

  1. be socially beneficial,
  2. avoid creating or reinforcing unfair bias,
  3. be built and tested for safety,
  4. be accountable to people,
  5. incorporate privacy design principles,
  6. uphold high standards of scientific excellence, and
  7. be made available for uses that accord with these principles.

In 2018, Pichai ruled out Google’s participation in the development of AI algorithms for military weapon systems. The same applies to technologies that could enable large-scale surveillance of people or that would violate the principles of international law and human rights.

The seven ethical principles for AI “guide our product development and help us evaluate each AI application,” Manyika said on Wednesday. The universal translator, for example, is only being made available to selected users. All images and videos generated by a Google AI are to be identified as AI products via metadata. Google would also like to win over other AI providers for this approach; Google’s search engine would then label AI-generated content as such. Manyika announced further “watermarking innovations” for AI but did not go into specifics.
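
To make the idea of metadata tagging more concrete, here is a minimal sketch of how a generator could embed a provenance note directly in an image file. This is purely illustrative and not Google’s actual labeling scheme; the field names `ai_generated` and `generator` are assumptions for the example.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Illustrative sketch only: embed an "AI-generated" provenance note into a
# PNG's text metadata. Field names are hypothetical, not Google's scheme.
def tag_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    image = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")   # marks the file as synthetic
    meta.add_text("generator", generator)   # which model or tool produced it
    image.save(dst_path, pnginfo=meta)

def read_provenance(path: str) -> dict:
    # PNG text chunks are exposed as a dict on the loaded image
    return dict(Image.open(path).text)

if __name__ == "__main__":
    tag_as_ai_generated("generated.png", "generated_tagged.png", "example-image-model")
    print(read_provenance("generated_tagged.png"))
```

Such embedded metadata is easy to write and read, but it can also be stripped when a file is re-encoded, which is one reason Manyika pointed to additional watermarking work.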

In the coming months, Google will add a tool to its search engine that lets users find more background information about images. Images found via search will show when Google first saw that image or a similar one, and where else the image appears, be it on news sites, social media, or fact-checking sites. This information can help users assess how trustworthy an image is.

But the AI principles also help Google decide what not to do. “Years ago, we were the first company that decided not to market a generic facial recognition programming interface,” the speaker pointed out. “We felt that there were no adequate security measures in place.”

The data company is now working on reducing problematic output from its AI models. This expressly includes adversarial testing. The Google manager also talked about a Google API called Perspective: it was developed to detect toxic user comments on the internet. Researchers have since used Perspective to develop an evaluation benchmark for large language models (LLMs). All major LLMs, including those of competitors Anthropic and OpenAI, now use this benchmark to assess the toxicity of their own language models.
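
For readers curious what such toxicity scoring looks like in practice, here is a minimal sketch of a request to the publicly documented Perspective API. It assumes you have obtained your own API key; the function name and the placeholder key are illustrative, not part of Google’s documentation.

```python
import requests

# Minimal sketch of a Perspective API call (comments:analyze endpoint).
# PERSPECTIVE_API_KEY is a placeholder; request a real key from Google.
PERSPECTIVE_API_KEY = "YOUR_API_KEY"
ANALYZE_URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
    f"?key={PERSPECTIVE_API_KEY}"
)

def toxicity_score(text: str) -> float:
    """Return the probability (0..1) that the text is perceived as toxic."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(ANALYZE_URL, json=payload, timeout=10)
    response.raise_for_status()
    scores = response.json()["attributeScores"]
    return scores["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    print(toxicity_score("You are a wonderful person."))
```

A benchmark built on such scores can then rate the outputs of a language model by running generated text through the same classifier and aggregating the results.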

However, Google will not be able to master the challenges alone. “Building responsible AI must be a collaborative effort,” Manyika underscored, “with researchers, humanities scholars, industry experts, governments and ordinary people, but also creators and publishers. (…) It’s a very exciting time. There’s a lot that we can achieve and a great deal that we need to get right. Together.”

(Video: James Manyika, Google VP Technology & Society, at the Google I/O 2023 keynote)


(ds)
