“Imagine” asks users to name a real or imaginary location, which LaMDA will then describe (the test is whether LaMDA can render a convincing description). “Talk About” offers a conversation starter to test whether the AI stays on topic. “List” asks users to name a task or topic to see whether LaMDA can break it down into bullet points: if you say “I want to plant a vegetable garden,” the answer might include subtopics like “What do you want to grow?” or “Water and care”.

Google’s approach is therefore more conservative than Meta’s, whose chatbot, goaded by users, has started saying some rather disturbing things and even criticizing its boss, Mark Zuckerberg. But that is precisely the point of these tests. According to Mary Williamson, head of research engineering at Facebook AI Research (FAIR), many companies avoid testing their chatbots with the public because what the bots say could damage the business, as happened with Microsoft’s Tay, which learned to repeat racist and anti-Semitic phrases on Twitter.
Unfortunately, that risk is always around the corner, but for researchers it is the best way to test AI in conditions they could never reproduce in the lab, provided that these experiments remain confined to their intended scope and are not carelessly deployed in real-world applications, as Timnit Gebru (notably dismissed by Google over her positions) has long warned. Will Google manage to avoid that, despite all its precautions? If you’re curious, here is the official registration page; although it is open only to those who live in the United States, it contains some interesting information: https://aitestkitchen.withgoogle.com/