
Why open conversational AI is a tough problem to solve

By John K. Morrell
July 10, 2022

The “intelligence” of AI continues to grow, and AI comes in many forms, from Spotify’s recommendation system to self-driving cars. Conversational AI uses natural language processing (NLP) to parse user input and produce natural, human-like responses, mimicking the way people communicate.

That said, it’s still difficult to build an AI tool that understands the nuances of natural human language. Open conversation is even more complex. That’s why some of the latest projects, like LaMDA and BlenderBot, have no commercial applications and are kept for research purposes only.

Making sense of open conversational AI

Conversational AI refers to technologies, such as chatbots or virtual agents, that users can interact with. Unlike simpler AI applications, they rely on large volumes of data and complex techniques to mimic human interactions, recognize speech and text inputs, and translate meaning across different languages.

Currently, there are two types of conversational AI: goal-oriented and social. Goal-oriented AI usually focuses on short interactions that help users achieve a goal, such as booking a taxi, playing a song, or shopping online. Social conversational AI engages as a companion in open-ended conversation.

Open conversational AI models must handle a vast space of possible conversations, which is difficult to design for. As a result, open conversational AI can be more expensive and slower to develop than other types of AI.
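To make the contrast concrete, here is a minimal Python sketch, illustrative only, with hypothetical intents and a stubbed generator: a goal-oriented bot matches a small, closed set of intents, while an open-domain bot must produce a free-form reply conditioned on the whole conversation.

```python
import re

# Goal-oriented AI: a small, closed set of intents with predictable slots.
GOAL_PATTERNS = {
    "book_taxi": re.compile(r"book .*taxi to (?P<destination>.+)", re.I),
    "play_song": re.compile(r"play (?P<song>.+)", re.I),
}

def goal_oriented_reply(utterance: str) -> str:
    """Match the utterance against a fixed intent list and fill its slots."""
    for intent, pattern in GOAL_PATTERNS.items():
        match = pattern.search(utterance)
        if match:
            return f"[{intent}] handling request with slots {match.groupdict()}"
    return "Sorry, I can only book taxis or play songs."

def generate_with_language_model(prompt: str) -> str:
    """Stub standing in for a large generative model."""
    return "(model-generated reply to: " + prompt.splitlines()[-1] + ")"

def open_domain_reply(utterance: str, history: list[str]) -> str:
    """Open conversation: no fixed intent set, so the space of valid
    replies is unbounded and a learned generative model is required."""
    prompt = "\n".join(history + [utterance])
    return generate_with_language_model(prompt)

print(goal_oriented_reply("Please book a taxi to the airport"))
print(open_domain_reply("How was your weekend?", ["Hi!", "Hello there."]))
```

The goal-oriented path can be exhaustively tested against its intent list; the open-domain path cannot, which is one reason it is slower and costlier to build.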

Conversational AI has core components that allow it to process inputs, understand them, and generate outputs. The quality of those outputs can be subjective, since it depends on the expectations the AI is responding to.
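In practice, those components form a pipeline of natural-language understanding, dialogue management, and natural-language generation. The sketch below is a toy illustration of that flow, with hand-written rules standing in for the learned models a real system would use.

```python
def understand(text: str) -> dict:
    """Natural-language understanding: map raw text to a structured intent."""
    tokens = text.lower().split()
    intent = "greeting" if tokens and tokens[0] in {"hi", "hello"} else "unknown"
    return {"intent": intent, "tokens": tokens}

def decide(state: dict) -> str:
    """Dialogue management: choose the next action from the parsed state."""
    return "reply_greeting" if state["intent"] == "greeting" else "ask_clarification"

def generate(action: str) -> str:
    """Natural-language generation: render the chosen action as text."""
    templates = {
        "reply_greeting": "Hello! How can I help you today?",
        "ask_clarification": "Sorry, could you rephrase that?",
    }
    return templates[action]

print(generate(decide(understand("Hello there"))))
```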

Lack of apps

At the Google I/O conference last year, Sundar Pichai said the company was looking for ways to meet the needs of developers and businesses. On top of that, he said the Language Model for Dialogue Applications (LaMDA) would be a huge step forward in natural conversation.

While large tech companies and open source initiatives are doing their best to bring conversational AI solutions and products to enterprise customers, most are limited to supporting or helping teams up to a certain level. As a result, generalized platforms built on probabilistic language models have largely remained in peripheral use cases. Another challenge is that, by its nature, open conversational AI carries a higher risk of misuse.

Open Conversational AI – Timeline

First announced at Google’s I/O 2021 event, the tech giant described LaMDA as its “breakthrough conversation technology.”

A recent controversy brought LaMDA back into the spotlight when Google artificial intelligence engineer Blake Lemoine claimed that LaMDA was sentient. He posted a few snippets of his conversations with Google’s LaMDA, a Transformer-based language model.

LaMDA’s conversational skills have been in development for years. Like other recent language models, including BERT and GPT-3, it is built on the Transformer architecture. A few reports have also suggested that LaMDA passed the Turing test and is therefore sentient. That said, the Turing test cannot be considered the ultimate proof that a model possesses human intelligence, as several past experiments have shown.

From fancy automated answering machines, chatbots and voice assistants like Amazon Echo, Cortana, Siri, and Alexa have grown into invaluable sources of information.

In July 2021, researchers at Meta released BlenderBot 2.0, a text-based assistant that queries the internet for up-to-date information, such as facts about movies and TV shows. BlenderBot is fully automated and also remembers the context of previous conversations. But the system suffers from problems such as a tendency toward toxicity and factual inconsistency.

“Until the models have a deeper understanding, they will sometimes contradict themselves. Likewise, our models cannot yet fully understand what is safe or not. And while they build long-term memory, they don’t really learn from it, which means they don’t improve on their mistakes,” the Meta researchers wrote in a blog post introducing BlenderBot 2.0.
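The two capabilities described above, querying the internet and keeping a long-term memory, can be sketched as follows. This is a toy illustration, not Meta’s implementation; the `search_internet` helper is a hypothetical stand-in for a real search API.

```python
class MemoryAugmentedBot:
    """Toy dialogue agent that combines fresh search results with
    a persistent memory of earlier turns."""

    def __init__(self) -> None:
        self.long_term_memory: list[str] = []  # persists across turns

    def search_internet(self, query: str) -> str:
        # Stub: a real system would call a search API and read the results.
        return f"(top search snippet for '{query}')"

    def respond(self, utterance: str) -> str:
        evidence = self.search_internet(utterance)            # fresh knowledge
        self.long_term_memory.append(f"user said: {utterance}")  # remember turn
        context = " | ".join(self.long_term_memory[-3:])      # recent memories
        return f"Based on {evidence} and context [{context}], here is a reply."

bot = MemoryAugmentedBot()
print(bot.respond("What's a good new TV show?"))
print(bot.respond("Any movies like it?"))  # second turn sees stored context
```

Note that, as the Meta researchers say, storing memories is not the same as learning from them: nothing in this loop updates the model when a reply turns out to be wrong.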

Before that, in 2015, at the height of the chatbot craze, Meta (then known as Facebook) launched an AI-powered, human-assisted virtual assistant called M. Through Messenger, some Facebook users could access the “next-gen” assistant, which would make purchases, arrange gift deliveries, make restaurant reservations, and more on their behalf.

Reviews were mixed; CNN, for instance, noted that M often suggested inappropriate responses in conversations, and Meta shut the experiment down in 2018.

The flip side

As machines learn from humans, they also internalize our flaws: moods, political views, tones, biases, and so on. But since they can’t yet tell right from wrong on their own, this usually results in unfiltered responses from the machine. A lack of understanding of words, emotions, and points of view remains a huge obstacle to achieving human-like intelligence.

It is still difficult for current conversational AI tools to understand user emotions, detect and respond to offensive content, interpret multimedia content beyond text, and handle slang and coded language.
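Detecting offensive content, at least, has a well-understood control flow: generate a candidate reply, score it for safety, and fall back when the check fails. The sketch below uses a deliberately naive keyword check where production systems use learned classifiers; the blocklist entries are placeholders, not a real lexicon.

```python
BLOCKLIST = {"blockedword1", "blockedword2"}  # placeholders, not a real lexicon
FALLBACK = "I'd rather not respond to that."

def is_safe(text: str) -> bool:
    """Flag a candidate reply that contains blocked terms."""
    return not (set(text.lower().split()) & BLOCKLIST)

def safe_reply(candidate: str) -> str:
    """Gate the model's output behind the safety check."""
    return candidate if is_safe(candidate) else FALLBACK

print(safe_reply("Here is a friendly answer."))   # passes the filter
print(safe_reply("blockedword1 in a reply"))      # replaced by the fallback
```

The hard part, as the article notes, is not this gating logic but the classifier behind `is_safe`: slang, coded language, and context-dependent offense defeat simple keyword checks.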
