AI Assistant Summit - San Francisco
Notes on talks at AI Assistant Summit in San Francisco.
The State of Natural Language Understanding: Past, Present and Future
- There are algorithms that create word embeddings that output multiple vectors per symbol.
- Word embedding calculus is very interesting: US + Capital = Washington. http://www.dmtk.io/word2vec_multi.html
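The analogy arithmetic can be sketched with cosine similarity over toy vectors (the vectors below are invented for illustration; a real system would use trained word2vec embeddings):

```python
import math

# Toy 3-d "embeddings" with made-up values, not from a trained model.
vecs = {
    "us":         [0.9, 0.1, 0.0],
    "capital":    [0.0, 0.8, 0.3],
    "washington": [0.9, 0.9, 0.3],
    "paris":      [0.1, 0.9, 0.4],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def analogy(a, b, exclude):
    # Score every remaining word against the vector sum a + b.
    target = [x + y for x, y in zip(vecs[a], vecs[b])]
    scores = {w: cosine(target, v) for w, v in vecs.items() if w not in exclude}
    return max(scores, key=scores.get)

print(analogy("us", "capital", exclude={"us", "capital"}))  # washington
```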
Deep Learning for Spoken Language Understanding
People also don't understand dialog fully (about a 5.5% error rate), but they ask questions in order to disambiguate.
Trying to understand what the person is saying helps bias the system toward a correct transcription.
Bi-directional LSTMs work for pretty much everything
Estimate domain during parsing
Don't only model agents; also model users. Different people have different dialogs,
so the types of users should be modeled too. The user model and agent model can converse with each other;
then we can test whether the result looks like dialogs real people would have. This improves performance dramatically.
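The user-model/agent-model conversation can be sketched as a minimal self-play loop (the slots, utterances, and goal format below are invented for illustration):

```python
# A minimal self-play sketch: a scripted user simulator converses with a
# slot-filling agent, producing synthetic dialogs we can inspect. Slot
# names and utterances are invented for illustration.

SLOTS = ["destination", "date"]

def make_user(goal):
    """User simulator: answers whichever slot the agent asks about."""
    def user(agent_utterance):
        for slot in SLOTS:
            if slot in agent_utterance:
                return f"{slot} is {goal[slot]}"
        return "done"
    return user

def agent_turns():
    """Agent policy: ask for each slot in turn, then confirm."""
    for slot in SLOTS:
        yield f"What {slot} would you like?"
    yield "Booking confirmed."

def self_play(goal):
    user = make_user(goal)
    dialog = []
    for utterance in agent_turns():
        dialog.append(("agent", utterance))
        if "confirmed" in utterance:
            break
        dialog.append(("user", user(utterance)))
    return dialog

for speaker, text in self_play({"destination": "SFO", "date": "Monday"}):
    print(f"{speaker}: {text}")
```

Generated transcripts like this can then be compared against real human dialogs to test whether the simulator is plausible.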
Error handling is actually hard. The bot needs to get the user back on track if they input nonsense, and garbage information has to be thrown out of the system.
Cognitive Belief Modeling for Naturalistic Dialog Management
- Theory of mind.
- There can be questions about the answer or about the performed task.
- Build common ground, i.e. common beliefs. There is a lot of info on that in the cognitive sciences.
- Invoke repair mechanisms
- Dialog modeling has to be debuggable, which contradicts DL at the current state of the art.
Parsing human speech is orders of magnitude harder. Parsers don't work with speech, hence intent parsing for conversations instead of parse trees, or a hybrid approach.
Look-up: Noah Goodman at Stanford
NLP, Parsing, Information Extraction, Dialog and Question Answering
- Embeddings of n-grams.
- CNNs are better.
- Context-dependent word embeddings; POS word embeddings. They couldn't talk about it because it is internal, unpublished work.
- Alexa bot prize - chit-chat-bot competition
- If people don't work, they decline quickly.
- We're influenced by other embodied social creatures. Embodiment is important.
- A lot of the tech being developed is not grounded or tested where those apps are actually needed. This is why they fail.
- Elderly people love properly designed technology. There are no studies showing that elderly people don't like technology.
Building a Conversational Agent Overnight with Dialogue Self-Play
Dialogue research team. They use DL to model dialogs; DL on its own is a loser.
Goal oriented dialog.
Coreference: resolving pronouns.
Entailment: one statement implying another.
- User simulator
- Record user - bot dialog then crowd source dialog rewrites
- Combining automation and human intelligence. Machines talk to machines and users correct - this is much cheaper than expert labeling.
- Has datasets that we can look at
- Contextual language understanding using RNNs
- Use hand-crafted rule-based policies to train NN models. The models are as good as the rule-based agents, but can improve and generalize; they can be further improved with RL.
- Reward: a bonus if the task is completed, minus the number of turns.
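A minimal sketch of that reward, assuming a fixed completion bonus (the bonus value is not from the talk):

```python
def dialog_reward(task_completed, num_turns, success_bonus=20):
    """RL reward for a dialog policy: a bonus for completing the task,
    minus a per-turn penalty so shorter successful dialogs score higher.
    The bonus value of 20 is an assumption for illustration."""
    return (success_bonus if task_completed else 0) - num_turns

print(dialog_reward(True, 6))   # 14: completed in 6 turns
print(dialog_reward(False, 6))  # -6: failed after 6 turns
```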
Siri’s Natural Language Understanding
From rule based to deep learning techniques.
2B queries per week.
NLU in Siri
1) Domain chooser
2) Action classifier (get verb)
Start with a rule-based system when you have no data
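The two-stage pipeline above (domain chooser, then action classifier) can be sketched as keyword rules, the kind of rule-based starting point you'd use before data arrives (all domains, keywords, and verbs below are invented for illustration):

```python
import re

# Hypothetical keyword rules; a real system would learn these once data exists.
DOMAIN_RULES = {
    "weather": ["weather", "rain", "temperature"],
    "music":   ["play", "song", "album"],
}

ACTION_VERBS = ["play", "get", "set", "send"]

def choose_domain(query):
    """Stage 1: pick the first domain whose keywords appear in the query."""
    q = query.lower()
    for domain, keywords in DOMAIN_RULES.items():
        if any(k in q for k in keywords):
            return domain
    return "unknown"

def classify_action(query):
    """Stage 2: extract the action verb from the query."""
    words = re.findall(r"[a-z']+", query.lower())
    for w in words:
        if w in ACTION_VERBS:
            return w
    return "other"

query = "Play a song by Miles Davis"
print(choose_domain(query), classify_action(query))  # music play
```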
- Interesting parsing ideas in the slides
- They’re doing research about conversation modeling
Amazon Alexa Prize in Conversational AI
- Non-goal oriented dialogs
- No intents and slots
- Hybrid systems are great - LSTMs + Rule based
- Alexa Social Bots
- Alexa Prize Proceedings
AVA - Autodesk Virtual Assistant (B2C)
- E-mail-like queries that are longer than one sentence.
- Provide the user with a choice; make the user's theory of mind (ToM) easier.
- Interactions affect bot personality. Transactions and use cases affect dialog architecture
Emotion Intelligence for Clothing Shopping
Andrew Magliozzi - Co-Founder
- We should partner: limited domain. They're looking for someone who can manage complex dialogs and clarifications.
- Gathering medical data from patients - hard problem
- Gathering medical data is a huge industry. If it saves doctors time, they'll pay for a service like that.
The Future of Voice Computing is in the Ear
- Voice assistants don’t work in loud environments (workplace)
- Workers on the go
- Messaging is a killer app for voice assistant.
- “Oh that’s interesting” - to save last 15 seconds of the conversation
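The "save the last 15 seconds" trick amounts to a bounded ring buffer over audio frames; a sketch, with the frame rate and window length as assumptions:

```python
from collections import deque

# Keep a rolling window of recent audio frames in a bounded deque; the
# trigger phrase snapshots the buffer. Frame rate and window length are
# assumptions for illustration.
FRAMES_PER_SECOND = 100          # e.g. 10 ms frames
WINDOW_SECONDS = 15

buffer = deque(maxlen=FRAMES_PER_SECOND * WINDOW_SECONDS)

def on_audio_frame(frame):
    buffer.append(frame)         # oldest frames fall off automatically

def on_trigger_phrase():
    """Called when the wearer says 'oh, that's interesting'."""
    return list(buffer)          # snapshot of the last 15 seconds

for i in range(2000):            # simulate 20 s of incoming frames
    on_audio_frame(i)

saved = on_trigger_phrase()
print(len(saved), saved[0])      # 1500 500
```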
Their AI assistant is similar to our ideas in terms of B2B.
Parsing voice is hard. All the NLP research on text, parsing, and trees helps very little.
Amazing product and a very good presentation.
An AI-first company; actually, they're trying to outsource hardware to someone else. They tried to create an in-ear device over the past year, and it's hard and not interesting to them, so they're looking to partner with a hardware manufacturer instead.
- Healthcare applications. Mostly about the problems with data. Less so about conversational AI.
Conversational Agents and the Future of Intelligent Customer Experience
- Customer support via chat bot. Very sophisticated
- 60 businesses with 200M customers
- Dynamic Memory Networks Seq-to-Seq on dialog data
- Intent classification needs a lot of data. For corner cases you need a very large amount of data.
  - "Does the hotel provide breakfast?"
  - "Does the hotel provide eggs for breakfast?" (hard to train because of few examples)
- Provides services on the human side: voice, datasets, etc.
- They do something similar to us: a rule-based hybrid with hired assistants. A very small, closed, interesting domain.
From rule-based to learning
Crowd-sourcing dialogs: create scenarios and have real people play the roles.
- Use rules to produce data for ML
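One way "rules produce data for ML" can look: expand hand-written templates into labeled training examples (the templates, intents, and slot values below are invented for illustration):

```python
import itertools

# Hand-written rule templates, expanded into (utterance, intent, slots)
# training examples. All names here are made up for illustration.
TEMPLATES = {
    "book_flight": ["book a flight to {city}", "I need to fly to {city}"],
}
CITIES = ["Boston", "Denver"]

def generate_examples():
    examples = []
    for intent, templates in TEMPLATES.items():
        for tpl, city in itertools.product(templates, CITIES):
            examples.append((tpl.format(city=city), intent, {"city": city}))
    return examples

for text, intent, slots in generate_examples():
    print(intent, "|", text)
```

Each new template or slot value multiplies the dataset, which is why rule-generated data is so much cheaper than expert labeling.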
Ivan Crewkov - MyBuddy.ai
Growing the Generation of AI-Natives
- Fake the AI to gather data: have people pretend to be the AI (a Wizard-of-Oz setup).