The word grounding loop comes from the following idea of mine:
Proper word embeddings are useful in various systems that do POS tagging, parsing, encoding, and so on. Given that correct POS tags and correct concept selection within a big ontology help with disambiguation and further improve the sentence parse, it is logical to maximize these aspects of the word embeddings.
Let us suppose that a binary 2D map would have greater overlap between words of the same POS, between conceptually related words, and so on. To infer, and thus produce, the correct embedding, the embedding process should rely on the parser to extract the true meaning, POS, and other properties contained inside the sentence, and even outside of it, such as the context of the conversation, book, or chapter. This makes the word embedding and the rest of the system a circular, self-feeding mechanism: better embeddings should result in better parses, while better parses should supply the embedding mechanism with reliable information such as POS, concept, and relations to the current sentence and context.
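To make the overlap idea concrete, here is a minimal sketch with toy binary maps. The bit layout (a POS region and a concept region) and the example words are assumptions for illustration only, not a fixed design:

```python
import numpy as np

# Toy binary embeddings (hypothetical): each word is a flattened binary map.
# Assume bits 0-3 encode a POS region and bits 4-7 a concept region; the
# post does not fix any particular layout.
emb = {
    "run":   np.array([1, 1, 0, 0, 1, 0, 1, 0], dtype=np.uint8),  # verb, motion
    "walk":  np.array([1, 1, 0, 0, 1, 0, 0, 1], dtype=np.uint8),  # verb, motion
    "table": np.array([0, 0, 1, 1, 0, 1, 0, 0], dtype=np.uint8),  # noun, object
}

def overlap(a, b):
    """Number of bits set in both maps (the shared area)."""
    return int(np.sum(a & b))

# Words sharing POS and concept overlap more than unrelated words.
print(overlap(emb["run"], emb["walk"]))   # 3
print(overlap(emb["run"], emb["table"]))  # 0
```

The point is only that overlap of binary maps gives a cheap similarity signal that a parser could exploit when choosing tags or concepts.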
Given this property, one can design a system with a high degree of entropy at the start, where the parser improves the embeddings of individual words, while those words in turn help produce better parses. If the language has structure, the system will settle on word embeddings that facilitate the rest of the system in parsing, tagging, and encoding.
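The loop described above can be sketched as follows. Everything here is hypothetical: the lexicon, the per-POS prototype vectors, and the stub "parser" are stand-ins so the feedback step is visible. Embeddings start random (high entropy), and each parse nudges a word's embedding toward the prototype of the POS the parser assigned:

```python
import random

# Hypothetical toy lexicon and per-POS prototype embeddings.
LEXICON = {"cat": "NOUN", "dog": "NOUN", "runs": "VERB", "sleeps": "VERB"}
PROTO = {"NOUN": [1.0, 0.0], "VERB": [0.0, 1.0]}

def parse(sentence):
    """Stub parser: return (word, POS) pairs via lexicon lookup. A real
    system would use the current embeddings to disambiguate here."""
    return [(w, LEXICON[w]) for w in sentence.split()]

def ground(sentences, steps=50, lr=0.2, seed=0):
    """Grounding loop: start with random embeddings, then let each parse
    pull word embeddings toward the prototype of the assigned POS."""
    rng = random.Random(seed)
    emb = {w: [rng.random(), rng.random()] for w in LEXICON}
    for _ in range(steps):
        for s in sentences:
            for word, pos in parse(s):  # parser output feeds the embeddings
                emb[word] = [e + lr * (t - e)
                             for e, t in zip(emb[word], PROTO[pos])]
    return emb

emb = ground(["cat runs", "dog sleeps"])
# After the loop, words sharing a POS have settled near the same prototype.
```

In this sketch the parser is fixed, so only one direction of the loop is shown; the full idea is that the improved embeddings would then feed back into the parser's next round of decisions.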
As a bonus, here is a picture I took today: