XAGI

Inspired by the data- and compute-hungry, water-thirsty, mysterious black boxes that drive modern AI...

this study explores the possibility that explainability in natural language processing might guide us to AGI, or human-level AI, by improving knowledge representation.

It has been suggested that human-level AI requires human-level knowledge representation and a natural language of thought.

Word embeddings from statistical models can encode a great deal of nuanced information; however, they also possess some counterintuitive qualities that could be improved.
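
To make that point concrete, here is a minimal sketch using hypothetical toy vectors (not a trained model): vector arithmetic can capture relational structure such as king − man + woman ≈ queen, yet distributionally similar antonyms like "hot" and "cold" can look nearly identical to the model.

```python
import numpy as np

# Hypothetical toy word vectors; real embeddings would come from a trained
# model such as word2vec or GloVe. These values are illustrative only.
vectors = {
    "king":  np.array([0.80, 0.65, 0.10]),
    "queen": np.array([0.78, 0.68, 0.72]),
    "man":   np.array([0.75, 0.20, 0.05]),
    "woman": np.array([0.72, 0.22, 0.70]),
    "cold":  np.array([0.10, 0.90, 0.40]),
    "hot":   np.array([0.12, 0.88, 0.45]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Nuance captured: king - man + woman lands near queen.
analogy = vectors["king"] - vectors["man"] + vectors["woman"]
print("analogy vs queen:", cosine(analogy, vectors["queen"]))

# Counterintuitive quality: antonyms that appear in similar contexts
# score as highly "similar" even though their meanings oppose.
print("hot vs cold:", cosine(vectors["hot"], vectors["cold"]))
```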

This research examines the need for better vector-based word representations and proposes an intelligible multimodal embedding as a potential path toward more robust AI.

  • technology, artificial intelligence

  • this research began in Summer 2022