
Text Information Management and Analysis Group



Project: Human-Like Intelligent Systems

For many complex tasks, humans are far more intelligent than the best AI systems. While large language models (LLMs), notably ChatGPT, have demonstrated outstanding performance on many NLP tasks, their ability to perform logical reasoning and to explain their behavior is quite limited due to the limitations of their underlying neural network architecture. This significantly reduces their utility, since both unreliable inferences (hallucination) and lack of explanation (provenance) undermine the trustworthiness of any such intelligent system; the more critical an application is, the higher the level of trustworthiness required. To break this limitation, we need to study how to build human-like intelligent systems.

Human brains are known to have two somewhat separate systems, i.e., System 1 and System 2. System 1 is a fast, intuitive, but unreliable system, quite similar to current neural networks. System 2 is a slow, symbolic system that can perform logical reasoning and planning. Current LLMs, and intelligent systems in general, appear able to successfully simulate human System 1, but how to extend them to also simulate System 2 remains a difficult challenge. Doing so requires revising or extending the current transformer-based deep neural network architecture, and many neuro-symbolic models and architectures have already been developed precisely for this purpose, in the broad context of neuro-symbolic AI. However, we are far from having a neuro-symbolic system that can simulate both System 1 and System 2 of the human brain in an integrated manner. We are interested in tackling this problem from the following perspectives: