- uploaded August 5, 2019
In the foreseeable future, humans will increasingly rely on highly developed AI systems to process and share information. But how do AI systems need to present information so that humans do not misprocess or misunderstand it? Such systems must be able to “understand” the human reasoning process rather than merely apply normative laws of “correct reasoning,” such as classical logic or probability theory, and expect that human reasoners will generally follow them. When humans reason about information, the conclusions they draw can deviate strongly from these normative theories; in the well-known Wason selection task, for instance, most reasoners fail to select the cards that classical logic prescribes. This is not caused by simple lapses of attention or motivation, but depends on the underlying mental representation. Despite progress in cognitive psychological theories, a descriptive theory of human reasoning, both in general and for individual reasoners, is still missing. But what are the characteristics of human reasoning? How do we need to change psychological experimental research to better understand human reasoning, and how well do state-of-the-art cognitive systems predict an individual reasoner? Implications for human reasoning and cognitive systems are discussed.