Everyday explanations reveal what people understand and what they believe constitutes understanding. We propose a series of experiments in which people evaluate explanations, addressing three questions about understanding: (1) When understanding is abstract, can it be characterized in terms of probabilistic dependence among the relevant concepts, or is information about the mechanism producing the event also represented? (2) How do people represent knowledge in a way that is coherent across multiple levels of precision? (3) How do people incorporate others' knowledge, such as that of experts or scientific bodies, into their own understanding? We aim to model the results using an extension of the Causal Bayes net formalism, loosening its assumptions and adding constraints to make it more psychologically plausible. We expect this to involve developing a model of mental simulation of causal mechanisms.
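To make the notion in question (1) concrete, the following is a minimal sketch (not from the proposal; the variables and probability values are hypothetical) of "probabilistic dependence" in a Causal Bayes net. A three-node causal chain A → B → C makes C marginally dependent on A, while conditioning on the intermediate variable B "screens off" A from C — the kind of purely dependency-based characterization of understanding that question (1) contrasts with mechanism knowledge.

```python
# Hypothetical causal chain A -> B -> C with made-up conditional
# probability tables, used only to illustrate probabilistic dependence
# and screening-off in a Causal Bayes net.

p_a = {True: 0.3, False: 0.7}          # P(A)
p_b_given_a = {True: 0.9, False: 0.2}  # P(B=True | A)
p_c_given_b = {True: 0.8, False: 0.1}  # P(C=True | B)

def joint(a, b, c):
    """P(A=a, B=b, C=c), factored along the causal chain."""
    pb = p_b_given_a[a] if b else 1 - p_b_given_a[a]
    pc = p_c_given_b[b] if c else 1 - p_c_given_b[b]
    return p_a[a] * pb * pc

def prob_c(given):
    """P(C=True | given) by exact enumeration over the joint."""
    num = den = 0.0
    for a in (True, False):
        for b in (True, False):
            # Skip assignments inconsistent with the conditioning set.
            if given.get("A", a) != a or given.get("B", b) != b:
                continue
            den += joint(a, b, True) + joint(a, b, False)
            num += joint(a, b, True)
    return num / den

# Marginally, C depends on A: P(C|A=T)=0.73 vs P(C|A=F)=0.24.
print(prob_c({"A": True}), prob_c({"A": False}))
# Conditioning on B screens A off from C: both equal P(C|B=T)=0.8.
print(prob_c({"A": True, "B": True}), prob_c({"A": False, "B": True}))
```

On a dependence-only account, someone who has internalized these conditional dependencies "understands" the system; the proposal's question is whether people additionally represent the mechanism that produces them.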