Abstract: | Can we determine the causal direction between just two variables? How can we make optimal predictions in the presence of distribution shifts? We often face such causal modeling or prediction questions in science, management, and engineering. Causal discovery has recently benefited a great deal from statistics and machine learning; conversely, causal information has been shown to facilitate understanding and solving certain machine learning problems. In this talk I will first discuss how conditional independence relations among random variables and the independent noise condition enable causal discovery, i.e., learning causal information from purely observational data. They lead to the so-called constraint-based and functional causal model-based approaches to causal discovery, respectively. In particular, the latter type of approach is able to distinguish cause from effect given only two variables. I will illustrate the advantages and limitations of these approaches and report real-world applications. Second, I will consider two machine learning problems, semi-supervised learning and domain adaptation (or transfer learning), from a causal point of view, and briefly discuss why and how the underlying causal knowledge helps to solve learning problems when the i.i.d. assumption is dropped. |
Date: | 8 June 2016 |
Time: | 11:00 am - 12:00 noon |
Speaker: | Dr Kun Zhang |
Venue: | Room 7-207, 7/F, Academic 3 |
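As a toy illustration of the functional-causal-model idea mentioned in the abstract (this sketch is not from the talk itself), one can fit a regression in both directions and check which direction yields residuals independent of the hypothesized cause. The example below assumes a linear mechanism with uniform (non-Gaussian) noise, and uses a crude squared-residual correlation as a stand-in for a proper independence test such as HSIC; all variable names and the data-generating process are illustrative assumptions.

```python
import random

random.seed(0)

# Simulated pair: x causes y through a linear mechanism with
# uniform (hence non-Gaussian) additive noise.
n = 5000
x = [random.uniform(-1, 1) for _ in range(n)]
noise = [random.uniform(-0.3, 0.3) for _ in range(n)]
y = [xi + ni for xi, ni in zip(x, noise)]

def mean(v):
    return sum(v) / len(v)

def corr(a, b):
    ma, mb = mean(a), mean(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

def residuals(cause, effect):
    """OLS-regress effect on cause and return the residuals."""
    mc, me = mean(cause), mean(effect)
    sxx = sum((c - mc) ** 2 for c in cause)
    sxy = sum((c - mc) * (e - me) for c, e in zip(cause, effect))
    b = sxy / sxx
    a = me - b * mc
    return [e - (a + b * c) for c, e in zip(cause, effect)]

def dependence(cause, effect):
    """Crude independence check: correlation between squared
    residuals and the squared regressor (near zero when the
    residual is truly independent of the hypothesized cause)."""
    r = residuals(cause, effect)
    return abs(corr([ri ** 2 for ri in r], [c ** 2 for c in cause]))

score_xy = dependence(x, y)  # residual is (approx.) independent of x
score_yx = dependence(y, x)  # residual carries dependence on y
print("x->y dependence:", round(score_xy, 3))
print("y->x dependence:", round(score_yx, 3))
print("inferred direction:", "x->y" if score_xy < score_yx else "y->x")
```

The asymmetry arises because only in the true causal direction can the effect be written as a function of the cause plus an independent noise term; in the anti-causal direction the regression residual remains statistically dependent on the input, which is what the score detects.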