Because context-aware applications rely on implicit sensing and increasingly complex decision making, they may make mistakes, or users may misunderstand their actions. This can hinder trust in and adoption of context-aware applications. We hypothesize that making these applications intelligible, by having them explain themselves to users, would help counter this lack of trust. The proposed thesis would contribute to context-aware computing by (i) understanding the need to explain these applications to users, (ii) understanding the benefits and trade-offs of providing intelligibility, and (iii) providing toolkit support for intelligibility, to ultimately improve the trust, adoption, and sustained use of context-aware systems.