Machine Reading for self-learning has been a longstanding dream of AI. Natural Language Processing research has cleverly decomposed this challenge into several subtasks, each of which has been addressed with some success, but which do not add up to meet the grand challenge. In contrast to Information Extraction, a Machine Reading system has to interpret every sentence in the text; in contrast to Text Mining, the system has to read just a single text in depth, not gather evidence from dozens or hundreds of examples; and in contrast to Text Summarization, the system has to create a formal, abstract representation/interpretation of the text's content, not just reproduce selected portions of the input. Furthermore, the resulting knowledge base has to grow, evolve its own generalizations, and produce expectations and hypotheses that address gaps or ambiguities in new input text. No system today can do this robustly. I describe three component projects executed at CMU over the past few years that address different aspects of the challenge, principally by creating increasingly robust yet very simple representations for the underlying knowledge base.