Large Language Models (LLMs) have emerged as universal models for addressing language understanding and inference tasks. To operate factually and rigorously, however, these models need to be supplied with relevant factual context and to reason systematically and controllably over the retrieved evidence. Retrieval-Augmented Generation (RAG) has become the complementary architectural mechanism that enables LLMs to operate factually: a retrieval step delivers the facts relevant to a specific task. However, there is a disconnect between the performance of retrieval models, which are grounded on textual embeddings (with comparatively lower semantic granularity and performance), and the capacity of LLMs to interpret and operate over the retrieved results. As a result, LLMs are currently bound both to the comparatively lower performance of RAG retrieval and to their own limited ability to critically reason over the retrieved facts.

RATIONAL aims to develop new methods that support LLMs in performing factual and controlled evidence-based Natural Language Inference (NLI). At the centre of its methodological contributions, RATIONAL will develop:

(i) new evidence retrieval models that cross the semantic gap between abstract query intents and concrete evidence, through novel query augmentation paradigms, sentence embedding methods, and contextual retrieval generation methods specialised for bridging the intent-evidence abstraction gap;

(ii) new evidence-based NLI models capable of integrating and reasoning over the retrieved facts in a controllable and critical manner. These NLI methods will concentrate on integrating formal epistemological mechanisms with joint quantitative and qualitative reasoning to support evidence-based reasoning.
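To make the retrieve-then-infer loop described above concrete, the sketch below outlines a minimal pipeline in which an abstract query intent is first augmented, candidate evidence is ranked by sentence-embedding similarity, and an inference step then judges a claim against the retrieved evidence. This is an illustrative assumption-laden sketch, not RATIONAL's method: the functions `embed`, `augment_query`, `retrieve`, and `infer` are hypothetical placeholders standing in for the trained retrieval and NLI components the project will develop.

```python
# Minimal sketch of a retrieve-then-infer (RAG + NLI) loop.
# All function names are hypothetical placeholders, not project APIs.
import numpy as np


def embed(text: str) -> np.ndarray:
    """Placeholder sentence embedding (character-hash bag);
    a trained sentence-embedding model would be used in practice."""
    vec = np.zeros(64)
    for i, ch in enumerate(text.lower()):
        vec[(i + ord(ch)) % 64] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-9)


def augment_query(query: str) -> list[str]:
    """Placeholder query augmentation: expand an abstract intent
    into additional, more concrete reformulations."""
    return [query, f"evidence about {query}"]


def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank corpus sentences by cosine similarity to any augmented query."""
    queries = [embed(q) for q in augment_query(query)]
    scores = [max(float(q @ embed(doc)) for q in queries) for doc in corpus]
    ranked = sorted(zip(scores, corpus), reverse=True)
    return [doc for _, doc in ranked[:k]]


def infer(claim: str, evidence: list[str]) -> str:
    """Placeholder NLI step: an LLM or dedicated NLI model would decide
    whether the evidence supports, refutes, or is neutral to the claim."""
    return f"claim={claim!r}; evidence={evidence}; verdict=UNVERIFIED"


corpus = [
    "Aspirin inhibits platelet aggregation.",
    "The Eiffel Tower is in Paris.",
    "Low-dose aspirin is used to prevent blood clots.",
]
print(infer("Aspirin reduces clotting", retrieve("aspirin and blood clots", corpus)))
```

In RATIONAL, the retrieval stage of such a loop is where contributions (i) apply (query augmentation, sentence embeddings, contextual retrieval generation), while the inference stage is where contributions (ii) apply (controllable, critical NLI over the retrieved evidence).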