This project aims to exploit the data generated by depth sensors in the realm of 3D computer vision. The goal is to develop and enhance state-of-the-art deep learning methods that use this data to enable applications such as dense depth map generation from structured-light sensors, novel view synthesis, and dense visual RGB-D SLAM.

Structured-light sensors are among the most commonly used depth sensors in computer vision applications, yet generating accurate depth maps from them remains challenging. This project proposes a solution that fuses information from multi-view images to improve the accuracy of dense depth maps produced by structured-light sensors.

Another application of depth sensor data is novel view synthesis, where Neural Radiance Fields (NeRF) have shown great potential. This project explores and develops new NeRF-based algorithms that synthesize novel views of an object from a given set of input views, with significant applications in virtual reality and 3D content creation.

Dense visual SLAM is the third field this project investigates. Visual SLAM is widely used in robotics and autonomous vehicles to build maps of the environment from visual data, but traditional methods often struggle to produce accurate dense depth maps in real time. This project develops a dense visual SLAM pipeline that leverages recent advances in neural rendering: by combining depth sensor data with an efficient neural rendering implementation, the pipeline aims to produce more accurate and more complete maps of the environment at higher processing speeds than existing methods.

In conclusion, the AI-Sensor project aims to advance deep learning methods in 3D computer vision by exploiting the data generated by depth sensors. It proposes novel approaches for dense depth map generation with structured-light sensors, novel view synthesis based on Neural Radiance Fields, and a dense visual SLAM pipeline built on recent advances in neural rendering. These contributions have the potential to impact fields such as robotics, autonomous vehicles, virtual reality, and 3D content creation. The sketches below illustrate the core computations behind each of the three directions.
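For the multi-view depth refinement direction, the core operation is a photometric consistency check: a depth hypothesis for a reference view is warped into a neighbouring view and compared against the observed image, so that inconsistent depth values can be flagged or corrected. The following is a minimal sketch of that check, assuming calibrated pinhole cameras and grayscale images; the function names and the nearest-neighbour sampling are illustrative simplifications, not the project's actual method.

```python
import numpy as np

def backproject(depth, K):
    """Lift a depth map of shape (H, W) to camera-frame 3D points (H, W, 3)."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(np.float64)
    rays = pix @ np.linalg.inv(K).T      # pixel coordinates -> normalized rays
    return rays * depth[..., None]       # scale each ray by its depth

def photometric_error(depth_ref, img_ref, img_src, K, R, t):
    """Warp the reference view into a source view using the depth hypothesis
    and return the mean absolute intensity difference over valid pixels.
    K: 3x3 intrinsics; (R, t): reference-to-source rigid transform."""
    pts = backproject(depth_ref, K) @ R.T + t   # points in the source camera frame
    proj = pts @ K.T
    z = proj[..., 2]
    u = proj[..., 0] / np.where(z > 0, z, 1.0)  # guard against division by zero
    v = proj[..., 1] / np.where(z > 0, z, 1.0)
    H, W = depth_ref.shape
    valid = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    ui = u[valid].astype(int)                   # nearest-neighbour sampling
    vi = v[valid].astype(int)
    return np.abs(img_src[vi, ui] - img_ref[valid]).mean()
```

In a full pipeline this error would be evaluated per pixel over several source views and used to filter or re-estimate unreliable structured-light depth values.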
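For the novel view synthesis direction, the mechanism behind NeRF can be summarized by its standard volume rendering formulation (Mildenhall et al., 2020): the color of a camera ray is a transmittance-weighted sum over samples along the ray,

\[
\hat{C}(\mathbf{r}) \;=\; \sum_{i=1}^{N} T_i \left(1 - e^{-\sigma_i \delta_i}\right)\mathbf{c}_i,
\qquad
T_i \;=\; \exp\!\Bigl(-\sum_{j=1}^{i-1} \sigma_j \delta_j\Bigr),
\]

where \(\sigma_i\) and \(\mathbf{c}_i\) are the density and color predicted by the network at the \(i\)-th sample along ray \(\mathbf{r}\), and \(\delta_i\) is the distance between adjacent samples. Training minimizes the difference between \(\hat{C}(\mathbf{r})\) and the observed pixel color over the input views; novel views are then rendered by casting rays from an unseen camera pose.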
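Finally, for the dense visual SLAM direction, the sketch below illustrates the camera-tracking step common to neural-rendering-based SLAM systems: the pose of an incoming RGB-D frame is refined by gradient descent on a combined photometric and depth rendering loss. The `render_fn` interface, the axis-angle pose increment, and all hyperparameters are illustrative assumptions, not the project's actual implementation.

```python
import torch

def skew(w):
    """Map an axis-angle 3-vector to its 3x3 skew-symmetric matrix."""
    z = torch.zeros(1, dtype=w.dtype)
    return torch.stack([
        torch.cat([z, -w[2:3], w[1:2]]),
        torch.cat([w[2:3], z, -w[0:1]]),
        torch.cat([-w[1:2], w[0:1], z]),
    ])

def track_frame(render_fn, R_init, t_init, rgb, depth, iters=100, lr=1e-2, w_depth=0.5):
    """Refine one frame's camera pose against a differentiable renderer.
    render_fn(R, t) is assumed to return a predicted (rgb, depth) pair
    rendered from the pose (R, t), e.g. from a trained radiance field."""
    w = torch.zeros(3, requires_grad=True)   # rotation increment (axis-angle)
    v = torch.zeros(3, requires_grad=True)   # translation increment
    opt = torch.optim.Adam([w, v], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        R = torch.matrix_exp(skew(w)) @ R_init   # perturb the initial rotation
        t = t_init + v
        rgb_hat, depth_hat = render_fn(R, t)
        loss = ((rgb_hat - rgb).abs().mean()
                + w_depth * (depth_hat - depth).abs().mean())
        loss.backward()
        opt.step()
    with torch.no_grad():
        return torch.matrix_exp(skew(w)) @ R_init, t_init + v
```

In such systems, tracking of this kind typically alternates with a mapping step in which the same rendering loss, accumulated over keyframes, updates the scene representation itself.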