Course 1: Welcome to the Self-Driving Car Engineer Nanodegree
Welcome to the Self-Driving Car Engineer Nanodegree program! Learn about the Nanodegree experience and hear from Waymo, one of Udacity's partners for the program.
Estimated time: 1 hour

- An Introduction to Your Nanodegree Program
- Getting Help: You are starting a challenging but rewarding journey! Take 5 minutes to read how to get help with projects and content.
- Meet Waymo: Hear from Waymo, one of the most cutting-edge autonomous vehicle companies out there! You'll learn about the company as well as the Waymo Open Dataset, which you'll use in parts of the program.

Course 2: Computer Vision
In this course, you will develop critical Machine Learning skills that are commonly leveraged in autonomous vehicle engineering. You will learn about the life cycle of a Machine Learning project, from framing the problem and choosing metrics to training and improving models. The course focuses on the camera sensor: you will learn how to process raw digital images before feeding them into different algorithms, such as neural networks. You will build convolutional neural networks using TensorFlow and learn how to classify and detect objects in images.
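The convolutional networks built in this course are stacks of learned convolution filters. As a rough illustration of the single operation at their core, here is a minimal sketch in plain Python (no TensorFlow; the tiny 4x4 "image" and the hand-picked vertical-edge kernel are made up for illustration, whereas a real CNN learns its kernel weights during training):

```python
# Minimal 2D convolution (valid padding, stride 1), the core operation
# inside a CNN layer. A real network stacks many such filters and
# learns the kernel values; here the kernel is fixed for illustration.

def conv2d(image, kernel):
    """Slide `kernel` over `image`, summing elementwise products."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# A tiny image with a vertical edge, and a vertical-edge detection kernel.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]

feature_map = conv2d(image, kernel)  # responds strongly along the edge
```

The resulting feature map is what a convolutional layer passes on (after a nonlinearity) to deeper layers for classification or detection.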
With this course, you will be exposed to the whole Machine Learning workflow and gain a good understanding of the work of a Machine Learning Engineer and how it translates to the autonomous vehicle context.
Estimated time: 19 hours

- Introduction to Deep Learning for Computer Vision: Dive into deep learning for computer vision, learning about its use cases, history, and what you'll build by the end of the course.
- The Machine Learning Workflow: Machine learning is more than just building a model - getting each step of the workflow right is crucial.
- Sensor and Camera Calibration: Learn how to calibrate your camera to remove distortions for improved perception.
- From Linear Regression to Feedforward Neural Networks: Build skills in linear and logistic regression before taking on feedforward neural networks, a type of deep learning.
- Image Classification with CNNs: Convolutional networks improve on feedforward networks for areas such as image classification - let's get started building them!
- Object Detection in Images: Object detection builds on classification by finding multiple important objects within a single image - find out how!
- Object Detection in an Urban Environment: Use the Waymo dataset to detect objects in an urban environment.

Course 3: Sensor Fusion
Besides cameras, self-driving cars rely on other sensors with complementary measurement principles; sensor fusion combines them to improve robustness and reliability. You will learn about the lidar sensor, different lidar types, and relevant criteria for sensor selection. You will also learn how to detect objects in a 3D lidar point cloud using a deep-learning approach, and then evaluate detection performance with a set of metrics. In the second half of the course, you will learn how to fuse camera and lidar detections and track objects over time with an Extended Kalman Filter.
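The Extended Kalman Filter used for tracking generalizes the linear Kalman filter to non-linear motion and measurement models, but the underlying predict/update cycle is the same. A one-dimensional sketch of that cycle in plain Python (the measurement sequence and the noise variances below are made-up illustrative values, not from the course):

```python
# One-dimensional Kalman filter: the belief over position is a Gaussian
# (mean, variance) that alternates measurement updates and motion predictions.

def update(mean, var, meas_mean, meas_var):
    """Fuse a measurement into the belief; precisions (1/variance) add."""
    new_mean = (meas_var * mean + var * meas_mean) / (var + meas_var)
    new_var = 1.0 / (1.0 / var + 1.0 / meas_var)
    return new_mean, new_var

def predict(mean, var, motion, motion_var):
    """Shift the belief by the commanded motion; uncertainty grows."""
    return mean + motion, var + motion_var

# Illustrative values: noisy position measurements, unit motions per step.
measurements = [5.0, 6.0, 7.0]
motions = [1.0, 1.0, 1.0]
mean, var = 0.0, 1000.0  # start almost uninformed

for z, u in zip(measurements, motions):
    mean, var = update(mean, var, z, 4.0)   # measurement variance 4
    mean, var = predict(mean, var, u, 2.0)  # motion variance 2
```

After three cycles the estimate converges near 8 (last measurement plus one motion) with a small variance; the EKF in the course follows the same structure with matrices and a linearization step.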
You will get hands-on experience with multi-target tracking, where you will initialize, update, and delete tracks, assign measurements to tracks with data association techniques, and manage several tracks simultaneously.
Estimated time: 25 hours

- Introduction to Sensor Fusion and Perception: Get started with sensor fusion and perception, why they are important, and the history of their development in self-driving cars.
- The Lidar Sensor: Learn about the lidar sensor, capable of capturing important 3D data in point clouds.
- Detecting Objects in Lidar: Detect objects from the 3D data coming in from a lidar sensor.
- Mid-Term Project: 3D Object Detection: Use the Waymo dataset to detect 3D objects in the surrounding environment.
- Kalman Filters: Learn from the best! Sebastian Thrun will walk you through the usage and concepts of a Kalman Filter using Python.
- Extended Kalman Filters: Build an Extended Kalman Filter that's capable of handling data from multiple sources.
- Multi-Target Tracking: Get your tracking skills ready for the real world by learning how to track multiple targets simultaneously.
- Final Project: Sensor Fusion and Object Tracking: Use the Waymo dataset, along with sensor fusion, to track multiple 3D objects in the surrounding environment.

Course 4: Localization
In this course, you will learn all about robotic localization, from one-dimensional motion models up to three-dimensional point cloud maps obtained from lidar sensors. You'll begin with the bicycle motion model, which uses a simple motion model to estimate the vehicle's location at the next time step before sensor data is gathered. Then you'll move on to Markov localization for 1D object tracking. From there, you will learn how to implement two scan matching algorithms, Iterative Closest Point (ICP) and Normal Distributions Transform (NDT), which work with 2D and 3D data.
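The Markov localization mentioned above maintains a discrete belief over positions and alternates a sensing step (Bayes update) with a motion step. A minimal one-dimensional sketch in plain Python (the two-color world and the hit/miss sensor probabilities are made up for illustration; the course implements this in C++ and with a blurred motion model):

```python
# 1D Markov localization (histogram filter): a belief over grid cells is
# sharpened by sensing and shifted by moving. Motion here is an exact
# cyclic shift; a realistic model would also blur the belief.

def sense(belief, world, measurement, p_hit=0.6, p_miss=0.2):
    """Bayes update: reweight each cell by measurement likelihood, normalize."""
    posterior = [b * (p_hit if cell == measurement else p_miss)
                 for b, cell in zip(belief, world)]
    total = sum(posterior)
    return [p / total for p in posterior]

def move(belief, step):
    """Shift the belief `step` cells to the right (cyclic world)."""
    n = len(belief)
    return [belief[(i - step) % n] for i in range(n)]

# Illustrative world of colored landmarks and a uniform initial belief.
world = ['green', 'red', 'red', 'green', 'green']
belief = [0.2] * 5

belief = sense(belief, world, 'red')  # robot observes red
belief = move(belief, 1)              # robot moves one cell right
```

After sensing, probability mass concentrates on the red cells; the motion step shifts that mass along with the robot.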
Finally, you will use these scan matching algorithms in the Point Cloud Library (PCL) to localize a simulated car with lidar sensing, using a 3D point cloud map obtained from the CARLA simulator.
Estimated time: 16 hours

- Introduction to Localization: Meet the team that will guide you through the localization lessons, and learn the intuition behind robotic localization!
- C++ Checkpoint: Are you ready to build Kalman Filters with C++? Take these quizzes to find out!
- Markov Localization: Learn the math behind localization, as well as how to implement Markov localization in C++.
- Creating Scan Matching Algorithms: Learn about and build two scan matching algorithms for localization: Iterative Closest Point (ICP) and the Normal Distributions Transform (NDT).
- Utilizing Scan Matching: Learn how to align point clouds with ICP and NDT before leveraging them to localize a self-driving car in a simulated environment!
- Scan Matching Localization: Localize a self-driving car within a point cloud from the CARLA simulator with the localization algorithms you learned in previous lessons - how accurate is your algorithm?

Course 5: Planning
Path planning routes a vehicle from one point to another and handles how to react when emergencies arise. The Mercedes-Benz Vehicle Intelligence team will take you through the three stages of path planning. First, you'll apply model-driven and data-driven approaches to predict how other vehicles on the road will behave. Then you'll construct a finite state machine to decide which of several maneuvers your own vehicle should undertake.
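The finite state machine for behavior planning described above can be sketched roughly as follows. This is a toy Python version under simplifying assumptions: the state names, transition table, and boolean trigger conditions are illustrative, whereas the course's planner scores candidate successor states with cost functions rather than fixed rules:

```python
# Toy behavior-planning FSM: each state lists the maneuvers reachable
# from it; a trivial rule-based chooser picks the successor state.

TRANSITIONS = {
    'keep_lane': ['keep_lane', 'prepare_lane_change_left',
                  'prepare_lane_change_right'],
    'prepare_lane_change_left': ['keep_lane', 'prepare_lane_change_left',
                                 'lane_change_left'],
    'lane_change_left': ['keep_lane'],
    'prepare_lane_change_right': ['keep_lane', 'prepare_lane_change_right',
                                  'lane_change_right'],
    'lane_change_right': ['keep_lane'],
}

def next_state(state, slow_car_ahead, left_lane_free):
    """Pick a legal successor; real planners rank options by cost."""
    options = TRANSITIONS[state]
    if state == 'keep_lane' and slow_car_ahead:
        return 'prepare_lane_change_left'
    if state == 'prepare_lane_change_left' and left_lane_free:
        return 'lane_change_left'
    if state.startswith('lane_change'):
        return 'keep_lane'  # maneuver done, settle back into the lane
    return 'keep_lane' if 'keep_lane' in options else options[0]

state = 'keep_lane'
state = next_state(state, slow_car_ahead=True, left_lane_free=False)
state = next_state(state, slow_car_ahead=True, left_lane_free=True)
state = next_state(state, slow_car_ahead=False, left_lane_free=True)
```

The chosen maneuver is then handed to trajectory generation, which produces the actual path to execute it.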
Finally, you'll generate a safe and comfortable trajectory to execute that maneuver.
Estimated time: 11 hours

- Behavior Planning: Learn how to think about high-level behavior planning in a self-driving car.
- Trajectory Generation: Use C++ and the Eigen linear algebra library to build candidate trajectories for the vehicle to follow.
- Motion Planning
- Motion Planning and Decision Making for Autonomous Vehicles

Course 6: Control
This course will teach you how to control a car once you have a desired trajectory - in other words, how to actuate the throttle and the steering wheel to follow a trajectory described by coordinates. The course covers the most basic but also the most common controller: the Proportional-Integral-Derivative (PID) controller. You will understand the basic principle of feedback control and how it is used in autonomous driving.
Estimated time: 7 hours

- PID Control: Learn about and how to use PID controllers with Sebastian!
- Control and Trajectory Tracking for Autonomous Vehicles

Course 7: Congratulations!
Congratulations on making it through the program!
Estimated time: 5 minutes

- Congratulations!

Course 8 (Optional): Additional Content: Kalman Filters
Find additional content here on Unscented Kalman Filters, for sensor fusion and tracking.
Estimated time: 6 hours

- Unscented Kalman Filters: While Extended Kalman Filters work well for linear motion, real objects rarely move linearly. With Unscented Kalman Filters, you'll be able to accurately track non-linear motion!

Course 9 (Optional): Additional Content: Prediction
Find additional content here on prediction, helping autonomous vehicles predict how other vehicles and objects might move in the future.
Estimated time: 2 hours

- Prediction: Use data from sensor fusion to generate predictions about the likely behavior of moving objects.
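The PID controller introduced in the Control course combines proportional, integral, and derivative terms of the cross-track error into a single steering command. A minimal sketch in plain Python (the gains and the single-step error value below are made up for illustration; in the course they are tuned, for example with Twiddle):

```python
# Minimal PID controller: steering is a weighted sum of the current
# cross-track error (P), its accumulated sum (I), and its rate of
# change (D). The leading minus sign steers against the error.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def control(self, error, dt=1.0):
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return -(self.kp * error + self.ki * self.integral + self.kd * derivative)

# Illustrative gains; tuning them is most of the practical work.
pid = PID(kp=0.2, ki=0.004, kd=3.0)
cte = 1.0  # cross-track error: 1 m left of the reference line

steer = pid.control(cte)  # negative: steer back toward the line
```

The derivative term damps oscillation around the trajectory, and the integral term compensates for systematic bias such as a misaligned steering column.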
Course 10 (Optional): Additional Content: Control
Find additional content here on Vehicle Models and Model Predictive Control, a more advanced form of control.
Estimated time: 2 hours

- Vehicle Models: In this lesson, you'll learn about kinematic and dynamic vehicle models. We'll use these later with Model Predictive Control.
- Model Predictive Control: In this lesson, you'll learn how to frame the control problem as an optimization problem over time horizons. This is Model Predictive Control!

Course 11 (Optional): Additional Content: Point Cloud Library
Estimated time: 2 hours

- [Optional] Intro to PCL: Learn about the Point Cloud Library (PCL). Use a simulation highway environment to explore lidar sensing and generate point clouds.

Course 12 (Optional): Additional Content: Deep Learning
Find additional content on deep learning here, including fully convolutional networks and semantic segmentation for scene understanding, as well as how to improve inference speed.
Estimated time: 7 hours

- Fully Convolutional Networks: In this lesson, you'll learn the motivation for Fully Convolutional Networks and how they are structured.
- Scene Understanding: In this lesson, you'll be introduced to the problem of scene understanding and the role FCNs play.
- Inference Performance: In this lesson, you'll become familiar with various optimizations in an effort to squeeze every last bit of performance out of inference.

Course 13 (Optional): Additional Content: Functional Safety
Find additional content on the field of functional safety for autonomous vehicles and a high-level overview of ISO 26262.
Estimated time: 6 hours

- Introduction to Functional Safety: You will learn to make safer vehicles using risk evaluation and systems engineering.
- Functional Safety: Safety Plan: A functional safety plan is critical to any functional safety project.
  Here you will learn what goes into a safety plan so that you can document your own.
- Functional Safety: Hazard Analysis and Risk Assessment: In a hazard analysis and risk assessment, you will identify vehicular malfunctions and evaluate their risk levels. You can then derive safety goals defining how your vehicle will remain safe.
- Functional Safety: Functional Safety Concept: You will derive functional safety requirements from the safety goals and then add extra functionality to the system diagram. Finally, you will document your work, an essential part of functional safety.
- Functional Safety: Technical Safety Concept: Once you have derived functional safety requirements, you drill down into more detail, refining your requirements into technical safety requirements.
- Functional Safety at the Software and Hardware Levels: The last step in the vehicle safety design phase is to derive hardware and software safety requirements. In this lesson, you will derive these requirements and refine a software system architecture.

Course 14 (Optional): Autonomous Systems Interview
Learn about interviewing for autonomous systems roles and find plenty of practice questions for the role you are looking for!
Estimated time: 4 hours

- Autonomous Systems Interview Practice: Start off with some tips on interviewing for an autonomous systems role, then watch how candidates approach their interview questions. Finish off by practicing some questions of your own!