This lecture covers the fundamentals of deep neural networks and splines, beginning with feedforward deep neural networks and the ReLU activation function. It then turns to continuous piecewise-linear (CPWL) functions in one and several dimensions, discussing their algebraic properties and their implications for deep ReLU neural networks. The lecture also explores the universal approximation properties of CPWL functions and their implementation via deep ReLU networks. It further examines the refinement and constraining of activation functions, the representer theorem for deep neural networks, and the consequences of this theorem. The lecture concludes with a comparison of linear interpolators and a discussion of deep spline networks, including their opportunities, their challenges, and their connection to existing schemes.