The brain's ability to produce adaptive, dexterous movements relies on the seamless integration of sensory feedback into motor commands. Despite significant advances in motor neuroscience, critical gaps persist in our understanding of how proprioceptive and motor representations emerge, interact, and support complex behaviors. This thesis bridges neuroscience and artificial intelligence, leveraging deep learning, musculoskeletal simulation, and neural data analysis to elucidate the computational mechanisms underlying sensorimotor control.

The work begins by reviewing how machine learning has transformed the study of sensorimotor control, surveying methods for analyzing behavior, analyzing neural data, and modeling sensorimotor pathways, and establishing neural networks as computational tools for investigating sensorimotor processing.

Proprioception, the sense of body position and movement, is central to sensorimotor integration. Using a musculoskeletal model of the primate arm, synthetic proprioceptive data were generated to train neural networks on tasks testing hypotheses about proprioceptive computation. Models optimized to predict arm position and velocity aligned closely with neural activity in the cuneate nucleus and somatosensory cortex during goal-directed movements. These findings revealed clear distinctions between active and passive movement conditions and showed that deeper network layers aligned best with neural activity. Together, these results suggest that proprioceptive processing is actively shaped by top-down modulation during voluntary behavior.

Building on this foundation, the thesis turns to the active control of high-dimensional musculoskeletal systems, where sensory feedback and motor commands converge to produce dexterous behavior.
A novel curriculum-learning framework, combined with a new latent-exploration technique, enabled artificial agents to perform object-manipulation tasks with high energy efficiency. This approach won first place in the MyoChallenge 2023 NeurIPS competition, demonstrating the potential of deep reinforcement learning for musculoskeletal control.

The culmination of this work integrates these approaches to model the neural basis of naturalistic grasping through imitation learning. The resulting models aligned strongly with sensorimotor brain activity, underscoring the critical role of muscle-based control in sensorimotor representations, and adaptive temporal processing and sensory-motor convergence emerged as key principles of sensorimotor integration. Furthermore, the models could generate complex motor behaviors from information decoded from a small subset of neurons, highlighting the potential of shared brain-AI control frameworks.

This thesis advances our understanding of the computational principles underlying sensorimotor integration, offering biologically aligned frameworks for studying neural dynamics and providing insight