Abstract: The rapid evolution of ubiquitous sensing, communication, and computation technologies has revolutionized cyber-physical systems (CPS) and the IoT in various fields, such as robotics and autonomy, smart grids, aerospace, and smart cities. Integrating learning-based methodologies into CPS control has unlocked vast opportunities for AI-enabled systems. However, current decision-making frameworks lack a comprehensive understanding of the tridirectional relationship among communication, learning, and control, making it challenging to design effective methodologies for multi-agent systems operating in complex and dynamic environments. To tackle these challenges, in the first part of the talk, we focus on learning and control with information sharing that leverages communication capabilities. We begin by introducing an uncertainty quantification method designed for collaborative perception in connected autonomous vehicles (CAVs). Our findings demonstrate that communication among multiple agents can enhance object detection accuracy and reduce uncertainty. Building upon this, we develop a safe and scalable deep multi-agent reinforcement learning (MARL) framework that leverages shared information among agents to improve system safety and efficiency. We validate the benefits of communication in MARL, particularly in the context of CAVs in challenging mixed-traffic scenarios. To incentivize agents to communicate and coordinate, we design a novel, stable, and efficient reward reallocation scheme for MARL based on the Shapley value. Additionally, we present our theoretical analysis of robust MARL methods under state uncertainties, such as uncertainty quantification in the perception modules or worst-case adversarial state perturbations. In the second part of the talk, we briefly outline our research contributions on data-driven robust optimization for autonomous mobility-on-demand (AMoD) systems and sustainable mobility.
We also highlight our research results on CPS security and provide insights into our ongoing work in learning and control. Through these findings, we aim to advance AI-enabled CPS toward safer, more efficient, and more resilient systems in dynamic environments.