We study the conditions that underlie the asymptotic and exponential convergence of saddle flow dynamics for convex-concave functions to a saddle point. First, we propose an observability-based certificate for the asymptotic convergence of saddle flows. This certificate generalizes conventional convergence conditions, e.g., strict convexity-concavity, and leads to a novel regularization method that is separable and imposes only minimal convexity-concavity requirements for asymptotic convergence. Second, we show that global exponential stability of saddle flows is a direct consequence of strong convexity-concavity, which yields a lower-bound estimate of the convergence rate. This insight explains the convergence properties of some existing algorithms for equality-constrained convex optimization, e.g., the proximal gradient method. It is further exploited to design a novel regularization-based alternative algorithm that achieves exponential convergence at a rate depending only on the strong convexity of the objective. Our results extend to saddle flow dynamics with projections on the vector field and have immediate applications to inequality-constrained convex optimization, in particular distributed linear programming and constrained reinforcement learning.
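For concreteness, the saddle flow dynamics referred to above can be sketched in their standard gradient descent-ascent form; the notation $L(x,y)$ below is illustrative and not fixed by the abstract itself. For a function $L(x,y)$ that is convex in $x$ and concave in $y$,
\[
\dot{x} = -\nabla_x L(x, y), \qquad \dot{y} = \nabla_y L(x, y),
\]
so trajectories descend in the convex variable and ascend in the concave variable, and the equilibria of the flow are precisely the saddle points of $L$.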