The notion of computation as inherently discrete is fading; modern perspectives emphasize smooth, differentiable algorithms, a view central to differential linear logic. Recent work by Daniel Murfet and me builds on this by extending Girard’s encoding of Turing machines in linear logic, using the Sweedler semantics (in which '!' is interpreted as Sweedler’s cofree coalgebra) to explore connections between computational structure and geometry. Specifically, we relate the execution of a Turing machine by a universal Turing machine to the germ associated with that machine, drawing on tools from Singular Learning Theory (SLT), a branch of machine learning that links learning dynamics to singularities in parameter space. This talk synthesizes ideas from differential linear logic, SLT, and program synthesis, with an emphasis on concrete examples. Our main result connects the Taylor series expansion of a Hessian matrix to a Turing machine’s robustness to error, suggesting a deeper geometric perspective on the space of programs. My goal is not to showcase disparate fields, but to make this connection feel natural and foundational.
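To make the link between Hessians and robustness concrete, here is a minimal numerical sketch of the general idea (my own toy illustration, not the construction from the paper): a smoothed NOT gate whose loss Hessian is estimated by finite differences. The gate, the loss, and the parameter point `w_star` are all hypothetical choices for illustration; the point is that flat directions of the Hessian correspond to parameter perturbations the smoothed program tolerates, i.e. robustness to error.

```python
import numpy as np

def loss(w):
    # Hypothetical smooth relaxation of a NOT gate: a sigmoid with
    # parameters w = (a, b) should map input 0 -> 1 and 1 -> 0.
    a, b = w
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    preds = np.array([sig(a * x + b) for x in (0.0, 1.0)])
    targets = np.array([1.0, 0.0])
    return float(np.sum((preds - targets) ** 2))

def hessian(f, w, eps=1e-3):
    # Numerical Hessian via central finite differences:
    # H[i, j] ~ (f(w+ei+ej) - f(w+ei-ej) - f(w-ei+ej) + f(w-ei-ej)) / (4 eps^2)
    n = len(w)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i = np.zeros(n); e_i[i] = eps
            e_j = np.zeros(n); e_j[j] = eps
            H[i, j] = (f(w + e_i + e_j) - f(w + e_i - e_j)
                       - f(w - e_i + e_j) + f(w - e_i - e_j)) / (4 * eps ** 2)
    return H

w_star = np.array([-10.0, 5.0])   # a steep, near-exact NOT gate: loss(w_star) ~ 0
H = hessian(loss, w_star)
eigvals = np.linalg.eigvalsh(H)   # small eigenvalues = flat (robust) directions
```

The second-order Taylor expansion of the loss around `w_star` is governed by `H`, so directions with small eigenvalues are exactly those along which the parameters can be perturbed with little change in behavior.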