In this talk, I will discuss two case studies of how seemingly unrelated areas of mathematics can be used in tandem to understand complementary aspects of biological systems.

First, I will cover how to identify the sources of variability in gene networks. Understanding the effect of noise on gene networks is fundamental to developing accurate models. Comparisons between experimental data and simulations typically account only for individual trajectories, not for lineage structure and the corresponding cell-cell variability and correlations. Here I show that a model that includes both intrinsic and extrinsic noise can capture the variability and correlations seen in experimental data. The model is based on an extension of the Gillespie algorithm that accounts for cell growth, division, lineage dependence, and extrinsic noise. If time allows, I will discuss the graph-theory approach to cell tracking that we used to analyze the experimental data.

Second, I will cover how neural activity can reshape network structure to support neural coding. A central problem in neuroscience is how the brain learns, stores, and decodes information. While bump attractor networks have been proposed to explain memory storage, how such networks emerge is still not well understood. Experiments have shown that the replay of neural activity during sleep is a key factor in memory, suggesting that replay could be the origin of bump attractor networks. Here I present a firing rate model with synaptic plasticity that explains how this replay can reshape network structure into a bump attractor network. If time allows, I will discuss the algebraic-geometry approach that we used to recover place-field and network structure from neural activity.
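
To make the first case study concrete, here is a minimal Python sketch of a Gillespie-type simulation over a dividing lineage: intrinsic noise comes from the stochastic simulation itself, extrinsic noise is stood in for by a cell-specific production rate drawn at each division, and molecules are partitioned binomially between daughters. The one-gene birth-death model, function names, and all parameter values are illustrative assumptions, not the talk's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def ssa_birth_death(x, k, gamma, t_stop):
    """Gillespie SSA for a single gene product: production at rate k,
    degradation at rate gamma * x; runs until t_stop."""
    t = 0.0
    while True:
        total = k + gamma * x
        t += rng.exponential(1.0 / total)
        if t > t_stop:
            return x
        if rng.random() * total < k:
            x += 1                      # production event
        else:
            x -= 1                      # degradation event

def simulate_lineage(n_generations=4, t_div=30.0, k_mean=5.0,
                     gamma=0.1, cv_ext=0.2):
    """Grow a binary lineage tree: each cell runs the SSA for one cell
    cycle, then divides. Molecules are partitioned binomially and each
    daughter draws its own production rate (a simple stand-in for
    extrinsic noise)."""
    cells = [(0, k_mean, 0)]            # (molecule count, rate, generation)
    leaves = []
    while cells:
        x, k, gen = cells.pop()
        x = ssa_birth_death(x, k, gamma, t_div)
        if gen + 1 >= n_generations:
            leaves.append(x)
            continue
        x1 = rng.binomial(x, 0.5)       # binomial partitioning at division
        for xd in (x1, x - x1):
            kd = max(k_mean * (1.0 + cv_ext * rng.standard_normal()), 0.1)
            cells.append((xd, kd, gen + 1))
    return leaves

counts = simulate_lineage()
print(f"{len(counts)} leaf cells, mean = {np.mean(counts):.1f}, "
      f"CV = {np.std(counts) / np.mean(counts):.2f}")
```

Because the tree structure is retained during simulation, the same machinery can be extended to compare sister-cell correlations against lineage-resolved experimental data.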
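
The graph-theory approach to cell tracking is only named in the abstract; one standard formulation, sketched here under that assumption, treats cells in consecutive frames as the two sides of a bipartite graph, with centroid distances as edge costs, and links cells via a minimum-cost matching. The centroid data and distance threshold below are hypothetical.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_frames(prev_centroids, next_centroids, max_dist=20.0):
    """Link cells between consecutive frames by solving the assignment
    problem on the bipartite graph of pairwise centroid distances."""
    cost = np.linalg.norm(
        prev_centroids[:, None, :] - next_centroids[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    # discard implausibly long links (cell left the field, divided, or died)
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= max_dist]

# toy usage with made-up centroids (x, y) in pixels
prev_c = np.array([[10.0, 12.0], [40.0, 45.0], [80.0, 15.0]])
next_c = np.array([[42.0, 47.0], [11.0, 14.0], [95.0, 60.0]])
print(link_frames(prev_c, next_c))      # -> [(0, 1), (1, 0)]
```

Chaining these per-frame matchings yields trajectories, and unmatched cells flag candidate division or death events that define the lineage tree used above.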
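
For the second case study, the sketch below illustrates the general mechanism in a toy ring network: a firing rate model in which repeated sequential drive ("replay") plus Hebbian plasticity carves an initially unstructured weight matrix into the local-excitation profile of a bump attractor. The architecture, rate function, learning rule, and all parameter values are my assumptions for illustration; they are not the talk's model and may need tuning.

```python
import numpy as np

rng = np.random.default_rng(1)
N, dt, tau = 80, 0.1, 1.0               # ring size, time step, time constant
eta, w_inh, w_tot, r_max = 0.01, 4.0, 8.0, 5.0
idx = np.arange(N)

def normalize_rows(W):
    """Homeostatic constraint: keep each neuron's total input weight fixed."""
    np.fill_diagonal(W, 0.0)
    return W * (w_tot / W.sum(axis=1, keepdims=True))

W = normalize_rows(rng.random((N, N)))  # initially unstructured weights

def ring_input(center, width=3.0, amp=2.0):
    """Gaussian bump of external drive at a position on the ring."""
    d = np.minimum(np.abs(idx - center), N - np.abs(idx - center))
    return amp * np.exp(-0.5 * (d / width) ** 2)

def step(r, W, I):
    """Rate dynamics: saturating threshold-linear units, global inhibition."""
    drive = W @ r - w_inh * r.mean() + I
    return r + (dt / tau) * (-r + np.clip(drive, 0.0, r_max))

# learning phase: "replay" sweeps a bump of drive around the ring while
# Hebbian growth plus row normalization reshapes the recurrent weights
r = np.zeros(N)
for epoch in range(15):
    for center in range(N):
        for _ in range(5):
            r = step(r, W, ring_input(center))
        W = normalize_rows(W + eta * np.outer(r, r))

# test phase: brief cue, then no external input; a localized bump that
# persists after the cue is removed is the signature of an attractor
r = np.zeros(N)
for _ in range(50):
    r = step(r, W, ring_input(20))
for _ in range(1000):
    r = step(r, W, np.zeros(N))
print(f"after cue removal: peak rate {r.max():.2f} at neuron {int(r.argmax())}")
```

The key design point is the pairing of Hebbian growth with a normalization that induces competition: co-activity during replay is strongest between nearby ring positions, so the learned weights become local, which together with global inhibition is the standard recipe for bump attractor dynamics.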
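
The algebraic-geometry analysis for recovering place-field structure is only named in the abstract. One known formalism of that kind (the neural ideal machinery for combinatorial neural codes) takes as input the set of binary codewords, i.e., which groups of neurons are observed firing together. Assuming that style of analysis, the sketch below extracts that combinatorial code from an activity matrix; the data and binarization threshold are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical activity matrix: rows = time bins, columns = neurons
rates = rng.random((500, 6))
active = rates > 0.8                    # binarize with an illustrative threshold

# the neural code: the distinct on/off patterns observed across time,
# which is the combinatorial object the algebraic analysis starts from
codewords = {tuple(row.astype(int)) for row in active}
print(f"{len(codewords)} distinct codewords, e.g. {sorted(codewords)[:3]}")
```

From this code, overlaps and containments among the neurons' activity supports constrain how place fields can intersect, which is what lets network and place-field structure be recovered from activity alone.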