I'm Head of Engineering at atdepth, where we're building the ocean modeling, data engineering, and hybrid cloud GPU computing infrastructure needed to monitor and forecast ocean-based human intervention operations such as marine carbon dioxide removal and deep sea mining.
Before joining atdepth I was an Applied Scientist at Afresh working on reducing food waste by using machine learning to help grocery stores optimize their inventory management and ordering decisions.
Before that I received my PhD in Computational Earth, Atmospheric, and Planetary Sciences from MIT where I worked at the intersection of climate science and scientific machine learning. As part of the Climate Modeling Alliance I developed Oceananigans.jl, a fast and flexible next-generation ocean model written in Julia that runs on GPUs, and used it to train machine learning models of geophysical turbulence and simulate all kinds of fluid dynamics.
And before that I received my MS and BS in Physics from the University of Waterloo where I used ultrashort pulse lasers and synchrotrons to make movies of molecular dynamics.
You can scroll up to see my blog and some cool movies. You can also scroll down for descriptions of my past research and a list of publications.
My work has involved developing ocean models for climate modeling. I am the original developer of Oceananigans.jl, a fast, friendly, and flexible Julia package for computational fluid dynamics on CPUs and GPUs (Ramadhan et al., 2020). Oceananigans.jl serves as the ocean component for the next-generation climate model being developed by the Climate Modeling Alliance. I led development of the first release and implemented features to turn it into a fully-fledged ocean model.
Even with today's immense computational resources, climate models cannot resolve every cloud in the atmosphere or eddying swirl in the ocean. However, collectively these small-scale turbulent processes play a key role in setting Earth's climate. The problem is that they are not well-represented in climate models.
My research combines physical principles and machine learning to improve how turbulence is represented in climate models. We take simple yet robust models of these turbulent processes and augment the partial differential equations they solve with neural networks. The augmented models are trained on resolved simulations of the turbulent process generated with Oceananigans.jl, so the neural networks learn the missing physics (Ramadhan et al., 2023).
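The idea can be sketched in miniature: take a simple flux-form PDE solver and add a small neural network term to the flux it computes. Everything below is illustrative, not the actual Oceananigans.jl setup: the 1D diffusion column, stencil size, and network architecture are stand-ins, and the weights are random where in practice they would be trained against flux-resolving simulations.

```python
# Toy sketch of a neural-network-augmented PDE closure (illustrative only;
# the real equations, closures, and training pipeline differ).
import numpy as np

rng = np.random.default_rng(0)

# Tiny MLP standing in for the learned closure: maps a local 3-point stencil
# of the resolved field to a correction of the turbulent flux. Weights would
# be fit to resolved simulations; here they are random.
W1, b1 = rng.normal(0, 0.1, (8, 3)), np.zeros(8)
W2, b2 = rng.normal(0, 0.1, (1, 8)), np.zeros(1)

def nn_flux_correction(stencil):
    h = np.tanh(W1 @ stencil + b1)
    return (W2 @ h + b2)[0]

def step(T, dz=1.0, dt=0.1, kappa=0.5):
    """One explicit step of dT/dt = -d(flux)/dz on a 1D column, where
    flux = base diffusive flux + NN correction."""
    flux = np.zeros(T.size + 1)            # fluxes at cell interfaces
    for i in range(1, T.size):             # boundary fluxes stay zero
        base = -kappa * (T[i] - T[i - 1]) / dz
        stencil = np.array([T[i - 1], T[i], T[min(i + 1, T.size - 1)]])
        flux[i] = base + nn_flux_correction(stencil)
    return T - dt * np.diff(flux) / dz     # conservative (flux-form) update

T = np.linspace(20.0, 10.0, 16)            # initial temperature profile
for _ in range(10):
    T = step(T)
```

Because the network only corrects the flux, the update stays in conservative form: whatever the closure learns, the column still conserves the total tracer up to the boundary fluxes.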
In collaboration with oceanographers at MIT, we have leveraged Oceananigans.jl to tackle research problems requiring the speedup provided by GPUs. For example, we have fit oceanic convection models using Bayesian inference, accurately modeled meltwater from Antarctic ice shelves, and even investigated the circulation of subsurface oceans on icy moons such as Jupiter's Europa and Saturn's Enceladus.
I have done some work bringing a dynamical perspective to how the Southern Ocean's meridional overturning circulation interacts with the ocean surface and sea ice around Antarctica (Ramadhan et al., 2022). This was done by inferring surface stress patterns from decades of observational satellite data. We also investigated trends that may explain patterns and rates of Antarctic land ice loss and sea level rise.
Marine microbial communities lie at the base of the oceanic food web, sustaining all marine animal life. The geographical structure of these microbial communities, and thus oceanic biodiversity, is set by short-range ecological interactions. To investigate how these interactions affect biodiversity, we utilized an agent-based modeling approach in which millions of marine microbes are modeled as particles advected by surface ocean currents derived from satellite observations. Interactions were modeled as an imbalanced probabilistic rock-paper-scissors game, which reproduced observed ecological phenomena.
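The core interaction rule is simple enough to sketch. This toy version omits the advection by ocean currents and the spatial (short-range) structure entirely, and the species labels and win probabilities are made up; it only shows what an "imbalanced probabilistic rock-paper-scissors" encounter looks like.

```python
# Minimal sketch of imbalanced probabilistic rock-paper-scissors between
# microbial "agents" (advection and spatial structure omitted; species
# names and win probabilities are illustrative).
import random

random.seed(1)

# A beats B, B beats C, C beats A -- with unequal win probabilities,
# which is what makes the game "imbalanced".
WIN_PROB = {("A", "B"): 0.9, ("B", "C"): 0.6, ("C", "A"): 0.7}

def interact(x, y):
    """Resolve one pairwise encounter: the loser is replaced by the winner."""
    if x == y:
        return x, y
    pair = (x, y) if (x, y) in WIN_PROB else (y, x)
    winner = pair[0] if random.random() < WIN_PROB[pair] else pair[1]
    return winner, winner

agents = [random.choice("ABC") for _ in range(10_000)]
for _ in range(200_000):                  # random well-mixed encounters
    i, j = random.randrange(len(agents)), random.randrange(len(agents))
    agents[i], agents[j] = interact(agents[i], agents[j])

counts = {s: agents.count(s) for s in "ABC"}
```

In the actual model the encounters are local, between particles that ocean currents have brought close together, which is what lets the game generate geographical structure rather than just global abundances.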
Have you ever seen a molecule bend or participate in a chemical reaction? Probably not directly: single molecules are notoriously hard to observe for any length of time. For my MSc thesis I developed a rigorous computational framework to create movies of individual molecules undergoing chemical reactions using Coulomb explosion imaging (CEI), a technique for studying the ultrafast dynamics of small molecules in the gas phase. While CEI has always promised that atomic structures may be measured, in practice no rigorous retrieval method exists, so the momentum vectors are studied instead. The momentum vectors tell a large part of the story, but they are not the actual structures everyone seeks, so a method of retrieving the structure is highly desirable.
The structure may be recovered by attempting to simulate the CEI experiment backwards in time; however, solving for the molecular geometries constitutes an ill-posed nonconvex optimization problem that is difficult to tackle computationally even for small molecules. I tried several optimization approaches and also collaborated with the Department of Statistics and Actuarial Science on a fully statistical approach using Bayesian inference. The statistical approach seems more promising as it allows for the inclusion of measurement uncertainty, which every previous study has neglected, and it appears to scale well to larger molecules (Ramadhan, 2017).
Molecular movie of proton migration in acetylene imaged in momentum space (by other members of my group).
Two distinct structures found while searching for molecular geometries, showcasing the ill-posed nature of the optimization problem.
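The appeal of the Bayesian route can be illustrated with a toy Metropolis sampler. Here a single "bond length" is inferred from noisy measurements through a made-up forward model; the real problem inverts a full Coulomb explosion simulation for many coordinates, so the forward model, prior, and noise level below are all assumptions for illustration only.

```python
# Toy Metropolis-Hastings sampler: infer a scalar "bond length" r from
# noisy measurements of an (invented) forward model f(r). Measurement
# uncertainty enters directly through the Gaussian likelihood.
import math
import random

random.seed(42)

def forward(r):
    return 1.0 / r            # stand-in for the simulated final momentum

true_r, sigma = 1.2, 0.05     # "true" geometry and measurement noise
data = [forward(true_r) + random.gauss(0, sigma) for _ in range(50)]

def log_posterior(r):
    if not 0.5 < r < 3.0:     # flat prior on a physically plausible range
        return -math.inf
    return -sum((d - forward(r)) ** 2 for d in data) / (2 * sigma**2)

samples, r = [], 1.0
for _ in range(20_000):
    proposal = r + random.gauss(0, 0.05)
    if math.log(random.random()) < log_posterior(proposal) - log_posterior(r):
        r = proposal          # accept the proposed geometry
    samples.append(r)

posterior_mean = sum(samples[5000:]) / len(samples[5000:])  # after burn-in
```

The payoff over point-estimate optimization is that the posterior samples quantify how well the data constrain the geometry, and a multimodal posterior would directly expose the kind of ambiguity shown in the figure above.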
The geometries and dynamics of small gas molecules may be studied by Coulomb explosion imaging (CEI), providing a means of directly probing the atomic structure and dynamics of small molecules in the gas phase, a regime where few other methods are viable. CEI is usually performed using ultrashort laser pulses (~10⁻¹⁵ s) with the goal of "blowing up" the molecule as fast as possible to minimize the disturbance to the molecule and ensure accurate imaging of the atomic structure.
As a proof of principle, we were able to use single X-ray photons from the Canadian Light Source synchrotron to study the dynamics of dissociative ionization in the OCS molecule using CEI. The use of single X-ray photons led to faster ionization and "blow up" compared to short laser pulses, showing promise for greater temporal precision in CEI experiments. It also allowed us to identify a surprisingly rich set of ultrafast molecular dynamics for the first time (Ramadhan et al., 2016).
The traditional synthesis method for polyynes, an allotrope of carbon with chemical structure (−C≡C−)ₙ, is a challenging and dangerous multistep procedure that provides little control over their end-caps. Yet polyynes are of great interest in interstellar chemistry and especially in nanotechnology as potential elements for molecular machines and carbon cluster precursors. Their end-caps may endow them with extra functionality, so a safe and controllable synthesis procedure is highly desirable.
By irradiating different liquid solvents with short laser pulses, we are able to easily synthesize long-chain polyynes and demonstrate end-cap control for methyl caps. Using high-performance liquid chromatography (HPLC), we have confirmed the synthesis of polyynes up to C18H2 and methyl-capped polyynes up to HC14CH3. This opens the possibility for controlling the synthesis of other polyyne molecules and their efficient mass production (Ramadhan et al., 2016).
Graphene has attracted an enormous amount of attention over the past decade owing to its peculiar properties and vast applicability. However, large-scale single layers of pristine graphene are difficult to obtain and thus much research has focused on graphene oxide gels which may be used to produce high-quality graphene. These gels have their own applications too, such as drug delivery and sensor engineering. To satisfy the large demand for graphene oxide gels, an efficient production method is highly desirable.
By irradiating aqueous graphene oxide with femtosecond laser pulses, we were able to convert the solution into a gel with physical and chemical properties comparable with those of a monolayer graphene sheet. We were also able to control the properties of the synthesized gel by simply tuning the laser pulse's properties, allowing for the production of different gels suitable for building nano-sized graphene photodetectors and transistors (Ibrahim et al., 2014).
Project Lovelace is an open online platform for learning about science and developing computational thinking through programming and problem solving. It is a collection of computational science problems and tutorials taken from all branches of the natural, social, and mathematical sciences. Each problem teaches a scientific application (e.g. locating earthquakes, DNA splicing) and requires the use of scientific insight and some programming skills to solve. Tutorials teach computational methods that students and researchers may find useful (e.g. solving differential equations, Bayesian inference) and may be required knowledge for some problems.
While Project Lovelace is still in development, we are deploying the website and the problems one by one throughout the winter in preparation for a pilot run in April 2017. In addition to the website's recreational aspect, we ultimately hope that the problems and tutorials may be used in undergraduate courses, especially to complement courses that lack a computational portion, as computational methods have become ubiquitous in almost every field of science.
The name commemorates Ada Lovelace, who proposed the first algorithm intended to be run on a computer in the 1840s.