Reduced Augmentation Implicit Low-Rank (RAIL) Integrators For Advection-Diffusion And Fokker–Planck Models

Document Type

Article

Publication Date

2025

Published In

SIAM Journal On Scientific Computing

Abstract

This paper introduces a novel computational approach, termed the reduced augmentation implicit low-rank (RAIL) method, by investigating two predominant research directions in low-rank solutions to time-dependent partial differential equations (PDEs): dynamical low-rank (DLR) and step-and-truncation (SAT) tensor methods. The RAIL method is designed to enhance the efficiency of traditional full-rank implicit solvers while maintaining accuracy and stability. We consider spectral methods for spatial discretization, and diagonally implicit Runge–Kutta (RK) and implicit-explicit RK methods for time discretization. The efficiency gain is achieved by exploiting low-rank structures within solutions at each RK stage. In particular, we develop a reduced augmentation procedure to predict the basis functions used to construct projection subspaces. This procedure balances algorithmic efficiency and accuracy by incorporating as many bases as possible from previous RK stages, and by optimizing the basis representation through a singular value decomposition (SVD) truncation. As such, one can form implicit schemes for updating basis functions in a dimension-by-dimension manner, similar in spirit to the K-L step in the DLR framework. We propose applying a postprocessing step to maintain global mass conservation. We validate the RAIL method through numerical simulations of advection-diffusion problems and a Fokker–Planck model. Our approach generalizes and bridges the DLR and SAT approaches, offering a comprehensive framework for efficiently and accurately solving time-dependent PDEs in the low-rank format with implicit treatment.
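The reduced augmentation idea described in the abstract (pooling basis vectors from previous RK stages, then compressing the pooled basis via an SVD truncation) can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; the function name `reduced_augmentation`, the tolerance-based rank cutoff, and the random stage bases are all illustrative assumptions:

```python
import numpy as np

def reduced_augmentation(bases, tol=1e-12):
    """Hypothetical sketch: pool column-orthonormal bases from
    previous RK stages and compress them with a truncated SVD
    to predict a projection subspace."""
    # Stack candidate basis vectors from all previous stages side by side.
    augmented = np.hstack(bases)             # shape (n, sum of stage ranks)
    # Compress: keep the left singular vectors whose singular values
    # exceed a relative truncation tolerance.
    U, s, _ = np.linalg.svd(augmented, full_matrices=False)
    r = max(1, int(np.sum(s > tol * s[0])))
    return U[:, :r]                          # predicted orthonormal basis

# Usage: pool two stage bases of ranks 3 and 2 on a grid of 64 points.
rng = np.random.default_rng(0)
U1, _ = np.linalg.qr(rng.standard_normal((64, 3)))
U2, _ = np.linalg.qr(rng.standard_normal((64, 2)))
U_pred = reduced_augmentation([U1, U2], tol=1e-10)
```

The returned basis then serves as the fixed projection subspace in which the implicit stage equations are solved dimension by dimension, analogous to the K-L step of the DLR framework.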

Keywords

dynamical low-rank, step-and-truncate, implicit-explicit method, advection-diffusion equation, Fokker–Planck
