Altan Haan


About Me

Photo of Altan Haan

I'm a software engineer on the MLSys team at OctoML, a startup automating end-to-end optimization of deep learning (DL) models using the Apache TVM compiler stack. Currently, I'm working on optimizing the DL training loop. Previously, I was a research assistant in the UW SAMPL research group, working on programming languages (PL) and machine learning (ML) systems research. I received my B.S. in Computer Science from the Allen School at UW in June 2020, advised by Zachary Tatlock.

You can find me on GitHub, Twitter, and LinkedIn.

Recent News

Apr. 2021: accepted to the CS PhD program at UC Berkeley! (go bears)
Feb. 2021: upgraded to Software Engineer at OctoML.
Jan. 2021: "Dynamic Tensor Rematerialization" accepted to ICLR 2021 with a spotlight presentation!

Research Interests

Broadly speaking, my research lies in (and around) the intersection of programming languages, machine learning, and systems. I enjoy synthesizing new techniques using ideas from each area, with an emphasis on improving large systems. I am particularly interested in building the next generation of compiler stacks, where data-driven methods play a central role in the optimization and code generation process. I believe that properly designed symbolic methods (e.g., rewriting systems, type-guided synthesis, CEGIS) can benefit greatly from incorporating cost models and heuristics trained from the large corpus of software that exists today.

Publications and Preprints

[1] Dynamic Tensor Rematerialization
Marisa Kirisame, Steven Lyubomirsky, Altan Haan, Jennifer Brennan, Mike He, Jared Roesch, Tianqi Chen, Zachary Tatlock.
ICLR 2021 spotlight.

[2] Simulating Dynamic Tensor Rematerialization*
Altan Haan, supervised by Zachary Tatlock.
Honors thesis, 2020.

Research Projects

DTR. Dynamic Tensor Rematerialization (DTR) is a dynamic runtime technique for reducing peak memory requirements when training deep learning models. DTR is a "checkpointing" method that frees and recomputes intermediate results as needed, trading extra compute for reduced memory. Unlike existing checkpointing methods, which require offline planning, DTR is an online algorithm that operates entirely at runtime, enabling checkpointing for arbitrarily dynamic models. Notably, DTR produces near-optimal checkpointing schedules when compared against Checkmate, a state-of-the-art static technique based on integer linear programming (ILP). Check out our paper for more details.
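To make the idea concrete, below is a minimal, illustrative sketch of an online rematerialization runtime in Python. It is not the DTR implementation: the tensor bookkeeping, memory accounting, and eviction heuristic (recompute cost per unit memory, down-weighted by staleness) are simplified stand-ins, and the names (Runtime, Tensor, compute, get) are invented for this example.

import time

class Tensor:
    """A tensor tracked by the runtime: materialized (data set) or evicted (data is None)."""
    def __init__(self, op, parents, size):
        self.op = op            # function that recomputes this tensor from its parents' data
        self.parents = parents  # input Tensors needed for recomputation
        self.size = size        # memory footprint, in abstract units
        self.data = None        # payload; None when evicted
        self.cost = 0.0         # measured compute cost of op
        self.last_use = 0.0     # timestamp of last access, for the staleness term

class Runtime:
    def __init__(self, budget):
        self.budget = budget    # memory budget, in the same abstract units
        self.tensors = []       # all tensors produced so far (the evictable pool)

    def _memory_in_use(self):
        return sum(t.size for t in self.tensors if t.data is not None)

    def compute(self, op, parents, size):
        """Run an operator, evicting other tensors if the memory budget would be exceeded."""
        t = Tensor(op, parents, size)
        self._materialize(t)
        self.tensors.append(t)
        return t

    def get(self, t):
        """Access a tensor's data, rematerializing it (recursively) if it was evicted."""
        if t.data is None:
            self._materialize(t)
        t.last_use = time.monotonic()
        return t.data

    def _materialize(self, t):
        args = [self.get(p) for p in t.parents]   # may recursively rematerialize parents
        protected = {id(t)} | {id(p) for p in t.parents}
        while self._memory_in_use() + t.size > self.budget:
            self._evict_one(protected)
        start = time.monotonic()
        t.data = t.op(*args)
        t.cost = time.monotonic() - start
        t.last_use = time.monotonic()

    def _evict_one(self, protected):
        # Greedy heuristic in the spirit of DTR: evict the materialized tensor with the
        # lowest recompute cost per unit memory, discounted the longer it has sat unused.
        candidates = [t for t in self.tensors
                      if t.data is not None and id(t) not in protected]
        if not candidates:
            raise MemoryError("memory budget cannot be satisfied")
        now = time.monotonic()
        victim = min(candidates,
                     key=lambda t: t.cost / (t.size * (now - t.last_use + 1e-9)))
        victim.data = None      # free the memory; the tensor can be recomputed on demand

# Tiny usage example: a chain of three ops under a tight budget forces an eviction,
# and re-accessing the evicted tensor triggers recomputation behind the scenes.
rt = Runtime(budget=10)
a = rt.compute(lambda: [1.0] * 4, [], size=4)
b = rt.compute(lambda x: [v + 1.0 for v in x], [a], size=4)
c = rt.compute(lambda x: [v * 2.0 for v in x], [b], size=4)
print(rt.get(a))    # a was evicted to make room for c; this access recomputes it

The real system interposes on a DL framework's tensor operations and uses much more careful accounting, but the greedy evict-and-recompute loop above is the essence of the online approach.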

TVM/Relay. Relay is the high-level differentiable IR used internally by the TVM DL compiler. With an ML-like syntax, Relay lets users write differentiable programs with control flow, generalizing static computation graphs. I have contributed several gradient implementations for tensor operators in Relay, which are required for automatic differentiation (AD) and backpropagation. I've also helped maintain and improve Relay's AD code as we push toward a fully functioning Relay training loop. Leveraging TVM's end-to-end optimizations for training should bring significant performance improvements over current, more ad hoc approaches.
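To give a flavor of what a per-operator gradient rule looks like, here is a toy, framework-agnostic reverse-mode AD sketch in plain Python. It deliberately does not use Relay's actual API; the registry, Node class, and rule signatures are invented for illustration. The structure is the relevant part: each operator registers a rule mapping the incoming output gradient to gradients for its inputs, and a backward pass chains the rules in reverse topological order.

import math

GRADIENTS = {}   # operator name -> gradient rule

def register_gradient(op_name):
    """Register a rule that maps (input nodes, output gradient) to per-input gradients."""
    def wrap(rule):
        GRADIENTS[op_name] = rule
        return rule
    return wrap

class Node:
    """A node in a tiny dynamic computation graph over scalars."""
    def __init__(self, value, op=None, inputs=()):
        self.value, self.op, self.inputs = value, op, inputs
        self.grad = 0.0

def apply(op_name, fn, *inputs):
    """Evaluate an operator eagerly while recording the graph structure."""
    return Node(fn(*[x.value for x in inputs]), op_name, inputs)

@register_gradient("mul")
def mul_grad(inputs, out_grad):
    x, y = inputs
    return [out_grad * y.value, out_grad * x.value]   # d(x*y)/dx = y, d(x*y)/dy = x

@register_gradient("exp")
def exp_grad(inputs, out_grad):
    (x,) = inputs
    return [out_grad * math.exp(x.value)]             # d(exp(x))/dx = exp(x)

def _topo_order(output):
    """Nodes reachable from `output`, with every node listed before its inputs."""
    order, seen = [], set()
    def visit(node):
        if id(node) in seen:
            return
        seen.add(id(node))
        for inp in node.inputs:
            visit(inp)
        order.append(node)
    visit(output)
    return reversed(order)

def backward(output):
    """Reverse-mode AD: apply each operator's gradient rule in reverse topological order."""
    output.grad = 1.0
    for node in _topo_order(output):
        if node.op is None:
            continue
        for inp, g in zip(node.inputs, GRADIENTS[node.op](node.inputs, node.grad)):
            inp.grad += g

# d/dx [x * exp(x)] at x = 2 is exp(2) * (1 + 2).
x = Node(2.0)
y = apply("mul", lambda a, b: a * b, x, apply("exp", math.exp, x))
backward(y)
print(x.grad, math.exp(2.0) * 3.0)   # both print roughly 22.17

In Relay the analogous rules operate on tensor expressions rather than scalars, and the AD pass produces a new Relay program that can then be optimized and compiled like any other.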

Program synthesis. Forthcoming.

Miscellaneous Links


This page was generated on Sun Jun 27 14:39:49 2021.