Bradley Brown

Hello! I am a CS PhD student at the University of Oxford, supervised by Professor Ronald Clark and supported by the Clarendon Fund. I obtained my undergraduate degree at the University of Waterloo studying Software Engineering with a joint major in Combinatorics and Optimization. Previously, I was a Research Scientist intern at NVIDIA’s Toronto AI lab, Layer 6 AI, and Akasha Imaging.

[ Email  /  Github  /  Twitter  /  Google Scholar  /  LinkedIn ]




Hydragen: High-Throughput LLM Inference with Shared Prefixes
Jordan Juravsky*, Bradley Brown*, Ryan Ehrlich*, Daniel Y. Fu, Christopher Ré, Azalia Mirhoseini
[ Paper ]

Introducing an exact, simple (no custom CUDA) implementation of attention that can accelerate LLM throughput by over 30x for problems containing shared prefixes and large batch sizes.
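The key observation (sketched here in minimal NumPy as an illustration, not the paper's implementation) is that softmax attention over a batch-shared prefix and a per-sequence suffix can be computed on each chunk separately and then combined exactly, using each chunk's running max and softmax denominator:

```python
import numpy as np

def chunk_attn(q, k, v):
    """Attention restricted to one chunk of keys/values.
    Returns the un-normalized output, the softmax denominator,
    and the per-query score max, so chunks can be merged exactly."""
    scores = q @ k.T / np.sqrt(q.shape[-1])      # (n_q, n_k)
    m = scores.max(axis=-1, keepdims=True)       # per-query max, for stability
    p = np.exp(scores - m)
    return p @ v, p.sum(axis=-1, keepdims=True), m

def combine(n1, d1, m1, n2, d2, m2):
    """Exactly merge two chunk results (e.g. shared prefix + suffix)."""
    m = np.maximum(m1, m2)
    a1, a2 = np.exp(m1 - m), np.exp(m2 - m)      # rescale each chunk to a common max
    return (a1 * n1 + a2 * n2) / (a1 * d1 + a2 * d2)

def full_attn(q, k, v):
    """Reference: ordinary softmax attention over all keys at once."""
    s = q @ k.T / np.sqrt(q.shape[-1])
    p = np.exp(s - s.max(axis=-1, keepdims=True))
    return (p / p.sum(axis=-1, keepdims=True)) @ v
```

Because the merge is exact, the prefix chunk can be computed once and reused across every sequence in the batch, which is where the throughput gain comes from; no kernel changes are needed.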

NeuralField-LDM: Scene Generation with Hierarchical Latent Diffusion Models
Seung Wook Kim*, Bradley Brown*, Kangxue Yin, Karsten Kreis, Katja Schwarz, Daiqing Li, Robin Rombach, Antonio Torralba, Sanja Fidler
IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2023.
[ Project Page / Paper ]

Building a generative model of open-world 3D scenes trained on real-world in-the-wild data.

Verifying the Union of Manifolds Hypothesis for Image Data
Bradley C.A. Brown, Anthony L. Caterini, Brendan Leigh Ross, Jesse C. Cresswell, Gabriel Loaiza-Ganem
International Conference on Learning Representations (ICLR) 2023.
[ Paper / Video / Code ]

Extending the manifold hypothesis to support natural image data lying on a union of manifolds with varying intrinsic dimension. Showing improved performance on generative modelling and image classification tasks by designing models with an inductive bias for this structure.

Language Models Inversely Scale on Piecewise Function Evaluation with Biased Examples
Jordan Juravsky*, Bradley Brown*, Atif Mahmud*, Ryan Ehrlich*, Wais Shahbaz*
Tiny Paper at the International Conference on Learning Representations (ICLR) 2023.

Demonstrating that large language models (LLMs) can be misled by providing them with factually correct, but unrepresentative/biased examples, in the context of integer-to-integer piecewise functions.

Relating Regularization and Generalization through the Intrinsic Dimension of Activations
Bradley C.A. Brown, Jordan Juravsky, Anthony L. Caterini, Gabriel Loaiza-Ganem
NeurIPS 2022 workshops: OPT 2022 and HITY 2022.
[ Paper / Code ]

Investigating how the intrinsic dimension of activations in deep neural networks is affected by regularization, correlates with improved validation performance, and is coupled with the onset of sudden generalization (grokking).

Session-based Recommendation with Transformers
Yichao Lu, Zhaolin Gao, Zhaoyue Cheng, Jianing Sun, Bradley Brown, Guangwei Yu, Anson Wong, Felipe Pérez, Maksims Volkovs
Proceedings of the Recommender Systems Challenge 2022.

Leveraging transformers and self-supervised learning techniques to place 2nd out of 300+ teams in the RecSys session-based recommendation challenge.

Towards Rotation Invariance in Object Detection
Agastya Kalra, Guy Stoppi, Bradley Brown, Rishav Agarwal, Achuta Kadambi
International Conference on Computer Vision (ICCV) 2021.
[ Paper / Video / Code ]

Proposing a mathematically sound rotation augmentation scheme and loss modification for object detection models that leads to better rotation invariance/equivariance.

Template from this website, adapted from this website.