Distributionally Robust Classification on a Data Budget
Real-world uses of deep learning require predictable model behavior under distribution shifts. Models such as CLIP show emergent natural distributional robustness comparable to that of humans, but may require hundreds of millions of training samples. Can we train robust learners in a domain where data is limited?
I recently finished a paper that analyzes this question in considerable detail. Please feel free to check out the paper's official site for our methods and results!
I want to thank my close collaborators at NYU, Ameya Joshi and Minh Pham, as well as my advisor, Chinmay Hegde, for helping me with this work.