AI Safety, Robustness and Explainability
With a dual background in the humanities and computer science, I bring a distinctive perspective to my work as a researcher, developer, and communicator. My work has been presented at AAAI and ICML; for more details, see my recent publication history and LinkedIn profile.
- Computer science Ph.D. researcher at New York University. My research centers on AI safety, explainability, and robustness.
- I maintain VL Hub, a vision-language pretraining framework that integrates CLIP pretraining, LiT-Tuning, CoCa, conventional timm vision models, and SimCLR contrastive models into a single test-train-eval framework, making it easier to compare models across architectures and training objectives.
- I created the ArcheType method, which uses large language models for data cleaning and integration.