Research Areas
We're experimenting with solutions in these areas, and we are always looking for research collaborators and partners to help push these frontiers.
Model Distillation
Distilling the capabilities of larger models into smaller, cheaper, domain-specific ones.
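As a rough illustration of the idea (a minimal sketch, not our pipeline): classic knowledge distillation trains a small student network to match a larger teacher's softened output distribution alongside the ground-truth labels. The temperature, weighting, and the `teacher`/`student` models referenced in the comments are placeholders.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend a soft KL term (match the teacher's softened distribution)
    with a hard cross-entropy term on the ground-truth labels."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradient magnitudes stay comparable across temperatures
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Hypothetical training step: `teacher` and `student` are any classifier modules.
# with torch.no_grad():
#     teacher_logits = teacher(batch)
# loss = distillation_loss(student(batch), teacher_logits, labels)
```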
Alternatives to Gradient Descent
Local losses, feedback alignment, the forward-forward algorithm, and other unconventional approaches to training deep networks.
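To give a flavour of what a local loss looks like in practice, here is a sketch of a single layer trained in the style of the forward-forward algorithm: each layer optimises its own "goodness" objective on positive versus negative examples, with no gradients flowing back from later layers. The threshold, learning rate, and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

class FFLayer(torch.nn.Module):
    """One layer with a local, forward-forward-style objective: push goodness
    (mean squared activation) above a threshold for positive samples and
    below it for negative samples."""

    def __init__(self, d_in, d_out, threshold=2.0, lr=1e-3):
        super().__init__()
        self.linear = torch.nn.Linear(d_in, d_out)
        self.threshold = threshold
        self.opt = torch.optim.Adam(self.linear.parameters(), lr=lr)

    def forward(self, x):
        # Normalise the input so the previous layer's goodness cannot be reused.
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return F.relu(self.linear(x))

    def local_update(self, x_pos, x_neg):
        g_pos = self.forward(x_pos).pow(2).mean(dim=1)
        g_neg = self.forward(x_neg).pow(2).mean(dim=1)
        # Logistic loss: positives above the threshold, negatives below it.
        loss = torch.cat([
            F.softplus(self.threshold - g_pos),
            F.softplus(g_neg - self.threshold),
        ]).mean()
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        # Detach before passing activations on, so learning stays strictly local.
        return self.forward(x_pos).detach(), self.forward(x_neg).detach()
```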
Mechanistic Interpretability
Understanding the internal mechanisms of deep learning models, and formally verifying their behavior.
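One small example of the kind of tooling this involves (a sketch under assumed names, not a description of our stack): a linear probe fit on a model's intermediate activations tests whether a concept is linearly decodable at a given layer. The layer being hooked and the training hyperparameters below are placeholders.

```python
import torch

def collect_activations(model, layer, inputs):
    """Capture a layer's output with a forward hook (the layer handle is illustrative)."""
    acts = []
    handle = layer.register_forward_hook(lambda m, i, o: acts.append(o.detach()))
    with torch.no_grad():
        model(inputs)
    handle.remove()
    return torch.cat(acts)

def train_linear_probe(activations, concept_labels, epochs=100, lr=1e-2):
    """Fit a logistic-regression probe; high held-out accuracy suggests the
    concept is linearly represented in this layer's activations."""
    probe = torch.nn.Linear(activations.shape[-1], 1)
    opt = torch.optim.Adam(probe.parameters(), lr=lr)
    loss_fn = torch.nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(probe(activations).squeeze(-1), concept_labels.float())
        loss.backward()
        opt.step()
    return probe
```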
Get in touch
Request early access to AI Shephard, or get in touch with our team to learn more about partnerships and research collaborations.