Junior ML Researcher, Compression specialization
Paycheck: 80–120k (+ bonuses based on the results of completed projects)

Format: full-time (40 hours/week), remote work, employment at MIPT

At the MIL Compression Group, we work across the breadth of AI. We use SotA quantization, distillation, and pruning methods to reduce the memory footprint of neural networks while maintaining their quality. We help our customers reduce infrastructure costs, speed up state-of-the-art models, and adapt them to low-power devices.
We are growing and looking for motivated junior researchers with bright eyes :) If you are interested in developing SotA model-optimization methods so that BERT and GPT-3 can run on every kettle, come join us!
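To give a flavor of the kind of work involved, here is a minimal sketch of one of the compression techniques mentioned above: post-training dynamic quantization in PyTorch. The toy model and layer sizes are illustrative assumptions, not the group's actual pipeline.

```python
# A minimal sketch of post-training dynamic quantization in PyTorch.
# The toy model below is a stand-in, not the group's real workload.
import os

import torch
import torch.nn as nn

# Toy float32 model standing in for a real network.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

# Quantize the weights of all Linear layers to int8;
# activations are quantized dynamically at inference time.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def size_mb(m: nn.Module) -> float:
    """Serialized size of a model's state_dict in megabytes."""
    torch.save(m.state_dict(), "tmp.pt")
    size = os.path.getsize("tmp.pt") / 1e6
    os.remove("tmp.pt")
    return size

print(f"fp32: {size_mb(model):.2f} MB, int8: {size_mb(quantized):.2f} MB")
```

Dynamic quantization like this typically shrinks Linear-layer storage by roughly 4x with little quality loss; static quantization, pruning, and distillation trade more tuning effort for further savings.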

What you will do:
  • Search for and analyze papers on the project topic;
  • Implement model-compression methods from papers;
  • Run experiments with the implemented methods;
  • Describe your findings and document the research process.

What we expect from you:
  • Experience with popular neural network architectures (ResNet, UNet, ViT, BERT, etc.);
  • Interest in neural network compression and acceleration methods (quantization, pruning, knowledge distillation, etc.);
  • A scientific mindset: you can generate hypotheses and confirm them experimentally;
  • You write working code and know how to cover it with tests;
  • Proficiency with PyTorch, Git, and your favorite IDE.

It will be an advantage if you have:
  • Scientific publications;
  • Experience with competitions and hackathons.

Next steps:
  • Submit your application (include your portfolio and open-source projects in your resume);
  • We will review your resume and portfolio and prepare a personalized task;
  • We will review your solution and conduct an interview with the team.
Name your CV file CV_Name_Surname.pdf.