

Summer 2024 Research Co-Op/Intern: PhD Tech

🚀 Summer Internship


🤑 $91.5K - $137.3K

AI generated summary

  • The candidate must have experience in Python development and ML frameworks, a solid understanding of transformer architecture, research experience and publications in ML or related fields, and expertise in LLM/LMM pretraining and distributed training frameworks.
  • The candidate will collaborate with experts to enhance language and multimodality models, design and implement innovative algorithms and architectures for large-scale distributed training, and contribute to research papers and code releases.


Research & Development, Data · Seattle


  • As a Co-op student, you can make an immediate contribution to AMD's next generation of technology innovations. We have a dynamic, high-energy work environment, filled with expert employees, and unique opportunities for developing your career. You will have the opportunity to connect with AMD leaders, receive one-on-one mentorship, attend amazing networking events and much more. With AMD, you can get hands-on experience that will give you a competitive edge in the workforce.


  • Experience developing and debugging in Python
  • Experience in ML frameworks such as PyTorch, JAX, TensorFlow
  • Experience with distributed training
  • Solid understanding of transformer architecture
  • Experience with LLM/LMM fine-tuning, distillation, and/or RLHF
  • Research experience and publications in ML, NLP, Vision and Language, or related fields
  • Expertise in LLM/LMM pretraining
  • Expertise in distributed training frameworks
  • Strong publication record
  • We are looking for students who are passionate about generative AI and enjoy designing and implementing novel research ideas to improve the quality of LLMs/LMMs and other generative models, accelerate training and inference speed, and influence future hardware and software directions. The ideal candidate will have hands-on experience training LLMs/LMMs and be familiar with hyper-parameter tuning, data processing, and the latest training techniques for LLMs/LMMs. A PhD or MS degree in ML, NLP, Vision, or related fields is preferred.

Education requirements

Currently Studying

Areas of responsibility

Research & Development


  • You will work with world-renowned scientists and engineers to advance the state of the art in large language models (LLMs), large multimodality models (LMMs), and other generative models. You will develop novel large-scale distributed training algorithms and model architectures to improve model quality and accelerate the training and inference speed of LLMs/LMMs. You are expected to work on algorithm and model design, implementation, and evaluation. You will write paper submissions based on the internship work and release the code to the public.


Work type

Full time

Work mode


Start date

May 19, 2024




$91,520 - $137,280 USD


  • Healthcare coverage, dental, and vision
  • Paid holidays
  • Relocation stipend
  • Education assistance for required Co-op/Intern course