FAQs
What is the duration of the internship?
The internship lasts for 13 weeks, with two possible sessions: May 16th to August 9th, 2025, or June 16th to September 12th, 2025.
Is this internship paid?
Yes, this internship is a paid position.
Who is eligible to apply for this internship?
Rising undergraduate seniors and graduate students who are interested in technology, data science, and healthcare are eligible to apply. We particularly encourage applications from students belonging to groups that have been historically underrepresented in this field.
What are the main responsibilities of the Data Scientist Evaluation Intern?
The intern will be responsible for designing reusable evaluation frameworks, developing AI/ML evaluation strategies, researching metrics and compliance guidelines, and communicating technical methods and results effectively.
What are the minimum qualifications required for applying?
Candidates should have an advanced degree in a quantitative discipline or equivalent practical experience, a deep understanding of AI/ML evaluation metrics, experience developing evaluation frameworks, and the ability to research technical publications.
Are there any preferred qualifications for this internship?
Preferred qualifications include the ability to work effectively on cross-functional teams, experience in regulated environments, and familiarity with software engineering practices.
What is the hourly pay range for this internship?
The US hourly range for this internship position is $45.67 - $53.85, plus benefits.
What kind of projects will the intern work on?
The intern will work on projects related to evaluating AI models, including models used in products such as Verily Numetric Retinal Service and Digital Biomarkers, Verily Lightpath, and Verily Viewpoint.
Will the intern have the opportunity to work with cross-functional teams?
Yes, the intern will work closely with cross-functional partners to design and create evaluation frameworks.
What kind of outcomes are expected from the evaluation frameworks developed by the intern?
The evaluation frameworks should assess AI model performance, safety, and adherence to responsible AI principles.