FAQs
What is the duration of the internship for the Research Scientist Intern position?
The internships are twelve (12) to twenty-four (24) weeks long, with various start dates throughout the year.
What educational qualifications are required for this internship?
Candidates must currently have, or be in the process of obtaining, a PhD degree in Computer Science, Artificial Intelligence, Signal Processing, Machine Learning, Computer Vision, Electrical Engineering, Applied Mathematics, Acoustics Engineering, or a related STEM field.
What programming languages and tools should candidates be proficient in?
Candidates should have 3+ years of experience with Python, MATLAB, or similar languages, as well as experience with machine learning software platforms such as PyTorch and TensorFlow.
Are there specific experience requirements for this position?
Yes, candidates should have 2+ years of experience building computational models in audio, audio-visual, or speech application domains using machine learning or signal processing.
What types of collaborations will the intern engage in?
The intern will collaborate with researchers and engineers across diverse disciplines, including audio and acoustic engineering.
Is work authorization required for this internship?
Yes, candidates must obtain work authorization in the country of employment at the time of hire and maintain ongoing work authorization during employment.
What are the preferred qualifications for candidates applying to this position?
Preferred qualifications include demonstrated software engineering experience, a strong background in statistical modeling or signal processing, published papers in top conferences, experience in team environments, and intent to return to a degree program after the internship.
Does Meta provide benefits for interns?
Yes, in addition to base compensation, Meta offers various benefits to interns.
What kind of research topics will the intern work on?
Interns will work on projects in multimodal representation learning, audio-visual scene analysis, egocentric audio-visual learning, multi-sensory speech enhancement, and acoustic activity localization.
Where can candidates learn more about the work done by the audio team at Meta Reality Labs?
Candidates can visit https://tech.fb.com/inside-facebook-reality-labs-research-the-future-of-audio/ for more information on the audio team's work.