AI Security Research Scientist Intern - 2024 PhD - San Jose

TikTok
1mo ago

🚀 Off-cycle Internship

San Jose

AI generated summary

  • You need a PhD in AI/related fields, AI security research experience, programming skills, problem-solving abilities, and experience with AI/ML frameworks for this AI Security Research Scientist Intern position.
  • You will conduct AI security research, develop mitigation strategies, collaborate with teams, and stay informed on the latest trends in the field.

Software Engineering • San Jose


  • The Security Engineering team is a research and development (R&D) team within the broader Security and Risk Control organization. Its core responsibility is to build, implement, and maintain secure infrastructures, platforms, and technologies, and it also supports cross-functional teams across the organization. The team's ultimate objective is to serve and safeguard TikTok products and infrastructure on a global scale. We are looking for a strong AI Security Research Scientist to join through the student researcher program.


  • Experience in AI/machine learning, with a strong focus on security aspects; demonstrated experience in conducting AI security research with published papers or presentations in recognized forums.
  • Proficiency in programming languages such as Python, R, or Java, and familiarity with AI/ML frameworks like TensorFlow or PyTorch.
  • Excellent problem-solving skills and the ability to think creatively about complex challenges.

Preferred Qualifications

  • Ph.D. degree in Computer Science, Cybersecurity, AI, or related fields.
  • Hands-on experience in developing and deploying secure AI systems in a real-world setting.
  • Prior experience in industry or academic settings, working on large-scale AI projects and research.

Education requirements


Area of Responsibilities

Software Engineering


  • Conduct in-depth research on AI-specific security threats, including adversarial attacks, model tampering, and data privacy issues.
  • Develop and implement strategies to detect and mitigate AI security vulnerabilities in various domains, such as natural language processing, computer vision, and other machine learning areas.
  • Collaborate with cross-functional teams to integrate AI security measures into existing and new products.
  • Stay abreast of the latest trends and advancements in AI security, attending conferences and engaging with the broader research community.
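
To make the first responsibility concrete, the sketch below illustrates one classic adversarial attack, the Fast Gradient Sign Method (FGSM), against a toy logistic-regression model. The model weights, input, and `fgsm_perturb` helper are illustrative assumptions for this example only; they are not part of the role description or any TikTok system.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """FGSM: nudge each input feature by eps in the direction that
    increases the model's cross-entropy loss."""
    p = sigmoid(w @ x + b)       # model's predicted probability of class 1
    grad_x = (p - y_true) * w    # gradient of the loss w.r.t. the input x
    return x + eps * np.sign(grad_x)

# Toy model and input (illustrative values, true label = 1)
w = np.array([1.5, -2.0, 0.5])
b = 0.1
x = np.array([0.4, -0.3, 0.8])

x_adv = fgsm_perturb(x, w, b, y_true=1.0, eps=0.25)

clean_p = sigmoid(w @ x + b)      # confidence on the clean input
adv_p = sigmoid(w @ x_adv + b)    # confidence after the attack
# A small, bounded perturbation lowers the model's confidence in the true label.
print(clean_p, adv_p)
```

Detection and mitigation work in the role would target exactly this kind of behavior, for example via adversarial training or input sanitization.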


Work type

Full time

Work mode



San Jose