


Security Data Analyst/Research Intern






AI-generated summary

  • You must be pursuing a degree in Computer Science or a related field, have strong Python skills and an understanding of data processing, and be located in Austin, TX for the summer internship. Experience with cybersecurity, AWS, and REST APIs is nice to have.
  • You will develop code for data analysis, parse datasets, automate processes in Python, use Jupyter Notebooks, maintain infrastructure, manage data ingestion, and collaborate on innovative data systems, working with Python, Golang, Linux, databases, and AWS services.

Off-cycle Internship



  • As a Security Data Analyst/Research intern based at SpyCloud’s Austin, Texas office, you will have the opportunity to work closely with the Data Analysis manager in developing and enhancing tools to parse, transform, and analyze security data. You'll have the chance to apply your Python skills, data analysis knowledge, and automation techniques to real-world scenarios.


  • Currently pursuing a degree in Computer Science, Data Science, Cybersecurity, or a related major.
  • Good understanding of computer science fundamentals (data structures, algorithms, data processing)
  • Familiarity with databases (relational, NoSQL, Search/Analytics)
  • Familiarity with building data solutions and automation tools using Python
  • Familiarity with Linux, bash/ksh scripting, and Regular Expressions
  • Located in Austin, Texas for the duration of the summer term (June 3rd - August 2nd)
  • On-site attendance once a week

Nice to Have:

  • Experience creating Python packages or with open-source package management
  • A background or strong interest in cybersecurity
  • Familiarity with a version control system (we use Git)
  • Experience with AWS compute, storage, and database services
  • Familiarity with building REST APIs

Education requirements

Currently Studying

Area of Responsibilities



  • Develop code used in the analysis of recaptured data
  • Parse and transform structured and unstructured datasets
  • Build Python-based automation for the parsing platform
  • Develop and maintain Jupyter Notebooks (including Lab and Hub) for data analysis
  • Maintain and improve the existing codebase and infrastructure
  • Manage weekly ingestion process
  • Collaborate with the team to design and build innovative data systems

Our Stack:

  • Python (some Golang)
  • Jupyter
  • Linux
  • Databases - Relational, NoSQL
  • AWS - EC2, RDS, SQS, S3, Lambda, API Gateway, and much more
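To give a flavor of the "parse and transform structured and unstructured datasets" responsibility, here is a minimal, hypothetical Python sketch: it parses `email:value` style lines (a common shape for recaptured data) into structured records with a regular expression, skipping malformed input. The pattern, field names, and data format are illustrative assumptions, not SpyCloud's actual schema or tooling.

```python
import json
import re
from typing import Iterable, Iterator, Optional

# Hypothetical pattern for "email:value" lines; real recaptured data
# comes in many formats, so production code would support several parsers.
LINE_PATTERN = re.compile(r"^(?P<email>[^:\s]+@[^:\s]+):(?P<value>.+)$")

def parse_line(line: str) -> Optional[dict]:
    """Parse one 'email:value' line into a record, or return None if malformed."""
    match = LINE_PATTERN.match(line.strip())
    if not match:
        return None
    email = match.group("email").lower()  # normalize for deduplication
    return {
        "email": email,
        "domain": email.split("@", 1)[1],
        "value": match.group("value"),
    }

def parse_dataset(lines: Iterable[str]) -> Iterator[dict]:
    """Yield structured records, silently skipping lines that do not parse."""
    for line in lines:
        record = parse_line(line)
        if record is not None:
            yield record

if __name__ == "__main__":
    raw = ["Alice@Example.com:hunter2", "not a valid line"]
    for rec in parse_dataset(raw):
        print(json.dumps(rec))
```

The same parse-then-transform shape scales naturally to the stack listed above: the generator can feed records into a database writer or an S3/SQS ingestion step without holding the whole dataset in memory.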


Work type

Full time

Work mode


Start date

Jun 2, 2024