Internship

Data Strategy Co-op - NY - College Program 2024

Marsh McLennan

🚀 Off-cycle Internship

New York

AI generated summary

  • Seeking a candidate pursuing a technical degree or with strong ability to learn new concepts quickly, experienced in Python, R, SQL, SKLearn, with exposure to Docker, Spark, cloud environments, and REST APIs. Must have strong analytical skills, work well in Agile environments, and demonstrate a desire to learn and add value to team initiatives.
  • The Data Strategy Co-op will work on developing and deploying custom data pipelines, leveraging large datasets to drive revenues, consuming data from various sources and formats, constructing and maintaining data pipelines, and performing exploratory data analysis or developing machine-learning models to support Data Strategy products.

Data · New York

Description

  • The Co-Op role within the Data Strategy group at Guy Carpenter (“GC”) provides an opportunity to contribute to the design, analysis and facilitation of data-centric work streams across multiple innovative projects at one of the world’s largest and most respected risk management and reinsurance firms. The Data Strategy group has a “start-up style” mandate and internal consulting role (within a $1.3 billion company) to quickly and efficiently enhance the acquisition, storage, analysis, fidelity, and monetization of massive amounts of client, internal, and third-party data across the Guy Carpenter organization.
  • As part of the Data Strategy group, this role will work with data analysts, data scientists, data engineers, product managers, and stakeholders from other internal groups, gaining real-world software engineering and data manipulation experience within a thriving group. They will participate in increasing the efficiency of data collection and analysis across GC and in running analyses on various quantitative and qualitative subjects.

Requirements

  • Pursuit of a Master’s or Bachelor’s degree in a technical field (computer science, data science, mathematics, or a related quantitative field), or a demonstrated ability to learn new software and analytical concepts quickly. A strong academic record and relevant work experience are more important than the field of study.
  • Excellent verbal and written skills for complex communications with Guy Carpenter colleagues at all levels of the organization
  • Exposure to technologies and programming languages such as Python, R, SQL, SKLearn, and machine learning
  • What Makes You Stand Out — prior experience with:
  • software best practices including DRY coding, unit & integration testing, documentation, etc.
  • modern containerized deployment technologies (Docker)
  • big-data processing technologies like Spark
  • developing in a modern, agile SDLC environment (dev/qc/prod environments with CI/CD, etc.)
  • using a cloud environment (AWS/GCP/Azure)
  • consuming REST APIs
  • Strong analytical skills and intellectual curiosity as demonstrated through academic experience or work assignments
  • Experience working in an Agile environment to facilitate the quick and effective fulfillment of group goals
  • Experience with risk evaluation modeling and/or building data products
  • Good interpersonal skills for establishing and maintaining internal relationships, working well as part of a team, and contributing to both technical and non-technical discussions and presentations
  • Strong emphasis on a desire to learn quickly and add value to team initiatives

Education requirements

Currently Studying
Bachelors
Masters

Area of Responsibilities

Data

Responsibilities

  • Develop, implement, and deploy custom data pipelines powering machine learning algorithms, insights generation, client benchmarking tools, business intelligence dashboards, reporting and new data products.
  • Leverage multiple large datasets to drive revenues via the development of new products with the Data Strategy team, as well as the enhanced delivery of existing products
  • Consume data from a variety of sources (relational DBs, APIs, NetApp and other cloud storage, FTPs) and formats (Excel, CSV, XML, Parquet, unstructured)
  • Construct and maintain data pipelines between internal/external sources and the data lake and implement modern quality assurance practices including automated validation with frameworks like dbt or Great Expectations.
  • Participate in development standards across the team through code reviews, unit/integration testing, and monitoring
  • Perform exploratory data analysis and/or develop or enhance machine-learning models to support Data Strategy products.
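To illustrate the kind of work described above, here is a minimal sketch of an extract/validate/transform pipeline in Python. The data, column names, and validation rule are all hypothetical examples, not part of the role description; real pipelines at this scale would typically use frameworks like the dbt or Great Expectations tools mentioned above rather than hand-rolled checks.

```python
import csv
import io

# Hypothetical sample feed, standing in for one of the many CSV sources
# a pipeline might ingest.
RAW = """policy_id,premium,region
P-001,1200.50,NY
P-002,,CA
P-003,980.00,NY
"""

def extract(text):
    """Parse CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def validate(rows):
    """Quality-assurance step: drop rows with a missing premium."""
    return [r for r in rows if r["premium"].strip()]

def transform(rows):
    """Cast types and aggregate premium by region."""
    totals = {}
    for r in rows:
        totals[r["region"]] = totals.get(r["region"], 0.0) + float(r["premium"])
    return totals

if __name__ == "__main__":
    clean = validate(extract(RAW))
    print(transform(clean))  # {'NY': 2180.5}
```

The validation step here is the simplest possible stand-in for the "automated validation" responsibility: in practice each check would be declared once and run against every load.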

Details

Work type

Full time

Work mode

Office

Location

New York