AWS Data Engineer (Module Lead) job in Cuauhtémoc, Ciudad de México - Vacancy 103273

Posted 14 days ago.

AWS Data Engineer (Module Lead) at Marsh McLennan

Salary not disclosed

Ciudad de México

Full-time employee

English: Advanced level

AWS Data Engineer (Module Lead)

What can you expect?

As an AWS Data Engineer, you will be responsible for designing, implementing, and maintaining data solutions on the Amazon Web Services (AWS) platform. You’ll work with large datasets, streaming data, and various AWS services such as Amazon Redshift, Amazon EMR, AWS Glue, and AWS Lambda. Your role involves building data pipelines, data lakes, and data warehouses, as well as optimizing data storage, retrieval, and processing. You will also collaborate with data scientists, analysts, and other stakeholders to ensure data quality, availability, and security. Additionally, you may be involved in data modeling, performance tuning, and troubleshooting to ensure efficient and reliable data processing.

You will have an opportunity to work on our internal application -  Blue[i] Analytics Solutions :

Blue[i] is a next-generation analytics suite of digital solutions that delivers our industry-leading data and actionable insights through an intuitive, interactive, and engaging platform, allowing you to make critical business decisions with confidence.

What is in it for you?

  • Leading training and development program
  • Hybrid work from home flexibility
  • Health care and insurance for you and your dependents, plus flexible benefits packages to suit your needs and lifestyle
  • Professional environment where your career path really matters and is supported in our global organization
  • Excellent career diversification opportunities
  • Great team environment with energetic and supportive colleagues

What you need to have:

  • 3-5 years’ experience in a data engineering role
  • Experience with AWS data services, in particular infrastructure as code (CloudFormation and/or Terraform), EMR, Glue, RDS, S3, EC2, Redshift, and other data infrastructure automation services
  • Infrastructure automation through DevOps scripting using Python, R, Spark SQL, or PowerShell; candidates with Java experience may also be considered
  • Querying and managing data - SQL
  • Data modelling - relational, dimensional, NoSQL, and snowflake schemas
  • Source code management systems - GitHub, Bitbucket, or GitLab
  • Experience with CI/CD practices and tools - GitLab, Jenkins
  • Experience with big data technologies - Spark, Hadoop, and Hive
  • Ability to engage with senior-level stakeholders and exert broad influence within the internal team
  • Experience supporting technical projects
  • A degree in BE/BTech/MCA/MTech

What makes you stand out:

  • Able to work independently, applying judgment to plan and execute your tasks.
  • Ability to work in a team across multiple projects and geographies.
  • Good communication and presentation skills.
  • Ability to multitask and liaise with multiple stakeholders.
  • Ability to identify process improvement opportunities.
  • Adaptability to change.