
Job opening at Parallel Staff
Senior Data Engineer
60,000 to 70,000 MXN
Join our ever-growing group of diverse talent to tackle challenging problems and create innovative solutions. At ParallelStaff, we aren't just looking to support businesses; we want to support you! Embrace courage. Generate meaningful ideas. Build future-proof solutions.
What the person will do in the role
The candidate will architect and implement a large-scale business intelligence solution in the cloud and develop data pipelines to create fact and dimension models for analytics (a minimal sketch follows the list below).
- This role requires expert-level experience developing low-latency applications.
- The candidate will develop SQL for Snowflake and write scripts in Python and UNIX shell. (MUST)
- The candidate will span multiple roles such as data architect, data modeler, and data engineer.
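To make the fact/dimension modeling responsibility above more concrete, here is a minimal sketch, assuming a hypothetical sales star schema and environment-based credentials; the table names, columns, and connection parameters are illustrative only, not a prescribed design.

```python
# Minimal sketch: creating a hypothetical fact/dimension (star schema) model in Snowflake
# from a Python script. All object names and connection parameters are illustrative assumptions.
import os

import snowflake.connector

DDL_STATEMENTS = [
    # Dimension table describing customers (illustrative columns).
    """
    CREATE TABLE IF NOT EXISTS dim_customer (
        customer_key  INTEGER AUTOINCREMENT PRIMARY KEY,
        customer_id   VARCHAR NOT NULL,
        customer_name VARCHAR,
        country       VARCHAR
    )
    """,
    # Fact table referencing the dimension by its surrogate key.
    """
    CREATE TABLE IF NOT EXISTS fact_orders (
        order_key    INTEGER AUTOINCREMENT PRIMARY KEY,
        customer_key INTEGER REFERENCES dim_customer(customer_key),
        order_date   DATE,
        order_amount NUMBER(12, 2)
    )
    """,
]


def build_star_schema() -> None:
    """Run the DDL above against a Snowflake database (assumed to exist)."""
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="ANALYTICS_WH",   # hypothetical warehouse
        database="ANALYTICS_DB",    # hypothetical database
        schema="PUBLIC",
    )
    try:
        cur = conn.cursor()
        for ddl in DDL_STATEMENTS:
            cur.execute(ddl)
    finally:
        conn.close()


if __name__ == "__main__":
    build_star_schema()
```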
Basic Requirements:
Job responsibilities, core deliverables, and major duties
- Build software across our entire analytics platform, including data processing, storage, and APIs, using modern technologies. We operate in real time with high availability.
- Think of new ways to help make our platform more scalable, resilient, and reliable, and then work across our team to put your ideas into action.
- Ensure performance isn’t our weakness by implementing and refining robust data processing and caching technologies.
- Help us stay ahead of the curve by working closely with data architects, stream processing specialists, API developers, our DevOps team, and analysts to design systems that can scale elastically in ways that make other groups jealous.
Required Education, Experience/Skills/Training:
Required
- 7+ years of experience developing with a mix of languages (Python, SQL, etc.) and frameworks to implement data acquisition, processing, and serving technologies. Experience with real-time and scalable systems is preferred. (MUST)
- Experience building robust and scalable data integration (ETL) pipelines using SQL, Python, and Spark (see the sketch after this list). (MUST)
- Hands-on experience with newer data technologies such as Spark and Snowflake, and substantial experience developing in a cloud-native environment with many different technologies. (MUST)
- Expert knowledge of data systems and SQL
- Experienced with Excel and data manipulation
- Excellent communication skills and ability to interact with all levels of end-users and technical resources
- Creative thinker and motivated self-starter
- English Level: C1 or C2 is mandatory
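To illustrate the ETL requirement above, here is a minimal PySpark sketch of a batch pipeline; the input/output paths, column names, and aggregation logic are hypothetical examples, not the actual stack or data model.

```python
# Minimal PySpark ETL sketch: extract raw order events, transform them into a daily
# revenue aggregate, and load the result as Parquet.
# Paths, column names, and business rules are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F


def run_pipeline(input_path: str, output_path: str) -> None:
    spark = SparkSession.builder.appName("daily-revenue-etl").getOrCreate()

    # Extract: read raw JSON order events (assumed layout).
    raw = spark.read.json(input_path)

    # Transform: keep completed orders and aggregate revenue per day.
    daily_revenue = (
        raw.filter(F.col("status") == "completed")
        .withColumn("order_date", F.to_date("created_at"))
        .groupBy("order_date")
        .agg(
            F.sum("amount").alias("total_revenue"),
            F.countDistinct("order_id").alias("order_count"),
        )
    )

    # Load: write the aggregate for downstream analytics.
    daily_revenue.write.mode("overwrite").parquet(output_path)

    spark.stop()


if __name__ == "__main__":
    # Hypothetical locations; in practice these would come from job configuration.
    run_pipeline(
        "s3://example-bucket/raw/orders/",
        "s3://example-bucket/curated/daily_revenue/",
    )
```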
Preferred
- Experience with open-source tools such as Spark and YARN/Kubernetes.
- Experience working in an AWS cloud environment.
- Prior experience building scalable platforms: handling large-scale data and operationalizing clusters in a cloud environment.
- Practical knowledge of Linux or UNIX shell scripting.
- Strong sense of ownership, urgency, and drive
Our Offer
- Competitive Monthly Compensation
- Full-Time - 40 hrs a Week
- 100% Remote Work
- Annual Bonus
- Vacation / Holidays Off
- Private Health Insurance
- Sign-On Bonus: $100 USD
Job requirements
Specialty
Python
Work location
Remote: Mexico
English level
Advanced
Apply for this position
Send your profile and résumé to the recruiter to introduce yourself.