Data Engineer (Python/Scala)

Location: Remote or Kyiv, Ukraine

Date posted: June 17, 2020

Description

Data Engineer at an AdTech company in the USA (remote or in the Kyiv office)

We are a fast-growing social ad tech company. Vetted by Facebook, Twitter, Instagram, and Snapchat as an Official Partner, we empower brands and advertisers to improve the performance of their social ad campaigns at scale.

The preferred tool of 4,000+ companies worldwide, our cutting-edge platform is used every day by advertisers (Meetic, L’Occitane, Cheerz, Sarenza, Hawkers...) and agencies (Havas, Dentsu-Aegis, GroupM...) to automate and optimize all their social ad campaigns in one place.

We’re changing the way people run ads on social media and helping shape the future of interaction between brands and people.

We offer:

• participation in the development of a global data science product;
• the opportunity to take a leading position in a growing team;
• interesting tasks and a team of experienced colleagues;
• experience in an international company with offices in New York, Paris, Tel Aviv, and Kyiv.


Responsibilities

• Spend 50% of your time applying your software and data engineering background to resolving reporting and data platform disruptions in the company’s Ad Platform. This will involve investigating data and the integration of the various services used in the data management platform, such as Amazon Web Services offerings like Redshift, Elastic MapReduce, and Data Pipeline.
• Use 10% of your time to participate in system design review meetings, providing input based on your data engineering experience.
• Spend another 10% of your time validating the data quality and integrity of newly developed projects (see the illustrative sketch after this list).
• Spend 30% of your time learning new data technologies and implementing data solutions that provide real-time insights, enabling business users to make the right decisions.
• At the end of every sprint cycle, participate in post-production activities such as code reviews and the preparation of runbook documentation for ETL/data pipelines.
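For illustration only, here is a minimal sketch of the kind of data-quality validation mentioned above, assuming a PySpark job (e.g., on EMR) reading campaign report data from S3. The bucket, path, and column names are hypothetical, not part of the actual platform.

# Hypothetical data-quality check; paths and columns are assumptions for illustration.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("report-quality-check").getOrCreate()

# Load one day of report data (illustrative S3 path).
reports = spark.read.parquet("s3://example-bucket/reports/dt=2020-06-17/")

# Basic integrity checks: missing keys, duplicate rows, negative spend.
null_ids = reports.filter(F.col("campaign_id").isNull()).count()
duplicates = reports.count() - reports.dropDuplicates(["campaign_id", "hour"]).count()
negative_spend = reports.filter(F.col("spend") < 0).count()

if null_ids or duplicates or negative_spend:
    raise ValueError(
        f"Data quality check failed: {null_ids} null ids, "
        f"{duplicates} duplicates, {negative_spend} negative spend rows"
    )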


Requirements

• 2+ years of experience with Python/Scala.
• 3+ years of experience developing/supporting DW/BI applications.
• 3+ years of experience supporting/troubleshooting failed Pentaho Data Integration/Kettle ETL processes.
• 1+ year of experience developing Tableau dashboards for end users.
• Experience in Hadoop, Flume, Kafka, Hive, Spark.
• Experience in working with AWS S3, AWS Data Pipeline, AWS EMR, AWS Redshift, AWS Athena, and AWS Kinesis.
• Basic scripting skills in Bash/Perl.
• Familiarity with monitoring systems such as Datadog and Pingdom to catch and prevent issues.
• Advanced / Fluent English.

Nice-to-have

• Database administration and development skills, especially with MySQL and Redshift.
• Experience writing complex SQL queries and applying performance-tuning techniques.