Senior Big Data Engineer


Job Type: Full Time
Job Location: Remote

We are looking for Senior Data Engineers to work remotely on the data analytics projects that form an RTB exchange. We process over 100k requests/s, and all teams constantly face high-volume, low-latency and geo-distributed challenges. Our data volume is over 20TB/day compressed, over 200TB uncompressed.

We value technical excellence and you will have both resources and time to deliver world-class code.

This is a 100% remote position; for qualified candidates we help with visa and relocation to Wrocław, Poland.

You will be working with team members in NYC and Poland.

Salary range for this role: B2B: 24,000 – 30,000 PLN/month; UoP: 20,000 – 25,000 PLN/month. We can pay in PLN, EUR or USD.

We are happy to negotiate higher compensation if you are an ideal candidate.

The data is used to generate reports, update advertising budgets and update our optimization engines. We focus on running as close to real time as possible, with very tight SLAs.

If you like solving hard, technically challenging problems, join us and use those skills to create real-time, concurrent, globally distributed applications and services.

If you think you are a good fit even though you don’t meet all the requirements, please apply. We are currently filling multiple roles and will do our best to find the best match.

Role Details

  • Work on creating and maintaining reliable and scalable distributed data processing systems
  • Optimize our jobs that utilize Kafka, Hadoop, Spark, Presto and Kubernetes
  • Scale, troubleshoot and fix existing applications and services
  • Own a complex set of services and applications
  • Focus on ensuring that applications run 24/7
  • Lead technical discussions leading to improvements in tools, processes or projects
  • 70% time – new development; 30% – bug fixing, maintenance
  • Work on hybrid infrastructure: GCP + on-prem
  • Regular tasks include:
      • creating new jobs to ingest internal and third-party data
      • creating consumers for data sent via Kafka and Spark Streaming
      • developing frameworks for new jobs
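As a rough sketch of the consumer work described above (the library choice, topic name, and message schema here are illustrative assumptions, not details from this posting):

```python
import json

def parse_bid_event(raw: bytes) -> dict:
    # Hypothetical bid-event schema; real messages would carry many more fields.
    event = json.loads(raw)
    # Keep only the fields a downstream reporting job would need.
    return {
        "campaign_id": event["campaign_id"],
        "spend": float(event["spend"]),
    }

def consume(topic: str = "bid-events") -> None:
    # kafka-python is one common client; confluent-kafka is another option.
    from kafka import KafkaConsumer
    consumer = KafkaConsumer(topic, bootstrap_servers="localhost:9092")
    for msg in consumer:
        record = parse_bid_event(msg.value)
        print(record)  # in practice: hand off to the reporting pipeline
```

Keeping the parsing logic separate from the consumer loop makes it easy to unit-test without a running broker.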

Key Requirements

  • Very good knowledge of Python
  • Minimum 7 years of relevant professional experience
  • Database experience, for example: PostgreSQL, MongoDB, Cassandra, Redis
  • Proficiency in Linux

Bonus Experience

  • Real Time Bidding (RTB), AdTech, High Frequency Trading (HFT) experience
  • DevOps experience (Chef, Jenkins, CircleCI)
  • GCP
  • Scala and Java experience

This job is no longer accepting applications.