Senior Data Engineer

5 days ago


Remote · Warszawa, Czech Republic · Welltech · Full time

What We're Looking For

As a Senior Data Engineer, you will play a crucial role in building and maintaining the foundation of our data ecosystem. You’ll work alongside data engineers, analysts, and product teams to create robust, scalable, and high-performance data pipelines and models. Your work will directly impact how we deliver insights, power product features, and enable data-driven decision-making across the company.

This role is perfect for someone who combines deep technical skills with a proactive mindset and thrives on solving complex data challenges in a collaborative environment.

Required skills:

  • 5+ years of experience in data engineering or backend development, with a strong focus on building production-grade data pipelines.
  • 2-3+ years of hands-on experience with AWS services (Spectrum, S3, RDS, Glue, Lambda, Kinesis, SQS); administration of Redshift is a must.
  • Proficient in Python and SQL for data transformation and automation.
  • Experience with dbt for data modeling and transformation.
  • Good understanding of streaming architectures and micro-batching for real-time data needs.
  • Experience with CI/CD pipelines for data workflows (preferably GitLab CI).
  • Familiarity with event schema validation tools and solutions (e.g., Snowplow, Schema Registry).
  • Excellent communication and collaboration skills.
  • Strong problem-solving skills: able to dig into data issues, propose solutions, and deliver clean, reliable outcomes.
  • A growth mindset: enthusiastic about learning new tools, sharing knowledge, and improving team practices.
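
The micro-batching mentioned above can be sketched in plain Python. This is an illustrative, hand-rolled example (not Welltech's actual implementation): a real consumer reading from Kinesis would also flush batches on a time interval, not only by count.

```python
from itertools import islice
from typing import Iterable, Iterator, List

def micro_batches(events: Iterable[dict], batch_size: int = 3) -> Iterator[List[dict]]:
    """Group an event stream into fixed-size micro-batches.

    Sketch only: batches by count; a production consumer would also
    flush on a timer so slow streams still emit data promptly.
    """
    it = iter(events)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch

# Example: a stream of 7 events yields batches of 3, 3, and 1.
stream = ({"event_id": i} for i in range(7))
sizes = [len(b) for b in micro_batches(stream, batch_size=3)]
print(sizes)  # [3, 3, 1]
```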

Tech Stack You’ll Work With:

  • Cloud: AWS (Redshift, Spectrum, S3, RDS, Lambda, Kinesis, SQS, Glue, MWAA)
  • Languages: Python, SQL
  • Orchestration: Airflow (MWAA)
  • Modeling: dbt
  • CI/CD: GitLab CI (including GitLab administration)
  • Monitoring: Datadog, Grafana, Graylog
  • Event validation process: Iglu schema registry
  • APIs & Integrations: REST, OAuth, webhook ingestion
  • Infra-as-code (optional): Terraform
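
The event validation item in the stack above (Iglu schema registry) boils down to checking incoming events against a declared schema. As a hedged, stdlib-only illustration (the field names and schema shape here are hypothetical, and a real setup would resolve versioned JSON Schemas from the registry rather than hard-coding them):

```python
import json

# Hypothetical minimal event schema; real pipelines would fetch
# versioned JSON Schemas from an Iglu registry instead.
REQUIRED_FIELDS = {"event_id": str, "user_id": str, "ts": (int, float)}

def validate_event(raw: str) -> list:
    """Return a list of validation errors; an empty list means valid."""
    try:
        event = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    errors = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in event:
            errors.append(f"missing field: {field}")
        elif not isinstance(event[field], expected_type):
            errors.append(f"wrong type for {field}")
    return errors

print(validate_event('{"event_id": "e1", "user_id": "u1", "ts": 1700000000}'))  # []
print(validate_event('{"event_id": "e1"}'))  # ['missing field: user_id', 'missing field: ts']
```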


Nice to Have:

  • Experience with additional AWS services: EMR, EKS, Athena, EC2.
  • Hands-on knowledge of alternative data warehouses like Snowflake or others.
  • Experience with PySpark for big data processing.
  • Familiarity with event data collection tools (Snowplow, Rudderstack, etc.).
  • Interest in or exposure to customer data platforms (CDPs) and real-time data workflows.

🚀 Who Are We?

Welcome to Welltech—where health meets innovation 🌍 As a global leader in the Health & Fitness industry, we’ve crossed over 220 million installs with three life-changing apps, all designed to boost well-being for millions. Our mission? To transform lives through intuitive nutrition trackers, powerful fitness solutions, and personalized wellness journeys—all powered by a diverse team of over 700 passionate professionals with a presence across 5 hubs.

Why Welltech? Imagine joining a team where your impact on global health and wellness is felt daily. At Welltech, we strive to be proactive wellness partners for our users, while continually evolving ourselves.

Candidate journey: ⭕️ Recruiter call ➔ ⭕️ Technical call with the hiring manager ➔ ⭕️ Meet the future stakeholders

✨ Why You’ll Love Being Part of Welltech:

  • Grow Together: Join a culture that champions both personal and professional growth. Here, you’ll thrive as we learn, evolve, and succeed together. 
  • Lead by Example: No matter your role, your leadership matters. Every team member is empowered to inspire and make an impact. 
  • Results-Driven: We’re all about achieving meaningful outcomes. It’s not just about the effort, but the difference we make every day. 
  • We Are Well-Makers: Be part of a movement that’s creating a healthier, happier world. Together, we make well-being a reality.

What You’ll Do:

  • Pipeline Development and Optimization: Build and maintain reliable, scalable ETL/ELT pipelines using modern tools and best practices, ensuring efficient data flow for analytics and insights.
  • Data Modeling and Transformation: Design and implement effective data models that support business needs, enabling high-quality reporting and downstream analytics.
  • Collaboration Across Teams: Work closely with data analysts, product managers, and other engineers to understand data requirements and deliver solutions that meet the needs of the business.
  • Ensuring Data Quality: Develop and apply data quality checks, validation frameworks, and monitoring to ensure the consistency, accuracy, and reliability of data.
  • Performance and Efficiency: Identify and address performance issues in pipelines, queries, and data storage. Suggest and implement optimizations that enhance speed and reliability.
  • Security and Compliance: Follow data security best practices and ensure pipelines are built to meet data privacy and compliance standards.
  • Innovation and Continuous Improvement: Test new tools and approaches by building Proof of Concepts (PoCs) and conducting performance benchmarks to find the best solutions.
  • Automation and CI/CD Practices: Contribute to the development of robust CI/CD pipelines (GitLab CI or similar) for data workflows, supporting automated testing and deployment.

Additionally:

  • Sport subscription
  • Private healthcare
  • International projects
  • Training budget
  • Lunch card
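
The "Ensuring Data Quality" responsibility mentioned above typically means running automated checks against warehouse tables. A minimal sketch, using the stdlib sqlite3 module as a stand-in for Redshift and a hypothetical `orders` table:

```python
import sqlite3

# Stand-in warehouse: in-memory SQLite here, where a real pipeline
# would run the same checks against Redshift.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("o1", 10.0), ("o2", None), ("o1", 10.0)],
)

def run_quality_checks(conn: sqlite3.Connection) -> dict:
    """Two common checks: nulls in a required column, duplicate keys."""
    null_count = conn.execute(
        "SELECT COUNT(*) FROM orders WHERE amount IS NULL"
    ).fetchone()[0]
    dup_count = conn.execute(
        "SELECT COUNT(*) FROM (SELECT order_id FROM orders "
        "GROUP BY order_id HAVING COUNT(*) > 1)"
    ).fetchone()[0]
    return {"null_amounts": null_count, "duplicate_order_ids": dup_count}

print(run_quality_checks(conn))  # {'null_amounts': 1, 'duplicate_order_ids': 1}
```

In practice these checks would live as dbt tests or scheduled Airflow tasks that fail the pipeline (and alert via Datadog/Grafana) when thresholds are breached.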
