Data Engineer with Databricks @ KMD Poland

5 days ago


Remote, Warsaw, Czech Republic · KMD Poland · Full time

Ideal candidate:

  • Has 3+ years of commercial experience in implementing, developing, or maintaining data load systems (ETL/ELT).
  • Is proficient in Python, with a solid understanding of data processing challenges.
  • Has experience working with Apache Spark and Databricks.
  • Is familiar with MSSQL databases or other relational databases.
  • Has some experience working with distributed systems on a cloud platform.
  • Has worked on large-scale systems and understands performance optimization.
  • Is comfortable with Git and CI/CD practices, and can contribute to deployment processes for data pipelines.
  • Is proactive, eager to learn, and has a strong can-do attitude.
  • Communicates fluently in English and Polish, both written and spoken.
  • Is a team player with excellent collaboration and communication skills.

Nice to have:

  • Experience with Azure
  • Experience working with SSIS
  • Familiarity with Azure PostgreSQL
  • Knowledge of Docker and Kubernetes
  • Exposure to Kafka or other message brokers and event-driven architecture
  • Experience working in Agile/Scrum environments

Location: Warsaw (Inflancka 4A) or remote work (Poland)
B2B contract, targeted salary: 170 PLN net/hour

#Python #ApacheSpark #Databricks #MSSQL #Git #CI/CD #Docker #Azure #Kubernetes

Are you ready to join our international team as a Data Engineer with Databricks? Let us tell you why you should...

What products do we develop?

KMD Elements is a cloud-based solution tailored for the energy and utility market. It offers a highly efficient way to handle complex data validation and advanced formula-based settlements on time series. Designed for the international market, KMD Elements automates intricate calculation and billing processes. Key features include an advanced configuration engine, robust automation capabilities, multiple integration options, and a customer-centric interface.

How do we work?
#Agile #Scrum #Teamwork #CleanCode #CodeReview #E2Eresponsibility #ConstantImprovement

Your responsibilities:

  • Develop and maintain data delivery pipelines for a leading IT solution in the energy market, leveraging Apache Spark, Databricks, Delta Lake, and Python.
  • Take end-to-end responsibility for the full lifecycle of the features you develop.
  • Design technical solutions for business requirements from the product roadmap.
  • Ensure optimal performance.
  • Refactor existing code and enhance system architecture to improve maintainability and scalability.
  • Design and evolve the test automation strategy, including the technology stack and solution architecture.
  • Prepare reviews, participate in retrospectives, estimate user stories, and refine features to ensure their readiness for development.

Requirements: Apache Spark, Databricks, Python, User stories, ETL, Spark, MSSQL, Relational database, Cloud, Git, Azure, SSIS, PostgreSQL, Docker, Kubernetes, Kafka

Additionally: Remote work, Private healthcare, Flat structure, International projects, Sport subscription, Free coffee, Playroom, Free snacks, Free beverages, Modern office, No dress code.
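To give a flavor of the domain, here is a minimal, hypothetical sketch of the kind of work described above: validating a time-series of hourly meter readings and applying a simple formula-based settlement. This is plain Python for illustration only — not KMD Elements code; in the actual product this logic would run at scale on Spark/Databricks, and the tariff value and function names are invented for the example.

```python
from datetime import datetime, timedelta

TARIFF_PLN_PER_KWH = 0.75  # assumed flat tariff, for illustration only


def validate_series(readings, start, hours):
    """Check that the series is complete (one reading per hour)
    and contains no negative consumption values."""
    errors = []
    expected = {start + timedelta(hours=h) for h in range(hours)}
    seen = {ts for ts, _ in readings}
    for missing in sorted(expected - seen):
        errors.append(f"missing reading at {missing.isoformat()}")
    for ts, kwh in readings:
        if kwh < 0:
            errors.append(f"negative value {kwh} at {ts.isoformat()}")
    return errors


def settle(readings):
    """Formula-based settlement: total energy times tariff."""
    total_kwh = sum(kwh for _, kwh in readings)
    return round(total_kwh * TARIFF_PLN_PER_KWH, 2)


start = datetime(2024, 1, 1)
readings = [(start + timedelta(hours=h), 1.5) for h in range(24)]
assert validate_series(readings, start, 24) == []
print(settle(readings))  # 24 h * 1.5 kWh * 0.75 PLN = 27.0
```

In a real pipeline the same two steps — completeness/quality validation, then a configurable settlement formula — would typically be expressed as PySpark transformations over Delta Lake tables rather than Python loops.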


