Current jobs related to Senior Data Engineer - Kraków, Warszawa, Wrocław, Poznań, Łódź - GFT Poland


  • Łódź, Łódź Voivodeship, Poland CommerzBank Full time

    Which technology & skills are important for us? Very good knowledge of data pipeline orchestration (design scalable, cloud-native data pipelines for data transformation and aggregation based on business use cases). Very good knowledge of GCP (or other Cloud) and creating Cloud based architecture (BigQuery, Dataproc/PySpark, Cloud Composer/Apache...


  • Remote, Gdynia, Warszawa, Poznań, Kraków, Wrocław, Poland Idego Group Sp. z o.o. Full time

    Minimum 4+ years of experience as a Data Engineer. Proven commercial experience with Databricks. Strong knowledge of AWS (nice to have). Proficiency in Python, PySpark, and SQL. Excellent command of English, both spoken and written. We are looking for a Senior Data Engineer to join one of our clients' projects in an international environment. Our perks: work...


  • Remote, Gdynia, Warszawa, Poznań, Kraków, Wrocław, Poland Idego Group Sp. z o.o. Full time

    At least 6-7 years of experience in the listed technology stack. Great experience with Snowflake. Experience in DBT. Experience with AWS (preferred) or Azure. Experience in Fivetran (or a similar tool). Strong communication skills. Fluent spoken and written English. We are looking for someone who brings a positive attitude, a strong work ethic, and a commitment to...


  • Remote, Wrocław, Poland AVENGA (employment agency, KRAZ reg. no. 8448) Full time

    Key Requirements: 5 years of experience as a Data Engineer. Proven experience in Azure Databricks (data engineering, pipelines, performance tuning, Python). Azure DevOps (Repos, Pipelines, YAML). Azure Key Vault. Azure Data Factory (optional). Good to have: knowledge of Power BI. Strong analytical and problem-solving skills. Excellent communication and stakeholder...


  • Łódź, Poland CommerzBank Full time

    Which technology & skills are important for us? 👌 Very good knowledge of data pipeline orchestration (design scalable, cloud-native data pipelines for data transformation and aggregation based on business use cases). Very good knowledge of GCP (or other Cloud) and creating Cloud based architecture (BigQuery, Dataproc/PySpark, Cloud Composer/Apache...


  • Warszawa, Poland Bayer Full time

    Required Qualifications: Bachelor’s degree in Computer Science, Engineering, or related field. 5+ years of experience in data engineering. Proficiency in building scalable ETL/ELT pipelines. Preferred Qualifications: Experience with orchestration tools (e.g., Airflow, dbt). Knowledge of cloud data platforms and big data tools. Strong problem-solving...


  • Kraków, Warszawa, Wrocław, Poznań, Łódź, Poland GFT Poland Full time

    Minimum 5 years of overall IT experience and 2+ years in the field of software development with Big Data technologies, microservices design and development, and event-based architectures in the cloud. Required Technology Skillset: Apache NiFi, Apache Kafka, Apache Spark, Apache Hive, Apache HDFS, Apache Oozie, SQL, Python, Google SDK, REST API, Linux Shell...


  • Kraków, Poland ITDS Full time

    You're ideal for this role if you have: Strong experience in PySpark, Scala, or similar data engineering languages. Hands-on experience building production data pipelines using Hadoop, Spark, and Hive. Knowledge of cloud platforms and migrating on-premise solutions to the cloud. Experience with scheduling tools such as Airflow and workflow...


  • Kraków, Warszawa, Wrocław, Poland Capgemini Polska Sp. z o.o. Full time

    YOUR PROFILE: You have hands-on experience in data engineering and are comfortable working independently on moderately complex tasks. You've worked with Databricks and PySpark in real-world projects. You're proficient in Python for data transformation and automation. You've used at least one cloud platform (AWS, Azure, or GCP) in a production environment. You...


  • Kraków, Gdańsk, Poznań, Wrocław, Warszawa, Poland Capgemini Polska Sp. z o.o. Full time

    YOUR PROFILE: You have hands-on experience in data engineering and can independently handle moderately complex tasks. You've worked with PySpark in distributed data processing scenarios. You are familiar with AWS data services such as S3, Glue, EMR, Lambda, Redshift, or similar. Experience working with an additional major cloud platform (Azure or GCP). You are...

Senior Data Engineer

2 weeks ago


Kraków, Warszawa, Wrocław, Poznań, Łódź, Poland GFT Poland Full time
  • Minimum 5 years of overall IT experience and 2+ years in the field of software development with Big Data technologies, microservices design and development, and event-based architectures in the cloud
  • Required Technology Skillset: Apache NiFi, Apache Kafka, Apache Spark, Apache Hive, Apache HDFS, Apache Oozie, SQL, Python, Google SDK, REST API, Linux Shell Scripting
  • Strong database skills: at least one SQL database (Oracle, Teradata, MySQL, PostgreSQL, MS SQL Server) and at least one NoSQL database (MongoDB, Apache HBase)
  • Cloud: experience with one of the major cloud platform providers (AWS, GCP, Azure) - GCP preferred
  • Strong understanding of the Hadoop ecosystem (Hortonworks/Cloudera), including HDFS and Hive, for data storage and analysis
  • Good understanding of RDBMS, Star Schema, Slowly Changing Dimensions, CDC
  • Experience in dataflow automation using Apache NiFi and in creating near-real-time data pipelines using Apache Kafka (see the pipeline sketch after this list)
  • Basic knowledge of Python & SQL, and the ability to navigate databases, especially Hive
  • Familiarity with data pipeline building technologies
  • DRY, KISS, SOLID, Clean Code, Self-documenting Code
  • Optionally, knowledge of and experience with BigQuery, Dataproc, and Java Spring Boot
  • Working in English on a daily basis
  • Positive, proactive and can-do attitude
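
For candidates who want a concrete picture of the stack above, the sketch below shows one common shape of such a pipeline: Spark Structured Streaming reading JSON events from Kafka and appending them as partitioned Parquet files on HDFS, on top of which a Hive table can sit. This is a minimal, purely illustrative sketch and not code from this project; the broker address, topic name, event schema and paths are hypothetical placeholders, and it assumes the Spark Kafka connector package is available on the cluster.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

# Illustrative sketch only: every name below (broker, topic, schema, paths) is a placeholder.
spark = (
    SparkSession.builder
    .appName("near-real-time-ingest")
    .enableHiveSupport()  # lets the output be exposed as a Hive table
    .getOrCreate()
)

# Hypothetical schema of the incoming JSON events
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "transactions")               # placeholder topic
    .option("startingOffsets", "latest")
    .load()
)

events = (
    raw.select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
    .withColumn("event_date", F.to_date("event_time"))
)

query = (
    events.writeStream
    .format("parquet")
    .option("path", "hdfs:///data/transactions")             # placeholder output path
    .option("checkpointLocation", "hdfs:///chk/transactions")
    .partitionBy("event_date")
    .trigger(processingTime="30 seconds")  # micro-batches give the near-real-time behaviour
    .outputMode("append")
    .start()
)
query.awaitTermination()

In practice a flow like this is often fronted by Apache NiFi, which handles ingestion and routing and publishes to the Kafka topic that Spark consumes.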

Why join GFT?

You will work with and learn from top IT experts. You will join a crew of experienced engineers: 60% of our employees are senior level.

Interested in the cloud? You will enjoy our full support in developing your skills: training programs, certifications and our internal community of experts. We have strong partnerships with top cloud providers: Google, Amazon and Microsoft - we are number one in Poland in the number of GCP certifications. Apart from GCP, you can also develop in AWS or Azure.

We are focused on development and knowledge sharing. Internal expert communities provide a comfortable environment where you can develop your skillset in areas such as blockchain, Big Data, cloud computing or artificial intelligence.  

You will work in a stable company (32 years on the market) on demanding and challenging projects for the biggest financial institutions in the world.

We offer you:

  • Hybrid work - 2 office days per week
  • Contract of employment or B2B contract
  • Working in a highly experienced and dedicated team
  • Benefit package that can be tailored to your personal needs (private medical coverage, sport & recreation package, lunch subsidy, life insurance, etc.)
  • Online training and certifications fit for your career path
  • Access to e-learning platform
  • Mindgram - a holistic mental health and wellbeing platform
  • Work From Anywhere (WFA) - the temporary option to work remotely outside of Poland for up to 140 days per year (including Italy, Spain, the UK, Germany, Portugal, and Bulgaria)
  • Social events
Your tasks:

  • Openness to work in a hybrid model (2 days from the office per week)
  • Openness to visiting the client's office in Cracow once every two months (for 3 days)
  • Design, develop, and maintain scalable data pipelines and solutions using Big Data technologies (e.g., Apache NiFi, Kafka, Spark, Hive, HDFS)
  • Implement event-driven and microservices-based architectures in cloud environments, preferably Google Cloud Platform (GCP)
  • Automate data flows and build near-real-time data pipelines using Apache NiFi and Kafka
  • Work with large-scale datasets from structured and unstructured sources, utilizing the Hadoop ecosystem (HDFS, Hive) for data storage and analysis
  • Leverage RDBMS and NoSQL databases to support data ingestion, transformation, and querying
  • Write clean, maintainable, and efficient code following best practices (DRY, KISS, SOLID, Clean Code)
  • Use Python, SQL, and Shell scripting for data processing and orchestration (see the orchestration sketch at the end of this listing)
  • Collaborate with cross-functional teams to deliver secure and reliable software solutions aligned with business needs
  • Contribute to the development of enterprise-scale data solutions in a fast-paced, agile environment
  • Engage in daily communication in English within a global team

Requirements: Spark, Hadoop, Apache Kafka, Apache NiFi, Apache Spark, Apache Hive, Apache HDFS, Apache Oozie, SQL, Python, Google SDK, REST API, Linux, GCP, BigQuery, DataProc, Java, Spring Boot

Additionally: Home office, Knowledge sharing, Life insurance, Sport subscription, Training budget, Private healthcare, International projects, Integration events, English lessons, Mindgram platform, Free coffee, Playroom, Free snacks, Free beverages, In-house trainings, In-house hack days, Modern office, Free fruit.
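
As a closing illustration of the GCP-preferred part of the stack (Cloud Composer, Dataproc, BigQuery), below is a minimal Airflow DAG sketch that submits a PySpark job to Dataproc on a daily schedule. It is an assumption-laden example rather than anything specific to this role: the project ID, cluster name, bucket path, region and DAG name are all hypothetical placeholders.

from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.dataproc import DataprocSubmitJobOperator

# Every identifier below is a placeholder, not a value taken from this posting.
PYSPARK_JOB = {
    "reference": {"project_id": "my-gcp-project"},
    "placement": {"cluster_name": "etl-cluster"},
    "pyspark_job": {"main_python_file_uri": "gs://my-bucket/jobs/transform.py"},
}

with DAG(
    dag_id="daily_transform",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Runs the transformation step on Dataproc; downstream tasks (for example,
    # loading aggregates into BigQuery) would be chained after this operator.
    submit_transform = DataprocSubmitJobOperator(
        task_id="submit_transform",
        job=PYSPARK_JOB,
        region="europe-central2",
        project_id="my-gcp-project",
    )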