Data Engineer @ HSBC Technology Poland

1 day ago


Kraków, Lesser Poland, Poland · HSBC Technology Poland · Full time
  • Experience with most of the following technologies: Apache Hadoop, Scala, Apache Spark, Spark Streaming, YARN, Kafka, Hive, Python, ETL frameworks, MapReduce, SQL, RESTful services.
  • Sound working knowledge of the Unix/Linux platform.
  • Hands-on experience building data pipelines using Hadoop components: Hive, Spark, Spark SQL.
  • Experience with industry-standard version control tools (Git, GitHub), automated deployment tools (Ansible & Jenkins), and requirement management in JIRA.
  • Understanding of big data modelling using relational and non-relational techniques.
  • Experience debugging code issues and communicating the highlighted differences to the development team/architects.
  • Experience with time-series/analytics databases such as Elasticsearch, scheduling tools such as Airflow and Control-M, understanding of Cloud design patterns, and developing HiveQL and UDFs for analysing semi-structured/structured datasets.
  • Exposure to DevOps and Agile project methodologies such as Scrum and Kanban.
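The requirements above centre on building data pipelines with Spark, Hive and Python. Purely as an illustration, and not part of the posting itself, a toy batch "transform" stage in plain Python (standing in for what would be Spark/Hive code in the role; the function and field names are made up) could look like:

```python
import csv
import io
from collections import defaultdict

def aggregate_transactions(raw_csv: str) -> dict[str, float]:
    """Toy 'transform' stage: sum transaction amounts per account.

    In a real Hadoop/Spark pipeline this would be a Spark SQL
    aggregation over a Hive table; plain Python stands in here.
    """
    totals: dict[str, float] = defaultdict(float)
    for row in csv.DictReader(io.StringIO(raw_csv)):
        totals[row["account"]] += float(row["amount"])
    return dict(totals)

if __name__ == "__main__":
    sample = "account,amount\nA1,10.5\nA2,3.0\nA1,4.5\n"
    print(aggregate_transactions(sample))  # {'A1': 15.0, 'A2': 3.0}
```

The same shape (read raw records, group, aggregate, emit a serving-ready result) is what a Spark SQL `GROUP BY` over a Hive table would express at scale.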

Some careers shine brighter than others.
If you're looking for a career that will help you stand out, join HSBC, and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.
Your career opportunity

The Data Engineer will be responsible for designing, building, and managing the data infrastructure and data
pipeline processes for the bank. The ideal candidate will have a strong background in data engineering, excellent
leadership skills, and a thorough understanding of the banking industry's data requirements.

What we offer

  • Competitive salary
  • Annual performance-based bonus
  • Additional bonuses for recognition awards
  • Multisport card
  • Private medical care
  • Life insurance
  • One-time reimbursement of home office set-up (up to 800 PLN).
  • Corporate parties & events
  • CSR initiatives
  • Financial support with trainings and education
  • Nursery discounts
  • Social fund
  • Flexible working hours
  • Free parking
Your responsibilities

  • Spark development and design using Scala 2.10+ or Java 1.8+, or Python development and design.
  • Automated testing of new and existing components in an Agile, DevOps and dynamic environment.
  • Promoting development standards, code reviews, mentoring and knowledge sharing.
  • Production support and troubleshooting.
  • Implementing tools and processes, handling performance, scale, availability, accuracy and monitoring.
  • Liaising with BAs to ensure that requirements are correctly interpreted and implemented.
  • Participating in regular planning and status meetings; input to the development process through involvement in Sprint reviews and retrospectives; input into system architecture and design.
  • Peer code reviews.

Requirements: Apache Hadoop, Scala, Apache Spark, YARN, Kafka, Hive, Python, ETL, SQL, REST API, Unix, Linux, data pipelines, Hadoop, Spark, Spark SQL, Git, GitHub, Ansible, Jenkins, Jira, Big Data, Airflow, Control-M, Cloud, design patterns, Kanban.

Additionally: training budget, private healthcare, flat structure, international projects, Multisport card, monthly remote work subsidy, psychological support, conferences, PPK option, annual performance-based bonus, integration budget, international environment, small teams, employee referral bonus, mentoring, workstation reimbursement, company share purchase plan, childcare support programme, bike parking, playroom, shower, canteen, free coffee, free beverages, free parking, in-house trainings, in-house hack days, no dress code, modern office, knowledge sharing, garden, massage chairs, kitchen.
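The responsibilities above mention automated testing of new and existing components. Purely for illustration (the component and its logic are hypothetical, not taken from the posting), a small self-tested transform in plain Python might look like:

```python
def dedupe_events(events):
    """Toy pipeline component: drop duplicate event IDs, keep first occurrence."""
    seen, out = set(), []
    for event in events:
        if event["id"] not in seen:
            seen.add(event["id"])
            out.append(event)
    return out

def test_dedupe_events():
    # Duplicates removed, original order preserved.
    assert dedupe_events([{"id": 1}, {"id": 2}, {"id": 1}]) == [{"id": 1}, {"id": 2}]
    # Empty input is a no-op.
    assert dedupe_events([]) == []

test_dedupe_events()  # silent on success, raises AssertionError on regression
```

In the Agile/DevOps setup the posting describes, tests like these would typically run in the CI pipeline (e.g. via Jenkins) on every change.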

  • Kraków, Lesser Poland, Poland · ITDS · Full time

    You're ideal for this role if you have: strong experience in PySpark, Scala, or similar data engineering languages; hands-on experience building production data pipelines using Hadoop, Spark, and Hive; knowledge of cloud platforms and migrating on-premise solutions to the cloud; experience with scheduling tools such as Airflow and workflow...

  • Data Engineer @ ABB

    1 day ago


    Kraków, Lesser Poland, Poland · ABB · Full time

    Advanced degree in Computer Science, Engineering, Data Science, or a related field (Master's preferred). Proven experience (preferably 3+ years) as a Data Engineer with demonstrated expertise in building production-grade data pipelines and hands-on experience with Microsoft Fabric (Data Factory, Lakehouse, Dataflows). Strong knowledge of ETL/ELT concepts, data...


  • Kraków, Lesser Poland, Poland · HSBC Technology Poland · Full time

    Strong experience with database technologies (SQL, NoSQL), data warehousing solutions, and big data technologies (Hadoop, Spark). Proficiency in programming languages such as Python, Java, or Scala. Experience with cloud platforms (AWS, Azure, Google Cloud) and their data services. Certifications in cloud platforms (AWS Certified Data Analytics, Google...

  • Data Engineer @ Antal

    6 days ago


    Kraków, Lesser Poland, Poland · Antal · Full time

    5+ years of IT experience, with 2+ years in software development using Big Data technologies, microservices, and event-driven cloud architectures. Hands-on experience with Apache NiFi, Kafka, Spark, Hive, HDFS, Oozie, SQL, Python, and Linux shell scripting. Strong database skills: at least one SQL database (Oracle, PostgreSQL, MySQL, etc.) and one NoSQL...


  • Kraków, Lesser Poland, Poland · HSBC Technology Poland · Full time

    What you need to have to succeed in this role: excellent experience in the Data Engineering Lifecycle. You will have created data pipelines which take data through all layers, from generation and ingestion to transformation and serving. Senior stakeholder management skills. Experience of modern software engineering principles and experience of creating well-tested...


  • Kraków, Lesser Poland, Poland · beBeeDataEngineering · Full time · 105,000 - 135,000

    Job Title: Big Data Expertise Engineer. Job Description: As a skilled data engineer, you will play a pivotal role in creating innovative solutions for managing and analyzing large datasets. Your primary focus will be on utilizing Scala and Spark to design and implement scalable data pipelines that cater to the needs of our organization. Key responsibilities...


  • Kraków, Lesser Poland, Poland · beBeeDataEngineering · Full time · 5,400,000 - 7,800,000

    Unlock your potential as a Data and Financial Engineering Expert in our Product Control department. Collaborate with cross-functional teams to design, build, and maintain automated solutions that enhance controls and analytical processes. Key Responsibilities: Develop innovative solutions using Python, SQL, and VBA to improve data quality and reduce manual...

  • Data Engineer @ Mindbox S.A.

    3 days ago


    Kraków, Lesser Poland, Poland · Mindbox S.A. · Full time

    Minimum 10 years of software development experience, including a minimum of 7 years of Python programming experience. Solid experience in Python, with knowledge of at least one Python web framework such as Django, Flask, etc. A design-thinking mindset and the ability to draw out the solution design. Experience of streaming data pipelines using PySpark, Apache Beam...

  • Data Engineer @ Mindbox S.A.

    1 week ago


    Kraków, Lesser Poland, Poland · Mindbox S.A. · Full time

    Minimum 5 years of overall IT experience, including 2+ years in software development with Big Data technologies, microservices, and cloud-based event-driven architectures. Strong hands-on expertise with Apache NiFi, Apache Kafka, Apache Spark, Apache Hive, HDFS, Apache Oozie, SQL, Python, Google SDK, REST APIs, and Linux shell scripting. Solid database...


  • Kraków, Lesser Poland, Poland · beBeeData · Full time · 800,000 - 1,000,000

    Cloud Data Engineer Position. We are looking for a skilled Cloud Data Engineer to join our team. The ideal candidate will have hands-on experience in designing, developing, testing, and deploying ETL/SQL pipelines connected to various on-prem and Cloud data sources. Key Responsibilities: Design and build Google Cloud data models and transformations in BigQuery...
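This last listing describes ETL/SQL pipelines that land in BigQuery. As a rough, hypothetical illustration only, the sketch below runs the same load-then-aggregate SQL shape against Python's built-in sqlite3 as a stand-in for a warehouse (table and column names are invented; BigQuery Standard SQL differs in dialect details):

```python
import sqlite3

def load_and_aggregate(rows):
    """Toy ETL: load (day, revenue) rows, then aggregate revenue per day.

    sqlite3 stands in for a cloud warehouse such as BigQuery; the idea
    (load raw records, then GROUP BY into a reporting model) is the same.
    """
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE raw_sales (day TEXT, revenue REAL)")
    con.executemany("INSERT INTO raw_sales VALUES (?, ?)", rows)
    cur = con.execute(
        "SELECT day, SUM(revenue) FROM raw_sales GROUP BY day ORDER BY day"
    )
    return cur.fetchall()

if __name__ == "__main__":
    print(load_and_aggregate(
        [("2024-01-01", 5.0), ("2024-01-01", 2.5), ("2024-01-02", 1.0)]
    ))  # [('2024-01-01', 7.5), ('2024-01-02', 1.0)]
```

In BigQuery itself, the `CREATE TABLE`/`GROUP BY` pair would typically be expressed as a scheduled query or a Data Factory/Airflow-orchestrated job rather than an in-process call.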