Current jobs related to: Data Engineer @ SquareOne - Remote, Warszawa


  • Remote, Warszawa, Czech Republic Sunscrapers Full time

    What's important for us? At least 7 years of professional experience as a data engineer; an undergraduate or graduate degree in Computer Science, Engineering, Mathematics, or similar; excellent command of spoken and written English (at least C1); strong professional experience with Python and SQL; a strong understanding of data pipelines, ETL, ELT, etc.; hands-on...


  • Remote, Warszawa, Czech Republic Sunscrapers Full time

    What's important for us? At least 5 years of professional experience as a data engineer; an undergraduate or graduate degree in Computer Science, Engineering, Mathematics, or similar; excellent command of spoken and written English (at least C1); expertise in the AWS stack; experience with infrastructure-as-code tools such as Terraform; hands-on experience with...

  • AI/ML Engineer @

    2 days ago


    Remote, Czech Republic PAR Data Central Full time

    Position Overview: As an AI/ML Engineer, you will play a crucial role in enhancing our platform's capabilities by developing and refining machine learning models that drive accurate forecasting, event analysis, data-driven decision-making, and more. You will collaborate closely with product owners and engineering teams to implement scalable AI solutions that...

  • Data Engineer @

    2 weeks ago


    Remote, Czech Republic Link Group Full time

    3+ years of experience as a Data Engineer or similar role; strong knowledge of SQL and data modeling principles; experience with Python or Scala for data processing; hands-on experience with cloud platforms (ideally AWS, but Azure/GCP also valuable); familiarity with tools like Apache Spark, Airflow, Kafka, or similar; experience with data lakes, data warehouses, or...

  • Data Engineer @

    1 week ago


    Remote, Wroclaw, Warszawa, Czech Republic Tooploox Full time

    Experience and skills you need to join us: BS/BA in Data Engineering/Computer Science + 2 years of experience. Extensive expertise with Apache Spark (especially PySpark), Hadoop, and Apache Hive, with a proven track record of optimizing large-scale data systems. Strong programming skills in Python. Comprehensive understanding of database concepts, including...

  • Lead Data Engineer @

    2 weeks ago


    Remote, Czech Republic Link Group Full time

    6+ years of experience in Data Engineering; proven experience in a technical leadership or lead engineer role; expert in Python, SQL, and cloud-native data processing tools; strong knowledge of AWS (preferred), GCP or Azure also welcome; experience with orchestration tools (Airflow, Dagster, etc.); proficiency with modern data platforms like Databricks, Snowflake,...

  • Data Engineer @

    3 days ago


    Remote, Czech Republic RITS Group Full time

    3+ years of experience in a Data Platform Engineering role; strong software engineering experience and work with Python; strong experience working with SQL and databases/engines such as MySQL, PostgreSQL, SQL Server, Snowflake, Redshift, Presto, etc.; experience building ETL and stream processing pipelines using Kafka, Spark, Flink, Airflow/Prefect,...
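
    For orientation only: a minimal sketch of the kind of batch ETL orchestration (Airflow is one of the tools named above) such a role involves. The DAG id, task names, and schedule are hypothetical placeholders, not taken from the listing.

    # Minimal, illustrative Airflow 2.x DAG; all names below are hypothetical.
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        # Pull raw records from a source system (placeholder).
        print("extracting")

    def transform():
        # Clean and reshape the extracted data (placeholder).
        print("transforming")

    def load():
        # Write the transformed data to the warehouse (placeholder).
        print("loading")

    with DAG(
        dag_id="example_daily_etl",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        t_extract = PythonOperator(task_id="extract", python_callable=extract)
        t_transform = PythonOperator(task_id="transform", python_callable=transform)
        t_load = PythonOperator(task_id="load", python_callable=load)

        # Run the three stages strictly in sequence.
        t_extract >> t_transform >> t_load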

  • Data Engineer @

    2 weeks ago


    Remote, Czech Republic AVENGA (Employment Agency, KRAZ reg. no. 8448) Full time

    Minimum of 5 years' experience in a Data Engineer role; advanced working SQL knowledge and experience with relational databases, as well as familiarity with one or more cloud-based data warehouses such as Snowflake, Redshift, or BigQuery; knowledge of Python and shell scripting; experience with cloud: AWS or Azure; experience with data modelling approaches such as Kimball...


  • Remote, Czech Republic Ework Group Full time

    A master's degree in Data, Computer Science, Information Management, or a related field. 5+ years of experience in software and data engineering, with relevant years in Data Pipeline Engineering and Integration. Strong proficiency in Databricks and/or Fabric and a deep understanding of their core functions and tools to optimize data workflows, date...

  • Data Engineer

    3 hours ago


    Remote, Czech Republic Link Group Full time

    Must-Have Qualifications: At least 3 years of experience in data engineering. Strong expertise in one or more cloud platforms: AWS, GCP, or Azure. Proficiency in programming languages like Python, SQL, or Java/Scala. Hands-on experience with big data tools such as Hadoop, Spark, or Kafka. Experience with data warehouses like Snowflake, BigQuery, or...

Data Engineer @

2 weeks ago


Remote, Warszawa, Czech Republic SquareOne Full time
  • Experience: 4+ years.
  • Expertise in Python – advanced knowledge of Python, including libraries like pandas, numpy, etc., and data processing tools.
  • Experience with PySpark – hands-on experience with Apache Spark and its integration with large-scale data processing systems.
  • Experience with ETL and data pipelines – designing, implementing, and optimizing ETL processes in cloud or on-premises environments.
  • Experience with cloud platforms: Azure.
  • SQL proficiency – solid experience working with relational databases (e.g., PostgreSQL, MySQL, SQL Server).
  • Experience with Jupyter notebooks – using Jupyter for prototyping and testing data transformation processes.
  • Understanding of data architecture (Data Fabric) – knowledge of building data infrastructure and integrating multiple data sources across the organization.
  • Analytical skills – ability to solve complex problems and optimize data processing workflows.
  • Collaboration and communication skills – ability to work effectively in a cross-functional team and communicate with various stakeholders.

Responsibilities:
  • Design and build automated data pipelines to ingest, process, and transform data from various sources.
  • Leverage Python and PySpark for data processing and implementing transformation algorithms.
  • Work with Jupyter notebooks to document and test data processing workflows.
  • Optimize and ensure the scalability of ETL (Extract, Transform, Load) processes.
  • Collaborate with other data engineers, analysts, and architects to build a robust data architecture (Data Fabric).
  • Ensure data quality, availability, and integrity throughout the processing pipeline.
  • Work with cloud-based tools and databases on Azure.
  • Minimize processing times and ensure timely delivery of data to end users.

Requirements: Python, PySpark, Azure, Jupyter Notebook, Azure Synapse
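
For orientation only: a minimal sketch of the kind of PySpark ETL pipeline these responsibilities describe, assuming a JSON landing zone and a Parquet curated zone on Azure storage. The storage account, container, and column names are hypothetical placeholders, not part of the listing.

    # Minimal, illustrative PySpark ETL sketch; all paths and column names are
    # hypothetical placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("example-etl").getOrCreate()

    # Extract: read raw events from a hypothetical landing zone on Azure storage.
    raw = spark.read.json("abfss://landing@examplestorage.dfs.core.windows.net/events/")

    # Transform: deduplicate, derive an event date, and aggregate per day and type.
    cleaned = (
        raw.dropDuplicates(["event_id"])
           .withColumn("event_date", F.to_date("event_timestamp"))
           .filter(F.col("event_date").isNotNull())
    )
    daily_counts = (
        cleaned.groupBy("event_date", "event_type")
               .agg(F.count("*").alias("events"))
    )

    # Load: write partitioned Parquet to a hypothetical curated zone; a Synapse or
    # Delta table would be an equally plausible target.
    (daily_counts.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("abfss://curated@examplestorage.dfs.core.windows.net/daily_event_counts/"))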