Current jobs related to Senior Data Engineer @ - Remote Warsaw - RemoDevs


  • Remote, Warsaw, Czech Republic · Square One Resources · Full time

    Bachelor's/Master's degree in Computer Science, Information Systems, or a related field. Strong hands-on experience with Microsoft Fabric components: Data Factory, Lakehouse / OneLake, Synapse Data Engineering, Power BI. Experience with data modeling (star/snowflake) and performance tuning in Power BI. Deep understanding of modern data architecture patterns including...


  • Remote, Czech Republic · Ework Group · Full time

    A master's degree in Data, Computer Science, Information Management, or a related field. 5+ years of experience in software and data engineering, with relevant years in data pipeline engineering and integration. Strong proficiency in Databricks and/or Fabric and a deep understanding of their core functions and tools to optimize data workflows, data...


  • Remote, Warsaw, Czech Republic · C&F S.A. · Full time

    What you will need: 4+ years of experience in Azure and 5+ years of industrial experience in large-scale data management, visualization, and analytics; hands-on knowledge of the following Azure data services and technologies (for example Databricks, Data Lake, Synapse, Azure SQL, Azure Data Factory, Azure Data Explorer); good understanding of Azure...

  • Senior Data Engineer @

    13 hours ago


    Remote, Czech Republic · SquareOne · Full time

    What You Bring? We're looking for experienced data engineers ready to take ownership and elevate enterprise-level data systems: 4+ years in data engineering and cloud data platforms; proven expertise with Snowflake, Python, and SQL; experience with data lakes, batch/stream processing, and cloud-native architectures; strong understanding of data...


  • Remote, Kraków, Zagreb, Wrocław, Split, Warsaw, Czech Republic · ELEKS · Full time

    REQUIREMENTS: Experience in data engineering, SQL, ETL (data validation, data mapping, exception handling), 3+ years; hands-on experience with Databricks, 2+ years; experience with Python; experience with AWS (e.g. S3, Redshift, Athena, Glue, Lambda, etc.); knowledge of the energy industry (e.g. energy trading, utilities, power systems, etc.) would be a...


  • Remote, Czech Republic · Matrix Global Services · Full time

    3+ years of experience in large-scale, distributed, server-side backend development. Extensive experience in stream and batch big data pipeline processing using Apache Spark. Experience with Linux, Docker, and Kubernetes. Experience working with cloud providers (e.g., AWS, GCP). Strong experience with event streaming platforms like Kafka or its alternatives, such...


  • Remote, Czech Republic · Matrix Global Services · Full time

    At least 2 years' experience with Java. Experience in building, optimizing, and maintaining large-scale big data pipelines using popular open-source frameworks (Kafka, Spark, Hive, Presto, Airflow, etc.). Experience with SQL/NoSQL/key-value DBs. Hands-on experience with Spring and Spring Boot. Experience with AWS cloud services such as EMR, Aurora, Snowflake, S3, Athena,...


  • Remote, Wrocław, Gdańsk, Rzeszów, Czech Republic · beBeeDataEngineering · Full time

    Job Title: Senior Cloud Data Engineer. We are seeking a highly skilled...


  • Remote, Warsaw, Czech Republic · SquareOne · Full time

    Proven experience as a Data Analyst with a focus on quality assurance of reports. Strong proficiency in SQL. Strong hands-on experience with data visualization tools such as Tableau. Excellent problem-solving skills and attention to detail. Strong written and verbal communication skills. Senior Data Analyst with QA experience: we are looking for a Senior Data...


  • Remote, Warsaw, Czech Republic · Sunscrapers · Full time

    What's important for us? At least 7 years of professional experience as a data engineer; an undergraduate or graduate degree in Computer Science, Engineering, Mathematics, or similar; excellent command of spoken and written English (at least C1); strong professional experience with Python and SQL; strong understanding of data pipelines, ETL, ELT, etc. Hands-on...

Senior Data Engineer @

2 weeks ago


Remote, Warsaw, Czech Republic · RemoDevs · Full time

Requirements

  • 3+ years of Python coding experience.
  • 5+ years of SQL Server-based development of large datasets.
  • 5+ years of experience developing and deploying ETL pipelines using Databricks PySpark.
  • Experience with any cloud data warehouse, such as Synapse, ADF, Redshift, or Snowflake.
  • Experience in data warehousing: OLTP, OLAP, dimensions, facts, and data modeling.
  • Previous experience leading an enterprise-wide cloud data platform migration, with strong architectural and design skills.
  • Experience with cloud-based data architectures, messaging, and analytics.
  • Cloud certification(s).
  • Add-ons: any experience with Airflow, AWS Lambda, AWS Glue, and Step Functions is a plus.

Please note that this is a 100% remote role, but we will only consider candidates from Warsaw and its surroundings.

About:

We are seeking a highly motivated and self-driven data engineer for our growing data team who is able to work and deliver both independently and as part of a team. In this role, you will play a crucial part in designing, building, and maintaining our ETL infrastructure and data pipelines.

This position is for a Cloud Data Engineer with a background in Python, DBT, SQL, and data warehousing for enterprise-level systems.

Responsibilities:

  • Adhere to standard coding principles and standards.
  • Build and optimize data pipelines for efficient data ingestion, transformation, and loading from various sources while ensuring data quality and integrity.
  • Design, develop, and deploy Python scripts and ETL processes in the ADF environment to process and analyze varying volumes of data.
  • Bring experience with DWH, data integration, cloud, design, and data modeling.
  • Develop programs in Python and SQL proficiently.
  • Apply data warehouse dimensional data modeling.
  • Work with event-based/streaming technologies to ingest and process data.
  • Work with structured, semi-structured, and unstructured data.
  • Optimize ETL jobs for performance and scalability to handle big data workloads.
  • Monitor and troubleshoot ADF jobs; identify and resolve issues or bottlenecks.
  • Implement best practices for data management, security, and governance within the Databricks environment; experience designing and developing Enterprise Data Warehouse solutions.
  • Write SQL queries and programs proficiently, including stored procedures and reverse-engineering existing processes.
  • Perform code reviews to ensure fit to requirements, optimal execution patterns, and adherence to established standards.
  • Check in, check out, peer-review, and merge PRs into the Git repo.
  • Deploy packages and migrate code to stage and prod environments via CI/CD pipelines.

Requirements: ETL, data pipelines, cloud, Python, dbt, SQL, ADF, DWH, data modeling, data warehouse, big data, data management, security, Databricks, Git, CI/CD pipelines, Redshift, Snowflake, OLAP, AWS, AWS Glue, Airflow, AWS Lambda.

Additionally: Private healthcare.
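The ingest → transform → load loop with data-quality checks that the role centers on can be sketched with nothing but the Python standard library. This is an illustrative sketch only: `sqlite3` stands in for a real warehouse (Synapse/Redshift/Snowflake), and every table and column name here is hypothetical, not taken from the posting.

```python
import csv
import io
import sqlite3

# Hypothetical raw feed; a real pipeline would pull from blob storage,
# a queue, or an API instead of an inline string.
RAW_CSV = """order_id,amount,country
1,19.99,PL
2,-5.00,CZ
3,42.50,PL
"""

def extract(text):
    """Extract: parse raw CSV rows into dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: cast types and enforce a simple data-quality rule
    (amounts must be non-negative); failing rows are quarantined."""
    good, bad = [], []
    for row in rows:
        amount = float(row["amount"])
        target = good if amount >= 0 else bad
        target.append((int(row["order_id"]), amount, row["country"]))
    return good, bad

def load(conn, rows):
    """Load: idempotent upsert keyed on order_id, so re-running the
    pipeline does not duplicate facts."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS fact_orders ("
        "order_id INTEGER PRIMARY KEY, amount REAL, country TEXT)"
    )
    conn.executemany(
        "INSERT OR REPLACE INTO fact_orders VALUES (?, ?, ?)", rows
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
good, bad = transform(extract(RAW_CSV))
load(conn, good)
loaded = conn.execute("SELECT COUNT(*) FROM fact_orders").fetchone()[0]
```

The same extract/transform/load split maps onto the stack the ad names: extraction via ADF or AWS Glue, transformation in Databricks PySpark or dbt, and loading into a warehouse, with the quarantine list feeding the monitoring and troubleshooting duties.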