Senior Data Engineer @ hubQuest
6 days ago
What we expect:
- 5+ years of professional experience as a Data Engineer or Software Engineer in data-intensive environments
- Strong Python development skills, with a solid understanding of OOP, modular design, and testing (unit/integration)
- Experience with PySpark and distributed data processing frameworks
- Hands-on experience with the Azure data ecosystem, including Databricks, Data Factory, Synapse, and serverless compute
- Solid knowledge of SQL and database performance optimization
- Experience with CI/CD and DevOps practices for data pipelines (GitHub Actions, Azure DevOps, or similar)
- Proven ability to refactor complex systems and implement scalable, automated solutions
- Experience with data testing, validation, and observability frameworks
- Strong communication skills and the ability to work independently in a global, collaborative team environment
- Fluent English

Nice to have:
- Experience with dbt (Data Build Tool) for transforming and testing data in analytics pipelines
- Experience with Terraform or other Infrastructure as Code tools
- Familiarity with containerization and orchestration (Docker, Kubernetes)
- Understanding of data governance and metadata management principles
- Experience with multi-tenant or multi-market system design

We are a team of experts bringing together top talent in IT and analytics. Our mission is to build high-performing data and technology teams, whether from the ground up or by scaling existing ones, to help our partners become truly data-driven organizations. Currently, we are looking for a Senior Data Engineer to join the Global Analytics Unit, a global, centralized team driving data-driven decision-making and developing smart data products that power everyday operations across markets. The team’s essence is innovation: we foster a data-first mindset across all business areas, from sales and logistics to marketing and procurement.
As the team expands its analytics solutions globally, we’re seeking a hands-on engineer who combines strong software craftsmanship with a passion for building scalable, automated data systems.

Why join us? If you want to:
- Work on complex, large-scale data systems with global impact
- Build robust and scalable data pipelines using modern cloud-native architectures
- Contribute to innovative projects and see your ideas implemented
- Work in a diverse global team of top-tier engineers and data professionals
- Have the freedom to shape tools, technologies, and processes
- Operate in a culture that values autonomy, collaboration, and technical excellence
...then this role is for you.

We also offer:
- Flexible working hours and remote work options
- A relaxed, non-corporate environment with no unnecessary bureaucracy
- Private medical care and a Multisport card
- A modern office in central Warsaw with great transport links
- Access to global learning resources, certifications, and knowledge exchange

In short: we’re looking for a software-minded Data Engineer, someone who writes clean, testable Python, designs systems that scale globally, and loves automating everything that can be automated. You’ll have a real impact on the architecture and delivery of global analytics solutions used daily across multiple markets.
Responsibilities:
- Design, develop, and maintain end-to-end data pipelines and architectures for large-scale analytics solutions
- Refactor code and REST services to support dynamic OpCo deployments and multi-tenant scalability
- Develop zero-touch deployment pipelines to automate infrastructure and environment provisioning
- Implement data validation and testing frameworks ensuring reliability and accuracy across data flows
- Integrate new pipelines into a harmonized data execution portal
- Build and maintain serverless and Databricks-based data processing systems on Azure
- Design and optimize ETL/ELT workflows in Python (including PySpark)
- Implement Infrastructure as Code (IaC) for reproducible deployments using tools like Terraform
- Collaborate closely with Data Scientists, Architects, and Analysts to deliver production-grade data products
- Troubleshoot, monitor, and improve data pipelines to ensure performance and resilience

Requirements: Python, Azure, Data engineering, PySpark, Databricks, Azure Data, SQL, dbt, Terraform, Docker, Kubernetes
Tools: Agile, Scrum
Additionally: Sport subscription, Private healthcare, Training budget, Small teams, International projects, Free coffee, Bike parking, Free beverages, In-house trainings, In-house hack days, Modern office, No dress code
-
Senior Data Scientist @ hubQuest
6 days ago
Remote, Warsaw, Czech Republic · hubQuest · Full time
What we expect: 5+ years of professional experience in Data Science or ML Engineering, including production deployments. MSc or PhD in Computer Science, Statistics, Mathematics, Physics, or a related technical field. Strong Python programming skills, including software engineering practices (OOP, modular code design, testing). Solid experience with ML frameworks...
-
Lead GenAI Engineer
1 week ago
Remote, Warsaw, Czech Republic · hubQuest · Full time
What we’re looking for: 8+ years of software engineering experience, including 2+ years in a lead or architect-level role. Strong proficiency in C#, Python, and Azure services (App Services, Functions, Cognitive Services, Azure OpenAI). Proven, hands-on experience designing and implementing AI or GenAI solutions, such as LLM integrations, chatbots,...
-
Senior Azure Data Engineer @ Experis Polska
2 weeks ago
Warsaw, Czech Republic · Experis Polska · Full time
Tech Stack:
- Programming: Python, PySpark, SQL, SparkSQL, Bash
- Azure: Databricks, Data Factory, Delta Lake, Data Vault 2.0
- CI/CD: Azure DevOps, GitHub, Jenkins
- Orchestration: Airflow, Azure Data Factory
- Databases: SQL Server, Oracle, PostgreSQL, Vertica
- Cloud: Azure (expert), AWS (intermediate)
- Tools: FastAPI, REST APIs, Docker, Unity Catalog
Preferred...
-
Senior Data Engineer @ Link Group
6 days ago
Remote, Czech Republic · Link Group · Full time
Required Skills & Experience: 5–8 years of hands-on experience in data engineering or similar roles. Strong knowledge of AWS services such as S3, IAM, Redshift, SageMaker, Glue, Lambda, Step Functions, and CloudWatch. Practical experience with Databricks or similar platforms (e.g., Dataiku). Proficiency in Python or Java, SQL (preferably Redshift), Jenkins,...
-
Senior Data Engineer @ 1dea
6 days ago
Remote, Czech Republic · 1dea · Full time
Min. 5 years of relevant experience. Solid experience with AWS services (S3, IAM, Redshift, SageMaker, Glue, Lambda, Step Functions, CloudWatch). Experience with platforms like Databricks and Dataiku. Proficient in Python/Java and SQL (Redshift preferred); Jenkins, CloudFormation, Terraform, Git, Docker; 2–3 years of Spark/PySpark. Good communication and SDLC...
-
Senior Azure Data Engineer @ C&F S.A.
6 days ago
Remote, Warsaw, Czech Republic · C&F S.A. · Full time
What you will need: 4+ years of experience in Azure and 5+ years of industry experience in the domain of large-scale data management, visualization, and analytics. Hands-on knowledge of data services and technologies in Azure (for example Databricks, Data Lake, Synapse, Azure SQL, Azure Data Factory, Azure Data Explorer). Experience with...
-
Remote, Warsaw, Czech Republic · KMD Poland · Full time
Ideal candidate: Has 5+ years of commercial experience in implementing, developing, or maintaining data load systems (ETL/ELT). Demonstrates strong programming skills in Python, with a deep understanding of data-related challenges. Has hands-on experience with Apache Spark and Databricks. Is familiar with MSSQL databases. Has experience working...
-
Senior Data Engineer @ RemoDevs
2 weeks ago
Warsaw, Czech Republic · RemoDevs · Full time
3+ years of Python development experience, including Pandas. 5+ years writing complex SQL queries against RDBMSes. 5+ years of experience developing and deploying ETL pipelines using Airflow, Prefect, or similar tools. Experience with cloud-based data warehouses in environments such as RDS, Redshift, or Snowflake. Experience with data warehouse design:...
-
Remote, Czech Republic · INNOBO · Full time
To thrive and succeed, you are expected to have: Bachelor’s degree in computer science, engineering, or a related field, complemented by experience in data engineering; a master’s degree is preferred. Extensive experience with Git and managing version control in a collaborative environment. Proven track record of implementing and managing CI/CD pipelines...