Data Engineer
2 weeks ago
What you’ll need to succeed in this role:
At least 3 years of commercial experience implementing, developing, or maintaining Big Data systems.
Strong programming skills in Python: writing clean code, OOP design.
Strong SQL skills, including performance tuning, query optimization, and experience with data warehousing solutions.
Experience in designing and implementing data governance and data management processes.
Deep expertise in Big Data technologies, including Databricks, Spark, Apache Airflow, and other modern data orchestration and transformation tools.
Experience implementing and deploying solutions in cloud environments (with a preference for Azure).
Knowledge of how to build and deploy Power BI reports and dashboards for data visualization.
Excellent understanding of dimensional data modeling techniques.
Consulting experience and the ability to guide clients through architectural decisions, technology selection, and best practices.
Ability to work independently and take ownership of project deliverables.
Master’s or Ph.D. in Computer Science, Data Science, Mathematics, Physics, or a related field.

Addepto is a leading AI consulting and data engineering company that builds scalable, ROI-focused AI solutions for some of the world’s largest enterprises and pioneering startups, including Rolls-Royce, Continental, Porsche, ABB, and WGU. With our exclusive focus on Artificial Intelligence and Big Data, we help organizations unlock the full potential of their data through systems designed for measurable business impact and long-term growth. Beyond client projects, we have developed our own product offerings, born from real-life client insights and challenges. We also actively release open-source solutions to the community, transforming practical experience into tools that benefit the broader AI ecosystem. This commitment to scalable innovation, proven ROI delivery, and knowledge sharing has earned us recognition by Forbes as one of the top 10 AI consulting companies worldwide.

As a Data Engineer, you will have the opportunity to work with a team of technology experts on challenging projects across various industries, leveraging cutting-edge technologies. Here are some of the projects we are seeking talented individuals to join:

Design and development of a universal data platform for global aerospace companies. This Azure- and Databricks-powered initiative combines diverse enterprise and public data sources. The platform is at an early stage of development, covering architecture and process design, with freedom in technology selection.

Data platform transformation for an energy management association. This project addresses critical data management challenges, boosting user adoption, performance, and data integrity. The team is implementing a comprehensive data catalog, leveraging Databricks and Apache Spark/PySpark, for simplified data access and governance. Secure integration solutions and enhanced data quality monitoring, using Delta Live Tables tests, have established trust in the platform. The intermediate result is a user-friendly, secure, and data-driven platform that serves as a basis for further development of ML components.

Design of data transformation and downstream DataOps pipelines for a global car manufacturer. This project aims to build a data processing system for both real-time streaming and batch data. We’ll handle data for business uses like process monitoring, analysis, and reporting, while also exploring LLMs for chatbots and data analysis. Key tasks include data cleaning, normalization, and optimizing the data model for performance and accuracy (a minimal sketch of this streaming-plus-batch pattern follows the listing).

Discover our perks and benefits:
Work in a supportive team of passionate enthusiasts of AI & Big Data.
Engage with top-tier global enterprises and cutting-edge startups on international projects.
Enjoy flexible work arrangements, allowing you to work remotely or from modern offices and coworking spaces.
Accelerate your professional growth through career paths, knowledge-sharing initiatives, language classes, and sponsored training or conferences, including a partnership with Databricks, which offers industry-leading training materials and certifications.
Choose from various employment options: B2B, employment contracts, or contracts of mandate.
Make use of 20 fully paid days off, available to B2B contractors and individuals under contracts of mandate.
Participate in team-building events and utilize the integration budget.
Celebrate work anniversaries, birthdays, and milestones.
Access medical and sports packages, eye care, and well-being support services, including psychotherapy and coaching.
Get full work equipment for optimal productivity, including a laptop and other necessary devices.
With our backing, you can boost your personal brand by speaking at conferences, writing for our blog, or participating in meetups.
Experience a smooth onboarding with a dedicated buddy, and start your journey in our friendly, supportive, and autonomous culture.

Your responsibilities:
Design and optimize scalable data processing pipelines for both streaming and batch workloads using Big Data technologies such as Databricks, Apache Airflow, and Dagster.
Architect and implement end-to-end data platforms, ensuring high availability, performance, and reliability.
Lead the development of CI/CD and MLOps processes to automate deployments, monitoring, and model lifecycle management.
Develop and maintain applications for aggregating, processing, and analyzing data from diverse sources, ensuring efficiency and scalability.
Collaborate with Data Science teams on Machine Learning projects, including text/image analysis, feature engineering, and predictive model deployment.
Design and manage complex data transformations using Databricks, dbt, and Apache Airflow, ensuring data integrity and consistency.
Translate business requirements into scalable and efficient technical solutions while ensuring optimal performance and data quality.
Ensure data security, compliance, and governance best practices are followed across all data pipelines.

Requirements: Python, SQL, ETL, Azure, Databricks, Spark, Docker, CI/CD, Kubernetes, Kafka, Power BI, Airflow, Dagster, dbt.
Tools: Jira, Confluence, Wiki, GitHub, Agile, Scrum, Kanban.
Additionally: Private healthcare, Multisport card, Referral bonus, MyBenefit cafeteria, International projects, Flat structure, Paid leave, Training budget, Language classes, Team building events, Small teams, Flexible form of employment, Flexible working hours and remote work possibility, Free coffee, Startup atmosphere, No dress code, In-house trainings.
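The car-manufacturer project above calls for one system handling both real-time streaming and batch data. As a rough illustration only, here is a minimal PySpark sketch of that pattern, assuming Delta Lake and the Spark Kafka connector are available on the cluster; the topic name, schema, and paths are hypothetical placeholders, not details from the listing.

```python
# Minimal sketch of a combined streaming + batch pipeline in PySpark.
# All names (broker, topic, schema, paths) are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import (DoubleType, StringType, StructField,
                               StructType, TimestampType)

spark = SparkSession.builder.appName("telemetry-pipeline").getOrCreate()

# Assumed schema for incoming sensor events.
event_schema = StructType([
    StructField("machine_id", StringType()),
    StructField("metric", StringType()),
    StructField("value", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Streaming leg: read events from Kafka, parse JSON, append to a raw Delta table.
stream = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # placeholder
          .option("subscribe", "factory-telemetry")          # placeholder topic
          .load()
          .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
          .select("e.*"))

(stream.writeStream
 .format("delta")
 .option("checkpointLocation", "/tmp/checkpoints/telemetry")
 .outputMode("append")
 .start("/tmp/tables/telemetry_raw"))

# Batch leg: daily aggregation over the same raw table for reporting
# (in practice scheduled separately, e.g., from Airflow).
daily = (spark.read.format("delta").load("/tmp/tables/telemetry_raw")
         .groupBy("machine_id", F.window("event_time", "1 day"))
         .agg(F.avg("value").alias("avg_value")))
daily.write.format("delta").mode("overwrite").save("/tmp/tables/telemetry_daily")
```

Landing the stream in a raw Delta table and aggregating it in a separate batch job keeps the two legs independently schedulable and testable, which matches the monitoring-and-reporting uses the listing describes.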
-
Senior Data Engineer
2 weeks ago
Remote, Warsaw, Wrocław, Białystok, Kraków, Gdańsk, Czech Republic · Addepto · Full time
What you’ll need to succeed in this role: At least 5 years of commercial experience implementing, developing, or maintaining Big Data systems. Strong programming skills in Python: writing clean code, OOP design. Strong SQL skills, including performance tuning, query optimization, and experience with data warehousing solutions. Experience in...
-
Junior Data Engineer
2 weeks ago
Remote, Warsaw, Wrocław, Białystok, Kraków, Gdańsk, Czech Republic · Addepto · Full time
What you’ll need to succeed in this role: At least 1 year of proven commercial experience developing or maintaining Big Data systems. Hands-on experience with Big Data technologies, including Databricks, Apache Spark, Airflow, and dbt. Strong programming skills in Python: writing clean code, OOP design. Experience in designing and implementing data...
-
Data Scientist
3 days ago
Remote, Warszawa, Gdańsk, Wrocław, Białystok, Kraków, Czech Republic · Addepto · Full time
What you’ll need to succeed in this role: At least 3 years of proven commercial experience designing and implementing scalable AI solutions (Machine Learning, Predictive Modeling, Optimization, NLP, Computer Vision, GenAI). Proficiency in developing ML algorithms from scratch to production deployment. Strong programming skills in Python: writing...
-
Middle/Senior Data Engineer
2 weeks ago
Warsaw, Białystok, Gdańsk, Łódź, Wrocław, Czech Republic · Godel Technologies Europe · Full time
Ideally you have: 3+ years in a Data Engineering role. Strong understanding of data modeling, data warehousing, and ETL/ELT processes. Solid programming skills in Python. Knowledge of data warehousing tools: Snowflake, Redshift. Strong knowledge of SQL and database optimization. Experience in building and maintaining data pipelines using Fivetran, Matillion, dbt...
-
Middle Data Engineer @ N-iX
5 days ago
Remote, Kraków, Wrocław, Warsaw, Czech Republic · N-iX · Full time
Must-Have Technologies: Experience with cloud data warehouse technologies, including Snowflake and AWS S3. Solid SQL skills; experience migrating legacy environments to cloud platforms. Familiarity with ETL frameworks and tools (preferably SSIS, dbt). Proficiency in Python for data pipeline development and automation. Strong documentation and...
-
Senior Data Engineer
2 weeks ago
Remote, Warszawa, Wrocław, Białystok, Kraków, Gdańsk, Czech Republic · Addepto · Full time
What you’ll need to succeed in this role: At least 5 years of commercial experience implementing, developing, or maintaining Big Data systems, data governance, and data management processes. Strong programming skills in Python (or Java/Scala): writing clean code, OOP design. Hands-on with Big Data technologies like Spark, Cloudera Data Platform,...
-
Data Engineer
2 weeks ago
Remote, Warszawa, Wrocław, Białystok, Kraków, Gdańsk, Czech Republic · Addepto · Full time
What you’ll need to succeed in this role: At least 3 years of commercial experience implementing, developing, or maintaining Big Data systems, data governance, and data management processes. Strong programming skills in Python (or Java/Scala): writing clean code, OOP design. Hands-on with Big Data technologies like Spark, Cloudera Data Platform,...
-
Data Engineer @ Godel Technologies Europe
22 hours ago
Białystok, Warszawa, Gdańsk, Łódź, Wrocław, Czech Republic · Godel Technologies Europe · Full time
Ideally you have: 3+ years in a Data Engineering role. Solid Python programming skills for building and maintaining data pipelines. Advanced SQL skills, including query optimization and performance tuning. Experience with ETL/ELT tools and data orchestration frameworks like Apache Airflow, dbt. Strong understanding of data modeling principles (dimensional...
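This listing pairs Apache Airflow with dbt for ETL/ELT orchestration. As a minimal sketch of what that combination often looks like in practice, assuming a recent Airflow 2.x (the `schedule` argument) and the dbt CLI installed on the worker; the DAG id, schedule, task logic, and project path are invented for illustration.

```python
# Minimal Airflow DAG sketch: an extract step followed by a dbt transformation.
# The dbt project path, schedule, and extraction logic are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator


def extract_to_warehouse(**context):
    # Placeholder for an extraction step (e.g., a Fivetran sync or custom loader).
    print("extracting source data for", context["ds"])


with DAG(
    dag_id="daily_elt",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(
        task_id="extract",
        python_callable=extract_to_warehouse,
    )

    # Run dbt models against the warehouse once raw data has landed.
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/analytics",  # assumed path
    )

    extract >> dbt_run
```

The extract task stands in for whatever loader lands raw data; dbt then transforms it inside the warehouse, which is the ELT split these requirements describe.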
-
Senior Data Engineer @ hubQuest
5 days ago
Remote, Warsaw, Czech Republic · hubQuest · Full time
What we expect: 5+ years of professional experience as a Data Engineer or Software Engineer in data-intensive environments. Strong Python development skills, with a solid understanding of OOP, modular design, and testing (unit/integration). Experience with PySpark and distributed data processing frameworks. Hands-on experience with the Azure Data ecosystem,...
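The listing above emphasizes unit and integration testing alongside PySpark. As a sketch of what unit-testing a PySpark transformation can look like with pytest, with the transformation, fixture, and data invented purely for illustration:

```python
# Sketch: unit-testing a small PySpark transformation with pytest.
# The transformation and test data are invented for illustration.
import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.window import Window


def deduplicate_latest(df, key_col, ts_col):
    """Keep only the most recent row per key, a common cleaning step."""
    w = Window.partitionBy(key_col).orderBy(F.col(ts_col).desc())
    return (df.withColumn("_rn", F.row_number().over(w))
              .filter(F.col("_rn") == 1)
              .drop("_rn"))


@pytest.fixture(scope="session")
def spark():
    # Local single-threaded session is enough for fast unit tests.
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()


def test_deduplicate_latest_keeps_newest_row(spark):
    df = spark.createDataFrame(
        [("a", 1, "old"), ("a", 2, "new"), ("b", 1, "only")],
        ["id", "ts", "payload"],
    )
    result = {r["id"]: r["payload"]
              for r in deduplicate_latest(df, "id", "ts").collect()}
    assert result == {"a": "new", "b": "only"}
```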
-
Azure Data Engineer Tech Lead @ Lingaro
5 days ago
Remote, Warsaw, Czech Republic · Lingaro · Full time
Minimum of 5 years of experience in data engineering or a related field. Strong technical skills in data engineering, including proficiency in programming languages such as Python, SQL, and Scala. Familiarity with the Azure cloud platform and services, and experience implementing data solutions in a cloud environment. Expertise in working with various data...