Current jobs related to Big Data Engineer – Remote, Łódź – HARMAN Connected Services

  • Big Data Engineer

    6 hours ago


    Remote, Czech Republic Link Group Full time

    Must-Have Qualifications: At least 3+ years of experience in big data engineering. Proficiency in Scala and experience with Apache Spark. Strong understanding of distributed data processing and frameworks like Hadoop. Experience with message brokers like Kafka. Hands-on experience with SQL/NoSQL databases. Familiarity with version control tools like Git. Solid...


  • Remote, Czech Republic Link Group Full time

    Core Requirements: Proven experience in Python backend development with a focus on data-heavy applications. Hands-on expertise with Big Data tools such as Apache Spark, Databricks, etc. Experience with data orchestration tools like Airflow. Working knowledge of Azure Cloud services. Familiarity with Docker and containerized application deployment. Ability to write...


  • Remote, Łódź, Czech Republic HARMAN Connected Services Full time

    What You Need to Be Successful: At least 5 years of commercial experience developing in Python, Java, or Scala. Proficiency in working with big data technologies (e.g., Hadoop, Spark, and MapReduce). Knowledge of database (SQL/NoSQL) technologies (Snowflake, PostgreSQL, MySQL, DynamoDB). Experience in working with Kubernetes and stream data processing frameworks...


  • Remote, Czech Republic beBee Careers Full time

    Senior Data Engineer Role Overview: This senior data engineer position involves working on the creation of big data engineering pipelines, with a focus on streamlining data processing and analysis. Proficiency in Python, SQL, and dbt (data build tool) is essential for this role. Experience in developing data solutions on AWS cloud platforms and utilizing Apache...


  • Remote, Czech Republic SoftServe Full time

    IF YOU ARE: A Big Data engineer focused on designing and building scalable data pipelines. Well-versed in batch and/or streaming data processing. Proficient in SQL and Python. Experienced in data engineering on Google Cloud Platform (GCP). Skilled in tools like Apache Spark (GCP Dataproc), Cloud Dataflow, or Apache Beam. Familiar with Apache Airflow, Cloud Composer or...


  • Remote, Kraków, Poznań, Szczecin, Czech Republic beBee Careers Full time

    As a Big Data Engineer, you will take care of the design and architecture of data platforms.


  • Remote, Czech Republic Link Group Full time

    ✅ Requirements: 5+ years of experience in data engineering, including technical leadership responsibilities. Strong hands-on expertise in Python and Big Data tools (e.g., Spark, Databricks). Experience working with Airflow or equivalent data orchestration frameworks. Proficiency in Azure Cloud and containerization using Docker. Excellent team collaboration...

  • Senior Data Engineer

    15 hours ago


    Remote, Łódź, Czech Republic beBee Careers Full time

    About the Role: We seek a highly skilled and motivated Big Data Engineer to join our growing team. As a Big Data Engineer, you will contribute to the development and buildout of components of our customer's advertising platform, which brings sponsored content to millions of media-enabled devices worldwide. The expertise you bring to the team will play a crucial...


  • Remote, Czech Republic beBee Careers Full time

    Job Title: Senior Big Data Engineer. As a key member of our team, you will be responsible for designing and implementing big data platforms. You will also collaborate closely with the Data Science team to set up and automate Data Science models/algorithms for production use. You will have the opportunity to work on various projects that involve building,...


  • Remote, Czech Republic SoftServe Full time

    IF YOU ARE: A seasoned professional focused on data pipeline creation and comfortable with batch and streaming processing. Used to leading a team, or consider yourself ready to do so. Experienced in Python and SQL. Skilled in developing data solutions within AWS (additional cloud experience is a plus). Confident in Apache Spark, Databricks, Flink,...

Big Data Engineer @ HARMAN Connected Services

2 weeks ago


Remote, Łódź, Czech Republic HARMAN Connected Services Full time

What You Need to Be Successful

  • At least 5 years of commercial experience developing in Python, Java, or Scala
  • Proficient in working with big data technologies (e.g., Hadoop, Spark and MapReduce)
  • Knowledge of database (SQL/NoSQL) technologies (Snowflake, PostgreSQL, MySQL, DynamoDB)
  • Experience in working with Kubernetes and stream data processing frameworks (Flink, Apache Ignite)
  • Experience in optimizing SQL queries
  • Experience in working with large datasets
  • Hands-on experience with orchestration tools like Airflow or similar
  • Experience in AWS
  • Knowledge of Design Patterns
  • Knowledge of version control system: Git
  • Proficiency in English, sufficient to communicate and to understand technical documentation
  • University degree in Computer Science, Telecommunications, or a similar field

Bonus Points if You Have

  • Experience with Go (Golang)
  • Experience with GraphQL
  • Experience in usage and administration of Amazon Web Services
  • Experience in developing microservices/serverless solutions
  • Knowledge of container technologies like Docker and Kubernetes

A Career at HARMAN TECH Services (HTS)

We're a global, multi-disciplinary team that's putting the innovative power of technology to work and transforming tomorrow. At HTS, you solve challenges by creating innovative solutions.

  • Combine the physical and digital, making technology a more dynamic force to solve challenges and serve humanity's needs
  • Work at the convergence of cross-channel UX, cloud, insightful data, IoT, and mobility
  • Empower companies to create new digital business models, enter new markets, and improve customer experiences

About the Role

We seek a highly skilled and motivated Big Data Engineer to join our growing team. As a Big Data Engineer, you will contribute to the development and buildout of components of our customer's advertising platform, which brings sponsored content to millions of media-enabled devices worldwide.

The expertise you bring to the team will play a crucial role in building modern, well-performing services that we will use to deliver targeted ads to our users. On a daily basis, you will collaborate with other professionals on the design, implementation, and review of source code, as well as on delivering proper documentation and quality.

 

About HARMAN: Where Innovation Unleashes Next-Level Technology

Ever since the 1920s, we've been amplifying the sense of sound. Today, that legacy endures, with integrated technology platforms that make the world smarter, safer, and more connected.

Across automotive, lifestyle, and tech services, we create innovative technologies that turn ordinary moments into extraordinary experiences. Our renowned automotive and lifestyle solutions can be found everywhere, from the music we play in our cars and homes to venues that feature today's most sought-after performers, while our tech services serve humanity by addressing the world's ever-evolving needs and demands. Marketing our award-winning portfolio under 16 iconic brands, such as JBL, Mark Levinson, and Revel, we set ourselves apart by exceeding the highest engineering and design standards for our customers, our partners and each other.

If you're ready to innovate and do work that makes a lasting impact, join our talent community today.

Your responsibilities:

  • Collaborate with data scientists, product managers, and other engineers to refine and improve data processing systems
  • Design and maintain scalable and optimized data pipelines for efficient collection, processing, and storage of data
  • Collaborate with data scientists, product managers, and other engineers to refine and improve attribution methodologies
  • Participate in R&D projects in the area of Cloud Services
  • Establish and follow best coding practices
  • Work closely with stakeholders to understand and translate business requirements into technical solutions
  • Conduct A/B testing and performance analysis to validate and iterate on attribution models
  • Ensure software quality: create and maintain unit, integration, and functional tests; participate in code review
  • Report task progress

Requirements: Data pipelines, Python, Java, Scala, Hadoop, Spark, SQL, NoSQL, Snowflake, PostgreSQL, MySQL, Kubernetes, Flink, Airflow, AWS, Design Patterns, Version control system, Golang, GraphQL, Amazon Web Services, Docker

Additionally: Remote work, Flexible working hours, Competitive compensation, Private healthcare, Flat structure, Sport subscription, Training budget, International projects, Employee referral bonus, Employee discounts on HARMAN products (JBL), Free coffee, Bike parking, Playroom, Free snacks, Free beverages, No dress code, Startup atmosphere, Modern office, In-house trainings.