Scala/Spark Big Data Developer

6 days ago


Remote / Wrocław, Poland | Comscore (via CC) | Full time

The candidate must have:

  • 3+ years building production software in Java/Scala or another object-oriented language
  • 3+ years working in Windows; bonus points for Linux experience
  • 1+ years of experience with Spark, primarily using Scala for Big Data processing (including an understanding of how Spark works and why)
  • Knowledge of SQL
  • Professional working proficiency in English (both oral and written)
  • Good software debugging skills: not just print statements, but also working with a debugger
  • Solid understanding of Git


Correct Context is looking for a Scala/Spark Big Data Developer for Comscore, in Poland and the surrounding region.

Comscore is a global leader in media analytics, revolutionizing insights into consumer behavior, media consumption, and digital engagement.

Comscore leads in measuring and analyzing audiences across diverse digital platforms. You will work with cutting-edge technology, play a vital role as a trusted partner delivering accurate data to global businesses, and collaborate with industry leaders like Facebook, Disney, and Amazon, helping to empower businesses in the digital era across the media, advertising, e-commerce, and technology sectors.

We offer:

  • Real big data projects (PB scale)
  • An international team (US, PL, IE, CL)
  • A small, independent team
  • High influence on your working environment
  • Hands-on environment
  • Flexible working hours
  • Fully remote or in-office work in Wrocław, Poland
  • 18,000 - 25,000 PLN net/month B2B
  • Private healthcare (PL)
  • Multikafeteria (PL)
  • Free parking (PL)

If you don't have all the qualifications but you're interested in what we do and have a solid Linux understanding -> let's talk

The recruitment process for the Scala/Spark Big Data Developer position has the following steps:

  1. Technical survey - 10 min
  2. Technical screening - 30 min video call
  3. Technical interview - 60 min video call
  4. Technical/Managerial interview - 60 min video call (steps 3 and 4 may be combined into a single step)
Your responsibilities:

  • Design, implement, and maintain petabyte-scale Big Data pipelines using Scala, Spark, and a range of other technologies
  • Tackle C# legacy services and write clean, performant Java/Scala code
  • Conduct proofs of concept for enhancements
  • Write clean, performant Big Data Scala code
  • Work with technologies like AWS, Kubernetes, Airflow, EMR, Hadoop, Linux/Ubuntu, Kafka, and Spark
  • Automate tests and data workflows end-to-end

Requirements: Scala, Java, Spark, Big Data, SQL, Git

Tools: Git

Additionally: private healthcare, remote work, flexible working hours, flat structure, free parking, no dress code, startup atmosphere, modern office, free coffee, playroom.
