Hadoop Data Engineer @ Mindbox S.A.
7 days ago
- Scala
- Experience designing and deploying large-scale distributed data processing systems with technologies such as PostgreSQL or equivalent databases, SQL, Hadoop, Spark, and Tableau
- Proven ability to define and build architecturally sound solution designs.
- Demonstrated ability to rapidly build relationships with key stakeholders.
- Experience with automated unit testing, automated integration testing, and a fully automated build and deployment process as part of DevOps tooling.
- Must have the ability to understand and develop the logical flow of applications at the technical code level.
- Strong interpersonal skills and ability to work in a team and in global environments.
- Should be proactive, have a learning attitude, and adapt to dynamic work environments.
- Exposure to Enterprise Data Warehouse technologies.
- Exposure to a customer-facing role working with enterprise clients.
- Experience with industry-standard version control tools (Git, GitHub), automated deployment tools (Ansible and Jenkins), and requirements management in JIRA.
We are the team that provides the metrics and analytical products that drive change in DevOps practices across the bank. We provide the key measures that track performance and capabilities of our engineering teams across the world, and the critical insights and intelligence that leadership teams rely on to shape and deliver strategic change.
Creating an inspiring place for talented people to thrive, we use their expertise and courage to introduce the technology of the future into your business. This is the foundation of Mindbox and the goal of our business and technology journey. We operate and develop in four areas:
Autonomous Enterprise - automation of business processes using RPA, OCR, and AI.
Business Management Systems ERP - we implement, adapt, optimize, and maintain flexible, safe, and open ERP systems for production and distribution companies worldwide.
Talent Network - we provide access to the best specialists.
Modern Architecture - we build integrated, sustainable, and open CI/CD environments based on containers, enabling safe and more frequent delivery of proven changes in application code.
We treat technology as a tool to achieve a goal. Thanks to our consultants' reliability and proactive approach, initial projects usually become long-term cooperation. For over 16 years, Mindbox has provided various services to support clients in digital transformation.
We are looking for Data Engineers to join the IT team within the Environmental, Social & Governance department of the Data and Analytics office. The engineering team is responsible for taking business logic/PoC asset designs and using them to create robust data pipelines using Spark in Scala. Our pipelines are orchestrated through Airflow and deployed through a Jenkins-based CI/CD pipeline. We operate on a private GCP instance and an on-premises Hadoop cluster. Engineers are embedded in multi-disciplinary teams including business analysts, data analysts, data engineers, software engineers, and architects.
Requirements: Hadoop, HDFS, Google Cloud Platform, Scala, SQL, Hive, Apache Spark, Airflow, PostgreSQL, Tableau, BigQuery, Cloud Dataflow, Cloud Dataproc. Tools: Agile, Scrum. Additionally: Sport Subscription, Private healthcare, Life insurance, Training budget.
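For context on the day-to-day work, below is a minimal sketch of the kind of Spark-in-Scala batch job described above. The table names, column names, and output path are hypothetical placeholders, and the Airflow orchestration and Jenkins deployment around the job are not shown.

```scala
import org.apache.spark.sql.{SparkSession, functions => F}

// Minimal sketch of a Spark batch job in Scala: read from a Hive table,
// apply a simple transformation, and write the result back as Parquet.
// Table, column, and path names are hypothetical placeholders.
object EsgPipelineSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("esg-pipeline-sketch")
      .enableHiveSupport()          // assumes a Hive metastore is available
      .getOrCreate()

    // Read the raw input produced upstream (hypothetical table name).
    val raw = spark.table("esg_raw.transactions")

    // The real business logic would come from the PoC asset design;
    // here we only aggregate a metric per counterparty as an illustration.
    val curated = raw
      .filter(F.col("status") === "SETTLED")
      .groupBy("counterparty_id")
      .agg(F.sum("amount").as("total_amount"))

    // Write to a curated zone; the path is a placeholder.
    curated.write.mode("overwrite").parquet("/data/curated/esg/transactions_summary")

    spark.stop()
  }
}
```

In the setup described in this posting, a job like this would typically be packaged by the CI/CD pipeline and triggered as an Airflow task rather than run by hand.
-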
Data Engineer
6 days ago
Remote, Krakow, Warszawa, Czech Republic beBee Careers Full time 800,000 - 1,500,000
Job Title: Data Engineer - Scalable Data Solutions
- PySpark and Scala development and design.
- Experience with scheduling tools like Airflow.
- Familiarity with the Hadoop ecosystem, including Apache Spark, YARN, Hive, Python, ETL frameworks, MapReduce, SQL, and RESTful services.
- Strong understanding of the Unix/Linux environment.
- Hands-on experience in building data...
-
Hadoop Administrator @
1 week ago
Warszawa, Mazovia, Czech Republic WIPRO IT SERVICES POLAND Sp. z o.o. Full time
Required Skills:
- Strong hands-on experience with the Hadoop ecosystem (preferably Hortonworks or Apache).
- Proficient in Linux/Unix OS services, administration, and scripting (Shell, awk).
- Solid debugging and troubleshooting skills for Hadoop-related issues.
- Experience with automation tools like Ansible and scripting in Python.
- Familiarity with HBase is an added...
-
Senior Distributed Systems Engineer
6 days ago
Remote, Krakow, Warszawa, Czech Republic beBee Careers Full time €90,000 - €120,000
Job Summary: We are seeking a skilled Data Engineer to join our team, responsible for designing and deploying large-scale distributed data processing systems using technologies such as Hadoop, Spark, and PostgreSQL.
Main Responsibilities:
- Design and deploy robust data pipelines using Spark in Scala.
- Utilize Airflow for orchestration and Jenkins for deployment...
-
Hadoop Ecosystem Expert
1 week ago
Warszawa, Mazovia, Czech Republic beBeeAdmin Full time 90,000 - 120,000
Job Description: We are seeking a skilled Hadoop administrator to manage and support DAP Hadoop platforms at L2 level. The ideal candidate will have strong expertise in Hadoop ecosystem components, automation, and Linux system administration.
Key Responsibilities:
- Provide L2 support for Hadoop platforms (Hortonworks or Apache open-source distributions).
- Monitor...
-
Big Data Engineer
3 days ago
Remote, Czech Republic Link Group Full time
Must-Have Qualifications:
- At least 3 years of experience in big data engineering.
- Proficiency in Scala and experience with Apache Spark.
- Strong understanding of distributed data processing and frameworks like Hadoop.
- Experience with message brokers like Kafka.
- Hands-on experience with SQL/NoSQL databases.
- Familiarity with version control tools like Git.
- Solid...
-
Data Engineer
1 week ago
Remote, Czech Republic Link Group Full time
Must-Have Qualifications:
- At least 3 years of experience in data engineering.
- Strong expertise in one or more cloud platforms: AWS, GCP, or Azure.
- Proficiency in programming languages like Python, SQL, or Java/Scala.
- Hands-on experience with big data tools such as Hadoop, Spark, or Kafka.
- Experience with data warehouses like Snowflake, BigQuery, or...
-
Leading Kafka Engineer
6 days ago
Remote, Warszawa, Czech Republic beBeeKafka Full time 1,040 - 16,000
Job Title: Leading Kafka Engineer
Join the team of experts in designing and implementing data pipelines using Big Data technologies like Hadoop, Spark, Hive, and Kafka.
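As a rough illustration of the Kafka-plus-Spark pipeline work this role mentions, here is a minimal Spark Structured Streaming sketch in Scala. The broker address, topic name, and paths are hypothetical, and the spark-sql-kafka connector is assumed to be on the classpath.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

// Minimal sketch of a Spark Structured Streaming job reading from Kafka
// and landing records as Parquet. Broker address, topic name, and paths
// are hypothetical placeholders. Requires the spark-sql-kafka-0-10
// connector on the classpath.
object KafkaIngestSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-ingest-sketch")
      .getOrCreate()

    // Subscribe to a Kafka topic and project the fields we care about.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092") // hypothetical broker
      .option("subscribe", "events")                     // hypothetical topic
      .option("startingOffsets", "latest")
      .load()
      .select(col("key").cast("string"), col("value").cast("string"), col("timestamp"))

    // Continuously append micro-batches to a landing path.
    val query = events.writeStream
      .format("parquet")
      .option("path", "/data/raw/events")                // hypothetical landing path
      .option("checkpointLocation", "/checkpoints/events")
      .start()

    query.awaitTermination()
  }
}
```

The checkpoint location is what lets the stream resume from the last committed Kafka offsets after a restart.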
-
Data Engineer with Azure @ Mindbox S.A.
7 days ago
Krakow, Warszawa, Czech Republic Mindbox S.A. Full time
- Significant experience in development with Python and SQL programming languages
- Understanding of applying data engineering methods to the cyber security domain
- Experience designing, building, and maintaining data pipelines and ELT workflows across disparate data sets
- Data profiling and analysis
- Data pipeline design
- Streaming pipelines
- Experience with Azure DevOps,...
-
Data Architect with Data Engineering Focus
6 days ago
Krakow, Warszawa, Czech Republic beBee Careers Full time €60,000 - €85,000
Job Description:
- Develop and maintain large-scale data pipelines using Python, SQL, and Spark.
- Design and implement efficient data architectures to support reporting, analytics, and machine learning initiatives.
- Analyze complex datasets to extract insights and produce statistical metrics for business reports.
- Collaborate with...
-
Senior Data Engineer
1 week ago
Remote, Warszawa, Wrocław, Białystok, Kraków, Gdańsk, Czech Republic beBeeData Full time 50,000 - 75,000
About the Role: As a Data Architect, you will play a crucial part in designing and developing scalable data management architectures, infrastructure, and platform solutions for streaming and batch processing using Big Data technologies like Apache Spark, Hadoop, and Iceberg. Your expertise will help us leverage cutting-edge technologies to drive business growth...
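To illustrate the batch side of the Spark and Iceberg platform work described above, here is a minimal Scala sketch of writing a DataFrame to an Apache Iceberg table. The catalog name, warehouse path, table name, and sample data are hypothetical, and the Iceberg Spark runtime package is assumed to be on the classpath.

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch of writing a Spark DataFrame to an Apache Iceberg table
// through an Iceberg catalog. Catalog name, warehouse path, and table name
// are hypothetical; requires the iceberg-spark-runtime package.
object IcebergWriteSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("iceberg-write-sketch")
      // Register a hypothetical Iceberg catalog backed by a Hadoop warehouse.
      .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
      .config("spark.sql.catalog.lake.type", "hadoop")
      .config("spark.sql.catalog.lake.warehouse", "hdfs:///warehouse/lake")
      .getOrCreate()

    import spark.implicits._

    // Toy batch input; in practice this would come from upstream pipelines.
    val daily = Seq(("2024-01-01", 42L), ("2024-01-02", 57L))
      .toDF("event_date", "value")

    // DataFrameWriterV2 API: create or replace the Iceberg table from the DataFrame.
    daily.writeTo("lake.analytics.daily_values").createOrReplace()

    spark.stop()
  }
}
```

For the streaming half of such a platform, the same table could be targeted from Structured Streaming jobs, with Iceberg handling snapshots and schema evolution.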