
GCP HPC DevOps Engineer @
2 days ago
Your profile:
- 5+ years of experience with HPC (High-Performance Computing) environments, including SLURM workload manager, MPI, and other HPC-related software,
- extensive hands-on experience managing Linux-based systems, including performance tuning and troubleshooting in an HPC context,
- proven experience migrating and managing SLURM clusters in cloud environments, preferably GCP,
- proficiency with automation tools such as Ansible and Terraform for cluster deployment and management,
- experience with Spack for managing and deploying HPC software stacks,
- strong scripting skills in Python, Bash, or similar languages for automating cluster operations (see the sketch after this section),
- in-depth knowledge of GCP services relevant to HPC, such as Compute Engine (GCE), Cloud Storage, and VPC networking,
- strong problem-solving skills with a focus on optimizing HPC workloads and resource utilization.
Working from the European Union region and holding a work permit are required.
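To make the scripting requirement above concrete, below is a minimal sketch of the kind of cluster-operations automation this role involves: it queries SLURM for per-node states and the pending-job backlog, which are the raw inputs to most scaling and tuning decisions. It assumes the SLURM client tools (sinfo, squeue) are installed on the machine running it; the idle/pending heuristic at the end is purely illustrative, not part of the role description.

```python
#!/usr/bin/env python3
"""Minimal sketch: report SLURM node states and the pending-job backlog.

Assumes the SLURM client tools (sinfo, squeue) are on PATH, e.g. on a
login or controller node. The heuristic at the bottom is illustrative.
"""
import subprocess
from collections import Counter


def run(cmd):
    """Run a command and return its non-empty output lines."""
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    return [line for line in out.splitlines() if line.strip()]


def node_states():
    """Count nodes by state (idle, alloc, mix, drain, ...)."""
    # -h: no header, -N: one line per node, -o "%N %t": node name and state
    lines = run(["sinfo", "-h", "-N", "-o", "%N %t"])
    return Counter(line.split()[1] for line in lines)


def pending_jobs():
    """Count jobs currently waiting in the queue (state PENDING)."""
    # -h: no header, -t PD: pending jobs only, -o %i: job id
    return len(run(["squeue", "-h", "-t", "PD", "-o", "%i"]))


if __name__ == "__main__":
    states = node_states()
    backlog = pending_jobs()
    print("node states:", dict(states))
    print("pending jobs:", backlog)
    # A real setup would feed these numbers into an autoscaler or monitoring
    # pipeline; here we only flag an obvious backlog.
    if backlog > 0 and states.get("idle", 0) == 0:
        print("queue is backed up and no nodes are idle - consider scaling out")
```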
Nice to have:
- Google Cloud Professional DevOps Engineer or similar GCP certifications,
- familiarity with GCP's HPC-specific offerings, such as Preemptible VMs, HPC VM images, and other cost-optimization strategies (see the sketch after this list),
- experience with performance profiling and debugging tools for HPC applications,
- advanced knowledge of HPC data management strategies, including parallel file systems and data transfer tools,
- understanding of container technologies (e.g., Singularity, Docker) specifically within HPC contexts,
- experience with Spark or other big data tools in an HPC environment.
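As a hedged illustration of the cost-optimization point above (the item on Preemptible VMs and HPC VM images), the sketch below creates a single preemptible compute node from Google's public HPC VM image family using the gcloud CLI. The project, zone, machine type, and image family/project are placeholders to verify against current GCP documentation; in practice this would usually be driven by Terraform or by SLURM's cloud-bursting hooks rather than an ad-hoc script.

```python
#!/usr/bin/env python3
"""Minimal sketch: create a preemptible compute node from an HPC VM image.

Assumes the gcloud CLI is installed and authenticated. Project, zone,
machine type, and image family/project are illustrative placeholders.
"""
import subprocess

PROJECT = "my-hpc-project"          # placeholder project id
ZONE = "europe-west4-b"             # placeholder zone
MACHINE_TYPE = "c2-standard-60"     # compute-optimized; pick per workload
IMAGE_FAMILY = "hpc-rocky-linux-8"  # Google HPC VM image family (check docs)
IMAGE_PROJECT = "cloud-hpc-image-public"


def create_preemptible_node(name):
    """Create one preemptible VM suitable for fault-tolerant batch work."""
    cmd = [
        "gcloud", "compute", "instances", "create", name,
        f"--project={PROJECT}",
        f"--zone={ZONE}",
        f"--machine-type={MACHINE_TYPE}",
        f"--image-family={IMAGE_FAMILY}",
        f"--image-project={IMAGE_PROJECT}",
        "--preemptible",            # much cheaper, but can be reclaimed by GCP
        "--no-restart-on-failure",  # preemptible VMs do not auto-restart
        "--maintenance-policy=TERMINATE",  # preemptible VMs cannot live-migrate
    ]
    subprocess.run(cmd, check=True)


if __name__ == "__main__":
    create_preemptible_node("hpc-compute-0001")
```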
We are Xebia – a place where experts grow. For nearly two decades now, we've been developing digital solutions for clients from many industries and places across the globe. Among the brands we've worked with are UPS, McLaren, Aviva, Deloitte, and many, many more.
We're passionate about Cloud-based solutions. So much so that we partner with three of the largest Cloud providers in the business – Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). We even became the first AWS Premier Consulting Partner in Poland.
Formerly we were known as PGS Software. In 2021, we joined Xebia Group – a family of interlinked companies driven by the desire to make a difference in the world of technology.
Xebia stands for innovation, talented team members, and technological excellence. Xebia means worldwide recognition and thought leadership. This regularly provides us with the opportunity to work on global, innovative projects.
Our mission can be captured in one word: Authority. We want to be recognized as the authority in our field of expertise.
What makes us stand out? It's the little details, like our attitude, dedication to knowledge, and the belief in people's potential – emphasizing every team member's development. Obviously, these things are not easy to present on paper – so make sure to visit us to see it with your own eyes.
Now, we've talked a lot about ourselves – but we'd love to hear more about you.
Send us your resume to start the conversation and join Xebia.
Your responsibilities:
- leading the migration of on-premises SLURM-based HPC (High-Performance Computing) clusters to Google Cloud Platform,
- designing, implementing, and managing scalable and secure HPC infrastructure solutions on GCP,
- optimizing SLURM configurations and workflows to ensure efficient use of cloud resources,
- managing and optimizing HPC environments, focusing on workload scheduling, job efficiency, and scaling SLURM clusters,
- automating cluster deployment, configuration, and maintenance tasks using scripting languages (Python, Bash) and automation tools (Ansible, Terraform),
- integrating HPC software stacks using tools like Spack for dependency management and easy installation of HPC libraries and applications (see the sketch after this section),
- deploying, managing, and troubleshooting applications using MPI, OpenMP, and other parallel computing frameworks on GCP instances,
- collaborating with engineering, support teams, and stakeholders to ensure smooth migration and ongoing operation of HPC workloads,
- providing expert-level support for performance tuning, job scheduling, and cluster resource optimization,
- staying current with emerging HPC technologies and GCP services to continually improve HPC cluster performance and cost efficiency.
Requirements: GCP, Python, Ansible, Bash, Terraform, Data management, Networking, VPC, Docker, Spark, Big Data
Additionally: Training budget, Private healthcare, Multisport, Integration events, International projects, Mental health support, Referral program, Modern office, Canteen, Free snacks, Free beverages, Free tea and coffee, No dress code, Playroom, In-house trainings, In-house hack days, Normal atmosphere :)
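To illustrate the Spack-related responsibility listed above, here is a rough sketch of building a reproducible HPC software stack as a named Spack environment. It assumes Spack is already installed and on PATH; the environment name, package specs, and build-job count are examples only, not a prescribed stack.

```python
#!/usr/bin/env python3
"""Minimal sketch: assemble an HPC software stack in a Spack environment.

Assumes Spack is installed and on PATH. The environment name and the
package specs below are examples, not a prescribed stack.
"""
import subprocess

ENV_NAME = "hpc-stack"                      # illustrative environment name
PACKAGES = ["openmpi", "hdf5+mpi", "fftw"]  # example specs only


def spack(*args):
    """Run a spack subcommand and fail loudly if it errors."""
    subprocess.run(["spack", *args], check=True)


def build_environment():
    # Create a named environment (re-running against an existing one will
    # fail; idempotence is deliberately not handled in this sketch).
    spack("env", "create", ENV_NAME)
    # Add the example specs to the environment, then resolve dependencies.
    for spec in PACKAGES:
        spack("-e", ENV_NAME, "add", spec)
    spack("-e", ENV_NAME, "concretize")
    # Build everything; -j caps parallel build jobs.
    spack("-e", ENV_NAME, "install", "-j", "8")


if __name__ == "__main__":
    build_environment()
```

In practice the same stack definition would normally live in a version-controlled spack.yaml and be applied by Ansible or a startup script during node provisioning, rather than built interactively.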
-
High-Performance Computing Infrastructure Specialist
Remote, Wrocław, Gdańsk, Rzeszów, Czech Republic beBeeDevOps Full time 1,800,000 - 2,500,000. We are seeking a highly skilled High-Performance Computing (HPC) infrastructure specialist to join our team. As an HPC DevOps Engineer, you will play a key role in designing, implementing, and managing scalable and secure HPC infrastructure solutions on Google Cloud Platform. Key...
-
DevOps Engineer @
6 days ago
Remote, Czech Republic Link Group Full time. Strong hands-on experience with Google Cloud Platform services (Compute Engine, Cloud Storage, Cloud Functions, VPC). Proficiency in Infrastructure as Code (IaC) using Terraform, Google Cloud Deployment Manager, or similar tools. Practical experience with CI/CD tools (Cloud Build, Jenkins, GitHub Actions). Knowledge of containerization and orchestration...
-
Senior GCP Data Engineer @
5 days ago
Remote, Wrocław, Gdańsk, Rzeszów, Czech Republic Xebia sp. z o.o. Full time. 7+ years in a data engineering role, with hands-on experience in building data processing pipelines; experience in leading the design and implementation of data pipelines and data products; proficiency with GCP services for large-scale data processing and optimization; extensive experience with Apache Airflow, including DAG creation, triggers, and workflow...
-
DevOps Engineer
6 days ago
Remote, Czech Republic Upvanta Full time. Requirements: Proven experience in managing production workloads on GCP. Strong knowledge of containerization and orchestration. Solid hands-on background in automation, CI/CD, and scripting. Familiarity with cloud security principles and implementation. We are looking for a DevOps Engineer to join a German healthcare institution. The position does not require...
-
GCP Platform Engineer APDP Senior Role @
6 days ago
Remote, Czech Republic Link Group Full time. 5+ years of hands-on experience with GCP and public cloud infrastructure. Strong IaC background (Terraform), preferably with Azure DevOps pipelines. Experience with managed GCP services: Cloud Run, BigQuery, Dataproc, Vertex AI, GKE. Knowledge of monitoring and observability practices. Hands-on with Kubernetes, Bash, and Linux system administration. Solid...
-
GCP Cloud Platform Support Engineer @
2 days ago
Remote, Wrocław, Gdańsk, Rzeszów, Czech Republic Xebia sp. z o.o. Full time. Your profile: availability to work in US time zone (until 10:00/11:00 pm CET); 3+ years of support experience with GCP, with emphasis on troubleshooting; strong understanding of Google services, including BigQuery, Workflows, Batch, Dataproc, Dataflow, Cloud Run, GCS, monitoring, logging, VPC concepts, and networking; proficiency in interpreting code written in...
-
Senior Azure DevOps Engineer @
5 days ago
Remote, Wrocław, Gdańsk, Rzeszów, Czech Republic Xebia sp. z o.o. Full time. Your profile: 6+ years of experience in infrastructure, software engineering, or DevOps, with strong expertise in Microsoft Azure Cloud services (IaaS, PaaS, SaaS), including compute, networking, security, data services, and containerization (Azure Kubernetes Service); hands-on experience designing and building scalable, multi-region, highly available, secure...
-
Data Architect GCP @
5 days ago
Remote, Warsaw, Czech Republic SquareOne Full time. 7+ years of experience in data architecture, database design, and data engineering. Proven expertise in Google Cloud Platform (GCP), including Dataplex, BigQuery, Dataflow (Apache Beam), and other GCP-native tools. Strong experience with Apache-based data pipelining tools (Beam, Airflow, Kafka, Spark). Expertise in data modeling (conceptual, logical, physical)...
-
DevOps Engineer @
2 days ago
Remote, Czech Republic Awumba Full time. Experience in building/designing CI/CD pipelines (we use Azure DevOps). Experience in one of the public cloud providers (Azure, AWS, GCP). Experience in Infrastructure as Code tools (Pulumi, Terraform). Experience in container technologies (Docker, Kubernetes, etc.). Experience in writing scripts in PowerShell, Bash, etc. Openness to work in a multicultural...
-
Senior DevOps Engineer @
3 days ago
Remote, Krakow, Czech Republic QVC GROUP GLOBAL BUSINESS SERVICES Full time. Bachelor's degree in computer science, Engineering, or a related field, or equivalent experience. 5+ years of experience in a DevOps or related role. 3+ years of experience in building software applications. Strong experience with CI/CD tools such as Jenkins, Azure Pipelines, Bitbucket (preferred)/GitLab CI, CircleCI, or similar. Experience in developing software...