#10439 | 2018-09-03 Malmö area, Sweden

Big Data Operations Engineer (Hadoop, Spark)

Job Summary:
We are seeking a seasoned Big Data Operations Engineer to administer and scale our multi-petabyte Hadoop clusters and their related services. The role focuses primarily on provisioning, ongoing capacity planning, monitoring, and management of the Hadoop platform and the applications and middleware that run on it.

Education:
Bachelor's or Master's degree in Computer Science or a similar technical field.

Job Description:
  • Maintain and scale production Hadoop, HBase, Kafka, and Spark clusters.
  • Implement and administer the Hadoop infrastructure on an ongoing basis, including monitoring, tuning, and troubleshooting.
  • Provide hardware architecture guidance, plan and estimate cluster capacity, and deploy Hadoop clusters.
  • Improve scalability, service reliability, capacity, and performance.
  • Triage production issues together with other operational teams as they occur.
  • Conduct ongoing maintenance across our large-scale deployments.
  • Write automation code for managing large Big Data clusters.
  • Work with development and QA teams to design ingestion pipelines and integration APIs, and provide Hadoop ecosystem services.
  • Participate in the occasional on-call rotation supporting the infrastructure.
  • Troubleshoot incidents hands-on: formulate theories, test hypotheses, and narrow down the possibilities to find the root cause.

Competence demands:
  • Hands-on experience managing production clusters (Hadoop, Kafka, Spark, and more).
  • Strong development and automation skills; must be very comfortable reading and writing Python and Java code.
  • 10+ years of overall experience, including at least 5 years of Hadoop experience in production on medium to large clusters.
  • A tools-first mindset: you build tools for yourself and others to increase efficiency and make hard or repetitive tasks quick and easy.
  • Experience with configuration management and automation.
  • Organized, and focused on building, improving, resolving, and delivering.
  • A good communicator within and across teams, ready to take the lead.


Start: as soon as the right candidate is found
Duration: long-term assignment
Work location: Malmö area, Sweden
Requirements: Min. 5 years of professional IT experience.
Job type: Freelance

Note: if we assess that you are the right candidate for the assignment, we will contact you personally. Your CV details will not be passed on to the client until we have spoken with you.

If you have any questions about this assignment, you are welcome to contact the resourcing department:

Olga Saibel
Sourcing Specialist

E-mail
Mobile: +46 76 843 39 34
