Scala + Spark Data Engineer – Full Time

Languages: Scala, Java

  • Big Data Orchestration: Airflow, Spark on Kubernetes, YARN, Oozie
  • Big Data Processing: Hadoop, Kafka, Spark & Spark Structured Streaming
  • Experience with SOLID & DRY principles and sound software architecture & design

Experience

  • Advanced Scala experience (e.g. functional programming, case classes, complex data structures & algorithms)

  • Proficient in developing automated frameworks for unit & integration testing.
  • Experience with Docker, Helm, and related container technologies.
  • Proficient in deploying and managing Spark workloads on Kubernetes clusters.
  • Experience evaluating and implementing data validation & data quality checks
  • DevOps experience with Jenkins, Maven, GitHub, GitHub Actions, and CI/CD


  • Skillset: Scala + Spark + Airflow + Docker/Kubernetes
  • Experience: 6–12 years
  • CTC: Open
Mandatory skills (Scala + Spark + Airflow + Docker/Kubernetes):

  • Data Engineer background
  • Scala coding experience
  • Spark
  • Airflow
  • Docker
  • Kubernetes
  • YARN/Oozie
  • DevOps
Required Experience: 6+ Years
Job Location:
Job Type:
Notice Period: Immediate to 30 Days

Submit Your Resume

Our Team will get in touch with you and get you started on your journey.
