Data Engineer

Job Description:

  • Develop and maintain scalable data pipelines using PySpark and distributed data processing frameworks.
  • Leverage Azure Data Factory (ADF) to orchestrate and automate complex data workflows and ETL processes.
  • Implement data solutions in Databricks for efficient big data processing and real-time analytics.
  • Design and optimize ETL processes to extract, transform, and load data from various sources into Azure cloud storage (a brief illustrative sketch follows this list).
  • Ensure data accuracy and tune query performance using SQL when querying and managing large datasets.
  • Collaborate with cross-functional teams to implement secure, reliable, and scalable data solutions on Azure.
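For a flavour of the day-to-day work described in the bullets above, here is a minimal, illustrative PySpark ETL sketch: it reads raw CSV files from an Azure Data Lake Storage container, applies basic cleansing, and writes curated Parquet back to storage. The storage account, container, and column names are hypothetical and not part of this posting, and the cluster is assumed to already have ADLS credentials configured.

# Minimal PySpark ETL sketch (illustrative only; names are hypothetical)
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_etl").getOrCreate()

# Extract: read raw CSV files landed in an (assumed) ADLS Gen2 container.
raw = (
    spark.read
    .option("header", "true")
    .csv("abfss://raw@examplestorageacct.dfs.core.windows.net/orders/")
)

# Transform: de-duplicate, fix types, and drop invalid rows.
orders = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date(F.col("order_ts")))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount") > 0)
)

# Load: write partitioned Parquet to a curated zone for downstream SQL/BI use.
(
    orders.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("abfss://curated@examplestorageacct.dfs.core.windows.net/orders/")
)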

  • Must-have skills: PySpark, Azure Data Factory (ADF), Databricks, ETL, SQL
  • Location: Gurgaon/Hyderabad
  • Experience: 3-5 years
  • Mode: Hybrid/Remote
  • Maximum salary: 13 LPA
  • Notice period: Immediate
Job Category: Data Engineer
Job Type: Hybrid
Job Location: Gurgaon

Apply for this position
