
PySpark + DBT Developer

Zigsaw

India, Delhi | Full-Time | On-site
Posted 3 weeks ago | Apply by June 16, 2026

Job Description

Experience: 4–6 Years

Location: Gurgaon

Immediate joiners only

Budget: 15–18 LPA

Job Summary:

We are looking for a skilled PySpark + DBT Developer with 4–6 years of experience building scalable data pipelines and transforming large datasets. The ideal candidate has strong hands-on experience with PySpark, DBT (Data Build Tool), and modern data warehouse environments.


Key Responsibilities:

  • Develop, optimize, and maintain scalable data pipelines using PySpark.
  • Design and implement data transformation workflows using DBT.
  • Build and manage data models in cloud-based data warehouses.
  • Perform data validation, transformation, and cleansing.
  • Optimize performance of Spark jobs and DBT models.
  • Work closely with Data Engineers, Analysts, and Business teams to understand data requirements.
  • Ensure data quality, governance, and best practices.
  • Troubleshoot and resolve production issues.
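To give a flavor of the validation, transformation, and cleansing work listed above, here is a minimal sketch in plain Python. The field names (`id`, `email`, `amount`) are hypothetical; in this role the equivalent logic would typically live in a PySpark job or be expressed as DBT tests rather than hand-written loops.

```python
# Illustrative cleansing step over a hypothetical schema: id, email, amount.
# In production this would operate on PySpark DataFrames, not Python lists.

def clean_records(records):
    """Drop rows with missing ids, normalize emails, coerce amounts to float."""
    cleaned = []
    for row in records:
        if not row.get("id"):  # validation: reject rows without a primary key
            continue
        email = (row.get("email") or "").strip().lower()  # cleansing: normalize
        try:
            amount = float(row.get("amount", 0))  # transformation: coerce type
        except (TypeError, ValueError):
            continue  # reject rows with unparseable amounts
        cleaned.append({"id": row["id"], "email": email, "amount": amount})
    return cleaned

rows = [
    {"id": 1, "email": " A@X.COM ", "amount": "10.5"},
    {"id": None, "email": "b@x.com", "amount": "3"},  # dropped: missing id
    {"id": 2, "email": None, "amount": "oops"},       # dropped: bad amount
]
print(clean_records(rows))  # → [{'id': 1, 'email': 'a@x.com', 'amount': 10.5}]
```

In PySpark the same rules would be expressed declaratively (`filter`, `withColumn`, `cast`), and in DBT as schema tests such as `not_null` and `accepted_values` on the model.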


Required Skills:

  • 4–6 years of experience in Data Engineering.
  • Strong hands-on experience in PySpark.
  • Good working knowledge of DBT (Data Build Tool).
  • Experience with SQL and data modelling concepts.
  • Hands-on experience with cloud platforms like AWS / Azure / GCP.
  • Experience with data warehouses such as Snowflake, Redshift, BigQuery, or Azure Synapse.
  • Understanding of ETL/ELT concepts.
  • Familiarity with version control tools like Git.


Good to Have:

  • Experience with Airflow or other orchestration tools.
  • Knowledge of CI/CD pipelines.
  • Understanding of Agile methodology.
  • Experience in handling large-scale distributed datasets.
