Databricks Engineer
Actively Reviewing Applications
McLaren Strategic Solutions (MSS)
Karnataka, India
Full-Time
On-site
INR 1–4 LPA
Posted 3 weeks ago • Apply by June 15, 2026
Job Description
About Us
Next Generation of Technology Consulting
Our approach is built on delivering value by combining our powerful ecosystem of platforms with capital-efficient execution.
We bring together deep domain expertise and our strength in technology to help the world’s leading businesses build their digital core, optimize operations, accelerate revenue growth, and deliver tangible outcomes at speed and scale.
Key Responsibilities
- Design and develop scalable data ingestion pipelines using Databricks and Apache Spark.
- Build and maintain ETL/ELT workflows to ingest structured and semi-structured data from various source systems.
- Implement data transformation logic using PySpark, SQL, and Delta Lake.
- Integrate data from APIs, databases, file systems, and streaming platforms.
- Optimize data processing performance and manage large-scale data workloads.
- Collaborate with Data Architects, Business Analysts, and QA teams to ensure accurate data delivery.
- Implement monitoring, logging, and error handling mechanisms for ingestion pipelines.
- Support CI/CD deployment processes for data engineering solutions.
- Ensure adherence to data governance, security, and data quality standards.
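To make the ingestion responsibilities above concrete, here is a minimal, framework-agnostic sketch of the validate-transform-reject pattern with logging and error handling. It uses only the Python standard library as a stand-in; a production Databricks pipeline would express the same pattern with PySpark DataFrames and Delta Lake tables. All names (`ingest`, the `amount` field) are illustrative assumptions, not part of the role's actual codebase.

```python
import csv
import io
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ingest")

def ingest(csv_text):
    """Parse CSV rows, validate/cast the 'amount' field, and split results.

    Rows that fail validation are logged and routed to a reject list
    instead of aborting the whole batch -- a common ingestion pattern
    that keeps pipelines resilient to bad source records.
    """
    good, rejected = [], []
    for row in csv.DictReader(io.StringIO(csv_text)):
        try:
            row["amount"] = float(row["amount"])  # transform: cast to numeric
            good.append(row)
        except (KeyError, ValueError) as exc:
            log.warning("rejected row %r: %s", row, exc)
            rejected.append(row)
    return good, rejected

raw = "id,amount\n1,10.5\n2,oops\n3,7\n"
ok, bad = ingest(raw)
print(len(ok), len(bad))  # 2 valid rows, 1 rejected
```

The same split between clean and quarantined records is what monitoring and data-quality checks hook into in a real pipeline: the rejected set becomes a metric and an alert source rather than a silent failure.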
Required Qualifications
- 4+ years of experience in Data Engineering or Big Data development.
- Strong experience with Databricks and Apache Spark.
- Proficiency in Python (PySpark) and SQL.
- Experience building data pipelines in cloud environments such as AWS, Azure, or GCP.
- Knowledge of data ingestion frameworks and ETL tools.
- Experience working with structured and semi-structured data formats (JSON, Parquet, CSV, Avro).
- Familiarity with version control tools such as Git.
- Experience with Delta Lake architecture.
- Knowledge of workflow orchestration tools such as Airflow or Azure Data Factory.
- Experience with streaming platforms such as Kafka or Event Hub.
- Exposure to CI/CD pipelines and DevOps practices for data platforms.
Required Skills
Databricks, Apache Spark, PySpark, Python, SQL, Delta Lake, ETL/ELT, data ingestion and data transformation pipelines, workflow orchestration (Airflow, Azure Data Factory), streaming platforms (Kafka, Event Hub), cloud environments (AWS, Azure), CI/CD pipelines and DevOps, version control (Git), monitoring, logging, error handling, data governance and data quality standards, data formats (JSON, Parquet, Avro, CSV), Big Data development.