Data Engineer
Xerox
Actively Reviewing Applications
On-site
Posted 4 weeks ago • Apply by June 8, 2026
Job Description
About Xerox Holdings Corporation
For more than 100 years, Xerox has continually redefined the workplace experience. Harnessing our leadership position in office and production print technology, we’ve expanded into software and services to sustainably power the hybrid workplace of today and tomorrow. Today, Xerox is continuing its legacy of innovation to deliver client-centric and digitally-driven technology solutions and meet the needs of today’s global, distributed workforce. From the office to industrial environments, our differentiated business and technology offerings and financial services are essential workplace technology solutions that drive success for our clients. At Xerox, we make work, work. Learn more about us at www.xerox.com.
Job Role: Data Engineer
Job Description: A Data Engineer with an AI/ML focus combines traditional data engineering responsibilities with the technical work of supporting machine learning (ML) systems and artificial intelligence (AI) applications. The role involves designing and maintaining scalable data pipelines as well as integrating advanced AI/ML models into the data infrastructure, and is critical for enabling data scientists and ML engineers to efficiently train, test, and deploy models in production. It is also responsible for designing, building, and maintaining scalable data infrastructure and systems to support advanced analytics and business intelligence, and often involves leading and mentoring junior team members and collaborating with cross-functional teams.
Key Responsibilities:
Data Infrastructure for AI/ML:
- Design and implement robust data pipelines that support data preprocessing, model training, and deployment.
- Ensure that the data pipeline is optimized for high-volume and high-velocity data required by ML models.
- Build and manage feature stores that can efficiently store, retrieve, and serve features for ML models.
- Collaborate with ML engineers and data scientists to integrate machine learning models into production environments.
- Implement tools for model versioning, experimentation, and deployment (e.g., MLflow, Kubeflow, TensorFlow Extended).
- Support automated retraining and model monitoring pipelines to ensure models remain performant over time.
Core Data Engineering:
- Design and maintain scalable, efficient, and secure data pipelines and architectures.
- Develop data models (both OLTP and OLAP).
- Create and maintain ETL/ELT processes.
- Build automated pipelines to collect, transform, and load data from various sources (internal and external).
- Optimize data flow and collection for cross-functional teams.
- Develop CI/CD pipelines to deploy models into production environments.
- Implement model monitoring, alerting, and logging for real-time model predictions.
- Ensure high data quality, integrity, and availability.
- Implement data validation, monitoring, and alerting mechanisms.
- Support data governance initiatives and ensure compliance with data privacy laws (e.g., GDPR, HIPAA).
- Work with cloud platforms (AWS, Azure, GCP) and data engineering tools like Apache Spark, Kafka, Airflow, etc.
- Use containerization (Docker, Kubernetes) and CI/CD pipelines for data engineering deployments.
- Collaborate with data scientists, analysts, product managers, and other engineers.
- Provide technical leadership and mentor junior data engineers.
Soft Skills:
- Strong problem-solving and critical-thinking skills.
- Excellent communication and collaboration abilities.
- Leadership experience and the ability to guide technical decisions.
Tech Stack:
- Data Engineering: Apache Spark, Airflow, Kafka, dbt, ETL/ELT pipelines
- ML/AI Integration: MLflow, Feature Store, TensorFlow, PyTorch, Hugging Face
- GenAI: LangChain, OpenAI API, Vector DBs (FAISS, Pinecone, Weaviate)
- Cloud Platforms: AWS (S3, SageMaker, Glue), GCP (BigQuery, Vertex AI)
- Languages: Python, SQL, Scala, Bash
- DevOps & Infra: Docker, Kubernetes, Terraform, CI/CD pipelines
Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 3 to 6 years of experience in data engineering.
- Strong understanding of data modeling, ETL/ELT concepts, and distributed systems.
- Experience with big data tools and cloud platforms.
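The pipeline, validation, and monitoring responsibilities above can be sketched as a minimal extract-transform-load flow with a data-quality gate. This is a stdlib-only illustration, not any specific Xerox system: all record fields and function names are hypothetical, and a production version would read from sources like S3 or Kafka and orchestrate the steps with a tool such as Airflow rather than running in memory:

```python
# Hypothetical raw records from an upstream source (fields are illustrative).
RAW_ROWS = [
    {"id": 1, "amount": "120.50", "region": "NA"},
    {"id": 2, "amount": "not-a-number", "region": "EU"},  # fails validation
    {"id": 3, "amount": "75.00", "region": ""},           # fails validation
]

def extract():
    """Extract step: in production this would read from S3, Kafka, a DB, etc."""
    return list(RAW_ROWS)

def is_valid(row):
    """Data-quality gate: reject rows with unparsable amounts or a missing region."""
    try:
        float(row["amount"])
    except ValueError:
        return False
    return bool(row["region"])

def transform(rows):
    """Transform step: cast types and split valid rows from rejected ones."""
    valid, rejected = [], []
    for row in rows:
        if is_valid(row):
            valid.append({**row, "amount": float(row["amount"])})
        else:
            rejected.append(row)
    return valid, rejected

def load(rows, store):
    """Load step: write clean rows to an in-memory 'warehouse' keyed by id."""
    for row in rows:
        store[row["id"]] = row
    return store

warehouse = {}
valid, rejected = transform(extract())
load(valid, warehouse)
# Rejected rows would feed the monitoring/alerting mechanisms described above.
```

The same extract/validate/transform/load boundaries map naturally onto tasks in an orchestrator such as Airflow, with the rejected-row count exported as a data-quality metric.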
Required Skills
Engineering
Python
Cloud Platforms
Apache Spark
Data Modeling
Scala
BigQuery
Docker
Kubernetes
Terraform
CI/CD Pipelines
Kafka
TensorFlow
PyTorch
MLOps
Data Governance
Azure
MLflow
Kubeflow
Airflow
Bash
Data Engineering
DevOps
Data models
Pinecone
Data pipelines
SageMaker
Vertex AI
Glue
Computer Science