Machine Learning Engineer – Conversational AI & Personalization
Weekday (YC W21)
Actively reviewing applications
Bengaluru, Karnataka, India
Full-Time
On-site
INR 30–40 LPA
Posted 3 weeks ago • Apply by June 9, 2026
Job Description
This role is for one of our clients.
Industry: Technology, Information and Internet
Seniority level: Mid-Senior
Min experience: 5 years
Location: Bangalore
Job type: Full-Time
We may use artificial intelligence (AI) tools to support parts of the hiring process, such as reviewing applications, analyzing resumes, or assessing responses. These tools assist our recruitment team but do not replace human judgment. Final hiring decisions are ultimately made by humans. If you would like more information about how your data is processed, please contact us.
Role Overview
We are seeking a highly skilled Machine Learning Engineer to build and scale AI-powered conversational and personalization systems. This role focuses on designing robust ML architectures that integrate large language models (LLMs), retrieval systems, and recommendation engines into reliable, production-grade platforms.
You will work at the forefront of applied AI—developing multilingual NLP systems, optimizing LLM orchestration, and deploying scalable machine learning pipelines that power intelligent chat, contextual recommendations, and personalized user experiences.
The ideal candidate thrives in fast-paced environments, understands both modern LLM ecosystems and classical ML approaches, and has a strong foundation in production ML system design.
Key Responsibilities
Conversational AI & LLM Engineering
- Design and deploy scalable conversational AI systems supporting multi-turn dialogue, contextual memory, and personalization.
- Build and optimize LLM orchestration layers, including prompt engineering, routing logic, fallback strategies, and multi-model selection.
- Integrate and manage models across platforms such as OpenAI, Claude, Qwen, LLaMA, and other modern LLM providers.
- Develop evaluation pipelines to measure response accuracy, contextual alignment, tone consistency, and hallucination reduction.
- Optimize production systems for latency, throughput, and cost efficiency using batching, caching, and prompt optimization techniques.
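The routing logic and fallback strategies mentioned above can be sketched in a few lines. This is an illustrative pure-Python example only; the provider functions (`flaky_primary`, `stable_fallback`) are hypothetical stand-ins, not a real model SDK.

```python
# Minimal sketch of an LLM fallback router: try providers in priority
# order and fall back to the next one on failure. All clients here are
# hypothetical stand-ins, not real provider APIs.

class ModelError(Exception):
    """Raised by a provider call that could not produce a response."""

def flaky_primary(prompt: str) -> str:
    # Stand-in for a primary hosted model that is currently failing.
    raise ModelError("rate limited")

def stable_fallback(prompt: str) -> str:
    # Stand-in for a cheaper, more reliable fallback model.
    return f"fallback answer to: {prompt}"

def route(prompt: str, providers) -> str:
    """Return the first successful response, trying providers in order."""
    last_err = None
    for call in providers:
        try:
            return call(prompt)
        except ModelError as err:
            last_err = err  # remember the failure and try the next provider
    raise RuntimeError(f"all providers failed: {last_err}")

result = route("hello", [flaky_primary, stable_fallback])
```

A production router would add per-provider timeouts, retries, and cost/latency-aware model selection on top of this ordering.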
Retrieval & Knowledge Augmentation
- Implement and manage vector search infrastructure using tools such as Qdrant, Pinecone, FAISS, or equivalent.
- Architect Retrieval-Augmented Generation (RAG) pipelines to enhance model outputs with contextual knowledge.
- Work with structured and unstructured datasets to improve relevance and factual accuracy.
- Design document ingestion, embedding, and indexing workflows for scalable knowledge systems.
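The retrieval step at the heart of a RAG pipeline reduces to similarity search over document embeddings. A toy sketch, with hand-written vectors standing in for embedding-model output (not a real vector database such as Qdrant or FAISS):

```python
# Toy retrieval step of a RAG pipeline: rank documents by cosine
# similarity to a query vector. The 3-d vectors below are illustrative
# stand-ins for real embedding-model output.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Pretend these vectors came from an embedding model.
docs = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.1],
    "warranty terms": [0.8, 0.2, 0.1],
}

def retrieve(query_vec, k=2):
    """Rank documents by cosine similarity and return the top-k ids."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

top = retrieve([1.0, 0.0, 0.0])
```

The retrieved passages would then be inserted into the LLM prompt; a real system replaces the dictionary with an indexed vector store and the toy vectors with model embeddings.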
Personalization & ML Pipelines
- Build end-to-end ML pipelines for user segmentation, classification, and recommendation systems.
- Develop ranking models and intelligent content transformation systems combining LLMs with traditional ML techniques.
- Create adaptive personalization engines that evolve based on user behavior and contextual signals.
- Ensure robust monitoring, retraining strategies, and model lifecycle management.
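An adaptive personalization engine of the kind described above ultimately turns behavioral signals into a ranking. A deliberately minimal sketch with toy data (the click/catalog schema is invented for illustration, not a production recommender):

```python
# Illustrative behavior-driven ranking: order catalog items by how often
# the user clicked their category. Toy schema and data, not a real
# recommendation engine.
from collections import Counter

def rank_items(click_history, catalog):
    """Order catalog items by the user's click frequency per category."""
    category_weight = Counter(item["category"] for item in click_history)
    return sorted(
        catalog,
        key=lambda item: category_weight[item["category"]],
        reverse=True,
    )

clicks = [{"category": "sports"}, {"category": "sports"}, {"category": "news"}]
catalog = [
    {"id": 1, "category": "news"},
    {"id": 2, "category": "sports"},
    {"id": 3, "category": "finance"},
]
ranked = rank_items(clicks, catalog)
```

In practice the counting step would be replaced by a learned ranking model with recency decay and contextual features, but the pipeline shape (signals in, ordered items out) is the same.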
Multilingual NLP & Model Optimization
- Fine-tune and deploy models for Hindi and multilingual Natural Language Understanding (NLU) and Natural Language Generation (NLG).
- Experiment with and productionize open-source LLMs such as Qwen, Mixtral, and LLaMA in low-latency environments.
- Evaluate and optimize closed-source LLM performance for cost and quality trade-offs.
- Implement pipelines using frameworks like LangChain, RAG toolkits, FAISS/Chroma vector stores, and multilingual translation systems such as IndicTrans2 and NLLB.
Cross-Functional Collaboration
- Partner with product teams and domain experts to translate business needs into scalable ML solutions.
- Contribute to system design decisions around architecture, scalability, and deployment.
- Continuously benchmark, experiment, and iterate to improve model performance and system robustness.
- Drive best practices in ML experimentation, evaluation, and production deployment.
Required Qualifications
- 5+ years of experience in Machine Learning or Applied AI roles.
- Strong hands-on expertise with Large Language Models, RAG systems, and vector databases.
- Proven experience building and deploying production-grade ML systems.
- Proficiency in Python and ML/NLP frameworks such as PyTorch, TensorFlow, and Hugging Face Transformers.
- Experience with multilingual NLP systems, particularly Indian language processing.
- Deep understanding of prompt engineering, evaluation methodologies, and hallucination mitigation.
- Solid knowledge of scalable system design, model serving, latency optimization, and cost-aware deployment strategies.
Core Skills
Machine Learning Engineering
- Generative AI
- Large Language Models (LLMs)
- Retrieval-Augmented Generation (RAG)
- Vector Search Systems
- Multilingual NLP
- Model Fine-Tuning
- Recommendation Systems
- Production ML Architecture
- Scalable AI Systems