AWS Data Engineer - PySpark with Databricks or Snowflake
ITC Infotech
Job Description
Data Engineer - Core Technical Skills

Snowflake & Spark
- Building and managing scalable data pipelines.
- Spark-based transformations and ETL workflows.
- Expertise in PySpark, including optimization techniques and cost management.
- Snowflake-specific capabilities:
  - Performance tuning and query optimization.
  - Partitioning and clustering strategies.
  - Cost control and resource management.
  - Advanced features such as Time Travel, Zero-Copy Cloning, and Streams & Tasks for data engineering workflows.
- Delta Lake concepts (ACID transactions, Z-Ordering, OPTIMIZE, VACUUM) for hybrid architectures.

SQL & Relational Databases
- Advanced SQL query writing.
- PostgreSQL expertise (window functions, CTEs, query plans, indexing strategy).

Streaming & Messaging
- Apache Kafka for real-time ingestion and topic management.
- Understanding of event-driven architecture.

AWS & Cloud Services
- Proficiency in AWS Glue, Lambda, Step Functions, and AWS data analytics services.
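The Snowflake features called out above (Time Travel and Zero-Copy Cloning) can be sketched in a few lines of Snowflake SQL. The table names below (`orders`, `orders_dev`, `orders_restored`) are hypothetical, used only for illustration:

```sql
-- Time Travel: query a table as it existed one hour ago
-- (OFFSET is given in seconds relative to the current time).
SELECT *
FROM orders AT(OFFSET => -3600);

-- Zero-Copy Cloning: create an instant copy that shares the
-- underlying storage until either table is modified.
CREATE TABLE orders_dev CLONE orders;

-- The two features combine: clone the table from a point in time,
-- e.g. to recover rows deleted by a bad job run.
CREATE TABLE orders_restored CLONE orders AT(OFFSET => -3600);
```

Note that Time Travel queries only work within the table's data retention period, which varies by Snowflake edition and table configuration.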