About the Position:
We are looking for a highly skilled and innovative Senior Data Engineer with strong expertise in Azure Databricks, PySpark, and Snowflake, along with hands-on experience integrating AI tools such as ChatGPT, Copilot, or Databricks AI Functions. The ideal candidate will architect and build scalable data pipelines, enable intelligent data processing, and drive AI-powered analytics across enterprise datasets.
- Role: Senior Data Engineer (ADB + Snowflake)
- Location: Pune, Hyderabad, Bangalore
- Experience: 6 to 12 Years
- Job Type: Full-Time Employment
What You'll Do:
- Design and develop scalable data pipelines using Azure Databricks, PySpark, and Delta Lake.
- Implement and optimize data warehousing solutions using Snowflake.
- Integrate AI capabilities into data workflows using tools like ChatGPT, Copilot, Databricks AI Functions, or Snowflake Copilot.
- Collaborate with data scientists and analysts to enable AI-powered insights, including summarization, classification, and sentiment analysis.
- Build and maintain ETL/ELT workflows, ensuring data quality, performance, and governance.
- Use LangChain, pyspark-ai, or similar libraries to embed LLMs into PySpark workloads.
- Work with stakeholders to understand business requirements and translate them into technical solutions.
- Ensure security, compliance, and performance optimization across cloud data platforms.
- Document architecture, workflows, and best practices for AI-enhanced data engineering.

Expertise You'll Bring:
- 6–12 years of experience in data engineering, with strong hands-on expertise in Azure Databricks, PySpark, and Snowflake.
- Deep understanding of Lakehouse architecture, Delta Lake, and data warehousing principles.
- Proficiency in Python, SQL, and Spark for building scalable data pipelines and transformation workflows.
- Experience integrating AI tools such as ChatGPT, Copilot, or Databricks AI Functions into data workflows for intelligent automation and enhanced analytics.
- Familiarity with LangChain, pyspark-ai, or similar frameworks for embedding LLMs into data engineering tasks.
- Knowledge of Snowflake Copilot, Snowpark, and AI-powered query generation.
- Strong grasp of data governance, security, and metadata management using tools like Unity Catalog.
- Experience with CI/CD pipelines, DevOps practices, and infrastructure-as-code tools (e.g., Terraform, GitHub Actions).
- Ability to collaborate with data scientists, analysts, and business stakeholders to deliver AI-enhanced insights.
- Excellent problem-solving, communication, and documentation skills.

Benefits:
- Competitive salary and benefits package
- Culture focused on talent development, with quarterly growth opportunities and company-sponsored higher education and certifications
- Opportunity to work with cutting-edge technologies
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
- Annual health check-ups
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents

Values-Driven, People-Centric & Inclusive Work Environment:
Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds.
We support hybrid work and flexible hours to fit diverse lifestyles. Our office is accessibility-friendly, with ergonomic setups and assistive technologies to support employees with physical disabilities. If you are a person with a disability and have specific requirements, please inform us during the application process or at any time during your employment.

Let's unleash your full potential at Persistent - persistent.com/careers
“Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind.”