Job Title : Big Data Professional
Job Description : We are seeking a Big Data Professional to drive our data engineering initiatives. The ideal candidate will have a strong foundation in Python for data engineering, including PySpark and pandas.
Key Responsibilities :
- Develop and maintain large-scale data processing pipelines using PySpark.
- Design and implement efficient data models in SQL, and optimize queries for analytics.
- Work with Databricks workflows, notebooks, and Delta Lake to ensure seamless data integration and analysis.
- Collaborate with cross-functional teams to integrate Azure Data Factory, Azure Data Lake Storage Gen2, and Azure Synapse Analytics into our data infrastructure.
- Maintain a solid understanding of Postgres schema design, indexing, and query tuning.
- Apply Generative AI concepts, including integration with Azure OpenAI or similar services.
- Apply CI/CD and Git-based workflows to ensure smooth project delivery.
Requirements :
- Total Experience : 6+ years
- Relevant Experience : Data engineering and analytics
- Current CTC and Expected CTC
- Notice Period : Immediate to 2 weeks
- Location : Any Xebia location (Hybrid, 3 days office per week)
Skills & Expertise :
- Proficient in Python for data engineering.
- Strong SQL skills for data modeling, optimization, and analytics.
- Hands-on expertise in Databricks workflows, notebooks, and Delta Lake.
- Experience with Azure Data Factory, Azure Data Lake Storage Gen2, and Azure Synapse Analytics.
- Solid understanding of Postgres schema design, indexing, and query tuning.
- Exposure to Generative AI concepts and integration with Azure OpenAI or similar services.
- Familiarity with CI/CD and Git-based workflows.
Benefits :
- Competitive compensation package.
- Opportunities for growth and professional development.
- Collaborative and dynamic work environment.
Why Join Us :
- We are a leading technology consulting firm.
- We offer a range of benefits and perks.
- We are committed to diversity, equity, and inclusion.