What You Will Do:
The following are your high-level responsibilities, including but not limited to:
- Lead the design, development, and implementation of modern data pipelines, data models, and ETL/ELT processes.
- Architect and optimize data lake and warehouse solutions using Microsoft Fabric, Databricks, or Snowflake.
- Enable business analytics and self-service reporting through Power BI and other visualization tools.
- Collaborate with data scientists, analysts, and business users to deliver reliable and high-performance data solutions.
- Implement and enforce best practices for data governance, data quality, and security.
- Mentor and guide junior data engineers; establish coding and design standards.
- Evaluate emerging technologies and tools to continuously improve the data ecosystem.
Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field.
- 8+ years of experience in data engineering or data platform development, with at least 2-3 years in a lead or architect role.
- Strong hands-on experience in one or more of the following:
  - Microsoft Fabric (Data Factory, Lakehouse, Data Warehouse)
  - Databricks (Spark, Delta Lake, PySpark, MLflow)
  - Snowflake (Data Warehousing, Snowpipe, Performance Optimization)
  - Power BI (Data Modeling, DAX, Report Development)
- Proficiency in SQL and programming languages such as Python or Scala.
- Experience with Azure, AWS, or GCP cloud data services.
- Solid understanding of data modeling, data governance, security, and CI/CD practices.
Preferred Qualifications:
- Familiarity with data modeling techniques and practices for Power BI.
- Knowledge of Azure Databricks or other data processing frameworks.
- Knowledge of Microsoft Fabric or other cloud platforms.
Skills Required:
Databricks, Snowflake, Power BI, Python, SQL