Talent.com
Sr. Engineer (MS Fabric Migration & Development)

Innover Digital, Bangalore (division)
1 day ago
Job description
  • Hands-on Migration and Development
  • Pipeline Translation: Directly develop and migrate 20+ complex data pipelines from Azure Data Factory (ADF) into optimized Fabric Data Pipelines and Spark Notebooks (PySpark/Scala).
  • Stored Procedure Conversion: Rewrite and optimize 30+ complex stored procedures into robust, scalable transformations using PySpark in Fabric Lakehouses or high-performance T-SQL in Fabric Data Warehouses.
  • Data Modeling: Implement the designed data architecture (Medallion layers: Bronze, Silver, Gold) within OneLake, focusing on Delta Lake best practices and V-Order optimization for efficient querying.
  • Large Data Handling: Execute the migration of 30+ TB of historical data efficiently and accurately, ensuring data integrity and reconciliation between the legacy and Fabric environments.
  • Unit & Integration Testing: Develop and execute comprehensive unit and integration tests to validate the correctness and performance of migrated data pipelines and stored procedures.
  • Performance Tuning: Work closely with the Technical Lead to identify and resolve data bottlenecks and optimize Spark code and Warehouse queries to meet latency requirements.
  • Collaboration and DevOps Support
  • CI/CD Adherence: Operate within the established DevOps pipeline, ensuring all code changes, notebooks, and object definitions are managed under Git version control and deployed via the automated CI/CD framework.
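The reconciliation step described above (validating data integrity between the legacy and Fabric environments) can be sketched in plain Python. All table names and rows here are illustrative; an actual migration would feed this from the legacy source and the migrated Fabric tables:

```python
import hashlib

def checksum(rows):
    """Order-independent checksum over a list of row dicts (illustrative)."""
    digest = 0
    for row in rows:
        h = hashlib.sha256(repr(sorted(row.items())).encode()).hexdigest()
        digest ^= int(h, 16)  # XOR so row order does not affect the result
    return digest

def reconcile(legacy_rows, fabric_rows):
    """Compare row counts and content checksums between the two environments."""
    return {
        "count_match": len(legacy_rows) == len(fabric_rows),
        "checksum_match": checksum(legacy_rows) == checksum(fabric_rows),
    }

# Hypothetical sample: same data, different physical ordering after migration
legacy = [{"id": 1, "amt": 10.0}, {"id": 2, "amt": 20.5}]
fabric = [{"id": 2, "amt": 20.5}, {"id": 1, "amt": 10.0}]
print(reconcile(legacy, fabric))  # → {'count_match': True, 'checksum_match': True}
```

At 30+ TB scale the same idea would typically run as aggregate queries (counts and hash totals per partition) rather than row-by-row in Python, but the pass/fail logic is the same.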
Required Qualifications

    • Experience: 5+ years of dedicated experience in ETL/ELT development and data warehousing, with involvement in at least one major cloud data migration.
    • Core Skills: Strong, demonstrable experience with the Azure Data Stack (ADF, Synapse) and hands-on development experience in Microsoft Fabric.
    • Data Languages: Highly proficient in T-SQL for querying and procedural logic, with required expertise in Python (PySpark) for data transformation.
    • Data Modeling: Solid understanding of Kimball-style data warehousing, dimensional modeling, and modern Lakehouse concepts (Delta Lake, Parquet).
    • Tools: Experience working with Git for source control and familiarity with the principles of CI/CD deployment.
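As an illustration of the Kimball-style dimensional modeling named in the qualifications, here is a minimal star-schema aggregation in plain Python (table names, keys, and values are hypothetical; in Fabric this would be a PySpark or T-SQL join of a fact table to a dimension):

```python
# Hypothetical star schema: one dimension keyed by surrogate key, one fact table
dim_customer = {
    1: {"customer_key": 1, "name": "Acme", "region": "EMEA"},
    2: {"customer_key": 2, "name": "Globex", "region": "APAC"},
}
fact_sales = [
    {"customer_key": 1, "amount": 100.0},
    {"customer_key": 2, "amount": 250.0},
    {"customer_key": 1, "amount": 50.0},
]

def sales_by_region(facts, dim):
    """Join facts to the dimension on the surrogate key, then aggregate."""
    totals = {}
    for row in facts:
        region = dim[row["customer_key"]]["region"]
        totals[region] = totals.get(region, 0.0) + row["amount"]
    return totals

print(sales_by_region(fact_sales, dim_customer))  # → {'EMEA': 150.0, 'APAC': 250.0}
```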