Description:
As a Staff Software Engineer, you will serve as a key technical leader in designing and implementing data and reporting solutions across the Snowflake consumption layer and our reporting ecosystem.
You'll work closely with the Data Platform, Product, and Analytics teams to build reliable data pipelines, reporting models, APIs, and self-service tools that ensure trust, transparency, and performance in our data products.
This role combines data platform expertise with reporting system design, ideal for someone who can bridge upstream data architecture with downstream business intelligence needs.
Responsibilities:
- Architect and implement scalable reporting data models in Snowflake to support ThoughtSpot and Power BI consumption.
- Design and maintain robust semantic models, data sharing mechanisms, and APIs for downstream consumers.
- Partner with the Data Platform team to align ingestion, conformance, and consumption patterns across ADLS, Iceberg, Airflow, and IICS.
- Define and implement data governance, quality validation, and security standards across reporting pipelines.
- Build performant, reusable data transformations and APIs for reporting and dashboards.
- Integrate with Snowflake Data Sharing, webhooks, and REST/GraphQL endpoints to deliver customer-facing insights.
- Lead proof-of-concepts for new reporting frameworks and data sharing capabilities.
- Ensure reliability, accuracy, and auditability across all data presented to end users.
- Implement data validation and quality frameworks (e.g., dbt tests, Great Expectations).
- Monitor data freshness and pipeline health through observability tools and Snowflake monitoring.
- Collaborate with DAAS and Platform teams to triage and resolve data quality issues.
- Partner with product managers, analysts, and business stakeholders to translate requirements into scalable data solutions.
- Mentor engineers in data modeling, performance tuning, and best practices for Snowflake and BI systems.
- Contribute to documentation, standards, and architecture reviews.
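To illustrate the validation and freshness work described above, here is a minimal sketch of the kind of checks a framework such as dbt or Great Expectations formalizes. It uses SQLite in place of Snowflake for portability, and all table and column names (`claims`, `amount`, `loaded_at`) are hypothetical:

```python
import sqlite3

# Hypothetical reporting table; in practice this would live in Snowflake
# and the checks would be expressed as dbt tests or expectation suites.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE claims (claim_id TEXT, amount REAL, loaded_at TEXT)")
conn.executemany(
    "INSERT INTO claims VALUES (?, ?, ?)",
    [("c1", 120.0, "2024-05-01"), ("c2", 80.5, "2024-05-02"), ("c3", None, "2024-05-02")],
)

def run_quality_checks(conn, max_null_rate=0.5):
    """Return a dict of check name -> result, mirroring the not-null,
    uniqueness, and freshness tests a data quality framework would run."""
    total = conn.execute("SELECT COUNT(*) FROM claims").fetchone()[0]
    nulls = conn.execute(
        "SELECT COUNT(*) FROM claims WHERE amount IS NULL"
    ).fetchone()[0]
    latest = conn.execute("SELECT MAX(loaded_at) FROM claims").fetchone()[0]
    return {
        "has_rows": total > 0,
        "not_null_amount": nulls / total <= max_null_rate,
        "unique_claim_id": conn.execute(
            "SELECT COUNT(*) = COUNT(DISTINCT claim_id) FROM claims"
        ).fetchone()[0] == 1,
        "latest_load": latest,  # compare against an SLA to flag staleness
    }

results = run_quality_checks(conn)
```

A real pipeline would run such checks on a schedule (e.g., as an Airflow task) and surface failures through the observability tooling mentioned above.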
Requirements:
8+ years of experience in data engineering, data products, or BI systems.
Deep expertise in:
1. Snowflake - modeling, performance optimization, RBAC, and data sharing
2. SQL - advanced queries, window functions, analytical optimizations
3. ThoughtSpot / Power BI - data modeling, embedding, semantic layer design
4. Python / PySpark - data transformations and automation
5. Airflow, Informatica IICS, or equivalent for orchestration
6. Azure ecosystem - ADLS, ADF, and related services.
Strong understanding of data modeling patterns (star schema, Iceberg, medallion architecture).
Familiarity with Kafka/Debezium for CDC-based ingestion.
Experience building or consuming REST/GraphQL APIs for data products.
Bachelor's degree in Computer Science, Software Engineering, or a related technical field; a Master's or PhD in Computer Science, Data Science, Machine Learning, Artificial Intelligence, or Statistics is a strong plus.
Preferred:
Hands-on experience with dbt, Great Expectations, or similar data quality frameworks.
Experience embedding analytics through ThoughtSpot Everywhere or APIs.
Knowledge of Iceberg table management, webhooks, and event-driven pipelines.
Background in healthcare, benefits, or financial data domains.
(ref: hirist.tech)