Job Description – Data Architect
We are seeking a highly skilled Data Architect with 10–12 years of experience in designing and implementing enterprise data strategies and architectures from the ground up. The ideal candidate will have strong expertise in Data Lakes, Data Warehouses, and Data Lakehouses, with a focus on Snowflake (priority), along with hands-on experience in Databricks, Azure Data Factory, Python, and Apache Spark. The role requires proven ability in architecting complex, large-scale data ecosystems that handle both batch and streaming data pipelines.
Key Responsibilities
- Define and execute the enterprise data architecture strategy to support analytics, BI, and AI/ML initiatives.
- Architect and implement Data Lakes, Data Warehouses, and Data Lakehouses on modern cloud platforms.
- Lead data platform modernization using Snowflake (preferred), Databricks, Azure Data Factory, Python, and Spark.
- Design and optimize batch and streaming data pipelines for scalable, secure, and reliable data processing.
- Develop frameworks for data ingestion, transformation, orchestration, and automation.
- Establish and enforce data governance, metadata management, and quality standards.
- Partner with business stakeholders, data engineers, and analytics teams to ensure alignment with enterprise needs.
- Provide technical leadership and mentoring to engineering teams, instilling best practices.
- Stay updated with emerging technologies to drive continuous data platform modernization.
Required Skills & Experience
- 10–12 years of overall IT experience, with at least 8 years as a Data Architect.
- Proven experience in architecting and implementing Data Lakes, Data Warehouses, and Data Lakehouses.
- Strong hands-on expertise with:
  - Snowflake (priority focus).
  - Databricks and Apache Spark (batch + streaming).
  - Azure Data Factory (ADF) for orchestration and data integration.
  - Python for data engineering, automation, and custom transformations.
- Strong knowledge of ETL/ELT design, data modeling, schema design, and query optimization.
- Experience with real-time streaming frameworks (Kafka, Spark Streaming, Event Hubs, etc.).
- Familiarity with data governance, lineage, cataloging, and compliance frameworks.
- Solid understanding of cloud-native services (Azure/AWS/GCP).
- Excellent communication, problem-solving, and leadership skills.
Work Location
Remote (one onsite visit is required to pick up a laptop).