Overview
We are seeking a highly skilled and experienced SDE 3 – Data Platform Engineer with strong hands-on expertise and proven leadership capabilities to drive the design, development, and optimization of our data infrastructure. This role requires deep technical proficiency, architectural foresight, and strategic thinking to ensure our data systems are scalable, high-performing, and reliable. You will be responsible for managing high-volume, high-velocity data pipelines, making data seamlessly accessible across teams, and enabling data-driven decision-making at scale. As a key technical leader, you will help shape the foundation of Licious’s data ecosystem and its long-term data strategy.
What You’ll Do
- Technical Leadership:
- Lead and mentor a team of data engineers, fostering best practices in data architecture, coding standards, and operational excellence.
- Drive technical discussions, design reviews, and code reviews to ensure scalability, reliability, and maintainability.
- Architecture & Strategy:
- Define and own the data engineering roadmap and long-term data strategy in alignment with Licious’s business goals.
- Architect and implement scalable, fault-tolerant, and high-performance data platforms that can handle high-volume and high-velocity data.
- Continuously assess and evolve the data architecture to meet growing business and technical demands.
- Data Infrastructure & Pipelines:
- Design and develop robust ETL/ELT pipelines and workflow orchestration using tools like Airflow, Azkaban, or Luigi (a minimal, illustrative orchestration sketch follows this list).
- Build real-time and batch data processing systems using technologies such as Apache Spark, Flink, Kafka, and Debezium.
- Implement data quality checks, observability, lineage, and monitoring frameworks to ensure reliable data delivery.
- Data Governance & Quality:
- Establish and enforce best practices around data governance, security, lineage, and compliance.
- Ensure data integrity, accuracy, and reliability across the data ecosystem.
- Define data retention, backup, and disaster recovery strategies.
- Collaboration & Influence:
- Partner with product, analytics, and ML teams to define data requirements and optimize data access.
- Collaborate with DevOps and platform engineering teams to optimize infrastructure cost, reliability, and performance.
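To ground the pipeline and data-quality responsibilities above, here is a minimal, purely illustrative Airflow sketch (not part of the role description). It wires an extract, transform, quality-check, and load flow with retries; the DAG name, task names, placeholder callables, and daily schedule are all hypothetical, and it assumes an Airflow 2.4+ environment.

```python
# Illustrative only: a minimal Airflow DAG with hypothetical task names and
# placeholder callables, sketching an extract -> transform -> quality check -> load flow.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders(**_):
    """Placeholder: pull incremental order events from a source system."""


def transform_orders(**_):
    """Placeholder: clean, deduplicate, and conform records to the warehouse schema."""


def run_quality_checks(**_):
    """Placeholder: row-count, null, and freshness checks; raise to fail the run."""


def load_to_warehouse(**_):
    """Placeholder: publish the curated dataset to the warehouse or lakehouse."""


with DAG(
    dag_id="orders_daily_etl",                 # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                         # requires Airflow 2.4+
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    transform = PythonOperator(task_id="transform_orders", python_callable=transform_orders)
    quality = PythonOperator(task_id="run_quality_checks", python_callable=run_quality_checks)
    load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)

    # Quality checks gate the load: a failed check stops bad data from reaching consumers.
    extract >> transform >> quality >> load
```

The explicit quality-check task between transform and load mirrors the data quality, observability, and monitoring responsibilities listed above.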
What You’ll Bring
- Extensive hands-on experience with big data tools and frameworks:
- Batch & Stream Processing: Spark (Structured Streaming), Flink, Kafka, Debezium (a minimal streaming sketch follows this list).
- Data Warehousing & Analytics: AWS Redshift, ClickHouse, Snowflake, Databricks, Delta Lake, Presto/Trino, Hive, Hudi, or Iceberg.
- Data Orchestration: Airflow, Azkaban, Luigi.
- Proficiency with databases:
- Strong knowledge of SQL (MySQL, PostgreSQL) and NoSQL (MongoDB, Cassandra) systems.
- Expertise in data modeling, partitioning, indexing, and performance optimization.
- Cloud & Infrastructure Expertise:
- Proven experience with AWS services such as EC2, S3, EMR, Glue, RDS, Redshift, Lambda, MSK, and SQS.
- Experience designing secure, cost-efficient, and scalable data solutions in the cloud.
- Programming Skills:
- Strong command of Python, Scala, or Java for building scalable data applications and frameworks.
- Solid understanding of software engineering principles, including CI/CD, testing, and version control.
- Soft Skills & Leadership:
- Proven ability to lead technical teams, mentor engineers, and drive architectural decisions.
- Strong problem-solving skills, attention to detail, and the ability to balance strategic and hands-on work.
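As a hedged illustration of the batch and stream processing skills above, the sketch below shows a minimal PySpark Structured Streaming job that reads order events from a Kafka topic and appends them to a Parquet sink with a watermark for late data. The broker address, topic name, event schema, and S3 paths are hypothetical, and the job assumes the Spark-Kafka connector package is available on the cluster.

```python
# Illustrative only: a minimal PySpark Structured Streaming job.
# Broker, topic, schema, and paths are hypothetical; the spark-sql-kafka
# connector package must be available on the cluster for the "kafka" source.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

# Hypothetical event schema for order status changes.
event_schema = StructType([
    StructField("order_id", StringType()),
    StructField("status", StringType()),
    StructField("event_time", TimestampType()),
])

# Read raw events from a Kafka topic (e.g. populated by Debezium CDC).
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "orders")                     # hypothetical topic
    .option("startingOffsets", "latest")
    .load()
)

# Parse the JSON payload and bound state for late-arriving events.
events = (
    raw.select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
    .withWatermark("event_time", "10 minutes")
)

# Append to a Parquet sink; the checkpoint makes the stream restartable.
query = (
    events.writeStream.format("parquet")
    .option("path", "s3a://example-bucket/curated/orders/")           # hypothetical path
    .option("checkpointLocation", "s3a://example-bucket/checkpoints/orders/")
    .outputMode("append")
    .trigger(processingTime="1 minute")
    .start()
)

query.awaitTermination()
```

The checkpoint location and watermark are what keep the stream restartable and bounded in state, which maps directly to the reliability and high-velocity requirements described in the overview.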
Preferred Qualifications
- Experience with modern data lakehouse architectures.
- Familiarity with containerization and orchestration (Docker, Kubernetes).
- Understanding of MLOps and data observability frameworks.
- Prior experience in high-growth or consumer-tech environments.