SDE 1 – Data Platform Engineer
Overview
We are looking for a passionate and motivated Software Development Engineer (SDE 1) – Data Platform to join our growing Data Platform Engineering team at Licious.
In this role, you’ll design, build, and optimize data systems and pipelines that fuel analytics, reporting, and data-driven decision-making across the organization. You’ll work on modern data platforms, collaborate with experienced engineers, and gain hands-on exposure to cutting-edge data and cloud technologies.
This opportunity is ideal for someone who enjoys solving complex data problems, working with large-scale systems, and continuously learning in a fast-paced, high-impact environment.
What You’ll Do
Data Pipelines & Infrastructure
- Design and develop scalable ETL/ELT pipelines and workflows using orchestration tools such as Airflow, Azkaban, or Luigi.
- Build batch and real-time data processing systems using technologies like Apache Spark, Flink, Kafka, and Debezium.
- Implement data quality, observability, and lineage frameworks to ensure reliable, accurate, and trusted data delivery.
Data Management & Optimization
- Develop and maintain data ingestion, transformation, and storage layers using modern data lake and warehouse platforms (AWS S3, Redshift, Snowflake, Databricks).
- Write and optimize SQL queries and data models to support analytical and reporting workloads.
- Collaborate with senior engineers to enhance system performance, scalability, and cost efficiency.
Collaboration & Delivery
- Partner with data analysts, ML engineers, and product teams to translate business needs into scalable data solutions.
- Work closely with platform and DevOps teams to ensure secure, efficient, and automated data operations.
- Participate in code reviews, maintain high-quality documentation, and follow software engineering best practices.
Continuous Learning
- Stay updated with emerging data technologies, frameworks, and cloud solutions.
- Contribute to initiatives in data standardization, governance, and automation to strengthen the organization's data foundation.
What You'll Bring
Technical Skills:
- Strong understanding of data engineering fundamentals: ETL/ELT design, data modeling, warehousing, and lake architectures.
- Hands-on experience with one or more data technologies such as Spark, Kafka, or Airflow.
- Proficiency in Python, Java, or Scala for data engineering and automation tasks.
- Strong SQL skills and familiarity with relational databases (e.g., MySQL, PostgreSQL).
- Exposure to cloud environments, preferably AWS, including services like S3, Glue, Redshift, and EMR.
Foundational Knowledge:
- Understanding of software engineering practices: version control (Git), testing, CI/CD, and documentation.
- Basic awareness of data governance, observability, and quality monitoring concepts.
Soft Skills:
- Strong analytical thinking and problem-solving mindset.
- Collaborative and eager to learn from experienced mentors.
- Detail-oriented, reliable, and driven to build scalable, high-performance systems.
Preferred Qualifications:
- Experience or exposure to data lakehouse architectures.
- Familiarity with containerization (Docker) and orchestration (Kubernetes).
- Understanding of MLOps, data observability, or data catalog frameworks.
- Prior experience in high-growth, consumer-tech, or digital-first environments.