Role : Data Engineer
Experience : 6+ years
Location : Pune / Hyderabad
Work Mode : WFO
Job Profile : We are seeking a proactive Data Engineer to lead and manage critical data infrastructure that powers our products and analytics. You will design and scale high-performance data pipelines and models across both our Data Lake and Data Warehouse environments, and be responsible for improving the reliability, scalability, and automation of our data systems, ensuring the delivery of high-quality, trustworthy data across the organization.
Skill set :
- ETL orchestration and workflows : Airflow
- Programming : Python / Kotlin / Scala
- Streaming : Kafka / Flink / Spark
- Databases : data modelling, performance tuning, querying (SQL & Presto)
- Data Lake / Warehouse : Snowflake
Roles and responsibilities :
- Own critical data systems that support multiple products / teams
- Develop, implement and enforce best practices for data infrastructure and automation
- Design, develop and implement large-scale, high-volume, high-performance data models and pipelines for the Data Lake and Data Warehouse
- Improve the reliability and scalability of our ingestion, data processing, ETL, reporting tools and data ecosystem services
- Manage a portfolio of data pipelines that deliver high-quality, trustworthy data
Other specifications :
- 5+ years of experience in data platform / data engineering or a similar role
- Proficiency in programming languages such as Python / Kotlin / Scala
- 5+ years of experience with ETL orchestration and workflow management tools such as Airflow
- Expert in database fundamentals, SQL, data reliability practices and distributed computing
- 5+ years of experience with distributed data ecosystems (Spark, Presto) and streaming technologies such as Kafka / Flink / Spark Streaming
- Excellent communication skills, experience working with technical and non-technical teams, and knowledge of reporting tools
- Comfortable working in a fast-paced environment; self-starter and self-organizing
(ref : hirist.tech)