Key Responsibilities:
Data Engineering & Architecture
- Design, build, and operationalize enterprise-scale data solutions using AWS services (Spark, EMR, DynamoDB, Redshift, Kinesis, Lambda, Glue).
- Build data pipeline frameworks for high-volume, real-time data ingestion and processing.
- Develop and maintain optimal ETL architecture and workflows.
- Work with NoSQL databases (DynamoDB, MongoDB) and messaging systems (Kafka, Kinesis).
- Implement internal process improvements to automate manual processes, optimize data delivery, and redesign infrastructure for scalability.
Analytics & Insights
- Build analytics tools that provide actionable insights into customer acquisition, operational efficiency, and key business metrics.
- Support data scientists and analytics teams with data infrastructure and tools.
- Evangelize high standards of quality, reliability, and performance for data models and algorithms.
Cloud & Data Security
- Utilize AWS cloud data lake solutions for real-time or near-real-time use cases.
- Ensure data security across multiple regions and compliance with data separation standards.
Collaboration & Stakeholder Support
- Work with Executive, Product, Data, and Design teams to resolve data-related technical issues.
- Assist teams in leveraging data infrastructure to optimize product performance.
- Create prototypes and proof-of-concepts to support iterative development.
Skills Required
Data Engineering, AWS, ETL, Data Modeling, Data Architecture, Data Pipelines