The individual will be responsible for designing, building, and maintaining scalable data solutions, including data models, ETL pipelines, and real-time processing systems. They will also mentor junior team members and play a pivotal role in overseeing the technical aspects of data-related projects.
🔹 Key Responsibilities (Must Have)
- Hands-on experience working on ETL processes in Data Warehouse or Big Data projects
- Exposure to cloud data services on AWS / Azure / GCP (ETL, data cleanup, BI, or related data services)
- Strong SQL expertise with the ability to write and optimize complex queries efficiently
- Identify, analyze, and resolve performance and scalability issues
- Work effectively as an individual contributor
- Basic understanding of reporting tools and data visualizations
- Actively participate in design and code reviews
- Assist in task estimation and planning
- Guide team members by promoting best practices, structured problem-solving, and quality standards
- Demonstrate strong communication and analytical skills
- High energy, enthusiasm, and willingness to learn and adapt quickly to new technologies
🔹 Good to Have
- Understanding of Agile methodologies (Scrum, Sprint-based development)
- Experience with Python
- Certifications in SQL and/or Cloud technologies
- Exposure to Master Data Management (MDM) or Data Governance
- Familiarity with streaming data processing, data lakes, and distributed data architectures
🔹 Why Join Us?
- Opportunity to work on modern cloud-based data platforms
- Fast learning curve with scope to be groomed into advanced data roles within 1–2 months
- Collaborative environment with a strong focus on growth and innovation