Key Responsibilities
- Develop and implement data processing solutions using Databricks and PySpark to optimize workflows and improve efficiency.
- Collaborate with data engineers and analysts to design scalable AWS-based data architectures.
- Analyze complex datasets to identify trends and insights that inform strategic decision-making.
- Provide technical guidance and mentorship to team members, fostering a collaborative environment.
- Ensure data quality and integrity through robust validation and cleansing processes.
- Optimize existing data pipelines and workflows for improved performance and reduced processing time.
- Conduct code reviews to maintain high coding standards and best practices.
- Stay updated with emerging industry trends and technologies to enhance data processing capabilities.
- Participate in Agile development processes, including sprint planning, stand-ups, and retrospectives.
- Troubleshoot and resolve technical issues related to data processing and AWS infrastructure.
- Document technical specifications and workflows to facilitate knowledge sharing.
- Engage with stakeholders to translate business requirements into actionable technical solutions.
- Contribute to impactful data-driven solutions aligned with organizational objectives.
Skills Required
Azure Databricks, PySpark, AWS, Data Processing, Storage Solutions, Data Architecture