We’re Hiring: Data Engineer (AWS | Python | PySpark | SQL)
Work Type: 100% Remote
Experience: 5–7 years
Notice Period: Immediate joiners preferred; up to 15 days acceptable
About the Role:
We are seeking a highly skilled Data Engineer to design, develop, and optimize robust data pipelines and cloud-based architectures. You will work with AWS services, Python, PySpark, SQL, and Redshift to deliver scalable, reliable, high-performance data solutions.
Key Responsibilities:
Design and implement scalable data pipelines using AWS services and PySpark.
Develop data workflows and ETL processes using Python and SQL.
Optimize and manage Redshift databases for performance, scalability, and data integrity.
Monitor, troubleshoot, and optimize data systems for cost-effectiveness and efficiency.
Must-Have Skills:
Redshift – Expertise in advanced querying, optimization, and database management.
AWS – Hands-on experience with Glue, S3, EMR, Lambda.
Python – Strong scripting, automation, and data transformation skills.
PySpark – Proven experience handling large datasets and distributed data processing.
Nice-to-Have Skills:
Familiarity with data warehousing concepts and big data architecture.
Why Join Us?
100% remote work flexibility.
Opportunity to work on cutting-edge data engineering projects.
Collaborative and innovative work culture.
Compensation:
Competitive and commensurate with skills and experience; no limits for exceptional talent.
Apply now and be part of a team that’s redefining data-driven solutions.
Interested candidates may share their resumes at hr@namasys.ai.