Role - AWS Data Engineer
Skills - AWS & PySpark
Job Locations - Pune, Greater Noida & Hyderabad
Experience - 6+ years
We at Coforge are hiring AWS Data Engineers with the following responsibilities and skill set:
- Design, develop, and maintain robust ETL/ELT pipelines using Apache Spark, Airflow, or similar tools.
- Build and optimize data architectures (data lakes, data warehouses, etc.) on cloud platforms such as AWS, Azure, or GCP.
- Collaborate with cross-functional teams to understand data requirements and deliver solutions.
- Ensure data quality, integrity, and security across all data systems.
- Monitor and troubleshoot data pipeline performance and reliability.
- Implement best practices for data governance and metadata management.
- Automate data ingestion and transformation processes.
- Maintain documentation for data systems and processes.
- Proficiency in SQL and programming languages like Python or Scala.
- Experience with big data technologies (e.g., Hadoop, Spark, and Kafka).
- Hands-on experience with cloud data platforms (Amazon Redshift, Azure Synapse, and Google BigQuery).
- Familiarity with data modeling, warehousing, and performance tuning.
- Experience with CI/CD pipelines and DevOps practices.
- Knowledge of data privacy regulations (e.g., GDPR and HIPAA).
- Exposure to machine learning workflows and data science collaboration.
- Certifications in cloud platforms or data engineering tools.