Job Responsibilities:
- Design, develop, and implement data engineering solutions leveraging various GCP offerings.
- Work hands-on with BigQuery, BigTable, and PySpark for data processing, storage, and analytics.
- Collaborate with development teams to automate build, test, and deployment processes, contributing to CI/CD pipelines.
- Troubleshoot and resolve complex issues related to data pipelines, builds, tests, and deployments on GCP.
- Monitor and continuously improve the performance and reliability of data pipelines and deployed solutions.
- Contribute to the development of Java applications as needed, supporting end-to-end solution delivery.
- Stay current with the latest GCP offerings, features, and best practices to ensure optimal, modern solutions.
- Apply strong problem-solving and troubleshooting skills to address technical challenges efficiently.
- Work effectively both independently and as part of a collaborative team.
Required Skills:
- Hands-on experience with GCP offerings for data engineering projects.
- Proficiency in BigQuery, BigTable, and PySpark.
- Strong SQL background.
- Experience with GCP and Big Data technologies.
- Strong problem-solving and troubleshooting skills.
- Excellent communication and collaboration skills.
- Ability to work independently and as part of a team.
Skills Required:
BigQuery, BigTable, PySpark, Big Data, SQL, GCP