Key Responsibilities
Design, develop, and optimize data pipelines and ETL processes on GCP or Azure.
Work with structured and unstructured data, integrating sources such as databases, APIs, and streaming platforms.
Implement and manage data warehouses, data lakes, or lakehouse architectures.
Develop clean, efficient, and reusable Python scripts for automation, data transformation, and processing.
Collaborate with data scientists, analysts, and business teams to deliver high-quality, timely datasets.
Monitor, troubleshoot, and enhance data workflows for performance, scalability, and cost efficiency.
Ensure data quality, governance, and compliance with organizational and industry standards.
Required Skills & Qualifications
Proven experience as a Data Engineer or in a related role.
Hands-on expertise with:
GCP services: BigQuery, Dataflow, Pub/Sub, Cloud Storage, etc.
Azure services: Azure Data Factory, Synapse, Databricks, Blob Storage, etc.
Strong proficiency in Python for data processing and automation.
Solid knowledge of SQL and experience with both relational and NoSQL databases.
Experience building and maintaining ETL/ELT workflows and data models.
Familiarity with CI/CD pipelines and version control systems (e.g., Git).
Excellent analytical and problem-solving skills.
Data Engineer • Coimbatore, Tamil Nadu, India