About the Client:
Our client is a French multinational information technology (IT) services and consulting company, headquartered in Paris, France. Founded in 1967, it has been a leader in business transformation for over 50 years, leveraging technology to address a wide range of business needs, from strategy and design to managing operations.
The company is committed to unleashing human energy through technology for an inclusive and sustainable future, helping organisations accelerate their transition to a digital and sustainable world.
The company provides consulting, technology, professional, and outsourcing services.
Job Details:
Location: Bangalore
Mode of Work: Hybrid
Notice Period: Immediate joiners
Experience: 6-8 years
Type of Hire: Contract to Hire
JOB DESCRIPTION: About the Role
We are looking for a highly skilled Data Pipeline Engineer with strong experience in Google Cloud Platform (GCP) to design, develop, and optimize data integration pipelines that move and transform data, specifically from BigQuery to AlloyDB.
The ideal candidate will have a deep understanding of data orchestration, automation, and ETL/ELT frameworks on GCP, and will be comfortable working in a dynamic environment where data models, performance requirements, and architecture evolve over time. The candidate should be able to understand the solution design and adapt it as requirements change.
This role is hands-on and cross-functional, working closely with solution architects, data analysts, and application teams to ensure seamless, secure, and efficient data flow across GCP components.
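A minimal sketch of that core flow, under assumed names (project, table, bucket, and connection details below are all hypothetical placeholders): export the BigQuery table to Cloud Storage with an extract job, then load the shards into AlloyDB, which is PostgreSQL-compatible, with a COPY.

```python
# Hedged sketch only: project, table, bucket, and connection details are
# hypothetical placeholders, and the GCS download step is elided.
from google.cloud import bigquery
import psycopg2

PROJECT = "my-project"                               # hypothetical project ID
EXPORT_URI = "gs://my-bucket/exports/orders-*.csv"   # hypothetical bucket path

def export_bigquery_to_gcs():
    """Run a BigQuery extract job that lands the table in GCS as CSV shards."""
    client = bigquery.Client(project=PROJECT)
    job = client.extract_table(
        "my-project.analytics.orders",               # hypothetical source table
        EXPORT_URI,
        job_config=bigquery.ExtractJobConfig(destination_format="CSV"),
    )
    job.result()  # block until the extract job finishes

def copy_into_alloydb(local_csv_path: str):
    """Stream one downloaded CSV shard into AlloyDB with a single COPY."""
    conn = psycopg2.connect(
        host="10.0.0.5", dbname="appdb",             # hypothetical AlloyDB instance
        user="pipeline", password="...",
    )
    # HEADER true skips the header row that BigQuery extracts emit by default.
    with conn, conn.cursor() as cur, open(local_csv_path) as f:
        cur.copy_expert("COPY orders FROM STDIN WITH (FORMAT csv, HEADER true)", f)
```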
Key Responsibilities
o Design and build robust, scalable, and parameterized data pipelines that move data from BigQuery → Cloud Storage → AlloyDB.
o Leverage Cloud Composer (Airflow), Cloud Functions, EventArc, and Pub/Sub to orchestrate and automate data movement.
o Implement a control schema to handle incremental and delta loads (a possible shape is sketched after this list).
o Create and manage DAGs in Cloud Composer 2 to schedule and monitor data workflows (see the DAG sketch after this list).
o Develop asynchronous or parallel execution strategies to optimize pipeline performance under GCP constraints (e.g., AlloyDB's single COPY process).
o Collaborate with solution architects to review and refine pipeline architecture.
o Make design updates and refactor code based on evolving data requirements, schema changes, or performance improvements.
o Ensure pipelines align with GCP best practices, security, and cost optimization guidelines.
o Tune BigQuery queries and AlloyDB import strategies for large datasets (terabytes of data).
o Implement partitioning, batching, and retry mechanisms for high throughput and reliability.
o Implement detailed logging, alerting, and monitoring using Cloud Logging and Cloud Monitoring (formerly Stackdriver).
o Set up job-level and table-level audit trails for pipeline observability and troubleshooting.
o Use Service Accounts, VPC Service Controls, IAM roles, and CMEK encryption to ensure data security and governance compliance.
o Adhere to enterprise security policies and guardrails for data movement across GCP projects.
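One possible shape for the control schema mentioned in the responsibilities above, with all table and column names hypothetical: a single control table that records a high-water mark per source table, read before each incremental extract and advanced after each successful load.

```python
# Hedged sketch: pipeline_control and its columns are hypothetical names.
import psycopg2

CONTROL_DDL = """
CREATE TABLE IF NOT EXISTS pipeline_control (
    source_table   text PRIMARY KEY,
    last_loaded_at timestamptz NOT NULL DEFAULT 'epoch',
    last_status    text
);
"""

def advance_watermark(conn, source_table, new_high):
    """Record a successful incremental load up to timestamp new_high."""
    # Upsert so the first load for a source table also succeeds.
    with conn, conn.cursor() as cur:
        cur.execute(
            """
            INSERT INTO pipeline_control (source_table, last_loaded_at, last_status)
            VALUES (%s, %s, 'SUCCESS')
            ON CONFLICT (source_table)
            DO UPDATE SET last_loaded_at = EXCLUDED.last_loaded_at,
                          last_status   = EXCLUDED.last_status
            """,
            (source_table, new_high),
        )
```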
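A minimal Cloud Composer 2 (Airflow 2) DAG sketch for the stage/export/load sequence. The DAG name, tables, bucket, schedule, and watermark literal are hypothetical, and the AlloyDB load callable is a placeholder; the retries and retry_delay settings illustrate the retry mechanisms mentioned above.

```python
# Hedged sketch only: all names are placeholders, not a production DAG.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.google.cloud.operators.bigquery import (
    BigQueryInsertJobOperator,
)
from airflow.providers.google.cloud.transfers.bigquery_to_gcs import (
    BigQueryToGCSOperator,
)

def copy_into_alloydb(**context):
    # Placeholder: download the exported shards, COPY them into AlloyDB,
    # then advance the control-table watermark.
    ...

with DAG(
    dag_id="bq_to_alloydb_delta",                    # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@hourly",
    catchup=False,
    default_args={
        "retries": 3,                                # retry transient failures
        "retry_delay": timedelta(minutes=5),
    },
) as dag:
    # Materialize only rows newer than the control-table watermark.
    stage_delta = BigQueryInsertJobOperator(
        task_id="stage_delta",
        configuration={
            "query": {
                "query": """
                    SELECT * FROM `my-project.analytics.orders`
                    -- in practice this bound is read from the control table
                    WHERE updated_at > TIMESTAMP('2024-01-01 00:00:00+00')
                """,
                "useLegacySql": False,
                "destinationTable": {
                    "projectId": "my-project",
                    "datasetId": "staging",
                    "tableId": "orders_delta",
                },
                "writeDisposition": "WRITE_TRUNCATE",
            }
        },
    )
    export_delta = BigQueryToGCSOperator(
        task_id="export_delta",
        source_project_dataset_table="my-project.staging.orders_delta",
        destination_cloud_storage_uris=["gs://my-bucket/exports/orders-*.csv"],
        export_format="CSV",
    )
    load = PythonOperator(
        task_id="copy_into_alloydb",
        python_callable=copy_into_alloydb,
    )
    stage_delta >> export_delta >> load
```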
Required Skills and Experience
o BigQuery (SQL, views, partitioning)
o AlloyDB/PostgreSQL (import/export, COPY operations)
o Cloud Storage (buckets, lifecycle policies)
o Cloud Composer (Airflow)
o Cloud Functions, Pub/Sub, and EventArc