FREELANCE
Experience: 7 to 15 Years
Location: Remote
Key Responsibilities:
Design, build, and optimize large-volume data pipelines using BigQuery on Google Cloud Platform.
Work on complex SQL, performance tuning, query optimization, and large-scale data processing workloads.
Implement scalable data frameworks, transformations, data modelling, and ingestion pipelines.
Integrate data from multiple sources into BigQuery using Dataflow, Dataproc, Cloud Storage, Pub/Sub, and other GCP services.
Work with cross-functional teams (Analytics, BI, Product, Lead Engineers) to enable business-focused outcomes and faster insights.
Ensure data governance, quality, security, and engineering best practices are followed.
Improve existing data standards, drive automation, and contribute to continuous delivery and CI/CD for data workflows.
Required Skills & Expertise:
Deep expertise in BigQuery architecture and hands-on query engineering at enterprise scale.
Strong experience with the GCP data engineering stack (Dataflow, Dataproc, Pub/Sub, Cloud Storage, Composer, etc.).
Strong SQL, Python, and ETL development background.
Good understanding of distributed systems and cloud-native data patterns.
Background in large structured/unstructured data, performance engineering, and high-volume analytics systems.
Good to Have:
GCP Professional Data Engineer Certification
Exposure to dbt, Looker, Dataform, or other modern data transformation tools
Experience with Kafka, streaming, or real-time systems