Looking for a Freelance Data Engineer to join a team of rockstar developers. The candidate should have a minimum of 8 years of experience.
There are multiple openings. If you're looking for a freelance / part-time opportunity (alongside your day job) and a chance to work with the top 0.1% of developers in the industry, this one is for you! You will report to IITians / BITS grads with 10+ years of development experience and work with Fortune 500 companies (our customers).
Company Background - We are a multinational software company that is growing at a fast pace. We have offices in Florida & New Delhi. Our clientele spans the US, Australia & APAC. To give you a sense of our growth rate, we've added 70+ employees in the last 6 weeks alone and expect another 125+ by the end of Q4 2025.
Key Responsibilities
- Design, build, and maintain real-time data ingestion pipelines using Debezium, Kafka Connect, and Apache Kafka.
- Develop and optimize Spark jobs on Kubernetes, including Spark Streaming and batch data workflows.
- Build, schedule, and orchestrate workflows using Apache Airflow or similar orchestration tools.
- Implement and maintain modern table formats such as Hudi and Iceberg (backed by Parquet files) for scalable and efficient data lakes.
- Work with object storage systems such as S3 or GCP equivalents.
- Build scalable data warehouse solutions using BigQuery, including materialized views, partitioning, clustering, and performance tuning.
- Work with ETL / ELT pipelines, data modeling, and data warehousing best practices.
- Develop and consume REST APIs to support data services and downstream applications.
- Write scalable, well-structured Python code for data processing and automation.
- Use dbt for data transformation and modeling workflows.
- Manage deployments and code quality using Git, CI/CD pipelines, and IaC principles.
- Collaborate with cross-functional teams across engineering, analytics, and product.
- Ensure high standards of data quality, reliability, and system performance.
Mandatory / Essential Requirements
Experience
- Total 8+ years of professional experience in Data Engineering or related roles.
- Minimum 5+ years of hands-on experience with Google Cloud Platform (GCP).
Education & Industry Background
- Graduate of a top-tier college / university.
- Experience working with top-tier organizations (e.g., Google, PayU, Accenture, Capgemini, Jumia, Ernst & Young) is preferred.
Technical Expertise
- GCP services: BigQuery, Cloud Run, Cloud Scheduler, Dataflow, Pub/Sub.
- Strong understanding of ETL / ELT frameworks, Airflow, data modeling, and data warehousing.
- Advanced Python programming skills.
- Strong proficiency in SQL, including analytical queries and materialized views.
- Experience with dbt for data transformations.
- Hands-on experience with Git or other version control systems.
- Experience implementing CI/CD pipelines.
- Understanding of Infrastructure-as-Code (IaC) principles.
Professional Traits
- No managerial experience required; this is a hands-on IC role.
- Excellent problem-solving skills with strong attention to detail.
- Ability to work independently in a fast-paced and dynamic environment.
- Proactive, ownership-driven mindset.
What we need
- ~35 hours of work per week.
- 100% remote from our side.
- You will be paid out every month.
- Min 4 yrs of experience.
- Please apply only if your current job is 100% remote.
- If you do well, this will continue for a long time.