We are hiring multiple Data Engineers to join our international data platform, analytics, and cloud engineering teams. These fully remote, long-term freelance roles are ideal for engineers who can build scalable data pipelines, work with modern cloud-native data stacks, and support large-scale enterprise data initiatives.
Open Roles (Multiple Positions)
We are recruiting across core and specialized data engineering areas:
Core Data Engineering
- Data Engineer
- Senior Data Engineer
- Cloud Data Engineer (AWS / Azure / GCP)
Specialized Roles
- ETL / ELT Developer
- Big Data Engineer (Spark / Hadoop / Databricks)
- Data Pipeline Engineer
- Data Platform Engineer
- Streaming Data Engineer (Kafka / Kinesis / Pub/Sub)
If you have strong experience building data systems or pipelines, we encourage you to apply.
Engagement Details
- Type: Independent Freelance Consultant
- Location: 100% Remote
- Duration: Initial 6–12 month contract (extendable to multi-year)
- Start Date: Immediate or within the next few weeks
- Clients: Global enterprises, SaaS companies, and cloud-first data teams
Key Responsibilities
- Design and build scalable, reliable data pipelines using modern data engineering tools and frameworks.
- Develop ETL / ELT workflows for structured, semi-structured, and unstructured data.
- Implement data ingestion, transformation, storage, and processing solutions.
- Work with cloud-native data services (AWS Glue, Redshift, EMR, Azure Data Factory, Synapse, GCP BigQuery, Dataflow).
- Build batch and streaming data pipelines using Spark, Databricks, Kafka, or similar technologies.
- Optimize performance, cost, and reliability of data systems for large-scale deployments.
- Collaborate with analytics, BI, ML, and backend teams to deliver end-to-end data solutions.
- Ensure data quality, integrity, governance, and security across data workflows.
- Support CI/CD pipelines, version control, and automation related to data environments.
Minimum Qualifications
- Minimum 2 years of hands-on experience as a Data Engineer.
- Strong experience with Python or SQL (or both).
- Practical knowledge of data pipeline development using Spark, PySpark, or equivalent.
- Hands-on experience with at least one major cloud platform (AWS, Azure, or GCP).
- Understanding of data modeling, warehousing concepts, and distributed systems.
- Experience working with ETL / ELT tools or frameworks.
- Ability to work independently in a remote, distributed setup.
Preferred Skills
- Experience with Databricks or large-scale Spark clusters.
- Knowledge of streaming technologies (Kafka / Kinesis / Pub/Sub / Flink).
- Experience working with data lakes (S3, ADLS, GCS) and lakehouse architectures.
- Exposure to orchestration tools such as Airflow, Dagster, Prefect, or AWS Step Functions.
- Familiarity with containerization (Docker) and orchestration (Kubernetes).
- Experience integrating with BI, ML, or analytics platforms.
- Cloud certifications (AWS / Azure / GCP Data Engineer) are a plus.
Why This Opportunity
- Large-scale, cloud-native data engineering projects.
- Multiple openings with fast-track onboarding.
- Fully remote with flexible working hours.
- Long-term freelance roles with consistent project work.
- Work with modern data stacks, lakehouse architectures, and global teams.
How to Apply
Send your CV to Careers@SkillsCapital.io