Job Title : Data Engineer
Experience Required : 6–8 Years
Location : Gurugram / Noida (Onsite position)
Employment Type : Full-time
About SID Global Solutions
SID Global Solutions is a premier Google implementation partner and global technology services firm helping Fortune 500 enterprises across BFSI, Healthcare, Retail, Manufacturing, and Public Sector accelerate digital transformation.
We specialize in AI, Cloud, Automation, API Management, and Modern Data Platforms — driving innovation and business growth at scale.
About the Role :
We are seeking an experienced Data Engineer to design, build, and maintain scalable data pipelines and infrastructure. The ideal candidate will have strong expertise in modern data technologies, cloud platforms, and big data ecosystems, with a passion for optimizing data flow and enabling data-driven decision-making across the organization.
Key Responsibilities :
- Design, develop, and maintain robust ETL / ELT pipelines for structured and unstructured data.
- Build and optimize data warehouses, data lakes, and data models to support analytics and reporting needs.
- Work closely with data analysts, data scientists, and business teams to ensure data accuracy and availability.
- Implement data governance, quality, and security standards across the data ecosystem.
- Manage and optimize data storage and retrieval for high performance and scalability.
- Collaborate with cross-functional teams to migrate, integrate, and transform data across systems.
- Monitor and troubleshoot data pipeline performance, ensuring minimal downtime.
- Evaluate and implement new data tools, frameworks, and best practices to enhance data operations.
Required Skills & Qualifications :
- Bachelor’s or Master’s degree in Computer Science, Information Technology, Engineering, or a related field.
- 6–8 years of experience in data engineering or similar roles.
- Proficiency in SQL and experience with relational and non-relational databases (e.g., PostgreSQL, MySQL, MongoDB).
- Strong experience with ETL tools (e.g., Apache Airflow, Talend, Informatica, AWS Glue, dbt).
- Expertise in at least one programming language (Python, Scala, or Java).
- Hands-on experience with cloud data platforms — AWS (Redshift, S3, Glue), Azure (Data Factory, Synapse), or GCP (BigQuery, Dataflow).
- Familiarity with big data frameworks (Spark, Hadoop, Kafka).
- Experience with data modeling, schema design, and data warehousing concepts.
- Knowledge of CI / CD pipelines, containerization (Docker / Kubernetes), and version control (Git).
Good to Have :
- Experience with real-time streaming data solutions.
- Exposure to machine learning data pipelines or analytics platforms.
- Familiarity with data governance and cataloging tools (Collibra, Alation, Apache Atlas).
- Understanding of DevOps or MLOps principles in data environments.