Data Engineer - ETL / Data Pipeline

Grizmo Labs Private Limited, Bangalore
30+ days ago
Job description

We are seeking a skilled Data Engineer with 3-4 years of experience in building and maintaining robust data pipelines. The ideal candidate will have a strong command of AWS services and be proficient in both SQL and NoSQL databases. Attention to detail, a commitment to data integrity, and the ability to manage large-scale data workflows are essential for this role.

Responsibilities :

  • Design, develop, and maintain scalable data pipelines and ETL processes to handle large volumes of data efficiently.
  • Ensure data integrity and reliability by implementing data validation, cleansing, and transformation processes.
  • Collaborate with data scientists, analysts, and other stakeholders to define data requirements and support data-driven decision-making.
  • Optimise data workflows and storage solutions to improve performance and reduce costs.
  • Develop and maintain database schemas, queries, and stored procedures using SQL and NoSQL databases.
  • Deploy and manage data infrastructure on cloud platforms like AWS or Azure, ensuring optimal performance and security.
  • Implement best practices for data governance, security, and compliance.
  • Monitor and maintain data systems in production, troubleshoot issues, and provide timely solutions.
  • Stay up-to-date with the latest technologies and trends in data engineering, and apply them to the project when necessary.
  • Write and maintain documentation related to data pipelines, data models, and other technical aspects of the project.

Requirements :

  • Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
  • At least 3-4 years of experience in data engineering, with a focus on building and maintaining data pipelines and ETL processes.
  • Strong proficiency in SQL and experience with database systems such as MySQL, PostgreSQL, or MongoDB.
  • Experience with big data technologies like Apache Spark, Hadoop, or Kafka.
  • Familiarity with cloud platforms like AWS, Azure, or Google Cloud, and experience with services like S3, Redshift, Dataflow, or BigQuery.
  • Strong problem-solving and analytical skills, with the ability to work independently and in a team environment.
  • Good communication and interpersonal skills, with the ability to collaborate effectively with other team members and stakeholders.
  • Experience with data warehousing and data modeling is a plus.
  • (ref : hirist.tech)
