Job Title : Sr. Data Engineer
Job Location : Chennai, Tamil Nadu, India
Job Duration : Permanent
Job Type : On-Site
The Challenge :
We are seeking a skilled Sr. Data Engineer with 5-8 years of experience in building and managing data pipelines. The ideal candidate will have strong proficiency in relational and NoSQL databases, as well as experience with data warehouses and real-time data streaming. This role requires excellent problem-solving skills, attention to detail, and the ability to work effectively in a team environment.
Roles & Responsibilities :
- Design, build, and maintain efficient and scalable data pipelines to support data integration and transformation across various sources.
- Work with relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cassandra) to manage and optimize large datasets.
- Utilize Apache Spark for distributed data processing and real-time analytics.
- Implement and manage Kafka for data streaming and real-time data integration between systems.
- Collaborate with cross-functional teams to gather and translate business requirements into technical solutions.
- Monitor and optimize the performance of data pipelines and architectures, ensuring high availability and reliability.
- Ensure data quality, consistency, and integrity across all systems.
- Stay up to date with the latest trends and best practices in data engineering and big data technologies.
Essential Skills & Requirements :
Must Have :
- 5-8 years of experience in data engineering, with a focus on building and managing data pipelines.
- Strong proficiency in relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cassandra).
- Experience in building data pipelines with data warehouses such as Snowflake and Redshift.
- Experience in processing unstructured data stored in S3 using Athena, Glue, etc.
- Hands-on experience with Kafka for real-time data streaming and messaging.
- Solid understanding of ETL processes, data integration, and data pipeline optimization.
- Proficiency in programming languages such as Python, Java, or Scala for data processing.
- Experience with Apache Spark for big data processing and analytics.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills, with the ability to work effectively in a team environment.
Preferred Qualifications :
Familiarity with cloud platforms like AWS, GCP, or Azure for data infrastructure.