Description:
HackerRank is a Y Combinator alumnus backed by tier-one Silicon Valley VCs with total funding of over $58 million. We are revolutionizing technical hiring by giving companies a skills-based hiring platform that enables our customers to assess technical skills effectively. The HackerRank Developer Skills Platform is the standard for assessing developer skills for 2,000+ companies across industries and 10M+ developers around the world. Companies like LinkedIn, Stripe, and Peloton rely on HackerRank to objectively evaluate skills against millions of developers at every step of the hiring process, allowing teams to hire the best and reduce engineering time. Developers rely on HackerRank to turn their skills into great jobs. We're data-driven givers who take full ownership of our work and love delighting our customers!
Job Description:

Responsibilities:
- Design, develop, and maintain data pipelines for the ingestion, transformation, and storage of large-scale datasets.
- Work with both batch and real-time data processing systems to ensure timely and accurate data flow.
- Optimize and manage data models for analytics, product insights, and operational use cases.
- Collaborate with analysts, data scientists, and backend engineers to integrate data systems into production workflows.
- Implement and manage data quality, observability, and monitoring frameworks.
- Drive improvements in data architecture, performance, and scalability.
- Ensure compliance with data governance, privacy, and security standards.
- Mentor junior engineers and contribute to best practices across the data engineering function.
Requirements:
- Experience: 4-6 years in data engineering or related fields.
- Strong hands-on experience with Python or Scala for data processing.
- Proficiency with SQL and database technologies (PostgreSQL, MySQL, or similar).
- Experience with big data tools such as Apache Spark, Airflow, Kafka, Snowflake, or Redshift.
- Solid understanding of ETL design, data modeling (Star/Snowflake schemas), and data warehousing concepts.
- Experience building and maintaining cloud-based data infrastructure (AWS, GCP, or Azure).
- Familiarity with CI/CD pipelines, Docker, and version control systems like Git.
- Strong analytical mindset and the ability to work with large-scale, high-volume datasets.

(ref: hirist.tech)