Description : Tamil Nadu candidates preferred.
Exp : 5+ yrs.
NP : Immediate to 15 days.
Rounds : 3 (Virtual).
Mandate Skills : Scala, Spark, Databricks.
Job Description :
The Role :
- Designing and building optimized data pipelines using cutting-edge technologies in a cloud environment to drive analytical insights.
- Constructing infrastructure for efficient ETL processes from various sources and storage systems.
- Leading the implementation of algorithms and prototypes to transform raw data into useful information.
- Architecting, designing, and maintaining database pipeline architectures, ensuring readiness for AI / ML transformations.
- Creating innovative data validation methods and data analysis tools.
- Ensuring compliance with data governance and security policies.
- Interpreting data trends and patterns to establish operational alerts.
- Developing analytical tools, programs, and reporting mechanisms.
- Conducting complex data analysis and presenting results effectively.
- Preparing data for prescriptive and predictive modeling.
- Continuously exploring opportunities to enhance data quality and reliability.
- Applying strong programming and problem-solving skills to develop scalable solutions.
Requirements :
- 5+ years of hands-on experience designing, building, deploying, testing, maintaining, monitoring, and owning scalable, resilient, distributed data pipelines.
- Experience with Big Data technologies (Hadoop, Spark, NiFi, Impala).
- High proficiency in Scala/Java and Spark for applied large-scale data processing.
- Expertise with big data technologies, including Spark, Data Lake, and Hive.
(ref : hirist.tech)