Data
Immediate - 15 days joiners
Experience: 5+ years

Responsibilities (Must-Haves):
- 5+ years of experience in dashboard storytelling, dashboard creation, and data engineering pipelines.
- Hands-on experience with log analytics, user engagement metrics, and product performance metrics.
- Ability to identify patterns, trends, and anomalies in log data to generate actionable insights for product enhancements and feature optimization.
- Collaborate with cross-functional teams to gather business requirements and translate them into functional and technical specifications.
- Manage and organize large volumes of application log data using Google BigQuery (a query sketch follows this list).
- Design and develop interactive dashboards to visualize key metrics and insights using tools such as Tableau, Power BI, or ThoughtSpot AI.
- Create intuitive, impactful visualizations to communicate findings to teams including customer success and leadership.
- Ensure data integrity, consistency, and accessibility for analytical purposes.
- Analyse application logs to extract metrics and statistics related to product performance, customer behaviour, and user sentiment.
- Work closely with product teams to understand log data generated by Python-based applications.
- Collaborate with stakeholders to define key performance indicators (KPIs) and success metrics.
- Optimize data pipelines and storage in BigQuery.
- Strong communication and teamwork skills.
- Ability to learn quickly and adapt to new technologies.
- Excellent problem-solving skills.
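As context for the BigQuery items above, here is a minimal sketch of querying application logs for engagement metrics and creating a partitioned, clustered copy to control scan costs. The dataset, table, and column names (product_analytics, app_logs, event_ts, user_id, event_name) are hypothetical placeholders, not part of the role description.

```python
# Minimal sketch using the google-cloud-bigquery client.
# Table name (product_analytics.app_logs) and log schema
# (event_ts, user_id, event_name) are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client()  # uses Application Default Credentials

# Daily active users and event volume from raw application logs.
engagement_sql = """
SELECT
  DATE(event_ts)          AS event_date,
  COUNT(DISTINCT user_id) AS daily_active_users,
  COUNT(*)                AS total_events
FROM `product_analytics.app_logs`
WHERE event_ts >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
GROUP BY event_date
ORDER BY event_date
"""
for row in client.query(engagement_sql).result():
    print(row.event_date, row.daily_active_users, row.total_events)

# Partitioning by date and clustering by user_id bounds the bytes
# scanned per query, one common way to optimize BigQuery storage and cost.
optimize_ddl = """
CREATE TABLE IF NOT EXISTS `product_analytics.app_logs_optimized`
PARTITION BY DATE(event_ts)
CLUSTER BY user_id AS
SELECT * FROM `product_analytics.app_logs`
"""
client.query(optimize_ddl).result()
```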
Responsibilities (Nice-to-Haves):

- Knowledge of Generative AI (GenAI) and LLM-based solutions.
- Experience in designing and developing dashboards using ThoughtSpot AI.
- Good exposure to Google Cloud Platform (GCP).
- Data engineering experience with modern data warehouses.

Responsibilities:
- Participate in the development of proof-of-concepts (POCs) and pilot projects.
- Ability to articulate ideas and points of view clearly to the team.
- Take ownership of data analytics and data engineering.

Nice-to-Haves:
- Experience working with large datasets and distributed data processing tools such as Apache Spark or Hadoop (a Spark sketch follows this list).
- Familiarity with Agile development methodologies and version control systems like Git.
- Familiarity with ETL tools such as Informatica or Azure Data Factory.
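For the Spark item above, a minimal PySpark sketch of distributed log aggregation; the input path and log fields (user_id, event_name, latency_ms) are hypothetical placeholders.

```python
# Minimal PySpark sketch for distributed log processing.
# The input path and log fields (user_id, event_name, latency_ms)
# are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("log-metrics").getOrCreate()

# Read newline-delimited JSON application logs.
logs = spark.read.json("gs://example-bucket/app-logs/*.json")

# Per-event performance metrics: volume, distinct users, average latency.
metrics = (
    logs.groupBy("event_name")
        .agg(
            F.count("*").alias("events"),
            F.countDistinct("user_id").alias("unique_users"),
            F.avg("latency_ms").alias("avg_latency_ms"),
        )
        .orderBy(F.desc("events"))
)
metrics.show(truncate=False)
spark.stop()
```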
(ref: hirist.tech)