Job Description
Key Responsibilities:
Team Leadership & Development: Lead, mentor, and manage a team of analysts, BI developers, and reporting specialists. Foster a high-performance culture and support professional growth within the team.
Solutioning & Project Delivery: Drive the end-to-end delivery of analytics projects, from problem definition and data collection to insight generation and final presentation. Ensure projects are delivered on time, within scope, and to the highest quality standards. Design and implement robust analytical solutions, including sales performance reporting, incentive compensation, and market access analytics.
Client Engagement & Strategy: Act as a key point of contact for clients, understanding their business challenges and translating them into analytical projects. Develop strategic roadmaps for analytics and reporting solutions.
Business Intelligence & Reporting: Define, develop, and implement the BI and reporting roadmap, ensuring alignment with client business objectives. Drive the creation of insightful dashboards and reports that provide a clear picture of performance. Lead the end-to-end execution of descriptive analytics projects, focusing on "what happened" and "why it happened."
Data Governance & Quality: Oversee data ingestion, transformation, and validation processes to ensure the accuracy, completeness, and reliability of data used in BI dashboards and reports. Establish and enforce best practices for data governance.
Cross-functional Collaboration: Work closely with other departments, including sales, marketing, and technology teams, to ensure alignment and effective use of analytical insights.
Requirements
Experience: 7+ years of experience in the life sciences industry, with a strong focus on commercial analytics, sales operations, and consulting.
Domain Expertise: Deep understanding of life sciences commercial processes, including sales force effectiveness, incentive compensation, market access, and marketing analytics.
Technical Skills: Proven expertise in analytical tools such as SQL, Python, and R, and in BI and data visualization platforms (e.g., Tableau, Power BI, or Qlik Sense).
Leadership: Demonstrated experience in leading and managing analytical and BI teams. Strong communication, presentation, and interpersonal skills, with the ability to convey complex analytical findings to non-technical stakeholders.
Problem-Solving: Excellent analytical and problem-solving abilities with a data-driven approach to decision-making.
Key Responsibilities
Pipeline Development: Design, build, and maintain efficient and scalable ETL/ELT pipelines on the Databricks platform using PySpark, SQL, and Delta Live Tables (DLT).
Lakehouse Management: Implement and manage data solutions within the Databricks Lakehouse Platform, ensuring best practices for data storage, governance, and management using Delta Lake and Unity Catalog.
Code Optimization: Write high-quality, maintainable, and optimized PySpark code for large-scale data processing and transformation tasks.
AI & ML Integration: Collaborate with data scientists to productionize machine learning models. Utilize Databricks AI features such as the Feature Store, MLflow for model lifecycle management, and AutoML for accelerating model development.
Data Quality & Governance: Implement robust data quality checks and validation frameworks to ensure data accuracy, completeness, and reliability within Delta tables.
Performance Tuning: Monitor, troubleshoot, and optimize the performance of Databricks jobs, clusters, and SQL warehouses to ensure efficiency and cost-effectiveness.
Collaboration: Work closely with data analysts, data scientists, and business stakeholders to understand their data requirements and deliver effective solutions.
Documentation: Create and maintain comprehensive technical documentation for data pipelines, architectures, and processes.
Required Qualifications & Skills
Experience: 3-5 years of hands-on experience in a data engineering role.
Databricks Expertise: Proven, in-depth experience with the Databricks platform, including Databricks Workflows, Notebooks, Clusters, and Delta Live Tables.
Programming Skills: Strong proficiency in Python and extensive hands-on experience with PySpark for data manipulation and processing.
Data Architecture: Solid understanding of modern data architectures, including the Lakehouse paradigm, Data Lakes, and Data Warehousing.
Delta Lake: Hands-on experience with Delta Lake, including schema evolution, ACID transactions, and time travel features.
SQL Proficiency: Excellent SQL skills and the ability to write complex queries for data analysis and transformation.
Databricks AI: Practical experience with Databricks AI/ML capabilities, particularly MLflow and the Feature Store.
Cloud Experience: Experience working with at least one major cloud provider (AWS, Azure, or GCP).
Problem-Solving: Strong analytical and problem-solving skills with the ability to debug complex data issues.
Communication: Excellent verbal and written communication skills.
Preferred Qualifications
Databricks Certified Data Engineer Associate/Professional certification.
Experience with CI/CD tools (e.g., Jenkins, Azure DevOps, GitHub Actions) for data pipelines.
Familiarity with streaming technologies like Structured Streaming.
Knowledge of data governance tools and practices within Unity Catalog.
Reporting Manager • Bangalore, KA, India