MongoDB’s mission is to empower innovators to create, transform, and disrupt industries by unleashing the power of software and data. We enable organizations of all sizes to easily build, scale, and run modern applications by helping them modernize legacy workloads, embrace innovation, and unleash AI. Our industry-leading developer data platform, MongoDB Atlas, is the only globally distributed, multi-cloud database and is available in more than 115 regions across AWS, Google Cloud, and Microsoft Azure. Atlas allows customers to build and run applications anywhere: on premises or across cloud providers. With offices worldwide and over 175,000 new developers signing up to use MongoDB every month, it’s no wonder that leading organizations, like Samsung and Toyota, trust MongoDB to build next-generation, AI-powered applications.
Summary
As a Senior Analytics Engineer I at MongoDB, you will play a critical role in leveraging data to drive informed decision-making and simplify end-user engagement across our most critical datasets. You will be responsible for designing, developing, and maintaining robust analytics solutions, ensuring data integrity, and enabling data-driven insights across all of MongoDB. This role requires an analytical thinker with strong technical expertise who can contribute to the growth and success of the entire business.
We are looking to speak to candidates who are based in Gurugram for our hybrid working model.
Responsibilities
- Design, implement, and maintain highly performant data post-processing pipelines
- Create shared data assets that will act as the company’s source-of-truth for critical business metrics
- Partner with analytics stakeholders to curate analysis-ready datasets and augment the generation of actionable insights
- Partner with data engineering to expose governed datasets to the rest of the organization
- Make impactful contributions to our analytics infrastructure, systems, and tools
- Create and manage documentation, and conduct knowledge-sharing sessions to spread institutional knowledge and best practices
- Maintain consistent planning and tracking of work in JIRA tickets
Skills & Attributes
- Bachelor’s degree (or equivalent) in mathematics, computer science, information technology, engineering, or a related discipline
- 3-5 years of relevant experience
- Strong proficiency in SQL and experience working with relational databases
- Solid understanding of data modeling and ETL processes
- Proficiency in Python for automation, data manipulation, and analysis
- Experience managing ETL and data pipeline orchestration with dbt and Airflow
- Comfortable working with command-line tools
- Familiarity with Hive, Trino (Presto), SparkSQL, and Google BigQuery
- Experience with cloud data storage such as AWS S3 and GCS
- Experience managing codebases with git
- Consistently employs CI/CD best practices
- Experience translating project requirements into a set of technical sub-tasks that build toward a final deliverable
- Experience combining data from disparate data sources to identify insights that were previously unknown
- Previous project work requiring expertise in business metrics and datasets
- Strong communication skills to document technical processes clearly and lead knowledge-sharing efforts across teams
- The ability to effectively collaborate cross-functionally to drive actionable and measurable results
- Committed to continuous improvement, with a passion for building processes and tools that make everyone more efficient
- A passion for AI as an enhancing tool to improve workflows, increase productivity, and generate smarter outcomes
- A desire to constantly learn and improve