FinOps Data Engineer
BU / FUNCTION DESCRIPTION
The Data Operations team is responsible for platform and data excellence. The team strives to provide excellent customer service and leverages data platforms to improve operational efficiency and publish data on time.
We strongly believe that data and analytics are strategic drivers of future success. We are building a world-class advanced analytics team that will solve some of our most complex strategic problems and deliver top-line growth and operational efficiencies across the business. The Analytics team is part of the Organization and is responsible for driving organic growth by leveraging big data and advanced analytics.
We are on an exciting journey to build and scale our advanced analytics practice, and we are looking for a Data Engineer. The suitable candidate will have demonstrated experience designing and implementing ETL solutions on both on-premise and cloud platforms to support enterprise data warehouse, data lake, and advanced analytics capabilities. Success in this role comes from marrying a strong data engineering background with product and business acumen to deliver data operational excellence.
You will be responsible for helping define KPIs, analyzing requirements, and designing and implementing data solutions on-premise and in the cloud. The candidate will work closely with vendor partners, business unit representatives, project sponsors, and Segment CIO teams to deliver solutions, and is expected to communicate data operations status, issues, and defined metrics to all levels of management.
ROLES
We are seeking a FinOps Data Engineer who will bridge the gap between cloud financial operations and engineering. This role involves designing and implementing ETL pipelines, building cost dashboards, ensuring tagging compliance, detecting anomalies, and collaborating with stakeholders to optimize cloud spend. The ideal candidate will have strong experience in DevOps, data engineering, and cloud cost management.
RESPONSIBILITIES
- Design and develop ETL solutions for cost and usage data using best practices for data warehousing and analytics.
- Analyze cloud cost and usage across AWS, Databricks, and other cloud and on-prem platforms.
- Build and maintain cloud cost dashboards and reporting solutions for visibility across lines of business (LOBs) and programs.
- Implement tagging standards, establish compliance checks, and generate reports to verify adherence (see the tagging-compliance sketch after this list).
- Detect and analyze cost anomalies and usage patterns; proactively identify optimization opportunities, driven by AWS Cost Explorer, Databricks system tables, and backup tables (an anomaly-detection sketch follows this list).
- Collaborate with stakeholders (DevOps, application teams, Finance, architecture, and infrastructure) to implement cost-saving strategies following FinOps Foundation standards.
- Develop automated workflows for data ingestion, transformation, and validation.
- Document processes, data flows, and standards for FinOps operations.
- Work with vendors and internal teams to ensure KPIs for cost and tagging compliance are met.
- Enable accurate showback / chargeback models aligned to LOB / program (a showback roll-up sketch follows this list).
- Support forecast vs. actual reporting and provide monthly FinOps insights by enabling automated workflows, alerts, notifications, and guardrails.
- Work with DevOps teams on cluster governance, resource control, policy enforcement, and guardrails; build cost validation queries; and build granular user-level cost views at tags such as LOB, data product, and session, rolled up to workspace-level cost.
- Manage pipelines for ETL jobs, infrastructure automation, and monitoring tools.
- Implement cost-aware DevOps practices (auto-scaling, scheduling, workload orchestration).
- Collaborate on implementing cluster policies and SQL warehouse governance to improve operational efficiency (see the cluster-policy sketch after this list).
- Apply deep working knowledge of on-prem and cloud ESB architecture to address the client's requirements for scalability, reliability, security, and performance.
- Provide technical assistance in identifying, evaluating, and developing systems and procedures.
- Manage foundational data administration tasks such as scheduling jobs, troubleshooting job errors, identifying issues with job windows, and assisting with database backups and performance tuning.
- Design, develop, test, and adapt ETL code and jobs to accommodate changes in source data and new business requirements.
- Proactively communicate innovative ideas, solutions, and capabilities over and above the specific task request.
- Effectively communicate status and workloads, and offer to assist other areas.
- Work collaboratively within a team and independently; continuously strive for high-performing business solutions.
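The sketches below illustrate the kind of work described above; they are hedged examples under stated assumptions, not prescribed implementations. First, a minimal tagging-compliance check using the AWS Resource Groups Tagging API; the REQUIRED_TAGS set is a hypothetical tagging standard.

```python
# Minimal sketch: find AWS resources missing required tag keys.
# REQUIRED_TAGS is an assumed tagging standard; adjust to your own.
import boto3

REQUIRED_TAGS = {"LOB", "Program", "Environment", "Owner"}  # assumed standard

def find_noncompliant_resources(region="us-east-1"):
    """Return (ARN, missing-tag-keys) pairs for resources missing any required tag."""
    client = boto3.client("resourcegroupstaggingapi", region_name=region)
    noncompliant = []
    paginator = client.get_paginator("get_resources")
    for page in paginator.paginate(ResourcesPerPage=100):
        for resource in page["ResourceTagMappingList"]:
            tag_keys = {t["Key"] for t in resource.get("Tags", [])}
            missing = REQUIRED_TAGS - tag_keys
            if missing:
                noncompliant.append((resource["ResourceARN"], sorted(missing)))
    return noncompliant

if __name__ == "__main__":
    for arn, missing in find_noncompliant_resources():
        print(f"{arn}: missing tags {missing}")
```

A report like this can feed the compliance dashboards and KPI reviews described above.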
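Next, a minimal anomaly-detection sketch using a rolling z-score over daily cost. It assumes the cost series has already been extracted, for example from Databricks system.billing.usage or an AWS Cost Explorer export; the window and threshold values are illustrative.

```python
# Minimal sketch: flag daily cost anomalies with a rolling z-score.
# Assumes a DataFrame of daily cost already extracted from billing data.
import pandas as pd

def flag_anomalies(daily_cost: pd.DataFrame, window: int = 14,
                   z_thresh: float = 3.0) -> pd.DataFrame:
    """daily_cost: columns ['usage_date', 'cost']. Flags days whose cost deviates
    more than z_thresh rolling standard deviations from the rolling mean."""
    df = daily_cost.sort_values("usage_date").copy()
    rolling = df["cost"].rolling(window=window, min_periods=window)
    # Shift by one day so the current day's cost is not part of its own baseline.
    mean, std = rolling.mean().shift(1), rolling.std().shift(1)
    df["zscore"] = (df["cost"] - mean) / std
    df["is_anomaly"] = df["zscore"].abs() > z_thresh
    return df

# Example extraction query (assumed schema; adjust to your system tables):
# SELECT usage_date, SUM(usage_quantity) AS cost
# FROM system.billing.usage GROUP BY usage_date
```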
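For showback / chargeback, a sketch that aggregates cost by LOB and data-product tags and rolls it up to workspace level. The column names and sample rows are assumptions modeled loosely on tagged billing exports.

```python
# Minimal showback sketch: allocate cost by tag (LOB, data product)
# and roll up to workspace level. Sample rows are purely illustrative.
import pandas as pd

usage = pd.DataFrame([
    # workspace_id, lob, data_product, cost
    ("ws-1", "Retail",  "pricing",   120.0),
    ("ws-1", "Retail",  "forecast",   80.0),
    ("ws-1", "Finance", "ledger",     50.0),
    ("ws-2", "Finance", "ledger",    200.0),
], columns=["workspace_id", "lob", "data_product", "cost"])

# Granular view: cost per LOB / data product within each workspace.
by_product = usage.groupby(
    ["workspace_id", "lob", "data_product"], as_index=False)["cost"].sum()

# Roll-up: each LOB's cost and share of its workspace total, for chargeback.
lob_share = usage.groupby(["workspace_id", "lob"], as_index=False)["cost"].sum()
lob_share["pct_of_workspace"] = (
    lob_share["cost"]
    / lob_share.groupby("workspace_id")["cost"].transform("sum"))

print(by_product)
print(lob_share)
```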
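Finally, a hedged sketch of a Databricks cluster-policy guardrail created through the cluster policies REST API. The host, token, and limit values are placeholders; the intent is to cap autoscaling, enforce auto-termination, and require an LOB tag at cluster creation.

```python
# Hedged sketch: create a Databricks cluster policy as a FinOps guardrail.
# Host/token and the specific limits below are assumptions, not a standard.
import json
import requests

DATABRICKS_HOST = "https://<your-workspace>.cloud.databricks.com"  # placeholder
TOKEN = "<personal-access-token>"  # placeholder; prefer a secrets manager

policy_definition = {
    # Cap autoscaling to contain cost.
    "autoscale.max_workers": {"type": "range", "maxValue": 8, "defaultValue": 4},
    # Force clusters to shut down when idle.
    "autotermination_minutes": {"type": "range", "minValue": 10,
                                "maxValue": 60, "defaultValue": 30},
    # Intended to require a non-empty LOB tag (no default supplied).
    "custom_tags.LOB": {"type": "regex", "pattern": ".+"},
}

resp = requests.post(
    f"{DATABRICKS_HOST}/api/2.0/policies/clusters/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"name": "finops-guardrail", "definition": json.dumps(policy_definition)},
)
resp.raise_for_status()
print(resp.json())  # returns the new policy_id
```

Constraints like these enforce cost guardrails at cluster-creation time rather than after the bill arrives.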
COMPETENCIES & EXPERIENCE REQUIRED / DESIRED
Required:
- 6+ years in Data Engineering with strong ETL design, development, and optimization experience.
- Hands-on experience with AWS services and cost management tools.
- Strong knowledge of tagging strategies and governance in multi-cloud environments.
- Proficiency in SQL, PL/SQL, and data warehousing best practices.
- Experience with DevOps practices, CI/CD pipelines, and automation tools.
- Familiarity with FinOps principles and cloud cost optimization techniques.
- Ability to analyze large datasets and detect anomalies using scripting or BI tools (e.g., Power BI, Tableau).
- Excellent communication skills to work with technical and business stakeholders.
- Strong problem-solving capabilities; results-oriented; relies on fact-based logic for decision-making.
- Ability to work on multiple projects and work streams at one time; must be able to deliver results against project deadlines.
- Willingness to flex the daily work schedule to allow for time-zone differences in global team communications.
- Strong interpersonal and communication skills.

Desired:
- Experience with other ETL tools / services.
- Data integration experience using integration platforms.
- Experience with visualization tools such as Tableau, Power BI, or others.
- Experience developing basic data science models using Python or a similar language.
- Leadership qualities and mentoring of others on the team.
- AWS certification is a plus.

MOTIVATIONAL / CULTURAL FIT