RazerTech Consulting has been mandated to hire a Sr. Technical AI Data Engineer for a US-based Strategy Consulting and Investment Banking Advisory firm.
Location : Hyderabad | Remote for the initial 3-4 months, later transitioning to a hybrid setup
Position Summary :
We are looking for a skilled and highly motivated Technical Data Engineer to join our fast-growing data team at a pivotal moment. In this role, you will have the opportunity to build and shape critical components of our data infrastructure - not entirely from scratch, but close to it.
Responsibilities :
- Gain a comprehensive understanding of current data sources, pipelines, and storage systems.
- Design, build, and maintain scalable ETL/ELT pipelines to automate the movement of data from diverse sources to a centralized data warehouse.
- Optimize data pipelines for performance, reliability, and maintainability.
- Ensure data is validated, transformed, and stored in formats that meet analytical needs.
- Analyze existing data origin sources (internal databases, third-party APIs, web-based systems) to assess structure, quality, and reliability.
- Define and document data architecture, recommending improvements to support current and future data needs.
- Collaborate with stakeholders to align technical solutions with business requirements.
- Apply data wrangling techniques to prepare raw data for analysis, including handling missing values, data deduplication, and schema standardization.
- Ensure data integrity and implement logging, alerting, and monitoring for all data workflows.
- Partner with Data Analysts and business stakeholders to support A/B testing frameworks and provide infrastructure for running experiments.
- Enable self-service reporting and analysis by ensuring well-documented, accessible datasets.
- Assist in the development of dashboards and reports using tools like Tableau or Power BI.
- Support the data team in presenting key metrics and insights in a visually compelling way.
- Continuously identify and integrate new data sources - internal or external - to enhance business insights and competitive edge.
- Deploy systems to monitor data quality, pipeline health, and job failures proactively.
- Design and implement automated pipelines to ingest and process data from key sources.
- Stay current with advancements in AI technologies, frameworks (e.g., TensorFlow, PyTorch), and large language models (LLMs) to inform architectural decisions and promote innovation.
- Ensure data is clean, validated, and ready for use by analysts and stakeholders.
- Document existing workflows and identify quick wins for optimization.
- Take initiative and ownership of projects from concept to deployment, demonstrating a builder's mindset.
Qualifications :
- A bachelor's degree in Engineering, Data Science, or a related field is required; a master's degree is highly preferred.
- 5+ years of experience in a relevant Technical Data Engineer position, machine learning, deep learning, or AI systems engineering.
- Proficiency in SQL and at least one programming language (e.g., Python).
- Experience with data pipeline tools (e.g., Airflow, dbt, Apache Beam).
- Familiarity with cloud platforms (AWS, GCP, Azure) and data warehouses (e.g., Snowflake, BigQuery, Redshift).
- Experience with frameworks like BeautifulSoup, Scrapy, or Selenium.
- Experience in training, fine-tuning, and deploying large language models (LLMs) or transformer-based architectures (e.g., BERT, GPT, LLaMA).
- Knowledge of A/B testing frameworks and visualization tools (e.g., Tableau, Power BI).
- Experience designing and implementing scalable ML pipelines using tools like MLflow, Kubeflow, Airflow, and CI/CD pipelines for model deployment.
- Strong problem-solving skills and the ability to work in a fast-paced environment.
- Experience in an entrepreneurial or startup environment preferred.
- Demonstrated leadership capabilities; able to communicate and collaborate effectively with members at all levels of the company.
(ref : hirist.tech)