AWS Data Engineer - PySpark / Business Intelligence
NS Global Corporation
Hyderabad
30+ days ago
Job description
Responsibilities:
Design, develop, and maintain scalable data pipelines using Databricks, PySpark, and other relevant technologies (a minimal sketch follows this list).
Build and optimize ETL processes to ingest, transform, and load data from various sources into the data warehouse.
Implement data modeling and data warehousing solutions to support business intelligence and analytics needs.
Develop and maintain data governance policies and procedures to ensure data quality, security, and compliance.
Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver data solutions.
Monitor and troubleshoot data pipeline performance issues and implement optimizations.
Participate in code reviews, testing, and deployment processes.
Stay up-to-date with the latest data engineering technologies and best practices.
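For illustration only, here is a minimal sketch of the kind of batch pipeline the responsibilities above describe, assuming a Databricks environment where Delta Lake is the default table format. The S3 path, column names, and target table are hypothetical placeholders, not details taken from this posting.

# Minimal PySpark ETL sketch: extract raw CSV from S3, apply a simple
# transformation, and load into a Delta table. All names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw files from an S3 bucket (hypothetical path).
raw = spark.read.csv(
    "s3://example-bucket/raw/orders/", header=True, inferSchema=True
)

# Transform: normalize a column name, drop invalid rows, stamp the load date.
orders = (
    raw.withColumnRenamed("Order Amount", "order_amount")
    .filter(F.col("order_amount") > 0)
    .withColumn("ingest_date", F.current_date())
)

# Load: append to a Delta table partitioned by load date
# (assumes the analytics schema already exists).
orders.write.format("delta").mode("append").partitionBy(
    "ingest_date"
).saveAsTable("analytics.orders")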
Requirements:
3+ years of experience in data engineering or a similar role.
Strong programming skills in Python, PySpark, and SQL.
Hands-on experience with Databricks and AWS cloud services (S3, IAM, Lambda, etc.).
Experience with workflow orchestration tools such as Apache Airflow (see the sketch after this list).
Familiarity with FastAPI for building high-performance APIs (a sketch follows the tech stack at the end of this posting).
Solid understanding of data modeling, data warehousing, and ETL processes.
Experience with version control systems (e.g., Git) and CI/CD pipelines.
Strong problem-solving skills and ability to work in a fast-paced environment.
Good communication skills and ability to collaborate in cross-functional teams.
Experience with data governance, security, and compliance best practices.
Proficiency in using Spotfire for data visualization and reporting.
Experience with Databricks Unity Catalog or similar data governance and metadata management tools.
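To make the orchestration requirement above concrete, here is a minimal Airflow DAG sketch with two dependent tasks, assuming Airflow 2.4 or later (which accepts the schedule argument). The DAG id, schedule, and task bodies are hypothetical placeholders.

# Minimal Airflow DAG sketch: a daily run with extract >> load ordering.
# DAG id, schedule, and task logic are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from the source system")  # placeholder

def load():
    print("write transformed data to the warehouse")  # placeholder

with DAG(
    dag_id="orders_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Run extract before load.
    extract_task >> load_task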
Preferred Qualifications:
Experience with real-time data processing and streaming technologies.
Familiarity with machine learning workflows and MLOps.
Certifications in Databricks or AWS.
Experience implementing data mesh or data fabric architectures.
Knowledge of data lineage and metadata management best practices.
Tech Stack: Databricks, Python, PySpark, SQL, Airflow, FastAPI, AWS (S3, IAM, ECR, Lambda), Spotfire.
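Finally, for the FastAPI item in the requirements, a minimal sketch of a typed read endpoint; the endpoint path and response fields are hypothetical, not part of the posting.

# Minimal FastAPI sketch: one read endpoint with a Pydantic response model.
# Path and fields are hypothetical placeholders.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="data-service")

class PipelineStatus(BaseModel):
    name: str
    last_run: str
    healthy: bool

@app.get("/pipelines/{name}", response_model=PipelineStatus)
def get_pipeline_status(name: str) -> PipelineStatus:
    # Placeholder: a real service would look up run metadata in a store.
    return PipelineStatus(name=name, last_run="2024-01-01T00:00:00Z", healthy=True)

Run locally with uvicorn, e.g. uvicorn main:app --reload, assuming the file is saved as main.py.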
(ref: hirist.tech)