Role Summary:
We are seeking a skilled Oracle Cloud Data Engineer with 5-8 years of experience in big-data ecosystems. The ideal candidate will have hands-on expertise with various cloud platforms, data platforms, and middleware technologies. This role is crucial for designing, building, and maintaining robust data pipelines that drive our business insights and analytics.
Key Responsibilities:
- Data Pipeline Development: Design, build, and maintain scalable data pipelines using technologies like Python, PySpark, and SQL.
- Cloud Platform Management: Work across multiple cloud environments, including AWS, Azure, OCI (Oracle Cloud Infrastructure), and GCP, to manage data and infrastructure.
- Data Platform Utilization: Use data platforms such as Snowflake and Cloudera for data processing, storage, and transformation.
- Middleware Integration: Implement and manage data ingestion and integration processes using middleware tools like Kafka, Informatica, and ADF (Azure Data Factory).
- Data Modeling & Analysis: Apply strong data modeling principles to design efficient data structures and support business intelligence reporting with tools like Power BI and QlikView.
Skills:
Mandatory Skills:
- Cloud & Big Data: Experience as an AWS Data Engineer with knowledge of big-data ecosystems.
- Programming: Proficiency in Python, PySpark, and SQL.
- Data Platforms: Experience with Snowflake and Cloudera.
- Middleware: Familiarity with Kafka, Informatica, and ADF.
- Data Modeling: Strong data modeling skills.
Required Cloud Experience:
- Cloud Platforms: Experience with Azure, OCI, and GCP.
- BI Tools: Knowledge of Power BI and QlikView.
Nice to Have Skills:
- Hadoop: Experience with Hadoop ecosystems.
- Cloud: Familiarity with Azure.
Education & Experience:
Education: A bachelor's degree in Computer Science, Information Systems, or a related technical field is preferred.
Experience: A minimum of 5-8 years of experience in data engineering and big-data ecosystems.
Notice Period: Immediate to 15 days
(ref: hirist.tech)