Description
Primary Skills: Databricks with PySpark, Python
Secondary Skills: Azure Data Factory (ADF)
- 7+ years of experience with detailed knowledge of data warehouse technical architectures, ETL/ELT, and reporting/analytics tools.
- 4+ years of project development experience with Databricks, PySpark, and Python
- Expertise in Big Data ecosystems such as HDFS and Spark
- Strong hands-on experience with Python or Scala.
- Hands-on experience with Azure Data Factory.
- Able to convert SQL stored procedures to PySpark code using DataFrames (see the sketch at the end of this list).
- Design and develop SQL Server stored procedures, functions, views and triggers to be used during the ETL process.
- SQL Server development experience with relational databases is a must
- Develop stored procedures for transformations in the ETL pipeline
- Should have experience working on Agile projects
- Write and maintain documentation of the ETL processes via process flow diagrams.
- Collaborate with business users, support team members, and other developers throughout the organization to help everyone understand issues that affect the data warehouse
- Good customer interaction experience is required.
- Possesses good interpersonal and communication skills.
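For illustration, here is a minimal PySpark sketch of the stored-procedure-to-DataFrame conversion mentioned above. The table names (sales, sales_summary) and columns (customer_id, amount, sale_date) are hypothetical and stand in for whatever the stored procedure actually touches.

```python
# Minimal sketch, assuming a hypothetical "sales" table: the SQL aggregation
# in the comment below is rewritten as PySpark DataFrame operations.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sql_to_pyspark_sketch").getOrCreate()

# Original SQL, as it might appear inside a stored procedure:
#   SELECT customer_id, SUM(amount) AS total_amount
#   FROM sales
#   WHERE sale_date >= '2024-01-01'
#   GROUP BY customer_id;

# Equivalent DataFrame version:
sales_df = spark.read.table("sales")
result_df = (
    sales_df
    .filter(F.col("sale_date") >= "2024-01-01")
    .groupBy("customer_id")
    .agg(F.sum("amount").alias("total_amount"))
)

# Persist the result, e.g. as a managed table for downstream reporting.
result_df.write.mode("overwrite").saveAsTable("sales_summary")
```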