Job description

- Hands-on experience creating automated data pipelines using modern technology stacks for batch ETL, data streaming, or change data capture, and for processing data to load advanced analytics repositories
- Experience designing data lake storage structures, data acquisition, transformation, and distribution processing
- Proficiency in designing and implementing data integration processes in a large distributed environment using cloud services (e.g., Azure Data Factory, Data Catalog, Databricks, Stream Analytics)
- Advanced SQL programming experience
- Proficiency in programming languages (e.g., Python, Java) and REST APIs (e.g., Azure API Management, MuleSoft) to process data