- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- In-depth, hands-on implementation knowledge of Databricks: Delta Lake and Delta tables (including managing Delta tables), Databricks cluster configuration, and cluster policies.
- Experience handling structured and unstructured datasets.
- Strong proficiency in programming languages like Python, Scala, or SQL.
- Experience with Cloud platforms like AWS, Azure, or Google Cloud, and understanding of cloud-based data storage and computing services.
- Familiarity with big data technologies like Apache Spark, Hadoop, and data lake architectures.
- Develop and maintain data pipelines, ETL workflows, and analytical processes on the Databricks platform.
- Solid data engineering experience on Databricks, covering both batch and streaming processing.
- Experience creating Databricks Workflows and scheduling pipelines.
- Good understanding of how to make packages and libraries available in Databricks.
- Familiarity with the default Databricks runtimes.
- Databricks Certified Data Engineer Associate / Professional Certification (Desirable).
- Experience working in an Agile methodology.
- Strong verbal and written communication skills.
- Strong analytical and problem-solving skills with a high attention to detail.
Role : Solution Architect
Industry Type : IT Services & Consulting
Department : Engineering - Software & QA
Employment Type : Full Time, Permanent
Role Category : Software Development
Education
UG : BCA in Any Specialization, B.Tech / B.E. in Any Specialization
PG : MCA in Any Specialization, M.Tech in Any Specialization
Skills Required
Azure Synapse, Azure Data Factory, Architecture, PySpark, Azure Databricks, SQL