You'll Be Expected To Have:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 3 to 6 years of overall experience, including 2+ years designing and implementing data solutions on the Databricks platform.
- Proficiency in programming languages such as Python, Scala, or SQL.
- Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark.
- Experience with cloud platforms such as AWS, Azure, or GCP, and their associated data services.
- Proven track record of delivering scalable and reliable data solutions in a fast-paced environment.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills with the ability to work effectively in cross-functional teams.
- Experience with containerization and orchestration technologies such as Docker and Kubernetes is a plus.
- Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.
Skills Required
Python, Scala, AWS, Azure, GCP