Job Title: Databricks Engineer
Experience: 7.5+ Years
Location: Remote (IST Shift)
Contract Type: Long-Term
Overview
We are seeking an experienced Databricks Engineer with 7.5+ years of expertise in database development, ETL, and cloud-based data solutions. The ideal candidate is highly skilled in SQL, Databricks, PySpark, and cloud technologies, with proven experience in designing scalable data workflows, optimizing performance, and delivering enterprise-grade data solutions.
Responsibilities
Design, develop, and optimize database solutions for enterprise applications.
Build and manage ETL pipelines and large-scale data processing workflows using Databricks and PySpark.
Implement data modeling, schema design, and performance tuning for high-volume data systems.
Develop and maintain cloud-based workflows with AWS (S3, Redshift, Lambda, EC2).
Automate ETL processes and improve system efficiency through workflow automation.
Improve database performance through query optimization, indexing, and stored procedure tuning.
Lead data migration and warehousing projects, ensuring smooth data integration.
Develop dashboards and analytics using Power BI and Google Data Studio.
Requirements
7.5+ years of hands-on experience in data engineering, ETL, and databases.
Proven expertise in Databricks, PySpark, and Snowflake.
Strong knowledge of AWS cloud services (S3, Lambda, EC2).
Experience with ActiveBatch or similar scheduling tools.
Excellent problem-solving, analytical, and communication skills.
Ability to work in fast-paced, remote environments.
Key Skills
Databases: Oracle PL/SQL, MySQL, SQL Server, Redshift, PostgreSQL, Netezza
Programming & Tools: Python, Unix Shell Scripting, PySpark, Databricks, SnapLogic, Snowflake, SQL Developer, PL/SQL Developer, MySQL Workbench, DBeaver, PyCharm
Cloud & Scheduling: AWS S3, AWS Lambda, AWS EC2, ActiveBatch
Data Analytics: Power BI, Google Data Studio