Overview:
Join a rapidly growing team building world-class, large-scale Big Data architectures. This is a hands-on coding role focused on developing and optimizing data solutions using modern cloud and distributed computing technologies.
Key Responsibilities:
Development & Engineering
Write high-quality, scalable code using Python, Scala, or similar languages.
Work with SQL, PySpark, Databricks, and Azure cloud environments.
Optimize Spark performance and ensure efficient data processing pipelines.
Apply sound programming principles, including version control, unit testing, and deployment automation.
Design and implement APIs, abstractions, and integration patterns for distributed systems.
Define and implement ETL, data transformation, and automation workflows in parallel processing environments.
Client Interaction
Collaborate directly with Fortune 500 clients to understand strategic data requirements.
Align project deliverables with client goals and Sigmoid’s strategic initiatives.
Innovation & Continuous Learning
Stay current with emerging Big Data and cloud technologies to maximize ROI.
Explore and productize new tools and techniques for large-scale data engineering.
Culture & Mindset
Demonstrate strategic and out-of-the-box thinking.
Be analytical, data-driven, and entrepreneurial in approach.
Thrive in a fast-paced, agile, start-up environment.
Balance leadership with hands-on contribution.
Qualifications
Bachelor’s degree or higher in Computer Science or a related technical field.
Experience: 8 to 14 years.
Proven experience in Big Data, application development, and data management.
Strong skills in Python and PySpark (functional and OOP).
Effective written and verbal communication skills.
Technical Lead • Nashik, Maharashtra, India