Functional Area : Data Science & AI/ML
Employment Type : Full Time, Permanent
Role Category : Engineering and Technology
Experience : 4-6 years
Job Description :
As a growing organization in the healthcare domain, we are seeking a Data Science Application Expert skilled in JupyterLab, SAS Studio, and RStudio. The ideal candidate will design, develop, and optimize data science workflows, ensuring robust, scalable, and efficient data processing pipelines. You will collaborate with cross-functional teams to support data-driven decision-making, build machine learning models, and implement analytical solutions aligned with industry standards for security and compliance.
Key Responsibilities :
Data Science & Application Management :
Machine Learning & Analytics :
Data Engineering & Integration :
Infrastructure as Code (IaC) & Cloud Deployment :
Security & Compliance :
Collaboration & Stakeholder Management :
Job Requirements :
Education : Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field.
Experience : 4-6 years of experience managing data science applications, with a focus on JupyterLab, SAS Studio, and RStudio.
Technical Skills :
Core Data Science Platforms :
Expertise in JupyterLab, SAS Studio, and RStudio for data science workflows.
Strong understanding of SAS programming (Base SAS, Advanced SAS, SAS Viya), R, and Python.
Experience in managing and scaling JupyterHub, RStudio Server, and SAS Viya in cloud or on-prem environments.
Programming & Frameworks :
Proficiency in Python, R, SAS, SQL, and shell scripting.
Experience with Pandas, NumPy, Scikit-learn, TensorFlow, and PyTorch for machine learning.
Cloud & Infrastructure :
Experience in deploying and managing JupyterLab, RStudio, and SAS Viya on AWS, Azure, or GCP.
Hands-on experience with AWS SageMaker, Glue, Lambda, Step Functions, and EMR.
Proficiency in Terraform, AWS CDK, or CloudFormation for infrastructure automation.
Database Management & ETL :
Experience with SQL and NoSQL databases (PostgreSQL, Redshift, Snowflake, DynamoDB, MongoDB).
Hands-on experience in ETL pipelines and data wrangling using SAS, Python, and SQL.
DevOps & CI/CD Tools :
Familiarity with CI/CD pipelines using Jenkins, GitLab, or AWS-native tools.
Experience with Docker, Kubernetes, and containerized deployments.
Additional Skills :
Event-Driven Architecture : Experience in real-time data processing using Kafka, Kinesis, or SNS/SQS.
Security Best Practices : Implementation of secure access controls and data encryption.
Cost Optimization : Understanding cloud pricing models and optimizing compute resources.
Agile Development : Hands-on experience with Agile methodologies like Scrum and Kanban.
Key Attributes :
Problem-Solving Mindset : Ability to troubleshoot complex data science workflows and propose scalable solutions.
Detail-Oriented : Strong focus on data integrity, performance optimization, and reproducibility.
Collaborative : Team player who thrives in a dynamic, cross-functional environment.
User-Centric Approach : Commitment to delivering scalable and efficient data science applications.
Data Scientist • Bengaluru, Karnataka, India