About the Company
Re:Sources is the backbone of Publicis Groupe, the world's third-largest communications group. Formed in 1998 as a small team to service a few Publicis Groupe firms, Re:Sources has grown to 5,000+ people servicing a global network of prestigious advertising, public relations, media, healthcare and digital marketing agencies. We provide technology solutions and business services including finance, accounting, legal, benefits, procurement, tax, real estate, treasury and risk management to help Publicis Groupe agencies do what they do best: create and innovate for their clients.
In addition to providing essential, everyday services to our agencies, Re:Sources develops and implements platforms, applications and tools to enhance productivity, encourage collaboration and enable professional and personal development. We continually transform to keep pace with our ever-changing communications industry and thrive on a spirit of innovation felt around the globe. With our support, Publicis Groupe agencies continue to create and deliver award-winning campaigns for their clients.
Job Location: Gurgaon, Bengaluru, Pune
Responsibilities
Perform root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
Build processes supporting data transformation, data structures, metadata, dependency and workload management.
Manipulate, process and extract value from large, disconnected datasets.
Work with message queuing, stream processing and highly scalable Azure-based data stores.
Apply strong project management and organizational skills.
Support and collaborate with cross-functional teams in a dynamic environment.
Qualifications
Bachelor's degree in engineering, computer science, information systems, or a related field from an accredited college or university, or equivalent work experience; a Master's degree from an accredited college or university is preferred.
Required Skills
Strong written and verbal communication skills
Strong experience implementing graph database technologies (property graph)
Strong experience leading data modelling activities for a production graph database solution
Strong experience with Cypher (or TinkerPop Gremlin), including query tuning
Strong experience with data integration technologies, specifically Azure services, ADF, ETL, JSON, and Hop or other ETL orchestration tools
Strong experience with PySpark, Scala and Databricks
10+ years’ experience designing and implementing complex distributed systems architectures
Strong experience with Master Data Management solutions
Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
Experience with stream-processing systems such as Storm and Spark Streaming
Strong knowledge of Azure-based services
Strong understanding of RDBMS data structures, Azure Table storage, Blob storage and other data sources
Experience with GraphQL
Experience with high-availability and disaster-recovery solutions
Experience with test-driven development
Understanding of Jenkins and CI/CD processes using ADF and Databricks
Strong analytical skills related to working with unstructured datasets.
Strong analytical skills to triage and troubleshoot issues
Results-oriented and able to work across the organization as an individual contributor
Preferred Skills
Knowledge of graph data science techniques, such as graph embeddings
Knowledge of Neo4j high-availability architecture for critical applications (clustering, multiple data centers, etc.)
Experience working with Azure Event Hubs and streaming data
Experience with big data tools such as Hadoop, Spark and Kafka
Experience with Redis
Understanding of ML models and experience building ML pipelines with MLflow and Airflow
Data Architect • Bengaluru, India