Good at problem solving, well-versed in overall project architecture, with strong hands-on coding skills:
Proficiency in multiple programming languages, ideally including Python
Proficiency in at least one cluster computing framework (preferably Spark; alternatively Flink or Storm)
Proficiency in at least one cloud data lakehouse platform (preferably AWS data lake services or Databricks; alternatively Hadoop), at least one relational data store (Postgres, Oracle, or similar), and at least one NoSQL data store (Cassandra, Dynamo, MongoDB, or similar)
Proficiency in at least one scheduling / orchestration tool (preferably Airflow; alternatively AWS Step Functions or similar)
Proficiency with data structures; data serialization formats (JSON, Avro, Protobuf, or similar); big-data storage formats (Parquet, Iceberg, or similar); data processing methodologies (batch, micro-batch, and streaming); one or more data modelling techniques (Dimensional, Data Vault, Kimball, Inmon, etc.); Agile methodology (developing PI plans and roadmaps); TDD (or BDD); and CI/CD tools (Jenkins, Git)