Job description

- 3+ years of experience in Big Data and related technologies
- Expert-level understanding of distributed computing principles
- Expert-level knowledge of and experience with Apache Spark
- Hands-on programming experience with Python
- Proficiency with Hadoop v2, MapReduce, HDFS, and Sqoop
- Experience building stream-processing systems using technologies such as Apache Storm or Spark Streaming
- Experience with messaging systems such as Kafka or RabbitMQ
- Good understanding of Big Data querying tools such as Hive and Impala
- Experience integrating data from multiple sources, such as RDBMS (SQL Server, Oracle), ERP systems, and files
- Good understanding of SQL queries, joins, stored procedures, and relational schemas
- Experience with NoSQL databases such as HBase, Cassandra, and MongoDB
- Knowledge of ETL techniques and frameworks
- Experience with performance tuning of Spark jobs
- Experience with native cloud data services: AWS or Azure Databricks, GCP
- Ability to lead a team efficiently
- Experience designing and implementing Big Data solutions
- Practitioner of Agile methodology