We are seeking a highly skilled and experienced AWS Data Engineer with strong expertise in streaming data and big data technologies, particularly AWS, Python, and Spark. The ideal candidate will have hands-on experience architecting and implementing scalable data solutions using modern cloud-native tools and frameworks.
Key Responsibilities:
Architect and implement scalable data pipelines for real-time, batch, structured, and unstructured data.
Design and develop solutions using AWS services such as EMR, Kinesis, Glue, Athena, S3, CloudFormation, and API Gateway.
Work extensively with streaming platforms such as Kafka, Flink, and Spark Streaming.
Develop and optimize data ingestion workflows using Apache NiFi, Airflow, Sqoop, and Oozie.
Build and maintain data lakes and analytics platforms using AWS Lake Formation.
Perform hands-on development using Scala with Spark for distributed data processing.
Work with NoSQL databases including DynamoDB and HBase, and Hadoop ecosystem tools such as MapReduce, Hive, and HDFS.
Collaborate with cross-functional teams to deliver high-performance data solutions.
Technical Skills Required:
Mandatory: Spark, AWS, Hadoop, Python
Big Data Tools: EMR, Glue, Hive, HDFS, HBase, MapReduce
Streaming & Messaging: Kafka, Kinesis, Flink, Spark Streaming
Data Ingestion: Apache NiFi, Airflow, Sqoop, Oozie
Cloud Platforms: AWS (preferred), Hortonworks, Cloudera, MapR
Programming: Scala (with Spark), Python (optional)
Databases: NoSQL databases such as DynamoDB and HBase
Others: AWS Athena, Lake Formation, CloudFormation
AWS Data Engineer • Delhi, India