TCS Hiring !! Virtual Drive
TCS - Hyderabad
12 PM to 1 PM
Immediate Joiners
Role : Python / PySpark Developer
Experience : 5 to 7 years
Please read the job description before applying.
NOTE : If your profile matches the skills below and you are interested, please reply to this email with your latest updated CV attached, along with the following details :
Name :
Contact Number :
Email ID :
Highest Qualification : (e.g. B.Tech / B.E. / M.Tech / MCA / M.Sc. / MS / BCA / B.Sc., etc.)
Current Organization Name :
Total IT Experience (5 to 7 years required) :
Location : Hyderabad
Current CTC :
Expected CTC :
Notice period : Immediate
Whether previously worked with TCS : Y / N
Must-Have
Strong proficiency in Python programming.
Hands-on experience with PySpark and Apache Spark.
Knowledge of Big Data technologies (Hadoop, Hive, Kafka, etc.).
Experience with SQL and relational / non-relational databases.
Familiarity with distributed computing and parallel processing.
Understanding of data engineering best practices.
Experience with REST APIs, JSON / XML, and data serialization.
Exposure to cloud computing environments.
Good-to-Have
5+ years of experience in Python and PySpark development.
Experience with data warehousing and data lakes.
Knowledge of machine learning libraries (e.g., MLlib) is a plus.
Strong problem-solving and debugging skills.
Excellent communication and collaboration abilities.
Responsibilities / Expectations from the Role
Develop and maintain scalable data pipelines using Python and PySpark.
Design and implement ETL (Extract, Transform, Load) processes.
Optimize and troubleshoot existing PySpark applications for performance.
Collaborate with cross-functional teams to understand data requirements.
Write clean, efficient, and well-documented code.
Conduct code reviews and participate in design discussions.
Ensure data integrity and quality across the data lifecycle.
Integrate with cloud platforms like AWS, Azure, or GCP.
Implement data storage solutions and manage large-scale datasets.
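For candidates unfamiliar with the ETL terminology above, here is a minimal, stdlib-only Python sketch of the Extract-Transform-Load pattern the role centres on. The field names ("user_id", "amount") are hypothetical; a production pipeline of the kind described would operate on PySpark DataFrames and write to a warehouse table rather than in-memory lists.

```python
import json

def extract(raw_lines):
    """Extract: parse raw JSON lines into Python records."""
    return [json.loads(line) for line in raw_lines]

def transform(records):
    """Transform: drop invalid rows and normalise the amount field.
    Field names here are illustrative, not from the job description."""
    return [
        {"user_id": r["user_id"], "amount": round(float(r["amount"]), 2)}
        for r in records
        if r.get("user_id") and r.get("amount") is not None
    ]

def load(records):
    """Load: serialise back to JSON; a real job would write to a
    warehouse or lake table (e.g. BigQuery or Iceberg) instead."""
    return [json.dumps(r, sort_keys=True) for r in records]

raw = [
    '{"user_id": "u1", "amount": "10.5"}',
    '{"user_id": null, "amount": "3"}',   # invalid row, filtered out
]
out = load(transform(extract(raw)))
```

In PySpark the same three stages would typically become `spark.read` (extract), DataFrame operations such as `filter` and `withColumn` (transform), and `DataFrame.write` (load).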
Tags : BigQuery / GCP / Iceberg • Hyderabad, India