Role : Data Engineer
Experience : 1.5 to 5 Years
Skills : GCP, Data Engineering, dbt, Google Analytics
Location : Remote
JD :
The candidate must have experience working with Spark, Kafka, and Python, and should be able to apply their expertise in cloud storage and compute to build data pipelines in GCP, particularly using GCS and Dataproc. The role involves writing Airflow pipelines to orchestrate data workflows and analyzing data from third-party providers such as Google Analytics and Google Ads. Strong analytical skills are required for working with unstructured datasets, as is experience manipulating, processing, and extracting value from large, disconnected datasets. The candidate should also be familiar with software engineering practices as applied to data engineering, including release management and testing, along with relevant tooling such as dbt and Great Expectations.
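For a sense of the day-to-day work, a minimal sketch of the kind of Airflow orchestration this role calls for, submitting a PySpark job to Dataproc: it assumes Airflow 2.x with the apache-airflow-providers-google package installed, and every project, region, cluster, and bucket name below is a hypothetical placeholder rather than a value from this posting.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.dataproc import (
    DataprocSubmitJobOperator,
)

# Placeholder identifiers; substitute your own GCP resources.
PROJECT_ID = "example-project"
REGION = "us-central1"
CLUSTER_NAME = "example-cluster"

# Dataproc job spec for a PySpark script staged in GCS.
PYSPARK_JOB = {
    "reference": {"project_id": PROJECT_ID},
    "placement": {"cluster_name": CLUSTER_NAME},
    "pyspark_job": {
        "main_python_file_uri": "gs://example-bucket/jobs/transform.py"
    },
}

with DAG(
    dag_id="daily_ga_ingest",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    # Submit the Spark transformation to the Dataproc cluster.
    transform = DataprocSubmitJobOperator(
        task_id="run_spark_transform",
        project_id=PROJECT_ID,
        region=REGION,
        job=PYSPARK_JOB,
    )
```

In practice a DAG like this would sit alongside extraction tasks pulling Google Analytics or Google Ads exports and downstream dbt or Great Expectations steps, which is the shape of pipeline the requirements above describe.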
They should demonstrate knowledge of data governance, privacy, and security, and have basic exposure to machine learning concepts. The role requires supporting teams during migration activities through guidelines, hands-on support, and adherence to good practices. The candidate will independently handle incoming tracking requests assigned by the Product Owner and will help the team champion web analytics and data literacy within the organization. They will also work to enhance data capabilities through training and support as needed, adapting dynamically to changing situations to deliver results.
Strong interpersonal and conflict-resolution skills are essential, along with the ability to quickly form productive relationships and build the influence needed to drive transformational change. Excellent verbal and written communication skills are also required.