Fleet Management Limited
Our 30-year journey rides on the passion of over 27,000 seafarers and 1,000 onshore professionals. Today, we are one of the largest independent third-party ship management companies, managing 650+ vessels of diverse types.
Headquartered in Hong Kong SAR, China, we operate on a global scale with 27 offices in 12 countries. Our client base spans over 100 world-class ship owners, including Fortune 500 companies from China, Greece, India, Japan, Korea, the Netherlands, Norway, Turkey and the USA, among others.
In a shore career at Fleet, you will be working with a highly passionate, self-driven and committed group of people. We aim to be a place where you can achieve your full potential, regardless of your background.
We are looking for individuals who are ambitious about making a strong contribution to Fleet's short and long-term sustainable growth – whether you are dealing directly with clients or working in a role supporting the business, such as technology, legal or communications.
As a Data Engineer, your typical day will cover the following areas:
1. Build & Optimise Data Systems
(Focus: Core engineering work with a balance of pipelines and infrastructure)
- Create & maintain data highways: Develop and manage cloud-based data lakes, warehouses, and ETL/ELT pipelines that ingest, process, and deliver data from 630+ ships and external sources
- Keep systems shipshape: Monitor cloud infrastructure performance, resolve bottlenecks, and ensure scalability and reliability for 24/7 maritime operations—no room for "set and forget."
- Secure the cargo: Implement data quality checks, encryption, and compliance standards (GDPR, SOC2) to protect sensitive maritime telemetry and operational data
- Automate the mundane: Use tools like Airflow to streamline workflows and reduce manual intervention in pipeline maintenance
2. Support Analytics & Troubleshoot Issues
(Focus: Enabling insights while keeping systems running smoothly)
- Fuel AI/ML engines: Partner with Data Scientists to prep datasets for predictive models (e.g., fuel efficiency, preventative maintenance) and troubleshoot pipeline issues impacting their work
- Solve data mysteries: Diagnose root causes of pipeline failures, data discrepancies, or MLOps hiccups—then implement fixes that prevent repeat headaches
- Map the data terrain: Document source-to-target mappings, conduct data profiling, and clarify dependencies so analysts can self-serve without guesswork
- Stay curious: Experiment with new tools and techniques to improve data quality, system performance and pipeline resilience

3. Collaborate & Learn
(Focus: Teamwork and growth in a fast-paced environment)
- Be the glue: Work closely with onshore and offshore developers, IT Operations, and stakeholders across Fleet to deliver solutions that balance technical rigor with real-world usability
- Communicate clearly: Break down complex data concepts for non-technical audiences (think ship captains, not just engineers) and ask questions to avoid ambiguity
- Learn by doing: Shadow senior engineers, participate in code reviews, and absorb best practices to level up your craft—no prior maritime experience required, but curiosity is a must.

Essential:
- 3+ years' experience in a Data Engineering role using SQL, PySpark and Airflow
- Strong understanding of Data Lake and Data Warehouse design best practices and principles
- Practical hands-on experience with cloud-based data services for ETL/ELT, covering AWS EC2, S3 and EMR
- Ability to manage and enhance infrastructure for environments covering Spark, Hive and Presto
- Experience with databases such as Postgres, MySQL and Oracle
- Strong work ethic and ability to work independently on agreed goals
- Clear communication skills in English – both spoken and written

Desirable:
- Deployment and management experience with an MLOps framework, such as AWS SageMaker AI or ECR
- Experience with other cloud platforms and hybrid cloud infrastructure, e.g. GCP, Azure
- Experience in the maritime industry
Skills Required
Airflow, PySpark, Data Warehouse, SQL, Hive, Presto, MySQL, Postgres, Spark, AWS EC2, Data Lake, Oracle