The Principal Data Modeler / Architect will lead a critical database modernization initiative to reverse engineer, redesign, and normalize our existing OLTP data architecture. This role requires a seasoned data professional who can analyze complex legacy systems, identify architectural debt, and execute a comprehensive schema refactoring strategy while ensuring zero business disruption. The ideal candidate combines deep expertise in data modeling, normalization principles, and large-scale database migration with strong collaboration skills to coordinate cross-functional teams through this transformation.
Experience in the EdTech domain is a plus.
Primary Objective
Reverse engineer our existing large-scale OLTP database, analyze current denormalized structures, and redesign the data model following normalization best practices while optimizing for performance, maintainability, and scalability in a modern SaaS environment.
Key Responsibilities
Database Analysis & Reverse Engineering
Reverse engineer existing OLTP databases to create comprehensive conceptual, logical, and physical data models
Analyze current schema design to identify denormalization patterns, data redundancies, integrity issues, and architectural debt
Profile existing data quality and identify anomalies, inconsistencies, and constraint violations
Document current state architecture, data flows, and integration points
Schema Redesign & Normalization
Redesign database schema by applying normalization principles (1NF through BCNF) to eliminate redundancy and improve data integrity
Create optimized conceptual, logical, and physical data models for the target state architecture
Design migration paths that maintain referential integrity and business rule enforcement
Identify opportunities for strategic denormalization where justified by performance requirements
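The normalization work described above can be illustrated with a small sketch. The table and column names below are hypothetical, not drawn from any actual schema: a denormalized orders relation repeats customer attributes on every row, so the decomposition moves them into their own relation and confirms the split is lossless by re-joining.

```python
# Illustrative sketch: decomposing a denormalized "orders" relation toward 3NF.
# All table and column names are hypothetical examples.

denormalized = [
    # (order_id, customer_id, customer_name, customer_email, amount)
    (1, 10, "Asha", "asha@example.com", 250),
    (2, 10, "Asha", "asha@example.com", 125),
    (3, 11, "Ravi", "ravi@example.com", 300),
]

# customer_name and customer_email depend only on customer_id (a transitive
# dependency on the order key), so they move to a separate customers relation.
customers = {(cid, name, email) for (_, cid, name, email, _) in denormalized}
orders = [(oid, cid, amt) for (oid, cid, _, _, amt) in denormalized]

# Lossless-join check: re-joining the two relations must reproduce the original.
lookup = {cid: (name, email) for (cid, name, email) in customers}
rejoined = [(oid, cid, *lookup[cid], amt) for (oid, cid, amt) in orders]
assert sorted(rejoined) == sorted(denormalized)
```

The same lossless-join test, run against real data, is one way to validate that a proposed decomposition preserves every fact in the source table.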
Migration Strategy & Execution
Develop comprehensive migration strategies that achieve minimal or zero downtime
Design data transformation logic to migrate from denormalized to normalized structures
Create phased migration approaches using patterns such as strangler fig, dual-write, or shadow databases
Implement data validation and reconciliation frameworks to ensure migration accuracy
Develop rollback strategies and contingency plans for migration phases
Coordinate database deployment using CI/CD pipelines and migration tools
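The dual-write pattern named above can be sketched in a few lines. This is a minimal illustration only, with in-memory dictionaries standing in for the legacy and normalized databases; in a real migration the shadow write would target the new schema and a scheduled reconciliation job would compare both sides.

```python
# Minimal dual-write sketch. The two dicts are hypothetical stand-ins for the
# legacy (denormalized) and modern (normalized) data stores.

class DualWriter:
    def __init__(self):
        self.legacy = {}   # old store: remains the source of truth during cutover
        self.modern = {}   # new store: receives a shadow copy of every write

    def write(self, key, value):
        self.legacy[key] = value   # primary write
        self.modern[key] = value   # shadow write to the new schema

    def reconcile(self):
        """Return keys whose values diverge between the two stores."""
        keys = self.legacy.keys() | self.modern.keys()
        return [k for k in keys if self.legacy.get(k) != self.modern.get(k)]

w = DualWriter()
w.write("order:1", {"amount": 250})
assert w.reconcile() == []   # both stores agree after a dual write
```

Once reconciliation runs clean for an agreed window, reads can be cut over to the new store and the legacy write path retired, which is also the natural rollback point if divergence appears.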
Performance Optimization & Tuning
Benchmark existing system performance to establish baselines
Optimize new schema design through indexing strategies, partitioning, and query optimization
Conduct load testing and performance validation pre- and post-migration
Fine-tune database configuration, connection pooling, and caching strategies for AWS Aurora PostgreSQL
Ensure normalized design meets or exceeds current performance benchmarks
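The baseline-versus-redesign comparison above can be captured with a simple timing harness. The workloads here are hypothetical stand-ins, not real queries; against a live system the same shape of harness would wrap representative queries against the old and new schemas and report the regression ratio.

```python
import time

def benchmark(fn, runs=5):
    """Median wall-clock time of fn over several runs, as a baseline metric."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        timings.append(time.perf_counter() - start)
    timings.sort()
    return timings[len(timings) // 2]

# Hypothetical stand-in workloads for queries against the old and new schemas.
old_schema_query = lambda: sum(i * i for i in range(50_000))
new_schema_query = lambda: sum(i * i for i in range(50_000))

baseline = benchmark(old_schema_query)
candidate = benchmark(new_schema_query)
ratio = candidate / baseline  # values near (or below) 1.0 mean the redesign held the line
```

Taking the median rather than the mean damps one-off jitter; in practice the acceptance gate would compare `ratio` against an agreed tolerance per critical query.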
Data Governance, Security & Compliance
Ensure redesigned data model maintains compliance with relevant standards (GDPR, HIPAA, FERPA for EdTech)
Implement data governance policies, access controls, and security measures in new architecture
Define and enforce data quality standards throughout migration process
Establish data lineage and metadata management for new structures
Documentation & Knowledge Transfer
Create comprehensive documentation including data models, schema specifications, migration procedures, and data dictionaries
Develop data flow diagrams, entity-relationship diagrams, and architecture decision records
Document rationale for normalization decisions and any strategic denormalization
Create runbooks for migration execution and rollback procedures
Provide knowledge transfer sessions to technical teams
Required Skills & Qualifications
Data Modeling & Database Expertise
10+ years of hands-on experience in data modeling and database design
5+ years of experience reverse engineering and refactoring large-scale OLTP databases
Expert-level understanding of database normalization theory and practical application (1NF through 5NF, including BCNF)
Deep expertise in both forward and reverse engineering of data models
Proven track record of successful database modernization and schema refactoring projects
In-depth knowledge of relational database design principles, patterns, and anti-patterns
Technical Proficiency
Expert-level SQL skills and extensive experience with PostgreSQL (required)
Proficiency with additional RDBMS platforms (SQL Server, Oracle, MySQL)
Hands-on experience with AWS Aurora PostgreSQL or similar cloud-based database platforms
Strong expertise in data modeling tools (erwin, SqlDBM, ER/Studio, or similar)
Experience with database migration and version control tools
Familiarity with database comparison and schema diff tools
Knowledge of database DevOps practices and CI/CD pipeline integration
Experience with cloud platforms (AWS, Azure, GCP) and cloud-native database services
Relevant certifications (AWS Certified Database Specialty, PostgreSQL certifications) are advantageous
Data Modeler • Kurnool, Andhra Pradesh, India