AUTOMOTIVE

Porsche / MHP

Senior Full Stack Developer · Jun 2023 — Present · Remote

Central data backbone and monitoring tools for Porsche's global vehicle management system, processing order data across the international supply chain.

60+ Kafka topics
21 Core packages
26 API endpoints
23 Data streams

Porsche needed a unified data platform to process vehicle orders, dealer quotas, stock levels, and logistics data flowing from multiple on-premise and cloud sources. The challenge was bridging legacy on-premise systems with modern cloud infrastructure while maintaining data consistency, compliance, and real-time visibility across the entire supply chain.

I designed and built the multi-layer data pipeline architecture, developed the full-stack order monitoring application, and managed infrastructure-as-code across the platform. My responsibilities included Kafka topic management, Lambda function development, security policy enforcement, and delivering satellite data product repositories.

Client layer: React monitoring dashboard with Porsche Design System, enterprise data tables, and charting
API & auth gateway: REST API with OIDC authentication, WAF protection, and CDN distribution
Backend / data pipeline / compute: 26 Lambda functions; multi-layer data pipeline (ingest → transform → persist); scheduled processing and change events
Event backbone & storage: Kafka (on-premise + cloud), FIFO queues, Redshift analytics warehouse, DynamoDB
External data sources: vehicle ordering systems, dealer appointment platforms, quota management, compliance
Multi-Layer Data Pipeline

Three-stage pipeline architecture separating ingestion, transformation, and persistence into independently deployable layers. Each layer has its own error handling, retry logic, and dead-letter queues, enabling granular monitoring and recovery without reprocessing entire data flows.

Data architecture
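
As a rough sketch of the idea (the stage names, retry policy, and in-memory dead-letter list here are illustrative, not the production code), each stage can wrap its handler with its own retry and dead-letter behavior, so one bad record is parked instead of failing the whole flow:

```typescript
// A stage transforms one record; each stage gets its own retries and DLQ.
type Stage<I, O> = (record: I) => Promise<O>;

interface StageRunner<I, O> {
  run(records: I[]): Promise<O[]>;
  deadLetters: I[]; // records that exhausted their retries
}

function withRetryAndDlq<I, O>(stage: Stage<I, O>, maxAttempts = 3): StageRunner<I, O> {
  const deadLetters: I[] = [];
  return {
    deadLetters,
    async run(records: I[]): Promise<O[]> {
      const out: O[] = [];
      for (const record of records) {
        let attempt = 0;
        while (true) {
          try {
            out.push(await stage(record));
            break;
          } catch {
            if (++attempt >= maxAttempts) {
              deadLetters.push(record); // park it; the rest of the batch continues
              break;
            }
          }
        }
      }
      return out;
    },
  };
}
```

Because each layer owns its retries and dead letters, a failure in the transform stage can be replayed from its own DLQ without re-running ingestion.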
Partition-Based Concurrency

Concurrency model aligned with Kafka partition topology to maximize throughput while preserving message ordering guarantees. Processing scales horizontally by adding partitions, with each consumer instance handling a deterministic subset of the data stream.

Performance
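
A minimal in-memory illustration of the model (the hash function and partition count stand in for Kafka's real partitioner and topic topology): every key maps deterministically to one partition, partitions are processed concurrently, and each partition is processed serially, which is what preserves per-key ordering:

```typescript
// Tiny deterministic hash standing in for Kafka's murmur2 partitioner.
function partitionFor(key: string, partitions: number): number {
  let h = 0;
  for (const ch of key) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  return h % partitions;
}

async function processByPartition<T extends { key: string }>(
  records: T[],
  partitions: number,
  handle: (record: T) => Promise<void>,
): Promise<void> {
  // Group records into per-partition queues; input order is kept within a queue.
  const queues = new Map<number, T[]>();
  for (const r of records) {
    const p = partitionFor(r.key, partitions);
    if (!queues.has(p)) queues.set(p, []);
    queues.get(p)!.push(r);
  }
  // One serial worker per partition; partitions run concurrently.
  await Promise.all(
    [...queues.values()].map(async (queue) => {
      for (const record of queue) await handle(record); // in-order within partition
    }),
  );
}
```

Adding partitions adds independent queues, which is why throughput scales horizontally without breaking ordering guarantees.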
Change Data Capture

Real-time change detection on upstream data sources feeding incremental updates into the pipeline. Only modified records flow through transformation and persistence stages, dramatically reducing processing volume and warehouse compute costs.

Data sync
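
The core of the approach can be sketched as a snapshot diff (the record shape here is illustrative): only rows whose content differs from the previous snapshot move on to the transformation and persistence stages:

```typescript
interface Row {
  id: string;
  payload: string; // stand-in for the record's content / content hash
}

// Return only inserted or updated rows relative to the previous snapshot.
function changedRows(previous: Row[], current: Row[]): Row[] {
  const before = new Map(previous.map((r) => [r.id, r.payload]));
  return current.filter((r) => before.get(r.id) !== r.payload);
}
```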
Compliance-First Infrastructure

Policy-as-code enforcement with 40+ security rules validated on every deployment. Infrastructure definitions are checked against organizational compliance requirements before provisioning, preventing non-compliant resources from reaching production.

Security
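
In the spirit of CloudFormation Guard, the idea can be sketched as declarative rules evaluated against resource definitions before provisioning (rule names and checked properties here are illustrative, not the production rule set):

```typescript
type Resource = { Type: string; Properties: Record<string, unknown> };
type Rule = {
  name: string;
  appliesTo: string; // CloudFormation resource type the rule targets
  check: (props: Record<string, unknown>) => boolean;
};

// Two example rules; the real platform enforces 40+ of these.
const rules: Rule[] = [
  {
    name: "s3-encryption-required",
    appliesTo: "AWS::S3::Bucket",
    check: (p) => p["BucketEncryption"] !== undefined,
  },
  {
    name: "dynamodb-pitr-enabled",
    appliesTo: "AWS::DynamoDB::Table",
    check: (p) =>
      (p["PointInTimeRecoverySpecification"] as any)?.PointInTimeRecoveryEnabled === true,
  },
];

// A non-empty result blocks the deployment.
function violations(resources: Record<string, Resource>): string[] {
  const failed: string[] = [];
  for (const [logicalId, res] of Object.entries(resources)) {
    for (const rule of rules) {
      if (rule.appliesTo === res.Type && !rule.check(res.Properties)) {
        failed.push(`${logicalId}: ${rule.name}`);
      }
    }
  }
  return failed;
}
```

Running this gate in CI means a non-compliant resource fails the pipeline before any infrastructure is touched.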
Cross-Region Failover

Multi-region deployment strategy ensuring data pipeline continuity during regional outages. Kafka topic replication and stateless compute layers allow traffic redirection with minimal data loss and recovery time.

Reliability
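
A minimal sketch of the redirection decision, assuming an ordered region preference list and a health probe (both illustrative); because the compute layers are stateless, pointing traffic at the next healthy region is enough:

```typescript
type HealthProbe = (region: string) => boolean;

// Pick the first healthy region from an ordered preference list.
function selectRegion(preferred: string[], isHealthy: HealthProbe): string {
  for (const region of preferred) {
    if (isHealthy(region)) return region;
  }
  throw new Error("no healthy region available");
}
```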
GitOps Topic Management

Declarative Kafka topic configuration managed through version-controlled definitions. Topic creation, schema evolution, and partition changes flow through pull request review and automated validation before applying to the cluster.

Operations
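
The validation step can be sketched as a CI check over declarative topic definitions (field names and the concrete rules are illustrative); for example, Kafka cannot shrink a topic's partition count, so such a change is rejected in the pull request before it ever reaches the cluster:

```typescript
interface TopicDef {
  name: string;
  partitions: number;
  replicationFactor: number;
}

// Compare the desired definition against what is currently deployed.
function validateChange(current: TopicDef | undefined, desired: TopicDef): string[] {
  const errors: string[] = [];
  if (desired.replicationFactor < 2) {
    errors.push(`${desired.name}: replication factor below 2 loses durability`);
  }
  if (current && desired.partitions < current.partitions) {
    errors.push(`${desired.name}: partition count cannot decrease`);
  }
  return errors; // empty = safe to apply to the cluster
}
```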
Backend
Node.js · TypeScript · AWS CDK · Lambda · API Gateway
Frontend
React · Vite · Porsche Design System · AG Grid · TanStack Query · Tailwind
Cloud & Data
Kafka · Kinesis Firehose · Redshift Serverless · DynamoDB · S3 · CloudFront
Patterns & Tooling
Event sourcing · FIFO messaging · Parquet columnar storage · Lerna · CloudFormation Guard
  • Built central data backbone processing 60+ Kafka topics
  • Designed multi-layer pipeline architecture (ingest → transform → persist)
  • Built full-stack order monitoring application
  • Managed 40+ security policy rules via policy-as-code
  • Delivered 16 satellite data product repositories