Data Architect (Hands-On) - Mainframe CDC & Db2 → Aurora Postgres

Posted 2 days ago by Strike IT Services

Negotiable
Inside
Hybrid
London/Hybrid, UK

Summary: A leading financial services organisation is seeking a hands-on Data Architect to lead a data modernisation programme, migrating critical workloads from Mainframe to AWS. The role requires deep expertise in mainframe CDC, Db2, Kafka/MSK, and Aurora Postgres, focusing on designing and implementing a seamless migration of high-value datasets. The ideal candidate will take ownership of strategy, architecture, and delivery while ensuring minimal lag and robust controls during the migration process.

Key Responsibilities:

  • Architect and implement CDC pipelines using IBM IIDR/CDC, Precisely Connect, and IBM zDIH.
  • Design subscriptions, bookmarks, resync processes, backfill/replay paths.
  • Ensure low-latency, resilient CDC operations with clear runbooks and controls.
  • Translate Db2 schemas into Aurora-optimised Postgres models.
  • Handle identities/sequences, RI, triggers, LOBs, timestamp precision, and performance tuning.
  • Set standards for naming, partitioning, indexing, and storage.
  • Build and optimise ingestion & transformation flows.
  • Implement UPSERT/MERGE patterns with idempotency, ordering guarantees, and exactly-once or at-least-once semantics.
  • Deliver clean, repeatable end-to-end loads.
  • Manage EBCDIC to UTF-8 conversions, COMP-3/packed decimal, and binary numeric types.
  • Implement deterministic transformations with golden-set validation.
  • Use AWS SCT/DMS for schema conversion and bulk loads.
  • Work with AWS Glue/Athena/Redshift for downstream access and analytics.
  • Apply modern engineering: Terraform (IaC), GitLab CI/CD, versioning, automation.
  • Support dual-run, reconciliation, rollback, and validation routines.
  • Build dashboards for lag, throughput, costs, and failure conditions (CloudWatch/Grafana).
  • Implement lineage, masking, KMS encryption, IAM boundaries.
  • Produce runbooks, alerts, and operational documentation.
  • Ensure priority tables consistently run with <5s CDC lag.
  • Aim for zero encoding defects across validation suites.
  • Maintain >99.5% schema conformance to target models.
  • Provide clear lineage for all P1 flows.
  • Track and monitor cost per 1M events trending down.

Key Skills:

  • Deep hands-on expertise with IBM IIDR/CDC and Precisely Connect.
  • Strong knowledge of Db2 catalog and z/OS fundamentals.
  • Proven experience modelling and tuning Aurora Postgres.
  • Hands-on skills in Kafka/MSK, AWS SCT/DMS, Python, and SQL.
  • A strong engineering/testing mindset.

Salary (Rate): undetermined

City: London

Country: UK

Working Arrangements: hybrid

IR35 Status: inside IR35

Seniority Level: undetermined

Industry: IT

Detailed Description From Employer:

A leading financial services organisation is embarking on a major data modernisation programme to move critical workloads off the Mainframe and onto AWS. We're looking for a hands-on Data Architect with deep experience in mainframe CDC, Db2, Kafka/MSK, and Aurora Postgres to design, build, and lead a zero-surprise migration of high-value datasets.

This is a highly technical role for someone who can own strategy, architecture, and delivery, and who isn't afraid to get hands-on with design, modelling, and pipelines.

They want someone to enable the safe, accurate and repeatable migration of key datasets from Db2 on the Mainframe to AWS Aurora Postgres, with minimal lag, full lineage, robust controls, and flawless encoding/typing integrity.

6-month contract | Hybrid working (3 days/week in London office) | INSIDE IR35

1. Change Data Capture (CDC) Strategy & Build

  • Architect and implement CDC pipelines using IBM IIDR/CDC, Precisely Connect, and (where appropriate) IBM zDIH

  • Design subscriptions, bookmarks, resync processes, backfill/replay paths

  • Ensure low-latency, resilient CDC operations with clear runbooks and controls
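
As an illustration of the bookmark/resync discipline described above, the sketch below shows how a persisted bookmark makes restart and replay safe. This is a generic Python model; the class and function names are illustrative, not IBM IIDR or Precisely APIs.

```python
# Minimal sketch of CDC bookmark handling: persist the last applied
# source log position so a restart resumes (or a replay re-runs)
# without gaps or duplicates. Names are illustrative only.

class BookmarkStore:
    """Durable record of the last source position applied to the target."""
    def __init__(self):
        self._position = 0  # in practice: a control table row or S3 object

    def load(self) -> int:
        return self._position

    def save(self, position: int) -> None:
        self._position = position

def apply_stream(events, store, apply_fn):
    """Apply CDC events newer than the bookmark; skip already-applied ones."""
    bookmark = store.load()
    for position, payload in events:
        if position <= bookmark:   # replay-safe: old events are no-ops
            continue
        apply_fn(payload)
        store.save(position)       # advance only after a successful apply
```

In production the bookmark save would be transactional with the apply itself, so a crash between the two cannot lose or double-apply an event.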

2. Target Data Modelling - Db2 → Aurora Postgres

  • Translate Db2 (z/OS and LUW) schemas into Aurora-optimised Postgres models

  • Handle identities/sequences, RI, triggers, LOBs, timestamp precision, and performance tuning

  • Set standards for naming, partitioning, indexing, and storage

3. Pipeline Design - Db2 → CDC → MSK/Kafka/S3 → Aurora

  • Build and optimise ingestion & transformation flows

  • Implement UPSERT/MERGE patterns with idempotency, ordering guarantees, and exactly-once or at-least-once semantics

  • Deliver clean, repeatable end-to-end loads
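
The UPSERT/MERGE pattern above can be sketched as an order-guarded apply: under at-least-once delivery, each row carries its source log position, and an event only wins if it is newer than what the target already holds. Below is a minimal in-memory Python model; on Aurora Postgres the same guard is typically expressed as `INSERT ... ON CONFLICT ... DO UPDATE ... WHERE` with an LSN comparison (the column names here are illustrative).

```python
def upsert(target: dict, key, row: dict) -> bool:
    """Apply a CDC row iff its source_lsn is newer than the stored one.

    Returns True if the row was applied, False if it was a duplicate
    or an out-of-order event. Both are safe no-ops, which is what makes
    the apply idempotent under at-least-once delivery.
    """
    current = target.get(key)
    if current is not None and row["source_lsn"] <= current["source_lsn"]:
        return False
    target[key] = row
    return True
```

Because replays and re-deliveries reduce to no-ops, the same batch can be loaded repeatedly to get the "clean, repeatable end-to-end loads" called for above.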

4. Encoding & Data Types

  • Manage EBCDIC → UTF-8 conversions, COMP-3/packed decimal, and binary numeric types

  • Implement deterministic transformations with golden-set validation
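
These conversions are mechanical but easy to get subtly wrong, which is why golden-set validation matters. A minimal standard-library Python sketch of the two common cases, COMP-3 (packed decimal) decoding and EBCDIC text decoding, follows; code page 037 (US EBCDIC) is an assumption here, since a given mainframe may use another page such as 285 (UK).

```python
import codecs

def decode_comp3(raw: bytes, scale: int = 0):
    """Decode a COBOL COMP-3 (packed decimal) field.

    Each byte packs two BCD digits; the low nibble of the final byte
    is the sign (0xD = negative, 0xC or 0xF = positive/unsigned).
    """
    digits = []
    sign = 1
    for i, byte in enumerate(raw):
        digits.append(byte >> 4)
        if i == len(raw) - 1:
            sign = -1 if (byte & 0x0F) == 0x0D else 1
        else:
            digits.append(byte & 0x0F)
    value = 0
    for d in digits:
        value = value * 10 + d
    return sign * value / (10 ** scale) if scale else sign * value

def ebcdic_to_str(raw: bytes) -> str:
    """Decode EBCDIC bytes (code page 037) to str; re-encode as UTF-8
    on the way into Postgres."""
    return codecs.decode(raw, "cp037")
```

A golden-set test then pins known mainframe byte patterns to expected decoded values, so any conversion drift fails deterministically rather than surfacing as silent data corruption.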

5. Tooling & Platform Integration

  • Use AWS SCT/DMS for schema conversion and bulk loads

  • Work with AWS Glue/Athena/Redshift for downstream access and analytics

  • Apply modern engineering: Terraform (IaC), GitLab CI/CD, versioning, automation

6. Cutover, Controls & Observability

  • Support dual-run, reconciliation, rollback, and validation routines

  • Build dashboards for lag, throughput, costs, and failure conditions (CloudWatch/Grafana)

  • Implement lineage, masking, KMS encryption, IAM boundaries

  • Produce runbooks, alerts, and operational documentation

Key Outcomes/KPIs

  • Priority tables consistently running <5s CDC lag

  • Zero encoding defects across validation suites

  • >99.5% schema conformance to target models

  • Clear lineage for all P1 flows

  • Cost per 1M events tracked, monitored, and trending down
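
The lag KPI above can be checked mechanically: per-event pairs of source commit time and target apply time give a worst-case lag to compare against the 5-second target. A minimal sketch, with illustrative function names:

```python
def max_lag_seconds(samples):
    """samples: iterable of (source_commit_ts, target_apply_ts) pairs,
    both in epoch seconds; returns the worst observed lag."""
    return max(apply_ts - commit_ts for commit_ts, apply_ts in samples)

def meets_lag_slo(samples, threshold_seconds: float = 5.0) -> bool:
    """True when every sampled event was applied within the threshold."""
    return max_lag_seconds(samples) < threshold_seconds
```

The same shape extends to the other KPIs: a unit cost is just total spend divided by (events / 1_000_000), plotted over time to show the downward trend.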

Must-Have Experience

  • Deep hands-on expertise with IBM IIDR/CDC and Precisely Connect (design + operations)

  • Strong knowledge of Db2 catalog and z/OS fundamentals, including batch windows and SMF considerations

  • Proven experience modelling and tuning Aurora Postgres

  • Hands-on skills in Kafka/MSK, AWS SCT/DMS, Python, and SQL

  • A strong engineering/testing mindset: you write validation tests before cutover