ETL/Python Developer with Medicaid

Posted 4 days ago by SDVS Technologies LLC

Negotiable
Undetermined
Remote

Summary: The Senior ETL / Python Developer role supports the Enterprise Data Warehouse and Analytics Program, with a focus on Medicaid analytics and federal reporting. The position requires strong ETL and Python engineering expertise and experience handling large healthcare datasets in a regulated environment. Close collaboration with architects, analysts, QA, and reporting teams is essential to develop and maintain scalable data pipelines. The ideal candidate is delivery-focused and has a background in cloud-based platforms.

Key Responsibilities:

  • Design, develop, and maintain enterprise ETL pipelines using Azure Data Factory (ADF), Informatica PowerCenter, and Python-based frameworks
  • Build and optimize scalable data processing solutions using Python, Spark, and Databricks
  • Support Medicaid analytics and federal reporting initiatives (e.g., T-MSIS, PERM, MARS, Quality of Care)
  • Develop robust data validation, reconciliation, and audit-traceable data pipelines
  • Write and optimize SQL and stored procedures across relational platforms such as Snowflake, Oracle, and SQL Server
  • Participate in cloud migration and modernization initiatives within Azure-based architectures
  • Collaborate with analysts, QA, and reporting teams to ensure data quality, accuracy, and timeliness
  • Follow data engineering best practices for performance, reliability, reusability, and security
  • Support production operations, incident resolution, and root-cause analysis
  • Participate in code reviews, source control, and CI/CD processes using Azure DevOps and GitHub

Key Skills:

  • 5+ years of data engineering experience with a focus on enterprise data warehousing
  • 5+ years of hands-on ETL development using Informatica PowerCenter, Azure Data Factory, or similar tools
  • 5+ years of Python development for data engineering and automation
  • 3+ years of experience with Spark-based processing frameworks (Databricks or equivalent)
  • Strong SQL expertise and experience with relational databases (such as Teradata, Snowflake, Oracle, SQL Server)
  • Experience with source control and DevOps practices (Azure DevOps, GitHub, CI/CD)
  • Bachelor's degree or higher in Computer Science, Engineering, Analytics, or a related field
  • Strong analytical, problem-solving, and troubleshooting skills

Salary (Rate): undetermined

City: undetermined

Country: undetermined

Working Arrangements: remote

IR35 Status: undetermined

Seniority Level: undetermined

Industry: Other

Senior ETL / Python Developer

Our client is seeking a hands-on Senior Data Engineer (ETL / Python Developer) to support the Enterprise Data Warehouse (EDW) and Analytics Program. This role plays a critical part in designing, developing, and maintaining scalable data ingestion and transformation pipelines that support Medicaid analytics, federal reporting, and enterprise decision support.

The ideal candidate brings strong ETL and Python engineering expertise, experience working with large healthcare datasets, and the ability to operate effectively in a regulated, audit-sensitive environment. This is a delivery-focused role requiring close collaboration with architects, analysts, QA, PMs, SMEs, developers, and BI/reporting teams across both legacy and cloud-based platforms.

Preferred Qualifications

  • Experience supporting State Medicaid EDW or MMIS analytics environments
  • Healthcare or public-sector analytics experience (Medicaid / Medicare preferred)
  • Data modeling experience in enterprise data warehouse environments
  • Scripting experience (PowerShell, Bash) for automation and orchestration
  • Experience designing or consuming REST APIs within data platforms
  • Familiarity with data quality frameworks, reconciliation, and audit support
  • Azure certifications related to data engineering or analytics