Senior Cloud Developer / Cloud Data Platform Architect / Software Engineer :: REMOTE
Posted Today
Summary: The role of Senior Cloud Developer / Full Stack Developer involves leading the design and implementation of cloud-native, large-scale data platforms, primarily using AWS technologies. The ideal candidate will have extensive experience in data engineering, particularly with AWS, PySpark, Java, and Python, and will collaborate with various teams to create secure and scalable ETL pipelines. This position requires a strong background in fintech and a proven track record in building high-performance data solutions. The role is remote, allowing for flexibility in work arrangements.
Salary (Rate): Negotiable
City: undetermined
Country: USA
Working Arrangements: Remote
IR35 Status: Outside IR35
Seniority Level: undetermined
Industry: IT
Job Title: Senior Cloud Developer / Cloud Data Platform Architect / Software Engineer (AWS, Spark, Python)
Location: Remote
Duration: Long-term contract
Visa: / GC
FinTech experience required.
Must Have: Java, Python, Data, and AWS
About the Role
We are seeking a Software Engineer to lead the design and implementation of cloud-native, large-scale data platforms. The ideal candidate will have extensive experience in AWS, PySpark, Java, Python, and Terraform, with a proven track record of building high-performance, event-driven data solutions.
You'll collaborate closely with architects, DevOps engineers, and product teams to design secure, scalable, and resilient ETL pipelines that power enterprise-grade applications and analytics systems.
Key Responsibilities
- Architect, build, and maintain data pipelines leveraging AWS Glue, Lambda, Step Functions, EMR, ECS, and Kinesis.
- Develop ETL and ELT frameworks using PySpark / Spark / Databricks for large-scale data processing (a minimal sketch follows this list).
- Lead migration of legacy batch processes to event-driven, real-time architectures on AWS.
- Design and implement Terraform-based infrastructure for modular, reusable AWS resource provisioning.
- Optimize performance and scalability of distributed data processing pipelines.
- Integrate and orchestrate data ingestion from multiple systems (S3, Kafka, APIs, databases).
- Work with Snowflake / Redshift / Postgres for warehousing and data modeling.
- Partner with architecture, product, and analytics teams to define data governance, lineage, and quality frameworks.
- Implement CI/CD pipelines with Jenkins, pytest, and JUnit to ensure test-driven deployments.
- Mentor junior engineers, review code, and promote best practices for data engineering and DevOps.
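As context for the ETL/ELT bullet above, here is a minimal PySpark batch-job sketch. It is for illustration only: every bucket path, column name, and schema detail is a hypothetical assumption, not taken from this posting.

```python
# Illustrative PySpark ETL sketch; all paths, columns, and the schema are
# hypothetical assumptions, not part of the job requirements.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Extract: read raw JSON events from an S3 landing zone (hypothetical path).
raw = spark.read.json("s3://example-landing-zone/events/")

# Transform: basic cleansing and typing typical of a batch ETL framework.
cleaned = (
    raw
    .filter(F.col("event_id").isNotNull())        # drop malformed records
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .withColumn("ingest_date", F.current_date())  # partition key for the lake
    .dropDuplicates(["event_id"])
)

# Load: write partitioned Parquet to a curated zone for downstream
# warehousing (e.g., an external stage for Snowflake or Redshift Spectrum).
(cleaned.write
    .mode("append")
    .partitionBy("ingest_date")
    .parquet("s3://example-curated-zone/events/"))

spark.stop()
```

In a Glue or EMR deployment, the same logic would typically run under the platform's managed Spark session rather than creating its own.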
Required Skills & Experience
- 15+ years of total software engineering experience, with 8+ years in Data Engineering / Big Data / Cloud roles.
- Hands-on experience with AWS (Lambda, Glue, Step Functions, EMR, ECS, S3, Kinesis, SQS, SNS).
- Experience in Python and Java.
- Strong knowledge of Spark / PySpark for batch and stream processing.
- Experience with Terraform, Docker, Jenkins, and CI/CD automation.
- Working knowledge of Kafka or Kinesis for data streaming.
- Familiarity with Snowflake / Redshift / Postgres / DynamoDB.
- Expertise in event-driven architectures and microservices (see the sketch after this list).
- Strong understanding of data lake, data warehouse, and lakehouse design principles.
- Proven experience delivering secure, scalable, and cost-optimized AWS solutions.
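To illustrate the event-driven pattern referenced above, a minimal sketch of an AWS Lambda function consuming a Kinesis stream follows; the table name and payload fields are hypothetical assumptions.

```python
# Illustrative event-driven consumer: a Lambda handler for a Kinesis event
# source. Table and field names are hypothetical, not from this posting.
import base64
import json

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("example-events")  # hypothetical DynamoDB table


def handler(event, context):
    """Process a batch of Kinesis records; the Kinesis event source delivers
    each payload base64-encoded under record["kinesis"]["data"]."""
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        table.put_item(Item={
            "event_id": payload["event_id"],  # assumed partition key
            "body": json.dumps(payload),
        })
    return {"processed": len(event["Records"])}
```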
Must Have
- AWS Certified Solutions Architect or AWS Data Analytics certification.
- Background in FinTech, Banking, or Government data platforms.
- Experience with Databricks, Airflow, or similar orchestration tools.
- Experience with Spring Boot / Java-based microservices.
- Experience in test-driven development (TDD/BDD) using Cucumber, JUnit, or pytest (a small example follows).
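As a small illustration of the pytest-style TDD mentioned in the last bullet, the sketch below tests an assumed helper function; both the helper and its cases are hypothetical.

```python
# Illustrative pytest TDD sketch; normalize_amount is an assumed helper
# invented for this example, not code from the posting.
import pytest


def normalize_amount(raw: str) -> float:
    """Assumed helper: parse a currency string such as "$1,234.50"."""
    return float(raw.replace("$", "").replace(",", ""))


@pytest.mark.parametrize("raw, expected", [
    ("$1,234.50", 1234.50),
    ("10", 10.0),
])
def test_normalize_amount(raw, expected):
    assert normalize_amount(raw) == expected


def test_normalize_amount_rejects_empty_input():
    # float("") raises ValueError, which the test asserts explicitly.
    with pytest.raises(ValueError):
        normalize_amount("")
```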
Education
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- MBA or relevant technical management experience a plus.