Databricks Developer: Java Spark

Posted 5 days ago by 1752661312

Negotiable
Outside
Remote
USA

Summary: The Databricks Developer role focuses on designing, developing, and maintaining scalable data processing solutions on the Databricks platform, specifically for IRS datasets. Candidates must possess advanced skills in Java and Apache Spark, along with a strong understanding of big data processing and secure data handling in a federal environment. The position emphasizes performance optimization and collaboration with cross-functional teams to deliver effective data solutions. An active IRS MBI clearance is required for this role.

Salary (Rate): undetermined

City: undetermined

Country: USA

Working Arrangements: remote

IR35 Status: outside IR35

Seniority Level: undetermined

Industry: IT

Detailed Description From Employer:

Databricks Developer: Java Spark

Location: Remote

Duration: Long Term

Task Description:

The Databricks Developer will be responsible for designing, developing, and maintaining scalable data processing solutions on the Databricks platform, with a focus on integrating and transforming IRS datasets such as the Information Returns Master File (IRMF), Business Master File (BMF), and Individual Master File (IMF). This role requires advanced proficiency in Java and Apache Spark, and a deep understanding of big data processing, performance optimization, and secure data handling in a federal environment.

Required skills/Level of Experience:

We are seeking a Databricks Developer with deep expertise in Java and Apache Spark, along with hands-on experience working with IRS data systems such as IRMF, BMF, or IMF. The ideal candidate will design, develop, and optimize big data pipelines and analytics solutions on the Databricks platform. The role requires a strong grasp of distributed data processing, performance tuning, and scalable architecture.

Key Responsibilities:

  • Design, develop, and maintain scalable data pipelines using Apache Spark on Databricks
  • Implement data processing logic in Java 8+, leveraging functional programming and OOP best practices
  • Integrate with IRS data systems including IRMF, BMF, or IMF
  • Optimize Spark jobs for performance, reliability, and cost-efficiency
  • Collaborate with cross-functional teams to gather requirements and deliver data solutions
  • Ensure compliance with data security, privacy, and governance standards
  • Troubleshoot and debug production issues in distributed data environments

Required Skills & Qualifications:

  • Active IRS MBI clearance required. An IRS-issued laptop is strongly preferred. Please provide a copy of the candidate's active MBI letter.
  • Bachelor's degree in Computer Science, Information Systems, or a related field.
  • 8+ years of professional experience demonstrating the required technical skills and responsibilities listed:

IRS Data Systems Experience

  • Hands-on experience working with IRS IRMF, BMF, or IMF datasets
  • Understanding of IRS data structures, compliance, and security protocols

Programming Language Proficiency

  • Strong expertise in Java 8 or higher
  • Experience with functional programming (Streams API, Lambdas)
  • Familiarity with object-oriented design patterns and best practices
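
By way of illustration (a minimal sketch, not part of the posting itself), the Streams API and lambda bullets above amount to being comfortable writing pipelines like this in plain Java:

```java
import java.util.List;
import java.util.stream.Collectors;

public class StreamSketch {
    // Hypothetical helper: keep names longer than three characters
    // and normalize them to upper case.
    static List<String> normalize(List<String> names) {
        return names.stream()
                .filter(n -> n.length() > 3)   // lambda as a predicate
                .map(String::toUpperCase)      // method reference
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(normalize(List.of("ann", "brian", "carol")));
        // prints [BRIAN, CAROL]
    }
}
```

The same filter/map style carries over directly to Spark's Dataset API, which is one reason the posting pairs the two skills.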

Apache Spark

  • Proficient in Spark Core, Spark SQL, and DataFrame/Dataset APIs
  • Understanding of RDDs and when to use them
  • Experience with Spark Streaming or Structured Streaming
  • Skilled in performance tuning and Spark job optimization
  • Ability to use Spark UI for troubleshooting stages and tasks

Big Data Ecosystem

  • Familiarity with HDFS, Hive, or HBase
  • Experience integrating with Kafka, S3, or Azure Data Lake
  • Comfort with Parquet, Avro, or ORC file formats

Data Processing and ETL

  • Strong understanding of batch and real-time data processing paradigms
  • Experience building ETL pipelines with Spark
  • Proficient in data cleansing, transformation, and enrichment
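
The cleansing, transformation, and enrichment bullets can be sketched in plain Java (a hypothetical example using only the standard library; in a real pipeline this logic would live inside Spark transformations):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class CleanseSketch {
    // Hypothetical cleansing step: trim whitespace, drop blank records,
    // de-duplicate, then enrich each value with a derived field (its length).
    static Map<String, Integer> cleanseAndEnrich(List<String> raw) {
        return raw.stream()
                .map(String::trim)              // transformation
                .filter(s -> !s.isEmpty())      // cleansing: drop blanks
                .distinct()                     // cleansing: de-duplicate
                .collect(Collectors.toMap(s -> s, String::length)); // enrichment
    }

    public static void main(String[] args) {
        System.out.println(cleanseAndEnrich(List.of(" ab ", "", "ab", "cde")));
    }
}
```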

DevOps / Deployment

  • Experience with YARN, Kubernetes, or EMR for Spark deployment
  • Familiarity with CI/CD tools like Jenkins or GitHub Actions
  • Monitoring experience with Grafana, Prometheus, Datadog, or Spark UI logs

Version Control & Build Tools

  • Proficient in Git
  • Experience with Maven or Gradle

Testing

  • Unit testing with JUnit or TestNG
  • Experience with Mockito or similar mocking frameworks
  • Data validation and regression testing for Spark jobs

Soft Skills / Engineering Practices

  • Experience working in Agile/Scrum environments
  • Strong documentation skills (Markdown, Confluence, etc.)
  • Ability to debug and troubleshoot production issues effectively

Preferred Qualifications:

  • Experience with Scala or Python in Spark environments
  • Familiarity with Databricks or Google Dataproc
  • Knowledge of Delta Lake or Apache Iceberg
  • Experience with data modeling and performance design for big data systems
