Golang Data Engineer with Kafka

Posted 1 day ago

Negotiable
Outside IR35
Remote
USA

Summary: The role of Golang Data Engineer with Kafka involves developing and maintaining data-intensive APIs and distributed analysis capabilities for Bayer Crop Science. The position requires extensive experience in software engineering, particularly with GoLang and Apache Kafka, to transform complex scientific datasets into innovative software solutions. The engineer will work remotely and contribute to a team focused on agricultural solutions for a sustainable future. Candidates should have a strong background in cloud infrastructure and data modeling for large-scale databases.

Key Responsibilities:

  • Develop and maintain data-intensive APIs using a RESTful approach.
  • Create distributed analysis capabilities around various datasets.
  • Apply knowledge of algorithms and data structures to improve software craftsmanship.
  • Collaborate with a team to solve complex challenges with real-world impact.
  • Explore technology stacks to find optimal solutions for datasets.
  • Present work at relevant technical conferences.

Key Skills:

  • 4 years of experience with GoLang.
  • 8 years of software development experience.
  • Experience with Apache Kafka for stream processing.
  • Proficiency in building and maintaining data-intensive APIs.
  • Familiarity with Docker for containerized application deployments.
  • Experience with unit testing and test-driven development.
  • Knowledge of cloud platforms like AWS, Azure, or Google Cloud.
  • Data modeling experience for large-scale databases (relational or NoSQL).

Salary (Rate): undetermined

City: undetermined

Country: USA

Working Arrangements: remote

IR35 Status: outside IR35

Seniority Level: undetermined

Industry: IT

Detailed Description From Employer:
Primary Skills: GoLang Data Engineer

Job Description

Recruiter notes:

  • T+S
  • Native candidates on W2 only
  • Long job history required (2+ years)
  • LinkedIn profile required
  • Remote

Please fill in the table below with your years of experience:

  • GoLang: ___ years
  • Kafka: ___ years
  • gRPC: ___ years

Required Qualifications:

  • 4 years of GoLang
  • 8 years of software development
  • Building and maintaining data-intensive APIs using a RESTful approach
  • Apache Kafka for stream processing
  • Docker for creating containerized application deployments
  • Protocol buffers and gRPC
  • Unit testing and test-driven development
  • Working in AWS, Azure, or Google Cloud Platform; Apache Beam or Google Cloud Dataflow; Google Kubernetes Engine or Kubernetes
  • Data modeling for large-scale databases, either relational or NoSQL

Preferred - Not required:

  • Experience working with scientific datasets, or applying quantitative science to business problems
  • Bioinformatics experience, especially large scale storage and data mining of variant data, variant annotation, and genotype to phenotype correlation

The mission of Bayer Crop Science is centered on developing agricultural solutions for a sustainable future that will include a global population projected to eclipse 9.6 billion by 2050. We approach agriculture holistically, looking across a broad range of solutions from using biotechnology and plant breeding to produce the best possible seeds, to advanced predictive and prescriptive analytics designed to select the best possible crop system for every acre.

To make this possible, Bayer collects terabytes of data across all aspects of its operations, from genome sequencing, crop field trials, manufacturing, supply chain, financial transactions and everything in between. There is an enormous need and potential here to do something that has never been done before. We need great people to help transform these complex scientific datasets into innovative software that is deployed across the pipeline, accelerating the pace and quality of all crop system development decisions to unbelievable levels.

What you will do is why you should join us:

Be a critical senior member of a data engineering team focused on creating distributed analysis capabilities around a large variety of datasets

Take pride in software craftsmanship, apply a deep knowledge of algorithms and data structures to continuously improve and innovate

Work with other top-level talent, solving a wide range of complex and unique challenges that have real-world impact

Explore relevant technology stacks to find the best fit for each dataset

Pursue opportunities to present our work at relevant technical conferences

  • Google Cloud Next 2019
  • GraphConnect 2015
  • Google Cloud Blog

Apply your talent to relevant projects. Strength of ideas trumps position on an org chart.

If you share our values, you should have:

At least 8 years of experience in software engineering

At least 4 years of experience with Go (GoLang)

Proven experience (2 years) building and maintaining data-intensive APIs using a RESTful approach
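
For a sense of what a data-intensive RESTful API looks like in Go, here is a minimal sketch using only the standard library. The Dataset type, its fields, and the /datasets route are illustrative placeholders, not part of any actual Bayer system:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// Dataset is a hypothetical record standing in for a real
// scientific dataset descriptor.
type Dataset struct {
	ID   string `json:"id"`
	Name string `json:"name"`
	Rows int64  `json:"rows"`
}

// listDatasets serves a static slice as JSON; a real service would
// page results out of a database or object store.
func listDatasets(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodGet {
		http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
		return
	}
	datasets := []Dataset{
		{ID: "trial-2024-001", Name: "field trial yields", Rows: 1250000},
	}
	w.Header().Set("Content-Type", "application/json")
	if err := json.NewEncoder(w).Encode(datasets); err != nil {
		log.Printf("encode: %v", err)
	}
}

func main() {
	http.HandleFunc("/datasets", listDatasets)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```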

Experience with stream processing using Apache Kafka
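
As a rough illustration of Kafka stream processing in Go, the sketch below consumes from a topic using the third-party segmentio/kafka-go client (one common choice; the posting does not name a library). The broker address, topic, and consumer group are placeholders:

```go
package main

import (
	"context"
	"log"

	"github.com/segmentio/kafka-go"
)

func main() {
	// Reader settings are placeholders; setting a GroupID gives
	// coordinated, resumable consumption across service instances.
	r := kafka.NewReader(kafka.ReaderConfig{
		Brokers: []string{"localhost:9092"},
		Topic:   "field-trial-events",
		GroupID: "analysis-service",
	})
	defer r.Close()

	for {
		// ReadMessage blocks until a message arrives and commits
		// the offset for the consumer group.
		msg, err := r.ReadMessage(context.Background())
		if err != nil {
			log.Fatalf("read: %v", err)
		}
		log.Printf("partition=%d offset=%d key=%s value=%s",
			msg.Partition, msg.Offset, msg.Key, msg.Value)
	}
}
```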

A level of comfort with Unit Testing and Test-Driven Development methodologies
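
In Go, unit testing typically means table-driven tests with the standard testing package; the sketch below exercises a hypothetical normalizeYield helper (both the function and its test cases are illustrative):

```go
package trial

import "testing"

// normalizeYield is a hypothetical helper: yield per planted acre,
// guarding against a zero denominator.
func normalizeYield(bushels, acres float64) float64 {
	if acres == 0 {
		return 0
	}
	return bushels / acres
}

// TestNormalizeYield is table-driven: each case is a row, and
// failures report the failing row by name.
func TestNormalizeYield(t *testing.T) {
	cases := []struct {
		name           string
		bushels, acres float64
		want           float64
	}{
		{"typical", 180, 1, 180},
		{"fractional acreage", 90, 0.5, 180},
		{"zero acreage", 100, 0, 0},
	}
	for _, c := range cases {
		t.Run(c.name, func(t *testing.T) {
			if got := normalizeYield(c.bushels, c.acres); got != c.want {
				t.Errorf("normalizeYield(%v, %v) = %v, want %v",
					c.bushels, c.acres, got, c.want)
			}
		})
	}
}
```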

Familiarity with creating and maintaining containerized application deployments with a platform like Docker
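
Containerized deployment of a Go service commonly starts from a multi-stage Dockerfile like the sketch below; the image tags, paths, and binary name are illustrative:

```dockerfile
# Build stage: compile a static Go binary.
FROM golang:1.22 AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /bin/service ./cmd/service

# Runtime stage: only the binary goes into a minimal base image.
FROM gcr.io/distroless/static-debian12
COPY --from=build /bin/service /service
ENTRYPOINT ["/service"]
```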

A proven ability to build and maintain cloud-based infrastructure on a major cloud provider like AWS, Azure, or Google Cloud Platform

Experience with data modeling for large-scale databases, either relational or NoSQL
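
To make data modeling for a large-scale store concrete, here is a hypothetical Go sketch of a wide-column/NoSQL row design, where the access pattern drives the key layout rather than normalization; all names and fields are illustrative:

```go
package model

import (
	"fmt"
	"time"
)

// TrialObservation is a hypothetical row type for a large-scale
// store, loosely modeled on crop field-trial data.
type TrialObservation struct {
	FieldID   string    // partition component: spreads write load
	PlantedAt time.Time // clustering component: time-ordered scans
	Trait     string    // e.g. "yield" or "moisture"
	Value     float64
}

// RowKey builds a composite key so every observation for one field
// sorts contiguously and date-range scans stay cheap.
func (o TrialObservation) RowKey() string {
	return fmt.Sprintf("%s#%s#%s",
		o.FieldID, o.PlantedAt.Format("2006-01-02"), o.Trait)
}
```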

Bonus points for:

Experience with protocol buffers and gRPC
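
gRPC work in Go starts from a protocol-buffers service definition like the minimal sketch below; the package, service, and message names are hypothetical, and protoc with the Go plugins (protoc-gen-go, protoc-gen-go-grpc) would generate the client and server stubs:

```protobuf
syntax = "proto3";

package trial.v1;

option go_package = "example.com/trial/gen/trialpb";

// DatasetService is a hypothetical read API.
service DatasetService {
  rpc GetDataset(GetDatasetRequest) returns (Dataset);
}

message GetDatasetRequest {
  string id = 1;
}

message Dataset {
  string id = 1;
  string name = 2;
  int64 row_count = 3;
}
```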

Experience with Google Cloud Platform, Apache Beam and/or Google Cloud Dataflow, and Google Kubernetes Engine or Kubernetes

Experience working with scientific datasets, or a background in the application of quantitative science to business problems

Bioinformatics experience, especially large scale storage and data mining of variant data, variant annotation, and genotype to phenotype correlation