Summary: The GoLang Engineer role focuses on developing and maintaining data-intensive APIs and distributed analysis capabilities within a data engineering team. Candidates should possess extensive experience in software engineering, particularly with GoLang and Apache Kafka, while also being comfortable with cloud infrastructure and containerized applications. The position emphasizes software craftsmanship and innovation in transforming complex scientific datasets into impactful software solutions. This is a remote position based in the USA, classified as outside IR35.
Key Responsibilities:
- Be a critical senior member of a data engineering team focused on creating distributed analysis capabilities around a large variety of datasets.
- Apply a deep knowledge of algorithms and data structures to continuously improve and innovate.
- Work with other top-level talent solving a wide range of complex and unique challenges.
- Explore relevant technology stacks to find the best fit for each dataset.
- Pursue opportunities to present work at relevant technical conferences.
- Project your talent into relevant projects, emphasizing the strength of ideas over position.
Key Skills:
- 4+ years of GoLang experience.
- 8+ years of software development experience.
- Experience building and maintaining data-intensive APIs using a RESTful approach (see the Go sketch after this list).
- Stream processing experience with Apache Kafka.
- Containerized application deployments experience with Docker.
- Familiarity with Protocol buffers and gRPC.
- Unit Testing and Test Driven Development experience.
- Experience working in AWS, Azure, or Google Cloud Platform.
- Data modeling for large scale databases, either relational or NoSQL.
- Experience with scientific datasets and quantitative science applications is preferred.
- Bioinformatics experience is a plus.
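As a flavor of the RESTful API work listed above, here is a minimal sketch of a data-intensive JSON endpoint in Go using only the standard library. The Measurement type, the /measurements route, and the sample record are hypothetical illustrations, not details from the posting:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// Measurement is a hypothetical record type standing in for one of the
// many scientific datasets the role describes.
type Measurement struct {
	FieldID string  `json:"field_id"`
	Trait   string  `json:"trait"`
	Value   float64 `json:"value"`
}

// listMeasurements handles GET /measurements and returns the records as JSON.
func listMeasurements(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodGet {
		http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
		return
	}
	// In a real service this slice would be paged out of a database.
	records := []Measurement{
		{FieldID: "F-001", Trait: "yield", Value: 182.4},
	}
	w.Header().Set("Content-Type", "application/json")
	if err := json.NewEncoder(w).Encode(records); err != nil {
		log.Printf("encode: %v", err)
	}
}

func main() {
	http.HandleFunc("/measurements", listMeasurements)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```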
Salary (Rate): negotiable
City: undetermined
Country: USA
Working Arrangements: remote
IR35 Status: outside IR35
Seniority Level: undetermined
Industry: IT
Native on W2
GoLang Years | Kafka Years | gRPC Years
------------ | ----------- | ----------
             |             |
Required Qualifications:
- 4 years GoLang
- 8 years Development
- building and maintaining data-intensive APIs using a RESTful approach
- Apache Kafka - stream processing (see the Kafka sketch after this list)
- Docker - creating containerized application deployments
- Protocol buffers and gRPC
- Unit Testing and Test Driven Development
- Working in AWS, Azure, or Google Cloud Platform; Apache Beam or Google Cloud Dataflow; Google Kubernetes Engine or Kubernetes
- data modeling for large scale databases, either relational or NoSQL
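A minimal sketch of the Kafka stream-processing requirement above. It uses segmentio/kafka-go, one common Go client that the posting does not itself name; the broker address, topic, and consumer group are placeholders:

```go
package main

import (
	"context"
	"log"

	"github.com/segmentio/kafka-go"
)

func main() {
	// Broker, topic, and group ID are placeholders; in a real
	// deployment they would come from configuration.
	r := kafka.NewReader(kafka.ReaderConfig{
		Brokers: []string{"localhost:9092"},
		Topic:   "field-trial-events",
		GroupID: "analysis-service",
	})
	defer r.Close()

	for {
		// ReadMessage blocks until a message arrives and, because a
		// GroupID is set, commits offsets as messages are consumed.
		msg, err := r.ReadMessage(context.Background())
		if err != nil {
			log.Fatalf("read: %v", err)
		}
		log.Printf("partition %d offset %d: %s", msg.Partition, msg.Offset, msg.Value)
	}
}
```

Production code would add retry and graceful-shutdown handling; they are omitted here to keep the sketch short.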
Preferred (not required):
- Experience working with scientific datasets, or applying quantitative science to business problems
- Bioinformatics experience, especially large-scale storage and data mining of variant data, variant annotation, and genotype-to-phenotype correlation
The mission of Bayer Crop Science is centered on developing agricultural solutions for a sustainable future that will include a global population projected to eclipse 9.6 billion by 2050. We approach agriculture holistically, looking across a broad range of solutions from using biotechnology and plant breeding to produce the best possible seeds, to advanced predictive and prescriptive analytics designed to select the best possible crop system for every acre.
To make this possible, Bayer collects terabytes of data across all aspects of its operations, from genome sequencing and crop field trials to manufacturing, supply chain, financial transactions, and everything in between. There is an enormous need and potential here to do something that has never been done before. We need great people to help transform these complex scientific datasets into innovative software that is deployed across the pipeline, dramatically accelerating the pace and improving the quality of all crop system development decisions.
What you will do is why you should join us:
- Be a critical senior member of a data engineering team focused on creating distributed analysis capabilities around a large variety of datasets
- Take pride in software craftsmanship; apply a deep knowledge of algorithms and data structures to continuously improve and innovate
- Work with other top-level talent solving a wide range of complex and unique challenges that have real-world impact
- Explore relevant technology stacks to find the best fit for each dataset
- Pursue opportunities to present our work at relevant technical conferences:
  - Google Cloud Next 2019:
  - GraphConnect 2015:
  - Google Cloud Blog:
- Project your talent into relevant projects. Strength of ideas trumps position on an org chart
If you share our values, you should have:
- At least 8 years' experience in software engineering
- At least 4 years' experience with Go / GoLang
- Proven experience (2 years) building and maintaining data-intensive APIs using a RESTful approach
- Experience with stream processing using Apache Kafka
- A level of comfort with Unit Testing and Test Driven Development methodologies (see the test sketch after this list)
- Familiarity with creating and maintaining containerized application deployments with a platform like Docker
- A proven ability to build and maintain cloud-based infrastructure on a major cloud provider like AWS, Azure, or Google Cloud Platform
- Experience data modeling for large scale databases, either relational or NoSQL
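As an illustration of the Test Driven Development expectation above, a minimal table-driven test using Go's standard testing package; meanYield is a hypothetical function invented for the example:

```go
package analysis

import "testing"

// meanYield is a hypothetical function under test: the arithmetic mean
// of a slice of yield values, defined as 0 for an empty slice.
func meanYield(values []float64) float64 {
	if len(values) == 0 {
		return 0
	}
	var sum float64
	for _, v := range values {
		sum += v
	}
	return sum / float64(len(values))
}

// TestMeanYield uses the table-driven pattern idiomatic in Go codebases;
// it would live in a *_test.go file and run with `go test`.
func TestMeanYield(t *testing.T) {
	cases := []struct {
		name   string
		values []float64
		want   float64
	}{
		{"empty", nil, 0},
		{"single", []float64{10}, 10},
		{"several", []float64{1, 2, 3}, 2},
	}
	for _, c := range cases {
		t.Run(c.name, func(t *testing.T) {
			if got := meanYield(c.values); got != c.want {
				t.Errorf("meanYield(%v) = %v, want %v", c.values, got, c.want)
			}
		})
	}
}
```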
Bonus points for:
- Experience with protocol buffers and gRPC
- Experience with Google Cloud Platform, Apache Beam and/or Google Cloud Dataflow, and Google Kubernetes Engine or Kubernetes
- Experience working with scientific datasets, or a background in the application of quantitative science to business problems
- Bioinformatics experience, especially large-scale storage and data mining of variant data, variant annotation, and genotype-to-phenotype correlation