Summary: This Data Scientist/Machine Learning Engineer role centers on applying expertise in Natural Language Processing (NLP) and Generative AI to build advanced machine learning solutions. Candidates should have hands-on experience with transformer architectures and be able to discuss their work with large language models (LLMs) and Retrieval-Augmented Generation (RAG). The position is based in Raleigh, NC, with hybrid or remote working arrangements, and is classified as outside IR35.
Key Responsibilities:
- Utilize proficiency in NLP and transformer-based architectures to develop machine learning models.
- Be prepared to elaborate on the specific NLP tools and techniques highlighted on your resume.
- Discuss challenges and optimizations related to Retrieval-Augmented Generation (RAG) systems.
- Detail experience with large language models (LLMs), including fine-tuning and prompt engineering.
Key Skills:
- Proficiency in NLP and transformer-based architectures.
- Experience in sentiment analysis, named entity recognition (NER), and text classification.
- Knowledge of Retrieval-Augmented Generation (RAG) systems.
- Experience with large language models (LLMs) and their lifecycle.
Salary (Rate): undetermined
City: Raleigh
Country: USA
Working Arrangements: hybrid
IR35 Status: outside IR35
Seniority Level: undetermined
Industry: IT
Job Title: Data Scientist/Machine Learning Engineer With Gen AI Exp
Location: Raleigh, NC (hybrid; remote also considered)
Duration: C2H
Description:
**Technical Expertise**:
- **Core Skills**: Proficiency in NLP and transformer-based architectures, with a transition to Generative AI (Gen AI). Experience in sentiment analysis, named entity recognition (NER), and text classification using NLP is essential.
- **Resume Scrutiny**: If a resume highlights NLP, candidates must elaborate on specific tools, techniques, and their applications (e.g., keyword extraction from PDF documents using sentiment analysis).
- **RAG Focus**: Retrieval-Augmented Generation (RAG) is critical. Candidates should be prepared to discuss challenges, such as latency or relevancy issues in RAG systems, and explain how they optimized performance, tying solutions to specific projects.
- **LLM Experience**: Candidates must detail their work with large language models (LLMs), including the extent of involvement (e.g., fine-tuning, prompt engineering) and full lifecycle experience. Familiarity with bots or assistants (internal or external) and their user-base scale (comparable to Google's) is a plus.