Summary: The Sr AI Data Engineer role involves solution engineering for enterprise-scale data management, focusing on modern data integration frameworks and scalable distributed systems. The position requires developing data integration tasks within the data and analytics space and collaborating with the Data Management team. The ideal candidate will have extensive experience building data-driven solutions, particularly with AWS services and cloud-based data platforms. This is a 12-month contract position with potential for conversion to permanent hire (C2H).
Salary (Rate): Negotiable
City: undetermined
Country: USA
Working Arrangements: remote
IR35 Status: outside IR35
Seniority Level: undetermined
Industry: IT
Sr AI Data Engineer
12-month contract, contract-to-hire (C2H)
Location: Remote / McLean, VA 22102, USA
Job description:
Responsibilities:
Job Information
- The Data/AI Engineer will be responsible for solution engineering of enterprise-scale data management best practices.
- This includes patterns such as modern data integration frameworks and building scalable distributed systems using emerging cloud-based data design patterns.
- This role will be responsible for developing data integration tasks in the data and analytics space.
- This position reports to the Director of Data Management within the Data & AI organization.
Key Job Functions
- Implement data warehouse solutions using modern data platforms such as Snowflake, Databricks, or Redshift.
- Build data integration solutions between transactional systems and analytics platforms (see the pipeline sketch after this list).
- Expand data integration solutions to ingest data from internal and external sources and transform it to meet business consumption needs.
- Develop tasks for a multitude of data patterns, e.g., real-time data integration, advanced analytics, machine learning, BI, and reporting.
- Apply a fundamental understanding of building data products through data enrichment and ML.
- Act as a team player and share knowledge with existing team members.
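The integration work described above typically follows an extract-transform-stage pattern. Below is a minimal sketch of such a task in Python, assuming a hypothetical REST source and an S3 staging bucket from which a warehouse COPY command (Redshift or Snowflake) would pick up the batch; SOURCE_URL, STAGE_BUCKET, and STAGE_PREFIX are illustrative names, not systems named by this posting.

```python
"""Minimal batch-integration sketch: pull records from a source API,
apply a light transform, and stage them in S3 as newline-delimited JSON
ready for a warehouse COPY command. All names below are assumptions."""
import json
import urllib.request
from datetime import datetime, timezone

import boto3

SOURCE_URL = "https://example.com/api/orders"  # hypothetical source API
STAGE_BUCKET = "analytics-stage"               # hypothetical staging bucket
STAGE_PREFIX = "orders/raw"                    # hypothetical prefix layout

def extract(url: str) -> list[dict]:
    """Fetch a batch of records from the source system."""
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read())

def transform(records: list[dict]) -> list[dict]:
    """Normalize field names and drop records missing a primary key."""
    out = []
    for rec in records:
        if "id" not in rec:
            continue
        out.append({k.lower(): v for k, v in rec.items()})
    return out

def load_to_stage(records: list[dict]) -> str:
    """Write newline-delimited JSON to S3; a warehouse COPY picks it up."""
    key = f"{STAGE_PREFIX}/{datetime.now(timezone.utc):%Y%m%d%H%M%S}.json"
    body = "\n".join(json.dumps(r) for r in records).encode("utf-8")
    boto3.client("s3").put_object(Bucket=STAGE_BUCKET, Key=key, Body=body)
    return key

if __name__ == "__main__":
    staged_key = load_to_stage(transform(extract(SOURCE_URL)))
    print(f"staged batch at s3://{STAGE_BUCKET}/{staged_key}")
```

Staging to S3 before a COPY, rather than inserting row by row, is the common pattern on both Redshift and Snowflake because bulk loads are far cheaper than per-row writes.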
Qualifications:
Education: Bachelor's degree in computer science or a related field.
Minimum Experience
- Minimum 5 years of experience in building data-driven solutions.
- At least 3 years of experience working with AWS services.
Specialized Knowledge & Skills:
- Expertise in real-time data solutions; knowledge of stream processing, message-oriented platforms, and ETL/ELT tools is a plus.
- Strong scripting experience using Python and SQL.
- Working knowledge of foundational AWS compute, storage, networking, and IAM.
- An understanding of Gen AI models, prompt engineering, RAG, fine-tuning, and pre-training will be a plus.
- Solid scripting experience in AWS using Lambda functions (see the handler sketch after this list).
- Knowledge of CloudFormation templates preferred.
- Hands-on experience with popular cloud-based data warehouse platforms such as Redshift and Snowflake.
- Experience building data pipelines, with an understanding of ingesting and transforming structured, semi-structured, and unstructured data across cloud services.
- Knowledge and understanding of data standards and principles to drive best practices around data management activities and solutions.
- Experience with one or more data integration tools such as Attunity (Qlik), AWS Glue ETL, Talend, or Kafka.
- Strong understanding of data security, including authorization, authentication, encryption, and network security.
- Hands-on experience using and extending machine learning frameworks and libraries (e.g., scikit-learn, PyTorch, TensorFlow, XGBoost) preferred.
- Experience with the AWS SageMaker family of services or similar tools to develop machine learning models preferred.
- Strong written and verbal communication skills to facilitate meetings and workshops that gather data, functional, and technology requirements, and to document processes, data flows, gap analyses, and associated data in support of data management/governance efforts.
- Acts with integrity and proactively seeks ways to ensure compliance with regulations, policies, and procedures.
- Demonstrated ability to be self-directed, with excellent organizational, analytical, and interpersonal skills, and a record of consistently meeting or exceeding deadline deliverables.
- Strong understanding of the importance and benefits of good data quality, and the ability to champion results across functions.
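As a companion to the Lambda scripting requirement above, here is a minimal sketch of an S3-triggered handler in Python that cleans a newly staged batch. The bucket layout and the "raw/" to "curated/" prefix convention are illustrative assumptions; the event fields follow the standard S3 ObjectCreated notification shape.

```python
"""Minimal AWS Lambda sketch for the scripting requirement above:
an S3-triggered handler that reads a newly staged object, drops
malformed rows, and writes a cleaned copy under a hypothetical
'curated/' prefix. The prefix layout is an assumption."""
import json

import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    """Entry point Lambda invokes for each S3 ObjectCreated event."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Read the staged newline-delimited JSON object.
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

        # Keep only rows that parse and carry a primary key.
        cleaned = []
        for line in body.decode("utf-8").splitlines():
            try:
                row = json.loads(line)
            except json.JSONDecodeError:
                continue
            if "id" in row:
                cleaned.append(row)

        # Write the cleaned batch under the curated prefix (assumed layout).
        out_key = key.replace("raw/", "curated/", 1)
        s3.put_object(
            Bucket=bucket,
            Key=out_key,
            Body="\n".join(json.dumps(r) for r in cleaned).encode("utf-8"),
        )
    return {"status": "ok", "records": len(event["Records"])}
```

In practice such a function would be declared alongside its S3 trigger and IAM role in a CloudFormation template, in line with the infrastructure-as-code preference noted above.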