Summary: The AI Engineer role focuses on developing AI-driven analytical solutions for trading and market analytics, requiring a strong foundation in data engineering and applied analytics. The position involves close collaboration with traders and analysts to deliver scalable solutions using large-scale market data. The ideal candidate should possess both technical expertise and effective communication skills to translate business needs into actionable insights. This is a contract position based in Houston, TX, with a hybrid working model.
Key Responsibilities:
- Design, develop, and deliver AI-driven analytics for front office use cases, including seasonality analysis, correlation, regression, forecasting, and scenario modeling
- Build and maintain scalable, reusable data pipelines using Databricks and Spark (PySpark, SQL, Delta Lake, Unity Catalog)
- Perform statistical and econometric analyses on large, complex datasets such as market pricing and fundamental time series data
- Collaborate directly with business stakeholders to gather requirements, clarify objectives, and communicate insights effectively
- Develop and integrate LLM-based and agentic workflows, including prompt engineering, orchestration, retrieval, and guardrails
- Productionize solutions with appropriate testing, observability, documentation, and version control
Key Skills:
- Hands-on experience with Databricks and Apache Spark (PySpark, SQL, Delta Lake, Unity Catalog)
- Strong data engineering experience, including data ingestion, modeling, orchestration, and performance optimization
- Solid foundation in statistics, economics, or data science, particularly related to time series analysis
- Experience implementing LLM-based solutions, including prompt design, retrieval workflows, and agent frameworks
- Familiarity with modern engineering practices and tooling, such as CI/CD pipelines, infrastructure as code, experimentation frameworks, and data governance
- Strong communication and collaboration skills, with the ability to work effectively with technical and non-technical stakeholders
Salary (Rate): Negotiable
City: Houston
Country: United States
Working Arrangements: hybrid
IR35 Status: undetermined
Seniority Level: undetermined
Industry: IT
Working Environment:
- Hybrid work model with close collaboration alongside trading and analytics teams
- Fast-paced, iterative development approach: prototype quickly, refine with user feedback, and harden solutions for production
- Strong focus on operational excellence, including secure CI/CD practices, automated testing, and production reliability
Required Qualifications:
- Experience in commodity trading, financial trading, or energy markets
- Familiarity with market microstructure, supply demand fundamentals, and risk management concepts
- Exposure to MLOps tooling, feature stores, vector databases, and model lifecycle management
We are an equal opportunity employer and value diversity at all levels of the organization. All qualified applicants will be considered without regard to race, color, religion, gender, sexual orientation, gender identity or expression, age, national origin, disability, veteran status, or any other legally protected status.