Summary: The Senior Data Engineer will be responsible for designing and implementing big data infrastructure to support analytical and operational needs within the agency. This role involves creating systems for data collection, management, and transformation, enabling data scientists and business analysts to derive insights for informed decision-making. The position requires extensive experience in cloud-based data platforms, particularly Azure, and proficiency in various data engineering tools and languages. The role is fully remote and classified as outside IR35.
Key Responsibilities:
- Implement and maintain scalable data pipelines
- Build ingestion and transformation flows in Azure Data Factory
- Manage storage structures in Data Lake
- Configure Synapse and implement monitoring and alerting
- Collaborate with data architects and business Subject Matter Experts (SMEs) to support operational reporting and data quality
Key Skills:
- Bachelor's degree in Data Engineering, Computer Science, or a related field
- Minimum 8 years of experience in enterprise data architecture, including at least 5 years in designing and deploying cloud-based data platforms (preferably Azure)
- Proficiency in Azure Data Factory, Data Lake Gen2, Synapse, and GitHub
- Experience implementing monitoring, logging, and alerting pipelines
- Strong skills in Python, SQL, and data transformation logic
- Familiarity with metadata tagging, sensitivity labelling, and QA frameworks
Salary (Rate): undetermined
City: undetermined
Country: USA
Working Arrangements: remote
IR35 Status: outside IR35
Seniority Level: undetermined
Industry: IT
Role: Senior Data Engineer
Location: REMOTE
This role will serve as the agency's authoritative resource for preparing big data infrastructure for analytical and operational use. You will design and build systems that collect, manage, and convert raw data into usable information for data scientists and business analysts to interpret, enabling the agency to make smarter decisions and optimize operations.