Summary: We are seeking a skilled Data Engineer with expertise in building and optimizing data pipelines and architectures. The role involves creating scalable data systems, ensuring data availability, and supporting data-driven decision-making. Success in this position requires a strong understanding of data management practices and ETL processes, along with close collaboration with stakeholders.
Key Responsibilities:
- Data Pipeline Development: Design, construct, test, and maintain highly scalable data management systems. Develop and implement architectures that support the extraction, transformation, and loading (ETL) of data from various sources.
- Data Integration: Integrate structured and unstructured data from multiple data sources into a unified data system, ensuring data quality and consistency.
- Data Warehousing: Build and maintain data warehouses and data lakes to store and retrieve vast amounts of data efficiently. Optimize the performance of databases and queries to meet business needs.
- Data Processing: Implement data processing frameworks (e.g., Hadoop, Spark) to process large datasets in real time or in batch.
- Automation and Monitoring: Automate manual processes, optimize data delivery, and develop data monitoring systems to ensure data integrity and accuracy.
- Collaboration: Work closely with data scientists, analysts, and other stakeholders to understand data needs and provide technical solutions that meet business requirements.
- Performance Tuning: Optimize the performance of ETL processes, databases, and data pipelines to handle large volumes of data and reduce processing times.
Required Qualifications:
- Bachelor’s degree in Computer Science, Engineering, Information Technology, or a related field.
- 5+ years of experience as a Data Engineer or in a similar role.
- Proficiency in Snowflake SQL and experience with relational databases (e.g., MySQL, PostgreSQL).
- Knowledge of data warehousing solutions (e.g., Snowflake, Redshift, BigQuery).
- Experience with cloud platforms (e.g., AWS, Google Cloud, Azure).
- Hands-on experience with programming languages such as Python is a plus.
- Strong understanding of the Matillion ETL tool and related processes.
- Experience with data modeling and data architecture design.
Salary (Rate): Negotiable
City: undetermined
Country: USA
Working Arrangements: remote
IR35 Status: outside IR35
Seniority Level: undetermined
Industry: IT