Senior Data Engineer - Remote (10+ Years' Experience Required)

Posted 2 weeks ago

Rate: Negotiable
IR35 Status: Outside IR35
Working Arrangement: Remote
Country: USA

Summary: The Senior Data Engineer role involves collaborating with a team to create scalable and extensible data solutions using AWS and other cloud technologies. The position requires extensive experience in data engineering, leadership skills, and a passion for technology and data optimization. The ideal candidate will have a strong background in data modeling, ETL processes, and database technologies, along with the ability to mentor junior engineers. This is a remote position requiring over 10 years of experience.

Key Responsibilities:

  • Design, implement, and operate scalable data solutions using AWS and cloud technologies.
  • Lead a team of data engineers to optimize and simplify existing solutions and build new products.
  • Mentor and develop team members in data engineering practices.
  • Develop and operate large-scale data structures for business intelligence analytics.
  • Influence the data strategy of the team or organization.
  • Build and manage ETL pipelines and data warehousing solutions.

Key Skills:

  • 9+ years of experience as a Data Engineer or in a similar role.
  • Bachelor's degree in management, business, computer science, engineering, mathematics, or a related discipline.
  • Experience with data modeling, data warehousing, and ETL processes.
  • Proficiency in SQL and experience with database technologies such as Redshift, Oracle, MySQL, or MS SQL.
  • Experience with massively parallel processing data technologies like Redshift, Teradata, or Hadoop.
  • Coding proficiency in modern programming languages, preferably Python and PySpark.

Salary (Rate): Negotiable

City: undetermined

Country: USA

Working Arrangements: remote

IR35 Status: outside IR35

Seniority Level: Senior

Industry: IT

Detailed Description From Employer:

Role: Sr Data Engineer

Location: Remote

NOTE: Must have 10+ years of total experience.

Must have a valid LinkedIn profile (created on or before 2017).

As an AWS Data Engineer, you will work with the rest of the data engineering team to deliver best-in-class, scalable, and extensible data solutions using AWS and other cloud-based technologies such as Aurora, RDS, Redshift, EMR, and Spark. You are an expert at designing, implementing, and operating stable, scalable solutions that flow data from production systems into data lakes and analytical databases. The role requires someone who loves data, understands enterprise information systems, has strong business sense, and can lead a team to put these skills into action.
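To make the kind of pipeline described above concrete, here is a minimal, illustrative PySpark sketch (not part of the posting): it reads a production extract, cleans it, and lands it in an S3 data lake for downstream Redshift/EMR analytics. The bucket paths, table name, and columns are hypothetical.

    # Minimal sketch (illustrative only, not from the posting): read a daily
    # extract from a production system, normalize it, and write partitioned
    # Parquet into an S3 data lake. All paths and columns are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders-to-datalake").getOrCreate()

    # Hypothetical daily JSON export from the production database.
    orders = spark.read.json("s3://prod-exports/orders/dt=2024-01-01/")

    # Normalize types and drop rows missing the primary key.
    cleaned = (
        orders
        .withColumn("order_ts", F.to_timestamp("order_ts"))
        .withColumn("amount", F.col("amount").cast("double"))
        .filter(F.col("order_id").isNotNull())
    )

    # Land the data in the lake as date-partitioned Parquet.
    (
        cleaned
        .withColumn("order_date", F.to_date("order_ts"))
        .write.mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://data-lake/curated/orders/")
    )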

The ideal candidate is excited by technology, passionate about learning, and comes up with creative ideas to optimize and simplify existing solutions and build new products. This role requires excellent leadership, strong technical expertise, and hands-on project management skills. You should enjoy mentoring and developing the other engineers on your team.

BASIC QUALIFICATIONS

  • 9+ years of experience as a Data Engineer or in a similar role
  • Bachelor's degree in management, business, computer science, engineering, mathematics, or a related discipline
  • Experience with data modeling, data warehousing, and building ETL pipelines
  • Experience leading and influencing the data strategy of your team or organization
  • Experience in SQL
  • Experience developing and operating large-scale data structures for business intelligence analytics using ETL/ELT processes, OLAP technologies, data modeling, and SQL
  • Experience with at least one database technology such as Redshift, Oracle, MySQL, or MS SQL
  • Experience with at least one massively parallel processing data technology such as Redshift, Teradata, Netezza, Spark, or a Hadoop-based big data solution

PREFERRED QUALIFICATIONS

  • Industry experience as a Data Engineer or in a related specialty (e.g., Software Engineer, Business Intelligence Engineer, Data Scientist) with a track record of manipulating, processing, and extracting value from large datasets
  • Coding proficiency in at least one modern programming language (Python, Ruby, Java, etc.); Python and PySpark preferred
  • Experience building and operating highly available, distributed systems for data extraction, ingestion, and processing of large datasets
  • Experience building data products incrementally and integrating and managing datasets from multiple sources
  • Experience leading large-scale data warehousing and analytics projects, including AWS technologies such as Redshift, S3, Glue, EMR, EC2, Data Pipeline, and other big data technologies
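Since the posting groups Redshift, S3, Glue, and EMR together, the sketch below shows one common warehousing pattern this could refer to (an assumption, not stated by the employer): bulk-loading curated Parquet files from S3 into Redshift with a COPY command via the redshift_connector package. The cluster endpoint, table name, and IAM role ARN are hypothetical.

    # Illustrative only: bulk-load curated Parquet from S3 into Redshift.
    # Connection details, table name, and IAM role ARN are hypothetical.
    import redshift_connector

    conn = redshift_connector.connect(
        host="analytics-cluster.example.us-east-1.redshift.amazonaws.com",
        database="analytics",
        user="etl_user",
        password="...",  # in practice, fetch from a secrets store
    )
    conn.autocommit = True

    cursor = conn.cursor()
    # COPY is Redshift's standard bulk-ingest path from S3.
    cursor.execute("""
        COPY analytics.orders
        FROM 's3://data-lake/curated/orders/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
        FORMAT AS PARQUET;
    """)
    cursor.close()
    conn.close()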