Summary: This role focuses on data engineering within the health care domain and offers a remote working arrangement. Candidates are expected to have extensive data engineering experience, proficiency in several programming languages and big data technologies, strong analytical skills, and hands-on experience with cloud platforms and data pipeline solutions.
Key Responsibilities:
- Design and implement data engineering solutions in the health care domain.
- Build and maintain data pipelines and data warehousing solutions.
- Utilize big data technologies such as Apache Spark and Hadoop.
- Work with cloud platforms like AWS, Azure, or Google Cloud Platform.
- Collaborate with cross-functional teams to understand data requirements.
- Ensure data quality and integrity throughout the data lifecycle.
- Utilize ETL tools and frameworks for data processing (a PySpark sketch follows this list).
- Manage version control using systems like Git.
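To make the pipeline and ETL responsibilities concrete, here is a minimal PySpark sketch of the extract-transform-load pattern described above. It is illustrative only, not the employer's actual stack: the bucket URIs, the claims schema, and the column names (claim_id, claim_amount, service_date) are hypothetical placeholders.

```python
# A minimal, illustrative ETL sketch in PySpark. Bucket URIs, the claims
# schema, and column names (claim_id, claim_amount, service_date) are
# hypothetical placeholders, not a real pipeline.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("claims-etl-sketch").getOrCreate()

# Extract: read raw claims files from a hypothetical landing zone.
raw = (
    spark.read
    .option("header", True)
    .option("inferSchema", True)
    .csv("s3://example-bucket/landing/claims/")  # placeholder URI
)

# Transform: deduplicate, enforce basic integrity rules, normalize types.
clean = (
    raw
    .dropDuplicates(["claim_id"])                          # hypothetical key
    .filter(F.col("claim_amount") > 0)                     # drop invalid rows
    .withColumn("service_date", F.to_date("service_date"))
)

# Load: write partitioned Parquet to the curated zone of a data lake.
(
    clean.write
    .mode("overwrite")
    .partitionBy("service_date")
    .parquet("s3://example-bucket/curated/claims/")  # placeholder URI
)
```

Partitioning the curated output by service_date is one common convention for date-ranged analytical queries; the real partitioning key would depend on downstream query patterns.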
Key Skills:
- 8+ years of experience in data engineering or a related field.
- Strong programming skills in Python, SQL, or Scala.
- Hands-on experience with big data technologies (e.g., Apache Spark, Hadoop).
- Experience with cloud platforms such as AWS, Azure, or Google Cloud Platform.
- Proficiency in building data pipelines and data warehousing solutions (a Spark SQL sketch follows this list).
- Experience with ETL tools and frameworks.
- Strong understanding of database systems (SQL and NoSQL).
- Familiarity with data lake and data warehouse architectures.
- Experience with version control systems like Git.
- Strong problem-solving and analytical skills.
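To ground the SQL and data warehousing skills above, here is a short Spark SQL sketch that builds a reporting table from the hypothetical claims data in the previous example. The provider_id column and the analytics.monthly_claim_spend table are assumptions for illustration only.

```python
# A warehouse-style aggregation sketch with Spark SQL, continuing the
# hypothetical claims example above; provider_id and the analytics table
# name are placeholders, not a prescribed schema.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("claims-warehouse-sketch").getOrCreate()

# Read the curated Parquet produced by the previous sketch (placeholder URI).
claims = spark.read.parquet("s3://example-bucket/curated/claims/")
claims.createOrReplaceTempView("claims")

# Aggregate monthly claim spend per provider for reporting workloads.
monthly_spend = spark.sql("""
    SELECT provider_id,
           date_trunc('month', service_date) AS month,
           SUM(claim_amount)                 AS total_amount,
           COUNT(*)                          AS claim_count
    FROM claims
    GROUP BY provider_id, date_trunc('month', service_date)
""")

# Persist as a managed table for downstream BI tools (assumes an existing
# 'analytics' database; the table name is hypothetical).
monthly_spend.write.mode("overwrite").saveAsTable("analytics.monthly_claim_spend")
```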
Salary (Rate): £37.50 hourly
City: undetermined
Country: undetermined
Working Arrangements: remote
IR35 Status: undetermined
Seniority Level: undetermined
Industry: IT
Role: Data Engineering (Remote)
Visa: s and s on W2 Basis
Domain: Health Care