Summary: We are seeking a Senior Data Engineer with expertise in AVEVA PI Historian and Google Cloud Platform to join our growing data engineering team. The role involves designing and maintaining ETL/ELT pipelines, integrating industrial time-series data, and supporting real-time analytics for critical operations. The ideal candidate will have a strong background in data ingestion, transformation, and cloud services. This position offers remote or hybrid working arrangements in Atlanta, Georgia.
Key Responsibilities:
- Design, build, and maintain ETL/ELT pipelines for time-series data from AVEVA PI and similar historian systems.
- Develop robust solutions for data ingestion, transformation, and archival from industrial systems.
- Extract, query, and integrate historian data into enterprise data lakes and analytical platforms.
- Create and manage .NET (C#) and Python code for data integration, API development, and automation tasks.
- Leverage Google Cloud Platform services such as BigQuery, Dataflow, Pub/Sub, and Cloud Storage for scalable data architecture.
- Model and manage SQL databases to store processed data and metadata.
- Collaborate with OT/IT teams, process engineers, and business analysts to define and meet data requirements.
- Ensure high performance, data integrity, and operational reliability across historian and cloud systems.
- Participate in code reviews and support continuous integration and delivery (CI/CD) pipelines.
Key Skills:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 3+ years of experience as a Data Engineer with a focus on time-series or operational data.
- Strong expertise in AVEVA PI Historian or similar systems (e.g., Aspen IP.21, GE Proficy).
- Proficiency in .NET (C#) and Python.
- Hands-on experience with SQL and database performance optimization.
- Deep understanding of Google Cloud Platform, especially tools like BigQuery, Dataflow, and Pub/Sub.
- Strong knowledge of data modeling, time-series analytics, and data lake/warehouse architecture.
- Familiarity with ETL/ELT tools, version control (Git), and DevOps practices.
Salary (Rate): Negotiable
City: Atlanta
Country: USA
Working Arrangements: hybrid
IR35 Status: outside IR35
Seniority Level: Senior
Industry: IT
Preferred Qualifications:
- Experience with other historian platforms (e.g., Honeywell PHD).
- Exposure to Docker/Kubernetes and CI/CD pipelines.
- Familiarity with visualization tools like Power BI, Looker, or Tableau.
- Background in manufacturing, energy, oil & gas, or industrial automation domains.