Summary: This Python Engineer role centers on extending an existing framework for data processing and integration. The ideal candidate will have strong Python skills and experience with multiple SQL/NoSQL databases, ETL pipelines, and visualization tools such as Grafana. The position is fully remote, involves handling data from many sources, and includes building dashboards and managing workloads in Linux environments.
Salary (Rate): Negotiable
City: undetermined
Country: USA
Working Arrangements: remote
IR35 Status: outside IR35
Seniority Level: undetermined
Industry: IT
Python Engineer with Grafana Experience
100% Remote
Must have: Python, Grafana, ETL, SQL/NoSQL, data masking, data lakes.
About the Role
We're looking for a Python Engineer with Grafana experience to join the team. The ideal candidate will have strong Python expertise, experience querying multiple SQL/NoSQL platforms, and hands-on work with ETL pipelines, time-series databases, and visualization tools.
You will be working with an existing framework and extending it to handle custom data processing and integration needs. This role requires deep knowledge of Python and the ability to adapt libraries/utilities for different data sources.
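In day-to-day terms, extending a framework like this usually means writing a small adapter per new source. Below is a minimal sketch, assuming a hypothetical DataSource plugin interface and a paginated JSON endpoint; the real framework's hooks will differ:

```python
from abc import ABC, abstractmethod
from typing import Any, Iterator

import requests


class DataSource(ABC):
    """Hypothetical adapter interface; the real framework's hook will differ."""

    @abstractmethod
    def records(self) -> Iterator[dict[str, Any]]:
        """Yield one record at a time from the underlying source."""


class RestApiSource(DataSource):
    """Example adapter: pull JSON records from a paginated REST endpoint."""

    def __init__(self, url: str) -> None:
        self.url = url

    def records(self) -> Iterator[dict[str, Any]]:
        page = 1
        while True:
            resp = requests.get(self.url, params={"page": page}, timeout=30)
            resp.raise_for_status()
            batch = resp.json()
            if not batch:  # an empty page signals the end (assumed API contract)
                return
            yield from batch
            page += 1
```

The value of the shared interface is that downstream ETL code can iterate over records() without caring whether the source is a REST API, a Kafka topic, or a database.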
Key Responsibilities
- Write efficient Python ETL pipelines to handle data ingestion, transformation, and loading (a minimal pipeline sketch follows this list).
- Query and work with multiple databases: Microsoft SQL Server, Oracle, PostgreSQL, MongoDB, TimescaleDB, and Elasticsearch (ELK stack).
- Process data from REST APIs, Kafka, and other streaming sources into time-series databases.
- Extend/customize existing frameworks by building utilities for data processing.
- Work on data warehousing, data masking, and data lakes.
- Deploy and manage workloads on Linux-based environments.
- Build dashboards and monitoring solutions with Grafana.
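As a concrete reference for the pipeline bullets above, here is a minimal extract-transform-load sketch. The endpoint, table, and column names are hypothetical; because TimescaleDB is PostgreSQL-compatible, plain psycopg2 is enough for the load step:

```python
"""Minimal ETL sketch: REST API -> transform -> TimescaleDB.

The endpoint, table, and column names are illustrative only.
"""
import psycopg2
import requests
from psycopg2.extras import execute_values


def extract(url: str) -> list[dict]:
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return resp.json()


def transform(rows: list[dict]) -> list[tuple]:
    # Keep only the fields the target table needs; drop incomplete rows.
    keys = ("ts", "sensor_id", "value")
    return [
        (r["ts"], r["sensor_id"], float(r["value"]))
        for r in rows
        if all(k in r for k in keys)
    ]


def load(records: list[tuple], dsn: str) -> None:
    # TimescaleDB speaks the PostgreSQL wire protocol, so psycopg2 works as-is.
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        execute_values(
            cur,
            "INSERT INTO readings (ts, sensor_id, value) VALUES %s",
            records,
        )


if __name__ == "__main__":
    rows = extract("https://api.example.com/readings")  # hypothetical endpoint
    load(transform(rows), "postgresql://user:pass@localhost:5432/metrics")
```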
Required Skills & Experience
- Strong proficiency in Python (especially for ETL processes).
- Experience with Grafana visualization.
- Hands-on experience with SQL/NoSQL queries across multiple databases.
- Familiarity with Kafka and REST API data ingestion.
- Strong knowledge of ETL, data lakes, data warehousing, and data masking (a masking sketch follows this list).
- Comfortable working in Linux environments.
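On the data-masking requirement, a common baseline is deterministic pseudonymization of PII columns before data lands in a lake or warehouse. A minimal sketch using only the standard library, with hypothetical column names:

```python
import hashlib
import hmac

# Illustrative only: deterministic masking of PII columns before data
# reaches a warehouse or lake. Key management is out of scope here.
MASK_KEY = b"rotate-me"          # in practice, pull from a secrets manager
PII_COLUMNS = {"email", "ssn"}   # hypothetical column names


def mask_value(value: str) -> str:
    """Replace a sensitive value with a stable, irreversible token."""
    return hmac.new(MASK_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]


def mask_row(row: dict) -> dict:
    return {
        col: mask_value(str(val)) if col in PII_COLUMNS and val is not None else val
        for col, val in row.items()
    }


print(mask_row({"email": "a@b.com", "ssn": "123-45-6789", "region": "US"}))
```

Keyed hashing keeps the masking stable across loads (so joins on masked columns still line up) while staying irreversible without the key.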