Summary: The Data Engineer role involves designing, developing, and maintaining scalable big data solutions primarily using Apache Spark and Scala. The position requires collaboration with cross-functional teams to integrate data pipelines and ensure data quality and compliance. Candidates should have strong SQL skills and experience with Power BI for reporting and visualization. The role is remote but requires working in the PST time zone.
Salary (Rate): Negotiable
City: undetermined
Country: USA
Working Arrangements: remote
IR35 Status: outside IR35
Seniority Level: undetermined
Industry: IT
Title: Data Engineer
Location: Redmond, WA (open for WFH; must work in the PST time zone)
Key Responsibilities
- Design, develop, and maintain scalable big data solutions using Apache Spark and Scala (an illustrative sketch follows this list).
- Implement complex SQL queries for data transformation and analytics.
- Develop and optimize Power BI dashboards for business reporting and visualization.
- Collaborate with cross-functional teams to integrate data pipelines and reporting solutions.
- Ensure data quality, security, and compliance across all systems.
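As an illustration of the Spark, Scala, and SQL work described above, the following is a minimal sketch of a batch transformation job. The paths, table name, and column names (transactions, region, event_time, amount) are hypothetical placeholders rather than details taken from this posting.

```scala
import org.apache.spark.sql.SparkSession

object DailySalesAggregate {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("DailySalesAggregate")
      .getOrCreate()

    // Hypothetical source path; a real pipeline would read from whatever raw store the team uses.
    val transactions = spark.read.parquet("/data/raw/transactions")
    transactions.createOrReplaceTempView("transactions")

    // A SQL transformation of the kind listed in the responsibilities above.
    val dailySummary = spark.sql(
      """SELECT region,
        |       CAST(event_time AS DATE) AS sale_date,
        |       SUM(amount)              AS total_amount,
        |       COUNT(*)                 AS txn_count
        |FROM transactions
        |GROUP BY region, CAST(event_time AS DATE)""".stripMargin)

    // Hypothetical target path; partitioning by date keeps downstream reads selective.
    dailySummary.write.mode("overwrite").partitionBy("sale_date")
      .parquet("/data/curated/daily_sales")

    spark.stop()
  }
}
```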
Required Skills & Experience
- Strong proficiency in Apache Spark with hands-on experience in Scala.
- Solid understanding of SQL for data manipulation and analysis.
- Experience in Power BI for creating interactive reports and dashboards.
- Familiarity with distributed computing concepts and big data ecosystems (Hadoop, Hive, etc.).
- Ability to work with large datasets and optimize data workflows.
Highly Desirable
- Spark Performance Tuning Expertise: Proven ability to optimize Spark jobs for efficiency and scalability (a brief sketch follows this list).
- Knowledge of cluster resource management and troubleshooting performance bottlenecks.
- Experience with Azure cloud services for big data solutions (e.g., Azure Data Lake, Azure Databricks, Synapse Analytics).
- Exposure to other cloud platforms (AWS or Google Cloud Platform) is a plus.
- Experience working in the financial domain or with ERP systems (e.g., SAP, Oracle ERP).
- Understanding of compliance and regulatory requirements in financial data processing.
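As a rough illustration of the Spark performance-tuning knowledge described above, the sketch below sets a few common job-level options and replaces a shuffle join with a broadcast join. The configuration values, paths, and the dim_id join key are hypothetical; real settings would come from profiling the actual workload and cluster.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.broadcast

object TunedJoinExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("TunedJoinExample")
      // Hypothetical value: size shuffle partitions to the data rather than the default 200.
      .config("spark.sql.shuffle.partitions", "400")
      // Adaptive query execution coalesces small partitions and splits skewed ones at runtime.
      .config("spark.sql.adaptive.enabled", "true")
      .getOrCreate()

    // Placeholder inputs: a large fact table and a small dimension table.
    val facts = spark.read.parquet("/data/raw/facts")
    val dims  = spark.read.parquet("/data/raw/dims")

    // Broadcasting the small side avoids shuffling the large table, a frequent bottleneck fix.
    val joined = facts.join(broadcast(dims), Seq("dim_id"))

    joined.write.mode("overwrite").parquet("/data/curated/facts_enriched")

    spark.stop()
  }
}
```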
Additional Qualifications
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration abilities.
- Recent Microsoft experience
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Key Skills
Synapse Analytics, IaaS VMs, SQL Server, ADLS Gen2, Azure Automation account runbooks using PowerShell, Spark, Scala programming.