Summary: We are looking for an experienced AWS/Spark Data & AI Engineer, proficient in Python, to design and deploy scalable data pipelines and AI solutions on AWS. The role demands expertise in big data technologies and machine learning, with a focus on building robust, efficient data and AI applications. The ideal candidate will collaborate with data scientists and stakeholders to meet data requirements while ensuring data quality and performance optimization. Staying current with industry trends is also essential for this position.
Salary (Rate): Negotiable
City: undetermined
Country: USA
Working Arrangements: remote
IR35 Status: outside IR35
Seniority Level: undetermined
Industry: IT
Job Description:
We are seeking a highly skilled and experienced AWS/Spark Data & AI Engineer with a strong background in Python and artificial intelligence to join our team. The ideal candidate will be responsible for designing, developing, and deploying scalable data pipelines and AI solutions on the AWS cloud platform. This role requires a deep understanding of big data technologies and machine learning concepts, along with a proven ability to use Python to build robust, efficient data and AI applications.
Responsibilities:
- Design, develop, and maintain large-scale, distributed data pipelines using Apache Spark, PySpark, and AWS services such as AWS Glue, EMR, S3, and Redshift.
- Implement ETL (Extract, Transform, Load) processes to ingest, transform, and load data from various sources into data lakes and data warehouses (a minimal PySpark sketch follows this list).
- Develop, train, and deploy machine learning and AI models using Python and relevant libraries (e.g., scikit-learn, TensorFlow, PyTorch).
- Collaborate with data scientists and business stakeholders to understand data requirements and translate them into technical solutions.
- Optimize and fine-tune Spark jobs and other data processing applications for performance, scalability, and cost-efficiency.
- Ensure data quality, integrity, and security across all data processing and storage systems.
- Troubleshoot and resolve issues related to data pipelines, Spark jobs, and AWS infrastructure.
- Stay up to date with the latest trends and technologies in big data, cloud computing, and artificial intelligence.
- Participate in code reviews and contribute to a culture of engineering excellence and best practices.
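
For context, here is a minimal PySpark sketch of the kind of S3-based ETL pipeline described above. The bucket names, paths, column names, and transformations are hypothetical placeholders for illustration, not details taken from this posting.

```python
# Minimal PySpark ETL sketch; all paths and the "orders" schema are
# hypothetical assumptions, not details from this role.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("orders-etl")
    .getOrCreate()
)

# Extract: read raw JSON landed in an S3 data lake (hypothetical path).
raw = spark.read.json("s3://example-raw-bucket/orders/2024/")

# Transform: basic cleansing and enrichment.
orders = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("amount") > 0)
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write partitioned Parquet to a curated zone; downstream, a
# Glue/EMR job or a Redshift COPY would typically consume this output.
(orders.write
       .mode("overwrite")
       .partitionBy("order_date")
       .parquet("s3://example-curated-bucket/orders/"))
```

In practice, a job like this would be scheduled on AWS Glue or EMR, with the curated output loaded into Redshift or queried in place.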
Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Proven professional experience with AWS cloud services, particularly those related to data engineering (S3, Glue, EMR, Redshift, Lambda).
- Extensive experience with Apache Spark and PySpark for big data processing and analytics.
- Advanced proficiency in Python for data manipulation, scripting, and application development.
- Strong understanding of AI/machine learning concepts and algorithms, and practical experience building and deploying ML models (a minimal scikit-learn sketch follows this list).
- Experience with big data frameworks and technologies (e.g., Hadoop, Hive).
- Solid knowledge of data warehousing concepts, data modeling, and ETL processes.
- Proficiency in SQL and experience with both relational and NoSQL databases.
- Excellent problem-solving, analytical, and communication skills.
- Ability to work both independently and collaboratively in a fast-paced environment.
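
To illustrate the model-building experience the qualifications call for, here is a minimal scikit-learn sketch. The feature file, column names, and model choice are assumptions made for the example, not requirements of the role.

```python
# Minimal scikit-learn training sketch; the feature table, "label"
# column, and RandomForest choice are illustrative assumptions.
import joblib
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical feature table, e.g. produced by an upstream ETL job.
df = pd.read_parquet("features.parquet")
X, y = df.drop(columns=["label"]), df["label"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Serialize for deployment, e.g. behind a Lambda or SageMaker endpoint.
joblib.dump(model, "model.joblib")
```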