Summary: The Senior Databricks Engineer role requires a seasoned professional with 5-7 years of experience in data engineering and commercial software development, with a focus on solving complex problems. The position calls for expertise in big data tools, SQL, and data pipeline management, with particular depth in Databricks and related technologies. The role is primarily remote, with a potential option to work in Los Angeles, California. The position is classified as outside IR35, indicating a favorable tax status for contractors.
Key Requirements:
- 5-7 years of industry experience coding commercial software and a passion for solving complex problems.
- 5-7 years of direct experience in Data Engineering with tools such as:
  - Big data tools: Databricks, Apache Spark, Delta Lake, etc.
  - Relational SQL (preferably T-SQL; alternatively pgSQL or MySQL).
  - Data pipeline and workflow management tools: Databricks Workflows, Airflow, Step Functions, etc.
  - Object-oriented/object function scripting languages: PySpark/Python, Java, C++, Scala, etc.
- Experience working with Data Lakehouse architecture and Delta Lake/Apache Iceberg.
- Advanced working SQL knowledge, including query authoring and optimization, plus working familiarity with a variety of relational databases.
- Experience manipulating, processing, and extracting value from large, disconnected datasets.
- Ability to inspect existing data pipelines, discern their purpose and functionality, and re-implement them efficiently in Databricks (see the sketch after this list).
- Experience manipulating structured and unstructured data.
- Experience architecting data systems (transactional and warehouses).
- Experience with the SDLC, CI/CD, and operating in dev/test/prod environments.
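
To make the pipeline work above concrete, below is a minimal, hypothetical PySpark sketch of the kind of Delta Lake job this role describes. The paths, table name, and column names are illustrative assumptions, not details from the posting.

```python
# Hypothetical Databricks job sketch: raw JSON events -> cleaned Delta table.
# All paths, table names, and columns are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# On Databricks a SparkSession already exists as `spark`; building one here
# keeps the sketch runnable elsewhere (with delta-spark installed).
spark = SparkSession.builder.appName("events-cleaning-sketch").getOrCreate()

# Read raw JSON landed by an assumed upstream ingestion job.
raw = spark.read.json("/mnt/raw/events/")

# Light cleanup: drop malformed rows, normalize the timestamp, stamp a load date.
cleaned = (
    raw.dropna(subset=["event_id", "event_ts"])
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("load_date", F.current_date())
)

# Write a partitioned Delta table so downstream queries can prune by date.
(
    cleaned.write.format("delta")
           .mode("append")
           .partitionBy("load_date")
           .saveAsTable("analytics.events_clean")
)
```

Partitioning append-only event data by load date is one common layout; Delta Lake's transaction log then gives downstream readers consistent snapshots while this job writes.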
Key Skills:
- 5-7 years of industry experience coding commercial software.
- 5-7 years of direct experience in Data Engineering.
- Proficiency in big data tools: Databricks, Apache Spark, Delta Lake.
- Strong SQL skills (T-SQL, pgSQL, MySQL).
- Experience with data pipeline and workflow management tools.
- Knowledge of object-oriented scripting languages (PySpark/Python, Java, C++, Scala).
- Experience with Data Lakehouse architecture.
- Advanced SQL knowledge and experience with relational databases (see the query sketch after this list).
- Ability to manipulate large datasets.
- Experience with data systems architecture.
- Familiarity with SDLC and CI/CD processes.
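
As an illustration of the query-authoring and optimization side, here is a small, hypothetical Spark SQL query against the table sketched above; the event_type column and seven-day window are assumptions made for the example.

```python
# Hypothetical query sketch; assumes the `spark` session and the
# analytics.events_clean Delta table from the previous example, plus an
# assumed event_type column.
daily_counts = spark.sql("""
    SELECT load_date,
           event_type,
           COUNT(*) AS events
    FROM analytics.events_clean
    WHERE load_date >= date_sub(current_date(), 7)
    GROUP BY load_date, event_type
    ORDER BY load_date, events DESC
""")

daily_counts.show()

# Filtering on the partition column (load_date) lets Delta skip whole
# partitions; explain() shows whether the predicate was pushed down.
daily_counts.explain()
```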
Salary (Rate): Negotiable
City: undetermined
Country: USA
Working Arrangements: remote
IR35 Status: outside IR35
Seniority Level: Senior
Industry: IT