Job_303648
Experience: 3-5 years
Job Description
● 3+ years of experience in PySpark development and distributed data processing.
● Strong hands-on experience with Databricks (Notebooks, Jobs, Workflows, Delta Lake); see the Delta Lake merge sketch after this list.
● Proficiency in writing efficient Spark transformations and Spark SQL queries (see the first sketch after this list).
● Experience building modular, reusable, and testable data pipelines (a testable-transformation sketch also follows).
● Exposure to workflow orchestration using Databricks Workflows or Apache Airflow (see the Airflow sketch below).
● Familiarity with Git-based version control and basic CI/CD concepts.
● Basic understanding of cloud platforms such as Azure, AWS, or GCP.
● Good communication skills with the ability to document solutions clearly.
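
To make the Spark skills above concrete, here is a minimal sketch of one aggregation written both as a chained DataFrame transformation and as an equivalent Spark SQL query. The dataset, view, and column names (sales, region, amount) are invented for illustration:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sketch").getOrCreate()

# Toy input; in practice this would be read from a table or files.
sales = spark.createDataFrame(
    [("north", 120.0), ("south", 75.5), ("north", 42.0)],
    ["region", "amount"],
)

# DataFrame API: a chain of transformations, evaluated lazily.
totals_df = (
    sales.groupBy("region")
    .agg(F.sum("amount").alias("total_amount"))
    .orderBy(F.desc("total_amount"))
)

# The same aggregation expressed in Spark SQL via a temp view.
sales.createOrReplaceTempView("sales")
totals_sql = spark.sql(
    "SELECT region, SUM(amount) AS total_amount "
    "FROM sales GROUP BY region ORDER BY total_amount DESC"
)

totals_df.show()
totals_sql.show()
```

Both forms go through the same Catalyst optimizer, so the choice between them is largely a matter of readability and team convention.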
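For the Delta Lake item, a common pattern is an atomic upsert via MERGE. This sketch assumes a Databricks cluster (or a local Spark session configured with the delta-spark package); the table name main.analytics.customers and the customer_id key are hypothetical:

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Incoming changes; column names are illustrative only.
updates_df = spark.createDataFrame(
    [(1, "alice@example.com"), (2, "bob@example.com")],
    ["customer_id", "email"],
)

# Upsert: update matching rows, insert new ones, in one atomic MERGE.
target = DeltaTable.forName(spark, "main.analytics.customers")
(
    target.alias("t")
    .merge(updates_df.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```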
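For the modular, testable pipelines item, one widely used approach (a sketch, not a prescription) is to write each transformation as a pure DataFrame-in, DataFrame-out function so it can be unit-tested on a small local session, for example with pytest. The function and column names here are made up:

```python
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F


def add_line_total(orders: DataFrame) -> DataFrame:
    """Pure transformation: derive line_total as quantity * unit_price."""
    return orders.withColumn("line_total", F.col("quantity") * F.col("unit_price"))


def test_add_line_total():
    # Runs on a tiny local session; no cluster required.
    spark = SparkSession.builder.master("local[1]").getOrCreate()
    df = spark.createDataFrame([(2, 5.0)], ["quantity", "unit_price"])
    assert add_line_total(df).first()["line_total"] == 10.0
```

Keeping transformations free of I/O and session setup is what makes them reusable across notebooks, Jobs, and tests.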
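Finally, for the orchestration item, a minimal Apache Airflow DAG might look like the following. This assumes a recent Airflow 2.x installation; the DAG id, schedule, and task bodies are placeholders, and Databricks Workflows would express the same dependency graph through its own job and task configuration:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("extract step")  # placeholder task body


def transform():
    print("transform step")  # placeholder task body


with DAG(
    dag_id="daily_sales_pipeline",  # hypothetical DAG id
    start_date=datetime(2026, 1, 1),
    schedule="@daily",
    catchup=False,
):
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task  # extract runs before transform
```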