Roles & Responsibilities
We are seeking a highly skilled Senior Data Engineer to join our team and contribute to building scalable, reliable, and high-performance data platforms. The ideal candidate will have strong expertise in Python, SQL, Spark, and hands-on experience with cloud-based big data platforms such as Databricks.
Key Responsibilities:
- Design, develop, and maintain large-scale data pipelines and ETL processes.
- Work with Spark and Databricks to process and transform large datasets efficiently.
- Optimize SQL queries and data models for performance and scalability.
- Collaborate with data scientists, analysts, and business stakeholders to deliver reliable data solutions.
- Ensure best practices in data governance, data quality, and security.
- Mentor junior engineers and contribute to technical design reviews.
Required Skills & Qualifications:
- Strong experience in Python programming for data engineering.
- Advanced knowledge of SQL for querying, optimization, and modeling.
- Hands-on expertise with Apache Spark for distributed data processing.
- Working knowledge of Databricks (certification preferred but not mandatory).
- Experience with data lakes, data warehouses, and cloud platforms (Azure/AWS/GCP).
- Familiarity with CI/CD pipelines and version control (Git).
- Excellent problem-solving and communication skills.
Nice-to-Have:
- Databricks Certification (Associate/Professional) – highly preferred.
- Knowledge of streaming frameworks (Kafka, Delta Live Tables).
- Experience with data orchestration tools (Airflow, ADF, etc.).