Roles & Responsibilities
Design, develop, and optimize ETL/ELT pipelines using Databricks (PySpark, Spark SQL, Delta Lake); see the sketch after this list.
Collaborate with data architects and business teams to ensure data models support analytics and reporting requirements.
Manage and monitor Databricks clusters, jobs, and workflows for performance and cost optimization.
Integrate Databricks with cloud platforms (e.g., Azure Data Lake, AWS S3, Synapse, Redshift, Snowflake).
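As a rough illustration of the pipeline work described above, the sketch below reads raw JSON from cloud storage, cleans it, and writes a partitioned Delta table. Every path, column, and name (the S3 bucket, event_id, event_ts, events_clean) is a hypothetical assumption for illustration, not a detail taken from this role:

```python
# Minimal ETL sketch: raw JSON -> cleaned, partitioned Delta table.
# All paths, table names, and columns below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-etl").getOrCreate()

# Extract: read raw JSON from cloud storage (path is an assumption)
raw = spark.read.json("s3://example-bucket/raw_events/")

# Transform: drop bad rows, enforce a timestamp type, deduplicate
clean = (
    raw.filter(F.col("event_id").isNotNull())
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("event_date", F.to_date("event_ts"))
       .dropDuplicates(["event_id"])
)

# Load: write a Delta table partitioned for downstream queries
# (Delta Lake is built in on Databricks runtimes)
(
    clean.write.format("delta")
         .mode("overwrite")
         .partitionBy("event_date")
         .save("s3://example-bucket/delta/events_clean/")
)
```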
Requirements / Qualifications
Hands-on experience with Databricks and Apache Spark.
Strong proficiency in Python (PySpark), SQL, and Spark SQL.
Experience with cloud data platforms (Azure, AWS, or GCP).
Familiarity with Delta Lake, Parquet, and data lakehouse architecture; a short PySpark/Spark SQL sketch follows this list.
Experience in version control (Git) and DevOps practices (CI/CD).
Understanding of data modeling, data warehousing, and ETL best practices.
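To illustrate the Python, SQL, and Delta Lake proficiency items above, a sketch that exposes a Delta table to Spark SQL and runs an aggregation; the view name and path reuse the hypothetical events_clean table from the earlier example and are assumptions, not specifics of this posting:

```python
# Pairing the DataFrame API with Spark SQL over a (hypothetical)
# Delta table; path and view name are assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("events-report").getOrCreate()

# Expose the Delta path as a temporary view for SQL access
spark.read.format("delta") \
    .load("s3://example-bucket/delta/events_clean/") \
    .createOrReplaceTempView("events_clean")

# Spark SQL aggregation; an equivalent DataFrame chain would also work
daily_counts = spark.sql("""
    SELECT event_date, COUNT(*) AS n_events
    FROM events_clean
    GROUP BY event_date
    ORDER BY event_date
""")
daily_counts.show()
```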
Thanks and best regards,
Karanam Vijaya Kiran
(EA Registration No.: R1443178)
HP: +65 92333815
Recruitment Manager
Helius Technologies Pte Ltd (EA Licence No.: 11C3373)
https://www.linkedin.com/in/vijay-karanam-68462131/
Skills
PySpark
Apache Spark
Kubernetes
Azure
Data Modeling
DevOps
Pipelines
Architects
Scripting
ETL
Databricks
SQL
Python
Docker
S3
Data Warehousing
Linux
Cloud Engineer • Islandwide, SG