Roles & Responsibilities
Job Title : SQL & Hadoop Developer
Role Overview :
We are seeking a highly skilled professional with strong expertise in SQL, Spark, and Hadoop. The ideal candidate has 3–4 years of experience working with large datasets and distributed systems. Experience in the banking domain is an added advantage.
Responsibilities :
- Design, develop, and maintain scalable data pipelines and ETL processes.
- Work with large datasets using SQL, Spark, and Hadoop frameworks.
- Ensure data quality, consistency, and availability across systems.
- Collaborate with cross-functional teams to understand requirements and deliver data solutions.
- Optimize queries and processes for improved performance.
- Communicate effectively with stakeholders and provide regular updates.
Mandatory Skills :
- Minimum 3–4 years of experience in data engineering or related roles.
- Strong proficiency in SQL.
- Hands-on experience with Spark and Hadoop.
- Excellent communication skills (both written and verbal).
- Banking domain knowledge (added advantage).
Preferred Qualifications :
- Experience with ETL tools or data pipeline frameworks.
- Familiarity with cloud platforms (AWS, Azure, or GCP).
- Knowledge of Python or Scala for data processing.
- Ability to work independently and in a team environment.
Key Skills :
Excellent Communication Skills
Scala
Azure
Big Data
Ability To Work Independently
Pipelines
Hadoop
ETL
Data Quality
MapReduce
Data Engineering
SQL
Distributed Systems
Python
Banking