Data Engineer (Azure) – Synapse, PySpark, Python, Data Warehouse, Azure Data Explorer, Azure DevOps
Job Scope
• Design, review, and develop PySpark scripts; test and troubleshoot data pipelines and orchestration.
• Design and develop reports and dashboards in Power BI, including setting up access control with row-level security; DAX query experience required.
• Establish connections to source data systems such as on-prem databases, IoT devices, and APIs.
• Manage the collected data in appropriate storage/database solutions (e.g. file systems, SQL servers, or Big Data platforms such as Hadoop and HANA), as required by the specific project.
• Design and develop data pipelines using PySpark and copy data activities for batch ingestion.
• Perform data integration (e.g. using database table joins or other mechanisms) at the level required by the project's analysis needs.
• Deploy pipeline artifacts from one environment to another using Azure DevOps.
Skills & Experience
• Bachelor’s degree in Computer Science or Engineering with 2 years of experience in Azure data engineering, Python, PySpark, or Big Data development.
• Sound knowledge of Azure Synapse Analytics for pipeline setup and orchestration.
• 1-2 years of experience in visualization design and development with Power BI. Knowledge of row-level security and access control.
• Sound experience in SQL, data warehouses, data marts, and data ingestion with PySpark and Python.
• Expertise in developing and maintaining ETL pipelines on cloud platforms such as AWS and Azure (Azure Synapse or Data Factory preferred).
• Team player with good interpersonal, communication, and problem-solving skills.
• DevOps expertise preferred.
Working hours:
8:30am to 6pm (Monday to Friday), onsite; no hybrid option.
Data Engineer (Azure) • D02 Anson, Tanjong Pagar, SG