Roles & Responsibilities
Responsibilities:
- Work closely with data stewards, data analysts, and business end-users to implement and support data solutions.
- Design and build robust, scalable data ingestion and data management solutions for batch loading and streaming from multiple data sources in Python, using mechanisms such as APIs, file transfer, and direct interfaces with Oracle and MSSQL databases.
- Work within the SDLC process: requirements gathering, design and development, SIT testing, UAT support, and CICD deployment using GitHub for enhancements and new ingestion pipelines.
- Ensure compliance with IT security standards, policies, and procedures.
- Provide BAU support, including production job monitoring, issue resolution, and bug fixes.
- Enable ingestion checks and data quality checks for all data sets in the data platform, and ensure data issues are actively detected, tracked, and fixed without breaching SLAs (a minimal sketch of such checks follows this list).
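
For illustration, here is a minimal sketch of the kind of batch ingestion and data quality check described above. It is a sketch only: sqlite3 stands in for the Oracle/MSSQL drivers (such as oracledb or pyodbc) an actual pipeline would use, and the payments table and amount column are hypothetical.

```python
# Minimal batch-ingestion sketch with a row-count and null check.
# sqlite3 stands in for the Oracle/MSSQL drivers (oracledb / pyodbc)
# a real pipeline would use; table and column names are hypothetical.
import sqlite3

def ingest_batch(src_conn, dst_conn, table: str) -> int:
    """Copy all rows of `table` from source to destination; return the row count."""
    rows = src_conn.execute(f"SELECT id, amount FROM {table}").fetchall()
    dst_conn.executemany(f"INSERT INTO {table} (id, amount) VALUES (?, ?)", rows)
    dst_conn.commit()
    return len(rows)

def check_quality(dst_conn, table: str, expected_rows: int) -> None:
    """Raise if the loaded row count mismatches or a required column contains nulls."""
    loaded = dst_conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    nulls = dst_conn.execute(
        f"SELECT COUNT(*) FROM {table} WHERE amount IS NULL"
    ).fetchone()[0]
    if loaded != expected_rows:
        raise ValueError(f"{table}: expected {expected_rows} rows, loaded {loaded}")
    if nulls:
        raise ValueError(f"{table}: {nulls} nulls in required column 'amount'")

if __name__ == "__main__":
    src = sqlite3.connect(":memory:")
    dst = sqlite3.connect(":memory:")
    for conn in (src, dst):
        conn.execute("CREATE TABLE payments (id INTEGER, amount REAL)")
    src.executemany("INSERT INTO payments VALUES (?, ?)", [(1, 9.5), (2, 3.0)])
    src.commit()
    n = ingest_batch(src, dst, "payments")
    check_quality(dst, "payments", expected_rows=n)
    print(f"ingested and validated {n} rows")
```

In production, the same pattern would typically be parameterized per data set, scheduled, and wired to monitoring and alerting so that SLA breaches surface immediately.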
Requirements:
- Possess a degree in Computer Science / Information Technology or a related field.
- At least 3 years of experience in a role focused on the development and support of data ingestion pipelines.
- Experience building on data platforms, e.g., Snowflake.
- Proficient in SQL and Python.
- Experience with cloud environments (e.g., AWS).
- Experience with continuous integration and continuous deployment (CICD) using GitHub.
- Experience with Software Development Life Cycle (SDLC) methodology.
- Experience with data warehousing concepts.
- Strong problem-solving and troubleshooting skills.
- Strong communication and collaboration skills.
- Able to design and implement solutions and perform code reviews independently.
- Able to provide production support independently.
- Agile, a fast learner, and able to adapt to change.