Roles & Responsibilities
Responsibilities:
Admin role: On-premises Kubernetes (Client's in-house Kubernetes package), Azure Kubernetes Service, Spark cluster, Azure DevOps (CI/CD), Apache Ranger, Apache Spark, Apache Kafka, DataHub, Airflow, Trino, and Apache Iceberg.
Scope:
- Work with the Client's core platform team on installing the in-house packaged on-premises Kubernetes.
- Configure Azure Kubernetes Service (AKS) on the cloud platform.
- Deploy Apache Spark, Apache Ranger, and Apache Kafka on Kubernetes (see the Spark-on-Kubernetes sketch after this list).
- Use Azure DevOps (ADO), or another recommended tool, for deployment automation (see the pipeline-trigger sketch after this list).
- Data storage in GDP is divided into Hot and Cold logical partitions. Integrate ADLS Gen2 with the Hot and Cold storage via the S3 protocol (see the storage-integration sketch after this list).
- Integration with the bank’s Central Observability Platform (COP).
- Grafana-based monitoring dashboards for Spark jobs and K8s pods.
- Implementing data encryption at rest using Transparent Data Encryption (TDE) via SCB CaaS (Cryptography as a Service).
- Configuring on-wire encryption (TLS) for intra- and inter-cluster communication (see the Kafka TLS sketch after this list).
- Enforcing RBAC (Role-Based Access Control) via Apache Ranger for DataHub, Spark, and Kafka (see the Ranger policy sketch after this list).
- Working alongside the Client’s central security team for platform control assessments.
- During project execution, the Client will identify the tools and technology to transfer 5 PB of data from one logical partition to another within the existing Hadoop platform.
  - Migrating 1 PB of data from the brownfield Azure HDI (ADLS Gen2) to the greenfield GDP AKS (ADLS Gen2) (see the migration sketch after this list).
- Implementing a backup & disaster recovery strategy for ADLS and Isilon.
- Tokenization of personal data using Protegrity.
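Spark-on-Kubernetes sketch for the deployment item above: once images and service accounts exist, the setup can be smoke-tested from PySpark. This is a minimal illustrative sketch only; the API server URL, namespace, container image, and service account names are placeholders, not the Client's actual values.

from pyspark.sql import SparkSession

# Minimal Spark-on-Kubernetes smoke test; every endpoint, namespace, image
# and service-account value below is an illustrative placeholder.
spark = (
    SparkSession.builder
    .appName("k8s-smoke-test")
    .master("k8s://https://kubernetes.example.internal:6443")
    .config("spark.kubernetes.namespace", "data-platform")
    .config("spark.kubernetes.container.image", "registry.example.internal/spark:3.5.1")
    .config("spark.kubernetes.authenticate.driver.serviceAccountName", "spark")
    .config("spark.executor.instances", "2")
    .getOrCreate()
)

# Trivial job: executors are scheduled as pods and return a row count.
print(spark.range(1_000_000).count())
spark.stop()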
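Pipeline-trigger sketch for the ADO deployment-automation item: pipelines themselves are normally defined in azure-pipelines.yml; the snippet below only shows how a deployment run could be queued from Python through the Azure DevOps REST API. The organisation, project, pipeline ID, branch, and PAT handling are assumptions for illustration.

import requests

# Queue an Azure DevOps pipeline run via the Pipelines REST API.
# Organisation, project, pipeline id and the PAT are placeholders.
ORG, PROJECT, PIPELINE_ID = "example-org", "gdp-platform", 42
PAT = "PLACEHOLDER_PERSONAL_ACCESS_TOKEN"

url = (
    f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/pipelines/"
    f"{PIPELINE_ID}/runs?api-version=7.1"
)

resp = requests.post(
    url,
    auth=("", PAT),  # ADO accepts an empty username with a PAT as the password
    json={"resources": {"repositories": {"self": {"refName": "refs/heads/main"}}}},
    timeout=30,
)
resp.raise_for_status()
print("Queued pipeline run:", resp.json()["id"])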
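Storage-integration sketch for the Hot/Cold and ADLS Gen2 item: one possible approach is a single Spark session that reads the S3-exposed Hot/Cold partitions through the s3a connector and writes to ADLS Gen2 through abfss. The gateway endpoint, storage account, keys, and paths are placeholders, and the hadoop-aws and hadoop-azure connectors are assumed to be on the classpath.

from pyspark.sql import SparkSession

# One Spark session spanning the S3-exposed Hot/Cold partitions and ADLS Gen2.
# Endpoints, account names, keys and paths are illustrative placeholders.
spark = (
    SparkSession.builder
    .appName("hot-cold-adls-integration")
    # S3-compatible gateway fronting the Hot/Cold logical partitions.
    .config("spark.hadoop.fs.s3a.endpoint", "https://hot-cold-gateway.example.internal:9000")
    .config("spark.hadoop.fs.s3a.path.style.access", "true")
    .config("spark.hadoop.fs.s3a.access.key", "PLACEHOLDER_KEY")
    .config("spark.hadoop.fs.s3a.secret.key", "PLACEHOLDER_SECRET")
    # ADLS Gen2 account key (in practice sourced from a secret store, not inline).
    .config("spark.hadoop.fs.azure.account.key.exampleaccount.dfs.core.windows.net",
            "PLACEHOLDER_ACCOUNT_KEY")
    .getOrCreate()
)

# Read from the Hot partition and archive into ADLS Gen2.
hot_df = spark.read.parquet("s3a://hot-zone/events/")
hot_df.write.mode("overwrite").parquet(
    "abfss://cold@exampleaccount.dfs.core.windows.net/archive/events/"
)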
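Kafka TLS sketch for the on-wire encryption item: broker listeners are configured for SSL on the server side, and clients then connect over TLS as below. The broker address, topic name, and certificate/key paths are placeholders.

from confluent_kafka import Producer

# TLS-only Kafka client; broker address and certificate/key paths are placeholders.
producer = Producer({
    "bootstrap.servers": "kafka-0.example.internal:9093",
    "security.protocol": "SSL",
    "ssl.ca.location": "/etc/kafka/certs/ca.pem",
    "ssl.certificate.location": "/etc/kafka/certs/client.pem",
    "ssl.key.location": "/etc/kafka/certs/client.key",
})

# Produce one message over the encrypted listener and wait for delivery.
producer.produce("tls-smoke-test", key=b"k", value=b"hello over TLS")
producer.flush(10)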
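Ranger policy sketch for the RBAC item: policies are usually maintained in the Ranger Admin UI, but for automation they can also be created through Ranger's public v2 REST API. The Ranger host, service name, topic, group, and credentials below are placeholders.

import requests

# Create a Ranger policy allowing one group to consume from one Kafka topic.
# Host, service name, topic, group and credentials are placeholders.
RANGER = "https://ranger.example.internal:6182"
policy = {
    "service": "gdp_kafka",
    "name": "analytics-consume-events",
    "resources": {"topic": {"values": ["events"], "isExcludes": False}},
    "policyItems": [
        {
            "groups": ["analytics"],
            "accesses": [
                {"type": "consume", "isAllowed": True},
                {"type": "describe", "isAllowed": True},
            ],
        }
    ],
}

resp = requests.post(
    f"{RANGER}/service/public/v2/api/policy",
    json=policy,
    auth=("admin", "PLACEHOLDER_PASSWORD"),
    verify="/etc/ranger/certs/ca.pem",
    timeout=30,
)
resp.raise_for_status()
print("Created policy id:", resp.json()["id"])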
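Migration sketch for the 1 PB brownfield-to-greenfield item: bulk copies between ADLS Gen2 accounts are commonly driven with AzCopy (or DistCp); the snippet below simply wraps one azcopy invocation from Python. Account names, container paths, and SAS tokens are placeholders, and at petabyte scale the copy would be partitioned, scheduled, and monitored rather than run as a single command.

import subprocess

# Server-side recursive copy from the brownfield HDI storage account to the
# greenfield GDP account. Accounts, paths and SAS tokens are placeholders.
SRC = "https://hdibrownfield.blob.core.windows.net/data/warehouse?<src-sas>"
DST = "https://gdpgreenfield.blob.core.windows.net/data/warehouse?<dst-sas>"

subprocess.run(
    ["azcopy", "copy", SRC, DST, "--recursive", "--overwrite=ifSourceNewer"],
    check=True,
)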
Skills:
Requirements Gathering
Airflow
Apache Spark
Kubernetes
Azure
Big Data
Hadoop
Cryptography
Protocol
Access Control
Tokenization
Apache Kafka
Encryption
Disaster Recovery
Apache