Does the idea of a technically challenging role in a fast-growing company excite you? Are you passionate about cutting-edge technology and working collaboratively with a smart and energetic team? Are you ready to get hands-on with emerging technology, share your knowledge and experience with your peers, and be part of a dynamic, fast-paced work environment where curiosity and inquisitiveness are encouraged and rewarded? If so, then this role is for you.
This role requires that you have:
A minimum of 5 years of experience in big data architecture and the Apache
Hadoop (HDFS) stack: HBase, Hive, Pig, Flume, Sqoop, MapReduce, YARN.
End-to-end system implementation, including data security and privacy concerns.
Experience building systems that rely on proprietary algorithms, building and
running large-scale distributed systems and web services, and extracting
structured data from unstructured content (a log-parsing sketch follows this list).
Provide technical leadership and governance of the big data team and the
implementation of the solution architecture across the following Hadoop
ecosystem: Hadoop (Hortonworks), MapReduce, Pig, Hive, YARN, Tez,
Spark, Phoenix, Presto, HBase, Storm, Kafka, Flume, Oozie, Ambari, and
security (Kerberos, Ranger, Knox, HDFS encryption).
You have loaded results into a NoSQL database (HBase/Cassandra) and
served them to other applications such as live website traffic and internal
tools (a write sketch follows this list).
You have stream-processing experience with any of Apache Storm, Spark, or Flink (a streaming sketch follows this list).
You have experience with code/build/deployment tooling: Git, Maven, Jenkins.
Provide cloud-computing infrastructure solutions on Amazon Web Services (AWS: EC2, VPC, S3, IAM).
Last but not least, a research-oriented and detail-oriented mindset.
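
As a flavor of the unstructured-to-structured extraction work mentioned above, here is a minimal Spark (Scala) sketch; the HDFS path, the Apache-style access-log format, and the regex are illustrative assumptions, not details from this posting:

import org.apache.spark.sql.SparkSession

// Minimal sketch: extract (ip, timestamp, url) tuples from raw access-log lines.
// The HDFS path and log format below are hypothetical, chosen only for illustration.
object LogExtractSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("LogExtractSketch").getOrCreate()
    val pattern = """^(\S+) \S+ \S+ \[([^\]]+)\] "GET (\S+).*""".r
    val records = spark.sparkContext
      .textFile("hdfs:///logs/access.log")          // assumed input location
      .flatMap {
        case pattern(ip, ts, url) => Some((ip, ts, url))
        case _                    => None           // skip lines that don't match
      }
    records.take(5).foreach(println)
    spark.stop()
  }
}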
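
For the NoSQL serving requirement, a minimal write sketch against the standard HBase client API; the table name, column family, row key, and value are all hypothetical:

import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client.{ConnectionFactory, Put}
import org.apache.hadoop.hbase.util.Bytes

// Minimal sketch: write one model-output row so downstream apps can read it.
// Table "user_scores", column family "m", and the row key are illustrative.
object HBaseServeSketch {
  def main(args: Array[String]): Unit = {
    val conf = HBaseConfiguration.create()          // picks up hbase-site.xml
    val conn = ConnectionFactory.createConnection(conf)
    try {
      val table = conn.getTable(TableName.valueOf("user_scores"))
      val put = new Put(Bytes.toBytes("user#1001"))
      put.addColumn(Bytes.toBytes("m"), Bytes.toBytes("score"), Bytes.toBytes("0.87"))
      table.put(put)
      table.close()
    } finally conn.close()
  }
}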
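
For the stream-processing requirement, a minimal Spark Structured Streaming sketch that counts events arriving on a Kafka topic; the broker address and topic name are assumptions, and the job needs the spark-sql-kafka connector on the classpath:

import org.apache.spark.sql.SparkSession

// Minimal sketch: count events per key from a Kafka topic and print to console.
// Broker "broker:9092" and topic "clicks" are hypothetical.
object StreamCountSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("StreamCountSketch").getOrCreate()
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "clicks")
      .load()
      .selectExpr("CAST(value AS STRING) AS click")
    val counts = events.groupBy("click").count()
    counts.writeStream
      .outputMode("complete")
      .format("console")
      .start()
      .awaitTermination()
  }
}

In a real deployment the console sink would be swapped for a durable one, for example the HBase write pattern sketched above.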
Quadratyx is an equal opportunity employer - we will never differentiate between candidates on the basis of religion, caste, gender, language, disability or ethnic group.
Quadratyx reserves the right to place or move any candidate to any company
location, partner location or customer location globally, in the best
interests of Quadratyx's business.