We are a global, product-centric insight & automation services company. We help the world's organizations make better and faster decisions using the power of insight and intelligent automation. We build and operationalize their next-gen strategy through Big Data, Artificial Intelligence, Machine Learning, Unstructured Data Processing and Advanced Analytics. Quadratyx can boast of more extensive experience in data science & analytics than most other companies in India. We firmly believe in Excellence Everywhere.
Principal Data Engineer
Job / Role Information
Designation: Principal Data Engineer
Function: Technical
Role: Team Lead
Location: Hyderabad
Job Description
Purpose of the Job / Role:
The ideal candidate will work on multiple projects as a technical lead, driving user story analysis and elaboration, design and development of software applications, testing, and build automation tooling; architect the big data analytics framework; and lead a team of engineers. Experience with agile and other rapid application development methods is required.
Key Requisites:
Experience building systems that rely on proprietary algorithms, building and running large-scale distributed systems and web services, extracting structured data from unstructured content, etc.
Expertise in data structures and algorithms; experience designing and developing the architecture of platform technologies, cloud, analytics and applications.
End-to-end system implementation, including data security and privacy concerns.
Experience with big data architecture and the Apache Hadoop ecosystem (HDFS): HBase / Hive / Pig / Flume / Sqoop / MapReduce / YARN.
Provide technical leadership and governance of the big data team and of the implementation of the solution architecture across the following Hadoop ecosystem: Hadoop (Hortonworks), MapReduce, Pig, Hive, YARN, Tez, Spark, Phoenix, Presto, HBase, Storm, Kafka, Flume, Oozie, Ambari; security: Kerberos, Ranger, Knox, HDFS encryption.
Stream processing experience with any of the following: Apache Storm, Spark or Flink.
Experience with code repository management, code merges and quality checks, continuous integration, and automated deployment & management using tools such as Jenkins, Git, Puppet, Chef, Maven, Ivy, UrbanCode, Docker or comparable tools.
Experience loading results into a NoSQL database (HBase / Cassandra) and serving them to other applications (live website traffic, other internal tools).
Working Relationships
Reporting to
Vice President
External Stakeholders
Clients
Skills/ Competencies Required
Technical Skills
Hands-on experience with software development and system administration.
Expertise in Hadoop ecosystem and architecture components, Big Data tools and technologies including Hadoop, Spark, Hive, Kafka, MapReduce etc.
Sufficient understanding of data science, analytics and software engineering to communicate effectively with the engineering team, along with hands-on software engineering exposure.
Expertise in SQL and data modeling, working in an agile development process.
Working knowledge of at least one programming language.
Experience in cloud technologies.
Soft Skills
Passion and analytical abilities to solve complex problems
Bachelor’s or Master’s in Computer Science, Computer Engineering, or related discipline from a well-known institute
Minimum of 8-10 years of work experience in an IT organization, preferably with an Analytics / Big Data / Data Science / AI background.
Quadratyx is an equal opportunity employer - we do not discriminate against candidates on the basis of religion, caste, gender, language, disability or ethnic group.
Quadratyx reserves the right to place/move any candidate to any company location, partner location or customer location globally, in the best interest of Quadratyx business.