- Set up and maintain the Big Data platform.
- Work with Data Scientists, Data Analysts, and other internal stakeholders to assist with data-related technical issues and support their data pipeline infrastructure and data preparation needs.
- Convert models from Data Scientists/Analysts into optimized, scalable, production-grade ML code.
- Design, develop, and maintain optimal data pipelines for the Big Data platform.
- Develop, evaluate, recommend, and maintain Big Data reporting and visualization applications.
- Participate in the design, development, and testing of highly scalable, resilient microservices, frameworks, and APIs that support different applications.
- Troubleshoot software and system issues.
- Bachelor's or Master's degree in Computer Science, Software Engineering, Information Systems, or a related field.
- At least 6 years of technical experience in Information Technology, with at least 3 years in Big Data, Data Warehousing, or Business Intelligence technologies.
- Broad knowledge of various aspects of Big Data, with a good understanding of and hands-on experience in Hadoop-based technologies such as HDFS, MapReduce, Spark, and Kafka.
- Deep understanding of both relational and NoSQL database technologies such as PostgreSQL, Oracle Database, Cassandra, MongoDB, and Neo4j.
- Good knowledge of programming languages such as Java, Python, or Scala on Linux/Windows platforms.
- Ability to write production-grade ML code for edge devices is a major plus.
- Experience with Big Data visualization and reporting software.
- Experience in designing ETL/BI solutions.
- Experience in DevOps and DataOps.
- Familiarity with Linux/UNIX system administration.
- Experience in operational support in delivering Big Data solutions.
- Effective oral and written communication, along with strong analytical, problem-solving, multitasking, and project management skills, is essential for this job.
Shortlisted candidates will be offered a 1-year Agency Contract.