The Hadoop Data and Information Specialist will be responsible for designing, developing and supporting application solutions, with a primary focus on Hadoop, in a financial services environment. The role is central to building new data pipelines from a variety of structured and unstructured sources into Hadoop. The successful candidate must be eager to wear multiple hats and capable of picking up new technologies quickly.
EXPERIENCE AND QUALIFICATIONS
- Tertiary qualification with a major in at least one of the following: Computer Science, Information Systems or a similar field.
- Certification in Hadoop Development.
- Strong experience in Hadoop – Hive, Pig, Spark, Impala, Oozie, Sqoop, and MapReduce.
- Experience writing high-performance, reliable and maintainable code.
- Ability to write MapReduce jobs (a minimal example is sketched after this list).
- Good knowledge of database structures, theories, principles, and practices.
- Ability to write Pig Latin scripts.
- Hands-on experience with HiveQL.
- Familiarity with data loading tools like Flume, Sqoop and Kafka.
- Knowledge of workflow schedulers such as Oozie.
- Analytical and problem-solving skills applied to the Big Data domain.
- Proven understanding of Hadoop, HBase, Hive, and Pig.
- Good grasp of multi-threading and concurrency concepts.
- Must have Java experience.
- Financial Services experience.
- In-depth knowledge of Data Warehouse and Big Data best practices.
- Knowledge and technical appreciation of the interconnections and interfaces between various technical platforms, operating systems and processes.
- Good understanding of ITIL as it applies to data.
- Must understand the need to align IT and business strategies.
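To illustrate the kind of work the "MapReduce jobs" requirement refers to, the sketch below shows a minimal word-count job written against the standard Hadoop Java MapReduce API. It is an illustrative example only, not part of any existing codebase for this role; the class names are placeholders and the input/output HDFS paths are taken from the command line.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Minimal word-count job: the "hello world" of Hadoop MapReduce.
public class WordCount {

  // Mapper: emits (word, 1) for every token in each input line.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reducer: sums the counts emitted for each word.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // combiner reuses the reducer
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input dir
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output dir
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

Packaged into a jar, a job like this would typically be launched with `hadoop jar wordcount.jar WordCount <input dir> <output dir>`, where the two directories are placeholders for HDFS paths.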