AWS Data Engineer (AWS, Snowflake, Denodo)

 

Recruiter: Quarphix

Job Ref: JHB000045/FK

Date posted: Saturday, March 12, 2022

Location: Johannesburg, South Africa


JOB SUMMARY:
We are actively recruiting for a Senior AWS Data Engineer (AWS, Snowflake, Denodo) role for our client in the insurance and financial services industry.

JOB DESCRIPTION:

Job Responsibilities:
  • Design, build and operationalize large-scale enterprise data solutions and applications using one or more AWS data and analytics services in combination with third-party tools: Glue, Step Functions, Kafka CC, PySpark, DynamoDB, Delta.io, Redshift, Lambda, Delta Lake and Python.
  • Analyze, re-architect and re-platform on-premises data warehouses to data platforms on the AWS cloud using AWS or third-party services and Kafka CC.
  • Design and build production data pipelines from ingestion to consumption within a big data architecture, using Java, PySpark, Scala and Kafka CC (see the illustrative sketch after this list).
  • Design and implement data engineering, ingestion and curation functions on the AWS cloud using AWS-native services or custom programming.
  • Perform detailed assessments of current-state data platforms and create an appropriate transition path to the AWS cloud.
  • Design, implement and support an analytical data infrastructure providing ad-hoc access to large datasets and computing power.
  • Interface with other technology teams to extract, transform, and load data from a wide variety of data sources using SQL, AWS big data technologies and Kafka CC.
  • Create and support real-time data pipelines built on AWS technologies including Glue, Lambda, Step Functions, PySpark, Athena and Kafka CC.
  • Continually research the latest big data and visualization technologies to provide new capabilities and increase efficiency.
  • Work closely with team members to drive real-time model implementations for monitoring and alerting of risk systems.
  • Collaborate with other tech teams to implement advanced analytics algorithms that exploit our rich datasets for statistical analysis, prediction, clustering and machine learning.
  • Help continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.
  • Deep understanding of ETL, ELT, data ingestion/cleansing and engineering.
  • Knowledge of Master Data Management and data quality tools and processing.
  • Advanced analytics (visualization tools: Power BI/Tableau)
  • Data Governance and Catalog
  • Lambda & Kappa architecture patterns
  • Machine learning and Data Science
  • 5+ years of big data (Snowflake, BigQuery, Redshift) and data engineering experience.
  • 4-8 years of experience in data virtualization / SQL / ETL / data warehousing / data integration technologies.
  • Solid understanding of SQL and good grasp of relational and analytical database management theory and practice.
  • Good knowledge of JDBC, XML and Web Services APIs.
  • Prior experience with the Denodo Platform.
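
For context on the day-to-day work, the following is a minimal, illustrative sketch of an ingestion-to-consumption pipeline of the kind described in the responsibilities above: a PySpark structured-streaming job reading events from Kafka and landing them in a Delta Lake table. All broker, topic, path and schema names are hypothetical placeholders, not details of the client's actual environment.

# Illustrative sketch only: Kafka -> PySpark -> Delta Lake ingestion.
# Every name below (topic, broker, bucket, schema) is a hypothetical placeholder.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import (DoubleType, StringType, StructField,
                               StructType, TimestampType)

spark = SparkSession.builder.appName("policy-events-ingest").getOrCreate()

# Assumed shape of each Kafka message payload (JSON).
schema = StructType([
    StructField("policy_id", StringType()),
    StructField("premium", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Ingest: subscribe to a Kafka topic as a streaming source.
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
       .option("subscribe", "policy-events")              # placeholder topic
       .load())

# Parse the message value from bytes into typed columns.
events = (raw.select(from_json(col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Land the parsed stream in a Delta table for downstream consumption
# (e.g. Athena or Redshift Spectrum reading the same S3 data).
(events.writeStream
 .format("delta")
 .option("checkpointLocation", "s3://example-bucket/checkpoints/policy-events")
 .start("s3://example-bucket/delta/policy_events")
 .awaitTermination())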
Qualification/s:
  • Bachelor's Degree in Computer Science, Information Technology, or other relevant fields.
  • Experience in any of the following: AWS Athena, Glue, PySpark, DynamoDB, Redshift, Lambda, Step Functions and Kafka CC.
  • Proficient in AWS Redshift, S3, Glue, Athena, PySpark, Step Functions, Glue Workflows, Kafka CC and Delta.io.
  • Knowledge of software engineering best practices across the development lifecycle, including agile methodologies, coding standards, code reviews, source management, build processes, testing, and operations
Minimum Experience Required:
  • Advanced working knowledge of data engineering and experience with modern data practices, including Delta.io, CDC management and data load practices (see the upsert sketch after this list).
  • Experience building/operating highly available, distributed systems of data extraction, ingestion, and processing of large data sets
  • Experience working with distributed systems as it pertains to data storage and computing
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytic skills related to working with unstructured datasets.
  • Build processes supporting data transformation, data structures, metadata, dependency and workload management.
  • A successful history of manipulating, processing and extracting value from large, disconnected data sets.
  • Working knowledge of message queuing, stream processing and highly scalable big data stores.
  • Strong project management and organizational skills.
  • Experience supporting and working with cross-functional teams in a dynamic environment.
  • Experience in a Data Engineer or similar role.
  • Experience with big data tools is a must: Delta.io, PySpark, Kafka, etc.
  • Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
  • Experience with data pipeline and workflow management tools: Step Functions, Glue Workflows, etc.
  • Experience with AWS cloud services: EC2, EMR, RDS and Redshift.
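
The CDC management and data-load practices mentioned above often come down to idempotent upserts. Below is a minimal sketch, assuming a Delta Lake target and a staged batch of change records with a hypothetical "op" column marking inserts, updates and deletes; all paths and column names are placeholders.

# Illustrative sketch only: CDC-style upsert into a Delta table using MERGE.
# Paths, table and column names (including the "op" change marker) are
# hypothetical placeholders.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("cdc-upsert").getOrCreate()

# Latest batch of change records staged by an upstream CDC feed.
updates = spark.read.format("delta").load("s3://example-bucket/staging/policy_changes")

target = DeltaTable.forPath(spark, "s3://example-bucket/delta/policies")

# Apply deletes, updates and inserts in one atomic, re-runnable MERGE.
(target.alias("t")
 .merge(updates.alias("u"), "t.policy_id = u.policy_id")
 .whenMatchedDelete(condition="u.op = 'D'")
 .whenMatchedUpdateAll(condition="u.op = 'U'")
 .whenNotMatchedInsertAll(condition="u.op = 'I'")
 .execute())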

To apply, send your CV to

 

NB! This job is now closed.



 

 

 
