POSITION INFO:
FUNCTIONAL DEFINITION AND RESPONSIBILITY:
The Data Engineer is responsible for developing, maintaining, and optimising the data systems and operational pipelines that support the company’s market research and analytics processes. The role ensures that data flows seamlessly from collection through transformation to reporting, enabling timely, accurate, and actionable insights for clients. This position bridges technical development and operational efficiency, ensuring data reliability, process automation, and continuous improvement across the data lifecycle.
RESPONSIBILITIES:
Data Systems Development and Maintenance:
- Design, build, and maintain data pipelines and ETL processes for efficient ingestion, transformation, and storage of large, multi-source datasets.
- Develop automation solutions that improve data quality, integrity, and processing efficiency across the research workflow.
- Maintain and enhance database systems, ensuring data consistency, accuracy, and optimal performance.
- Develop APIs and integration scripts to connect internal platforms with third-party data sources and tools.
Operational Data Management:
- Monitor data workflows to ensure timely delivery and availability for analysis and reporting.
- Diagnose and resolve issues in data processing and system performance.
- Implement process controls and error handling to minimise downtime and data discrepancies.
- Support data validation and reconciliation to maintain confidence in reporting outputs.
Collaboration and Stakeholder Support:
- Work closely with the research, data analytics, and data science teams to understand operational needs and deliver scalable data solutions.
- Translate data and workflow requirements into robust, maintainable technical systems.
- Provide technical input into data architecture, governance, and process improvement initiatives.
- Document data flows, system configurations, and development procedures for operational continuity.
Innovation and Continuous Improvement:
- Identify opportunities to optimise data pipelines and automate manual processes.
- Evaluate and implement new tools and technologies to improve data efficiency and quality.
- Contribute to the continuous improvement of data engineering standards, best practices, and workflows.
QUALIFICATIONS & EXPERIENCE:
- Bachelor’s degree in Computer Science, Data Engineering, Information Systems, or a related field (or equivalent experience).
- 3-5 years’ experience in data operations, data engineering, or software development roles.
- Proven experience in building and maintaining data pipelines and automated workflows.
- Exposure to market research or data analytics environments is highly advantageous.
- Experience in managing data within cloud-based or hybrid infrastructures.
TECHNICAL SKILLS:
- Proficiency in scripting and query languages (Python, R, DAX, or equivalent).
- Expertise in designing and maintaining ETL/ELT pipelines.
- Proficiency in SQL and data modelling for relational and non-relational databases.
- Experience with cloud platforms (AWS, Azure, or GCP) and associated data services.
- Knowledge of BI tools (Power BI, Tableau) and data warehousing concepts.
- Knowledge of Microsoft Fabric.
- Knowledge of APIs and systems integration.
CORE COMPETENCIES:
- Strong statistical / mathematical skills.
- Excellent understanding of data operations within a commercial or research environment.
- Ability to translate business requirements into scalable technical workflows.
Please consider your application unsuccessful should you not receive a response within 2 weeks of applying.