Our client is undergoing a large-scale data transformation, with multiple concurrent data platform builds and a strategic move toward Microsoft Fabric and Azure cloud technologies. This role focuses on building modern, scalable data pipelines from scratch, with the majority of work being greenfield rather than migration.

Key Responsibilities
- Design, build, and operate end-to-end data pipelines across cloud and hybrid environments (Azure & Microsoft Fabric).
- Develop batch and real-time data ingestion, transformation, and delivery solutions using up-to-date data platform patterns (lakehouse / warehouse).
- Work with structured and semi-structured data, applying strong data modelling and engineering principles.
- Contribute to the implementation of scalable data platforms aligned with enterprise architecture standards.
- Support and modernise legacy/on-prem data systems as part of a broader cloud transformation strategy.
- Monitor and maintain data pipeline performance, ensuring reliability, cost-efficiency, and stability.
- Troubleshoot and resolve data issues using logging, alerting, and root cause analysis.
- Collaborate closely with business stakeholders to translate requirements into effective data solutions.
- Document processes, pipelines, and runbooks to support ongoing operations and handovers.
Experience & Skills Required
- Strong experience building data pipelines within Azure environments (ADF, Synapse, Fabric).
- Hands-on experience with Microsoft Fabric (highly desirable / priority skill).
- Proficiency in Python and/or PySpark for data processing.
- Strong SQL skills and understanding of relational data modelling.
- Experience working with data lake / lakehouse architectures and distributed processing.
- Exposure to hybrid environments (cloud + on-prem systems).
- Experience with Microsoft Purview or data governance tools is a plus.
- Familiarity with tools like WhereScape RED is beneficial but not essential.
- Strong communication skills, with the ability to engage both technical and non-technical stakeholders.
- Ability to work in fast-paced environments with multiple concurrent data projects.

Project Overview
You will play a key role in shaping how data is engineered and consumed across the organisation, working closely with both technical teams and business stakeholders. This is an opportunity to work on cutting-edge data technology in a high-impact environment, with strong potential for long-term growth and ownership.
📌 Azure Data Engineer (Fredericton)
🏢 Enzo Tech Group
📍 Fredericton