19 Apr
S.i. Systems
Toronto
Apply on Kit Job: kitjob.ca/job/2gcvrs
Azure and Databricks Data Engineer designing, building and supporting data-driven applications for our energy client. ID 25-199
Location: Oshawa - on site 3 days a week
Duration: 1 year
Work hours: 35hrs/week
**must be eligible for enhanced reliability security clearance**
Job Overview
- Build reliable, supportable and performant data lake and data warehouse products to meet the organization’s need for data to drive reporting, analytics, applications, and innovation.
- Employ best practices in development, security, accessibility, and design to achieve the highest quality of service for our customers.
- Build and productionize modular and scalable data ELT/ETL pipelines and data infrastructure leveraging the wide range of data sources across the organization.
- In collaboration with the Data Architect, build curated common data models designed by the Data Modelers that offer an integrated, business-centric single source of truth for business intelligence, reporting, and downstream system use.
- Work closely with infrastructure and cyber security teams and Senior Data Developers to ensure data is secure in transit and at rest.
- Clean, prepare and optimize datasets for performance, ensuring lineage and quality controls are applied throughout the data integration cycle.
- Support Business Intelligence Analysts in modelling data for visualization and reporting, using dimensional data modeling and aggregation optimization methods.
- Troubleshoot issues related to ingestion, data transformation and pipeline performance, data accuracy and integrity.
- Collaborate with Business Analysts, Data Scientists, Senior Data Engineers, Data Analysts, Solution Architects, and Data Modelers to develop data pipelines that feed our data marketplace.
- Assist in identifying, designing, and implementing internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Work with tools in the Microsoft stack: Azure Data Factory, Azure Data Lake, Azure SQL Databases, Azure Data Warehouse, Azure Synapse Analytics Services, Azure Databricks, Microsoft Purview, and Power BI.
- Work within the agile SCRUM work management framework in delivery of products and services, including contributing to feature & user story backlog item development, and utilizing related Kanban/SCRUM toolsets.
- Assist in building the data catalog and maintaining relevant metadata for datasets published for enterprise use.
- Develop optimized, performant data pipelines and models at scale using technologies such as Python, Spark and SQL, consuming data sources in XML, CSV, JSON, REST APIs, or other formats.
Qualifications
- Completion of a four-year University education in computer science, computer/software engineering or other relevant programs within data engineering, data analysis, artificial intelligence, or machine learning.
- Experience as a Data Engineer designing and building data pipelines.
- Fluent in creating data processing frameworks using Python, PySpark, Spark SQL, and SQL
- Experience with Azure Data Factory, ADLS, Synapse Analytics and Databricks
- Experience building data pipelines for Data Lakehouses and Data Warehouses
- Strong understanding of data structures and data processing frameworks
- Knowledge of data governance and data quality principles
- Effective communication skills to translate technical details to non-technical stakeholders
Disclaimer:
AI may be used in evaluating candidates.
This posting is for an existing vacancy.