DataOps Cloud Data Engineer - Senior
About the role
APPLICATIONS FOR THIS POSITION ARE NOW CLOSED
Location: Toronto, ON, Canada
Job Type: Contract
Workspace: On-Site
Client: Health Services Cluster
Hourly Wage: 73-85
Start Date: 2026-06-01
End Date: 2027-10-05
Job Description
Responsibilities:
Design, develop, test, implement, and troubleshoot:
- data pipelines using tools such as Databricks, Azure Data Factory, and AWS Glue
- complex data transformation procedures
- data models for data warehouses and lakehouses
Skills
Experience and Skill Set Requirements
Data Engineering Experience (40%)
- Programming & scripting: Python, SQL, Linux shell, PowerShell
- Data manipulation/analysis using pandas and PySpark
- Working with XLSX, CSV, and JSON files, relational databases, cloud storage, and structured and unstructured data
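To make the data-manipulation skills above concrete, here is a minimal sketch of one common pipeline step: parsing raw CSV text, dropping incomplete records, casting types, and emitting JSON. It uses only the Python standard library; the column names, sample data, and filtering rule are illustrative assumptions, not part of the role's actual datasets.

```python
import csv
import io
import json

# Hypothetical raw extract; in practice this would come from a file or cloud storage.
RAW_CSV = """patient_id,visit_date,cost
1001,2026-06-01,250.00
1002,2026-06-02,
1003,2026-06-03,410.50
"""

def transform(raw: str) -> list[dict]:
    """Parse CSV text, skip rows missing a cost, and cast field types."""
    rows = []
    for row in csv.DictReader(io.StringIO(raw)):
        if not row["cost"]:
            continue  # drop incomplete records
        rows.append({
            "patient_id": int(row["patient_id"]),
            "visit_date": row["visit_date"],
            "cost": float(row["cost"]),
        })
    return rows

records = transform(RAW_CSV)
print(json.dumps(records[0]))
```

The same validate-and-cast pattern scales up directly to pandas DataFrames or PySpark jobs; only the execution engine changes.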
Cloud Experience (25%)
- AWS and/or Azure services
- Cloud data warehouse solutions such as AWS Redshift
- Data lakehouse solutions such as Databricks (Delta Lake)
- Data processing orchestration/automation
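The orchestration/automation item above boils down to running pipeline steps in dependency order, which managed services such as AWS Glue workflows or Azure Data Factory handle at scale. A tiny sketch of the idea using the standard library's `graphlib`; the step names are hypothetical.

```python
from graphlib import TopologicalSorter

# Each pipeline step maps to the set of steps it depends on.
dag = {
    "extract": set(),
    "validate": {"extract"},
    "transform": {"validate"},
    "load": {"transform"},
    "report": {"load"},
}

# static_order() yields the steps in a valid execution order,
# raising CycleError if the dependency graph is cyclic.
order = list(TopologicalSorter(dag).static_order())
print(order)
```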
Data Warehouse (25%)
- ETL tools (such as Azure Data Factory, AWS Glue) as well as cloud-agnostic tools such as Informatica IDMC
- Data modeling & architecture: relational and dimensional modeling
- Data reporting/visualization
Other Skills (5%)
- Full SDLC, from requirements gathering, design, implementation, and testing to deployment and production support
- Project management (Agile/Scrum and Waterfall)
- Change and incident management
- Communication, presentation, and negotiation skills
- Consulting, problem-solving, and decision-making skills
Public Sector (5%)
- Experience working in the OPS and/or the public sector in general
Supplier Comments
Closing Date/Time: 2026-05-19, 12:00 p.m.
Max submission: 1 (one)
5 days onsite
Assignment Type: This position is listed as Onsite; the resource is expected to work 7.25 hours per calendar day between the standard working hours of 8:00 a.m. and 5:00 p.m. (excluding lunch breaks), Monday to Friday inclusive, at the identified OPS office location.
MUST HAVES:
- Data engineering (pandas, PySpark)
- Data lakehouse solutions such as Databricks (Delta Lake)
- Working with XLSX, CSV, and JSON files, relational databases, cloud storage, and structured and unstructured data
- Experience using AWS services such as Glue, Step Functions, Lambda, and S3
- Extract/Transform/Load (ETL) using tools such as Informatica IDMC