Elevate data engineering as a Lead Data Engineer in a cloud-centric AWS environment, specializing in streaming architectures and big data warehousing solutions that meet client needs.
In this role you will implement large-scale data engineering solutions, focusing on streaming data architecture and high-performance delivery. You will guide architectural decisions, define best practices for data modeling, and ensure seamless dbt implementation. Experience with AWS services, particularly Redshift, and with error-handling frameworks is crucial to your success.
Key Responsibilities:
• Direct architecture for streaming data from GCP to AWS
• Create resilient data ingestion frameworks incorporating monitoring
• Employ Spark/PySpark for distributed processing pipelines
• Maintain scalable data warehouses and ETL processes using dbt
• Lead performance tuning for Redshift and ensure best practices
Requirements:
• Over 6 years of big data engineering experience
• Successful history in streaming architecture development
• Expertise in optimizing Amazon Redshift performance
• Familiarity with CI/CD pipeline creation for data
• Client-facing delivery experience is a must
Apply your engineering skills to deliver leading-edge streaming solutions that maximize data reliability and integrity.
Apply on Kit Job: kitjob.ca/job/2fsmr3
📌 Lead Data Engineer Specializing in AWS and DBT Frameworks (Ottawa)
🏢 Two Circles
📍 Ottawa