Blackline Safety · Data Science & Analytics
Data Engineer
Remote (Canada)
Full-time
Not Disclosed
Job Description
OVERVIEW
Our team at Blackline Safety is growing! As a people-driven technology company with a mission to make sure every worker returns home safely, we drive innovation, practice resiliency, demonstrate leadership, go the extra mile for our customers, and empower our people to be their best.
Responsibilities:
- Design, build, and maintain scalable data pipelines (batch and real-time) using modern cloud tools.
- Partner closely with product and engineering teams to deliver data-driven features and services that support our safety mission.
- Support our existing ETL/ELT processes, data lake, data warehouse, and data lakehouse environments.
- Ensure data quality, reliability, and observability through robust monitoring, alerting, and documentation practices.
- Participate actively in architectural discussions and contribute to the long-term strategy of our data platform.
Who are you?
You’re a pragmatic builder and problem-solver who’s passionate about enabling better decisions through data.
- You understand that clean architecture and thorough documentation are essential, not optional.
- You work well in agile teams and enjoy collaborating with product, engineering, and analytics partners.
- You have opinions on data modeling, enjoy the challenge of working with event-driven architectures, and are comfortable switching between SQL, Python, Java, Scala, and a variety of AWS services.
- You have embraced AI-assisted development as a core part of your workflow — leveraging agentic tools such as Claude Code, Amazon Q Developer, and GitHub Copilot to accelerate delivery, reduce toil, and maintain higher standards of code quality.
About You
- Degree in Computing Science, Data Engineering, Information Technology, or a related discipline.
- 3 years of experience in a Data Engineer role.
Technical Skills:
- Experience with event-driven architecture and related technologies (e.g., Apache Kafka, Amazon MSK).
- Familiarity with Infrastructure as Code (IaC) and CI/CD best practices in multi-environment deployment workflows, including hands-on experience with Terraform and GitHub Actions.
- Experience with monitoring and alerting platforms such as CloudWatch, Datadog, or PagerDuty.
- Hands-on experience with ETL/ELT and orchestration tools, including AWS Glue Studio, Matillion, and AWS Step Functions.
- Proficiency with AWS data services: Redshift, EMR, S3, Lambda, Kinesis, and MSK.
- Strong proficiency in SQL, Python, Java, and/or Scala.
Nice to Have:
- Experience with additional big data technologies such as BigQuery, Snowflake, Microsoft Fabric, Hive, Hadoop, Apache Flink, or Apache Spark.
- Familiarity with BI and analytics tools, including Power BI and Amazon Quick.
- Experience supporting data science teams with the infrastructure, tooling, and model-serving pipelines they need.
- Familiarity with AI/ML services on AWS, including Amazon Bedrock, SageMaker, or Strands, and an understanding of how data engineering work underpins model training and inference workflows.
- Comfort working alongside data scientists to help productionize models, ensure training data quality, and build reliable data feeds for analytical and predictive use cases.
Benefits
- Competitive base salary and annual compensation review
- Comprehensive health and dental benefits*
- Mental health and wellness support
- Flexible work arrangements and hybrid work model for eligible positions
- Paid vacation, personal and sick days*
- Professional development opportunities
- Education funding
- Participation in the Company's employee stock ownership plan
- A collaborative, inclusive, and mission-driven culture
- Exclusive access to perks and discounts
- A flexible ‘Dress for Your Day’ environment