84.51° Overview:
84.51° is a retail data science, insights and media company. We help The Kroger Co., consumer packaged goods companies, agencies, publishers and affiliates create more personalized and valuable experiences for shoppers across the path to purchase.
Powered by cutting-edge science, we utilize first-party retail data from more than 62 million U.S. households sourced through the Kroger Plus loyalty card program to fuel a more customer-centric journey using 84.51° Insights, 84.51° Loyalty Marketing and our retail media advertising solution, Kroger Precision Marketing.
Join us at 84.51°!
__________________________________________________________
Senior Data Engineer (P3005)
Who we are
As a full-stack data science subsidiary of The Kroger Company, we leverage over 10 petabytes of data to personalize the experience for 62 million households. We are seeking a hands-on Senior Data Engineer (G2) to design, build, and operate our analytical lakehouse on a modern data stack. As a senior individual contributor on a hybrid scrum team, you will partner closely with Product, Analytics, and Engineering to deliver scalable, high-quality data products on Databricks and Azure. You will contribute to technical direction, uphold engineering best practices, and remain deeply involved in coding, testing, and production operations—without people management responsibilities.
Key Responsibilities
- Data engineering delivery: Design, develop, and optimize secure, scalable batch and near-real-time pipelines on Databricks (PySpark/SQL) with Delta Lake and Delta Live Tables (DLT). Implement medallion architecture, Unity Catalog governance, and robust data quality checks (expectations/testing); a minimal DLT sketch follows this list. Build performant data models and tables to power analytics, ML, and downstream applications
- Product collaboration and agile execution: Translate business requirements into data contracts, schemas, and SLAs in partnership with Product and Analytics. Participate in backlog refinement, estimation, sprint planning, and retros in a hybrid onshore/offshore environment. Deliver clear documentation (designs, runbooks, data dictionaries) to enable self-serve and reuse
- Reliability, observability, and operations: Implement monitoring, alerting, lineage, and cost/performance telemetry (see the freshness-check sketch below); troubleshoot and tune Spark jobs and storage. Participate in on-call/incident response rotations and drive post-incident improvements
- CI/CD and infrastructure as code: Contribute to coding standards, code reviews, and reusable patterns/modules. Build and maintain CI/CD pipelines (GitHub Actions) and manage infrastructure with Terraform (data, compute, secrets, policies); a sketch of the kind of unit test that runs in CI follows this list
- Continuous improvement and knowledge sharing: Mentor peers informally, share best practices, and help evaluate/introduce modern tools and patterns
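To make the DLT pattern from the first responsibility concrete, here is a minimal sketch of a bronze-to-silver medallion flow with data quality expectations (gold omitted). It runs only inside a Databricks DLT pipeline, where the `dlt` module and the `spark` session are provided by the runtime; the table, column, and path names are invented for illustration.

```python
# A minimal DLT sketch: bronze -> silver of a medallion flow (gold omitted).
# Assumes the Databricks DLT runtime provides `dlt` and `spark`.
# Table, column, and path names below are hypothetical.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Bronze: raw transaction files landed as-is via Auto Loader.")
def bronze_transactions():
    return (
        spark.readStream.format("cloudFiles")   # Auto Loader incremental ingest
        .option("cloudFiles.format", "json")
        .load("/mnt/landing/transactions/")     # hypothetical ADLS Gen2 landing path
    )

@dlt.table(comment="Silver: validated, deduplicated transactions.")
@dlt.expect_or_drop("valid_household", "household_id IS NOT NULL")  # DQ expectation
@dlt.expect_or_drop("positive_amount", "amount > 0")
def silver_transactions():
    return (
        dlt.read_stream("bronze_transactions")
        .dropDuplicates(["transaction_id"])     # sketch only; production dedup would watermark
        .withColumn("ingested_at", F.current_timestamp())
    )
```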
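As one concrete example of the observability work, the following hedged sketch checks table freshness against an SLA using Delta Lake's commit history. The table name, the 2-hour threshold, the UTC session timezone, and the ambient `spark` session (as on a Databricks job cluster) are all assumptions; in production the failure would feed an alerting channel rather than a bare exception.

```python
# A hedged freshness-check sketch using Delta Lake's commit history.
# Assumes an ambient `spark` session and a UTC session timezone; the table
# name and the 2-hour SLA are illustrative.
from datetime import datetime, timezone
from delta.tables import DeltaTable

def hours_since_last_commit(spark, table_name: str) -> float:
    """Hours elapsed since the most recent commit to a Delta table."""
    last = (
        DeltaTable.forName(spark, table_name)
        .history(1)                              # most recent commit only
        .select("timestamp")
        .collect()[0]["timestamp"]
    )
    return (datetime.now(timezone.utc) - last.replace(tzinfo=timezone.utc)).total_seconds() / 3600

if hours_since_last_commit(spark, "silver.transactions") > 2:
    # In production this would page on-call via the alerting stack, not raise.
    raise RuntimeError("Freshness SLA breached: silver.transactions is stale")
```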
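And for the CI/CD bullet, a minimal sketch of a PySpark unit test that could run in a GitHub Actions job with pyspark installed: a local-mode session exercises a small, invented deduplication transform. All names and the schema are illustrative, not part of any 84.51° codebase.

```python
# A hedged sketch of a PySpark unit test suitable for CI; the transform
# (dedupe_latest) and the schema are invented for illustration.
import pytest
from pyspark.sql import DataFrame, SparkSession, Window
from pyspark.sql import functions as F

def dedupe_latest(df: DataFrame, key_col: str, ts_col: str) -> DataFrame:
    """Keep only the most recent row per key (illustrative transformation)."""
    w = Window.partitionBy(key_col).orderBy(F.col(ts_col).desc())
    return df.withColumn("_rn", F.row_number().over(w)).filter(F.col("_rn") == 1).drop("_rn")

@pytest.fixture(scope="session")
def spark():
    # Local single-threaded session: no cluster needed in CI.
    return SparkSession.builder.master("local[1]").appName("ci-tests").getOrCreate()

def test_dedupe_latest_keeps_newest_row(spark):
    df = spark.createDataFrame(
        [("h1", "2024-01-01", 10), ("h1", "2024-01-02", 20), ("h2", "2024-01-01", 5)],
        ["household_id", "event_date", "amount"],
    )
    out = dedupe_latest(df, "household_id", "event_date")
    assert {r["household_id"]: r["amount"] for r in out.collect()} == {"h1": 20, "h2": 5}
```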
Required Qualifications
- Experience: 4–6 years in data engineering; 1–2 years operating as a senior/lead individual contributor on delivery-critical projects. Proven track record delivering production-grade pipelines and data products on cloud platforms.
- Core technical skills:
  - Databricks: 2–3+ years with Spark (PySpark/SQL); experience building and operating DLT pipelines and Delta Lake
  - Azure: proficiency with ADLS Gen2, Entra ID (Azure AD), Key Vault, Databricks on Azure, and related services
  - Languages and tools: expert-level SQL and strong Python; Git/GitHub, unit/integration/data testing, and performance tuning
  - Infrastructure as code: hands-on Terraform for data platform resources and policies
  - Architecture: solid understanding of medallion and dimensional modeling, data warehousing concepts, and CI/CD best practices
- Collaboration and communication: Excellent communicator with the ability to work across Product, Analytics, Security, and Platform teams in an agile setup
Preferred qualifications
- Bachelor’s degree in Computer Science, Engineering, or related field (or equivalent experience)
- Azure or Databricks certifications (e.g., Data Engineer Associate/Professional)
- Experience with ELT tools (e.g., Fivetran), Snowflake, and streaming (Event Hubs, Kafka)
- Familiarity with AI-ready data practices and AI developer tools (e.g., GitHub Copilot)
- Exposure to FinOps concepts and cost/performance optimization on Databricks and Azure
The opportunity
- Build core data products that power personalization for millions of customers at enterprise scale
- Work with modern tooling (Databricks, Delta Lake/DLT, Unity Catalog, Terraform, GitHub Actions) in a collaborative, growth-minded culture
- Hybrid work, competitive compensation, comprehensive benefits, and clear paths for advancement
PLEASE NOTE:
Applicants for employment in the US must have work authorization that does not now, or in the future, require sponsorship of a visa for employment authorization in the United States and with the Kroger Family of Companies (e.g., H-1B visa, F-1 visa (OPT), TN visa, or any other non-immigrant status).
Pay Transparency and Benefits
- The stated salary range represents the entire span applicable across all geographic markets from lowest to highest. Actual salary offers will be determined by multiple factors including but not limited to geographic location, relevant experience, knowledge, skills, other job-related qualifications, and alignment with market data and cost of labor. In addition to salary, this position is also eligible for variable compensation.
- Below is a list of some of the benefits we offer our associates:
- Health:
  - Medical: competitive plan designs with support for self-care, wellness, and mental health
  - Dental: in-network and out-of-network benefits
  - Vision: in-network and out-of-network benefits
- Wealth:
  - 401(k) with Roth option and matching contribution
  - Health Savings Account with matching contribution (requires participation in a qualifying medical plan)
  - AD&D and supplemental insurance options to help ensure additional protection for you
- Happiness:
  - Paid time off with flexibility to meet your life needs, including 5 weeks of vacation, 7 health and wellness days, 3 floating holidays, and 6 company-paid holidays per year
  - Paid leave for maternity, paternity, and family care