Senior Data Engineer (AU)
Zitcha
Description
At Zitcha, we’re at the forefront of revolutionizing the retail media landscape, and we invite you to join our visionary team making Retail Media better for everyone.
As the world’s first Adaptive, Unified Retail Media Platform, Zitcha empowers retailers to unlock their full potential by seamlessly integrating planning, delivery, and insights across all channels – onsite, offsite, and in-store. As part of our team, you’ll help build the intelligent, efficient, and high-performance technology that provides scalable solutions tailored to each retailer’s unique vision and business model.
We understand that no two retailers are the same, and their networks shouldn’t be either. Zitcha’s composable architecture ensures retailers have the right capabilities at the right time, whether launching a new retail media network or scaling an existing one. Our platform simplifies and automates previously manual processes, making it easier for retailers to unlock the power of their first-party customer data and deliver personalized, impactful advertising experiences.
With Zitcha, you’re not just keeping up with retail media – you’re helping define it. Join us in creating a better way to build and grow Retail Media Networks, transforming them into valuable extensions for retailers and brands worldwide.
Now, Zitcha is seeking a Senior Data Engineer to join our innovative team in Australia. This is your opportunity to shape the future of retail media as we revolutionize the industry and empower retailers and brands to achieve unparalleled success.
Key Responsibilities
- Data Pipeline Development: Design, develop, and optimize highly scalable and reliable ETL/ELT data pipelines using a combination of batch and streaming technologies, ensuring efficient data ingestion, transformation, and loading into our data platforms.
- Data Platform Management: Architect, implement, and maintain robust data solutions within our cloud environments, primarily leveraging Google Cloud Platform (GCP) and Amazon Web Services (AWS).
- Data Warehousing: Lead the design and implementation of data warehousing solutions utilizing Google BigQuery and Snowflake, optimizing for performance, cost-efficiency, and analytical needs.
- Orchestration & Automation: Develop, manage, and optimize data workflows using Apache Airflow for scheduling, monitoring, and automating complex data pipelines (see the illustrative DAG sketch after this list).
- Data Modelling & Semantic Layer: Define and implement data models (e.g., star schemas) and develop a semantic layer to ensure data consistency, usability, and discoverability for business users and data analysts. Experience with tools like Metabase for building this semantic layer is highly valued.
- SQL Expertise: Write complex, optimized SQL queries for data extraction, transformation, and analysis, and advise on best practices for SQL performance tuning.
- Data Quality & Governance: Implement and enforce data quality checks, validation processes, and governance policies to ensure accuracy, completeness, and reliability of data.
- Collaboration: Work closely with data scientists, data analysts, product managers, and other engineering teams to understand data requirements and deliver effective data solutions.
- Troubleshooting & Optimization: Proactively identify and resolve data-related issues, performance bottlenecks, and optimize existing data infrastructure for efficiency and scalability.
- Innovation: Stay up-to-date with the latest trends and technologies in data engineering, retail media, and cloud platforms, evaluating and recommending new tools and approaches.
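To give a concrete flavour of the orchestration work described above, here is a minimal, illustrative Airflow DAG sketch. It assumes Airflow 2.4+ with the Google provider package installed; the DAG ID, dataset, table names, and the extract callable are hypothetical placeholders, not Zitcha's actual pipelines.

```python
# Illustrative only: a minimal daily ETL DAG, assuming Airflow 2.4+ and
# apache-airflow-providers-google. All names below are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.google.cloud.operators.bigquery import (
    BigQueryInsertJobOperator,
)


def extract_orders(ds: str, **_) -> None:
    # Placeholder extract step: in practice this would pull from a source
    # system (API, Pub/Sub, or object storage) and land raw data.
    print(f"extracting orders for {ds}")


with DAG(
    dag_id="retail_media_orders_daily",  # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(
        task_id="extract_orders",
        python_callable=extract_orders,
    )

    # Transform/load step: run a SQL job in BigQuery that rebuilds a
    # hypothetical reporting table from the raw landing table.
    build_fact = BigQueryInsertJobOperator(
        task_id="build_orders_fact",
        configuration={
            "query": {
                "query": "SELECT * FROM `analytics.raw_orders`",  # placeholder SQL
                "destinationTable": {
                    "projectId": "example-project",
                    "datasetId": "analytics",
                    "tableId": "fct_orders",
                },
                "writeDisposition": "WRITE_TRUNCATE",
                "useLegacySql": False,
            }
        },
    )

    extract >> build_fact
```

In practice the transformation logic would more likely live in Dataform or versioned SQL models rather than an inline query string; the sketch only shows the orchestration shape.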
Requirements
Mandatory
- Bachelor’s degree in Computer Science, Software Engineering, or a related quantitative field.
- 5+ years of hands-on experience in data engineering roles, with a proven track record of delivering large-scale data solutions.
- Expert-level proficiency in SQL and strong experience with relational and analytical databases.
- Demonstrated experience with Google BigQuery and Snowflake as primary data warehousing solutions.
- Extensive experience designing, building, and managing data pipelines on Google Cloud Platform (GCP) and Amazon Web Services (AWS). This includes familiarity with relevant services (e.g., GCP Dataflow, Dataform, Pub/Sub, Cloud Storage; AWS S3).
- Solid experience with Apache Airflow for workflow orchestration and scheduling.
- Experience with Metabase for semantic modelling, dashboarding, and self-service analytics.
- Strong understanding of data modelling principles (e.g., dimensional modelling, Kimball methodology); see the illustrative star-schema query sketch after this list.
- Proficiency in Python for data manipulation and automation.
- Experience with version control systems (e.g., Git).
- Excellent problem-solving, analytical, and communication skills.
- Ability to work independently and collaboratively in a fast-paced, agile environment.
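As an illustration of the SQL, BigQuery, and dimensional-modelling expectations listed above, the sketch below queries a hypothetical star schema (an impressions fact table joined to a campaign dimension) using the google-cloud-bigquery Python client. All table and column names are invented for the example.

```python
# Illustrative only: querying a hypothetical star schema in BigQuery.
# Assumes google-cloud-bigquery is installed and application-default
# credentials are configured; all table/column names are invented.
from google.cloud import bigquery

client = bigquery.Client()

QUERY = """
SELECT
  c.campaign_name,
  DATE(f.impression_ts) AS impression_date,
  COUNT(*)              AS impressions,
  SUM(f.spend)          AS spend
FROM `analytics.fct_ad_impressions` AS f
JOIN `analytics.dim_campaign`       AS c
  ON f.campaign_key = c.campaign_key
WHERE f.impression_ts >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
GROUP BY c.campaign_name, impression_date
ORDER BY spend DESC
"""

for row in client.query(QUERY).result():
    print(row.campaign_name, row.impression_date, row.impressions, row.spend)
```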
Nice to Have
- Experience in the retail, e-commerce, or advertising technology (AdTech) industry.
- Familiarity with streaming data processing technologies (e.g., Kafka, Pub/Sub).
- Experience with containerization (Docker, Kubernetes).
- Knowledge of CI/CD practices for data pipelines.
- Exposure to machine learning data pipelines and MLOps.
If this sounds like you, don’t hesitate to reach out!
Note: This role is also open to candidates based in Melbourne.