Job Description
Who we’re looking for:
The ‘Zendesk Analytics Prototyping’ (ZAP) team is seeking a Data Engineer to support our mission of building a robust data foundation for CRM insights and improving customer support tools. As a Data Engineer, you will work on designing and implementing scalable, high-quality data solutions, collaborating closely with engineering and analytics teams to enhance Zendesk’s operations. This role offers the opportunity to grow your skills in a dynamic environment and make a measurable impact on how Zendesk leverages data.
What you’ll be doing:
* Collaborate with team members to define requirements and translate them into scalable data models and pipelines.
* Develop and maintain ELT pipelines, ensuring data reliability and scalability for business reporting and analytics use cases.
* Build and optimize SQL-based data models using dbt and other ETL tools.
* Identify and implement improvements in data delivery, processing performance, and system efficiency.
* Contribute to the team’s technical vision and bring innovative solutions to enhance data systems.
What you bring to the role:
Basic Qualifications:
* 3+ years of data engineering experience building, maintaining, and operating data pipelines and ETL processes in big data environments.
* Extensive experience with SQL, ideally in the context of data modeling and analysis.
* Hands-on production experience with dbt and proven knowledge of modern and classic data modeling approaches (e.g., Kimball, Inmon).
* Programming skills in Python or a similar language, with an emphasis on data transformation and automation.
* Experience with cloud columnar databases, primarily Snowflake, including SQL query authoring, as well as working familiarity with a variety of other databases.
* Proven experience in performance testing, capacity planning, and cost optimization for large-scale, complex data pipelines and systems, including identifying bottlenecks, ensuring scalability, and minimizing operational costs in cloud-based data environments.
* Excellent communication and collaboration skills.
Preferred Qualifications:
* SnowPro Core certification or equivalent hands-on expertise.
* Hands-on production experience with Apache Spark (Spark SQL / PySpark).
* Familiarity with Lean/Six Sigma principles and an understanding of CRM analytics.
Our Data Stack:
* ELT: Snowflake, dbt, Airflow, Kafka
* BI: Looker
* Infrastructure: AWS, Kubernetes, Terraform, GitHub Actions
#LI-KO1