
Staff Data Engineer

Job Description

Who we’re looking for:

The ‘Zendesk Analytics Prototyping’ (ZAP) team is looking for an experienced Staff Data Engineer to support the team’s charter: accelerating CRM measurement and insight, and promoting a data-centric approach to improving customer support tools and operations. To realize that mission, we build and maintain robust, fine-grained, and contextually rich datasets that provide a foundation for insights that help enhance Zendesk’s support operations.

In this role, you will partner closely with Software Development Engineers and Business Intelligence Engineers to build high-quality data pipelines and manage the team’s data lake. You’ll work in a collaborative environment, using the latest engineering best practices and involved in all aspects of the software development lifecycle. You will design and develop curated datasets, applying standard architectural and data modeling practices, and you will primarily develop Data Warehouse solutions in Snowflake using technologies such as dbt, Airflow, and Terraform.

What you’ll be doing:

  • Collaborate with team members and internal stakeholders to understand business requirements, define successful analytics outcomes, and design data models.

  • Develop, automate, and maintain scalable ELT pipelines in our Data Warehouse, ensuring reliable business reporting.

  • Design and build ELT-based data models using SQL and dbt.

  • Identify, design, and implement internal process improvements, such as automating manual processes and optimizing data delivery.

  • Work with data and analytics experts to deliver greater functionality in our data systems.

What you bring to the role:

Basic Qualifications:

  • 8+ years of data engineering experience building and maintaining data pipelines and ETL processes in big data environments.

  • Extensive experience with SQL, ideally in the context of data modeling and analysis.

  • Hands-on production experience with dbt, and proven knowledge of modern and classic data modeling approaches (Kimball, Inmon, etc.).

  • Proficiency in a programming language such as Python, Go, Java, or Scala. 

  • Experience with cloud columnar databases (Google BigQuery, Amazon Redshift, Snowflake) and query authoring (SQL), as well as working familiarity with a variety of databases.

  • Experience with processes supporting data transformation, data structures, metadata, and dependency management, ensuring efficient data processing performance and workload management.

  • Excellent communication and collaboration skills.

  • Ability to thrive in ambiguous situations and a proactive, problem-solving attitude.

Preferred Qualifications:

  • Experience with BigQuery, Snowflake, or similar cloud warehouses.

  • Familiarity with AI tools and techniques that could be applied to data analysis and data transformation tasks.

  • Completed projects with dbt.

  • Familiarity with Lean/Six Sigma principles and an understanding of CRM analytics.

Our Data Stack:

ELT (Snowflake, dbt, Airflow, Kafka)

BI (Tableau, Looker)

Infrastructure (AWS, Kubernetes, Terraform, GitHub Actions)
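
For illustration only, here is a minimal sketch of the kind of dbt model this stack supports, targeting Snowflake. The model and column names (fct_ticket_resolution_daily, stg_tickets, solved_at, brand_id, full_resolution_time_minutes) are hypothetical examples, not references to the team's actual codebase.

-- models/marts/fct_ticket_resolution_daily.sql (hypothetical dbt model)
{{ config(materialized='table') }}

select
    date_trunc('day', t.solved_at)        as solved_date,
    t.brand_id,
    count(*)                              as tickets_solved,
    avg(t.full_resolution_time_minutes)   as avg_resolution_minutes
from {{ ref('stg_tickets') }} as t        -- hypothetical staging model
where t.solved_at is not null
group by 1, 2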


Zendesk software was built to bring a sense of calm to the chaotic world of customer service. Today we power billions of conversations with brands you know and love.

Zendesk believes in offering our people a fulfilling and inclusive experience. Our hybrid way of working enables us to purposefully come together in person, at one of our many Zendesk offices around the world, to connect, collaborate and learn, whilst also giving our people the flexibility to work remotely for part of the week.

Company: Zendesk
Location: France
Contract type: Permanent (CDI), full time
Category: Data Engineer
Degree: Master's
Published: 25.03.2024