
Staff Data Engineer, Public Web

Block
Full-time
On-site
Seattle, Washington, United States

Since we opened our doors in 2009, the world of commerce has evolved immensely, and so has Square. After enabling anyone to take payments and never miss a sale, we saw sellers stymied by disparate, outmoded products and tools that wouldn’t work together.

So we expanded into software and started building integrated, omnichannel solutions – to help sellers sell online, manage inventory, offer buy now, pay later functionality through Afterpay, book appointments, engage loyal buyers, and hire and pay staff. Across it all, we’ve embedded financial services tools at the point of sale, so merchants can access a business loan and manage their cash flow in one place. Afterpay furthers our goal to provide omnichannel tools that unlock meaningful value and growth, enabling sellers to capture the next generation shopper, increase order sizes, and compete at a larger scale.

Today, we are a partner to sellers of all sizes – large, enterprise-scale businesses with complex operations, sellers just starting, as well as merchants who began selling with Square and have grown larger over time. As our sellers grow, so do our solutions. There is a massive opportunity in front of us. We’re building a significant, meaningful, and lasting business, and we are helping sellers worldwide do the same.

Job Description

As a Data Engineer on the Marketing Data Engineering team, you will join an organization whose mandate is to build and maintain foundational data infrastructure essential to driving Square’s revenue growth and accelerating seller acquisition. You will work closely with web analytics teams to build best-in-class data pipelines and processes that stitch together complex sets of data stores and drive large investment decisions. Your work will have a direct impact on hundreds of stakeholders at Square.

You Will

  • Be the expert on end-to-end data flow for Marketing public web event data
  • Collaborate with upstream engineering teams to ensure site events are instrumented and logged accurately
  • Work closely with web analytics and SEO teams to translate stakeholder needs into ETL requirements
  • Design, build and maintain critical data pipelines to ensure highly accurate and reliable business reporting
  • Analyze new data sources and work with stakeholders to understand the impact of integrating new data into existing pipelines and models
  • Participate in on-call rotation, monitor daily execution, diagnose and log issues, and fix business critical pipelines to ensure SLAs are met with internal stakeholders
  • Assist the analytics team in interpreting data trends and identifying anomalies through querying and data analysis
  • Act as a liaison between business needs and ETL processes by resolving data discrepancies and implementing scalable solutions
  • Make data model and ETL code improvements to increase pipeline efficiency and data quality
  • Build and own data import/export pipelines and incorporate into existing workflows to enable reporting and optimization efforts

You Have

  • At least 8 years of experience in data engineering or a similar discipline supporting Web Analytics and Marketing organizations
  • Expertise in SQL, especially within cloud-based data warehouses like Snowflake, Google BigQuery, and Amazon Redshift
  • Experience designing medium-to-large data engineering solutions and owning the entire project lifecycle, including scoping, design, development, testing, deployment, and documentation
  • Experience with ETL scheduling technologies with dependency checking, such as Airflow or Prefect, as well as schema design and dimensional data modeling
  • Experience with the Linux/macOS command line and version control software (git)
  • Working experience with Python, REST APIs, and Terraform
  • Strong business intuition and ability to understand complex business systems, data architecture, and software design patterns
  • Experience building real-time data pipelines or working with streaming applications using Kafka or similar streaming technologies
  • Excellent communication skills, particularly when explaining technical matters to less technical co-workers
  • Experience with data visualization technologies such as Amplitude or Looker is a plus
  • BS degree in Engineering, Computer Science, Math or a related technical field

We’re working to build a more inclusive economy where our customers have equal access to opportunity, and we strive to live by these same values in building our workplace. Block is an equal opportunity employer evaluating all employees and job applicants without regard to identity or any legally protected class. We also consider qualified applicants with criminal histories for employment on our team, and always assess candidates on an individualized basis.

We believe in being fair, and are committed to an inclusive interview experience, including providing reasonable accommodations to disabled applicants throughout the recruitment process. We encourage applicants to share any needed accommodations with their recruiter, who will treat these requests as confidentially as possible. Want to learn more about what we’re doing to build a workplace that is fair and square? Check out our I+D page.

Block will consider qualified applicants with arrest or conviction records for employment in accordance with state and local laws and "fair chance" ordinances.