Shopify’s AppUI team had made speeding up embedded applications a priority, but the performance data needed to guide that work was fragmented and unreliable.
Problem
No baseline: no visibility into how long apps actually took to load.
No monitoring: issues were only discovered through user complaints.
No measurement: the performance impact of new features and updates couldn’t be quantified.
Data was scattered across multiple systems with inconsistent schemas.
Solution
I built a centralized reporting pipeline using Kafka, Airflow, dbt, BigQuery, and Looker Studio. This produced a daily report on app load speed with drill-downs by location, device, browser, client, and app version.
Sample report (the actual report is confidential).
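To give a sense of the pipeline’s shape, here is a minimal Airflow sketch of the daily job, assuming the Kafka load events have already landed in a BigQuery staging dataset. The DAG id, dbt selector, and schedule are illustrative placeholders, not the production configuration.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    # Hypothetical daily pipeline: Kafka app-load events have already landed in
    # BigQuery staging; dbt models aggregate them for the Looker Studio report.
    with DAG(
        dag_id="app_load_speed_daily",  # illustrative name
        start_date=datetime(2023, 1, 1),
        schedule="@daily",
        catchup=False,
    ):
        # Build the staging and mart models for app load events.
        run_models = BashOperator(
            task_id="dbt_run",
            bash_command="dbt run --select app_load_events+",  # hypothetical selector
        )
        # Check freshness, nulls, and row counts before the report refreshes.
        run_tests = BashOperator(
            task_id="dbt_test",
            bash_command="dbt test --select app_load_events+",
        )
        run_models >> run_tests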
Impact
Established the first-ever baseline for app load speed.
Set a 50% improvement target for the year.
Proved that apps adopting new features loaded 10x faster than those that didn’t.
Gave engineers a trusted, ongoing way to track and improve performance.
Shopify’s developer documentation relied on a search engine that often failed to surface relevant results, limiting adoption.
Problem
No way to measure search performance.
Engineers couldn’t test improvements effectively.
Developers struggled to find the right documentation quickly.
Solution
I defined robust search performance metrics (CTR, Mean Reciprocal Rank, Time on Page, Bounce Rate, Pogo Stick Rate) by integrating click data, user profiles, shop data, and API logs. I then built an automated A/B testing framework so engineers could test search improvements continuously.
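To make one of those metrics concrete, here is a short Python sketch of Mean Reciprocal Rank computed from click logs. The (query_id, rank) input shape is an assumption for illustration, not the production schema.

    def mean_reciprocal_rank(click_rows):
        # click_rows: iterable of (query_id, clicked_rank) pairs, rank 1-based.
        # Hypothetical shape, not the production schema.
        first_click = {}
        for query_id, rank in click_rows:
            # MRR scores each query by its highest-ranked (smallest-rank) click.
            if query_id not in first_click or rank < first_click[query_id]:
                first_click[query_id] = rank
        if not first_click:
            return 0.0
        return sum(1.0 / r for r in first_click.values()) / len(first_click)

    # Example: three queries with first clicks at ranks 1, 3, and 2.
    print(mean_reciprocal_rank([("q1", 1), ("q2", 3), ("q3", 2)]))  # ~0.611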
Impact
Increased click-through rate (our North Star metric) by 2%.
This made relevant documentation easier to find, directly improving the developer experience.
For an e-commerce company at Shopify’s scale, an improvement of this size means thousands of dollars in immediate revenue, along with the strategic value of stronger developer adoption.
Shopify undertook a large-scale data modernization effort: moving from Presto to GCS, from Kimball/Ross dimensional modeling to Domain-Driven Design, from PySpark to dbt, from proprietary scheduling to Airflow, and from Mode to Looker Studio.
Problem
Hundreds of models/reports needed redesign or elimination.
Business-critical teams relied on fragile, outdated pipelines.
Solution
I led the migration of 12 critical data models over six months. Responsibilities included inventorying models, redesigning them in dbt, rebuilding reporting in Looker Studio, and collaborating with engineers and product teams to ensure accuracy.
A database design mocked up in Figma, a common way to sketch data models.
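A recurring migration step was proving that a rebuilt dbt model matched its legacy counterpart before cutover. The sketch below shows that parity check with the BigQuery Python client; the table and column names are placeholders, not Shopify’s internal datasets.

    from google.cloud import bigquery

    client = bigquery.Client()

    # Placeholder names; the real legacy and dbt-rebuilt tables are internal.
    LEGACY = "analytics.legacy_orders_daily"
    REBUILT = "analytics.orders_daily"

    # Compare row counts and a key aggregate between the old and new models.
    sql = f"""
        SELECT
          (SELECT COUNT(*) FROM `{LEGACY}`)  AS legacy_rows,
          (SELECT COUNT(*) FROM `{REBUILT}`) AS rebuilt_rows,
          (SELECT ROUND(SUM(order_total), 2) FROM `{LEGACY}`)  AS legacy_total,
          (SELECT ROUND(SUM(order_total), 2) FROM `{REBUILT}`) AS rebuilt_total
    """
    row = list(client.query(sql).result())[0]
    assert row.legacy_rows == row.rebuilt_rows, "row counts diverged"
    assert row.legacy_total == row.rebuilt_total, "aggregates diverged"
    print("parity check passed")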
Impact
Ensured continuity of core reporting during a full-stack migration.
Unlocked new analytical capabilities via dbt + Looker.
Gained deep expertise in dbt, Looker Studio, GCS, and Domain-Driven Design.
Helped Shopify scale its data infrastructure for long-term growth.
MaxSold, an online auction platform, wanted to increase merchant earnings and revenue without major investment.
Problem
Bidding increments were static and suboptimal.
Hypothesis: small increases early in auctions could lift average lot value significantly.
Needed a way to test this rigorously despite variability and technical limitations.
Solution
I independently designed and ran an A/B test comparing the old bidding scheme against the optimized one. I controlled for high-variance auctions, validated the result with secondary metrics, and managed the implementation without full engineering support.
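As a hedged illustration of the analysis, the Python sketch below runs Welch’s t-test on per-lot values while capping extreme lots to damp high-variance auctions. The capping quantile, data shape, and toy numbers are assumptions, not the actual test design.

    import numpy as np
    from scipy import stats

    def compare_variants(control, treatment, cap_quantile=0.99):
        # Cap extreme lot values so a few runaway auctions don't swamp the
        # signal, then run Welch's t-test. (Illustrative variance control only.)
        control = np.asarray(control, dtype=float)
        treatment = np.asarray(treatment, dtype=float)
        cap = np.quantile(np.concatenate([control, treatment]), cap_quantile)
        control = np.minimum(control, cap)
        treatment = np.minimum(treatment, cap)
        lift = treatment.mean() - control.mean()
        t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
        return lift, p_value

    # Toy data only: treatment lots worth ~$5 more on average.
    rng = np.random.default_rng(0)
    old_scheme = rng.lognormal(mean=4.0, sigma=1.0, size=5000)
    new_scheme = rng.lognormal(mean=4.0, sigma=1.0, size=5000) + 5
    lift, p = compare_variants(old_scheme, new_scheme)
    print(f"estimated lift per lot: ${lift:.2f} (p = {p:.3g})")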
Impact
New scheme raised average lot value by ~$5.
Delivered ~$1M in additional annual profit and ~$3M in revenue.
Directly improved merchant take-homes.
High-ROI project: a small operational change produced multimillion-dollar impact.