
Sentry PR Friction and ADR Standardization

This study evaluates where review back-and-forth is concentrated in getsentry/sentry and identifies which Architecture Decision Records (ADRs) and enforceable rules could reduce repeated discussion cycles.

Methodology:

  • Window: last 90 days
  • Sample: 500 merged PRs
  • Metrics: review events, merge latency, changed file count, churn
  • Deep dive: highest-friction PRs by review-event volume and merge latency

Baseline findings (merged PRs):

  • Median time-to-merge (TTM): 4.12h
  • P75/P90 TTM: 21.84h / 69.54h
  • Median review events per PR: 2 (mean 3.19)
  • Formal CHANGES_REQUESTED: 1%
  • Small PR median TTM: 2.07h
  • Large PR median TTM: 21.44h
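The latency metrics above can be recomputed from PR timestamps. A minimal sketch, assuming PRs are dicts carrying ISO-8601 `created_at`/`merged_at` fields (the shape GitHub's REST API returns); the nearest-rank percentile here is an assumption and may differ from how the study computed P75/P90:

```python
from datetime import datetime
from statistics import median

def hours_to_merge(pr):
    """Merge latency in hours from created_at/merged_at ISO-8601 timestamps."""
    created = datetime.fromisoformat(pr["created_at"].replace("Z", "+00:00"))
    merged = datetime.fromisoformat(pr["merged_at"].replace("Z", "+00:00"))
    return (merged - created).total_seconds() / 3600

def percentile(values, p):
    """Nearest-rank percentile (assumed method, not confirmed by the study)."""
    ordered = sorted(values)
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[k]

# Illustrative sample, not real study data.
prs = [
    {"created_at": "2024-01-01T00:00:00Z", "merged_at": "2024-01-01T04:00:00Z"},
    {"created_at": "2024-01-01T00:00:00Z", "merged_at": "2024-01-02T00:00:00Z"},
    {"created_at": "2024-01-01T00:00:00Z", "merged_at": "2024-01-04T00:00:00Z"},
]
ttm = [hours_to_merge(pr) for pr in prs]
print(median(ttm), percentile(ttm, 90))  # → 24.0 72.0
```

Splitting `ttm` by changed-file count before taking the median yields the small-PR vs large-PR comparison.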

The baseline above is merge-biased. It captures friction in merged PRs but misses:

  • high-discussion PRs that were closed without merge
  • open PRs with substantial discussion that later went stale

Those two cohorts often contain failed-consensus signals and unresolved decision ambiguity.

Missing-Signal Extension: Abandoned and Stale PRs

To capture those signals, we added two cohorts:

  • closed-unmerged PRs in the same 90-day window
  • open PRs stale for 14+ days and 30+ days

Closed-unmerged cohort:

  • Queried: 500 closed-unmerged PRs (GitHub API list cap)
  • Median discussion volume: 2
  • High-discussion abandoned share (>=10 discussion items): 2.4%
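The abandoned-cohort metrics above reduce to a filter and a count over a closed-PR listing. A sketch, assuming PRs are dicts shaped like GitHub's REST payloads (`merged_at` is null for unmerged PRs; `comments` and `review_comments` are counts on the PR detail object) and that "discussion items" means their sum, which is this sketch's assumption rather than the study's stated definition:

```python
def closed_unmerged(prs):
    """Abandoned cohort: listed as closed (e.g. via
    GET /repos/{owner}/{repo}/pulls?state=closed) but never merged."""
    return [pr for pr in prs if pr["merged_at"] is None]

def discussion_volume(pr):
    # Assumed metric: issue comments plus review comments on the PR.
    return pr.get("comments", 0) + pr.get("review_comments", 0)

def high_discussion_share(prs, threshold=10):
    """Share of abandoned PRs at or above the discussion threshold."""
    cohort = closed_unmerged(prs)
    if not cohort:
        return 0.0
    hot = [pr for pr in cohort if discussion_volume(pr) >= threshold]
    return len(hot) / len(cohort)
```

With `threshold=10` this reproduces the "high-discussion abandoned share" figure for a given sample.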

Examples:

  • #109150: 45 discussion items, closed unmerged
  • #108406: 30 discussion items, closed unmerged
  • #109526: 24 discussion items, closed unmerged
  • #111112: 22 discussion items, closed unmerged

Open-PR staleness:

  • Open PRs sampled: 286
  • Stale 14+ days: 36
  • Stale 30+ days: 9

Examples:

  • #108533: 42 discussion items, stale 20 days
  • #109781: 6 discussion items, stale 19 days
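The 14+/30+ day staleness buckets can be sketched as follows, assuming staleness is measured from the `updated_at` timestamp GitHub returns on every PR (the study may use a different activity signal, such as last comment or commit):

```python
from datetime import datetime, timezone

def stale_days(pr, now):
    """Days since last recorded activity, via the PR's updated_at field."""
    updated = datetime.fromisoformat(pr["updated_at"].replace("Z", "+00:00"))
    return (now - updated).days

def staleness_buckets(open_prs, now):
    # 14+ and 30+ day counts; the 30+ set is a subset of the 14+ set.
    days = [stale_days(pr, now) for pr in open_prs]
    return {"14+": sum(d >= 14 for d in days), "30+": sum(d >= 30 for d in days)}
```

Passing a fixed `now` keeps the bucketing reproducible across runs of the study.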

Measuring merged PRs alone can understate governance friction. Abandoned high-traction PRs and stale discussed PRs should be tracked as first-class outcomes in future runs.

Recurring friction themes in the deep-dive PRs:

  • API contract and compatibility semantics
  • Nullability/type safety and error paths
  • UX flow behavior invariants
  • Testing evidence expectations
  • Permission and reliability guardrails

Recommended ADRs:

  1. PR Slice Boundaries and Risk Budget
  2. API Contract Evolution Protocol
  3. Test Evidence Matrix by Change Type
  4. UI/Flow Behavioral Invariants
  5. Permission and Reliability Guardrails
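To illustrate what an enforceable rule behind ADR 1 (PR Slice Boundaries and Risk Budget) could look like, here is a hypothetical CI gate over the changed-file-count and churn metrics already collected. The thresholds are placeholders the ADR would fix, not values from the study:

```python
def check_slice_budget(changed_files, churn, max_files=30, max_churn=1000):
    """Hypothetical gate for a PR size/risk budget: returns a list of
    human-readable violations, empty when the PR is within budget."""
    violations = []
    if changed_files > max_files:
        violations.append(f"{changed_files} changed files exceeds budget of {max_files}")
    if churn > max_churn:
        violations.append(f"churn of {churn} lines exceeds budget of {max_churn}")
    return violations
```

A CI job would fail (or label the PR for extra review) when the returned list is non-empty, moving scope debates ahead of line-by-line review.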