UX Review: 2.26 Measure and test for performance #185

@jenstrickland

Description

THIS IS NOT A PROPOSAL!

Continue discussion with a due date of November 1, 2025.

Revised

2.26 Measure and test for performance

Check for performance issues in both simulated and real-world scenarios while also accounting for compliance issues.

Success Criterion: Performance testing
Identify and resolve bottlenecks or issues in the underlying code or infrastructure which could impact sustainability and performance. Consider both simulated and real-world metrics. Monitor performance across every release cycle using appropriate tooling or through research and auditing.
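The "monitor performance across every release cycle" part of this criterion could be sketched as a simple baseline comparison run in CI. This is an illustrative assumption, not part of the WSG text: the metric names and the 10% tolerance are made up for the example.

```python
def detect_regressions(baseline: dict, current: dict, tolerance: float = 0.10) -> dict:
    """Flag metrics that worsened by more than `tolerance` vs. the baseline release.

    Assumes lower is better for every metric (e.g. load time in ms,
    transfer size in KB). Returns {metric: (baseline_value, current_value)}.
    """
    regressions = {}
    for metric, base in baseline.items():
        cur = current.get(metric)
        if cur is not None and base > 0 and (cur - base) / base > tolerance:
            regressions[metric] = (base, cur)
    return regressions
```

A release gate could then fail the build whenever `detect_regressions` returns a non-empty result.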

Success Criterion: Compliant measurement
Collect only data required to provide a streamlined and effective user journey and comply with relevant accessibility and data protection legislation. Put policies in place to ensure strict adherence.
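One way to operationalise the "collect only data required" half of this criterion is an allow-list filter applied to analytics events before they are sent. The field names below are illustrative assumptions, not a policy the guideline prescribes.

```python
# Hypothetical allow-list agreed in the measurement policy.
ALLOWED_FIELDS = {"page_path", "load_time_ms", "viewport_width"}

def minimise(event: dict, allowed: set = ALLOWED_FIELDS) -> dict:
    """Drop any analytics fields not on the agreed allow-list,
    so identifying data (IP addresses, user agents, etc.) never leaves the client."""
    return {k: v for k, v in event.items() if k in allowed}
```

Enforcing the policy in code, rather than in documentation alone, is what makes "strict adherence" auditable.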

Previous

I no longer have access to the previous document. Earlier version: https://w3c.github.io/sustainableweb-wsg/#display-any-variables-that-have-a-negative-impact-on-your-project

Discussion

[Screenshot from the Mural discussion; the text follows.]

Alex:

  • Potentially Remove
  • SC: Compliant measurement
  • A11y & Privacy Conflict

Clarify

  • Misleading title - the actual content may need fleshing out; it is more about validation. I would probably add the word
    "continually" or "throughout".

  • Needs clarification. It is too vague. What is "efficiency"?

  • What metrics are useful when measuring performance and when it comes to some of the less machine-testable elements such as navigation?

  • I find this one interesting because I am very interested in web performance, but somehow I don't know how to explain or reason about the additional resources provided for this WSG. It feels like good material, but it could perhaps be tightened up a little.

  • To me, the UX end of this is "set a page budget" (data budget). Measuring/testing performance would be web dev, right? Although "test performance" is a good one if you are assessing an existing site that you are redesigning. The UX could be measuring page speed, setting a baseline for improved performance, etc.

  • I would think of this at the product level. End performance depends on many different aspects, and as someone who worked in the area for many years and analysed hundreds of sites for their performance, it is never a specific team's fault or responsibility. Performance budgets, load times, etc. should be monitored at the product level, and everyone 'touching' the site/product should be aware of the impact their part has (a very common issue: third parties and pixels, usually added by marketing).
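The "page budget" and product-level monitoring ideas in the comments above could be sketched as a single budget check that every team's changes run against. The metric names and limits here are illustrative assumptions, not values from the guideline.

```python
# Hypothetical product-level performance budget; third-party requests are
# tracked explicitly because marketing pixels are a common source of bloat.
BUDGET = {
    "total_transfer_kb": 500,
    "largest_contentful_paint_ms": 2500,
    "third_party_requests": 10,
}

def check_budget(measured: dict, budget: dict = BUDGET) -> list:
    """Return human-readable violations for any measured metric over budget."""
    violations = []
    for metric, limit in budget.items():
        value = measured.get(metric)
        if value is not None and value > limit:
            violations.append(f"{metric}: {value} exceeds budget {limit}")
    return violations
```

Because the budget lives in one place, any team adding weight to the page sees the same failure, which matches the shared-responsibility point made in the comment.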

Comments

  • Establish continuous performance monitoring
  • The only one I can see some issue with on our end is the performance aspect, BUT it may be possible to cover the concept (as it relates to us) in the compatibility testing aspects.

Google sheet of feedback

Metadata

Labels

taskforce-ux: This issue affects the UX taskforce.
