QA Challenge – Multi‑Tenant Platform Assessment

Timebox: Up to 4 hours of focused work within a 72‑hour window after you start.

Goal: Assess your ability to independently explore a multi‑tenant, consumer‑facing platform and produce a clear, comprehensive findings report. We value critical thinking, prioritization, and practical QA methodology over breadth alone.


Scope

Choose one of the following sites to test (public areas only):

Focus on the following key user journeys (adapt as applicable to the chosen site):

  1. Search & Filters: Perform a property search; apply filters (price, rooms, property type, location). Validate URL params, pagination, sorting (see the console sketch after this list).
  2. Listing Detail: Open multiple listings; verify media, metadata (price, location), map, similar listings, contact/lead forms.
  3. Lead Submission (if present): Attempt to submit a contact form. (Do not spam actual agents; submit only once per unique scenario.)
  4. Localization/Formatting: Currency, number/date formatting, language toggles, SEO‑friendly URLs.
  5. Cross‑tenant hygiene: Consistency of shared components (headers/footers, cookie banners, login links), tenant‑specific branding.
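For journey 1, a quick way to check URL state is to paste a small snippet into the DevTools console after applying filters. This is a minimal sketch; the parameter names and values (price_min, rooms) are hypothetical placeholders, so read the real ones off the address bar first.

```js
// Paste into the DevTools console after applying filters.
// The expected names/values below are hypothetical placeholders;
// substitute whatever the site actually writes to the URL.
const params = new URLSearchParams(location.search);
console.table(Object.fromEntries(params.entries()));

const expected = { price_min: '100000', rooms: '3' }; // hypothetical filter state
for (const [key, value] of Object.entries(expected)) {
  console.assert(
    params.get(key) === value,
    `URL param "${key}": expected "${value}", got "${params.get(key)}"`
  );
}
```

Opening the same URL in a fresh tab should reproduce the filtered results; if it doesn't, filter state lives only in client memory, which is worth filing as an issue.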

Optional (only if visible without login): wishlist/favorites, compare, newsletter sign‑up, or account creation.


Constraints & Ethics

  • No credentials. Test only public pages and actions visible without authentication.
  • Be a good citizen: Don’t use automation that could overload the site (no heavy crawlers / load testing). Limit form submissions to avoid spamming real partners.
  • Respect privacy & terms. Do not scrape personal data.

What to Deliver

Submit your work as one of the following:

  • A fork/branch/PR against this repo, or
  • A zipped folder or a shareable link (e.g., Google Drive) containing your outputs.

Required:

  1. docs/Test_Report.md – Your structured, readable findings report.
  2. docs/Issues.csv – A concise list of findings with severity, reproducible steps, and evidence links (one possible column layout is sketched after this list).
  3. evidence/ – Screenshots and optional short screen recordings (MP4/GIF). Name files descriptively and reference them in the report/CSV.
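No column layout is mandated for docs/Issues.csv; one layout that covers the fields named above, plus the expected/actual and environment details evaluators look for, might be (the example row is an illustrative placeholder):

```csv
ID,Title,Severity,Environment,Steps to Reproduce,Expected,Actual,Evidence
QA-001,Price filter resets on pagination,High,Chrome 126 / Desktop,"1. Search a city; 2. Set a max price; 3. Go to page 2",Filter persists across pages,Results shown unfiltered,evidence/qa-001-price-filter.png
```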

How to Work

  1. Plan (10–20 min): Choose platform, define your test charters, list your environments.
  2. Explore (2–3 hrs): Execute charters; take notes; capture evidence.
  3. Consolidate (40–60 min): Prioritize, write the report, export your Lighthouse report (see the command sketch below), and file your issues.
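The DevTools Lighthouse panel can save its report directly; alternatively, assuming Node.js is available, a CLI run like the one below (the URL and output path are placeholders) drops an HTML report straight into evidence/:

```sh
npx lighthouse https://<chosen-site>/ --output=html --output-path=./evidence/lighthouse-home.html
```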

Devices/Browsers (pick at least two):

  • Desktop Chrome (latest) and Mobile (Chrome/Android or Safari/iOS)
  • Bonus: Firefox and/or Safari desktop

Tooling allowed (pick any):

  • Browser DevTools (Network, Performance, Lighthouse)
  • Screen capture tools (CleanShot, Loom, etc.)
  • Accessibility checks (Chrome Lighthouse a11y tab, axe DevTools)

What We Evaluate

  • Coverage & Prioritization: Did you test high‑impact flows? Did you categorize/severity‑rank issues realistically?
  • Reproducibility: Are steps, expected vs. actual results, and environment details clear?
  • Technical Insight: Use of DevTools (console errors, failing requests, cache, URL state, CLS/LCP hints); a console sketch for watching CLS/LCP follows this list.
  • Communication: Structure, readability, and evidence quality.
  • Product Thinking: Do you spot tenant‑specific inconsistencies, SEO issues, and real user impact?
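On the Technical Insight point: Lighthouse reports CLS/LCP after the fact, but you can also watch them live while reproducing an issue. This sketch uses the standard PerformanceObserver API (Chromium-only entry types) and is meant to be pasted into the DevTools console right after the page starts loading:

```js
// Log Largest Contentful Paint candidates as they are reported.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log('LCP candidate at', Math.round(entry.startTime), 'ms:', entry.element);
  }
}).observe({ type: 'largest-contentful-paint', buffered: true });

// Accumulate layout shifts not caused by recent user input (CLS).
let cls = 0;
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (!entry.hadRecentInput) cls += entry.value;
  }
  console.log('CLS so far:', cls.toFixed(4));
}).observe({ type: 'layout-shift', buffered: true });
```

`buffered: true` replays entries that occurred before the observer was attached, so pasting the snippet slightly late still captures earlier shifts.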

Submission Checklist

  • docs/Test_Report.md completed
  • docs/Issues.csv created
  • 6–12 issues/observations total, including at least 3 of medium or high severity
  • Screenshots/screencasts in evidence/ and referenced properly
  • Optional: GitHub Issues created using the provided template

Good luck, and have fun!
