Denzil's Take Home Exercise #75
Notes:
Each concern is encapsulated behind well-tested facades / repositories that allow for easy composition, maintenance, and replacement. My goal is always for future devs (e.g. me) to be able to make changes confidently and safely without digging into every implementation nuance (e.g. the schema of a database table).
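To illustrate the facade / repository idea, here is a minimal sketch (all names hypothetical, not taken from the actual codebase): callers depend on a narrow interface rather than on the underlying table schema. The backing store here is an in-memory Hash so the example is self-contained; in the app it would be an ActiveRecord model.

```ruby
# Hypothetical repository: hides the storage details of "liked stories"
# behind two small methods, so callers never touch the schema directly.
class LikedStoryRepository
  def initialize(store = Hash.new { |hash, key| hash[key] = [] })
    @store = store # user_id => [story_id, ...]
  end

  # Record a like; idempotent, so liking twice is harmless.
  def like!(user_id, story_id)
    ids = @store[user_id]
    ids << story_id unless ids.include?(story_id)
    story_id
  end

  # Return the story IDs a user has liked.
  def liked_ids_for(user_id)
    @store[user_id].dup
  end
end

repo = LikedStoryRepository.new
repo.like!(1, 42)
repo.like!(1, 42) # second call is a no-op
liked = repo.liked_ids_for(1) # => [42]
```

Swapping the Hash for a database-backed model later would not change any caller.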
I made the intentional decision NOT to become a system of record for story data; i.e. I do not create a database entry for every fetched story. IMHO, stories start to lose value once they fall off the top page.
I decided against scheduling the data pull with a cron / Sidekiq job. For an internal tool with a small number of users, a scheduled pull would generate a significant number of unnecessary API calls during periods of disuse.
Despite containing some duplication, the HomePageCollator and LikedPageCollator classes are intentionally kept on parallel tracks in anticipation of future divergence.
The HackerNewsScraper class parallelizes its API pulls with the Typhoeus gem for a speedy first page load, mitigating the Hacker News API's unfortunate waterfall of N+1 requests.
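The N+1 shape of the Hacker News API (one request for the ID list, then one request per story) is what makes fanning the per-story requests out in parallel pay off. The app uses Typhoeus for this; below is a stdlib-only sketch of the same fan-out idea using threads, with the HTTP call stubbed out so the example is self-contained (the method names are illustrative, not the actual scraper's).

```ruby
# Stand-in for an HTTP GET of a single story; in the real scraper this
# would hit https://hacker-news.firebaseio.com/v0/item/<id>.json.
def fetch_story(id)
  { id: id, title: "Story #{id}" }
end

# Issue one fetch per ID concurrently, then collect results in order.
# Thread#value joins the thread and returns its block's return value.
def fetch_stories_in_parallel(ids)
  ids.map { |id| Thread.new { fetch_story(id) } }.map(&:value)
end

stories = fetch_stories_in_parallel([1, 2, 3])
```

With real network I/O, total latency approaches the slowest single request rather than the sum of all of them.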
Hacker News API "scrape" results are transparently cached in Rails' low-level cache, reducing the overall number of API requests. The cache expiry duration is currently configurable at the page level (3 minutes for the top news index, 1 day for each individual story on the liked stories index).
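The read-through pattern described above is what Rails' low-level cache provides via `Rails.cache.fetch` with an `expires_in:` option. Since that requires a Rails environment, here is a self-contained stand-in with the same fetch-or-compute semantics (class and variable names are hypothetical):

```ruby
# Minimal read-through cache with per-key TTL, mimicking the shape of
# Rails.cache.fetch(key, expires_in: ...) { ...expensive work... }.
class TtlCache
  Entry = Struct.new(:value, :expires_at)

  def initialize
    @entries = {}
  end

  # Return the cached value if fresh; otherwise run the block,
  # store its result with an expiry, and return it.
  def fetch(key, expires_in:)
    entry = @entries[key]
    return entry.value if entry && Time.now < entry.expires_at

    value = yield
    @entries[key] = Entry.new(value, Time.now + expires_in)
    value
  end
end

cache = TtlCache.new
api_calls = 0
# Two reads within the 180-second TTL: only the first hits the "API".
2.times { cache.fetch("top_stories", expires_in: 180) { api_calls += 1; [1, 2, 3] } }
```

Tuning `expires_in` per page is what allows the short 3-minute window on the fast-moving top index and the 1-day window on individual stories.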
No major changes / tech debt have been introduced, to minimize additional cognitive load on hypothetical coworkers; i.e. I worked with the stack already present (ERB templates, Turbolinks / Stimulus, RSpec, Devise, etc.).
Only 3 new gems have been added, all of which are either opt-in or self-documenting.