1 change: 1 addition & 0 deletions .gitignore
Original file line number Diff line number Diff line change
@@ -17,3 +17,4 @@
/yarn-error.log

.byebug_history
.vscode
62 changes: 62 additions & 0 deletions DENZIL.md
@@ -0,0 +1,62 @@
# Denzil Kriekenbeek take home exercise

## Installation problems on an Apple M1 MacBook Air, Sonoma 14.4.1 (23E224)

### nio4r gem failing during bundle install:
```
gem install nio4r -v 2.5.8 -- --with-cflags="-Wno-incompatible-pointer-types"
```

### Postgres gem failing during bundle install because Postgres was not installed:
```
brew install postgresql
brew services start postgresql@14
```

### Wrong version of OpenSSL being used when building Ruby 3.1.2 with ruby-install
Add the following to `.zshrc`:
```
export PATH="/opt/homebrew/opt/openssl@1.1/bin:$PATH"
export LIBRARY_PATH="$LIBRARY_PATH:/opt/homebrew/opt/openssl@1.1/lib/"
export RUBY_CONFIGURE_OPTS="--with-openssl-dir=$(brew --prefix openssl@1.1)"
```
`ruby-install ruby-3.1.2`

## Initial Impressions:
- The Hacker News API requires N+1 requests to populate a page; we'll have to do some significant caching to make this tolerable.

- First step is to refresh my memory by reading
  https://guides.rubyonrails.org/caching_with_rails.html

- Might need to enable the development (in-memory) cache with `bin/rails dev:cache`, but this would mean production would need an alternate memory store (memcached?). No need to decide now.
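
The N+1 shape noted above can be sketched in plain Ruby. The `fetch` lambda and canned responses below are illustrative stand-ins for real HTTP calls to `/v0/topstories.json` and `/v0/item/<id>.json`:

```ruby
require "json"

# Sketch of the N+1 access pattern the Hacker News API forces:
# one request for the id list, then one request per story. The `fetch`
# lambda stands in for a real HTTP call.
def top_stories(fetch, limit: 3)
  ids = JSON.parse(fetch.call("topstories.json")) # request 1: the id list
  ids.first(limit).map do |id|                    # requests 2..N+1: one per story
    JSON.parse(fetch.call("item/#{id}.json"))
  end
end

# Canned responses so the shape is visible without network I/O.
canned = {
  "topstories.json" => "[101, 102, 103, 104]",
  "item/101.json"   => '{"id": 101, "title": "Story A"}',
  "item/102.json"   => '{"id": 102, "title": "Story B"}',
  "item/103.json"   => '{"id": 103, "title": "Story C"}'
}
fetch = ->(endpoint) { canned.fetch(endpoint) }

stories = top_stories(fetch, limit: 3) # 1 + 3 = 4 requests for a 3-story page
```

Caching the id list and the per-item responses separately is what makes this tolerable.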

## Random thought livestream:

- Any time an API is involved, I reach for the VCR gem; it mocks out external API calls, allowing for deterministic unit tests. Incidentally, while reading the docs, I noticed that the supported Typhoeus library can handle parallel requests. Seems applicable to this problem.

- There's a nagging deprecation warning that seems easily fixable.

- Login/logout is the first requirement, and I see the devise gem in the gemfile, so let's get that working next.

- Heh, didn't realize the User table already had all the devise columns until I went to create a migration. *facepalm* As an aside, I was going to add the annotate gem for easy schema reference.

- Now that we can guarantee that users are logged in, the next step is to retrieve Hacker News entries via its API. I'll brute-force the N+1 requests first, then iterate from there.

- My plan is to create a "repository" to abstract away all this API work. But in good TDD practice, I'll start by writing a failing test that lets me design my interface.
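
That interface-first step might look like the following sketch. `StoryRepo` and its method are hypothetical names; a throwaway stub stands in so the example runs, whereas in real TDD the class would not exist yet and this "test" would fail first:

```ruby
# Pin down the repository interface with assertions before implementing it:
# a class method that returns hashes with just the fields the page needs.
class StoryRepo
  # Stub implementation so the sketch runs; the real class would call the API.
  def self.top_stories(limit:)
    Array.new(limit) { |i| { id: i + 1, title: "Story #{i + 1}" } }
  end
end

def assert(condition, message)
  raise message unless condition
end

stories = StoryRepo.top_stories(limit: 2)
assert(stories.size == 2, "expected the limit to be honored")
assert(stories.all? { |s| s.key?(:id) && s.key?(:title) }, "expected id and title on each story")
```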

- Hmm, the API has an "ask" item that is unhelpfully also typed as a "story". The only difference I see is that an "ask" has a text field, whereas a real "story" does not. But I suppose for this exercise we only care about titles.
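
A minimal sketch of that distinction, assuming hash-shaped items and a hypothetical `ask_item?` helper:

```ruby
# Tell an "Ask HN" item apart from a regular story: both come back with
# type == "story", but only the ask variant carries a "text" body, while
# a real story carries a "url" instead.
def ask_item?(item)
  item[:type] == "story" && !item[:text].to_s.empty?
end

story = { id: 1, type: "story", title: "A real story", url: "https://example.com" }
ask   = { id: 2, type: "story", title: "Ask HN: anyone?", text: "A long question..." }
```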

- Whoops, neglected this file: got my brute-force scraper working, piped its output to the home page, and added a like button to each row... Next step is to make it do something, which involves creating a table for this data to live in.

- I added low-level caching for the scrape results. The likes have to be dynamic, so the two data sets need some collation; hence the introduction of my collator classes. I realized that the home page's cache expiration has to be on the order of minutes, whereas each individual story's details cache can live for days. As a result, every piece of information is loaded only once, keeping our bandwidth low at the expense of some cache space.
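
The two expiry tiers can be sketched with a toy TTL cache in pure Ruby (class name hypothetical; in the app, `Rails.cache.fetch` with `expires_in:` plays this role):

```ruby
# Minimal TTL cache illustrating the two expiry tiers: the top-story list
# churns, so it expires in minutes, while an individual story's details
# rarely change and can be cached for days.
class TtlCache
  Entry = Struct.new(:value, :expires_at)

  def initialize
    @store = {}
  end

  # Return the cached value if fresh; otherwise compute, store, and return it.
  def fetch(key, expires_in:)
    entry = @store[key]
    return entry.value if entry && Time.now < entry.expires_at

    value = yield
    @store[key] = Entry.new(value, Time.now + expires_in)
    value
  end
end

cache = TtlCache.new
cache.fetch(:top_stories, expires_in: 3 * 60) { [101, 102] }       # minutes tier
cache.fetch([:story, 101], expires_in: 24 * 3600) { { id: 101 } }  # days tier
```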

- I was considering doing some partial render caching as well, but I also wanted to submit it before EOW :)

- I'm also glad to have included the typhoeus gem to do parallel fetches, which should prevent a heavy waterfall on initial page load.


## Final Thoughts:
- This was a really fun exercise! I haven't used Rails 7 before, so I took the opportunity to acquaint myself with how Stimulus works. I'm happy with the resulting "SPA"-like experience. I think you'll see that I'm very test-driven, and I like to build facades of abstraction that make later tweaks easier. After building all the tools I needed for the home page, I was able to build the liked page in a few minutes. Thank you, and I hope to hear from you soon!

Sincerely,
-Denzil
3 changes: 3 additions & 0 deletions Gemfile
@@ -2,6 +2,7 @@ source 'https://rubygems.org'

ruby File.read('.ruby-version').chomp

gem 'annotate', group: :development # reminds us of model schemas
gem 'byebug', platforms: [:mri, :mingw, :x64_mingw], group: [:development, :test]
gem 'capybara', group: [:development, :test]
gem 'coffee-rails'
@@ -17,6 +18,8 @@ gem 'sass-rails'
gem 'selenium-webdriver', group: [:development, :test]
gem 'spring', group: :development
gem 'turbolinks'
gem 'typhoeus' # parallelizes http requests
gem 'tzinfo-data', platforms: [:mingw, :mswin, :x64_mingw, :jruby]
gem 'uglifier'
gem "vcr", group: :test # mocks http requests
gem 'web-console', group: :development
11 changes: 11 additions & 0 deletions Gemfile.lock
@@ -68,6 +68,9 @@ GEM
tzinfo (~> 2.0)
addressable (2.8.1)
public_suffix (>= 2.0.2, < 6.0)
annotate (3.2.0)
activerecord (>= 3.2, < 8.0)
rake (>= 10.4, < 14.0)
bcrypt (3.1.18)
bindex (0.8.1)
builder (3.2.4)
@@ -101,6 +104,8 @@ GEM
diff-lcs (1.5.0)
digest (3.1.0)
erubi (1.11.0)
ethon (0.16.0)
ffi (>= 1.15.0)
execjs (2.8.1)
ffi (1.15.5)
globalid (1.0.0)
@@ -239,10 +244,13 @@ GEM
turbolinks (5.2.1)
turbolinks-source (~> 5.2)
turbolinks-source (5.2.0)
typhoeus (1.4.1)
ethon (>= 0.9.0)
tzinfo (2.0.5)
concurrent-ruby (~> 1.0)
uglifier (4.2.0)
execjs (>= 0.3.0, < 3)
vcr (6.2.0)
warden (1.2.9)
rack (>= 2.0.9)
web-console (4.2.0)
@@ -262,6 +270,7 @@ PLATFORMS
ruby

DEPENDENCIES
annotate
byebug
capybara
coffee-rails
@@ -277,8 +286,10 @@ DEPENDENCIES
selenium-webdriver
spring
turbolinks
typhoeus
tzinfo-data
uglifier
vcr
web-console

RUBY VERSION
37 changes: 37 additions & 0 deletions app/assets/javascripts/likes.js
@@ -0,0 +1,37 @@

class Likes {
  // Listen for rails-ujs "ajax:success" events and dispatch on the "cmd" field.
  constructor() {
    document.addEventListener("ajax:success", this.ajax_listener);
  }

  ajax_listener = (event) => {
    const [data, _status, _xhr] = event.detail;
    const { cmd, ...json } = data;

    switch (cmd) {
      case "update_story_likes": {
        const { story_id, likers } = json;
        return this.update_story_likes(story_id, likers);
      }
    }
  }

  update_story_likes(story_id, likers) {
    const storyLikeElementId = "story_likes_" + story_id;

    let newContents = "";
    if (likers.length > 0) {
      newContents = "Liked by: ";
      newContents += likers
        .map(liker => liker.name)
        .join(", ");
    }

    this.replace_element(storyLikeElementId, newContents);
  }

  replace_element(elementId, newContents) {
    document.getElementById(elementId).innerHTML = newContents;
  }
}

new Likes();
26 changes: 26 additions & 0 deletions app/controllers/pages_controller.rb
@@ -1,2 +1,28 @@
class PagesController < ApplicationController
  before_action :authenticate_user!

  MAX_STORIES = 20

  def home
    home_page_data = HomePageCollator.call(limit: MAX_STORIES)
    render locals: home_page_data
  end

  def liked_index
    liked_page_data = LikedPageCollator.call
    render locals: liked_page_data
  end

  def like_story
    story_id = params.require(:story_id)
    LikeRepo.new(current_user.id)
            .toggle_like(story_id)

    likers = LikeRepo.fetch_likes(story_id)
    render json: {
      cmd: :update_story_likes,
      story_id: story_id,
      likers: likers
    }
  end
end
23 changes: 23 additions & 0 deletions app/models/like.rb
@@ -0,0 +1,23 @@
# == Schema Information
#
# Table name: likes
#
# id :bigint not null, primary key
# active :boolean default(FALSE)
# created_at :datetime not null
# updated_at :datetime not null
# story_id :integer
# user_id :bigint
#
# Indexes
#
# index_likes_on_user_id (user_id)
# index_likes_on_user_id_and_story_id (user_id,story_id) UNIQUE
#
class Like < ApplicationRecord
  belongs_to :user

  def user_name
    user.full_name
  end
end
31 changes: 30 additions & 1 deletion app/models/user.rb
@@ -1,6 +1,35 @@
# == Schema Information
#
# Table name: users
#
# id :bigint not null, primary key
# current_sign_in_at :datetime
# current_sign_in_ip :inet
# email :string default(""), not null
# encrypted_password :string default(""), not null
# first_name :string
# last_name :string
# last_sign_in_at :datetime
# last_sign_in_ip :inet
# remember_created_at :datetime
# reset_password_sent_at :datetime
# reset_password_token :string
# sign_in_count :integer default(0), not null
# created_at :datetime not null
# updated_at :datetime not null
#
# Indexes
#
# index_users_on_email (email) UNIQUE
# index_users_on_reset_password_token (reset_password_token) UNIQUE
#
class User < ApplicationRecord
  # Include default devise modules. Others available are:
  # :confirmable, :lockable, :timeoutable and :omniauthable
  devise :database_authenticatable, :registerable,
         :recoverable, :rememberable, :trackable, :validatable

  def full_name
    "#{first_name} #{last_name}"
  end
end
81 changes: 81 additions & 0 deletions app/repos/hacker_news_scraper.rb
@@ -0,0 +1,81 @@
class HackerNewsScraper
  API_ROOT = "https://hacker-news.firebaseio.com/v0/"

  def self.retrieve_top_stories(
    limit: nil,
    cache_expiry: 3.minutes,
    relevant_fields: []
  )
    Rails.cache.fetch(
      :hacker_news_top_stories,
      expires_in: cache_expiry
    ) do
      scraper = new
      story_ids = scraper.fetch_top_story_ids
      limit ||= story_ids.size

      scraper
        .fetch_stories(story_ids.first(limit))
        .map do |story_details|
          story_details.slice(*relevant_fields)
        end
    end
  end

  def self.retrieve_story_details(
    story_id:,
    cache_expiry: 1.day,
    relevant_fields: []
  )
    Rails.cache.fetch(
      [:story_details, story_id],
      expires_in: cache_expiry
    ) do
      story_details = new.fetch_story_details(story_id)
      story_details.slice(*relevant_fields)
    end
  end

  def fetch_top_story_ids
    end_point = "topstories.json"
    get(end_point)
  end

  def fetch_story_details(story_id)
    end_point = story_endpoint(story_id)
    get(end_point)
      .symbolize_keys
  end

  def fetch_stories(story_ids)
    hydra = Typhoeus::Hydra.new
    requests = build_hydra_requests(story_ids) do |request|
      hydra.queue(request)
    end
    hydra.run

    requests.map do |request|
      JSON.parse(request.response.body)
          .symbolize_keys
    end
  end

  private

  def get(api_endpoint)
    request = Typhoeus.get(API_ROOT + api_endpoint)
    JSON.parse(request.response_body)
  end

  def story_endpoint(story_id)
    "item/#{story_id}.json"
  end

  def build_hydra_requests(story_ids, &block)
    story_ids.map do |story_id|
      api_end_point = API_ROOT + story_endpoint(story_id)
      Typhoeus::Request.new(api_end_point)
        .tap { |req| block.call(req) }
    end
  end
end
52 changes: 52 additions & 0 deletions app/repos/home_page_collator.rb
@@ -0,0 +1,52 @@
class HomePageCollator
  def self.call(limit: nil, cache_expiry: nil)
    repo = new
    repo.limit = limit if limit.present?
    repo.cache_expiry = cache_expiry if cache_expiry.present?
    repo.execute
  end

  attr_accessor :limit, :cache_expiry

  def initialize
    @scraper = HackerNewsScraper
    @like_repo = LikeRepo
    @limit = nil
    @cache_expiry = 3.minutes # top stories will be in constant flux
  end

  def execute
    story_data = scrape_news_data
    story_ids = extract_story_ids(story_data)
    liker_data = lookup_likes(story_ids)

    {
      story_data: story_data,
      liker_data: liker_data
    }
  end

  private

  attr_reader :scraper, :like_repo

  def scrape_news_data
    scraper.retrieve_top_stories(
      limit: limit,
      cache_expiry: cache_expiry,
      relevant_fields: [
        :id,
        :title,
        :url
      ]
    )
  end

  def extract_story_ids(scraped_data)
    scraped_data.map { |story| story[:id] }
  end

  def lookup_likes(story_ids)
    like_repo.fetch_grouped_likes(story_ids)
  end
end