@@ -5,9 +5,9 @@ defmodule Nebulex.Adapters.Partitioned do
  ## Features

  * Partitioned cache topology (Sharding Distribution Model).
- * Configurable primary storage adapter.
  * `ExHashRing` for distributing the keys across the cluster members.
  * Support for transactions via Erlang global name registration facility.
+ * Configurable primary storage adapter.

  ## Partitioned Cache Topology

@@ -59,15 +59,34 @@ defmodule Nebulex.Adapters.Partitioned do
  `:pg` is used under the hood by the adapter to manage the cluster nodes.
  When the partitioned cache is started in a node, it creates a group and joins
  it (the cache supervisor PID is joined to the group). Then, when a function
- is invoked, the adapter picks a node from the group members, and then the
- function is executed on that specific node. In the same way, when a
- partitioned cache supervisor dies (the cache is stopped or killed for some
- reason), the PID of that process is automatically removed from the PG group;
- this is why it's recommended to use consistent hashing for distributing the
- keys across the cluster nodes.
+ is invoked, the adapter uses `ExHashRing` to determine which node should
+ handle the request based on the key's hash value. This ensures consistent
+ key distribution across the cluster nodes, even when nodes join or leave
+ the cluster.
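
The group-membership step described above can be sketched with the plain OTP `:pg` API. This is an illustrative sketch, not the adapter's code: the `:my_cache` group name is hypothetical, and Nebulex derives and manages its own group names internally.

```elixir
# Minimal :pg sketch (OTP 23+). The :my_cache group name is hypothetical;
# the adapter manages group names and membership itself.
{:ok, _pg} = :pg.start_link()

# Join the current process to the group, as the adapter does with the
# cache supervisor PID when the partitioned cache starts.
:ok = :pg.join(:my_cache, self())

# Any node in the cluster can now discover the members of the group.
[member] = :pg.get_members(:my_cache)
true = member == self()
```

Because membership is process-based, a member that exits is dropped from the group without any explicit cleanup call.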
+
+ The key distribution process works as follows:
+
+ 1. Each node in the cluster is assigned a set of virtual nodes (vnodes) in
+    the hash ring.
+ 2. When a key is accessed, `ExHashRing.Ring` is used to find the node
+    responsible for that key (the hash value is used to find the corresponding
+    vnode in the hash ring).
+ 3. The request is routed to the physical node that owns that vnode.
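
The lookup step above can be sketched roughly with `ExHashRing`'s `Ring` API. This is a hedged sketch under the assumption of the v6-style API (`start_link`, `add_node`, `find_node`); the ring name `:cache_ring` and the node names are illustrative, and the adapter wires all of this up internally.

```elixir
# Illustrative sketch only; names are hypothetical and the adapter
# performs these steps for you.
{:ok, _pid} = ExHashRing.Ring.start_link(:cache_ring)

ExHashRing.Ring.add_node(:cache_ring, :"cache1@host")
ExHashRing.Ring.add_node(:cache_ring, :"cache2@host")

# While membership is stable, the same key always resolves to the same node.
{:ok, owner} = ExHashRing.Ring.find_node(:cache_ring, {MyCache, :user_42})
```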
+
+ This consistent hashing approach provides several benefits:
+
+ * Minimal key redistribution when nodes join or leave the cluster.
+ * Even distribution of keys across the cluster.
+ * Predictable key-to-node mapping.
+ * Efficient node lookup for key operations.
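
The "minimal redistribution" property can be demonstrated with a small, dependency-free sketch. This is not the adapter's implementation: it uses `:erlang.phash2` instead of `ExHashRing`, and `RingSketch` is a hypothetical module, but it shows why removing a node only remaps the keys that node owned.

```elixir
# Minimal consistent-hashing sketch (hypothetical; not the adapter's code).
defmodule RingSketch do
  # Each node gets several virtual nodes (vnodes) on a 2^32 ring.
  def build(nodes, vnodes \\ 64) do
    for node <- nodes, i <- 1..vnodes do
      {:erlang.phash2({node, i}, 4_294_967_296), node}
    end
    |> Enum.sort()
  end

  # Walk clockwise to the first vnode at or after the key's hash,
  # wrapping around to the smallest vnode if none is found.
  def find_node(ring, key) do
    hash = :erlang.phash2(key, 4_294_967_296)

    case Enum.find(ring, fn {vhash, _node} -> vhash >= hash end) do
      {_vhash, node} -> node
      nil -> ring |> hd() |> elem(1)
    end
  end
end

ring3 = RingSketch.build([:a, :b, :c])
ring2 = RingSketch.build([:a, :b])

keys = Enum.map(1..1_000, &"key-#{&1}")

# Keys that were NOT owned by the removed node :c keep their owner.
moved =
  Enum.count(keys, fn k ->
    RingSketch.find_node(ring3, k) != RingSketch.find_node(ring2, k) and
      RingSketch.find_node(ring3, k) != :c
  end)

IO.puts("keys remapped away from surviving owners: #{moved}")
# prints 0: only :c's own keys move when :c leaves the ring
```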
+
+ When a partitioned cache supervisor dies (the cache is stopped or killed for
+ some reason), the PID of that process is automatically removed from the PG
+ group. The hash ring is then automatically rebalanced to ensure keys are
+ properly distributed among the remaining nodes.

  This adapter depends on a local cache adapter (primary storage); it adds
- a thin layer on top of it in order to distribute requests across a group
+ an extra layer on top of it in order to distribute requests across a group
  of nodes, where the local cache is supposed to be running already. However,
  you don't need to define any additional cache module for the primary
  storage; instead, the adapter initializes it automatically (it adds the
@@ -502,7 +521,7 @@ defmodule Nebulex.Adapters.Partitioned do
      end
    end

-  def do_put_all(action, adapter_meta, entries, ttl, opts) do
+  defp do_put_all(action, adapter_meta, entries, ttl, opts) do
     timeout = Keyword.fetch!(opts, :timeout)
     opts = [ttl: ttl] ++ opts