Alternative learning paths
Currently, Metacademy is based on a single dependency graph, where each concept has a single set of dependencies. (In some cases, the dependencies are annotated at a more fine-grained level, where each of a concept's goals depends on a set of concept goals.) The graph is used to generate a learning plan for a given concept; this learning plan depends only on the concepts a user has already learned. Might we want to offer alternative learning plans, similarly to how Google Maps offers several alternative sets of directions?
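To make this concrete, here is a minimal sketch of the single-graph model: a dependency map and a plan generated from it by a depth-first traversal that skips concepts the user already knows. The concept names and the `learning_plan` helper are made up for illustration; this is not Metacademy's actual data model or API.

```python
# A minimal sketch, assuming each concept maps to a list of prerequisite concepts.
# Concept names and this helper are illustrative, not Metacademy's real schema.
DEPENDENCIES = {
    "linear_algebra": [],
    "probability": [],
    "gaussian_distributions": ["linear_algebra", "probability"],
    "gaussian_processes": ["gaussian_distributions", "linear_algebra"],
}

def learning_plan(target, learned, deps=DEPENDENCIES):
    """Return an ordered list of concepts to study for `target`,
    skipping anything the user has already learned."""
    plan = []
    visited = set(learned)

    def visit(concept):
        if concept in visited:
            return
        visited.add(concept)
        for dep in deps.get(concept, []):
            visit(dep)  # prerequisites are added before the concept itself
        plan.append(concept)

    visit(target)
    return plan

# A user who already knows linear algebra gets a shorter plan.
print(learning_plan("gaussian_processes", learned={"linear_algebra"}))
# -> ['probability', 'gaussian_distributions', 'gaussian_processes']
```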
In theory, there may be different sets of prerequisites that would suffice for learning a given concept, and different sets may be more efficient for different users. In my experience, this has been a lot less common than I'd expected, but it does happen. I'll list some examples, followed by several proposals for how the graph structure could be extended to allow alternative paths.
One common case is already covered by the existing structure: a concept that needs to be learned to varying depths depending on the situation. E.g., many machine learning concepts depend on "positive definite matrices," but only at a fairly superficial level. Goal-specific dependencies handle this: a given concept can depend on a specific set of goals for PD matrices, and those goals each have their own sets of dependencies.
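As a rough sketch of how goal-specific dependencies can be represented (all names here are hypothetical, not Metacademy's real schema), dependencies attach to (concept, goal) pairs rather than to whole concepts, so a concept can pull in only a shallow slice of PD matrices:

```python
# Goal-level dependency map: edges connect (concept, goal) pairs.
# All concept and goal names are hypothetical.
GOAL_DEPENDENCIES = {
    ("positive_definite_matrices", "definition"): [
        ("linear_algebra", "matrix_multiplication"),
    ],
    ("positive_definite_matrices", "spectral_properties"): [
        ("positive_definite_matrices", "definition"),
        ("linear_algebra", "eigendecomposition"),
    ],
    # The multivariate Gaussian only needs the definition-level goal,
    # not the deeper spectral material.
    ("multivariate_gaussian", "density"): [
        ("positive_definite_matrices", "definition"),
    ],
}
```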
But here are some examples which aren't handled so well under the current framework:
- Different resources present belief propagation using different formalisms. Koller's book and Coursera course both use the clique tree formalism (also sometimes known as the junction tree algorithm). Bishop's and MacKay's books both use factor graphs. MIT's graphical models course, 6.438, presented it in terms of MRFs. All of these presentations are closely related and involve essentially the same ideas, but they aren't equivalent, and they have different dependencies. Currently, the factor graph version is treated as the canonical one: the Koller resources are listed for BP with a note explaining the difference, and the junction tree version of the algorithm is listed as a separate concept.
- Regularization is an abstract concept which is needed to understand the motivation for a lot of machine learning algorithms (e.g. SVMs). But you can't just explain regularization in the abstract; it has to be introduced in the context of some particular learning algorithm. Currently, ridge regression is treated as the canonical version of regularization, since linear regression is probably the simplest algorithm to start with. This means other concepts, such as logistic regression and SVMs, depend on ridge regression when it isn't strictly required; the sketch below illustrates the resulting edges.
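Purely as an illustration (hypothetical edges, not the actual graph), the current structure amounts to something like:

```python
# The regularization idea is bundled into the ridge regression node, so concepts
# that only need the abstract idea still inherit ridge regression's full chain.
EDGES = [
    ("ridge_regression", "linear_regression"),
    ("logistic_regression", "ridge_regression"),  # really only needs "regularization"
    ("svms", "ridge_regression"),                 # likewise
]
```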