@renovate renovate bot commented Sep 2, 2025

This PR contains the following updates:

Package          Change
LanguageExt.Core 5.0.0-beta-48 -> 5.0.0-beta-57

Release Notes

louthy/language-ext (LanguageExt.Core)

v5.0.0-beta-57: Extension operators supported for more types

Following on from the extension-operators and .NET 10 release yesterday, I have now implemented operators for even more of the core types in language-ext.

The full set so far:

  • ChronicleT<Ch, M, A>
  • Eff<A>
  • Eff<RT, A>
  • Either<L, R>
  • EitherT<L, M, R>
  • Fin<A>
  • FinT<M, A>
  • IO<A>
  • Option<A>
  • OptionT<M, A>
  • These<A, B>
  • Try<A>
  • TryT<M, A>
  • Validation<F, A>
  • ValidationT<F, M, A>

None of this is needed to make the extension operators work for these types, but the generic extensions all return an abstract K<F, A> type. So, I am in the process of making these bespoke versions return a concrete type. This is purely for usability's sake.

And because each core type can support many different traits, the number of operators they support can be quite large too (see the Try operators for a good example!). So, for each core type I've decided on a new folder structure:

TYPE folder
  | ----- TYPE definition.cs
  | ----- TYPE module.cs
  | ----- TYPE case [1 .. n].cs  (Left, Right, Some, None, etc.)
  | ----- Extensions
  | ----- Operators
  | ----- Prelude

This will keep the many method extensions and operator extensions away from the core functionality for any one type: which hopefully will make the API docs easier to read and the source-code easier to navigate.

Let me know if you see any quirkiness with the new operators. This is a lot of typing, so it would be good to catch issues early!

v5.0.0-beta-56: .NET 10 version bump + new extension operators

The new version of .NET, version 10.0, has added capabilities for 'extension everything'. Well, almost everything. Most importantly for us is that it has added extension support for operators. And not just operators, but generic operators on interfaces and delegates!

I wasn't sure how far the csharplang team had taken it, but it turns out: just far enough to give us some reeeeeally useful capabilities.

This is an early release that:

  • Bumps the .NET version to 10.0 (as it's RTM now)
  • Adds support for ...
    • Functor operators
    • Applicative operators
    • Monad operators
    • Choice operators
    • Fallible operators
    • Final operators
    • Semigroup operators
    • SemigroupK operators
    • Bespoke downcast operators (previously only supported as the .As() extension method)

If your types implement any of the above traits then you get the operators for free.

Functor

>>> Functor operators

In a language like Haskell, you will often see something like this:

(\x -> x + 1) <$> mx

Where the left-hand side is the mapping function for the mx functor value.

With this release of language-ext we can now do the same:

(x => x + 1) * mx;

For example, let's create an IO operation that reads a line from the console:

var readLine = from ln in IO.lift(Console.ReadLine)
               from _  in guard(ln is not null, (Error)"expected a value")
               select ln;

We can then map the IO<string> to an IO<Option<int>> like so:

var op = (line => parseInt<Option>(line)) * readLine;

Which is the less terse version of this:

var op = (parseInt<Option>) * readLine;

Which is ultimately equivalent to:

var op = readLine.Map(parseInt<Option>);

I have chosen the * operator as this is applying each value in mx to the function and re-wrapping. Like a cross-product.

Applicative

>>> Applicative operators

The last example might not have seemed too compelling, but we can also do multi-argument function mapping using a combination of Functor and Applicative.

Going back to the Haskell example, you could use both functor map and applicative apply together to apply multiple arguments to a mapping function:

(\x y z -> x + y + z) <$> mx <*> my <*> mz

The above allows three applicative structures (like Option, IO, Either, Validation, etc.) to have their logic run to extract the values to pass to the lambda, which sums the values.

We can do the same:

((int x, int y, int z) => x + y + z) * mx  * my  * mz;

Unfortunately multi-argument applications must specify their lambda argument types (which we don't need to do for single-argument lambdas). But it's a small price to pay.

Before we would have had to write:

fun((int x, int y, int z) => x + y + z).Map(mx).Apply(my).Apply(mz);

Here's the above example with three IO<int> computations:

var mx = IO.pure(100);
var my = IO.pure(200);
var mz = IO.pure(300);

var mr = ((int x, int y, int z) => x + y + z) * mx  * my  * mz;

var r = mr.Run(); // 600

Monad

>>> Monad operators

Monad action

The next operator to gain additional features is the >> operator. Previously, I only used it for monad-action operations and couldn't fully generalise it. It has now been fully generalised to allow the sequencing of any two monad types.

var mx = IO.pure(100);   // IO<int>
var my = IO.pure(true);  // IO<bool>

var mr = mx >> my;       // IO<bool>

This forces the first computation to run before running the second one and returning its value.

Monad bind

The right-hand operand of >> can also be a bind-delegate of the form: Func<A, K<M, B>>, i.e., monad-bind.

Here's the readLine example from earlier, but we're using the IO returning version of parseInt.

var op = readLine >> (line => parseInt<IO>(line));

In Haskell this is the >>= operator.

We can be even more concise in this instance:

var op = readLine >> parseInt<IO>;

Ultimately, this is equivalent to:

var op = readLine.Bind(parseInt<IO>);

Choice

>>> Choice operators

Choice enables the | operator where the left-hand side is K<F, A> and the right-hand side is also K<F, A>. The idea being that if the left-hand side fails then the right-hand side is run.

var mx = IO.fail<int>((Error)"fails");
var my = IO.pure(100);
var mr = mx | my; 

var r = mr.Run(); // 100

This is good for providing default values, or standardised error messages, when an operation fails.

In the previous example we can add a default error message in case the parse fails. First, let's create an Error type. This would usually be a static readonly field somewhere in your app:

var numberExpected = Error.New("expected a number");

Then we can catch the error and provide a default message:

var op = readLine >> parseInt<IO> | IO.fail<int>(numberExpected);

That will yield an "expected a number" error if the parse fails.

| can also be used when Pure<A> is the right-hand operand and the F type is also an Applicative (allowing us to call F.Pure):

var op = readLine >> parseInt<IO> | Pure(100);

This allows for sensible default values when the parse fails.

Fallible

>>> Fallible operators

Fallible shares the same | operator as Choice, but the right-hand operand can only be: a CatchM<E, F, A> struct (created by the @catch functions) or a Fail<E> (created by the Prelude.Fail function) when working with the Fallible<E, F> trait. Additionally, the right-hand side can be the Error type for the Fallible<F> trait.

So, in the previous example we don't need to use IO.fail, we can use the Error value directly, because IO supports the Fallible<IO> trait:

var op = readLine >> parseInt<IO> | numberExpected;

Final

>>> Final operators

The Final trait allows for an operation to be run regardless of the success (or failure) of the left-hand side.

For example, if we create an IO operation to write "THE END" to the console:

var theEnd = IO.lift(() => Console.WriteLine("THE END"));

We can then use the | operator again to have a finally operation.

var op = readLine >> parseInt<IO> | final(theEnd);

Semigroup and SemigroupK

Semigroup supports the + operator for the Semigroup.Combine and SemigroupK.Combine associative binary operations. Usually used for concatenation or summing.
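As a minimal sketch (assuming the language-ext Seq type and Prelude are in scope; sequence concatenation is a classic semigroup):

```csharp
using LanguageExt;
using static LanguageExt.Prelude;

// Combine two sequences with the semigroup '+' operator
var xs = Seq(1, 2);
var ys = Seq(3, 4);

var zs = xs + ys;   // concatenation: [1, 2, 3, 4]
```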

Bespoke downcast operators

The .As() downcast extension method often requires wrapping an expression block with parentheses, which can be a bit ugly.

I've started implementing the prefix + operator for .As(). The beauty of it is that it works quite elegantly with LINQ expressions and the like. For example, in the ForkCancelExample in the EffectsExamples sample, you can see how this:

public static Eff<RT, Unit> main =>
   (from frk in fork(inner)
    from key in Console<RT>.readKey
    from _1  in frk.Cancel
    from _2  in Console<RT>.writeLine("done")
    select unit).As();

Becomes this:

public static Eff<RT, Unit> main =>
   +from frk in fork(inner)
    from key in Console<RT>.readKey
    from _1  in frk.Cancel
    from _2  in Console<RT>.writeLine("done")
    select unit;

It just instantly reduces the clutter and makes the expression easier to read.

Status

All of the operators will automatically work for any types that implement the traits listed above. But the hand-coded ones that return concrete types like IO<A> instead of K<IO, A> have so far only been implemented for:

  • IO<A>
  • Eff<A>
  • Eff<RT, A>

I will do the rest of the types over the next week or so. I just wanted to get an initial version up, so you can all have a play with it and to get any feedback you may have.

v5.0.0-beta-55: Discriminated unions refactor + Coproducts [BREAKING CHANGES]

This is quite a big change. I started writing it a few months back, then I moved house, and that disappeared several months of my life - all for very good reasons, so I'm not complaining! However, be aware, that I had a major context-switch halfway through this change. So, just be a little wary with this upgrade, there may be some inconsistencies here and there - especially as it was very much focusing on consistency!

There are also breaking changes (discussed in the next section). I don't want to be doing breaking changes at this stage of the beta, but there are good reasons, which I'll cover in the relevant sections.

Discriminated union changes (BREAKING CHANGES)

I have changed the discriminated-union types (Either, Validation, ...) to embed the case-types within the generic base-type instead of a non-generic module-type and changed the constructor functions to be embedded within the non-generic module-type instead of the generic base-type.

So, previously a type like Either<L, R> would be defined like so:

// Generic discriminated-union type
public abstract record Either<L, R>
{
    // Left constructor function
    public static Either<L, R> Left(L value) => 
        new Either.Left<L, R>(value);

    // Right constructor function
    public static Either<L, R> Right(R value) => 
        new Either.Right<L, R>(value);
}

// Module type
public class Either
{
    // Left case-type
    public sealed record Left<L, R>(L Value) : Either<L, R>;

    // Right case-type
    public sealed record Right<L, R>(R Value) : Either<L, R>;
}

Obviously this is a stripped down version, but pay attention to where the constructor functions are and where the case-types are.

Now Either<L, R> is defined like this:

// Generic discriminated-union type
public abstract record Either<L, R>
{
    // Left case-type
    public sealed record Left(L Value) : Either<L, R>;

    // Right case-type
    public sealed record Right(R Value) : Either<L, R>;
}

public class Either
{
    // Left constructor function
    public static Either<L, R> Left(L value) => 
        new Either<L, R>.Left(value);

    // Right constructor function
    public static Either<L, R> Right(R value) => 
        new Either<L, R>.Right(value);
}

This 'flip' will cause breaking changes to those using v5, which is not ideal, but I've been following the development of the unions proposal for C# and I believe it will future-proof language-ext users when real sum-types turn up in C#.

So, the case-type Either.Right<L, R> becomes Either<L, R>.Right and the constructor-function Either<L, R>.Right(R value) becomes Either.Right<L, R>(R value). In some circumstances, this makes the constructor functions slightly easier to use due to better generics inference.

All sum-types in language-ext have been changed to maintain consistency across the library and hopefully will mean much less upheaval when C# unions turn up. Migration is pretty mechanical too.
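For instance, a pattern-match migrates like this (a sketch based on the stripped-down Either definitions above):

```csharp
var either = Either.Right<string, int>(42);

// Before (beta-54 and earlier): case-types lived on the non-generic module
// if (either is Either.Right<string, int> r) ...

// After (beta-55): case-types are nested inside the generic union type
if (either is Either<string, int>.Right r)
    Console.WriteLine(r.Value);   // 42
```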

Apologies to those on v5 that will get hit by this, I don't want to be making large changes like this as I get closer to an RTM, but looking at the way the csharplang team are designing union-types, I think this will make any future migration easier, so it's best to do it now.

Lifting (BREAKING CHANGES)

One benefit of moving the constructor-functions outside of the type they construct is that we can put constraints on the functions that aren't on the type itself. This has tangible benefits for types that lift other types (like transformers).

And so, to try and build a consistent approach to construction, I've decided to move all lifting functions (which are just constructor functions with additional logic) out of the type they are lifting and into the static module.

So, where before this was possible:

var mx = IO<A>.Lift(operation);

We can now only use the module version:

var mx = IO.lift(operation);

Migration of your code to use the new constructor and lifting functions should be entirely mechanical; although time-consuming, it should be relatively straightforward.

IEnumerable support removed from Option, Either, Fin, and Validation (BREAKING CHANGES)

Extension methods without traits are unfortunately poison to our more principled trait-based approach. And so, I need to avoid inheriting IEnumerable or other .NET types that have a billion extension methods hanging off them.

Option<A>, Either<L, R>, Fin<A>, and Validation<F, A> no longer support IEnumerable. You can use them with IEnumerable by calling value.AsEnumerable().
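A minimal migration sketch, using the Prelude's Some constructor:

```csharp
using System;
using LanguageExt;
using static LanguageExt.Prelude;

var mx = Some(123);

// Before: Option<A> was IEnumerable<A>, so it could be enumerated directly.
// Now: convert explicitly when an IEnumerable view is needed.
foreach (var x in mx.AsEnumerable())
    Console.WriteLine(x);   // prints 123 for Some; nothing for None
```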

These<A, B> type

This is a brand new type: the These type is a bit like Either except it has three states rather than two. The states are:

  • This(A) - means "I have an alternative value" (like Left for Either)
    • This is the value that will short-circuit a monadic computation.
  • That(B) - means "I have a success value" (like Right for Either)
  • Both(A, B) - means we have a 'failure' value (A), but it's non-fatal because we also have a 'success' value (B). This allows an alternative ('failure') value to be captured whilst continuing a monadic operation. Think of it like a compiler warning: we have a failure value, but we can continue.
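A sketch of constructing the three states. The This/That/Both constructor-function names on a non-generic These module are my assumption (following the DU layout introduced in this release), not confirmed API:

```csharp
using LanguageExt;
using LanguageExt.Common;

// Hypothetical constructor functions, named after the three states:
These<Error, int> fatal   = These.This<Error, int>(Error.New("boom"));   // short-circuits
These<Error, int> success = These.That<Error, int>(42);
These<Error, int> warning = These.Both(Error.New("deprecated"), 42);     // warn, but continue
```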

ChronicleT<Ch, M, A> type

ChronicleT is a new transformer type that wraps up the These type (so, it is to These what EitherT is to Either). It has a few methods accessible via the Prelude, the ChronicleT module, or the ChronicleT<Ch, M> type. To reduce the amount of generics needed, use ChronicleT<Ch, M>:

Method                                       Alternative method   Description
ChronicleT<Ch, M>.dictate(A)                                      Lifts That(A) into the transformer (the success value).
ChronicleT<Ch, M>.confess(Ch)                                     Lifts This(Ch) into the transformer (the failure value).
ChronicleT<Ch, M>.chronicle(Ch, A)                                Lifts Both(Ch, A) into the transformer (the success and failure values).
ChronicleT.memento(ChronicleT<Ch, M, A>)     ma.Memento()         'Flattens' the tri-state into an Either dual-state.
ChronicleT.absolve(A, ChronicleT<Ch, M, A>)  ma.Absolve(A)        Forces the chronicle into a 'dictate' state (success). Takes a default A value (used if the structure doesn't already contain an A value).
ChronicleT.condemn(ChronicleT<Ch, M, A>)     ma.Condemn()         Takes any non-fatal error and makes it fatal: converting a Both(Ch, A) to a This(Ch).

Amongst other functionality.
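Putting the constructors from the table together, a hedged sketch (choosing Seq<Error> as the Ch chronicle type so failures can aggregate, and IO as the inner monad; the exact generic-argument placement may differ from the released API):

```csharp
using LanguageExt;
using LanguageExt.Common;
using static LanguageExt.Prelude;

// Success value only
var ok   = ChronicleT<Seq<Error>, IO>.dictate(42);

// Non-fatal: a warning captured alongside a success value
var warn = ChronicleT<Seq<Error>, IO>.chronicle(Seq(Error.New("deprecated")), 42);

// Fatal: failure only, short-circuits the computation
var bad  = ChronicleT<Seq<Error>, IO>.confess<int>(Seq(Error.New("fatal")));
```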

CoproductCons, Coproduct, and CoproductK traits

One thing that has been building for a while is the need to standardise coproduct types. With the new These and ChronicleT types, we have two more coproducts. Coproducts are the dual of products. Product types are records and tuples. For example, a 2-tuple is (A * B) which means the number of values in the set of A multiplied by the number of values in the set of B.

So, a tuple (bool, A) would be 2 * A. A tuple (bool, uint) would be 2 * 2^32 (uint has 2^32 possible values). It represents the total possible combinations of values that can be stored in the type.

Coproduct types are sum-types (we don't multiply, we add). The classic sum-type is Either<L, R>, which means L + R. So, Either<bool, A> would be 2 + A possible values.

Hence the name algebraic data-types!

We now have a number of sum-types (coproducts), so standardising traits to work with all coproducts makes sense:

CoproductCons<F>
public interface CoproductCons<in F>
    where F : CoproductCons<F>
{
    /// <summary>
    /// Construct a coproduct structure in a 'Left' state
    /// </summary>
    /// <param name="value">Left value</param>
    /// <typeparam name="A">Left value type</typeparam>
    /// <typeparam name="B">Right value type</typeparam>
    /// <returns>Constructed coproduct structure</returns>
    public static abstract K<F, A, B> Left<A, B>(A value);
    
    /// <summary>
    /// Construct a coproduct structure in a 'Right' state
    /// </summary>
    /// <param name="value">Right value</param>
    /// <typeparam name="A">Left value type</typeparam>
    /// <typeparam name="B">Right value type</typeparam>
    /// <returns>Constructed coproduct structure</returns>
    public static abstract K<F, A, B> Right<A, B>(B value);
}

CoproductCons allows construction of the coproduct F. We take the Left and Right parlance from Either, but it can construct any type F<A, B>. This is like Applicative.Pure which allows us to construct a new applicative structure F<A>, but with CoproductCons we can represent alternative values too. This allows for standardising certain functionality where an alternative value is needed.

NOTE: There is crossover here to the Fallible type, but Fallible is very much about supporting alternative failure values, coproducts are more general in that the alternative-value doesn't necessarily represent failure. Fallible also expects the failure-value to be fixed, whereas coproducts don't have that constraint.

Coproduct<F>
public interface Coproduct<F> : CoproductCons<F>
    where F : Coproduct<F>
{
    // Abstract interface

    public static abstract C Match<A, B, C>(Func<A, C> Left, Func<B, C> Right, K<F, A, B> fab);
    
    // Default virtual methods

    public static virtual C Match<A, B, C>(C Left, Func<B, C> Right, K<F, A, B> fab) =>
        F.Match(_ => Left, Right, fab);
    
    public static virtual C Match<A, B, C>(Func<A, C> Left, C Right, K<F, A, B> fab) =>
        F.Match(Left, _ => Right, fab);
    
    public static virtual B IfLeft<A, B>(Func<A, B> Left, K<F, A, B> fab) =>
        F.Match(Left, identity, fab);

    public static virtual B IfLeft<A, B>(B Left, K<F, A, B> fab) =>
        F.Match(_ => Left, identity, fab);
    
    public static virtual A IfRight<A, B>(Func<B, A> Right,  K<F, A, B> fab) =>
        F.Match(identity, Right, fab);
    
    public static virtual A IfRight<A, B>(A Right, K<F, A, B> fab) => 
        F.Match(identity, _ => Right, fab);    
    
    public static virtual (Seq<A> Lefts, Seq<B> Rights) Partition<FF, A, B>(K<FF, K<F, A, B>> fabs)
        where FF : Foldable<FF> =>
        fabs.Fold((Lefts: Seq<A>(), Rights: Seq<B>()), 
                  (s, fab) => fab.Match(Left: l => s with { Lefts = s.Lefts.Add(l) },
                                        Right: r => s with { Rights = s.Rights.Add(r) }));
    
    public static virtual Seq<A> Lefts<G, A, B>(K<G, K<F, A, B>> fabs)
        where G : Foldable<G> =>
        fabs.Fold(Seq<A>(), (s, fab) => fab.Match(Left: s.Add, Right: s));
    
    public static virtual Seq<B> Rights<G, A, B>(K<G, K<F, A, B>> fabs)
        where G : Foldable<G> =>
        fabs.Fold(Seq<B>(), (s, fab) => fab.Match(Left: s, Right: s.Add));
}

Coproduct<F> brings in the opportunity to pattern-match on the coproduct structure so that we can work with the A or the B type. Note that it inherits CoproductCons<F>.
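As a sketch of what the defaults buy you, here's Partition over a Seq of Either values (Seq being Foldable). Whether this surfaces as an extension method named Partition is my assumption:

```csharp
using LanguageExt;
using static LanguageExt.Prelude;

var items = Seq(Either.Right<string, int>(1),
                Either.Left<string, int>("bad"),
                Either.Right<string, int>(2));

// Hypothetical extension routed through Coproduct.Partition:
var (lefts, rights) = items.Partition();
// lefts:  ["bad"]
// rights: [1, 2]
```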

Another thing to note is that we depend on the K<F, A, B> type rather than the K<F, A> type. So if you're building your own coproduct types, then you need to derive your instance type from K<F, A, B> and K<F, B>. See Either<L, R> for an example:

K<Either<L>, R>,
K<Either, L, R>

That also means you need two trait-implementation classes: one for the K<F, B> types and one for the K<F, A, B> types.

So, Either has:

public class Either<L> : 
    Monad<Either<L>>, 
    Fallible<L, Either<L>>,
    Traversable<Either<L>>,
    Natural<Either<L>, Option>,
    Choice<Either<L>>
{
    ...
}

Which are the trait-implementations that support K<Either<L>, R> (where the L is 'baked in'). And...

public partial class Either :
    Coproduct<Either>,
    Bimonad<Either>
{
    ...
}

Which are the trait-implementations that support K<Either, L, R>.

Note that L is parametric (in the trait methods). These coproduct traits only work when the alternative-value is fully parametric. So, you can't implement coproduct for Fin<A> for example, because the alternative-value (Error) is fixed.

CoproductK<F>
public interface CoproductK<F> : CoproductCons<F>
    where F : CoproductK<F>
{
    // Abstract interface

    public static abstract K<F, A, C> Match<A, B, C>(Func<A, C> Left, Func<B, C> Right, K<F, A, B> fab);

    // Default virtual methods

    public static virtual K<F, A, C> Match<A, B, C>(C Left, Func<B, C> Right, K<F, A, B> fab) =>
        F.Match(_ => Left, Right, fab);

    public static virtual K<F, A, C> Match<A, B, C>(Func<A, C> Left, C Right, K<F, A, B> fab) =>
        F.Match(Left, _ => Right, fab);

    public static virtual K<F, A, B> IfLeft<A, B>(Func<A, B> Left, K<F, A, B> fab) =>
        F.Match(Left, identity, fab);

    public static virtual K<F, A, B> IfLeft<A, B>(B Left, K<F, A, B> fab) =>
        F.Match(_ => Left, identity, fab);

    public static virtual K<F, A, A> IfRight<A, B>(Func<B, A> Right, K<F, A, B> fab) =>
        F.Match(identity, Right, fab);

    public static virtual K<F, A, A> IfRight<A, B>(A Right, K<F, A, B> fab) =>
        F.Match(identity, _ => Right, fab);
}

One thing to note with Coproduct<F> is that the return types of the Match methods are non-lifted types (like A, B, and C). Certain coproduct types (like the transformer coproduct: EitherT) must be evaluated before we can extract the inner value, and that can't always happen. So, the best we can do is re-lift the resulting A, B, or C values into K<F, A, A>, K<F, A, B>, or K<F, A, C>, because we know we can create those.

This is what the CoproductK<F> variant is for: it's the same as Coproduct<F>, but the results always stay lifted.

Bifunctor trait

  • First renamed MapFirst
  • Second renamed MapSecond
public interface Bifunctor<F> 
    where F : Bifunctor<F>
{
    public static abstract K<F, M, B> BiMap<L, A, M, B>(
        Func<L, M> first, 
        Func<A, B> second, 
        K<F, L, A> fab);

    public static virtual K<F, M, A> MapFirst<L, A, M>(Func<L, M> first, K<F, L, A> fab) =>
        F.BiMap(first, identity, fab);
    
    public static virtual K<F, L, B> MapSecond<L, A, B>(Func<A, B> second, K<F, L, A> fab) =>
        F.BiMap(identity, second, fab);
}

Bimonad trait

Through BindFirst and BindSecond we can now do monadic-bind behaviour on both sides of a coproduct.

public interface Bimonad<M> : Bifunctor<M> 
    where M : Bimonad<M>
{
    public static abstract K<M, Y, A> BindFirst<X, Y, A>(
        K<M, X, A> ma, 
        Func<X, K<M, Y, A>> f);
    
    public static abstract K<M, X, B> BindSecond<X, A, B>(
        K<M, X, A> ma, 
        Func<A, K<M, X, B>> f);

    public static virtual K<M, X, A> FlattenFirst<X, A>(K<M, K<M, X, A>, A> mma) =>
        M.BindFirst(mma, identity);    
    
    public static virtual K<M, X, A> FlattenSecond<X, A>(K<M, X, K<M, X, A>> mma) =>
        M.BindSecond(mma, identity);
}

Coreadable trait

A restriction on the original Readable trait can be lifted by also providing a Coreadable implementation for your types. The original Readable trait has a method called Local:

public interface Readable<M, Env>
    where M : Readable<M, Env>
{
    public static abstract K<M, A> Asks<A>(Func<Env, A> f);

    public static virtual K<M, Env> Ask =>
        M.Asks(Prelude.identity);

    public static abstract K<M, A> Local<A>(Func<Env, Env> f, K<M, A> ma);
}

It maps the readable 'environment' value. But it can't (in generalised form) map to a different environment type, and so that's left to concrete implementations in ReaderT and the like.

With Coreadable that limitation goes away:

public interface Coreadable<M>
    where M : Coreadable<M>
{
    public static abstract K<M, Env, A> Asks<Env, A>(Func<Env, A> f);

    public static virtual K<M, Env, Env> Ask<Env>() =>
        M.Asks<Env, Env>(Prelude.identity);

    public static abstract K<M, Env1, A> Local<Env, Env1, A>(
        Func<Env, Env1> f, 
        K<M, Env, A> ma);
}

Experimental features

One of the limitations of the trait approach in language-ext is that we can have only one trait-implementation class (well, one for K<F, A> and one for K<F, A, B> if we're creating product or coproduct types). But this limitation means certain traits, which should be ad-hoc, are left un-implemented because we would over-constrain the type if we used them in a non-ad-hoc way.

For example, on Validation<F, A> the F was constrained to Monoid<F>. This allows for the aggregation of multiple failure values when using the applicative functionality and for a default 'empty' state for Validation<F, A>.Empty(). However, that stops us from implementing Bifunctor and Bimonad (which both map the F value to a new type that hasn't got the Monoid constraint), limiting the extent of generic functionality that can be implemented.

This problem raises its head in a number of places in language-ext and it makes our traits less powerful than Haskell traits.

So, in this release I have some experimental features that loosen the trait constraints but tighten them elsewhere. For example, Validation<F, A> doesn't have a constraint of Monoid<F> on its type any more. But it does on Validation.Success<F, A>(value) and Validation.Fail<F, A>(error) - so the constructors act as a Monoid-trait gatekeeper, but the type itself doesn't.

That means you could map to a non-monoidal F type. The behaviour for Validation would be to turn off the aggregation of errors. Other types might throw if the ad-hoc resolution doesn't resolve to a valid instance. This isn't ideal, but it's a truly exceptional event, so should be caught on first use.

This behaviour extends a little further for types that aren't pure data-types, but are computations, like ValidationT:

This is what ValidationT looked like before:

public record ValidationT<F, M, A>(K<M, Validation<F, A>> runValidation)
    where F : Monoid<F>;

Note the Monoid<F> constraint.

And this is what it looks like now:

public record ValidationT<F, M, A>(Func<MonoidInstance<F>, K<M, Validation<F, A>>> runValidation);

The Monoid<F> constraint has been removed and now the K<M, Validation<F, A>> runValidation field has been wrapped with a Func that provides a MonoidInstance<F>. This works like a reader monad and means the instance can be provided when the monad-transformer is Run.

We can therefore put the Monoid constraint on the Run method and even allow for bespoke MonoidInstance<F> values to be provided.
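A hedged sketch of what that might look like; the Run extension name and signature here are my assumption, not confirmed API:

```csharp
public static class ValidationTExtensions
{
    // Hypothetical: the Monoid<F> constraint lives on Run, not on the type,
    // and F.Instance supplies the MonoidInstance<F> to the reader-like field.
    public static K<M, Validation<F, A>> Run<F, M, A>(this ValidationT<F, M, A> ma)
        where F : Monoid<F> =>
        ma.runValidation(F.Instance);
}
```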

For this experiment I have added SemigroupInstance<A> and MonoidInstance<A>...

SemigroupInstance<A>

SemigroupInstance provides a Combine function for A:

public record SemigroupInstance<A>(Func<A, A, A> Combine)
{
    public static Option<SemigroupInstance<A>> Instance { get; } =
        Try.lift(GetInstance)
           .ToOption()
           .Bind(x => x is null ? None : Some(x));
    
    static SemigroupInstance<A>? GetInstance()
    {
        var type  = typeof(Semigroup<>).MakeGenericType(typeof(A));
        var prop  = type.GetProperty("Instance");
        var value = prop?.GetValue(null);
        return (SemigroupInstance<A>?)value;
    }    
}

The Semigroup<A> trait has also been extended to give access to the instance value:

public interface Semigroup<A>
    where A : Semigroup<A>
{
    public A Combine(A rhs);
    
    public static virtual A operator +(A lhs, A rhs) =>
        lhs.Combine(rhs);

    /// <summary>
    /// Property that contains the trait in record form.  This allows the trait to be passed
    /// around as a value rather than resolved as a type.  It helps us get around limitations
    /// in the C# constraint system.
    /// </summary>
    public static virtual SemigroupInstance<A> Instance { get; } =
        new(Combine: Semigroup.combine);
}

MonoidInstance<A>

MonoidInstance provides a Combine function (by inheriting SemigroupInstance<A>) and an Empty property for A:

public record MonoidInstance<A>(A Empty, Func<A, A, A> Combine) :
    SemigroupInstance<A>(Combine)
{
    public new static Option<MonoidInstance<A>> Instance { get; } =
        Try.lift(GetInstance)
           .ToOption()
           .Bind(x => x is null ? None : Some(x));

    static MonoidInstance<A>? GetInstance()
    {
        var type = typeof(Monoid<>).MakeGenericType(typeof(A));
        var prop = type.GetProperty("Instance");
        var value = prop?.GetValue(null);
        return (MonoidInstance<A>?)value;
    }
}

As with Semigroup<A>, we also have access to a MonoidInstance<A> value from the Monoid<A> trait:

public interface Monoid<A> : Semigroup<A>
    where A : Monoid<A>
{
    public static abstract A Empty { get; }

    /// <summary>
    /// Property that contains the trait in record form.  This allows the trait to be passed
    /// around as a value rather than resolved as a type.  It helps us get around limitations
    /// in the C# constraint system.
    /// </summary>
    public new static virtual MonoidInstance<A> Instance { get; } =
        new (Empty: A.Empty, Combine: Semigroup.combine);
}

Experimental features conclusion

I'm not 100% happy with this feature. Ad-hoc polymorphism in this sense is effectively runtime resolution rather than compile-time. That opens up risks for unexpected runtime errors. They're unlikely to cause systemic issues, but still, compile-time is better.

However, this approach allows certain types to have decent fallback behaviour if a type hasn't implemented a required trait. We can also limit the constraints to the functions that need them.

Previously Validation<F, A> was a monoid because of one function: Validation<F, A>.Empty(), and it was a semigroup because of Apply and Combine. It would be preferable if we didn't have to constrain everything that touches Validation<F, A> when we're not using Empty, Apply, or Combine. This, I feel, is a reasonably strong argument in favour of the approach. However, I'd still like to minimise the ad-hoc instance resolution.

Update on version 5 release

With the .NET 10 release being imminent, I feel it's probably a good idea to align the release of language-ext version 5 with the .NET 10 release. I do want to access the 'extension operators' feature and fix up the operator inference before I do the language-ext RTM, but my thinking is to release within a month of .NET 10 RTM.

So, watch this space... v5 RTM coming soon!

v5.0.0-beta-54: Refining the Maybe.MonadIO concept

A previous idea to split the MonadIO trait into two traits (Traits.MonadIO and Maybe.MonadIO) has allowed monad-transformers to pass IO functionality down the transformer-chain, even if the outer layers of the transformer-chain aren't 'IO capable'.

This works as long as the inner monad in the transformer-chain is the IO<A> monad.

There are two distinct types of functionality in the MonadIO trait:

  • IO lifting functionality (via MonadIO.LiftIO)
  • IO unlifting functionality (via MonadIO.ToIO and MonadIO.MapIO)

Problem no.1

It is almost always possible to implement LiftIO, but it is often impossible to implement ToIO (the minimum required unlifting implementation) without breaking composition laws.

Much of the 'IO functionality for free' of MonadIO comes from leveraging ToIO (for example, Repeat, Fork, Local, Await, Bracket, etc.) -- and so if ToIO isn't available and has a default implementation that throws an exception, then Repeat, Fork, Local, Await, Bracket, etc. will also all throw.

This feels wrong to me.

Problem no.2

Because of the implementation hierarchy:

Maybe.MonadIO<M>
      ↓
   Monad<M>
      ↓
  MonadIO<M>

Methods like LiftIO and ToIO, which have default-implementations (that throw) in Maybe.MonadIO<M>, don't have their overridden implementations enforced when someone implements MonadIO<M>. We can just leave LiftIO and ToIO on their defaults, which means inheriting from MonadIO<M> has no implementation guarantees.
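The C# mechanics behind this problem can be sketched with plain default interface members (names hypothetical, not the actual language-ext definitions): a default implementation that throws is silently inherited, so nothing forces an implementor to override it.

```csharp
using System;

interface MaybeMonadIO
{
    // Default implementation that throws: implementors are not forced to override it
    void LiftIO() => throw new NotSupportedException("IO lifting not supported");
}

interface MonadIO : MaybeMonadIO
{
    // Inheriting adds no new obligation: a type can claim MonadIO support and
    // still be left with the throwing default above.
}

class MyMonad : MonadIO { }  // compiles fine; MyMonad's LiftIO throws at runtime

// Re-abstracting the member in the derived interface is what forces
// implementors to supply a real body:
//     interface MonadIO : MaybeMonadIO { abstract void MaybeMonadIO.LiftIO(); }
```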

Solution

  1. Split MonadIO (and Maybe.MonadIO) into distinct traits:
    • MonadIO and Maybe.MonadIO for lifting functionality (LiftIO)
    • MonadUnliftIO and Maybe.MonadUnliftIO for unlifting functionality (ToIO and MapIO)
    • The thinking here is that when unlifting can't be supported (in types like StateT and OptionT) we only implement MonadIO, but in types where unlifting can be supported we implement both MonadIO and MonadUnliftIO.
  2. In MonadIO and MonadUnliftIO (the non-Maybe versions) we make abstract the methods that previously had default virtual (exception throwing) implementations.
    • That means anyone stating their type supports IO must implement it!
  3. Make all methods in Maybe.MonadIO and Maybe.MonadUnliftIO have the *Maybe suffix (so LiftIOMaybe, ToIOMaybe, etc.)
    • The thinking here is that for monad-transformer 'IO passing' we can still call the Maybe variants, but in the code it's declarative: we can see it might not work.
    • Then in MonadIO and MonadUnliftIO (the non-Maybe versions) we can override LiftIOMaybe, ToIOMaybe, and MapIOMaybe and get them to invoke the bespoke LiftIO, ToIO, and MapIO from MonadIO and MonadUnliftIO.
    • That means all default functionality (Repeat, Fork, Local, Await, Bracket, etc.) gets routed to the bespoke IO functionality for the type.

The implementation hierarchy now looks like this:

   Maybe.MonadIO<M>
         ↓
Maybe.MonadUnliftIO<M>
         ↓
      Monad<M>
         ↓
     MonadIO<M>
         ↓
  MonadUnliftIO<M>

This should (if I've got it right) lead to more type-safe implementations, fewer exceptional errors for IO functionality not implemented, and a slightly clearer implementation path. It's more elegant because we override implementations in MonadIO and MonadUnliftIO, not the Maybe versions. So, it feels more 'intentional'.

For example, this will work, because ReaderT implements MonadUnliftIO and therefore supports both lifting and unlifting:

    ReaderT<E, IO, A> mx;

    var my = mx.ForkIO();    // compiles

Whereas this won't compile, because StateT can only support lifting (by implementing MonadIO):

    StateT<S, IO, A> mx;

    var my = mx.ForkIO();    // type-constraint error

If you try to implement MonadUnliftIO for StateT, you quickly run into the fact that StateT (when run) yields a tuple, which isn't compatible with the singleton value needed for ToIO. The only way to make it work is to drop the yielded state, which breaks composition rules.

Previously, this wasn't visible to the user because it was hidden in default implementations that threw exceptions.

@​micmarsh @​hermanda19 if you are able to cast a critical eye on this and let me know what you think, that would be super helpful?

I ended up trying a number of different approaches and my eyes have glazed over somewhat, so treat this release with some caution. I think it's good, but critique and secondary eyes would be helpful! That goes for anyone else interested too.

Thanks in advance 👍

v5.0.0-beta-52: IObservable support in Source and SourceT

IObservable can now be lifted into Source and SourceT types (via Source.lift, SourceT.lift, and SourceT.liftM).

Source and SourceT now support lifting of the following types:

  • IObservable
  • IEnumerable
  • IAsyncEnumerable
  • System.Threading.Channels.Channel

And, because both Source and SourceT can be converted to Producer and ProducerT (via ToProducer and ToProducerT), all of the above types can therefore also be used in Pipes.
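A minimal sketch of the round-trip described above (the lift/liftM and ToProducerT names come from the release notes; the exact signatures and type parameters are assumptions):

```csharp
// Sketch only: generic arguments and overload shapes are guesses
IObservable<int> ticks = GetTicks();          // GetTicks is hypothetical

Source<int>      src  = Source.lift(ticks);   // lift IObservable into an open stream
SourceT<IO, int> srcT = SourceT.lift<IO, int>(ticks);

// Open stream -> closed-stream component, usable in a Pipes pipeline
ProducerT<int, IO, Unit> producer = srcT.ToProducerT();
```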

More general support for foldables coming soon

v5.0.0-beta-51: LanguageExt.Streaming + MonadIO + Deriving

Features:

  • New streaming library
    • Transducers are back
    • Closed streams
      • Pipes
    • Open streams
      • Source
      • SourceT
      • Sink
      • SinkT
      • Conduit
      • ConduitT
    • Open to closed streams
  • Deprecated Pipes library
  • MonadIO
  • Deriving
  • Bug fixes

New streaming library

A seemingly innocuous bug in the StreamT type opened up a rabbit hole of problems that needed a fundamental rewrite to fix. In the process more and more thoughts came to my mind about bringing the streaming functionality under one roof. So, now, there's a new language-ext library LanguageExt.Streaming and the LanguageExt.Pipes library has been deprecated.

This is the structure of the Streaming library:

[diagram: structure of the LanguageExt.Streaming library]

Transducers are back

Transducers were going to be the big feature of v5 before I worked out the new trait-system. Bringing them in on top of all of the traits was going to be too much effort, but now, with the new streaming functionality, they are hella useful again. So, I've re-added Transducer and added a new TransducerM (which can work with lifted types). Right now the functionality is relatively limited, but you can extend the set of transducers as much as you like by deriving new types from Transducer and TransducerM.

Documentation

The API documentation has some introductory information on the streaming functionality. It's a little light at the moment because I wanted to get the release done, but it's still useful to look at:


The Streaming library of language-ext is all about compositional streams. There are two key types of streaming
functionality: closed-streams and open-streams...

Closed streams

Closed streams are facilitated by the Pipes system. The types in the Pipes system are compositional
monad-transformers
that 'fuse' together to produce an EffectT<M, A>. This effect is a closed system,
meaning that there is no way (from the API) to directly interact with the effect from the outside: it can be executed
and will return a result if it terminates.

The pipeline components are:

  • ProducerT<OUT, M, A>
  • PipeT<IN, OUT, M, A>
  • ConsumerT<IN, M, A>

These are the components that fuse together (using the | operator) to make an EffectT<M, A>. The
types are monad-transformers that support lifting monads with the MonadIO trait only (which constrains M). This
makes sense, otherwise the closed-system would have no effect other than heating up the CPU.

There are also more specialised versions of the above that only support the lifting of the Eff<RT, A> effect-monad:

  • Producer<RT, OUT, A>
  • Pipe<RT, IN, OUT, A>
  • Consumer<RT, IN, A>

They all fuse together into an Effect<RT, A>.

Pipes are especially useful if you want to build reusable streaming components that you can glue together ad infinitum.
Pipes are, arguably, less useful for day-to-day stream processing, like handling events, but your mileage may vary.

More details on the Pipes page.

Open streams

Open streams are closer to what most C# devs have used classically. They are like events or IObservable streams.
They yield values and (under certain circumstances) accept inputs.

  • Source and SourceT yield values synchronously or asynchronously depending on their construction. They can support multiple readers.
  • Sink and SinkT receive values and propagate them through the channel they're attached to. They can support multiple writers.
  • Conduit and ConduitT provide an input transducer (acts like a Sink), an internal buffer, and an output transducer (acts like a Source). They support multiple writers and one reader, but can yield a Source/SourceT that allows for multiple readers.

I'm calling these 'open streams' because we can Post values to a Sink/SinkT and we can Reduce values yielded by
Source/SourceT. So, they are 'open' for public manipulation, unlike Pipes which fuse the public access away.

Source

Source<A> is the 'classic stream': you can lift any of the following types into it: System.Threading.Channels.Channel<A>,
IEnumerable<A>, IAsyncEnumerable<A>, or singleton values. To process a stream, you need to use one of the Reduce
or ReduceAsync variants. These take Reducer delegates as arguments. They are essentially a fold over the stream of
values, which results in an aggregated state once the stream has completed. These reducers can be seen to play a similar
role to Subscribe in IObservable streams, but are more principled because they return a value (which we can leverage
to carry state for the duration of the stream).

Source also supports some built-in reducers:

  • Last - aggregates no state, simply returns the last item yielded
  • Iter - this forces evaluation of the stream, aggregating no state, and ignoring all yielded values.
  • Collect - adds all yielded values to a Seq<A>, which is then returned upon stream completion.
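A usage sketch, under the assumption that a Reducer is roughly a (state, value) -> state fold (the exact delegate shape isn't shown in these notes):

```csharp
// Hypothetical shapes: Reduce's signature is assumed to be (seed, folder)
Source<int> numbers = Source.lift(Enumerable.Range(1, 3));

// Fold the whole stream into a single state: like Subscribe, but value-returning
var sum = numbers.Reduce(0, (state, x) => state + x);

// Or use a built-in reducer: Collect gathers every yielded value into a Seq<int>
var all = numbers.Collect();
```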

SourceT

SourceT<M, A> is the classic-stream embellished - it turns the stream into a monad-transformer that can
lift any MonadIO-enabled monad (M), allowing side effects to be embedded into the stream in a principled way.

So, for example, to use the IO<A> monad with SourceT, simply use: SourceT<IO, A>. Then you can use one of the
following static methods on the SourceT type to lift IO<A> effects into a stream:

  • SourceT.liftM(IO<A> effect) creates a singleton-stream
  • SourceT.foreverM(IO<A> effect) creates an infinite stream, repeating the same effect over and over
  • SourceT.liftM(Channel<IO<A>> channel) lifts a System.Threading.Channels.Channel of effects
  • SourceT.liftM(IEnumerable<IO<A>> effects) lifts an IEnumerable of effects
  • SourceT.liftM(IAsyncEnumerable<IO<A>> effects) lifts an IAsyncEnumerable of effects

Obviously, when lifting non-IO monads, the types above change.

SourceT also supports the same built-in convenience reducers as Source (Last, Iter, Collect).
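Putting the liftM/foreverM constructors together with a reducer, as a sketch (IO.lift and the constructor names appear in the notes; the rest is assumed):

```csharp
// Lift a side-effect into IO (IO.lift wraps a thunk)
var readLine = IO.lift(() => Console.ReadLine() ?? "");

SourceT<IO, string> once    = SourceT.liftM(readLine);    // singleton stream
SourceT<IO, string> forever = SourceT.foreverM(readLine); // repeats the effect

// Reducing runs the lifted effects as the stream is consumed; Collect gathers
// every line into a Seq<string> (and would never terminate for `forever`)
var lines = once.Collect();
```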

Sink

Sink<A> provides a way to accept many input values. The values are buffered until consumed. The sink can be
thought of as a System.Threading.Channels.Channel (which is the buffer that collects the values) that happens to
manipulate the values being posted to the buffer just before they are stored.

This manipulation is possible because the Sink is a CoFunctor (contravariant functor). This is the dual of Functor:
we can think of Functor.Map as converting a value from A -> B. Whereas CoFunctor.Comap converts from B -> A.

So, to manipulate values coming into the Sink, use Comap. It will give you a new Sink with the manipulation 'built-in'.
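As a sketch, Comap's direction on Sink is B -> A: to accept a new input type you supply a function from that type to the sink's existing input type (the Sink<int> here is hypothetical):

```csharp
// A sink that buffers ints (construction elided)
Sink<int> lengths = GetLengthSink();   // hypothetical helper

// Comap : (B -> A) -> new Sink<B>; here B = string, A = int
Sink<string> strings = lengths.Comap((string s) => s.Length);

// Posting "hello" to `strings` stores 5 in the underlying buffer
```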

SinkT

SinkT<M, A> provides a way to accept many input values. The values are buffered until consumed. The sink can
be thought of as a System.Threading.Channels.Channel (which is the buffer that collects the values) that happens to
manipulate the values being posted to the buffer just before they are stored.

This manipulation is possible because the SinkT is a CoFunctor (contravariant functor). This is the dual of Functor:
we can think of Functor.Map as converting a value from A -> B. Whereas CoFunctor.Comap converts from B -> A.

So, to manipulate values coming into the SinkT, use Comap. It will give you a new SinkT with the manipulation 'built-in'.

SinkT is also a transformer that lifts types of K<M, A>.

Conduit

Conduit<A, B> can be pictured as so:

+----------------------------------------------------------------+
|                                                                |
|  A --> Transducer --> X --> Buffer --> X --> Transducer --> B  |
|                                                                |
+----------------------------------------------------------------+
  • A value of A is posted to the Conduit (via Post)
  • It flows through an input Transducer, mapping the A value to X (an internal type you can't see)
  • The X value is then stored in the conduit's internal buffer (a System.Threading.Channels.Channel)
  • Any invocation of Reduce will force the consumption of the values in the buffer
  • Flowing each value X through the output Transducer

So the input and output transducers allow for pre and post-processing of values as they flow through the conduit.
Conduit is a CoFunctor, call Comap to manipulate the pre-processing transducer. Conduit is also a Functor, call
Map to manipulate the post-processing transducer. There are other non-trait, but common behaviours, like FoldWhile,
Filter, Skip, Take, etc.

Conduit supports access to a Sink and a Source for more advanced processing.
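Given some existing conduit, the pre- and post-processing described above composes like this sketch (only Comap and Map are taken from the text; the conduit itself is assumed):

```csharp
Conduit<int, int> c = GetConduit();   // hypothetical: construction elided

// Comap builds up the input (pre-processing) transducer: string -> int on the way in
Conduit<string, int> pre = c.Comap((string s) => s.Length);

// Map builds up the output (post-processing) transducer: int -> string on the way out
Conduit<string, string> both = pre.Map(x => x.ToString());
```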

ConduitT

ConduitT<M, A, B> can be pictured as so:

+------------------------------------------------------------------------------------------+
|                                                                                          |
|  K<M, A> --> TransducerM --> K<M, X> --> Buffer --> K<M, X> --> TransducerM --> K<M, B>  |
|                                                                                          |
+------------------------------------------------------------------------------------------+
  • A value of K<M, A> is posted to the Conduit (via Post)
  • It flows through an input TransducerM, mapping the K<M, A> value to K<M, X> (an internal type you can't see)
  • The K<M, X> value is then stored in the conduit's internal buffer (a System.Threading.Channels.Channel)
  • Any invocation of Reduce will force the consumption of the values in the buffer
  • Flowing each value K<M, X> through the output TransducerM

So the input and output transducers allow for pre and post-processing of values as they flow through the conduit.
ConduitT is a CoFunctor, call Comap to manipulate the pre-processing transducer. ConduitT is also a Functor, call
Map to manipulate the post-processing transducer. There are other non-trait, but common behaviours, like FoldWhile,
Filter, Skip, Take, etc.

ConduitT supports access to a SinkT and a SourceT for more advanced processing.

Open to closed streams

Clearly, even for 'closed systems' like the Pipes system, it would be beneficial to be able to post values
into the streams from the outside. And so, the open-stream components can all be converted into Pipes components
like ProducerT and ConsumerT.

  • Conduit and ConduitT support ToProducer, ToProducerT, ToConsumer, and ToConsumerT.
  • Sink and SinkT support ToConsumer and ToConsumerT.
  • Source and SourceT support ToProducer and ToProducerT.

This allows for the ultimate flexibility in your choice of streaming effect. It also allows for efficient concurrency in
the more abstract and compositional world of the pipes. In fact, ProducerT.merge, which merges many streams into one,
uses ConduitT internally to collect the values and to merge them into a single ProducerT.
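For instance, an open SourceT can be fused straight into a closed pipeline, as a sketch (the | fusion and ToProducerT come from the notes; the consumer is left abstract):

```csharp
SourceT<IO, int> src = SourceT.liftM(IO.pure(42));

ProducerT<int, IO, Unit> producer = src.ToProducerT();
ConsumerT<int, IO, Unit> consumer = GetConsumer();   // hypothetical

// Fuse the open-stream-derived producer with a consumer into a runnable effect
EffectT<IO, Unit> effect = producer | consumer;
```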

MonadIO

Based on this discussion I have refactored Monad and MonadIO, and created a new Maybe.MonadIO. This achieves the aim of making MonadIO a useful trait and constraint. The one difference between the proposal and my implementation is that I didn't make MonadT inherit MonadIO.

Any monad-transformer must add its own MonadIO constraint if it allows IO to be lifted into the transformer. This is more principled, I think. It allows some transformers to be explicitly non-IO if necessary.

All of the core monad-transformers support MonadIO -- so the ultimate goal has been achieved.

Deriving

Anybody who's used Haskell knows the deriving keyword and its ability to provide trait-implementations automatically (for traits like Functor and the like). This saves writing a load of boilerplate. Well, thanks to a suggestion by @​micmarsh, we can now do the same.

The technique uses natural-transformations to convert to and from the wrapper type. You can see this in action in the CardGame sample. The Game trait-implementation looks like this:

public partial class Game :
    Deriving.Monad<Game, StateT<GameState, OptionT<IO>>>,
    Deriving.SemigroupK<Game, StateT<GameState, OptionT<IO>>>,
    Deriving.Stateful<Game, StateT<GameState, OptionT<IO>>, GameState>
{
    public static K<StateT<GameState, OptionT<IO>>, A> Transform<A>(K<Game, A> fa) =>
        fa.As().runGame;

    public static K<Game, A> CoTransform<A>(K<StateT<GameState, OptionT<IO>>, A> fa) => 
        new Game<A>(fa.As());
}

The only things that need implementing are the Transform and CoTransform methods. They simply unpack the underlying implementation or repack it. Deriving.Monad then implements Monad<M> in terms of Transform and CoTransform, which means you don't have to write all the boilerplate.

Conclusion

Can I also just say a personal note of thanks to @​hermanda19 and @​micmarsh - well worked out and thoughtful suggestions, like the ones listed above, are manna for a library like this that is trying to push the limits of the language. Thank you!

Finally, I will be working on some more documentation and getting back to my blog as soon as I can. This is the home stretch now. So, there's lots of documentation, unit tests, refinements, etc. as I head toward the full v5 release. I have a few trips lined up, so it won't be imminent, but hopefully at some point in the summer I'll have the full release out of the door!

v5.0.0-beta-50: IO 'acquired resource tidy up' bug-fix

This issue highlighted an acquired-resource tidy-up bug that needed tracking down...

The IO monad has an internal state-machine. It tries to run that synchronously until it finds an asynchronous operation. If it encounters an asynchronous operation then it switches to a state-machine that uses the async/await machinery. The benefit of this is that we have no async/await overhead if there's no asynchronicity and only use it when we need it.

But... the initial synchronous state-machine used a try/finally block to tidy up the internally allocated EnvIO (and therefore any acquired resources). This is problematic when switching from the synchronous state-machine to the asynchronous one.


Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.



This PR was generated by Mend Renovate. View the repository job log.

@renovate renovate bot force-pushed the renovate/languageext.core-5.x branch from 0641090 to ec5635a Compare October 21, 2025 17:51
@renovate renovate bot force-pushed the renovate/languageext.core-5.x branch from ec5635a to c764000 Compare November 12, 2025 18:01
@renovate renovate bot changed the title chore(deps): update dependency languageext.core to 5.0.0-beta-54 chore(deps): update dependency languageext.core to 5.0.0-beta-55 Nov 12, 2025
@renovate renovate bot force-pushed the renovate/languageext.core-5.x branch from c764000 to dba84e2 Compare November 13, 2025 23:55
@renovate renovate bot changed the title chore(deps): update dependency languageext.core to 5.0.0-beta-55 chore(deps): update dependency languageext.core to 5.0.0-beta-56 Nov 13, 2025
@renovate renovate bot force-pushed the renovate/languageext.core-5.x branch from dba84e2 to 6338527 Compare November 14, 2025 21:56
@renovate renovate bot changed the title chore(deps): update dependency languageext.core to 5.0.0-beta-56 chore(deps): update dependency languageext.core to 5.0.0-beta-57 Nov 14, 2025