
Conversation


@Dueris Dueris commented Dec 20, 2025

This is simply for tracking purposes. DO NOT MERGE

https://github.com/PaperMC/Folia/tree/ver/1.21.11
https://github.com/CraftCanvasMC/Canvas/tree/ver/1.21.11

TODO

Future plans

Dueris and others added 24 commits December 20, 2025 13:11
This is generally more powerful than Moonrise's current executor in Paper, and also a bit faster than 1.21.8 Canvas. Chunk *loading* should be faster as well, since the recent Moonrise changes allow us to safely replace the IO pool with an optimized variant that won't completely destroy the server.

I saw roughly a doubling in CPS with these changes, though this is up for more testing.
Using the `/ride` command in 1.21.11 on an entity in the same region works fine. However, attempting to ride an entity outside the current region causes a crash, because the `Entity#startRiding` method is written in a way that is not region-threading safe when the target entity lives in another region.

This was tested and is reproducible on the latest Folia 1.21.11 commit by force-loading a chunk (or chunks) far outside the current region and then running the command:
```
/ride @s mount @e[limit=1,type=!player,sort=furthest]
```

This effectively tries to teleport the mounter to the mount target while the two are not in the same region. This PR fixes the entirety of the `Entity#startRiding` method and adds a thread check at the HEAD of the method to ensure that `Entity#startRiding` is called on the region owning the mounter entity.

TODO - when/if this is merged, this commit needs to be reverted

Tested-by: Dueris <duerismc@gmail.com>
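For illustration only, here is a minimal, self-contained sketch of the kind of head-of-method ownership guard described above. It is not the actual patch; the real fix asserts ownership through Folia's TickThread/region utilities, and the `OwnedEntity` class and `owningThread` field below are placeholder names invented for this example.

```java
// Placeholder sketch of the guard pattern, not Folia's real Entity#startRiding patch.
final class OwnedEntity {
    // Stand-in for "the region context that currently owns this entity".
    private final Thread owningThread = Thread.currentThread();

    void startRiding(OwnedEntity vehicle) {
        // Fail fast at the head of the method when called off the owning thread,
        // instead of silently mutating cross-region state.
        if (Thread.currentThread() != this.owningThread) {
            throw new IllegalStateException(
                "startRiding must be called on the region owning the mounter entity");
        }
        // ... mounting logic would run here, confined to the owner's context ...
    }
}
```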
This upstream conforms to the scheduler refactoring done by SpottedLeaf and adds a new CRS configuration option for Canvas, which is based on the EDF scheduler but allows processing intermediate tasks.

All changes mimicking Folia's recent upstream (of which there were many) have been removed in favor of retaining the upstream version. The default scheduler option will remain EDF, as it has proven reliable. CRS is required for region profiling, though, and the CRS scheduler will continue to be improved in the future.
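As a rough illustration of the EDF (earliest-deadline-first) ordering the default scheduler option is named after, here is a toy sketch; it is not Canvas's or Folia's actual scheduler, and the `Task` record and its deadline field are invented for the example.

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// Toy EDF queue: tasks are ordered purely by deadline, so the task whose deadline
// is soonest always runs next. Real region schedulers track far more state than this.
record Task(String name, long deadlineNanos) {}

final class EdfQueue {
    private final PriorityQueue<Task> queue =
        new PriorityQueue<>(Comparator.comparingLong(Task::deadlineNanos));

    void submit(Task task) {
        queue.add(task);
    }

    // Returns (and removes) the task with the earliest deadline, or null if empty.
    Task pollNext() {
        return queue.poll();
    }
}
```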
This option has proven useful for servers with slow disks, but very bad for performance in other cases. It has been decided to remove it, as the number of users who would see performance decreases far outweighs the number of slow-disk users who would see performance increases.
This was a useful QOL feature for 1.21.8 servers at the time. Now this packet does something different and no longer displays the screen it was intended to display. As such, it has been decided to remove the feature to match Folia, as I see no point in continuing to carry and maintain it in Canvas.
- Added operator check in GLOBAL_BROADCAST consumer
- Messages now only appear to ops instead of all players (see the sketch after this commit)
- Console logging remains unchanged

(cherry picked from commit dd6794d)

Co-authored-by: gbpii <gbpii@users.noreply.github.com>
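A hedged sketch of the behaviour in the bullets above, written against plain Bukkit API rather than Canvas's internal GLOBAL_BROADCAST consumer; the `OpOnlyBroadcast` class and `broadcastToOps` method are illustrative names, not Canvas code.

```java
import org.bukkit.Bukkit;
import org.bukkit.entity.Player;

final class OpOnlyBroadcast {
    static void broadcastToOps(String message) {
        for (Player player : Bukkit.getOnlinePlayers()) {
            if (player.isOp()) {              // only operators receive the message now
                player.sendMessage(message);
            }
        }
        Bukkit.getLogger().info(message);     // console logging remains unchanged
    }
}
```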
…craft sources (#152)

* feat: update for weaver auto at discovery

* chore: switch to stable weaver
Generally, this API is slow and simply not thread-safe. With Folia, this is an extreme issue: the API is not thread-safe, and the way it is called and designed is completely unsafe. {@link CraftPlayer#onEntityRemove(Entity)} is called at tracking end of non-player entities, on *all* players... It touches a non-thread-safe field, which causes threading issues as soon as a plugin uses the visibility API at all, even though it's not the plugin's fault.

This is genuinely something we need to call; we shouldn't schedule it (because with enough regions and players this would spam the entity scheduler), and we can't really do much beyond making this API faster and thread-safe. As such, the field has been changed to a {@link java.util.concurrent.ConcurrentHashMap}, which has O(1) (worst-case O(log n)) time complexity for removals and lookups, which is sufficient for what we need here.

While this is probably not the greatest way to go about it, there aren't many practical ways to resolve this issue, which is desperately in need of fixing. Large production servers that I (Dueris) have been in contact with crash every few hours because of it, and it isn't even caused by anything they do wrong, only by the fact that they use this API at all. Other ideas were proposed to fix this, but none of them were entirely practical or efficient. This is generally the best way to go about it without completely disabling the API or rewriting it from the ground up with a different architecture and design built for region threading.

This could probably be reworked to be better in the future, but it does fix the issues caused by the visibility API.

We can guarantee this is safe, though, because we are only trying to fix entity *removal*: normally, visibility invertedness can only be added or removed on the owning player entity's context. The method call we are fixing won't cause issues when invoked on that same context, and the only cross-thread interaction that can occur is the removal of the entity's visibility entry itself on tracking removal, which is now handled through a thread-safe map. This ensures that entity teardown cannot corrupt internal state, while all visibility inversion mutations remain confined to the owning player's context and therefore preserve the original single-threaded semantics of the API. If a removal and an addition of inverted visibility were to race (say, onEntityRemove against adding visibility inversion), the addition wouldn't matter, since the entity is removed anyway even if it *does* get added. If a removal races with onEntityRemove, it doesn't matter either, since they ultimately do the same thing.
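A minimal sketch of the shape of this change, assuming a per-player map keyed by entity UUID; the `VisibilityState` class and its method names are illustrative, not CraftPlayer's actual internals.

```java
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

final class VisibilityState {
    // Previously a plain map mutated from multiple region contexts.
    private final ConcurrentHashMap<UUID, Boolean> invertedVisibility = new ConcurrentHashMap<>();

    // Runs on the owning player's region context when a plugin toggles visibility.
    void setInverted(UUID entityId, boolean inverted) {
        invertedVisibility.put(entityId, inverted);
    }

    // Runs on entity tracking removal, potentially from another region's context;
    // the concurrent map makes this cross-thread removal safe.
    void onEntityRemove(UUID entityId) {
        invertedVisibility.remove(entityId);
    }
}
```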
During Folia's refactoring of the scheduler system, SpottedLeaf changed the thread-name pattern for the scheduler threads. I didn't notice this until a few reports of spark data not showing up in profilers, at which point I found the issue. This commit fixes the pattern in the PinningThreadDumper, making spark profiler reports accurately reflect the 1.21.11 scheduler change.
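For context, a hedged sketch of what a thread-name-based filter looks like in general; the actual pattern used by the PinningThreadDumper and the new 1.21.11 thread names are not reproduced here, so the regex below is a made-up placeholder.

```java
import java.util.regex.Pattern;

final class SchedulerThreadFilter {
    // Placeholder regex, NOT the real 1.21.11 scheduler thread-name pattern.
    private static final Pattern SCHEDULER_THREAD =
        Pattern.compile("Region Scheduler Thread #\\d+");

    static boolean isSchedulerThread(Thread thread) {
        return SCHEDULER_THREAD.matcher(thread.getName()).matches();
    }
}
```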
…propagator in addTicketAtLevel

This fix was discussed by M2 and me in DMs; it seems to fix a critical issue in the propagator in Folia 1.21.11.
