Conversation
Pull request overview
This PR lays the groundwork for a “standalone save trigger” flow by extending the handshake to identify standalone servers and by plumbing a typed autosave/join-point request reason from client to server, so standalone clients can request join point creation for save- and travel-related events.
Changes:
- Extend the protocol handshake (Server_ProtocolOk) to include whether the server is standalone; persist this on the client session.
- Introduce ClientAutosavingPacket + JoinPointRequestReason and route autosave/save/world-travel triggers through it to server-side join point creation.
- Adjust standalone-specific join/join-point behavior and add a synthetic autosave timer for standalone connections.
Reviewed changes
Copilot reviewed 15 out of 15 changed files in this pull request and generated 5 comments.
Summary per file:
| File | Description |
|---|---|
| Source/Common/Networking/Packet/ProtocolPacket.cs | Adds isStandaloneServer field to protocol OK packet for client session identification. |
| Source/Common/Networking/Packet/AutosavingPacket.cs | Introduces typed autosaving packet with a reason enum. |
| Source/Common/JoinPointRequestReason.cs | Adds enum describing why a join point is requested (save, world travel, etc.). |
| Source/Common/Networking/State/ServerPlayingState.cs | Updates autosaving handler to typed packet + reason; loosens world upload policy for standalone. |
| Source/Common/Networking/State/ServerJoiningState.cs | Adjusts join-point creation policy for standalone joins; sends standalone flag during handshake. |
| Source/Common/PlayerManager.cs | Treats first non-arbiter joiner as “primary” faction on standalone servers. |
| Source/Client/Networking/State/ClientJoiningState.cs | Stores isStandaloneServer from protocol OK on the client session. |
| Source/Client/Session/MultiplayerSession.cs | Adds session flag + helper property for “connected to standalone server”. |
| Source/Client/Session/Autosaving.cs | Sends typed autosaving packet after save; changes save routine to return bool and tweaks file replacement logic. |
| Source/Client/Windows/SaveGameWindow.cs | Sends autosaving packet on manual save; special-cases standalone to trigger server-side save flow. |
| Source/Client/Patches/VTRSyncPatch.cs | On standalone, triggers a join point when leaving a map (world travel). |
| Source/Client/ConstantTicker.cs | Adds standalone-only synthetic autosave timer. |
| Source/Client/AsyncTime/AsyncWorldTimeComp.cs | Adjusts join-point creation upload behavior during CreateJoinPoint. |
| Source/Tests/PacketTest.cs | Updates packet roundtrip expectations for the extended protocol OK packet. |
| Source/Tests/packet-serializations/ServerProtocolOkPacket.verified.txt | Updates verified serialization output for the extended protocol OK packet. |
```diff
 private static void CreateJoinPointAndSendIfHost()
 {
     Multiplayer.session.dataSnapshot = SaveLoad.CreateGameDataSnapshot(SaveLoad.SaveAndReload(), Multiplayer.GameComp.multifaction);

-    if (!TickPatch.Simulating && !Multiplayer.IsReplay &&
-        (Multiplayer.LocalServer != null || Multiplayer.arbiterInstance))
+    if (!TickPatch.Simulating && !Multiplayer.IsReplay)
         SaveLoad.SendGameData(Multiplayer.session.dataSnapshot, true);
 }
```
CreateJoinPointAndSendIfHost now uploads world data from every non-simulating, non-replay client. On non-standalone servers the server will still reject Client_WorldDataUpload from non-host/non-arbiter, but the upload cost (fragmenting + network/bandwidth + compression work) is still paid by all clients, which can be very expensive for large saves. Please gate SendGameData so only the designated uploader sends (e.g., arbiter/local host, plus standalone clients), or introduce a server-directed "uploader" selection instead of broadcasting uploads from everyone.
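One possible shape for the suggested gate, as a sketch only: ShouldUploadGameData is a hypothetical helper, and ConnectedToStandaloneServer is the session property this PR adds.

```csharp
// Hypothetical gate: only the designated uploader pays the snapshot
// fragmentation/compression/bandwidth cost.
private static bool ShouldUploadGameData()
{
    if (TickPatch.Simulating || Multiplayer.IsReplay)
        return false;

    // Hosted games: host (local server) or arbiter uploads, as before.
    if (Multiplayer.LocalServer != null || Multiplayer.arbiterInstance)
        return true;

    // Standalone servers: the client must upload, since there is no host/arbiter.
    return Multiplayer.session?.ConnectedToStandaloneServer == true;
}
```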
```csharp
{
    Log.Error($"Exception saving multiplayer game as {fileNameNoExtension}: {e}");
    Messages.Message("MpGameSaveFailed".Translate(), MessageTypeDefOf.SilentInput, false);
    return false;
```
SaveGameToFile_Overwrite now returns bool and swallows exceptions internally. Callers that relied on exceptions to detect failure (e.g., code wrapping this call in try/catch) will no longer observe failures unless they check the return value, and may continue with follow-up steps assuming a save exists. Consider either re-throwing (and letting callers handle the failure) or ensuring all call sites are updated to handle the false return explicitly.
Suggested change:

```diff
-    return false;
+    throw;
```
```csharp
if (!Autosaving.SaveGameToFile_Overwrite(curText, currentReplay))
    return;

Multiplayer.Client.Send(new ClientAutosavingPacket(JoinPointRequestReason.Save));
```
ClientAutosavingPacket is sent after saving even when currentReplay is true. In replay sessions the connection ignores non-command packets, so this send is a no-op and adds confusing coupling between replay saving and network join-point logic. Consider skipping the send when currentReplay is true (or when Multiplayer.Client is null) to keep replay saves purely local.
Suggested change:

```diff
-Multiplayer.Client.Send(new ClientAutosavingPacket(JoinPointRequestReason.Save));
+if (!currentReplay && Multiplayer.Client != null)
+    Multiplayer.Client.Send(new ClientAutosavingPacket(JoinPointRequestReason.Save));
```
```diff
-if (!TickPatch.Simulating && !Multiplayer.IsReplay &&
-    (Multiplayer.LocalServer != null || Multiplayer.arbiterInstance))
+if (!TickPatch.Simulating && !Multiplayer.IsReplay)
```
Instead of having every player send the game data back to the server, could you have the server indicate which player is expected to send the game data back? In WorldData.cs, Server.commands.Send(CommandType.CreateJoinPoint, ScheduledCommand.NoFaction, ScheduledCommand.Global, Array.Empty&lt;byte&gt;()) is called; the Array.Empty could be changed to include a playerId, or perhaps a value of true could be sent to the desired player and false to the rest?
What do you think about this approach: when a player wants to save, they request a joinpoint from the server, and the server asks a specific player to create it? I'm still leaning towards the idea that the player who triggered the save should be the one doing it for the maps they have loaded — so the server would still delegate it back to the requester. But if there are two players, maybe this server-mediated step would better coordinate who should do it?
Honestly, I'm not sure it adds much in practice — it feels like an extra round-trip for the same result. But I'd like your thoughts on it.
The philosophy I'm following is "each player uploads their own maps." The goal is to avoid a bottleneck where, say, 10 players need to save sequentially through a single designated uploader. On a standalone server there's no natural "authoritative" client like the host in a hosted game, so having each player responsible for their own maps seemed like the most resilient approach.
Currently, the IssuedBySelf gate (which was missing from PR #876 due to a rebase oversight — now restored) ensures that when a CreateJoinPoint command is broadcast, only the client who originally requested the save actually executes CreateJoinPointAndSendIfHost. The other clients receive the command but skip execution. So the Array.Empty&lt;byte&gt;() payload doesn't need to carry a playerId — the client-side gate already handles it by checking currentExecutingCmdIssuedBySelf.
That said, if you'd prefer making this explicit via the command payload (e.g. embedding the requester's playerId so the server decides who saves), I'm open to it. It would make the intent clearer on the wire, even if the behavior is the same.
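For context, the gate described here might look roughly like this at the command-execution site; the switch shape is an illustrative assumption, while currentExecutingCmdIssuedBySelf is the field named above.

```csharp
// Illustrative sketch: when the broadcast CreateJoinPoint command executes,
// only the client that originally issued it performs the save/upload.
case CommandType.CreateJoinPoint:
    if (Multiplayer.session.currentExecutingCmdIssuedBySelf)
        CreateJoinPointAndSendIfHost(); // other clients receive the command but skip it
    break;
```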
I'm still leaning towards the idea that the player who triggered the save should be the one doing it for the maps they have loaded
I'm fine with that, and that's not my issue with this change. The issue is that right now everyone saves every map and everyone sends the world data back to the server.
The philosophy I'm following is "each player uploads their own maps." The goal is to avoid a bottleneck where, say, 10 players need to save sequentially through a single designated uploader. On a standalone server there's no natural "authoritative" client like the host in a hosted game, so having each player responsible for their own maps seemed like the most resilient approach.
That probably is a better approach. Not sure how much of a bottleneck it actually would be. Also, right now there isn't such a functionality to have the player only save their own map. If it exists in another PR of yours, please include that info in this PR's description so I'm aware of the overarching goal and have more context to review on
Currently, the IssuedBySelf gate (which was missing from PR #876 due to a rebase oversight — now restored) ensures that when a CreateJoinPoint command is broadcast, only the client who originally requested the save actually executes CreateJoinPointAndSendIfHost. The other clients receive the command but skip execution. So the Array.Empty() payload doesn't need to carry a playerId — the client-side gate already handles it by checking currentExecutingCmdIssuedBySelf.
I only reviewed this PR and didn't look at the others. If that PR fixes my concern, please move the relevant part to this PR. Also, IssuedBySelf will only work if you send the CreateJoinPoint command with a player provided. Right now, StartJoinPointCreation never sends an associated player
That said, if you'd prefer making this explicit via the command payload (e.g. embedding the requester's playerId so the server decides who saves), I'm open to it. It would make the intent clearer on the wire, even if the behavior is the same.
As long as the IssuedBySelf code you are referencing keeps working in non-standalone mode, I'm fine with just doing that. It's probably even better than my suggestion, so I think I'd even prefer your approach
As soon as I get home I'll send you more context; I was working on it.
This PR does not implement map streaming itself, but it keeps the standalone join-point flow compatible with the upcoming streaming architecture. In hosted MP, join-point creation is host/arbiter-owned; in standalone, snapshot creation/upload must already happen client-side, and future streaming work extends that further by allowing the upload work to be split across assigned clients. Some of the standalone branching here is meant to preserve that separation cleanly without changing hosted behavior.
Also, I used your idea of having the server decide who saves for the map-streaming one.
```csharp
}

// On standalone, trigger a join point when leaving a map
if (Multiplayer.session?.ConnectedToStandaloneServer == true)
```
Isn't it going to be pretty annoying (for the players) to save the game every time anyone changes the map? As far as I remember, the game pauses for everyone during join point creation
Technically just for the one saving and for the maps they're on. The idea is to let players save their own maps, but I'm still testing.
```csharp
private static void TickAutosave()
{
    // Only standalone connections use the synthetic autosave timer.
```
I don't understand this comment. autosaveCounter is also used by other code in this method?
You're right, the "synthetic" wording was confusing. I've rewritten the comment to:
```csharp
// When connected to a remote standalone server, the client drives
// the autosave timer using the interval received at connection time
// (from the server's TOML settings via ServerProtocolOkPacket).
```
The point is: when the client is connected to a remote standalone server (Multiplayer.LocalServer == null), it can't read server.settings directly. So the autosave interval and unit are now sent in the ServerProtocolOkPacket at connection time — the client stores them in session.autosaveInterval / session.autosaveUnit and drives its own timer from those values.
The standalone path now also supports both Minutes and Days (in-game) units, just like the host-local path.
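A sketch of the client-driven timer described above. The session field names (autosaveInterval, autosaveUnit, ConnectedToStandaloneServer) come from this PR; the once-per-second tick cadence and the JoinPointRequestReason.Autosave variant are assumptions for illustration.

```csharp
// Illustrative sketch, not the PR's actual code.
private static int autosaveCounter;

private static void TickAutosave()
{
    var session = Multiplayer.session;
    // Only clients connected to a remote standalone server drive the timer.
    if (session?.ConnectedToStandaloneServer != true)
        return;
    if (session.autosaveUnit != AutosaveUnit.Minutes || session.autosaveInterval <= 0)
        return;

    // Assuming this is called once per real-time second: fire after
    // autosaveInterval minutes, then request a join point from the server.
    if (++autosaveCounter >= session.autosaveInterval * 60)
    {
        autosaveCounter = 0;
        Multiplayer.Client.Send(new ClientAutosavingPacket(JoinPointRequestReason.Autosave));
    }
}
```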
That makes sense that the client doesn't have access to server settings, but why are you sending the autosave settings to the client then? I think it'd be better to have the server run the timer and send a CreateJoinPoint command to the client
Also, I'm not sure what you mean by "supports both Minutes and Days"? This code explicitly only supports minutes (which I'm fine with; I don't think it's critical to support autosaving by Days in the standalone server):
```csharp
if (session.autosaveUnit != AutosaveUnit.Minutes || session.autosaveInterval <= 0) return;
```
Hmm, no. I was using the autosave timing setting from the TOML; isn't that using in-game days?
```csharp
var forceJoinPoint = packet.reason == JoinPointRequestReason.Save;

// On standalone, any playing client can trigger a join point (always, regardless of settings)
// On hosted, only the host can trigger and only if the Autosave flag is set
if (Server.IsStandaloneServer ||
    (Player.IsHost && Server.settings.autoJoinPoint.HasFlag(AutoJoinPointFlags.Autosave)))
    Server.worldData.TryStartJoinPointCreation(forceJoinPoint);
```
Unless I'm missing something, this change along with the changes to CreateJoinPointAndSendIfHost and HandleWorldDataUpload, mean that anyone can request a join point to be created, which makes everyone on the server upload a joinpoint to the server, and then the server just accepts the world data from every player. I don't think this is a good workflow. I haven't really thought deeply about this, but I think I'd prefer the server to choose one player to send the game data, or in some other way determine a single player to save.
I actually found a TODO somewhere about implementing loading map join points independently and thought it might lighten the loading. So the idea is that each player can load their own active maps.
The intent here was not to have every client produce and upload a joinpoint. The idea is that only the player who explicitly triggers a save, or another deliberate action tied to their current context, should send the joinpoint for that map.
The main goal is to avoid unnecessary interruptions or loading work for players who are active on separate maps. I tried to keep this as continuity-friendly as possible: if the server arbitrarily designates a client to provide the joinpoint, that creates awkward edge cases where the chosen player may be in the middle of leaving, changing state, or otherwise not be the best source for that save. Tying joinpoint creation to an explicit player action keeps the flow more predictable and reduces the impact on everyone else.
Force-pushed 94ad756 to dc76dbc
Force-pushed dc76dbc to 67f32f9
…ignment
- CreateJoinPointAndSendIfHost: restore LocalServer/arbiter guard for hosted mode so only host/arbiter uploads world data (was accidentally ungated)
- Standalone path kept separate with ConnectedToStandaloneServer gate
- Remove redundant ofPlayer assignment in ChangeRealPlayerFaction (FactionContext.Set already does the same thing on the next line)

…mand filtering
- Activate CanUseStandaloneMapStreaming gate (IsStandaloneServer && multifaction && asyncTime)
- Add ServerPlayer.loadedMaps (HashSet&lt;int&gt;) for per-player map set tracking
- Clear loadedMaps on rejoin alongside hasReportedCurrentMap reset
- Add ClientLoadedMapsPacket (Client_LoadedMaps) with currentMapId + loadedMapIds
- Add HandleLoadedMaps handler in ServerPlayingState behind the gate
- Update CommandHandler.Send to filter using loadedMaps when available, falling back to currentMapId for backward compatibility
- Add 15 tests: gate verification, command filtering with/without loadedMaps, loadedMaps lifecycle, packet handler behavior

…t, assignment packet
- Add StandaloneJoinPointJob with explicit state (Pending/CollectingUploads/Completed/Aborted)
- Add ComputeJoinPointCluster: transitive closure BFS over player loadedMaps
- Add TryCreateStreamingJoinPointJob: cluster computation + uploader policy (requesting player preferred, then lowest ID)
- Add ServerStreamingJoinPointRequestPacket (Server_StreamingJoinPointRequest) with jobId, reason, mapIdsToSave, mustUploadWorld, mapIdsToUpload
- Add SendStreamingJoinPointAssignments: sends per-player assignment packets
- Add 22 tests: cluster BFS (8), job creation + assignment policy (14)

- Add jobId field to ClientStandaloneWorldSnapshotPacket and ClientStandaloneMapSnapshotPacket
- Validate uploads against active StandaloneJoinPointJob in TryAcceptStandaloneWorld/MapSnapshot
- Reject uploads from wrong player, duplicate uploads, wrong jobId, wrong state
- TryFinalizeStreamingJob: complete job only when all uploads received (IsComplete)
- AbortStreamingJob: abort on disconnect of cluster player or timeout
- CheckStreamingJobTimeout: periodic check from TickNet (30s default)
- 27 new tests covering all validation, finalization, abort, timeout scenarios
- 149 tests pass (excluding ServerTest runner crash), 0 failures

… upload, server hook

…clobber
- HandleWorldDataUpload replaces the entire mapData dictionary, so when multiple cluster clients each send SendGameData, the last upload wins and earlier clients' maps are lost. WriteJoinPoint then calls DeleteStaleMapFiles, which deletes those map files from disk. In the streaming flow, TryFinalizeStreamingJob calls EndJoinPointCreation once all per-map/per-world streaming uploads are in, producing a complete mapData before persistence is written.

…w simulation tests
- AsyncWorldTimeComp: allow CreateJoinPoint processing when pendingStreamingAssignment is set, even if the command was not IssuedBySelf. Previously, only the requesting player would process CreateJoinPoint; other cluster players with pending assignments would skip it, never saving or uploading their maps.
- New StreamingWorkflowSimulationTest with 20 scenarios (27 test methods) covering: three-player transitive clusters, wrong/duplicate uploads, disconnect mid-flight, sequential and concurrent jobs, legacy fallback, timeout, command filtering, abort cascades, and more.

…edMaps filter
- The streaming command filter in CommandHandler.Send entered the loadedMaps filtering branch for ALL commands when CanUseStandaloneMapStreaming was true, including global commands (CreateJoinPoint, PlayerCount, GlobalTimeSpeed, etc.) which use mapId = ScheduledCommand.Global (-1). Since no player has -1 in their loadedMaps, these commands were never delivered to any player with hasReportedCurrentMap=true. This completely broke:
  - CreateJoinPoint: clients never received it, never saved, never uploaded
  - PlayerCount: player counts became desynchronized
  - GlobalTimeSpeed: time speed changes were lost
- Fix: only enter the loadedMaps filter when mapId >= 0. Global commands fall through to SendToPlaying, which broadcasts to all players. Added 2 regression tests for global command delivery.

When a player transitions to a new map, if the last join point is older than 400 ticks, force a fresh join point before sending the map response. This ensures the player receives recent map data and has fewer commands to simulate during sync.
- If a join point is already in progress, wait for it to complete
- Player disconnect during the wait is handled gracefully
- Fresh join points (<400 ticks old) send the response immediately

- ContinueWith callbacks now use Enqueue() to marshal back to the server main thread, preventing potential race conditions on conn.Send
- Add ConnectedToStandaloneServer guard to SendLoadedMapsPacket (avoids sending unnecessary packets on hosted games)
- Merge duplicate NetTimer % NetTicksPerSecond checks in TickNet

…oin point
- Add no-op HandleKeepAlive to TestJoiningState and TestLoadingKeepAliveState to prevent a crash when the server sends KeepAlive during the async handshake
- Disable autoJoinPoint in test server settings since the test server has no game simulation to process CreateJoinPoint commands
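The global-command delivery rule from the loadedMaps filter fix can be sketched as follows. The method shape and names other than loadedMaps, currentMapId, and CanUseStandaloneMapStreaming are illustrative assumptions, not the PR's actual code.

```csharp
// Illustrative sketch of the fixed delivery rule: global commands use
// mapId == ScheduledCommand.Global (-1), which no player ever has in
// loadedMaps, so the loadedMaps filter must only apply to map-scoped
// commands (mapId >= 0).
private bool ShouldDeliver(ServerPlayer player, int mapId)
{
    if (!CanUseStandaloneMapStreaming || mapId < 0)
        return true; // global commands (CreateJoinPoint, PlayerCount, ...) go to everyone

    if (player.loadedMaps.Count > 0)
        return player.loadedMaps.Contains(mapId); // per-player map set

    return player.currentMapId == mapId; // backward-compatible fallback
}
```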
I didn't review this file yet, but as a first thought, why does this use a custom save format?
The main reason is that in standalone the dedicated server needs a durable server-owned representation of the accepted game state, and that state is naturally split into world data, session data, command state, and per-map data.
A single monolithic save/replay archive works poorly for that use case because most updates are partial: after an autosave or a standalone snapshot upload, we often only need to replace one artifact, not rebuild the whole thing. Keeping those pieces separate lets the server update them incrementally and atomically (File.Replace/move per artifact), which is much safer for crash recovery than rewriting one large bundle every time.
So this is not meant as a new interchange format. It is the server's internal durable persistence layout for standalone. That also happens to be the shape that later map-streaming work can build on, but this PR is still only the standalone persistence/upload groundwork, not the streaming implementation itself.
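A minimal sketch of the per-artifact atomic replacement described above; ReplaceArtifact and the file layout are hypothetical illustrations, not the PR's actual API.

```csharp
using System.IO;

// Hypothetical helper: update one persistence artifact (world, session,
// commands, or a single map) without rewriting the whole bundle.
private static void ReplaceArtifact(string dir, string name, byte[] data)
{
    var target = Path.Combine(dir, name);
    var temp = target + ".tmp";

    File.WriteAllBytes(temp, data); // write the new artifact aside first

    if (File.Exists(target))
        File.Replace(temp, target, target + ".bak"); // atomic swap, keeping a backup
    else
        File.Move(temp, target); // first write of this artifact
}
```

A crash between the aside-write and the swap leaves the previous artifact intact, which is the crash-recovery property the comment above argues for.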
Co-authored-by: Michael <5672750+mibac138@users.noreply.github.com>
Let's merge this one after a publish; it seems to be the most impactful, so it could use a lot of testing.

Can I proceed?
```csharp
public void Stop()
{
    isStandaloneServer = false;
```
This is needed because Stop() closes the connection, but it does not immediately clear client. ConnectedToStandaloneServer is currently derived from client != null && isStandaloneServer, so without resetting this flag the client could still run standalone-only logic during teardown or immediately after stopping.
Does this actually cause issues? Once Stop runs there shouldn't really be much more logic running, and even if there were, it's very likely that the connection itself is already disconnected (so the packet won't be sent), or it's not a big deal. I'm not sure what standalone code you're referring to that could run and be a problem. Ultimately, this is a small issue, but I'd like to avoid complicating the overall codebase.
```csharp
public enum JoinPointRequestReason : byte
{
    Unknown = 0,
```
This is used on the current branch. It is serialized in ClientAutosavingPacket, sent both for explicit saves and for world-travel-triggered join points, and then used on the server to distinguish the reason for the request and decide whether the join point should be forced.
Sorry, I should've been more specific. I meant only the Unknown variant.
```csharp
public bool isStandaloneServer = isStandaloneServer;
public float autosaveInterval;
public AutosaveUnit autosaveUnit;
```
I'm not sure if this is the right packet for these fields, but I'm not sure what's a better place for it 🤷
I agree this is a bit of a design judgment call. I put these fields here because the client needs them at handshake time: isStandaloneServer enables standalone-only client behavior immediately, and the autosave settings are also needed early. If you'd prefer a cleaner separation, the natural alternative would be a small standalone-config packet sent immediately after Server_ProtocolOk.
```csharp
public struct StandaloneWorldSnapshotState
{
    public StandaloneWorldSnapshotState() { }
    public int tick;
    public int producerPlayerId;
    public string producerUsername = "";
    public byte[] sha256Hash = Array.Empty<byte>();
}

public struct StandaloneMapSnapshotState
{
    public StandaloneMapSnapshotState() { }
    public int tick;
    public int producerPlayerId;
    public string producerUsername = "";
    public byte[] sha256Hash = Array.Empty<byte>();
```
Except for the tick field, and using it to make sure only newer snapshots are allowed, how are the rest of the fields (producerPlayerId/Username, sha256Hash) useful? Are they here for possible future functionality? Right now they are unused.
You're right that only tick is currently used for freshness/acceptance. The other fields are there as provenance/debug metadata and for symmetry with the map snapshot state; on the map side the same metadata is already useful for diagnostics. If you'd prefer to keep the world snapshot state minimal, I'm fine trimming it down to just tick.
```csharp
private static void TickAutosave()
{
    // When connected to a remote standalone server, the client drives
```
Fine by me, but once this is merged please create a PR where this is cleaned up
Please rebase your PR and resolve the conflicts.
Summary
Consolidated PR containing all standalone save improvements (previously split across #874-#877):
Changes
Validation
dotnet build Source/Multiplayer.sln -c Release
dotnet test Source/Tests/Tests.csproj -c Release --no-build

Note
PRs #875, #876, #877 have been closed and consolidated here as requested.