
Grpc#188

Draft
Peu77 wants to merge 10 commits into dev from grpc-refactor

Conversation


@Peu77 Peu77 commented Feb 21, 2026

This pull request replaces the entire custom socket communication protocol with gRPC.

Summary by CodeRabbit

  • New Features

    • Real-time gRPC-based multiplayer protocol and service with streaming tick updates, actions, queries, and debug reporting.
  • Refactor

    • Client and server network stack migrated from socket/JSON to gRPC.
    • Build system migrated from Makefiles to CMake across projects.
    • Server session and game lifecycle reworked for session-based flow and replay recording.
  • Documentation

    • Added developer guide with build/run and coding conventions.
  • Chores

    • Updated ignore rules to exclude build artifacts.

Copilot AI review requested due to automatic review settings February 21, 2026 23:34

coderabbitai bot commented Feb 21, 2026

📝 Walkthrough

Walkthrough

Migrates client-server communication from a JSON/socket Bridge to gRPC: adds proto schema, gRPC server and client bridge, replaces socket/JSON parsing with protobuf-based RPCs and streaming, refactors server session model, and converts many Makefile builds to CMake. No behavioral game rules changes recorded.

Changes

Cohort / File(s) Summary
VCS & IDE / Docs
/.gitignore, .idea/..., CLAUDE.md
Added ignore entries, IDE metadata, and a new CLAUDE.md documentation file.
Protobuf Schema
proto/core_game.proto
New core_game.proto describing GameObject, GameConfigProto, DebugDataEntry, Login, TickStream, action RPCs, queries, and CoreGameService.
Client: build + headers
bots/c/client_lib/CMakeLists.txt, bots/c/client_lib/Makefile, bots/c/client_lib/inc/*
Introduce CMake for client_lib; add extern "C" guards; remove socket/JSON declarations from internal header and add grpc_bridge.h dependency.
Client: gRPC bridge
bots/c/client_lib/src/internal/grpc_bridge.h, .../grpc_bridge.cpp
New grpc_bridge API and implementation: connect/login, TickStream (start/wait/end), actions, queries, debug submission, per-tick cache and proto->C conversions.
Client: migrate runtime logic
bots/c/client_lib/src/con_lib.c, .../public/*.c, .../src/public/get.c
Replace socket/JSON game loop with grpc_bridge streaming and RPCs; actions now invoke grpc bridge calls; object lookups delegate to bridge queries.
Client: removed JSON/socket modules
bots/c/client_lib/src/internal/json/*, .../socket.c
Deleted in-house JSON parser/serializer, config/state parsers, encode helpers, socket.c and related utilities.
Bots: CMake + Makefile migration
bots/c/hardcore/*, bots/c/softcore/*
Add CMakeLists for bots (gridmaster, my-core-bot) and update Makefiles to drive CMake builds (BUILD_DIR, BUILD_TYPE, prod/dev targets).
Server: build changes
server/CMakeLists.txt
Upgrade CMake, add FetchContent gRPC/Protobuf integration and core_game_proto generated/static library.
Server: gRPC service & sessions
server/inc/net/GameServiceImpl.h, server/src/net/GameServiceImpl.cpp, server/inc/net/PlayerSession.h, server/src/net/PlayerSession.cpp, server/inc/net/GrpcServer.h, server/src/net/GrpcServer.cpp
Add GameServiceImpl implementing CoreGameService (Login, TickStream, actions, queries), PlayerSession for per-player queues/sync, and GrpcServer wrapper.
Server: remove old TCP Bridge/Server
server/inc/net/Bridge.h, server/src/net/Bridge.cpp, server/inc/net/Server.h, server/src/net/Server.cpp
Deleted previous TCP Server and Bridge classes and their socket/threading message handling.
Server: integrate sessions into game loop
server/inc/game/Game.h, server/src/game/Game.cpp, server/src/main.cpp
Game now constructed with GameServiceImpl*, drains actions/debug from sessions, records replay state, and uses gRPC server startup and login coordination; debug data model switched to DebugDataEntry.
Server: object headers adjustments
server/inc/object/*.h
Added explicit C++ override specifiers to tick() and damage() declarations across object types.

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Client Bot
    participant Bridge as grpc_bridge (client)
    participant Server as GameServiceImpl (gRPC)
    participant Game as Game Engine

    Client->>Bridge: grpc_bridge_connect(host,port)
    Bridge->>Server: Establish gRPC channel / Login RPC
    Server-->>Bridge: LoginResponse (config, team id)
    Bridge-->>Client: login complete

    Client->>Bridge: grpc_bridge_start_tick_stream()
    Bridge->>Server: Open TickStream (stream)
    Server-->>Bridge: TickSignal (tick, errors, gameOver)

    loop Per Tick
        Bridge-->>Client: grpc_bridge_wait_tick() returns TickSignal
        Client->>Bridge: grpc_bridge_create_unit / move / attack (unary RPCs)
        Bridge->>Server: Action RPCs → Server queues actions
        Client->>Bridge: grpc_bridge_send_debug_data() (unary)
        Bridge->>Server: SendDebugData RPC
        Client->>Bridge: grpc_bridge_end_turn() (stream message)
        Bridge->>Server: EndTurnRequest via TickStream
        Server->>Game: Process tick with drained actions/debug
        Server-->>Bridge: Next TickSignal
    end

    Client->>Bridge: grpc_bridge_shutdown()
    Bridge->>Server: Close stream / channel
    Server-->>Bridge: Cleanup ack
```
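For orientation, the flow above could correspond to a service definition along these lines. This is a sketch only: the service, RPC, and message names listed in the walkthrough (CoreGameService, Login, TickStream, TickSignal, DebugDataEntry, EndTurnRequest) are taken from this PR summary, but the field names, numbers, and stub messages are assumptions.

```proto
syntax = "proto3";

package coregame;

service CoreGameService {
  // Unary login, returns config and team id per the diagram above.
  rpc Login(LoginRequest) returns (LoginResponse);
  // Bidirectional stream: client sends EndTurnRequest, server pushes TickSignal.
  rpc TickStream(stream EndTurnRequest) returns (stream TickSignal);
  // Unary per-tick debug submission.
  rpc SendDebugData(DebugDataEntry) returns (Ack);
}

message LoginRequest  { uint32 team_id = 1; }
message LoginResponse { uint32 team_id = 1; }   // real response also carries config
message EndTurnRequest { uint64 tick = 1; }

message TickSignal {
  uint64 tick = 1;
  repeated string errors = 2;
  bool game_over = 3;
  uint64 winner_team_id = 4;
}

message DebugDataEntry {
  string key = 1;
  string value = 2;
}

message Ack {}
```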

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes


Suggested labels

server, client_lib, my-core-bot

Suggested reviewers

  • PaulicStudios
  • cuhlig42
  • FreddyMSchubert

Poem

🐰 I swapped old sockets for protobuf streams bright,

I stitched CMake paths in the moonlit night,
With grpc_bridge hops and sessions aligned,
The server and bots now converse, well-defined.
Hop on—this rabbit's migration took flight! 🥕

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

  • Docstring Coverage (⚠️ Warning): Docstring coverage is 16.67%, below the required threshold of 80.00%. Resolution: write docstrings for the functions that are missing them to satisfy the coverage threshold.
  • Title check (❓ Inconclusive): The title 'Grpc' is too vague and generic. While it refers to a real technology being introduced, it does not clearly summarize the main change from the developer's perspective. Consider using a more descriptive title such as 'Replace socket communication with gRPC' or 'Refactor networking to use gRPC instead of custom sockets' to better convey the scope and nature of the change.
✅ Passed checks (1 passed)
  • Description Check (✅ Passed): Check skipped because CodeRabbit's high-level summary is enabled.



Copilot AI left a comment


Pull request overview

This PR migrates CORE Game networking from a custom TCP/JSON socket protocol to a gRPC + protobuf API, updating both the C++ server and the C bot client library accordingly.

Changes:

  • Replaced socket-based server accept/bridge logic with CoreGameService gRPC service + bidirectional tick stream.
  • Updated the server game loop to coordinate turns via TickSignal/EndTurnRequest and to ingest actions/debug data via RPCs.
  • Replaced the C client library’s socket/JSON implementation with a C/C++ gRPC bridge and switched bot build systems to CMake.

Reviewed changes

Copilot reviewed 50 out of 54 changed files in this pull request and generated 11 comments.

Summary per file:
server/src/net/Server.cpp Removed legacy TCP listening socket implementation.
server/inc/net/Server.h Removed legacy server socket header.
server/src/net/Bridge.cpp Removed legacy TCP bridge (read/write threads, JSON framing).
server/inc/net/Bridge.h Removed legacy bridge header.
server/src/net/GrpcServer.cpp Added gRPC server wrapper (builder/start/shutdown).
server/inc/net/GrpcServer.h Added gRPC server wrapper interface.
server/src/net/GameServiceImpl.cpp Added gRPC service implementation (login, tick stream, actions, queries, debug).
server/inc/net/GameServiceImpl.h Added gRPC service interface and session-management API for the game loop.
server/src/net/PlayerSession.cpp Added per-team session state (stream handle, end-turn signaling, action/debug queues).
server/inc/net/PlayerSession.h Added session class definition used by gRPC threads + game loop.
server/src/main.cpp Switched startup flow to start gRPC server, wait for logins, and run Game with the service.
server/src/game/Game.cpp Updated main tick loop to use sessions/EndTurn instead of per-bridge JSON packets; stop sending state to clients.
server/inc/game/Game.h Updated Game to hold GameServiceImpl* and pending error storage for tick signals.
server/inc/object/Wall.h Marked overrides for tick()/damage().
server/inc/object/Unit.h Marked overrides for tick()/damage().
server/inc/object/GemPile.h Marked overrides for tick()/damage().
server/inc/object/Deposit.h Marked overrides for tick()/damage().
server/inc/object/Core.h Marked overrides for tick()/damage().
server/inc/object/Bomb.h Marked overrides for tick()/damage().
server/CMakeLists.txt Added gRPC/protobuf FetchContent + proto codegen and linked generated library into server.
proto/core_game.proto Added protobuf contract for config, tick lifecycle, actions, debug data, and query RPCs.
bots/c/softcore/my-core-bot/Makefile Switched bot build to CMake wrapper (cmake-build dir).
bots/c/softcore/my-core-bot/CMakeLists.txt Added CMake build for bot linking new client lib + proto target.
bots/c/softcore/gridmaster/Makefile Switched bot build to CMake wrapper (cmake-build dir).
bots/c/softcore/gridmaster/CMakeLists.txt Added CMake build for bot linking new client lib + proto target.
bots/c/hardcore/my-core-bot/Makefile Switched bot build to CMake wrapper (cmake-build dir).
bots/c/hardcore/my-core-bot/CMakeLists.txt Added CMake build for bot linking new client lib + proto target.
bots/c/hardcore/gridmaster/Makefile Switched bot build to CMake wrapper (cmake-build dir).
bots/c/hardcore/gridmaster/CMakeLists.txt Added CMake build for bot linking new client lib + proto target.
bots/c/client_lib/src/public/get.c Replaced local object lookups with gRPC query calls (server-authoritative).
bots/c/client_lib/src/public/actions.c Replaced per-tick action batching with direct gRPC action RPC calls.
bots/c/client_lib/src/public/debug.c Added missing include for string ops used by debug helpers.
bots/c/client_lib/src/con_lib.c Replaced TCP loop with gRPC login + tick stream + end-turn coordination.
bots/c/client_lib/src/internal/socket.c Removed legacy socket transport layer.
bots/c/client_lib/src/internal/json/* Removed custom JSON parser/encoder and state/config parsing code.
bots/c/client_lib/src/internal/grpc_bridge.h Added C ABI for gRPC bridge (connect/login/ticks/actions/queries/cache).
bots/c/client_lib/src/internal/grpc_bridge.cpp Added C++ gRPC client implementation + per-tick object cache.
bots/c/client_lib/inc/core_lib_internal.h Removed socket/JSON internals and exposed gRPC bridge instead.
bots/c/client_lib/inc/core_lib.h Added C++ linkage guards for C API compatibility.
bots/c/client_lib/inc/units.h Added C++ linkage guards for C API compatibility.
bots/c/client_lib/Makefile Switched client library build to CMake wrapper.
bots/c/client_lib/CMakeLists.txt Added CMake build for client lib + proto generation + gRPC FetchContent.
CLAUDE.md Added repository guidance doc (currently describes old TCP/JSON architecture).
.idea/.gitignore Added ignore rules for IntelliJ workspace artifacts.
.idea/vcs.xml Added IntelliJ VCS mapping file.
.idea/copilot.data.migration.ask2agent.xml Added IntelliJ Copilot migration state file.
.gitignore Added cmake-build/ ignore.
Files not reviewed (3)
  • .idea/.gitignore: Language not supported
  • .idea/copilot.data.migration.ask2agent.xml: Language not supported
  • .idea/vcs.xml: Language not supported


Comment on lines +121 to 131
```diff
 for (unsigned int teamId : connectedTeamIds)
 {
-	auto it = bridges.find(teamId);
-	if (it != bridges.end() && it->second->isDisconnected())
+	auto session = service.getSession(teamId);
+	if (session && session->isDisconnected())
 	{
 		ReplayEncoder::instance().setDeathReason(teamId, death_reason_t::DISCONNECTED);
 		ReplayEncoder::instance().setPlace(teamId, placeCounter--);
-	}
-	else if (it != bridges.end())
-	{
-		connectedTeamIds.push_back(teamId);
+		// Remove from active list
+		connectedTeamIds.erase(
+			std::remove(connectedTeamIds.begin(), connectedTeamIds.end(), teamId), connectedTeamIds.end());
 	}
```

Copilot AI Feb 21, 2026


connectedTeamIds is modified (erase/remove) while iterating with a range-based for loop, which is undefined behavior and can skip elements. Use an iterator-based loop, or build a new vector of still-connected teams, or do a single erase-remove pass after collecting the disconnected IDs.

Copilot uses AI. Check for mistakes.
Comment on lines +163 to +171
```cpp
unsigned int GameServiceImpl::extractTeamId(grpc::ServerContext *context)
{
	auto meta = context->client_metadata();
	auto it = meta.find("x-team-id");
	if (it != meta.end())
	{
		return static_cast<unsigned int>(std::stoul(std::string(it->second.data(), it->second.size())));
	}
	return 0;
```

Copilot AI Feb 21, 2026


extractTeamId() calls std::stoul() on untrusted metadata without validation/exception handling. A malformed or missing x-team-id value can throw and terminate the server, and the header is also trivially spoofable which allows a different client to impersonate a logged-in team for action RPCs. Consider validating/parsing defensively (catch exceptions, range-check) and binding requests to an authenticated session token rather than trusting a raw team-id header.
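A defensive parse along the lines suggested could look like this. It is a sketch, not the PR's code: `parseTeamId` is a hypothetical helper, and the length/range limits are assumptions. It avoids exceptions entirely rather than catching them, which sidesteps the server-termination risk (the spoofing concern still requires session tokens, which this does not address).

```cpp
#include <optional>
#include <string>

// Parse an untrusted "x-team-id" metadata value without throwing.
// Returns std::nullopt for empty, non-numeric, oversized, or zero input
// (0 is treated as the "unknown team" sentinel, matching the existing fallback).
std::optional<unsigned int> parseTeamId(const std::string &raw)
{
    if (raw.empty() || raw.size() > 9)        // 9 digits max fits in 32 bits
        return std::nullopt;
    unsigned long value = 0;
    for (char c : raw)
    {
        if (c < '0' || c > '9')               // digits only: no sign, no whitespace
            return std::nullopt;
        value = value * 10 + static_cast<unsigned long>(c - '0');
    }
    if (value == 0)
        return std::nullopt;
    return static_cast<unsigned int>(value);
}
```

The caller can then map `std::nullopt` to an `INVALID_ARGUMENT` gRPC status instead of letting an exception escape.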

Comment on lines +1 to +2
```cmake
cmake_minimum_required(VERSION 3.28)
set(CMAKE_POLICY_VERSION_MINIMUM 3.5 CACHE STRING "Allow projects requiring CMake < 3.5" FORCE)
```

Copilot AI Feb 21, 2026


Raising cmake_minimum_required from 3.20 to 3.28 is a breaking build-requirement change for anyone (CI/devcontainers) on older CMake. If there isn’t a hard dependency on 3.28 features, consider keeping the previous minimum or gating newer behavior behind version checks.

Comment on lines +16 to +17
```cmake
set(CMAKE_CXX_FLAGS_DEV "-O3 -DNDEBUG")
set(CMAKE_C_FLAGS_DEV "-O3 -DNDEBUG")
```

Copilot AI Feb 21, 2026


The client-lib dev build type sets -O3 -DNDEBUG, which disables asserts and makes debugging/ASAN less effective. If dev is intended to be a debug/sanitized config, consider using -O0/-Og and omitting -DNDEBUG (or renaming this config to reflect that it’s optimized).

Suggested change

```diff
-set(CMAKE_CXX_FLAGS_DEV "-O3 -DNDEBUG")
-set(CMAKE_C_FLAGS_DEV "-O3 -DNDEBUG")
+set(CMAKE_CXX_FLAGS_DEV "-Og -g")
+set(CMAKE_C_FLAGS_DEV "-Og -g")
```

Comment on lines +12 to +19
```
Bots (C or TS) <--TCP/JSON--> Server (C++) ---> Replay JSON files ---> Visualizer (Web)
```

- **Server** (`server/`): C++17 game engine, single source of truth. Manages game state, validates actions, produces replay files. Built with CMake, uses nlohmann_json, json-schema-validator, xxhash.
- **Client Libraries**: Protocol abstraction layers that handle TCP connection, JSON serialization, and game state diffing.
- C: `bots/c/client_lib/` — POSIX sockets, Makefile-built
- TypeScript: `bots/ts/client_lib/` — Node.js `net` module, runs on Bun

Copilot AI Feb 21, 2026


This doc still describes the architecture as TCP/JSON and the C client library as POSIX-sockets based, but this PR migrates to gRPC/protobuf. Update the architecture diagram and bullets so new contributors aren’t misled.

Comment on lines +1 to +6
```xml
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
  <component name="VcsDirectoryMappings">
    <mapping directory="$PROJECT_DIR$" vcs="Git" />
  </component>
</project>
```

(No newline at end of file.)

Copilot AI Feb 21, 2026


.idea/vcs.xml is typically IDE-generated per workspace and not needed for building/running the project. If it’s not intentionally part of repo configuration, consider removing it and relying on .gitignore to keep IDE metadata out of version control.

Suggested change

```diff
-<?xml version="1.0" encoding="UTF-8"?>
-<project version="4">
-  <component name="VcsDirectoryMappings">
-    <mapping directory="$PROJECT_DIR$" vcs="Git" />
-  </component>
-</project>
```

Comment on lines +26 to +30
```cpp
if (!stream_->Write(signal))
{
	Logger::LogWarn("Failed to write TickSignal to team " + std::to_string(teamId_) + ". Marking disconnected.");
	disconnected_ = true;
}
```

Copilot AI Feb 21, 2026


When stream_->Write() fails, the session is marked disconnected but waitForEndTurn() waiters are not notified. This can cause the game loop to block until timeout even though the session is already disconnected. Consider notifying endTurnCv_ (or reusing notifyEndTurn() / a dedicated disconnect notifier) whenever disconnected_ transitions to true.

Comment on lines +22 to +28
```c
/// Block until the server sends the next TickSignal.
/// Returns 0 on success, -1 on stream end / game over.
/// Fills out_tick, out_game_over, out_winner_team_id.
/// out_errors is a malloc'd NULL-terminated array of strdup'd strings (caller frees).
int grpc_bridge_wait_tick(unsigned long *out_tick, char ***out_errors, int *out_error_count, bool *out_game_over,
                          unsigned long *out_winner_team_id);
```


Copilot AI Feb 21, 2026


The comment for grpc_bridge_wait_tick says it returns -1 on "stream end / game over", but the implementation returns 0 when game_over is true and uses the out_game_over flag to signal it. Update the comment (or the behavior) so API users aren’t misled.

Comment on lines +68 to +76
```cmake
FetchContent_Declare(
    grpc
    GIT_REPOSITORY https://github.com/grpc/grpc.git
    GIT_TAG v1.62.1
    GIT_SHALLOW TRUE
    GIT_PROGRESS TRUE
    UPDATE_DISCONNECTED TRUE
)
FetchContent_MakeAvailable(grpc)
```

Copilot AI Feb 21, 2026


The FetchContent_Declare block fetches grpc from GitHub at configure time using only a Git tag (v1.62.1), which is a mutable reference. If the grpc repo or this tag is ever compromised or moved, your build system could transparently pull and execute attacker-controlled code during CI or release builds. To reduce supply-chain risk, pin this dependency to an immutable commit SHA (or otherwise verified artifact) instead of a tag and avoid unauthenticated, mutable references for build-time code execution.
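One hedged way to apply the pinning suggestion is shown below. The SHA placeholder must be replaced with the actual commit that v1.62.1 resolves to (left elided here); note that `GIT_SHALLOW` generally cannot be combined with an arbitrary commit SHA, since shallow clones only fetch refs.

```cmake
# Sketch: pin gRPC to an immutable commit rather than a mutable tag.
FetchContent_Declare(
    grpc
    GIT_REPOSITORY https://github.com/grpc/grpc.git
    GIT_TAG        <full-40-char-sha-of-v1.62.1>  # placeholder, not a real pin
    GIT_SHALLOW    FALSE   # shallow fetch of an arbitrary SHA is unreliable
    GIT_PROGRESS   TRUE
)
FetchContent_MakeAvailable(grpc)
```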

Comment on lines +44 to +52
```cmake
FetchContent_Declare(
    grpc
    GIT_REPOSITORY https://github.com/grpc/grpc.git
    GIT_TAG v1.62.1
    GIT_SHALLOW TRUE
    GIT_PROGRESS TRUE
    UPDATE_DISCONNECTED TRUE
)
FetchContent_MakeAvailable(grpc)
```

Copilot AI Feb 21, 2026


The client library CMake config uses FetchContent_Declare to pull grpc from https://github.com/grpc/grpc.git by a mutable Git tag (v1.62.1) at build time. If that tag is retargeted or the upstream repository is compromised, builds of core_lib and dependent bots could ingest and execute malicious code with your CI or developer privileges. Pin this dependency to an immutable commit SHA (or checked-in source) and, where possible, add integrity verification to harden the supply chain.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 16

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
bots/c/client_lib/src/public/get.c (1)

71-74: ⚠️ Potential issue | 🟡 Minor

Potential memory leak if core_get_objs_filter contract changes.

Currently core_get_objs_filter returns NULL when no objects match, so !objects short-circuits and the !objects[0] branch is dead code. However, if core_get_objs_filter were ever changed to return an empty {NULL}-terminated array instead, this would leak the objects allocation.

🛡️ Defensive fix
```diff
 	t_obj **objects = core_get_objs_filter(condition);
-	if (!objects || !objects[0]) return NULL;
+	if (!objects || !objects[0])
+	{
+		free(objects);
+		return NULL;
+	}
```
🧹 Nitpick comments (23)
.gitignore (1)

3-3: cmake-build/ pattern may miss CLion's default build directories.

CLion typically generates cmake-build-debug/ and cmake-build-release/ rather than a bare cmake-build/. The current pattern won't ignore those variants, and they could appear as untracked files for developers using CLion.

♻️ Proposed fix
```diff
-cmake-build/
+cmake-build*/
```
.idea/.gitignore (1)

1-10: Consider ignoring .idea/ at the root level instead of committing it.

Committing the .idea/ directory ties the project to JetBrains tooling and can cause friction for contributors using other editors. Per-developer IDE state (run configs, local paths, shelf) drifts over time even with the .gitignore rules here in place.

The conventional approach is to add .idea/ to the root .gitignore and share only deliberately curated IDE config files if needed (e.g., .idea/runConfigurations/).

bots/c/hardcore/gridmaster/Makefile (1)

3-4: BUILD_DIR should use ?= for consistency with BUILD_TYPE.

BUILD_TYPE ?= dev allows command-line override, but BUILD_DIR = cmake-build does not. Using ?= for both keeps the interface consistent.

✨ Suggested change
```diff
-BUILD_DIR = cmake-build
+BUILD_DIR ?= cmake-build
 BUILD_TYPE ?= dev
```
bots/c/softcore/gridmaster/CMakeLists.txt (1)

4-6: CMAKE_C_STANDARD_REQUIRED ON is missing — same asymmetry as my-core-bot/CMakeLists.txt.

🔧 Proposed fix
```diff
 set(CMAKE_C_STANDARD 11)
+set(CMAKE_C_STANDARD_REQUIRED ON)
 set(CMAKE_CXX_STANDARD 17)
 set(CMAKE_CXX_STANDARD_REQUIRED ON)
```
bots/c/softcore/my-core-bot/CMakeLists.txt (2)

4-6: CMAKE_C_STANDARD_REQUIRED ON is missing.

CMAKE_CXX_STANDARD_REQUIRED ON is set (Line 6) but there is no equivalent for C. Without it, a compiler that does not fully support C11 will silently fall back to an older standard rather than erroring.

🔧 Proposed fix
```diff
 set(CMAKE_C_STANDARD 11)
+set(CMAKE_C_STANDARD_REQUIRED ON)
 set(CMAKE_CXX_STANDARD 17)
 set(CMAKE_CXX_STANDARD_REQUIRED ON)
```

12-12: Generic target name bot risks conflicts in a unified build.

If a parent CMakeLists.txt ever pulls multiple bots into one build via add_subdirectory(), two targets named bot will cause a CMake error. Naming the target after the bot (my_core_bot) is safer and mirrors the pattern used by gridmaster.

🔧 Proposed fix
```diff
-add_executable(bot ${BOT_SOURCES})
-target_include_directories(bot PRIVATE inc ${CMAKE_CURRENT_SOURCE_DIR}/../../client_lib/inc)
-target_link_libraries(bot PRIVATE core_lib core_game_proto)
-target_compile_options(bot PRIVATE -Wall -Wextra -Werror)
-set_target_properties(bot PROPERTIES RUNTIME_OUTPUT_DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}")
+add_executable(my_core_bot ${BOT_SOURCES})
+target_include_directories(my_core_bot PRIVATE inc ${CMAKE_CURRENT_SOURCE_DIR}/../../client_lib/inc)
+target_link_libraries(my_core_bot PRIVATE core_lib core_game_proto)
+target_compile_options(my_core_bot PRIVATE -Wall -Wextra -Werror)
+set_target_properties(my_core_bot PROPERTIES RUNTIME_OUTPUT_DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}")
```
bots/c/hardcore/gridmaster/CMakeLists.txt (1)

1-16: CMake configuration looks reasonable for the bot build.

Minor inconsistencies compared to the server's CMakeLists.txt:

  • CMAKE_C_STANDARD_REQUIRED is not set to ON (line 4 sets the standard but doesn't enforce it).
  • CMAKE_CXX_EXTENSIONS is not set to OFF (the server disables it).

These are non-blocking but worth aligning for consistency.

Consistency fix
```diff
 set(CMAKE_C_STANDARD 11)
+set(CMAKE_C_STANDARD_REQUIRED ON)
 set(CMAKE_CXX_STANDARD 17)
 set(CMAKE_CXX_STANDARD_REQUIRED ON)
+set(CMAKE_CXX_EXTENSIONS OFF)
```
server/CMakeLists.txt (1)

52-76: FetchContent for gRPC will significantly increase build times.

Building gRPC from source via FetchContent pulls in abseil, protobuf, re2, and other transitive dependencies. First builds can easily take 30+ minutes. This is a valid choice for reproducibility, but consider documenting the expected build time or providing a find_package(gRPC) fallback for developers who have gRPC pre-installed.
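A possible shape for that fallback is sketched below. This is a suggestion, not the PR's code; the exact package and target names exported by a system gRPC install (CONFIG-mode `gRPC`/`Protobuf`) may need adjusting for a given distribution.

```cmake
# Prefer a system-installed gRPC; build from source only when absent.
find_package(gRPC CONFIG QUIET)
find_package(Protobuf CONFIG QUIET)

if(gRPC_FOUND AND Protobuf_FOUND)
  message(STATUS "Using system gRPC ${gRPC_VERSION}")
else()
  message(STATUS "System gRPC not found; building from source (first build may take 30+ minutes)")
  include(FetchContent)
  FetchContent_Declare(
      grpc
      GIT_REPOSITORY https://github.com/grpc/grpc.git
      GIT_TAG v1.62.1
  )
  FetchContent_MakeAvailable(grpc)
endif()
```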

bots/c/client_lib/src/public/get.c (1)

32-69: Every call to core_get_objs_filter triggers a full GetAllObjects RPC.

Functions like core_get_obj_filter_nearest and core_get_objs_filter_count each invoke core_get_objs_filter, which in turn makes a full grpc_bridge_get_all_objects() RPC. If a bot calls multiple filter/count functions in a single tick, this results in redundant round-trips fetching the entire object list each time.

Consider caching the all-objects result for the duration of a tick (the cache is already cleared per-tick in con_lib.c), or documenting that bots should minimize filter calls.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bots/c/client_lib/src/public/get.c` around lines 32 - 69,
core_get_objs_filter currently calls grpc_bridge_get_all_objects on every
invocation causing redundant RPCs; change core_get_objs_filter to use a per-tick
cached t_obj ** array instead of calling grpc_bridge_get_all_objects each time:
add or use an existing global/static cache variable (referencing
grpc_bridge_get_all_objects and core_get_objs_filter) that is populated on first
call in a tick and returned thereafter, and rely on the existing per-tick cache
clear in con_lib.c to reset the cache between ticks; ensure you still free only
the container (not the objects) when replacing/clearing the cache and handle
allocation failures consistently as before.
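The per-tick caching pattern described above can be sketched as follows. This is a self-contained illustration, not the repository's code: `fetch_all_objects` is a hypothetical stand-in for `grpc_bridge_get_all_objects()`, and the `t_obj` struct is reduced to a stub.

```cpp
#include <cassert>
#include <cstdlib>

// Hypothetical stand-ins for the client lib's types and RPC.
struct t_obj { int id; };

static int g_rpc_calls = 0;                      // counts simulated round-trips
static t_obj g_world[3] = {{1}, {2}, {3}};

static t_obj **fetch_all_objects() {             // simulated GetAllObjects RPC
    ++g_rpc_calls;
    t_obj **arr = (t_obj **)std::malloc(4 * sizeof(t_obj *));
    for (int i = 0; i < 3; ++i) arr[i] = &g_world[i];
    arr[3] = nullptr;                            // NULL-terminated, as in the C API
    return arr;
}

static t_obj **g_tick_cache = nullptr;

// All filter/count helpers go through this accessor instead of the RPC,
// so multiple filter calls in one tick cost a single round-trip.
t_obj **get_objects_cached() {
    if (!g_tick_cache)
        g_tick_cache = fetch_all_objects();
    return g_tick_cache;
}

// Called once per tick (analogous to the per-tick clear in con_lib.c):
// free only the container array, not the objects it points to.
void tick_cache_clear() {
    std::free(g_tick_cache);
    g_tick_cache = nullptr;
}
```

Repeated lookups within a tick then reuse the cached array, and the tick boundary resets it.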
server/src/main.cpp (1)

23-36: Duplicated sanitization logic.

The comment on line 23 acknowledges this is duplicated from GameServiceImpl. Consider extracting this into a shared utility function to keep sanitization rules in sync.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/main.cpp` around lines 23 - 36, This block duplicates the
sanitization logic used in GameServiceImpl (sanitized/rawEnv/MAX_LEN); extract
it into a shared utility function (e.g., a sanitizeEnvName or
sanitizeIdentifier) in a common utility/header so both main.cpp and
GameServiceImpl call the same function, preserve the current allowed chars and
MAX_LEN behavior, return the sanitized string (or empty) and replace the
in-place loop in main.cpp with a call to that new function to keep rules in
sync.
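A shared sanitization helper along the lines the prompt suggests might look like this. The allowed character set (alphanumerics, `_`, `-`) and the 64-character cap are assumptions for illustration; the real rules live in GameServiceImpl and should be carried over verbatim.

```cpp
#include <cassert>
#include <cctype>
#include <string>

// Hypothetical shared helper: one place for the sanitization rules so
// main.cpp and GameServiceImpl cannot drift apart. The allowed set and
// maxLen default here are illustrative, not the repository's actual values.
std::string sanitizeIdentifier(const std::string &raw, std::size_t maxLen = 64) {
    std::string out;
    out.reserve(raw.size());
    for (unsigned char c : raw) {
        if (out.size() >= maxLen)
            break;                                // enforce MAX_LEN
        if (std::isalnum(c) || c == '_' || c == '-')
            out.push_back(static_cast<char>(c));  // keep allowed chars only
    }
    return out;
}
```

Both call sites would then call `sanitizeIdentifier(rawEnv)` instead of duplicating the loop.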
bots/c/client_lib/CMakeLists.txt (2)

44-52: FetchContent for gRPC will significantly increase build times.

Building gRPC from source via FetchContent can take 20-30+ minutes on first build. This is a pragmatic choice that avoids requiring a system-installed gRPC, but consider documenting the expected build time or exploring pre-built packages (e.g., via find_package(gRPC) with a FetchContent fallback) to improve developer experience.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bots/c/client_lib/CMakeLists.txt` around lines 44 - 52, The CMakeLists
currently always builds gRPC from source via
FetchContent_Declare/FetchContent_MakeAvailable which dramatically increases
first-build time; change this to prefer a system or prebuilt gRPC by calling
find_package(gRPC REQUIRED) first and only fall back to
FetchContent_Declare(grpc) / FetchContent_MakeAvailable(grpc) if
find_package fails, and add a short comment or README note near this logic
mentioning the expected 20–30+ minute build time when the fallback runs so
developers know what to expect.

16-22: dev build type uses -O3 -DNDEBUG which disables assertions — contradicts the ASan intent.

The "dev" build type sets -O3 -DNDEBUG (lines 16-17), which disables assert() and optimizes aggressively. Then lines 20-22 enable ASan for this same "dev" type. ASan is most useful in debug/low-optimization builds where stack frames are preserved for readable reports. Consider using -O0 -g or at most -O1 -g for the dev build type instead.

♻️ Suggested fix
-set(CMAKE_CXX_FLAGS_DEV "-O3 -DNDEBUG")
-set(CMAKE_C_FLAGS_DEV   "-O3 -DNDEBUG")
+set(CMAKE_CXX_FLAGS_DEV "-O1 -g")
+set(CMAKE_C_FLAGS_DEV   "-O1 -g")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bots/c/client_lib/CMakeLists.txt` around lines 16 - 22, The dev build
currently forces -O3 -DNDEBUG via CMAKE_CXX_FLAGS_DEV / CMAKE_C_FLAGS_DEV which
disables assertions and conflicts with CORE_LIB_ASAN; change the dev
configuration so when CMAKE_BUILD_TYPE STREQUAL "dev" you set
CMAKE_CXX_FLAGS_DEV and CMAKE_C_FLAGS_DEV to a low-optimization, debug-friendly
setting (e.g. -O0 -g or -O1 -g) and remove -DNDEBUG, then keep CORE_LIB_ASAN ON
for that build so ASan reports remain readable; update the conditional around
CMAKE_BUILD_TYPE "dev" to set the debug flags and enable CORE_LIB_ASAN
accordingly.
bots/c/client_lib/src/internal/grpc_bridge.cpp (2)

172-198: Connect loop has no maximum retry count — infinite loop on persistent failures.

grpc_bridge_connect loops forever if the channel never reaches READY or SHUTDOWN (e.g., stays in TRANSIENT_FAILURE). Consider adding a max retry count or total timeout to prevent the client from hanging indefinitely.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bots/c/client_lib/src/internal/grpc_bridge.cpp` around lines 172 - 198,
grpc_bridge_connect currently spins forever waiting for g_channel to become
GRPC_CHANNEL_READY; modify the connect loop in grpc_bridge_connect to enforce a
maximum retry count or overall timeout (e.g., total wait duration) while using
g_channel->GetState/WaitForStateChange, so that if the channel stays in
TRANSIENT_FAILURE or never reaches READY within the limit you break and return
an error (like -1); update logic around the existing state checks
(GRPC_CHANNEL_READY, GRPC_CHANNEL_SHUTDOWN) to also handle the timeout/retry
exhaustion case and log or return a clear failure code.

231-243: grpc_bridge_shutdown calls Finish() after WritesDone() — verify this doesn't block indefinitely.

If the server has already closed the stream or the channel is unhealthy, Finish() could potentially block. Consider adding a deadline to the tick context (g_tick_ctx) before calling Finish().

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bots/c/client_lib/src/internal/grpc_bridge.cpp` around lines 231 - 243,
grpc_bridge_shutdown currently calls g_tick_stream->WritesDone() followed by
g_tick_stream->Finish(), which can block if the server closed the stream or the
channel is unhealthy; before calling Finish() set a short deadline on the
existing g_tick_ctx (or create a new grpc::ClientContext with a deadline) so
Finish() will time out instead of blocking indefinitely—modify
grpc_bridge_shutdown to set g_tick_ctx's deadline (e.g., now + small timeout)
prior to calling g_tick_stream->Finish(), then handle a timeout or non-OK status
and proceed to reset g_tick_stream, g_tick_ctx, g_stub, and g_channel and call
grpc_bridge_cache_clear().
server/src/game/Game.cpp (4)

75-102: preFailures is always empty — dead code path in tick().

preFailures is declared on line 77 but never populated before being passed to tick() on line 102. Inside tick() (lines 148–151), it still iterates over preFailures and prefixes them with tick info, but this loop body never executes. Either remove the parameter and the loop in tick(), or document why it's kept as a placeholder.

♻️ Suggested cleanup
-		std::vector<std::pair<int, std::string>> preFailures;
...
-		tick(tickCount, actions, serverStartTime, preFailures, debugDataEntries);
+		tick(tickCount, actions, serverStartTime, debugDataEntries);

And update the tick() signature and body accordingly to remove the unused preFailures parameter and the dead loop at lines 148–151.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/game/Game.cpp` around lines 75 - 102, preFailures is created in
Game::run loop but never populated before being passed into tick(tickCount,
actions, serverStartTime, preFailures, debugDataEntries), leaving the loop over
preFailures in tick() dead; either remove the preFailures variable and the
corresponding parameter and loop from tick(), or implement population logic
where appropriate (e.g., collect session->preFailure() results) — update the
call site in the run loop (remove preFailures from the tick call) and adjust
tick()'s signature and body (remove the preFailures parameter and the loop that
prefixes tick info) so symbols to change include the local variable preFailures,
the tick(...) declaration/definition, and the tick(...) invocation in Game.cpp.

55-63: Log message says "Disconnecting" before checking whether already disconnected.

Line 57 logs "did not send EndTurn in time. Disconnecting." but then on line 58 checks if (session->isDisconnected()) to determine a different death reason (DISCONNECTED vs TIMEOUT_SENDING_DATA). The log is slightly misleading since the session may have already been disconnected by the TickStream ending. Consider adjusting the log to be more precise.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/game/Game.cpp` around lines 55 - 63, The log message in the
EndTurn timeout handling (Logger::LogWarn) is misleading because it
unconditionally says "Disconnecting." before checking session->isDisconnected();
change the logic to first check session->isDisconnected() and log an accurate
message (e.g., "already disconnected" vs "did not send EndTurn in time;
disconnecting"), or make the warn conditional so Logger::LogWarn reflects the
actual path taken; keep the subsequent calls to
ReplayEncoder::instance().setDeathReason(...) and session->setDisconnected()
unchanged.

466-472: failures parameter is intentionally unused — consider removing it from the signature.

The /*failures*/ parameter is no longer used in recordReplayState since errors are now routed through pendingErrors_. Keeping it in the signature is unnecessary.

♻️ Suggested cleanup
-void Game::recordReplayState(std::vector<std::pair<std::unique_ptr<Action>, Core *>> &actions, unsigned long long tick,
-							 std::vector<std::pair<int, std::string>> & /*failures*/)
+void Game::recordReplayState(std::vector<std::pair<std::unique_ptr<Action>, Core *>> &actions, unsigned long long tick)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/game/Game.cpp` around lines 466 - 472, The parameter 'failures' in
Game::recordReplayState is unused and should be removed: update the declaration
and definition of Game::recordReplayState to drop the std::vector<std::pair<int,
std::string>> &failures parameter, adjust any virtual/overridden declarations if
present, and update all call sites to stop passing that argument; ensure you
rely on pendingErrors_ for error routing and rebuild to catch any missed
references (search for recordReplayState and the symbol 'failures' to find
places to update).

49-73: Sequential waitForEndTurn means total worst-case wait is N × maxWait.

Each session is awaited sequentially. If a session times out after maxWait ms, the next session's wait starts only then. With many teams, this compounds. Consider waiting in parallel (e.g., start all waits then join) if latency becomes an issue.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/game/Game.cpp` around lines 49 - 73, The current loop calls
session->waitForEndTurn(maxWait) sequentially, causing worst-case total wait of
N×maxWait; change to initiate all waits in parallel (e.g., spawn std::async
futures or worker threads for each Session pointer) and collect their boolean
results, then iterate the sessions and for any where the wait returned false
perform the existing disconnection handling (use session->getTeamId(),
Logger::LogWarn, ReplayEncoder::instance().setDeathReason, setPlace,
session->setDisconnected(), Board::instance().getCoreByTeamId / removeObjectById
and Logger::Log) so behavior is identical but the wait is bounded to roughly
maxWait total rather than multiplied by session count.
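The parallel-wait idea above can be sketched with `std::async`. `Session` here is a hypothetical stand-in whose `waitForEndTurn` simulates a bot replying after some latency; only the fan-out/join structure is the point.

```cpp
#include <algorithm>
#include <cassert>
#include <chrono>
#include <future>
#include <thread>
#include <vector>

// Hypothetical stand-in for the server's PlayerSession.
struct Session {
    std::chrono::milliseconds replyAfter;  // simulated bot latency
    bool waitForEndTurn(std::chrono::milliseconds maxWait) const {
        std::this_thread::sleep_for(std::min(replyAfter, maxWait));
        return replyAfter <= maxWait;      // false == timed out
    }
};

// Start every session's wait concurrently, then join: total wall time is
// roughly maxWait, not N x maxWait.
std::vector<bool> waitAllEndTurns(const std::vector<Session> &sessions,
                                  std::chrono::milliseconds maxWait) {
    std::vector<std::future<bool>> futures;
    futures.reserve(sessions.size());
    for (const auto &s : sessions)
        futures.push_back(std::async(std::launch::async,
                                     [&s, maxWait] { return s.waitForEndTurn(maxWait); }));
    std::vector<bool> results;
    results.reserve(futures.size());
    for (auto &f : futures)
        results.push_back(f.get());
    return results;
}
```

The existing per-session disconnection handling would then run in a second, sequential pass over the collected results.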
proto/core_game.proto (3)

1-3: Buf lint: package directory mismatch.

Buf expects core_game package files to reside in a core_game/ directory, but the file is in proto/. This is a convention issue flagged by Buf's PACKAGE_DIRECTORY_MATCH rule. Consider either moving the file to proto/core_game/core_game.proto or adjusting the Buf configuration if you don't want to enforce this rule.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@proto/core_game.proto` around lines 1 - 3, The proto file declares package
core_game but lives in proto/, causing Buf's PACKAGE_DIRECTORY_MATCH lint; fix
by moving this file to a matching directory (e.g.,
proto/core_game/core_game.proto) or update Buf config to relax/enforce different
package-directory mapping; locate the declaration "package core_game;" in
proto/core_game.proto when applying the change and ensure imports/paths in build
scripts and any references are updated to the new location if you move the file.

15-17: Consider using a proto enum for ObjectType instead of bare int32 type.

Using int32 type loses self-documentation and type-safety benefits that a proto enum would provide. Clients and server would then share a single canonical definition of object types, reducing risk of mismatched magic numbers.

♻️ Example
+enum ObjectType {
+  OBJECT_TYPE_UNSPECIFIED = 0;
+  OBJECT_TYPE_CORE        = 1;
+  OBJECT_TYPE_UNIT        = 2;
+  OBJECT_TYPE_DEPOSIT     = 3;
+  OBJECT_TYPE_WALL        = 4;
+  OBJECT_TYPE_BOMB        = 5;
+  OBJECT_TYPE_GEM_PILE    = 6;
+}
+
 message GameObject {
   uint32 id                         = 1;
-  int32  type                       = 2;   // ObjectType enum value
+  ObjectType type                   = 2;
   Position pos                      = 3;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@proto/core_game.proto` around lines 15 - 17, Replace the bare int32 "type"
field on message GameObject with a typed proto enum: define an enum ObjectType
(e.g., OBJECT_TYPE_UNSPECIFIED = 0, OBJECT_TYPE_CORE = 1, OBJECT_TYPE_UNIT = 2, etc.) in this proto and change the
field declaration to use ObjectType type = 2; and update the inline comment
accordingly; also search for usages of GameObject.type in server and client
codegen to update any switch/case or literal magic numbers to the new ObjectType
enum symbols (e.g., OBJECT_TYPE_CORE) so producers/consumers use the canonical
enum.

85-89: Hardcoded password "42" — acceptable for a game server but worth noting.

The LoginRequest sends a plaintext password and the server checks against "42". Combined with InsecureChannelCredentials on the client side, this provides minimal security. Fine for a local game environment but should not be used in production-facing contexts.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@proto/core_game.proto` around lines 85 - 89, The LoginRequest proto currently
carries a plaintext password and the codebase checks against a hardcoded value
("42"); replace this insecure setup by removing the hardcoded-secret check and
wiring authentication to a configurable/secure store (e.g., environment
variable, config, or secret manager) and/or hashed credentials, and switch
clients using InsecureChannelCredentials to use a secure transport (TLS/secure
channel) so passwords are not sent in plain text; locate usages of the
LoginRequest message and any authentication check that compares against "42" and
update them to validate against the configurable/hashed secret and enable secure
channel credentials on the client side.
server/src/net/GameServiceImpl.cpp (2)

55-61: Nested lock acquisition in waitForAllLogins predicate is safe but unconventional.

The predicate lambda acquires sessionsMutex_ while loginMutex_ is already held by the condition variable wait. This works because Login() never acquires loginMutex_ (it only calls loginCv_.notify_all() after releasing sessionsMutex_), so no deadlock occurs. A brief comment here would help future maintainers understand the lock ordering.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/net/GameServiceImpl.cpp` around lines 55 - 61, The predicate in
GameServiceImpl::waitForAllLogins acquires sessionsMutex_ while loginMutex_ is
held by the condition_variable wait; add a concise comment above the predicate
(or inside the lambda) explaining the lock ordering and why this is
safe—specifically note that Login() never takes loginMutex_ (it only takes
sessionsMutex_ and calls loginCv_.notify_all() after releasing it), so the
current ordering cannot deadlock; reference waitForAllLogins, the predicate
lambda, loginMutex_, sessionsMutex_, and Login()/loginCv_.notify_all() in the
comment for future maintainers.

326-437: Action RPCs reconstruct JSON to pass to Action constructors — consider direct construction.

All action handlers (CreateUnit, Move, Attack, TransferGems, Build) serialize proto fields into nlohmann::json just to pass them to Action constructors. This adds unnecessary serialization overhead and couples the new gRPC path to the legacy JSON-based action parsing. Consider adding constructors to Action subclasses that accept proto types or raw fields directly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/net/GameServiceImpl.cpp` around lines 326 - 437, Handlers
CreateUnit, Move, Attack, TransferGems and Build currently build nlohmann::json
and pass it into Action subclasses (CreateAction, MoveAction, AttackAction,
TransferGemsAction, BuildAction) which causes unnecessary serialization and
couples gRPC to the JSON parser; add overloaded constructors or factory methods
on those Action subclasses that accept the corresponding proto types or raw
fields (e.g., CreateAction(const core_game::CreateUnitRequest&),
MoveAction(unit_id, x, y) or MoveAction(const core_game::MoveRequest&), etc.),
update GameServiceImpl::CreateUnit, ::Move, ::Attack, ::TransferGems and ::Build
to call the new constructors directly (removing json construction), and keep the
existing json-based constructors for backward compatibility if needed.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.idea/copilot.data.migration.ask2agent.xml:
- Around line 1-6: The committed IDE metadata file
(.idea/copilot.data.migration.ask2agent.xml) should be excluded: add "/.idea/"
to the repository .gitignore, remove this specific file from version control
(e.g., git rm --cached .idea/copilot.data.migration.ask2agent.xml) and commit
the updated .gitignore so Ask2AgentMigrationStateService and its migrationStatus
artifact are no longer tracked; ensure no other .idea/* files remain staged
before pushing.

In @.idea/vcs.xml:
- Around line 1-6: The repository currently contains IDE-specific settings
(.idea/vcs.xml with the VcsDirectoryMappings component); add a .idea/ entry to
.gitignore and remove the tracked .idea directory from version control so IDE
files stop being committed. Update .gitignore to include a top-level ".idea/"
rule, then run git rm --cached -r .idea (or equivalent) to untrack the existing
.idea files (including vcs.xml) and commit the change with a message like
"Ignore IDE config (.idea)"; ensure no other necessary project-level files are
left only in .idea before removing.

In `@bots/c/client_lib/src/con_lib.c`:
- Around line 33-43: Remove the misleading (void)argc and validate argc before
using argv[1]: check that argc >= 2 (e.g., in the same initialization block
where tick_callback is checked) and return an error message if not, then parse
game.my_team_id = atoi(argv[1]); keep the existing tick_callback null check
intact and only access argv[1] after the argc check to avoid undefined behavior.
- Around line 116-120: The calls to grpc_bridge_send_debug_data() and
grpc_bridge_end_turn() currently ignore their int return values; check each
return and handle failures: call grpc_bridge_send_debug_data(), if it returns
non-zero log an error including the return code and relevant context, then still
call core_internal_reset_debugData(); call grpc_bridge_end_turn(), if it returns
non-zero log the failure (with return code and client id/context) and take a
safe action such as retrying the end turn a limited number of times or aborting
the current flow to avoid hanging grpc_bridge_wait_tick() on the next cycle;
ensure all logs use the existing logging facility so failures are visible for
debugging.

In `@bots/c/client_lib/src/internal/grpc_bridge.cpp`:
- Around line 484-502: The returned t_obj** arrays allocated in
grpc_bridge_get_my_units and grpc_bridge_get_all_objects are leaking because
only individual t_obj* elements are tracked by cache_store; modify the cache to
also track these allocated arrays and free them when grpc_bridge_cache_clear()
runs. Concretely, add a mechanism (e.g., a new cache record or function such as
cache_store_array) to register the malloc'ed t_obj** returned by
grpc_bridge_get_my_units and grpc_bridge_get_all_objects (store the pointer,
marking that only the array itself should be freed), call that registration
right after malloc before returning, and update grpc_bridge_cache_clear to
iterate and free those registered arrays (only free the array buffers, not the
cached t_obj* elements).
- Around line 36-53: In cache_store, the realloc call can return NULL and
overwrite g_cache, leaking the original buffer; update the growth branch
(symbols: g_cache, g_cache_capacity, g_cache_count, realloc) to first save the
old pointer into a temporary (e.g. old = g_cache), call realloc into a temp
pointer, check if the result is NULL, and on failure restore g_cache and handle
the error (return NULL or abort/log) instead of overwriting; on success assign
the temp back to g_cache and continue; ensure any subsequent malloc for t_obj
still checks for NULL before using the pointer.
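The leak-safe growth pattern described above looks like this in isolation (names mirror the prompt; the growth factor and initial capacity are illustrative):

```cpp
#include <cassert>
#include <cstdlib>

struct t_obj;  // opaque; only pointers are stored

static t_obj **g_cache = nullptr;
static size_t g_cache_capacity = 0;

// Grow the cache so it can hold at least `needed` entries.
// Returns 0 on success, -1 on allocation failure (g_cache stays valid).
int cache_ensure_capacity(size_t needed) {
    if (needed <= g_cache_capacity)
        return 0;
    size_t new_cap = g_cache_capacity ? g_cache_capacity * 2 : 16;
    if (new_cap < needed)
        new_cap = needed;
    // Do NOT assign realloc's result straight to g_cache: on failure it
    // returns NULL and the old buffer would be both leaked and lost.
    t_obj **grown = (t_obj **)std::realloc(g_cache, new_cap * sizeof(t_obj *));
    if (!grown)
        return -1;            // original g_cache untouched; caller can recover
    g_cache = grown;
    g_cache_capacity = new_cap;
    return 0;
}
```

On failure the caller sees -1 while the existing cache contents remain intact, which is the property the original `g_cache = realloc(g_cache, ...)` shape cannot guarantee.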
- Around line 122-170: parse_config is not copying cfg.bomb_hp() into
game.config.bomb_hp, leaving bomb_hp uninitialized; add an assignment setting
game.config.bomb_hp = cfg.bomb_hp() in parse_config (in the sequence after
game.config.wall_build_cost = cfg.wall_build_cost() and before
game.config.bomb_countdown = cfg.bomb_countdown()) so the bomb HP from
GameConfigProto is propagated into the game.config structure.

In `@bots/c/hardcore/gridmaster/Makefile`:
- Around line 18-19: The prod Makefile target can reuse an existing build cache
and end up using the wrong CMAKE_BUILD_TYPE; update the prod target so the
configure step cannot be skipped by an existing $(BUILD_DIR). Either: change the
build directory to include the build type (e.g. use $(BUILD_DIR)-$(BUILD_TYPE)
or similar) so CMake caches are separate per CMAKE_BUILD_TYPE, or make prod
remove/clean $(BUILD_DIR) before configuring (e.g. run a clean/rm -rf
$(BUILD_DIR) then re-run cmake -DCMAKE_BUILD_TYPE=$(BUILD_TYPE)). Ensure
references to BUILD_DIR and CMAKE_BUILD_TYPE in the Makefile and the prod target
are updated accordingly.

In `@CLAUDE.md`:
- Around line 12-14: Add a language specifier to the fenced code block
containing the architecture diagram (the block with "Bots (C or TS)
<--TCP/JSON--> Server (C++) ---> Replay JSON files ---> Visualizer (Web)") by
changing the opening fence from ``` to ```text (or ```plaintext) so the block is
treated as plain text and satisfies markdownlint MD040.
- Around line 13-19: Update the stale architecture description to reflect the
new gRPC/Protobuf stack: replace mentions of "TCP/JSON", "JSON serialization",
and "POSIX sockets" with "gRPC streaming" and "Protobuf" and note that the C
client library is now built with CMake; specifically edit the "Server
(`server/`)" bullet and the "Client Libraries" sub-bullets (the lines
referencing Server (`server/`), C: `bots/c/client_lib/`, and TypeScript:
`bots/ts/client_lib/`) to state that the protocol is gRPC streams over Protobuf,
the client libraries use the Protobuf-generated stubs, and the C client is
CMake-built (update any runtime notes about Node/Bun as needed for TS).

In `@server/inc/net/GameServiceImpl.h`:
- Around line 86-88: extractTeamId currently uses 0 as a sentinel which can
match a valid team and let unauthenticated callers access GetMyCore and
GetMyUnits; change extractTeamId(grpc::ServerContext*) to return
std::optional<unsigned int> (or otherwise signal failure explicitly) and update
all call sites (GetMyCore, GetMyUnits, any places that currently call
extractTeamId) to check the optional and reject the request when missing,
keeping existing getSession-based validation for action methods (TickStream,
CreateUnit, etc.); alternatively, if you choose to disallow team id 0 globally,
enforce that in extractTeamId and document it, but do not rely on 0 as an error
sentinel anywhere in the codebase.
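The failure-signalling parse suggested above might look like this. The metadata lookup is stubbed out (only the string-to-id validation is shown), and `parseTeamId` is a hypothetical helper name:

```cpp
#include <cassert>
#include <optional>
#include <stdexcept>
#include <string>

// Parse a team id from raw metadata text. Returns std::nullopt on any
// malformed input instead of an ambiguous 0 sentinel.
std::optional<unsigned int> parseTeamId(const std::string &raw) {
    if (raw.empty())
        return std::nullopt;
    try {
        std::size_t pos = 0;
        unsigned long v = std::stoul(raw, &pos);
        if (pos != raw.size())
            return std::nullopt;              // trailing junk, e.g. "12x"
        if (v > 0xFFFFFFFFul)
            return std::nullopt;              // out of unsigned int range
        return static_cast<unsigned int>(v);
    } catch (const std::exception &) {        // invalid_argument / out_of_range
        return std::nullopt;
    }
}
```

Call sites then reject the request explicitly when the optional is empty, rather than silently treating 0 as a team.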

In `@server/src/main.cpp`:
- Around line 119-132: The range-for over connectedTeamIds mutates the container
inside the loop causing undefined behavior; instead, iterate without modifying
the vector by collecting IDs to remove (e.g., a local std::vector<unsigned int>
toRemove) while checking service.getSession(teamId) and calling
ReplayEncoder::instance().setDeathReason(teamId, death_reason_t::DISCONNECTED)
and setPlace(teamId, placeCounter--), then after the loop remove those IDs from
connectedTeamIds in a separate pass (or use std::remove_if with a predicate that
checks membership in toRemove) so connectedTeamIds is not modified during
iteration.

In `@server/src/net/GameServiceImpl.cpp`:
- Around line 464-545: These query RPCs (GetObjectById, GetObjectAtPos,
GetMyCore, GetMyUnits, GetAllObjects) read Board::instance() concurrently with
Game::run() mutations causing a data race; protect board access with a
readers-writer lock: add a shared mutex (e.g., std::shared_mutex) guarding
Board's state, acquire a shared/read lock (std::shared_lock) at the start of
each listed RPC before calling Board::instance() or iterating it, and acquire an
exclusive/write lock (std::unique_lock) inside Game::run() around all board
mutations; ensure Board's public mutation/iteration points use the same mutex so
RPC readers and the game loop are synchronized.
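The readers-writer locking described above can be sketched with `std::shared_mutex`. Board state is reduced to a vector of ids for illustration; the real Board singleton would wrap its existing accessors the same way:

```cpp
#include <cassert>
#include <mutex>
#include <shared_mutex>
#include <thread>
#include <vector>

// Illustrative stand-in for the Board singleton with an RW lock.
class GuardedBoard {
public:
    // RPC-side read: many readers may hold the shared lock concurrently.
    std::size_t objectCount() const {
        std::shared_lock lock(mutex_);
        return objects_.size();
    }
    // Game-loop-side write: exclusive, blocks all readers while mutating.
    void addObject(int id) {
        std::unique_lock lock(mutex_);
        objects_.push_back(id);
    }
private:
    mutable std::shared_mutex mutex_;
    std::vector<int> objects_;
};
```

Query RPCs such as GetAllObjects would take the shared lock; `Game::run()` would take the exclusive lock around its mutations, so readers never observe a half-updated board.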
- Around line 163-172: The extractTeamId function currently calls std::stoul on
client metadata without validation which can throw; update
GameServiceImpl::extractTeamId to safely parse the "x-team-id" metadata by first
checking the metadata value is non-empty, then wrap the std::stoul call in a
try/catch that catches std::invalid_argument and std::out_of_range and returns 0
on error (or a safe default), and optionally log the malformed value via your
logging facility; ensure the function never lets exceptions escape so RPC
handler threads cannot be terminated.
- Around line 294-320: TickStream currently stores the stack-scoped raw pointer
in PlayerSession::stream_ and returns without clearing it, causing a
TOCTOU/dangling-pointer race; before TickStream returns clear session->stream_
under the same mutex used by PlayerSession (e.g. acquire streamMutex_ and set
stream_ = nullptr) and keep calling session->setDisconnected() while holding
that lock; likewise modify PlayerSession::signalTickStart to acquire
streamMutex_, check stream_ != nullptr and disconnected_.load() before calling
stream_->Write(...) so all access to stream_ is serialized and null-checked
under streamMutex_.

In `@server/src/net/GrpcServer.cpp`:
- Around line 20-25: The start() implementation silently returns when
builder.BuildAndStart() fails which violates its blocking contract; modify
start() (and GrpcServer API) to surface failure: either change
GrpcServer::start() to return bool and return false on failed BuildAndStart(),
or throw an exception (e.g., std::runtime_error) or call std::exit(); update
callers (e.g., main.cpp) to handle the new behavior and ensure shutdown() is
only expected after successful start(); specifically update the code path around
BuildAndStart(), the server_ member check, and any tests/callers that assume
start() blocks to react to the failure.

---

Outside diff comments:
In `@bots/c/client_lib/src/public/get.c`:
- Around line 71-74: In core_get_obj_filter_nearest, handle both return styles
of core_get_objs_filter: if it returns NULL return NULL immediately; if it
returns an empty NULL-terminated array (objects != NULL && objects[0] == NULL)
free the returned array before returning NULL to avoid leaking memory. Update
core_get_obj_filter_nearest to check objects first, then check objects[0] and
call free(objects) (or the appropriate deallocator used for
core_get_objs_filter) before returning NULL; proceed with normal logic only when
objects[0] != NULL.

---

Duplicate comments:
In `@bots/c/client_lib/Makefile`:
- Around line 15-19: The prod/dev targets are suffering the same build-type
caching problem; mark the targets (prod and dev) as .PHONY and ensure BUILD_TYPE
is exported to recursive makes so the BUILD_TYPE value cannot be cached/ignored
(e.g., add a .PHONY: prod dev line and export BUILD_TYPE or pass it through
MAKEFLAGS when invoking $(MAKE) BUILD_TYPE=... from the prod and dev targets) so
the recursive invocation always picks up the intended build type.

In `@bots/c/hardcore/my-core-bot/Makefile`:
- Around line 1-29: The Makefile suffers from BUILD_TYPE caching because
BUILD_DIR is static (BUILD_DIR = cmake-build) so switching BUILD_TYPE (e.g., via
the prod target) reuses a cached CMake configure; change BUILD_DIR to include
BUILD_TYPE (e.g., compute BUILD_DIR := cmake-build-$(BUILD_TYPE) or set
BUILD_DIR := $(BUILD_DIR)-$(BUILD_TYPE)) and update uses of $(BUILD_DIR) and the
cmake configure line (cmake -S . -B $(BUILD_DIR)
-DCMAKE_BUILD_TYPE=$(BUILD_TYPE)) so each build type gets its own build
directory; ensure the prod target still invokes $(MAKE) BUILD_TYPE=prod all (or
sets BUILD_DIR consistently) so cmake reconfigures into the correct per-type
directory.

In `@bots/c/softcore/my-core-bot/Makefile`:
- Around line 1-29: The Makefile uses a single static BUILD_DIR which causes
CMake cache contamination when switching BUILD_TYPE (so prod builds reuse dev
cache); update BUILD_DIR to incorporate BUILD_TYPE (e.g., change BUILD_DIR to
something like cmake-build-$(BUILD_TYPE) or otherwise derive it from BUILD_TYPE)
and ensure targets referencing BUILD_DIR (targets: all, $(EXEC),
$(BUILD_DIR)/Makefile, clean, fclean, re, prod) continue to use that variable so
each BUILD_TYPE gets an isolated build directory and the prod target no longer
reuses the dev cache.

---

Nitpick comments:
In @.gitignore:
- Line 3: The .gitignore currently contains the literal pattern "cmake-build/"
which won't match CLion's default directories like "cmake-build-debug/" or
"cmake-build-release/"; update the .gitignore by replacing or adding a glob that
covers variants (e.g., use "cmake-build*/" or "cmake-build-*/") so directories
such as "cmake-build-debug" and "cmake-build-release" are ignored; locate the
existing "cmake-build/" entry in .gitignore and adjust it accordingly.

In @.idea/.gitignore:
- Around line 1-10: Add a root-level ignore for the IDE folder by replacing the
scattered .idea-related entries with a single `.idea/` entry in the repo
gitignore, remove the committed IDE directory from Git tracking (e.g., git rm -r
--cached .idea) and commit the updated .gitignore; if you need to share specific
run configs, export and add only those curated files instead of the whole
`.idea/` folder.

In `@bots/c/client_lib/CMakeLists.txt`:
- Around line 44-52: The CMakeLists currently always builds gRPC from source via
FetchContent_Declare/FetchContent_MakeAvailable which dramatically increases
first-build time; change this to prefer a system or prebuilt gRPC by calling
find_package(gRPC REQUIRED) first and only fall back to
FetchContent_Declare(grpc) / FetchContent_MakeAvailable(grpc) if
find_package fails, and add a short comment or README note near this logic
mentioning the expected 20–30+ minute build time when the fallback runs so
developers know what to expect.
- Around line 16-22: The dev build currently forces -O3 -DNDEBUG via
CMAKE_CXX_FLAGS_DEV / CMAKE_C_FLAGS_DEV which disables assertions and conflicts
with CORE_LIB_ASAN; change the dev configuration so when CMAKE_BUILD_TYPE
STREQUAL "dev" you set CMAKE_CXX_FLAGS_DEV and CMAKE_C_FLAGS_DEV to a
low-optimization, debug-friendly setting (e.g. -O0 -g or -O1 -g) and remove
-DNDEBUG, then keep CORE_LIB_ASAN ON for that build so ASan reports remain
readable; update the conditional around CMAKE_BUILD_TYPE "dev" to set the debug
flags and enable CORE_LIB_ASAN accordingly.

In `@bots/c/client_lib/src/internal/grpc_bridge.cpp`:
- Around line 172-198: grpc_bridge_connect currently spins forever waiting for
g_channel to become GRPC_CHANNEL_READY; modify the connect loop in
grpc_bridge_connect to enforce a maximum retry count or overall timeout (e.g.,
total wait duration) while using g_channel->GetState/WaitForStateChange, so that
if the channel stays in TRANSIENT_FAILURE or never reaches READY within the
limit you break and return an error (like -1); update logic around the existing
state checks (GRPC_CHANNEL_READY, GRPC_CHANNEL_SHUTDOWN) to also handle the
timeout/retry exhaustion case and log or return a clear failure code.
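A minimal sketch of the bounded-wait pattern described above. Since the real loop depends on the live gRPC channel, the channel state is abstracted behind a callable here; the 5-second budget and 50ms poll interval are illustrative values, not the bridge's actual constants.

```cpp
#include <cassert>
#include <chrono>
#include <functional>
#include <thread>

enum class ChannelState { IDLE, CONNECTING, TRANSIENT_FAILURE, READY, SHUTDOWN };

// Polls `get_state` until READY, SHUTDOWN, or an overall deadline expires.
// Returns 0 on READY, -1 otherwise. `get_state` stands in for
// g_channel->GetState(true); real code would block on WaitForStateChange
// instead of sleeping.
int connect_with_deadline(const std::function<ChannelState()> &get_state,
                          std::chrono::milliseconds budget = std::chrono::milliseconds(5000),
                          std::chrono::milliseconds poll = std::chrono::milliseconds(50))
{
	const auto deadline = std::chrono::steady_clock::now() + budget;
	while (std::chrono::steady_clock::now() < deadline)
	{
		ChannelState s = get_state();
		if (s == ChannelState::READY)
			return 0;
		if (s == ChannelState::SHUTDOWN)
			return -1; // channel is gone; retrying is pointless
		// In real code: g_channel->WaitForStateChange(s, now + poll);
		std::this_thread::sleep_for(poll);
	}
	return -1; // budget exhausted, e.g. stuck in TRANSIENT_FAILURE
}
```

The key point is that TRANSIENT_FAILURE is not treated as terminal by itself — the loop only gives up when the overall budget runs out, so brief reconnects still succeed.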
- Around line 231-243: grpc_bridge_shutdown currently calls
g_tick_stream->WritesDone() followed by g_tick_stream->Finish(), which can block
if the server closed the stream or the channel is unhealthy; before calling
Finish() set a short deadline on the existing g_tick_ctx (or create a new
grpc::ClientContext with a deadline) so Finish() will time out instead of
blocking indefinitely—modify grpc_bridge_shutdown to set g_tick_ctx's deadline
(e.g., now + small timeout) prior to calling g_tick_stream->Finish(), then
handle a timeout or non-OK status and proceed to reset g_tick_stream,
g_tick_ctx, g_stub, and g_channel and call grpc_bridge_cache_clear().

In `@bots/c/client_lib/src/public/get.c`:
- Around line 32-69: core_get_objs_filter currently calls
grpc_bridge_get_all_objects on every invocation causing redundant RPCs; change
core_get_objs_filter to use a per-tick cached t_obj ** array instead of calling
grpc_bridge_get_all_objects each time: add or use an existing global/static
cache variable (referencing grpc_bridge_get_all_objects and
core_get_objs_filter) that is populated on first call in a tick and returned
thereafter, and rely on the existing per-tick cache clear in con_lib.c to reset
the cache between ticks; ensure you still free only the container (not the
objects) when replacing/clearing the cache and handle allocation failures
consistently as before.
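The per-tick caching described above can be sketched as follows; the names and the int payload are stand-ins for grpc_bridge_get_all_objects and the t_obj ** container, not the client_lib's actual symbols.

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Per-tick memoization: the expensive fetch runs at most once per tick;
// the cached result is reused until clear() runs at the tick boundary
// (mirroring the per-tick cache clear in con_lib.c).
struct ObjCache
{
	std::vector<int> objs;  // stand-in for the cached object container
	bool valid = false;

	const std::vector<int> &get(const std::function<std::vector<int>()> &fetch)
	{
		if (!valid)
		{
			objs = fetch(); // one RPC per tick
			valid = true;
		}
		return objs;
	}

	void clear() // call between ticks
	{
		valid = false;
		objs.clear();
	}
};
```

In the C version, clear() would free only the container array, leaving the individual objects owned by the bridge cache, as the prompt notes.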

In `@bots/c/hardcore/gridmaster/CMakeLists.txt`:
- Around line 1-16: Add consistency flags to the CMake configuration: set
CMAKE_C_STANDARD_REQUIRED to ON and set CMAKE_CXX_EXTENSIONS to OFF near the
existing standard settings (alongside set(CMAKE_C_STANDARD 11) and
set(CMAKE_CXX_STANDARD 17)) so that the gridmaster project enforces the C
standard and disables compiler-specific C++ extensions; update the file where
project(gridmaster C CXX) and those set(...) lines appear.

In `@bots/c/hardcore/gridmaster/Makefile`:
- Around line 3-4: The Makefile sets BUILD_DIR with a hard assignment which
prevents command-line overrides; change the variable assignment for BUILD_DIR
from a plain '=' to a conditional assignment '?=' (matching BUILD_TYPE ?= dev)
so users can override BUILD_DIR via make command line; update the line that
defines BUILD_DIR accordingly (referencing the BUILD_DIR and BUILD_TYPE
symbols).

In `@bots/c/softcore/gridmaster/CMakeLists.txt`:
- Around line 4-6: The CMakeLists sets CMAKE_C_STANDARD to 11 but does not
enforce it; add the missing C standard requirement by setting
CMAKE_C_STANDARD_REQUIRED to ON (mirroring the existing C++ enforcement) so the
C compiler is required to support C11; locate the block that contains
set(CMAKE_C_STANDARD 11) and add set(CMAKE_C_STANDARD_REQUIRED ON) alongside
set(CMAKE_CXX_STANDARD_REQUIRED ON).

In `@bots/c/softcore/my-core-bot/CMakeLists.txt`:
- Around line 4-6: The CMake config sets CMAKE_C_STANDARD to 11 but doesn't
require it; add the equivalent required flag by setting
CMAKE_C_STANDARD_REQUIRED to ON so the build will fail on compilers that don't
fully support C11 (mirror the existing C++ requirement
CMAKE_CXX_STANDARD_REQUIRED ON); update the CMakeLists.txt near the lines where
set(CMAKE_C_STANDARD 11) and set(CMAKE_CXX_STANDARD_REQUIRED ON) appear to
include set(CMAKE_C_STANDARD_REQUIRED ON).
- Line 12: The target is currently created with a generic name via
add_executable(bot ${BOT_SOURCES}), which will conflict if multiple bots are
added; change the target name to a unique, bot-specific identifier (e.g.,
add_executable(my_core_bot ${BOT_SOURCES})) so it mirrors the gridmaster pattern
and avoids duplicate target collisions—update any dependent
install/target_link_libraries/target_include_directories references that
currently use "bot" to the new target name.

In `@proto/core_game.proto`:
- Around line 1-3: The proto file declares package core_game but lives in
proto/, causing Buf's PACKAGE_DIRECTORY_MATCH lint; fix by moving this file to a
matching directory (e.g., proto/core_game/core_game.proto) or update Buf config
to relax/enforce different package-directory mapping; locate the declaration
"package core_game;" in proto/core_game.proto when applying the change and
ensure imports/paths in build scripts and any references are updated to the new
location if you move the file.
- Around line 15-17: Replace the bare int32 "type" field on message GameObject
with a typed proto enum: define an enum ObjectType (e.g., UNKNOWN = 0, PLAYER =
1, NPC = 2, etc.) in this proto and change the field declaration to use
ObjectType type = 2; and update the inline comment accordingly; also search for
usages of GameObject.type in server and client codegen to update any switch/case
or literal magic numbers to the new ObjectType enum symbols (e.g.,
ObjectType.PLAYER) so producers/consumers use the canonical enum.
- Around line 85-89: The LoginRequest proto currently carries a plaintext
password and the codebase checks against a hardcoded value ("42"); replace this
insecure setup by removing the hardcoded-secret check and wiring authentication
to a configurable/secure store (e.g., environment variable, config, or secret
manager) and/or hashed credentials, and switch clients using
InsecureChannelCredentials to use a secure transport (TLS/secure channel) so
passwords are not sent in plain text; locate usages of the LoginRequest message
and any authentication check that compares against "42" and update them to
validate against the configurable/hashed secret and enable secure channel
credentials on the client side.
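One way to remove the hardcoded "42" check is to read the expected secret from the environment, as sketched below. CORE_AUTH_SECRET is an illustrative variable name, and this covers only the secret-storage half of the finding — transport security still requires switching the client off InsecureChannelCredentials.

```cpp
#include <cassert>
#include <cstdlib>
#include <string>

// Validates a supplied password against a configured secret instead of a
// hardcoded literal. An unset or empty secret disables logins rather than
// silently accepting anything.
bool check_login_password(const std::string &supplied)
{
	const char *expected = std::getenv("CORE_AUTH_SECRET");
	if (expected == nullptr || *expected == '\0')
		return false; // no secret configured: reject, never fall back to "42"
	return supplied == expected;
}
```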

In `@server/CMakeLists.txt`:
- Around line 52-76: The FetchContent_Declare(grpc) +
FetchContent_MakeAvailable(grpc) block is forcing gRPC to be built from source
(pulling abseil, protobuf, re2, etc.) which greatly increases first-build time;
update CMake to prefer a system-installed gRPC by first attempting
find_package(gRPC CONFIG QUIET) (or find_package(gRPC) fallback) and only fall
back to the existing FetchContent_Declare/FetchContent_MakeAvailable(grpc) block

when find_package fails, and/or add a clear comment near the set(...) flags and
FetchContent_Declare(grpc) that documents the expected long build time so
developers know to opt into the source build.

In `@server/src/game/Game.cpp`:
- Around line 75-102: preFailures is created in Game::run loop but never
populated before being passed into tick(tickCount, actions, serverStartTime,
preFailures, debugDataEntries), leaving the loop over preFailures in tick()
dead; either remove the preFailures variable and the corresponding parameter and
loop from tick(), or implement population logic where appropriate (e.g., collect
session->preFailure() results) — update the call site in the run loop (remove
preFailures from the tick call) and adjust tick()'s signature and body (remove
the preFailures parameter and the loop that prefixes tick info) so symbols to
change include the local variable preFailures, the tick(...)
declaration/definition, and the tick(...) invocation in Game.cpp.
- Around line 55-63: The log message in the EndTurn timeout handling
(Logger::LogWarn) is misleading because it unconditionally says "Disconnecting."
before checking session->isDisconnected(); change the logic to first check
session->isDisconnected() and log an accurate message (e.g., "already
disconnected" vs "did not send EndTurn in time; disconnecting"), or make the
warn conditional so Logger::LogWarn reflects the actual path taken; keep the
subsequent calls to ReplayEncoder::instance().setDeathReason(...) and
session->setDisconnected() unchanged.
- Around line 466-472: The parameter 'failures' in Game::recordReplayState is
unused and should be removed: update the declaration and definition of
Game::recordReplayState to drop the std::vector<std::pair<int, std::string>>
&failures parameter, adjust any virtual/overridden declarations if present, and
update all call sites to stop passing that argument; ensure you rely on
pendingErrors_ for error routing and rebuild to catch any missed references
(search for recordReplayState and the symbol 'failures' to find places to
update).
- Around line 49-73: The current loop calls session->waitForEndTurn(maxWait)
sequentially, causing worst-case total wait of N×maxWait; change to initiate all
waits in parallel (e.g., spawn std::async futures or worker threads for each
Session pointer) and collect their boolean results, then iterate the sessions
and for any where the wait returned false perform the existing disconnection
handling (use session->getTeamId(), Logger::LogWarn,
ReplayEncoder::instance().setDeathReason, setPlace, session->setDisconnected(),
Board::instance().getCoreByTeamId / removeObjectById and Logger::Log) so
behavior is identical but the wait is bounded to roughly maxWait total rather
than multiplied by session count.
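The parallel-wait refactor above can be sketched with std::async; `waits` stands in for the per-session session->waitForEndTurn(maxWait) calls, and a false result would map to the existing disconnect handling. This is a shape sketch, not the server's API.

```cpp
#include <cassert>
#include <functional>
#include <future>
#include <vector>

// Launch all per-session waits concurrently and collect the results, so the
// total blocking time is roughly one maxWait instead of N * maxWait.
std::vector<bool> wait_all_parallel(const std::vector<std::function<bool()>> &waits)
{
	std::vector<std::future<bool>> futures;
	futures.reserve(waits.size());
	for (const auto &w : waits)
		futures.push_back(std::async(std::launch::async, w));

	std::vector<bool> results;
	results.reserve(futures.size());
	for (auto &f : futures)
		results.push_back(f.get()); // each wait runs on its own thread
	return results;
}
```

After collecting the results, the caller iterates sessions and applies the unchanged disconnection path to any index whose wait returned false.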

In `@server/src/main.cpp`:
- Around line 23-36: This block duplicates the sanitization logic used in
GameServiceImpl (sanitized/rawEnv/MAX_LEN); extract it into a shared utility
function (e.g., a sanitizeEnvName or sanitizeIdentifier) in a common
utility/header so both main.cpp and GameServiceImpl call the same function,
preserve the current allowed chars and MAX_LEN behavior, return the sanitized
string (or empty) and replace the in-place loop in main.cpp with a call to that
new function to keep rules in sync.
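A sketch of the shared sanitizer the prompt asks for. The allowed character set (alphanumerics, '_', '-') and the MAX_LEN of 64 are assumptions for illustration; the extracted function should mirror whatever the current in-place loop in main.cpp allows.

```cpp
#include <cassert>
#include <cctype>
#include <cstddef>
#include <string>

// Shared utility so main.cpp and GameServiceImpl apply identical rules:
// drop disallowed characters and truncate at MAX_LEN.
std::string sanitizeIdentifier(const std::string &raw)
{
	constexpr std::size_t MAX_LEN = 64; // assumed limit
	std::string out;
	for (char c : raw)
	{
		if (out.size() >= MAX_LEN)
			break;
		if (std::isalnum(static_cast<unsigned char>(c)) || c == '_' || c == '-')
			out.push_back(c);
	}
	return out;
}
```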

In `@server/src/net/GameServiceImpl.cpp`:
- Around line 55-61: The predicate in GameServiceImpl::waitForAllLogins acquires
sessionsMutex_ while loginMutex_ is held by the condition_variable wait; add a
concise comment above the predicate (or inside the lambda) explaining the lock
ordering and why this is safe—specifically note that Login() never takes
loginMutex_ (it only takes sessionsMutex_ and calls loginCv_.notify_all() after
releasing it), so the current ordering cannot deadlock; reference
waitForAllLogins, the predicate lambda, loginMutex_, sessionsMutex_, and
Login()/loginCv_.notify_all() in the comment for future maintainers.
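A compact model of the lock ordering in question, with the explanatory comment the prompt asks for inlined. The member names mirror the review comment; the counting logic is a simplified stand-in for the real session bookkeeping.

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>

struct LoginGate
{
	std::mutex loginMutex_;
	std::condition_variable loginCv_;
	std::mutex sessionsMutex_;
	int sessionCount_ = 0;

	// Login() never takes loginMutex_: it mutates state under sessionsMutex_
	// and notifies after releasing it, so no lock cycle can form.
	void login()
	{
		{
			std::lock_guard<std::mutex> lock(sessionsMutex_);
			++sessionCount_;
		}
		loginCv_.notify_all();
	}

	void waitForAllLogins(int expected)
	{
		std::unique_lock<std::mutex> lock(loginMutex_);
		loginCv_.wait(lock, [&] {
			// Lock ordering note: the predicate acquires sessionsMutex_ while
			// loginMutex_ is held. This is deadlock-free because the only
			// ordering ever used is loginMutex_ -> sessionsMutex_, and the
			// notifying side (Login) never holds loginMutex_ at all.
			std::lock_guard<std::mutex> inner(sessionsMutex_);
			return sessionCount_ >= expected;
		});
	}
};
```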
- Around line 326-437: Handlers CreateUnit, Move, Attack, TransferGems and Build
currently build nlohmann::json and pass it into Action subclasses (CreateAction,
MoveAction, AttackAction, TransferGemsAction, BuildAction) which causes
unnecessary serialization and couples gRPC to the JSON parser; add overloaded
constructors or factory methods on those Action subclasses that accept the
corresponding proto types or raw fields (e.g., CreateAction(const
core_game::CreateUnitRequest&), MoveAction(unit_id, x, y) or MoveAction(const
core_game::MoveRequest&), etc.), update GameServiceImpl::CreateUnit, ::Move,
::Attack, ::TransferGems and ::Build to call the new constructors directly
(removing json construction), and keep the existing json-based constructors for
backward compatibility if needed.
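The field-based constructor pattern can be sketched like this; the field set (unit_id, x, y) follows the review's MoveAction example, and the class layout is illustrative rather than the repository's actual Action hierarchy.

```cpp
#include <cassert>

// A MoveAction that accepts raw fields directly, so the gRPC handler can
// copy values straight out of the generated request object (request->unit_id()
// etc.) without round-tripping through nlohmann::json.
class MoveAction
{
public:
	MoveAction(unsigned int unitId, int x, int y)
		: unitId_(unitId), x_(x), y_(y) {}

	unsigned int unitId() const { return unitId_; }
	int x() const { return x_; }
	int y() const { return y_; }

private:
	unsigned int unitId_;
	int x_;
	int y_;
};

// Handler side: construct the action from the request fields directly.
MoveAction makeMoveAction(unsigned int unitId, int x, int y)
{
	return MoveAction(unitId, x, y);
}
```

The json-based constructor can coexist as an overload while other call sites migrate, as the prompt suggests.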

Comment on lines +1 to +6
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
<component name="Ask2AgentMigrationStateService">
<option name="migrationStatus" value="COMPLETED" />
</component>
</project> No newline at end of file

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check if .idea is gitignored
rg -n '\.idea' --glob '.gitignore' --glob '**/.gitignore'

Repository: 42core-team/monorepo

Length of output: 46


🏁 Script executed:

#!/bin/bash
# Check if .gitignore exists and list any .idea-related rules
if [ -f .gitignore ]; then
  echo "=== .gitignore content ==="
  cat -n .gitignore
else
  echo "No .gitignore file found"
fi

Repository: 42core-team/monorepo

Length of output: 153


Add .idea/ to .gitignore to exclude JetBrains IDE metadata.

This file is a GitHub Copilot plugin state artifact specific to one developer's local IDE session and should not be committed. The .idea/ directory contains machine-specific IDE state that typically conflicts across different development environments. Add /.idea/ to the repository's .gitignore file and remove this file from version control.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.idea/copilot.data.migration.ask2agent.xml around lines 1 - 6, The committed
IDE metadata file (.idea/copilot.data.migration.ask2agent.xml) should be
excluded: add "/.idea/" to the repository .gitignore, remove this specific file
from version control (e.g., git rm --cached
.idea/copilot.data.migration.ask2agent.xml) and commit the updated .gitignore so
Ask2AgentMigrationStateService and its migrationStatus artifact are no longer
tracked; ensure no other .idea/* files remain staged before pushing.

Comment on lines +1 to +6
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
<component name="VcsDirectoryMappings">
<mapping directory="$PROJECT_DIR$" vcs="Git" />
</component>
</project> No newline at end of file

⚠️ Potential issue | 🟡 Minor

.idea/ directory should be added to .gitignore.

IDE-specific directories like .idea/ are developer-local and should not be version-controlled; they create noise for contributors using other IDEs or different IDE configurations.

🔧 Add to `.gitignore`
+.idea/
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.idea/vcs.xml around lines 1 - 6, The repository currently contains
IDE-specific settings (.idea/vcs.xml with the VcsDirectoryMappings component);
add a .idea/ entry to .gitignore and remove the tracked .idea directory from
version control so IDE files stop being committed. Update .gitignore to include
a top-level ".idea/" rule, then run git rm --cached -r .idea (or equivalent) to
untrack the existing .idea files (including vcs.xml) and commit the change with
a message like "Ignore IDE config (.idea)"; ensure no other necessary
project-level files are left only in .idea before removing.

Comment on lines +33 to +43
(void)argc;

if (!tick_callback)
{
printf("Trust me, you'll want to provide a tick callback function.\n");
return 1;
}

// setup socket
// Parse team id from args
if (!argv[1]) return printf("Error: No team id provided.\n"), 1;
game.my_team_id = atoi(argv[1]);

⚠️ Potential issue | 🟡 Minor

argc is cast to void but implicitly relied upon via argv[1].

While the C standard guarantees argv[argc] == NULL, casting argc to void is misleading — it suggests argc is intentionally unused, yet argv[1] is accessed on line 42. Using argc directly for validation is clearer and more robust.

Proposed fix
-	(void)argc;
-
 	if (!tick_callback)
 	{
 		printf("Trust me, you'll want to provide a tick callback function.\n");
 		return 1;
 	}
 
 	// Parse team id from args
-	if (!argv[1]) return printf("Error: No team id provided.\n"), 1;
+	if (argc < 2 || !argv[1]) return printf("Error: No team id provided.\n"), 1;
 	game.my_team_id = atoi(argv[1]);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bots/c/client_lib/src/con_lib.c` around lines 33 - 43, Remove the misleading
(void)argc and validate argc before using argv[1]: check that argc >= 2 (e.g.,
in the same initialization block where tick_callback is checked) and return an
error message if not, then parse game.my_team_id = atoi(argv[1]); keep the
existing tick_callback null check intact and only access argv[1] after the argc
check to avoid undefined behavior.

Comment on lines +116 to +120
grpc_bridge_send_debug_data();
core_internal_reset_debugData();

// Signal end of turn
grpc_bridge_end_turn();

⚠️ Potential issue | 🟡 Minor

Return values of grpc_bridge_send_debug_data() and grpc_bridge_end_turn() are ignored.

Both functions return int indicating success/failure. If grpc_bridge_end_turn() fails, the server may not advance the tick for this client, potentially causing a hang on the next grpc_bridge_wait_tick(). At minimum, log the failure for debuggability.

Suggested handling
 		// Send accumulated debug data
-		grpc_bridge_send_debug_data();
+		if (grpc_bridge_send_debug_data() != 0)
+			fprintf(stderr, "Warning: failed to send debug data\n");
 		core_internal_reset_debugData();
 
 		// Signal end of turn
-		grpc_bridge_end_turn();
+		if (grpc_bridge_end_turn() != 0)
+		{
+			fprintf(stderr, "Failed to signal end of turn\n");
+			break;
+		}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bots/c/client_lib/src/con_lib.c` around lines 116 - 120, The calls to
grpc_bridge_send_debug_data() and grpc_bridge_end_turn() currently ignore their
int return values; check each return and handle failures: call
grpc_bridge_send_debug_data(), if it returns non-zero log an error including the
return code and relevant context, then still call
core_internal_reset_debugData(); call grpc_bridge_end_turn(), if it returns
non-zero log the failure (with return code and client id/context) and take a
safe action such as retrying the end turn a limited number of times or aborting
the current flow to avoid hanging grpc_bridge_wait_tick() on the next cycle;
ensure all logs use the existing logging facility so failures are visible for
debugging.

Comment on lines +36 to +53
static t_obj *cache_store(const t_obj &obj)
{
if (g_cache == nullptr)
{
g_cache_capacity = 64;
g_cache = (t_obj **)malloc(sizeof(t_obj *) * g_cache_capacity);
}
else if (g_cache_count >= g_cache_capacity)
{
g_cache_capacity *= 2;
g_cache = (t_obj **)realloc(g_cache, sizeof(t_obj *) * g_cache_capacity);
}

t_obj *copy = (t_obj *)malloc(sizeof(t_obj));
*copy = obj;
g_cache[g_cache_count++] = copy;
return copy;
}

⚠️ Potential issue | 🟠 Major

realloc leak: if realloc fails, the original g_cache pointer is lost.

On line 46, if realloc returns NULL, the original g_cache is leaked because it's overwritten. Save the old pointer first and check the result.

🐛 Proposed fix
 	else if (g_cache_count >= g_cache_capacity)
 	{
 		g_cache_capacity *= 2;
-		g_cache = (t_obj **)realloc(g_cache, sizeof(t_obj *) * g_cache_capacity);
+		t_obj **tmp = (t_obj **)realloc(g_cache, sizeof(t_obj *) * g_cache_capacity);
+		if (!tmp)
+		{
+			fprintf(stderr, "cache_store: realloc failed\n");
+			return nullptr;
+		}
+		g_cache = tmp;
 	}
🧰 Tools
🪛 Cppcheck (2.19.0)

[error] 46-46: Common realloc mistake

(memleakOnRealloc)


[warning] 50-50: If memory allocation fails, then there is a possible null pointer dereference

(nullPointerOutOfMemory)


[warning] 51-51: If memory allocation fails, then there is a possible null pointer dereference

(nullPointerOutOfMemory)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bots/c/client_lib/src/internal/grpc_bridge.cpp` around lines 36 - 53, In
cache_store, the realloc call can return NULL and overwrite g_cache, leaking the
original buffer; update the growth branch (symbols: g_cache, g_cache_capacity,
g_cache_count, realloc) to first save the old pointer into a temporary (e.g. old
= g_cache), call realloc into a temp pointer, check if the result is NULL, and
on failure restore g_cache and handle the error (return NULL or abort/log)
instead of overwriting; on success assign the temp back to g_cache and continue;
ensure any subsequent malloc for t_obj still checks for NULL before using the
pointer.

Comment on lines +119 to 132

 	// Check for disconnected sessions among connected ones
 	for (unsigned int teamId : connectedTeamIds)
 	{
-		auto it = bridges.find(teamId);
-		if (it != bridges.end() && it->second->isDisconnected())
+		auto session = service.getSession(teamId);
+		if (session && session->isDisconnected())
 		{
 			ReplayEncoder::instance().setDeathReason(teamId, death_reason_t::DISCONNECTED);
 			ReplayEncoder::instance().setPlace(teamId, placeCounter--);
-		}
-		else if (it != bridges.end())
-		{
-			connectedTeamIds.push_back(teamId);
+			// Remove from active list
+			connectedTeamIds.erase(
+					std::remove(connectedTeamIds.begin(), connectedTeamIds.end(), teamId), connectedTeamIds.end());
 		}
 	}

⚠️ Potential issue | 🔴 Critical

Undefined behavior: modifying connectedTeamIds while iterating over it with a range-for loop.

Lines 129-130 call erase on connectedTeamIds while the range-for on line 121 is iterating over it. This invalidates the iterator and is undefined behavior in C++. Use an index-based loop iterating in reverse, or collect IDs to remove in a separate pass.

🐛 Proposed fix: two-pass approach
-	// Check for disconnected sessions among connected ones
-	for (unsigned int teamId : connectedTeamIds)
+	// Check for disconnected sessions among connected ones
+	std::vector<unsigned int> disconnectedIds;
+	for (unsigned int teamId : connectedTeamIds)
 	{
 		auto session = service.getSession(teamId);
 		if (session && session->isDisconnected())
 		{
 			ReplayEncoder::instance().setDeathReason(teamId, death_reason_t::DISCONNECTED);
 			ReplayEncoder::instance().setPlace(teamId, placeCounter--);
-			// Remove from active list
-			connectedTeamIds.erase(
-					std::remove(connectedTeamIds.begin(), connectedTeamIds.end(), teamId), connectedTeamIds.end());
+			disconnectedIds.push_back(teamId);
 		}
 	}
+	for (unsigned int id : disconnectedIds)
+		connectedTeamIds.erase(std::remove(connectedTeamIds.begin(), connectedTeamIds.end(), id), connectedTeamIds.end());
🧰 Tools
🪛 Cppcheck (2.19.0)

[error] 129-129: Calling 'erase' while iterating the container is invalid.

(invalidContainerLoop)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/main.cpp` around lines 119 - 132, The range-for over
connectedTeamIds mutates the container inside the loop causing undefined
behavior; instead, iterate without modifying the vector by collecting IDs to
remove (e.g., a local std::vector<unsigned int> toRemove) while checking
service.getSession(teamId) and calling
ReplayEncoder::instance().setDeathReason(teamId, death_reason_t::DISCONNECTED)
and setPlace(teamId, placeCounter--), then after the loop remove those IDs from
connectedTeamIds in a separate pass (or use std::remove_if with a predicate that
checks membership in toRemove) so connectedTeamIds is not modified during
iteration.

Comment on lines +163 to +172
unsigned int GameServiceImpl::extractTeamId(grpc::ServerContext *context)
{
auto meta = context->client_metadata();
auto it = meta.find("x-team-id");
if (it != meta.end())
{
return static_cast<unsigned int>(std::stoul(std::string(it->second.data(), it->second.size())));
}
return 0;
}

⚠️ Potential issue | 🟠 Major

extractTeamId does not validate or handle malformed metadata.

std::stoul on line 169 will throw std::invalid_argument or std::out_of_range if the metadata value is malformed. Since this is called from multiple RPC handlers, an uncaught exception here would crash the gRPC handler thread.

🐛 Proposed fix
 unsigned int GameServiceImpl::extractTeamId(grpc::ServerContext *context)
 {
 	auto meta = context->client_metadata();
 	auto it = meta.find("x-team-id");
 	if (it != meta.end())
 	{
-		return static_cast<unsigned int>(std::stoul(std::string(it->second.data(), it->second.size())));
+		try
+		{
+			return static_cast<unsigned int>(std::stoul(std::string(it->second.data(), it->second.size())));
+		}
+		catch (const std::exception &)
+		{
+			Logger::LogWarn("Invalid x-team-id metadata");
+			return 0;
+		}
 	}
 	return 0;
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/net/GameServiceImpl.cpp` around lines 163 - 172, The extractTeamId
function currently calls std::stoul on client metadata without validation which
can throw; update GameServiceImpl::extractTeamId to safely parse the "x-team-id"
metadata by first checking the metadata value is non-empty, then wrap the
std::stoul call in a try/catch that catches std::invalid_argument and
std::out_of_range and returns 0 on error (or a safe default), and optionally log
the malformed value via your logging facility; ensure the function never lets
exceptions escape so RPC handler threads cannot be terminated.

Comment on lines +294 to +320
grpc::Status GameServiceImpl::TickStream(
grpc::ServerContext *context,
grpc::ServerReaderWriter<core_game::TickSignal, core_game::EndTurnRequest> *stream)
{
unsigned int teamId = extractTeamId(context);
auto session = getSession(teamId);
if (!session)
{
return grpc::Status(grpc::StatusCode::UNAUTHENTICATED, "Not logged in or invalid team ID");
}

session->setStream(stream);

// Read EndTurnRequest messages from the client until stream ends or client disconnects
core_game::EndTurnRequest req;
while (stream->Read(&req))
{
session->notifyEndTurn();
}

// Stream ended — client disconnected
session->setDisconnected();
session->notifyEndTurn(); // wake up any waiting game loop
Logger::LogWarn("TickStream ended for team " + std::to_string(teamId));

return grpc::Status::OK;
}

⚠️ Potential issue | 🔴 Critical

Race condition: dangling stream pointer after TickStream returns.

stream is a stack-scoped pointer whose lifetime ends when TickStream returns (line 319). PlayerSession::stream_ holds this raw pointer and signalTickStart (called from the game loop) writes to it. There's a TOCTOU race between the isDisconnected() check in Game::run (line 36) and the actual write in signalTickStart: the TickStream RPC could return and invalidate the pointer between the check and the write.

Consider nullifying stream_ under streamMutex_ before TickStream returns, and checking stream_ != nullptr inside signalTickStart under the same lock.

🔒 Suggested mitigation in TickStream
 	// Stream ended — client disconnected
 	session->setDisconnected();
+	session->setStream(nullptr);  // clear before stream goes out of scope
 	session->notifyEndTurn(); // wake up any waiting game loop

And in PlayerSession::signalTickStart, guard writes with a null check under streamMutex_:

void PlayerSession::signalTickStart(...) {
    std::lock_guard<std::mutex> lock(streamMutex_);
    if (!stream_ || disconnected_.load()) return;
    stream_->Write(signal);
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/net/GameServiceImpl.cpp` around lines 294 - 320, TickStream
currently stores the stack-scoped raw pointer in PlayerSession::stream_ and
returns without clearing it, causing a TOCTOU/dangling-pointer race; before
TickStream returns clear session->stream_ under the same mutex used by
PlayerSession (e.g. acquire streamMutex_ and set stream_ = nullptr) and keep
calling session->setDisconnected() while holding that lock; likewise modify
PlayerSession::signalTickStart to acquire streamMutex_, check stream_ != nullptr
and disconnected_.load() before calling stream_->Write(...) so all access to
stream_ is serialized and null-checked under streamMutex_.

Comment on lines +464 to +545
grpc::Status GameServiceImpl::GetObjectById(grpc::ServerContext * /*context*/,
	const core_game::GetObjectByIdRequest *request,
	core_game::GetObjectByIdResponse *response)
{
	Object *obj = Board::instance().getObjectById(request->id());
	if (obj)
	{
		fillGameObject(response->mutable_object(), *obj);
	}
	return grpc::Status::OK;
}

grpc::Status GameServiceImpl::GetObjectAtPos(grpc::ServerContext * /*context*/,
	const core_game::GetObjectAtPosRequest *request,
	core_game::GetObjectAtPosResponse *response)
{
	Position pos(request->pos().x(), request->pos().y());
	Object *obj = Board::instance().getObjectAtPos(pos);
	if (obj)
	{
		fillGameObject(response->mutable_object(), *obj);
	}
	return grpc::Status::OK;
}

grpc::Status GameServiceImpl::GetMyCore(grpc::ServerContext *context,
	const core_game::GetMyCoreRequest * /*request*/,
	core_game::GetMyCoreResponse *response)
{
	unsigned int teamId = extractTeamId(context);
	Core *core = Board::instance().getCoreByTeamId(teamId);
	if (core)
	{
		fillGameObject(response->mutable_core(), *core);
	}
	return grpc::Status::OK;
}

grpc::Status GameServiceImpl::GetMyUnits(grpc::ServerContext *context,
	const core_game::GetMyUnitsRequest * /*request*/,
	core_game::GetMyUnitsResponse *response)
{
	unsigned int teamId = extractTeamId(context);
	for (auto &obj : Board::instance())
	{
		if (obj.getType() == ObjectType::Unit)
		{
			const auto &unit = static_cast<const Unit &>(obj);
			if (unit.getTeamId() == teamId)
			{
				fillGameObject(response->add_units(), obj);
			}
		}
	}
	return grpc::Status::OK;
}

grpc::Status GameServiceImpl::GetConfig(grpc::ServerContext * /*context*/,
	const core_game::GetConfigRequest * /*request*/,
	core_game::GetConfigResponse *response)
{
	*response->mutable_config() = getConfigProto();
	return grpc::Status::OK;
}

grpc::Status GameServiceImpl::GetTick(grpc::ServerContext * /*context*/,
	const core_game::GetTickRequest * /*request*/,
	core_game::GetTickResponse *response)
{
	response->set_tick(currentTick_.load());
	return grpc::Status::OK;
}

grpc::Status GameServiceImpl::GetAllObjects(grpc::ServerContext * /*context*/,
	const core_game::GetAllObjectsRequest * /*request*/,
	core_game::GetAllObjectsResponse *response)
{
	for (auto &obj : Board::instance())
	{
		fillGameObject(response->add_objects(), obj);
	}
	return grpc::Status::OK;
}


⚠️ Potential issue | 🟠 Major

Query RPCs access Board::instance() without synchronization — potential data race with the game loop.

GetObjectById, GetObjectAtPos, GetMyCore, GetMyUnits, and GetAllObjects all read from the board. These RPCs run on gRPC thread pool threads, while Game::run() modifies the board during tick execution. Although the expected flow is that clients query only during their turn window, nothing enforces this, and a misbehaving or slow client could issue queries during tick execution, causing a data race.

Consider either:

  • Adding a read lock around board access in query RPCs (and a write lock in the game loop during mutation), or
  • Rejecting queries outside the active turn window.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/net/GameServiceImpl.cpp` around lines 464 - 545, These query RPCs
(GetObjectById, GetObjectAtPos, GetMyCore, GetMyUnits, GetAllObjects) read
Board::instance() concurrently with Game::run() mutations causing a data race;
protect board access with a readers-writer lock: add a shared mutex (e.g.,
std::shared_mutex) guarding Board's state, acquire a shared/read lock
(std::shared_lock) at the start of each listed RPC before calling
Board::instance() or iterating it, and acquire an exclusive/write lock
(std::unique_lock) inside Game::run() around all board mutations; ensure Board's
public mutation/iteration points use the same mutex so RPC readers and the game
loop are synchronized.

Comment on lines +20 to +25
	server_ = builder.BuildAndStart();
	if (!server_)
	{
		Logger::LogErr("Failed to start gRPC server on " + address);
		return;
	}

⚠️ Potential issue | 🟠 Major

start() silently returns on failure — caller cannot detect it.

If BuildAndStart() fails, start() logs an error and returns. Since start() is documented to block the calling thread, the caller (e.g., main.cpp) likely assumes it will block until shutdown() is called. A silent return breaks that assumption and may cause the program to proceed to cleanup/exit prematurely.

Consider either throwing an exception, calling std::exit(), or changing the return type to bool so the caller can react.

Proposed fix (option: return bool)

In GrpcServer.h:

-	void start();
+	bool start();

In GrpcServer.cpp:

-void GrpcServer::start()
+bool GrpcServer::start()
 {
 	std::string address = "0.0.0.0:" + std::to_string(port_);
 
 	grpc::ServerBuilder builder;
 	builder.AddListeningPort(address, grpc::InsecureServerCredentials());
 	builder.RegisterService(service_);
 
 	server_ = builder.BuildAndStart();
 	if (!server_)
 	{
 		Logger::LogErr("Failed to start gRPC server on " + address);
-		return;
+		return false;
 	}
 
 	Logger::Log("gRPC server listening on " + address);
 	server_->Wait();
+	return true;
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/net/GrpcServer.cpp` around lines 20 - 25, The start()
implementation silently returns when builder.BuildAndStart() fails which
violates its blocking contract; modify start() (and GrpcServer API) to surface
failure: either change GrpcServer::start() to return bool and return false on
failed BuildAndStart(), or throw an exception (e.g., std::runtime_error) or call
std::exit(); update callers (e.g., main.cpp) to handle the new behavior and
ensure shutdown() is only expected after successful start(); specifically update
the code path around BuildAndStart(), the server_ member check, and any
tests/callers that assume start() blocks to react to the failure.

Member

@FreddyMSchubert FreddyMSchubert left a comment


It doesn't compile lol. Also it literally took half an hour to build initially, I timed it.

CMake Deprecation Warning at cmake-build/_deps/grpc-src/third_party/cares/cares/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED):
  Compatibility with CMake < 3.5 will be removed from a future version of
  CMake.

  Update the VERSION argument <min> value or use a ...<max> suffix to tell
  CMake that the project does not need compatibility with older versions.


-- 
-- 25.1.0
-- Checking for module 'libsystemd>=233'
--   Package 'libsystemd', required by 'virtual:world', not found
-- Configuring done (0.4s)
CMake Error in CMakeLists.txt:
  No known features for CXX compiler

  ""

  version .


CMake Generate step failed.  Build files cannot be regenerated correctly.
make[1]: *** [Makefile:13: cmake-build/Makefile] Error 1
make[1]: Leaving directory '/workspaces/monorepo/bots/c/hardcore/my-core-bot'
make: *** [Makefile:40: all] Error 2
root@4bafadad61ea:/workspaces/monorepo# 

# Conflicts:
#	bots/c/client_lib/CMakeLists.txt
#	bots/c/client_lib/src/internal/grpc_bridge.cpp
@FreddyMSchubert FreddyMSchubert marked this pull request as draft February 22, 2026 12:18

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (2)
bots/c/client_lib/CMakeLists.txt (2)

112-115: ASAN compile flags are PRIVATE but link flags are PUBLIC — consumers of core_lib get ASAN linked without being compiled with it.

-fsanitize=address as a PUBLIC link option propagates to every executable that links core_lib when CORE_LIB_ASAN is on. Those executables will have the ASAN runtime linked but their own translation units won't be instrumented, producing mixed-mode binaries where only core_lib's objects are shadow-mapped. This is technically functional but yields incomplete and potentially misleading ASAN reports.

If the intent is to instrument the full bot binary, downstream consumers (bot executables) should also receive -fsanitize=address compile options. If the intent is to sandbox only core_lib side-effects, this is a known limitation worth documenting in a comment.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bots/c/client_lib/CMakeLists.txt` around lines 112 - 115, The CMake
configuration makes ASAN compile flags PRIVATE but link flags PUBLIC for the
target core_lib (CORE_LIB_ASAN branch), which causes downstream executables to
link ASAN without being compiled with it; to fix, either make both the compile
and link options PUBLIC so consumers receive -fsanitize=address at compile time
(add the -g -fno-omit-frame-pointer -fsanitize=address compile options as PUBLIC
on target_compile_options for core_lib) or change the link option to PRIVATE so
ASAN is only applied to core_lib; if you intend to only instrument core_lib, add
a clarifying comment about the limitation instead of changing propagation.

45-53: gRPC is pinned 16 minor versions behind the current stable release; UPDATE_DISCONNECTED TRUE compounds the maintenance friction.

The latest stable gRPC C++ release is v1.78.1 (February 20, 2026), while this file pins GIT_TAG v1.62.1 (March 2024). The gap spans nearly two years of bug fixes and security patches, including:

  • CVE-2024-7246 (HPACK table poisoning): fixed in v1.62.3
  • CVE-2024-11407 (DoS via data corruption): fixed in v1.68.0
  • SPIFFE verification hardening (gRFC A87): added in v1.75.0

Additionally, UPDATE_DISCONNECTED TRUE (line 51) prevents network access during updates. Changing GIT_TAG alone will not fetch a new tag if it is not already known locally; only tags/commits already in the local FetchContent cache can be checked out. This requires developers to either clear the cache directory or be aware of the limitation when updating versions.

Consider bumping to a recent tag and documenting the UPDATE_DISCONNECTED constraint.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bots/c/client_lib/CMakeLists.txt` around lines 45 - 53, Update the gRPC
FetchContent declaration: change the GIT_TAG value in FetchContent_Declare from
v1.62.1 to a recent stable tag (e.g., v1.78.1), remove or set
UPDATE_DISCONNECTED to FALSE so FetchContent can access the network to fetch new
tags, and add a short comment near
FetchContent_Declare/FetchContent_MakeAvailable explaining that
UPDATE_DISCONNECTED prevents fetching new tags from remote and that developers
must clear the FetchContent cache when switching tags if they intentionally keep
UPDATE_DISCONNECTED TRUE.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@bots/c/client_lib/CMakeLists.txt`:
- Around line 17-22: The case-sensitive comparison against CMAKE_BUILD_TYPE
causes inconsistent behavior with the per-config variable CMAKE_CXX_FLAGS_DEV;
update the guard that sets CORE_LIB_ASAN to normalize the build type before
comparing (e.g., convert CMAKE_BUILD_TYPE to lowercase or uppercase) and then
compare against "dev" (or the chosen normalized form) so CORE_LIB_ASAN is
enabled for any casing; modify the conditional that references CMAKE_BUILD_TYPE
and ensure CORE_LIB_ASAN is set when the normalized value equals "dev".
- Around line 104-110: The PROTO_GEN_DIR is added with
target_include_directories(core_lib PRIVATE ...) so generated proto headers are
treated as normal includes and subject to core_lib's -Wall -Wextra -Werror;
change the inclusion of PROTO_GEN_DIR to a system include so warnings from
generated code are suppressed: update the target_include_directories invocation
that references PROTO_GEN_DIR for target core_lib to mark PROTO_GEN_DIR as
SYSTEM (i.e., use target_include_directories(core_lib PRIVATE
${CMAKE_CURRENT_SOURCE_DIR}/src/internal SYSTEM ${PROTO_GEN_DIR}) or the
equivalent CMake SYSTEM syntax) while leaving the project's own include
directory and target_compile_options intact.

---

Nitpick comments:
In `@bots/c/client_lib/CMakeLists.txt`:
- Around line 112-115: The CMake configuration makes ASAN compile flags PRIVATE
but link flags PUBLIC for the target core_lib (CORE_LIB_ASAN branch), which
causes downstream executables to link ASAN without being compiled with it; to
fix, either make both the compile and link options PUBLIC so consumers receive
-fsanitize=address at compile time (add the -g -fno-omit-frame-pointer
-fsanitize=address compile options as PUBLIC on target_compile_options for
core_lib) or change the link option to PRIVATE so ASAN is only applied to
core_lib; if you intend to only instrument core_lib, add a clarifying comment
about the limitation instead of changing propagation.
- Around line 45-53: Update the gRPC FetchContent declaration: change the
GIT_TAG value in FetchContent_Declare from v1.62.1 to a recent stable tag (e.g.,
v1.78.1), remove or set UPDATE_DISCONNECTED to FALSE so FetchContent can access
the network to fetch new tags, and add a short comment near
FetchContent_Declare/FetchContent_MakeAvailable explaining that
UPDATE_DISCONNECTED prevents fetching new tags from remote and that developers
must clear the FetchContent cache when switching tags if they intentionally keep
UPDATE_DISCONNECTED TRUE.

Comment on lines +17 to +22
set(CMAKE_CXX_FLAGS_DEV "-O3 -DNDEBUG")
set(CMAKE_C_FLAGS_DEV "-O3 -DNDEBUG")

option(CORE_LIB_ASAN "Enable AddressSanitizer on core_lib" OFF)
if(CMAKE_BUILD_TYPE STREQUAL "dev")
	set(CORE_LIB_ASAN ON)

⚠️ Potential issue | 🟡 Minor

Case-sensitive STREQUAL "dev" check is inconsistent with CMake's flags variable lookup.

CMake maps CMAKE_BUILD_TYPE to an uppercase config suffix when resolving per-config flags. CMAKE_CXX_FLAGS_DEV therefore activates for any casing of "dev" (Dev, DEV, etc.), but the STREQUAL "dev" guard on line 21 is case-sensitive. Passing -DCMAKE_BUILD_TYPE=Dev would apply the -O3 -DNDEBUG flags but skip enabling CORE_LIB_ASAN.

🛠️ Proposed fix: normalize before comparing
-if(CMAKE_BUILD_TYPE STREQUAL "dev")
+string(TOLOWER "${CMAKE_BUILD_TYPE}" _build_type_lower)
+if(_build_type_lower STREQUAL "dev")
     set(CORE_LIB_ASAN ON)
 endif()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bots/c/client_lib/CMakeLists.txt` around lines 17 - 22, The case-sensitive
comparison against CMAKE_BUILD_TYPE causes inconsistent behavior with the
per-config variable CMAKE_CXX_FLAGS_DEV; update the guard that sets
CORE_LIB_ASAN to normalize the build type before comparing (e.g., convert
CMAKE_BUILD_TYPE to lowercase or uppercase) and then compare against "dev" (or
the chosen normalized form) so CORE_LIB_ASAN is enabled for any casing; modify
the conditional that references CMAKE_BUILD_TYPE and ensure CORE_LIB_ASAN is set
when the normalized value equals "dev".

Comment on lines +104 to +110
target_include_directories(core_lib PRIVATE
	${CMAKE_CURRENT_SOURCE_DIR}/src/internal
	${PROTO_GEN_DIR}
)

target_link_libraries(core_lib PRIVATE core_game_proto)
target_compile_options(core_lib PRIVATE -Wall -Wextra -Werror)

⚠️ Potential issue | 🟠 Major

Generated proto headers included without SYSTEM, causing -Werror to fail on warnings from generated code.

PROTO_GEN_DIR is added as a regular PRIVATE include (line 106), meaning the compiler applies the full -Wall -Wextra -Werror diagnostic set (line 110) to anything grpc_bridge.cpp includes from that directory. Generated protobuf/gRPC C++ headers routinely emit diagnostics under -Wextra (unused parameters, shadow variables, etc.) that are suppressed for core_game_proto itself via -w — but that flag only covers compilation of the generated .cc files, not downstream inclusions.

Mark PROTO_GEN_DIR as a system include directory on core_lib to suppress warnings from generated headers in this context:

🛠️ Proposed fix
 target_include_directories(core_lib PRIVATE
 	${CMAKE_CURRENT_SOURCE_DIR}/src/internal
-	${PROTO_GEN_DIR}
 )
+target_include_directories(core_lib SYSTEM PRIVATE
+	${PROTO_GEN_DIR}
+)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bots/c/client_lib/CMakeLists.txt` around lines 104 - 110, The PROTO_GEN_DIR
is added with target_include_directories(core_lib PRIVATE ...) so generated
proto headers are treated as normal includes and subject to core_lib's -Wall
-Wextra -Werror; change the inclusion of PROTO_GEN_DIR to a system include so
warnings from generated code are suppressed: update the
target_include_directories invocation that references PROTO_GEN_DIR for target
core_lib to mark PROTO_GEN_DIR as SYSTEM (i.e., use
target_include_directories(core_lib PRIVATE
${CMAKE_CURRENT_SOURCE_DIR}/src/internal SYSTEM ${PROTO_GEN_DIR}) or the
equivalent CMake SYSTEM syntax) while leaving the project's own include
directory and target_compile_options intact.
