
Conversation


Copilot AI commented Nov 27, 2025

  • Understand the issue: the applyset.kubernetes.io/contains-group-kinds annotation requires the <Kind>.<Group> format with a singular, capitalized Kind name
  • Investigated kubectl source code to understand the expected format
  • Fixed get_canonical_resource_kind_name to return the correct format (e.g., Pod, Deployment.apps) instead of plural names
  • Refactored ApplySet to use ConfigMap instead of custom nyl.io/v1 resource
    • Replaced custom ApplySet.nyl.io/v1 resource with ConfigMap as parent
    • ConfigMap is namespace-scoped, placed in the default namespace determined by Nyl
    • Removed --applyset flag usage with kubectl (manually set annotations instead)
    • Simplified ApplySet class as a plain dataclass without NylResource inheritance
    • Updated kubectl.apply() to remove applyset and prune parameters
    • Updated tests for new ConfigMap-based implementation
  • Added nyl.io/last-applied-context annotation
    • New ApplySetContext dataclass to store context information
    • Includes source (cli/argocd), files, revision, app_name, app_namespace, project_name, source_path, source_repo_url, target_revision, kube_version
    • Added ApplySetContext.from_environment() static method
    • Added comprehensive tests for context serialization
  • Added ApplySetManager helper class
    • Handles preparing resources for apply/diff (adds ConfigMap and part-of labels)
    • Removed separate apply of ApplySet ConfigMap - now included with other resources
    • Supports disabling ApplySet management when not needed
  • Added helper functions for cluster lookups
    • get_existing_applyset(name, namespace): Looks up an existing ApplySet ConfigMap
    • list_applyset_members(applyset_id): Lists all resources belonging to an ApplySet
  • Implemented deleted resource handling
    • Kubectl.delete(): Delete individual resources from cluster
    • ApplySetManager.get_deleted_resources(): Compare manifest against cluster state
    • nyl template --apply: Deletes resources no longer in manifest
    • nyl template --diff: Shows resources that will be deleted
  • Run linting, type checking and tests to validate changes
  • Run CodeQL security scan
Original prompt

This section details the original issue you should resolve

<issue_title>Support for Kubernetes ApplySets</issue_title>
<issue_description>Kubernetes "ApplySets" are objects that group resources that are applied together, allowing resources removed in a subsequent apply to be pruned, or even the entire group of objects to be deleted at once.

It would be nice if Nyl supported ApplySets in such a fashion that they are straightforward to use when applying manifests with kubectl, allowing users to easily upgrade applications and prune removed resources.

In an ideal scenario, one would simply run something like:

$ nyl template . | KUBECTL_APPLYSET=true kubectl apply -f - --prune

Though it may be necessary to reference the apply set with --applyset=kind/name. If the invocation of kubectl is error prone, we could also consider adding a command or option to Nyl to actually run kubectl for you.

$ nyl template . --apply

Nyl can provide its own parent resource (e.g. ApplySet.nyl.io/v1). It can automatically detect the presence of the apply set resource in the file, and for manifests loaded from the same source file, Nyl can automatically assign the part-of labels (although that may not be what the ApplySet spec intended/how kubectl implements it, I'm not 100% sure yet).

apiVersion: nyl.io/v1
kind: ApplySet
metadata:
  name: my-applyset

Relevant resources:

Comments on the Issue (you are @copilot in this section)

@NiklasRosenstein The current implementation is a bit bumpy.
  • Nyl detects if there is an ApplySet.nyl.io/v1 resource in a manifest source file. If yes:

    • It ensures that its applyset.kubernetes.io/id label is set correctly.

    • It sets

      • applyset.kubernetes.io/tooling to kubectl/1.30
      • applyset.kubernetes.io/contains-group-kinds to the kinds from all resources that belong to the set.
    • That is because kubectl can't create the ApplySet custom resource, and complains if the resource does not have these labels:

      error: ApplySet parent object "applysets.nyl.io/argocd" already exists and is missing required annotation "applyset.kubernetes.io/tooling"
      error: parsing ApplySet annotation on "applysets.nyl.io/argocd": kubectl requires the "applyset.kubernetes.io/contains-group-kinds" annotation to be set on all ApplySet parent objects
      
    • On a subsequent run with --applyset, it complains that the field manager for these labels doesn't match

      WARNING: failed to update ApplySet: Apply failed with 2 conflicts: conflicts with "kubectl-client-side-apply" using nyl.io/v1:
      - .metadata.annotations.applyset.kubernetes.io/contains-group-kinds
      - .metadata.annotations.applyset.kubernetes.io/tooling
      ApplySet field manager kubectl-applyset should own these fields. Retrying with conflicts forced.
      
  • Because kubectl can't create the custom ApplySet resource, it must first be applied manually. Otherwise, it complains:

    error: custom resource ApplySet parents cannot be created automatically

    • But Nyl outputs the ApplySet resource along with the rest, thus the first --applyset-less apply will contain all other resources as well (hence no resource gets the applyset.kubernetes.io/part-of label assigned by kubectl).

      • We could set the applyset.kubernetes.io/part-of label on all resources before spitting them out. This works on a normal apply, but with --applyset, kubectl will complain that the input resources already have the label defined.

        error: ApplySet label "applyset.kubernetes.io/part-of" already set in input data

Presumably this works a bit better with Secret or ConfigMap as the apply set resource, because kubectl can create those. Will have to try it, but they are both namespaced and I think having cluster-scoped apply sets could be quite useful.

@NiklasRosenstein
I've added a nyl template --apply option that does the following (assuming there is an ApplySet in the manifest source file):

  • It implies the new --no-applyset-part-of option, which will have Nyl not add the applyset.kubernetes.io/part-of label to the generated resources.
  • Ensure the ApplySet resource exists with kubectl apply --server-side --force-conflicts.
  • Apply all generated manifests via kubectl apply --server-side --force-conflicts --applyset=... --prune

What's not so pretty yet:

  • In the beginning, you always get this warning from kubectl:

    W0801 02:36:26.632531   85080 applyset.go:447] WARNING: failed to update Ap...
    
    


Copilot AI and others added 2 commits November 27, 2025 08:11
The applyset.kubernetes.io/contains-group-kinds annotation requires
plural resource names (e.g., 'deployments.apps', 'cronworkflows.argoproj.io')
instead of singular kind names.

Changes:
- Updated get_canonical_resource_kind_name to return plural names
- Added DynamicClient support to query K8s API for correct plural names
- Added _pluralize_kind heuristic fallback for when API is unavailable
- Updated set_group_kinds to accept optional DynamicClient parameter
- Updated template.py to pass DynamicClient to set_group_kinds

Co-authored-by: NiklasRosenstein <1318438+NiklasRosenstein@users.noreply.github.com>
Address code review feedback by catching ResourceNotFoundError and
ApiException instead of the broad Exception class.

Co-authored-by: NiklasRosenstein <1318438+NiklasRosenstein@users.noreply.github.com>
Copilot AI changed the title from "[WIP] Add support for Kubernetes ApplySets in Nyl" to "Fix ApplySet contains-group-kinds to use plural resource names" on Nov 27, 2025
@NiklasRosenstein
Collaborator

@copilot I tested your changes, and they seem to not be working quite yet.

I have a manifest with a Pod:

---
apiVersion: v1
kind: Pod
metadata:
  name: test
  namespace: default
spec:
  containers:
    - name: nginx
      image: nginx

When templated with Nyl and ApplySets enabled, I get

---
apiVersion: nyl.io/v1
kind: ApplySet
metadata:
  annotations:
    applyset.kubernetes.io/contains-group-kinds: pods
    applyset.kubernetes.io/tooling: kubectl/v1.31
  labels:
    applyset.kubernetes.io/id: applyset--bSG8RkLd942RpN_vhMnNy_MdqHEQn6osaB49utUaFA-v1
  name: data-warehouse-dev-retool

---
apiVersion: v1
kind: Pod
metadata:
  labels:
    applyset.kubernetes.io/part-of: applyset--bSG8RkLd942RpN_vhMnNy_MdqHEQn6osaB49utUaFA-v1
  name: test
  namespace: default
spec:
  containers:
  - image: nginx
    name: nginx

But applying the two resources gives:

2025-11-27 09:43:40.547 | INFO     | Automatically creating ApplySet for data-warehouse-dev-retool.yaml (name: data-warehouse-dev-retool).
2025-11-27 09:43:40.557 | INFO     | Kubectl-apply ApplySet resource applysets.nyl.io/data-warehouse-dev-retool from data-warehouse-dev-retool.yaml.
applyset.nyl.io/data-warehouse-dev-retool serverside-applied
2025-11-27 09:43:41.876 | INFO     | Kubectl-apply 1 resource(s) from 'data-warehouse-dev-retool.yaml'
error: parsing ApplySet annotation on "applysets.nyl.io/data-warehouse-dev-retool": could not find mapping for kind in "applyset.kubernetes.io/contains-group-kinds" annotation: no matches for kind "pods" in group ""

The same also happens when I use a Namespace instead of a Pod in the manifest.

…l names

The applyset.kubernetes.io/contains-group-kinds annotation requires the
format <Kind>.<Group> with the singular, capitalized Kind name (e.g.,
'Pod', 'Deployment.apps'), not the plural lowercase resource names.

This matches the format expected by kubectl's ParseGroupKind function
which parses strings like 'Pod' or 'Deployment.apps' into GroupKind
structs with Kind and Group fields.

Fixes the error:
  error: parsing ApplySet annotation: could not find mapping for kind
  in "applyset.kubernetes.io/contains-group-kinds" annotation:
  no matches for kind "pods" in group ""

Co-authored-by: NiklasRosenstein <1318438+NiklasRosenstein@users.noreply.github.com>
Author

Copilot AI commented Nov 27, 2025

@copilot I tested your changes, and they seem to not be working quite yet.

I have a manifest with a Pod:
...

Fixed in 802af32. The issue was that the applyset.kubernetes.io/contains-group-kinds annotation format is <Kind>.<Group> with the singular, capitalized Kind name (e.g., Pod, Deployment.apps), not the plural lowercase resource names I was generating.

I investigated kubectl's source code and found that schema.ParseGroupKind() expects strings like Pod or Deployment.apps and parses them into GroupKind structs. The annotation should now correctly produce values like Pod for a v1/Pod resource instead of pods.
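
For reference, a minimal sketch of the corrected mapping (the function name is taken from this PR, but the signature shown here is an assumption, not the PR's actual code):

def get_canonical_resource_kind_name(api_version: str, kind: str) -> str:
    # Core-group resources (apiVersion "v1") have no group suffix;
    # everything else uses the <Kind>.<Group> form kubectl's ParseGroupKind expects.
    group = api_version.split("/")[0] if "/" in api_version else ""
    return f"{kind}.{group}" if group else kind

assert get_canonical_resource_kind_name("v1", "Pod") == "Pod"
assert get_canonical_resource_kind_name("apps/v1", "Deployment") == "Deployment.apps"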

@NiklasRosenstein
Collaborator

@copilot I think we need to switch things up a bit. kubectl applysets are kind of hard to work with, they don't seem very production ready yet.

I propose that we

  • Drop our custom ApplySet resource and instead switch to using a ConfigMap
  • Because ConfigMaps are namespace scoped, and Nyl supports multi-namespace deployments, we need to decide which namespace to put the ConfigMap in. Nyl does have a notion of a "default" namespace (which is not necessarily default, but is usually one of the namespaces defined in the manifest or else one specified via the CLI until it finally falls back to the default namespace), and that's where we should create the ConfigMap that serves as an ApplySet target.
  • However, the kubectl apply interface will not allow deploying resources from different namespaces with the same command when --applyset is specified, so instead we simply don't supply the --applyset option but set the ConfigMap and all the other resources up with the correct annotations for it. (See Add applyset-namespace flag to kubectl apply kubernetes/kubernetes#132527 for a related issue about adding an --applyset-namespace option)
  • We derive the name of the ConfigMap resource as we previously derived the name of the ApplySet.nyl.io resource.
  • When using the nyl template --diff option, we look up all resources that exist in the cluster and are associated with the existing ConfigMap applyset (if it exists), figure out which resources are now deleted, and let kubectl diff handle the rest
  • When using nyl template --apply, we also need to look up the ConfigMap applyset to tell which resources we need to delete (if they're no longer specified in the manifest)

Author

Copilot AI commented Nov 27, 2025

@copilot I think we need to switch things up a bit. kubectl applysets are kind of hard to work with, they don't seem very production ready yet.

I propose that we
...

This is a significant architectural change. To summarize the implementation plan:

  1. Replace ApplySet.nyl.io/v1 with ConfigMap as the ApplySet parent resource
  2. ConfigMap location: Use the "default" namespace determined by Nyl's existing logic
  3. No --applyset flag: Set up ConfigMap and resources with correct annotations (applyset.kubernetes.io/part-of, applyset.kubernetes.io/id, applyset.kubernetes.io/tooling, applyset.kubernetes.io/contains-group-kinds) manually
  4. ConfigMap naming: Derive name similarly to current ApplySet naming
  5. Diff support: Query existing resources associated with the ConfigMap applyset to identify deleted resources
  6. Apply support: Same lookup logic to determine which resources to delete

This will require substantial refactoring of applyset.py, template.py, kubectl.py, and their tests. I'll start implementing these changes.
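
For context, setting the annotations manually means Nyl must compute the applyset.kubernetes.io/id value that kubectl validates. A sketch of the V1 scheme from KEP-3659 (the PR's calculate_applyset_id presumably does something equivalent; this body is reconstructed from the spec, not taken from the PR):

import base64
import hashlib

def calculate_applyset_id(name: str, namespace: str, kind: str = "ConfigMap", group: str = "") -> str:
    # V1 ID format per KEP-3659: "applyset-" + base64url(sha256("<name>.<namespace>.<kind>.<group>"))
    # with base64 padding stripped, then "-v1". For a cluster-scoped parent, namespace is "".
    digest = hashlib.sha256(f"{name}.{namespace}.{kind}.{group}".encode()).digest()
    return "applyset-" + base64.urlsafe_b64encode(digest).decode().rstrip("=") + "-v1"

Every member resource then carries this exact value in its applyset.kubernetes.io/part-of label, matching the format of the applyset-...-v1 IDs visible in the manifests quoted earlier.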

- Replace custom ApplySet.nyl.io/v1 resource with ConfigMap as the parent
- ConfigMap is namespace-scoped, placed in the default namespace
- Remove --applyset flag usage with kubectl (manually set annotations)
- Simplify ApplySet class as a plain dataclass without NylResource inheritance
- Update kubectl.apply() to remove applyset and prune parameters
- Update tests for new ConfigMap-based implementation

Co-authored-by: NiklasRosenstein <1318438+NiklasRosenstein@users.noreply.github.com>
"name": self.name,
"namespace": self.namespace,
"annotations": {
APPLYSET_ANNOTATION_TOOLING: self.tooling,
Collaborator


@copilot we should introduce a Nyl-specific annotation that includes contextual information on the "last applied configuration". When run through ArgoCD (and if applysets are enabled when using ArgoCD), we'd include "argocd" and the Git repository hash here for example. When run locally, we would just say "cli". Then we include the manifest file name(s) that were used to derive the manifests that were applied together.

Author


Added in 92ce576. The new nyl.io/last-applied-context annotation includes:

  • source: "cli" or "argocd" depending on execution context
  • files: List of manifest file names used
  • revision: Git commit hash (from ARGOCD_APP_REVISION when in ArgoCD)
  • app_name: ArgoCD application name (from ARGOCD_APP_NAME)

Example output for CLI:

{"source":"cli","files":["test.yaml"]}

Example for ArgoCD:

{"source":"argocd","files":["test.yaml"],"revision":"abc123","app_name":"my-app"}

Introduces a Nyl-specific annotation that stores contextual information
about when/how the ApplySet was last applied:

- source: "cli" or "argocd" depending on the execution context
- files: List of manifest file names used to generate resources
- revision: Git commit hash (when available via ArgoCD)
- app_name: ArgoCD application name (when running via ArgoCD)

This helps track the origin and version of applied configurations.

Co-authored-by: NiklasRosenstein <1318438+NiklasRosenstein@users.noreply.github.com>
Collaborator

@NiklasRosenstein left a comment


@copilot I think it makes sense to extract the responsibility of applying and diffing resources associated with an ApplySet to a separate helper class that will take care of

  • Looking up the existing apply set (if it exists) in order to compare it when doing diff or apply
  • Constructing an updated apply set and including it in the apply operation

Because Nyl can also run without generating apply sets, the class should also have an option to skip any logic related to apply sets.


# Build context information for the ApplySet
argocd_app_name = os.getenv("ARGOCD_APP_NAME")
argocd_revision = os.getenv("ARGOCD_APP_REVISION")
Collaborator


@copilot Let's also include ARGOCD_APP_SOURCE_PATH, ARGOCD_APP_SOURCE_REPO_URL and ARGOCD_APP_SOURCE_TARGET_REVISION. Might be better to move this into a static method that can be called with ApplySetContext.from_environment(files=[str(source.file)])

Author


Added in 4b79495. ApplySetContext.from_environment(files=[...]) now includes:

  • ARGOCD_APP_NAME → app_name
  • ARGOCD_APP_REVISION → revision
  • ARGOCD_APP_SOURCE_PATH → source_path
  • ARGOCD_APP_SOURCE_REPO_URL → source_repo_url
  • ARGOCD_APP_SOURCE_TARGET_REVISION → target_revision

Collaborator


@copilot Let's also include ARGOCD_APP_NAMESPACE, ARGOCD_APP_PROJECT_NAME, and KUBE_VERSION

Author


Added in 65a455a. ApplySetContext.from_environment() now also includes:

  • ARGOCD_APP_NAMESPACE → app_namespace
  • ARGOCD_APP_PROJECT_NAME → project_name
  • KUBE_VERSION → kube_version (available in both CLI and ArgoCD contexts)
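
Putting the two comments together, a condensed sketch of the resulting dataclass (field and environment-variable names as listed above; the ArgoCD detection heuristic and the exact shape of the real class are assumptions):

import os
from dataclasses import dataclass

@dataclass
class ApplySetContext:
    source: str  # "argocd" when running under ArgoCD, otherwise "cli"
    files: list[str]
    app_name: str | None = None
    revision: str | None = None
    source_path: str | None = None
    source_repo_url: str | None = None
    target_revision: str | None = None
    app_namespace: str | None = None
    project_name: str | None = None
    kube_version: str | None = None

    @staticmethod
    def from_environment(files: list[str]) -> "ApplySetContext":
        env = os.getenv
        return ApplySetContext(
            source="argocd" if env("ARGOCD_APP_NAME") else "cli",
            files=files,
            app_name=env("ARGOCD_APP_NAME"),
            revision=env("ARGOCD_APP_REVISION"),
            source_path=env("ARGOCD_APP_SOURCE_PATH"),
            source_repo_url=env("ARGOCD_APP_SOURCE_REPO_URL"),
            target_revision=env("ARGOCD_APP_SOURCE_TARGET_REVISION"),
            app_namespace=env("ARGOCD_APP_NAMESPACE"),
            project_name=env("ARGOCD_APP_PROJECT_NAME"),
            kube_version=env("KUBE_VERSION"),
        )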

…vars

- Add ApplySetContext.from_environment() static method to create context from env vars
- Include additional ArgoCD env vars: source_path, source_repo_url, target_revision
- Create ApplySetManager class to handle apply/diff operations with ApplySets
- Remove separate apply of ApplySet ConfigMap - now included with other resources
- Update template.py to use ApplySetManager.prepare_resources()
- Add comprehensive tests for new functionality

Co-authored-by: NiklasRosenstein <1318438+NiklasRosenstein@users.noreply.github.com>
Author

Copilot AI commented Nov 27, 2025

@copilot I think it makes sense to extract the responsibility of applying and diffing resources associated with an ApplySet to a separate helper class that will take care of

  • Looking up the existin...

Added in 4b79495. Created ApplySetManager helper class that:

  • Takes an optional ApplySet and add_part_of_labels flag
  • Has an enabled property to check if ApplySet management is active
  • prepare_resources() method adds part-of labels and includes the ConfigMap with other resources
  • Skips all ApplySet logic when applyset=None

…o ApplySetContext

- app_namespace: from ARGOCD_APP_NAMESPACE env var
- project_name: from ARGOCD_APP_PROJECT_NAME env var
- kube_version: from KUBE_VERSION env var (available in both CLI and ArgoCD contexts)
- Updated tests to cover new fields

Co-authored-by: NiklasRosenstein <1318438+NiklasRosenstein@users.noreply.github.com>
data["target_revision"] = self.target_revision
if self.kube_version:
data["kube_version"] = self.kube_version
return json.dumps(data, separators=(",", ":"))
Collaborator


@copilot Why do we need to specify the separators argument?

Author


The separators=(",", ":") argument produces compact JSON without spaces after separators. By default, json.dumps() produces {"key": "value"} with a space after the colon, but with these separators it produces {"key":"value"}. This reduces the annotation value size slightly, which can be helpful since annotations have size limits.
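
For illustration (standard-library behavior, not PR code):

import json

json.dumps({"source": "cli"})                         # '{"source": "cli"}'
json.dumps({"source": "cli"}, separators=(",", ":"))  # '{"source":"cli"}'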

- get_existing_applyset(name, namespace): Looks up an existing ApplySet ConfigMap in the cluster
- list_applyset_members(applyset_id): Lists all resources with the applyset.kubernetes.io/part-of label

These functions enable nyl template --apply and --diff to find removed resources.

Co-authored-by: NiklasRosenstein <1318438+NiklasRosenstein@users.noreply.github.com>
- Add Kubectl.delete() method to delete individual resources
- Add ApplySetManager.get_deleted_resources() to find resources in cluster but not in manifest
- Add _get_resource_identifier() helper for resource comparison
- Update template command to delete removed resources during --apply
- Update template command to show resources to be deleted during --diff

Co-authored-by: NiklasRosenstein <1318438+NiklasRosenstein@users.noreply.github.com>
Contributor

Copilot AI left a comment


Pull request overview

This pull request refactors the ApplySet implementation to use ConfigMap as the parent resource instead of a custom CRD, fixing the contains-group-kinds annotation format and adding comprehensive context tracking and pruning capabilities.

Key changes:

  • Refactored ApplySet from custom nyl.io/v1 resource to namespace-scoped ConfigMap
  • Fixed get_canonical_resource_kind_name to return singular, capitalized Kind names (e.g., Pod, Deployment.apps)
  • Added ApplySetContext for tracking apply metadata (CLI/ArgoCD source, files, revision, etc.)
  • Implemented automatic pruning of deleted resources via kubectl.delete() and ApplySetManager.get_deleted_resources()

Reviewed changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 13 comments.

  • src/nyl/resources/applyset.py: Major refactor: ApplySet is now a plain dataclass (not NylResource); added ApplySetContext for metadata; added ApplySetManager helper class; added cluster lookup functions (get_existing_applyset, list_applyset_members); fixed calculate_applyset_id for ConfigMap-based IDs
  • src/nyl/tools/kubectl.py: Removed applyset/prune parameters from apply(); added new delete() method for removing individual resources; added TYPE_CHECKING import for avoiding circular dependencies
  • src/nyl/commands/template.py: Integrated ApplySetManager; automated ApplySet creation with context from environment; implemented deleted resource detection and deletion in --apply mode; added deleted resource display in --diff mode
  • src/nyl/resources/applyset_test.py: Updated tests for ConfigMap-based ApplySets; added comprehensive tests for ApplySetContext serialization and environment detection; added tests for ApplySetManager functionality
Comments suppressed due to low confidence (1)

src/nyl/resources/applyset.py:7

  • Import of 'logger' is not used.
from loguru import logger


Comment on lines +147 to +148
elif "NotFound" in status.stderr:
return False

Copilot AI Nov 28, 2025


The error detection logic "NotFound" in status.stderr is fragile and could produce false positives. If stderr contains "NotFound" anywhere (e.g., in a different error message), it will be incorrectly treated as a resource-not-found case. Consider using the return code or a more precise pattern match to detect this specific error condition.
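
One way to tighten this, assuming kubectl's usual "Error from server (NotFound): ..." stderr wording (a sketch, not code from this PR):

import re

# Match kubectl's structured server-error prefix rather than the bare word,
# so unrelated messages that merely contain "NotFound" are not misread.
_NOT_FOUND = re.compile(r"^Error from server \(NotFound\):", re.MULTILINE)

def stderr_indicates_not_found(stderr: str) -> bool:
    return bool(_NOT_FOUND.search(stderr))

Alternatively, passing --ignore-not-found to kubectl delete avoids parsing stderr entirely, at the cost of losing the deleted/didn't-exist distinction in the return value.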

Comment on lines +481 to +491
for kind, group in resources_to_check:
    try:
        api_resources = dynamic.resources.search(kind=kind, group=group)
        if not api_resources:
            continue

        res_client = api_resources[0]

        items = res_client.get(label_selector=f"{APPLYSET_LABEL_PART_OF}={applyset_id}")
        for item in items.items:
            all_resources.append(Resource(item.to_dict()))

Copilot AI Nov 28, 2025


[nitpick] The function iterates over each kind individually and makes separate API calls to list resources. For ApplySets with many resource kinds, this could result in numerous API calls and potentially slow performance. Consider documenting this behavior or exploring batch query options if the Kubernetes API supports them.

Comment on lines +304 to +305
applyset_name = f"nyl.applyset.{current_default_namespace}"
applyset = ApplySet.new(applyset_name, current_default_namespace)

Copilot AI Nov 28, 2025


The ApplySet name is constructed by prepending "nyl.applyset." to the namespace name. This could result in very long ConfigMap names if the namespace name is long, potentially exceeding Kubernetes' 253-character limit for resource names. Consider validating the final name length or using a hash-based naming scheme if the namespace name is too long.
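
A sketch of such a guard (illustrative only; 253 characters is the DNS-subdomain limit that applies to ConfigMap names, and the truncation scheme here is hypothetical, not what the PR does):

import hashlib

MAX_NAME_LEN = 253  # DNS subdomain length limit for ConfigMap names

def applyset_name_for(namespace: str) -> str:
    name = f"nyl.applyset.{namespace}"
    if len(name) <= MAX_NAME_LEN:
        return name
    # Keep a recognizable prefix and disambiguate with a short digest of the namespace.
    digest = hashlib.sha256(namespace.encode()).hexdigest()[:10]
    return f"{name[:MAX_NAME_LEN - 11]}-{digest}"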

Comment on lines +427 to +507
def get_existing_applyset(name: str, namespace: str, client: ApiClient) -> Resource | None:
    """
    Look up an existing ApplySet ConfigMap in the cluster.

    Args:
        name: The name of the ApplySet ConfigMap.
        namespace: The namespace of the ApplySet ConfigMap.
        client: The Kubernetes API client to use.

    Returns:
        The ApplySet ConfigMap resource if it exists, or None if not found.
    """
    try:
        dynamic = DynamicClient(client)
        # ConfigMap is v1
        resource_client = dynamic.resources.get(api_version="v1", kind="ConfigMap")
        resource = resource_client.get(name=name, namespace=namespace)

        # Verify it's an ApplySet ConfigMap by checking for the id label
        labels = resource.metadata.get("labels", {}).to_dict()
        if APPLYSET_LABEL_ID in labels:
            return Resource(resource.to_dict())
        return None
    except NotFoundError:
        return None


def list_applyset_members(applyset_id: str, client: ApiClient, kinds: list[str]) -> ResourceList:
    """
    List all resources in the cluster that are members of an ApplySet.

    Args:
        applyset_id: The ID of the ApplySet (value of the applyset.kubernetes.io/id label).
        client: The Kubernetes API client to use.
        kinds: List of resource kinds to search for.

    Returns:
        A ResourceList of all resources that have the applyset.kubernetes.io/part-of label
        matching the given ApplySet ID.
    """
    dynamic = DynamicClient(client)
    all_resources: list[Resource] = []

    resources_to_check: list[tuple[str, str]] = []

    for k in kinds:
        parts = k.split(".", 1)
        if len(parts) == 2:
            resources_to_check.append((parts[0], parts[1]))
        else:
            resources_to_check.append((parts[0], ""))

    for kind, group in resources_to_check:
        try:
            api_resources = dynamic.resources.search(kind=kind, group=group)
            if not api_resources:
                continue

            res_client = api_resources[0]

            items = res_client.get(label_selector=f"{APPLYSET_LABEL_PART_OF}={applyset_id}")
            for item in items.items:
                all_resources.append(Resource(item.to_dict()))

        except NotFoundError:
            # It's possible the resource kind doesn't exist in the cluster (e.g. CRD was removed).
            # In this case, we can just skip it.
            continue

    # Deduplicate resources by their UID
    seen_uids: set[str] = set()
    unique_resources: list[Resource] = []
    for resource in all_resources:
        uid = resource.get("metadata", {}).get("uid", "")
        if uid and uid not in seen_uids:
            seen_uids.add(uid)
            unique_resources.append(resource)

    return ResourceList(unique_resources)

Copilot AI Nov 28, 2025


The get_existing_applyset and list_applyset_members helper functions lack test coverage. These functions are critical for the deleted resource detection feature. Consider adding tests to verify:

  1. Correct retrieval of existing ApplySet ConfigMaps
  2. Proper filtering by the applyset.kubernetes.io/id label
  3. Handling of NotFoundError cases
  4. Correct parsing of the contains-group-kinds annotation
  5. Deduplication of resources by UID
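
Point 5 in particular can be exercised without a cluster; a minimal pytest-style sketch of the dedup contract, with plain dicts standing in for Nyl's Resource type (the helper below is a stand-in, not the PR's function):

def dedupe_by_uid(resources: list[dict]) -> list[dict]:
    seen: set[str] = set()
    unique: list[dict] = []
    for resource in resources:
        uid = resource.get("metadata", {}).get("uid", "")
        if uid and uid not in seen:
            seen.add(uid)
            unique.append(resource)
    return unique

def test_dedupe_by_uid() -> None:
    pod = {"kind": "Pod", "metadata": {"uid": "abc"}}
    assert dedupe_by_uid([pod, dict(pod)]) == [pod]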


except NotFoundError:
    # It's possible the resource kind doesn't exist in the cluster (e.g. CRD was removed).
    # In this case, we can just skip it.

Copilot AI Nov 28, 2025


[nitpick] The error handling here catches NotFoundError, but it's unclear whether this handles all potential failure cases. Consider logging a debug message when a resource kind is not found to help with troubleshooting. This would make it clearer why certain resources aren't being detected during pruning operations.

Suggested change
# In this case, we can just skip it.
# In this case, we can just skip it.
logger.debug(f"Resource kind '{kind}' in group '{group}' not found in cluster; skipping during ApplySet prune.")

import os
from dataclasses import dataclass, field
from typing import Any
from loguru import logger

Copilot AI Nov 28, 2025


The logger import is unused. It should be removed as it's not referenced anywhere in the file.

Suggested change
from loguru import logger

try:
    kubectl.delete(resource)
except Exception as e:
    logger.warning("Failed to delete {}: {}", resource_name, e)

Copilot AI Nov 28, 2025


The broad exception handling here (except Exception as e) could hide important errors. Consider catching specific exception types (e.g., KubectlError) or at least logging the full exception with traceback for debugging. This will help operators understand why a deletion failed.

Suggested change
logger.warning("Failed to delete {}: {}", resource_name, e)
logger.exception("Failed to delete {}", resource_name)

Comment on lines +439 to +451
try:
    dynamic = DynamicClient(client)
    # ConfigMap is v1
    resource_client = dynamic.resources.get(api_version="v1", kind="ConfigMap")
    resource = resource_client.get(name=name, namespace=namespace)

    # Verify it's an ApplySet ConfigMap by checking for the id label
    labels = resource.metadata.get("labels", {}).to_dict()
    if APPLYSET_LABEL_ID in labels:
        return Resource(resource.to_dict())
    return None
except NotFoundError:
    return None

Copilot AI Nov 28, 2025


[nitpick] The function silently catches NotFoundError and returns None, but other exceptions (e.g., permission errors, API errors) will propagate up. Consider adding error handling or documentation for other exception types that might occur when accessing the Kubernetes API.

Comment on lines +346 to +401
def get_deleted_resources(self, new_resources: ResourceList) -> ResourceList:
    """
    Compute which resources have been removed from the manifest.

    This compares the new resources against the existing resources in the cluster
    that are part of this ApplySet, and returns the resources that exist in the
    cluster but are not in the new manifest.

    Args:
        new_resources: The new list of resources from the manifest.

    Returns:
        A ResourceList of resources that should be deleted (exist in cluster but not in manifest).
    """
    if not self.enabled or self.applyset is None:
        return ResourceList([])

    # Get existing resources from the cluster that belong to this ApplySet.
    # Try to find the existing ApplySet ConfigMap to know which kinds to look for.
    existing_applyset_cm = get_existing_applyset(self.applyset.name, self.applyset.namespace, self.client)
    kinds: list[str] = []

    if existing_applyset_cm:
        annotations = existing_applyset_cm.get("metadata", {}).get("annotations", {})
        kinds_str = annotations.get(APPLYSET_ANNOTATION_CONTAINS_GROUP_KINDS)
        if kinds_str:
            kinds = kinds_str.split(",")

    existing_resources = list_applyset_members(self.applyset.id, self.client, kinds)

    if not existing_resources:
        return ResourceList([])

    # Build a set of resource identifiers from the new manifest
    new_resource_ids = set()
    for resource in new_resources:
        resource_id = _get_resource_identifier(resource)
        if resource_id:
            new_resource_ids.add(resource_id)

    # Also add the ApplySet ConfigMap itself to avoid deleting it
    applyset_cm = self.applyset.dump()
    applyset_id = _get_resource_identifier(applyset_cm)
    if applyset_id:
        new_resource_ids.add(applyset_id)

    # Find resources that exist in the cluster but not in the new manifest
    deleted_resources: list[Resource] = []
    for resource in existing_resources:
        resource_id = _get_resource_identifier(resource)
        if resource_id and resource_id not in new_resource_ids:
            deleted_resources.append(resource)

    return ResourceList(deleted_resources)

Copilot AI Nov 28, 2025


The new get_deleted_resources method lacks test coverage. This is a critical feature for the ApplySet pruning functionality. Consider adding tests that verify:

  1. Resources that exist in the cluster but not in the manifest are correctly identified
  2. Resources that exist in both places are not included in the deleted list
  3. The ApplySet ConfigMap itself is not included in the deleted list
  4. Edge cases like empty resource lists are handled correctly

Comment on lines +113 to +150
def delete(self, resource: Resource) -> bool:
    """
    Delete a single resource from the cluster.

    Args:
        resource: The resource to delete.

    Returns:
        True if the resource was deleted, False if it didn't exist.
    """
    api_version = resource.get("apiVersion", "")
    kind = resource.get("kind", "")
    metadata = resource.get("metadata", {})
    name = metadata.get("name", "")
    namespace = metadata.get("namespace")

    # Build the resource identifier
    if "/" in api_version:
        # For resources like apps/v1 Deployment, the type is "deployment.apps"
        group = api_version.split("/")[0]
        resource_type = f"{kind.lower()}.{group}"
    else:
        # For core v1 resources like Pod, the type is just the kind
        resource_type = kind.lower()

    command = ["kubectl", "delete", resource_type, name]
    if namespace:
        command.extend(["-n", namespace])

    logger.debug("Deleting resource with command: $ {command}", command=lazy_str(pretty_cmd, command))
    status = subprocess.run(command, capture_output=True, text=True, env={**os.environ, **self.env})

    if status.returncode == 0:
        return True
    elif "NotFound" in status.stderr:
        return False
    else:
        raise KubectlError(status.returncode, status.stderr)

Copilot AI Nov 28, 2025


The new delete method lacks test coverage. This is a critical operation that removes resources from the cluster. Consider adding tests to verify:

  1. Successful deletion of resources
  2. Correct handling when resources don't exist (NotFound case)
  3. Proper construction of resource type identifiers for both core and non-core resources
  4. Correct handling of namespaced vs cluster-scoped resources
  5. Error handling for delete failures
