Fix ApplySet contains-group-kinds to use plural resource names #110
base: develop
Conversation
The `applyset.kubernetes.io/contains-group-kinds` annotation requires plural resource names (e.g., `deployments.apps`, `cronworkflows.argoproj.io`) instead of singular kind names.

Changes:

- Updated `get_canonical_resource_kind_name` to return plural names
- Added DynamicClient support to query the K8s API for correct plural names
- Added `_pluralize_kind` heuristic fallback for when the API is unavailable
- Updated `set_group_kinds` to accept an optional DynamicClient parameter
- Updated `template.py` to pass the DynamicClient to `set_group_kinds`

Co-authored-by: NiklasRosenstein <1318438+NiklasRosenstein@users.noreply.github.com>
Address code review feedback by catching ResourceNotFoundError and ApiException instead of the broad Exception class. Co-authored-by: NiklasRosenstein <1318438+NiklasRosenstein@users.noreply.github.com>
@copilot I tested your changes, and they seem to not be working quite yet. I have a manifest with a Pod:

```yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: test
  namespace: default
spec:
  containers:
    - name: nginx
      image: nginx
```

When templated with Nyl and ApplySets enabled, I get:

```yaml
---
apiVersion: nyl.io/v1
kind: ApplySet
metadata:
  annotations:
    applyset.kubernetes.io/contains-group-kinds: pods
    applyset.kubernetes.io/tooling: kubectl/v1.31
  labels:
    applyset.kubernetes.io/id: applyset--bSG8RkLd942RpN_vhMnNy_MdqHEQn6osaB49utUaFA-v1
  name: data-warehouse-dev-retool
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    applyset.kubernetes.io/part-of: applyset--bSG8RkLd942RpN_vhMnNy_MdqHEQn6osaB49utUaFA-v1
  name: test
  namespace: default
spec:
  containers:
    - image: nginx
      name: nginx
```

But applying the two resources gives: … The same also happens when I use a …
…l names

The `applyset.kubernetes.io/contains-group-kinds` annotation requires the format `<Kind>.<Group>` with the singular, capitalized Kind name (e.g., `Pod`, `Deployment.apps`), not the plural lowercase resource names. This matches the format expected by kubectl's `ParseGroupKind` function, which parses strings like `Pod` or `Deployment.apps` into `GroupKind` structs with `Kind` and `Group` fields.

Fixes the error:

```
error: parsing ApplySet annotation: could not find mapping for kind in "applyset.kubernetes.io/contains-group-kinds" annotation: no matches for kind "pods" in group ""
```

Co-authored-by: NiklasRosenstein <1318438+NiklasRosenstein@users.noreply.github.com>
Fixed in 802af32. The issue was that the … I investigated kubectl's source code and found that …
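The corrected format can be sketched as a small helper. This is a hypothetical reconstruction for illustration, not the actual `get_canonical_resource_kind_name` from the PR:

```python
# Hypothetical sketch: build a contains-group-kinds entry in the <Kind>.<Group>
# format that kubectl's ParseGroupKind expects. Core-group resources (empty
# group) use the bare Kind name, e.g. "Pod".
def group_kind_entry(kind: str, group: str = "") -> str:
    return f"{kind}.{group}" if group else kind

# The annotation value is a comma-separated list of such entries.
def contains_group_kinds(pairs: list[tuple[str, str]]) -> str:
    return ",".join(sorted(group_kind_entry(k, g) for k, g in pairs))
```

For example, a manifest containing a Pod and a Deployment would yield `Deployment.apps,Pod`, which kubectl can map back to GroupKinds.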
@copilot I think we need to switch things up a bit. I propose that we …
This is a significant architectural change. To summarize the implementation plan:
This will require substantial refactoring of …
- Replace custom `ApplySet.nyl.io/v1` resource with ConfigMap as the parent
- ConfigMap is namespace-scoped, placed in the default namespace
- Remove `--applyset` flag usage with kubectl (manually set annotations)
- Simplify ApplySet class as a plain dataclass without NylResource inheritance
- Update `kubectl.apply()` to remove applyset and prune parameters
- Update tests for new ConfigMap-based implementation

Co-authored-by: NiklasRosenstein <1318438+NiklasRosenstein@users.noreply.github.com>
src/nyl/resources/applyset.py
```python
"name": self.name,
"namespace": self.namespace,
"annotations": {
    APPLYSET_ANNOTATION_TOOLING: self.tooling,
```
@copilot we should introduce a Nyl-specific annotation that includes contextual information on the "last applied configuration". When run through ArgoCD (and if applysets are enabled when using ArgoCD), we'd include "argocd" and the Git repository hash here for example. When run locally, we would just say "cli". Then we include the manifest file name(s) that were used to derive the manifests that were applied together.
Added in 92ce576. The new `nyl.io/last-applied-context` annotation includes:

- `source`: "cli" or "argocd" depending on execution context
- `files`: list of manifest file names used
- `revision`: Git commit hash (from `ARGOCD_APP_REVISION` when in ArgoCD)
- `app_name`: ArgoCD application name (from `ARGOCD_APP_NAME`)

Example output for CLI:

```json
{"source":"cli","files":["test.yaml"]}
```

Example for ArgoCD:

```json
{"source":"argocd","files":["test.yaml"],"revision":"abc123","app_name":"my-app"}
```

Introduces a Nyl-specific annotation that stores contextual information about when/how the ApplySet was last applied:

- source: "cli" or "argocd" depending on the execution context
- files: list of manifest file names used to generate resources
- revision: Git commit hash (when available via ArgoCD)
- app_name: ArgoCD application name (when running via ArgoCD)

This helps track the origin and version of applied configurations.

Co-authored-by: NiklasRosenstein <1318438+NiklasRosenstein@users.noreply.github.com>
NiklasRosenstein left a comment:
@copilot I think it makes sense to extract the responsibility of applying and diffing resources associated with an ApplySet to a separate helper class that will take care of:

- Looking up the existing apply set (if it exists) in order to compare it when doing `diff` or `apply`
- Constructing an updated apply set and including it in the apply operation

Because Nyl can also run without generating apply sets, the class should also have an option to skip any logic related to apply sets.
src/nyl/commands/template.py
```python
# Build context information for the ApplySet
argocd_app_name = os.getenv("ARGOCD_APP_NAME")
argocd_revision = os.getenv("ARGOCD_APP_REVISION")
```
@copilot Let's also include ARGOCD_APP_SOURCE_PATH, ARGOCD_APP_SOURCE_REPO_URL and ARGOCD_APP_SOURCE_TARGET_REVISION. Might be better to move this into a static method that can be called with ApplySetContext.from_environment(files=[str(source.file)])
Added in 4b79495. `ApplySetContext.from_environment(files=[...])` now includes:

- `ARGOCD_APP_NAME` → `app_name`
- `ARGOCD_APP_REVISION` → `revision`
- `ARGOCD_APP_SOURCE_PATH` → `source_path`
- `ARGOCD_APP_SOURCE_REPO_URL` → `source_repo_url`
- `ARGOCD_APP_SOURCE_TARGET_REVISION` → `target_revision`
@copilot Let's also include ARGOCD_APP_NAMESPACE and ARGOCD_APP_PROJECT_NAME, KUBE_VERSION
Added in 65a455a. `ApplySetContext.from_environment()` now also includes:

- `ARGOCD_APP_NAMESPACE` → `app_namespace`
- `ARGOCD_APP_PROJECT_NAME` → `project_name`
- `KUBE_VERSION` → `kube_version` (available in both CLI and ArgoCD contexts)
…vars

- Add `ApplySetContext.from_environment()` static method to create context from env vars
- Include additional ArgoCD env vars: source_path, source_repo_url, target_revision
- Create `ApplySetManager` class to handle apply/diff operations with ApplySets
- Remove separate apply of ApplySet ConfigMap; it is now included with other resources
- Update `template.py` to use `ApplySetManager.prepare_resources()`
- Add comprehensive tests for new functionality

Co-authored-by: NiklasRosenstein <1318438+NiklasRosenstein@users.noreply.github.com>
Added in 4b79495. Created …
…o ApplySetContext

- `app_namespace`: from `ARGOCD_APP_NAMESPACE` env var
- `project_name`: from `ARGOCD_APP_PROJECT_NAME` env var
- `kube_version`: from `KUBE_VERSION` env var (available in both CLI and ArgoCD contexts)
- Updated tests to cover new fields

Co-authored-by: NiklasRosenstein <1318438+NiklasRosenstein@users.noreply.github.com>
```python
data["target_revision"] = self.target_revision
if self.kube_version:
    data["kube_version"] = self.kube_version
return json.dumps(data, separators=(",", ":"))
```
@copilot Why do we need to specify the separators argument?
The separators=(",", ":") argument produces compact JSON without spaces after separators. By default, json.dumps() produces {"key": "value"} with a space after the colon, but with these separators it produces {"key":"value"}. This reduces the annotation value size slightly, which can be helpful since annotations have size limits.
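For illustration, the difference is easy to see directly:

```python
import json

data = {"source": "cli", "files": ["test.yaml"]}

# Default formatting inserts a space after each item and key separator.
default = json.dumps(data)

# Compact separators drop the spaces, shaving a few bytes off the annotation
# value. (Kubernetes caps the total size of all annotations on an object.)
compact = json.dumps(data, separators=(",", ":"))

print(default)  # {"source": "cli", "files": ["test.yaml"]}
print(compact)  # {"source":"cli","files":["test.yaml"]}
```

The saving is small per key but compounds when many fields are serialized into a single annotation.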
- `get_existing_applyset(name, namespace)`: looks up an existing ApplySet ConfigMap in the cluster
- `list_applyset_members(applyset_id)`: lists all resources with the `applyset.kubernetes.io/part-of` label

These functions enable `nyl template --apply` and `--diff` to find removed resources.

Co-authored-by: NiklasRosenstein <1318438+NiklasRosenstein@users.noreply.github.com>
- Add `Kubectl.delete()` method to delete individual resources
- Add `ApplySetManager.get_deleted_resources()` to find resources in cluster but not in manifest
- Add `_get_resource_identifier()` helper for resource comparison
- Update template command to delete removed resources during `--apply`
- Update template command to show resources to be deleted during `--diff`

Co-authored-by: NiklasRosenstein <1318438+NiklasRosenstein@users.noreply.github.com>
Pull request overview
This pull request refactors the ApplySet implementation to use ConfigMap as the parent resource instead of a custom CRD, fixing the contains-group-kinds annotation format and adding comprehensive context tracking and pruning capabilities.
Key changes:
- Refactored ApplySet from custom `nyl.io/v1` resource to namespace-scoped ConfigMap
- Fixed `get_canonical_resource_kind_name` to return singular, capitalized Kind names (e.g., `Pod`, `Deployment.apps`)
- Added `ApplySetContext` for tracking apply metadata (CLI/ArgoCD source, files, revision, etc.)
- Implemented automatic pruning of deleted resources via `kubectl.delete()` and `ApplySetManager.get_deleted_resources()`
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 13 comments.
| File | Description |
|---|---|
| `src/nyl/resources/applyset.py` | Major refactor: ApplySet is now a plain dataclass (not NylResource); added ApplySetContext for metadata; added ApplySetManager helper class; added cluster lookup functions (`get_existing_applyset`, `list_applyset_members`); fixed `calculate_applyset_id` for ConfigMap-based IDs |
| `src/nyl/tools/kubectl.py` | Removed applyset/prune parameters from `apply()`; added new `delete()` method for removing individual resources; added `TYPE_CHECKING` import for avoiding circular dependencies |
| `src/nyl/commands/template.py` | Integrated ApplySetManager; automated ApplySet creation with context from environment; implemented deleted resource detection and deletion in `--apply` mode; added deleted resource display in `--diff` mode |
| `src/nyl/resources/applyset_test.py` | Updated tests for ConfigMap-based ApplySets; added comprehensive tests for ApplySetContext serialization and environment detection; added tests for ApplySetManager functionality |
Comments suppressed due to low confidence (1)
`src/nyl/resources/applyset.py:7`

- Import of `logger` is not used.

```python
from loguru import logger
```
```python
elif "NotFound" in status.stderr:
    return False
```
Copilot AI (Nov 28, 2025):
The error detection logic "NotFound" in status.stderr is fragile and could produce false positives. If stderr contains "NotFound" anywhere (e.g., in a different error message), it will be incorrectly treated as a resource-not-found case. Consider using the return code or a more precise pattern match to detect this specific error condition.
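One way to tighten the check, sketched here as an assumption rather than the PR's actual fix, is to anchor a pattern on kubectl's error line instead of substring-matching anywhere in stderr:

```python
import re

# Hypothetical stricter check: kubectl prints a line like
#   Error from server (NotFound): pods "test" not found
# so match the parenthesised reason at the start of an error line rather
# than accepting the substring "NotFound" anywhere in stderr.
_NOT_FOUND_RE = re.compile(r"^Error from server \(NotFound\):", re.MULTILINE)

def is_not_found(stderr: str) -> bool:
    return bool(_NOT_FOUND_RE.search(stderr))
```

This avoids false positives when "NotFound" merely appears inside a resource name or an unrelated message.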
```python
for kind, group in resources_to_check:
    try:
        api_resources = dynamic.resources.search(kind=kind, group=group)
        if not api_resources:
            continue

        res_client = api_resources[0]

        items = res_client.get(label_selector=f"{APPLYSET_LABEL_PART_OF}={applyset_id}")
        for item in items.items:
            all_resources.append(Resource(item.to_dict()))
```
Copilot AI (Nov 28, 2025):
[nitpick] The function iterates over each kind individually and makes separate API calls to list resources. For ApplySets with many resource kinds, this could result in numerous API calls and potentially slow performance. Consider documenting this behavior or exploring batch query options if the Kubernetes API supports them.
```python
applyset_name = f"nyl.applyset.{current_default_namespace}"
applyset = ApplySet.new(applyset_name, current_default_namespace)
```
Copilot AI (Nov 28, 2025):
The ApplySet name is constructed by prepending "nyl.applyset." to the namespace name. This could result in very long ConfigMap names if the namespace name is long, potentially exceeding Kubernetes' 253-character limit for resource names. Consider validating the final name length or using a hash-based naming scheme if the namespace name is too long.
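A hash-based fallback for overly long names, as the comment suggests, could look like this sketch. The 253-character bound is the Kubernetes limit for DNS-subdomain resource names; the helper itself is hypothetical:

```python
import hashlib

# Hypothetical name-length guard: keep the readable prefix, but replace the
# tail with a short digest whenever "nyl.applyset.<namespace>" would exceed
# the Kubernetes resource-name limit.
MAX_NAME_LEN = 253

def applyset_name(namespace: str) -> str:
    name = f"nyl.applyset.{namespace}"
    if len(name) <= MAX_NAME_LEN:
        return name
    digest = hashlib.sha256(namespace.encode()).hexdigest()[:10]
    # Truncate so prefix + "-" + digest fits exactly within the limit.
    return name[: MAX_NAME_LEN - 11] + "-" + digest
```

The digest keeps truncated names deterministic and collision-resistant, so repeated runs resolve the same ConfigMap.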
```python
def get_existing_applyset(name: str, namespace: str, client: ApiClient) -> Resource | None:
    """
    Look up an existing ApplySet ConfigMap in the cluster.

    Args:
        name: The name of the ApplySet ConfigMap.
        namespace: The namespace of the ApplySet ConfigMap.
        client: The Kubernetes API client to use.

    Returns:
        The ApplySet ConfigMap resource if it exists, or None if not found.
    """
    try:
        dynamic = DynamicClient(client)
        # ConfigMap is v1
        resource_client = dynamic.resources.get(api_version="v1", kind="ConfigMap")
        resource = resource_client.get(name=name, namespace=namespace)

        # Verify it's an ApplySet ConfigMap by checking for the id label
        labels = resource.metadata.get("labels", {}).to_dict()
        if APPLYSET_LABEL_ID in labels:
            return Resource(resource.to_dict())
        return None
    except NotFoundError:
        return None


def list_applyset_members(
    applyset_id: str, client: ApiClient, kinds: list[str]
) -> ResourceList:
    """
    List all resources in the cluster that are members of an ApplySet.

    Args:
        applyset_id: The ID of the ApplySet (value of applyset.kubernetes.io/id label).
        client: The Kubernetes API client to use.
        kinds: List of resource kinds to search for.

    Returns:
        A ResourceList of all resources that have the applyset.kubernetes.io/part-of label
        matching the given ApplySet ID.
    """
    dynamic = DynamicClient(client)
    all_resources: list[Resource] = []

    resources_to_check: list[tuple[str, str]] = []

    for k in kinds:
        parts = k.split(".", 1)
        if len(parts) == 2:
            resources_to_check.append((parts[0], parts[1]))
        else:
            resources_to_check.append((parts[0], ""))

    for kind, group in resources_to_check:
        try:
            api_resources = dynamic.resources.search(kind=kind, group=group)
            if not api_resources:
                continue

            res_client = api_resources[0]

            items = res_client.get(label_selector=f"{APPLYSET_LABEL_PART_OF}={applyset_id}")
            for item in items.items:
                all_resources.append(Resource(item.to_dict()))

        except NotFoundError:
            # It's possible the resource kind doesn't exist in the cluster (e.g. CRD was removed).
            # In this case, we can just skip it.
            continue

    # Deduplicate resources by their UID
    seen_uids: set[str] = set()
    unique_resources: list[Resource] = []
    for resource in all_resources:
        uid = resource.get("metadata", {}).get("uid", "")
        if uid and uid not in seen_uids:
            seen_uids.add(uid)
            unique_resources.append(resource)

    return ResourceList(unique_resources)
```
Copilot AI (Nov 28, 2025):
The get_existing_applyset and list_applyset_members helper functions lack test coverage. These functions are critical for the deleted resource detection feature. Consider adding tests to verify:
- Correct retrieval of existing ApplySet ConfigMaps
- Proper filtering by the applyset.kubernetes.io/id label
- Handling of NotFoundError cases
- Correct parsing of the contains-group-kinds annotation
- Deduplication of resources by UID
```python
except NotFoundError:
    # It's possible the resource kind doesn't exist in the cluster (e.g. CRD was removed).
    # In this case, we can just skip it.
```
Copilot AI (Nov 28, 2025):
[nitpick] The error handling here catches NotFoundError, but it's unclear whether this handles all potential failure cases. Consider logging a debug message when a resource kind is not found to help with troubleshooting. This would make it clearer why certain resources aren't being detected during pruning operations.
Suggested change:

```diff
 # In this case, we can just skip it.
+logger.debug(f"Resource kind '{kind}' in group '{group}' not found in cluster; skipping during ApplySet prune.")
```
```python
import os
from dataclasses import dataclass, field
from typing import Any
from loguru import logger
```
Copilot AI (Nov 28, 2025):
The logger import is unused. It should be removed as it's not referenced anywhere in the file.
Suggested change (remove the unused import):

```diff
-from loguru import logger
```
```python
try:
    kubectl.delete(resource)
except Exception as e:
    logger.warning("Failed to delete {}: {}", resource_name, e)
```
Copilot AI (Nov 28, 2025):
The broad exception handling here (except Exception as e) could hide important errors. Consider catching specific exception types (e.g., KubectlError) or at least logging the full exception with traceback for debugging. This will help operators understand why a deletion failed.
Suggested change:

```diff
-logger.warning("Failed to delete {}: {}", resource_name, e)
+logger.exception("Failed to delete %s", resource_name)
```
```python
try:
    dynamic = DynamicClient(client)
    # ConfigMap is v1
    resource_client = dynamic.resources.get(api_version="v1", kind="ConfigMap")
    resource = resource_client.get(name=name, namespace=namespace)

    # Verify it's an ApplySet ConfigMap by checking for the id label
    labels = resource.metadata.get("labels", {}).to_dict()
    if APPLYSET_LABEL_ID in labels:
        return Resource(resource.to_dict())
    return None
except NotFoundError:
    return None
```
Copilot AI (Nov 28, 2025):
[nitpick] The function silently catches NotFoundError and returns None, but other exceptions (e.g., permission errors, API errors) will propagate up. Consider adding error handling or documentation for other exception types that might occur when accessing the Kubernetes API.
```python
def get_deleted_resources(self, new_resources: ResourceList) -> ResourceList:
    """
    Compute which resources have been removed from the manifest.

    This compares the new resources against the existing resources in the cluster
    that are part of this ApplySet, and returns the resources that exist in the
    cluster but are not in the new manifest.

    Args:
        new_resources: The new list of resources from the manifest.

    Returns:
        A ResourceList of resources that should be deleted (exist in cluster but not in manifest).
    """
    if not self.enabled or self.applyset is None:
        return ResourceList([])

    # Get existing resources from the cluster that belong to this ApplySet
    # Try to find the existing ApplySet ConfigMap to know which kinds to look for
    existing_applyset_cm = get_existing_applyset(
        self.applyset.name, self.applyset.namespace, self.client
    )
    kinds: list[str] = []

    if existing_applyset_cm:
        annotations = existing_applyset_cm.get("metadata", {}).get("annotations", {})
        kinds_str = annotations.get(APPLYSET_ANNOTATION_CONTAINS_GROUP_KINDS)
        if kinds_str:
            kinds = kinds_str.split(",")

    existing_resources = list_applyset_members(self.applyset.id, self.client, kinds)

    if not existing_resources:
        return ResourceList([])

    # Build a set of resource identifiers from the new manifest
    new_resource_ids = set()
    for resource in new_resources:
        resource_id = _get_resource_identifier(resource)
        if resource_id:
            new_resource_ids.add(resource_id)

    # Also add the ApplySet ConfigMap itself to avoid deleting it
    applyset_cm = self.applyset.dump()
    applyset_id = _get_resource_identifier(applyset_cm)
    if applyset_id:
        new_resource_ids.add(applyset_id)

    # Find resources that exist in the cluster but not in the new manifest
    deleted_resources: list[Resource] = []
    for resource in existing_resources:
        resource_id = _get_resource_identifier(resource)
        if resource_id and resource_id not in new_resource_ids:
            deleted_resources.append(resource)

    return ResourceList(deleted_resources)
```
Copilot AI (Nov 28, 2025):
The new get_deleted_resources method lacks test coverage. This is a critical feature for the ApplySet pruning functionality. Consider adding tests that verify:
- Resources that exist in the cluster but not in the manifest are correctly identified
- Resources that exist in both places are not included in the deleted list
- The ApplySet ConfigMap itself is not included in the deleted list
- Edge cases like empty resource lists are handled correctly
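The core set-difference logic is straightforward to exercise in isolation. This sketch uses a hypothetical identifier function of the same general shape as `_get_resource_identifier` (apiVersion, kind, namespace, name), operating on plain dicts:

```python
# Hypothetical: reduce a manifest dict to a hashable identity tuple, then
# compute which cluster resources are absent from the new manifest. This
# mirrors the shape of get_deleted_resources without the cluster lookups.
def resource_identifier(resource: dict) -> tuple[str, str, str, str]:
    meta = resource.get("metadata", {})
    return (
        resource.get("apiVersion", ""),
        resource.get("kind", ""),
        meta.get("namespace", ""),
        meta.get("name", ""),
    )

def deleted_resources(existing: list[dict], new: list[dict]) -> list[dict]:
    new_ids = {resource_identifier(r) for r in new}
    return [r for r in existing if resource_identifier(r) not in new_ids]

pod = {"apiVersion": "v1", "kind": "Pod",
       "metadata": {"name": "test", "namespace": "default"}}
svc = {"apiVersion": "v1", "kind": "Service",
       "metadata": {"name": "web", "namespace": "default"}}

# The Service was removed from the manifest, so only it should be pruned.
assert deleted_resources(existing=[pod, svc], new=[pod]) == [svc]
# Nothing to prune when the manifest is a superset of the cluster state.
assert deleted_resources(existing=[pod], new=[pod, svc]) == []
# Empty inputs are handled trivially.
assert deleted_resources(existing=[], new=[]) == []
```

Protecting the ApplySet ConfigMap itself would then just be a matter of adding its identifier to the "new" set before diffing, as the PR's implementation does.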
```python
def delete(self, resource: Resource) -> bool:
    """
    Delete a single resource from the cluster.

    Args:
        resource: The resource to delete.

    Returns:
        True if the resource was deleted, False if it didn't exist.
    """
    api_version = resource.get("apiVersion", "")
    kind = resource.get("kind", "")
    metadata = resource.get("metadata", {})
    name = metadata.get("name", "")
    namespace = metadata.get("namespace")

    # Build the resource identifier
    if "/" in api_version:
        # For resources like apps/v1 Deployment, the type is "deployment.apps"
        group = api_version.split("/")[0]
        resource_type = f"{kind.lower()}.{group}"
    else:
        # For core v1 resources like Pod, the type is just the kind
        resource_type = kind.lower()

    command = ["kubectl", "delete", resource_type, name]
    if namespace:
        command.extend(["-n", namespace])

    logger.debug("Deleting resource with command: $ {command}", command=lazy_str(pretty_cmd, command))
    status = subprocess.run(command, capture_output=True, text=True, env={**os.environ, **self.env})

    if status.returncode == 0:
        return True
    elif "NotFound" in status.stderr:
        return False
    else:
        raise KubectlError(status.returncode, status.stderr)
```
Copilot AI (Nov 28, 2025):
The new delete method lacks test coverage. This is a critical operation that removes resources from the cluster. Consider adding tests to verify:
- Successful deletion of resources
- Correct handling when resources don't exist (NotFound case)
- Proper construction of resource type identifiers for both core and non-core resources
- Correct handling of namespaced vs cluster-scoped resources
- Error handling for delete failures
- `applyset.kubernetes.io/contains-group-kinds` annotation requires `<Kind>.<Group>` format with singular, capitalized Kind name
- `get_canonical_resource_kind_name` returns the correct format (e.g., `Pod`, `Deployment.apps`) instead of plural names
- Removed `--applyset` flag usage with kubectl (manually set annotations instead)
- `ApplySetContext` dataclass to store context information
- `ApplySetContext.from_environment()` static method
- `get_existing_applyset(name, namespace)`: looks up an existing ApplySet ConfigMap
- `list_applyset_members(applyset_id)`: lists all resources belonging to an ApplySet
- `Kubectl.delete()`: delete individual resources from cluster
- `ApplySetManager.get_deleted_resources()`: compare manifest against cluster state
- `nyl template --apply`: deletes resources no longer in manifest
- `nyl template --diff`: shows resources that will be deleted

Original prompt
This section details the original issue you should resolve.

<issue_title>Support for Kubernetes ApplySets</issue_title>

<issue_description>
Kubernetes "ApplySets" are objects that group resources that are applied together, allowing resources that are removed in a subsequent apply to be pruned, or even pruning the entire group of objects.

It would be nice if Nyl supported ApplySets in such a fashion that they are straightforward to use when applying manifests with `kubectl`, allowing users to easily upgrade applications and prune removed resources. In an ideal scenario, one would simply run something like:

```
$ nyl template . --apply
```

Though it may be necessary to reference the apply set with `--applyset=kind/name`. If the invocation of kubectl is error-prone, we could also consider adding a command or option to Nyl to actually run kubectl for you.

Nyl can provide its own parent resource (e.g. `ApplySet.nyl.io/v1`). It can automatically detect the presence of the apply set resource in the file, and as part of a source file where manifests are loaded from, Nyl can automatically assign the `part-of` labels (although that may not be what the ApplySet spec intended/how kubectl implements it, I'm not 100% sure yet).

Relevant resources: …
</issue_description>

Comments on the Issue (you are @copilot in this section)

@NiklasRosenstein The current implementation is a bit bumpy. Nyl detects if there is an `ApplySet.nyl.io/v1` resource in a manifest source file. If yes:

- It ensures that its `applyset.kubernetes.io/id` label is set correctly.
- It sets `applyset.kubernetes.io/tooling` to `kubectl/1.30`.
- It sets `applyset.kubernetes.io/contains-group-kinds` to the kinds from all resources that belong to the set.

That is because kubectl can't create the `ApplySet` custom resource, and complains if the resource does not have these labels. On a subsequent run with `--applyset`, it complains that the field manager for these labels doesn't match.

Because kubectl can't create the custom `ApplySet` resource, it must first be applied manually. Otherwise, it complains:

```
error: custom resource ApplySet parents cannot be created automatically
```

But Nyl outputs the ApplySet resource along with the rest, thus the first `--applyset`-less apply will contain all other resources as well (hence no resource is getting the `applyset.kubernetes.io/part-of` label assigned by kubectl).

We could set the `applyset.kubernetes.io/part-of` label on all resources before spitting them out. This works on a normal apply, but with `--applyset`, kubectl will complain that the input resources already have the label defined:

```
error: ApplySet label "applyset.kubernetes.io/part-of" already set in input data
```

Presumably this works a bit better with `Secret` or `ConfigMap` as the apply set resource, because kubectl can create those. Will have to try it, but they are both namespaced and I think having cluster-scoped apply sets could be quite useful.

@NiklasRosenstein I've added a `nyl template --apply` option that does the following (assuming there is an `ApplySet` in the manifest source file):

- Adds a `--no-applyset-part-of` option, which will have Nyl not add the `applyset.kubernetes.io/part-of` label to the generated resources.
- Ensures the `ApplySet` resource exists with `kubectl apply --server-side --force-conflicts`.
- Runs `kubectl apply --server-side --force-conflicts --applyset=... --prune`.

What's not so pretty yet: in the beginning, you always get this warning from kubectl: …

ApplySets #5