KEP 1287: Instrumentation for in-place pod resize #5340
base: master
Conversation
natasha41575 commented on May 23, 2025
- One-line PR description: Add a section with details about the metrics we plan to instrument for IPPR
- Issue link: In-Place Update of Pod Resources #1287
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: natasha41575. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing `/approve` in a comment.
Labels:
- `resource_type` - what type of resource is being resized. Possible values: `cpu_limits`, `cpu_requests`, `memory_limits`, or `memory_requests`. If more than one of these resource types is changing in the resize request, we increment the counter multiple times, once for each. This means that a single pod update changing multiple resource types will be considered multiple requests for this metric.
I guess this is a little weird, but I'm not sure the alternatives are better. We also already have `apiserver_request_total{resource=pods,subresource=resize}` if we just want the raw total number of resize requests to the API server.
Yeah, I agree. Maybe worth asking the sig-instrumentation folks for advice?
- `operation_type` - whether the resize is an increase or a decrease. Possible values: `increase`, `decrease`, `add`, or `remove`.
I'm assuming requests/limits can be added to a container, but I don't actually know if that's true? (I know kubernetes/kubernetes#127143 is adding support to remove them.)
Yeah, they can be added, except memory limits: adding one would count as a memory limit decrease (which we don't currently allow), but we'll lift that restriction.
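For the `add`/`remove` cases being discussed here, a toy classification might look like the sketch below (hypothetical helper; `nil` stands for an unset request or limit, and the memory-limit special case above is deliberately not modeled):

```go
package main

import "fmt"

// operationType maps an (old, new) pair for a single resource to the
// proposed operation_type label values. Sketch only, not the KEP's code.
func operationType(oldVal, newVal *int64) string {
	switch {
	case oldVal == nil && newVal != nil:
		return "add"
	case oldVal != nil && newVal == nil:
		return "remove"
	case oldVal != nil && newVal != nil && *newVal > *oldVal:
		return "increase"
	case oldVal != nil && newVal != nil && *newVal < *oldVal:
		return "decrease"
	default:
		return "" // unchanged: no counter increment
	}
}

func ptr(v int64) *int64 { return &v }

func main() {
	fmt.Println(operationType(nil, ptr(100)))      // add
	fmt.Println(operationType(ptr(100), ptr(200))) // increase
	fmt.Println(operationType(ptr(200), ptr(100))) // decrease
	fmt.Println(operationType(ptr(100), nil))      // remove
}
```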
#### `kubelet_pod_resize_requests_total`

This metric tracks the total number of resize requests observed by the Kubelet, counted at the pod level.
cc @ndixita
I don't have all the context, but we might want to revisit or reuse this metric in the context of pod-level resources once resizing pod-level resources is supported.
Without pod-level resources, is this just based on the net change across all containers? What happens if 2 containers are resized, but the net change is 0?
I'm wondering whether we should skip this metric for now, and only record it in the context of pod-level resources?
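The net-change question above can be made concrete with a toy example (hypothetical container names; CPU requests in millicores; only containers present before the resize are considered):

```go
package main

import "fmt"

// podNetChange illustrates how container-level changes can cancel out at
// the pod level: it counts resized containers and sums their deltas.
func podNetChange(oldCPU, newCPU map[string]int64) (resized int, net int64) {
	for name, o := range oldCPU {
		if d := newCPU[name] - o; d != 0 {
			resized++
			net += d
		}
	}
	return resized, net
}

func main() {
	oldCPU := map[string]int64{"app": 1000, "sidecar": 500}
	newCPU := map[string]int64{"app": 500, "sidecar": 1000}
	resized, net := podNetChange(oldCPU, newCPU)
	fmt.Println(resized, net) // 2 0: two containers resized, net pod-level change is zero
}
```

Whether the pod-level metric should report this as one resize, two, or zero is exactly the ambiguity the comment raises.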
/assign @tallclair
- `resource_type` - what type of resource is being resized. Possible values: `cpu_limits`, `cpu_requests`, `memory_limits`, or `memory_requests`. If more than one of these resource types is changing in the resize request, we increment the counter multiple times, once for each.
- `operation_type` - whether the resize is an increase or a decrease. Possible values: `increase`, `decrease`, `add`, or `remove`.
This seems like useful data, but I think including these dimensions will require tracking some historical context for the pod resources? At the time we deem the resize complete, we don't currently know what changed. WDYT?
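One shape the "historical context" could take, sketched with hypothetical names: remember the label pairs when the resize is first observed, keyed by pod UID, and look them up when the resize completes:

```go
package main

import "fmt"

// labelPair is a (resource_type, operation_type) pair to record at
// completion time. Sketch only, not the kubelet's data model.
type labelPair struct{ resourceType, operationType string }

type resizeTracker struct {
	pending map[string][]labelPair // pod UID -> labels of the in-flight resize
}

func newResizeTracker() *resizeTracker {
	return &resizeTracker{pending: map[string][]labelPair{}}
}

// observe records what a newly seen resize changed.
func (t *resizeTracker) observe(uid string, pairs []labelPair) {
	t.pending[uid] = pairs
}

// complete returns the remembered pairs and clears them, which is what a
// completion-time metric recorder would need to emit labeled samples.
func (t *resizeTracker) complete(uid string) []labelPair {
	pairs := t.pending[uid]
	delete(t.pending, uid)
	return pairs
}

func main() {
	tr := newResizeTracker()
	tr.observe("pod-1", []labelPair{{"cpu_requests", "increase"}})
	fmt.Println(tr.complete("pod-1")) // [{cpu_requests increase}]
}
```

The open question from the comment still applies: if a second resize overwrites the first before completion, the tracker has to decide whether to replace or merge the remembered pairs.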
This metric is recorded as a gauge.

#### `kubelet_pod_infeasible_resize_total`
Should we count deferred resizes too?
This metric tracks the total number of resize requests that the Kubelet originally marked as deferred but later accepted. This metric primarily exists because if a deferred resize is accepted through the timed retry as
Do we have sufficient information to distinguish between a deferred resize that was accepted, and a deferred resize that was overwritten with a new feasible size?
resizes that we should fix.
Labels:
- `retry_reason` - whether the resize was accepted through the timed retry or explicitly signaled. Possible values: `timed`, `signaled`.
What does this mean?