Node Operations

Cordon, uncordon, drain, and taint nodes. All node ops work against the Kubernetes API — no SSH or node access required.

Node list

Switch to Node kind via Cmd/Ctrl + K.

Columns: NAME, STATUS, ROLES, TAINTS, CPU, MEM, DISK, AGE, VERSION, INTERNAL-IP, EXTERNAL-IP.

  • STATUS renders compound badges, e.g. Ready, Cordoned; the most severe state determines the badge color.
  • TAINTS shows a count (click the node to see details).
  • ROLES is parsed from node-role.kubernetes.io/* labels.

Context menu

Right-click any node:

| Action | Effect |
| --- | --- |
| Info | Opens Node Info window |
| Shell | Creates a privileged debug pod on this node. See Terminal & Logs |
| Copy Name | Clipboard |
| Copy Internal IP | Clipboard |
| Copy External IP | Clipboard (when present) |
| Cordon / Uncordon | Toggles spec.unschedulable |
| Drain | Full drain workflow (see below) |

Cordon / Uncordon

Sets spec.unschedulable: true (cordon) or false (uncordon). The STATUS badge updates immediately via Watch.

Cordon is instant — it doesn't evict anything, just stops new scheduling.
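
Under the hood, cordon/uncordon is a one-field patch. A minimal sketch of the patch body in Python (applying it requires cluster access; the commented lines show how it could be sent with the official `kubernetes` client's `patch_node`, and the node name is hypothetical):

```python
# Sketch: the patch body a cordon/uncordon sends to the Kubernetes API.
def cordon_patch(unschedulable: bool) -> dict:
    """Merge-patch body toggling spec.unschedulable on a Node."""
    return {"spec": {"unschedulable": unschedulable}}

# Applying it with the official Python client (needs cluster access):
# from kubernetes import client, config
# config.load_kube_config()
# client.CoreV1Api().patch_node("worker-1", cordon_patch(True))   # cordon
# client.CoreV1Api().patch_node("worker-1", cordon_patch(False))  # uncordon
```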

Drain

Right-click → Drain runs the kubectl drain equivalent:

  1. Cordon the node (patches spec.unschedulable: true).
  2. List all pods on the node via spec.nodeName field selector.
  3. For each pod:
    • Skip if owned by a DaemonSet (they're tied to the node).
    • Skip if the pod has cluster-autoscaler.kubernetes.io/safe-to-evict: "false".
    • Otherwise, submit an Eviction (policy/v1 subresource). PodDisruptionBudgets are enforced by the API server when it processes the eviction.
  4. If eviction fails for any reason, fall back to a regular delete on that pod (not a force-delete — grace period is left default).
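
The per-pod decision in step 3 can be sketched as a pure function over pod dicts (field names follow the Kubernetes API; the example pods are hypothetical):

```python
# Sketch of the per-pod drain decision: skip DaemonSet-owned pods and pods
# annotated safe-to-evict=false, evict everything else.
SAFE_TO_EVICT = "cluster-autoscaler.kubernetes.io/safe-to-evict"

def drain_action(pod: dict) -> str:
    """Return 'skip' or 'evict' for a pod on the node being drained."""
    meta = pod.get("metadata", {})
    # DaemonSet pods are tied to the node and would be recreated immediately.
    for ref in meta.get("ownerReferences", []):
        if ref.get("kind") == "DaemonSet":
            return "skip"
    # Pods explicitly marked as unsafe to evict.
    if meta.get("annotations", {}).get(SAFE_TO_EVICT) == "false":
        return "skip"
    # Otherwise a policy/v1 Eviction is submitted; the API server enforces
    # PodDisruptionBudgets when processing it.
    return "evict"

ds_pod = {"metadata": {"ownerReferences": [{"kind": "DaemonSet", "name": "node-exporter"}]}}
app_pod = {"metadata": {"ownerReferences": [{"kind": "ReplicaSet", "name": "web-7d4f"}]}}
print(drain_action(ds_pod), drain_action(app_pod))  # skip evict
```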

Drained nodes stay cordoned. Uncordon manually when you're ready to schedule again.

Node Info window

Open with right-click → Info or double-click the row. Tabs:

Overview

  • Name, creation age, roles.
  • unschedulable status.
  • Node addresses (InternalIP, ExternalIP, Hostname).
  • System info — kernel, OS image, container runtime, kubelet version.
  • Labels, Annotations (collapsible).

Pods

Lists every pod whose spec.nodeName is this node. Each row:

  • Clickable pod name — opens pod Info.
  • Status, age, restart count.

Uses a field selector (spec.nodeName=...) so it's fast even on nodes with hundreds of pods.
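
The same filter can be reproduced with the official `kubernetes` Python client, which accepts a `field_selector` argument on list calls (the live call is commented since it needs cluster access, and the node name is hypothetical):

```python
# Sketch: build the spec.nodeName field selector so filtering happens in the
# API server rather than client-side.
def node_pods_selector(node_name: str) -> str:
    """Field selector matching every pod scheduled onto `node_name`."""
    return f"spec.nodeName={node_name}"

# from kubernetes import client, config
# config.load_kube_config()
# pods = client.CoreV1Api().list_pod_for_all_namespaces(
#     field_selector=node_pods_selector("worker-1"))
print(node_pods_selector("worker-1"))  # spec.nodeName=worker-1
```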

Taints

Table of current taints:

| Column | Detail |
| --- | --- |
| Key | Taint key |
| Value | Taint value (or empty for Exists-style) |
| Effect | NoSchedule, PreferNoSchedule, NoExecute |
| Remove | Trash icon — removes the taint |

Click Add Taint → modal with Key / Value / Effect dropdown. Duplicate (same key+effect) is detected and prevented.
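
The duplicate check can be sketched in a few lines: Kubernetes treats a taint as unique per (key, effect) pair, so the same key may appear with different effects but not twice with the same one (the example taints are hypothetical):

```python
# Sketch of the Add Taint duplicate check: reject a new taint whose
# (key, effect) pair already exists on the node.
def is_duplicate_taint(existing: list, key: str, effect: str) -> bool:
    return any(t["key"] == key and t["effect"] == effect for t in existing)

taints = [{"key": "dedicated", "value": "gpu", "effect": "NoSchedule"}]
print(is_duplicate_taint(taints, "dedicated", "NoSchedule"))  # True
print(is_duplicate_taint(taints, "dedicated", "NoExecute"))   # False
```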

Resources

Capacity vs Allocatable grid for:

  • CPU
  • Memory
  • Storage (ephemeral)
  • Pods (max schedulable pods)

Values are from node.status.capacity and .allocatable.
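
Those fields hold Kubernetes quantity strings (e.g. "7910096Ki" for memory, "3920m" for CPU). A simplified parser covering the common suffixes, as an illustration only (the real resource.Quantity grammar supports more forms):

```python
# Simplified Kubernetes quantity parser: binary suffixes (Ki..Ti) are powers
# of 1024, decimal suffixes (k/M/G) are powers of 1000, "m" is milli-units.
# Dict order matters so "Mi" is tried before "M", etc.
_SUFFIX = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40,
           "k": 10**3, "M": 10**6, "G": 10**9, "m": 1e-3}

def parse_quantity(q: str) -> float:
    for suffix, mult in _SUFFIX.items():
        if q.endswith(suffix):
            return float(q[:-len(suffix)]) * mult
    return float(q)

print(parse_quantity("4"))       # 4.0 (CPUs)
print(parse_quantity("3920m"))   # about 3.92 (CPUs)
print(parse_quantity("16Gi"))    # 17179869184.0 (bytes)
```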

Conditions

Standard node conditions: Ready, MemoryPressure, DiskPressure, PIDPressure, NetworkUnavailable.

Color-coded: for Ready, True is green and False is red; for the pressure and network conditions, True is red and False is green.
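
The color rule can be sketched as a small function (treating an Unknown status like False is an assumption, not documented behavior):

```python
# Sketch of the condition color rule: Ready is healthy when True; the
# pressure/network conditions are healthy when False. Any status other than
# "True" (including "Unknown") is treated like False here -- an assumption.
HEALTHY_WHEN_TRUE = {"Ready"}

def condition_color(cond_type: str, status: str) -> str:
    healthy = (status == "True") == (cond_type in HEALTHY_WHEN_TRUE)
    return "green" if healthy else "red"

print(condition_color("Ready", "True"))           # green
print(condition_color("MemoryPressure", "True"))  # red
print(condition_color("DiskPressure", "False"))   # green
```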

Events

Kubernetes events involving this node, auto-polled every 10 seconds.

Tolerations on pods

Node operations cover the taint side; the matching tolerations show up in pod and workload Info tabs.

Pod Info → Overview (or Pod Template tab on Deployments, StatefulSets, DaemonSets, Jobs, CronJobs) displays tolerations when present, rendered for readability:

| Toleration shape | Displayed as |
| --- | --- |
| {key, operator: Equal, value, effect} | key=value:effect |
| {key, operator: Exists, effect} | key:effect |
| {operator: Exists} (match-all) | *:* |
| Any of the above with tolerationSeconds | suffixed with (300s) |
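
The rendering rules can be sketched as one function over toleration dicts as they appear in a pod spec (rendering a missing effect as "*" is an assumption; the shapes above always include one):

```python
# Sketch of the toleration display rules: Exists without a key matches
# everything; Exists with a key drops the value; Equal shows key=value.
def render_toleration(t: dict) -> str:
    key = t.get("key")
    if key is None:
        s = "*:*"                      # {operator: Exists} matches everything
    elif t.get("operator") == "Exists":
        s = f"{key}:{t.get('effect', '*')}"
    else:                              # Equal is the default operator
        s = f"{key}={t.get('value', '')}:{t.get('effect', '*')}"
    if "tolerationSeconds" in t:
        s += f" ({t['tolerationSeconds']}s)"
    return s

print(render_toleration({"key": "node.kubernetes.io/not-ready",
                         "operator": "Exists", "effect": "NoExecute",
                         "tolerationSeconds": 300}))
# node.kubernetes.io/not-ready:NoExecute (300s)
```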

Pairing the node's Taints tab with the pod's Tolerations view makes debugging scheduling mismatches fast.

Tips

  • Drain hangs? — a PodDisruptionBudget is blocking eviction. Check affected workloads; relax the PDB or force-delete.
  • Pods come back after drain? — they're managed by a DaemonSet (by design) or something recreates them faster than they evict.
  • Want to pause scheduling briefly? — cordon is enough. You don't need to drain.
  • Removing a control-plane node? — drain, then remove from the kubeadm cluster (or cloud provider) separately. Kubezilla doesn't delete nodes.