2.2. Exit the node
Exiting the container causes the pod to be deleted automatically.

# exit
logout
pod default/nsenter-9yqytp terminated (Error)
pod "nsenter-9yqytp" deleted

3. How it works
Containers are only weakly isolated: they share the node's kernel, and isolation happens at the process level via cgroups and namespaces. By running nsenter inside a privileged container, you can therefore turn a login to that container into a shell on the node itself.
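The technique above can be sketched as follows. This is an untested sketch, not the exact command the author used: the pod name nsenter-demo and the alpine image are hypothetical, and the overrides JSON may need adjusting for your cluster and kubectl version.

```shell
# Sketch: start a privileged pod that shares the host PID namespace, then
# nsenter into PID 1's namespaces to get a shell on the node.
# "nsenter-demo" and "alpine" are illustrative placeholders.
kubectl run nsenter-demo --rm -it --image=alpine \
  --overrides='{"spec":{"hostPID":true,"containers":[{"name":"nsenter-demo","image":"alpine","stdin":true,"tty":true,"securityContext":{"privileged":true},"command":["nsenter","--target","1","--mount","--uts","--ipc","--net","--pid","--","sh"]}]}}'
# Exiting the shell terminates the pod; --rm then deletes it, matching the
# "exit / pod ... deleted" behavior described above.
```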
Sometimes in a production cluster you need to change docker's daemon.json on a k8s node and restart it, or the node develops some other fault, while many pods are still running on it. This is where the drain and uncordon commands come in:
drain: evict all pods from the node and stop it from accepting new pods
uncordon: restore the node so that it accepts new pods again
Before performing maintenance on a node, kubectl drain can ...
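A typical maintenance cycle using these two commands looks like the sketch below; the node name node-1 is a placeholder, and the flags shown are the common ones for this situation rather than a universal recipe.

```shell
# Evict all pods and mark the node unschedulable. DaemonSet pods cannot be
# evicted, so they are ignored; emptyDir data is allowed to be deleted.
kubectl drain node-1 --ignore-daemonsets --delete-emptydir-data

# ... perform maintenance, e.g. edit /etc/docker/daemon.json and restart
# the container runtime or the node ...

# Make the node schedulable again so it accepts new pods.
kubectl uncordon node-1
```

On older kubectl versions the emptyDir flag was spelled --delete-local-data.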
Example: run a container instance named nginx, with 3 replicas, the label app=example, the image nginx:1.10, and port 80, and bind it to k8s-node1:
$ kubectl run nginx --image=nginx:1.10 --replicas=3 --labels="app=example" --port=80 --overrides='{"apiVersion":"apps/v1","spec":{"template":{"spec":{"nodeSelector":{"...
kubectl logs -l app=dev-app --field-selector spec.nodeName=node-1

Output:
2024-09-13T12:40:22.123Z Pod on node-1 initializing...
2024-09-13T12:40:23.456Z Starting the service on node-1.

This retrieves logs from pods running on the node named node-1. Conclusion: the kubectl logs command is...
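If your kubectl version does not accept a field selector on logs, the same result can be approximated in two steps; this is a sketch, and the label app=dev-app and node name node-1 are placeholders carried over from the example above.

```shell
# List pods matching the label that are scheduled on node-1 (field selectors
# on "get pods" are widely supported), then fetch logs for each one.
for pod in $(kubectl get pods -l app=dev-app \
    --field-selector spec.nodeName=node-1 -o name); do
  kubectl logs "$pod"
done
```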
Node(s) CPU architecture, OS, and Version:
$ uname -a
Linux node4-dev 5.10.0-22-amd64 #1 SMP Debian 5.10.178-3 (2023-04-22) x86_64 GNU/Linux

Cluster Configuration:
$ kubectl get nodes
NAME    STATUS   ROLES                  AGE    VERSION
node1   Ready    control-plane,master   535d   v1.24.2+k3s2
...
Just cannot drain a node. This happens at least half of the time I need to drain a node: "Cannot evict pod as it would violate the pod's disruption budget." To Reproduce: simple 4-node cluster; tried to drain one of the nodes. I co...
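When drain stalls on a PodDisruptionBudget like this, a troubleshooting sketch looks like the following; the node name and the <pdb-name>/<namespace> placeholders are hypothetical, and the last command is a deliberate escape hatch rather than the recommended fix.

```shell
# See which PodDisruptionBudgets currently allow zero disruptions.
kubectl get pdb --all-namespaces

# Inspect a specific budget: compare minAvailable/maxUnavailable
# against the number of ready pods backing it.
kubectl describe pdb <pdb-name> -n <namespace>

# Last resort: bypass the eviction API and delete pods directly.
# This ignores PDBs entirely, so use it with care.
kubectl drain node-2 --ignore-daemonsets --disable-eviction
```

Usually the right fix is to scale the affected workload up (or relax its PDB) so that an eviction no longer violates the budget.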
NODE       NAMESPACE    POD                       CPU REQUESTS    CPU LIMITS      MEMORY REQUESTS   MEMORY LIMITS
*          *            *                         39728m (45%)    52108m (59%)    85069Mi (43%)     111368Mi (56%)
10.0.0.10  *            *                         762m (19%)      1252m (31%)     585Mi (8%)        2399Mi (34%)
10.0.0.10  kube-system  ccs-log-collector-r4w6m   300m (7%)       1000m (25%...
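A per-node view similar to the table above can be approximated with stock kubectl; the node name 10.0.0.10 is taken from the table and the grep window is an assumption about how much of the section you want to see.

```shell
# Print the "Allocated resources" section of a node's describe output,
# which summarizes CPU/memory requests and limits scheduled on that node.
kubectl describe node 10.0.0.10 | grep -A 10 "Allocated resources"
```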
uncordon    Mark node as schedulable
drain       Drain node in preparation for maintenance
taint       Update the taints on one or more nodes

Troubleshooting and Debugging Commands:
describe    Show details of a specific resource or group of resources
logs        Print the logs for a container in a pod
...
Each context will be named <CLUSTER_NAME>-<NODE_NAME>. When you want to use kubectl to access this cluster without Rancher, you will need to use this context. Assuming the kubeconfig file is located at ~/.kube/config: kubectl --context <CLUSTER_NAME>-<NODE_NAME> get...
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ---    ----               -------
  Normal  Scheduled  2m23s  default-scheduler  Successfully assigned default/webapp to ayato
  Normal  Pulled     2m22s  kubelet            Contain...