==> Audit <==
|---------|--------------------------------|----------|-------|---------|---------------------|---------------------|
| Command |              Args              | Profile  | User  | Version |     Start Time      |      End Time       |
|---------|--------------------------------|----------|-------|---------|---------------------|---------------------|
| start   | --driver=docker --network      | minikube | admin | v1.33.1 | 23 May 24 14:47 +07 | 23 May 24 14:48 +07 |
|         | minikube                       |          |       |         |                     |                     |
|---------|--------------------------------|----------|-------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/05/23 14:47:44
Running on machine: aspg3
Binary: Built with gc go1.22.1 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0523 14:47:44.603491 1823973 out.go:291] Setting OutFile to fd 1 ...
I0523 14:47:44.603701 1823973 out.go:319] MINIKUBE_IN_STYLE="true"
I0523 14:47:44.603707 1823973 out.go:304] Setting ErrFile to fd 2...
I0523 14:47:44.603711 1823973 out.go:319] MINIKUBE_IN_STYLE="true"
I0523 14:47:44.603966 1823973 root.go:338] Updating PATH: /tmp/minikube_home/.minikube/bin
W0523 14:47:44.604134 1823973 root.go:314] Error reading config file at /tmp/minikube_home/.minikube/config/config.json: open /tmp/minikube_home/.minikube/config/config.json: no such file or directory
I0523 14:47:44.604683 1823973 out.go:298] Setting JSON to false
I0523 14:47:44.609791 1823973 start.go:129] hostinfo: {"hostname":"aspg3","uptime":5695274,"bootTime":1710755191,"procs":500,"os":"linux","platform":"centos","platformFamily":"rhel","platformVersion":"7.7.1908","kernelVersion":"3.10.0-1062.el7.x86_64","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"ec7c5b1c-91bd-43ae-8085-2fdb0a16ec8e"}
I0523 14:47:44.609866 1823973 start.go:139] virtualization:
I0523 14:47:44.611183 1823973 out.go:177] 😄 minikube v1.33.1 on Centos 7.7.1908
I0523 14:47:44.611915 1823973 out.go:177] ▪ MINIKUBE_IN_STYLE=true
I0523 14:47:44.612543 1823973 out.go:177] ▪ MINIKUBE_HOME=/tmp/minikube_home
I0523 14:47:44.611990 1823973 notify.go:220] Checking for updates...
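For reference, the start recorded in the audit table corresponds to an invocation along these lines (a sketch; MINIKUBE_HOME and MINIKUBE_IN_STYLE are taken from the log entries above, the command and flags from the audit row):

    $ export MINIKUBE_HOME=/tmp/minikube_home
    $ export MINIKUBE_IN_STYLE=true
    $ minikube start --driver=docker --network minikube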
I0523 14:47:44.613394 1823973 driver.go:392] Setting default libvirt URI to qemu:///system
I0523 14:47:44.647984 1823973 docker.go:122] docker version: linux-26.1.2:Docker Engine - Community
I0523 14:47:44.648162 1823973 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0523 14:47:44.721359 1823973 info.go:266] docker info: {ID:10faa141-6d6f-481d-b86a-5074e3795f62 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:49 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-05-23 14:47:44.708536019 +0700 +07 LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:3.10.0-1062.el7.x86_64 OperatingSystem:CentOS Linux 7 (Core) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:32 MemTotal:33562738688 GenericResources: DockerRootDir:/opt/docker-lib-image HTTPProxy: HTTPSProxy: NoProxy: Name:aspg3 Labels:[] ExperimentalBuild:false ServerVersion:26.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:}}
I0523 14:47:44.721493 1823973 docker.go:295] overlay module found
I0523 14:47:44.722483 1823973 out.go:177] ✨ Using the docker driver based on user configuration
I0523 14:47:44.723172 1823973 start.go:297] selected driver: docker
I0523 14:47:44.723188 1823973 start.go:901] validating driver "docker" against
I0523 14:47:44.723201 1823973 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0523 14:47:44.723326 1823973 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0523 14:47:44.781954 1823973 info.go:266] docker info: {ID:10faa141-6d6f-481d-b86a-5074e3795f62 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:49 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-05-23 14:47:44.771138023 +0700 +07 LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:3.10.0-1062.el7.x86_64 OperatingSystem:CentOS Linux 7 (Core) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:32 MemTotal:33562738688 GenericResources: DockerRootDir:/opt/docker-lib-image HTTPProxy: HTTPSProxy: NoProxy: Name:aspg3 Labels:[] ExperimentalBuild:false ServerVersion:26.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:}}
I0523 14:47:44.782154 1823973 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0523 14:47:44.784225 1823973 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32007MB, container=32007MB
I0523 14:47:44.784410 1823973 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
I0523 14:47:44.785249 1823973 out.go:177] 📌 Using Docker driver with root privileges
I0523 14:47:44.785909 1823973 cni.go:84] Creating CNI manager for ""
I0523 14:47:44.785924 1823973 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0523 14:47:44.785933 1823973 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0523 14:47:44.785994 1823973 start.go:340] cluster config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network:minikube Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/admin:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0523 14:47:44.786832 1823973 out.go:177] 👍 Starting "minikube" primary control-plane node in "minikube" cluster
I0523 14:47:44.787431 1823973 cache.go:121] Beginning downloading kic base image for docker with docker
I0523 14:47:44.788082 1823973 out.go:177] 🚜 Pulling base image v0.0.44 ...
I0523 14:47:44.788709 1823973 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
I0523 14:47:44.788733 1823973 preload.go:147] Found local preload: /tmp/minikube_home/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
I0523 14:47:44.788741 1823973 cache.go:56] Caching tarball of preloaded images
I0523 14:47:44.788810 1823973 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e in local docker daemon
I0523 14:47:44.788977 1823973 preload.go:173] Found /tmp/minikube_home/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0523 14:47:44.788985 1823973 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
I0523 14:47:44.789344 1823973 profile.go:143] Saving config to /tmp/minikube_home/.minikube/profiles/minikube/config.json ...
I0523 14:47:44.789363 1823973 lock.go:35] WriteFile acquiring /tmp/minikube_home/.minikube/profiles/minikube/config.json: {Name:mkefa4eba056914a2cec6d37dafc4491fb6eb937 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0523 14:47:44.833099 1823973 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e in local docker daemon, skipping pull
I0523 14:47:44.833115 1823973 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e exists in daemon, skipping load
I0523 14:47:44.833146 1823973 cache.go:194] Successfully downloaded all kic artifacts
I0523 14:47:44.833197 1823973 start.go:360] acquireMachinesLock for minikube: {Name:mk7c7b5dcd088ebb192e9b8124f59b0ed2440c63 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0523 14:47:44.833398 1823973 start.go:364] duration metric: took 184.824µs to acquireMachinesLock for "minikube"
I0523 14:47:44.833439 1823973 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network:minikube Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/admin:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I0523 14:47:44.833516 1823973 start.go:125] createHost starting for "" (driver="docker")
I0523 14:47:44.834703 1823973 out.go:204] 🔥 Creating docker container (CPUs=2, Memory=8000MB) ...
I0523 14:47:44.835013 1823973 start.go:159] libmachine.API.Create for "minikube" (driver="docker")
I0523 14:47:44.835049 1823973 client.go:168] LocalClient.Create starting
I0523 14:47:44.835193 1823973 main.go:141] libmachine: Creating CA: /tmp/minikube_home/.minikube/certs/ca.pem
I0523 14:47:45.049179 1823973 main.go:141] libmachine: Creating client certificate: /tmp/minikube_home/.minikube/certs/cert.pem
I0523 14:47:45.222854 1823973 cli_runner.go:164] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0523 14:47:45.243901 1823973 network_create.go:77] Found existing network {name:minikube subnet:0xc001d687b0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 188 49 1] mtu:1500}
I0523 14:47:45.243934 1823973 kic.go:121] calculated static IP "192.188.49.2" for the "minikube" container
I0523 14:47:45.244008 1823973 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0523 14:47:45.262823 1823973 cli_runner.go:164] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0523 14:47:45.283921 1823973 oci.go:103] Successfully created a docker volume minikube
I0523 14:47:45.284014 1823973 cli_runner.go:164] Run: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e -d /var/lib
I0523 14:47:46.743280 1823973 cli_runner.go:217] Completed: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e -d /var/lib: (1.459224389s)
I0523 14:47:46.743306 1823973 oci.go:107] Successfully prepared a docker volume minikube
I0523 14:47:46.743338 1823973 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
I0523 14:47:46.743366 1823973 kic.go:194] Starting extracting preloaded images to volume ...
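The generated cluster config is persisted under the profile directory named above, and the node IP 192.188.49.2 is derived from the pre-existing "minikube" docker network. To inspect both by hand, something like this should work (paths taken from the log; the --format template for the subnet query is an assumption, not a command minikube itself runs):

    $ cat "$MINIKUBE_HOME/.minikube/profiles/minikube/config.json"
    $ docker network inspect minikube --format '{{(index .IPAM.Config 0).Subnet}}'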
I0523 14:47:46.744167 1823973 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /tmp/minikube_home/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e -I lz4 -xf /preloaded.tar -C /extractDir
I0523 14:47:51.206833 1823973 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /tmp/minikube_home/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e -I lz4 -xf /preloaded.tar -C /extractDir: (4.462630228s)
I0523 14:47:51.206859 1823973 kic.go:203] duration metric: took 4.463500845s to extract preloaded images to volume ...
W0523 14:47:51.209532 1823973 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0523 14:47:51.210099 1823973 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0523 14:47:51.270891 1823973 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.188.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=8000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e
I0523 14:47:51.890264 1823973 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Running}}
I0523 14:47:51.911277 1823973 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0523 14:47:51.932146 1823973 cli_runner.go:164] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables
I0523 14:47:51.999289 1823973 oci.go:144] the created container "minikube" has a running status.
I0523 14:47:51.999314 1823973 kic.go:225] Creating ssh key for kic: /tmp/minikube_home/.minikube/machines/minikube/id_rsa...
I0523 14:47:52.392046 1823973 kic_runner.go:191] docker (temp): /tmp/minikube_home/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0523 14:47:52.440492 1823973 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0523 14:47:52.461349 1823973 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0523 14:47:52.461363 1823973 kic_runner.go:114] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0523 14:47:52.526885 1823973 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0523 14:47:52.547643 1823973 machine.go:94] provisionDockerMachine start ...
I0523 14:47:52.547743 1823973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0523 14:47:52.569696 1823973 main.go:141] libmachine: Using SSH client type: native
I0523 14:47:52.569949 1823973 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x82d6e0] 0x830440 [] 0s} 127.0.0.1 32882 }
I0523 14:47:52.569957 1823973 main.go:141] libmachine: About to run SSH command:
hostname
I0523 14:47:52.572840 1823973 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45384->127.0.0.1:32882: read: connection reset by peer
I0523 14:47:55.711651 1823973 main.go:141] libmachine: SSH cmd err, output: : minikube
I0523 14:47:55.711688 1823973 ubuntu.go:169] provisioning hostname "minikube"
I0523 14:47:55.711770 1823973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0523 14:47:55.732872 1823973 main.go:141] libmachine: Using SSH client type: native
I0523 14:47:55.733080 1823973 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x82d6e0] 0x830440 [] 0s} 127.0.0.1 32882 }
I0523 14:47:55.733091 1823973 main.go:141] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0523 14:47:55.884959 1823973 main.go:141] libmachine: SSH cmd err, output: : minikube
I0523 14:47:55.885052 1823973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0523 14:47:55.907026 1823973 main.go:141] libmachine: Using SSH client type: native
I0523 14:47:55.907206 1823973 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x82d6e0] 0x830440 [] 0s} 127.0.0.1 32882 }
I0523 14:47:55.907219 1823973 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\sminikube' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
			else
				echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts;
			fi
		fi
I0523 14:47:56.046971 1823973 main.go:141] libmachine: SSH cmd err, output: :
I0523 14:47:56.047000 1823973 ubuntu.go:175] set auth options {CertDir:/tmp/minikube_home/.minikube CaCertPath:/tmp/minikube_home/.minikube/certs/ca.pem CaPrivateKeyPath:/tmp/minikube_home/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/tmp/minikube_home/.minikube/machines/server.pem ServerKeyPath:/tmp/minikube_home/.minikube/machines/server-key.pem ClientKeyPath:/tmp/minikube_home/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/tmp/minikube_home/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/tmp/minikube_home/.minikube}
I0523 14:47:56.047020 1823973 ubuntu.go:177] setting up certificates
I0523 14:47:56.047040 1823973 provision.go:84] configureAuth start
I0523 14:47:56.047110 1823973 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0523 14:47:56.067865 1823973 provision.go:143] copyHostCerts
I0523 14:47:56.067949 1823973 exec_runner.go:151] cp: /tmp/minikube_home/.minikube/certs/ca.pem --> /tmp/minikube_home/.minikube/ca.pem (1074 bytes)
I0523 14:47:56.068170 1823973 exec_runner.go:151] cp: /tmp/minikube_home/.minikube/certs/cert.pem --> /tmp/minikube_home/.minikube/cert.pem (1119 bytes)
I0523 14:47:56.068251 1823973 exec_runner.go:151] cp: /tmp/minikube_home/.minikube/certs/key.pem --> /tmp/minikube_home/.minikube/key.pem (1679 bytes)
I0523 14:47:56.068328 1823973 provision.go:117] generating server cert: /tmp/minikube_home/.minikube/machines/server.pem ca-key=/tmp/minikube_home/.minikube/certs/ca.pem private-key=/tmp/minikube_home/.minikube/certs/ca-key.pem org=admin.minikube san=[127.0.0.1 192.188.49.2 localhost minikube]
I0523 14:47:56.570172 1823973 provision.go:177] copyRemoteCerts
I0523 14:47:56.570697 1823973 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0523 14:47:56.570770 1823973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0523 14:47:56.592952 1823973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32882 SSHKeyPath:/tmp/minikube_home/.minikube/machines/minikube/id_rsa Username:docker}
I0523 14:47:56.692530 1823973 ssh_runner.go:362] scp /tmp/minikube_home/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1074 bytes)
I0523 14:47:56.722425 1823973 ssh_runner.go:362] scp /tmp/minikube_home/.minikube/machines/server.pem --> /etc/docker/server.pem (1176 bytes)
I0523 14:47:56.754409 1823973 ssh_runner.go:362] scp /tmp/minikube_home/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0523 14:47:56.783116 1823973 provision.go:87] duration metric: took 736.052991ms to configureAuth
I0523 14:47:56.783141 1823973 ubuntu.go:193] setting minikube options for container-runtime
I0523 14:47:56.783369 1823973 config.go:182] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0523 14:47:56.783429 1823973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0523 14:47:56.803213 1823973 main.go:141] libmachine: Using SSH client type: native
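The SSH material used in this phase is all recorded in the log: the key at $MINIKUBE_HOME/.minikube/machines/minikube/id_rsa, user "docker", and host port 32882 forwarded to the container's port 22. To open the same session manually (a sketch; the host port is ephemeral and will differ on another start):

    $ ssh -i "$MINIKUBE_HOME/.minikube/machines/minikube/id_rsa" -p 32882 docker@127.0.0.1
    $ minikube ssh    # equivalent, resolves the current port for you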
I0523 14:47:56.803476 1823973 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x82d6e0] 0x830440 [] 0s} 127.0.0.1 32882 }
I0523 14:47:56.803487 1823973 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0523 14:47:56.940483 1823973 main.go:141] libmachine: SSH cmd err, output: : overlay
I0523 14:47:56.940497 1823973 ubuntu.go:71] root file system type: overlay
I0523 14:47:56.940605 1823973 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0523 14:47:56.940696 1823973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0523 14:47:56.960692 1823973 main.go:141] libmachine: Using SSH client type: native
I0523 14:47:56.960875 1823973 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x82d6e0] 0x830440 [] 0s} 127.0.0.1 32882 }
I0523 14:47:56.960955 1823973 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0523 14:47:57.111037 1823973 main.go:141] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0523 14:47:57.111123 1823973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0523 14:47:57.131420 1823973 main.go:141] libmachine: Using SSH client type: native
I0523 14:47:57.131596 1823973 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x82d6e0] 0x830440 [] 0s} 127.0.0.1 32882 }
I0523 14:47:57.131609 1823973 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0523 14:47:58.106700 1823973 main.go:141] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service	2024-04-30 11:46:26.000000000 +0000
+++ /lib/systemd/system/docker.service.new	2024-05-23 07:47:57.109494300 +0000
@@ -1,46 +1,49 @@
 [Unit]
 Description=Docker Application Container Engine
 Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
 Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
 
 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
 
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
 
 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
+LimitNOFILE=infinity
 LimitNPROC=infinity
 LimitCORE=infinity
 
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0
 
 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes
 
 # kill only the docker process, not all processes in the cgroup
 KillMode=process
-OOMScoreAdjust=-500
 
 [Install]
 WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker

I0523 14:47:58.106719 1823973 machine.go:97] duration metric: took 5.559064412s to provisionDockerMachine
I0523 14:47:58.106732 1823973 client.go:171] duration metric: took 13.271678537s to LocalClient.Create
I0523 14:47:58.106750 1823973 start.go:167] duration metric: took 13.271738988s to libmachine.API.Create "minikube"
I0523 14:47:58.106769 1823973 start.go:293] postStartSetup for "minikube" (driver="docker")
I0523 14:47:58.106779 1823973 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0523 14:47:58.106845 1823973 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0523 14:47:58.106886 1823973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0523 14:47:58.130414 1823973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32882 SSHKeyPath:/tmp/minikube_home/.minikube/machines/minikube/id_rsa Username:docker}
I0523 14:47:58.231240 1823973 ssh_runner.go:195] Run: cat /etc/os-release
I0523 14:47:58.235471 1823973 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0523 14:47:58.235492 1823973 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0523 14:47:58.235500 1823973 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0523 14:47:58.235506 1823973 info.go:137] Remote host: Ubuntu 22.04.4 LTS
I0523 14:47:58.235517 1823973 filesync.go:126] Scanning /tmp/minikube_home/.minikube/addons for local assets ...
I0523 14:47:58.235583 1823973 filesync.go:126] Scanning /tmp/minikube_home/.minikube/files for local assets ...
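After the move-and-restart above, the rewritten unit is live inside the node. To confirm what was applied, the same systemctl commands the provisioner uses can be run through minikube ssh (a verification sketch, not part of the recorded run):

    $ minikube ssh -- sudo systemctl cat docker.service
    $ minikube ssh -- systemctl is-active docker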
I0523 14:47:58.235603 1823973 start.go:296] duration metric: took 128.829932ms for postStartSetup
I0523 14:47:58.236038 1823973 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0523 14:47:58.256913 1823973 profile.go:143] Saving config to /tmp/minikube_home/.minikube/profiles/minikube/config.json ...
I0523 14:47:58.257946 1823973 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0523 14:47:58.257992 1823973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0523 14:47:58.278993 1823973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32882 SSHKeyPath:/tmp/minikube_home/.minikube/machines/minikube/id_rsa Username:docker}
I0523 14:47:58.377529 1823973 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0523 14:47:58.383287 1823973 start.go:128] duration metric: took 13.549756618s to createHost
I0523 14:47:58.383306 1823973 start.go:83] releasing machines lock for "minikube", held for 13.549900859s
I0523 14:47:58.383366 1823973 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0523 14:47:58.402457 1823973 ssh_runner.go:195] Run: cat /version.json
I0523 14:47:58.402504 1823973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0523 14:47:58.402536 1823973 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0523 14:47:58.402595 1823973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0523 14:47:58.422844 1823973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32882 SSHKeyPath:/tmp/minikube_home/.minikube/machines/minikube/id_rsa Username:docker}
I0523 14:47:58.423498 1823973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32882 SSHKeyPath:/tmp/minikube_home/.minikube/machines/minikube/id_rsa Username:docker}
I0523 14:47:58.514242 1823973 ssh_runner.go:195] Run: systemctl --version
I0523 14:48:06.523427 1823973 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (8.120869048s)
W0523 14:48:06.523459 1823973 start.go:860] [curl -sS -m 2 https://registry.k8s.io/] failed: curl -sS -m 2 https://registry.k8s.io/: Process exited with status 28
stdout:

stderr:
curl: (28) Resolving timed out after 2000 milliseconds
I0523 14:48:06.523498 1823973 ssh_runner.go:235] Completed: systemctl --version: (8.009239305s)
W0523 14:48:06.523531 1823973 out.go:239] ❗ This container is having trouble accessing https://registry.k8s.io
W0523 14:48:06.523587 1823973 out.go:239] 💡 To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
I0523 14:48:06.524208 1823973 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0523 14:48:06.530915 1823973 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0523 14:48:06.564718 1823973 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0523 14:48:06.564788 1823973 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0523 14:48:06.598302 1823973 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0523 14:48:06.598326 1823973 start.go:494] detecting cgroup driver to use...
I0523 14:48:06.598633 1823973 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0523 14:48:06.598768 1823973 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0523 14:48:06.619261 1823973 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0523 14:48:06.632046 1823973 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0523 14:48:06.644316 1823973 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0523 14:48:06.644373 1823973 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0523 14:48:06.656706 1823973 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0523 14:48:06.668739 1823973 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0523 14:48:06.680186 1823973 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0523 14:48:06.692288 1823973 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0523 14:48:06.706607 1823973 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0523 14:48:06.720750 1823973 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0523 14:48:06.735745 1823973 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0523 14:48:06.751436 1823973 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0523 14:48:06.764288 1823973 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:

stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0523 14:48:06.764335 1823973 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0523 14:48:06.778876 1823973 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0523 14:48:06.790993 1823973 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0523 14:48:06.868278 1823973 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0523 14:48:06.987953 1823973 start.go:494] detecting cgroup driver to use...
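The curl timeout above (a DNS resolution failure inside the node) is what triggers the registry.k8s.io warning. Per the proxy docs the log links to, the usual fix is to export proxy variables before minikube start; the proxy address below is a placeholder, and NO_PROXY should cover at least the service CIDR and node subnet seen in this log:

    $ export HTTP_PROXY=http://proxy.example.com:3128   # placeholder address
    $ export HTTPS_PROXY=http://proxy.example.com:3128  # placeholder address
    $ export NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.188.49.0/24
    $ minikube start --driver=docker --network minikube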
I0523 14:48:06.987996 1823973 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0523 14:48:06.988062 1823973 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0523 14:48:07.002590 1823973 cruntime.go:279] skipping containerd shutdown because we are bound to it
I0523 14:48:07.002682 1823973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0523 14:48:07.015957 1823973 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0523 14:48:07.038403 1823973 ssh_runner.go:195] Run: which cri-dockerd
I0523 14:48:07.042756 1823973 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0523 14:48:07.055291 1823973 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0523 14:48:07.078706 1823973 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0523 14:48:07.231929 1823973 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0523 14:48:07.342981 1823973 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0523 14:48:07.343097 1823973 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0523 14:48:07.366732 1823973 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0523 14:48:07.476486 1823973 ssh_runner.go:195] Run: sudo systemctl restart docker
I0523 14:48:08.125835 1823973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0523 14:48:08.141773 1823973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0523 14:48:08.156774 1823973 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0523 14:48:08.233003 1823973 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0523 14:48:08.307664 1823973 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0523 14:48:08.385227 1823973 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0523 14:48:08.408446 1823973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0523 14:48:08.421890 1823973 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0523 14:48:08.492187 1823973 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0523 14:48:08.599974 1823973 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0523 14:48:08.600042 1823973 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0523 14:48:08.605164 1823973 start.go:562] Will wait 60s for crictl version
I0523 14:48:08.605212 1823973 ssh_runner.go:195] Run: which crictl
I0523 14:48:08.610011 1823973 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0523 14:48:08.650118 1823973 start.go:578] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 26.1.1
RuntimeApiVersion: v1
I0523 14:48:08.650178 1823973 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0523 14:48:08.681455 1823973 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0523 14:48:08.713794 1823973 out.go:204] 🐳 Preparing Kubernetes v1.30.0 on Docker 26.1.1 ...
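The crictl probe above can be reproduced inside the node to double-check the runtime minikube detected (a sketch; crictl and the docker version query are the same ones recorded in the log):

    $ minikube ssh -- sudo crictl version
    $ minikube ssh -- docker version --format '{{.Server.Version}}'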
I0523 14:48:08.713903 1823973 cli_runner.go:164] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0523 14:48:08.738687 1823973 ssh_runner.go:195] Run: grep 192.188.49.1 host.minikube.internal$ /etc/hosts
I0523 14:48:08.744010 1823973 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.188.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0523 14:48:08.758945 1823973 kubeadm.go:877] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.188.49.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network:minikube Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/admin:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0523 14:48:08.759246 1823973 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker I0523 14:48:08.759318 1823973 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0523 14:48:08.785705 1823973 docker.go:685] Got preloaded images: -- stdout -- registry.k8s.io/kube-apiserver:v1.30.0 registry.k8s.io/kube-controller-manager:v1.30.0 registry.k8s.io/kube-scheduler:v1.30.0 registry.k8s.io/kube-proxy:v1.30.0 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 registry.k8s.io/pause:3.9 gcr.io/k8s-minikube/storage-provisioner:v5 -- /stdout -- I0523 14:48:08.785724 1823973 docker.go:615] Images already preloaded, skipping extraction I0523 14:48:08.785795 1823973 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0523 14:48:08.811403 1823973 docker.go:685] Got preloaded images: -- stdout -- registry.k8s.io/kube-apiserver:v1.30.0 registry.k8s.io/kube-controller-manager:v1.30.0 registry.k8s.io/kube-scheduler:v1.30.0 registry.k8s.io/kube-proxy:v1.30.0 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 registry.k8s.io/pause:3.9 gcr.io/k8s-minikube/storage-provisioner:v5 -- /stdout -- I0523 14:48:08.811426 1823973 cache_images.go:84] Images are preloaded, skipping loading I0523 14:48:08.811442 1823973 kubeadm.go:928] updating node { 192.188.49.2 8443 v1.30.0 docker true true} ... I0523 14:48:08.811555 1823973 kubeadm.go:940] kubelet [Unit] Wants=docker.socket [Service] ExecStart= ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.188.49.2 [Install] config: {KubernetesVersion:v1.30.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} I0523 14:48:08.811634 1823973 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}} I0523 14:48:08.869542 1823973 cni.go:84] Creating CNI manager for "" I0523 14:48:08.869558 1823973 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge I0523 14:48:08.869570 1823973 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16 I0523 14:48:08.869592 1823973 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.188.49.2 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.188.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.188.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt 
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true} I0523 14:48:08.869746 1823973 kubeadm.go:187] kubeadm config: apiVersion: kubeadm.k8s.io/v1beta3 kind: InitConfiguration localAPIEndpoint: advertiseAddress: 192.188.49.2 bindPort: 8443 bootstrapTokens: - groups: - system:bootstrappers:kubeadm:default-node-token ttl: 24h0m0s usages: - signing - authentication nodeRegistration: criSocket: unix:///var/run/cri-dockerd.sock name: "minikube" kubeletExtraArgs: node-ip: 192.188.49.2 taints: [] --- apiVersion: kubeadm.k8s.io/v1beta3 kind: ClusterConfiguration apiServer: certSANs: ["127.0.0.1", "localhost", "192.188.49.2"] extraArgs: enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota" controllerManager: extraArgs: allocate-node-cidrs: "true" leader-elect: "false" scheduler: extraArgs: leader-elect: "false" certificatesDir: /var/lib/minikube/certs clusterName: mk controlPlaneEndpoint: control-plane.minikube.internal:8443 etcd: local: dataDir: /var/lib/minikube/etcd extraArgs: proxy-refresh-interval: "70000" kubernetesVersion: v1.30.0 networking: dnsDomain: cluster.local podSubnet: "10.244.0.0/16" serviceSubnet: 10.96.0.0/12 --- apiVersion: kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration authentication: x509: clientCAFile: /var/lib/minikube/certs/ca.crt cgroupDriver: cgroupfs containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock hairpinMode: hairpin-veth runtimeRequestTimeout: 15m clusterDomain: "cluster.local" # disable disk resource management by default imageGCHighThresholdPercent: 100 evictionHard: nodefs.available: "0%!"(MISSING) nodefs.inodesFree: "0%!"(MISSING) imagefs.available: "0%!"(MISSING) failSwapOn: false staticPodPath: /etc/kubernetes/manifests --- apiVersion: kubeproxy.config.k8s.io/v1alpha1 kind: KubeProxyConfiguration clusterCIDR: "10.244.0.0/16" metricsBindAddress: 0.0.0.0:10249 conntrack: maxPerCore: 0 # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established" tcpEstablishedTimeout: 0s # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close" tcpCloseWaitTimeout: 0s I0523 14:48:08.869815 1823973 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0 I0523 14:48:08.881251 1823973 binaries.go:44] Found k8s binaries, skipping transfer I0523 14:48:08.881310 1823973 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube I0523 14:48:08.893201 1823973 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes) I0523 14:48:08.915733 1823973 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes) I0523 14:48:08.937785 1823973 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes) I0523 14:48:08.960009 1823973 ssh_runner.go:195] Run: grep 192.188.49.2 control-plane.minikube.internal$ /etc/hosts I0523 14:48:08.964549 1823973 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.188.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0523 14:48:08.978014 1823973 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0523 
14:48:09.051733 1823973 ssh_runner.go:195] Run: sudo systemctl start kubelet I0523 14:48:09.072142 1823973 certs.go:68] Setting up /tmp/minikube_home/.minikube/profiles/minikube for IP: 192.188.49.2 I0523 14:48:09.072154 1823973 certs.go:194] generating shared ca certs ... I0523 14:48:09.072170 1823973 certs.go:226] acquiring lock for ca certs: {Name:mkf22db8a1d7fab13dd369a061195f427393611a Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0523 14:48:09.072840 1823973 certs.go:240] generating "minikubeCA" ca cert: /tmp/minikube_home/.minikube/ca.key I0523 14:48:09.187929 1823973 crypto.go:156] Writing cert to /tmp/minikube_home/.minikube/ca.crt ... I0523 14:48:09.187945 1823973 lock.go:35] WriteFile acquiring /tmp/minikube_home/.minikube/ca.crt: {Name:mkfd4c73600e9addc475512a94184234b4008df4 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0523 14:48:09.188246 1823973 crypto.go:164] Writing key to /tmp/minikube_home/.minikube/ca.key ... I0523 14:48:09.188252 1823973 lock.go:35] WriteFile acquiring /tmp/minikube_home/.minikube/ca.key: {Name:mk842b2c08546ca041bfc07ee72e12454284dde2 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0523 14:48:09.188370 1823973 certs.go:240] generating "proxyClientCA" ca cert: /tmp/minikube_home/.minikube/proxy-client-ca.key I0523 14:48:09.263407 1823973 crypto.go:156] Writing cert to /tmp/minikube_home/.minikube/proxy-client-ca.crt ... I0523 14:48:09.263421 1823973 lock.go:35] WriteFile acquiring /tmp/minikube_home/.minikube/proxy-client-ca.crt: {Name:mkb416ad783ed2075a3cf0818b43bc559d748d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0523 14:48:09.263721 1823973 crypto.go:164] Writing key to /tmp/minikube_home/.minikube/proxy-client-ca.key ... I0523 14:48:09.263728 1823973 lock.go:35] WriteFile acquiring /tmp/minikube_home/.minikube/proxy-client-ca.key: {Name:mk4b63d0f0e341d2ad27b2395f115e7205a40539 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0523 14:48:09.263842 1823973 certs.go:256] generating profile certs ... I0523 14:48:09.263903 1823973 certs.go:363] generating signed profile cert for "minikube-user": /tmp/minikube_home/.minikube/profiles/minikube/client.key I0523 14:48:09.263916 1823973 crypto.go:68] Generating cert /tmp/minikube_home/.minikube/profiles/minikube/client.crt with IP's: [] I0523 14:48:09.360649 1823973 crypto.go:156] Writing cert to /tmp/minikube_home/.minikube/profiles/minikube/client.crt ... I0523 14:48:09.360664 1823973 lock.go:35] WriteFile acquiring /tmp/minikube_home/.minikube/profiles/minikube/client.crt: {Name:mkb6ef26d9c9efe2fe8481ca59339943f9a4daa6 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0523 14:48:09.360951 1823973 crypto.go:164] Writing key to /tmp/minikube_home/.minikube/profiles/minikube/client.key ... I0523 14:48:09.360958 1823973 lock.go:35] WriteFile acquiring /tmp/minikube_home/.minikube/profiles/minikube/client.key: {Name:mk8fe2b39f6ed8e0216fc37da200243f69ab3d4c Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0523 14:48:09.361071 1823973 certs.go:363] generating signed profile cert for "minikube": /tmp/minikube_home/.minikube/profiles/minikube/apiserver.key.3646a1d8 I0523 14:48:09.361087 1823973 crypto.go:68] Generating cert /tmp/minikube_home/.minikube/profiles/minikube/apiserver.crt.3646a1d8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.188.49.2] I0523 14:48:09.571225 1823973 crypto.go:156] Writing cert to /tmp/minikube_home/.minikube/profiles/minikube/apiserver.crt.3646a1d8 ... 
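The kubeadm config block above is what minikube renders and ships to the node; the scp record shows it landing as /var/tmp/minikube/kubeadm.yaml.new (2150 bytes). To read back exactly what was sent, a minimal check, assuming a standard kicbase node, is:

  minikube ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new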
I0523 14:48:09.571241 1823973 lock.go:35] WriteFile acquiring /tmp/minikube_home/.minikube/profiles/minikube/apiserver.crt.3646a1d8: {Name:mkd5d2fc590f27c42291a16ca46eff4abf035e5e Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0523 14:48:09.571521 1823973 crypto.go:164] Writing key to /tmp/minikube_home/.minikube/profiles/minikube/apiserver.key.3646a1d8 ... I0523 14:48:09.571529 1823973 lock.go:35] WriteFile acquiring /tmp/minikube_home/.minikube/profiles/minikube/apiserver.key.3646a1d8: {Name:mk55076a493714a91809176dbcd952ae97583587 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0523 14:48:09.571664 1823973 certs.go:381] copying /tmp/minikube_home/.minikube/profiles/minikube/apiserver.crt.3646a1d8 -> /tmp/minikube_home/.minikube/profiles/minikube/apiserver.crt I0523 14:48:09.571810 1823973 certs.go:385] copying /tmp/minikube_home/.minikube/profiles/minikube/apiserver.key.3646a1d8 -> /tmp/minikube_home/.minikube/profiles/minikube/apiserver.key I0523 14:48:09.571886 1823973 certs.go:363] generating signed profile cert for "aggregator": /tmp/minikube_home/.minikube/profiles/minikube/proxy-client.key I0523 14:48:09.571899 1823973 crypto.go:68] Generating cert /tmp/minikube_home/.minikube/profiles/minikube/proxy-client.crt with IP's: [] I0523 14:48:09.667662 1823973 crypto.go:156] Writing cert to /tmp/minikube_home/.minikube/profiles/minikube/proxy-client.crt ... I0523 14:48:09.667675 1823973 lock.go:35] WriteFile acquiring /tmp/minikube_home/.minikube/profiles/minikube/proxy-client.crt: {Name:mk6c0f67908ff6577b7753362d9cbcf38b6ad06e Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0523 14:48:09.667953 1823973 crypto.go:164] Writing key to /tmp/minikube_home/.minikube/profiles/minikube/proxy-client.key ... I0523 14:48:09.667960 1823973 lock.go:35] WriteFile acquiring /tmp/minikube_home/.minikube/profiles/minikube/proxy-client.key: {Name:mk9ffc3334c38ec5bd41a8b2032c0e881418a63f Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0523 14:48:09.668256 1823973 certs.go:484] found cert: /tmp/minikube_home/.minikube/certs/ca-key.pem (1675 bytes) I0523 14:48:09.668287 1823973 certs.go:484] found cert: /tmp/minikube_home/.minikube/certs/ca.pem (1074 bytes) I0523 14:48:09.668309 1823973 certs.go:484] found cert: /tmp/minikube_home/.minikube/certs/cert.pem (1119 bytes) I0523 14:48:09.668330 1823973 certs.go:484] found cert: /tmp/minikube_home/.minikube/certs/key.pem (1679 bytes) I0523 14:48:09.669069 1823973 ssh_runner.go:362] scp /tmp/minikube_home/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I0523 14:48:09.699980 1823973 ssh_runner.go:362] scp /tmp/minikube_home/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes) I0523 14:48:09.729492 1823973 ssh_runner.go:362] scp /tmp/minikube_home/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I0523 14:48:09.758939 1823973 ssh_runner.go:362] scp /tmp/minikube_home/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes) I0523 14:48:09.787715 1823973 ssh_runner.go:362] scp /tmp/minikube_home/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes) I0523 14:48:09.817176 1823973 ssh_runner.go:362] scp /tmp/minikube_home/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes) I0523 14:48:09.845527 1823973 ssh_runner.go:362] scp /tmp/minikube_home/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes) I0523 14:48:09.873406 1823973 ssh_runner.go:362] scp 
/tmp/minikube_home/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes) I0523 14:48:09.903858 1823973 ssh_runner.go:362] scp /tmp/minikube_home/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I0523 14:48:09.933953 1823973 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes) I0523 14:48:09.955118 1823973 ssh_runner.go:195] Run: openssl version I0523 14:48:09.962018 1823973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0523 14:48:09.973828 1823973 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0523 14:48:09.978808 1823973 certs.go:528] hashing: -rw-r--r--. 1 root root 1111 May 23 07:48 /usr/share/ca-certificates/minikubeCA.pem I0523 14:48:09.978848 1823973 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0523 14:48:09.989573 1823973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0523 14:48:10.002220 1823973 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt I0523 14:48:10.007588 1823973 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1 stdout: stderr: stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory I0523 14:48:10.007659 1823973 kubeadm.go:391] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.188.49.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network:minikube Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/admin:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false 
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} I0523 14:48:10.007762 1823973 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I0523 14:48:10.032015 1823973 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I0523 14:48:10.044186 1823973 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml I0523 14:48:10.055548 1823973 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver I0523 14:48:10.055601 1823973 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I0523 14:48:10.066340 1823973 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout: stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I0523 14:48:10.066347 1823973 kubeadm.go:156] found existing configuration files: I0523 14:48:10.066392 1823973 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf I0523 14:48:10.076761 1823973 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2 stdout: stderr: grep: /etc/kubernetes/admin.conf: No such file or directory I0523 14:48:10.076814 1823973 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf I0523 14:48:10.087327 1823973 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf I0523 14:48:10.097964 1823973 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2 stdout: stderr: grep: /etc/kubernetes/kubelet.conf: No such file or directory I0523 14:48:10.098012 1823973 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf I0523 14:48:10.108333 1823973 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf I0523 14:48:10.119052 1823973 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2 stdout: stderr: grep: /etc/kubernetes/controller-manager.conf: No such file or directory I0523 14:48:10.119096 1823973 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf I0523 14:48:10.129059 1823973 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf I0523 14:48:10.139486 1823973 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in 
/etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout: stderr: grep: /etc/kubernetes/scheduler.conf: No such file or directory I0523 14:48:10.139533 1823973 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf I0523 14:48:10.149879 1823973 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables" I0523 14:48:10.241417 1823973 kubeadm.go:309] [WARNING Swap]: swap is supported for cgroup v2 only; the NodeSwap feature gate of the kubelet is beta but disabled by default I0523 14:48:10.246145 1823973 kubeadm.go:309] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/3.10.0-1062.el7.x86_64\n", err: exit status 1 I0523 14:48:10.317955 1823973 kubeadm.go:309] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' I0523 14:48:21.887668 1823973 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0 I0523 14:48:21.887712 1823973 kubeadm.go:309] [preflight] Running pre-flight checks I0523 14:48:21.887823 1823973 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification: I0523 14:48:21.887893 1823973 kubeadm.go:309] KERNEL_VERSION: 3.10.0-1062.el7.x86_64 I0523 14:48:21.887925 1823973 kubeadm.go:309] OS: Linux I0523 14:48:21.887970 1823973 kubeadm.go:309] CGROUPS_CPU: enabled I0523 14:48:21.888012 1823973 kubeadm.go:309] CGROUPS_CPUACCT: enabled I0523 14:48:21.888060 1823973 kubeadm.go:309] CGROUPS_CPUSET: enabled I0523 14:48:21.888105 1823973 kubeadm.go:309] CGROUPS_DEVICES: enabled I0523 14:48:21.888147 1823973 kubeadm.go:309] CGROUPS_FREEZER: enabled I0523 14:48:21.888191 1823973 kubeadm.go:309] CGROUPS_MEMORY: enabled I0523 14:48:21.888232 1823973 kubeadm.go:309] CGROUPS_PIDS: enabled I0523 14:48:21.888284 1823973 kubeadm.go:309] CGROUPS_HUGETLB: enabled I0523 14:48:21.888325 1823973 kubeadm.go:309] CGROUPS_BLKIO: enabled I0523 14:48:21.888394 1823973 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster I0523 14:48:21.888478 1823973 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection I0523 14:48:21.888590 1823973 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' I0523 14:48:21.888668 1823973 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs" I0523 14:48:21.889664 1823973 out.go:204] ▪ Generating certificates and keys ... 
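Note on the preflight warnings above: minikube already logged "ignoring SystemVerification for kubeadm because of docker driver" and passes SystemVerification in --ignore-preflight-errors, so the "unable to load kernel module: configs" message is expected noise on this CentOS 7 host. To double-check the kernel config on the host itself, a small sketch (assuming a stock CentOS 7 layout) is:

  # packaged config for the running kernel, if present:
  ls -l /boot/config-$(uname -r)
  # or expose /proc/config.gz when the kernel was built with IKCONFIG:
  sudo modprobe configs && zcat /proc/config.gz | grep -m1 CONFIG_CGROUPS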
I0523 14:48:21.889762 1823973 kubeadm.go:309] [certs] Using existing ca certificate authority I0523 14:48:21.889827 1823973 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk I0523 14:48:21.889897 1823973 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key I0523 14:48:21.889975 1823973 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key I0523 14:48:21.890044 1823973 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key I0523 14:48:21.890098 1823973 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key I0523 14:48:21.890149 1823973 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key I0523 14:48:21.890258 1823973 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.188.49.2 127.0.0.1 ::1] I0523 14:48:21.890308 1823973 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key I0523 14:48:21.890426 1823973 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.188.49.2 127.0.0.1 ::1] I0523 14:48:21.890516 1823973 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key I0523 14:48:21.890586 1823973 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key I0523 14:48:21.890635 1823973 kubeadm.go:309] [certs] Generating "sa" key and public key I0523 14:48:21.890696 1823973 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes" I0523 14:48:21.890753 1823973 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file I0523 14:48:21.890808 1823973 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file I0523 14:48:21.890879 1823973 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file I0523 14:48:21.890980 1823973 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file I0523 14:48:21.891038 1823973 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file I0523 14:48:21.891134 1823973 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" I0523 14:48:21.891228 1823973 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests" I0523 14:48:21.891994 1823973 out.go:204] ▪ Booting up control plane ... I0523 14:48:21.892093 1823973 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver" I0523 14:48:21.892181 1823973 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager" I0523 14:48:21.892245 1823973 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler" I0523 14:48:21.892352 1823973 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" I0523 14:48:21.892444 1823973 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" I0523 14:48:21.892501 1823973 kubeadm.go:309] [kubelet-start] Starting the kubelet I0523 14:48:21.892637 1823973 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests" I0523 14:48:21.892703 1823973 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. 
This can take up to 4m0s I0523 14:48:21.892755 1823973 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.236137ms I0523 14:48:21.892833 1823973 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s I0523 14:48:21.892897 1823973 kubeadm.go:309] [api-check] The API server is healthy after 6.502025304s I0523 14:48:21.893008 1823973 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace I0523 14:48:21.893138 1823973 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster I0523 14:48:21.893196 1823973 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs I0523 14:48:21.893391 1823973 kubeadm.go:309] [mark-control-plane] Marking the node minikube as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers] I0523 14:48:21.893455 1823973 kubeadm.go:309] [bootstrap-token] Using token: h3oelp.lkdmsvgxolbi6uij I0523 14:48:21.894294 1823973 out.go:204] ▪ Configuring RBAC rules ... I0523 14:48:21.894401 1823973 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles I0523 14:48:21.894490 1823973 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes I0523 14:48:21.894659 1823973 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials I0523 14:48:21.894802 1823973 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token I0523 14:48:21.894931 1823973 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster I0523 14:48:21.895012 1823973 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace I0523 14:48:21.895114 1823973 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key I0523 14:48:21.895171 1823973 kubeadm.go:309] [addons] Applied essential addon: CoreDNS I0523 14:48:21.895224 1823973 kubeadm.go:309] [addons] Applied essential addon: kube-proxy I0523 14:48:21.895227 1823973 kubeadm.go:309] I0523 14:48:21.895280 1823973 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully! I0523 14:48:21.895283 1823973 kubeadm.go:309] I0523 14:48:21.895356 1823973 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user: I0523 14:48:21.895359 1823973 kubeadm.go:309] I0523 14:48:21.895381 1823973 kubeadm.go:309] mkdir -p $HOME/.kube I0523 14:48:21.895437 1823973 kubeadm.go:309] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config I0523 14:48:21.895494 1823973 kubeadm.go:309] sudo chown $(id -u):$(id -g) $HOME/.kube/config I0523 14:48:21.895497 1823973 kubeadm.go:309] I0523 14:48:21.895545 1823973 kubeadm.go:309] Alternatively, if you are the root user, you can run: I0523 14:48:21.895548 1823973 kubeadm.go:309] I0523 14:48:21.895596 1823973 kubeadm.go:309] export KUBECONFIG=/etc/kubernetes/admin.conf I0523 14:48:21.895604 1823973 kubeadm.go:309] I0523 14:48:21.895673 1823973 kubeadm.go:309] You should now deploy a pod network to the cluster. 
I0523 14:48:21.895753 1823973 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: I0523 14:48:21.895820 1823973 kubeadm.go:309] https://kubernetes.io/docs/concepts/cluster-administration/addons/ I0523 14:48:21.895823 1823973 kubeadm.go:309] I0523 14:48:21.895904 1823973 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities I0523 14:48:21.895977 1823973 kubeadm.go:309] and service account keys on each node and then running the following as root: I0523 14:48:21.895981 1823973 kubeadm.go:309] I0523 14:48:21.896061 1823973 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token h3oelp.lkdmsvgxolbi6uij \ I0523 14:48:21.896160 1823973 kubeadm.go:309] --discovery-token-ca-cert-hash sha256:c4d3986dd368dbfea88fb4b4478fcbb1308ba106e3b93d88a3c3663fca3f3392 \ I0523 14:48:21.896180 1823973 kubeadm.go:309] --control-plane I0523 14:48:21.896182 1823973 kubeadm.go:309] I0523 14:48:21.896264 1823973 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root: I0523 14:48:21.896267 1823973 kubeadm.go:309] I0523 14:48:21.896345 1823973 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token h3oelp.lkdmsvgxolbi6uij \ I0523 14:48:21.896453 1823973 kubeadm.go:309] --discovery-token-ca-cert-hash sha256:c4d3986dd368dbfea88fb4b4478fcbb1308ba106e3b93d88a3c3663fca3f3392 I0523 14:48:21.896460 1823973 cni.go:84] Creating CNI manager for "" I0523 14:48:21.896474 1823973 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge I0523 14:48:21.897319 1823973 out.go:177] 🔗 Configuring bridge CNI (Container Networking Interface) ... I0523 14:48:21.898056 1823973 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d I0523 14:48:21.909936 1823973 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes) I0523 14:48:21.931530 1823973 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj" I0523 14:48:21.931645 1823973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig I0523 14:48:21.931688 1823973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes minikube minikube.k8s.io/updated_at=2024_05_23T14_48_21_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=5883c09216182566a63dff4c326a6fc9ed2982ff minikube.k8s.io/name=minikube minikube.k8s.io/primary=true I0523 14:48:21.941464 1823973 ops.go:34] apiserver oom_adj: -16 I0523 14:48:22.043571 1823973 kubeadm.go:1107] duration metric: took 112.008742ms to wait for elevateKubeSystemPrivileges W0523 14:48:22.067188 1823973 kubeadm.go:286] apiserver tunnel failed: apiserver port not set I0523 14:48:22.067208 1823973 kubeadm.go:393] duration metric: took 12.059567916s to StartCluster I0523 14:48:22.067230 1823973 settings.go:142] acquiring lock: {Name:mk4cec9904761f5b9117b94abac2ea2e700cd7ee Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0523 14:48:22.067291 1823973 settings.go:150] Updating kubeconfig: /home/admin/.kube/config I0523 14:48:22.068116 1823973 lock.go:35] WriteFile acquiring /home/admin/.kube/config: {Name:mke67f3fca0d83c9582d62529c22503d3ab0ffe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0523 14:48:22.068373 1823973 ssh_runner.go:195] Run: /bin/bash -c "sudo 
/var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml" I0523 14:48:22.068397 1823973 start.go:234] Will wait 6m0s for node &{Name: IP:192.188.49.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} I0523 14:48:22.069275 1823973 out.go:177] 🔎 Verifying Kubernetes components... I0523 14:48:22.068486 1823973 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] I0523 14:48:22.069953 1823973 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0523 14:48:22.068552 1823973 config.go:182] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0 I0523 14:48:22.069315 1823973 addons.go:69] Setting storage-provisioner=true in profile "minikube" I0523 14:48:22.070027 1823973 addons.go:234] Setting addon storage-provisioner=true in "minikube" I0523 14:48:22.070054 1823973 host.go:66] Checking if "minikube" exists ... I0523 14:48:22.069321 1823973 addons.go:69] Setting default-storageclass=true in profile "minikube" I0523 14:48:22.070098 1823973 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube" I0523 14:48:22.070336 1823973 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}} I0523 14:48:22.070456 1823973 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}} I0523 14:48:22.093120 1823973 out.go:177] ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5 I0523 14:48:22.093397 1823973 addons.go:234] Setting addon default-storageclass=true in "minikube" I0523 14:48:22.094247 1823973 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml I0523 14:48:22.094254 1823973 host.go:66] Checking if "minikube" exists ... 
I0523 14:48:22.094255 1823973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes) I0523 14:48:22.094313 1823973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0523 14:48:22.094641 1823973 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}} I0523 14:48:22.115817 1823973 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml I0523 14:48:22.115828 1823973 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes) I0523 14:48:22.115887 1823973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0523 14:48:22.116266 1823973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32882 SSHKeyPath:/tmp/minikube_home/.minikube/machines/minikube/id_rsa Username:docker} I0523 14:48:22.139160 1823973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32882 SSHKeyPath:/tmp/minikube_home/.minikube/machines/minikube/id_rsa Username:docker} I0523 14:48:22.247707 1823973 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.188.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -" I0523 14:48:22.251575 1823973 ssh_runner.go:195] Run: sudo systemctl start kubelet I0523 14:48:22.350561 1823973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml I0523 14:48:22.352818 1823973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml I0523 14:48:22.846340 1823973 start.go:946] {"host.minikube.internal": 192.188.49.1} host record injected into CoreDNS's ConfigMap I0523 14:48:22.847214 1823973 api_server.go:52] waiting for apiserver process to appear ... I0523 14:48:22.847287 1823973 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0523 14:48:23.106144 1823973 api_server.go:72] duration metric: took 1.037717635s to wait for apiserver process to appear ... I0523 14:48:23.106181 1823973 api_server.go:88] waiting for apiserver healthz status ... I0523 14:48:23.106211 1823973 api_server.go:253] Checking apiserver healthz at https://192.188.49.2:8443/healthz ... I0523 14:48:23.111268 1823973 out.go:177] 🌟 Enabled addons: storage-provisioner, default-storageclass I0523 14:48:23.111912 1823973 addons.go:505] duration metric: took 1.043450336s for enable addons: enabled=[storage-provisioner default-storageclass] I0523 14:48:23.110817 1823973 api_server.go:279] https://192.188.49.2:8443/healthz returned 200: ok I0523 14:48:23.112817 1823973 api_server.go:141] control plane version: v1.30.0 I0523 14:48:23.112831 1823973 api_server.go:131] duration metric: took 6.643946ms to wait for apiserver health ... I0523 14:48:23.112843 1823973 system_pods.go:43] waiting for kube-system pods to appear ... 
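At this point the log is polling the apiserver and the kube-system pods; the same checks can be reproduced by hand from the host. A hedged equivalent of what the waits above are doing:

  kubectl -n kube-system get pods              # the pod list the system_pods wait inspects
  minikube addons list                         # the addon enable/disable map logged above
  curl -k https://192.188.49.2:8443/healthz    # the healthz probe logged above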
I0523 14:48:23.119032 1823973 system_pods.go:59] 5 kube-system pods found
I0523 14:48:23.119057 1823973 system_pods.go:61] "etcd-minikube" [2ead8781-bc47-4cfd-9105-dbfddd201342] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0523 14:48:23.119066 1823973 system_pods.go:61] "kube-apiserver-minikube" [ff45c2b2-1776-4e80-959d-b91e03d78867] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0523 14:48:23.119074 1823973 system_pods.go:61] "kube-controller-manager-minikube" [033d7d42-bf27-4220-bf55-c9048c8d3c26] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0523 14:48:23.119080 1823973 system_pods.go:61] "kube-scheduler-minikube" [476beb8f-dbc6-4dd9-9bf7-918f628abb24] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0523 14:48:23.119085 1823973 system_pods.go:61] "storage-provisioner" [2381baad-5fee-4183-888a-24491278ee2b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
I0523 14:48:23.119093 1823973 system_pods.go:74] duration metric: took 6.243299ms to wait for pod list to return data ...
I0523 14:48:23.119104 1823973 kubeadm.go:576] duration metric: took 1.050683019s to wait for: map[apiserver:true system_pods:true]
I0523 14:48:23.119118 1823973 node_conditions.go:102] verifying NodePressure condition ...
I0523 14:48:23.121489 1823973 node_conditions.go:122] node storage ephemeral capacity is 91723496Ki
I0523 14:48:23.121507 1823973 node_conditions.go:123] node cpu capacity is 32
I0523 14:48:23.121527 1823973 node_conditions.go:105] duration metric: took 2.403436ms to run NodePressure ...
I0523 14:48:23.121540 1823973 start.go:240] waiting for startup goroutines ...
I0523 14:48:23.350831 1823973 kapi.go:248] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
I0523 14:48:23.350874 1823973 start.go:245] waiting for cluster config update ...
I0523 14:48:23.350887 1823973 start.go:254] writing updated cluster config ...
I0523 14:48:23.351267 1823973 ssh_runner.go:195] Run: rm -f paused
I0523 14:48:23.413967 1823973 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
I0523 14:48:23.415096 1823973 out.go:177] 🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

==> Docker <==
May 23 07:48:07 minikube dockerd[1038]: time="2024-05-23T07:48:07.534763642Z" level=info msg="Loading containers: done."
May 23 07:48:07 minikube dockerd[1038]: time="2024-05-23T07:48:07.626239586Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled" May 23 07:48:07 minikube dockerd[1038]: time="2024-05-23T07:48:07.626278026Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled" May 23 07:48:07 minikube dockerd[1038]: time="2024-05-23T07:48:07.626304163Z" level=info msg="Docker daemon" commit=ac2de55 containerd-snapshotter=false storage-driver=overlay2 version=26.1.1 May 23 07:48:07 minikube dockerd[1038]: time="2024-05-23T07:48:07.626474060Z" level=info msg="Daemon has completed initialization" May 23 07:48:07 minikube dockerd[1038]: time="2024-05-23T07:48:07.654674804Z" level=info msg="API listen on /var/run/docker.sock" May 23 07:48:07 minikube dockerd[1038]: time="2024-05-23T07:48:07.654737110Z" level=info msg="API listen on [::]:2376" May 23 07:48:07 minikube dockerd[1038]: time="2024-05-23T07:48:07.656299946Z" level=info msg="stopping event stream following graceful shutdown" error="" module=libcontainerd namespace=moby May 23 07:48:07 minikube dockerd[1038]: time="2024-05-23T07:48:07.657264359Z" level=info msg="Daemon shutdown complete" May 23 07:48:07 minikube dockerd[1038]: time="2024-05-23T07:48:07.657386856Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby May 23 07:48:07 minikube systemd[1]: docker.service: Deactivated successfully. May 23 07:48:07 minikube systemd[1]: Stopped Docker Application Container Engine. May 23 07:48:07 minikube systemd[1]: Starting Docker Application Container Engine... May 23 07:48:07 minikube dockerd[1265]: time="2024-05-23T07:48:07.733401232Z" level=info msg="Starting up" May 23 07:48:07 minikube dockerd[1265]: time="2024-05-23T07:48:07.755126181Z" level=info msg="[graphdriver] trying configured driver: overlay2" May 23 07:48:07 minikube dockerd[1265]: time="2024-05-23T07:48:07.908793828Z" level=info msg="Loading containers: start." May 23 07:48:07 minikube dockerd[1265]: time="2024-05-23T07:48:07.991193131Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" May 23 07:48:08 minikube dockerd[1265]: time="2024-05-23T07:48:08.025471131Z" level=info msg="Loading containers: done." May 23 07:48:08 minikube dockerd[1265]: time="2024-05-23T07:48:08.098804315Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled" May 23 07:48:08 minikube dockerd[1265]: time="2024-05-23T07:48:08.098861008Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled" May 23 07:48:08 minikube dockerd[1265]: time="2024-05-23T07:48:08.098898755Z" level=info msg="Docker daemon" commit=ac2de55 containerd-snapshotter=false storage-driver=overlay2 version=26.1.1 May 23 07:48:08 minikube dockerd[1265]: time="2024-05-23T07:48:08.098969052Z" level=info msg="Daemon has completed initialization" May 23 07:48:08 minikube dockerd[1265]: time="2024-05-23T07:48:08.122669516Z" level=info msg="API listen on /var/run/docker.sock" May 23 07:48:08 minikube dockerd[1265]: time="2024-05-23T07:48:08.122680723Z" level=info msg="API listen on [::]:2376" May 23 07:48:08 minikube systemd[1]: Started Docker Application Container Engine. May 23 07:48:08 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine... 
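The cri-dockerd records that follow show the CRI shim connecting to this dockerd over unix:///var/run/docker.sock and exposing the unix:///var/run/cri-dockerd.sock endpoint the kubelet config above uses. That endpoint can be queried directly; a sketch, assuming crictl is shipped in the kicbase image (it normally is):

  minikube ssh -- sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a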
May 23 07:48:08 minikube cri-dockerd[1501]: time="2024-05-23T07:48:08Z" level=info msg="Starting cri-dockerd dev (HEAD)" May 23 07:48:08 minikube cri-dockerd[1501]: time="2024-05-23T07:48:08Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock" May 23 07:48:08 minikube cri-dockerd[1501]: time="2024-05-23T07:48:08Z" level=info msg="Start docker client with request timeout 0s" May 23 07:48:08 minikube cri-dockerd[1501]: time="2024-05-23T07:48:08Z" level=info msg="Hairpin mode is set to hairpin-veth" May 23 07:48:08 minikube cri-dockerd[1501]: time="2024-05-23T07:48:08Z" level=info msg="Loaded network plugin cni" May 23 07:48:08 minikube cri-dockerd[1501]: time="2024-05-23T07:48:08Z" level=info msg="Docker cri networking managed by network plugin cni" May 23 07:48:08 minikube cri-dockerd[1501]: time="2024-05-23T07:48:08Z" level=info msg="Setting cgroupDriver cgroupfs" May 23 07:48:08 minikube cri-dockerd[1501]: time="2024-05-23T07:48:08Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}" May 23 07:48:08 minikube cri-dockerd[1501]: time="2024-05-23T07:48:08Z" level=info msg="Starting the GRPC backend for the Docker CRI interface." May 23 07:48:08 minikube cri-dockerd[1501]: time="2024-05-23T07:48:08Z" level=info msg="Start cri-dockerd grpc backend" May 23 07:48:08 minikube systemd[1]: Started CRI Interface for Docker Application Container Engine. May 23 07:48:15 minikube cri-dockerd[1501]: time="2024-05-23T07:48:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5a31211a898d970757061294fce6760140fc162d1dd26b4949e8abd5582066e2/resolv.conf as [nameserver 192.188.49.1 options ndots:0]" May 23 07:48:15 minikube cri-dockerd[1501]: time="2024-05-23T07:48:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/270c5c173ef42ad047f2ed708b90b624960e7f53b056a83359d18bb11d76653f/resolv.conf as [nameserver 192.188.49.1 options ndots:0]" May 23 07:48:15 minikube cri-dockerd[1501]: time="2024-05-23T07:48:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3b5925fffca19b4e9858496d7923bd9420b9fca57f95cac88aafd709ab9d2a21/resolv.conf as [nameserver 192.188.49.1 options ndots:0]" May 23 07:48:15 minikube cri-dockerd[1501]: time="2024-05-23T07:48:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/231a7cb8075d313e571d23134950ada6605da9cd235e1b6bf9a1b75244f21620/resolv.conf as [nameserver 192.188.49.1 options ndots:0]" May 23 07:48:35 minikube cri-dockerd[1501]: time="2024-05-23T07:48:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/79387896af5bb504652c119332c35e290e34e51d40cc3fbcc80b77d473821858/resolv.conf as [nameserver 192.188.49.1 options ndots:0]" May 23 07:48:35 minikube cri-dockerd[1501]: time="2024-05-23T07:48:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7d199db321b2f90fe8730bc2d5a0012dc3f2039d3b89cb5ee4693c516110b6fd/resolv.conf as [nameserver 192.188.49.1 options ndots:0]" May 23 07:48:35 minikube cri-dockerd[1501]: time="2024-05-23T07:48:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6664f66ddcb55b53bf20e5b39aecdb5cb742e9ca2a13ece8b4045fb097beee3b/resolv.conf as [nameserver 192.188.49.1 options ndots:0]" May 23 07:48:36 minikube dockerd[1265]: time="2024-05-23T07:48:36.512525360Z" level=info msg="ignoring event" 
container=a3ffe8b07c1e97c230f9354ed2aefe0306de51cfe8c7f34f0ff80b001b889b4e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" May 23 07:48:37 minikube dockerd[1265]: time="2024-05-23T07:48:37.093362153Z" level=info msg="ignoring event" container=ece01e795e8a724d757f538677ec630813fe362f6f47969201279c673da8a132 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" May 23 07:48:41 minikube cri-dockerd[1501]: time="2024-05-23T07:48:41Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}" May 23 07:48:59 minikube dockerd[1265]: time="2024-05-23T07:48:59.601145594Z" level=info msg="ignoring event" container=70c9148bb6bf5c818d70b37399ade4093ad132ac1771d6bfd7197b22eb547889 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" May 23 07:49:05 minikube dockerd[1265]: time="2024-05-23T07:49:05.509274234Z" level=info msg="ignoring event" container=5979eee78c8c7d2033222a1bd15d49793b8fb4ce81e1de5a627d5cec597041d8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" May 23 07:49:31 minikube dockerd[1265]: time="2024-05-23T07:49:31.594437595Z" level=info msg="ignoring event" container=dcdb363a57d40f10e86b4adafe8ad9845e2f63e844d906e1e697bfb42fd11347 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" May 23 07:50:15 minikube dockerd[1265]: time="2024-05-23T07:50:15.605209899Z" level=info msg="ignoring event" container=2e97a0b0435ad7f10e154e030d68714e085f8abfb0baaff72b92311e0f8db1ea module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" May 23 07:50:34 minikube dockerd[1265]: 2024/05/23 07:50:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98) May 23 07:50:34 minikube dockerd[1265]: 2024/05/23 07:50:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98) May 23 07:50:34 minikube dockerd[1265]: 2024/05/23 07:50:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98) May 23 07:50:34 minikube dockerd[1265]: 2024/05/23 07:50:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98) May 23 07:50:34 minikube dockerd[1265]: 2024/05/23 07:50:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98) May 23 07:50:34 minikube dockerd[1265]: 2024/05/23 07:50:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98) May 23 07:50:34 minikube dockerd[1265]: 2024/05/23 07:50:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98) May 23 07:50:34 minikube dockerd[1265]: 2024/05/23 07:50:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98) May 23 07:51:42 minikube dockerd[1265]: time="2024-05-23T07:51:42.606174554Z" level=info msg="ignoring event" 
container=7295b25591fa1dcd5019c15fe232b9435a450dda21432fa6b40d4da480a5ea19 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"

==> container status <==
CONTAINER       IMAGE           CREATED         STATE     NAME                      ATTEMPT   POD ID          POD
7295b25591fa1   cbb01a7bd410d   6 seconds ago   Exited    coredns                   5         6664f66ddcb55   coredns-7db6d8ff4d-85pww
445c248382123   6e38f40d628db   2 minutes ago   Running   storage-provisioner       1         79387896af5bb   storage-provisioner
0daab10e10b62   a0bf559e280cf   3 minutes ago   Running   kube-proxy                0         7d199db321b2f   kube-proxy-4mmtc
5979eee78c8c7   6e38f40d628db   3 minutes ago   Exited    storage-provisioner       0         79387896af5bb   storage-provisioner
4f032df677365   c7aad43836fa5   3 minutes ago   Running   kube-controller-manager   0         231a7cb8075d3   kube-controller-manager-minikube
27ae0845ece88   259c8277fcbbc   3 minutes ago   Running   kube-scheduler            0         3b5925fffca19   kube-scheduler-minikube
a6572afda81e1   c42f13656d0b2   3 minutes ago   Running   kube-apiserver            0         270c5c173ef42   kube-apiserver-minikube
2d79ed056c612   3861cfcd7c04c   3 minutes ago   Running   etcd                      0         5a31211a898d9   etcd-minikube

==> coredns [7295b25591fa] <==
Listen: listen tcp :53: bind: permission denied

==> describe nodes <==
Name:               minikube
Roles:              control-plane
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=5883c09216182566a63dff4c326a6fc9ed2982ff
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/primary=true
                    minikube.k8s.io/updated_at=2024_05_23T14_48_21_0700
                    minikube.k8s.io/version=v1.33.1
                    node-role.kubernetes.io/control-plane=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Thu, 23 May 2024 07:48:18 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  minikube
  AcquireTime:     <unset>
  RenewTime:       Thu, 23 May 2024 07:51:45 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Thu, 23 May 2024 07:48:41 +0000   Thu, 23 May 2024 07:48:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 23 May 2024 07:48:41 +0000   Thu, 23 May 2024 07:48:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 23 May 2024 07:48:41 +0000   Thu, 23 May 2024 07:48:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Thu, 23 May 2024 07:48:41 +0000   Thu, 23 May 2024 07:48:31 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.188.49.2
  Hostname:    minikube
Capacity:
  cpu:                32
  ephemeral-storage:  91723496Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             32776112Ki
  pods:               110
Allocatable:
  cpu:                32
  ephemeral-storage:  91723496Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             32776112Ki
  pods:               110
System Info:
  Machine ID:                 d93506af212348b5b0cdfbc6826eeb9e
  System UUID:                9a77fcd0-29e4-4de2-b839-43ff9af227ca
  Boot ID:                    7c54254f-cff9-4cd7-bcd5-03152f73fb8d
  Kernel Version:             3.10.0-1062.el7.x86_64
  OS Image:                   Ubuntu 22.04.4 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://26.1.1
  Kubelet Version:            v1.30.0
  Kube-Proxy Version:         v1.30.0
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (7 in total)
  Namespace    Name                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------    ----                              ------------  ----------  ---------------  -------------  ---
  kube-system  coredns-7db6d8ff4d-85pww          100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m14s
  kube-system  etcd-minikube                     100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         3m27s
  kube-system  kube-apiserver-minikube           250m (0%)     0 (0%)      0 (0%)           0 (0%)         3m27s
  kube-system  kube-controller-manager-minikube  200m (0%)     0 (0%)      0 (0%)           0 (0%)         3m27s
  kube-system  kube-proxy-4mmtc                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
  kube-system  kube-scheduler-minikube           100m (0%)     0 (0%)      0 (0%)           0 (0%)         3m27s
  kube-system  storage-provisioner               0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m25s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                750m (2%)   0 (0%)
  memory             170Mi (0%)  170Mi (0%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type    Reason                   Age    From             Message
  ----    ------                   ----   ----             -------
  Normal  Starting                 3m12s  kube-proxy
  Normal  Starting                 3m27s  kubelet          Starting kubelet.
  Normal  NodeHasSufficientMemory  3m27s  kubelet          Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    3m27s  kubelet          Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     3m27s  kubelet          Node minikube status is now: NodeHasSufficientPID
  Normal  NodeNotReady             3m27s  kubelet          Node minikube status is now: NodeNotReady
  Normal  NodeAllocatableEnforced  3m27s  kubelet          Updated Node Allocatable limit across pods
  Normal  NodeReady                3m17s  kubelet          Node minikube status is now: NodeReady
  Normal  RegisteredNode           3m15s  node-controller  Node minikube event: Registered Node minikube in Controller

==> dmesg <==
[Mar18 09:46] ACPI: RSDP 00000000000f6a10 00024 (v02 PTLTD )
[ +0.000000] ACPI: XSDT 00000000bfeee9f5 0005C (v01 INTEL 440BX 06040000 VMW 01324272)
[ +0.000000] ACPI: FACP 00000000bfefee73 000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
[ +0.000000] ACPI: DSDT 00000000bfeef139 0FD3A (v01 PTLTD Custom 06040000 MSFT 03000001)
[ +0.000000] ACPI: FACS 00000000bfefffc0 00040
[ +0.000000] ACPI: BOOT 00000000bfeef111 00028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
[ +0.000000] ACPI: APIC 00000000bfeeedfd 00202 (v01 PTLTD ? APIC 06040000 LTP 00000000)
[ +0.000000] ACPI: MCFG 00000000bfeeedc1 0003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
[ +0.000000] ACPI: SRAT 00000000bfeeeaf1 002D0 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
[ +0.000000] ACPI: HPET 00000000bfeeeab9 00038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
[ +0.000000] ACPI: WAET 00000000bfeeea91 00028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
[ +0.000000] Zone ranges:
[ +0.000000]   DMA      [mem 0x00001000-0x00ffffff]
[ +0.000000]   DMA32    [mem 0x01000000-0xffffffff]
[ +0.000000]   Normal   [mem 0x100000000-0x83fffffff]
[ +0.000000] Movable zone start for each node
[ +0.000000] Early memory node ranges
[ +0.000000]   node 0: [mem 0x00001000-0x0009efff]
[ +0.000000]   node 0: [mem 0x00100000-0xbfedffff]
[ +0.000000]   node 0: [mem 0xbff00000-0xbfffffff]
[ +0.000000]   node 0: [mem 0x100000000-0x43fffffff]
[ +0.000000]   node 1: [mem 0x440000000-0x83fffffff]
[ +0.000000] Built 2 zonelists in Zone order, mobility grouping on.
Total pages: 8257385 [ +0.000000] Policy zone: Normal [ +0.000000] ACPI: All ACPI Tables successfully acquired [ +0.051497] core: CPUID marked event: 'cpu cycles' unavailable [ +0.000001] core: CPUID marked event: 'instructions' unavailable [ +0.000001] core: CPUID marked event: 'bus cycles' unavailable [ +0.000001] core: CPUID marked event: 'cache references' unavailable [ +0.000001] core: CPUID marked event: 'cache misses' unavailable [ +0.000001] core: CPUID marked event: 'branch instructions' unavailable [ +0.000001] core: CPUID marked event: 'branch misses' unavailable [ +0.002274] NMI watchdog: disabled (cpu0): hardware events not enabled [ +0.150546] pmd_set_huge: Cannot satisfy [mem 0xf0000000-0xf0200000] with a huge-page mapping due to MTRR override. [ +0.026519] ACPI: Enabled 4 GPEs in block 00 to 0F [ +0.791228] Dquot-cache hash table entries: 512 (order 0, 4096 bytes) [ +0.113013] systemd[1]: [/run/systemd/generator/dev-mapper-centos\x2droot.device.d/timeout.conf:3] Unknown lvalue 'JobRunningTimeoutSec' in section 'Unit' [ +0.316776] sd 0:0:0:0: [sda] Assuming drive cache: write through [ +0.000430] sd 0:0:1:0: [sdb] Assuming drive cache: write through [ +18.462772] piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! [Mar18 10:27] TECH PREVIEW: Overlay filesystem may not be fully supported. Please review provided documentation for limitations. [Mar18 10:29] TECH PREVIEW: eBPF syscall may not be fully supported. Please review provided documentation for limitations. ==> etcd [2d79ed056c61] <== {"level":"warn","ts":"2024-05-23T07:48:15.861914Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."} {"level":"info","ts":"2024-05-23T07:48:15.862049Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.188.49.2:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.188.49.2:2380","--initial-cluster=minikube=https://192.188.49.2:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.188.49.2:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.188.49.2:2380","--name=minikube","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]} {"level":"warn","ts":"2024-05-23T07:48:15.862161Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. 
This is not recommended for production."} {"level":"info","ts":"2024-05-23T07:48:15.862172Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://192.188.49.2:2380"]} {"level":"info","ts":"2024-05-23T07:48:15.862197Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]} {"level":"info","ts":"2024-05-23T07:48:15.863122Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.188.49.2:2379"]} {"level":"info","ts":"2024-05-23T07:48:15.8633Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":32,"max-cpu-available":32,"member-initialized":false,"name":"minikube","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.188.49.2:2380"],"listen-peer-urls":["https://192.188.49.2:2380"],"advertise-client-urls":["https://192.188.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.188.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"minikube=https://192.188.49.2:2380","initial-cluster-state":"new","initial-cluster-token":"etcd-cluster","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"} {"level":"info","ts":"2024-05-23T07:48:15.865109Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"1.420331ms"} {"level":"info","ts":"2024-05-23T07:48:15.869887Z","caller":"etcdserver/raft.go:495","msg":"starting local member","local-member-id":"acb84d2b11d31269","cluster-id":"5576a225f2601302"} {"level":"info","ts":"2024-05-23T07:48:15.869974Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"acb84d2b11d31269 switched to configuration voters=()"} {"level":"info","ts":"2024-05-23T07:48:15.870013Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"acb84d2b11d31269 became follower at term 0"} {"level":"info","ts":"2024-05-23T07:48:15.870028Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft acb84d2b11d31269 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"} {"level":"info","ts":"2024-05-23T07:48:15.870037Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"acb84d2b11d31269 became follower at term 1"} {"level":"info","ts":"2024-05-23T07:48:15.87009Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"acb84d2b11d31269 switched to configuration voters=(12445782417616343657)"} 
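The etcd startup record above lists the client URL and TLS files, which is everything needed for a manual health probe. A hedged example, assuming etcdctl is present in the etcd 3.5.12 image and the cert paths are mounted into the pod unchanged:

  kubectl -n kube-system exec etcd-minikube -- etcdctl \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
    --cert=/var/lib/minikube/certs/etcd/server.crt \
    --key=/var/lib/minikube/certs/etcd/server.key \
    endpoint health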
{"level":"warn","ts":"2024-05-23T07:48:15.871817Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"} {"level":"info","ts":"2024-05-23T07:48:15.87365Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1} {"level":"info","ts":"2024-05-23T07:48:15.874438Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"} {"level":"info","ts":"2024-05-23T07:48:15.87645Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"acb84d2b11d31269","local-server-version":"3.5.12","cluster-version":"to_be_decided"} {"level":"info","ts":"2024-05-23T07:48:15.876582Z","caller":"etcdserver/server.go:744","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"acb84d2b11d31269","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"} {"level":"info","ts":"2024-05-23T07:48:15.876757Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"} {"level":"info","ts":"2024-05-23T07:48:15.876836Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"} {"level":"info","ts":"2024-05-23T07:48:15.876852Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"} {"level":"info","ts":"2024-05-23T07:48:15.877238Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"acb84d2b11d31269 switched to configuration voters=(12445782417616343657)"} {"level":"info","ts":"2024-05-23T07:48:15.877368Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"5576a225f2601302","local-member-id":"acb84d2b11d31269","added-peer-id":"acb84d2b11d31269","added-peer-peer-urls":["https://192.188.49.2:2380"]} {"level":"info","ts":"2024-05-23T07:48:15.878955Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]} {"level":"info","ts":"2024-05-23T07:48:15.879086Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.188.49.2:2380"} {"level":"info","ts":"2024-05-23T07:48:15.879121Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.188.49.2:2380"} {"level":"info","ts":"2024-05-23T07:48:15.879252Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"acb84d2b11d31269","initial-advertise-peer-urls":["https://192.188.49.2:2380"],"listen-peer-urls":["https://192.188.49.2:2380"],"advertise-client-urls":["https://192.188.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.188.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]} {"level":"info","ts":"2024-05-23T07:48:15.879295Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"} {"level":"info","ts":"2024-05-23T07:48:16.27046Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"acb84d2b11d31269 is starting a new election at term 1"} {"level":"info","ts":"2024-05-23T07:48:16.270512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"acb84d2b11d31269 became pre-candidate at term 1"} 
{"level":"info","ts":"2024-05-23T07:48:16.270558Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"acb84d2b11d31269 received MsgPreVoteResp from acb84d2b11d31269 at term 1"} {"level":"info","ts":"2024-05-23T07:48:16.270573Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"acb84d2b11d31269 became candidate at term 2"} {"level":"info","ts":"2024-05-23T07:48:16.27058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"acb84d2b11d31269 received MsgVoteResp from acb84d2b11d31269 at term 2"} {"level":"info","ts":"2024-05-23T07:48:16.270611Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"acb84d2b11d31269 became leader at term 2"} {"level":"info","ts":"2024-05-23T07:48:16.270661Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: acb84d2b11d31269 elected leader acb84d2b11d31269 at term 2"} {"level":"info","ts":"2024-05-23T07:48:16.271285Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"acb84d2b11d31269","local-member-attributes":"{Name:minikube ClientURLs:[https://192.188.49.2:2379]}","request-path":"/0/members/acb84d2b11d31269/attributes","cluster-id":"5576a225f2601302","publish-timeout":"7s"} {"level":"info","ts":"2024-05-23T07:48:16.271439Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"} {"level":"info","ts":"2024-05-23T07:48:16.271503Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"} {"level":"info","ts":"2024-05-23T07:48:16.271507Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"} {"level":"info","ts":"2024-05-23T07:48:16.271633Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"} {"level":"info","ts":"2024-05-23T07:48:16.271671Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"} {"level":"info","ts":"2024-05-23T07:48:16.271795Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"5576a225f2601302","local-member-id":"acb84d2b11d31269","cluster-version":"3.5"} {"level":"info","ts":"2024-05-23T07:48:16.271888Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"} {"level":"info","ts":"2024-05-23T07:48:16.271928Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"} {"level":"info","ts":"2024-05-23T07:48:16.329398Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"} {"level":"info","ts":"2024-05-23T07:48:16.329686Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.188.49.2:2379"} ==> kernel <== 07:51:48 up 65 days, 22:05, 0 users, load average: 0.06, 0.15, 0.16 Linux minikube 3.10.0-1062.el7.x86_64 #1 SMP Wed Aug 7 18:08:02 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux PRETTY_NAME="Ubuntu 22.04.4 LTS" ==> kube-apiserver [a6572afda81e] <== I0523 07:48:18.487019 1 controller.go:80] Starting OpenAPI V3 AggregationController I0523 07:48:18.487022 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" I0523 07:48:18.487087 1 controller.go:78] Starting OpenAPI AggregationController I0523 07:48:18.487153 1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key" I0523 07:48:18.487161 1 controller.go:116] Starting legacy_token_tracking_controller I0523 
07:48:18.487178 1 shared_informer.go:313] Waiting for caches to sync for configmaps I0523 07:48:18.487179 1 gc_controller.go:78] Starting apiserver lease garbage collector I0523 07:48:18.487198 1 aggregator.go:163] waiting for initial CRD sync... I0523 07:48:18.487229 1 crdregistration_controller.go:111] Starting crd-autoregister controller I0523 07:48:18.487244 1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister I0523 07:48:18.487357 1 apf_controller.go:374] Starting API Priority and Fairness config controller I0523 07:48:18.487408 1 apiservice_controller.go:97] Starting APIServiceRegistrationController I0523 07:48:18.487426 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller I0523 07:48:18.487486 1 available_controller.go:423] Starting AvailableConditionController I0523 07:48:18.487501 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller I0523 07:48:18.487807 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller I0523 07:48:18.487831 1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller I0523 07:48:18.487908 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt" I0523 07:48:18.487375 1 system_namespaces_controller.go:67] Starting system namespaces controller I0523 07:48:18.488228 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt" I0523 07:48:18.529182 1 customresource_discovery_controller.go:289] Starting DiscoveryController I0523 07:48:18.529256 1 controller.go:139] Starting OpenAPI controller I0523 07:48:18.529277 1 controller.go:87] Starting OpenAPI V3 controller I0523 07:48:18.529305 1 establishing_controller.go:76] Starting EstablishingController I0523 07:48:18.529290 1 naming_controller.go:291] Starting NamingConditionController I0523 07:48:18.529332 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController I0523 07:48:18.529344 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController I0523 07:48:18.529357 1 crd_finalizer.go:266] Starting CRDFinalizer I0523 07:48:18.629144 1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator] I0523 07:48:18.629181 1 policy_source.go:224] refreshing policies I0523 07:48:18.629735 1 apf_controller.go:379] Running API Priority and Fairness config worker I0523 07:48:18.629759 1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process I0523 07:48:18.629740 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller I0523 07:48:18.629804 1 handler_discovery.go:447] Starting ResourceDiscoveryManager I0523 07:48:18.629812 1 shared_informer.go:320] Caches are synced for crd-autoregister I0523 07:48:18.629829 1 shared_informer.go:320] Caches are synced for configmaps I0523 07:48:18.629844 1 aggregator.go:165] initial CRD sync complete... 
I0523 07:48:18.629862 1 autoregister_controller.go:141] Starting autoregister controller
I0523 07:48:18.629869 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0523 07:48:18.629875 1 cache.go:39] Caches are synced for autoregister controller
I0523 07:48:18.632486 1 controller.go:615] quota admission added evaluator for: namespaces
I0523 07:48:18.728875 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0523 07:48:18.729118 1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
I0523 07:48:18.729166 1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
I0523 07:48:18.729180 1 shared_informer.go:320] Caches are synced for node_authorizer
I0523 07:48:19.492121 1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
I0523 07:48:19.495758 1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
I0523 07:48:19.495780 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0523 07:48:19.899881 1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0523 07:48:19.949323 1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0523 07:48:20.040389 1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
W0523 07:48:20.046029 1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.188.49.2]
I0523 07:48:20.047057 1 controller.go:615] quota admission added evaluator for: endpoints
I0523 07:48:20.052119 1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0523 07:48:20.542985 1 controller.go:615] quota admission added evaluator for: serviceaccounts
I0523 07:48:21.341030 1 controller.go:615] quota admission added evaluator for: deployments.apps
I0523 07:48:21.351328 1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
I0523 07:48:21.359108 1 controller.go:615] quota admission added evaluator for: daemonsets.apps
I0523 07:48:34.700064 1 controller.go:615] quota admission added evaluator for: replicasets.apps
I0523 07:48:34.800433 1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps

==> kube-controller-manager [4f032df67736] <==
I0523 07:48:33.716108 1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
I0523 07:48:33.720924 1 shared_informer.go:320] Caches are synced for TTL
I0523 07:48:33.746468 1 shared_informer.go:320] Caches are synced for bootstrap_signer
I0523 07:48:33.747653 1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
I0523 07:48:33.747833 1 shared_informer.go:320] Caches are synced for endpoint
I0523 07:48:33.752094 1 shared_informer.go:320] Caches are synced for taint
I0523 07:48:33.752183 1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
I0523 07:48:33.752259 1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="minikube"
I0523 07:48:33.752297 1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
I0523 07:48:33.752578 1 shared_informer.go:320] Caches are synced for namespace
I0523 07:48:33.765390 1 shared_informer.go:320] Caches are synced for PVC protection
I0523 07:48:33.765442 1 shared_informer.go:320] Caches are synced for PV protection
I0523 07:48:33.765440 1 shared_informer.go:320] Caches are synced for node
I0523 07:48:33.765485 1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
I0523 07:48:33.765531 1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
I0523 07:48:33.765545 1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
I0523 07:48:33.765552 1 shared_informer.go:320] Caches are synced for cidrallocator
I0523 07:48:33.771561 1 shared_informer.go:320] Caches are synced for crt configmap
I0523 07:48:33.776706 1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="minikube" podCIDRs=["10.244.0.0/24"]
I0523 07:48:33.777813 1 shared_informer.go:320] Caches are synced for ephemeral
I0523 07:48:33.779256 1 shared_informer.go:320] Caches are synced for taint-eviction-controller
I0523 07:48:33.785273 1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
I0523 07:48:33.791663 1 shared_informer.go:320] Caches are synced for GC
I0523 07:48:33.791795 1 shared_informer.go:320] Caches are synced for service account
I0523 07:48:33.796130 1 shared_informer.go:320] Caches are synced for HPA
I0523 07:48:33.796177 1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
I0523 07:48:33.796323 1 shared_informer.go:320] Caches are synced for stateful set
I0523 07:48:33.797402 1 shared_informer.go:320] Caches are synced for expand
I0523 07:48:33.797411 1 shared_informer.go:320] Caches are synced for attach detach
I0523 07:48:33.797438 1 shared_informer.go:320] Caches are synced for endpoint_slice
I0523 07:48:33.797508 1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
I0523 07:48:33.797533 1 shared_informer.go:320] Caches are synced for deployment
I0523 07:48:33.802946 1 shared_informer.go:320] Caches are synced for certificate-csrapproving
I0523 07:48:33.803308 1 shared_informer.go:320] Caches are synced for persistent volume
I0523 07:48:33.808781 1 shared_informer.go:320] Caches are synced for daemon sets
I0523 07:48:33.948432 1 shared_informer.go:320] Caches are synced for cronjob
I0523 07:48:33.965167 1 shared_informer.go:320] Caches are synced for TTL after finished
I0523 07:48:33.985542 1 shared_informer.go:320] Caches are synced for resource quota
I0523 07:48:33.991866 1 shared_informer.go:320] Caches are synced for disruption
I0523 07:48:33.997769 1 shared_informer.go:320] Caches are synced for job
I0523 07:48:34.001249 1 shared_informer.go:320] Caches are synced for resource quota
I0523 07:48:34.048063 1 shared_informer.go:320] Caches are synced for ReplicationController
I0523 07:48:34.414819 1 shared_informer.go:320] Caches are synced for garbage collector
I0523 07:48:34.500514 1 shared_informer.go:320] Caches are synced for garbage collector
I0523 07:48:34.500555 1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
I0523 07:48:34.907514 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="204.330505ms"
I0523 07:48:34.912746 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="5.175829ms"
I0523 07:48:34.912829 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="39.356µs"
I0523 07:48:34.917507 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="38.714µs"
I0523 07:48:36.643452 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="59.269µs"
I0523 07:48:37.664173 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.509µs"
I0523 07:48:45.241430 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="61.499µs"
I0523 07:48:59.800339 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="102.484µs"
I0523 07:49:05.242049 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="106.34µs"
I0523 07:49:32.007560 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="56.455µs"
I0523 07:49:35.241965 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="90.117µs"
I0523 07:50:16.266693 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="119.779µs"
I0523 07:50:25.242973 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="74.34µs"
I0523 07:51:42.778983 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="73.402µs"
I0523 07:51:45.242320 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="68.639µs"

==> kube-proxy [0daab10e10b6] <==
I0523 07:48:35.960417 1 server_linux.go:69] "Using iptables proxy"
I0523 07:48:35.969838 1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.188.49.2"]
I0523 07:48:36.041651 1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0523 07:48:36.041718 1 server_linux.go:165] "Using iptables Proxier"
I0523 07:48:36.043720 1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
I0523 07:48:36.043738 1 server_linux.go:528] "Defaulting to no-op detect-local"
I0523 07:48:36.043763 1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0523 07:48:36.044044 1 server.go:872] "Version info" version="v1.30.0"
I0523 07:48:36.044075 1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0523 07:48:36.045280 1 config.go:101] "Starting endpoint slice config controller"
I0523 07:48:36.045296 1 config.go:192] "Starting service config controller"
I0523 07:48:36.045331 1 shared_informer.go:313] Waiting for caches to sync for service config
I0523 07:48:36.045332 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0523 07:48:36.045429 1 config.go:319] "Starting node config controller"
I0523 07:48:36.045440 1 shared_informer.go:313] Waiting for caches to sync for node config
I0523 07:48:36.145611 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0523 07:48:36.145545 1 shared_informer.go:320] Caches are synced for service config
I0523 07:48:36.145581 1 shared_informer.go:320] Caches are synced for node config

==> kube-scheduler [27ae0845ece8] <==
I0523 07:48:16.642119 1 serving.go:380] Generated self-signed cert in-memory
W0523 07:48:18.629852 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0523 07:48:18.629887 1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0523 07:48:18.629900 1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
W0523 07:48:18.629911 1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0523 07:48:18.640238 1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
I0523 07:48:18.640273 1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0523 07:48:18.642382 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0523 07:48:18.642429 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0523 07:48:18.642631 1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
I0523 07:48:18.642699 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
W0523 07:48:18.644029 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0523 07:48:18.644061 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0523 07:48:18.644088 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0523 07:48:18.644092 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0523 07:48:18.644088 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0523 07:48:18.644146 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0523 07:48:18.644255 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0523 07:48:18.644284 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0523 07:48:18.644351 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0523 07:48:18.644353 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0523 07:48:18.644376 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0523 07:48:18.644399 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0523 07:48:18.644417 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0523 07:48:18.644446 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0523 07:48:18.644459 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0523 07:48:18.644447 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0523 07:48:18.644481 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0523 07:48:18.644496 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0523 07:48:18.644538 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0523 07:48:18.644560 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0523 07:48:18.644584 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0523 07:48:18.644613 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0523 07:48:18.645257 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0523 07:48:18.645303 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0523 07:48:18.645553 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0523 07:48:18.645581 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0523 07:48:18.645584 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0523 07:48:18.645603 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0523 07:48:18.645732 1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0523 07:48:18.645747 1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0523 07:48:19.464546 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0523 07:48:19.464592 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0523 07:48:19.470940 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0523 07:48:19.470983 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0523 07:48:19.495172 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0523 07:48:19.495229 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0523 07:48:19.569913 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0523 07:48:19.569968 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0523 07:48:19.633126 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0523 07:48:19.633182 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0523 07:48:19.684055 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0523 07:48:19.684113 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0523 07:48:19.767841 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0523 07:48:19.767894 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
I0523 07:48:20.143343 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file

==> kubelet <==
May 23 07:48:36 minikube kubelet[2588]: I0523 07:48:36.652285 2588 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4mmtc" podStartSLOduration=2.652265236 podStartE2EDuration="2.652265236s" podCreationTimestamp="2024-05-23 07:48:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-23 07:48:36.652032112 +0000 UTC m=+15.655977563" watchObservedRunningTime="2024-05-23 07:48:36.652265236 +0000 UTC m=+15.656210685"
May 23 07:48:37 minikube kubelet[2588]: I0523 07:48:37.654289 2588 scope.go:117] "RemoveContainer" containerID="a3ffe8b07c1e97c230f9354ed2aefe0306de51cfe8c7f34f0ff80b001b889b4e"
May 23 07:48:37 minikube kubelet[2588]: I0523 07:48:37.654658 2588 scope.go:117] "RemoveContainer" containerID="ece01e795e8a724d757f538677ec630813fe362f6f47969201279c673da8a132"
May 23 07:48:37 minikube kubelet[2588]: E0523 07:48:37.654972 2588 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7db6d8ff4d-85pww_kube-system(2f7a07c4-e808-43a5-ad93-e8dd6ed86df7)\"" pod="kube-system/coredns-7db6d8ff4d-85pww" podUID="2f7a07c4-e808-43a5-ad93-e8dd6ed86df7"
May 23 07:48:37 minikube kubelet[2588]: I0523 07:48:37.665060 2588 scope.go:117] "RemoveContainer" containerID="a3ffe8b07c1e97c230f9354ed2aefe0306de51cfe8c7f34f0ff80b001b889b4e"
May 23 07:48:37 minikube kubelet[2588]: E0523 07:48:37.666035 2588 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: a3ffe8b07c1e97c230f9354ed2aefe0306de51cfe8c7f34f0ff80b001b889b4e" containerID="a3ffe8b07c1e97c230f9354ed2aefe0306de51cfe8c7f34f0ff80b001b889b4e"
May 23 07:48:37 minikube kubelet[2588]: I0523 07:48:37.666144 2588 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"a3ffe8b07c1e97c230f9354ed2aefe0306de51cfe8c7f34f0ff80b001b889b4e"} err="failed to get container status \"a3ffe8b07c1e97c230f9354ed2aefe0306de51cfe8c7f34f0ff80b001b889b4e\": rpc error: code = Unknown desc = Error response from daemon: No such container: a3ffe8b07c1e97c230f9354ed2aefe0306de51cfe8c7f34f0ff80b001b889b4e"
May 23 07:48:41 minikube kubelet[2588]: I0523 07:48:41.711546 2588 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
May 23 07:48:41 minikube kubelet[2588]: I0523 07:48:41.712475 2588 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
May 23 07:48:45 minikube kubelet[2588]: I0523 07:48:45.231926 2588 scope.go:117] "RemoveContainer" containerID="ece01e795e8a724d757f538677ec630813fe362f6f47969201279c673da8a132"
May 23 07:48:45 minikube kubelet[2588]: E0523 07:48:45.232300 2588 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7db6d8ff4d-85pww_kube-system(2f7a07c4-e808-43a5-ad93-e8dd6ed86df7)\"" pod="kube-system/coredns-7db6d8ff4d-85pww" podUID="2f7a07c4-e808-43a5-ad93-e8dd6ed86df7"
May 23 07:48:45 minikube kubelet[2588]: I0523 07:48:45.705065 2588 scope.go:117] "RemoveContainer" containerID="ece01e795e8a724d757f538677ec630813fe362f6f47969201279c673da8a132"
May 23 07:48:45 minikube kubelet[2588]: E0523 07:48:45.705396 2588 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-7db6d8ff4d-85pww_kube-system(2f7a07c4-e808-43a5-ad93-e8dd6ed86df7)\"" pod="kube-system/coredns-7db6d8ff4d-85pww" podUID="2f7a07c4-e808-43a5-ad93-e8dd6ed86df7"
May 23 07:48:59 minikube kubelet[2588]: I0523 07:48:59.142459 2588 scope.go:117] "RemoveContainer" containerID="ece01e795e8a724d757f538677ec630813fe362f6f47969201279c673da8a132"
May 23 07:48:59 minikube kubelet[2588]: I0523 07:48:59.790564 2588 scope.go:117] "RemoveContainer" containerID="ece01e795e8a724d757f538677ec630813fe362f6f47969201279c673da8a132"
May 23 07:48:59 minikube kubelet[2588]: I0523 07:48:59.790965 2588 scope.go:117] "RemoveContainer" containerID="70c9148bb6bf5c818d70b37399ade4093ad132ac1771d6bfd7197b22eb547889"
May 23 07:48:59 minikube kubelet[2588]: E0523 07:48:59.791271 2588 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 20s restarting failed container=coredns pod=coredns-7db6d8ff4d-85pww_kube-system(2f7a07c4-e808-43a5-ad93-e8dd6ed86df7)\"" pod="kube-system/coredns-7db6d8ff4d-85pww" podUID="2f7a07c4-e808-43a5-ad93-e8dd6ed86df7"
May 23 07:49:05 minikube kubelet[2588]: I0523 07:49:05.231565 2588 scope.go:117] "RemoveContainer" containerID="70c9148bb6bf5c818d70b37399ade4093ad132ac1771d6bfd7197b22eb547889"
May 23 07:49:05 minikube kubelet[2588]: E0523 07:49:05.231991 2588 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 20s restarting failed container=coredns pod=coredns-7db6d8ff4d-85pww_kube-system(2f7a07c4-e808-43a5-ad93-e8dd6ed86df7)\"" pod="kube-system/coredns-7db6d8ff4d-85pww" podUID="2f7a07c4-e808-43a5-ad93-e8dd6ed86df7"
May 23 07:49:05 minikube kubelet[2588]: I0523 07:49:05.840170 2588 scope.go:117] "RemoveContainer" containerID="70c9148bb6bf5c818d70b37399ade4093ad132ac1771d6bfd7197b22eb547889"
May 23 07:49:05 minikube kubelet[2588]: I0523 07:49:05.840428 2588 scope.go:117] "RemoveContainer" containerID="5979eee78c8c7d2033222a1bd15d49793b8fb4ce81e1de5a627d5cec597041d8"
May 23 07:49:05 minikube kubelet[2588]: E0523 07:49:05.840470 2588 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 20s restarting failed container=coredns pod=coredns-7db6d8ff4d-85pww_kube-system(2f7a07c4-e808-43a5-ad93-e8dd6ed86df7)\"" pod="kube-system/coredns-7db6d8ff4d-85pww" podUID="2f7a07c4-e808-43a5-ad93-e8dd6ed86df7"
May 23 07:49:18 minikube kubelet[2588]: I0523 07:49:18.141856 2588 scope.go:117] "RemoveContainer" containerID="70c9148bb6bf5c818d70b37399ade4093ad132ac1771d6bfd7197b22eb547889"
May 23 07:49:18 minikube kubelet[2588]: E0523 07:49:18.142274 2588 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 20s restarting failed container=coredns pod=coredns-7db6d8ff4d-85pww_kube-system(2f7a07c4-e808-43a5-ad93-e8dd6ed86df7)\"" pod="kube-system/coredns-7db6d8ff4d-85pww" podUID="2f7a07c4-e808-43a5-ad93-e8dd6ed86df7"
May 23 07:49:31 minikube kubelet[2588]: I0523 07:49:31.142200 2588 scope.go:117] "RemoveContainer" containerID="70c9148bb6bf5c818d70b37399ade4093ad132ac1771d6bfd7197b22eb547889"
May 23 07:49:31 minikube kubelet[2588]: I0523 07:49:31.997608 2588 scope.go:117] "RemoveContainer" containerID="70c9148bb6bf5c818d70b37399ade4093ad132ac1771d6bfd7197b22eb547889"
May 23 07:49:31 minikube kubelet[2588]: I0523 07:49:31.997993 2588 scope.go:117] "RemoveContainer" containerID="dcdb363a57d40f10e86b4adafe8ad9845e2f63e844d906e1e697bfb42fd11347"
May 23 07:49:31 minikube kubelet[2588]: E0523 07:49:31.998313 2588 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 40s restarting failed container=coredns pod=coredns-7db6d8ff4d-85pww_kube-system(2f7a07c4-e808-43a5-ad93-e8dd6ed86df7)\"" pod="kube-system/coredns-7db6d8ff4d-85pww" podUID="2f7a07c4-e808-43a5-ad93-e8dd6ed86df7"
May 23 07:49:35 minikube kubelet[2588]: I0523 07:49:35.232262 2588 scope.go:117] "RemoveContainer" containerID="dcdb363a57d40f10e86b4adafe8ad9845e2f63e844d906e1e697bfb42fd11347"
May 23 07:49:35 minikube kubelet[2588]: E0523 07:49:35.232663 2588 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 40s restarting failed container=coredns pod=coredns-7db6d8ff4d-85pww_kube-system(2f7a07c4-e808-43a5-ad93-e8dd6ed86df7)\"" pod="kube-system/coredns-7db6d8ff4d-85pww" podUID="2f7a07c4-e808-43a5-ad93-e8dd6ed86df7"
May 23 07:49:36 minikube kubelet[2588]: I0523 07:49:36.026838 2588 scope.go:117] "RemoveContainer" containerID="dcdb363a57d40f10e86b4adafe8ad9845e2f63e844d906e1e697bfb42fd11347"
May 23 07:49:36 minikube kubelet[2588]: E0523 07:49:36.027166 2588 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 40s restarting failed container=coredns pod=coredns-7db6d8ff4d-85pww_kube-system(2f7a07c4-e808-43a5-ad93-e8dd6ed86df7)\"" pod="kube-system/coredns-7db6d8ff4d-85pww" podUID="2f7a07c4-e808-43a5-ad93-e8dd6ed86df7"
May 23 07:49:49 minikube kubelet[2588]: I0523 07:49:49.142330 2588 scope.go:117] "RemoveContainer" containerID="dcdb363a57d40f10e86b4adafe8ad9845e2f63e844d906e1e697bfb42fd11347"
May 23 07:49:49 minikube kubelet[2588]: E0523 07:49:49.142892 2588 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 40s restarting failed container=coredns pod=coredns-7db6d8ff4d-85pww_kube-system(2f7a07c4-e808-43a5-ad93-e8dd6ed86df7)\"" pod="kube-system/coredns-7db6d8ff4d-85pww" podUID="2f7a07c4-e808-43a5-ad93-e8dd6ed86df7"
May 23 07:50:03 minikube kubelet[2588]: I0523 07:50:03.142097 2588 scope.go:117] "RemoveContainer" containerID="dcdb363a57d40f10e86b4adafe8ad9845e2f63e844d906e1e697bfb42fd11347"
May 23 07:50:03 minikube kubelet[2588]: E0523 07:50:03.142431 2588 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 40s restarting failed container=coredns pod=coredns-7db6d8ff4d-85pww_kube-system(2f7a07c4-e808-43a5-ad93-e8dd6ed86df7)\"" pod="kube-system/coredns-7db6d8ff4d-85pww" podUID="2f7a07c4-e808-43a5-ad93-e8dd6ed86df7"
May 23 07:50:15 minikube kubelet[2588]: I0523 07:50:15.142471 2588 scope.go:117] "RemoveContainer" containerID="dcdb363a57d40f10e86b4adafe8ad9845e2f63e844d906e1e697bfb42fd11347"
May 23 07:50:16 minikube kubelet[2588]: I0523 07:50:16.253400 2588 scope.go:117] "RemoveContainer" containerID="dcdb363a57d40f10e86b4adafe8ad9845e2f63e844d906e1e697bfb42fd11347"
May 23 07:50:16 minikube kubelet[2588]: I0523 07:50:16.253855 2588 scope.go:117] "RemoveContainer" containerID="2e97a0b0435ad7f10e154e030d68714e085f8abfb0baaff72b92311e0f8db1ea"
May 23 07:50:16 minikube kubelet[2588]: E0523 07:50:16.254190 2588 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=coredns pod=coredns-7db6d8ff4d-85pww_kube-system(2f7a07c4-e808-43a5-ad93-e8dd6ed86df7)\"" pod="kube-system/coredns-7db6d8ff4d-85pww" podUID="2f7a07c4-e808-43a5-ad93-e8dd6ed86df7"
May 23 07:50:25 minikube kubelet[2588]: I0523 07:50:25.231512 2588 scope.go:117] "RemoveContainer" containerID="2e97a0b0435ad7f10e154e030d68714e085f8abfb0baaff72b92311e0f8db1ea"
May 23 07:50:25 minikube kubelet[2588]: E0523 07:50:25.231886 2588 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=coredns pod=coredns-7db6d8ff4d-85pww_kube-system(2f7a07c4-e808-43a5-ad93-e8dd6ed86df7)\"" pod="kube-system/coredns-7db6d8ff4d-85pww" podUID="2f7a07c4-e808-43a5-ad93-e8dd6ed86df7"
May 23 07:50:25 minikube kubelet[2588]: I0523 07:50:25.314774 2588 scope.go:117] "RemoveContainer" containerID="2e97a0b0435ad7f10e154e030d68714e085f8abfb0baaff72b92311e0f8db1ea"
May 23 07:50:25 minikube kubelet[2588]: E0523 07:50:25.315136 2588 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=coredns pod=coredns-7db6d8ff4d-85pww_kube-system(2f7a07c4-e808-43a5-ad93-e8dd6ed86df7)\"" pod="kube-system/coredns-7db6d8ff4d-85pww" podUID="2f7a07c4-e808-43a5-ad93-e8dd6ed86df7"
May 23 07:50:40 minikube kubelet[2588]: I0523 07:50:40.141389 2588 scope.go:117] "RemoveContainer" containerID="2e97a0b0435ad7f10e154e030d68714e085f8abfb0baaff72b92311e0f8db1ea"
May 23 07:50:40 minikube kubelet[2588]: E0523 07:50:40.141788 2588 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=coredns pod=coredns-7db6d8ff4d-85pww_kube-system(2f7a07c4-e808-43a5-ad93-e8dd6ed86df7)\"" pod="kube-system/coredns-7db6d8ff4d-85pww" podUID="2f7a07c4-e808-43a5-ad93-e8dd6ed86df7"
May 23 07:50:52 minikube kubelet[2588]: I0523 07:50:52.142159 2588 scope.go:117] "RemoveContainer" containerID="2e97a0b0435ad7f10e154e030d68714e085f8abfb0baaff72b92311e0f8db1ea"
May 23 07:50:52 minikube kubelet[2588]: E0523 07:50:52.142579 2588 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=coredns pod=coredns-7db6d8ff4d-85pww_kube-system(2f7a07c4-e808-43a5-ad93-e8dd6ed86df7)\"" pod="kube-system/coredns-7db6d8ff4d-85pww" podUID="2f7a07c4-e808-43a5-ad93-e8dd6ed86df7"
May 23 07:51:04 minikube kubelet[2588]: I0523 07:51:04.141846 2588 scope.go:117] "RemoveContainer" containerID="2e97a0b0435ad7f10e154e030d68714e085f8abfb0baaff72b92311e0f8db1ea"
May 23 07:51:04 minikube kubelet[2588]: E0523 07:51:04.142335 2588 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=coredns pod=coredns-7db6d8ff4d-85pww_kube-system(2f7a07c4-e808-43a5-ad93-e8dd6ed86df7)\"" pod="kube-system/coredns-7db6d8ff4d-85pww" podUID="2f7a07c4-e808-43a5-ad93-e8dd6ed86df7"
May 23 07:51:16 minikube kubelet[2588]: I0523 07:51:16.141475 2588 scope.go:117] "RemoveContainer" containerID="2e97a0b0435ad7f10e154e030d68714e085f8abfb0baaff72b92311e0f8db1ea"
May 23 07:51:16 minikube kubelet[2588]: E0523 07:51:16.141858 2588 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=coredns pod=coredns-7db6d8ff4d-85pww_kube-system(2f7a07c4-e808-43a5-ad93-e8dd6ed86df7)\"" pod="kube-system/coredns-7db6d8ff4d-85pww" podUID="2f7a07c4-e808-43a5-ad93-e8dd6ed86df7"
May 23 07:51:27 minikube kubelet[2588]: I0523 07:51:27.141966 2588 scope.go:117] "RemoveContainer" containerID="2e97a0b0435ad7f10e154e030d68714e085f8abfb0baaff72b92311e0f8db1ea"
May 23 07:51:27 minikube kubelet[2588]: E0523 07:51:27.142329 2588 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=coredns pod=coredns-7db6d8ff4d-85pww_kube-system(2f7a07c4-e808-43a5-ad93-e8dd6ed86df7)\"" pod="kube-system/coredns-7db6d8ff4d-85pww" podUID="2f7a07c4-e808-43a5-ad93-e8dd6ed86df7"
May 23 07:51:42 minikube kubelet[2588]: I0523 07:51:42.141993 2588 scope.go:117] "RemoveContainer" containerID="2e97a0b0435ad7f10e154e030d68714e085f8abfb0baaff72b92311e0f8db1ea"
May 23 07:51:42 minikube kubelet[2588]: I0523 07:51:42.766841 2588 scope.go:117] "RemoveContainer" containerID="2e97a0b0435ad7f10e154e030d68714e085f8abfb0baaff72b92311e0f8db1ea"
May 23 07:51:42 minikube kubelet[2588]: I0523 07:51:42.767220 2588 scope.go:117] "RemoveContainer" containerID="7295b25591fa1dcd5019c15fe232b9435a450dda21432fa6b40d4da480a5ea19"
May 23 07:51:42 minikube kubelet[2588]: E0523 07:51:42.767543 2588 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=coredns pod=coredns-7db6d8ff4d-85pww_kube-system(2f7a07c4-e808-43a5-ad93-e8dd6ed86df7)\"" pod="kube-system/coredns-7db6d8ff4d-85pww" podUID="2f7a07c4-e808-43a5-ad93-e8dd6ed86df7"
May 23 07:51:45 minikube kubelet[2588]: I0523 07:51:45.232000 2588 scope.go:117] "RemoveContainer" containerID="7295b25591fa1dcd5019c15fe232b9435a450dda21432fa6b40d4da480a5ea19"
May 23 07:51:45 minikube kubelet[2588]: E0523 07:51:45.232355 2588 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=coredns pod=coredns-7db6d8ff4d-85pww_kube-system(2f7a07c4-e808-43a5-ad93-e8dd6ed86df7)\"" pod="kube-system/coredns-7db6d8ff4d-85pww" podUID="2f7a07c4-e808-43a5-ad93-e8dd6ed86df7"

==> storage-provisioner [445c24838212] <==
I0523 07:49:06.153291 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0523 07:49:06.163598 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0523 07:49:06.163661 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0523 07:49:06.172267 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath I0523 07:49:06.172412 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"58145c81-be85-4316-9e4f-91561f5efed7", APIVersion:"v1", ResourceVersion:"404", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_62d18ef5-8b5a-4d7f-8360-f893bb3183e1 became leader I0523 07:49:06.172439 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_minikube_62d18ef5-8b5a-4d7f-8360-f893bb3183e1! I0523 07:49:06.273771 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_minikube_62d18ef5-8b5a-4d7f-8360-f893bb3183e1! ==> storage-provisioner [5979eee78c8c] <== I0523 07:48:35.478541 1 storage_provisioner.go:116] Initializing the minikube storage provisioner... F0523 07:49:05.480966 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
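The fatal line above is the most telling record in this dump: the first storage-provisioner instance timed out against the kubernetes Service VIP (10.96.0.1:443) during the same window in which coredns was crash-looping, while the replacement instance acquired its lease at 07:49:06, so the VIP did become reachable about thirty seconds later. As a minimal diagnostic sketch (not part of the original logs; the pod name is taken from the kubelet section above, and it assumes curl is present in the minikube node image), one could verify VIP reachability like so:

  $ minikube ssh -- curl -sk --max-time 5 https://10.96.0.1:443/version
  $ minikube ssh -- sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.96.0.1
  $ kubectl -n kube-system logs coredns-7db6d8ff4d-85pww --previous

If the first command also times out, the kube-proxy iptables programming (see the kube-proxy section above) is worth inspecting before looking at coredns itself.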